Symptomatic pancreatic lipoma managed with a metallic biliary stent: Case report

Introduction
Mesenchymal tumors comprise only 1 to 2% of all pancreatic tumors, with lipomas being a rare variant of mesenchymal tumors of the pancreas.

Presentation of case
This is the report of an 82-year-old woman who presented at the medical emergency room of a fourth-level clinic with a five-day history of nausea, emesis, jaundice, choluria, acholia, and abdominal pain in the right hypochondrium. Diagnostic imaging and ultrasonography revealed and characterized a significant dilation of the intra- and extrahepatic bile ducts, caused by the presence of a mass of lipomatous origin in the head of the pancreas. The obstruction was successfully managed with a fully covered, removable metallic biliary stent.

Discussion
Some studies have reported the incidence of pancreatic lipoma to be 0.08% and 0.012%, and the vast majority of these tumors (>95%) are asymptomatic and properly handled without intervention; symptomatic tumors, however, require surgical treatment. In our case, surgical treatment was not suitable because of the patient's multiple comorbidities, cardiovascular risk and advanced age. Management therefore followed a minimally invasive approach, without general anesthesia and with good postoperative results.

Conclusion
To the best of our knowledge, we report the first case of symptomatic pancreatic lipoma in Colombia, with unique management, and the second in Latin America.

Introduction
The majority of both benign and malignant pancreatic neoplasms arise from pancreatic epithelial cells [1]; even so, mesenchymal tumors comprise 1 to 2% of all pancreatic tumors, and lipomas are a rare variant of mesenchymal tumors of the pancreas [2]. Pancreatic lipoma (PL) is a benign mesenchymal tumor consisting of mature adipose cells and a thin collagen capsule [2]. Most of these tumors are asymptomatic, but some can produce pancreatic or biliary obstruction, or both [3]. We report the management of the first case of symptomatic PL in Colombia and the second in Latin America [4]. This work has been reported in line with the SCARE criteria [5].

Case report
This is the report of an 82-year-old woman who presented at the medical emergency room of a fourth-level clinic with a five-day history of nausea, emesis, jaundice, choluria, acholia, and abdominal pain in the right hypochondrium, without any additional symptomatology. Her medical history included hypertension, type 2 diabetes mellitus, dyslipidemia, urinary tract infection, former smoking, obesity, and chronic obstructive pulmonary disease GOLD B (not requiring oxygen) due to chronic exposure to wood smoke and cigarette smoke. During her hospital stay, the internal medicine service had to intervene repeatedly because of persistently high blood pressure readings. Her main laboratory findings were: alanine aminotransferase 477 IU/L, amylase 117 IU/L, aspartate aminotransferase 517 IU/L, alkaline phosphatase 517 IU/L, total bilirubin 2.81 mg/dL, direct bilirubin 2.37 mg/dL, and indirect bilirubin 0.44 mg/dL. Her total abdominal ultrasound reported dilation of the intrahepatic bile ducts, common hepatic duct, and common bile duct, the latter reaching 14 mm. The gallbladder was markedly distended, with a longitudinal diameter of 14 cm, a transverse diameter of 6 cm, and a wall thickness of up to 2.7 mm. The ultrasonographic Murphy's sign was positive. There was a moderate amount of biliary sludge inside the gallbladder.
The exam concluded the presence of hydrocholecyst and intra- and extrahepatic bile duct dilation. A magnetic resonance cholangiopancreatography (MRCP) was then performed and reported significant dilation of the intra- and extrahepatic bile ducts, with a transverse diameter of the common hepatic duct of 15 mm (Fig. 1), obstruction of the common bile duct as a consequence of a concentric stenosing lesion, and the presence of an image hyperintense on T1 and T2 at the level of the head of the pancreas, of approximately 4.5 cm × 3.4 cm (Fig. 2), which suppressed on fat-suppression sequences, compatible with lipoma. There was distention of the gallbladder without evidence of wall thickening or stones inside, and no filling defects of the bile duct lumen suggestive of lithiasis. Computed tomography (CT) of the abdomen and pelvis focused on the pancreas showed a distended gallbladder with a transverse diameter of 6.0 cm and no endoluminal lesions. The common bile duct reached a diameter of 1.5 cm, and in the uncinate process of the pancreas a markedly hypodense image of fat density measuring 4.8 cm × 3.7 cm × 3.4 cm was visualized. It was concluded that there was intra- and extrahepatic bile duct dilation, probably caused by a tumor of lipomatous origin in the head of the pancreas that conditioned the hydrocholecyst. Subsequently, biliopancreatic endoscopic ultrasonography (EUS) was requested and a biopsy was taken. This procedure was performed by an experienced gastrointestinal surgeon and endoscopist using a radial echoendoscope (Pentax; Hitachi Aloka Medical, Ltd., Tokyo, Japan; L2E-EA045-8 EZU-FS1A). The endoscopic view showed a significant deformity of the antrum and duodenum due to extrinsic compression. The body and tail of the pancreas had a salt-and-pepper appearance; the duct of Wirsung measured 2.5 mm, without intraductal lesions. In the head of the pancreas there was a predominantly hyperechoic, heterogeneous lesion (Fig. 3) measuring 35 mm × 30 mm, with well-defined borders, whose elastography was blue-green in color (Fig. 4). A transduodenal puncture was performed under echoendoscopic guidance with a No. 22 needle (Acquire™, Boston Scientific Corporation, Natick, MA, USA). The biopsy reported fragments of mature fibrofatty tissue, compatible with the clinical and endoscopic diagnostic impression of lipoma. As she was not a candidate for pancreaticoduodenectomy, an endoscopic retrograde cholangiopancreatography with placement of a fully covered, removable metallic biliary stent was performed. On the 11th of October 2021, the patient presented again with adynamia, drowsiness, and upper abdominal pain. Her laboratory tests showed elevated glycemia and an altered liver profile, suggesting obstructive symptoms. A transparietohepatic cholangiography showed that the stent had expanded without complications. Hepatotropic virus tests were then performed, with negative results. She presented moderate hypokalemia that resolved by the 15th of October (final value 3.06). Antinuclear antibodies were positive at 1:160, anti-smooth muscle antibodies were negative, immunoglobulin G was normal, cytomegalovirus was negative, and Epstein-Barr virus results were pending. An autoimmune etiology was considered very likely. A percutaneous liver biopsy was performed using a 16-gauge core needle in segment VI of the right lobe, obtaining two cylinders, which were sent for histopathological study. The biopsy reported a necroinflammatory disease with interface hepatitis, stage 2 out of 6 according to the modified Ishak-Knodell scale.
Discussion
Lipomas are benign tumors of mature adipocytes that present as soft, painless masses, most commonly seen on the trunk and upper extremities, but they can be located anywhere on the body where normal fat cells are present. Their size usually ranges from 1 to 10 cm and their precise cause is unknown [6]. Intestinal lipomas (IL) are benign, slow-growing mesenchymal neoplasms arising from adipose connective tissue in the bowel wall. Their incidence was between 0.035% and 4.4% in large autopsy series, while colonoscopy studies put the incidence at between 0.11% and 0.15% [7]. Approximately 65-75% of intestinal lipomas are located in the colon, and 20-25% occur in the small bowel, the second most common site. The stomach is the third most common site, where they usually appear in the antrum. Lipomas in the esophagus and duodenum are less common [8]. Lipomas in the pancreas (PL) are a very rare entity. In 2006, Hois et al. [9] reported an incidence rate of PL of 0.08%, and in 2016, Butler et al. [3] reported an incidence of 0.012%; these comparable incidences confirm that PL is a rare finding. The vast majority (>95%) of PLs are asymptomatic [3]; however, some of them can produce pancreatic or biliary obstruction, or both. In our case, the PL was symptomatic, with intra- and extrahepatic bile duct dilation related to the mass effect. Despite being a benign entity, in our case the PL behaved aggressively because of compromise of the bile duct and pancreatic duct. In this older adult patient, the greatest challenge was her age and the fact that she presented with obstructive jaundice, which forced the specialists to rule out a periampullary neoplastic pathology. Regarding diagnostic imaging, Butler et al. [3] reported in their retrospective study that 68 PLs were diagnosed by CT scans, of which 64 were performed with intravenous contrast; six PLs were diagnosed with MRI without contrast media, and no PL was diagnosed by ultrasound. In our case, we made a complete analysis of the PL using ultrasound, CT, magnetic resonance imaging, and EUS with biopsy, allowing us to better characterize the mass and to clarify the etiology of the stenosing lesion of the common bile duct, which could have been of neoplastic or inflammatory origin. Once a PL is diagnosed, knowing what to do is important. Management depends on the clinical presentation of the patient and the size of the lesion. Lipomas are conservatively managed with follow-up imaging, especially when the lipoma has well-defined margins and causes no obstruction of the pancreatic duct or common bile duct [10]. Both Raut et al. [11] and Butler et al. [3] concluded that the majority of PLs are properly handled without intervention and that small, asymptomatic lipomas can be observed. However, large and symptomatic tumors require surgical treatment, the major complications being obstruction, jaundice, or hemorrhage, all of them related to the mass effect. Raut and Fernandez del Castillo [11] provided surgical treatment for symptomatic PLs; the options are enucleation, distal pancreatectomy, pancreatoduodenectomy, and palliative bypass. In our case, surgical treatment was not suitable because of the patient's multiple comorbidities, cardiovascular risk, medical history and advanced age. For that reason, an endoscopic retrograde cholangiopancreatography with a metallic biliary stent was performed, improving the symptomatology and laboratory results, with no need for general anesthesia.

Conclusions
Pancreatic lipomas are a very rare entity.
We report a case of symptomatic PL that was managed with biliary drainage using a metallic stent. To the best of our knowledge, this is the first reported case of symptomatic PL in Colombia, the second in Latin America, and the first to be managed with a metallic biliary stent.

Provenance and peer review
Not commissioned, externally peer-reviewed.

Funding
None.

Sources of funding
None.

Ethical approval
In our institution, approval of the ethics committee is not required for the retrospective analysis of a clinical case report.

Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.

Author contribution
All authors contributed equally: study conceptualization, methodology, design, data analysis, writing - original draft, and writing - review and editing.

Research registration
The submitted case report is not a research study.

Declaration of competing interest
None.
Is operation time over the benchmark value a risk factor for worse short-term outcomes after laparoscopic liver resection?

Introduction
Laparoscopic liver resection is a challenging surgical procedure that may require prolonged operation time, particularly during the learning curve. Operation time decreases significantly with increasing experience; however, prolonged operation time may significantly increase the risk of postoperative complications.

Aim
To assess whether prolonged operation time over the benchmark value influences short-term postoperative outcomes after laparoscopic liver resection.

Material and methods
A retrospective cohort study based on data from the National Polish Registry of Minimally Invasive Liver Surgery was performed. A total of 197 cases consisting of left lateral sectionectomy (LLS), left hemihepatectomy (LH), and right hemihepatectomy (RH) with established benchmark values for operation time were included. Data on potential confounders for prolonged operation time and worse short-term outcomes were exported.

Results
Most cases (129; 65.5%) were performed during the learning curve, with the largest rate observed in LLS (57; 78.1%). Median operation time exceeded the benchmark value in LLS (Me = 210 min) and LH (Me = 350 min), while in RH the benchmark value was exceeded in 39 (44.3%) cases. Textbook outcomes were achieved in 138 (70.1%) cases. Univariate analysis (OR = 1.11; 95% CI: 0.61–2.06; p = 0.720) and multivariate analysis (OR = 1.16; 95% CI: 0.50–2.68; p = 0.734) did not reveal a significant impact of prolonged surgery on failing to achieve a textbook outcome.

Conclusions
Prolonging the time of laparoscopic liver resection does not significantly impair postoperative results. There is no reason related to patients' safety to avoid prolonging the time of laparoscopic liver resection over the benchmark value.

Introduction
The laparoscopic approach for liver resection is an established method of treatment for patients with liver tumours. Consecutive prospective studies, including randomised controlled trials, have proven its feasibility and safety [1-3]. The increase in data has led to the publication of the most recent guidelines for laparoscopic liver surgery. These guidelines discuss indications and patient selection, the major technical challenges of the procedure, and general assumptions regarding proper training and implementation in new centres [4].

According to multi-expert consensus, laparoscopic liver surgery should not be developed separately from an open liver surgery programme. Proposed difficulty classifications of different types of procedures aim to maximise the safety of overcoming the learning curve and the efficiency of establishing new programmes [5-7]. Despite optimal case selection, the learning curve for laparoscopic liver resection covers up to 40-60 cases per surgeon [8]. Progressive acquisition of defined laparoscopic skills [9] results predominantly in reduced operation time and intraoperative blood loss [10-12].
The reduction of operation time is a natural phenomenon observed during training for any procedure. However, surgical data indicate an increasing risk of postoperative complications with prolonged operative duration [13]. Taking this into account, it is worth considering a policy of not surpassing a particular operation time during laparoscopic liver surgery, in order to avoid an increased risk of postoperative complications, even during the learning curve. Based on large-cohort data, benchmark values have been established for the outcomes of the most repeatable types of liver resection, such as left lateral sectionectomy, left hemihepatectomy, and right hemihepatectomy [14, 15]. The most desirable surgical outcomes have been defined as textbook outcomes and have also been established for laparoscopic liver resection [16].

Aim
The aim of this study was to assess whether prolonged operation time over the benchmark value influences short-term postoperative outcomes after laparoscopic liver resection.

Material and methods
A retrospective cohort study based on data from the National Polish Registry of Minimally Invasive Liver Surgery was performed (ClinicalTrials.gov registration number: NCT05516394) [7]. From 2010 to the end of 2022, 718 laparoscopic liver resections were performed in 8 departments in Poland. The median number of cases performed per department was 58, and 3 departments had experience of more than 100 cases. Among all registered cases, 85.8% were performed by 10 different surgeons. Data from the registry cover the evolution of individual surgeons' learning curves, which were set at 60 procedures. Among all registered cases, only left lateral sectionectomy (LLS), left hemihepatectomy (LH), and right hemihepatectomy (RH) were selected, based on the availability of benchmark values for operation time for these types of resection. In accordance with the study of Goh et al., the cut-off values were 209.5 min, 302 min, and 426 min, respectively [15]. The study cohort included 197 cases.

Data on potential confounders for prolonged operation time or worse short-term outcomes were exported from the registry and included the following: body mass index (BMI) [kg/m²], previous abdominal surgeries, preoperative chemotherapy, stage of the learning curve, type of tumour, maximum size of the tumour, number of tumours, liver steatosis, liver cirrhosis, application of Pringle's manoeuvre, number of surgeons, number of ports used, and technique of parenchymal transection.

For the assessment of short-term outcomes, data on intraoperative blood loss, intraoperative adverse events, postoperative complications, 30-day reoperation and readmission rates, and margin status were collected. Intraoperative adverse events were defined according to the Oslo classification [17]. Postoperative complications were graded according to the Clavien-Dindo classification [18]. Postoperative bile leak was assessed based on the International Study Group of Liver Surgery grading system [19]. For the complex assessment of surgical outcomes, the Textbook Outcome was evaluated, defined as the absence of intraoperative adverse events of grade 2 or higher, postoperative bile leak of grade B or C, severe complications (Clavien-Dindo ≥ 3), postoperative reintervention within 30 days, readmission within 30 days of discharge, and in-hospital mortality, together with the presence of an R0 resection margin [16].
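As an illustration of the composite Textbook Outcome criterion defined above, the following is a minimal sketch, not the registry's actual code, of how such a flag could be derived for a single case; the record fields and their encodings are hypothetical assumptions.

```python
# Hypothetical sketch of the composite Textbook Outcome flag described above.
# Field names and encodings are assumptions, not the registry's actual schema.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    intraop_adverse_grade: int    # Oslo classification grade of the worst intraoperative event
    bile_leak_grade: str          # ISGLS bile leak grade: "", "A", "B" or "C"
    clavien_dindo: int            # highest postoperative complication grade (0 if none)
    reintervention_30d: bool      # any reintervention within 30 days
    readmission_30d: bool         # readmission within 30 days of discharge
    in_hospital_death: bool
    r0_margin: bool               # negative (R0) resection margin

def textbook_outcome(c: CaseRecord) -> bool:
    """True only if every criterion listed in the Methods is met simultaneously."""
    return (
        c.intraop_adverse_grade < 2
        and c.bile_leak_grade not in ("B", "C")
        and c.clavien_dindo < 3
        and not c.reintervention_30d
        and not c.readmission_30d
        and not c.in_hospital_death
        and c.r0_margin
    )

# Example: a grade A bile leak and a minor (grade 1) complication with clear
# margins still count as a textbook outcome under the stated definition.
print(textbook_outcome(CaseRecord(1, "A", 1, False, False, False, True)))  # True
```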
Statistical analysis
Descriptive statistics of the included cases were compiled according to resection extent and according to whether the benchmark operation time value was exceeded. Subsequently, univariate and multivariate analyses were designed to assess the risk factors for failing to achieve a textbook outcome. Data analysis was performed using SAS and Microsoft Excel 365. Continuous data are presented as median (Me) with interquartile range (IQR; Q1-Q3) and were compared using the Mann-Whitney U test or Kruskal-Wallis test, as appropriate. Categorical data are presented as numbers (n) with percentage rates (%) and were compared using the Pearson χ² or Fisher exact test, as appropriate. Univariate and multivariate logistic regression analyses using the backward stepwise method were applied to calculate odds ratios (OR). Statistical significance was set at p < 0.05. The 95% confidence intervals (CI) are reported.

Results
The study cohort included 73 left lateral sectionectomies, 26 left hemihepatectomies, and 88 right hemihepatectomies (Table I). The median age for the whole study cohort was 63 (51.5-71) years, and 93 patients (47.2%) were female. Previous surgical treatment or neoadjuvant chemotherapy was observed more frequently among patients scheduled for left or right hemihepatectomy. Most cases included in the study (129; 65.5%) were performed during the learning curve, with the largest rate observed in left lateral sectionectomies (57; 78.1%). A malignant tumour was the indication for surgery in most patients (172; 87.3%). Multiple lesions were observed most often in patients scheduled for right hemihepatectomy (39; 44.3%). The median operation time exceeded the benchmark value in left lateral sectionectomies (Me = 210 min) and left hemihepatectomies (Me = 350 min), while in right hemihepatectomies the benchmark value of 426 min was exceeded in only 39 (44.3%) cases. In the comparison according to the operation time benchmark value (Table II), significantly more cases with prolonged operation time were performed during the learning curve (79.6%; p < 0.001). Operation time over the benchmark value was also observed more frequently in cases with multiple lesions (42.7%; p < 0.001) and when the transection technique was based on an ultrasound dissection device (86.4%; p = 0.046).

Intraoperative adverse events were observed in 37 (18.8%) of all cases and were comparable regardless of whether the benchmark operation time value was exceeded (Table III). Severe postoperative complications, bile leak grade B or C, and 30-day reintervention or readmission rates were also similar between the two groups. Significantly more negative resection margins were observed in cases with prolonged operation time (p = 0.011); however, this did not result in a significant difference in textbook outcome rates between the groups (p = 0.757).
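To make the regression analysis described in the Statistical analysis subsection above more concrete, here is a minimal, purely illustrative sketch of a univariate logistic regression of the kind reported (odds of failing to achieve a textbook outcome given operation time over the benchmark value). The variable names and the randomly generated toy data are assumptions for demonstration only, not the registry data, so the printed numbers will not match the study's results.

```python
# Illustrative univariate logistic regression sketch (toy data, hypothetical column names).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "over_benchmark": rng.integers(0, 2, size=197),    # 1 = operation time exceeded the benchmark
    "textbook_failure": rng.integers(0, 2, size=197),  # 1 = textbook outcome not achieved
})

X = sm.add_constant(df["over_benchmark"])              # intercept + exposure
model = sm.Logit(df["textbook_failure"], X).fit(disp=False)

odds_ratio = np.exp(model.params["over_benchmark"])
ci_low, ci_high = np.exp(model.conf_int().loc["over_benchmark"])
p_value = model.pvalues["over_benchmark"]
print(f"OR = {odds_ratio:.2f}; 95% CI: {ci_low:.2f}-{ci_high:.2f}; p = {p_value:.3f}")
```

A multivariate model of the kind used in the study would add the other exported confounders as further columns in X and remove them stepwise; the OR and CI are obtained by exponentiating the coefficient and its confidence bounds, as shown.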
Textbook outcomes were achieved in 138 (70.1%) cases. In the univariate analysis, prolonged operation time was not found to be associated with an increased risk of worse postoperative results (Table IV). However, a significant association was observed between worse short-term results and previous abdominal surgery (OR = 1.88; p = 0.46), the application of the Pringle manoeuvre (OR = 2.51; p = 0.004), or a larger surgical team (OR = 5.01; p = 0.011). Multivariate analysis was performed using the backward stepwise method up to the step at which prolonged operation time would have been eliminated in the next step (Table IV). A post-hoc analysis was performed to evaluate the relationship between the application of the Pringle manoeuvre and worse short-term postoperative results (Table V). A textbook outcome was achieved in only 53 (59.6%) of the cases in which the Pringle manoeuvre was applied. A worse rate of textbook outcomes was associated with a significantly increased rate of intraoperative adverse events (p = 0.003) and a significantly decreased rate of R0 resection margin status (p = 0.012).

Discussion
After establishing the feasibility, safety, and efficacy of laparoscopic liver resection, the next step is the safe dissemination of this technique to subsequent centres. Laparoscopic liver resection is a demanding procedure that requires defined skills acquired through advanced laparoscopic and open hepato-biliary surgical training [9]. Reasonable patient selection for laparoscopic liver resection is crucial, especially at the beginning of an individual's learning curve. A stepwise increase in case complexity, in accordance with established difficulty scoring systems, may provide textbook outcomes at a satisfactory level during the learning curve [7]. The significantly increased operating time normally observed in the early phase of every learning curve raises the question of its influence on postoperative results. How much should surgeons be concerned about the impact of prolonging the operation time over benchmark values on postoperative results? Should trainees focus on agility training on simulators to maximise the reduction of operative time? Other potential risk factors for worse short-term postoperative results were also assessed in this study.

The difficulty of liver resection may affect postoperative results [20]. The analysed cohort included 3 types of liver resection, representing all difficulty grades according to the Kawaguchi et al. classification [6]. The LLS sub-cohort was significantly younger (p = 0.011), which could be related to fewer malignant indications (p = 0.050) and a significant preference for scheduling surgery during the learning curve (p = 0.11). LH and RH were related to prior surgical or systemic treatment (p = 0.001). Major liver resections were less frequently performed before completion of the learning curve. Prolonged operating time was observed significantly more often before an individual surgeon had completed 60 cases of laparoscopic liver resection (p = 0.001). Operation time over the benchmark value was more frequently seen in patients with multiple lesions (p = 0.001). Parenchymal transection with ultrasound dissection was significantly more frequent in prolonged procedures; this dissection technique is known for being precise yet time-consuming (p = 0.046). A meta-analysis published by Cheng et al.
[13], the largest cohort analysis so far, concluded that there is an association between prolonged operative times and an increased risk of postoperative complications. The analysis included a variety of specialities; however, the strongest association was observed for procedures performed in general surgery. The causality of such a phenomenon may be explained in terms of the type of complication. The increased rate of surgical site infections could be attributed to prolonged microbial exposure, diminished efficacy of antimicrobial prophylaxis over time, or prolonged tissue retraction leading to tissue ischaemia and necrosis [21-23]. Such a pathophysiology is significantly mitigated by the laparoscopic approach, in which small port incisions decrease surgical site infection rates. Larger incisions are usually made only for specimen removal, making them far less exposed to infection [24-26]. Postoperative pulmonary complications are among the most common complications after liver surgery, and an increased rate may be observed particularly after prolonged surgery [27]. Even though intraperitoneal insufflation may compromise lung ventilation, Fuks et al. presented an analysis in which they observed a significantly decreased pulmonary complication rate after laparoscopic major liver resections [28]. This may be explained by less painful restriction of breathing, owing to the absence of intensive rib retraction and of a large epigastric incision, which leads to faster postoperative pulmonary rehabilitation. Moreover, prolonged surgery combined with increased intra-abdominal pressure may be linked to factors such as increased coagulation, blood stasis, and endothelial damage. These changes could result in a higher incidence of venous thromboembolism or acute kidney injury [29]. Additionally, the prolonged duration of procedures, leading to surgical team fatigue, may increase the likelihood of worse postoperative outcomes.

The present study focuses on the impact of prolonged surgery on the ability to achieve textbook outcomes after laparoscopic liver resection. Based on the designed analysis, operation time over the benchmark value does not compromise the rate of textbook outcomes (p = 0.757). The only short-term result that differed significantly with respect to the time of surgery was the rate of R0 resection (p = 0.011). However, it is also plausible that worse intraoperative outcomes can prolong the duration of surgery and thus contribute to the positive association between a positive margin and time exceeding the benchmark value. The regression model likewise showed no impact of prolonged surgery on achieving textbook outcomes. Contrary to expectations, applying the Pringle manoeuvre increased the risk of failing to achieve a textbook outcome more than 2-fold in univariate analysis. To clarify this unexpected association, a post-hoc analysis was performed (Table V). The reason for this observation was that most surgeons applied the Pringle manoeuvre, especially during the learning curve, reactively to intraoperative adverse events rather than proactively to avoid increased blood loss. The approach to using the Pringle manoeuvre was verified by directly asking the surgeons who performed the operations.

The study's limitations include its retrospective character and the limited size of the cohort. However, even if studies designed to provide stronger evidence were to show such an impact, the current results suggest that any such association is likely to be very weak.
Conclusions
Prolonging the time of laparoscopic liver resection does not significantly impair postoperative results. There is no patient-safety-related reason to avoid prolonging the time of laparoscopic liver resection over the benchmark value, especially during the learning curve, when additional time is needed for the safe and efficient training of a fellow.

Table I. Descriptive data of the study cohort
Table II. Comparison according to the operation time benchmark value
Table III. Short-term outcomes according to the operation time benchmark value
Table IV. Analysis of the risk factors for failing to achieve textbook outcomes
Table V. Short-term outcomes according to the application of the Pringle manoeuvre
Beaches as a Factor in Achieving Competitiveness of a Tourist Product - Case Study: Istrian County

Beaches are a central part of the integrated tourist product of a destination. They represent a highly valuable resource in terms of natural, social, economic and recreational potential. They make a tourist product attractive and represent the motive for arrival at a destination for a certain number of tourists. In order to place a beach in the function of the tourist offer and of positioning a tourist destination on the tourist market, and to make the destination's tourist product more attractive, it is necessary to enrich the beach offer according to the desires of the market segment while respecting the principles of sustainable development. The beach offer of the Istrian County has been analysed within this paper. While preparing this paper, the authors used scientific methodology, meaning the gathering and analysis of data from primary and secondary sources. An analysis of domestic and foreign professional and scientific literature has been carried out, as well as of the valid legal framework for beach management in the Republic of Croatia. With the goal of analysing the existing state of regulation and management of beaches in the Istrian County, an evaluation of beach resources has been carried out.

Introduction
The tourism market is a very dynamic market, marked by fast changes on both the supply and demand sides. Contemporary trends register not only faster changes in tourist preferences but also preferences that are becoming increasingly specific. In order to achieve success on the tourist market, as well as competitive positioning, it is necessary to ensure timely adjustment of the tourist offer to tourist demand. Only those destinations that shape their destination product around the desires and needs of tourists will achieve success (Zadel and Cerović, 2013). Within the Croatian tourist product, the classical Sun, sea, sand (3S) offer has remained dominant over the years, and the current situation is no different. One of the counties whose efforts and activities in developing this form of tourism stand out is the Istrian County. Many investments have been made in the development of its beaches and their facilities. Within this paper, the authors will present, through an adequate theoretical background, the importance and characteristics of beaches in the Istrian County, as well as the results of an empirical research study in which tourists' attitudes towards beaches and the bathing tourism offer will be presented. Finally, concluding remarks will be provided.
Theoretical Background
Competitiveness among tourist destinations has been growing due to the numerous international tourism activities that occur on a daily basis. This underlines the importance of assessing destination performance vis-à-vis other similar and competing destinations (Kozak, 2002). Competitiveness, and how to achieve it on the turbulent tourist market, has been the subject of numerous scientific and professional studies for many years, and from various points of view. In order to provide a clearer insight, the authors briefly present some of them. Duman and Kozak (2010), for example, carried out an analysis of tourism resources in Turkish cities by using content analysis of the official tourism websites of Turkish cities. They also surveyed tourism officials in Turkish cities in order to identify their descriptions of the cities in whose promotion they are actively involved. Based on the findings, they were able to determine and propose developmental directions, that is, where to focus in order to improve the achieved level of competitiveness of the Turkish tourist product. Goffi (2013) undertook further efforts to develop Ritchie and Crouch's model (2000) by adding further determinants to the original competitiveness model and testing it on Italian destinations of excellence. The model contains: 1. core resources and key attractors (natural resources, green areas, leisure activities, gastronomy and typical services, etc.); 2. tourist services (quantity and quality of accommodation, food service quality, tourist-oriented services, etc.); 3. general infrastructure (environmental friendliness and quality of transportation services, quality of the road system, medical care facilities, sanitation, sewage and solid waste disposal, etc.); 4. conditioning and supporting factors (accessibility of the destination, proximity of other destinations, destination links with major origin markets, value for money in the destination tourism experience, etc.); 5. tourism policy, planning and development (political commitment to tourism, an integrated approach to tourism planning, clear policies in creating formal employment opportunities, etc.); 6. destination management (effectiveness of destination positioning, effective market segmentation, tourist destination communication, effectiveness in crafting tourism experiences, etc.); and 7. demand factors (awareness of the destination, level of repeat visitors, "fit" between destination products and visitor preferences, etc.). The results of the empirical research and testing confirmed the validity of the model and its success in determining the level of competitiveness of a tourist destination and its offer. According to Dwyer and Kim (2003), in order to achieve competitive advantages for its tourism industry, a destination must ensure that its overall "appeal", as well as the experience offered to tourists, is superior to that of the alternative destinations available. Existing and potential visits to a destination are inextricably linked to that destination's overall competitiveness.
Every destination bases the development of its tourist offer on comparative advantages (both natural and created destination attributes), which make it unique and recognizable on the tourist market and stimulate potential tourists to visit the destination. These attributes need to be maintained and managed carefully, while upholding sustainability principles, in order to enable their preservation in the long run (Garcia Sanchez et al., 2015; Porter, 1990) and to achieve competitive advantages on the tourist market.

World tourism flows experience numerous changes on a daily basis. However, one thing that has remained the same over the years is that the Sun, Sea, Sand (3S) offer still holds the leading position, because numerous destinations primarily base their offer on natural and other particularities, with beaches representing the primary resource.

Beaches can be defined as a body of unconsolidated material (e.g. sand, gravel, clay or mixtures thereof) extending from the landward edge of the beach, which may be dune slopes or a seawall, to the depths of the sea where there is no longer significant movement of sediment (Gračan et al., 2016, p. 76; Williams and Micallef, 2009, p. 10).

Beaches can also be defined in relation to a wide spectrum of physical and anthropogenic determinants, which, among other things, include sea physics (primarily the influence of waves), material content, the colour of the beach sediment (often used to describe various types of beaches), and others.

In relation to the anthropogenic dimension, the beach type can be determined according to the three main criteria:
• an arranged sea beach, within or outside a settlement, is supervised and accessible to everybody under the same conditions from the mainland and from the sea, including people of reduced mobility; its natural characteristics are mostly modified and altered, and the land area is arranged in terms of infrastructure and facilities (showers, cabins and sanitation); it is immediately connected with the sea, and marked and protected from the sea side;
• a natural sea beach, within or outside a settlement, is supervised and accessible from the mainland and/or the sea side, unequipped in terms of infrastructure, with its natural characteristics fully preserved as found.

Considering beaches in the context of their use, the Regulation on the Procedure for Granting a Concession on a Maritime Domain (NN 23/04, 101/04, 39/06, 63/08, 125/10 and 83/12) further defines beaches as:
• arranged public beaches - beaches used by a large number of tourist facilities and citizens;
• arranged special beaches - beaches that form a technical and technological unit of an accommodation facility in the sense of the Law on Restaurant Business;
• natural beaches - beaches on which there have been no spatial interventions in the sense of the regulations governing spatial planning and building, and access to which from the mainland must not be restricted.
According to the Bathing Area Registration and Evaluation (BARE) system, within the National Programme of Managing and Arranging Sea Beaches (2014), beaches can be divided into the following types:
• Remote beach: usually characterized by poor access (mostly by sea or by a path requiring walking at least 300 metres); they can be in the vicinity or on the edge of rural or sometimes village (local) areas, but not urban ones. They cannot be reached by public transport. In the Mediterranean context, a limited number of holiday houses and a small number of restaurants open in the summer period may be found in such areas;
• Rural beach: situated mostly outside urban/local areas, but they can also be situated within settlements. They are not served by public transport, but access routes exist, so they can be reached by private vehicle. These beaches do not have restaurant facilities. However, in the Mediterranean context, some land-based recreational facilities or seasonal beach facilities (e.g. pedal boats, "banana" rides or water skiing) can be found at rural beaches. At these beaches the hinterland is not built up to a significant degree; some accommodation units can be found, but there are no, or only a small number of, permanent social facilities (elementary school, church, store, restaurant, etc.). They are appreciated by tourists for their peace and preserved natural qualities;
• Local beach: situated outside major urban surroundings and connected with a small but permanent population that has access to organized services on a smaller scale, such as an elementary school, church, stores and restaurants. These beaches can also be found in tourist resorts or camps, which are used mostly in the summer months, as well as within inhabited areas between urban and rural areas with a developed offer of family accommodation. In relation to rural beaches, the basic difference is their setting: while rural beaches are mostly natural, local beaches are reachable by public transport or by car;
• City (urban) beach: situated in an urban area serving a large population, with well-organized services such as elementary schools, churches, banks, post offices, primary healthcare centres, restaurants and other urban facilities. Nautical tourism ports can be found in the vicinity of city beaches;
• Resort beach (forming a technical-technological unit with an accommodation facility), which has three distinctive characteristics: a) it is in the vicinity of accommodation facilities and the majority of its users are guests of these lodging facilities; b) running the beach is the responsibility of the aforementioned tourist resort, which includes cleaning the beach and ensuring a wide variety of recreational facilities and activities - loungers, pedal boats, jet skis, parasailing, surfing, various activities involving speedboat towing (the ring, "banana", water skiing), sailing and diving - and coffee shops/restaurants for beach users. An excellent example of this type of beach is the organization of Club Med, a privately owned accommodation complex consisting of hotels/bungalows with plenty of restaurant, recreational and entertainment facilities. In some cases, such as "all-inclusive" arrangements, the majority of these services are free for all guests. A large majority of beach users in a tourist resort use these facilities for recreational purposes, and not just for relaxation (sunbathing).
As a tourist resource on which the bathing tourism offer is based, beaches significantly enrich the tourism offer of destinations located along coastal areas as well as on river and lake banks. They represent a natural, social and economic (recreational) resource. Beaches represent a highly valuable resource in tourism because they are one of the main motives for undertaking a journey. The quality of the bathing tourism product depends on how beaches are evaluated. Bathing tourism is characterized by mass tourism, so it is necessary to bear in mind that tourism evaluation must reflect tourist preferences while, at the same time, the offer must be developed in such a way as to uphold sustainability principles. Only in this way is it possible to achieve tourist satisfaction together with economic effects (Zadel, 2015). This is also acknowledged by Semeoshenkova and Williams (2011, according to Bojanic, 1992; Vaz et al., 2009), who emphasize that within the tourism industry, as the fastest growing economic sector at the world level, beaches are considered the main and most important factor influencing that growth. For the majority of tourists, the presence and good quality of a beach and its facilities represent the most important and most attractive factor in the decision-making process when choosing a holiday destination. Beaches and nearshore waters provide tourists with opportunities for sunbathing, relaxing, and other activities such as swimming, surfing, yachting, fishing, and jet skiing. Beaches represent the most important recreational and leisure areas, significantly influencing the development of coastal countries' economies.

In order for a tourist destination to achieve a higher level of development of its tourist offer and to achieve competitiveness on the tourist market, it is important to determine its current state, as well as the attitudes of tourists, in order to decide which improvements are the most appropriate and how best to implement them. Worldwide, beaches are a significant source of income, and in recent years interest has been growing in the possibility of using their recreational and economic potential. The Republic of Croatia has made significant strides over the years in developing this form of tourist offer.

Besides the numerous regulations and projects of European and world councils and organizations dealing with ecology and sustainable development, documents and action plans for beach management have also been adopted at the national level with the goal of increasing beach quality and competitiveness, which necessarily leads to an increase in their economic valuation. In 2018, the Ministry of Environment and Energy adopted the Strategy of Managing the Sea Environment and Coastal Area, aimed at protecting the marine environment. The significance of beaches for the further development of tourism in the Republic of Croatia was recognized by the Ministry of Tourism, which, in 2014, adopted the National Programme of Managing and Arranging Sea Beaches, with a unique database of all beaches and their facilities, out of which the process of beach theming arose. Regional plans of beach theming were subsequently prepared at the county level. One of these counties is the Istrian County, which is the subject of research in this paper.
Many activities have been focused on the development of various specific forms of tourism, but bathing tourism has remained the leading component of the tourist product. One Croatian county that needs to be pointed out in particular is the Istrian County. In the following chapter the authors present a brief theoretical review of the state of the tourist offer of the Istrian County, with particular emphasis on its beach resources, as well as tourists' attitudes towards beaches within the Istrian County. Based on this, they will offer a direction for the future development of bathing tourism in Istria.

Characteristics of Beaches as a Part of the Istrian County's Tourist Offer
According to the 2015-2025 Master Plan of Tourism of the Istrian County (2014), Istria is the westernmost county in the Republic of Croatia, territorially organized into 41 units of local self-government, that is, 10 cities and towns and 31 larger rural municipalities, with Pazin as the seat of the county. Its area covers 3,476 square kilometres, of which the majority, 3,130 square kilometres (90%), belongs to the Republic of Croatia, while the rest belongs to neighbouring Slovenia and Italy. The majority of the Croatian part of the peninsula is situated in the Istrian County (2,820 km²). Its climate is conditioned by the fact that Istria, as a peninsula, is surrounded by the sea on three sides, so the climate varies from Mediterranean to continental. Summers are dry and warm. The temperature is influenced by the mainland, the sea, and the elevation; in the summer period it can reach up to 40°C. The sea temperature is lowest in March (9-11°C) and highest in August (up to 24°C). The sea salinity is 36-38‰. In terms of transport connections, the county is well connected by sea, air and road. Over the years, considerable effort has been invested in the development of a competitive tourist product at the county level (with particular emphasis on specific forms of tourism, such as bathing tourism, rural tourism, nautical tourism, etc., the offer being based on Istria's comparative advantages), as witnessed by the growing tourist turnover presented in the following table.

The data in Table 1 indicate a positive growth in both tourist arrivals and overnights, with foreign tourists forming a significant majority, thus indicating that Istria has indeed been recognized as a tourist destination on the international tourist market. Bathing tourism has justifiably remained the dominant tourist product in this county, owing to high investments in beach management and development. Table 2 presents all the beaches in the Istrian County that have been awarded the Blue Flag¹, an international ecological programme for the protection of the sea and coastal environment, primarily aimed at sustainable management of the sea and the coastline (which ensures a clean sea and a well-maintained and organized environment), grouped according to the city where they are situated, their type and sort, and the attractions and activities they offer.
The data in Tables 1 and 2 indicate that the majority of Blue Flag beaches in the Istrian County represent a combination of natural and arranged beaches, followed by beaches that are entirely natural. The authors registered only two beaches categorized as entirely arranged. A significant majority of them offer a large number of various facilities for children and adults, which should be able to ensure satisfaction for everyone's taste and preferences. But is that so in this particular case? In order to determine the state of the bathing tourism offer in the Istrian County and tourists' attitudes towards it, the authors present the results of the empirical research carried out.

Attitudes Towards the Beach Tourist Offer of the Istrian County
The authors present the results of an empirical research study, which involved determining the attitudes of key stakeholders by surveying tourists' satisfaction with the beach offer. A comparative analysis was carried out between the authors' research and the secondary data obtained from the results of the research entitled "Beaches Product of a Tourist Destination", carried out by the Institute of Agriculture and Tourism in Poreč in 2014 and published in 2016.

The research was carried out from 1 to 15 May 2018 on a sample of 173 correctly completed questionnaires.

¹ For more information, please see https://www.adriagate.com/Croatia-en/Blue-flag-beaches-Croatia and http://www.blueflag.global/beaches2/

Monitoring the socio-demographic characteristics of the respondents shows that 66% of the questionnaires were filled in by women and 34% by men. According to the results of the secondary data from the 2014 research (2016, pp. 56-61), in terms of gender, the majority of respondents were female (57.0%), while 43.0% were male.

All the research participants in 2018 planned to spend their holiday at the destination, while in comparison, in the 2014 results, 72.8% of the respondents stated that they would spend a holiday at the destination.

In terms of the accommodation chosen for their stay, the majority chose private accommodation (83%) and only 17% a hotel. In 2014, research participants mostly chose private accommodation for their holiday (29.9%), followed by a camp (29.0%) and hotel accommodation (23.5%). Analysing the way they reach the beach, in both studies the respondents stated that they preferred walking (54.8% in 2014 and 62.9% in 2018) or going by car (31.1% in 2014 and 34.3% in 2018), while a small share preferred to use a bicycle, motorcycle or boat (14.1% in 2014 and 2.9% in 2018).
In terms of the duration of their stay on the beach, the majority preferred to stay longer: in 2014, 41% of the respondents stated that they preferred to spend 3-5 hours on the beach, while in 2018 that share grew by 18 percentage points. In 2014, 23% of the respondents stated that they preferred to stay longer than 5 hours, while in 2018 an increase of 12 percentage points was registered. There was a significant decrease in 2018, in relation to 2014, in the share of those staying less than 3 hours (-30 percentage points). These indicators show the average length of tourists' stay on beaches and the time during which they could use potential beach facilities, but also the possibility that interesting beach activities might eventually prolong their stay on the beach. The data in the following table indicate that, according to the 2014 age structure, the majority of respondents most frequently spent their time on the beach with their family (42%) or partner (40%), while in 2018, when tourists of a younger age group prevailed in the age structure, respondents understandably spent their time on the beach with friends (46%). When gathering information on a beach and its facilities, the tourists indicated that previous positive experience had a significant impact, followed by word of mouth and information provided in a hotel/camp or private accommodation; other sources were marked with a significantly smaller share. When analysing the reasons why the respondents chose particular beaches, the authors obtained some very interesting results. When choosing a beach in 2014, the respondents gave priority to sea cleanliness (51.9%), followed by the vicinity of the accommodation (49.3%), the beauty of the landscape (82.4%), beach access (39.2%) and beach cleanliness (37.9%). A smaller share marked the importance of children's facilities (11.0%), the use of beach props (sunshades and loungers) (10.9%), parking spaces (9.7%), sports facilities (8.7%) and entertainment facilities (6.8%). The lowest importance was given to accessibility for disabled persons, wheelchairs, etc. (0.7%). In 2018, the results were somewhat different. It is interesting that, besides sea cleanliness (91.4%) and the beauty of the landscape (76.5%), the respondents emphasized a rich gastronomic offer (64.7%), a sufficient number of parking spaces (55.9%), and also the possibility of bringing dogs to the beach (44.1%).

These results show how much the habits of potential tourists change and how significant individual facilities on the beach have become, so that beaches are no longer a place of purely passive rest. A variety of beach facilities would significantly prolong tourists' stay on the beaches. The data in the previous table show that in 2014 respondents preferred using only their own props (67.1%), followed by props available at the beach (24.1%), restaurant facilities (17.6%), other sports facilities (10.0%) and children's facilities (9.1%). The lowest interest was shown in surfing facilities (0.9%). However, a change in tourists' habits was registered in 2018, when greater significance was given to restaurant facilities (76.5%), music and dancing (67.6%), props available at the beach (sunshades, loungers) (61.8%), their own props (52.9%), and adrenaline facilities (ski lift, bungee jumping, etc.) (44.1%).

Table 9 presents the respondents' satisfaction grades for individual beach characteristics.
As far as the level of satisfaction with the beaches is concerned, the respondents emphasized that, in the majority of cases, preserved natural resources are important to them: the beauty of the landscape (4.4), sea cleanliness (4.3), beach cleanliness (4.2), the vicinity of the accommodation (4.1), beach access (4.0) and bathing space and comfort (3.8). However, what is worrying is the lower grade given to some elements that can significantly affect possible future decision-making when choosing a holiday destination, such as parking facilities (3.6), comfort on the beach (3.8) and the relation between the quality and price of the offer (3.7). Lower grades were also given to the possibility of using props (3.6), the availability of sports facilities (3.4), bringing dogs to the beach (3.3), the quality of beach facilities (3.6) and boat access (3.4). The lowest grade was given for accessibility for disabled persons, wheelchairs, etc. (2.7).

When asked whether, in their opinion, beaches should be themed, most of the respondents replied negatively (more than 66%). However, when asked which beach theme they would choose if they had to, the replies showed the biggest deviations between the two research periods. In 2014, the majority of the respondents chose the theme of a traditional beach for families with children (30.8%), followed by beaches with a romantic theme (16.9%), beaches with sports facilities (16.0%), party beaches (14.3%), eco beaches (12.4%), etc. Lower interest was shown in themes such as adrenaline beaches (5.9%) and diving beaches (5.3%). The lowest score was given to beaches for same-sex couples (1.6%). The research carried out in 2018 registered changes. The respondents singled out the themes of a party beach (77.1%), followed by a beach for families with children (68.6%) and a beach with sports and recreational facilities (51.4%), while the least interest was shown in themes such as a beach of culture (8.6%), a beach for same-sex couples and a nudist beach (11.4%).

The Results of the Empirical Research
Bearing in mind all the previously mentioned facts, the authors have determined the following symptomatic results:
• Monitoring the socio-demographic characteristics of the respondents shows that in the first monitored period the majority of respondents belonged to the 35-49 age group (33.8%), while in 2018 the respondents were mostly younger people belonging to the 25-34 age group (41.4%) and the 18-24 age group (23.6%). Deviations in the beach facilities to which they attach greater significance are present according to their age structure.
• In both research periods, the majority of the respondents stated that they would use private accommodation during their stay at the destination.
• They arrive at the beach mostly on foot, and to a smaller extent by car. This indicates that it would be advisable to provide a suitable number of parking places in the vicinity of the beach.
• In both monitored periods, visitors most frequently stay on a beach for 3 to 5 hours (41% in 2014 and 59% in 2018). This indicates the possibility of a prolonged stay on the beach if additional facilities and spaces are offered.
• The results indicated that, in 2014, the participants primarily went to the beach in the company of their families (42%), while in 2018, considering that the questionnaire was most frequently filled in by a younger population, the respondents stayed on the beach in the company of their friends (46%) and with their partner (31%).
• The respondents also stated their priorities when choosing a specific beach. In both monitored periods, the most important criterion was sea cleanliness (51.9% in 2014, which grew significantly to 91.4% in 2018). Furthermore, in 2014 they stated the importance of the vicinity of their accommodation, while in 2018 the emphasis was given to the beauty of the landscape and the possibility of additional services on the beaches. A symptomatic datum is that they paid great attention to the gastronomic offer (64.7%), which was not the case in the previous period of research (6.4%). During their stay on the beach, in the first monitored period they were mostly interested in facilities such as renting loungers and sunshades, while in the second period the advantage was given to gastronomy and entertainment facilities (music and dancing). This can be a reflection of the various age groups that participated in the research.
• When analysing the satisfaction of the participants with individual elements of the beach offer and additional facilities, in both periods the respondents singled out the beauty of the landscape (average 4.4) and sea cleanliness (average 4.3). The lowest grades were given to facilities for disabled persons (average 2.7) and facilities on beaches for dogs (average 3.3).
• The tendency of the respondents towards certain beach themes was also analysed. Beaches for families with children (30.8%) and romantic beaches (16.9%) were the most recognized in 2014, while in 2018 the greatest attention was given to party beaches (77.1%) and beaches with sports and recreational facilities (51.4%).

In the globalized world, beaches have been recognized as a factor in achieving the competitiveness of a tourist product, considering that they represent the main centre of tourism; they have become the icons of contemporary tourism and are considered among the main factors on the tourist market. Bearing this in mind, beaches are becoming significant spaces for social recreation and holiday. They are more and more considered a highly valuable resource, not only socio-economic but also ecological and national, which demands effective management.
Conclusion
In this paper the authors have carried out an analysis of the state of sea beaches in the Istrian County, emphasizing in particular sea beaches as a resource base, in view of the significance of beaches, the length of the coastline, beach division and beach facilities. Furthermore, the legal basis has been analysed, namely the limitations of and recommendations for the improvement of the beach management system. In line with the vision of the destination, with the goal of an optimal distribution of bathers (tourists and the domicile population), and with the purpose of providing optimal answers to tourist preferences, the results of the primary research on visitors' satisfaction with beach facilities have been presented in this paper. As an answer to the changing trends in the habits of potential beach visitors, a comparative analysis was presented of the results of the secondary research carried out in 2014 and of the research carried out by the authors in 2018 on a sample of 173 respondents. In other words, within the frame of the existing beach theming, according to the National Program of Sea Beach Managing, it is necessary to determine adequate facilities and services for beach spaces which, according to the preferences of the visitors, represent the attraction resource of the tourist product of Sun, Sand and Sea. Today, various themes are known which suit target groups on the market: beaches with sports and recreational facilities, urban beaches, beaches with entertainment facilities for young people, romantic beaches, beaches for surfers, diving beaches, adrenaline beaches, nudist beaches, beaches for families with children, party beaches, beaches with health benefits, beaches of culture, eco beaches, resort beaches, beaches for pets, mixed beaches with various zones, and similar. However, the preferences of visitors changed significantly over the four-year interval between the two pieces of research. In both monitored periods, the respondents gave the greatest significance to natural beauties. The conclusion imposes itself that beach visitors in the first period gave an advantage to beaches themed around facilities for families with children, while in the second monitored period a much greater emphasis was given to beaches with entertainment facilities and to party beaches.
Chart 1: Tendency of the Respondents towards Beach Themes (%). Source: Author's research; Institute for Agriculture and Tourism (2016, p. 78).
Table 1: Tourist Arrivals and Overnights in the Istrian County in the 2014-2017 Period. Source: Statistical Bureau of the Republic of Croatia, www.dzs.hr.
Table 2: Characteristics of Beaches in the Istrian County Awarded the Blue Flag in 2017 (beach type and facilities listed per beach, e.g. public parking, sunbathing lawns, natural shade, beach bars and restaurants, sunshade and lounger rental, showers, changing rooms, public toilets, barrier-free access to the sea, lifeguards, children's playgrounds, beach volleyball, water sports, diving, and similar amenities; Beach St. Andrea, Rabac: natural rocks, gravel and fine gravel, with parking, food and drink facilities, pedal boats, a floating aqua park for children, a children's playground, boat rental, scuba diving, an entertainment park and mini-golf).
Table 3: Accommodation during the Stay at a Destination (%).
Table 5: Duration of the Stay on the Beach (%).
Table 6: Company on the Beach (%). Source: Author's research; Institute for Agriculture and Tourism (2016, p. 60).
Table 7: Reason for Choosing a Beach (%).
Table 8: Use of Facilities while Staying on a Beach (%).
Table 10: Tendency of the Respondents towards Beach Themes (%).
Bronchial atresia in a neonate with congenital cytomegalovirus infection: Bronchial atresia (BA) is characterized by a mucus-filled bronchocele in a blind-ending segmental or lobar bronchus with hyperinflation of the obstructed segment of the lung. We describe a neonate who presented on his 9th day of life with respiratory distress. Chest computed tomography showed a soft tissue density involving the right middle lobe (RML). RML lobectomy confirmed the diagnosis of BA. Cytomegalovirus was detected by polymerase chain reaction in blood, urine, and tracheal aspirates, which may provide further insight into the pathogenesis of BA.

Bronchial atresia (BA) is an anomaly characterized by a mucus-filled bronchocele in a blind-ending segmental or lobar bronchus, with hyperinflation of the obstructed segment of lung. [1] It was first described in the literature by Ramsay in 1953. [2] Subsequently, in 1963, Simon and Reid [3] described it in detail in a series of three patients who had an atretic bronchus in the antero-apical region of the left upper lobe. We report a case of BA in a neonate with congenital cytomegalovirus (cCMV) infection.

Case Report
A 9-day-old boy presented with a history of increased work of breathing and cyanosis. He was born at 40 weeks gestation via vacuum extraction for fetal distress. His birth weight was 3.0 kg with normal Apgars, and he did not require any resuscitation. Meconium-stained liquor was noted at delivery. He was discharged home at 48 h of age. Antenatal ultrasound at 22 weeks of gestation demonstrated hyper-echoic changes in the thorax and abdomen which did not progress throughout pregnancy. At presentation to the hospital, he was in moderate respiratory distress and was commenced initially on continuous positive airway pressure, but he required mechanical ventilation for respiratory deterioration in the 2nd week of life. He had a normal white cell count and C-reactive protein. Blood cultures were negative. CMV was detected by polymerase chain reaction (PCR) in urine and blood, and in endotracheal aspirate samples. In addition, stored blood samples (from the newborn screen) taken on day 2 of life were also positive for CMV by PCR. Due to an ongoing requirement for mechanical ventilation associated with left mediastinal shift and lung compression from an overinflated right middle lobe (RML), the child had an RML lobectomy [Figure 2]. Pathology revealed a bronchocele with an 11-mm mucus plug [Figure 2] in a sub-segmental bronchus of the RML. Although bronchi were seen to arise from the cyst and communicate with the distal lung, causing marked over-inflation, no direct continuity was identified between the bronchocele and the proximal bronchi. Cytomegalovirus inclusions with minimal surrounding inflammation were noted on microscopy of the peripheral lung. After excision of the RML, the patient was extubated and gradually weaned off oxygen. However, following anesthesia for central line placement at 1 month of age, for a 6-week course of ganciclovir for cCMV infection, he developed further respiratory distress. A repeat CT scan demonstrated persistent hyperinflation of the residual right lung, especially the right lower lobe. Instead of further lobectomy with permanent loss of lung mass, right lung volume reduction surgery was performed. There was subsequent marked clinical improvement, and he was discharged home aged 3½ months and continues to thrive with no respiratory distress.

Discussion
The etiology of BA remains unknown.
It was thought to be caused by an antenatal vascular insult around the 16th week of gestation, during the late stages of lung development. [3][4][5] However, other theories suggest that a nest of proliferating cells loses connection with the distal tip of the developing bronchial bud and continues to branch independently. As a result, normal branching distal to the atresia is maintained without actual connection to the central airway. It is hypothesized that this would occur around the 5th-6th week of gestation, which is the time the proximal airways develop. This is also the time when bronchogenic cysts are thought to develop. [6] As there is no direct communication with the central airways, the hyperinflation distal to the atretic segment is thought to be due to aeration by collateral air drift through the intra-alveolar pores of Kohn, the bronchoalveolar channels of Lambert, and the interbronchiolar pores of Martin. This theory is supported by newer imaging techniques using xenon ventilation CT. [7]

In children, BA usually has a symptomatic presentation with cough, respiratory distress, or recurrent infections, and has a female predominance (59%). It occurs most commonly in the right lower lobe (39%), followed by the left or right upper lobes (23%). [7] A prenatal diagnosis of BA using ultrasound and fetal magnetic resonance imaging (MRI) has seldom been made. [5,[8][9][10] Postnatally, chest radiographs and CT are the main tools in diagnosis and may show segmental hyperinflation and mucus impaction. Surgical resection of the affected segment should be considered in symptomatic patients.

Our patient is particularly interesting because of the congenitally acquired CMV. CMV was identified by PCR in urine, blood, and respiratory secretions in the 2nd week of life and from stored blood from the newborn screen on day 2 of life. These findings, together with the clinical presentation, confirm symptomatic congenital CMV infection. One case report previously described congenital lobar emphysema in a patient with cCMV infection, [11] but BA, to the best of our knowledge, has never been reported in cCMV-infected patients. It is possible that the CMV infection caused the BA, either due to a vascular insult or to secondary inflammation at a crucial time of bronchogenesis causing atresia of the affected bronchus, as was previously hypothesized. [11] This is supported by the presence of CMV inclusion bodies in the resected lobe.

In summary, this is the first reported case of BA occurring with cCMV infection, which may give further insight into the pathogenesis of this rare condition.
Oral Health, Dysphagia, Distress, and Health Service Needs of Head and Neck Cancer Survivors 5 Years Post-Chemoradiotherapy

Introduction
Nonsurgical approaches to head and neck cancer (HNC) treatment, including radiotherapy with or without chemotherapy ((chemo)RT), can significantly affect the swallowing function of patients [1][2][3][4]. Although evidence supports that many individuals will experience improved swallowing in the months following (chemo)RT, for a considerable proportion of individuals dysphagia continues to be a persistent issue at one year post-treatment [1,3,[5][6][7]. Furthermore, in the limited studies conducted to date, evidence suggests that dysphagia may continue to persist for many years post-treatment and that a subset may even undergo further functional decline [5,[8][9][10].

With increasing numbers of patients living longer following cancer treatment [11], it has become important that the extent and long-term impact of nonsurgical treatment on swallow function is better understood. Many patients who have dysphagia post-treatment will continue to experience persistent swallowing difficulties a number of years later and, in some cases, present with worsening of the condition, thought to be due to the ongoing effects of tissue fibrosis causing continued functional tissue loss, leading in some instances to stiffening and hardening of tissues, and possible stricture formation [11,12]. The high prevalence of long term xerostomia [13] and dysgeusia [14] also contributes to ongoing patient-reported swallowing dysfunction due to resultant discomfort and changes to diet choices [15]. Radiation induced neuropathy and muscle atrophy are also potential causative factors for long term dysphagia and trismus, as are mucosal sensory changes following radiotherapy [11,16,17].
Existing evidence regarding long term swallowing outcomes, however, comes from only a limited number of studies. Several have focused primarily on the physiological and clinician-rated changes to swallowing function [7,8,10,18,19], and have found that the majority of patients have been unable to resume a normal diet and present with physiological impairment [7] characterized by deficits in laryngeal movement, epiglottic deflection, tongue base retraction and pharyngeal contraction in the years following treatment [10,18,19]. Cancer survivorship literature now places greater emphasis on exploring the long-term impact of oral health effects using patient perceptions of their functional state. As such, there has been growing importance placed on using patient-reported outcome (PRO) measures, rather than clinician directed tools or physiological function measures, to explore the impact of cancer and its treatment. The few studies that have explored long term outcomes using PRO tools [20][21][22] also document the continuing presence of dysphagia and oral health effects more than a year following treatment, as well as the potential for a negative change in function over time. Patients have reported significant worsening of dysphagia and xerostomia to 2 years post-treatment, with lower proportions of patients able to tolerate a full diet at each follow up [5].

Whilst the available evidence would suggest dysphagia continues to be an issue and may increase in severity for a proportion of patients following nonsurgical treatment for HNC, the current evidence on long term outcomes is limited due to the overall small volume of studies conducted to date, the small number of studies providing valid assessment of functional change over time (e.g. lack of pre- and post-treatment data), and the lack of comprehensive examination of swallowing from the patient perspective. Furthermore, in the studies already completed, there has been no discussion of services sought by patients experiencing dysphagia in the long term following treatment. Of the two existing international studies to have reported service patterns for patients with HNC, neither provided extensive information on the provision of swallowing management long-term post-treatment [26,27]. Within Australia, the availability and uptake of services for patients with long term dysphagia is unknown. Clinical experience, though, would suggest there is a potential absence of services and support mechanisms for patients presenting with long term dysphagia and associated oral health effects. Hence the primary aim of this study was to explore the incidence and nature of long term patient-reported dysphagia, oral health effects, and co-occurring distress following non-surgical treatment for HNC. The secondary aim was to determine the nature and extent of the services sought, and provided, for this population.

Participants
Suitable participants for this study were identified using the patient record system of the Princess Alexandra Hospital (PAH) Radiation Oncology Department in Queensland, Australia. Informed consent was obtained from all individual participants included in the study. For inclusion, participants had to have undergone (chemo)RT for HNC with curative intent 5-6 years prior to contact (between January 2007 and December 2008). Patients were excluded if they had undergone palliative treatment or treatment for recurrent HNC, had not received speech pathology intervention during or following treatment, had any pre-existing medical condition which could have impacted on swallow function, or had undergone primary surgical management for their HNC (excluding biopsies, isolated neck dissection or tracheostomy insertion for airway management). Figure 1 outlines the recruitment process. The details of the 20 participants can be found in Table 1. The final cohort consisted of 17 males and three females with a mean age of 56.9 years (SD = 9.17) who had received 3D conformal (chemo)RT treatment 5-6 years prior to participating in the current study.

Procedure
All eligible participants were contacted for consent through a letter of invitation, with a follow-up phone call to determine their desire to participate. Once consented, the research team collected retrospective data from the medical chart regarding each participant's swallowing and speech pathology service history for comparison with current status at 5-6 years post-treatment. Using the speech pathology entries and diet recommendations in the medical chart, the patient's level of swallowing function at three prior time points was collected: (a) at commencement of treatment, (b) at week 4-5 of treatment, and (c) on completion of treatment (week 7-8). Functional oral intake at these time points was recorded using the Functional Oral Intake Scale (FOIS) [28], which is a 7-point scale used to describe functional swallowing outcomes and regular dietary intake of patients, where 7 is a normal diet and scores of 3 and below represent the need for non-oral nutritional support. In addition, information was collected from the medical chart regarding the timing and extent of speech pathology services accessed. This information included data related to the timing and number of sessions with speech pathologists after treatment.

As part of the prospective data collection, all eligible, consenting participants were contacted and asked to complete a "self-reported" Functional Oral Intake Scale (srFOIS). Although the FOIS tool is typically completed by a clinician, it was adapted for the purposes of this study to be used as a patient-reported tool. The clinician-rated items from 0 to 7 were presented in simplified language and expressed in the first person (Appendix 1) to create the srFOIS. The scale remained the same as the FOIS, with lower numbers representing increased oral intake restrictions.

Participants also completed three additional tools at the time of contact, including: (a) the Vanderbilt Head and Neck Symptom Survey version 2.0 (VHNSS v2.0), a reliable and valid HNC specific questionnaire containing 50 items scored from 0 (no symptoms) to 10 (severe symptoms) relating to functional swallowing status and oral health effects [29]; (b) the Functional Assessment of Cancer Therapy - Head and Neck (FACT H&N), a general QoL tool validated with the HNC population [30] containing four domains of functioning (physical, social/family, emotional and functional), where higher scores represent improved QoL; and (c) the Distress Thermometer, a validated, patient-rated score describing stress levels in the cancer population which is scored from 0 (no distress) to 10 (extreme distress), with additional questions regarding potential reasons for distress across six areas including practical, physical, spiritual, family, emotional and other causes [31]. The battery of assessments was specifically selected to provide information on functional oral intake/dysphagia, while also indicating patient reported oral health and distress.

Patients were also asked to report on the type and extent of services accessed, and desired, through three additional multiple choice questions (Appendix 2). These questions were related to the patients': (1) current access to health services; (2) desire for further access; and (3) goals of these services. Both questions (1) and (2) had 14 health related services provided as optional responses, with an additional option of "other" services. For question 3, participants were provided with responses regarding what they may wish to gain from accessing services.

Data was collected through various methods to optimize participation. As selected by each participant, the assessment tools and additional questions were completed either (a) in person with support from a research team member, (b) over the phone with the assistance of a researcher, (c) independently by completing hardcopy versions provided via the mail, or (d) independently via electronic versions of the tools delivered via a secure online survey site (https://www.surveymonkey.com/).

Analysis
Data collected from the aforementioned assessment tools was entered into Excel and basic descriptive data for the cohort was computed. Qualitative data yielded from any free text responses provided by participants regarding services was analysed using content analysis. This was completed by one member of the research team and validated by a second. Any disagreement was resolved with discussion with a third team member. Statistical comparisons using Friedman's tests with post hoc Wilcoxon tests were used to explore change across the FOIS data collected before, during and at the end of treatment. As the srFOIS data was collected through patient report as opposed to clinician rating, this data was analysed separately and compared to the clinicians' FOIS ratings at pre- and post-treatment using planned contrasts (Wilcoxon Signed Ranks tests). Significance was set at p < 0.05. To explore any relationships between current swallowing function, extent of long term swallowing related treatment side effects, distress and the need for services, key data obtained from the srFOIS, the VHNSS v2.0 and the service questions was triangulated and examined for individual patient patterns.
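The comparisons described above can be reproduced with standard statistical routines. The following is a minimal illustrative sketch only, not the study's analysis code: the FOIS vectors contain hypothetical placeholder scores rather than the study data, and Python with SciPy is assumed. It shows how a Friedman test followed by post hoc Wilcoxon signed-rank contrasts, with significance set at p < 0.05, could be run on repeated FOIS measurements.

```python
# Illustrative sketch only: hypothetical FOIS scores, not the study data.
# Shows how Friedman's test with post hoc Wilcoxon signed-rank contrasts
# (alpha = 0.05) could be run on repeated FOIS measurements.
from scipy.stats import friedmanchisquare, wilcoxon

# One FOIS score (1-7) per participant at each time point (hypothetical values)
fois_pre  = [7, 7, 6, 7, 5, 7, 7, 6, 7, 7]
fois_wk5  = [4, 5, 3, 5, 2, 4, 5, 3, 4, 5]
fois_post = [5, 5, 4, 5, 3, 4, 5, 4, 5, 5]

alpha = 0.05

# Omnibus test for change across the three time points
stat, p = friedmanchisquare(fois_pre, fois_wk5, fois_post)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

if p < alpha:
    # Post hoc pairwise Wilcoxon signed-rank contrasts
    contrasts = {
        "pre vs week 5":  (fois_pre, fois_wk5),
        "pre vs post":    (fois_pre, fois_post),
        "week 5 vs post": (fois_wk5, fois_post),
    }
    for label, (a, b) in contrasts.items():
        w_stat, w_p = wilcoxon(a, b)
        print(f"{label}: W = {w_stat:.1f}, p = {w_p:.4f}")
```

In practice, a correction for multiple comparisons (e.g. Bonferroni) would typically be applied to the post hoc contrasts; the sketch omits this for brevity.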
Functional swallowing status (FOIS)
Results of the FOIS scores collected at pre-treatment, week 5 of treatment, and post-treatment revealed a significant (χ² = 25.423, p < 0.001) change in swallow functioning across time (Table 2). Post hoc analysis revealed a significant (Z = -3.742, p < 0.005) decline in swallow function from pre-treatment to week 5 of treatment and also from pre-treatment to post-treatment (Z = -3.528, p < 0.005). There was no significant (p > 0.05) difference observed between FOIS scores at week 5 and at the end of treatment. Overall, descriptive statistics revealed that the majority (80%) of patients were tolerating a normal diet prior to treatment, which dropped to less than 20% at week 5 and immediately post-treatment (Table 2). One participant in the study cohort required alternative feeding prior to commencing treatment (FOIS score of 1-3), which increased to 4 participants in the early post-treatment phase.

Using planned contrasts, a significant (Z = 3.596, p < 0.005) improvement in functional swallow status was observed between the participants' FOIS scores reported post-treatment and their current srFOIS at 5-6 years later. Descriptive statistics revealed that 30% had returned to a full normal diet, while the majority (60%) reported they were now tolerating a non-texture modified diet but still had to avoid specific food or liquid items (srFOIS level 6). At 5-6 years post-treatment, only 10% required a texture modified diet (Table 2), compared to immediately following treatment, where 45% were on modified texture diets (FOIS level 5). By 5 years post-treatment, no participants required alternative feeding. Despite this improvement, at 5-6 years post-treatment the mean srFOIS scores remained significantly (Z = -2.500, p = 0.012) lower than pre-treatment. At pre-treatment, the large majority (80%) of patients were managing a full, normal diet, while only 30% had returned to this level at 5-6 years post-treatment.

Patient-reported outcomes
Results of the VHNSS v2.0 revealed that all participants reported some negative health outcomes (toxicity rating ≥ 1) in each of the main symptom categories at 5-6 years post-treatment (Table 3). Further examination revealed that > 25% of the current cohort reported moderate to severe difficulties (toxicity scores ≥ 4) within all domains except nutrition, mucositis, smell and pain (Table 3). In terms of specific dysphagia related items, moderate to severe difficulties in eating solids, xerostomia, increased eating duration, sensitivity to dryness and sensitivity to spicy, hot or acidic foods were reported by more than half of the cohort.
When analysed in relation to the parameters described by List et al. [30], the QoL of the participants as determined by the FACT H&N (Table 4) was better than average. List et al. [30] classified QoL scores into those reported by patients with 'good' or 'poor' overall performance as rated by a Karnofsky scale. The mean QoL score of the current cohort in each domain is better than the average score reported by patients with 'good' global functioning, with the exception of social wellbeing. The mean social wellbeing score was slightly below the mean score reported by List et al. [30].

Distress
Data from the Distress Thermometer revealed that 45% of participants were experiencing some degree of ongoing general distress (scores > 0) at 5-6 years post-treatment, with 25% of patients reporting moderate to severe (> 4) levels of distress. The main causes of distress reported by the cohort are presented in Table 5. Fatigue was a source of distress for 40% of the cohort, while 30% of the participants reported eating and drinking difficulties as a cause.

Services
Review of patient medical records revealed that 60% of patients accessed speech pathology services following completion of (chemo)RT, with an average of three sessions attended. The majority of these services were provided in the initial six weeks after (chemo)RT. Only one participant was reported to have received speech pathology services beyond 12 weeks following treatment completion. Regarding services participants were currently accessing, 50% remained involved with health professionals, although this was largely contact with medical professionals (40% otolaryngologist, 25% general practitioner, and 15% radiation oncologist). Only 5% continued to visit a speech pathologist 5-6 years following treatment. The majority of participants reported limited desire for further services beyond those already being sought, with only two patients expressing interest in receiving further support from any health professionals.

Data triangulation
Comparison of the data obtained from the dysphagia related items of the VHNSS v2.0, the distress scale and the services data revealed some relationship between these key data points (Table 6). The participants with increased functional swallowing deficits (srFOIS ≤ 6) on average reported more moderate to severe swallowing related side effects on the VHNSS than those tolerating a normal diet (srFOIS = 7). There were also no participants tolerating a non-texture modified diet (srFOIS = 7) who reported moderate to severe distress levels. However, desire for services did not appear to be related to high distress, the lowest srFOIS scores or the most significant levels of dysphagia related side effects.

Discussion
The results of this study indicate that the majority of patients report ongoing dysphagia 5-6 years following (chemo)RT for HNC. Furthermore, all patients reported ongoing negative oral health. Distress was an issue for almost half of the individuals. Despite these multifaceted physical and emotional issues, few additional services were desired or being sought by patients to assist in their management. The current data supports previous studies in which swallowing difficulties, psychological distress and continuing side effects have been reported by patients long term following nonsurgical HNC treatment [5,21,22,32], and contributes important new insights into long term service needs.
Comparison of the FOIS data collected before and immediately following treatment confirmed that the majority of the current cohort experienced significant dysphagia during, and at the end of, (chemo)RT treatment. Comparison of that data with patient reported swallowing status at 5-6 years post-treatment revealed that significant functional improvements had been experienced. With respect to the severity of dysphagic symptoms, only 10% of the current cohort continued to require texture modified diets, compared to 33% in the Frowen et al. [8] cohort at 5-6 years post-treatment. However, swallowing function had not returned to pre-treatment levels, with only 30% of the cohort able to manage a full unrestricted diet at long term follow up. This incidence is lower than that reported by Cartmill et al. [5], Frowen et al. [8], and Newman et al. [33], who found that 42%, 59%, and 72% of their cohorts, respectively, had returned to normal diets more than a year following radiotherapy or chemoradiotherapy. Conversely, Berg et al. [7] found that no patients in their cohort of 32 were tolerating a normal diet at 14-68 months post-chemoradiotherapy. The current findings support the growing body of literature which indicates that swallowing difficulties remain a chronic condition for a proportion of patients in the long term following HNC management [5,6,8,9,11].

Patient report revealed that the difficulties experienced were largely related to difficulty eating solids and an increased eating duration. A proportion (≥ 30%) of the participants also indicated moderate to severe symptoms of coughing after swallowing, reported getting food stuck in the throat and mouth, and experienced choking on solids. Similar results were reported for the heterogeneous cohort of patients 6-166 months post-HNC treatment studied by Cooperstein et al. [29]. It is possible that these specific swallowing difficulties/symptoms (coughing, food sticking, choking) reported by participants in the current study relate to the presence of pharyngeal residue post swallow and subsequent post swallow penetration/aspiration. Such physiological difficulties have previously been reported as ongoing issues following nonsurgical HNC treatment [8,10,19].
The current cohort also reported ongoing negative oral health alongside their dysphagia. Almost all of the participants reported experiencing xerostomia, with the majority having moderate to severe issues affecting their ability to sleep, talk and chew. Increased mucosal sensitivity in relation to dryness and to spicy, acidic and hot foods was reported by the majority of patients, as well as altered taste and thick mucus. Oral health difficulties in relation to mucosal sensitivity and taste change resulted in patients altering their diet choices. Mucosal sensitivity causing eating and drinking issues has been well recognized in prior studies [15,34]. Previous studies have similarly reported continued incidences of xerostomia, taste changes, swallowing difficulties, limited range of movement and altered mucus production many years after treatment [5,21,22,35]. The significant ongoing issues relating to xerostomia and mucosal sensitivity reported by the current cohort of HNC patients add to the growing body of evidence supporting the presence of multiple long term difficulties contributing to functional deficits of oral intake.

Recent qualitative research by Nund et al. [36] found that patients may experience ongoing distress associated with dysphagia following non-surgical treatment. Indeed, distress was found to be a continuing issue in the current cohort, with a quarter reporting distress levels of 4 or greater on the Distress Thermometer, an indicator of clinically significant distress [37]. This finding is comparable to previous reports of a 27% distress rate in patients up to 10 years following treatment (surgical or nonsurgical) [38]. In the current cohort, 30% of participants identified eating and drinking as a cause of their ongoing distress. Bjordal and Kaasa [32] similarly found that 64% of patients with high levels of swallowing difficulties had high levels of distress, while half of those experiencing dry mouth, taste problems, coughing and difficulties related to mucus production were also classified as clinically distressed. The long term distress levels reported in both the current and previous studies indicate that, despite extended periods of time following treatment, a patient's psychological adaptation to their dysphagia is not ensured.

Interestingly, despite reporting a degree of ongoing distress many years after treatment, the global QoL of the current cohort was found to be positive. Mean scores in each domain (except social wellbeing) were higher than those provided by patients rated as having good global performance scores by List et al. [30]. Good QoL, even in the presence of ongoing treatment related deficits beyond 1 year after nonsurgical treatment, is a recurring trend in the literature exploring patient-reported outcomes [21,35,39]. de Graeff et al. [35] proposed that possible reasons for this discrepancy are patient adaptation causing a possible response shift, with less fear of recurrence/death many years after treatment. These reasons may also be proposed to account for the current positive QoL findings.
All the patients included in this study accessed speech pathology services during their nonsurgical treatment, with the majority receiving intervention in the first 12 weeks following completion. However, beyond this acute phase, only 1 patient continued to receive speech pathology services. Little is known regarding post-discharge speech pathology services for this population, as the only two studies [26,27] which have examined services provided to HNC patients failed to provide details regarding ongoing post-treatment intervention. The information obtained from the participants in the current study revealed that, although half continued to receive ongoing services from medical professionals (Ear, Nose and Throat specialists and general practitioners), very few reported receiving any allied health services, which would suggest that minimal ongoing engagement with rehabilitation services was undertaken in the long term. Furthermore, it was found that only a small number of participants wished for further services despite ongoing dysphagia, negative oral health effects and distress. The lack of interest in further intervention could be related to the positive QoL reported by the cohort, whereby patients acknowledge their ongoing difficulties but retain a positive QoL and therefore do not seek any further intervention in the long term. This argument is consistent with a degree of adaptation to their difficulties years after treatment [35,36]. Equally though, it is possible that participants may be unaware of any benefits which could be provided by further intervention, and as such they fail to seek services. It is important that individuals are made aware of possible services which could assist them, so they can make an informed choice about seeking out any further supports long term post-treatment.

Using triangulation of key data relating to ongoing oral health effects, distress and desire for services, a relationship was revealed between increased side effects and decreased swallowing function. There was also increased distress in those not tolerating a normal diet. Interestingly though, a desire for services was not indicated by the patients with the most severe ongoing side effects, distress levels or swallowing difficulties. This may indicate that patients have adapted to and accepted their current level of difficulty, with no desire for intervention, as discussed previously. It may also suggest that they are unaware of what services are available, or of how to access these services to aid their swallowing, psychological and general wellbeing. It is also possible that participants were considering only those services which targeted supportive intervention for physical impairments. Considering the levels of distress and the long term negative changes to health state which persist post-(chemo)RT, services which support patients' psychosocial needs may be most appropriate for them at this stage of their cancer survivorship journey.
Although the current study has provided further validation of the extent and nature of long term swallowing and treatment related changes for HNC patients 5-6 years following (chemo)RT, there are a number of limitations that must be acknowledged. Although the focus of this research was specifically to explore the patients' perspective of their current functioning, the addition of a clinical assessment of swallowing status in future studies could provide greater insight into the physiological factors contributing to the swallowing difficulties being reported. Equally, including a qualitative component to explore the issue of services, service needs and overall awareness of the services available would have provided more insight into the needs of this population. A further limitation of the current study was the small cohort size. Although recruitment included a 2 year treatment window, exclusion criteria, mortality and difficulties contacting participants 5-6 years after treatment resulted in significant attrition of the sample size, which is a natural consequence of the population being studied, and the authors acknowledge the potential for response bias associated with such attrition rates. Future studies which recruit from multiple sites could allow patterns to be examined in larger, more representative cohorts. Consistent with the nature of HNC and its management, it is also recognized that the cohort was not homogeneous, with participants representing a range of cancer stages and tumor locations, and having undergone differing modes of non-surgical treatment. Additionally, no consistent reporting of HPV status was available for this cohort. Unfortunately, due to the small cohort size, no sub-analysis of outcomes by tumor demographics is possible. Future studies with larger cohorts that could potentially stratify by cancer site, stage and treatment modality may find differential patterns regarding the extent and severity of long term outcomes in certain subpopulations. This warrants further investigation so that patients can be more fully informed regarding potential long term outcomes following different treatments.

Conclusion
The current study adds to the body of literature investigating the continuing presence of dysphagia in the long term following (chemo)RT. Ongoing patient-reported swallowing difficulties and oral health effects, including xerostomia, were highlighted 5-6 years post-treatment. Distress was a continuing factor in the population, with at least one third attributing this to their eating and drinking issues. Despite this, patient-reported global QoL was good. This positive QoL was potentially reflected in the small number of patients expressing a need for support services related to their HNC care. Despite participants indicating little interest in seeking further services, the fact that the current data confirms the presence of long term dysphagia, persistent negative oral health effects, and ongoing distress highlights the need to ensure support structures are in place and available for patients beyond the acute care stage. Further studies into this issue are needed to determine the nature of services that may be of most benefit for patients in the survivorship phase.
Figure 1: CONSORT flow diagram outlining the patient cohort selection process.
Table 1: Demographic Details of the Cohort at Presentation.
Table 2: Functional Swallowing Ability (FOIS) and Self-Reported Functional Swallow Ability (srFOIS) Relative to Treatment (n = 20). (a) Calculations based on 19 participants due to 1 participant missing week 5 data.
Table 3: Frequency and Severity of Oral Health Outcomes on the VHNSS v2.0.
Table 4: Patient Reported Long Term Quality of Life Following Treatment Using the FACT-H&N. (a) Head and neck specific domain not completed.
Table 6: Data Triangulation of Swallowing, Oral Health, Distress and Service Information. Note: a tick indicates a score of ≥ 4/10 on the domain items of the VHNSS v2.0 and DT, and an indication of desire for further health services; a cross indicates no scores ≥ 4/10 on the domain items of the VHNSS v2.0 and DT, and no desire for further health services.
Sanitary practices associated with animal welfare in the control of mastitis in the dairy herd

Agricultural practices, together with health management, are a powerful tool for improving production performance and animal welfare. This study aimed to verify whether good agricultural practices are an important tool in the control of bovine mastitis and to assess the producers' degree of knowledge of these practices. Guided visits were carried out to the rural properties, during which educational materials and tools made from recyclable material were distributed for use in sanitary management practices related to animal welfare. To evaluate the physiological profile, biological samples (blood, feces and milk) were collected, and a questionnaire was used to assess herd health and animal welfare. Twenty cows were selected at random, totalling 79 mammary quarters; 36 quarters were negative on the CMT, but 12 of these were positive on microbiological culture, with Staphylococcus being the microorganism of highest occurrence. Animal welfare was compromised by the incidence of subclinical mastitis, infestation by flies and the absence of prophylactic measures. It is concluded that the use of good agricultural practices, associated with animal welfare, is a suitable and important tool in the identification of bovine mastitis, and that there is a lack of information and knowledge regarding best practices, especially in relation to preventive management.

Introduction
The world chain of animal protein production has grown constantly, in parallel with consumer demands for products that fit their needs and social alignment (Alexandrino, et al., 2020). Dairy cattle are present on many properties in rural settlements, usually being the main source of income for small producers. However, many producers face difficulty in remaining in the activity because their herds are characterized by low productive potential, with serious problems related to milk quality and herd health. In contrast, more conscious consumers are increasingly demanding both product quality and production standards aligned with ethics (Molento, 2005). It is not feasible to transfer animal welfare assessment protocols developed for intensive systems to extensive systems, or from rangeland- to pasture-based cattle, because each system needs a different protocol (Kaurivi, et al., 2020). From the moment that we recognize animal sentience and acknowledge that the interactions of animals with the environment interfere with their productive profitability, issues related to sanitary management are raised. Exact knowledge of the factors that intervene in the animal's productive life, such as the stress determined by environmental variations, makes it possible to adapt the management. The concern for animal welfare has been receiving prominence on the world stage, including in Brazil, where regulations promote actions that improve the quality of life of animals, such as Normative Instruction No. 56, created by the Ministry of Agriculture, Livestock and Supply, which establishes general procedures and recommendations of good welfare practices for farm animals of economic interest (Brazil, 2008). The actions of PC-UFAL seek to understand the human-animal-environment relationship in the One Health panorama, in addition to guiding interventions in three main spheres of well-being
(Molento, 2007; Escodro et al., 2012): physical, evaluating whether the animal is capable of normal organic growth and functioning, good health and maintenance of adaptation to adult life; behavioral, evaluating whether the environment is consistent with the one in which the species evolved and adapted; and mental, assessing whether the animal lives with a sense of mental satisfaction, or is at least free of mental stress (Ribeiro, et al., 2020).

One of the alternatives to ensure the quality and productivity of milk and good conditions of animal welfare is the implementation of good agricultural practices (BPA). The BPA consist of a set of rules and procedures aimed at ensuring the health, nutrition (food and water), welfare, environment and safety of the animals and the hygiene of milking; however, for such practices to be adopted, it is necessary to raise dairy farmers' awareness of the importance of the BPA (FAO, 2013). Properly planned assessments can identify risk factors for poor welfare, aid in the development of interventions, and be used to monitor and evaluate changes in practice (Dunston-Clarke, et al., 2020; Fraser, 2006; Knierim & Winckler, 2009). Mastitis is one of the main causes of low quality and productivity of milk (Ballou, et al., 1995). In addition to causing economic losses, especially through the reduction of milk production and quality, mastitis brings serious problems for animal welfare, since it causes pain and discomfort to the affected animals (Bond, et al., 2012). The present work aimed to verify whether the adoption of good agricultural practices is a good tool for the diagnosis of bovine mastitis and to raise the awareness of small producers in rural settlements of the importance of implementing good agricultural practices related to animal welfare.

Welfare Assessment and Data Collection
The study was conducted on four dairy properties (P1, P2, P3 and P4) belonging to rural settlements in the northwestern region of São Paulo State. 50% of the lactating cows on each property were selected. The sample group consisted of 20 Girolando females (four from P1, seven from P2, five from P3 and four from P4), chosen at random, primiparous and multiparous, at different stages of lactation (early, intermediate and late), with and without mastitis, totalling 79 mammary quarters, as one teat was nonfunctional. At the time of the visit, the producers received guidance on the importance of good agricultural practices related to animal welfare and were given tools made from PET bottles: one for the dark-background mug test and another containing antiseptic solution to be used on the teats during pre- and post-dipping. Folders and explanatory banners were distributed containing content on good agricultural welfare practices, related mainly to calves, milking and animal health management. As a suggestion, it was advised that the banner detailing best milking practice procedures be fixed in the milking pens, facilitating the application of the procedures in the producer's routine. To characterize the production system and the environment of the dairy properties, a structured questionnaire-guided interview was applied concerning the number of animals, breed, milk production, nutritional management, sanitary and hygienic management, and measures for the control and prevention of mastitis. Physical examination of the mammary glands and the dark-background mug test were performed for the detection of clinical mastitis, and the California Mastitis Test (CMT) was used for the diagnosis of subclinical mastitis.
After asepsis of the teats with 70% alcohol, milk samples representing each mammary quarter were collected, forming a pool per animal, which was immediately submitted to somatic cell count (SCC) evaluation with the portable DeLaval Cell Counter® direct test. The samples were seeded on blood agar culture media enriched with 5% defibrinated horse blood and on MacConkey agar, and immediately incubated in a bacteriological incubator at 37 °C for a period of not less than 72 hours, being monitored every 24 hours. At the end of the incubation period, cultures were considered positive when there was growth of three or more colonies of the same microorganism, and contaminated when three or more different agents were present (NMC, 1999). The cultures were identified according to their macromorphological and micromorphological characteristics and Gram stain. The microorganisms were then identified with biochemical tests according to Quinn, et al. (1994). Immediately after collecting the milk samples, 5 ml of blood was obtained by puncture of the jugular vein with the aid of Vacutainer® tubes containing anticoagulant (EDTA, ethylenediaminetetraacetic acid). Feces were collected directly from the rectal ampulla of the cows with the help of a palpation glove. Stool samples were processed following the technique of Gordon and Whitlock (1939), modified by Ueno and Gonçalves (1998), for the count of eggs per gram of feces (EPG) of gastrointestinal nematodes.

Data Analysis
The cows were subjected to visual and tactile evaluation of body reserves at specific points of the animal's body, according to the methodology proposed by Wildman et al. (1982), using a scale from 1 (very thin) to 5 (very fat), with increments of 0.50 points. The variables were tested by the Fisher exact test and the Kruskal-Wallis test, adopting a significance level of 5%.

Ethics and Biosafety Committee
This research was approved by the Ethics Committee on the Use of Animals, under protocol number 2013-04418.

Results and Discussion
Dairy farming is the main agricultural activity of the properties evaluated. As noted in this study, crossbred Gir x Holstein (Girolando) cows are widely used for milk production in Brazil due to their high capacity to adapt to the tropical climate and their satisfactory performance (Sharma, et al., 1996). With respect to the type of feeding, it was found that most properties used pasture as the main source, and supplementation with roughage was provided in the most critical periods of the year, when there was a shortage of pasture, with sugar cane, briquette and elephant grass (Pennisetum purpureum) being the feeds of choice. Property P1 did not provide concentrate to the cows, while P2, P3 and P4 provided commercial concentrate in individual troughs during milking, to lactating cows only, according to the quantity of milk produced. The animals had ad libitum access to water from well and dam sources. Limitations on water consumption and on the availability of shade can compromise the level of animal welfare, especially in tropical climates (Armstrong, 1994). On all properties, manual milking was performed once daily in the morning, with the calves at the foot of the cows. The calves were brought in before the beginning of milking to stimulate milk let-down, remaining tied next to their mothers, and at the end of milking they were released to pasture. The milk was packaged in milk churns and, immediately after milking, taken to the cooling tank.
The milk was sold to a dairy in the region. In general, the facilities of the properties were precarious and of low technological level. On all properties, mud accumulation was observed in the vicinity of the milking installations, and on property P3 mud accumulation was also observed in the milking barn. The presence of accumulated mud in pens can cause discomfort, hygiene problems and difficulty in the movement of the animals, as well as influence the health of the animals that are more exposed, especially with regard to environmental mastitis (Samant, et al., 2014). Simple procedures for disinfecting the teats before milking were observed only on P2, and no producer dipped the teats in disinfectant solution after milking. According to Mandal et al. (2011), simply disinfecting the teats before milking with a 750 ppm chlorinated solution can produce a 91.3% reduction in coliforms and an 85.3% reduction in coagulase-positive Staphylococcus. Only property P2 reported using the dark-background mug; however, it was not used every day. No producer reported using the CMT, and it is noteworthy that three of these producers were unaware of the practice of the CMT. The CMT is an important tool to diagnose the health conditions of the dairy herd and allows producers to take preventive measures for more effective control of subclinical mastitis (Brito, et al., 1997). The supply of feed after milking is a management practice that stimulates the animals to remain standing until the closing of the teat sphincter, reducing cases of mastitis caused by environmental pathogens (Costa, et al., 1998). The absence of this practice probably favoured the animals lying down immediately after milking, as observed on property P4. Intramammary antibiotic treatment of dry cows was carried out on only a single property (P4). According to Makovec & Ruegg (2003), such a procedure can eliminate 80% of mastitis cases at drying off and prevent up to 80% of new infections during the dry period. The presence of ectoparasites was observed, mainly ticks and horn flies, and producers reported performing tactical control of ectoparasites only when infestation levels were considered high by the producers themselves. The presence of ticks reduces the welfare of cows, causing skin irritation, rashes and anemia, and ticks can transmit the parasites that cause cattle tick fever (tristeza parasitária bovina) and piroplasmosis (Furlong & Sales, 2007). The presence of horn flies is related to the transmission of diseases and, especially, to the stress caused to the animals in their attempts to get rid of these parasites, resulting in productivity losses (Bianchin & Adams, 2002). Despite the limited adoption of good milking practices on the evaluated properties, no cases of clinical mastitis were verified. All evaluated mammary quarters presented negative results on the dark-background mug test, and no evident changes in the mammary glands were observed in any of the cows. Of the 79 mammary quarters evaluated with the CMT, 37 (46.84%) presented a positive reaction (Table 1). The stage of lactation (p = 0.549) and the parity (p = 0.416) did not significantly influence the CMT results. Cows in late lactation tend to have a higher proportion of positive CMT scores, probably due to lower milk production and greater desquamation of epithelial tissue (Fagan, et al., 2008).
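The association tests reported above (CMT reaction versus stage of lactation and parity, evaluated with the Fisher exact test at a 5% significance level) can be illustrated with a short sketch. The 2 x 2 contingency table below uses hypothetical counts, not the data of this study, and Python with SciPy is assumed; it only demonstrates how such a test could be computed.

```python
# Illustrative sketch only: hypothetical counts, not the data of this study.
# Demonstrates a Fisher exact test of CMT reaction (positive/negative)
# against parity (primiparous/multiparous) at a 5% significance level.
from scipy.stats import fisher_exact

# Rows: primiparous, multiparous; columns: CMT positive, CMT negative
contingency = [[10, 14],
               [27, 28]]

odds_ratio, p_value = fisher_exact(contingency)
print(f"Odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")

alpha = 0.05
if p_value < alpha:
    print("Parity and CMT reaction are associated at the 5% level.")
else:
    print("No significant association between parity and CMT reaction.")
```

The same approach extends to the other categorical variables collected in the questionnaire, with the Kruskal-Wallis test (scipy.stats.kruskal) used for continuous measures such as the somatic cell count compared across properties.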
With respect to the somatic cell count (SCC), although property P3 showed higher median values than properties P1, P2 and P4, there was no significant difference in SCC among the cows of the evaluated properties (p = 0.3216). Considering the SCC limit of 5 × 10^5 cells/mL established by Normative Instruction 62 of the Ministry of Agriculture, Livestock and Supply (MAPA), in force since July 2014, only property P3 would not comply with the legislation. According to Brito et al. (1999), antisepsis procedures before and after milking, together with the adoption of a milking line, have been identified as among the main factors contributing to the reduction of SCC. In the microbiological examination of the 79 mammary quarters, contagious pathogens were isolated most frequently, with predominance of bacteria of the genus Staphylococcus, followed by Corynebacterium spp. (Table 1). Staphylococcus aureus is recognized as the primary etiological agent in cases of subclinical mastitis in dairy herds (Ferreira et al., 2006). The high presence of bacteria of the genus Corynebacterium indicates the absence of, or flaws in, post-milking teat antisepsis (Brito et al., 1999). Considering the CMT and the microbiological examination together, of the 36 mammary quarters that reacted negatively to the CMT, bacteria were isolated from 12. The CMT is not sensitive enough to identify subclinical mastitis in its early stages, before the cell count of infected animals has increased (Orange & Machado, 1994). Regarding body condition score (BCS), no extremely thin or obese cows were identified; the animals showed good nutritional status, with BCS between 2.5 and 3.5. Cows at the extremes of BCS are at greater risk of metabolic problems and diseases, reduced milk production and difficulty at calving (Ferguson et al., 1994). In the parasitological examination, all samples evaluated were negative in the count of nematode eggs per gram of feces. Adult cattle acquire immunity to helminths at around 18 months of age and shed few parasite eggs in the feces, which is consistent with the low helminth infestation found in the cows of this study (Baran et al., 2013). The hemogram analysis showed alterations indicative of infectious and parasitic diseases. The main challenge remains overcoming the distrust of the settlers. However, only when people become aware of the direct relationship between welfare and productivity will there be a lasting contribution to improving the quality of life of farm animals. All producers were aware of the importance of the animals' quality of life, but providing such practices is very difficult, since the socioeconomic and health conditions of the families are precarious. Conclusion It can be concluded that the agricultural practices in use are adequate for identifying bovine mastitis; however, there is a lack of knowledge of good animal welfare practices, mainly related to preventive management. Raising the producers' awareness of the implementation of good agricultural practices in dairy farming proved to be fundamental to improving animal welfare, productivity and milk quality, thereby promoting collective well-being.
In this context, further studies with the same objective should be carried out.
Molecular Rotors as intracellular probes of Red Blood Cell stiffness The deformability of red blood cells is an essential parameter that controls the rheology of blood as well as its circulation in the body. Characterizing the rigidity of the cells and their heterogeneity in a blood sample is thus a key point in the understanding of occlusive phenomena, particularly in the case of erythrocytic diseases in which healthy cells coexist with pathological cells. However, measuring intracellular rheology in small biological compartments requires the development of specific techniques. We propose a technique based on molecular rotors-viscosity-sensitive fluorescent probes-to evaluate the above key point. DASPI molecular rotor has been identified with spectral fluorescence properties decoupled from those of hemoglobin, the main component of the cytosol. After validation of the rotor as a viscosity probe in model fluids, we showed by confocal microscopy that, in addition to binding to the membrane, the rotor penetrates spontaneously and uniformly into red blood cells. Experiments conducted on temperature-stiffened red blood cells show that molecular rotors can probe their overall rigidity. A simple model allowed us to separate the contribution of the cytosol from that of the membrane, providing a quantification of cytosol rigidification with temperature, consistent with independent measurements of the viscosity of hemoglobin solutions. Our experiments demonstrate that the rotor can be used to quantify the intracellular rheology of red blood cells at the cellular level, as well as the heterogeneity of this stiffness in a blood sample. This technique opens up new possibilities for biomedical applications, diagnosis and disease monitoring. Introduction Red blood cells (RBCs) are the main cellular component of blood and perform the essential function of carrying oxygen through the body. These cells essentially consist of a deformable membrane containing a hemoglobin solution considered as a Newtonian fluid [10] in the healthy case where the mean corpuscular hemoglobin concentration (MHC) is approximately 32 g/dL. The physical properties of the components -membrane bending and shear resistance, cytosol viscosity -condition the deformability of the red blood cells and their flow dynamics. This is crucial for the proper functioning of the circulatory system, particularly for microcirculation where RBCs have to squeeze through complex networks of narrow capillaries. The physical properties of red blood cells vary over their lifetime (∼ 100 − 120 days). Cells become denser due to dehydration, resulting in a 25 − 30% increase in the concentration of corpuscular hemoglobin between the more and less dense fractions [7,8]. Since the viscosity of hemoglobin so-lutions varies sharply between concentrations of 30 and 40 g/dL [56,9], the densification of red blood cells leads to an increase in their internal viscosity from about 5 to 20 mPa.s, resulting in significant heterogeneity of blood samples [66,20]. This is not without consequences. It is well documented in the literature on erythrocyte suspension flows and rheology that the dynamics of cells (or more generally capsules and vesicles) in simple and complex flows strongly depends on the viscosity ratio between the internal fluid (cytosol) and the suspending fluid (plasma or buffer) [15,19,48,50,42,57]. In addition, the reduced deformability of senescent cells affects their elimination by mechanical filtration through the spleen. 
Several pathological conditions also lead to alterations in the deformability of red blood cells. This is the case of sickle cell disease, where red blood cells become rigid under the effect of deoxygenation, with serious consequences on the microcirculation in small capillaries, which can lead to occlusion phenomena and vaso-occlusive crises. All this indicates that the deformability and stiffness of red blood cells is a key haemorheological pa-rameter for diagnosis [5,14]. It is likely that jamming and occlusion phenomena are strongly influenced by the presence and quantity of rigid cells rather than by the average rigidity of the sample: a small fraction of rigid red blood cells (i.e. irreversibly sickled cells in the case of sickle cell disease) may be sufficient to nucleate occlusions despite an apparently low average rigidity value. The details of the distribution of mechanical properties of RBCs -which highlights the heterogeneity of a given blood sample -is therefore an information that could be useful in assessing the risk of occlusion. A number of techniques have been proposed to quantify the deformability of red blood cells. Atomic force microscopy (AFM) has been used to characterize the stiffness of RBCs at the cellular level, in healthy and pathological cases [46,47]. The technique can reveal significant variations in cell elasticity in a research context, but requires heavy equipment, and is not suitable for statistical measurements of large-scale samples in a clinical practice. Ektacytometry, developed in the 1980's, is based on light diffraction through a blood sample in shear flow, to measure a RBC elongation index [6], with the intrinsic limitation of providing only the average deformability of the red blood cells in the sample and challenging interpretation of the results [54,52,60,55]. In recent years, microfluidic techniques have been proposed to characterize deformability at the cellular level [23,35]. Although they partly answer the question of the dispersion of RBC properties in a sample, they require heavy image processing and shape analysis giving indirect information on the mechanical properties. Other techniques have focused specifically on membrane shear and bending moduli, including pipette aspiration [13], electric field deformation [17], optical tweezers [58,12], and optical microrheology [53,3,62]. It appears that there is currently a need for sub-cellular rheology techniques allowing quantitative measurements of the mechanical properties of different parts of the cell and operating on large samples. As further evidence, in the particular case of RBCs, measurements of the viscosity of the internal hemoglobin solution have mostly been performed by extracting the cell's hemoglobin content by lysis followed by conventional rheology techniques on relatively large samples [9,51,31]. Molecular rotors (MRs) have recently proven their usefulness in measuring the rheological properties of intracellular media for the purpose of characterization and diagnostics [37,36,67]. This is a group of fluorescent molecules having the ability to form twisted intra-molecular charge transfer (TICT) states, upon photoexcitation. Relaxation from the TICT conformation occurs either by emission of a red-shifted emission band or by non-radiative relaxation, with a lower excited-state/ground-state energy gap than the planar, locally excited (LE) state [25]. 
For viscosity measurements, molecular rotors with a single emission band are most preferred because emission from the planar (nontwisted) LE conformation is highly sensitive towards viscosity [43,40,25], and relatively insensitive towards the solvent polarity [2,39,24]. It has been demonstrated in a wide range of viscosities and in both polar and non polar fluids, that the quantum yield φ of the LE peak increases with viscosity η, according the so-called Förster-Hoffmann equation, log φ = C+b log η, where C and b are solvent and dye dependent constants [43,22,40,32,24,63]. These rheology dependent properties have been used for real-time monitoring of aggregation and polymerization processes [44], to report aggregation and protein conformational changes [28,38], and study phospholipid bilayers and cell membranes [39,21,61]. Here we study the ability of a selected molecular rotor to quantify the heterogeneity of a blood sample, and to characterize the rigidity and intracellular rheology of red blood cells. After characterizing its spectral properties, which should not interfere with those of hemoglobin, and its effectiveness as a viscosity probe in simple fluids, we demonstrate that, in addition to attaching to the membrane, the rotor penetrates spontaneously and uniformly into the red blood cells. We show that it is possible to distinguish red blood cells with different levels of deformability in a blood sample, and to quantify the heterogeneity of this sample. The evolution of the average stiffness of a blood sample and the distribution of stiffness in the sample are measured by varying the temperature. The fluorescence of the cytosol alone is estimated at different temperatures using a simple geometrical model of RBC allowing the separation of cytosol and membrane contributions. The estimated rigidification of cytosol with temperature is consis-tent with independent measurements of the viscosity of hemoglobin solutions. This opens up opportunities for the development of rapid and inexpensive hematological tests for diagnosis and clinical monitoring of patients. Red Blood Cell samples and hemoglobin solutions Preparation and stiffening of red blood cells. Blood samples from healthy donors were provided by Etablissement Français du Sang (EFS Rhône-Alpes and EFSÎle-de-France). Red Blood Cells were washed three times by dilution in phosphate buffer saline (PBS) and centrifugation at 1, 000 × g. They were resuspended in PBS to reach a hematocrit value of 0.25%. A 40 mM stock solution of DASPI (trans-4-[4-(Dimethylamino)styryl]-1methylpyridinium iodide, ref. 336408 from Sigma-Aldrich) in DMSO was prepared and then diluted in PBS to form two working solutions with concentrations of 1 mM and 10 mM. The above RBC suspensions were stained by adding 10% (v/v) of the DASPI solution to reach a final DASPI concentration of 0.1 mM or 1 mM. They were incubated for 2 hours at room temperature to allow DASPI to diffuse through the membranes, into the cytosol. Modulation of RBC mechanical properties was induced by temperature variations. RBCs were studied at 18 • C, 24 • C, 30 • C, 37 • C and 43 • C. Hemoglobin solutions. Hemoglobin solutions were prepared from blood samples. Red blood cells were washed three times in PBS buffer. After a final centrifugation at 1, 200 × g for 10 min, all supernatant was removed. The RBC pellet was exposed to 3 freeze-thaw cycles (−80 • C to 37 • C) to lyse red blood cell membranes. 
Membrane residues were then extracted by addition of CCl4 to the lysate (1/3 of the volume), strong agitation and ultra-centrifugation at 10,000 × g for 15 min. Hemoglobin solutions of various concentrations were obtained by dilution in PBS or by concentration using centrifugal ultrafiltration units (Vivaspin 2 PES 30 kDa from Sartorius). The hemoglobin concentration of the final solutions was measured by colorimetry (MAK 115 Hemoglobin Assay Kit from Sigma-Aldrich). The pH of the hemoglobin solutions was constant at 7 ± 0.2 for all concentrations. Preparation of model fluids for viscosity measurements. Water/glycerol and glycerol/ethylene glycol mixtures were prepared at different glycerol concentrations (0% v/v to 70% v/v). DASPI (8 mM stock solution in PBS) was added to reach a final concentration of 0.01 mM, 0.1 mM or 1 mM. Sample temperature control For the microrheology and fluorescence microscopy experiments (sections 3.1.2 and 3.2.2), the samples were observed on an inverted Leica DMI 8 microscope. The temperature of the samples was controlled and varied by an objective heater through the immersion oil in contact with the sample (Bioptechs Inc., Butler, PA, USA), with an accuracy of ±0.1 °C. Particle tracking microrheology The viscosity of the solutions (water/glycerol, glycerol/ethylene glycol and hemoglobin) was measured by particle tracking microrheology. In the experiments described in section 3.1.2, the viscosity of the solutions was measured in situ by microrheology before proceeding with the fluorescence measurements. The technique is based on the Brownian motion of microscopic probe particles immersed in the fluid. Microrheology experiments are generally carried out on small volumes of about 1 µL. The technique is of interest for biological samples, where minute volumes can be studied [1,49], and also for heterogeneous samples, where it gives access to the amplitude of the heterogeneities in the sample [11]. Calibrated polystyrene beads (0.994 µm in diameter) were dispersed in the solutions to be studied at a volume fraction of less than 1%. The thermal motion of the tracers was recorded for 20 s at a rate of 100 Hz with an OrcaFlash v2+ fast sCMOS camera. This camera is mounted on a Leica DMI 8 inverted bright-field microscope, with an oil immersion objective at 100X magnification (NA = 1.3, depth of field: ∼ 200 nm). Custom image analysis software allows us to track the x and y positions of any bead close to the objective focal plane. For a reliable analysis, special attention was paid to recording only beads far from the glass slide surfaces. For each tracer, the time-averaged mean-squared displacement (MSD) was calculated as ⟨Δr²(t)⟩ = ⟨[x(t′ + t) − x(t′)]² + [y(t′ + t) − y(t′)]²⟩, where the average is taken over the initial times t′ and t is the lag time. For a purely viscous fluid, the ensemble-averaged MSD increases linearly with the lag time t, according to ⟨Δr²(t)⟩ = 4Dt (in two dimensions), where D is the diffusion coefficient.
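As a rough illustration of how such tracer trajectories can be reduced to a viscosity estimate (using the Stokes–Einstein relation introduced just below), the sketch assumes the x, y positions have already been extracted by the tracking software; the frame rate and bead size follow the values quoted above, while everything else is a placeholder.

```python
# Minimal sketch of the particle-tracking analysis: time-averaged MSD of one
# tracer, linear fit of the ensemble-averaged MSD, and viscosity via the
# Stokes-Einstein relation. Trajectories are assumed to be arrays of x, y
# positions in meters, sampled at 100 Hz.
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant (J/K)
DT = 1.0 / 100.0            # frame interval (s), 100 Hz acquisition
BEAD_RADIUS = 0.994e-6 / 2  # bead radius (m), 0.994 um diameter

def time_averaged_msd(x, y, max_lag):
    """<[x(t'+t)-x(t')]^2 + [y(t'+t)-y(t')]^2> averaged over t', for each lag t."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        msd[lag - 1] = np.mean(dx**2 + dy**2)
    return msd

def viscosity_from_trajectories(trajectories, temperature_k, max_lag=50):
    """Fit <dr^2> = 4*D*t on the ensemble-averaged MSD, then apply Stokes-Einstein."""
    msds = np.array([time_averaged_msd(x, y, max_lag) for x, y in trajectories])
    lags = np.arange(1, max_lag + 1) * DT
    ensemble_msd = msds.mean(axis=0)
    slope = np.polyfit(lags, ensemble_msd, 1)[0]   # slope = 4*D for 2D diffusion
    diffusion = slope / 4.0
    return K_B * temperature_k / (6.0 * np.pi * BEAD_RADIUS * diffusion)
```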
In this case, the viscosity η can be estimated with the Stokes–Einstein relation, D = kT/(6πηR), with R the radius of the bead and kT the thermal energy [16]. The glycerol/water and ethylene glycol/glycerol solutions studied in section 3.1.2 were homogeneous and purely viscous, allowing a simple determination of the sample viscosity. Fluorescence microscopy and segmentation of red blood cells For fluorescence measurements of the DASPI molecular rotor in the solutions (water/glycerol, glycerol/ethylene glycol and hemoglobin) and in RBC suspensions, samples were placed between a glass slide and a coverslip separated by a 250 µm thick spacer. For RBCs, the glass slides and coverslips were pre-treated with a plasma cleaner to prevent the formation of echinocytes. Observation of the samples was carried out by fluorescence microscopy (Leica DMI 8, Leica Microsystems GmbH, Wetzlar, Germany) at 100X magnification, with excitation between 460 and 500 nm and detection between 565 and 625 nm. The images were recorded with a fast sCMOS camera (OrcaFlash4.0 v2+, Hamamatsu Photonics France S.A.R.L., Massy, France). The intensity measurements of the fluorescence level of the images were performed using ImageJ software (ImageJ, Rasband, W.S., U.S. National Institutes of Health, Bethesda, Maryland, USA, https://imagej.nih.gov/ij/, 1997-2018). A macro program allows us to detect and segment RBCs on an image and to measure their average fluorescence intensity, their surface area and their circularity. The circularity parameter is adjusted to select RBCs according to their orientation (front or side view) and to exclude echinocytes. In the experiments described in section 3.2.2, the segmentation is designed to measure the intensity on an area slightly smaller than the actual RBC surface area, in order to eliminate membrane contributions at the edges. During the analysis, the background fluorescence level is subtracted from the total image. This normalization allows us to obtain the signal actually produced by the red blood cells and to compare the realizations obtained when the temperature is varied. Confocal microscopy The penetration and localization of the DASPI molecular rotor in red blood cells was characterized with a Leica TCS SP8 confocal microscope equipped with a 40X oil-immersion objective (1.3 numerical aperture). Diluted red blood cell samples (hematocrit 0.5%), incubated with DASPI, were placed in a PDMS microchannel on the microscope stage. Confocal slices were acquired with a 0.5 µm step in the z-direction to measure the fluorescence signal at different levels in RBCs. The excitation wavelength was 488 nm and the depth of field was 600 nm for a pinhole size of 63.5 µm. The laser power settings and detection gain were kept constant between experiments for comparison purposes. Image processing allowed us to extract fluorescence intensity profiles, averaged over a region of interest in the (x, y) plane, for different z. Spectrophotometry Fluorescence Excitation Emission Matrices (EEMs) of DASPI in various solvents (DMSO, glycerol, ethylene glycol) and in hemoglobin solutions were measured with a spectrophotometer at room temperature. The excitation wavelength was set between 350 nm and 650 nm and the emission spectrum was scanned between 450 nm and 750 nm. The excitation and emission slits were 5 nm, the scan speed was 600 nm/min and the voltage of the PMT detector was 600 V. Absorption spectra were measured with a Varian Cary 50 UV-Vis spectrophotometer (Agilent) at room temperature.
All experiments were performed in 2 mL PS spectrophotometry cells (LP ITALIANA SPA). Hemoglobin was highly diluted to form a solution of reduced absorbance (1000× dilution, ∼ 33 mg/dL concentration). The DASPI concentration was set to 3.3 µM and the solutions were prepared from an 8 mM DASPI stock solution. Spectral properties We selected the DASPI molecular rotor (trans-4-[4-(dimethylamino)styryl]-1-methylpyridinium iodide, 336408 from Aldrich) from the stilbene group [26] for its fluorescence characteristics. These are decoupled from those of hemoglobin, which is known to have a broad absorption spectrum and intrinsic fluorescence properties in the ultraviolet [64]. Fluorescence Excitation Emission Matrices of DASPI were measured in four selected solvents. Fig. 1-(a,b,c,e) shows that the fluorescence spectra of DASPI are similar from one solvent to another. In the investigated range, DASPI is a single-band emission molecular rotor, with fluorescence excitation and emission peaks around 480 nm and 600 nm, respectively. More importantly, the excitation-emission spectrum of DASPI in hemoglobin (Fig. 1-(d,e)) is outside the intrinsic fluorescence spectrum of hemoglobin, known to lie in the ultraviolet with excitation maxima below 400 nm and emission below 460 nm [64]. The fluorescence EEM of hemoglobin alone was also measured to demonstrate that there is no significant contribution of hemoglobin in the DASPI spectral range (spectrum (f)). From the EEM spectra (a,b,c) in Fig. 1, we can observe that the rotor has a fluorescence emission intensity which increases with viscosity. In hemoglobin solutions, however (spectra (d,e)), the emission peak value is lower than in DMSO (spectrum (a)) for similar viscosities, due to the high absorbance of hemoglobin and the behaviour of the rotor in an aqueous protein medium. This absorbance remains non-negligible even though the emission of DASPI lies around a local absorption minimum of hemoglobin, at ∼ 600 nm, as shown in Fig. 2. The absorbance spectra of Fig. 2 also attest to (i) the presence of the compounds (Hb, DASPI), which were highly diluted to minimize the absorbance of hemoglobin, and (ii) the influence of these compounds in the spectral range of DASPI. Förster-Hoffmann relations for viscosity A second step was to evaluate the sensitivity of DASPI to the local viscosity of the surrounding medium. In general, the photophysical sensitivity of fluorescent MRs depends on their interactions with the environment, which are influenced by viscosity, polarity and solubility [24,26,30]. Temperature is also a parameter that can affect the rate of TICT formation, and thus influence the parameter C of the Förster-Hoffmann equation [40,39,32,27]. The emission intensity of DASPI was measured by fluorescence microscopy (excitation at 480 ± 20 nm, emission at 595 ± 30 nm) in ethylene glycol/glycerol solutions and aqueous glycerol solutions whose viscosity was measured in situ by microrheology, just before the fluorescence intensity measurements. We measured the emission intensity of DASPI at different temperatures between 16 and 20 °C and at different DASPI concentrations, from 0.01 mM to 1 mM (Fig. 3-(a,b)). The relative concentration of the two solvents in each solution was modified to extend the achievable viscosity range, typically between 2 and 200 mPa.s.
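Before discussing the choice of solvent systems, here is a minimal sketch of how an intensity–viscosity calibration of this kind can be reduced to the Förster–Hoffmann parameters C and b; the data points are purely illustrative and do not reproduce the measurements of Fig. 3.

```python
# Minimal sketch: Forster-Hoffmann calibration as a linear fit in log-log space.
# The viscosity/intensity pairs below are made up for illustration only.
import numpy as np

viscosity = np.array([2.0, 5.0, 12.0, 30.0, 80.0, 200.0])   # mPa.s
intensity = np.array([1.1, 2.0, 3.6, 6.8, 13.0, 24.0])      # arbitrary units

# log10(I) = C + b * log10(eta)
b, C = np.polyfit(np.log10(viscosity), np.log10(intensity), 1)
print(f"Forster-Hoffmann exponent b = {b:.2f}, intercept C = {C:.2f}")

def viscosity_from_intensity(i):
    """Invert the calibration to estimate a local viscosity from an intensity reading."""
    return 10 ** ((np.log10(i) - C) / b)
```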
Ethylene glycol/glycerol solutions were chosen as one of the study systems because their polarity remains approximately constant when the relative concentration of the solvents, and therefore the viscosity, varies, as the polarities of the two liquids (glycerol and ethylene glycol) are similar [24]. In aqueous solutions of glycerol, however, the polarity of the solvent decreases with increasing viscosity, based on the relative dielectric constants of water and glycerol (water: 80.1, glycerol: 42.5, ethylene glycol: 37.7) [24]. From our measurements, we deduce that temperature has little influence on the C parameter, and thus on the signal emitted by DASPI, as highlighted specifically at the 0.01 mM rotor concentration, where all data points coming from different temperatures fall on the same Förster-Hoffmann power-law curve, consistent with a previous study [30]. Similarly, polarity has little effect on the rotor response when comparing ethylene glycol/glycerol solutions of constant polarity and aqueous glycerol solutions of variable polarity, which is also consistent with previous studies in which molecular rotors with a single emission band showed an emission intensity strongly dependent on viscosity but not on solvent polarity [39,24,2]. At low viscosity, around 1–2 mPa.s in the aqueous glycerol solutions, a deviation from the Förster-Hoffmann law is observed, as already reported in previous studies [26,24]. Molecular rotor DASPI in red blood cells 3.2.1. Penetration in healthy red blood cells The penetration of the DASPI molecular rotor into healthy red blood cells was studied by confocal microscopy. Images of cells with different orientations were taken during their slow sedimentation in the observation microchannel. Fig. 4 shows stacks of slices for three different RBCs, corresponding to different focus planes in the z-direction. We observed that the membrane fluorescence signal is slightly stronger, most likely due to the interaction between the molecular rotor and the membrane components. Inside the cell (cytosol), DASPI fluorescence is homogeneous in the observation plane, regardless of the z-position and orientation of the cell. Confocal microscopy allows us to unambiguously separate the fluorescence response of the membrane and of the cytosol. Indeed, as soon as the upper and lower membranes are out of the depth of field (∼ 600 nm), they no longer contribute to the field of view. This indicates that the fluorescence signal observed inside the cell, far enough from the membrane, comes only from the cytosol, without any signal from the membrane. The molecular rotor is thus uniformly distributed throughout the volume of the red blood cells. Further evidence of the uniform and effective penetration of the molecular rotors into red blood cells is provided by the intensity profiles of these red blood cells in the z direction. By scanning different parts of the red blood cells, we show that the DASPI fluorescence signal is present throughout the entire red blood cell, over a distance that corresponds approximately (given the depth of field) to the known typical dimensions of the red blood cell: diameter 8 µm (Fig. 4b), thickness at the edge 3 µm (Fig. 4a), thickness in the biconcave zone ∼ 1 µm (Fig. 4a). Sensitivity of DASPI fluorescence to red blood cell stiffening Lowering the temperature is known to increase the rigidity of red blood cells at two levels: the viscosity of hemoglobin, the main component of the cytosol, increases [4,34] and the membranes become stiffer [41,68,65].
In this work, we use temperature as a parameter to control the stiffness of RBCs. It was varied between 18 °C and 43 °C. Independent measurements of the viscosity of hemoglobin solutions as a function of temperature are shown in Fig. 5. Viscosity was measured by microrheology in hemoglobin solutions with concentrations between 25.5 g/dL and 38.5 g/dL. Our experiments show that hemoglobin behaves like a purely viscous fluid (not shown), that the solutions are homogeneous (not shown), and that the viscosity decreases with temperature at fixed concentration and increases with concentration at fixed temperature. For a given concentration range, viscosity values extend over a wider range at low temperature than at high temperature. This evolution of viscosity with temperature is consistent with the literature [4,34]. Fig. 6 shows images of red blood cells at different temperatures (Fig. 6-a) as well as the intensity profiles of the cells labelled with a star (Fig. 6-b). The intensity profiles all show homogeneous backgrounds, sharp peaks at the edges and a plateau inside the cell. Both the background (i.e. PBS solution) and the RBC profiles increase as the temperature decreases. Since the DASPI signal does not depend on temperature in the studied range (section 3.1.2), this increase in the peak and plateau fluorescence when temperature decreases demonstrates that the fluorescence signal of DASPI is sensitive to, and increases with, the rigidity of RBCs. The observed peaks on the profiles are further evidence, along with the confocal microscopy observations, that the membrane-DASPI interactions result in a stronger fluorescence response than that of cytosol-DASPI. A closer look at the plateaus (for example, clearly visible here at 30 °C) reveals small height variations that reflect the cell geometry and suggest that the cytosol contribution is proportional to the local thickness of the cell. The viscosity-intensity dependence of the background was determined at each temperature using microrheology in the PBS surrounding the RBCs. Fig. 6-c shows that the rotor emission increases with the PBS viscosity and that the data can be adjusted with a Förster-Hoffmann power law of exponent x = 0.7. To ensure a statistically relevant sampling, we studied blood samples from five different donors with mean corpuscular hemoglobin concentrations (MCHC) between 30 g/dL and 35 g/dL. About four hundred cells were imaged at each temperature, and the average intensity was measured on each cell (see section 2.4). Fig. 7-a gathers the data together within a boxplot. As the temperature decreases, the median fluorescence intensity increases (by 35% in 1 mM DASPI solutions and 26% in 0.1 mM DASPI solutions) and the collected data are more scattered. The same pattern is observed when considering RBCs coming from a single donor (Fig. 7-b): the fluorescence intensity distribution of RBCs shifts towards high intensities and becomes wider at low temperatures.
This dispersion of red blood cell fluorescence intensities, measured at a given temperature, reflects the variability of their properties: variation in the mean corpuscular hemoglobin concentration from one donor to another (Fig. 7-a) and dispersion due to the age of the cells within a sample (Fig. 7-(a,b)). Variations in the physico-chemical properties of the membranes may also play a role in the dispersion of the data; the separation of the two contributions will be discussed in the following section. However, these experiments prove that the DASPI molecular rotor can quantify the rigidity of RBCs as well as the dispersion, or heterogeneity, of the mechanical properties of red blood cells in a sample. Estimation of cytosol fluorescence using a simple model of RBC In the fluorescence microscopy experiments (section 3.2.2), both the membrane and the cytosol contribute to the signal measured on RBCs. We propose here a simplified RBC model to separate the two contributions and to find the evolution of the cytosol fluorescence with temperature (i.e. stiffness). We show how geometrical parameters can influence this estimation and that we can, in principle, relate fluorescence measurements to the mechanical properties of the cytosol, i.e. the internal viscosity of hemoglobin. Simplified RBC model In section 3.2.2, all imaged RBCs were chosen with the same front-view orientation. In a simplified model, we consider RBCs with constant height in their center and curved edges, as represented in Fig. 8-(a,b,c). The camera being along the z axis, the fluorescence image obtained is a projection in the (x, y) plane, consisting of a matrix of square units of area h_p². In order to take into account the effect of diffraction, h_p is defined as the optical resolution of the microscope, i.e. the radius of the Airy disk: h_p = 1.22·λ/(2·NA) = 277 nm. We focus on the fluorescence intensity profile obtained along the x axis, averaged over the rectangle centered in y = 0 and of height h_p (Fig. 8-(b,d)). A typical intensity profile is shown in Fig. 8-d, with two peaks at the edges and a plateau at the center. Assuming that the absorbance of hemoglobin has little effect on the considered lengthscales, the intensity represented in Fig. 8-d is the integral of the intensities over z. In Fig. 8-c, the corresponding 2D section of the RBC in the (x, z) plane is illustrated. The two peaks of intensity M correspond to the curved edges, where the membrane dominates and projects a higher fluorescence intensity; the plateau of intensity P corresponds to the central part of the cell, which is, in this model, a cytosol of constant height with two membranes, top and bottom. Peak and plateau intensities (M, P) can both be measured on the intensity profiles (Fig. 8-d). The cytosol fluorescence cannot be measured independently. However, we show in Fig. 8-d that its contribution should be proportional to the local thickness of the cell. In the following, we estimate the fluorescence of the cytosol in the central part (C) and its contribution to the measurable plateau intensity (C/P). The relevant parameters are as follows:
- Geometrical parameters:
  - h_m: thickness of the membrane;
  - H_i: thickness of the cytosol along the z axis in the central part;
  - R: average radius of curvature at the edges of the RBC.
- Fluorescence intensities:
  - Φ_m (per m³): fluorescence intensity per unit volume of the membrane;
  - Φ_i (per m³): fluorescence intensity per unit volume of the cytosol.
Cytosol and plateau intensities (C and P) are related to the parameters defined above (equation 1). As h_p is large compared to the membrane thickness, the projected area of cytosol at the edges is approximately 2R·h_p (in orange in the zoom of Fig. 8-c). This leads to a formula for the peak intensity M (equation 2), and the cytosol intensity C then follows (equation 3). In practical cases h_p is small compared to H_i, so that the denominator of equation 3 cannot vanish. Equation 3 indicates that C depends on the geometrical parameters R and H_i and on the values M and P, which can be measured on the intensity profiles (Fig. 8). Using equation 1, C can also be expressed as a function of the ratio of elementary intensities Φ_m/Φ_i (equation 4). In the following, we will focus on C and on C/P, which represents the contribution of the cytosol to the plateau intensity P measured in the fluorescence microscopy experiments (section 3.2.2). Estimation of the cytosol fluorescence We estimate C and C/P as a function of temperature, by first defining acceptable ranges for the geometrical parameters R and H_i, then using equation 3 for a statistical sampling of about 100 RBCs, representative of the data dispersion in Fig. 7-a. We also focus on unravelling the dispersion of the data related to the intrinsic properties of RBCs (internal viscosity, membrane stiffness) and to the geometrical unknowns (R and H_i). A RBC is classically described in the literature as biconcave, with a maximum height at the edges of 2-3 µm, a radius of curvature of about half this maximum height, and a minimum height at the center of 0.8-1 µm [18]. Therefore, in our model, R is chosen in the range 1 µm ≤ R ≤ 1.5 µm and H_i in the range 1.3 µm ≤ H_i ≤ 1.9 µm, as an average between the maximum and minimum heights. By varying R and H_i within these ranges, we cover a large number of realistic shapes for a RBC. We denote by G1 (R = 1.5 µm, H_i = 1.3 µm) and G2 (R = 1 µm, H_i = 1.9 µm) the geometries leading to extreme values of C/P for a given M/P (see equation 3). G1 corresponds to a thin RBC with sharp edges, and G2 to a thick RBC with rounded edges. We also denote by G3 (R = 1.25 µm, H_i = 1.6 µm) the geometry corresponding to the mean values of the ranges considered. G1, G2 and G3 are shown in Fig. 9. In this section, we consider that, for a given RBC, R and H_i do not vary with temperature. This hypothesis is supported by recent observations by Jaferzadeh et al. [33] showing that the shape of the edges does not vary significantly with temperature and that, although fluctuations in the central part are more important at higher temperatures, the mean thickness does not vary. We evaluated the C/P and C values, separating or not the effects of RBC geometry and statistical sampling. As indicated above, approximately 100 RBCs per temperature were considered, and the corresponding pairs (M, P) were measured on each of their intensity profiles. In Fig. 9-b, the P values from section 3.2.2 were added for comparison. The first way of processing the data was to visualise the effect of the statistical sampling alone, when the geometry is set at its mean value G3. It delineates light-colored zones corresponding to the data dispersion of C/P and C (Fig. 9-(a,b)). The continuous lines within these zones correspond to their median values. A second way of processing the data was to quantify the effect of the possible geometries only. To do this, the geometry was set at the G1 or G2 extremes. Next, the resulting median values of C/P and C were calculated; they are represented by dashed lines. These dashed lines define dark bands that quantify the dispersion due to the possible geometries.
Finally, both effects, statistical sampling and geometry, were combined, and are shown in the intermediate color bands (all possible geometries for 100 RBCs). In all cases, the bands corresponding to C/P values increase roughly linearly with temperature. In parallel, the ones corresponding to C decrease with temperature, as evidenced in particular in Fig. 9-b, when the geometry is fixed at the extremes or its mean value (dashed and solid lines). This decrease in the cytosol signal with temperature, caused by the sharp decrease of P , is consistent with the decrease of the viscosity of hemoglobin solutions measured in Fig. 5. We focus now specifically on the dispersion of C and C/P with temperature. The dispersion of C/P remains constant when considering the combined effects of geometry and statistical sampling (intermediate color band). Conversely, the one of C decreases with temperature. Since its light band is larger than the dark band which is quasi constant in width, this decrease is mainly due to the effect of statistical sampling alone. It thus reflects the intrinsic heterogeneity of a blood sample and in particular the dispersion of internal viscosity. This is quite consistent with the decrease in the dispersion of hemoglobin viscosity values (independent measurements in section 3.2.2). While more information on the cell geometry would be needed to reduce uncertainties in the determination of C, this simple model shows that it is in principle possible to separate the contributions of the cytosol and membrane from simple fluorescence profile measurements, and that the derived values are consistent with independent viscosity measurements on hemoglobin solutions. The same study could be devoted to the membrane by considering (P − C)/2 = h 2 p h m Φ m which represents the intensity of a membrane element of surface area h 2 p and thickness h m (equation 1) 2 . We can however note that the increase in C/P reflects a decrease in the Φ m /Φ i ratio (equation 4), which means that the contribution of the membrane decreases much faster than that of the cytosol when temperature increases, a feature that remains to be explored in terms of rotormembrane interactions. Discussion and Conclusion In this paper, we study the ability of molecular rotors to characterize the intracellular rheology of red blood cells. We identified a molecular rotor (DASPI: trans-4-[4-(dimethylamino)-styryl]-1methylpyridiniiodide), that is suitable for the study of hemoglobin and intracellular medium of red blood cells. For this purpose, the excitationemission spectra of DASPI were measured in selected solutions: DMSO, ethylene glycol, glycerol and hemoglobin. In all fluids, DASPI presented a fluorescence excitation peak around 480 nm and emission peak around 600 nm. We have thus shown that the excitation-emission spectrum of DASPI does not overlap with the intrinsic fluorescent spectrum of hemoglobin in the ultraviolet domain [64]. Moreover, its emission spectrum is in the range where the absorbance of hemoglobin is lowest (∼ 600 nm, Fig. 2), which allows it to be used at moderate concentrations. Tested in simple solutions such as ethylene glycol/glycerol and glycerol/water solutions, DASPI shows sensitivity to the local viscosity η, and exhibits a Förster-Hoffmann dependency in a range covering two decades, log φ = C + b log η, with φ the LE-state quantum yield and b and C two solvent and dyedependent parameters. 
The exponent b, which is a measure of the efficiency of the molecular rotor in the solvent, was found to be around 0.7 in both solutions, which is close to the predicted theoretical values of 2/3 [22] or 0.6 [43]. The parameter C increases by about one and a half decade when the concentration of DASPI increases by two decades, which allows to adjust the dye concentration to the sensitivity of the intensity reading device in potential applications. The comparison between ethylene glycol/glycerol and glycerol/water solutions -one where the polarity effects are minimized, the other with variable polarity -confirms the previous result obtained by Haidekker and colleagues [24] that there is no impact of variable polarity on obtaining a Förster-Hoffmann equation between quantum yield and viscosity. In addition, our data show that temperature has little influence on the C parameter, which allows us to use it as a RBC stiffening control parameter in the section 3.2.2. The penetration of the molecular rotor (DASPI) has been studied in healthy red blood cells. We showed using confocal microscopy that the molecular rotor spontaneously penetrates into red blood cells. With a depth of field of approximately 600 nm, confocal microscopy allows us to unambiguously separate the fluorescence response of the cytosol and the lower and upper membranes as soon as the focal plane is far enough away from the membrane. We therefore revealed that the molecular rotor is uniformly distributed throughout the volume of RBCs, giving evidence that it is a suitable probe for characterizing their intracellular rheology. As fluorescence profiles include a contribution from the membrane with a locally strong signal, the specific interactions between DASPI and the cell membrane should receive particular attention in future studies in order to reach quantitative understanding of the influence of structural and rheological properties of membranes on MR fluorescence. In order to highlight and characterize the sensitivity of DASPI to RBC rheological properties, we varied temperature as a way to control cytosol and membrane properties, as a decrease in temperature leads to an overall stiffening of RBCs. Results show that for a given healthy blood sample, the measured fluorescence intensity at the whole cell level varies nearly by a factor 2 between 18 • C and 43 • C in correlation with variations of cytosol viscosity and membrane elasticity. We showed that by making reasonable assumptions about the RBC geometry, it is possible to separate and quantify the respective contributions of the cytosol and the membrane to the overall fluorescence signal and thus to derive information on the modifications of both components due to temperature change. We show in particular that the estimated contribution from the cytosol reproduces well the intrinsic heterogeneity of the RBC sample, and is consistent with independent measurements on hemoglobin solutions. Finally, by simply varying temperature for healthy cells, we demonstrate by this example that in principle the DASPI molecular rotor can be used to derive measurements of intracellular viscosity and quantify sample heterogeneity. For a more quantitative analysis, the partial absorption of both incident and emitted light by hemoglobin will have to be modeled and taken into account, which offers prospects for future studies. Further investigations should also focus on calibration curves of the DASPI response in hemoglobin solutions, taking into account hemoglobin absorbance. 
While our study shows that the rotor response is unequivocally correlated to rheological changes in healthy hemoglobin induced by temperature variations, these results should be generalized to systematic variations in hemoglobin concentration, as well as to studies on pathological situations where the structure of hemoglobin may be altered. Our results are a proof of concept that MRs can be used as nanoscale rheological probes in small compartments such as red blood cells. Their use in this context of evaluation of mechanical properties at the cellular level is a promising prospect, in particular because it would make it possible to measure the distribution of cell properties over large samples, unlike other techniques that only allow the measurement of average values over the entire sample. The alteration and dispersion of cell properties in pathological cases can indeed be critical for clinical aspects and such a technique opens new perspectives for medical diagnosis purposes by providing more detailed and quantitative information through a rather simple and direct measurement of fluorescence. Conflicts of interest There are no conflicts to declare.
Modeling Extreme Conditions of Sewage Plumes in Central – South Coastal Region of São Paulo State – Brazil The study region is located in the central-south coast of Sao Paulo State (Brazil), centered at 24.5°S 46.5°W, and is influenced by three outfalls of the city of Praia Grande and by sewage releases from the Itanhaem River. The objective of this study is to analyze the dispersion of effluent plumes emitted by the submarine outfalls and the Itanhaem River, concerning the concentration of contaminants in extreme conditions (summer month). For that, the Delft3D model's hydrodynamic and water quality modules were used, as well as the Visual Plumes model. Results of the hydrodynamic simulations were consistent with the hydrodynamic features established in the literature. The near-field modeling showed the influence of currents on the transport and initial dilution of the effluent plumes. The far-field modeling results for the plumes from the submarine outfalls and the Itanhaem River showed that those plumes do not exceed the maximum levels established by the National Environmental Council when reaching the shore. Since observations indicate that the local beaches are often classified as unsuitable for bathing and of poor water quality, streams that dump raw, untreated sewage directly onto the beaches can be identified as responsible for the environmental contamination of the shore in the study area. INTRODUCTION In outfalls, the sewage usually passes through only preliminary treatment, with subsequent disinfection with chlorine. However, even when untreated sewage is dumped, the sea allows dilution. After the sewage is dumped into the marine environment, either by a submarine outfall or through streams that flow into the sea, the mixing of the effluents then occurs. In the case of emissaries, the mixing takes place in three zones, with different spatial-temporal scales: near field, intermediate field and far field (Lamparelli, 2007; Harari et al., 2013; Gregorio, 2009; Delfim, 2011; Subtil, 2012). The near-field region interacts with the surface, the pycnocline region and the bottom, with great influence of physical processes and of the diffuser pipe features, which can significantly affect the initial mixing. In the near field, the spatial scale is of the order of 10 to 100 m, while the timescale is in the range of seconds to minutes. The intermediate-field region is the zone of stability between the near- and far-field regions, and depends dynamically on the momentum and buoyancy forces of the sewage discharge and on the local current intensities. Its spatial and time scales are typically from 100 to 1000 m and from minutes to hours, respectively. From that point, once hydrostatic equilibrium is reached, the far-field region is established, where the effluent begins to behave like a plume whose dispersion is controlled by advection and diffusion processes. In the far field, the spatial scale is approximately 10^3–10^5 m, while the time scale is of the order of hours to days.
The near-field models are used to simulate the mixing processes in the initial release region, and depend both on environmental factors (such as the intensity of currents, the stratification of the water column and the sea turbulence) and on release features (number of emitting orifices, their dimensions, etc.). Far-field models, in turn, are used to simulate the dispersion of sewage without the need to take into account how its release into the marine environment is performed; they are used for simulations of effluent dispersion in coastal regions and estuaries, starting from previously obtained results of near-field models (Delfim, 2011). However, the cities of Peruibe, Itanhaem and Mongagua do not have outfalls, and discharge part of their wastewater without any proper treatment, directly into streams, estuaries and the sea (Jakob, 2002). One of the main destinations of these effluents is the Itanhaem River (24.20°S 46.80°W), whose estuary has typical mangrove vegetation and is greatly influenced by the sea (Souza Pereira & Camargo, 2004). OBJECTIVES The main objective of this study was to analyze, through numerical modeling, the dispersion of the effluent plumes emitted by the outfalls of Praia Grande and by the Rio Itanhaem. METHODOLOGY Delft3D is a model used to perform numerical simulations, determining spatial and temporal variations and interactions between hydrodynamic phenomena, sediment, ecology and water quality, especially in natural environments such as coastal regions, rivers and estuaries, but also in artificial coastal environments, such as ports and docks (Deltares, 2013). A model grid was established in Delft3D (D3D) to cover the region of interest for the hydrodynamic modeling, in spherical coordinates, of Arakawa C type (Mesinger & Arakawa, 1976). This computational grid contains 227 by 227 horizontal cells, inclined at 45° counterclockwise, with a horizontal spacing of 350 m and a vertical discretization of five equidistant sigma layers (Harari et al., 2006). The water quality module D3D-WAQ allows physical and chemical processes specific to each pollutant to be activated independently, aiming at the creation of scenarios that represent the real dispersion and decay of pollutants in the environment (Deltares, 2013a). The Visual Plumes software (VP), Version 1.0, was used (Frick et al., 2001; Frick, 2004) for modeling the near and intermediate fields due to the sewage discharge from the three emissaries, more specifically its Three Dimensional Updated Merge (UM3) module. Mixing processes in the initial region were simulated from: technical information on the outfalls; concentration values and decay of the pollutants of concern; and results of D3D-FLOW for the hydrodynamic characterization of the environment where the sewage plumes were released. UM3 is a Lagrangian 3D model of the initial dilution of the plume, based on the equations of conservation of mass, horizontal momentum and energy; UM3 performs near-field and intermediate-field simulations of plumes from ocean outfalls (Baumgartner, Frick & Roberts, 1994). The fecal coliform pollutant Escherichia coli (E. coli) was selected for this study. The upper limit for a coastal water to be considered appropriate for bathing, as a function of E. coli concentration, is 800 MPN/100 mL in at least 80% of the samples (BRASIL, 2001).
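As a rough, order-of-magnitude illustration of the reasoning behind such far-field results (and not the formulation actually used in D3D-WAQ or Visual Plumes), the sketch below combines an assumed initial near-field dilution with first-order bacterial decay and compares the outcome with the 800 MPN/100 mL bathing limit; all numerical values are hypothetical.

```python
# Back-of-the-envelope screening of E. coli concentration after an initial
# dilution and first-order decay during transport, against the bathing limit
# (BRASIL, 2001). Not the D3D-WAQ/VP formulation; every number is hypothetical.
import math

LIMIT_MPN_100ML = 800.0

def coliform_at_shore(c_effluent, near_field_dilution, travel_time_h, t90_h):
    """First-order decay: C(t) = C0 * 10**(-t/T90), applied after initial dilution."""
    c0 = c_effluent / near_field_dilution
    return c0 * 10 ** (-travel_time_h / t90_h)

c_shore = coliform_at_shore(c_effluent=1.0e7,          # MPN/100 mL in raw sewage (hypothetical)
                            near_field_dilution=150.0,  # assumed near-field dilution
                            travel_time_h=12.0,
                            t90_h=4.0)                  # T90 depends on light, salinity, temperature
status = "improper" if c_shore > LIMIT_MPN_100ML else "acceptable"
print(f"{c_shore:.0f} MPN/100 mL -> {status} for bathing")
```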
Values of the concentrations of pollutants released by the Rio Itanhaem were obtained from Quiñones (2000), for the summer period (January and February) of 1998. The release of pollutants was continuous and constant over time. Temperature and salinity values at the river mouth were obtained following Souza Pereira & Camargo (2004), while values of the average monthly discharges of the Rio Itanhaem were obtained from DAEE (2015). The hydrodynamic and near-, intermediate- and far-field modeling was performed for February 2012, considered an extreme case because it is a summer month with a large number of tourists. As the research aimed to monitor the pollution plumes due only to the outfalls of Praia Grande and to the Rio Itanhaem discharge, zero concentration values across the entire grid were used as the initial condition. RESULTS This section initially presents a selection of modeling results for the plumes in the near and intermediate fields (VP model results) for February 2012, with angular histograms of dilution, travelled distance and concentration of E. coli (Figure 1). Next, time series of E. coli concentrations at the surface and at the bottom are presented in Figures 2 and 3, as given by the far-field modeling results for February 2012. Finally, Figures 4 and 5 show the distributions of E. coli at the surface and at the bottom, in the periods of maximum values in Praia Grande and at the Rio Itanhaem, in February 2012. Monitoring of the beaches of the municipality of Itanhaem, carried out by CETESB (2013), showed the same percentage of time with adequate water quality as given by the D3D-WAQ results. However, the model results and the in situ sampling on the beaches of Praia Grande did not agree; one possible explanation for this discrepancy is that the D3D-WAQ simulations considered as polluting sources only the three outfalls of Praia Grande and the Rio Itanhaem discharges, while the in situ samples were influenced by other sources of pollution in the coastal region, such as streams that dump raw sewage directly onto the Praia Grande coast (SABESP, 2010). When the pollution from these streams joins the plumes of the emissaries, the result is an increased concentration of coliforms, which can exceed the bathing limits, making the beaches improper for use by bathers. DISCUSSION The maps of pollutant plume distribution show that the entire coastal strip, down to depths of approximately 25 to 30 m, is subject to the impacts of effluent plumes coming from the Praia Grande emissaries and from the mouth of the Rio Itanhaem, both in the surface layer and in the bottom layer. As for the time series and statistical calculations, Escherichia coli concentrations above the 800 MPN/100 mL limit were reported in most cases, for much of the simulated time, at points located between the outfalls as well as at points located in their vicinity.
At points located on the beaches of the municipalities of Praia Grande and Itanhaem, no model result exceeded the E. coli limit of saline water Class 1, considered appropriate for bathing. This is because the concentration of a pollutant decreases due to the dispersion of the plumes, so the tendency is a gradual reduction of the concentration of coliforms with the distance covered by the plumes; in fact, the sewage purification process is highly dynamic (Ferreira, 2015) and, in the case of fecal coliforms, is a function of incident solar radiation, chlorinity and seawater temperature. As for the area near the mouth of the Rio Itanhaem, besides the resulting plumes of the Praia Grande outfalls already arriving there with significantly reduced coliform concentrations, the river plume has less polluting potential than the plumes of the three submarine outfalls, due to the initial E. coli values considered. CONCLUSIONS According to the annual reports on beach quality in Sao Paulo State, published by CETESB (2013), the Praia Grande, Mongagua, Itanhaem and Peruibe beaches have registered improper bathing conditions, especially during the summer months. The use of the D3D-WAQ model, together with the hydrodynamic results of D3D-FLOW, for monitoring the dispersion of the plumes of the outfalls PG1, PG2 and PG3 and of the Rio Itanhaem, showed that their plumes reach the coast down to depths of about 25 to 30 m. However, the pollution from such sources at local beaches does not exceed the bathing limits, being above the limit values only near the outfalls. Therefore, one can say that the Praia Grande outfalls are well sized and operate correctly and, along with the Rio Itanhaem, cannot be considered responsible for the improper bathing conditions of the beaches in the region. In the case of Praia Grande, where bathing in general remained improper at most of the beaches in the summer of 2012 (CETESB, 2013), the 10 streams that flow directly into the coastal area of the city can be considered the main ones responsible for the pollution and poor water quality along its coastline, as they bring raw and untreated sewage directly to the beaches (SABESP, 2010). Figure 1: Angular histograms of dilution, travelled distance and concentration of E. coli plumes of the PG3 outfall for each of its diffusers, in February 2012.
2018-12-21T09:12:15.710Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "d9a2956f04b785e1d90824390f217cc8610042c1", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4322/dae.2016.017", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d9a2956f04b785e1d90824390f217cc8610042c1", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
1731888
pes2o/s2orc
v3-fos-license
Five mucosal transcripts of interest in ulcerative colitis identified by quantitative real-time PCR: a prospective study Background The cause and pathophysiology of ulcerative colitis are both mainly unknown. We have previously used whole-genome microarray technique on biopsies obtained from patients with ulcerative colitis to identifiy 5 changed mucosal transcripts. The aim of this study was to compare mucosal expressions of these five transcripts in ulcerative colitis patients vs. controls, along with the transcript expression in relation to the clinical ulcerative colitis status. Methods Colonic mucosal specimens from rectum and caecum were taken at ambulatory colonoscopy from ulcerative colitis patients (n = 49) with defined inflammatory activity and disease extension, and from controls (n = 67) without inflammatory bowel disease. The five mucosal transcripts aldolase B, elafin, MST-1, simNIPhom and SLC6A14 were analyzed using quantitative real-time PCR. Results Significant transcript differences in the rectal mucosa for all five transcripts were demonstrated in ulcerative colitis patients compared to controls. The grade of transcript expression was related to the clinical disease activity. Conclusion The five gene transcripts were changed in patients with ulcerative colitis, and were related to the disease activity. The known biological function of some of the transcripts may contribute to the inflammatory features and indicate a possible role of microbes in ulcerative colitis. The findings may also contribute to our pathophysiological understanding of ulcerative colitis. Background Ulcerative colitis (UC) is a disorder characterized by chronic mucosal inflammation of the large intestine. It is frequently associated with various extraintestinal manifestations. The inflammation may be limited to the rectum (proctitis), but mucosal lesions often continue more prox-imally (left-sided UC) or additionally embrace the transverse colon (extensive colitis) or the entire large bowel (pancolitis). The immune and cellular (non-immune) response is dysregulated in both the acute and the chronic phase of UC [1,2]. In Scandinavia, UC has been found to affect individuals of all ages, with an annual incidence of about 15 per 100 000 [3,4] and a prevalence of about 300 per 100 000 inhabitants [5]. The pathogenesis and pathophysiology of UC are still under investigation [6]. We can tentatively say that the cause and onset of the disease is polygenic with environmental interaction; that is, there is a genetic predisposition [7][8][9] in combination with eliciting environmental factors which may precipitate the phenotype of UC [10]. In addition, interaction between the colonic epithelium and microbiological flora as well as a disintegrated mucosal barrier function may be important factors in the onset and development of UC [6]. The use of microarray technique analyses on mucosal specimens obtained from both patients with established UC and controls has allowed identification of candidate genes, which are valuable in research on UC pathogenesis. However, these UC candidate genes must be carefully selected, since recent evaluations of microarray data have revealed considerable divergence after examination of similar tissues [11][12][13]. Such divergent results are commonly presented in studies using pooled patient samples. 
In the present study however, the transcripts selected are based on our earlier individual whole-genome microarray screening and quantitative real-time PCR (RT-PCR) in patients with UC [14], where five changed genes/transcripts were identified; aldolase B, elafin, MST-1, simNIPhom (similar to NIP homolog), and SLC6A14. The pathophysiological properties of SimNIPhom have not yet been clarified, but the other transcript products have potential importance in secretion [15,16], anti-microbiological activity [17], and cell-mediated immune response [18]. The primary aim of the present study was to define differences in the mucosal expression of five selected transcripts, retrieved from two different colonic locations in UC, by using a quantitative RT-PCR technique. We also aimed to evaluate the influence of ongoing anti-inflammatory treatment as well as the importance of the colonic UC extension and the severity class. Patients and tissue specimens Before the colonoscopy procedure, consecutive male and female subjects (UC patients and controls, >18 y) were recruited to the present study. The UC diagnosis was based on the medical history, endoscopic findings, histological examination, laboratory tests, and the clinical disease presentation. The extent of UC and the clinical activity were classified in accordance with the Montreal Classification [19]. In brief, the colonic inflammatory involvement is defined as extension (letter E) combined with a number between 1-3 (E1 denotes proctitis, E2 leftsided UC, and E3 extensive colitis). In addition, the clinical severity grade (letter S) is defined. The S-score ranges from clinical remission (S0) to severe UC (S3). Mucosal biopsies were obtained from the rectum (10-15 cm proximal from anal verge) and caecum from all participants. The remaining demographic and clinical data are presented in Figure 1. RNA isolation The biopsy specimens were immediately stored in RNAlater solution for isolation of RNA. The RNA-later-preserved biopsies were homogenized in a lysis buffer from the GenElute Mammalian Total RNA kit (Sigma, St. Louis, MO.) and total RNA was isolated according to the manufacturer's instructions. The RNA concentration was measured spectrophotometrically. Quantification by real-time polymerase chain reaction (RT-PCR) Two μg of total RNA from each sample were converted into cDNA. The cDNA synthesis was performed as described previously [20]. Oligonucleotide primers purchased from MWG-BIOTECH AG (Ebersberg, Germany) were used for the relative quantification (ABI-7500 system, software version 1.3) (Table 1). Glyceraldehyde-3phosphate dehydrogenase (GAPDH) was used as a reference gene in all experiments. The expression level in each sample was compared with a calibrator by using the ΔΔC Tformula (ΔC T(calibrator) -ΔC T(sample) ). Statistical analysis Descriptive statistics and the Wilcoxon signed rank test (SAS, Statview ® ) were used. Median values are presented. Ethics The study was approved by the local research ethical committee. All patients were given oral and written information before entering the study. Informed consent was obtained from all patients and controls. Results The mean duration of UC was 9.3 years (proctitis 9.6 years, left-sided colitis 9.5 years and extensive colitis/pan-colitis 9.2 years). Neither age nor gender was matched between the UC group and the control group. 
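As a brief illustration of the comparative-CT quantification described in the Methods above, the sketch below computes a relative expression value from raw CT values, using GAPDH as the reference gene. All CT values and sample labels are hypothetical and the function names are ours; this is not the ABI-7500 software output.

```python
# Minimal sketch of the ΔΔCT quantification described in the Methods, with
# GAPDH as the reference gene. CT values below are hypothetical.

def delta_ct(ct_target, ct_reference):
    """ΔCT = CT(target transcript) - CT(reference gene)."""
    return ct_target - ct_reference

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Fold change using ΔΔCT = ΔCT(calibrator) - ΔCT(sample), i.e. 2**ΔΔCT."""
    ddct = delta_ct(ct_target_calibrator, ct_ref_calibrator) - \
           delta_ct(ct_target_sample, ct_ref_sample)
    return 2.0 ** ddct

# Hypothetical CT values for one transcript in one biopsy versus a calibrator:
print(relative_expression(ct_target_sample=24.1, ct_ref_sample=18.0,
                          ct_target_calibrator=26.3, ct_ref_calibrator=18.2))
# -> 4.0, i.e. a four-fold higher expression in the sample than in the calibrator
```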
In order to evaluate any differences in transcript expression within the control group (n = 67) with respect to background diagnoses (anaemia, diverticulosis, irritable bowel syndrome and polyposis), each background diagnosis was compared statistically with the remaining group of controls. No significant differences (p > 0.05) were detected for any of these diagnoses. Significantly higher transcript expressions of aldolase B and SimNIPhom, and significantly lower transcript expressions of elafin, MST-1 and SLC6A14, were found in caecal biopsies in comparison to rectal biopsies from the control group (Figure 2). The only significant differences between rectal and caecal transcript expressions in UC patients were the decreased transcript expressions of elafin and SLC6A14 in caecal biopsies in comparison to rectal biopsies. Comparison of rectal biopsies from controls (n = 67) with rectal biopsies from UC patients with inflammatory activity in accordance with Montreal classifications S1-S3 (n = 28) showed significant elevations (p < 0.05) in UC patients of all transcript expressions, with the exception of MST-1, which showed significantly (p < 0.05) decreased expression in UC patients. The same analysis of caecal biopsies from controls and patients with S1-S3 UC (n = 16) showed significantly elevated transcript expressions of aldolase B and SLC6A14 only. Distal biopsies from controls, compared with UC patients without inflammatory activity (S0), showed increased transcript expression of aldolase B only (median -1.62 vs. 1.0, p = 0.012). All other transcript analyses from both locations showed no significant differences (p > 0.05). All transcript analyses with respect to UC extension showed that left-sided colitis (E2) and total colitis (E3) differed significantly from controls (p < 0.05); this was not the case for proctitis (E1). (Figure legend: RT-PCR results, ΔΔCT = ΔCT,target − ΔCT,calibrator, for controls (filled dots) and UC patients (▲), presented as median values with 25th and 75th percentile bars.) Statistical analysis concerning the influence of anti-inflammatory treatment on the transcript expressions within the UC cohort showed no statistical differences (p > 0.05) when comparing UC patients with ongoing corticosteroids (n = 9) or azathioprine (n = 7), respectively, with the remaining UC patients. However, the 27 patients treated with mesalazine showed a significant increase in aldolase B (median 0.48 vs. 3.02, p = 0.035) in comparison to the remaining UC patients. Discussion Genetic predisposition, psychological stress, nutritional and environmental influences, intestinal pathogens and a disturbed intestinal barrier function have all been proposed as pathogenetic factors in UC [6]. However, current knowledge about the pathogenesis and pathophysiology of UC [21] is incomplete. Moreover, with the exception of a few general serological inflammatory activity biomarkers, even less information is available regarding mucosa-associated transcript changes and their potential pathogenetic and pathophysiological role in UC [22]. This lack of knowledge may sometimes lead to uncertainty in diagnosis, judgement of prognosis and clinical management of UC patients. On the basis of existing knowledge of the biological functions of the transcripts investigated in this study, it is reasonable to believe that the demonstrated alterations might be related to predisposition and/or the pathophysiological response in UC.
It is intriguing that aldolase B and SLC6A14 were up-regulated in rectal as well as caecal mucosa in UC compared to controls. Aldolase B is known to be mainly expressed in the intestinal villus cells and has a central role in the glycolytic pathway. It also participates in the regulation of intestinal secretion [16]. Since SLC6A14 is known to encode a Na+/Cl−-driven amino acid transporter B(0+) [15], the up-regulation of aldolase B and SLC6A14 might be a common pathophysiological response aimed at counteracting the exaggerated loss of fluid seen in UC. Theoretically, the up-regulation of these two transcripts could be a local response to the increased faecal/fluid stream, in which bioactive molecules have the ability to regulate transcript expression. Additionally, since the inflammatory activity and fluid load over time are usually most pronounced in the distal part of the colon, the registered changes in aldolase B and SLC6A14 may reflect long-term inflammatory activity. Our finding that aldolase B from distal biopsies is significantly elevated during the remission phase (S0) indicates that the regulation of this transcript is not only secondary to the inflammatory activity. The involvement of the microflora and its importance in the onset, development and maintenance of UC have been discussed [6]. The SLC6A14 transcript expression is therefore also interesting in this respect, since it is involved in the host's antibacterial response [21]. In addition, the defensin-like, epithelium-associated antimicrobial molecule elafin, which antagonizes human neutrophil elastase and prevents tissue injury by inhibiting the excessive release of proteolytic enzymes from inflammatory cells, is interesting in this context [18]. The present results confirm an elafin transcript enhancement in caecal as well as rectal biopsies from patients with UC. Thus, the combined elevation of elafin and SLC6A14 may contribute to an amplified defence reaction aimed at restoration and maintenance of the mucosal integrity. This finding may indicate a pathogenetic role of the microflora in UC. MST-1 was included in the present study due to its alterations in UC, as shown in our previous experiment [14], although it was excluded from that publication due to deviation in its control group. MST-1 is known to be capable of inhibiting cell-mediated immune responses via down-regulation of IL-12 production and subsequent inhibition of macrophage activation [23]. Consequently, the observed down-regulation of MST-1 in rectal specimens may contribute to an enhanced cellular immune response in UC. A reasonable explanation of the concomitant decreased MST-1 transcript expression and increased aldolase B, SLC6A14 and elafin transcript expression is that the changes reflect a pathophysiological response to a more pronounced inflammatory, and possibly an exaggerated microbial, load in at least the rectal part of the colonic mucosa. The fifth identified transcript, SimNIPhom (similar to the numb-interacting protein homolog), which was significantly up-regulated in the rectum only, encodes a hypothetical protein of, at present, unknown pathophysiological importance. Our results support the view that specimens from the rectal mucosa are more suitable for further analysis of the selected transcripts, due to the more predictable inflammatory involvement in the rectum and its availability for direct inspection and easy biopsy sampling. Our data cannot answer whether the observed changes in the expression of the five selected transcripts may also be present in, e.g.,
other inflammatory, infectious or autoimmune conditions, since this study focused solely on UC patients compared with non-inflamed controls. Conclusion The five changed gene transcript expressions are related to UC, its extension and its clinical severity. Whether the presented results hold discriminative potential of importance for the medical care of patients with UC in future clinical practice remains to be elucidated.
2014-10-01T00:00:00.000Z
2008-08-12T00:00:00.000
{ "year": 2008, "sha1": "b1d1894c8083966f5d502c8ad780700356f6a837", "oa_license": "CCBY", "oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/1471-230X-8-34", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83b477be076ead4fd5b682c0cff23917bf3d1ef1", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3650643
pes2o/s2orc
v3-fos-license
Joint Physical Layer Coding and Network Coding for Bi-Directional Relaying We consider the problem of two transmitters wishing to exchange information through a relay in the middle. The channels between the transmitters and the relay are assumed to be synchronized, average power constrained additive white Gaussian noise channels with a real input and signal-to-noise ratio (SNR) of snr. An upper bound on the capacity is 1/2 log(1 + snr) bits per transmitter per use of the medium-access phase and broadcast phase of the bi-directional relay channel. We show that using lattice codes and lattice decoding, we can obtain a rate of 1/2 log(0.5 + snr) bits per transmitter, which is essentially optimal at high SNRs. The main idea is to decode the sum of the codewords modulo a lattice at the relay, followed by a broadcast phase which performs Slepian-Wolf coding with structured codes. For asymptotically low SNRs, jointly decoding the two transmissions at the relay (MAC channel) is shown to be optimal. We also show that if the two transmitters use identical lattices with minimum angle decoding, we can achieve the same rate of 1/2 log(0.5 + snr). The proposed scheme can be thought of as a joint physical layer, network layer code which outperforms other recently proposed analog network coding schemes. I. INTRODUCTION, SYSTEM MODEL AND PROBLEM STATEMENT We consider the bi-directional relaying problem where two users try to exchange information with each other through a relay in the middle. More specifically, we study a simple 3-node linear Gaussian network as shown in Fig. 1. Nodes A and B wish to exchange information with each other through the relay node R; however, nodes A and B cannot communicate with each other directly. Let u_A ∈ {0, 1}^k and u_B ∈ {0, 1}^k be the information vectors at nodes A and B (vectors are denoted by bold face letters such as u throughout the paper). The information is assumed to be encoded into vectors (codewords) x_1 ∈ R^n and x_2 ∈ R^n at nodes A and B, respectively, and transmitted. We assume that communication takes place in two phases - a multiple access (MAC) phase and a broadcast phase, which are briefly described below. a) MAC phase: During the MAC phase, nodes A and B transmit x_1 and x_2 in n uses of an AWGN channel to the relay. It is assumed that the two transmissions are perfectly synchronized and, hence, the received signal at the relay y_R ∈ R^n is given by y_R = x_1 + x_2 + z, where the components of z are independent, identically distributed (i.i.d.) Gaussian random variables with zero mean and variance σ^2. Further, it is assumed that there is an average transmit power constraint of P at both nodes and, hence, E[||x_1||^2] ≤ nP and E[||x_2||^2] ≤ nP. b) Broadcast phase: During the broadcast phase, the relay node transmits x_R ∈ R^n in n uses of an AWGN broadcast channel to both nodes A and B. It is assumed that the average transmit power at the relay node is also constrained to P and that the noise variance at the two nodes is also σ^2. Node A forms an estimate of u_B, namely û_B, and node B forms an estimate of u_A, namely û_A. An error is said to occur if either u_B ≠ û_B or u_A ≠ û_A, i.e., the probability of error is given by P_e = Pr{u_B ≠ û_B or u_A ≠ û_A}. It is assumed that the communications in the MAC and broadcast phases are orthogonal. For example, the communications in the MAC and broadcast phase could be in two separate frequency bands (or in two different time slots) and, hence, the MAC phase and broadcast phase do not interfere with each other.
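As a small sanity check on the MAC-phase model just defined, the sketch below draws power-constrained inputs, forms y_R = x_1 + x_2 + z, and verifies the per-dimension powers. The Gaussian inputs are illustrative only; they are not the codebooks analyzed later in the paper.

```python
# Minimal simulation (illustrative only) of the MAC-phase model defined above:
# the relay observes y_R = x_1 + x_2 + z with i.i.d. N(0, sigma^2) noise and an
# average power constraint of P per node.
import random

n, P, sigma2 = 10_000, 1.0, 0.5
x1 = [random.gauss(0.0, P ** 0.5) for _ in range(n)]
x2 = [random.gauss(0.0, P ** 0.5) for _ in range(n)]
z  = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
y_R = [a + b + c for a, b, c in zip(x1, x2, z)]

print(sum(v * v for v in x1) / n)    # ~P: per-dimension power of node A's input
print(sum(v * v for v in y_R) / n)   # ~2P + sigma^2: power seen at the relay
```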
To keep the discussion simple, we will assume that the MAC and broadcast phases occur in different time slots. (Fig. 1. System model with 3 nodes; in the broadcast phase, the relay transmits x_R and the nodes receive y_1 = x_R + z_1 and y_2 = x_R + z_2.) This can be easily generalized to the case when 2n dimensions are available for communication, out of which n dimensions are allocated to the MAC phase and n dimensions are allocated to the broadcast phase. Note that the signal-to-noise ratio (SNR) for all transmissions is snr = P/σ^2 and, hence, we refer to it simply as the SNR without having to distinguish between the SNRs of the different phases. Similarly, we restrict our attention to the case when both nodes A and B wish to exchange identical amounts of information and, hence, we can simply refer to one exchange rate without having to distinguish between the rates for A and B separately. Formally, we define the exchange rate R_ex,scheme for an encoding/decoding scheme as the maximum information rate that can be exchanged reliably, i.e., R_ex,scheme = max {k/n : P_e → 0 as n → ∞}. The exchange capacity C_ex is then the supremum of R_ex,scheme over all possible encoding/decoding schemes. II. MAIN RESULTS AND COMMENTS We mainly consider the case when the MAC and broadcast phase are both restricted to using exactly n uses of the channel each. For this case, the main results in this paper are: • The exchange capacity satisfies C_ex ≤ (1/2) log(1 + P/σ^2), because the MAC and broadcast phase each consist of n AWGN channel uses. • An exchange rate of R_ex,Lattice = (1/2) log(1/2 + P/σ^2) is achievable using the lattice coding scheme with dither and lattice decoding discussed in Section VI. The same exchange rate can also be obtained using the lattice coding scheme (without dither) and minimum angle decoding discussed in Section VII. At high SNR, these lattice based coding and decoding schemes are nearly optimal because their rates approach the upper bound. • An exchange rate of R_ex,JD = (1/4) log(1 + 2P/σ^2) is achievable using the joint decoding scheme in Section VIII. At low SNR, this scheme is nearly optimal because the rate approaches the upper bound. • Clearly, any rate of the form βR_ex,JD + (1 − β)R_ex,Lattice is achievable for any 0 ≤ β ≤ 1 by time sharing between the two schemes. This outperforms the recently proposed analog network coding idea in [3] over the entire range of SNR. III. RELATED PRIOR WORK Recently, there has been a significant amount of work on coding for the bi-directional relay problem [1]-[7]. In [2], Katti et al. showed the usefulness of network coding for this problem. Although they do not consider the physical layer explicitly in this work, the natural extension of their solution to our problem would work as follows. The 2n channel uses available for signaling would be split into three slots with 2n/3 channel uses each. In the first time slot, u_A is encoded using an optimal channel code for the AWGN channel into x_1 and transmitted from node A. Similarly, in the second time slot u_B is encoded into x_2 and transmitted from node B. At the relay, u_A and u_B are decoded and then the relay forms u_R = u_A ⊕ u_B, encodes u_R into x_R using an optimal code for the Gaussian channel, and broadcasts it to both nodes. The two nodes decode u_R and then, since they have u_A and u_B, they can obtain u_B and u_A at the nodes A and B, respectively. Here, the physical layer and network layer are completely separated and coding (or mixing of the information) is performed only at the network layer. In the system model considered in Fig.
1, the physical layer naturally performs mixing of the signals from the two transmitters. The schemes that take advantage of this can be referred to as joint physical layer coding and network coding solutions. One such scheme called analog network coding was recently proposed in [3]. In this case, the MAC phase and broadcast phase use n uses of the AWGN and are orthogonal to each other. Gaussian code books are used at the transmitters to encode u A into x 1 and u B into x 2 , respectively. Analog network coding is an amplify (rather than scale) and forward scheme where the received signal at the relay during the MAC phase y R , is scaled to satisfy the power constraint and transmitted during the broadcast phase, i.e., x R = P 2P +σ 2 y R . It can be seen that this scheme can achieve an exchange capacity of 1 2 log 1 + P/σ 2 3P +σ 2 P , which is higher than that achievable with the pure network coding scheme in [2] for high SNR. The schemes proposed in this paper can be thought of as decode and forward schemes which outperform the amplify and forward scheme in [3]. In a very recent work [6], it is conjectured that an exchange rate of 1 2 log 1 + P σ 2 can be achieved, however, no scheme is given or even conjectured. The scheme in this paper is a constructive scheme that performs close to the rate conjectured in [6]. The lattice decoding scheme discussed in Section VI is similar to that used by Nazer and Gastpar [20] for the problem of estimation the sum of two Gaussian random variables, but was independently proposed in the previous version of this paper [19]. IV. AN OPTIMAL TRANSMISSION SCHEME FOR THE BSC CHANNEL To motivate our proposed scheme, we first consider a system where the physical layer channels are all binary symmetric channels. i.e., x 1 ∈ {0, 1} n , x 2 ∈ {0, 1} n and the signal received at the relay is where ⊕ denotes binary addition and e is an error sequence whose components are 0 or 1 with probability q and 1 − q respectively and are i.i.d. Similarly, during the broadcast phase also let the channel be a BSC channel with crossover probability q. In this case, an upper bound on the exchange capacity can be seen to be 1 − H(q) since this is the maximum information that can flow to any of the nodes from the relay. This can be achieved using the following coding scheme. In this scheme, the two nodes use identical capacity achieving binary linear codes of rate 1 − H(q). Consider again the received signal at the relay given in (1). Notice that since x 1 and x 2 are codewords of the same linear code, x 1 ⊕ x 2 is also a valid codeword from the same code which achieves capacity over a BSC channel with crossover probability q. Hence, the relay can decode x 1 ⊕x 2 and transmit the result during the broadcast phase. The nodes A and B can also decode x 1 ⊕ x 2 and since they have x 1 and x 2 , they can obtain x 2 and x 1 , respectively. This scheme achieves an exchange rate of 1 − H(q) and is therefore optimal. Random codes versus structured codes: It is quite interesting to note that if random codes, i.e., codes from the Shannon ensemble were used instead of linear codes, x 1 ⊕ x 2 cannot be decoded at the relay. The linearity (or group structure) of the code is exploited to make x 1 ⊕ x 2 decodable at the relay and, hence, structured codes with a group structure outperform random codes for this problem. Examples of schemes were structured codes outperform random codes have been given in Korner and Marton [8] and more recently by Nazer and Gastpar in [9], [10]. V. 
UPPER BOUND ON THE EXCHANGE RATE FOR GAUSSIAN LINKS Let us now return to the problem outlined in Section I, where the channels between the nodes and the relay are AWGN channels. We restrict our attention to schemes where the MAC phase and broadcast phase are both orthogonal to each other and use n channel uses (or dimensions). With this restriction, a simple upper bound on the exchange rate can be obtained as follows. Consider a cut between the relay node and node A. The maximum amount of information that can flow to either of the nodes from the relay is 1 2 log 1 + P σ 2 . Hence, the exchange capacity is upper bounded by We now consider coding schemes and analyze their performance. VI. NESTED LATTICE BASED CODING SCHEME WITH LATTICE DECODING As shown in Section IV for the BSC channel, codes with a group structure (linear codes) enable decoding of a linear combination (or, sum) codewords at the relay. This motivates the use of lattice codes for the Gaussian channel since lattices have a similar group structure with respect to real vector addition. We begin with some preliminaries about lattices [12], [14]. An n-dimensional lattice Λ is a subgroup of R n under vector addition over the reals. This implies that if λ 1 , λ 2 ∈ Λ, then λ 1 +λ 2 ∈ Λ. For any x ∈ R n , the quantization of x, Q Λ (x) is defined as the λ ∈ Λ that is closest to x with respect to Euclidean distance. The fundamental Voronoi region V(Λ) is defined as V(Λ) = {x : Q Λ (x) = 0}. The mod operation is defined as (x mod Λ) = x − Q Λ (x). This can be interpreted as the error in quantizing an x to the closest point in the lattice Λ. The second moment of a lattice is given by and where V (Λ), the volume of the fundamental Voronoi region is denoted by V (Λ) = V(Λ) dx. The normalized second moment of the lattice is then given by Let us define the covering radius of a lattice R u as the radius of the smallest n-dimensional hyper sphere containing the Voronoi region V(Λ). Also let R l denote the effective radius of V(Λ), which is the radius of the n-dimensional hyper sphere having the same volume as V(Λ). Now we can define a Rogers-good Lattice [12, (69) where c and a are constants. This implies, In this work, we are interested in nested lattices. Formally, we can say the lattice Λ c (the coarse lattice) is nested in the lattice Λ f (the fine lattice) if Λ c ⊆ Λ f [13]. Let the fundamental Voronoi regions of the lattices, Λ c and Λ f be V(Λ c ) and V(Λ f ). The existence of nested lattices for which G(Λ c ) ≈ 1/(2πe) and G(Λ f ) ≈ 1/(2πe) has been shown in [13], [14]. The number of lattice points of the fine lattice in the basic Voronoi region of the coarse lattice is given by the nesting ratio V (Λc) V (Λf ) . Lattice codes can be used to achieve capacity on the single user AWGN channel under maximum likelihood (ML) [15], [17], [11]. More recently, nested lattices have been shown to achieve the capacity of the single user AWGN channel under lattice decoding [12], [14]. The main idea in [12], [14] is to use the coarse lattice as a shaping region and the lattice points from the fine lattice contained within the basic Voronoi region of the coarse lattice as the codewords. The existence of good nested lattices that achieve capacity has been shown in [12]. A. Description We now describe our encoding and decoding schemes for the bi-directional relaying problem using nested lattices. The encoding and decoding operations during the MAC and broadcast phase are explained below. A general schematic is also shown in Fig. 2. 
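Before the MAC-phase walkthrough that follows, a one-dimensional toy version of the lattice operations defined above may be helpful. The scaled-integer lattices below are purely illustrative (the achievability proof requires high-dimensional Rogers-good and Poltyrev-good nested lattices); the last two lines anticipate the mod-Λ recovery step used later in the broadcast phase.

```python
# One-dimensional toy of the nested-lattice operations defined above:
# coarse (shaping) lattice Λc = 10Z and fine lattice Λf = 2Z, so Λc ⊆ Λf.
# These scaled-integer lattices are illustrative only.

def quantize(x, spacing):
    """Q_Λ(x): nearest point of the lattice Λ = spacing * Z."""
    return spacing * round(x / spacing)

def mod_lattice(x, spacing):
    """x mod Λ = x - Q_Λ(x): the error in quantizing x to Λ."""
    return x - quantize(x, spacing)

coarse, fine = 10.0, 2.0
t1, t2 = 4.0, 2.0                     # fine-lattice points inside V(Λc) = [-5, 5)

t = mod_lattice(t1 + t2, coarse)      # what the relay decodes: (t1 + t2) mod Λc
print(t)                              # -4.0, again a fine-lattice point in V(Λc)
print(mod_lattice(t - t1, coarse))    # node A recovers t2 = 2.0
print(mod_lattice(t - t2, coarse))    # node B recovers t1 = 4.0
```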
MAC Phase: Let there be k information bits in the information vector u A and u B and, hence, the exchange rate is R = k/n. At node A, the information vector u A is mapped onto a fine lattice point t 1 ∈ {Λ f ∩ V(Λ c )}, i.e., the set of all fine lattice points in the basic Voronoi region of the coarse lattice is taken to be the code. An identical code is used at node B and the information vector u B is mapped onto the codeword t 2 ∈ {Λ f ∩ V(Λ c )}. We then generate dither vectors d 1 and d 2 which are randomly generated n dimensional vectors uniformly distributed over V(Λ c ). The dither vectors are mutually independent of each other and are known at both the relay node and the nodes A and B. Now node A and node B form the transmitted signal x 1 and x 2 as follows By choosing an appropriate coarse lattice with second moment P , the transmit power constraint will be satisfied at both nodes. Assuming perfect synchronization, the relay node receives y R given by where z is the noise vector whose components have variance σ 2 . The main idea is that the relay decodes t = (t 1 + t 2 ) mod Λ c from the received signal y R . Broadcast Phase: In the broadcast phase, the relay node transmits the index of t using a capacity achieving code for the AWGN channel. The index (or, equivalently, t) can be obtained at the nodes A and B, and using a mod Λ operation, t 2 and t 1 can can be recovered at nodes A and B, respectively. This scheme can also be thought of as a decode and forward scheme where the relay decodes a function of t 1 and t 2 , namely t = (t 1 + t 2 ) mod Λ c and forwards this to the nodes. Notice however, that the relay will not know either t 1 or t 2 exactly, it only knows t. B. Achievable rate Before we discuss the achievable exchange rate with the lattice encoding scheme and decoding scheme discussed above, we need a few definitions and lemmas. The coding rate of the nested lattice code is defined as the logarithm of the number of lattice points of the fine lattice in Λ f ∩ V(Λ c ) which is given by Lemma 1: Let t 1 and t 2 be independent random variables which are uniformly distributed over the set of all fine where R is the coding rate of the nested lattice code. Proof: We restate some of definitions in [12]. Let R u be the covering radius of Λ c and let B(R u ) be the n-dimensional ball of radius R u . Let ρ 2 be the second moment (per dimension) of the smallest ball containing V(Λ c ), i.e., Note that V(Λ c ) has a second moment P and hence P < ρ 2 . Let Z * be a random variable given by where R u is the covering radius of the coarse lattice Λ c and R l is the radius of the n-dimensional hyper sphere whose volume is equal to the volume of the basic Voronoi region V(Λ c ). Let X 1 and X 2 be two independent random variables which are uniformly distributed over V(Λ c ) and let Z ∼ N (0, σ 2 I) be an n-dimensional Gaussian vector independent of X 1 and X 2 . Further, let Z eq = (1 − α)(X 1 + X 2 ) + αZ, where α = 2P 2P +σ 2 . Notice that Z eq is not Gaussian. We next state a Lemma which is a modified version of [12,Lemma 11] by Erez and Zamir which essentially shows that there exist good lattices for which Z eq can be well approximated by a Gaussian of nearly the same variance and the approximation gets better as n → ∞. Lemma 3 (Modified version of [12,Lemma 11]): Let Λ c be a lattice which is both Rogers-good as well as Poltyrev-good. 
Then, for any x, Proof: To prove the modified version of [12,Lemma 11], we can repeat the steps in [12], and equation [12,200] can be restated with the notation in this paper as Similarly for Z 2 , we get Combining (14) and (15) and also the definition of Z * as Z * = (1 − α)(Z 1 + Z 2 ) + αN, we can get the proof of the modified version of [12, Lemma 11]. We next state the theorem which is an application of the above Lemmas. This is very similar to [12, Theorem 5]. Theorem 4 (modified version of [12, Theorem 5]): Let x 1 and x 2 be realizations of two independent random variables which are uniformly distributed over V(Λ c ) and let z be a realization of an n-dimensional Gaussian vector Z ∼ N (0, σ 2 I). Further, let z eq = (1 − α)(x 1 + x 2 ) + αz, where α = 2P 2P +σ 2 . For any coding rate R < 1 2 log( 1 2 + P σ 2 ), there exists a sequence of n-dimensional nested lattice pairs (Λ Proof: The proof follows closely the proof of [12,Theorem 5]. We mention the places where the proof in [12] that have to be modified. Equation (81) in [12] must be modified to take into account that we have two transmitters. Equation [12, (81)] must be modified with Lemma 2. Also Equation [12, (82)] must be modified with Lemma 3. After this, we can continue with the proof in [12] by calculating the Poltyrev exponents and also using the fact that Rogers-good and Poltyrev-good lattices exist. Continuing with these steps in [12] shows that we can obtain the rate of R < 1 2 log 1 2 + P σ 2 , which proves the theorem. We are ready to present the main theorem in this Section which is given below. Proof: MAC Phase: We will first show that the probability of error in decoding t from y R can be made arbitrarily small for asymptotically large n. During the MAC phase, the relay tries to decode to t = (t 1 + t 2 ) mod Λ from the received signal y R (given in (6) as follows. The decoder at the relay node formst = (αy R + d 1 + d 2 ) mod Λ c (α will be determined later) and finds the lattice point in the fine lattice that is closest tot, i.e., the estimate of t is Q Λf (t). Using the distributive property of the mod operation,t can be written as: Due to the group structure of the lattice, t is a lattice point in the fine lattice (more precisely, t ∈ {Λ f ∩ V(Λ c )}). From Lemma 1, it can be seen that t is uniformly distributed over V(Λ c ). Further, note that t 1 and t 2 are independent of z, x 1 and x 2 and, hence, we can define an equivalent noise term as z eq = αz − (1 − α)(x 1 + x 2 ) such that t and z eq are independent of each other. The second moment of Z eq is given by eq ] and the resulting optimum values of α and σ 2 eq are α opt = 2P 2P +σ 2 and σ 2 eq,opt = 2P σ 2 2P +σ 2 . From Theorem 4, it can be seen that there exist nested lattices of rate R lattice < 1 2 log 1 2 + P σ 2 for which P r{Z eq ∈ V (n) f } → 0, as, n → ∞. and, hence, the probability of decoding error P e = P r{Q Λf (t) = t} → 0 as n → ∞. Hence, we can use a rate of at each of the nodes and (t 1 + t 2 ) mod Λ c can be decoded at the relay. Broadcast Phase: In the broadcast phase, the relay node transmits the index of t using a capacity achieving code for the AWGN channel. Since the capacity of the AWGN is 1 2 log 1 + P σ 2 which is higher than R Lattice in (17) and, hence, the index (or, equivalently, t) can be obtained at the nodes A and B. Since node A already has u A and, hence, t 1 , it needs to recover u B or, equivalently, t 2 , from t and t 1 . This can be done as follows. 
Node A computes (t − t 1 ) mod Λ c , which can be written as Similarly, t 1 can also be obtained in node B by taking (t − t 2 ) mod Λ c . Hence, an effective rate of R ex,Lattice < 1 2 log 1 2 + P σ 2 can be obtained using nested lattices with lattice decoding. We conclude by noting that, at high SNR, this scheme approaches the upper bound of 1 2 log 1 + P σ 2 and is therefore nearly optimal. This scheme can be interpreted as a Slepian-Wolf coding scheme using nested lattices, i.e., the relay wishes to convey t 1 + t 2 to node A, where some side information (namely, t 1 ) is available. Thus, the broadcast phase in effect uses nested lattices for solving the Slepian-Wolf coding problem. This scheme can also be thought of as a decode and forward scheme where the relay decodes a function of t 1 and t 2 , namely t = (t 1 + t 2 ) mod Λ c and forwards this to the nodes. Notice however, that the relay will not know either t 1 or t 2 exactly, it only knows t. Since the nested lattice code we have used is a capacity achieving code for the AWGN channel, one does not have to encode t again. The relay can simply transmit t to the nodes A and B and t can be decoded at the nodes A and B in the presence of noise at the nodes A and B. Notice that E[||t|| 2 ] ≤ P and, hence, the power constraint will be satisfied at the relay node. VII. LATTICE CODING WITH MINIMUM ANGLE DECODING In the previous section we observed that nested lattice decoding can achieve an exchange rate of 1 2 log( 1 2 + P σ 2 ) with lattice decoding alone. This naturally leads us to the question, can we achieve a better performance by using other decoding schemes? In this section we study a suboptimal decoder called the minimum angle decoder [11]. A. Description We next briefly explain our minimum angle decoding scheme. We have two transmitters communicating simultaneously to a central router. Both of them have the same power constraint P . The noise in the channel is Gaussian having a variance σ 2 . As we have seen in previous sections, choosing a good lattice code provides us with a considerable rate gain compared to random codes. Here we consider a n-dimensional lattice Λ n ⊂ R n . Let T √ P be an n-dimensional closed ball, centered at the origin and having a radius √ nP , and let V n ( √ nP ) be the hyper-volume of T √ P . This can be treated as a power constraint. Our codewords will be composed of lattice points in the sphere T √ P . Our encoding strategy would be that, each transmitter chooses a lattice point corresponding to its message index and transmits synchronously over the the Gaussian channel. Here we have no nested lattice construction or the use of dither in the encoding stage. At the receiver we will be interested in decoding to the sum of these lattice points. Minimum Angle Decoder: A minimum angle decoder discussed here makes a decision based on lattice points in a thin n-dimensional spherical shell T ∆ √ 2P . = {x ∈ R n : n(2P − δ) ≤ x ≤ n(2P + δ)}, δ is small and non-zero. It takes the received vector and finds the lattice point, whose projection on the thin shell, is closest to the received vector. An optimal decoder can be seen to be a maximum likelihood decoder. However, it is very tough to analyze this decoder. Hence, we analyze the minimum angle decoder, which is more tractable analytically. We next state the main theorem of this section, B. 
Achievable rate Theorem 6: For the bi-directional relaying problem considered in Section I, there exists at least one n-dimensional lattice Λ n such that an exchange rate of R ex < 1 2 log( 1 2 + SN R) is achievable using a minimum angle decoder as n → ∞. Proof Sketch Transmitted lattice points x 1 and x 2 from sphere of radius nP Lattice points sum (x 1 + x 2 ) concentrated at sphere of radius P n2 It is well known that the volume of an n-dimensional sphere is concentrated mainly on the surface of the sphere as the dimension becomes large. It is also known that, if we intersect a lattice with a n-dimensional sphere, then most of the lattice points will be concentrated very close to the surface [11]. In the course of our proof, we will show that the sum of any two such randomly chosen lattice points is also concentrated on a thin spherical shell T ∆ √ 2P at a radius of √ 2nP . Hence, the probability of error will be largely depended on the lattice points in the thin spherical shell T ∆ √ 2P . We will be using the Blichfeldt's principle to show that there exist translations (one for each user) of the lattice Λ n where the sum points are concentrated (see Theorem 9, Lemma 12 and Lemma 13). Once concentration for the sum of lattice points can be established, we can perform minimum angle decoding. In minimum angle decoding, we are interested only in the angle between the different lattice points on the thin spherical shell. It must be noted that the choice of the lattice Λ n must be such that it must act as a good channel code. The Minkowski-Hlawka theorem (Theorem 11 and Lemma 14) can be used to show existence of such lattices. Choosing volume of the lattice's Voronoi region appropriately allows us to compute an achievable rate of this scheme. C. Detailed proof First let us provide some definitions and notation, • Λ n denotes an n-dimensional lattice and let P n be the basic Voronoi region of the lattice. n2P and is defined as T ∆ √ 2P = {x ∈ R n : n(2P − δ) ≤ x ≤ n(2P + δ)}, δ is small and non-zero. • C 1 and C 2 are codewords formed by intersection of lattice points of hyper-spheres and are given by, • C 2 are codewords formed by intersection of lattice points of hyper-spheres and are given by, This denotes the combined collection of pairs of codewords of both the transmitters. This denotes the codeword pairs whose sum lies on the thin shell T ∆ √ 2P . This denotes the code word pairs whose sum does not lie on the thin shell. It must be noted that the set formed by the sum of codewords in C ∆ ⊕ need not be the same as C ∆ √ 2P and at low SNR this may lead to significant difference between ML and minimum angle decoding. ⊕ and M ⊕ denote the cardinality of C 1 , C 2 , C ⊕ , C ∆ ⊕ and C ⊕ respectively. • For a given code C, let us denote the average probability of error, under minimum distance decoding as P C . • Let us define a projection function π. This projects a n dimensional vector onto to an inner sphere of radius n(2P − δ). It is defined as π(x) = ( n(2P − δ)/ x )x. It is easy to see that minimum distance decoding is equivalent to maximum likelihood decoding in the presence of Gaussian noise. As mentioned before the set of lattice points whose sum lies in the thin spherical shell T ∆ √ 2P is much larger than the lattice points whose sum lies outside the spherical region. Hence the average probability of error will not be affected much by these lattice points. 
This motivates us to express the average probability of error P C⊕ as a sum of two terms as made more clear in the lemma below [11]. Lemma 7: Proof: Let the ordered pair (x 1 , x 2 ) ∈ C ⊕ , denote that x 1 ∈ C 1 and x 2 ∈ C 2 . We next follow similar steps of the proof as given in [11]. Let P C⊕ (x 1 , x 2 ) = P C⊕ (x 1 + x 2 ), denote the probability of error in decoding to the sum (x 1 + x 2 ), for a pair (x 1 , x 2 ) ∈ C ⊕ . x 1 and x 2 are transmitted simultaneously by user 1 and 2 respectively and the router receives the sum x 1 + x 2 corrupted with some Gaussian noise. Now letP C⊕ , represent a suboptimum decoder which maps the received point to the nearest sum x 1 + x 2 , where (x 1 , x 2 ) ∈ C ∆ ⊕ . Hence we can bound the average probability of error as follows Here in (a) the first term follows since M ⊕ is the cardinality of C ⊕ and the probability of error using our suboptimal decoder is 1, when (x 1 , x 2 ) ∈ C ⊕ . In (b) π refers to the projection function defined before and P π(C ∆ ⊕ ) , represents the minimum angle decoder described before. The inequality can be shown to be true by referring to the discussions on [11, Lemma 2]. Next since we are interested in lattice points projected on to the inner sphere of radius n(2P − δ), we can define a decoding algorithm that looks at the angle between the lattice points, the minimum angle decoder. We next establish some more definitions. Let B θ (y) denote a n-dimensional cone centered at the origin and having the axis passing through y. Let θ be the half-angle of the cone and y ∈ R n be non-zero. We next define a sub-optimum decoding function given as follows, or this can also be expressed as represents the region of the cone B θ (x 1 +x 2 ), that does not intersect with any other cone corresponding to the other lattice codeword points x , located in the thin spherical shell. During decoding, when we receive a vector that falls in the region A θ (x 1 , x 2 ), we decode to the sum codeword (x 1 + x 2 ). It may not be possible to decode to the individual codewords x 1 and x 2 , as different pairs of codewords may yield the same sum. However it must be noted, that in the forward phase, we are interested in decoding only to the sum of the transmitted codewords. Let P θ denote the probability of error using the sub-optimum decoder. Then, we have In the above (a) follows because we use the union bound. In (b), the first term follows, because due to symmetry, the probability is not dependent on the particular x 1 + x 2 . The second term follows as we define p θ (x, x ) as p θ (x, x ) = Pr(π(x) + Z n ∈ B θ (x )). In (c), we replace x , by g + x 1 + x 2 and the characteristic function , corresponds to lattice points on the thin shell at radius √ n2P . Hence the average probability of error can be bounded as The rest of the proof deals with bounding each of the three terms in the above equation by an arbitrarily small quantity, to make the probability of error tend to zero as n → ∞. Below we briefly explain the requirements. • The first term can be made very small by choosing the angle θ appropriately. In effect, we need the noise Z n to be contained inside the cone B θ (t 0 ) with high probability as the dimension n becomes large. • For the second term we need the number of codeword pairs whose sum of codewords lies outside the thin spherical shell must be shown to be much lesser than the total number of codeword pairs. 
In other words we need to show that the sum of lattice points are concentrated in the thin spherical shell around the radius √ n2P . This is shown in Lemma 12. • The third term has a summation which is difficult to evaluate and hence we bound it by an integral and evaluate the resulting integral. • Finally, we require that the number of codewords in each of the inner sphere must be sufficiently large to achieve rates close to 1 2 log( 1 2 + SN R). The Blichfeldt's principle(see Theorem 9) can be applied to show concentration of codeword pairs. Lemma 13 in Appendix C, is an application of the Blichfeldt's principle that guarantees that for any given lattice, we can find translations that satisfy Also it makes sure that we can find enough codewords in the hyper spheres of radius √ nP , such that we can achieve a rate of 1 2 log( 1 2 + SN R). The Minkowski-Hlawka theorem (see Theorem 11 in Appendix A) is used to establish the existence of at least one lattice such that the summation of the third term can be bounded by an integral. This theorem along with Lemma 13 in Appendix C, are used together in Lemma 8 to obtain bounds on both the second and third term. Hence we can effectively rewrite (19) by using these bounds to get, We can bound the integral, as shown in [18, p. 623-624] to get, Now we next need to choose the appropriate values for θ and d n to make the probabilities go to 0. For the second term, consider sin ∠(t 0 + Z, t 0 ) which is given by Hence we choose sin θ = σ 2 2P −δ+σ 2 . For the third term a good choice of d n is d n = V n ( n(2P + 2δ)) sin n θ = π n/2 (n(2P + 2δ)) n/2 (sin n θ) Γ(n/2 + 1) This choice helps us to make the third term tend to 0 for large n. The third term then can be rewritten as given below. We use the results in [11, p. 277], to bound the Gamma functions to get, 2 ) This decays to 0 exponentially as n → ∞. Now, the achievable rate can be obtained from the number of lattice points in the sphere of radius √ nP . This value M (Λ n , s * 1 ), M (Λ n , s * 2 ), from the lemma, can be seen to be greater than V 8dn . Hence the rate R is given by, Choosing δ arbitrarily small, a rate of 1 2 log 1 2 + P σ 2 can be achieved. D. Relationship with ML decoder There are some conditions under which the the minimal angle decoder will perform like the ML decoder. (1) It can be easily seen that, for Gaussian noise, the ML decoder is equivalent to minimum distance decoder. (2) If all the codewords are concentrated (with high probability) in the thin shell then, we do not lose much by neglecting the codewords outside the thin shell. (3) Suppose the width of the thin shell is very small and almost all the codewords have approximately the same distance from the origin. Then, calculating minimum distance from received vector to the codewords is equivalent to calculating the minimum angle. (4) Suppose almost all the lattice points on the thin shell are codewords. Then, decoding to any lattice point in the thin shell does not sacrifice performance. We are dealing with Gaussian noise so the first condition is easily satisfied. In the course of our proof, we will observe by applying the Blichfeldt's principle that there exists a concentration of codewords at the thin shell. Hence condition (2) also holds. Moreover, we will also let the width of the thin shell become arbitrarily small, hence what we are doing is very close to ML decoding. The 4th condition appears not hold to at low SNR, however. In this case, all lattice points in the thin shell may not be codewords. 
Hence, we think this may lead to the sub-optimality at low SNR. The theorem shows that, for SNR < 1/2, we get zero rate. We know that, for random Gaussian codebooks, joint decoding of both codewords is possible though. Therefore, we think that the ML decoder may give a better performance at low SNR. VIII. JOINT DECODING BASED SCHEME While the aforementioned scheme is nearly optimal at high SNR, the performance of this scheme at low SNR is very poor. In fact, for P/σ 2 < 1/2, the scheme does not even provide a non-zero rate. In this regime, we can use any coding scheme which is optimal for the multiple access channel and perform joint decoding at the relay such that u A and u B can be decoded. Then, the relay can encode u A ⊕ u B and transmit to the nodes. Any coding scheme that is optimal for the MAC channel can be used in the MAC phase. A simple scheme is time sharing (although it is not the only one) where nodes A and B transmit with powers 2P for a duration of n/2 channel uses each but they do not interfere with one another. In this case, a rate of can be obtained. It can be seen that this is optimal at asymptotically low SNR, since log(1+snr) ≈ snr for snr → 0. The performance of these schemes is shown in Fig. 4 where the upper bound and the achievable rates with these proposed schemes are shown. The achievable rate with the analog network coding scheme is also shown. It can be seen that our schemes outperform analog network coding. However, it must be noted here that the scheme proposed here requires perfect synchronization of the phases from the transmissions, whereas the analog network coding scheme does not. Clearly, we can time share between the lattice based scheme and the joint decoding based scheme in order to obtain rates of the form R ex = βR ex,JD + (1 − β)R ex,Lattice . It can be shown that between the SNRs of -0.659 dB and 3.46 dB, time sharing between the two schemes results in better rates than the individual schemes. In both the lattice based scheme and the joint decoding based scheme, if the restriction to use n channel uses during the MAC phase and n during the broadcast phase is removed, i.e., only the total number of uses is constrained to be 2n, better schemes can be easily designed. Similarly, different power sharing may also lead to better schemes. A. Description We can extend the above results to multiple hops. We can again show that rate of 1 2 log 1 2 + P σ 2 is achievable using structured coding even in the multiple hop scenario. It should be noted that the advantage of this scheme over the amplify and forward scheme [3] becomes more pronounced in the multi-hop case, since at each stage for the amplify and forward scheme, the channel noise is amplified and hence the amplify and forward scheme will suffer a huge rate loss as the number of hops increase. The problem model is shown in Fig. 5. The relay nodes and the nodes A and B can transmit only to the two nearest nodes. During a single transmission slot (n uses of the channel), a node can either listen or transmit. That is, it can not do both simultaneously. We explain our structured coding scheme using a simple example of a 3-relay network. The different transmissions are shown in the table given below. Here node A and node B have data that need to be exchanged between each other. Each node has a stream of packets. Node A has packets named u A,1 , u A,2 , . . . and node B has packets named u B,1 , u B,2 , . . .. In the first transmission slot the nodes A and B transmit. 
Nodes A, B transmit vectors x A,1 and x B,1 , respectively using our proposed lattice coding scheme. At the beginning of transmission, the node R 2 has no data to transmit in the first transmission slot, and hence it remains silent. The node R 1 and R 3 decode to x 1,1 mod Λ and x 2,1 mod Λ, respectively. During the second transmission slot the nodes R 1 and R 3 transmit, while the other nodes remain silent. So, in each stage the nodes transmit and the listening nodes decode to a lattice point in the fine lattice in the Voronoi region of the coarse lattice. In every second transmission slot a new packet is transmitted to the relay nodes by the nodes A and B as can be seen from Table I. During slots 2, 4, 6 nodes A, B transmits new packets into the relay channel. From this example, we can see that at the 4th slot the node A and B decode x B,1 and x 1,2 respectively. This is because the node A receives (x 1,1 + x 1,2 + x 2,1 ) mod Λ during the 4th transmission and, hence, since x 1,1 , x 1,2 are already known at the node A, the node A can decode to x 2,1 using modulo operation. The same argument holds for node B. From every two transmissions from this stage a new packet can be decoded at each node. This shows that for sufficiently large number packets we can achieve the rate of 1 2 log( 1 2 + P σ 2 ). A similar encoding scheme can be used for L = 2 nodes also, in the first slot, node A and R 2 transmit, while the others listen. In the next slot R 1 and node B transmit while the others listen and decode. Again the same rate of Proof: We can easily prove that the theorem holds for a general case L relay nodes in between. In our coding scheme in every two slots a new packet is sent out from the nodes A and B. After an initial 2L transmission slot delay, in every two slots the relay nodes receives a new packet from the other nodes. Here, we mean that every two slots the relay node decodes to a lattice point which is a linear function of a new packet. Hence, at the decoding stage at the nodes B and A, we can decode after every two slots since only one variable is unknown, since only one new packet (or function of new packet) moves from one node to the other. Hence, we can still achieve the Decodes x2,2 Transmits 4(x1,1 + x2,1) + 2(x1,2 + x2,2) + x1,3 + x2,3 Transmits Decodes x1,2 rate of 1 2 log( 1 2 + P σ 2 ). Moreover, the functions in each stage are bounded for a finite L and, hence, we can always perform the decoding at the receiver nodes. X. CONCLUSION We considered joint physical layer and network layer coding for the bi-directional relay problem where two nodes wish to exchange information through AWGN channels. Under the restrictive model of the MAC and broadcast phase using n channel uses separately, we showed upper bounds on the exchange capacity and constructive schemes based on lattices that is nearly optimal at high SNR. At low SNR joint decoding based schemes (optimal coding schemes for the MAC channel) are nearly optimal. These schemes outperform the recently proposed analog network coding. Interestingly, our result shows that structured codes such as lattice codes outperform random codes for such networking problems. We also showed that minimal angle decoding also leads to similar results. We also showed extensions of this scheme to a network with many relay nodes, where the advantages of the proposed scheme over simple amplify and forward will be higher than in single relay case. 
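To accompany the rate comparison summarized in the conclusion (and plotted in Fig. 4), the sketch below evaluates the exchange-rate expressions discussed in the text: the upper bound, the lattice-scheme rate, the joint-decoding rate, and the analog network coding rate as we read the expression quoted in Section III. It is an illustrative reconstruction, not the paper's figure code.

```python
# Illustrative evaluation (not the paper's Fig. 4) of the exchange-rate
# expressions discussed in the text, in bits per transmitter per channel use
# of each phase. The analog network coding (ANC) expression follows our
# reading of the rate quoted in Section III.
import math

def exchange_rates(snr):
    upper   = 0.5 * math.log2(1 + snr)                        # cut-set bound
    lattice = max(0.0, 0.5 * math.log2(0.5 + snr))            # nested-lattice scheme
    joint   = 0.25 * math.log2(1 + 2 * snr)                   # joint decoding (time-shared MAC phase)
    anc     = 0.5 * math.log2(1 + snr * snr / (3 * snr + 1))  # amplify-and-forward (ANC)
    return upper, lattice, joint, anc

for snr_db in (-5, 0, 5, 10, 20):
    snr = 10 ** (snr_db / 10)
    u, l, j, a = exchange_rates(snr)
    print(f"{snr_db:>3} dB  upper={u:.3f}  lattice={l:.3f}  joint={j:.3f}  anc={a:.3f}")
```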
APPENDIX A BLICHFELDT'S PRINCIPLE AND MINKOWSKI-HLAWKA THEOREM Theorem 9 (Blichfeldt's Principle [16]): Let f be a Riemann integrable function with bounded support. If Λ n is a lattice with fundamental region P n then Let us define the following function V ⊕ as follows, Here V ⊕ = (V n ( √ nP )) 2 , represents the square of the volume of an n-dimensional sphere of radius √ nP . dV represents the n-dimensional volume element in rectangular co-ordinates. We next establish the following corollary Corollary 10: Proof: Let us define a function For a fixed u, f (u, v) can be seen as a function with bounded support and also can be seen to be integrable. Hence we can apply the Blichfeldt's principle to get Now h(u) can again be seen as a Riemann integrable function with bounded support, and hence the Blichfeldt's principle could be applied again to get the following, Above (a) follows since we have a finite number of non-zero terms, and hence the integral and the summation can be interchanged. Also in (b), M ⊕ (Λ n , s 1 , s 2 ) is the number of pairs of lattice points for the translations s 1 and s 2 . In short the Corollary 8 relates the square of the volume of an n-dimensional sphere and number of pairs of lattice points for different translations. Theorem 11 (Minkowski-Hlawka): Let f be a nonnegative Riemann integrable function with bounded support. Then for every d ∈ R + and n ≥ 2, there exists a lattice Λ n with determinant det(Λ n ) = d such that The Minkowski-Hlawka theorem gives us a way to connect a series of discrete sums with a continuous integral. This will find applications in our probability of error calculations. APPENDIX B HYPER VOLUME CONCENTRATION LEMMA Lemma 12: Let V ⊕ be defined as then we can choose n sufficiently large such that, V ⊕ V⊕ < δ, for every given positive δ. Proof: First we perform a change of variables in the integral, by substituting x = u + v. This gives, Let us consider first the inner integral, for a fixed x,given by, This geometrically represents the hyper volume of intersection of two hyper-spheres, whose centers are at a distance x , from each other. This is pictorially shown in Fig. 6. The calculation of hyper volume of intersection, reduces to obtaining the hyper volume of the conical section and a cone. This is shown pictorially in the second diagram in Fig. 6. Here opq represents the hyper cone and oprq represents the conical section. Here we denote by V cs (|x|) to represent the volume of the conical section and V co (|x|) as the volume of the cone. The integral can hence be be evaluated as To simplify calculations we can bound the integral as, Hence, But V ⊕ is given by V 2 , where V is the hyper volume of an n-dimensional hyper sphere of radius √ nP denoted by V n ( √ nP ). The value of V cs ( x ) depends only on the distance of x from the origin. To make evaluation of the integral easier, we change the volume element to circular co-ordinates and integrate. Thus the integral now becomes, where l ⊕ is defined as the union of closed intervals,(will be given later). Now substituting V cs (r) from above gives, Now let us choose cos θ = r 2 √ nP . Then change of variables gives Substituting for V n ( √ nP ), we get, 2 n cos n−1 θ sin θ θ ψ=0 sin n−2 ψdψdθ 2 n cos n−1 θ sin θ θ ψ=0 sin n−2 ψdψdθ Now we will use the bound by Shannon for the first term. We can apply the bound for θ < π/2. Hence we split the integral into two terms to apply the bound. For the second term we will bound sin ψ by 1. 
This gives, 2 n cos n−1 θ sin θ sin n−1 θ (n − 1) cos θ dθ Simplifying things further we get, 2 n−1 sin n−1 θ cos n−1 θ tan θdθ 2 n−1 cos n−1 θ sin θ π 2 dθ (33) Next we bound tan θ in the first term by tan(π/3 + η) and sin θ in the second term by 1. We also bound the factorial using the bound given in Urbanke. This yields, (sin 2θ) n−1 dθ Again we can see that since θ does not take the value π/4 we can bound the first term appropriately. In the second term the maximum value of cos θ can be used to bounded it appropriately. This is given below as follows. From the above we can easily see, that both terms tend to 0 as n → ∞. From this the lemma follows. APPENDIX C APPLICATION OF BLICHFELDT'S PRINCIPLE TO SHOW EXISTENCE OF GOOD TRANSLATIONS Lemma 13: Let Λ n be a lattice having a fundamental region P n and let det(Λ n ) be it's fundamental volume and define P * n = (s 1 , s 2 ) ∈ P n × P n : M 1 (Λ n , s 1 ) ≥ V n ( √ nP ) 8det(Λ n ) ; where δ n > 0 and can be made arbitrarily small for sufficiently large n Proof: Let us define the following sets, Therefore P * n = (F n × G n ) O n . Define the complements here. Hence Since, O C n ⊂ P n × P n , we can bound the second integral again as shown below. The summation h1∈Λn χ T (h 1 + s 1 ) can be seen to count the number of lattice points in the sphere T n , for the translation s 1 . This is by definition equivalent to M (Λ n , s 1 ). Similarly we can replace the other summation by M (Λ n , s 2 ), to get Since (F n × G n ) C ⊂ (F n × P n ) ∪ (P n × G n ) we can bound the integral to get We use the condition s 1 ∈ F n and s 2 ∈ G n , to get Since F C n , G C n ⊂ P n and Pn dV (s 1 ), Pn dV (s 2 ) = det(Λ n ), we obtain, Finally using V ⊕ = (V n ( √ nP )) 2 , we obtain V ⊕ ≤ 2 P * n M ⊕ (Λ n , s 1 , s 2 )dV (s 1 )dV (s 2 ) From the above lemma it can be seen that the measure of P * n must be non-zero and hence, there must be at least some translations (s 1 , s 2 ) of the lattice, where the requirements hold.
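For reference, the two classical results invoked in Appendix A are commonly stated as follows; these are the standard forms from the geometry of numbers, and the precise hypotheses used in the paper may differ slightly:

\[
\int_{\mathcal{P}_n} \sum_{\lambda \in \Lambda_n} f(\lambda + s)\, dV(s) \;=\; \int_{\mathbb{R}^n} f(x)\, dx
\qquad \text{(Blichfeldt's principle)}
\]
\[
\text{for every } d > 0 \text{ there exists } \Lambda_n \text{ with } \det(\Lambda_n) = d \text{ such that }\;
\sum_{\lambda \in \Lambda_n \setminus \{0\}} f(\lambda) \;\le\; \frac{1}{d} \int_{\mathbb{R}^n} f(x)\, dx
\qquad \text{(Minkowski--Hlawka)}
\]

The concentration effect behind Lemma 12, namely that the intersection of two spheres of radius $\sqrt{nP}$ whose centres sit a fixed fraction of the radius apart occupies a vanishing fraction of either sphere as $n$ grows, can also be checked numerically. The Monte Carlo sketch below is only an illustration: the offset factor, the sample size and the uniform-in-ball sampler are our choices, not quantities taken from the proof.

import numpy as np

def intersection_fraction(n, offset_factor=1.0, samples=50_000, P=1.0, seed=0):
    # Fraction of a ball of radius sqrt(n*P) that also lies inside a second ball of
    # the same radius whose centre is offset by offset_factor * radius.
    rng = np.random.default_rng(seed)
    radius = np.sqrt(n * P)
    directions = rng.normal(size=(samples, n))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.random(samples) ** (1.0 / n)   # uniform sampling inside the ball
    points = directions * radii[:, None]
    centre2 = np.zeros(n)
    centre2[0] = offset_factor * radius
    inside_second = np.linalg.norm(points - centre2, axis=1) <= radius
    return inside_second.mean()

for n in (2, 8, 32, 128):
    print(n, intersection_fraction(n))

With these settings the estimated fraction is about 0.39 for n = 2 and decays quickly as n grows, in line with the statement of the lemma.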
2008-05-01T14:54:13.000Z
2008-04-30T00:00:00.000
{ "year": 2010, "sha1": "074aa02f0534c742768af027d390bb60f0121627", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0805.0012", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a70ecb8637d2c56e31e7c4c765991f6e071c140e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
225615822
pes2o/s2orc
v3-fos-license
Seasonal incidence of insect pests of okra The field experiments were conducted during two different seasons i.e. kharif 2018 and rabi 2018 to study the seasonal incidence of insect pests of okra viz., leafhopper, aphid, whitefly, mite, dusky cotton bug Helicoverpa armigera and Earias vittella. The results on seasonal incidence of insect pests of okra revealed that the activity of leafhopper, aphid, whitefly and Helicoverpa armigera were more in rabi season than kharif season. Similarly, the activity of Earias vittella was more in kharif season than rabi season. Introduction Vegetables constitute an important item of our food, supplying vitamins, carbohydrates and minerals needed for a balanced diet. Their value is important especially in underdeveloped and developing countries like India, where malnutrition abounds (Randhawa, 1974) [12] . Among the vegetable crops grown in India, okra (Abelmoschus esculentus L.), also known as lady's finger or bhendi belongs to the family Malvaceae is an important crop grown throughout the year. It has good nutritional value, particularly the high content of vitamin C (30 mg/100 g), calcium (90 mg/100 g), iron (1.5 mg/100 g) and other minerals like magnesium and potassium, vitamin A and B, fats and carbohydrates (Aykroud, 1963) [4] . On the other hand, the demand for vegetable oils is rapidly increasing due to the growing human population and the expanding oil industry with health promoting oil components, the exploration of some under-utilized and newer resources of vegetable oils is of much concern (Schalau, 2002) [13] . Though okra finds its origin in South-Africa, India stands top in area and production. It is cultivated in an area of 5.11 lakh hectares with a production of 62.19 lakh tons in India and in Karnataka it is cultivated in an area of 11,140 hectares with a production of 90,270 tons (Anon., 2018) [3] . The major okra growing states includes Assam, Uttar Pradesh, Bihar, Orissa, West Bengal, Maharashtra, Andhra Pradesh and Karnataka (Anon., 2018) [3] . One of the important limiting factors in the cultivation of okra is insect pests. Many of the pests occurring on cotton are found to ravage okra crop. As high as 72 species of insects have been recorded on okra (Srinivas Rao and Rajendran, 2003) [17] Of which, the sucking pests comprising of Aphid, Aphis gossypii Glover, leafhopper, Amrasca biguttula biguttula Ishida, whitefly, Bemisia tabaci Gennadius and mite, Tetranychus cinnabarinus Boisduval caused significant damage during the early stages of the crop, while at later stage fruit borers like Earias spp. and Helicoverpa armigera (Hb.) caused considerable loss to the crop to the tune of 91.6 per cent. In general the overall damage due to insect pest amounts to 48.97 per cent loss in pod yield (Kanwar and Ameta, 2007) [7] . Material and Methods Investigation on seasonal incidence of insect pests of okra was carried out at the Main Agricultural Research Station (MARS) and Department of Agricultural Entomology, College of Agriculture, University of Agricultural Sciences (UAS), Raichur, during 2018-2019. The location of experimental site is situated at North Eastern dry zone (zone-II) of Karnataka between 16º 15´ latitude, 77 º 20´ longitudes and at 398.37 m above mean sea level. The okra variety Ankur-46 was sown during August 4 (kharif) and October 23 (rabi). 
The crop was raised with a spacing of 60 × 45 cm in a plot size of 5 guntas under irrigated conditions with all the agronomic practices as per the recommendation except plant protection measures (Anon., 2014) [2] . After the germination of the crop, observations were recorded at weekly intervals to determine the seasonal incidence of important insect pest of okra crop till the harvest of crop during Kharif and Rabi of 2018-19. To assess the incidence of sucking insect pests, viz., leafhopper, aphid, whitefly, mites and dusky cotton bug were recorded on top three leaves at weekly interval on randomly selected fifty plants. The per cent damage of Helicoverpa armigera and Earias vittella were estimated by counting both damaged and total number of fruits. The observations were recorded at weekly intervals starting from 30 days after sowing up to maturity of the crop. The per cent fruit damage was calculated as below; Results and Discussion Studies on the seasonal incidence of insect pests of okra was carried out during Kharif and rabi, 2018-19 and results of the observations recorded at weekly intervals on insect pests of okra are presented below. Aphid, A. gossypii (Glover) Both nymphs and adults aphids were found sucking the sap from ventral surface of the leaves and the affected leaves curled downwards and became inverted cup shaped. During kharif 2018, the activity of the aphid was noticed throughout the cropping season and varied between zero to 29.20 per top three leaves with mean population of 16.33 per top three leaves. The zero incidence was recorded in first and second week of August. However, the incidence started from third week of August (8.36/top three leaves) and there was a gradual increase from August forth week to October second week with a maximum population of 29.20 aphids per top three leaves. Later, population gradually decreased from third week of October (26.10/top three leaves) and there was no incidence from last week of October (Table 1). The present findings are in line with the findings of Slosser et al. (1998) [15] who reported that population of A. gossypii increased during August and October. But Damasia et al. (2013) [6] and Singh et al. (2013) [14] reported that aphid population gradually increased and reached its peak during the first fortnight of October. During rabi 2018, the population of aphid varied from zero to 28.55 per top three leaves, with mean population of 16.90 per top three leaves. The zero incidence was recorded during first and second week of November. Later, the incidence started from third week of November (12.50/top three leaves) and there was a gradual increase from fourth week of November to first week of January with a maximum population of 28.55 per /top three leaves. However, population gradually decreased from second week of January (23.40/top three leaves) and there was no incidence from last week of January onwards ( Table 2). The present findings are in line with Anitha and Nandihalli (2008) [1] who reported, on rabi crop, sown during last week of November, the incidence of aphid started from 49 th standard week and reached its peak during first week of January with 24.91 aphids per top three leaves. Leafhopper, A. biguttula biguttula (Ishida) Adults were green with two black spots on either side of forewings at the posterior region of the body. Both nymphs and adults were found sucking the sap from the leaves and caused serious hopper burn and drying of leaves, resulting in stunted growth. 
The affected leaves became yellow, crinkled, curled and showed marginal browning. During kharif 2018, the activity of the leafhoppers was noticed throughout the cropping season and varied between zero to 20.20 per top three leaves with mean population of 8.17 per top three leaves. There was no incidence recorded during first and second week of August. However, the incidence started from third week of August (5.10/top three leaves) and there was a gradual increase from August fourth week to September fourth week (39 th SMW) with a maximum population of 20.20 leafhoppers per top three leaves. Further, the population of leafhoppers gradually decreased from second week of October (10.90/3 leaves) and population declined from last week of October (Table 1). Present findings are in line with the findings of Srinivasa (1993) [18] who reported that kharif, September and October months are very much congenial for leafhopper population buildup. Similarly, Damasia et al. (2013) [6] also reported peak population of leafhoppers in fourth week of September (19.43/top three leaves). During rabi 2018, the population of leafhopper varied from zero to 15.47 per top three leaves, with mean population of 8.90 per top three leaves. There was no incidence during first week of November. However, the incidence started from second week of November (13.43/top three leaves) and there was a gradual increase from third week of November to last week of December (52 nd SMW) with a maximum population of 15.47 per top three leaves. Later, population gradually decreased from third week of January (9.55/top three leaves) and there was no incidence from fourth week of January onwards ( Table 2). The current findings are in line with the findings of Anitha and Nandihalli (2008) [1] who reported leafhopper on rabi crop (November sown) during 49 th SW and its peak during first week of January. The activity of the pest might be related to the crop growth stage irrespective of the sowing time. Whitefly, Bemisia tabaci. Both nymphs and adults of whitefly, B. tabaci were found feeding on ventral surface of leaves. During kharif 2018, the activity of the whiteflies was noticed throughout the cropping season and varied between zero to 13.50 per top three leaves with mean population of 5.51 per top three leaves. There was no incidence recorded in first, second and third week of August. Later, the incidence started from fourth week of August (1.25/top three leaves) and there was a gradual increase from September first week to October first week with a maximum population of 13.50 whiteflies per top three leaves. Later, population gradually decreased from third week of October (11.90/top three leaves) and there was no incidence from last week of October (Table 1). These results are in line with Singh et al. (2013) [14] who reported peak population of whitefly (12.40/leaf) during 39 th standard week (fifth week of September). Similarly, Damasia et al. (2013) [6] also reported peak population of whitefly during third and fourth week of September. During rabi 2018, the population of whiteflies varied from zero to 10.90 per top three leaves, with mean population of 6.02 per top three leaves. The zero incidence was recorded in first and second week of November. However, the incidence started from third week of November (5.31/top three leaves) and there was a gradual increase from fourth week of November to second week of January with a maximum population of 10.90 per top three leaves. 
Later, population gradually decreased from third week of January (7.60/top three leaves) and there was no incidence from fourth week of January onwards ( Table 2). The results of the present investigations are in close agreement with the observations made by Mani and Singh (2012) [10] who reported peak incidence of whiteflies during 1 st standard week (first week of January). Fruit borer, Helicoverpa armigera (Hubner) The larvae of H. armigera were found feeding on the fruits. During kharif 2018, the activity of the H. armigera larvae was noticed throughout the cropping season and varied between zero to 3.45 larvae per plant with mean population of 1.25 larvae per plant. There was no incidence in August month. However, the incidence started from first week of September (1.57 larvae/ plant) and there was a gradual increase from September second week to October first week with a maximum population of 3.45 larvae per plant. Later, population gradually decreased from second week of October (2.77 larvae/ plant) and there was no incidence from fourth week after October (Table 1). The present studies are supported by the observations recorded by Nath et al. (2011) [11] from Uttar Pradesh. They reported that infestation of larvae of H. armigera appeared on the crop between third and fourth week of August reaching its peak densities on second and third week of September. Similarly, Kumaranag (2015) [9] reported that the larvae were noticed between third and fourth week of September, with the highest larval densities observed on 40 th SMW. During rabi 2018, the population of H. armigera larvae varied from zero to 4.53 larvae per plant, with mean population of 1.31 larvae per plant. The zero incidence was recorded in November month. However, the incidence started from first week of December (0.84 larvae/ plant) and there was a gradual increase from second week of December to fourth week of January with a maximum population of 4.53 larvae per plant and there was no incidence from fourth week of January onwards ( Table 2). The present findings are in line with Kumaranag (2015) [9] who reported that the larvae of H. armigera noticed between third and fourth week of September, with the highest larval densities observed on 40 th SMW.
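For completeness, the per cent fruit damage referred to in the Materials and Methods is conventionally computed from the counts of damaged and total fruits as:

\[
\text{Per cent fruit damage} \;=\; \frac{\text{Number of damaged fruits}}{\text{Total number of fruits observed}} \times 100
\]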
2020-08-06T09:07:57.737Z
2020-07-01T00:00:00.000
{ "year": 2020, "sha1": "9e15572445e63dde6bdab9afa888d558d8277c07", "oa_license": null, "oa_url": "https://www.chemijournal.com/archives/2020/vol8issue4/PartD/S-8-2-70-415.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f9788bd9455506d03800ad2346fbe7ed5885ebe5", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
250677510
pes2o/s2orc
v3-fos-license
Present status and future plan of research in high magnetic fields at KYOKUGEN in Osaka University After brief introduction of history and facility of the High Magnetic Field Laboratory at KYOKUGEN in Osaka University, we describe our high field and multi-frequency electron spin resonance (ESR) apparatus by utilizing pulsed and superconducting magnets for the fields up to about 60 T. For the ESR measurements in pulsed magnetic fields, several Gunn and backward oscillators, and a far infrared laser are used as millimeter and submillimeter wave sources. In steady magnetic fields up to 16 T, we have utilized a vector network analyzer with extensions which covers the frequencies between 8 and 700 GHz almost continuously. The latter ESR apparatus is used not only dense magnetic materials but also weak ones, such as metalloproteins. Therefore, we have developed high sensitive multi-frequency ESR apparatus. To extend ESR studies further, we are now constructing ESR apparatus for much higher fields up to 70 T and the wide frequency range up to 7 THz. Magnetization and transport measurements are also performed in high magnetic fields up to 70 T and 60 T, respectively and magnetization measurements under high pressure up to 1 GPa. We plan to develop a 50 T wide bore pulsed magnet with the diameter of about 50 mm for the use of high sensitive ESR and high pressure measurements above 1 GPa. Introduction The High Magnetic Field Laboratory at Osaka University headed by Date [1,2] was founded in 1975 as the high magnetic field facility of the Faculty of Science [3]. The requirement of high magnetic field generation resulted from antiferromagnetic resonance (AFMR) experiment [2]. Non-destructive high field pulsed magnets which consist of maraging steel polyhelix coils were developed and the generation of magnetic field up to 107 T was reported [2]. In 1987, the laboratory was reorganized as a section of the Research Center for Extreme Materials (abbreviated to KYOKUGEN) of Osaka University for the development of cooperative studies with other sections [4]. Then, the center was reorganized again in 1996 including the low temperature section and was named the Research Center for Materials Science at Extreme Conditions (KYOKUGEN). The High Magnetic Field Laboratory at new KYOKUGEN was headed by the last author, Kindo and he developed a newly designed non-destructive pulsed magnet [5]. The magnet was made by winding with Cu-Ag wire and reinforced by a maraging steel cylinder and generated 80.3 T in a 10 mm bore with 7 msec pulse duration. This pulse duration is much longer than that of the previous magnet (about 0.4 msec) and is very useful for measurements of metal materials. Kindo and his coworkers developed the apparatus for magnetization and magnetoresistance measurements down to 80 mK in magnetic fields up to 60 T and for magnetization measurements up to about 50 T under high pressure up to 1.2 GPa [5]. From April of this year (2006), the center started as a new organization and is named the Center for Quantum Science and Technology under Extreme Conditions (KYOKUGEN). In the High Magnetic Field Laboratory at KYOKUGEN, we have two high magnetic field facilities. The first facility is a new one built in 1988 and the second facility is an old one built in 1975. The first and second facilities are equipped with 1.5 MJ (20 kV, 7.5 mF) and 1 MJ (13.3 kV, 11.6 mF) capacitor bank systems, respectively. The former bank system uses pressurized air gap switches and the latter one uses thyristor switches. 
Figures 1(a) and (b) show the pulse shapes of the magnetic field generated with pulsed magnets used at the first and second high field facilities, respectively. The pulse shape at the first facility has a short duration of about 7 msec due to the small inductance of the magnet 0.7 mH, and that at the second one has a long tail with the pulse duration of about 40 msec due to the inductance of about 7 mH. Therefore, we usually carry out magnetization and ESR measurements on high resistive samples, e.g. insulators and semiconductors at the first facility, and magnetization and transport measurements on conductive samples at the second facility. In the first facility, we also use a magnet with very short duration of about 0.4 msec made from maraging steel. High field and multi-frequency ESR As mentioned in the introduction, the High Magnetic Field Laboratory at Osaka University originated from the requirement of high magnetic field generation in AFMR measurements. In the 1960s, electron spin resonance (ESR) measurements were done by utilizing millimeter wave sources like Klystron and pulsed magnets up to about 20 T. Some remarkable findings such as spin cluster resonance in CoCl 2 ·2H 2 O [6] were reported during this period. In the 70s and 80s, high frequency ESR measurements using HCN and H 2 O FIR lasers were performed with short pulsed magnets made from maraging steel, but ESR experiments for the millimeter wave region were not performed because of the destruction of the Cu wave guide or resonator due to a very short pulse duration. In the late 80s, Cu-Cr-Zr-wire wound long pulsed magnets were developed and millimeter ESR measurements restarted in magnetic fields up to about 40 T. ESR spectra from the transitions within the excited triplet state in the S=1 one-dimensional Heisenberg antiferromagnet, the so-called Haldane magnet, were observed with this pulsed field ESR apparatus [7]. Kindo developed pulsed magnets made by winding Cu-Ag wire and reinforced by a maraging steel cylinder in the late 90s [5]. The second author, Kimura, started to build high field ESR apparatus consisting of this magnet, several Gunn oscillators, a FIR laser and an InSb detector from 1999 [8]. This apparatus mainly uses for magnetic materials which show some field induced phase transitions at high magnetic fields [8]. Since October in 2004, the first author, Hagiwara, has transferred from the Institute of Physical and Chemical Research (RIKEN) to the High Magnetic Field Laboratory at Osaka University as the successor to Kindo bringing his multi-frequency ESR apparatus developed at RIKEN. This multi-frequency ESR apparatus is composed of a 16 T superconducting magnet (Oxford Instruments, UK) and a vector network analyzer MVNA with some extensions (ABmm, France) which covers the frequencies between 8 and 700 GHz, almost continuously. He also brought several Gunn oscillators and backward oscillators (200 and 300 GHz). Two dark gray areas below 60 T and 2000 GHz in figure 2 indicates the frequency-field windows which are covered with our present ESR apparatus. Figure 2. Frequency-field range covered with high field and multi-frequency ESR apparatus at KYOKUGEN in Osaka University including the range with that under construction. Inset: Multilayer pulsed magnet made by winding Cu-Ag wire reinforced by a strong tensile textile, Zylon and a maraging steel cylinder. 
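As a rough guide to how the frequency and field axes of these windows pair up, the resonance field of a simple spin-only ($g \approx 2$) paramagnet follows from $h\nu = g\mu_B B$; the sketch below evaluates it at a few representative frequencies. Real resonance fields shift with the material's $g$-factor, anisotropy and zero-field splittings, so this is only an order-of-magnitude orientation, not a description of the apparatus.

# Resonance field for a spin-only paramagnet, h*f = g*muB*B (g = 2 assumed)
H_PLANCK = 6.62607015e-34   # J s
MU_B = 9.2740100783e-24     # J/T

def resonance_field_tesla(freq_hz, g=2.0):
    return H_PLANCK * freq_hz / (g * MU_B)

for f_ghz in (95, 700, 2000):
    print(f"{f_ghz} GHz -> {resonance_field_tesla(f_ghz * 1e9):.1f} T")

For g = 2 this gives roughly 3.4 T at 95 GHz and 25 T at 700 GHz, which is why fields of many tens of tesla are needed to reach the submillimetre frequencies discussed here.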
High sensitive multi-frequency ESR High sensitive multi-frequency ESR apparatus equipped with the superconducting magnet has been developed [9] to study metalloproteins and magnetic granular thin films by the third author, Yashiro and Hagiwara since 2001 when they worked at RIKEN. To cover wide frequency range (35-600 GHz), we have chosen two types of resonators, one of which is a Fabry-Perot resonator (FPR) for the use of higher frequencies (50-600 GHz) that has the highest finess value reported so far. The others are ordinary cylindrical resonators with TE 011 mode for 35, 50, 70, 95 and 130 GHz, because they have theoretically one order of magnitude higher sensitivity than that of the FPR. Moreover, for the latter resonators, we have achieved very stable matching under a large magnetic field sweep (e.g. 16 T) by changing the relative angle between the longitudinal axis of the wave guide and that of the cavity. By their high sensitivity and stability, we successfully observed multi-frequency ESR spectra of a metalloprotein with an integer spin system for the first time [9]. Magnetization and transport measurements Magnetization measurements have been done in magnetic fields up to about 70 T at the first facility and up to about 60 T at the second facility. Transport measurements such as magnetoresistance and Hall resistance are carried out in magnetic fields up to about 60 T at the second facility. The fourth author, Yoshii, and his collaborators have performed magnetization and magnetoresistance measurements on several kinds of rare-earth compounds, one of which is a rare-earth tetraboride RB 4 that possesses a unique topology characterized by orthogonal dimers equivalent to the Shastry-Sutherland lattice [10]. He has re-developed magnetization measurement apparatus under high pressure up to about 1 GPa by referring to the previous development at KYOKUGEN [5]. For precise transport measurements in steady low fields, he has also constructed similar apparatus equipped with a 12 T superconducting magnet to that used in pulsed magnetic fields. Future plan For high field multi-frequency ESR measurements, we are now developing a 70 T ESR apparatus by using a multi-layer Cu-Ag pulsed magnet as indicated in the inset of figure 2 which is used for magnetization measurements in magnetic fields up to about 70 T. To detect ESR signals at higher frequencies above 2000 GHz, we will utilize a semiconductor chip of Ge:Sb which covers the frequencies up to about 7000 GHz. The frequency-field window covered with this ESR apparatus under construction is shown by the light gray area in figure 2. As for high sensitive ESR apparatus, we will make TE 011 single mode resonators for 170 and 220 GHz and a highly stable FPR with a stable matching mechanism. In addition to these developments, construction of low temperature ESR equipments such as a 60 T pulsed field ESR apparatus down to about 0.5 K and a 16 T steady field ESR one down to about 0.1 K is now in progress. We plan to develop a wide bore pulsed magnet with the diameter of about 50 mm which is expected to generate the magnetic fields up to about 50 T. By utilizing this magnet, it will be possible to make high field ESR apparatus with high sensitivity by introducing the ESR resonators into this magnet. In addition, we want to expand the pressure region in high magnetic field measurements up to about 50 T to obtain a wide field-pressure-temperature phase diagram for magnetic materials. 
Therefore, we will put high pressure cells such as an indenter-type high pressure cell into the wide bore pulsed magnet.
2022-06-28T03:17:27.358Z
2006-01-01T00:00:00.000
{ "year": 2006, "sha1": "a6126efe99ba56a101cb5d6c0c93088721ebaf74", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/51/1/149/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a6126efe99ba56a101cb5d6c0c93088721ebaf74", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
12342227
pes2o/s2orc
v3-fos-license
Segregation of Seizure Traits in C57 Black Mouse Substrains Using the Repeated-Flurothyl Model Identifying the genetic basis of epilepsy in humans is difficult due to its complexity, thereby underlying the need for preclinical models with specific aspects of seizure susceptibility that are tractable to genetic analyses. In the repeated-flurothyl model, mice are given 8 flurothyl-induced seizures, once per day (the induction phase), followed by a 28-day rest period (incubation phase) and final flurothyl challenge. This paradigm allows for the tracking of multiple phenotypes including: initial generalized seizure threshold, decreases in generalized seizure threshold with repeated flurothyl exposures, and changes in the complexity of seizures over time. Given the responses we previously reported in C57BL/6J mice, we analyzed substrains of the C57BL lineage to determine if any of these phenotypes segregated in these substrains. We found that the generalized seizure thresholds of C57BL/10SNJ and C57BL/10J mice were similar to C57BL/6J mice, whereas C57BL/6NJ and C57BLKS/J mice showed lower generalized seizure thresholds. In addition, C57BL/6J mice had the largest decreases in generalized seizure thresholds over the induction phase, while the other substrains were less pronounced. Notably, we observed only clonic seizures during the induction phase in all substrains, but when rechallenged with flurothyl after a 28-day incubation phase, ∼80% of C57BL/6J and 25% of C57BL/10SNJ and C57BL/10J mice expressed more complex seizures with tonic manifestations with none of the C57BL/6NJ and C57BLKS/J mice having complex seizures with tonic manifestations. These data indicate that while closely related, the C57BL lineage has significant diversity in aspects of epilepsy that are genetically controlled. Such differences further highlight the importance of genetic background in assessing the effects of targeted deletions of genes in preclinical epilepsy models. Introduction While mapping seizure-related quantitative trait loci (QTL) and identifying genes responsible for modifying baseline seizure threshold has been successful in rodents [1][2][3][4][5][6][7][8][9], discovery of genes beyond initial seizure threshold (e.g., changes in seizure threshold over time, development of more complex seizures, and/or epileptogenesis) remains challenging. This is in part due to a limited number of preclinical models for studying such complex traits. However, identifying genes beyond baseline seizure threshold is critical, since they could be targeted for therapeutic intervention possibly leading to better treatments for epilepsy. To this end, we have employed a mouse model of epilepsy using repeated exposure to the chemoconvulsant flurothyl, which we refer to as the repeated-flurothyl model, and have utilized this approach to investigate the genetic and environmental factors that influence seizure progression and seizure complexity [10][11][12]. While there are similarities between the repeated-flurothyl model and electrical or chemical kindling [13], we believe that the repeated-flurothyl model has several advantages over these seizure paradigms. Unlike the electrical kindling model, there is no need for the implantation of electrodes, thereby making the repeatedflurothyl model higher throughput. Unlike traditional chemical kindling models, flurothyl is inhaled; therefore there is less experimental error due to injection variability issues. 
Importantly, flurothyl seizures can be induced repeatedly without toxicity or ill effects [12]. If the administration of flurothyl continues for a sufficient period of time, seizures always occur. Therefore, the latency to the onset of flurothyl seizures represents a direct measure of seizure susceptibility. The greatest advantage of the repeated-flurothyl model is that this paradigm results in a progression of seizure behaviors that begin as clonic seizures, but change over time to seizures with tonic manifestations. This change in seizure complexity, which involves the interaction of two independent seizure expression networks (the forebrain and brainstem seizure networks [12,[14][15][16][17][18][19][20][21]), is not observed in other kindling models. Consequently, this alteration in seizure phenotype allows for the dissection of genes and mechanisms that are responsible for the propagation of ictal discharge from the forebrain seizure circuitry mediating clonic seizure expression to the brainstem seizure circuitry that mediates tonic seizure expression. This aspect of the repeated-flurothyl model is unique in that it provides a framework for better understanding why humans with epilepsy can develop more complex seizures over time [22][23][24][25]. Lastly, previous work has identified critical subcortical structures that are involved in mediating this change in seizure phenotype, further allowing for the mechanistic dissection of molecular processes responsible for this reorganization in mice exposed to the repeated-flurothyl model [26]. The recent elucidation of the importance of subcortical structures in the expression of generalized seizures in the human epileptic population particularly supports the importance of understanding the molecular processes controlling this phenotype in this preclinical model [27,28]. Previously, we determined that C57BL/6J (6J) and DBA/2J (D2) mice have divergent seizure responses following exposure to the repeated-flurothyl model [11,29]. Whereas 6J mice had higher initial generalized seizure thresholds (GST) with flurothyl, D2 mice had lower initial GST. Interestingly, 6J mice also had decreased GST following eight repeated flurothyl-induced seizures with D2 mice having no decreases in GST across these eight seizure trials [11]. Lastly, following these eight flurothyl seizure trials and a 28day incubation period and final flurothyl rechallenge, 6J mice have a change in their seizure phenotype, which does not occur in D2 mice. This change in seizure phenotype occurred as a result of presumptive reorganizational (epileptogenic) changes in the brain, such that when 6J mice were retested with flurothyl, they developed more complex seizures (clonic seizures that uninterruptedly progressed into brainstem seizures) [11,29]. Thus, elucidating the mechanisms that cause these differences in seizure responsivity between 6J and D2 mice can lead to the discovery of genes that can modify these seizure traits. Such information is critical for understanding these mechanisms and for developing strategies to target these processes. Since the genetic diversity between 6J and D2 mice is comparatively high, in relation to the C57BL substrains, it is difficult to determine if these sub-phenotypes are due to a small number of QTLs affecting all of the traits, or if some or all of these traits can act independently. 
Previous work has shown seizure susceptibility differences to pilocarpine in two C57BL substrains, C57BL6 mice and C57BL6/N mice, and found that even within these substrains, the choice of animal vendor can affect seizure susceptibility (presumably through environmental factors or genetic drift) [30,31]. Thus, our goal in this work was to investigate seizure behaviors in more divergent substrains of the C57BL line to determine if they were similar or divergent in various aspects of seizure progression. Ethics Statement All testing was performed under approval of the Institutional Animal Care and Use Committees of the Albany Medical College in accordance with The National Institutes of Health's Guide for the Care and Use of Laboratory Animals. Experimental Design Mice were exposed to the repeated-flurothyl model as previously described [10][11][12]26,29,[32][33][34][35][36][37]. Briefly, mice were placed in a closed chamber and a 10% flurothyl solution (bis(2,2,2-trifluoroethyl) ether; Sigma-Aldrich) made in 95% ethanol was infused through a glass syringe on to a gauze pad suspended at the top of the chamber at a rate of 100 ml/min using a motorized syringe pump. One mouse at a time was tested in the flurothyl chamber using a new gauze pad for each trial. The latency to the first myoclonic jerk expressed before the onset of a generalized seizure was recorded. Myoclonic jerks were defined by brief, but severe, contractions of the neck and body musculature occurring while the mouse maintained postural control [10,12,29]. The latency from the start of the flurothyl infusion to the expression of a myoclonic jerk was used as a measurement of the myoclonic jerk threshold (MJT) [29]. A generalized seizure was defined as a loss of postural control [10,12]. When a mouse had a generalized seizure, the top of the chamber was removed exposing the mouse to room air. The latency from the start of the flurothyl infusion to the loss of postural control was used as a measurement of the generalized seizure threshold (GST) [10,12]. Mice received a single flurothyl-induced seizure each day for 8 consecutive days (induction phase). The induction phase was followed by a 28-day incubation phase in which the mice were simply placed in the animal facility. After the incubation phase, mice were rechallenged with flurothyl. The seizure behaviors were scored according to a number classification system [10,12]: Grade 1 -a loss of posture, clonus of hindlimbs and/or forelimbs, and facial clonus including chewing; Grade 2 -grade 1 and low intensity bouncing; Grade 3 -grade 2 and wild running and hopping; Grade 4 -grade 3 and hindlimb and/or forelimb treading; Grade 5 -grade 4 and bilateral tonic extension of the forelimbs; Grade 6 -grade 5 and bilateral extension of the hindlimbs; and Grade 7 -grade 6 followed by death. Importantly, seizure grades 1-2 are classified as forebrain seizures, since these seizures are clonic in nature and involve forebrain structures for their expression [12,[38][39][40][41][42]. Seizure grades 3-7 denote a seizure type that begins as a clonic (forebrain) seizure where the animal losses postural control, regains posture, and rapidly progresses to a seizure with tonic manifestations (brainstem seizure). Therefore, we refer to such seizures as forebrainRbrainstem seizures. Such seizures are denoted brainstem, since their seizure expression is controlled by a brainstem seizure network [17][18][19]32,37,39,43]. 
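The behavioural scale above maps directly onto the forebrain/brainstem dichotomy used throughout the paper; a literal encoding of that mapping, nothing more than the definitions restated as code, is:

def classify_seizure(grade: int) -> str:
    # Grades 1-2: clonic (forebrain) seizures; grades 3-7: clonic seizures that progress
    # uninterruptedly to tonic (brainstem) manifestations; grade 7 additionally ends in death.
    if grade in (1, 2):
        return "forebrain (clonic)"
    if grade in (3, 4, 5, 6, 7):
        return "forebrain-to-brainstem (tonic manifestations)"
    raise ValueError("seizure grade must be an integer from 1 to 7")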
Genetic and genomic analysis Heritability (H 2 ) was determined by dividing the between-strain variance by the sum of the within-strain and between-strain variance. Haplotype diversity was investigated for the three major substrains used in this study through publically available data and tools on the Mouse Phylogeny Viewer [44]. Statistical analysis One-way analysis of variance (ANOVA) followed by Newman-Keuls post-hoc comparisons were used to assess changes between strains. Repeated measures ANOVA followed by Newman-Keuls post-hoc comparisons were utilized to determine significance across repeated flurothyl seizure induction trials (kindling). GST and MJT on day 8 of the induction phase and on retest following incubation were compared using Student's t-test. Chi-square analysis was used to compare the percentage of animals changing their seizure phenotype. The point biserial correlation coefficient was used to determine correlation coefficients between the change in seizure phenotype and the other seizure parameters measured. Regression analysis was performed for parametric data. Statistical analyses were performed using Statistica (StatSoft). Results Given the unique seizure characteristics of C57BL/6J (6J) mice [11,12,26,32,33], we examined substrains of C57BL mice to determine their initial myoclonic jerk threshold (MJT; as determined by the latency from the start of flurothyl infusion to the appearance of myoclonic jerks [12,29]), decreases in MJT with repeated flurothyl exposures, initial generalized seizure threshold (GST; as determined by the latency from the start of flurothyl infusion to the expression of a generalized seizure [11,12,26,32,33]), decreases in GST with repeated flurothyl exposures, and the evolution of more complex seizure phenotypes over time (forebrainRbrainstem seizures). To determine whether the rate of decreases in MJT across the 8 seizure induction trials were different between the substrains, the slopes of the decreases in MJT across trials were calculated. There were significant differences between the slopes of the substrains ( Fig. 1 and Table 1; F 4,55 = 9.24, P,0.00001). 6NJ and KSJ mice had shallow slopes, 6J mice had the steepest slope with 10SNJ and 10J mice having comparatively moderate slopes ( Fig. 1 and Table 1). To determine whether the rate of decreases in GST across the 8 seizure induction trials were different between the substrains, slopes of the GST decreases were calculated. With this analysis, there were significant differences between the slopes of the substrains (Fig. 2 and Table 1; F 4,55 = 8.27, P,0.00001). Whereas 6NJ and KSJ mice had shallow slopes, 6J mice had the steepest slope with 10SNJ and 10J having comparatively moderate slopes ( Fig. 2 and Table 1). All of the substrains tested maintained their GST upon the 28day flurothyl retest compared to the GST for trial 8 of the induction phase (no significant differences; Fig. 2). Changes in seizure complexity over time. In agreement with previous published results, ,80% of 6J mice expressed a more complex forebrainRbrainstem seizure on flurothyl retest (Fig. 3) [11,12]. For the C57BL substrains, we found that 25% of 10SNJ mice (P,0.04) and 25% of 10J mice (P,0.04) expressed a forebrainRbrainstem seizure phenotype (Fig. 3). In addition, none of the 6NJ and KSJ mice expressed a more complex seizure phenotype on flurothyl rechallenge (Fig. 3). Chi-square analysis demonstrated that there were significant differences between substrains (X 4 = 28.45; P,0.001). 
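The heritability estimate defined in the genetic analysis above (between-strain variance divided by the sum of between- and within-strain variance) can be written compactly as below. The grouping of animals by substrain and the use of sample variances are implementation choices on our part, and the numbers in the example are placeholders rather than values from this study.

import numpy as np

def heritability(data):
    """data maps substrain name -> per-animal measurements (e.g. seizure thresholds)."""
    groups = [np.asarray(v, dtype=float) for v in data.values()]
    var_between = np.var([g.mean() for g in groups], ddof=1)       # variance of the strain means
    var_within = float(np.mean([g.var(ddof=1) for g in groups]))   # mean within-strain variance
    return var_between / (var_between + var_within)

example = {"6J": [220, 231, 210, 225], "6NJ": [182, 175, 190, 178], "KSJ": [170, 168, 176, 174]}
print(round(heritability(example), 2))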
Lastly, of the 60 mice in 5 substrains rechallenged with flurothyl following the incubation Figure 1. C57BL substrain differences in myoclonic jerk thresholds. The latency to the first myoclonic jerk (myoclonic jerk threshold (MJT)) on each seizure trial was determined for 5 C57BL substrains (n = 12 mice/substrain: 10SNJ, 10J, 6J, 6NJ, and KSJ) by exposure to 10% flurothyl during eight induction trials followed by a 28-day rest period and a single flurothyl retest. 10SNJ and 6J mice have the highest baseline MJT that are statistically indistinguishable. However, 6NJ and KSJ mice have significantly lower initial MJT as compared to all other substrains (P,0.0001). Additionally, four of the five individual substrains (6J, 10SNJ, 10J, 6NJ) showed significant differences in MJT across the 8 seizure trials (P,0.0001), except KSJ (P = 0.09). 1 significantly different from 10SNJ, 10J, and 6J (P,0.02); 2 significantly different from 10SNJ and 10J (P,0.04); 4 significantly different from all other substrains (P#0.05); 6 significantly different from 10SNJ and KSJ (P,0.02); 7 significantly different from 6J, 6NJ, and KSJ (P,0.01); 8 significantly different from 10SNJ, 10J, and KSJ (P,0.02); 9 significantly different from all substrains except 6J (P,0.03); 10 significantly different from 6J and KSJ (P,0.04); 11 Heritability of these seizure traits among the C57BL substrains In an attempt to correlate the seizure phenotypic diversity within these substrains with existing genetic data, we queried the publically available data on the mouse phylogeny viewer (http:// msub.csbio.unc.edu). Data were only available for 3 of the 5 strains studied here, but haplotypes from these strains indicate many regions that are similar along with numerous regions of haplotype diversity that are scattered throughout the genome (Fig. 4). Genetic diversity is greatest between the 6J and KSJ lines. Thus, any attempt to perform association studies will require a significant increase in the number of substrains studied, or an alternative mapping approach to localize our effects to a specific QTL region followed by a localized haplotype analysis. Figure 2. C57BL substrain differences in generalized seizure thresholds. The latency to a generalized seizure (generalized seizure threshold (GST)) on each seizure trial was determined for 5 C57BL substrains (n = 12 mice/substrain: 10SNJ, 10J, 6J, 6NJ, and KSJ) by exposure to 10% flurothyl during eight induction trials followed by a 28-day rest period and a single flurothyl retest. The baseline GST of 10SNJ mice and 10J mice were similar to that of 6J mice, whereas 6NJ and KSJ mice have significantly lower initial GST (P,0.01). For all C57BL substrains, there is a significant decrease in GST following repeated seizures (P,0.0001), which was independent of their initial GST. On flurothyl rechallenge, GST did not differ from their corresponding last seizure (seizure trial 8). 1 significantly different from 10SNJ, 10J, and 6J (P,0.01); 2 significantly different from 10SNJ and 10J (P,0.01); 3 significantly different from KSJ (P,0.01); 4 significantly different from all other substrains (P,0.05); 5 significantly different from 10SNJ, 6J, and KSJ (P,0.05); 6 significantly different from 10SNJ and KSJ (P,0.05); 7 significantly different from 6J, 6NJ, and KSJ (P,0.05); 8 significantly different from 10SNJ, 10J, and KSJ (P,0.05). doi:10.1371/journal.pone.0090506.g002 Figure 3. 
Flurothyl-induced seizure behaviors in C57BL substrains following 8 seizures, a 28-day incubation phase, and a final flurothyl challenge. While none of the 6NJ and KSJ mice expressed a more complex forebrainRbrainstem seizure on flurothyl rechallenge, 25% of 10SNJ mice, 25% of 10J mice, and ,80% of 6J mice did express a more complex forebrainRbrainstem seizure. This demonstrates that the evolution of more complex seizures, following exposure to the repeated-flurothyl model, is controlled by alleles in the B6 genetic background. Chi-square analysis demonstrated a significant difference between substrains (X 4 = 28.45; P,0.001; n = 12/substrain). Two out of 60 mice tested died on flurothyl retest (two B6 mice had a grade 7 seizure (tonic forelimb/hindlimb extension followed by death)). Discussion Among the inbred strains evaluated to date in the repeatedflurothyl model, C57BL6/J (6J) mice have shown the most interesting responses in modeling: seizure progression through changes in GST following repeated seizures and the evolution of more complex seizures over time [11]. Our survey of five substrains of closely related C57BL mice demonstrates that genetic control over these seizure characteristics is divergent among their shared ancestry. Other studies have found similar significant differences between C57BL substrains in alcohol preference, pain threshold, fear conditioning, and maximal electroshock seizures that are in part attributable to genetic differences [3,[45][46][47][48][49] indicating that these sublines may be a fruitful source for haplotype refinement in positional cloning studies. Recently, cocaine responsivity in C57BL/6 and C57BL/6N substrains was mapped to a QTL responsible for 70% of this phenotypic difference leading to the identification of a nonsynonymous mutation in the cytoplasmic FMRP interacting protein 2 gene (Cyfip2) [50]. This study highlights the importance of examining mouse substrains as a powerful approach to reveal important genetic causes or modifiers of a phenotype or trait. Since seizures are multifactorial, it is important to demonstrate whether specific effects on seizure susceptibility are a result of genetic influences or are related to the inherent properties of the convulsive stimuli. While no previous study has systematically analyzed these substrains with flurothyl, Ferraro et al., 2004 and 2011 demonstrated that, for baseline GST, C57BLKS mice are more susceptible to maximal electroconvulsive shock induced seizures compared to C57BL/10Sn, C57BL/6, and C57BL/10 mice. In fact, there was a trend for C57BL/10Sn mice to have higher maximal electroshock seizure thresholds than C57BL/6 mice, and for C57BL/6 mice to have higher thresholds than C57BL/10 mice [2,3]. This was similar to what we observed with flurothyl, particularly following repeated exposures to flurothyl, indicating that these differences in seizure susceptibility are genetic and are not the result of the specific type of convulsant used. Our data suggest that while a percentage of mice in 3 of the 5 substrains tested undergo a change in seizure phenotype to a forebrainRbrainstem seizure following seizure induction, incubation and flurothyl retest (6J, C57BL/10SNJ (10SNJ), and C57BL/ 10J (10J) mice), these substrains also have comparable baseline GST, with the substrains not changing their seizure phenotype (C57BL/6NJ (6NJ) and C57BLKS/J (KSJ) mice) having significantly lower initial GST. 
This may indicate that strains of mice with higher GST may be more susceptible to developing more complex seizures over time. Indeed, we have previously reported that BALB/cJ, C3H/HeJ, and 129S1/SvImj mice have GST similar to 6J mice, and also have a high percentage of mice expressing forebrainRbrainstem seizures during the flurothyl induction phase [11]. Understanding the pathophysiological processes underlying these plasticity changes could potentially lead to the development of new therapeutics directed against novel targets. In addition to seizure complexity, we also observed substrain differences in the reductions in GST over the 8 seizure induction trials (''kindling''). Notably, while 6J, 10J and 10SNJ mice have comparable GST on the 1 st day of induction, 6J, 10J, and 6NJ mice are most similar by day eight. This indicates that although initial GST has a direct impact on the rate of kindling, independent processes that segregate within C57BL substrains also modify this phenotype. While it is impossible to formally test this without experimental crosses, these data support an additive model for seizure threshold, where 10SNJ, 10J and 6J mice retain more alleles contributing to high GST than 6NJ or KSJ mice. However, all of the substrains appear to ''kindle'' to some extent with their absolute rate depending on their initial GST. Despite the close phylogenetic relationships between these lines, our initial attempts to correlate genotype with phenotype have shown that there is likely insufficient power for such an analysis on these data. Genetic divergence between these substrains is widespread particularly in KSJ, which not surprisingly shows the greatest difference in seizure progression from 6J. However, utilization of strains like 6NJ and 6J, in conjunction with alternative mapping approaches, should provide critical data to focus our attention on the causative mutations in future work. In penicillin-induced epilepsy models, 6J mice showed differences in the relative power of delta, theta, alpha, beta, and gamma bands compared to BALB/c mice, that is partly due to the differences in their seizure susceptibility [7]. Similarly, EEG power spectrum analyses in C57BL substrains exposed to the repeatedflurothyl model may reveal important insights regarding these frequency bands. Interestingly, in electrical kindling, a recent report showed that the evolution of high frequency discharges, particularly in the beta and gamma frequency range, predicted the future appearance of fully kindled seizures in kindled rabbits [51]. Therefore, it would be particularly interesting to determine whether differences/changes in EEG frequency bands across the repeated-flurothyl model, in C57BL substrains, and especially in 6J mice, can predict whether a mouse will change its seizure phenotype upon flurothyl rechallenge. Identification of such an EEG signature could serve as an important biomarker that could predict whether an individual might develop more complex seizures over time. Currently, we are taking advantage of the phenotypic diversity across all inbred strains to determine the major QTL controlling seizure traits in the repeated-flurothyl model. These data in conjunction with deep sequencing of the discovered QTL regions in C57BL substrains should help refine the candidate intervals and aide in the discovery of causative mutations. Author Contributions Figure 4. Illustration of haplotype diversity between 10J, 6J and KSJ mice. 
Each horizontal bar represents one of the mouse chromosome haplotypes. Differential shading between segments depict ancestral haplotype blocks consistent with the default parameters described on the mouse phylogeny viewer website. Differences in shading between lines show regions of genetic divergence between strains. doi:10.1371/journal.pone.0090506.g004
2017-04-05T19:06:29.758Z
2014-03-03T00:00:00.000
{ "year": 2014, "sha1": "6191a5a4b628f30f0b0a86ef675bea106a35bcd7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0090506&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6191a5a4b628f30f0b0a86ef675bea106a35bcd7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
245050478
pes2o/s2orc
v3-fos-license
Seasonal Variation and Severity of Acute Abdomen in Japan: A Nine-Year Retrospective Analysis The seasonal incidence of acute abdomens, such as appendicitis, is reportedly more common in summer but is reported less frequently in Asia. Additionally, seasonal variations in the severity of acute abdomens have been evaluated insufficiently. This study evaluated the seasonal variations in the incidence and severity of acute abdomens in Japan. This retrospective observational study used a multicenter database containing data from 42 acute hospitals in Japan. We included all patients diagnosed with acute appendicitis, diverticulitis, cholecystitis, and cholangitis between January 2011 and December 2019. Baseline patient data included admission date, sequential organ failure assessment score, presence of sepsis, and disseminated intravascular coagulation. We enrolled 24,708 patients with acute abdomen. Seasonal admissions for all four acute abdominal diseases were the highest in summer [acute appendicitis, (OR = 1.35; 95% CI = 1.28–1.43); diverticulitis, (OR = 1.23; 95% CI = 1.16–1.31; cholecystitis (OR = 1.23; 95% CI = 1.11–1.36); and cholangitis (OR = 1.23; 95% CI = 1.12–1.36)]. The proportion of patients with sepsis and disseminated intravascular coagulation as well as the total SOFA score for each disease, did not differ significantly across seasons. Seasonal variations in disease severity were not observed. Seasonal variations have been reported in the incidence of several diseases. For example, cardiac arrest and asthma have been reported to be most common in winter [10,11]. Similarly, several studies have reported that the frequency of acute appendicitis was higher in summer [12,13], and some have reported that admissions for diverticulitis and acute cholecystitis were more common in summer [12,14,15]. However, seasonal variations in the severity of acute abdomen have been reported less frequently, especially in Japan. Since the four seasons in Japan are clearly separated compared to the other countries where previous studies were conducted, the analysis of seasonal variations in Japan may yield clear distinctions among seasons. For example, the average temperature in Tokyo is 14.7 degrees Celsius (11.1-19.7) degrees during spring, 26.4 (22.9-27.4) degrees Celsius during summer, 19.1 (14.1-22.9) degrees Celsius during fall, and 6.3 (5.7-7.3) degrees Celsius during winter. Therefore, we attempted to analyze the seasonal trends in acute abdomen using a nationwide database used for claiming medical fees in Japan. Specifically, we aimed to investigate the monthly and seasonal variations in the frequency and severity of acute abdominal conditions, namely, acute appendicitis, diverticulitis, acute cholecystitis, and acute cholangitis. Design and Setting This retrospective observational study was conducted using routinely collected data from electronic medical records provided by Medical Data Vision (MDV, Tokyo, Japan). The MDV database contains electronic health insurance claims and diagnosis procedure combination (DPC) payment system data from approximately 400 acute hospitals, accounting for approximately 23% of the total claims in Japan and including approximately 30 million patients until October 2019. The database includes data on age, sex, laboratory values, admission date, primary diagnoses, concomitant diagnoses, complication diagnoses, medical procedures, prescriptions, drug administration, discharge status, and hospital length of stay. 
This study included patient data from 42 acute hospitals (approximately 1.2% of all the acute hospitals in Japan) with laboratory data among all acute hospitals registered in the MDV database. Diagnoses were recorded based on the International Classification of Diseases Tenth Revision (ICD-10) codes. This study was conducted in accordance with the principles of the Declaration of Helsinki. The study protocol was approved by the Institutional Review Board of Osaka General Medical Center, Osaka, Japan (approval no. #S201916015). Informed consent was not required because of the anonymous nature of the retrospective data. Study Population The flowchart outlining patient selection for this study is shown in Figure 1. We identified all adult patients who required unplanned hospital admission and were diagnosed with an infection between 1 January 2011 and 31 December 2019. In this study, infection was defined by the inclusion of any of the ICD-10 infection codes previously proposed by the Institute for Health Metrics and Evaluation (IHME) [16] in the primary diagnosis or the diagnosis that triggered hospitalization. Among these patients, those diagnosed with acute appendicitis (ICD-10 codes K350, K351, K352, K353, K358, K359, and K36), diverticulitis (K572 and K573), acute cholecystitis (K800, K801, K804, K810, K811, and K818), or acute cholangitis (K803 and K830), regardless of emergency surgery during hospitalization, were included in the study. Patients with missing age data were excluded from the analysis. Data Collection We collected the following data for evaluation of baseline patient characteristics: Age, sex, date of admission, Charlson comorbidity index (CCI), [17] Sequential Organ Failure Assessment (SOFA) score and SOFA sub-scores, intensive care unit admission, use of catecholamine, surgery with general anesthesia, and underlying Sepsis-3 and DIC. Sepsis-3 was defined by an increase of 2 or more points from the total SOFA score on admission, which was calculated retrospectively. In this study, we used the modified SOFA score listed in Table S1, which omits cardiovascular subscore 1 (mean arterial pressure < 70 mmHg) and respiratory subscore 4 (PaO 2 /F I O 2 < 100), because data for these variables were not provided in the MDV database. The Japan coma scale, which is used for calculating neurological sub-scores instead of the Glasgow coma scale, has four main grades (grade 0 = alert; grade 1 = possible verbal response without any stimulation, not lucid; grade 2 = possible eye-opening, verbal, and motor response upon stimulation; and grade 3 = no eye-opening and coma upon stimulation). DIC diagnoses were based on ICD-10 codes (D65, O450, O460, O723, and O081) instead of the established diagnostic criteria for DIC. We collected the following data on general outcomes: in-hospital mortality, length of hospital stay, and emergency surgery with general anesthesia. To examine seasonal variations, we defined the seasons as follows: spring (1 March-31 May), summer (1 June-31 August), fall (1 September-30 November), and winter (1 December-28 February). Statistical Analysis To compare seasonal variations in the frequency and severity of acute abdomen (acute appendicitis, diverticulitis, acute cholecystitis, and acute cholangitis), analyses were performed using a nonparametric test, an extension of the Wilcoxon rank test for continuous variables, and a logistic regression test. 
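For readers who want to reproduce the cohort definition, the sketch below illustrates how the ICD-10 inclusion codes and the calendar-based season windows from the Methods could be encoded. It is only a minimal illustration under stated assumptions: the ICD-10 code lists and season boundaries are taken from the text, while the function names and record layout are our own and are not part of the MDV database or the authors' analysis code.

```python
from datetime import date

# ICD-10 codes for the four acute-abdomen diagnoses, as listed in the Methods.
DIAGNOSIS_CODES = {
    "appendicitis":   {"K350", "K351", "K352", "K353", "K358", "K359", "K36"},
    "diverticulitis": {"K572", "K573"},
    "cholecystitis":  {"K800", "K801", "K804", "K810", "K811", "K818"},
    "cholangitis":    {"K803", "K830"},
}

def classify_diagnosis(icd10_code):
    """Return the acute-abdomen category for an ICD-10 code, or None if not included."""
    for name, codes in DIAGNOSIS_CODES.items():
        if icd10_code in codes:
            return name
    return None

def season_of(admission: date) -> str:
    """Map an admission date to the season definition used in the study:
    spring = Mar-May, summer = Jun-Aug, fall = Sep-Nov, winter = Dec-Feb."""
    m = admission.month
    if 3 <= m <= 5:
        return "spring"
    if 6 <= m <= 8:
        return "summer"
    if 9 <= m <= 11:
        return "fall"
    return "winter"

# Example with a hypothetical admission record:
print(classify_diagnosis("K352"), season_of(date(2015, 7, 14)))  # appendicitis summer
```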
Categorical variables are presented as numbers and percentages, while continuous variables were presented as median and interquartile ranges. All statistical inferences were performed using a 2-sided p value at the 5% significance level. All analyses were performed using JMP 15.0 software (SAS Institute, Tokyo, Japan). Study Population The total number of infectious disease inpatients during the study period was 166,145. After applying the inclusion and exclusion criteria, 24,708 patients with acute abdomen were included in this study. The diagnoses on admission were as follows: acute appendicitis (42.5%, n = 10,500), diverticulitis (32.3%, n = 7993), acute cholecystitis (12.6%, n = 3114), and acute cholangitis (12.6%, n = 3101). The baseline patient characteristics for each diagnosis are shown in Table 1. The incidence of acute appendicitis was higher in younger people; however, diverticulitis, acute cholecystitis, and acute cholangitis were more common in older adults. Sepsis-3 as an underlying condition was more frequent in acute cholecystitis and acute cholangitis. The outcome measures for all patients for each diagnosis are presented in Table 2. The mortality rates for all four diseases were higher in older adults. In particular, acute cholecystitis and acute cholangitis showed higher mortality rates than the other conditions. Data are expressed as percent or median and interquartile range, as indicated. Monthly and Seasonal Variations in Admissions The monthly admission rate per 100,000 MDV inpatients for each disease is shown in Figure 2. From 2011 through 2019, the rate of acute abdomen admissions was the lowest in February (6660.8/100,000 MDV inpatients) and the highest in August (9573.8/100,000 MDV inpatients), showing a 43.7% increase. Similarly, the monthly admission rates of individual acute abdomen conditions were the lowest in December to February and the highest in May to June. The seasonal admission rates per 100,000 inpatients with MDV for each disease are shown in Figure 3. The rate of admissions for acute abdomen was the lowest in winter (21,581.9/100,000 MDV inpatients) and the highest in summer (28,230.3/100,000 MDV inpatients), representing a 30.8% increase. The analysis of seasonal admissions for each diagnosis is listed in Table 3. The rate of seasonal admissions for acute appendicitis was the highest in summer (odds ratio (OR) = 1.35; 95% confidence interval (CI) = 1.28-1.43) and the lowest in winter. Similarly, the rate of admissions for diverticulitis was higher in summer than in winter (OR = 1.23; 95% CI = 1.16-1.31), similar to the findings for acute cholecystitis (OR = 1.23; 95% CI = 1.11-1.36) and acute cholangitis (OR = 1.23; 95% CI = 1.12-1.36). Data are expressed as a percent or mean with 95% confidence interval, as indicated. p-Value for trend test using the logistic regression analysis, as appropriate. Seasonal Severity of Each Disease The proportions of cases showing sepsis as an underlying condition for each disease in all four seasons are shown in Figure 4. The total SOFA score and organ sub-scores for acute appendicitis, diverticulitis, acute cholecystitis, and acute cholangitis showed no meaningful differences across months and seasons. In analyses of variations and regression, no seasonal variation in the incidence of Sepsis-3 as an underlying condition was observed for any of the four diseases. 
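The percent increases quoted for the monthly and seasonal admission rates follow directly from the reported figures; the short sketch below reproduces that arithmetic. The rate values are taken from the text, and the closing comment only notes how a crude odds ratio could be formed from raw counts, whereas the paper's ORs come from logistic regression.

```python
# Admission rates per 100,000 MDV inpatients quoted in the Results.
monthly_low, monthly_high = 6660.8, 9573.8      # February vs. August
seasonal_low, seasonal_high = 21581.9, 28230.3  # winter vs. summer

def percent_increase(low: float, high: float) -> float:
    """Relative increase of the higher rate over the lower rate, in percent."""
    return (high - low) / low * 100.0

print(f"monthly increase:  {percent_increase(monthly_low, monthly_high):.1f}%")   # ~43.7%
print(f"seasonal increase: {percent_increase(seasonal_low, seasonal_high):.1f}%")  # ~30.8%

# A crude season-vs-winter odds ratio could likewise be formed from counts, e.g.
# OR = (summer_cases / summer_noncases) / (winter_cases / winter_noncases);
# the ORs reported above are estimated by logistic regression, so this is only
# an approximation of the same idea.
```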
Principal Findings Our study showed that the incidence rates of acute appendicitis, diverticulitis, acute cholecystitis, and acute cholangitis in the summer were higher than those in the winter. The seasonal incidence of each disease was consistent with that reported in previous studies [12]. To the best of our knowledge, this is the first study to showed that the incidence rate of acute cholangitis was the highest in summer and lowest in winter. Although the seasonal pattern of infections for the blind-ending tubular structure has been reported, a similar trend of admission rate was observed in acute cholangitis. Assumed Mechanisms Causing Seasonal Variations of Acute Abdomen Previous studies have reported that bacterial and viral infections, fiber intake, temperature, humidity, daily sun exposure, and atmospheric pressure were related to seasonal variations in acute appendicitis, with the highest incidence occurring in the summer [18][19][20]. Acute appendicitis is caused by obstruction of the lumen due to lymphoid hyperplasia, fecaliths, and neoplasms. The incidence of infectious gastroenteritis, which can occur in lymphoid hyperplasia, reportedly increases with increasing temperature. In addition, infections by Escherichia coli, which was the most common pathogen of acute appendicitis, peaked in the summer [21,22]. These findings could be associated with the higher admission rate during summer. The primary etiological factors for acute cholangitis and cholecystitis are gallstones [23]. Cholestasis and impaired intestinal motility may promote gallstone formation [24]. Dehydration during summer may also influence gallstone formation. Biliary culture tests detected gram-negative bacilli such as Escherichia coli and Klebsiella spp., which were reported to show a higher incidence in bloodstream infections during summer and a higher incidence with rising temperatures [25,26]. These factors could be associated with the seasonality of acute cholecystitis and cholangitis. Our findings showed the seasonal incidence of acute abdomen in Japan with four distinct seasons and may predict relevant risk factors for diseases in the summer. Disease Severity across Seasons A previous study reported that the incidence of sepsis was higher in winter, and the rate of respiratory sepsis was higher in winter. In addition, sepsis due to the gastrointestinal system has been reported to have no seasonality [27]. Our study focused on the severity of each disease; the findings did not show any clear seasonal variations, consistent with the results of previous studies. Studies have suggested that the seasonality of respiratory infection was associated with low temperature, which suppressed host immune responses, but all four diseases in our study were common diseases not related to immune deficiency [28]. Limitations Our study had several limitations. First, the records of the diagnoses for acute appendicitis, diverticulitis, acute cholecystitis, and acute cholangitis recorded in the MDV database may have included errors because diagnoses recorded in administrative claims databases generally show lower accuracy than those recorded in prospective studies. Similarly, under-or overestimation and misclassification of the underlying conditions may have occurred. Incorrect diagnoses may have occurred in cases where attending physicians recorded ICD-10 diagnoses outside their fields of expertise or image diagnosis was not performed, particularly in mild cases. 
Second, the acute hospitals included in this study accounted for only 1.2% of the total hospitals in Japan, but the study nevertheless included a larger number of patients than previous studies in Japan. Third, we could not obtain the pathological and diagnostic imaging findings to assess the reliability of the diagnoses, which is likely to lead to an overestimate of cases. Conclusions The findings of this retrospective observational study of 24,708 acute abdomen patients over a nine-year study period in Japan suggested that emergency admissions for acute appendicitis, diverticulitis, acute cholecystitis, and acute cholangitis were highest in the summer. However, no variations were observed in the severity of acute abdomen among the four seasons.
2021-12-12T16:18:56.089Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "ea7e17724fcfe9d9294de90c66a20d7309b2d8de", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4426/11/12/1346/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b4602226205d5d2e71d9abac313c551f9c5b1864", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
205647828
pes2o/s2orc
v3-fos-license
Mycorrhiza stimulates root-hair growth and IAA synthesis and transport in trifoliate orange under drought stress Root-hair growth and development regulated by soil microbes is associated with auxin. In this background, we hypothesized that mycorrhizal fungal inoculation induces greater root-hair growth through stimulated auxin synthesis and transport under water stress conditions. Trifoliate orange (Poncirus trifoliata) was inoculated with an arbuscular mycorrhizal (AM) fungus (Funneliformis mosseae) under well-watered (WW) and drought stress (DS) for 9 weeks. Compared with non-AM seedlings, AM seedlings displayed significantly higher density, length, and diameter of root hairs and root indoleacetic acid (IAA) level, whereas lower total root IAA efflux, regardless of soil moisture status. Root PtYUC3 and PtYUC8 involved in IAA biosynthesis were up-regulated by mycorrhization under WW and DS, whereas AM-modulated expression in PtTAA1, PtTAR2, PtYUC4, and PtYUC6 depended on status of soil moisture. Mycorrhizal inoculation down-regulated the transcript level of root auxin efflux carriers like PtPIN1 and PtPIN3, whereas significantly up-regulated the expression of root auxin-species influx carriers like PtABCB19 and PtLAX2 under DS. These results indicated that AMF-stimulated greater root-hair growth of trifoliate orange under DS that is independent on AMF species is related with mycorrhiza-modulated auxin synthesis and transport, which benefits the host plant to enhance drought tolerance. P-uptake in promoting the plant growth, plant water relations or photosynthetic capacity under DS, thereby, providing an evidence for enhancing drought tolerance of host plant under DS. The magnitude of plant responses to mycorrhization is considered highly dependent on the compatibility between AMF species and host plant species 15 . It is still not clear whether mycorrhizas could play a role in root hair growth, with an exception of D. versiformis 14 . The mechanisms regarding mycorrhizal effects on root-hair growth of host plants are unknown. It is well documented that root-hair initiation is manipulated by two different pathways, viz., developmental pathway and the environmental/hormonal pathway 16 . In roots, a variety of of phytohormones, like auxin, ethylene, jasmonic acid, brassinosteroid, and strigolactone participate in root-hair growth and development, but auxin is most extensively studied [17][18][19] . Auxin is primarily synthesized at the shoot apex, transported to the root tip by vascular tissues of the stem, finally moves in a basipetal orientation towards the elongation zone through root peripheral tissues 20 . Such auxin efflux at the root apex is mainly controlled by various Pin-formed (PIN) auxin efflux carriers, besides AUXIN RESISTANT 1/LIKE AUX1 (AUX1/LAX) auxin influx carriers and some members of the ATP-BINGING CASSETTE B (ABCB) transporters (auxin efflux proteins) 21,22 . Besides auxin transport, auxin synthesis is controlled by a number of genes, such as tryptophan aminotransferase (TAA), tryptophan aminotransferase related (TAR), flavin monooxygenase-like enzyme (YUC), etc. 23 . Trifoliate orange (Poncirus trifoliata L. Raf.) is a widely used rootstock in citriculture in Southeast Asia, the root configuration of which is characterized by distinctively few and short root hairs which, accompanied with highly drought-sensitive nature 24 . 
Based on our previous results 13 , we hypothesized that mycorrhizal inoculation with Funneliformis mosseae could induce greater root-hair growth of trifoliate orange through enhanced auxin synthesis and transport under DS for enhanced drought tolerance. To confirm this hypothesis, trifoliate orange seedlings were inoculated with Funneliformis mosseae and subsequently exposed to well-watered (WW) and DS conditions. The responses were evaluated through root-hair morphology, root auxin concentration, root auxin effluxes, and relative expression of root auxin relevant genes. Results Mycorrhizal colonization of roots. No mycorrhizal colonization was observed in non-AM roots. AMFinoculated seedlings showed 55.6-61.4% of root mycorrhizal colonization ( Table 1). As much as 9.4% reduction in root mycorrhizal colonization was observed under DS than under WW. Plant growth. Plant growth traits, including plant height, stem diameter, leaf number, and shoot and root biomass were adversely affected by DS treatment, as compared with WW treatment, regardless of AM or non-AM seedlings (Table 1). On the other hand, AM seedlings showed significantly higher these plant growth-related traits than non-AM seedlings, irrespective of WW or DS condition. Root-hair features. Length, diameter and density of root hairs were significantly increased by DS treatment, in comparison with WW treatment (Fig. 1). AM seedlings displayed better root hair features than non-AM seedlings under both WW and DS, in a range of 41% and 15% higher for root hair length, 50% and 40% higher for root hair density, and 16% and 25% higher for root hair diameter, respectively. Root IAA concentration. Concentration of root indole-3-acetic acid (IAA) was significantly reduced by DS treatment, as compared to WW treatment (Fig. 2). AM seedlings exhibited significantly higher root IAA concentration than non-AM seedlings by 36% and 37% under WW and DS condition, respectively. Total root IAA efflux. An IAA efflux was observed in trifoliate orange from the root to the rhizosphere, regardless of WW or DS treatment. The DS treatment produced a significant reduction in total root IAA efflux by 3% in non-AM seedlings and an increase by 35% in AM seedlings (Fig. 3). AMF inoculation conferred a significant reduction in total root IAA efflux by 58% and 41% under WW and DS, respectively, in relative to non-AMF treatment. Root IAAO activity. Compared with WW treatment, root IAA oxidase (IAAO) activity was significantly increased by DS treatment in AM or non-AM seedlings (Fig. 4). There were no significant difference in root IAAO activity between AM and non-AM seedlings exposed to both WW and DS. Table 1. Effects of an arbuscular mycorrhizal fungus (AMF), Funneliformis mosseae, on plant growth performance of trifoliate orange (Poncirus trifoliata) seedlings exposed to well-watered (WW) and drought stress (DS). Data (means ± SD, n = 4) followed by different letters in the column indicate significant differences (P < 0.05) between treatments. Figure 1. Effects of an arbuscular mycorrhizal fungus (AMF), Funneliformis mosseae, on average density, length, and diameter of root hairs of trifoliate orange (Poncirus trifoliata) seedlings exposed to well-watered (WW) and drought stress (DS). Data (means ± SD, n = 4) followed by different letters above the bars indicate significant differences (P < 0.05) between treatments. Figure 2. 
Effects of an arbuscular mycorrhizal fungus (AMF), Funneliformis mosseae, on root IAA concentration of trifoliate orange (Poncirus trifoliata) seedlings exposed to well-watered (WW) and drought stress (DS). Data (means ± SD, n = 4) followed by different letters above the bars indicate significant differences (P < 0.05) between treatments. Effects of an arbuscular mycorrhizal fungus (AMF), Funneliformis mosseae, on root IAA effluxes of trifoliate orange (Poncirus trifoliata) seedlings exposed to well-watered (WW) and drought stress (DS). Data (means ± SD, n = 4) followed by different letters above the bars indicate significant differences (P < 0.05) between treatments. Transcript levels of root EXPAs. Root EXPAs such as PtEXPA4, PtEXPA5, and PtEXPA7 genes were significantly up-regulated by DS, compared with WW, irrespective of AM or non-AM-seedlings ( Fig. 7a-c). Discussion In this study, inoculation with F. mosseae showed a significant increase in length, density, and diameter of root hairs in trifoliate orange under DS. These observations are in agreement with the study carried out by Zou et al. 13 in trifoliate orange colonized by Diversispora versiformis under DS. It also suggested that mycorrhiza-stimulated root hair growth in trifoliate orange is independent of AMF species. Better root hair growth in mycorrhizal trifoliate orange plants under DS provides a much higher nutrient foraging ability for the inoculated plants to absorb more water and nutrients from mycorrhizosphere, eventually alleviating negative effects of drought on plants 25 . The present study showed that mycorrhizal inoculation produced a remarkably increased root IAA level in trifoliate orange seedlings exposed to either WW or DS, in accordance with our previous study in trifoliate orange seedlings colonized by Claroideoglomus etunicatum, D. versiformis, F. mosseae, and Rhizoglomus intraradices 12 . Auxin is now considered as a key regulator in the whole process of root-hair initiation, growth and development 18,19,26 . Any increase in the root IAA accumulation under mycorrhization, thereby, would benefit root-hair growth and plant growth performance. Auxin is transported from one cell to another cell, following a strict directionality in uptake and efflux of carrier proteins involved 23 . Auxin efflux is regulated through members of PIN family, the sequence of which encodes a family of auxin efflux carriers 22 . Our study revealed that mycorrhizal seedlings displayed lower root IAA efflux than non-mycorrhizal seedlings under either WW or DS treatment. Under DS, mycorrhizal treatment down-regulated the transcript level of root PtPIN1 and PtPIN3, but not PtPIN4, indicating a potential reduction in the amount of auxin efflux and finally resulting in an elevated accumulation of root IAA in AM plants over non-AM plants. reported that PIN1, PIN3, and PIN4 were located in stele cells, collectively responsible for auxin flow towards the quiescent center (QC), close to root tip for auxin reflux. Therefore, the lower expression level of PtPIN1 and PtPIN3 in AM versus non-AM plants exposed to DS is speculated to decrease the auxin flow towards the QC, thereby, reducing the auxin reflux 27 and inducing greater auxin accumulation in the root-hair zones to stimulate root-hair growth. In fact, auxin reflux is associated with the five PIN proteins (PIN1, PIN2, PIN3, PIN4 and PIN7) 27,28 . Further studies are needed to decode the functioning of the PIN family on AMF-induced root hair modification, especially under DS condition. 
It is well known that YUC (YUCCA encoding a flavin monooxygenase) and TAA/TAR (Tryptophan Aminotransferase of Arabidopsis) are two families of genes associated with auxin biosynthesis in plants 29 . In our work, the expression of root PtTAA1, PtYUC3, and PtYUC8 under WW and root PtTAR2, PtYUC3, PtYUC4, and PtYUC8 under DS was up-regulated by AMF inoculation, relative to non-AMF treatment. In the indole-3-pyruvic acid (IpyA) pathway, TAA1 and its close homologue, PtTAR2 convert L-Trp into IpyA, and YUC enzymes synthesize IAA from IpyA 30 . Our present study showed a different capacity in conversation of L-Trp into IpyA by mycorrhization from WW to DS. And, PtYUC3 and PtYUC8 were jointly activated by mycorrhization, regardless of WW or DS, implying the high responsiveness of the two genes to mycorrhization. Initiation and growth of root hairs require loosening of cell-wall components, mediated by cell wall-loosening expansin proteins (EXPs), represented by two major EXP subgroups, viz., EXPA and EXPB 31 . In our study, the transcript level of root PtEXPA4 and PtEXPA7 genes was not affected by AMF inoculation under WW, but the relative expression of root PtEXPA5 was down-regulated. Under DS, AM seedlings were characterized by relatively higher transcript level of root PtEXPA5 and lower transcript level of root PtEXPA4 and PtEXPA7 genes, indicating that AMF-mediated expression of root PtEXPAs is strongly dependent on soil moisture status. Inactivation or down-regulation in the expression of root PtEXPAs upon mycorrhization (except an up-regulated expression of root PtEXPA5 under DS) further suggested that the member of root EXPAs is not stimulated by AMF to initiate growth elongation of root hairs. Cell-to-cell auxin transport is dependent on two families of auxin-species carrier proteins, viz., ABCB family and AUX1/LAX family of influx carriers 32 , besides PIN carriers. ABCB auxin transporters are involved in the polar transport of IAA in plants, whilst ABCB1 and ABCB19 operate long-distance IAA-transport 33 . The AUX1/LAX family of PM permeases possess H + -symport activity to transport auxin into the cells 34 . Our work showed no changes in the expression of root PtABCB1 and PtABCB19 genes in response to mycorrihization under WW. Nevertheless, the root PtABCB1 transcript level was decreased and transcript level of root PtABCB19 was increased under DS in response to AMF inoculation. These observations warranted that mycorrhizal inoculation only induced the up-expression of root PtABCB19 under DS to accelerate long-distance auxin-transport. A considerably higher transcript level of root PtAUX1, PtLAX2, and PtLAX3 genes under WW was observed in AM than in non-AM seedlings. And, a higher expression level of root PtLAX2 and lower expression level of root PtLAX1 were observed in AM seedlings than in non-AM seedlings under DS, suggesting that these auxin carrier proteins, especially PtLAX2 responded well to mycorrhization for cell-to-cell auxin transport under DS. IAAO is usually involved in auxin catabolism and negatively correlated with IAA levels, thereby, regulating the concentration of IAA 35 . In our work, DS treatment induced a higher root IAAO activity in both AM and non-AM seedlings, thereby, leading to the lower root IAA level in DS-treated seedlings. 
AMF inoculation did not alter the root IAAO activity. In short, the present study confirmed our proposed hypothesis that mycorrhizal inoculation induced greater root-hair growth of trifoliate orange, closely associated with the auxin pathway under DS, where mycorrhiza activated the auxin-relevant genes (PtYUC3 and PtYUC8), up-regulated the auxin influx carrier genes (PtABCB19 and PtLAX2), and down-regulated the auxin efflux carrier genes (PtPIN1 and PtPIN3). [Figure caption fragment: relative expression of root auxin transporter genes, including PtLAX3 (f), PtPIN1 (g), PtPIN3 (h), and PtPIN4 (i), in trifoliate orange (Poncirus trifoliata) seedlings exposed to well-watered (WW) and drought stress (DS); data (means ± SD, n = 4) followed by different letters above the bars indicate significant differences (P < 0.05) between treatments.] The seedlings were grown under controlled conditions (photon flux density of 880 μmol/m2/s, day/night temperature of 28/21 °C, and relative humidity of 85%) on the campus of Yangtze University, Hubei, China. Methods AM and non-AM seedlings were kept at 75% of the maximum water-holding capacity of the soil (soil WW status) for 11 weeks. Afterwards, half of the seedlings were still maintained under WW status for 9 weeks, and the other half were exposed to 55% of the maximum water-holding capacity of the soil (soil DS status) for 9 weeks. Soil water levels in the pots were measured daily by weighing, and the amount of water lost was supplied to maintain the designated soil water levels. Experimental design. The experiment consisted of four treatments with a completely randomized block arrangement: i. seedlings inoculated with F. mosseae under WW (WW + AMF), ii. seedlings not inoculated with F. mosseae under WW (WW-AMF), iii. seedlings inoculated with F. mosseae under DS (DS + AMF), and iv. seedlings not inoculated with F. mosseae under DS (DS-AMF). Each treatment had four replicates, for a total of 16 pots, each pot having 3 seedlings. Variable determination. Root mycorrhizal colonization. Fifteen 1-cm-long root segments per seedling were cleared with 10% KOH solution at 95 °C for 1.5 h and then stained with 0.05% trypan blue in lactophenol for 5 min 36. Root mycorrhizal colonization was calculated as the percentage of mycorrhiza-infected root length against the total observed root length. Plant growth. Plant growth-related parameters such as plant height, stem diameter, and leaf number per plant were determined in all the seedlings. After harvest, the seedlings were divided into shoots and roots to measure their fresh weight. Root hairs. Eight 1.5-cm-long root-hair zones at 3 cm away from the root tip in the tap root and 1st-, 2nd-, and 3rd-order lateral roots were selected, fixed in 2.5% glutaraldehyde solution with 0.1 mM sodium cacodylate buffer (pH 7.4), dehydrated step by step with alcohol of increasing concentration, dried by critical-point drying, and finally sputter-coated with metal 12. Root IAAO activity. Root IAAO activity was measured using an ELISA assay (BYE97073, Shanghai Bangyi Biotechnology Co. Ltd, China) according to the user's guide. Quantitative RT-PCR. Frozen root samples were ground in liquid nitrogen. Root total RNA was extracted using an EASY spin Plus plant RNA kit (RN 38, Aidlab Biotechnologies Co. Ltd, China). After DNase treatment, total RNA was reverse transcribed to cDNA using the PrimeScript TM RT reagent kit (PK02006, Takara Bio. Inc, Japan). Quantitative real-time PCR (qRT-PCR) was performed using the Power SYBR Green PCR Master Mix kit (Applied Biosystems, CA, USA) on a 7900HT Fast Real-time PCR System (Applied Biosystems, CA, USA).
The amplification protocol consists of one cycle of 95 °C for 10 min, followed by 40 amplification cycles of 95 °C for 15 s, 56 °C for 30 s, and 72 °C for 30 s. The primers for selected auxin efflux carriers (PIN1, PIN3, and PIN4), auxin influx carriers (AUX1, LAX1, LAX2, LAX3, ABCB1, and ABCB19), auxin synthesized genes (TAA1, TAR2, YUC3, YUC4, YUC6, and YUC8), and root hair-specific expansin genes (EXPA4, EXPA5, and EXPA7) in the qRT-PCR were shown in Table 2, as per the design from Citrus sinensis cDNA sequences (http://citrus.hzau.edu. cn/orange). The relative fold change in gene expression was calculated by the 2 −△△Ct method 39 , where the reference gene β-actin was acted as the control. Statistical analysis. The data were statistically analyzed using one-way ANOVA (SAS, version 8.1). Data of root AM colonization were arcsine transformed prior to ANOVA analyses. The Duncan's multiple range tests were used to compare significant differences among treatments at P < 0.05. Table 2. Gene-specific primer sequences used in this work for qRT-PCR.
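The 2^-ΔΔCt normalization and the arcsine transform of the colonization data are both simple enough to spell out; the sketch below is a generic illustration of those two calculations under stated assumptions. The Ct values are invented for the example, β-actin is the reference gene named in the text, and the function names are our own, not the authors' code.

```python
import math

def relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the 2^-(ddCt) method, normalized to a
    reference gene such as beta-actin."""
    d_ct_treat = ct_target_treat - ct_ref_treat   # dCt in the treated (e.g., AM) sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # dCt in the control sample
    dd_ct = d_ct_treat - d_ct_ctrl
    return 2.0 ** (-dd_ct)

def arcsine_transform(proportion: float) -> float:
    """Arcsine square-root transform applied to colonization proportions before ANOVA."""
    return math.asin(math.sqrt(proportion))

# Hypothetical Ct values: the target gene amplifies ~2 cycles earlier relative to
# beta-actin in the treated sample than in the control, i.e. ~4-fold up-regulation.
print(relative_expression(22.0, 18.0, 25.0, 19.0))  # 4.0
print(arcsine_transform(0.556))                     # a colonization of 55.6%, in radians
```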
2018-04-03T02:32:36.234Z
2018-01-31T00:00:00.000
{ "year": 2018, "sha1": "02065588838854b3cc5ab08c2422fba86b9b85cb", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-20456-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3535bb2ab31e0e0de54623eb8822afe87e64a6df", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258851423
pes2o/s2orc
v3-fos-license
The Effect of Radial-Shear Rolling Deformation Processing on the Structure and Properties of Zr-2.5Nb Alloy The rheological properties of the Zr-2.5Nb alloy by the strain rate range of 0.5–15 s−1 and by the temperature range of 20–770 °C was studied. The dilatometric method for phase states temperature ranges was experimentally determined. A material properties database for computer FEM simulation regards the indicated temperature-velocity ranges were created. Using this database and DEFORM-3D FEM-softpack, the radial shear rolling complex process numerical simulation was carried out. The contributed conditions for the ultrafine-grained state alloy structure refinement were determined. Based on the simulation results, a full-scale experiment of Zr-2.5Nb rod rolling a on a radial-shear rolling mill RSP-14/40 was carried out. It takes in seven passes from a diameter of 37–20 mm with a total diameter reduction ε = 85%. According to this case simulation data, the total equivalent strain in the most processed peripheral zone 27.5 mm/mm was reached. Due to the complex vortex metal flow, the equivalent strain over the section distribution was uneven with a gradient reducing towards the axial zone. This fact should have a deep effect on the structure change. Changes and structure gradient by sample section EBSD mapping with 2 mm resolution were studied. The microhardness section gradient by the HV 0.5 method was also studied. The axial and central zones of the sample by the TEM method were studied. The rod section structure has an expressed gradient from the formed equiaxed ultrafine-grained (UFG) structure on a few outer millimeters of the peripheral section to the elongated rolling texture in the center of the bar. The work shows the possibility of processing with the gradient structure obtaining and enhanced properties for the Zr-2.5Nb alloy, and a database for this alloy FEM numerical simulations are also presents. Introduction The nuclear energy industry development is also associated with the extension of the service life of existing CANDU channel type reactors. At the same time, more energyintensive and competitive nuclear setup of this type are being developed [1][2][3]. The most important structural elements of channel reactors are pressure pipes [4][5][6][7], the integrity of its the normal operation and safety of nuclear power plants determines. The design life of The deformation non-monotonicity and metal flow turbulence is the main radialshear rolling feature. This nice phenomenon by the workpiece caused different zones trajectory-speed characteristics of plastic flow differences [31]. These features are shown in the Figure 1 scheme. This is why the most intense shear deformations in the sliding lines intersection zone are localized. Each small trajectory-oriented element of the outer layer is subjected to compression strain along the workpiece radius, compression strain in the flow direction along the helical trajectory, and stretching strain across the helical trajectory. The values and vector of all processes have a gradient along the workpiece radius. Metal flow currents have no sharp border and the fact that the additional grain refinement conditions are added [31]. The axial workpiece zone metal flow currents look like a normal pressing process by the all-round 3rolls pressure for workpiece causation. The metal is simply extruded from the central zone. The strain rate is also decreased, and the metal flow direction and the workpiece axis are matched. 
The metal structure should be stretched and form a texture. All of this theory is by S. Galkin and is described in detail in [31]. The goal of this work is to evaluate the applicability of the radial-shear rolling process for refining the structure of the Zr-2.5%Nb zirconium alloy in order to increase its performance. Materials, Methods, and Equipment One of the common zirconium-based alloys, the Zr-2.5% Nb alloy, was chosen for this research. This alloy is used as a material for CANDU pressure pipes, and can also be used as a nuclear fuel cladding tube material and its end plugs. There are few works on the severe plastic deformation of this alloy, and its radial shear rolling has not yet been carried out.
To understand the concept of metal flow and its plasticity features in regards to the temperature and speed conditions of the Zr-2.5% Nb alloy radial shear rolling, and taking into account the plastic deformation thermal effect, a plastometric study was carried out. The plastometric tests by the method of uniaxial compression of cylindrical specimens with a 10 mm working zone using the strain rates of 0.5-15 s −1 and the temperature range of 20-770 • C were implemented. Cylindrical samples Ø10 × 12 mm were cut from an annealed bar Ø37 mm. The grain size corresponded to six points according to ASTM E112. The ratio between the length and diameter of the sample is h/d = 1.2. An increase in this ratio on zirconium alloys is undesirable since it leads to sample collapse and loss of stability during upsetting. The continuous loading test conditions by the Gleeble 3800 plastometric unit by the «Pocket Jaw» module were carried out. The Gleeble 3800 setup makes it possible to simulate the various metal-forming processes conditions [32][33][34]. The temperature accuracy is about ±1 • C. The test temperature chromel-copel thermocouple wire in the sample central part was controlled. It was connected by the Gleeble 3800 "Therwocouple welder" kit. The thin graphite-based gaskets as a lubricant for the tests were used. The ISO-T model working heads of the test instrument were additionally lubricated by OKS255 graphite grease after each test. The structural transformations as applied to rolling conditions to study the dilatometric tests were carried out. Cylindrical specimens Ø5 × 10 mm were cut from an annealed bar Ø37 mm. The grain size corresponded to six points according to ASTM E112. These studies were carried out using a deformation dilatometer DIL805 A/D. The samples were heated to a temperature of T = 700-800 • C at a heating rate of 10-20 • C/min. The strain rate wasέ = 0.5 s −1 , and the strain degree ε = 0.5. The cooling rate after deformation was 0.5 • C/s. The Deform-3D program (SFTC, Columbus, OH, USA) for computer simulation by the finite element method (FEM) was carried out. The RSP-14/40 rolling mill from Częstochowa University of Technology [29,30] for real technical parameters for the radial shear rolling basic FEM-model creation was used. This rolling mill will be described in detail. The original 37 mm diameter workpiece 150 mm in length with the reductions indicated in Table 1 was rolled. Based on previous experience with this mill using other materials and its computer simulations, the reductions were determined [30,[35][36][37]. The workpiece material is Zr-2.5%Nb alloy. This material is not presented in the Deform database, and due to it, the plastometric studies results as a new database library were imported. As a result, a new library of the studied material was created for the Deform program. The heating temperature of 530 • C was chosen as the maximum possible to exclude the α→α + β phase transition; the roll speed of the RSP-10/30 mill was a nominal 100 rpm. The billet and rolls contact zone friction coefficient of 0.7 were taken as the Deform recommended value. The KOMPAS-3D (by Askon) CAD-softpack for the rolls geometry 3D model *.STL drawing was used. The workpiece material as an elastic-plastic type was chosen. The rolls as rigid bodies type was modeled. The computer simulation results verification on the Częstochowa University of Technology by the RSP-14/40 rolling mill (ZAO "ISTOK ML", Moscow, Russia) was carried out. 
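Flow curves from such compression tests are normally reduced to true stress and true strain before being imported into a FEM material database. The sketch below shows that standard conversion for the sample geometry quoted above; it is only a sketch that assumes ideal, friction-free uniform compression (the real Gleeble processing also corrects for barreling and the thermal effect), and the force value in the example is invented.

```python
import math

def true_stress_strain(force_N, height_mm, h0_mm=12.0, d0_mm=10.0):
    """Convert one force-displacement point from a uniaxial compression test into
    (true strain, true stress), assuming constant volume and a cylindrical sample
    with h0 = 12 mm and d0 = 10 mm, as in the Gleeble tests described above."""
    a0 = math.pi * (d0_mm / 2.0) ** 2           # initial cross-section, mm^2
    true_strain = math.log(h0_mm / height_mm)   # compressive true strain
    area = a0 * h0_mm / height_mm               # current area from volume constancy
    true_stress = force_N / area                # MPa, since N / mm^2 = MPa
    return true_strain, true_stress

# Example with an invented load: 40 kN at a compressed height of 9 mm.
eps, sigma = true_stress_strain(force_N=40_000.0, height_mm=9.0)
print(f"strain = {eps:.3f}, stress = {sigma:.0f} MPa")  # strain ~0.288, stress ~382 MPa
```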
The RSP-14/40 rolling mill was developed at NUST MiSIS. The rolling setup was designed for the hot deformation of round bars made from unusual, low-plasticity materials, compacted powders, and cast bars. The workpiece bar diameter range is 40-8 mm. The main unit has a specially rigid three-roll stand with specially calibrated rolls of 90 mm in diameter. The mill has two roll sets of different diameters for different ranges of rolled bars. The large roll set can roll 40-19 mm diameter bars. However, in practice, the rolling mill does not roll 40 mm, and the minimum possible diameter turned out to be 37 mm. Therefore, just such a rolling route was chosen in order to roll the billet to the maximum deformation from one heating. The same rolling mill was previously successfully used to obtain an equiaxed 300-700 nm ultrafine-grained structure in austenitic stainless steel AISI-321 and Zr-1%Nb alloys [35][36][37]. It has wide adjustments, high stand rigidity, and ease of operation. The RSP-14/40 rolling mill is shown in Figure 2. For experimental rolling, a 40 mm initial diameter Zr-2.5%Nb alloy rod was used. It was prepared by hot (650 °C) pressing with drawing μ = 25. Then the bar was machined to a diameter of 37 mm. The structure of the original bar was recrystallized with a grain size of six points according to ASTM E112. The original pressed rod mechanical properties, according to the manufacturer's data, are as follows: tensile strength = 520 MPa, yield strength = 390 MPa, elongation = 17.5%. To assess the mechanical properties by the HV 0.5 method, a Shimadzu HMV-G31ST microhardness tester with a Vickers indenter tip was used. Microhardness was measured with a load of 5 (4.903) N and a dwell time of 5 s, with a step of 0.25 mm. Microhardness was measured on smooth etched specimens after EBSD. The use of electrolytic etching guarantees the absence of a surface layer deformed by the abrasive and the most accurate measurements. Each point on the microhardness graphs is the average of five measurements. Microhardness was chosen due to the presence of a gradient structure, which makes it difficult to correctly use tensile tests. The heating temperature was set at 530 °C, and the heating of the initial rod with a diameter of 37 mm was carried out in a preheated muffle furnace for 40 min. An infrared thermal imaging camera aimed at the deformation zone was used to control the temperature regime. Sample cutting for all types of subsequent sample preparation was performed on a precision cutting machine Brilliant-220 (QATM) with intensive water cooling and a cutting speed of 15 μm/s to minimize deformation-temperature damage to the structure. Coarse-grained cut-off wheels were used at 500 RPM. Grinding and polishing were carried out with a Sapphire-520 machine (QATM). The scheme of cutting the bars for sample preparation and the places of analysis is shown schematically in Figure 3.
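As a reference for the HV 0.5 measurements described above, the sketch below shows how a Vickers hardness number is obtained from the indentation diagonal and how the five indents per point are averaged. The diagonal values in the example are invented for illustration only; the formula itself is the standard Vickers definition with the load in kgf and the diagonal in mm.

```python
def vickers_hardness(mean_diagonal_mm: float, load_kgf: float = 0.5) -> float:
    """Vickers hardness number from the mean indentation diagonal:
    HV = 1.8544 * F / d^2, with F in kgf and d in mm (HV 0.5 -> F = 0.5 kgf = 4.903 N)."""
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

def averaged_point(diagonals_mm):
    """Each plotted microhardness point is the average of five indents."""
    values = [vickers_hardness(d) for d in diagonals_mm]
    return sum(values) / len(values)

# Hypothetical diagonals of five indents, in mm (~0.066 mm corresponds to roughly HV 213).
print(round(averaged_point([0.0660, 0.0655, 0.0662, 0.0658, 0.0661]), 1))
```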
The microstructure was studied by transmission electron microscopy (TEM) using bright-field and electron diffraction modes on a JEM-1400 Plus microscope (Jeol Ltd., Tokyo, Japan) at an accelerating voltage of 120 kV and magnifications of ×8000-×35,000. The structure gradient was studied on a CrossBeam-540 scanning electron microscope (SEM) (Carl Zeiss, Oberkochen, Germany) at 20 kV using a NordlyssNano EBSD detector (Oxford Instruments, Abingdon, UK). The TEM sample preparation was carried out by the jet thinning method. A 10 mm long sample was cut from the central part. It was then cut in half, and several longitudinal thin (0.3 mm) sections of the axial region were obtained. Thus, the longitudinal section better suited to characterizing the radial-shear rolling microstructure was used. Then, by electrolytic jet thinning, the final TEM samples were prepared on a TenuPol-5 unit (Struers, Copenhagen, Denmark) with A3 electrolyte (600 mL methanol, 360 mL butylcellosolve, 60 mL perchloric acid). Some of the samples were used to test sample preparation modes. For electrolytic polishing of TEM samples, longitudinal plates 0.3 mm thick were cut from the central part of the rod, and 3 mm discs were knocked out of them with a disc punch tool from Gatan (USA); these blanks were then thinned to form a hole at a voltage of 20 V. For SEM/EBSD, thicker (2 mm) plates were also cut. The cutting scheme is shown in detail in the experimental section.
Sample preparation of the zirconium samples for SEM/EBSD was also carried out by electrolytic polishing using a LectroPol-5 setup (Struers, Denmark) in a solution of the already mentioned A3 electrolyte. The Zr-2.5%Nb Alloy Rheological Properties Based on the results of the plastometric tests on the Gleeble 3800 plastometer, the stress-strain flow curves of the Zr-2.5%Nb alloy were plotted for the 0.5-15 s−1 strain rate range and the 20-770 °C temperature range. To construct each curve, three tests were carried out; a total of 30 tests were performed. The flow curves are shown in Figure 4. It can be observed that the deformation resistance decreased by approximately 80% as a result of the change in temperature from 20 °C to 770 °C, whereas the opposite phenomenon occurred due to an increase in the rate of deformation from 0.5 to 15 s−1. At a temperature of 20 °C, the increase in deformation resistance is no more than 5%. At 770 °C, the increase in resistance to deformation is approximately ~10%. This difference in the influence of the strain rate can be explained by the thermal effect of plastic deformation.
The determination of the expansion coefficient made it possible to distinguish three temperature regions of the alloy under study, which clearly differ from each other in the slope of the curve sections on the "temperature-length" diagram ( Figure 5). When the samples were cooled from T = 770 • C to T = 650 • C, the coefficient of expansion of the Zr-2.5%Nb alloy ranged from -0.4 to -0.2. The negative value of the expansion coefficient is explained by the fact that the metal contracts during cooling. However, with further cooling to T = 530 • C, the expansion coefficient is already -0.03, that is, the process of narrowing the samples slows down. Most likely, in the temperature range of 530-650 • C, the rearrangement of the lattice of the Zr-2.5% Nb alloy from bcc to hcp ends [38]. Since in the hcp lattice, the ratio of the lengths of the faces perpendicular to each other is greater than in the bcc lattice, the expansion in the alloy under study during cooling can be explained by this rearrangement. With further cooling to T = 200 • C, the softening coefficient was in the range from −0.03 to +0.03. expansion of the Zr-2.5%Nb alloy ranged from -0.4 to -0.2. The negative value of the expansion coefficient is explained by the fact that the metal contracts during cooling. However, with further cooling to T = 530 °C, the expansion coefficient is already -0.03, that is, the process of narrowing the samples slows down. Most likely, in the temperature range of 530-650 °C, the rearrangement of the lattice of the Zr-2.5% Nb alloy from bcc to hcp ends [38]. Since in the hcp lattice, the ratio of the lengths of the faces perpendicular to each other is greater than in the bcc lattice, the expansion in the alloy under study during cooling can be explained by this rearrangement. With further cooling to T = 200 °C, the softening coefficient was in the range from −0.03 to +0.03. The Zr-2.5%Nb Alloy Radial-Shear Rolling Computer Simulation The Deform software (SFTC) for FEM simulation was used. The initial 37 mm diameter billet and 150 mm length according to Table 1 specified compressions to 20 mm diameter was rolled. based on a previous study of Zr-1Nb alloy [30] compressions bypass were determined. The Zr-2.5%Nb Alloy Radial-Shear Rolling Computer Simulation The Deform software (SFTC) for FEM simulation was used. The initial 37 mm diameter billet and 150 mm length according to Table 1 specified compressions to 20 mm diameter was rolled. based on a previous study of Zr-1Nb alloy [30] compressions bypass were determined. To analyze the metal processing level during deformation, the parameter "equivalent strain" is usually used. Since radial-shear rolling is a cross-type of rolling, it is advisable to study the equivalent strain in the cross-section of the workpiece-it will allow evaluating not only the numerical values of the parameter but also the nature of its distribution over the cross-section during deformation. When analyzing the equivalent strain ( Figure 6), it was found that the distribution of this parameter has a ring-type-in all cross sections there are clear ring zones of strain development. At first pass, when compression was 2 mm per pass, the difference of strain values between the center and surface has a smooth gradient view. After the last pass with a 2 mm compression (pass 4), in the axial zone the strain level is approximately 9.3, in the surface zone, where the shear deformation maximum effect is detected, the approximate strain level is 15 (Figure 7). 
When the compression level was increased up to 3 mm per pass, this led to an increase in the strain level difference between the center and the surface. After two such passes (pass 6), the strain level in the axial zone is approximately 17.5; in the surface zone, where the maximum shear deformation effect is detected, the strain level is approximately 23.5. At the last, 7th pass, the billet was deformed from 23 mm to 20 mm. For this roll construction, such a deformation mode led to intensive processing over the whole cross-section. Further deformation at this mill is impossible, as the rolls start to touch each other. In this case, the metal during rolling mainly underwent compressive deformation, and the process is more like extrusion. The result of this deformation stage is a sharp decrease in the difference of strain values over the cross-section. Therefore, in the axial zone the strain level is approximately 25.5, and in the surface zone, where the maximum effect of shear deformation is observed, the strain level is approximately 27.5. Based on the computer simulation results, recommendations were developed for rolling bars from the Zr-2.5%Nb alloy to ensure a high workout of the structure without metal crushing. The Zr-2.5%Nb Alloy Radial Shear Rolling and Its Microstructure Changes The experimental rolling on the RSP-14/40 rolling mill at Częstochowa University of Technology was carried out within the setup's limiting mechanical and technological conditions. During the experiment, roll jamming occurred in several cases. After each rolling pass, the bar was quickly removed and placed in a furnace to conserve heat.
The rolling was conducted with a diameter reduction of 1.5-3 mm per pass down to the final diameter of 20 mm. The 20 mm final workpiece diameter is the technological limit for the large rolls. Replacing and installing the smaller-diameter roll set requires a lot of time; for this reason, rolling to a smaller diameter within a single heating is not possible. A short-term surface temperature increase of 50-150 °C was detected as the thermal effect of plastic deformation. The final rolled workpiece was air cooled.

Comparison of the shape change of the rolled experimental bar with the previously obtained FEM model shows good convergence of the results and the same shape of the front end, with a depression due to the vortex flow of metal in the bar. The comparison is shown in Figure 8. In addition, the annular zones of strain distribution over the cross-section of the rod correlate with the resulting structure gradient: the greater the deformation value (in the simulation), the smaller the grain (in the laboratory experiment). The verification also included comparison of the thermal effect measured at the surface of the bar with the data obtained in the simulation; the model predicts the temperature on the bar surface with an error of no more than 10%.
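That 10% figure is simply the relative deviation between paired pyrometer readings and the simulated surface temperatures. A minimal check of this kind is sketched below; the paired values are invented placeholders, and only the 10% acceptance criterion comes from the text.

```python
import numpy as np

# Hypothetical paired surface temperatures (°C): pyrometer readings vs. the
# FEM prediction at the same instants of a pass. Placeholder values only;
# the <=10% criterion is the figure quoted above.
measured  = np.array([545.0, 610.0, 660.0, 680.0, 640.0])
simulated = np.array([530.0, 596.0, 648.0, 701.0, 655.0])

rel_error = np.abs(simulated - measured) / measured
print(f"max relative error:  {rel_error.max():.1%}")
print(f"mean relative error: {rel_error.mean():.1%}")
print("within the 10% criterion:", bool((rel_error <= 0.10).all()))
```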
The microstructure study by TEM is shown in Figures 9 and 10. TEM images of the peripheral zone (Figure 9, left) show equiaxed, dislocation-saturated fine grains ranging in size from 700 to 1100 nm. A more detailed picture of the complex dislocation structure is shown in Figure 11. The electron diffraction pattern shows the absence of an oriented texture and indicates the dominance of high-angle grain boundaries. This type of structure is the most favourable for reaching high property levels.

The structure of the axial (central) zone, presented in Figure 9 (right), is also significant. Instead of large, randomly oriented grains, a mixture of long, narrow, strongly deformed elongated grains was formed after radial shear rolling. Sharp, straight, parallel boundaries are clearly visible and reveal a highly deformed texture with a single direction. The orientation of the grains here is uniform and corresponds to the rolling direction.

To characterize the structure gradient observed by TEM, EBSD mapping was used with a step of 2 mm. The original EBSD images and their misorientation data can be found in the Supplementary Materials, Figures S1-S13. The captured maps were recognized and statistically processed. The main indicators taken from the maps are the average grain size (bars) and the average aspect ratio (red graph). The average grain size does not sufficiently characterize the structure gradient, whereas the grain aspect ratio provides much more information. The grain aspect ratio was obtained by dividing the smaller side by the longer side, which gives values in the range 0-1: a value close to 1 means the grain is close to a circular shape, and a value close to 0 means the grain is close to a strip shape. The average value changes from 0.5 in the peripheral zone to 0.3 in the axial zone. It is safe to say that the outer 2-3 mm of the 10 mm rod radius has an equiaxed UFG structure with high-angle boundaries. Then, approaching the center of the bar, the predominant orientation of the grains changes and their shape becomes more and more elongated. At the same time, on the large maps, zones with a texture similar to that shown in Figure 9 (right) are interspersed with individual large grains that look recrystallized, or with small clusters of relatively equiaxed grains. This can be seen from the EBSD thumbnails in Figure 11.
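The aspect-ratio statistic described above is straightforward to compute from the per-grain dimensions reported by the EBSD map-processing software. The sketch below uses a handful of hypothetical grain dimensions purely to illustrate the definition (smaller side divided by longer side).

```python
import numpy as np

# Hypothetical per-grain dimensions (shorter and longer sides, in µm) taken
# from one recognized EBSD map; real values come from the map-processing step.
short_side = np.array([0.6, 0.8, 0.5, 1.0, 0.7, 0.9])
long_side  = np.array([1.2, 1.0, 1.8, 1.4, 2.4, 1.1])

# Aspect ratio as defined above: smaller side / longer side, in the range (0, 1].
aspect_ratio = np.minimum(short_side, long_side) / np.maximum(short_side, long_side)

# A simple average grain size (mean of the two sides per grain).
grain_size = 0.5 * (short_side + long_side)

print(f"average grain size:   {grain_size.mean():.2f} µm")
print(f"average aspect ratio: {aspect_ratio.mean():.2f}")  # ~0.5 equiaxed, ~0.3 strip-like
```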
An important conclusion should also be drawn: under the conditions of formation of a non-equilibrium gradient structure, detailed EBSD mapping at different scales should be considered the main characterization method, since TEM analysis is too local and, as can be seen from the correlation of the TEM and EBSD results for the axial zone, the results may not match exactly. TEM should be regarded as an auxiliary method aimed at a more detailed study of one or another of the identified structure types.

Another method for characterizing the gradient structure is the measurement of microhardness over the cross-section, shown in Figure 12. Analysis of the results confirms the gradient nature of structure formation over the cross-section of the bar. The structure of the Ø20 mm bars made of the Zr-2.5%Nb alloy is not typical of the structure obtained by the traditional technology using pilger rolling mills.

In the near-surface layers (at a distance of 10 mm from the center), a predominantly recrystallized structure with an average grain size of 0.75 µm is noted. Such a structure was obtained as a result of significant local shear deformations at the rolling start temperature of T ≈ 530 °C, taking into account the thermal effect of plastic deformation of up to ΔT ≈ 150 °C and air cooling after rolling. In fact, the temperature on the metal surface at the time of rolling increased to T ≈ 680 °C for a short time (no more than 10 s) and then, owing to the small cross-section of the metal, the bar cooled in air to T = 300 °C within 35 s. The average cooling rate is Vcool ≈ 10 °C/s according to the results of the pyrometric studies. In view of the significant local work hardening produced by radial shear rolling, this short-term thermal effect was sufficient for the formation of a UFG recrystallized structure through dynamic recrystallization.

At a distance of 7 mm from the center, a predominantly recrystallized structure is also observed; however, the average grain size is at the level of 1.0 µm. Such a structural state is probably associated with a longer thermal exposure due to a lower cooling rate.

At a distance of 4 mm from the center, and in the center itself, the structure acquires a state similar to that after hot deformation of the Zr-2.5%Nb alloy by hot pressing at T = 650-700 °C, in which turbulent movement of the metal predominates in the presence of local regions with tensile stresses. Some individual grains have an elongated shape. The pressing process is longer in time and is characterized by lower strain rates. The presence in the metal of a temperature field of T ≈ 500-680 °C and a stress-strain state unevenly distributed over the cross-section of the rod explains its inhomogeneous structural state.
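As a quick consistency check, the average cooling rate quoted above can be recomputed directly from the reported surface temperatures and cooling time; only the numbers stated in the text are used.

```python
# Average cooling rate from the figures quoted above: the surface cools
# from roughly 680 °C to 300 °C in about 35 s after the pass.
t_peak_c = 680.0       # °C, short-term peak surface temperature during rolling
t_final_c = 300.0      # °C, temperature reached after air cooling
cooling_time_s = 35.0  # s

v_cool = (t_peak_c - t_final_c) / cooling_time_s
print(f"average cooling rate: {v_cool:.1f} °C/s")  # ≈ 10.9 °C/s, i.e. ~10 °C/s as reported
```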
The decrease in the hardness value in the direction from the center to the periphery of the bar section (over the 10 mm radius) is associated with the degree of recrystallization. According to the results of the study, the degree of recrystallization in the peripheral regions is higher than in the center, regardless of the grain size.

Conclusions

Based on the results of the plastometric and dilatometric studies, computer simulation was used to substantiate the thermomechanical parameters of radial shear rolling of Zr-2.5%Nb zirconium alloy bars, and the corresponding calculations were carried out. It is noted that the rolling temperature should not exceed T = 530 °C so that the deformation process takes place advantageously in the single-phase α-region. It is advisable to apply single reductions per pass in the range from 10 to 25% in order to provide an ultrafine-grained structure in the near-surface layers and to prevent metal destruction.

Verification was performed on the RSP-14/40 radial-shear rolling mill. A round bar was processed along the 37 mm → 20 mm route with a total reduction of about ε = 85%. An equiaxed ultrafine-grained structure with a grain size of 700-800 nm was obtained.
The structure formed over the cross-section of the sample has a gradient character. The zone occupied by the UFG structure formed on the periphery of the sample, whereas in the center an oriented rolling texture with an admixture of 1.0-1.5 µm grains was formed. The research demonstrated the applicability of radial shear rolling to the deformation of the Zr-2.5%Nb alloy for producing a UFG structure. It is advisable to consider implementing radial shear rolling mainly towards the end of the technological cycle of manufacturing products, in order to preserve the achieved effect in the finished products. The laboratory experiment and the computer simulation also demonstrated high convergence of the results.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ma16103873/s1, Figure S1: EBSD 0 mm distance from the center; Figure S2: EBSD 2 mm distance from the center; Figure S3: EBSD 4 mm distance from the center; Figure S4: EBSD 6 mm distance from the center; Figure S5: EBSD 8 mm distance from the center; Figure S6: EBSD 10 mm distance from the center; Figure S7: IPF colouring; Figure S8: Misorientation 0 mm distance from the center; Figure S9: Misorientation 2 mm distance from the center; Figure S10: Misorientation 4 mm distance from the center; Figure S11: Misorientation 6 mm distance from the center; Figure S12: Misorientation 8 mm distance from the center; Figure S13: Misorientation 10 mm distance from the center.

Funding: The state budget-funded grant No. AP08052429 "Development of production technology and study of prospects of ultrafine-grained zirconium with improved mechanical properties and enhanced radiation resistance in nuclear engineering" of the program "Grant funding for young scientists on scientific and (or) scientific and technical projects for 2020-2022" (Customer: the Ministry of Education and Science of the Republic of Kazakhstan).

Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable.
2023-05-24T15:17:53.210Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "4b3baac5749230c6403fd799a564816d33b74989", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ma16103873", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c1eb3bdd081998c4a4522fa2d06e6c17a9abbe73", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
119822868
pes2o/s2orc
v3-fos-license
Design of magnetic materials: Co$_2$Cr$_{1-x}$Fe$_{x}$Al Doped Heusler compounds Co$_2$Cr$_{1-x}$Fe$_{x}$Al with varying Cr to Fe ratio $x$ were investigated experimentally and theoretically. The electronic structure of the ordered, doped Heusler compound Co$_2$Cr$_{1-x}$Fe$_{x}$Al ($x=n/4, n=0,1,2,3,4)$ was calculated using different types of band structure calculations. The ordered compounds turned out to be ferromagnetic with small Al magnetic moment being aligned anti-parallel to the 3d transition metal moments. All compounds show a gap around the Fermi-energy in the minority bands. The pure compounds exhibit an indirect minority gap, whereas the ordered, doped compounds exhibit a direct gap. Magnetic circular dichroism (MCD) in X-ray absorption spectra was measured at the $L_{2,3}$ edges of Co, Fe, and Cr of the pure compounds and the $x=0.4$ alloy in order to determine element specific magnetic moments. Calculations and measurements show an increase of the magnetic moments with increasing iron content. The experimentally observed reduction of the magnetic moment of Cr can be explained by Co-Cr site-disorder. The presence of the gap in the minority bands of Co$_2$CrAl can be attributed to the occurrence of pure Co$_2$ and mixed CrAl (001)-planes in the $L2_1$ structure. It is retained in structures with different order of the CrAl planes but vanishes in the $X$-structure with alternating CoCr and CoAl planes. Introduction A great scientific interest is attracted by materials with a complete spin polarization [1].Such materials, being a metal for spin up and a semiconductor (or insulator) for spin down electrons, are called half-metallic ferromagnets [2,3] (HMF).Heusler compounds have been considered potential candidates to show this property [2].Theoretical calculations predicted an energy gap for minority electrons for the half-Heusler compound NiMnSb [2,4] which, however, has been controversially discussed [5][6][7].Similarly, a HMF like behaviour was found by Plogmann et al [8] for the Cobalt based Heusler alloy Co 2 MnSn. Heusler compounds belong to a group of ternary intermetallics with the stoichiometric composition X 2 YZ ordered in an L2 1 -type structure, many of which are ferromagnetic [9].X and Y are transition metals and Z is usually a main group element.Y may also be replaced by a rare earth element.The cubic structure consists of four interpenetrating fcc lattices.The two fcc sub-lattices of the X atoms combine to a simple cubic sub-lattice.Remarkably, the prototype Cu 2 MnAl is a ferromagnet even though none of its constituents is one [10].The L2 1 structure of the Heusler compounds is shown in figure 1a. The large variety of possible compositions of the Heusler compounds allows easily to produce materials with predictable magnetic properties.The easiest way to compose new materials is the exchange of one or more of the elements X, Y, or Z.This is indeed widely used in experiments and theory.See Refs.[16,[22][23][24][25][26][27][28] for examples on Co 2 containing Heusler compounds.However, the differences between the materials are rather rough.A better fine-tuning of magnetic properties may be obtained if substituting one or another constituent only partially. 
Such a way to design new materials is possible using a deliberate substitution of elements.Most oftenly the main-group element is kept fixed because the magnetic properties are mainly governed by the transition metal constituents.This leads to alloys of the type X 2−x Y 1+x Z, (X 1−x X' x ) 2 YZ or X 2 (Y 1−x Y' x )Z (x = 0, .., 1).The first type is still a ternary alloy whereas the second and third type result in quaternary alloys.Co based alloys of the first two types have been investigated by various groups [12,19,[29][30][31][32][33]. This work focuses on quaternary alloys of the third type.The Cobalt-Aluminium based alloy of the X 2 (Y 1−x Y' x )Z type with Y=Cr and Y'=Fe is particularly of interest as base for materials design.This arises from the fact that the pure (x = 0 or 1) compounds exhibit the same lattice parameter within 0.1%.Therefore, substituting partially one element by the other will lead to a material with the same lattice parameter but changed electronic and magnetic properties.Starting from the pure Cr containing compound, the partial substitution by Fe may be seen as d-electron doping. Co 2 Cr 0.6 Fe 0.4 Al is of special interest because a relatively high magneto-resistance ratio of up to 30% was found in powder samples in a small magnetic field of 0.1T [34,35].Thin films of the compound were successfully grown by several groups [36][37][38][39].A magneto-resistance ratio of 26.5% [40] (at 5K) and 19% [41] (at room temperature) was found for a tunnelling magneto-resistance (TMR) element of the same compound.Very recently, Marukame et al [42] reported a TMR ratio of 74% at 55K for a Co 2 Cr 0.6 Fe 0.4 Al-MgO-CoFe magnetic tunnel junction.A spin polarization of only less than 49% was found for poly-crystalline samples by means of Andreev reflections [43].The observation of an incomplete spin polarization may not only be caused by the model used to interpret the data [43,44] but also by the properties of the sample.Clifford et al [45] reported recently a spin polarisation of 81% in point contacts of Co 2 Cr 0.6 Fe 0.4 Al. For the purpose of the present study, doped Heusler alloys Co 2 Cr 1−x Fe x Al were prepared by arc-melting under an argon atmosphere.The resulting specimens were dense poly-crystalline ingots.Structural properties were measured using X-ray diffraction as a standard method.The cubic structure with a lattice constant of about 5.73 Åwas confirmed for all samples.Flat discs (8mm diameter by 1mm thickness) were cut from the ingots.The discs were mechanically polished for spectroscopic experiments. 
Field dependent magnetic properties were measured by SQUID -magnetometry (temperature: 4K to 300K) and by the magneto-optical Kerr effect (MOKE) at room temperature.The remanent magnetization was less than 10% of the saturation magnetization pointing on a soft magnetic material.Saturation was achieved for external fields above 0.2T at 300K.Especially the Co 2 CrAl samples showed large differences in the total moment varying from 1µ B to 3µ B per formula unit.This depends mostly on the post-processing of the samples like annealing followed either by quenching or slow cooling at different rates.In some cases, X-ray diffraction exhibited pronounced super-structures pointing on a tetragonal distortion of the unit cell.It should be noted that some types of disorder cannot be detected easily by X-ray powder diffraction as the scattering coefficients of Co and Cr are very similar.The same applies for neutron diffraction.Due to the nearly equal scattering length of Cr and Al, in particular, it is not possible to distinguish between ordered L2 1 and disordered B 2 structures.Therefore, we will use a detailed analysis of the magnetic properties to gain information about structural disorder of the samples.It should be noted that a mixing of Cr and Fe atoms in doped compounds will be hardly detectable by X-ray diffraction.This is caused by the very similar scattering coefficients of the constituting elements.X-ray magnetic circular dichroism (MCD) in soft X-ray absorption spectroscopy was performed at the First Dragon beamline of NSRRC (Hsinchu, Taiwan).The MCD measurements at the Cr, Fe and Co L 2,3 absorption edges for Co 2 Cr 1−x Fe x Al (x = 0, 0.4, and 1) were carried out in order to investigate element specific magnetic properties and compare them with theoretical predictions.A Co 2 CrAl sample free of superstructure but with too low magnetic moment was selected for the MCD measurements in order to explain the large deviation from the expected value of 3µ B per formula unit.Co 2 Cr 0.6 Fe 0.4 Al was selected for the MCD measurements as this composition has shown the largest effect in measurements of the magneto resistance. More details about the experiment and the data analysis are reported in Ref. [35,46].The present work reports on calculations of the electronic and magnetic properties of ordered Heusler compounds of the type X 2 (Y (1−i/4) Y' i/4 )Z.The calculated properties are compared to experimental values.Deviations from the L2 1 structure are discussed on hand of ordered structures.Random alloys of the X 2 (Y (1−x) Y' x )Z type with non-rational values of x as well as random disorder (for examples see references [47,48]) will not be discussed here. Calculational Details Self-consistent band structure calculations were carried out using the scalarrelativistic full potential linearised augmented plane wave method (FLAPW) provided by Blaha et al [49,50] (Wien2k).The exchange-correlation functional was taken within the generalized gradient approximation (GGA) in the parametrization of Perdew et al [51].For comparison, calculations were also performed using the linear muffintin orbital (LMTO) method provided by Savrasov [52] (LMTART 6.5) on different levels of sophistication from simple atomic sphere approximation (LMTO-ASA) to full potential plane wave representation (FP-LMTO-PLW).A 20 × 20 × 20 k-point mesh was used for the integration in cubic systems. 
The properties of the pure Cr- or Fe-containing compounds were calculated in Fm3m symmetry using the experimental lattice parameter (a = 10.822 a0B, with a0B = 0.529177 Å) as determined by X-ray powder diffraction. All muffin-tin radii were set to nearly touching spheres with rMT = 2.343 a0B in both full-potential methods. The overlapping spheres were set to rMT = 2.664 a0B for the ASA calculations. The full formula sum of the cubic cell is X8Y4Z4, with X = Co, Y = Cr or Fe and Z = Al, and is reduced to X2YZ = Co2YAl. Exchanging Y and Z indeed leads to identical structures. (See figure 1 and table 1 for the positions of the atoms.)

The calculation of mixed random alloys is not straightforward in either calculational method (FLAPW or LMTO). However, substituting some Cr atoms of the L21 structure by Fe leads in certain cases to ordered structures that can easily be used for calculations. Ordered, mixed compounds may have the general formula sum X8(Y(1−x)Y'x)4Z4 with Y = Cr and Y' = Fe. These structures have integer occupation of Y and Y' if x = i/4 with i = 1, 2, 3.

Start with the Cr atoms occupying the corners of the cube and the centres of the faces. The Al atoms are located at the middle of the cube edges (see figure 1a). Replacing the Cr atom at (0,0,0) by Fe leads to the structure with x = 1/4 (see figure 1b). The same structure may also be found by starting with Cr and Al exchanged and then replacing the Cr atom at (1/2,1/2,1/2) by Fe. The symmetry of these structures is again cubic but reduced to Pm3m. The only difference is that the base atoms are shifted. Simply exchanging the Cr atoms of this structure by Fe leads to the structure with x = 3/4. The corresponding formula sums are Co8Cr3FeAl4 and Co8CrFe3Al4.

Again, start with the Cr atoms occupying the corners and face centres of the cube. Replace two of the face Cr atoms, say at (0,1/2,1/2) and (1/2,0,1/2) and those at the opposite faces, by Fe. The result is the structure with x = 1/2. This structure is initially cubic but can be reduced to tetragonal symmetry (P4/mmm) with the formula sum Co4CrFeAl2.

The three different structures are illustrated in figure 1. The cell shown for x = 1/2 has a reduced, tetragonal symmetry. The z-axis may be chosen such that the axis coincides with one of the cubic axes of the initial structure. The symmetry and the lattice sites of the structures are summarized in table 1. Other ordered structures are found from larger elementary cells. The cubic cell doubled in all three directions has the overall formula sum X64(Y(1−x)Y'x)32Z32. The special case with x = 13/32 is very close to the compound Co2Cr0.6Fe0.4Al.

Results and Discussion

The electronic structure of the pure and doped compounds will be discussed in the following. First, the band structure and the density of states of the ordered compounds are presented. This is followed by a more specific discussion of the magnetic properties on the basis of measured and calculated magnetic moments.
A structural optimization was performed for Co 2 CrAl and Co 2 FeAl using FLAPW in order to verify using experimental lattice parameter.The energy minima were found to appear at lattice parameter being less than 0.5% smaller compared to the experimental values, in both materials.The calculated bulk moduli were 217 GPa and 210 GPa for the Cr and the Fe containing compounds, respectively.None of the results discussed below changes significantly if using the optimized lattice parameter instead of the experimental one.In particular, the overall spin moments stay the same and the half-metallic behavior of the ordered compounds retains.Very small deviations appear for elemental resolved values.Those are already sensitive to the setting of the r MT and the number of k-points used for integration, as is well known.The only small differences in the observed and optimized structures do not allow to notice any deviation from Vegards law for the mixed compounds. Band structure and density of states The self-consistent FLAPW band structure of Co 2 CrAl is shown in figure 2. The energy scale is referenced to the Fermi-energy (ǫ F ).The typical Heusler gap is located at about 6eV binding energy.It separates the low lying s bands from bands of predominately d character.These low lying s bands emerge mainly from the main group element, here Al.This gap is very small in the Al containing compounds.Much larger gaps are found for example in Sn containing compounds like Co 2 TiSn [53] or the half-Heusler NiMnSb [54]. From the spin resolved bands, it is seen that the majority bands cross or touch the Fermi-energy (ǫ F ) in rather all directions of high symmetry.On the other hand, the minority bands exhibit a gap around ǫ F thus confirming a HMF character.For Co 2 CrAl, the width of the gap is given by the energies of the highest occupied band at the Γ-point and the lowest unoccupied band at the Γ or X-point.The smaller value is found between Γ and X, thus it is an indirect gap.It should be noted that the direct gap at the Γ-point is only 60meV wider.Therefore, a small change in the parameters of the calculation may already change the character of the gap from indirect to direct.Indeed, some LMTO calculations resulted in a direct gap, most probably just for that reason. We will restrict the following comparison of the doped compounds to the ∆direction being parallel to [001].The ∆-direction possesses in all cases C 4v symmetry.It has the advantage that the compound with x = 1/2 can be compared directly to the others even so it is calculated for tetragonal symmetry where the corresponding Λ-direction is between Γ and Z. The ∆-direction is perpendicular to the Co 2 (100)-planes.As will turn out later, just the ∆-direction plays the important role for the understanding of the HMF character and magnetic properties of Heusler compounds.This role of the ∆-direction was also pointed out by Öĝüt and Rabe [55]. The band structures in ∆-direction of the pure (x = 0, 1) and the doped (x = 1/2) compounds are displayed in figure 3 for energies above the Heusler gap.In general, the doped compounds exhibit much more bands compared to the pure ones as a result of the lowered symmetry.Therefore, results are shown only for the mixed compound with equal Fe and Cr concentration. 
Compare the majority bands of the two pure compounds. At first sight, the Fermi energy is just higher in the Fe case compared to Cr, as expected from the larger number of d electrons. A closer look reveals more detailed differences. The indirect gap of the Δ-direction (clearly seen for the Cr-based compound) is not only shifted below εF but nearly closes for the Fe-based compound. This gap is also nearly closed in the majority bands of the mixed compound with x = 1/2, as well as in those with x = 1/4 and 3/4 (not shown here). This observation calls a rigid-band model into question; in that case the bands would simply be filled with an increasing number of d electrons, leaving the shape of the bands unchanged.

More interesting is the behaviour of the minority bands, as those determine the HMF character of the compounds. Comparing again the Cr- and the Fe-based compounds, one finds that the energies of the states at Γ are nearly the same below εF. The shapes of the bands close to Γ are similar, too. The situation is different at X, where the unoccupied states are shifted toward εF in Co2FeAl compared to Co2CrAl. It is worthwhile to note that the first unoccupied minority band of Co2FeAl just touches εF at X. Therefore, any temperature above 0 K will immediately destroy the HMF gap due to the smearing of the DOS around εF (for additional temperature effects destroying the minority gap see, e.g., [56,57]). Chioncel et al [58] reported for NiMnSb the occurrence of non-quasiparticle states just below the minority conduction band. A similar effect would immediately destroy the HMF character if it appeared in Co2FeAl, too. The band structure of Co2MnAl shows a behaviour at Γ similar to that of Co2FeAl. Here the minority bands even cross the Fermi energy slightly, as was found in calculations performed for comparison with the iso-electronic compound Co2Cr1/2Fe1/2Al. Even if accounting for small numerical deviations while calculating εF for the two compounds, they may not be good candidates for spin-injection devices.

The mixed compounds are described by P lattices. Therefore, the Brillouin zone of these compounds is generally smaller compared to the F lattice. This results in a seeming back-folding of bands from the larger F Brillouin zone into a smaller one. This effect is accompanied by some additional splitting (removed degeneracies) at points of high symmetry. Due to the manifold of bands in the mixed compounds, it is not easy to compare the results directly; therefore we concentrate on the width of the gap in the minority bands. This gap is mainly characterized by the bands in the Δ-direction, as found from the band structure for all directions of high symmetry (not shown here). More specifically, it is given by the energies at the Γ and X points of the Brillouin zone.

The width of the gap in the minority bands is shown in figure 4. The direct band gap at the Γ-point becomes successively smaller with increasing iron content x and ranges from 750 meV at x = 0 to 110 meV at x = 1. The direct gap of Co2CrAl is only 60 meV wider than the indirect one between Γ and X. The direct gap at Γ is much wider in Co2FeAl; therefore this compound is characterized by the indirect gap only.
The character of the gap changes from indirect to direct if comparing pure and mixed compounds, respectively.This change of the character of the gap in the ∆direction is a consequence of the smaller Brillouin zone in the mixed compounds that leads to a so called back-folding of bands. The total density of states (DOS) is shown in figure 5 for varying iron content x.The gap at the Fermi-energy is clearly seen in the DOS of the minority states for all compounds.The total DOS shows also that the Heusler-gap at about 6eV binding energy is nearly closed. The majority DOS at the Fermi energy decreases with increasing iron concentration x.The density of majority electrons at ǫ F is a crucial point for spectroscopic methods investigating the spin polarization, like spin-resolved photoemission.A complete spin polarization may be only detectable if there is a high majority density.The same may be true for spin injection systems where one is interested in a high efficiency. It is also seen that the minority DOS seems to be much less effected by the Fe doping compared to the majority DOS.Mainly the unoccupied part of the DOS above ǫ F changes its shape but not the occupied part.In summary, it is found that doping of the compound with Fe changes mainly the occupied majority and the unoccupied minority DOS.Again it is seen that doping by iron results not just simply in a shift of the DOS as would be expected from a rigid band model.Majority and minority densities are altered in a different way. More details of the change in the DOS and electronic structure can be extracted if analysing the partial DOS (PDOS), that is the atom type resolved density of states.The PDOS of the pure compounds is compared in figure 6 to the PDOS of the mixed compound with equal Cr and Fe content (x = 1/2). From figure 6, it is seen that the high majority DOS at the Fermi-energy emerges from Cr. Both, Co and Fe exhibit only a small majority PDOS at ǫ F .Overall, the change of the majority DOS of Co 2 Cr 1−x Fe x Al around ǫ F can be clearly attributed to the increasing amount of iron with respect to chromium.The minimum in the minority DOS around ǫ F is mainly restricted by the shape of the Co PDOS.This indicates that the HMF like behaviour is mainly characterized by Co.The steep increase of the minority PDOS of Cr and Fe is mainly located in the unoccupied part above ǫ F .Doping with Fe does not only change the total DOS but also the PDOS of Co and Cr.In particular, the slight shift of the Cr PDOS to lower energies causes an additional decrease of majority states at ǫ F .This shift increases with increasing Fe concentration as was found from the PDOS for x = 1/4 and 3/4 (not shown here).The slight energy shift of the PDOS will result in a small change of the local magnetic moments at the Co and Cr sites, as will be shown below. The aluminium PDOS stays rather unaffected from the Cr or Fe concentration.It exhibits only small energy shifts. Magnetic moments SQUID magnetometry and MCD were used to determine total and partial magnetic moments of Co 2 Cr 1−x Fe x Al.Details of these measurements are reported in Ref. [46].The measured values are extrapolated to T = 4K and calibrated using the total moments measured by SQUID.Total and element specific magnetic moments were extracted from the band structure calculations reported above.Figure 7 compares measured and calculated spin magnetic moments. 
The calculated total spin magnetic moment follows the rule of thumb for Heusler compounds, m = (N − 24) µB (equation 1), where N is the cumulated number of valence electrons (here: 4s, 3d for the transition metals Co, Cr, or Fe and 3s, 3p for the main-group element Al). The value calculated from equation 1 is shown as the full line in figure 7a.

The calculated spin magnetic moments of Co and Fe are in agreement with the measured values. The measured, lower total value at small Fe concentration can be attributed clearly to a too low moment at the Cr sites. The calculated spin moments of Co and Cr increase slightly with increasing Fe concentration, whereas the Fe moment stays nearly constant. This increase is explained by an energy shift of the partial densities of Co and Cr, as discussed above. The calculated Al spin moment is negative, independent of the Fe concentration. This points to an anti-ferromagnetic order of the Al moments with respect to the transition-metal moments. However, those induced moments at the Al sites are only very small. All values found here for the pure compounds are of the same order as those calculated by Galanakis et al [28] using the Korringa-Kohn-Rostoker (KKR) method.

It may be interesting to compare the magnetic moments of Co2Cr0.5Fe0.5Al with those of the nominally iso-electronic compound Co2MnAl. Our calculations yielded 3.81 µB for the total spin moment per formula unit and 0.84 µB and 2.58 µB for the partial spin moments of Co and Mn, respectively. These values are similar to those found by other groups [23,28] using KKR. The Co moment is larger in the mixed compound (1.01 µB) and the average Cr0.5Fe0.5 moment (2.24 µB) is smaller compared to Mn. It is interesting to note that the minority states at the Γ-point were shifted to energies above εF, such that the HMF gap became closed in Co2MnAl. A variation of the lattice parameters showed that this is not the cause of the differences between the two materials. The differences are caused by the different local potentials in those compounds.

Most evidently, the calculated magnetic moment of the Co2CrAl compound does not agree with the measured one. The experimental value is only about 1.3 µB, whereas theory predicts a value of about 3 µB per formula unit. The value found here is in rough agreement with the ground-state magnetic moment of 1.55 µB per formula unit reported earlier by Buschow [59]. It was previously considered that mainly the Co atoms carry the magnetic moment, whereas the contribution of the Cr and Al atoms remains negligible [60]. This empiric assumption may describe the present element-specific measurements revealing a moment of ≈ 2 × 0.55 µB for Co2 but only ≈ 0.2 µB for Cr. However, it does not explain the physics behind that observation. One may assume that the high magnetic moment of Cr is an artefact of a particular calculation scheme. Therefore, different calculation schemes were used to check a probable occurrence of such effects. The results are summarized in table 2. The results of Galanakis et al [28] derived from KKR calculations are shown for comparison. The partial and total magnetic moments calculated by Kellou et al [61] at a smaller lattice parameter (10.758 a0B) are in the same range. It is seen from table 2 that all values stay comparable within a few %. Therefore, the deviation from the experiment cannot be attributed to the peculiarities of one or another theoretical method. The Cr moment is about (1.6 ± 0.1) µB, rather independent of the method of calculation.
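For reference, the electron-count rule of thumb above is easy to evaluate across the whole doping series. The sketch below assumes the Slater-Pauling form m = (N − 24) µB with the valence counts just listed (Co: 9, Cr: 6, Fe: 8, Al: 3); it reproduces only the rule-of-thumb line of figure 7a, not the band-structure results themselves.

```python
# Rule-of-thumb total spin moment for Co2Cr(1-x)Fe(x)Al, assuming the
# Slater-Pauling form m = (N - 24) mu_B for full Heusler compounds.
VALENCE = {"Co": 9, "Cr": 6, "Fe": 8, "Al": 3}  # 4s+3d (Co, Cr, Fe); 3s+3p (Al)

def thumb_rule_moment(x: float) -> float:
    """Expected spin moment (mu_B per formula unit) for iron content x."""
    n = 2 * VALENCE["Co"] + (1 - x) * VALENCE["Cr"] + x * VALENCE["Fe"] + VALENCE["Al"]
    return n - 24.0

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f}: m = {thumb_rule_moment(x):.2f} mu_B")  # 3.0 at x=0 up to 5.0 at x=1
```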
On the other hand, a reduction of the observed Cr moment may be caused by site-disorder, such that part of the Cr atoms become anti-ferromagnetically ordered either among each other or with respect to the Co atoms.The latter results in a ferrimagnetic order.A mixture of ordered and disordered crystallites will then result in a measured value being too small compared to the calculated moment.Such a disorder has to be realized with the same lattice parameter because X-ray diffraction did not show any superstructure in the samples investigated here. To estimate the influence of disorder in the CrAl-planes, the band structure was also calculated for the hypothetical cP 16 ferrite structure (P m3m).In this structure, successive CrAl-(001)-planes are Cr or Al rich with 3 : 1 or 1 : 3 ratios, respectively.The calculated magnetic moments of Cr were about 1.5µ B .Lin and Freeman [62] used the tetragonal tP 4 structure (P 4/mmm) to simulate a Y/Al disorder in Ni 2 YAl (Y=Ti, V, etc.).In this structure one has the series Co-Cr-Co-Al for the consecutive (001)-planes instead of Co-(CrAl) (Note that (100) and (001) planes are not equivalent in the tetragonal structure).Again, the calculated magnetic moment of Cr has about the same magnitude.A ground state with an anti-ferromagnetic or ferrimagnetic order of the Cr atoms could not be verified in either of both structures.Therefore, those structure can be excluded as possible candidates for explanation of the reduction of the magnetic moment.This is in accordance with the X-ray diffraction data where no additional Bragg peaks were observed.Extra (001) Bragg peaks are expected for both structures, for example.However, it is worthwhile to note that the compound still exhibited HMF character in the cP 16 structure.More promising are calculations for the X-structure (F 43m) [63].(Note that the Pearson code of this structure is cF 16 like for L2 1 .)This structure was also considered among several phases with random anti-site disorder by Feng et al [64] investigating the physical properties of Fe 2 VAl.It is similar to the C1 b Half-Heusler phase but with the vacancies filled up in a different way compared to the L2 1 structure.The X-structure consists of successive CoCr and CoAl (001)-planes. The calculation reveals for the X-structure an anti-ferromagnetic ordering of the Cr atoms with respect to Co and thus a reduced overall magnetic moment.The magnetic moments derived for the different structures are summarized in table 3. The calculated, element specific and total magnetic moments are compared to experimental values in table 3.For Co the sum of both atoms is given.The average of the calculated values for Cr and Al are given for the structures with inequivalent atomic positions. A more detailed analysis of the X-structure shows not only an anti-ferromagnetic order of Cr but also an enhancement of the Co magnetic moments.Their values are 0.94µ B in CoAl and 0.76µ B in CoCr (001)-planes.Evidently the shortest distance between two Co atoms is smaller in the X-structure compared to the L2 1 -structure (see table 4).The shortest Co-Cr and Cr-Cr distances stay the same in both structures. The enhancement of the Co moments in all three non-L2 1 structures does not allow to explain the experimentally observed moments directly because the measured value is already smaller as the calculated moment of the L2 1 structure.This is mainly a property of the particular sample investigated here.Other samples exhibited higher overall magnetic moments. 
The gap in the minority band structure is closed in the X-structure, that means it is not longer a half-metallic ferromagnet.The vanishing of the gap was also found for Co 2 FeAl in the X-structure, but with the Fe atoms aligned ferromagnetically with respect to Co.It is interesting to note that the cP 16 structure still exhibits the gap for both pure compounds.From this observation it can be concluded that the existence of the gap is directly related to the occurrence of Co 2 and mixed CrAl (001)-planes and that the L2 1 -structure is not the only possibility for the presence of a HMF gap in Heusler-like compounds X 2 YZ.This finding can be understood easily considering symmetry.The magnetization will reduce the symmetry.Applying any special orientation along one of the principal axes (e.g.: [001]) will reduce the symmetry from F m3m to I 4/mmm, at least.This is in accordance with the fact that the O h point group cannot describe ferromagnetic order.The properties of the electron spin will cause a further reduction to I 4/m, as vertical mirror operations would change the sign of the spin.In particular, the Γ-point of the ferromagnetic Heusler compounds will belong to the D 4h (C 4h ) colour group and not longer to O h like in the paramagnetic state.The point group in brackets assigns the magnetic symmetry with removed vertical mirror planes.The X-point becomes Z (D 4h (C 4h )) and the point group symmetry of the Λ-direction (formerly ∆) is reduced to C 4v (C 4 ) in the ferromagnetic case. The Γ and Z points of the cP 16 or tP 4 structures, as representatives for Cr-Al disorder, have in the ferromagnetic state also D 4h (C 4h ) symmetry.Therefore, those structures are expected to behave similar like the L2 1 -structure.However, the local symmetry of the atomic sites in the three structures is different what may explain the vanishing of the HMF gap in the tP 4 structure. The Γ and Z points of the X-structure, as representative for Co-Cr anti-site disorder, have in the ferromagnetic state the lower D 2d (S 4 ) point group symmetry.Therefore, that structure is expected to behave different from the L2 1 -structure.Indeed, the most pronounced difference is the local environment of the atoms in the (001)-planes. The local site symmetries of the atoms are summarized in table 4. It displays the symmetries of atoms and high symmetric points of the Brillouin-zone for the different structures for paramagnetic (PM) and ferromagnetic (FM) order.The direction of magnetization was chosen to be along the z-axis for FM order.Cr and Al atoms occupy in the cP 16 structure two inequivalent sites with different local symmetry.The distances between various atoms are given for a fixed lattice parameter a.The X-point becomes Z and the ∆-direction becomes Λ in tetragonal symmetry.The point group symmetries of the Brillouin-zone may serve to avoid confusion about the irreducible representation of bands being different in the PM and FM state. The Cr magnetic moment is also too small in the mixed Co 2 Cr 1−x Fe x Al alloy, as seen from figure 7. 
The Fe magnetic moment comes close to the value expected from the calculation.This observation points not only on a site-disorder but also on the possibility of phase separation resulting in Fe and Cr rich grains of the polycrystalline sample.The overall magnetic moment comes close to the calculated value as result of the much higher value for Fe compared to Cr.From this observation it is clear that measuring only the total moment is not enough to characterize these alloys completely. Similar calculations were performed for the Half-Heusler compound NiMnSb in order to explain a missing full spin polarization in that compound.In the XYZ C1 b -structure one has a series of pure X followed by mixed YZ (001)-planes.The calculations were performed for NiMnSb, MnSbNi, and SbNiMn, keeping the lattice parameter fixed.The major difference between the three types is the local environment of the Ni atoms in the (001) planes.Only in the first type, the alternating (001)planes contain either purely Ni or mixed (MnSb) layers.The exchange of Ni and Mn or Sb resulted in the loss of the HMF character and in turn in a reduction of the spin polarization at the Fermi-energy.Indeed, such an intermixing must not prevail through the hole crystal.It is just enough to have disorder in regions close to the surface or interface in order to explain a reduced spin polarization in spectroscopic methods.These findings are in agreement with the results of Orgassa et al [65] for random disorder in NiMnSb. The assumption of doped, ordered compounds may not hold in every case, especially if considering non half-or non quarter-integer fractions for the iron concentration x.Calculations concerning the spectroscopic properties for more general, non-rational iron concentration, resulting in random alloys, are in preparation.However, the results found here for superstructures are in well agreement to those of Miura et al [47] for random alloys. Summary The electronic structure of the pure and doped Heusler compound Co 2 Cr 1−x Fe x Al with varying iron content (x) was calculated by means of different theoretical methods.Element specific magnetic moments were determined from MCD measurements at the Cr, Fe, and Co L 2,3 absorption edges of the Heusler alloys Co 2 CrAl, Co 2 Cr 0.6 Fe 0.4 Al, and Co 2 FeAl. The calculations revealed a ferromagnetic coupling between the 3d atoms as well as an anti-ferromagnetic alignment of the Al magnetic moments with respect to the moments of the 3d elements.However, the Al moments are very small and induced by the surrounding polarized atoms.The calculations predict the Co 2 Cr 1−x Fe x Al compound to be a half-metallic ferromagnet.The size of the minority gap ranges from 100meV to 800meV .The smallest band-gap around the Fermi-edge in the minority bands was found for the compound with x = 1.Co 2 CrAl and, more pronounced, Co 2 FeAl turned out to have an indirect Γ − X gap.The mixed compounds exhibit a direct gap at the Γ point being caused by the reduced symmetry. It was shown that the origin of the minority gap in Co 2 CrAl is the geometrical structure and local symmetry of the atoms.It appears in the L2 1 -structure with successive CrAl and Co 2 (001)-planes but not in the X-structure with successive CoCr and CoAl (001)-planes. In summary, it was shown how theoretical methods can be used to design new materials with predictable magnetic properties. Acknowledgments The authors thank all members of NSRRC (Hsinchu, Taiwan) for their help during the beamtimes.G.H.F. and S.W. 
are very grateful to Yeukuang Hwu (Academia Sinica, Taipei) and his group for support during the experiments in Taiwan.

Figure 4. Minority band gap in Co2Cr1−xFexAl. (Lines are drawn to guide the eye. The full drawn line follows the limit for the HMF gap.)

Figure 7. Magnetic moments of Co2Cr1−xFexAl. The measured element-specific and total moments for x = 0, 0.4, and 1 are compared to calculated values for x = 0, 1/4, 1/2, 3/4, and 1. The full line in a) corresponds to the rule of thumb; the dashed lines in b)-e) are drawn through the calculated values to guide the eye.

Table 1. Symmetry, space groups, and Wyckoff positions of the constituents in ordered Co2Cr1−xFexAl.

Table 2. Magnetic moments in Co2CrAl. Element-specific (per atom) and total spin magnetic moments (per formula unit) calculated by different methods.

Table 3. Structural dependence of the magnetic moments in Co2CrAl. Note: the sum of the magnetic moments of the two individual Co atoms and average values for Cr and Al are given to make the different structures comparable. The total spin magnetic moment is given per formula unit.

Table 4. Site symmetries in Co2CrAl. PM := paramagnetic order, FM := ferromagnetic order with B along [001]. The nearest distance dAB between two atoms A and B is given in Å. The second and fourth lines for each structure give the Wyckoff positions of the atoms in the PM and FM state.
2019-04-14T02:09:41.656Z
2005-10-08T00:00:00.000
{ "year": 2005, "sha1": "9220baf736e6af591b4b0f84bb349c7f1264857c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0510203", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9220baf736e6af591b4b0f84bb349c7f1264857c", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Physics" ] }
221092346
pes2o/s2orc
v3-fos-license
Distribution and prevalence of ticks and tick-borne disease on sheep and cattle farms in Great Britain Introduction The most abundant and widespread tick species in Great Britain, Ixodes ricinus, is responsible for the transmission of a range of pathogens that cause disease in livestock. Empirical data on tick distribution and prevalence are required to inform farm management strategies. However, such data are largely unavailable; previous surveys have been rare and are usually relatively localised. Methods A retrospective questionnaire survey of farmers was used to assess the reported prevalence of ticks on livestock across Great Britain. Spatial scan statistics and kernel density maps were used to assess spatial clustering and identify areas of significantly elevated risk, independent of the underlying distribution of respondents. Logistic regression models were used to identify risk factors for tick presence. Results Tick infection risk to livestock is shown to be spatially aggregated, with areas of significantly elevated risk in north Wales, northwest England and western Scotland. Overall, the prevalence of farms reporting tick presence was 13% for sheep farms and 6% for cattle farms, but in “hot spot” clusters prevalence ranged between 48–100%. The prevalence of farms reporting tick-borne disease overall was 6% for sheep and 2% for cattle, but on farms reporting ticks, prevalence was 44% and 33% for sheep and cattle farms, respectively. Upland farming, larger flock sizes, region and the presence of sheep on cattle farms were all significant risk factors for tick presence. Conclusions These data have important implications for assessing both the risk of tick-borne disease in livestock and optimising approaches to disease management. In particular, the study highlights the need for effective livestock tick control in upland regions and the southwest, and provides evidence for the importance of sheep as tick maintenance hosts. Background In livestock husbandry, ticks are important both as direct blood-feeding parasites and as vectors of a range of production-limiting pathogens with economic and welfare impacts on the livestock industry through reduced production and animal mortality [1,2]. The most widespread tick vector of livestock pathogens in northern Europe is Ixodes ricinus [3,4], with clinical cases occurring during the periods of tick activity, primarily from the spring through to autumn. Predicting the distribution and incidence of tick-borne disease (TBD) can be complex, since it depends on both the availability of hosts and abundance of questing ticks, which varies across seasons, years and regions [5], reflecting variations in local microclimate and habitat [6]. However, it is also affected by the prevalence of pathogens within co-occurring transmission hosts [7] and the immunity generated by prior exposure [8,9]. While I. ricinus is widespread in the UK, populations are highest in areas where the habitat, microclimate and host availability are conducive to high survival [10]. These are generally areas of rough grassland, heath, moorland and woodland with a moist vegetation layer, where the relative humidity remains above the critical value of 80%, required to prevent desiccation [3,10]. These areas often have high populations of wild hosts, such as rabbits, deer or ground nesting birds and are unsuitable for crops, so can support only extensive livestock grazing [3,11]. Sheep in particular are thought to be one of the most important host species for all I. 
ricinus life-cycle stages in pasture or moorland [12]. Control is difficult as I. ricinus is generally non-host specific, infecting a variety of mammals and birds and spending the majority of its life-cycle off-host in the environment [13]. For sheep in the UK, ticks are particularly important in the transmission of tick-borne fever (anaplasmosis), louping-ill virus (LIV) and tick pyaemia. Anaplasmosis is a widely dispersed disease throughout Europe and can be major problem in livestock production, affecting both sheep and cattle [14]. The bacterium, a ruminant-specific variant of Anaplasma phagocytophilum, infects granulocytes, which can result in secondary infections due to immunosuppression [15]. Transstadial transmission of A. phagocytophilum can occur, whereby the pathogen is transmitted from one tick developmental stage to the next. Louping-ill virus (LIV), also called infectious ovine encephalomyelitis, can also be transmitted by transstadial transmission. LIV is an acute viral disease which affects the brain and the nervous system caused by a flavivirus closely related to the causal agent of tick-borne encephalitis. Louping-ill has been reported from most regions in the north and west of the UK, but has not been found in central or east England [16]. Louping-ill has also been identified in Ireland and regions of France and Norway [17]. The disease is characterised by nasal discharge, fever, depression, ataxia, paralysis and coma, leading in many cases to death; morbidity in lambs can be up to 50% [18]. However, following early infection, lifelong immunity is sustained. Pyaemia results from the infection of lambs or sheep with Staphylococcus aureus. It is not directly transmitted by ticks, but S. aureus, usually found on the skin, may become pathogenic when transferred mechanically to the bloodstream via the bite of a tick. There is a strong association between tick-borne fever and pyaemia [19]. Pyaemia affects lambs born on, or newly introduced to, a tick infected area, and shows a peak in spring when tick abundance is high. Regarding cattle in the UK, I. ricinus also transmits A. phagocytophilum and LIV, but importantly in some areas it is also a vector of Babesia divergens, the causal agent of redwater [2]. In the process of asexual division, intraerythrocytic Babesia cause lysis of erythrocytes, leading to haemoglobinaemia, haemoglobinuria and fever. In naïve adult hosts, infection may cause death within a few days. Milder forms of the disease, associated with juvenile or immune hosts, are characterized by fever and inappetence for a period of several days. In addition to transstadial transmission, Babesia is also transmitted via transovarial transmission within the tick, allowing the larvae, nymphs and adults of the next generation to transmit infection to cattle [2]. Despite the known range of tick-borne pathogens and concern over their impact on the welfare of livestock, there is very little quantitative information available about the prevalence of tick-borne disease in many areas of the UK. Previous systematic surveys in the UK have most usually been undertaken in the context of public health [20,21], companion animal health [22][23][24], game birds [25], or by measuring tick abundance in the environment [26], which is not necessarily a good proxy for tick attachment risk [27]. Those studies of tick prevalence on livestock in the UK that have been undertaken, have usually been focussed on localised geographical regions with little area-wide context [28,29]. 
Variability in sampling approach, time and context also makes reliable comparison between studies difficult. Furthermore, the fact that relatively few acaricidal pharmaceutical products are available with a label claim for efficacy against ticks in livestock indicates that the control of ticks and TBD represents something of a neglected issue. Appropriate strategies for tick and TBD management require an assessment of risk [30] and this necessitates up-to-date data on tick prevalence and distribution [31] in relation to livestock hosts. The aim of the work reported here, therefore, was to investigate the prevalence and spatial distribution of ticks and tick-borne disease reported in cattle and sheep in Great Britain and then to identify areas of elevated risk of tick attachment to livestock using spatial distribution modelling. Questionnaire survey A two-page retrospective postal questionnaire survey was sent to sheep and cattle farmers in Great Britain. The sample area was first stratified into 6 regions: Scotland, Wales, north, central, southwest and eastern England. A total of 7200 questionnaires were sent to a randomised selection of farms in each region sourced from a commercial database [32]. Questionnaires were only sent to farms meeting the following criteria: more than 50 sheep, or more than 20 beef cattle, or more than 30 dairy cattle, to avoid surveying smallholdings and 'hobby farmers', which may not be representative of commercial farms. Power analysis was used to obtain regional sample sizes to accurately estimate the proportion of cases in each region. The number of questionnaires sent out in each region was based on the number of cattle or sheep holdings in each [33,34], an estimated prevalence rate of 15%, an estimated response rate of 30% (based on previous farm-based survey studies, e.g. [35,36]), a confidence level of 95% and a margin of error of 5% (Win Episcope v.2.0; [37]). The questionnaire was sent out in November of 2018, and asked for general information about the holding and information about livestock numbers, tick presence and cases of TBD in the previous 12 months between November 2017 and October 2018, to control for temporal differences in tick abundance and distribution. The questionnaire contained separate sections for sheep and for cattle (see Additional file 1: Text S1). The distribution of respondents was externally validated by qualitative comparison with the distribution of cattle and sheep holdings in the UK [38,39]. Farm characteristics of respondents were externally validated by qualitative comparison with the ratio of dairy to beef farms [34] and the ratio of upland to lowland farms [40]. Questionnaires were also checked for internal consistency by removing questionnaires with missing tick presence/absence data and by qualitatively comparing monthly reported tick prevalence with expected temporal trends. Prevalence analysis Responses for sheep or cattle farms were analysed separately, except when a direct comparison was made between sheep and cattle in reported tick prevalence. Differences in tick prevalence (proportion of farms reporting tick presence compared to tick absence) between regions and between farm terrain types (upland/lowland) were tested using Chi-square in R (version 3.6.1; [41]) using the chisq.test function. If expected values were less than five, Monte Carlo simulated P-values were used [42]. All prevalence values are reported ± their 95% confidence intervals.
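As a rough illustration of the prevalence estimates and comparisons described above, the sketch below computes a prevalence value with a Wald (normal-approximation) 95% confidence interval and a chi-square comparison between sheep and cattle farms. It is written in Python with counts taken from the results reported later in the paper for illustration only; the published statistics were computed in R on the cleaned survey data, so exact test values may differ slightly.

```python
from math import sqrt
from scipy.stats import chi2_contingency

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a Wald (normal-approximation) 95% CI."""
    p = cases / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p, half_width

# Illustrative counts: farms reporting ticks vs. not (sheep and cattle respondents)
sheep_ticks, sheep_total = 85, 642
cattle_ticks, cattle_total = 49, 797

for label, k, n in [("sheep", sheep_ticks, sheep_total),
                    ("cattle", cattle_ticks, cattle_total)]:
    p, hw = prevalence_ci(k, n)
    print(f"{label}: {100 * p:.1f}% (+/- {100 * hw:.1f})")

# Chi-square test of the difference in prevalence (2 x 2 contingency table)
table = [[sheep_ticks, sheep_total - sheep_ticks],
         [cattle_ticks, cattle_total - cattle_ticks]]
chi2, p_value, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p_value:.4f}")
```

For the sheep counts shown, this reproduces the reported prevalence of roughly 13.2% (± 2.6); the chi-square statistic differs slightly from the published value because the test in the paper was run on a slightly different respondent total (n = 1380).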
Spatial analysis Farm postcodes were used for spatial analysis of cases (reported tick presence) and controls (reported tick absence) and converted to latitude and longitude [43]. Deviation from complete spatial randomness (CSR) was assessed by plotting significance envelopes of the G function, based on Monte Carlo simulation (100 repeats; Gest function in the spatstat R package (v.1.60-1; [44]). To identify case "hot spots" (areas which contain a higher density of points than would be expected with CSR) whilst accounting for the underlying distribution of the data points, the spatial relative risk of a respondent reporting the presence of ticks was estimated from the relative densities of cases and controls using the risk function in the sparr R package (v.2.2-13; [45]). An adaptive bandwidth was used, to compensate for potential over-smoothing in dense areas, calculated symmetrically with respect to cases and controls [45]. Diggleʼs edge correction was applied [45]. Asymptotic tolerance contours of P-values were plotted to show statistically significant areas of elevated risk (tol.contour function in the sparr R package; [46]). Spatial clustering was assessed on different spatial scales using envelopes of the L-function (a standardised version of the K-function), which calculates the number of data points within a specified radius of each point (Lest function in the spatstat R package). L-functions were compared between case and control points to detect whether case points were more clustered than clustering caused by the underlying point distribution. Clustering was assessed for significance using SaTScan TM [47], which uses Monte Carlo discrete spatial scan statistics to detect non-random clusters of cases, whilst adjusting for the underlying spatial distribution of the data points. A Bernoulli model was used as data were binary. Maximum cluster size was set to a radius of 150 km to prevent inappropriately large clusters. Risk factors Risk factors for tick presence were tested using multivariable logistic regression models, applied using the glm function in R with 'family = binomial' . Selected variables which met assumptions for logistic regression were first analysed using univariable logistic regression for continuous independent variables and Chi-square for categorical independent variables. Any variables with a P-value < 0.25 were selected for multivariable analysis [48]. The number of variables included in the initial multivariable model did not exceed the frequency of the least common outcome (presence of ticks) divided by 10 [49]. Categorical variables were dummy-coded and the reference levels were selected as those with the lowest probability of reporting ticks [50]. The final models were selected using stepwise selection to minimise the AIC (Akaike information criterion) value. The variance inflation factor (VIF) was used to check that multicollinearity between explanatory variables was low (< 4), using the vif function in the car R package (v.3.0.5; [51]). Model accuracy was assessed using the area under the receiver operating curve (AUC) (AUROC function in InformationValue R package (v.1.2.3; [52]) which plots sensitivity (the true positive rate) against 1 -Specificity (the false positive rate) at different threshold values. Values range from 0.5 to 1.0, with 1.0 depicting a perfect model, which would correctly detect 100% of both true and false positives. 
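The general shape of the risk-factor modelling described above can be sketched as follows. This is a simplified Python illustration using synthetic data and placeholder variable names, not the authors' R workflow (glm with stepwise AIC selection and the car and InformationValue packages); it shows a logistic fit, odds ratios as exp(β), the AUC, and a Youden-style threshold.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 500

# Synthetic predictors: upland terrain (0/1) and log flock size (assumed effects)
upland = rng.integers(0, 2, n)
log_flock = rng.normal(6, 1, n)
logit = -6 + 1.2 * upland + 0.6 * log_flock
ticks = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # reported tick presence

X = sm.add_constant(np.column_stack([upland, log_flock]))
model = sm.Logit(ticks, X).fit(disp=False)

# Odds ratios as exp(beta) with (Wald) 95% confidence intervals
print("Odds ratios:", np.exp(model.params))
print("95% CI:", np.exp(model.conf_int()))

# Model discrimination (area under the ROC curve)
pred = model.predict(X)
print("AUC:", roc_auc_score(ticks, pred))

# Threshold maximising Youden's J = sensitivity + specificity - 1
fpr, tpr, thresholds = roc_curve(ticks, pred)
print("Youden-optimal threshold:", thresholds[np.argmax(tpr - fpr)])
```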
The threshold for sensitivity and specificity was selected to optimise both, by using maximum Youden's Index (optimalCutoff function in InformationValue R package). Odds ratios and confidence intervals were calculated as exp(β), where β is the coefficient estimate and using profile likelihood confidence intervals (confint function in R), respectively, to assess the relative impact of variables in the final model on the reported presence of ticks. Tick prevalence After internal validation, the total number of sheep farm respondents was 642 and the total number of cattle farm respondents was 797. The overall prevalence of farms reporting tick presence was 13.2% (CI ± 2.6; n = 85) for sheep farms and 6.2% (± 1.7; n = 49) for cattle farms. Overall, the prevalence of sheep farms with reported tick presence was higher than the prevalence of cattle farms with reported tick presence (χ 2 = 18.41, n = 1380, P < 0.001). When stratified by region, the prevalence of sheep farms reporting ticks was higher than the prevalence of cattle farms reporting ticks in all regions, but this difference was only significant in Wales (χ 2 = 4.93, n = 256, P < 0.05) and the north of England (χ 2 = 8.97, n = 368, P < 0.01; Fig. 2). Spatial distribution of ticks G function analysis showed that respondent density differed significantly from CSR, as would be expected due to the heterogeneous nature of underlying farm density. Comparison of the case/control L functions showed that case points (reported tick presence) were more clustered than control points (reported tick absence) at radii > 5 km for sheep and > 7.5 km for cattle. The relative risk of farmers (Fig. 3a) reporting sheep ticks and tolerance contours showed that north Wales, northwest England and western Scotland, were areas of statistically significantly elevated risk ( Fig. 3a; P < 0.05). Cases in these areas were also confirmed as significantly clustered by SaTScan TM analysis (Table 1). Spatial heterogeneity in predicted relative risk was smaller for cattle ticks, but similar to the analysis for sheep farms, areas of statistically significantly elevated risk were identified in north Wales, northwest England and Scotland (Fig. 3b; Table 1). Although the reported prevalence of cattle tick cases was highest in southwest England, when considering case points on a more continuous geographical scale, taking into account the underlying distribution of respondents, cases were not found to be significantly clustered in this region. Risk factors for tick presence For sheep, variables included in the initial model, based on univariable analysis were: terrain type (upland/lowland), flock size, farm type (organic/conventional) and region, but farm type was eliminated from the model during stepwise selection. After farms with missing data were removed, 480 remained in the final model. The VIF was < 4 for all variables in the final model. Significant risk factors for reported tick presence on sheep were upland terrain, larger flock sizes and being located in southwest England (Table 2; AUC = 0.77, χ 2 = 480, residual deviance = 305.4 (df = 472), null deviance = 377.0 (df = 479)) ( Table 2). For cattle, variables included in the initial model, based on univariable analysis, were: terrain type (upland/lowland), livestock type (cattle only farm/cattle and sheep farm), cattle type (beef farm/dairy farm/both) and region, but cattle type was eliminated from the model during stepwise selection. After farms with missing data were removed, 711 remained in the final model. 
The VIF was < 4 for all variables in the final model. Significant risk factors for reported tick presence on cattle were upland terrain, presence of sheep and being located in southwest England (Table 2; AUC = 0.73, χ2 = 711, residual deviance = 285.5 (df = 704), null deviance = 319.1 (df = 710)). Tick-borne disease (TBD) The prevalence of farms reporting at least one TBD case was 5.7% (± 1.8; n = 37) for sheep and 2.0% (± 1.0; n = 16) for cattle. Of those that reported finding ticks on their animals, 43.5% (± 10.5; n = 37) of sheep respondents and 32.7% (± 13.1; n = 16) of cattle respondents also reported having at least one TBD. Of farms reporting disease, 5.4% (± 7.3; n = 2) of sheep disease cases and 18.8% (± 19.1; n = 3) of cattle disease cases were reported to be diagnosed by a veterinarian or diagnostic laboratory. In sheep, the most common TBD was tick-borne fever (4.2 ± 1.5%; n = 27) (Fig. 4). In cattle, redwater was the most reported TBD (1.2 ± 0.8%; n = 10) (Fig. 4). The density of respondents reporting sheep disease was highest in Wales and northwest England, and cattle disease in southwest England. Due to the low number of disease case points, it was generally not possible to identify areas of significantly elevated risk; however, SaTScan TM did identify a significant cluster of tick pyaemia in sheep in northwest England and of redwater in cattle in southwest England (Table 3). Of sheep farm respondents reporting disease, 97.1% (± 5.4; n = 34) were from upland farms. Of cattle farm respondents, 57.1% (± 24.3; n = 8) were from upland farms. Discussion The spatial analysis approach used here identifies clusters, areas which contain a higher density of points than would be expected whilst accounting for the underlying distribution of the respondents to the survey. The distribution of tick infestation and tick-borne disease prevalence in sheep and cattle reported here is consistent with the known distribution of I. ricinus [22,53,54]. Overall, 13% of sheep farms and 6% of cattle farms reported that their animals had had ticks in the study year, but with areas of significantly higher prevalence in north Wales, northwest England and western Scotland. Livestock in these regions primarily graze upland pastures and this was a significant risk factor for tick presence. The prevalence of tick infestation on upland farms was higher than the national prevalence, at 24% and 10% for sheep and cattle, respectively, and the prevalence of ticks on farms in statistically significant "hot spot" clusters ranged between 48-100%. Upland regions, which are classified by the EU as 'Less Favoured Areas' characterized by rough grazing, heathland and moorland [55], often contain a high density of questing ticks due to the combination of appropriate microclimates suitable for tick survival and abundant wildlife hosts [12,56] and are therefore areas of high contact between livestock and ticks. Although tick populations can still be high in lowland regions, they are more limited by the lower availability of suitably humid microhabitats [57]. It is notable that for cattle the presence of sheep on the farm was a significant risk factor for tick infestation. Although deer are important hosts for ticks [25], especially in Scotland [58], sheep have been shown to maintain stable tick populations in upland regions, in the absence of other wildlife hosts, acting as hosts for all I. ricinus life-cycle stages [12,59]. Hence, sheep are able to act as important maintenance hosts for tick populations in upland areas.
It was suggested by Evans [60] that on mixed farms, because sheep are turned out onto pasture earlier than cattle, sheep may be a particularly important food source for the early spring population of ticks, and that the presence of sheep co-grazing may increase the tick population but in some circumstances may also help to reduce the infestation on cattle. Although under some conditions there may be a positive relationship between pathogen prevalence and tick density, this is highly variable [61] and tick presence or absence has been found to be a better predictor of pathogen transmission risk than tick abundance [62]. Therefore, risk based upon presence and absence data gives valuable information on the areas where livestock are most at risk from tick-borne disease, although presentation of clinical cases will also depend upon population immunity. Host density is also important for disease transmission, as has been found with LIV models [1], and the areas of elevated risk for tick presence are also generally areas of high livestock density [38,39]. Although there were too few disease cases in the present study to allow relative disease risk to be mapped, the density of reported disease cases generally mirrored the density of reported tick cases. However, an exception was redwater in cattle, where a significant cluster of cases was found in southwest England. In 2006, Barton et al. [63] also found a high reported prevalence of redwater in a survey of cattle farms in the south-west, with 66% of farms reporting ticks also reporting redwater. Redwater is endemic to the UK, but clinical cases are generally only apparent when there is a breakdown in population immunity [9]. Cases may be more prevalent in the southwest because of less consistent contact between cattle and ticks, resulting in occasions where cattle are unexposed at a younger age, but are then later grazed on tick infested pastures. Further investigation of seroprevalence in cattle, the prevalence of B. divergens in questing ticks and the management factors that lead to a higher redwater risk in this area is required. Relatively high levels of variation in the number of cases of tick-borne diseases across regions have been demonstrated previously using records of bovine cases [64]; climate and the much lower populations of sheep and cattle in the east were considered to contribute to this pattern [64]. In contrast, qualitative assessment of redwater cases reported in a survey of Irish farmers and veterinary practitioners found no observable foci of infection [9]. However, spatial statistics quantifying risk are necessary to elucidate spatial patterns which are not obvious based on qualitative assessment alone and to correct for sampling bias [65]. The underlying distribution of respondents can vary for a number of reasons, such as sample selection bias due to differences in response rates between regions or farming sectors, or simply due to the underlying distribution of farms, which may lead to false conclusions of "hot spots" for infection in regions of high farm density based on qualitative assessment alone. The analyses applied to the data here provide robust statistical estimates of the spatial distribution of risk, taking into account the potential spatial bias of respondents through applying a presence/absence design. When analysing risk, it is important to consider the effects of spatial scale [66].
When considered on a continuous scale in the spatial analysis, cases in southwest England were not significantly clustered, but the southwest was significantly associated with tick presence in the multivariable analysis (Table 2). The high tick prevalence in this region cannot be explained by the other factors in the models. Mapping on the relatively broad scale used here may not detect fine-grained variations in risk [66], but at this scale results are buffered against variations in microclimate which affect tick distribution on a local scale [67], allowing relative risk to be assessed in relation to broader trends, such as host density and the macroclimate [26]. Host densities and climatic variables were not directly included in the models, however, so it is important to note that factors that appear as significant correlates of tick and TBD prevalence may be proxies for these more-influential drivers.

Table 1: The location (latitude and longitude), radius (km), number of respondents, tick prevalence and relative risk for significant clusters of cases of farms with tick infestation in sheep and cattle, as identified by SaTScan TM analysis of data from a retrospective questionnaire survey in Great Britain. *P < 0.05, **P < 0.01, ***P < 0.001

Some caution is also required with questionnaire surveys. Although they allow the collection of large data sets, they rely on accurate reporting by farmers. Reporting tick presence requires farmers to be aware of what ticks are and to be in close enough contact with livestock to spot their presence. The higher reported tick prevalence in sheep compared to cattle, for example, may be associated, to some degree, with the more frequent handling of sheep compared to beef cattle. Similarly, in terms of TBD, it is likely that farmers are under-diagnosing and may be misinterpreting clinical signs; notably, overall farmers reported that only around 11% of reported TBD cases were confirmed by a veterinarian or laboratory, and two farms reported the presence of redwater in sheep, despite this not being an ovine disease. It should also be noted that this study excluded farms with relatively small numbers of animals, specifically to exclude smallholders and 'hobby farmers', since they might not be representative of commercial husbandry practices. However, the proportion of such holdings varies across the country, which may affect the contributions of these animals to the overall landscape prevalence of ticks and TBD. This possibility requires further investigation. Effective control in "hot spot" regions, treating livestock with insecticides so that they act as "lethal traps", may result in reduced tick attachment, not just to livestock, but also to other tick hosts [56,68]. Upland farming areas represent 74% of the UK's national parks [55], therefore the areas of elevated risk to livestock include areas of high potential contact between people and ticks. Effective control of ticks on livestock, particularly sheep in these areas, could reduce the risk of tick bites in the human population and minimise Lyme disease transmission via ticks co-feeding on sheep [59]. Treatment of livestock hosts has been shown to be effective in reducing disease risk to other hosts in LIV disease models, when deer populations are low [69].
However, overuse of insecticides with this strategy is also likely to hasten the selection for resistance, so alternative methods of tick control, such as the use of resistant or resilient breeds or pasture spelling, may be more appropriate [12,31], although care should be taken if population immunity is suspected.

Fig. 4: The percentage (± 95% confidence intervals) of regional farm respondents to a retrospective questionnaire survey in Great Britain reporting tick-borne disease for sheep (a) and cattle (b).

Table 3: The location (latitude and longitude), radius (km), number of respondents, disease prevalence and relative risk for significant clusters of tick-borne pyaemia cases in sheep and redwater in cattle, as identified by SaTScan TM analysis of data from a retrospective questionnaire survey in Great Britain. *P < 0.05
2020-08-11T13:15:03.724Z
2020-08-10T00:00:00.000
{ "year": 2020, "sha1": "afddab4f92d076e22ec8ff0e1f6fdd4cef7c5fe2", "oa_license": "CCBY", "oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/s13071-020-04287-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "552bcd8dbeb7498b45144f6804cc2545e03c02a2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
269911336
pes2o/s2orc
v3-fos-license
Greening the Digital Frontier: A Sustainable Approach to Software Solutions: In the 21st century, Sustainable Software Engineering practices have emerged as a pivotal process, transforming the landscape of traditional software engineering. In previous eras the focus was primarily on hardware and software development, and sustainability was often overlooked. Little attention was given to the technical, economic, environmental, social, and individual dimensions of sustainability. As software continues to play a crucial role in various aspects of our lives, contemporary software development practices have yielded significant negative impacts on the economy, society, humans, and the environment. To address these challenges, the concept of Climate Conscious Software Engineering has gained prominence. The shift towards green and sustainable software development seeks to create software that not only caters to the present and future needs of users but also minimizes adverse effects on the environment and society. This paradigm is increasingly influencing Global Software Engineering (GSE) practices. This paper delves into the foundational principles of Sustainable Software Engineering as defined by the Green Software Engineering foundation. It explores the issues and challenges in climate conscious software engineering and suggests a few recommendations based on reference studies. Introduction In current technological landscapes, the need to address environmental concerns has given rise to the concept of green software. This is a paradigm shift in software development that prioritizes the reduction of greenhouse gas emissions. Unlike usual approaches that often aim for carbon neutrality, the essence of sustainable software lies in actively minimizing emissions throughout its life cycle [1]. At the center of this paradigm is the pursuit of carbon efficiency, a strategy that revolves around enhancing energy efficiency, fostering carbon awareness, and optimizing hardware utilization [2]. As we delve into the nuanced realm of green software, it becomes imperative to understand the characteristics that define it and the pivotal role it plays in mitigating the ecological footprint of the digital domain. By focusing beyond the conventional path of carbon neutrality, green software endeavors to redefine the benchmarks for environmental sustainability in software development. The crux of building sustainable software lies in an organization's commitment to minimizing carbon emissions through strategic interventions. This article explores the three primary activities involved in green software's carbon efficiency: the enhancement of energy efficiency, the cultivation of carbon awareness, and the optimization of hardware utilization [2]. Through a comprehensive examination of these facets, we aim to elucidate the transformative potential of green software in fostering a more sustainable and ecologically responsible digital future. As we navigate through the landscape of green software, our objective is to contribute to sustainable computing. By clarifying the existing principles and practices that define green software, this article seeks to provide valuable insights for researchers, developers, and policymakers alike, fostering a joint commitment to building a carbon-neutral digital ecosystem. Principles of Climate Conscious Software Six core competencies are required to define, build, and run sustainable software applications according to Microsoft's training on sustainable software [4].
b) Energy efficiency Energy measures the amount of electricity used, and electricity is closely related to carbon emissions. This principle aims to maximize utilization rates by consolidating workload onto fewer servers with the highest utilization rates possible. d) Hardware efficiency This principle seeks to reduce the carbon embodied in devices and their disposal by minimizing the amount of hardware used. One approach is to spread the embodied carbon emissions over the expected lifespan of the device, thus mitigating environmental impact. e) Measurement: Software carbon intensity [6] quantifies the carbon emissions across various software applications. It represents the carbon equivalent emitted for the specific periods and locations in which the software operates, measured in grams of carbon dioxide equivalent per functional unit. Process Issues and Challenges with Sustainable Software The software development process continues to face challenges in maintaining relevance and sustainability, crucial for ensuring the quality of the end product. 'The Green Software development model' [3] introduces recommendations in each phase of software development: considering the shelf life of the software during requirement gathering, achieving simplicity in design, using hardware-resilient application programming interfaces (APIs), automating tests, and promoting performance testing and resource profiling. Challenges during the Software development process Software development encompasses a broad spectrum of skills and disciplines, which include identifying user needs, values, and features essential for supporting the final product [8]. Typically, the software process consists of five primary phases: requirement specification, design, implementation, testing, and maintenance [9]. Certain activities within these phases may pose challenges, particularly in meeting contemporary demands such as reducing paper usage, minimizing e-waste generation, and managing carbon footprint and energy efficiency.
a) Requirement phase The software requirements specification (SRS) is produced as a result of the requirement gathering and analysis process [10]. Some potential green analysis criteria consist of assessing viability, requirements, and tests [11]. Gathering requirement specifications electronically is essential to conserve resources like paper and to protect the environment. Software should also embrace new hardware technologies for improved energy efficiency and adaptability to power-down modes during operation [12]. b) Design phase The software requirements specification (SRS) is the primary artifact during the design phase. The initial design should strike a balance to minimize the need for frequent design changes. It should not be overly extensive, but should instead encourage practices that conserve resources [13]. c) Implementation phase During this phase, developers write source code in specific programming languages according to their preferences and the project's approach. The implementation phase focuses on programming, emphasizing the avoidance of duplicate code, custom hardware APIs, and resource-intensive APIs. Furthermore, practices such as pair programming, code reuse, and automated code generation support the minimization of energy consumption, thereby supporting energy conservation [14]. d) Testing phase The testing phase involves identifying and rectifying product errors until they align with the quality standards outlined in the SRS. Various types of tests, including integration and system testing, are conducted to ensure the software's reliability and functionality. It is recommended to utilize automated testing and to reuse test cases to assess performance scalability and resource usage [10]. Functionality and measurement could be used for the analysis of sustainable practices [8]. While functionality refers to all tests in the requirement, measurement specifies the product's energy consumption [15]. e) Maintenance phase Software maintenance occurs when issues arise that require repair or improvement. Maintenance ensures that the sustainability and quality of the product are upheld beyond the development phase [15]. To enhance cost-effectiveness and energy efficiency, managers should offer training or courses to staff on both old and new programming languages to support the maintenance process. Environmental sustainability can be promoted during the implementation phase by ensuring that program development is clear and comprehensible to programmers. This facilitates swift internal maintenance work and contributes to improved energy efficiency, quality, and longevity of the product [13]. Software waste In essence, software waste refers to resources utilized without yielding any benefit, encompassing characteristics, objects, conditions, processes, and actions within project elements. In software development, waste acts as friction and persists throughout the entire development process until the final product is produced [16]. Software waste frequently occurs due to issues such as scope ambiguity, unclear requirements, inadequate specification and design, unnecessary features, technical challenges, team conflicts, and disorganized programming throughout the development process [18]. Building incorrect features or products, backlog mismanagement, rework, overly complex solutions, unnecessary cognitive load, psychological stress, waiting or multitasking, knowledge loss, and ineffective communication in software development also result in software waste. Enhancements to software
engineering productivity involve integrating sustainability principles, policies, and practices into Extreme Programming methodologies [19]. These include fostering knowledge sharing, maintaining a positive team attitude, and prioritizing code quality. Additionally, implementing policies such as team code ownership and standardized schedules, with the aim of reducing technical debt, can improve productivity. Developers can adopt practices like test-driven development, continuous refactoring, pair programming, and knowledge sharing. Green Software Process The green software methodology entails efficiently utilizing resources to address software needs while considering economic, social, and environmental impacts. From a software standpoint, resources refer to the natural elements essential to sustain human needs while minimizing waste. For example, cloud computing offers an alternative approach to conserve energy and physical space. Software sustainability involves extending software lifespan and minimizing waste generation during development and operation. Agarwal, Nath, and Chowdhury suggest integrating the green software development cycle and sustainability criteria [12]. Building sustainable software involves using solid principles, practices, and processes that enhance the resilience of software's technical sustainability [8]. The 'green' factor serves as a metric for evaluating the environmental friendliness of the software development process against set standards. Its application should guarantee environmentally conscious practices in software development that are sustainable for future generations. Recommendations The authors have consolidated a curated list of recommendations based on reference studies [20]. These suggestions can be used at various stages in the software development lifecycle to build sustainable software products.
• Go for a static frontend and shift processing and storage to server-side microservices.
• Operate microservices as switchable function-like services to conserve resources.
• Shift operations into asynchronous background processes for flexibility and resource optimization.
• Enable incremental over-the-air updates for efficient updates.
• Provide public APIs for all core functionalities and data export.
• Include resource usage measurements in integration tests for actionable insights.
Development
• Eliminate unnecessary libraries; prioritize those offering essential functionality.
• Reduce hardware dependencies for client-side applications.
• Display resource usage for individual operations to clients.
• Disable staging and test environments when not in use.
Figure 1: Green Software Principles [2] (Source: greensoftware.org)
a) Carbon efficiency This principle aims to minimize the amount of carbon emitted, carbon dioxide being the most common greenhouse gas. As per the United Nations Net-Zero coalition [5], carbon emissions need to be reduced by 45% by 2030 and reach net zero by 2050. For software development, this means efficient use of computational resources, like reducing server load and maximizing the use of hardware resources.
Figure 2: SCI calculation [6]
f) Climate commitments
• Carbon Reduction: At the highest level, carbon mitigation involves either offsets or abatement. Offsetting entails reducing emissions elsewhere, while reduction or elimination focuses on preventing carbon emissions. Offsets offer a means to complement carbon reduction efforts, with mechanisms like compensation and neutralization.
• Carbon neutral: The PAS 2060 standard was published by the British Standards Institution and its principles are used as the widely accepted carbon neutrality standard [7].
• Net zero: Net zero entails both reducing and offsetting residual emissions by employing carbon removal techniques. Typically, around 90% of emissions are eliminated and the remaining 10% are permanently neutralized.
c) Carbon awareness: This principle advocates for aligning application operation with the availability of carbon resources. In software development, this entails adjusting application runtime based on demand shifting, such as computing during periods or in regions with lower carbon intensity, which measures the amount of carbon emissions per kilowatt-hour of electricity consumed.
Table 1: Recommendations for best practices while building software products and applications
Functional
• Minimize hardware usage.
• Empower users to manage and monitor resource consumption effectively in the application.
• Allow users to disable unnecessary functions.
• Support delay tolerance and slow connectivity.
• Support an offline version of the software application.
• Disable unsupported features on older hardware.
• Prompt users to remove outdated data or suggest data removal.
• Monitor feature creep.
Architecture
• Design a microservices-based application.
• Analyze performance and resource usage before building native client applications.
• Decouple back-end and front-end.
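To make the SCI measurement referenced in Figure 2 concrete, the sketch below applies the Software Carbon Intensity formula published by the Green Software Foundation, SCI = (E × I + M) per R, where E is energy consumed, I is the location-based carbon intensity of that energy, M is the embodied (hardware) emissions attributed to the software, and R is the chosen functional unit. The numeric values are illustrative assumptions only, not measurements from any real system.

```python
def software_carbon_intensity(energy_kwh: float,
                              carbon_intensity_g_per_kwh: float,
                              embodied_gco2e: float,
                              functional_units: float) -> float:
    """SCI = (E * I + M) per R, in grams of CO2-equivalent per functional unit."""
    operational = energy_kwh * carbon_intensity_g_per_kwh   # E * I
    return (operational + embodied_gco2e) / functional_units

# Illustrative example: a service handling 10,000 API calls in the measured window
sci = software_carbon_intensity(
    energy_kwh=1.2,                    # E: energy used within the software boundary
    carbon_intensity_g_per_kwh=450.0,  # I: grid carbon intensity at that time/place
    embodied_gco2e=300.0,              # M: amortised share of hardware embodied carbon
    functional_units=10_000,           # R: e.g. per API call
)
print(f"SCI = {sci:.3f} gCO2e per API call")
```

Lowering any of E, I or M, or serving more functional units for the same footprint, reduces the score, which is why the carbon-efficiency, carbon-awareness and hardware-efficiency principles above all act on the same metric.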
2024-05-20T15:11:49.138Z
2023-03-05T00:00:00.000
{ "year": 2023, "sha1": "702cd58a52b7df30570d1f1a8520a5b273c06c47", "oa_license": null, "oa_url": "https://doi.org/10.21275/sr24304114256", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a5908affc29ae3e4bd8bc40e55e22714b28678b4", "s2fieldsofstudy": [ "Computer Science", "Environmental Science" ], "extfieldsofstudy": [] }
233679621
pes2o/s2orc
v3-fos-license
A Review of Sensors and Their Application in Internet of Things (IOT) The Internet of Things (IOT) reflects the advantages of information-centric sensor networks (ICSN) and wireless sensor and actuator networks (WSAN). The IOT becomes smarter and more intelligent when combined with smart sensors. The application of sensors is one of the technology standards identified by IOT researchers. IOT allows communication among people and machines everywhere. The connection may employ any random path in the network and offers many applications. Sensors give the IOT a new edge. A large number of sensors communicate and transfer information to serve value-added services. Sensors enable applications such as supply chain management, military, irrigation, aerospace, automobile and retail. Wireless sensor networks (WSN) and single sensors provide smart parking, lighting, traffic management, smart water and agriculture, structural health monitoring, military, smart buildings, transport systems, etc. This article presents a comprehensive review of sensors, their types, and the significance of sensor applications in IOT through the internet for the purpose of detection and retrieval of information from anywhere at any time, as far as feasible. INTRODUCTION A sensor is an electronic device that detects physical stimuli and converts raw data into machine- or human-readable form. Sensors are basically devices which can detect and respond to optical or electrical signals; they convert measured physical characteristics into measurable electrical signals. The size of a sensor network depends on the sensor nodes formed. The connection between two nodes can be wired or wireless. A small, low-power sensor node is integrated in the information network. It possesses a virtual personality, storing data and physical attributes using a smart interface. Smart sensor nodes, available at reasonable prices, provide easy access to information globally. By 2050, it is estimated that half of the world's population will reside in urban geographic locations [1][2][3]. Environment Monitoring At present the world is facing serious environmental issues due to the rapid growth of population and pollution from industries and other sources. Sensors used for monitoring the environment include humidity, wind, pressure, speed, temperature, light intensity, salinity, toxic gas, and oxygen sensors. For detecting existing pollutants, a signal conditioning unit with limited potential and a sensor array are employed. Evaluation of Environmental System The two prominent requirements for the health of living organisms are pure water and fresh air. Pollutants are substances which, when added to water, alter its natural properties. Pollutants are generally categorized as pathogens such as untreated human sewage, waste from nuclear, thermal, oil or gas-based power plants, organic matter such as dead plants and animals, and chemical waste from factories. Inorganic pollutants may be acids, salts or heavy metals. Plastic food packaging consumed by living organisms produces a chemical called phthalate ester, which has a carcinogenic effect on the body. Water pollution can be categorized as non-point pollution and point pollution. Plastic food packaging dumped in a lake, sea or ocean causes a serious health hazard [4]. When water pollution arises from diffuse sources such as floods or pesticides sprinkled on crops, it is termed non-point pollution. In contrast, pollutants added to water from identifiable sources, such as pathogens or factory disposal, cause point pollution.
The addition of toxic gases such as ammonia, carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), ozone (O3), and particulate matter (PM) to the composition of air leads to air pollution [5,6]. Exposure to UV rays may cause skin cancer. The major sources of smog are waste emissions from industries and automobiles; smog is defined as a mixture of smoke and fog suspended in the air. Particulate matter smaller than 10 micrometres is not visible to the naked human eye when suspended in air; it causes breathing problems in humans due to blockage of the respiratory intake system. Sensors therefore need to be developed to detect particulate matter and to measure air and water pollutants; present sensors monitor the presence of pollutants in the medium. SMART SENSORS A basic sensing mechanism with embedded intelligence is known as a smart sensor [7]. The five elements for building a smart sensor are the sensing element, memory, interface unit, signal processing unit and software [8]. Different sensing elements detect signals for processing, data interpretation, validation and data logging [9]. Smart sensors can provide access to the various facilities offered by smart cities, as depicted in Fig. 1. Several techniques implemented by sensors for collecting information from various sources are under continuous observation.
Fig. 1: Smart city benefits
Some sensors use silicon for their development, as in piezoresistive, porous silicon and MEMS devices [6]. Sensors are fabricated via different transduction techniques, as shown in Table 1. These techniques make sensors easy to fabricate, design and operate, and keep development costs low. Sensors are capable of detection in electronic circuits and in industrial and environmental applications. Sensors are designed, firstly, to provide a healthy and protected life for humans, flora and fauna; secondly, to check the quality of air, irrigation soil, lakes and rivers; and further, to help conserve natural resources for human sustainability. A systematic review is required on rainfall, volcano eruptions, flash floods, soil erosion, and many more. Sensors are a necessary part of the IOT: data are initially collected and then analyzed. Several parameters are required for various IOT applications, including latency, energy consumption, accuracy, etc. Some basic receiver-driven approaches to the data collection framework are discussed below [10]. Data driven This approach assumes a trade-off between the frequency of measurement requests and data accuracy. The scope of the data accuracy scheme widens when a large number of sensors are employed in a particular geographic location. Time driven Elapsed time relates to the timeliness of the data measurement; it may be defined by requiring that the time since the last measurement remains below a maximum delay. Privacy driven This approach tries to maintain security by altering the accuracy of results retrieved by individuals. The strategy combines the time- and data-driven approaches by shortening the duration for which a sensor is requested for data. Energy driven This strategy targets maximizing gain for a specific data accuracy and is represented by a utility function. Sensors Classification The end-use application serves as the criterion for sensor classification. An investigation by Harbor Research and Postscapes offered the initial nomenclature for sensor categorization. In this review paper, sensors were arranged into the different categories reported below (see Table 2).
Table 2: Sensor categories (Sensor – Remark)
Smart grid – Such sensors take care of efficient power generation.
Electronic – This sensor gathers information from sensor-equipped devices such as cameras, mobile phones, etc.
Chemical – This sensor detects any unwanted chemical impurities in water and air.
Biosensors – This class of sensors is related to animals and humans; any sort of biological information is conveyed through these sensors.
Ambient – These sensors help in examining environmental concerns.
Electric – This category of sensor provides information related to electricity consumption.
Motion – These sensors sense the movement of people or things under consideration.
Position – This class of sensors traces the position of any object, globally or locally.
Identification – This sensor provides a unique identity to an object for the sake of its identification.
Machine vision – This sensor collects data in the form of images that can be further processed using computers.
Load/Force – The load/force sensor records the deformation observed in a system due to the application of load on it.
Hydraulic – These sensors are used for determining and controlling the flow rate of liquids, generally water.
Presence – These sensors are used in security systems for the purpose of identifying the presence of any unwanted thing in an unauthorized area.
Acoustic – These sensors measure the sound level observed in the surroundings.
Interaction – Such sensors examine human behavior from outside and act accordingly; sliders and buttons fall under this category of sensor.
Smart Grid Sensors These sensors ensure efficient power generation, transmission and distribution from source to end users. A smart grid technique is categorized into five segments depending upon their role, sensors, component advancement, communication and supporting decision system [4]. Electronic sensors Such sensors gather stored data from a variety of sensor-equipped devices, including mobile phones, security street cameras, etc. The analyzed data serve the welfare of citizens in terms of security and the level of accuracy in generating and transmitting data [11]. Detection of various forms of energy via electroscopes, voltage detection, magnetic anomalies, etc. is the major functionality performed by these sensors. In smart urban settings, sensors combined with neural networks are used to analyze data from images, speech recognition, natural language processing and video [12]. Chemical sensor These sensors identify the presence of chemical compounds in the air and problems linked to them. This class of sensor includes oxygen sensors, pH sensors, gas sensors, smoke detector sensors, carbon dioxide sensors, catalytic bead sensors, the electronic nose, etc. Chemical sensors can sense the physical and chemical properties of a system and are applicable in the medical field to diagnose allergies in the living body [13]. Biosensor These sensors work in the field of biomedicine. Neutron and MEMS sensors are utilized for ionizing and subatomic sensing. They work on the principle of transduction and adopt optical techniques for attaining accuracy at a fast pace. Non-polar molecules and anomalies can be measured and detected via electrochemical sensors [14,15]. Concentrations of the analyte are indicated by electrical properties and the chemical properties of ions. These are either wearable or can be implanted on/in the living body so as to collect data on a particular subject, representing biological information. Biosensors include heartbeat and breath sensors, body posture sensors, sensors for the elderly and kids, etc.
Biosensors are of four types as illustrated in Fig. 2. Fig 2: Biosensors Classification Bioreceptor also called as recognition element. It senses biological characteristics in living body. Different types of bio receptor include anti body, enzyme, antigen, protein, etc. Transducer converts particular characteristics into equivalent electrical signal. Different types of transducer include optical, calorimeter, electromechanical, mass change, etc. Signal processor blocks filters noise using finite impulse response filter (FIR). Amplification is performed to enhance the strength of signal so as to display or transmit for storing and analyzing. Ambient sensor This sensor is combination of different sensors likewise temperature, atmospheric pressure, light, humidity sensor. The collected data from sensor helps in monitoring environmental issues. Motion sensor Motion sensor detects the movement of living being or objects. Two vital components included are gyroscope and accelerometers. The range of variation of axis from 3 up to 9 axes. Electric sensor These are multi variant data provider where parameters such as tension, current and electricity related data is sensed. Electric sensor could monitor how much energy is consumed at a site. Identification sensor These sensors are capable to identify any object to the system. The cards or tags have utility for string semantic purpose. Radio frequency identification (RFID) and near field communication (NFC) are the elements of IOT and modern identifying techniques. Position sensor These sensors locate the position of object which may globally or referred locally. Global positioning space (GPS) sensor provides data spatial information covering 2G coordinates to complex collection of data. Examples of position sensors are magnetometers, fixed wireless network locationization service, GPS via received signal strength (RSS) processing information. Presence sensor These are passive infrared sensor (PIR). They can detect presence of living organism in particular authorized area. It is implemented in security systems and many more. Machine vision sensor The vision based sensor collect data and forward for utilization of computer aided/ assisted vision to serve IOT platform. They have applicability in context of both conventional and infrared security cameras. This kind of sensor employs the image processing techniques for sensing the detection of movement of human or entities. These sensors classified under interaction sensor have pipelining for cognizant living organism interaction. They observe human behavior from surroundings and respond accordingly. Sliders and buttons are illustrations under classification of such sensors. Acoustic sensor These sounds activated gadgets collect the data of sound wave to forward same to respective application. The examples of devices which implement sound wave are piezoelectric sensor and microphones. Force sensor Speedometer sensor and load sensor are gathered into category of force sensors. It is the measure of force applied to them externally. Hydraulic sensor Water quality which monitors and measure the liquid level, its flow intensity and properties are termed as hydraulic sensors. It includes water and other liquids. Object information sensors This sensor falls into category of context application sensor. It provides data about any entity. PRESENT STATUS Morais et al. 
[16] contributed in building up understanding of dynamic IOT to data researchers by plotting the IOT featuring seven prominent scenario's with particular description of its variables, quantifier and sensors organization. Secondly, recognition of nineteen data types has been accomplished. Amir Badshah et al. [17] explored the procedure to navigate transport unit in smart cities using implementation of vision based sensor and advanced normalized phase correlation. Thus, avoids deploying (GPS) Global Positioning System as well as calibrated sensors. Such mapping of vehicles requires image registration. The investigator successfully got accurate result in comparison to calculated data from GPS corresponding to estimated position accuracy. Gao chong et al. [18] examined the athlete and comeback of smart home system based on browse/ server (B/S) module. The architecture of purposed smart home system along with its hardware design and its implementation criteria has been explained. The user can remotely control the house hold objects. The outline performance of the above-mentioned system is flexible and beneficial. Soumya Basak et al. [19] introduced (RMS) Remote Monitoring Station. It is achieved when interconnection of internet with wireless network, (MQTT) Message Queuing Telemetry Tracking is established, the sensors and CC3200 launch pad by Texas Instrument. The application lies in providing climatic alerts to farmer helping them in growth and maintenance of crop production. A.R. Ali et al. [20] discussed the architecture of wireless smart sensors, its associated standards, protocols, network topologies and information regarding its implementation. The mobility setup in network of wireless smart sensors via variety of protocols likewise IEEE 1451. B. Soh et al. [12] explored the element responsible for setting up smart city that is 'network sensors'. The authors proposed variety of solutions and ethical implications to address challenges faced during evolution of new techniques. It includes delivery of service & optimization, security & safety management, traffic control & parking, smart building, public transport and many more. Yaw-Wen kuo et al. [21] focused on the application of IOT offered to daily routine of users applied in the automation system through environment sensor. Due to lack of availability in transmission of electricity, low transmission and long range of power is required so that sensor node draw less amount of current during transmission. The role of (MAC) media access control and RF radio technologies has been discussed. The author designed commercial module based sensor node which implemented IEEE 802.15.4e (TaSCH) time slotted channel hopping. Alcaya et al. [22] explored the detailed architecture for angle of attack sensors and mentioned brief description of its drawing. A water management system for angle of attack sensor and issues related with sensor has been focused. Himadri Nath Saha et al. [23] surveyed literature on disaster management on risk identified during disaster and its preparedness. The phenomena of disaster management includes emergency response, allocation of resources, reaction planning and ends up at recovery of disaster using early warning, sensors and IOT standard. KamanashisBis et al. 
Kamanashis Biswas et al. [24] studied three chief factors of sensor nodes (energy, the associated position, and link quality) when kept inside or outside the network, deploying numerous base stations, performing simulations in MATLAB, and calculating node lifespan under one of the routing practices. Xuxun Liu et al. [25] implemented ant colony optimization (ACO); the researchers developed a scheme for globally optimal distance acquisition by adopting a network-lifetime estimation process, with the objective of high network throughput and low energy dissipation from sensor nodes. Wenzheng Xu et al. [26] investigated the case where nodes are partially charged through wireless energy transfer using magnetic resonant coupling. Comparing against state-of-the-art benchmarks with two types of algorithms, the total distance travelled by the mobile charger was 1 to 15% longer, and the maximum total sensor lifetime was 9% greater with respect to the average energy-expiration duration per sensor. Loizos Kanaris et al. [27] verified and compared the results of simulations performed during the design of IoT networks; in this experiment, two simulators were connected, one handling the network layer and the other 3D polymetric radio propagation, and the authors proposed a flowchart methodology for the TruNET-Cooja interconnection. Tao Liu et al. [28] explored the structure of IoT technologies and the implementation of their functionalities, discussing IoT applications in clinical care and real-time ECG monitoring using telemedicine technology. Henrich C. Pohls et al. [29] described the implementation of software-based elliptic curve cryptography (ECC) on constrained devices for secure digital signatures; ECDSA P-160 (NIST) served as the software prototype, with the aim of reconstructing the previously signed bit representation. Soumya Kanti Datta et al. [30] demonstrated the feasibility of integrated semantic computing on Android-powered mobile devices, discussing the embedding of a lightweight version of the M3 framework in a mobile application; the machine-to-machine architecture and its associated prototype were implemented. Prahlada Rao B. B. et al. [31] explored a project on cloud computing and IoT for addressing big-data issues; various applications, such as irrigation, environmental monitoring, and augmented reality, are realized on the cloud using sensing-as-a-service. Qian Zhu et al. [32] implemented an IoT gateway working as a bridge between the internet (the network of networks) and a wireless sensor network; the gateway is based on ZigBee and GPRS protocols, with applications in smart city, industrial, and environmental monitoring. Uday Shankar Shanthamallu et al. [33] presented a brief review of machine learning concepts and the applicability of its algorithms, covering a variety of models including supervised and unsupervised methods and deep learning processes. The paper notes that the Galaxy S5 has 26 different built-in sensors, namely proximity, camera, pressure, humidity, gyro, microphone, magnetometer, accelerometer, infrared, etc.; the algorithms are applied to anomaly detection, sensor networking, and pattern recognition at different layers of the IoT architecture. S.S. Navghane et al. [34] designed a sketch for implementing IoT-based smart dustbins equipped with a Wi-Fi module and weight and IR sensors, contributing to hygiene in society. The current status of the garbage can be retrieved in a mobile web browser via Wi-Fi as an HTML page, with the microcontroller carrying the IR sensor interfaced with the central system through the Wi-Fi module.
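To make the dustbin example concrete, the sketch below shows how such a bin node might serve its fill status as an HTML page over Wi-Fi. This is a minimal illustration, not the authors' implementation: the bin depth, port, and the stubbed sensor read are all assumptions.

```python
# Minimal sketch of a smart-dustbin node serving its fill status as an HTML
# page over Wi-Fi. The sensor read is stubbed; a real node would sample an
# IR distance sensor (lid-to-garbage distance) and a load cell.
from http.server import BaseHTTPRequestHandler, HTTPServer
import random

BIN_DEPTH_CM = 100  # assumed bin depth

def read_fill_percent() -> float:
    """Stub for the IR sensor: distance from lid to the garbage surface."""
    distance_cm = random.uniform(0, BIN_DEPTH_CM)  # replace with real reading
    return 100.0 * (1 - distance_cm / BIN_DEPTH_CM)

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        fill = read_fill_percent()
        body = f"<html><body><h1>Bin fill: {fill:.0f}%</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Any phone on the same Wi-Fi network can browse to port 8080.
    HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```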
Khirod Chandra Sahoo et al. [35] analyzed intruder detection for maintaining security. The authors discussed the applications and characteristics of the passive infrared (PIR) sensor, which is capable of detecting anomalous motion even in dark surroundings. ZigBee is used to create the wireless sensor network, and an ESP8266 module transmits data to a remote server; the Global System for Mobile communication (GSM) sends text alerts to the respective authorities when an intruder is detected. All sensor nodes connected to the center node use ZigBee to transmit and receive data wirelessly.

SENSOR BASED IOT APPLICATIONS

Clinical Care
In a patient emergency, certain parameters such as blood pressure, glucose level, and heart condition must be monitored before medicine is prescribed, and only sensor-based IoT for healthcare can provide fast access to these tests. It also provides remote monitoring of patients: an electronic sensor connected to the patient can retrieve information on physical, psychological, and behavioral imbalances. Information about the patient is made available to the medical practitioner around the clock, from any location, wirelessly via IoT devices, enabling doctors to make recommendations remotely.

Smart City
The rising population curve is the critical reason for the requirement of smart cities. Urban dynamics such as water, land, and health can be improved sustainably and efficiently [36]. Thermal sensors track the transmission and distribution paths of energy under changing weather conditions [37]. The quality of service (QoS) that turns cities smarter is only possible with advances in cloud computing and the fusion of smart sensors into IoT devices [15]. A blend of advanced sensors that store information and analyze it for communication, so as to manage city assets, brought up the concept of the smart city [38]. A smart grid technique is categorized into five segments depending on their role: sensors, component advancement, communication, decision-support systems, etc. [39]. IoT plays a vital role in implementing and conceptualizing the smart city, where sensor nodes are deployed for connectivity [12]. Public services have been revolutionized by the advancement of smart sensors with IoT devices in cities [40]. Smart city applications arise as governments around the globe adopt digitization programs, which are responsible for the policy and infrastructure layout of the smart city [41].

IoT and Medical Robotics
Robots are programmed machines built to accomplish tasks in distinct fields such as healthcare and industry. Smart sensors combined with IoT devices provide input to the robots; the obtained characteristics are sent to the healthcare center's server using protocols like Bluetooth, ZigBee, and wireless fidelity (Wi-Fi).

Smart Home
A smart home includes control by any communication equipment and display via all kinds of interfaces [42]. The home gateway integrates different interfaces and covers a variety of communication techniques. The features of a smart home are as follows: first, compatibility with different communication technologies; second, fetching information about the smart home is convenient because it provides ubiquitous service; and last, comprehensive perception, whereby many logical and physical sensors monitor the smart home in real time.
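Before turning to the layered architecture, the sketch below illustrates how a sensing-layer node could report a reading to the home gateway over a publish/subscribe protocol. The broker address, topic, and reporting interval are assumptions, the sensor read is stubbed, and MQTT (via the paho-mqtt 1.x client) stands in for whichever short-range protocol a real deployment would use.

```python
# Minimal sketch: a sensing-layer node publishing a temperature reading to
# the home gateway's MQTT broker (paho-mqtt 1.x API). The broker address,
# port, and topic are assumptions; the sensor read is stubbed.
import json
import time
import random
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # assumed address of the home gateway
TOPIC = "home/livingroom/temperature"

def read_temperature_c() -> float:
    return 20.0 + random.uniform(-2.0, 2.0)  # stand-in for a real sensor

client = mqtt.Client()
client.connect(BROKER, 1883)
client.loop_start()               # background thread handles the network I/O
while True:
    payload = json.dumps({"ts": time.time(), "temp_c": read_temperature_c()})
    client.publish(TOPIC, payload, qos=1)
    time.sleep(60)                # report once a minute
```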
The architecture model on which an IoT smart home relies is depicted in Fig. 3.

Sensing layer
A large number of physical and logical sensors monitor household objects. Sensing terminals include environment-monitoring sensors, cameras, GPS, household-object-controlling sensors, etc. Short-range wireless ZigBee sensing technology finds broad application in the smart home [43]. A mesh architecture is used to compose the wireless sensor network (WSN).

Home gateway
This is the connector between the perception and network layers. It allows the transmission of information thanks to its compatibility with various communication interfaces; it is also called the 'protocol conversion unit'.

Network layer
The perception layer transmits information for processing to the network layer via the home gateway. A hybrid network-integration technique provides information access to each network; examples of wireless and wired communication are the mobile cellular network, the internet, and the wireless local area network (WLAN).

Application layer
This layer contains the concrete application support platform and acts as the interface between users and the smart home system.

Smart Parking
Owing to the increase in the number of vehicles, parking has emerged as a crucial component of smart transport systems. Improper parking results in traffic congestion, and parking in unauthorized areas leads to mismanagement of the transport system. Sensors can be deployed at parking locations, and information about the availability of parking slots can be fetched by users through an Android smartphone application. Earlier investigations of vehicular traffic found that about 30% of traffic congestion occurs because vehicles are parked at the driver's convenience [44]. The advent of smart parking brings optimization in terms of fuel consumption, manual effort, and time savings. Investigators have implemented ultrasonic and infrared sensors with a Raspberry Pi 3 board for IoT-based smart parking; the information detected by the sensors is stored in the cloud by a processing module, and the status of vacant parking spaces can be retrieved by the user through an Android smartphone application. The block diagram of the WSN-based smart parking application using IoT is represented in Fig. 4.

Sensor module
A sensor node is deployed at every parking slot. Ultrasonic and infrared sensors are responsible for updating information on the vacancy status of the slot.

Raspberry Pi module
It is used as a System on Chip (SoC): a low-cost computer the size of a credit card.

ThingSpeak
ThingSpeak is an IoT-related web service developed for saving the sensed data of various IoT-based applications; it also assists in plotting the output data in graphical form [45].

Mobile user
All tablet or smartphone users are mobile users, whether they use the devices while traveling or keep them in one place.
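A minimal sketch of such a parking-slot node follows: an ultrasonic distance reading decides whether the slot is occupied, and the status is pushed to a ThingSpeak channel over its HTTP update endpoint. The GPIO pins, the write key, and the 50 cm occupancy threshold are assumptions for illustration, not values from the cited setup.

```python
# Sketch of a parking-slot node: an HC-SR04 ultrasonic sensor on a Raspberry
# Pi decides whether the slot is occupied and pushes the status to ThingSpeak.
# Pin numbers, the write key, and the threshold are illustrative assumptions.
import time
import requests
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # assumed BCM pin numbers
WRITE_KEY = "YOUR_KEY_HERE"  # ThingSpeak channel write API key (placeholder)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm() -> float:
    GPIO.output(TRIG, True)
    time.sleep(10e-6)                 # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:      # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:      # time the echo pulse
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound, round trip

while True:
    occupied = int(distance_cm() < 50)  # car closer than 50 cm => occupied
    requests.get("https://api.thingspeak.com/update",
                 params={"api_key": WRITE_KEY, "field1": occupied})
    time.sleep(20)  # ThingSpeak's free tier allows roughly one update per 15 s
```

The smartphone application would then read the channel back through the ThingSpeak read API to display which slots are vacant.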
Smart Irrigation [19]
To achieve the target of an ideal yield from the fields, smart agriculture has been adopted. The Message Queuing Telemetry Transport (MQTT) protocol and wireless networking are combined with the Internet of Things (IoT) to monitor the field environment in real time. Data from the field can be collected at any instant by the sensors, MQTT, and the CC3200 LaunchPad by Texas Instruments, and then sent to the remote monitoring station (RMS). Factors influencing crop growth can be monitored, such as soil fertility level, water management, sunlight intensity, fertilizer dosage, and altering climatic conditions (humidity and temperature). The components deployed for the test setup are discussed below; a control-loop sketch combining them appears after the conclusion.

Light Dependent Resistor (LDR)
A photosensitive device for detecting sunlight intensity; its resistance is very low when exposed to light and high otherwise.

Soil moisture sensor
The soil sensor detects the moisture content of the soil. The module output is at a low level when excess water is present in the soil. The sensor operates at 5 V, 20 mA.

Water pump
The operating range of the water pump is 6 to 9 V DC. It sucks water in through one nozzle and pumps it out with greater force through another nozzle.

Four-channel relay module
It works at 5 V, 10 A. A relay may act as an amplifier or a switch; here it amplifies the small current generated by the sensor.

CC3200 LaunchPad
Texas Instruments developed the CC3200 LaunchPad as the first internet-on-a-chip solution. Its Wi-Fi interface supports 802.11 b/g/n at low power consumption, and the board is compatible with the MQTT and HTTP protocols. The microcontroller unit is a 32-bit ARM Cortex-M4 running at 80 MHz.

DC motor
It converts electrical energy into mechanical energy. Compared with a normal DC motor, the L-shaped motor provides greater torque and a greater number of rotations per minute (RPM); it runs at 300 RPM.

CONCLUSION AND FUTURE WORK
The innovated devices possess the capability to actuate and communicate with other devices, which brought adaptability to the concept of the Internet of Things; here actuators and sensors play an incalculable role in the environment, and new capabilities become accessible through vast information resources. The IoT is an upcoming technology for creating a revolutionary application domain of plug-and-play smart devices. Future research targets utility applications by 2020 and the transport field by 2025 and thereafter. The former includes energy production and recycling, large-scale WSNs, high protection, deploy-and-forget networks, self-adapting systems of systems, cloud computing and online analysis with data storage, renewable materials and nano power units, monitoring of critical infrastructure, and smart grid and housing metering. Furthermore, considerable research is required in the transportation application domain: smart tags for automobile management and logistics, vehicle-to-infrastructure communication, system-level analytics, autonomous means of transportation using IoT services, heterogeneous systems interacting with other sub-networks, smart traffic, automatically driven vehicles, and intelligent transport and logistics.
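As the closing illustration promised above, here is a minimal control loop tying the soil-moisture sensor and the relay-driven pump together. The GPIO pins and polling interval are assumptions; per the component description, the sensor's digital output goes low when the soil is wet, so a high reading is treated as "dry".

```python
# Minimal irrigation control loop: poll the soil-moisture sensor's digital
# output and drive the pump through one relay channel. Pin numbers and the
# polling interval are assumptions; per the text, the sensor output goes LOW
# when the soil is wet, so a HIGH reading means "dry, so water".
import time
import RPi.GPIO as GPIO

SOIL_PIN, RELAY_PIN = 17, 27  # assumed BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(SOIL_PIN, GPIO.IN)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        dry = GPIO.input(SOIL_PIN) == GPIO.HIGH
        GPIO.output(RELAY_PIN, GPIO.HIGH if dry else GPIO.LOW)  # pump on/off
        time.sleep(30)  # re-check the soil every 30 seconds
finally:
    GPIO.cleanup()
```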
Multiorgan Detection and Characterization of Protease-Resistant Prion Protein in a Case of Variant CJD Examined in the United States

Background
Variant Creutzfeldt–Jakob disease (vCJD) is a prion disease thought to be acquired by the consumption of prion-contaminated beef products. To date, over 200 cases have been identified around the world, mainly in the United Kingdom. Three cases have been identified in the United States; however, these subjects were likely exposed to prion infection elsewhere. Here we report on the first of these subjects.

Methodology/Principal Findings
Neuropathological and genetic examinations were carried out using standard procedures. We assessed the presence and characteristics of protease-resistant prion protein (PrPres) in brain and 23 other organs and tissues using immunoblots performed directly on total homogenate or following sodium phosphotungstate precipitation to increase PrPres detectability. The brain showed a lack of typical spongiform degeneration and had large plaques, likely stemming from the extensive neuronal loss caused by the long duration (32 months) of the disease. The PrPres found in the brain had the typical characteristics of the PrPres present in vCJD. In addition to the brain and other organs known to be prion positive in vCJD, such as the lymphoreticular system, pituitary and adrenal glands, and gastrointestinal tract, PrPres was also detected for the first time in the dura mater, liver, pancreas, kidney, ovary, uterus, and skin.

Conclusions/Significance
Our results indicate that the number of organs affected in vCJD is greater than previously realized and further underscore the risk of iatrogenic transmission in vCJD.

Introduction
Variant Creutzfeldt–Jakob disease (vCJD) was first reported in 1996 as a novel non-inherited form of prion disease [1]. Although the disease bears some of the classical features of the sporadic form of the human transmissible spongiform encephalopathies (TSE) or prion diseases, it has distinctive features [2-4]. Most remarkably, vCJD presents at an average age of 26 years [4]; histopathologically, it is characterized by the presence of plaques containing prion protein surrounded by vacuoles, referred to as "florid" or "daisy" plaques [1]. Furthermore, the abnormal and pathogenic prion protein isoform (hereafter identified as PrPres) associated with vCJD has features that are unique among non-inherited human prion diseases [5]. PrPres is thought to be the major or lone component of the infectious agent of prion disease, the so-called "prion"; however, a lack of direct correlation between PrPres and infectivity has occasionally been reported [6]. The distinctive features of vCJD, along with its detection in the UK following the peak of the British epidemic of the prion disease bovine spongiform encephalopathy (BSE), pointed to the consumption of prion-contaminated beef products as the possible source of infection [1,7]. Successful transmission to non-human primates and to transgenic mice expressing the human prion protein (human PrP), with replication of major features of the vCJD phenotype, provided overwhelming evidence supporting the notion of cattle-to-human transmission [8-10]. These findings established vCJD as the first Western-world prion disease to be acquired by oral infection. Kuru, discovered in the 1950s, was endemic among New Guinea tribes practicing ritualistic cannibalism [11-13].
The oral route of prion infection in vCJD raised the possibility that tissues and organs besides the central nervous system (CNS) might also be affected. To date, PrPres has been reported in several tissues and organs outside the CNS of vCJD patients (Table 1) [14-19; P. Brown, unpublished data]. Although the amount of PrPres in non-neural tissues is small compared with that in the brain, the risk posed by the spread of even small amounts of PrPres has been underscored by the iatrogenic transmission of vCJD from blood donors in the preclinical phase of the disease [20]. We examined the main characteristics and tissue distribution of PrPres in a case of vCJD in which the disease was most likely acquired in the UK but which is officially referred to as an American case because illness onset occurred in the US [21]. In an extensive autopsy examination, sodium phosphotungstate (NaPTA) precipitation, a highly sensitive method of PrPres detection [14,22], was used to establish the presence and estimate the relative amounts of PrPres in the organs and tissues made available to the National Prion Disease Pathology Surveillance Center (NPDPSC).

Collection and Processing of Tissues
A whole-body autopsy was performed within 20 hours of death. The National Prion Disease Pathology Surveillance Center (NPDPSC) received frozen and fixed tissue samples. Frozen tissue included slices from one cerebral and cerebellar hemisphere, portions of pituitary gland and dura mater, as well as samples from the trachea, breast, heart, lung, esophagus, stomach, duodenum, jejunum, ileum, colon, liver, spleen, pancreas, adrenal gland, kidney, urinary bladder, uterus, ovary, mesenteric lymph nodes, diaphragm, and skin. The skin was taken from the chest wall. In addition, paraffin blocks or sections from the same tissues were received. Frozen tissues were stored at −80°C.

Histopathology and Prion Protein Immunohistochemistry
Histology and immunohistochemistry were carried out as previously described [23] on brain sections from the frontal, temporal, and parietal neocortices (the occipital cortex was unavailable), neostriatum, thalamus, and cerebellar hemisphere, and on sections from all received tissues. Immunohistochemistry was carried out with the monoclonal antibody 3F4 to PrP residues 109-112 [24].

Genetic Analysis
Genotyping was performed on genomic DNA extracted from blood as previously described [25].

Preparation of Tissue Homogenates
Tissue homogenates (TH) (10%, wt/vol) were prepared at 4°C in phosphate-buffered saline (PBS) lacking Ca2+ and Mg2+, with 1% Sarkosyl (pH 7.4), followed by centrifugation at 1,000 × g for 5 minutes to remove cellular debris. Excess collagen was eliminated by homogenizing the tissue and removing the white, dense fraction containing mainly collagen from the fraction rich in parenchymal tissue. Contamination of non-nervous tissue with brain tissue that might have occurred at autopsy was controlled for by sampling the depth of the organs and discarding the tissue at the surface. The dura mater, for which this procedure was unsuitable, was rinsed extensively with PBS before homogenization.

Sodium Phosphotungstate Precipitation (NaPTA)
Precipitation with NaPTA was carried out according to Wadsworth et al. [14] with minor modifications. Briefly, 100 mg of wet tissue was homogenized (10% wt/vol) in PBS lacking Ca2+ and Mg2+, with 2% Sarkosyl (pH 7.4), followed by centrifugation at 1,000 × g for 5 minutes to remove cellular debris.
A fraction of the supernatant was collected and frozen for immunoblot analysis, whereas a second fraction of 500 µl was mixed with an equal volume of PBS prepared as above. Samples were adjusted to a final concentration of 50 units/ml of Benzonase and 1 mM MgCl2 and incubated at 37°C for 30 minutes, followed by the addition of 81.3 µl of a pre-warmed solution containing 4% NaPTA and 170 mM MgCl2. After incubation for another 30 minutes at 37°C under constant agitation, samples were centrifuged at 16,000 × g for 30 minutes. The supernatant was discarded, and the pellet was resuspended in 200 µl of PBS containing 0.1% Sarkosyl (pH 7.4) with the addition of 50 µl of 250 mM EDTA (pH 8) in order to remove the white precipitate present in the solution. After an additional centrifugation at 16,000 × g for 30 minutes, supernatants were discarded and the pellets re-suspended in 30 µl of PBS containing 0.1% Sarkosyl.

Immunoblot
Aliquots of TH or NaPTA-precipitated samples were examined either untreated or after treatment with proteinase K (PK) (specific activity 44 units (U)/mg, Sigma-Aldrich) at a concentration of 2 U/ml (1 U/ml corresponds to 23 µg/ml when the PK specific activity is 44 U/mg) for 60 minutes at 37°C under constant agitation. The reaction was terminated by the addition of 3 mM phenylmethylsulfonyl fluoride. Samples were diluted in sample buffer (final concentration: 3% sodium dodecyl sulfate [SDS], 4% β-mercaptoethanol, 10% glycerol, 2 mM EDTA, 62.5 mM Tris, pH 6.8) and boiled for 10 minutes before loading. For deglycosylation of the protein, samples were denatured and incubated in the presence of recombinant peptide N-glycosidase F (PNGase F) according to the manufacturer's protocol (New England Biolabs). Protein samples were separated on 15% Tris-Glycine SDS-PAGE gels using electrophoresis apparatus holding running gels of different lengths (Criterion 7 cm and home-made 15 cm high-resolution system, Bio-Rad). Proteins were transferred to Immobilon-P (Millipore) for 2 h at 65 V, blocked in 5% (w/v) non-fat milk powder in TBS containing 0.1% (v/v) Tween-20 (TBST) (blocking solution), and incubated overnight at 4°C with the selected antibodies. After several washes in TBST, membranes were incubated with a 1:4,000 dilution of a peroxidase-conjugated secondary antibody in TBST for 60 minutes at room temperature, washed in TBST, and visualized by enhanced chemiluminescence (Amersham ECL Plus, GE Healthcare) on Kodak BioMax XAR films (Eastman Kodak). Two antibodies to human PrP were used: the monoclonal antibody 3F4 (to residues 109-112) and the rabbit antiserum 2301 (to residues 220-231).

Ethics Statement
This study was conducted according to the principles expressed in the Declaration of Helsinki. No Institutional Review Board review was required because federal regulations do not require Board approval of research on deceased patients. Written informed consent for the use of patient information and tissue specimens for research purposes was obtained.

Clinical History
Clinical data on the present patient have been reported in detail [21]. Briefly, the patient lived in Britain until the age of 13 and immigrated to the US in 1992. In early November 2001, at the age of 22 years, the patient was evaluated for depression, emotional instability, and memory loss, followed one month later by involuntary movements, gait disturbances, and incontinence.
During the ensuing three months, the patient's motor and cognitive deficits worsened, and confusion, hallucinations, dysarthria, bradykinesia, and spasticity also occurred. The diagnosis of vCJD was made following brain magnetic resonance imaging and confirmed by immunoblot and immunohistochemistry of tonsil tissue. She received an experimental treatment with quinacrine for 3 months but showed only minimal and transitory improvement. The patient died in June 2004, 32 months after clinical onset.

Histopathological Examination
Both gray and white matter structures were severely atrophic, with nearly total loss of neurons and replacement of the neuropil by prominent gemistocytic astrogliosis (Fig. 1A and B). Thus, the typical spongiform degeneration was not observed. Instead, there were irregular extracellular spaces consistent with the astroglial scarring present in the cerebral cortex (Fig. 1B). Macrophages were also occasionally present, especially in the white matter, and probably reflected Wallerian degeneration. The cerebral and cerebellar cortices and the basal ganglia were more affected than the thalamus. Many monocentric plaques, often large and occasionally surrounded by "pseudo-vacuoles", were present preferentially in the deep cerebral cortex and superficial white matter as well as, to a lesser extent, in the cerebellar cortex and white matter (Fig. 1B). All the organs examined (see Materials and Methods for details) were unremarkable except for the kidney and the descending colon, which showed lymphocytic inflammatory infiltrates (data not shown). In the kidney the infiltrates displayed a focal follicular pattern consistent with interstitial nephritis, whereas in the descending colon the lymphocytic infiltrates were linear and located in the submucosa. Immunohistochemical staining for PrP of brain sections revealed numerous well-circumscribed as well as more diffuse PrP deposits consistent with unicentric plaques or early plaque (also called plaque-like) formations, which were especially prominent in the very superficial and deep cortical layers (Fig. 1C and D). Granular and "synaptic" immunostaining patterns were easily detectable in the basal ganglia and thalamus. The cerebellum showed leopard skin-like and plaque-like immunostaining patterns in the molecular and granule cell layers, respectively (Fig. 1E and F). Polarized-light examination confirmed that the plaques contained amyloid (data not shown). No PrP immunostaining was detected in any of the tissues examined outside the brain. Genetic analysis demonstrated methionine homozygosity at codon 129 and no mutations or other variations in the open reading frame of the PrP gene.

Characterization of Brain PrP
Immunoblot analyses of the PK-digested total homogenate (TH) from all cerebral cortices examined displayed the electrophoretic mobility and glycoform ratios characteristic of the PrPres described in vCJD (Fig. 2A) [5,26]. In the cerebellum, PrPres showed a slightly faster migration (Fig. 2B). When a high-resolution gel (15%, 15 cm long) was used, the unglycosylated PrPres form in the cerebellum appeared to resolve into three bands, which included the band corresponding to the 19 kDa PrPres type 2 and two additional bands migrating about 0.5 kDa and 1 kDa faster (Fig. 2C). The upper band, containing the diglycosylated PrP isoform, was over-represented in all brain regions examined, including the cerebellum (Fig. 2B and D).
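As an aside on the arithmetic behind the glycoform ratios reported here and in Table 2, the sketch below expresses each band as a percentage of the summed densitometric signal of the three isoforms. The band intensities used are hypothetical, not measured values from this study.

```python
# Hedged illustration of the glycoform-ratio arithmetic: each isoform is
# expressed as a percentage of the summed densitometric signal of the three
# bands. The intensities below are hypothetical, chosen so the diglycosylated
# band dominates, as is characteristic of vCJD PrPres.
def glycoform_ratio(di: float, mono: float, un: float) -> tuple:
    total = di + mono + un
    return tuple(round(100 * x / total, 1) for x in (di, mono, un))

print(glycoform_ratio(6600.0, 3600.0, 1800.0))  # -> (55.0, 30.0, 15.0)
```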
Total PrP and PrPres were best represented in the temporal cortex and cerebellum, and were present in the least amount in the occipital cortex (Fig. 2B and E). In addition, we confirmed the presence of a 17 kDa PrPres fragment matching the anchorless PrPres type 2 fragment previously described in sporadic CJD (sCJD) and vCJD [27], whereas the 12/13 kDa C-terminal fragment commonly present in sCJD was not detected (data not shown) [28]. These two findings are in agreement with the previously reported molecular characteristics of PrPres from vCJD [27]. To assess whether PrPres types 1 and 2 co-occurred in the brain, as previously reported [29], we digested the TH with a high concentration (32 U/ml) of PK and used high-resolution gels (15%, 15 cm long), a technique that allows for the detection of even small amounts of PrPres types 1 and 2 (as little as 3-5% of total PrPres) when they co-exist [30]. This procedure failed to demonstrate PrPres type 1 in the brain regions examined in this case (data not shown).

Detection of PrPres in Non-Nervous Tissues
PrPres could easily be detected in the dura mater, the pituitary and adrenal glands, and the uterus using direct blotting of the TH (Table 2 and Fig. 3A and B). Detection in the skin required doubling the TH concentration (equivalent to 4 mg of wet tissue), but this procedure failed to reveal PrPres in other organs (Fig. 3B, 3C, and data not shown). With NaPTA precipitation, we easily detected PrPres in the mesenteric lymph nodes, spleen, ileum, pancreas, and skin and, to a lesser extent, in the descending colon, liver, ovary, and kidney (Table 2 and Fig. 3D). The unequivocal identification of PrPres in the kidney required multiple sampling and an additional two-fold loading of the gel, but these procedures failed to reveal PrPres in the ascending colon (Fig. 3C and D). Compared with direct TH blotting, NaPTA preparations often revealed a slower electrophoretic migration of up to 0.5 kDa (Fig. 3C, 3D, and data not shown), as previously reported [14]. A significant over-representation of the diglycosylated form, with a ratio comparable to that of the brain, was apparently maintained in all the organs except the pituitary gland, the skin, and some of the TH preparations from the uterus, where the diglycosylated and monoglycosylated isoforms had nearly the same concentration (Table 2, Fig. 3). Generally, in the NaPTA preparations the unglycosylated form was less well represented than in the TH preparations (Table 2, Fig. 3, and data not shown).

Figure 2. Detection and characterization of PrPres and PK-sensitive PrP in brain. A: Immunoblot of total homogenates (TH), treated with proteinase K (PK), obtained from the frontal cortex of sCJDMM1 and sCJDMM2 (representing PrPres types 1 and 2, respectively) and the present case, showing the over-representation of the upper band (Diglyc.), containing the diglycosylated form, and the co-migration of the lowest band (Unglyc.), containing the unglycosylated form, with the corresponding band of sCJDMM2. B: Immunoblot of TH from the four regions of the cerebral cortex and the cerebellum, treated with PK as indicated. The cerebellar unglycosylated PrPres isoform generates a thicker and overall slightly faster-migrating band than the corresponding PrPres from the cerebral cortex.
C: A high-resolution immunoblot (15%, 15 cm long gel) confirms that the monoglycosylated and unglycosylated PrPres isoforms from the cerebellum have a faster electrophoretic mobility than the corresponding forms from the cerebral cortex, and shows that the cerebellar unglycosylated isoform resolves into three fragments, including a 19 kDa band corresponding to PrPres type 2 and two additional bands of slightly lower relative molecular weight (arrowheads); T: temporal; Cb: cerebellum. In A-C membranes were probed with the mAb 3F4. D and E: Ratios of the PrPres glycoforms (D) and of total PrP and PrPres (E) obtained from the same brain regions examined in panel B. Each bar represents the mean ± SD of three densitometric determinations on each of two tissue samples. doi:10.1371/journal.pone.0008765.g002

Notably, the antibody 2301, directed to the PrP C-terminal region, revealed that, in contrast to the findings in the brain, most of the PK-sensitive PrP in all non-neural tissues had an electrophoretic mobility of approximately 18 kDa (after deglycosylation), whereas the full-length isoform appeared to be under-represented (Fig. 4, data not shown). This finding was particularly prominent in the uterus but least evident in the pituitary gland (Fig. 4). Epitope mapping indicated that the 18 kDa fragment was truncated at the N-terminus, matching the characteristics of the fragment identified as C1 (Fig. 4) [31]. In addition to being PK-sensitive, C1 could easily be distinguished from the unglycosylated form of PrPres detectable in PK-untreated samples, named C2, which migrated electrophoretically at 19 kDa as in other organs (Fig. 4).

Discussion
Our study confirms the diagnosis of vCJD in the present case, based on the characteristics of the PrPres and the methionine homozygosity at codon 129 of the PrP gene, the latter feature being invariably present in vCJD [32]. However, we also observed two unusual features in this case. The first is the long disease duration of 32 months, which is more than twice the 14-month mean duration of the British cases of vCJD [3]; however, durations of up to 40 months after disease onset have been reported [3,33]. The second unusual feature is the absence of typical spongiform degeneration, which likely stemmed from the long duration of the disease: the long disease duration likely led to extensive loss of the neurons in which most of the vacuoles form, with ensuing astroglial scarring [34]. As previously reported [21], the BSE exposure most likely occurred between the early eighties, when the BSE epidemic emerged in the UK, and 1992, when the patient immigrated to the US. This assumption is consistent with an incubation period of 9 to 21 years, which correlates well with the median incubation period of 17 years estimated for the UK cases of vCJD [35]. The brain PrPres of the present case displayed the glycoform ratio and electrophoretic mobility characteristic of the PrPres associated with vCJD [5]. One exception is the cerebellum, where the monoglycosylated and unglycosylated PrPres isoforms migrated slightly faster than the PrPres from other brain regions and resolved into three bands. The variation in PrPres electrophoretic characteristics between the cerebellum and the cerebral cortex is not surprising, as it has also been observed in sCJD [36]; yet to our knowledge it has never been reported in vCJD. Finally, contrary to previous reports [29], PrPres type 1 did not co-occur with type 2.
This discrepancy might stem from our rigorous PrP digestion with PK and from the use of different antibodies, an approach that rules out the possibility that partially cleaved fragments derived from the incomplete digestion of PrPSc be misinterpreted as the type 1 fragment [30,37]. The major finding of the present study is the demonstration that PrPres is present in a number of non-CNS tissues and organs which previous studies had reported as free of PrPres (Tables 1 and 2) [14-19; P. Brown, unpublished data]. These tissues include the dura mater, skin, liver, kidney, pancreas, descending colon, uterus, and ovary (Table 2 and Fig. 3). The use of NaPTA, along with the long disease duration, may both have contributed to the undisputed detection of PrPres in these organs in this case. The glycoform ratio of the brain PrPres was not retained in every peripheral organ examined (Fig. 4): in the pituitary gland and the skin, the diglycosylated and monoglycosylated PrPres isoforms were about equally represented, so the diglycosylated isoform was not dominant. On the other hand, the electrophoretic mobility appeared to match that of the brain. Variations in the glycoform ratio could be assessed only on the TH because the glycoform ratio, as well as the electrophoretic mobility, is affected by NaPTA enrichment [14]. The presence of prions in the human dura mater is not surprising, because sCJD has been transmitted following transplantation of dura obtained from sCJD-affected cases [38]; however, to our knowledge this is the first immunoblot demonstration of PrPres in the dura mater in any prion disease. The detection of relatively large amounts of PrPres in the dura mater raises the possibility of contamination with brain tissue at autopsy. Although this possibility cannot be completely ruled out, extensive rinses in PBS were performed before homogenization in some experiments without any observed reduction in the amount of PrPres detected. Prion infectivity of kidney and liver has been demonstrated by bioassay in other human prion diseases [39], and PrPres has been observed in the kidney of scrapie-infected sheep [40]. The presence of PrPres has also been reported in the kidney, liver, and pancreas of scrapie-infected mice in association with lymphofollicular proliferation [41]. This last finding is relevant to the present case, in which multiple lymphocytic infiltrates with a follicular pattern were present in the kidney. However, contrary to that report, we observed no significant inflammatory reaction in any of the other tissues that contained PrPres. A puzzling finding of our study is the presence of PrPres, albeit in small amounts, in the kidney but not in the urinary bladder. This apparent discrepancy is relevant to the recent demonstrations of prion infectivity in the urine of animals carrying experimental or naturally occurring prion diseases [42-46].

Table 2 footnotes. NaPTA: PrPres searched for following enrichment with sodium phosphotungstate precipitation. 4: Amount of PrPres expressed as a percentage of the PrPres present in the frontal cortex. Glyc. ratio: glycoform ratio expressed as a percentage of the sum of the three isoforms, representing diglycosylated:monoglycosylated:unglycosylated forms. Data listed for tissues positive in both TH and NaPTA preparations were obtained from TH. 5: PrPres was previously reported in colon with no specification of the segment examined. doi:10.1371/journal.pone.0008765.t002
It would indicate that prion infectivity in urine is acquired from the kidney while the urinary bladder acts as a bystander. However, the amount of PrPres we observed in the kidney was minimal and might not have been sufficient to infect the urine and to propagate to the bladder in detectable amounts. Indeed, we failed to demonstrate PrPres in the urine in the present case even after hundred-fold urine concentration (data not shown). Obviously, more studies are needed to clarify this issue. The present study also demonstrates for the first time the presence of PrPres in the skin in a human prion disease. Previously, PrPres had been detected in the skin of animals with experimental or naturally occurring scrapie [47], as well as in the antler velvet of elk affected by CWD [48]. Furthermore, it is remarkable that we observed PrPres in the uterus and the ovary, a finding which implicates the reproductive system, thereby raising the possibility of maternal transmission of vCJD. Vertical transmissibility of prion infection has been demonstrated in transgenic mice infected with BSE [49]. The related literature on human prion diseases is very scanty. Pregnancies completed to delivery have been reported in sCJD, iatrogenic CJD, and vCJD [50,51]; however, transmission to the progeny has not been examined in detail or confirmed in any of these cases. The first detailed determination of PrPC and PrPres in reproductive and gestational tissues from a sCJD patient was carried out only recently [51]. Although that study failed to detect PrPres, it remarkably showed that, in uterine tissue obtained at biopsy, most of the PK-sensitive PrP is truncated at the N-terminus and matches the C-terminal PrPC fragment C1, which is generated during normal PrPC metabolism [51]. Similarly, in the present case we observed that the C1-like fragment was largely predominant over the full-length PrPC in the uterus and was easily digested by PK, but it was present along with a significant amount of characteristic vCJD PrPres (Fig. 4). Since the N-terminus of the PrPres type 2 associated with vCJD is at residues 92-99, the uterine PrPres must have formed from the full-length PrPC rather than from C1, the N-terminus of which is at residues 111-112 [31,52]. These findings raise the question of the origin of the PrPres found in the uterus, a question that is currently unanswered.

Figure 3. Detection of PrPres in non-nervous tissues. A: PrPres from dura mater and frontal cortex (1:24 dilution) from the present case is compared with PrPres of the frontal cortex from sCJDMM1 (type 1) and sCJDMM2 (type 2). B: Two film exposures of immunoblots from non-nervous tissues compared with that of the frontal cortex (1:10 dilution). The pituitary gland, adrenal gland, and uterus are clearly positive, while the bands in the remaining preparations are considered non-specific. C: PrPres from skin (double TH loading, equivalent to 4 mg of wet tissue) is barely detectable in TH (lane 3) compared with frontal cortex TH diluted 1:4 (lane 1) or 1:130 (lane 2). Skin PrPres is better detectable, along with the kidney PrPres, after sodium phosphotungstate (NaPTA) precipitation of PrPres (lanes 4 and 5), especially after long exposure (lanes 4L and 5L) (kidney TH loaded in double amount; probed with mAb 3F4).
D: PrPres from mesenteric lymph nodes and other visceral organs recovered following NaPTA precipitation and compared with TH from the frontal cortex (1:120 dilution) and sCJDMM1, following two film exposures. All organs but the ascending colon are positive. Of note, the unglycosylated isoform is under-represented in all NaPTA-precipitated samples compared with the TH preparations (see panel A, lanes 3 and 4, and panel B). A-D: Membranes were probed with the mAb 3F4. doi:10.1371/journal.pone.0008765.g003

A similar question may be raised for the urine: although prion infectivity has been demonstrated in animals by bioassay [42-46], the only form of PrP detected in urine under normal conditions, in animals and humans, is a fragment matching C1 [53,54; Notari et al., unpublished data]. All these considerations notwithstanding, the widespread presence of PrPres in visceral organs that we observed in the present case further reinforces the concerns over iatrogenic transmission of vCJD. These concerns are already compelling given the multiple reports of vCJD transmission by blood transfusion.

Figure 4. Characteristics of PK-sensitive PrP. Immunoblot analyses of total homogenate from brain, pituitary gland, and uterus are shown. The samples, with or without previous PK treatment, were deglycosylated with PNGase F. Membranes were probed with the antibodies 3F4 and 2301, as indicated. A: The brain has relatively large amounts of the full-length isoform and the PrPres C2 fragment, but the N-terminally truncated PrPC fragment (C1) is poorly represented. In addition, the brain preparation shows a previously unreported PK-sensitive fragment with a molecular weight of 25 kDa (arrow), detectable only in deglycosylated samples, of undetermined origin. B: The C1 fragment is relatively better represented in the pituitary gland, C: while it is overly abundant in the uterus. doi:10.1371/journal.pone.0008765.g004
Empowering Youth to Build BRIDGES: Youth Leadership in Suicide Prevention

Suicide is a prevalent health issue for youth, and understanding youth experiences is critical for the development of effective prevention strategies. Although youth perceptions regarding suicide are relatively well studied, there is a paucity of youth voices in the planning, design, facilitation, and implementation of suicide prevention research. This study examines youth perceptions of suicide prevention through a community-academic partnership with the Youth Council for Suicide Prevention (YCSP). Working together as co-researchers, the YCSP conducted a modified Group Level Assessment with over 200 youth to understand youth perspectives on suicide prevention. The findings were used by the council to inform outreach and prevention activities that directly affect YCSP members and their peers.

Suicide is a serious public health issue impacting communities and is now the second leading cause of death for children aged 12-17 in the United States (Centers for Disease Control [CDC], 2015). While researchers and practitioners continue to examine the reasons for the rise in adolescent suicide and to address them through evidence-based practices, the CDC (2015) has advocated for youth and communities to be included as essential drivers of suicide prevention. The evidence for a number of suicide prevention strategies is mixed (Miller et al., 2009; Robinson et al., 2013); however, those that employ peer-to-peer approaches seem to reap the most positive outcomes (Bunney et al., 2002; Randell et al., 2001; Stuart et al., 2003; Wyman et al., 2010). Though the aforementioned peer frameworks have been effective in decreasing suicidal ideation and enhancing healthier coping behaviors, youth were not involved in the planning, design, or implementation of the strategies, as recommended by the CDC (2015) and the World Health Organization (WHO, 1993). The mixed evidence for these strategies and the exclusion of youth from research and intervention development warrant the exploration of youth participation in suicide prevention. This study describes the involvement of youth in the planning, design, facilitation, and implementation of suicide prevention research, yielding contextualized findings on youth perceptions of suicide prevention that inform outreach.

Youth Perceptions of Suicide Prevention
Despite the lack of youth voices in suicide prevention efforts, researchers have investigated youth perceptions regarding barriers to suicide prevention. The scholarship reiterates the role of stigma, trust, and confidentiality as major obstacles to help-seeking (Curtis, 2010; Gilchrist & Sullivan, 2006; Thapa et al., 2015). When asked why young people may not seek help, 60% of youth in one study believed suicidal youth lack someone they can confide in (Gilchrist & Sullivan, 2006). While parents believed their children can confide in them regarding suicidality, most youth were concerned that parents would not be able to cope. Other studies have found that youth are more willing to seek help for another person but less willing to seek help for themselves due to stigma and a perceived need for self-reliance (Curtis, 2010). In response to these barriers, much of the literature proposes a need for comprehensive mental health education that teaches healthy coping behaviors and targets youth willingness to seek help through mental health services, peers, and resourceful adults (Curtis, 2010; Del Mauro & Jackson Williams, 2013; Gilchrist & Sullivan, 2006).
Youth Participatory Action Research
Though youth-led, health-related initiatives are recommended, there is still a scarcity of youth voices in suicide prevention research. Youth Participatory Action Research (YPAR) may help fill this gap by offering a unique action-oriented approach that engages youth as equitable co-researchers to investigate health and social problems that matter to them (Rodriguez & Brown, 2009). YPAR is considered an approach to research rooted in critical theoretical frameworks that encourages young people to critically analyze their social contexts and to identify and challenge the social injustices that impede their development (Cammarota & Fine, 2008; Foster-Fishman et al., 2010; Rodriguez & Brown, 2009). While research is one component of YPAR, youth who participate in YPAR are also involved in a pedagogical process through which they acquire knowledge about their social contexts and become empowered to take action to change their lives (Cammarota & Fine, 2008). This philosophical stance can be traced to Paulo Freire, a philosopher-practitioner who emphasized the role of critical reflection in social change (Maguire, 1987; McIntyre, 2002; Selener, 1997) and was concerned with empowering marginalized members of society to challenge social injustices through critical consciousness (Freire, 1970). In the context of suicide prevention, YPAR can be employed to engage young people in an iterative cycle of critical reflection and action around adolescent suicide in their communities, among other health issues germane to youth. In fact, as YPAR becomes more widespread across disciplines, it is touted for translating research on complex health issues into actionable plans for the reduction of health disparities (Minkler & Wallerstein, 2008). YPAR posits that youth are experts on issues affecting them and should be involved as equitable co-researchers throughout each phase of the research process (Israel et al., 2010). YPAR has been shown to contribute to health programs that better meet the needs of youth while building knowledge and skills that youth apply to their own lives, making more healthful decisions of their own (Suleiman et al., 2006). As an additional benefit, youth who become engaged in YPAR create a ripple effect, encouraging their own peers, parents, teachers, and medical practitioners to become involved in health and social issues like suicide prevention (Israel et al., 2010), a strategy that is vital to reducing the rates of teen suicide (CDC, 2015; Suleiman et al., 2006; U.S. Department of Health and Human Services, Office of the Surgeon General, and National Action Alliance for Suicide Prevention [USDHHS], 2012; WHO, 1993).

Youth Council for Suicide Prevention
The YPAR approach of involving young people as co-researchers may prove useful for the construction of tailored health interventions that are more relevant to youth (Lindquist-Grantz & Abraczinskas, 2018), thus potentially reducing adolescent suicide attempts. To address this need in the Greater Cincinnati region, Cincinnati Children's Hospital Medical Center (CCHMC) developed the Youth Council for Suicide Prevention (YCSP) in 2013. Since 2013, the YCSP has employed YPAR to engage young people in critical reflection and action around the issue of adolescent suicide in Cincinnati.
While CCHMC initially developed the council to inform suicide prevention in the emergency department, the YCSP has traditionally followed the principles of YPAR, whereby young people are involved in all decision-making matters as equitable partners of the council. The adults on the YCSP (two doctoral students) act as facilitators of the group, embedding activities, discussions, and trainings within council meetings to help YCSP members make decisions and execute project plans. YCSP members are continuously engaged in a critical reflection on their schools and communities, in which they identify priorities, people, and areas of importance and develop action plans to address adolescent suicide in Cincinnati. YCSP members repeat this iterative cycle of reflection and action each year to make a more profound impact on their schools and communities. Over time, council members demonstrate leadership by taking more responsibility for YCSP projects and sometimes bringing YCSP projects into their own schools, which may be evidence of positive youth development (Lerner, 2005) and critical consciousness (Freire, 1970). The YCSP comprises 28 youth from 11 different schools who design and participate in various research and action projects centered on suicide prevention through the YPAR process described above. All youth participate in council activities on a voluntary basis; many were motivated by their own personal experiences with suicide. Past projects have included: advising researchers on suicide screening in the emergency department through questionnaires and concept mapping; interviewing peers about effective suicide prevention communication strategies; surveying peers and parents about effective strategies for encouraging communication about suicide prevention; and presenting research and workshops at five regional high school conferences. The council accepts applications for new members twice a year with rolling membership to ensure a wide variety of youth voices. Applications are sent to guidance counselors across the Greater Cincinnati region, who are asked to share the application with two to three students. Current council members also share the application with their friends. To date, no applicants have been denied membership on the council, as the YCSP values young people with a variety of life experiences. However, the council seeks high schoolers with a passion for mental health, a commitment to attend meetings, and a strong letter of recommendation. Most applicants are Caucasian females who are high-achieving students, perhaps due to a number of factors, such as guidance counselor recommendations, interest in mental health, and our network of schools and stakeholders. The purpose of this study is to describe how the YPAR approach was used to involve young people as equitable co-researchers in the planning, design, facilitation, and implementation of suicide prevention research, and how the findings informed outreach activities pursued by the YCSP. The YCSP was granted a Non-Human Subjects Determination by the Cincinnati Children's Hospital Medical Center Institutional Review Board for research conducted by the YCSP or with YCSP members. In addition, all YCSP members have assented to the publication of this manuscript and the disclosure of the YCSP name for publication.
YCSP members encourage dissemination of our work and actively present themselves as leaders in suicide prevention, with their primary goal being to normalize the issue of mental illness and suicide through transparency and open conversations. The YCSP believes this transparency is beneficial for the advancement of youth leadership in suicide prevention and for other youth councils working to address social and health issues such as teenage suicide.

Method

Participants
The study participants included over 200 students who attended an annual local high school student leadership conference. Because the YCSP workshop had been the most attended session at past conferences, the council has been invited to facilitate workshops for the past five consecutive years. The students in the current study participated in a modified Group Level Assessment (GLA) (Vaughn & Lohmueller, 1998, 2014) facilitated by the YCSP as part of the conference workshop. While no demographic data are available about the students who attended our particular workshop, 570 high school students from 66 different schools and from a range of backgrounds (race, socio-economic status, ethnicity, gender, etc.) in the Greater Cincinnati area attended the conference to learn how to engage in service-learning. The workshop was well received by attendees, who then voted for the YCSP as one of two organizations to win a grant to conduct additional outreach activities throughout the region.

Data Collection and Analysis
The council co-designed a modified GLA in which over 200 high school students generated ideas about strategies for suicide prevention and ways to empower youth to take action in their own communities and schools. GLA is a validated qualitative and participatory method developed by Vaughn and Lohmueller (1998, 2014). Unlike traditional focus groups, which require more time for completion and are expert-driven, GLA is a method in which "timely and valid data are collaboratively generated and interactively evaluated with relevant stakeholders leading to the development of participant-driven data and relevant action plans" (Vaughn & Lohmueller, 2014, p. 336). The full GLA process involves seven steps. First, prompts probing suicide prevention were designed by the YCSP and written on adhesive flip charts placed around the room on the walls (see Table 1; two example prompts were "5. Teens would not attempt suicide if…" and "6. One way I can prevent suicide in my school or community is…"). The council decided on these prompts through a brainstorming session in which questions about suicide prevention were written in a shared Google document by all council members. Through a democratic discussion, the YCSP weighed the pros and cons of each prompt and discarded or modified prompts to suit the purposes of the workshop. Council members prioritized six questions about suicide prevention that would lend action-oriented information to help the YCSP form action plans. YCSP members guided the participants through an overview of the GLA and a short warm-up activity (Step 1: Climate Setting). YCSP members then instructed participants to respond to all six prompts using markers (Step 2: Generating); after all responses were written, they invited participants to take part in a gallery walk to read the responses of their peers (Step 3: Appreciating). The council completed the remaining GLA steps by co-analyzing and prioritizing the data, then generating youth-driven themes that emerged from the workshop attendees' responses.
The adult facilitators of the council asked each YCSP member to first reflect on the data individually in terms of what it meant to them and their peers (Step 4: Reflecting). After this, the facilitators split the council into 4 small groups of 5-6 youth and assigned them 1-2 flip charts. The facilitators asked the small groups to discuss the responses on their assigned flip charts and identify themes, which were described as patterns, similarities, differences, or anything that 'pops out' in the responses. The small groups then reported their themes to the larger group (Step 5: Understanding). One of the facilitators recorded these themes on flip chart paper for the larger group to see. The analysis process concluded with the large group prioritizing and consolidating the themes into similar categories through democratic discussion (Step 6: Selecting). Through discussion, the council decided ways in which themes were related to one another and should be consolidated, discarded, or developed into new themes. The council used the resulting themes to develop action plans and outreach activities (Step 7: Action). Through YPAR and through the GLA methodology, council members were involved in the project design, data collection, data analysis, interpretation, and dissemination of findings through action plans that took place in the community. YCSP members reviewed this manuscript, which was written by the facilitators of the council, in order to enhance the validity of the study findings. Findings and Discussion YCSP council members analyzed the responses from the conference using the GLA methodology. Their thematic analysis revealed 6 major themes: (1) Belonging; (2) Red-Light, (3) Isolation; (4) Dedication; (5) Guidance and Education; and (6) Stigma. The council strategically assembled these themes into an acronym -BRIDGES -which they believe represents what youth think about suicide prevention. While each letter of the acronym represents its own theme, together, BRIDGES means involving multiple groups of people, resources, and systems to combat suicide -a strategy that aligns with recommendations for suicide prevention (CDC, 2015;USDHHS, 2012;WHO, 1993). Each theme was developed through discussions that unfolded during the analysis and interpretation phases of this study. This section is organized by first discussing how each theme emerged during analysis with quotes from the conference as support. Then, each theme is discussed in the context of the literature on suicide prevention. Belonging The belonging theme emerged from a series of conversations about the responses the YCSP had collected from the original six prompts. Using the GLA analysis process, council members reported frequent and noteworthy responses from the flip charts. The prompt "teens would not commit suicide if…," in particular, sparked many conversations that led to the belonging theme. In response to this prompt, students wrote things like "people were accepting of all," "they felt loved," "they weren't called names," "they had a safe environment," and "they had a strong network of people for help." These quotes are just some of many that capture the desire for connectedness in the students' responses. The YCSP discussed the frequency of these responses and supported them with their own experiences as young people who desire feelings of belongingness in their schools and social circles. The YCSP then brainstormed words that would capture the essence of these responses and agreed that belongingness was appropriate. 
As the conference participants indicated through their responses to the GLA prompts, many teens feel like they do not "fit in" with peers within their schools and communities. Studies have found that teens are hesitant to seek help because they lack someone they can confide in (Gilchrist & Sullivan, 2006) or hold a perceived need to rely on themselves (Curtis, 2010). Using the BRIDGES concept, individuals, communities, and institutions can foster feelings of belongingness in teens. At the individual level, peers, parents, teachers, and others can make teens feel like they belong by making themselves available as supportive, unbiased resources. Although this is a difficult task, education may be one of the ways to achieve these goals. Mental health education in schools may help young people feel more comfortable discussing personal-emotional issues (Randell et al., 2001; Robinson et al., 2013; Tang et al., 2009). In addition, more consistent and comprehensive curriculum on mental health in schools may help adolescents better understand suicide and develop empathy for peers who may be struggling with mental health issues (Curtis, 2010; Del Mauro & Jackson Williams, 2013; Gilchrist & Sullivan, 2006). Some interventions have successfully targeted perceived burdensomeness and failed belongingness among students (Joiner, 2009). One program specifically described as a belongingness intervention involved mailing letters expressing concern to high-risk individuals after they refused ongoing treatment (Motto & Bostrom, 2001). When matched with another control group that did not receive letters, there was a demonstrable difference in suicide rate between the two groups after five years: specifically, those that received the letters experienced fewer deaths by suicide. Another study compared treatment without follow-up to a follow-up intervention that included continued communication between at-risk patients and clinical staff (Fleischmann et al., 2008). In this study, too, the interpersonal component resulted in more feelings of belongingness and fewer deaths from suicide. Schools can help promote belongingness and connection among students by utilizing these programs and by following up with at-risk students. When consistent in their actions, schools have the ability to develop a culture which makes students feel like they belong and are supported in their particular school communities. Cultivating this culture is one of the ways in which students may feel more comfortable confiding in someone about their suicidal thoughts, whether that be peers, teachers, guidance counselors, or other school personnel. Red-Light The red-light theme was formed by the council due to a large number of responses about warning signs, causes, outcomes, or means of suicide. For instance, in response to the prompt "____ is often ignored in suicide prevention for youth," students wrote things like "the obvious signs of suicidal thoughts," "the cause of these thoughts," "statements made by suicidal teens," "cries for help," "the gravity of the situation," and "warning signs." Further, in response to the prompt "The most important health/social/community issues for Cincinnati youth are…," students wrote responses such as "depression," "sadness," "drugs," "violence," "coping," "bullying," and "stress." Students at the conference also referenced "guns," "pills," and "cutting" in response to the prompt that asked what comes to mind when they hear the word 'suicide.' 
The YCSP discussed the commonalities between these responses and identified them as warning signs, causes, outcomes, or means of suicide. The YCSP chose the word "Red-Light" to represent these responses as one theme. As mentioned previously, suicide is the second leading cause of death for children aged 12-17 in the United States (CDC, 2015). The responses from the youth in this study augment these statistics, drawing attention to the influence of existing mental health issues on suicide outcomes. These findings remind school and health practitioners to be aware of the signs and take action to intervene when adolescents display behavioral changes, such as drug use, fighting, and behaviors associated with depression and sadness (Bae et al., 2005; Kann et al., 2016). Being knowledgeable about resources is another way of supporting teens who may be experiencing suicidal thoughts and mental health concerns. Within the BRIDGES framework, this means being responsible for teaching and learning the warning signs and most effective ways to intervene, but also ensuring that educational, medical, and governmental institutions provide the provisions for this to be possible. With over ten million primary and secondary school students in the United States requiring mental health intervention, it is becoming increasingly important for schools to connect students to mental health services (National Center for Health Statistics, 2011). Since students spend most of their young lives in school, schools should strive to offer more comprehensive strategies for suicide prevention that involve multiple approaches, including accessible mental health care, curriculum-based programming, and professional development for school personnel. Isolation Isolation was another theme that emerged through discussions about the flip chart responses. Across a number of flip charts, youth wrote responses such as "sadness," "loneliness," "suicidal feelings," and "feeling as if there is nobody to confide in." There were also comments that said teens would not commit suicide if "they had a friend to turn to," and "had someone that listened to them." The council noted the frequency of these comments and connected them to their own experiences with feeling alone or knowing friends or family who isolated themselves. It is through these conversations that the council developed the "Isolation" theme. Social isolation is a major correlate of suicide alongside depression and loss of friends (Greydanus et al., 2010). Research finds that teens are less willing to seek help for themselves due to stigma (Curtis, 2010). Isolation may be reduced by utilizing programs that focus on building protective behaviors and engaging youth through peer models (Bunney et al., 2002; Randell et al., 2001; Stuart et al., 2003; Wyman et al., 2010). These include curriculum-based programs in schools, skill training programs, and research frameworks like YPAR which engage youth as more than passive participants. Following the tenets of BRIDGES may be another helpful model for reducing isolation: fostering environments which encourage belongingness, making more opportunities for education, and providing and being knowledgeable about resources which support students in times of need. Changing the ways in which we respond to mental illness and suicide may break down stigma and encourage youth to reach out to friends, family, and other adults with personal-emotional issues. 
Dedication The Dedication theme emerged due to the number of names that were listed in response to the prompt "When I hear the word suicide, I think of…" Many students at the conference wrote down actual names of people who died by suicide. Aside from names, students wrote about people they knew, such as friends, sisters, uncles, and cousins, who had attempted or died by suicide. The council discussed the weight of these responses and the commonality of suicide. As such, the council decided to create a "Dedication" theme to represent the importance of remembering those who have suffered from mental illness, as a way to raise awareness and showcase the impact of suicide on surrounding family and friends. According to the council, reading these responses was an emotional experience. The Dedication theme is an especially rich aspect of the data, because it shows that suicide is a personal issue to youth: with suicide being the second leading cause of death for adolescents (CDC, 2015), many teens know peers who have died by suicide. While there seems to be no literature on the impact of dedications in suicide prevention, the council discussed the power of making dedications to raise awareness about suicide. Rather than hearing about a teen from across the country who died by suicide, they believed it is much more poignant to learn that someone you knew was struggling with suicidal thoughts. Dedications may also be an opportunity for schools to raise awareness about suicide and offer support to students who may be experiencing similar difficulties. The youth on the council mentioned that some of their schools could have provided more support in the way of helping students grieve their peers after a suicide. Therefore, this can be viewed as an important moment for school personnel to target at-risk students who may be especially vulnerable after the loss of a friend. Guidance and Education This theme comprises resources that would help prevent suicide, such as people, places, and sources of education. For example, in response to the prompts "I would turn to ___ if I were feeling suicidal," and "Teens would not commit suicide if…," students wrote responses such as "if they had support," and "if there was better mental health education." The students also identified people or places who they would turn to, such as "teachers," "church," "god," "doctors," "parents," and "friends." Students also wrote about the need for a "safe environment" and "a strong network of people for help." Thus, the council developed the theme "Guidance and Education" to represent all of the resources necessary for suicide prevention. The essence of the Guidance and Education theme is consistent with the messages reiterated by the CDC (2015), USDHHS (2012), and WHO (1993), which recommend that multiple groups and strategies be utilized in suicide prevention efforts. These findings validate the BRIDGES philosophy which emphasizes engaging multiple stakeholders, systems, and approaches to construct comprehensive prevention strategies. With youth spending most of their time at school in particular, school personnel are in a unique position to provide evidence-based methods for teaching about mental health and suicide. Evidence-based programs involving multiple approaches will help students better understand mental illness and suicidal thoughts in themselves, as well as how to intervene when their peers are experiencing the same issues. 
In addition, acquiring more knowledge about suicide will reduce the stigma and encourage more conversation about the topic. As stated earlier, over ten million students require mental health services (National Center for Health Statistics, 2011). This makes it paramount for schools to focus on better integration with mental health care through school-community collaborations (Adelman & Taylor, 2000). Such collaborations make services more accessible for underserved, hard to reach students while building capacity for diverse psychosocial contexts seen in schools. For example, one study redesigned mental health services in a school to "support children's learning within communities of concentrated urban poverty," finding that the communitytailored services led to improved behavior compared to the usual mental health services (Atkins et al., 2015, p. 848). The responsibility of schools as sources of mental health education and care is an essential aspect of the BRIDGES framework. Research shows that when communities become involved with school-based services, they provide support networks, learn and teach coping skills, and participate in governance around services (Adelman & Taylor, 2000;Ballard et al., 2013). Whether or not schools are capable of providing services within their own institutions, they should move toward a "nexus point" model in which they connect their unique communities to mental health services (Haddad et al., 2017). More collaboration between the community and school may be an avenue for constructing tailored interventions that are relevant to student experiences. Stigma The stigma theme was created from a number of responses alluding to the stigma associated with speaking about mental health. For instance, some students wrote that teens would not commit suicide if "people didn't judge you for who you are," and if "[people would] listen more [and] judge less." Some people also wrote that "if people were more open minded," and "if they didn't feel that they weren't accepted," teens would not commit suicide. The council discussed these themes and agreed that they were associated with the stigma around speaking about emotional difficulties and mental health issues. As such, the council grouped these responses into the "Stigma" theme. The literature repeatedly identifies stigma, trust, and confidentiality as barriers to suicide prevention (Curtis, 2010;Gilchrist & Sullivan, 2006;Thapa et al., 2015). Peer-to-peer mental health education may be useful in reducing stigma or negative stereotypes while increasing knowledge about suicide among youth (Curtis, 2010;Del Mauro & Jackson Williams, 2013;Gilchrist & Sullivan, 2006). The YCSP members discussed the importance of interventions and activities that target small groups of students on issues related to mental health. Future interventions should focus on targeting small groups of peers who may relate to the experiences of other youth, opening up conversations that are more comfortable and normalized. In addition, the YCSP has recommended social media as a prevention tool. Although cyber bullying plays a part in mental health issues for teens, youth believe that social media can still be used to raise awareness and reduce isolation in teens. Local Implementation of BRIDGES As the final step in analysis, students on the council were engaged in a conversation about action plans based on the data gathered from the workshop. 
The youth discussed the overall meaning of BRIDGES and how the characteristics of the acronym should apply to their future outreach activities. With the importance of schools and conversations occurring in small groups, they discussed developing presentations, workshops, a video series, and other outreach events modeled after their BRIDGES philosophy. Social media was also discussed as an avenue to raise awareness and promote their outreach activities, particularly during National Children's Mental Health Awareness Month. One idea that was actually executed, for instance, was to use each letter of the acronym BRIDGES for each day of posts on their social media profiles (Twitter, Instagram, and Snapchat) while engaging youth in contests and raffle drawings that led up to a fundraising and awareness gala. The gala was organized by the YCSP in spring 2017 and invited teens, parents, school personnel, researchers, and physicians to learn about the research projects of the YCSP, hear talks from youth and local experts in mental health, participate and vote in a youth art gallery, and network with other stakeholders passionate about ending suicide. The components of this event and future outreach activities reflect the BRIDGES philosophy of involving multiple groups of stakeholders and strategies to combat suicide. Limitations and Future Directions Although this study presents a host of strengths for researchers interested in partnering with youth to enhance the relevance, rigor, and reach of their scholarship (Balazs & Morello-Frosch, 2013), there are some limitations worth noting. First, the majority of youth on the YCSP are Caucasian females who are high achieving students. Ensuring equal representation of youth from a wide spectrum of demographic backgrounds is imperative to constructing generalizable knowledge and relevant interventions for all adolescents. The research team is currently engaging in discussions with community stakeholders to understand how to best market council membership to marginalized populations of students. Another potential limitation is that council members did not determine the focus of the council, which was developed by the hospital in response to the growing need for suicide prevention in Cincinnati. Traditionally, in YPAR, young people determine the problem of focus at the beginning stages of the partnership (Cammarota & Fine, 2008). Without involving young people in defining the problem, we can only rely on the assumption that young people volunteered for this cause because they truly see teen suicide as a problem worth addressing. Additionally, participants engaging in the analysis phase of the project could enhance the effect of the present study. While participants in this study were involved in data generation, the time constraints of the workshop did not allow for participants to become involved in analysis, which is typically a component of the GLA method. As a result, themes produced by the YCSP may not necessarily represent the views of the participants. Still, by presenting at the workshop, the research team elicited the responses from a large number of youth. The data was also analyzed by youth from the YCSP, generating themes that are youth-driven rather than researcherdriven. Additionally, students who attended this conference may have different perspectives on suicide prevention than other young people in area schools and in the community. These biases in our sample should be considered when developing suicide prevention strategies. 
Engaging youth as co-researchers allows for the co-construction of tailored, contextualized prevention strategies that are relevant to youth perspectives. This study demonstrates YPAR as a potentially effective strategy for suicide prevention because it (1) involves peers working together toward solving a common health issue; (2) involves the community in implementing social change; (3) requires that youth lead the efforts; (4) has been used before to improve health programs to better meet the needs of youth; (5) increases knowledge and skill in youth; and (6) provides relevancy and context to suicide prevention strategies. Future research should aim to test the effectiveness of suicide prevention strategies which are designed and implemented using a YPAR approach. Conclusion The themes identified by the council are consistent with much of the literature, which proposes a need for multiple groups to become involved in suicide prevention efforts. Furthermore, prevention strategies which demonstrate the most success use multiple approaches that do not just target youth, but also their peers and the systems that affect their lives. YPAR is an effective way to accomplish these goals because it engages youth as active participants of their own change, making YPAR particularly useful for suicide prevention. Through conducting research in this particular study, the students on the council were able to make meaning of what other youth think about suicide prevention. They not only identified salient themes but combined each theme together to tell a story -a story about building BRIDGES between all of the stakeholders that are essential for suicide prevention. BRIDGES provides a framework to make youth feel like they belong, to learn about the warning signs, to combat isolation, to incorporate personal dedications when possible, to offer evidence-based guidance and education, and to work to reduce the stigma that surrounds the topic of suicide. Finally, it is a useful guiding framework for organizing suicide prevention activities that respond to contextualized youth needs.
2020-07-09T09:14:24.887Z
2020-07-06T00:00:00.000
{ "year": 2020, "sha1": "c6464dc49cd76d37dbe1ece3d5a56791d3ac8007", "oa_license": "CCBY", "oa_url": "https://collaborations.miami.edu/articles/10.33596/coll.41/galley/89/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6069cdfbd440110c754924354921f43a088d39e8", "s2fieldsofstudy": [ "Psychology", "Education", "Sociology" ], "extfieldsofstudy": [ "Psychology" ] }
58556104
pes2o/s2orc
v3-fos-license
On-table extubation in neonates undergoing anoplasty: an experience of anesthetic management on the concept of fast-tracking anesthesia Abstract Fast-track anesthesia (FTA) is difficult to achieve in neonates due to immature organ function and high rates of perioperative events. As a high-risk population, neonates require prolonged postoperative mechanical ventilation, which may lead to contradictions in cases where neonatal intensive care unit resources and ventilator facilities are limited. The choice of anesthesia strategy and anesthetic can help achieve rapid postoperative rehabilitation and save hospitalization costs. The authors describe their experience with maintaining spontaneous breathing in neonates undergoing anoplasty without opioids or muscle relaxants. This retrospective chart review included neonates who underwent anoplasty in the authors’ institution. Twelve neonates who underwent the procedure with atomized 5% lidocaine topical anesthesia around the glottis, combined with sevoflurane sedation and caudal anesthesia facilitating tracheal intubation without opioid and muscle relaxant comprised the FTA group. Ten neonates who underwent the intervention with routine anesthesia techniques in the same period comprised the control group (group C). The surgical success rate in the FTA group was 91.7%. There were no severe complications related to lidocaine administered around the glottis. Extubation time was significantly shorter in the FTA group than in group C (4 [2.5, 5.2] vs 81.5 [60.6, 96.8], respectively; P < .01). The duration of stay in the surgical intensive care unit (SICU) was longer in group C than in the FTA group (2 [2.0, 2.6] vs 1 [0.9, 2.0], respectively; P = .006,). A statistically significant lower rate of extubation-cough was noted after endotracheal tube removal in the FTA group compared with group C (18% vs 90%, respectively; P < .001). There was no difference in the duration of anesthesia or hospitalization costs between the 2 groups. No neonates required re-intubation after extubation. On-table extubation via 5% atomized lidocaine topical anesthesia around the glottis for tracheal intubation combined with sevoflurane sedation and caudal anesthesia without opioid and muscle relaxant was feasible in neonates undergoing anoplasty. This reduced time to extubation, length of SICU stay and saved resources. A similar trend in cost savings was also found; nevertheless, more studies are needed to confirm these results. Introduction Anorectal malformation has an incidence 1/5000 [1] and is a common congenital digestive tract malformation that always requires surgical intervention involving general anesthesia. Anesthesiologists have encountered several challenges in anesthesia management for neonates who undergo surgery under general anesthesia due to the immaturity of various organs, poor tolerance to anesthetic agents, and low oxygen reserve. In routine work on neonates, general anesthesia with multiple drugs has always been used during surgical repair, leading to long extubation times and high medical expenses. As a high-risk population, neonates often require prolonged postoperative mechanical ventilation, which may lead to contradictions in cases where neonatal intensive care unit resources and ventilator facilities are limited. Furthermore, prolonged exposure to inhalation anesthetics has been shown to be toxic in research involving neonatal rodents. [2] Nevertheless, it is unacceptable to perform surgery in neonates without the use of anesthetics. 
The notion that newborns cannot feel pain has long been abandoned. Further research demonstrated that repeated pain stimulation in neonates led to behavioral difficulties later in life. [3] Intravenous (IV) opioids have been a good choice to maintain adequate intra- or postoperative analgesia. However, due to immature liver and kidney function, opioid clearance rates are decreased, resulting in delayed awakening and respiratory depression, which are not safe for neonates. [4] The purpose of fast-track anesthesia (FTA) is to remove the endotracheal tube early, decrease respiratory complications, shorten the length of hospital stay, reduce mortality and costs, and return to productivity early. The appropriate choice of anesthesia maneuvers and anesthetics can facilitate rapid postoperative rehabilitation. Some clinical evidence indicated that rapid recovery after surgery was closely related to anesthesia. [5,6] However, FTA has been difficult to achieve in neonates because of physical differences from adults. Therefore, it is worth exploring methods to minimize intra-operative anesthetic exposure, remove the endotracheal tube early, accelerate recovery, and decrease medical costs in neonates in lower-resource settings. There is no definitive evidence supporting the best maneuver for anesthesia in neonates. In our institution, different approaches are chosen by different anesthesiologists. Caudal anesthesia is a well-established and valuable adjunct to general anesthesia in pediatric patients undergoing perineal surgical intervention. We describe our experience with maintaining spontaneous breathing via 5% atomized lidocaine topical anesthesia around the glottis for tracheal intubation, combined with sevoflurane sedation and caudal block without opioid or muscle relaxant, in neonates undergoing anoplasty. Methods This retrospective chart review was approved by the Ethics Committee of Chengdu Women's and Children's Central Hospital (Chengdu, China). Given the retrospective nature of the study and the use of anonymized patient data, requirements for informed consent were waived. This study enrolled neonates with anorectal malformation who were scheduled for anoplasty, either elective or emergency. Neonatal surgical procedures in the FTA group were performed under combined general and caudal anesthesia in a referral hospital during the period from May 2016 to May 2017. The airway was locally anesthetized with 5% atomized lidocaine to facilitate intubation and maintain spontaneous breathing. In addition to the FTA group, neonates who were diagnosed with anorectal malformations and underwent surgical intervention (ie, anoplasty) in the same period were designated as a control group (group C). The perioperative management remained unchanged over the same time period. Eligibility criteria included neonates with a gestational age >35 weeks who were scheduled for either elective or emergency anoplasty. Neonates with assisted ventilation, central nervous system (CNS) disorders, coexisting spinal issues, congenital heart diseases, or coagulation disorders were excluded. All cases were performed by attending anesthetists with ≥5 years' experience in pediatric anesthesia; as a rule in the authors' institution, only attending anesthetists with ≥5 years' experience in pediatric anesthesia may perform anesthesia in neonates. The protocol for the management of the FTA group was as follows. IV access and a naso-gastric tube were established in the neonatal ward. 
In the operation room, electrocardiograph (ECG), non-invasive blood pressure, end-tidal carbon dioxide (ETCO 2 ) and pulse oximetry were monitored. Anesthesia was induced using 4% sevoflurane in 100% oxygen (2 L/min) initially via mask. Spontaneous breathing was closely monitored. If the respiratory rate decreased to <20 breaths/min, the concentration of sevoflurane was rapidly reduced to 3%. Eight minutes later, topical anesthesia with 5% atomized lidocaine (Lidocaine Aerosol, China) (4.5 mg [1 spray]) was performed under laryngoscope guidance around glottis, followed by continuous inhalation of 4% sevoflurane for 2 min. No opioid or muscle relaxant was administered. Subsequently, a tube was inserted into the glottis. All of the caudal blocks were performed after induction of anesthesia. In the FTA group, the caudal epidural space was identified using a 22-gauge hypodermic needle under ultrasound guidance in the left lateral position (Fig. 1). Lidocaine (1 mL/kg [0.5%]) was injected after confirmation in epidural space via ultrasonography. All caudal blocks were performed by the attending anesthetist. Anesthesia was maintained with sevoflurane and was discontinued 3 min before the conclusion of surgery. Procedures in group C were performed using traditional general anesthesia with or without caudal block. For safety and to improve the efficiency of the operation, removal endotracheal tube was performed in SICU. Patients in both groups received postoperative ventilatory support until fully awake. Primary outcomes included extubation time, length of SICU stay, and hospitalization costs. Secondary outcomes included duration of anesthesia, surgical success rate, and perioperative complications. Extubation time was defined as the end of the skin suture to tracheal tube removal. Indicators of extubation in neonates included fully awake with limb movement, with regular breathing and tidal volume 5-7 mL/kg. Perioperative data were retrieved from the neonate database. Duration of anesthesia was defined as the time from the first carbon dioxide reading to the neonates' leaving the operation room. Surgical success rate was defined as the proportion of neonates who had completed the surgery without additional anesthetics or anesthetic techniques. Perioperative complications included: bradycardia or tachycardia (heart rate <100 beats/min or >180 beats/min); hypotension or hypertension (blood pressure <20% or >20% baseline value); respiratory depression (respiratory rate <20 breaths/min); reintubation: neonates requiring re-endotracheal intubation in the first 24 h postoperatively; respiratory-related events, such as perianesthesia cough, laryngospasm, or bronchospasm. All the events were recorded in detail and managed appropriately. All data were analyzed using SPSS version 13.0 (IBM Corporation, Chicago, IL). The student t test was used to compare normally distributed data. The chi-squared test was used for categorical data. P < .05 was considered to be statistically significant. Results A total of 22 neonates were enrolled in the present retrospective pilot study, including 12 who underwent the new anesthesia technique (FTA group) and 10 in the control group (group C). In the FTA group, 11 neonates successfully completed the surgery using this new technique alone, and only 1 neonate required additional fentanyl (5 mg/kg) during the surgery due to a change in the surgical technique. Overall, the surgical success rate using the new technique alone was 91.7%. 
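The between-group comparisons described above (Student t test for normally distributed data, chi-squared test for categorical data, with P < .05 considered significant) can be reproduced with standard statistical software. The following is a minimal, hedged sketch in Python with scipy; the per-patient values are invented purely for illustration (only group-level summaries are reported in this study), and the inferred extubation-cough counts are assumptions rather than study data.

```python
# Sketch of the analyses named in the Methods (Student t test, chi-squared test).
# All per-patient values below are ILLUSTRATIVE placeholders, not the study data.
import numpy as np
from scipy import stats

# Hypothetical extubation times (minutes assumed) for 12 FTA and 10 control neonates
fta_extubation = np.array([2.5, 3.0, 3.2, 3.5, 4.0, 4.0, 4.2, 4.5, 4.8, 5.0, 5.2, 6.0])
ctrl_extubation = np.array([55.0, 60.0, 65.0, 75.0, 80.0, 83.0, 88.0, 92.0, 96.0, 100.0])

# Student t test as named in the Methods (equal_var=False would give Welch's variant)
t_stat, p_val = stats.ttest_ind(fta_extubation, ctrl_extubation)
print(f"Extubation time: t = {t_stat:.2f}, P = {p_val:.4g}")

# Extubation-cough 2x2 table; counts inferred from the reported 18% vs 90% (assumed)
#                  cough  no cough
table = np.array([[2,      9],     # FTA group
                  [9,      1]])    # control group
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"Extubation-cough: chi2 = {chi2:.2f}, P = {p_chi:.4g}")

# With cell counts this small, an exact test is often preferred:
odds_ratio, p_exact = stats.fisher_exact(table)
print(f"Fisher exact P = {p_exact:.4g}")
```

Note that, because the paper reports medians with interquartile ranges for several outcomes, a rank-based test such as the Mann-Whitney U (scipy.stats.mannwhitneyu) would be a natural alternative to the t test for those variables; the sketch simply mirrors the tests the authors name.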
Demographic characteristics were comparable between the 2 groups (Table 1: characteristics and operative data; values are presented as mean (SD), median (IQR) or number (proportion)), including gestational weeks, duration of surgery, and the number of neonates who were diagnosed with lung disease before surgery. In the FTA group, all neonates who received 5% atomized lidocaine around the glottis underwent general anesthesia combined with caudal block, whereas 4 patients in group C received general anesthesia alone. One common side effect of sevoflurane in neonates is respiratory inhibition, especially at high concentrations. Nevertheless, induction using 4% sevoflurane with 5% atomized lidocaine around the glottis resulted in no respiratory inhibition or intubation-cough in the spontaneous breathing group. Because the choice of anesthetic had a significant influence on the incidence of intubation-cough and respiratory inhibition, we focused on the spontaneous breathing group only. No patient developed a significantly abnormal heart rate or blood pressure, except 1 neonate who experienced transient hypotension after caudal block and recovered after IV fluid administration (10 mL/kg). No neonates required re-intubation after being extubated. Discussion In our study, 5% atomized lidocaine topical anesthesia administered around the glottis for tracheal intubation, combined with sevoflurane sedation and caudal block without opioid or muscle relaxant in neonates, failed to reduce hospitalization costs or the duration of anesthesia. However, the technique reduced extubation time, length of SICU stay, and the incidence of extubation-cough, without severe complications. We described an alternative anesthesia technique demonstrating that on-table extubation in neonates was fast, feasible and safe in a low-resource setting. Sevoflurane induction, which can be performed easily and safely, is tolerated by the majority of children; however, it produces dose-related respiratory depression. Some studies have reported that sevoflurane achieved satisfactory intubation conditions without adjuvants. [7,8] However, other clinical studies reported that high concentrations of sevoflurane could lead to epileptiform electroencephalogram activity and bradycardia. [9,10] There has been no evidence to demonstrate that low concentrations of sevoflurane are unsafe; however, excessively low levels are insufficient to satisfy intubation or surgical conditions. Opioids and neuromuscular relaxants are therefore usually required as substitutes to facilitate tracheal intubation and to avoid high inspired concentrations of sevoflurane. Fentanyl is the most frequently used opioid analgesic in infants and children, and helps patients maintain stable hemodynamics and avoid stress responses. [11] The required dose of fentanyl in anesthesia is, however, highly variable, with a half-life ranging from 317 min to 1062 min. [12] The half-life is prolonged by a factor of 1.5 to 3 times in neonates with increased intra-abdominal pressure. Fentanyl has a high hepatic extraction ratio, and clearance relies, in large part, on hepatic blood flow. In addition, hepatic perfusion is diminished in neonates with anal atresia due to increased intra-abdominal pressure, which could slow fentanyl clearance. [12] Increases in plasma fentanyl levels have been associated with significantly prolonged ventilation support, [12] which was similar to what was observed in our study. 
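To make the clearance argument concrete, a rough back-of-the-envelope calculation, assuming simple first-order elimination (a standard pharmacokinetic approximation, not an analysis performed in this study), shows how strongly the quoted half-life range affects the fraction of a fentanyl dose still present in the hours after administration.

```python
# Illustrative only: fraction of an IV dose remaining after t minutes under
# first-order elimination is 0.5 ** (t / t_half). Half-lives are those quoted
# above (317-1062 min), plus an assumed 3x-prolonged case.
def fraction_remaining(t_min: float, t_half_min: float) -> float:
    """Fraction of drug remaining t_min after a dose, first-order kinetics."""
    return 0.5 ** (t_min / t_half_min)

for t_half in (317.0, 1062.0, 3 * 1062.0):
    left = fraction_remaining(120.0, t_half)
    print(f"half-life {t_half:6.0f} min -> {left:.0%} of the dose remains after 2 h")
```

Under these assumptions, a neonate at the long end of the range still carries most of the dose two hours after administration, which is consistent with the prolonged ventilatory support discussed above.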
In a previous study, Silva et al [13] reported that fentanyl was associated with respiratory depression requiring a rescue intervention in adults. In our routine practice, breathing support after surgery in neonates is required until complete anesthetic clearance. On the other hand, short-term exposure to morphine has been reported to promote neuronal apoptosis, [14] and may impair cognitive functioning in adult rodents. [15] To date, however, no clinical study has confirmed that single-short term exposure to fentanyl leads to neurotoxicity and learning deficits in neonates; nevertheless, surgeons and clinicians should remain aware of this problem. Infants are more sensitive to muscle relaxants, to which their responses vary to a greater degree. [16] In pediatric anesthesia, muscle relaxants have most commonly been used to facilitate intubation and improve surgical conditions. For these purposes, an unnecessary high dose of muscle relaxant may be administered to achieve satisfactory conditions, which may lead to a prolonged effect and influence the reversal time. [17] The extension of muscle relaxant reversal time results in prolonged mechanical ventilation time, which undoubtedly results in a waste of financial and human resources. Considering these issues, we were prompted to find an alternative safe and fast anesthesia technique for neonates undergoing anoplasty, which would facilitate early extubation, reduce length of SICU stay, and save hospitalization costs. The concept of an anesthesia technique for neonates undergoing anoplasty should address rapid wake-up while satisfying analgesic requirements. Topical lidocaine anesthesia administered around the glottis in tracheal intubation, combined with sevoflurane sedation and caudal block without opioid and muscle relaxant, may be the best choice. Spontaneous breathing should last the entire procedure, and intubation was an option and remedy in the event of an accident in neonates. Intratracheal application of topical lidocaine anesthesia has been widely used in children to facilitate intubation or attenuate intubation stress responses, [18] which was an adequate solution to the problem of intubation analgesia. A previous study reported that 6.5% topical lidocaine spray in neonatal intubation did not induce any considerable side effects. [19] Current evidence supports the local application of lidocaine <4 mg/kg to be safe in pediatric patients. [20] Atomized lidocaine has several advantages over non-atomized liquid, including higher plasma concentration and an increased surface area because of the smaller atomized particles when compared with conventional lidocaine spray. [21] Hence, it was necessary to use atomized lidocaine around the glottis to inhibit the intubation response while reducing the dosage of lidocaine as much as possible. In our study, we demonstrated that administering 5% topical lidocaine anesthesia around glottis with sevoflurane sedation could facilitate complete tracheal intubation without any side effects. A previous study reported that precise caudal block with realtime ultrasound guidance, which reduced opioid consumption, was used as a supplement to general anesthesia for neonates. [22] The duration of anoplasty in our institution is approximately 1 h; therefore, a single injection of lidocaine-maintained analgesia, except for 1 case in which the surgery changed and additional fentanyl was administered. 
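As a simple check on the dose margin implied by the figures quoted above (one 4.5 mg actuation of the aerosol versus the cited ~4 mg/kg topical limit), the arithmetic below considers the topical component alone and uses an assumed 3 kg body weight; the weight is illustrative and not a value reported in the study.

```python
# Hedged dose-margin illustration for the topical spray only; the 3 kg weight is an assumption.
spray_dose_mg = 4.5            # mg delivered per actuation (from the Methods)
assumed_weight_kg = 3.0        # illustrative neonatal weight, not study data
topical_limit_mg_per_kg = 4.0  # safety threshold cited in the Discussion

dose_mg_per_kg = spray_dose_mg / assumed_weight_kg
print(f"Topical lidocaine: {dose_mg_per_kg:.1f} mg/kg, "
      f"about {dose_mg_per_kg / topical_limit_mg_per_kg:.0%} of the cited limit")
```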
In developing countries, most neonates have no health insurance or have health insurance that does not cover inpatient care of newborns. [23] Postoperative continuous mechanical ventilation and high-level neonatal care in the SICU comprise the major portion of hospitalization costs. Daily intensive care unit costs in Indian private hospitals are similar to those in North America. [24] Although there were no data from China, the costs should be similarly exorbitant. The burden of medical expenses, which may lead to catastrophic bankruptcy, can only be borne by the family. In developing countries, insufficient medical resources, a lack of medical workers, and the shortage of equipped facilities require us to improve the efficiency of treating more patients. Efficiency improvement could be achieved by decreasing operation room turnover time and increasing operation room productivity and the utilization rate of the SICU. In a previous cardiac surgery study, on-table extubation helped curtail costs and shorten hospital length of stay. [25] In our study, we identified a significant reduction of SICU stay with early extubation. This result was similar to the observation from a previous study by Beamer. [25] In addition, we also noted a decrease in hospitalization costs, but the difference was not significant because of our limited sample size. The study has some limitations. To date, the safety and efficacy of this technique in neonates undergoing anoplasty have been demonstrated in our institution. However, the sample size was too small to generalize our results across all facilities. We are currently planning to conduct a randomized controlled trial, which has been registered in the Chinese Clinical Trial Registry (ChiCTR-INR-17012576), in which we will compare 5% atomized lidocaine tracheal anesthesia combined with sevoflurane sedation and caudal block with routine general anesthesia plus caudal block, to provide more convincing evidence confirming the advantages as well as the long-term prognosis. To optimize postoperative pain management, ropivacaine will be used as the local anesthetic for its long duration of sensory analgesia. When the study is completed, the raw data will be uploaded to a repository via the website http://www.chictr.org.cn/index.aspx within 6 months.
2019-01-22T22:31:14.870Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "d7c8c385b3b38adc254b060403f686242ff2ab2f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/md.0000000000014098", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d7c8c385b3b38adc254b060403f686242ff2ab2f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233599977
pes2o/s2orc
v3-fos-license
LEARNING MODEL DURING LONG DISTANCE LEARNING Abstrak: Online Distance Learning (ODL) is carried out because of the Covid-19 pandemic conditions which require people to carry out social distancing to reduce the risk of transmission. However, in the learning process, it still requires a learning model as a learning guide to achieve learning objectives so that students can achieve the desired competencies. In the 2013 curriculum, there are 3 learning models offered, namely Project Based Learning, Problem Based Learning and Discovery Learning. This research is a literature study that uses articles as data sources which are then compared as the author's ideas in terms of learning models in the ODL period. The results of this study indicate that the learning model offered in the 2013 Curriculum can still be used in ODL. However, it still has to be equipped with supporting tools and media so that the learning process can be carried out smoothly according to what has been planned. INTRODUCTION Berdiati, 2014). Thus, the learning model is a conceptual framework in which there are systematic procedures for organizing student learning experiences in such a way as to achieve learning goals. The learning model is one of the elements commonly used in the learning process. This is because a teacher needs a model to make it easier for them to achieve their learning goals. Gunter et al (1990) define the learning model as the steps of a procedure used to achieve learning objectives. Joyce & Weil (1980) define the learning model as a conceptual framework that is used as a guide in carrying out learning. According to Trianto (2015), the learning model is a plan or a pattern that is used as a guide in carrying out classroom learning or learning in tutorials. The learning model is a conceptual framework that describes a systematic procedure for organizing a learning system to achieve certain learning objectives and serves as a guide for learning designers and teachers in planning and carrying out learning activities (Saefuddin & In the 2013 curriculum, there are three learning models that support teaching and learning activities in schools. As for this matter is based on the Regulation of the Minister of Education issued regarding the implementation of learning in the even semester of the 2020/2021 academic year where in the letter it is allowed to conduct face-to-face learning with several conditions, but there are still many schools that do not meet the requirements proposed so that he prefers to continue to do distance learning. RESULT AND DISCUSSION Result The learning process must continue to be carried out even though the world is currently being hit by the Covid-19 Pandemic. The most appropriate solution is Distance Learning by means of online (in a network). But there is a new problem, namely how to learn from the ideal? Online learning is simply learning that is done virtually through various existing virtual applications. Either by using a computer, Android or other supporting device. Even though learning is online, teachers must still prioritize achieving the learning objectives. Teachers must be able to adapt to existing conditions so that they can still make students achieve predetermined learning objectives. Mulyasa (2013: 100) states that teachers must realize that learning has a very complex nature because it involves pedagogical, psychological, and didactic aspects simultaneously. 
Therefore, online learning still has to go through a planning, implementation, and evaluation process like any learning process that occurs face-to-face. Based on some of the things above, a problem arises, namely, can the three learning models in the 2013 curriculum be integrated into the school ODL process? This research aims to find out whether Project Based Learning, Problem Based Learning, and Discovery Learning are still well used in the learning process in the ODL period and convey recommendations for their theoretical use. Majid (2011: 17) says that planning can be interpreted as the process of preparing subject matter, using teaching media, using teaching approaches and methods, and assessing the time allocation that will be carried out at a certain time to achieve predetermined learning objectives. Based on this statement, even the ideal online learning plan should follow all the necessary processes. However, in the online learning process, you must still follow constructivism theory in which students play a more active role in the learning process. Therefore, the teacher must provide material in the form of stimuli and stimuli to direct students in compiling a conclusion that is in accordance with the competencies to be mastered. RESEARCH METHODS `1 In this study the authors used literature study with various articles as data sources. The contents of this paper are the results of the author's thoughts regarding the learning model used in the 2013 Curriculum and its implementation in the period of distance learning carried out due to the Covid-19 pandemic outbreak. The data in this study were taken from books and journals which were compared into a recommended learning model offered by the 2013 Curriculum and its implementation in online ODL. In online learning you must also use media to help teachers carry out the learning process. As for online learning, learning media can be in the form of videos, images and various other digital files that can be used virtually. In short, online and conventional learning can be done in almost the same way. However, there must be some adjustments from using hardfile to softfile so that it can be used in the application used. independently. Then it is hoped that the experience will be converted into more meaningful knowledge for them. The advantages of implementing the project based learning model according to Kurniasih (2014: 83) are: "(1) increasing the learning motivation of students to learn, encouraging their ability to do important work, and they need to be respected; (2) improve problem solving skills; (3) make students more active and successful in solving complex problems; (4) enhancing collaboration: (5) encouraging students to develop and practice communication skills; (6) improve the skills of students in managing resources; (7) provide experience to learners learning and practice in organizing projects and making allocations of time and other resources such as equipment for completing tasks; (8) provide learning experiences that involve students in a complex and designed to develop according to the real world; (9) involving students to learn to retrieve information and demonstrate the knowledge they have, then implement it in the real world; (10) make the learning atmosphere fun, so that students and educators enjoy the learning process". There are 3 learning models used in Curriculum 2013, namely Project Based Learning, Problem Based Learning, and Discovery Learning. The following will discuss each of these learning models. 
Sani (2014: 172) says project based learning can be defined as a learning with long-term activities that involve students in designing, making and displaying products to solve real-world problems. According to Kosasih (2014: 96) project based learning is a learning model that uses a project or activity as its goal. Furthermore, Bie (in Nglimun, 2013: 185) emphasizes project based learning, namely: "a learning model that focuses on the concepts and main principles (central) of a discipline, involving students in problem solving activities and other meaningful tasks. , giving students the opportunity to work autonomously constructing their own learning, and ultimately producing valuable and realistic student work products." Project Based Learning Besides the advantages of project based learning, there are several weaknesses of project based learning according to Sani (2014: 177), namely "(1) it takes a lot of time to solve problems and produce products; (2) requires sufficient costs; (3) need teachers who are skilled and willing to learn; (4) requires adequate facilities, equipment and materials; (5) is not suitable for students who give up easily and do not have the required knowledge and skills; (6) difficulty involving all students in group work ". From the various opinions of the experts above, it can be concluded that project based learning is learning that focuses on the activities of students in understanding a concept that is in accordance with the learning objectives so that they get more meaningful learning and can build their knowledge. Students are expected to get real experience in the learning process both in groups and Problem Based Learning According to Arends (2008) Problem Based Learning is learning that has a focus on presenting authentic and meaningful problems to students, where all these problems function as a means to carry out investigations and investigations. At the beginning of learning, students are given several problems which they then analyze for solutions. Here the teacher acts as a problem giver, questioner and facility provider in the analysis process that focuses on learning objectives. According to Asis Saefuddin and Ika Berdiati in the book Effective Learning (2014: 56), states that the Discovery Learning Learning Model is defined as a learning process that occurs when the learner is not presented with lessons in its final form, but through the process of finding. Teachers must be able to provide opportunities for students to be active in the learning process and guide them so that they can be in accordance with the desired learning objectives. Furthermore J. Richard in Roestiyah N.K. (2012: 20) states that Discovery Learning is a way of teaching that involves students in the process of mental activities through exchange of opinions, with discussions, seminars, reading on their own and trying on their own, so that children can learn on their own. So that the teaching and learning situation moved from the teacher dominated learning situation to the student dominated learning situation. Based on the definition of discovery learning that has been put forward by several experts, it can be concluded that discovery learning is a learning process where students are expected to find their own answers to existing problems by being given stimuli in the form of questions that are tailored to the learning objectives. 
The advantages of the PBL model according to Shoimin (2016) include: 1) students are trained to have the ability to solve problems in real situations, 2) have the ability to build their own knowledge through learning activities, 3) learning focuses on problems so that unrelated material is unnecessary learned by students. This reduces the burden on students by memorizing or storing information, 4) scientific activity occurs in students through group work, 5) students are accustomed to using sources of knowledge, both from libraries, the internet, interviews, and observations, 6) students have the ability to assess their own learning progress, 7) students have the ability to carry out scientific communication in discussion activities or presentations of their work, and 8) individual learners' learning difficulties can be overcome through group work in the form of peer teaching. The advantages of discovery learning model according to Asis Saifuddin and Ika Budiarti (2014: 57-58) are as follows: a) Helping students to improve and improve cognitive skills and processes. The discovery effort is key in this process, depending on how one learns to learn. b) The knowledge gained from this model is very personal and powerful because it strengthens understanding, memory, and transfer. c) Generating a sense of pleasure in students because of the growing sense of investigation and success. d) This model allows students to develop quickly and at their own pace. e) This model can help Meanwhile, the shortcomings of the PBL model (Shoimin, 2016) include: 1) PBM cannot be applied to every subject matter, there is a part of the teacher who plays an active role in presenting the material. PBM is more suitable for learning that requires certain abilities related to problem solving, and 2) in a class that has a high level of student diversity there will be difficulties in the division of tasks. Discovery Learning students strengthen their self-concept because they gain confidence in working together. f) Helping students eliminate skepticism (doubt) because it leads to final and definite or definite truth. g) Students will understand basic concepts and ideas better. h) Helps develop memory and transfer to new learning process situations. i) Encourage students to think and work on their own initiative. j) Encourage students to think intuition and formulate their own hypotheses. j) Increase the level of appreciation for students. k) The possibility of students learning by utilizing various types of learning resources. l) Can develop individual talents and abilities. m) This model raises the assumption that there is a readiness of the mind to learn. infrastructure in accordance with the online ODL process. Discussion Based on the results previously described, it can be concluded that ODL can use all the learning models offered by the 2013 curriculum. However, in its implementation, ODL requires various supporting facilities and infrastructure such as android, personal computers, and other devices. This is what must be equipped to facilitate the online learning process. The government through the Ministry of Education and Culture, Directorate General of Early Childhood Education, Basic Education, and Secondary Education through circular number 8202 / C / PD / 2020 concerning the Internet Quota Provision Program for Students provides quota assistance for students and students. This quota assistance is expected to ease the burden on students and teachers in undergoing ODL. 
The shortcomings of the Discovery Learning model according to Asis Saifuddin and Ika Budiarti (2014: 58) are as follows: a) Discovery teaching is more appropriate for developing understanding, while the development of concepts, skills, and emotions as a whole receives less attention. b) In some disciplines, discovery learning is not well suited to assessing the ideas put forward by students. c) It does not give students the opportunity to think through what is to be discovered, because this has been selected in advance by the teacher. In essence, Project Based Learning, Problem Based Learning, and Discovery Learning can still be carried out with online ODL by providing starting materials for students to begin the learning process online. Students can then complete the teacher's assignments, be they projects, problem solving, or finding solutions, either independently or in groups in their respective places of residence. Tasks carried out in groups can be done by applying the 3M measures (washing hands, maintaining distance, and wearing masks) and the Covid-19 protocol. Furthermore, the teacher gives students time to carry out their tasks while providing online directions regarding the assignments, so that the desired competencies can be achieved. These directions can be given daily or weekly, depending on the needs of the learning process. Based on the characteristics discussed above, it can be concluded that all learning models used in the 2013 curriculum can be implemented with the ODL process. However, their implementation requires media and equipment that support the learning process so that it can run well. For example, in discovery learning, teachers must actively guide students in the process of finding answers to various problems so that students attain the desired competence. Of course, this must be supported by various facilities and infrastructure in accordance with the online ODL process. CONCLUSION Online Distance Learning is carried out because of the Covid-19 pandemic, which requires people to practice social distancing to reduce the risk of transmission. However, the learning process still requires a model as a teacher's guide, so that learning objectives are achieved and students attain the desired competencies. In the 2013 curriculum, three learning models are offered, namely Project Based Learning, Problem Based Learning, and Discovery Learning. The results of this study indicate that all three learning models can still be used in ODL. However, they must be supported by appropriate tools and media so that the learning process can be carried out smoothly according to what has been planned.
2021-05-04T22:05:31.912Z
2021-04-04T00:00:00.000
{ "year": 2021, "sha1": "736adc8024fa86459a7f14837c06112a5d9cb806", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.20527/jee.v2i1.3183", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "408b003a15789bdb02447541531b070e18770f6f", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119192385
pes2o/s2orc
v3-fos-license
Identification of extra neutral gauge bosons at the International Linear Collider Heavy neutral gauge bosons, Z's, are predicted by many theoretical schemes of physics beyond the Standard Model, and intensive searches for their signatures will be performed at present and future high energy colliders. It is quite possible that Z's are heavy enough to lie beyond the discovery reach expected at the CERN Large Hadron Collider LHC, in which case only indirect signatures of Z' exchanges may occur at future colliders, through deviations of the measured cross sections from the Standard Model predictions. We here discuss in this context the foreseeable sensitivity to Z's of fermion-pair production cross sections at an e^+e^- linear collider, especially as regards the potential of distinguishing different Z' models once such deviations are observed. Specifically, we assess the discovery and identification reaches on Z' gauge bosons pertinent to the E_6, LR, ALR and SSM classes of models, that should be attained at the planned International Linear Collider (ILC). With the high experimental accuracies expected at the ILC, the discovery and the identification reaches on the Z' models under consideration could be increased substantially. In particular, the identification among the different models could be achieved for values of Z' masses in the discovery (but beyond the identification) reach of the LHC. An important role in enhancing such reaches is played by the electron (and possibly the positron) longitudinally polarized beams. Also, although the purely leptonic processes are experimentally cleaner, the measurements of c- and b-quark pair production cross sections are found to carry important, and complementary, information on these searches. Introduction Electroweak theories beyond the Standard Model (SM) based on spontaneously broken extended gauge symmetries naturally envisage the existence of heavy, neutral, vector bosons Z ′ . The variety of the proposed Z ′ models is somewhat broad, and for definiteness in the sequel we shall focus on the so-called Z ′ SSM , Z ′ E 6 , Z ′ LR and Z ′ ALR models. Particular attention has recently been devoted to the phenomenological properties and the search reaches on such scenarios, and in some sense we may consider these Z ′ models as representative of this New Physics (NP) sector [1,2]. A typical manifestation of the production of such states is represented by (narrow) peaks observed in the cross sections for processes among SM particles at high energy accelerators, for example in the invariant mass distributions for Drell-Yan dilepton pair production at the Fermilab Tevatron or at the CERN LHC hadronic colliders. Current experimental search limits on M Z ′ at 95% C.L., from Drell-Yan cross sections at the Tevatron, generally range in the interval 0.8-1 TeV, depending on the particular Z ′ model being tested [3]. Even higher 95% C.L. limits, of the order of 1.14-1.4 TeV are obtained for the Z ′ χ , Z ′ LR , and Z ′ SSM models, from electroweak high precision data [2]. Clearly, the eventual discovery of a peak should be supplemented by the verification of the spin-1 of the assumed underlying Z ′ , vs. the alternative spin-2 and spin-0 hypotheses corresponding, e.g., to exchanges of a Randall-Sundrum graviton resonance [4] or a sneutrino [5]. This kind of analysis relies on appropriate angular differential distributions and/or angular asymmetries. 
Finally, once the spin-1 has been established, the particular Z ′ scenario pertinent to the observed signal should be identified, see, e.g., Refs. [6][7][8][9][10][11][12][13][14][15]. From studies of Drell-Yan processes at the LHC with a time-integrated luminosity of 100 fb −1 , it turns out that one can expect, at the 5-σ level, discovery limits on M Z ′ of the order of 4-4.5 TeV, spin-1 identification up to M Z ′ ≃ 2.5-3 TeV and potential of distinction among the individual Z ′ models up to M Z ′ ≃ 2.1 TeV (95% C.L.). For masses above the direct search limits mentioned above, and LHC luminosity at the design value, access to Z ′ manifestations may be provided by indirect, virtual exchange effects causing deviations of cross sections from the SM predictions, if M Z ′ is not excessively heavy. However, at the LHC, model identification from Drell-Yan dilepton mass distributions and forward-backward asymmetries may be problematic due to limited statistics [16]. An alternative resource for the observation of virtual heavy gauge boson exchanges should be represented by the next generation e + e − International Linear Collider (ILC), with center of mass energy √ s = 0.5-1 TeV and typical time-integrated luminosities L int ∼ 0.5-1 ab −1 [17,18], and the really high precision measurements that will be possible there. Indeed, the baseline configuration envisages a very high electron beam polarization (larger than 80%). Also positron beam polarization, around 30%, might be initially obtainable and perhaps already available for physics. This polarization could be raised to about 60% or higher in the ultimate upgrade of the machine. The polarization option might represent an asset in order to enhance the discovery reaches and identification sensitivities on NP models of any kind [19], therefore also on Z ′ exchanges in interactions of SM particles. Previous analyses, based on various final state channels and possible experimental observables, show that sensitivities to quite high Z ′ masses could in principle be attained at the ILC (qualitatively, of the order of M Z ′ ∼ (10 − 20) · √ s for the highest planned luminosity, see, e.g., [1,[20][21][22][23][24], and references therein). The ILC parameters have recently been fixed in the Reference Design Report [17], so that it should be interesting to reconsider the identification of Z ′ models in the light of the numbers reported there. We will here focus on the fermion-antifermion production reactions at the polarized ILC: As basic experimental observables for the Z ′ analysis, as an alternative to integrated observables like the total cross sections and/or angular-integrated asymmetries, we here choose the differential angular distributions for the above processes, that allow to exploit the information contained in the different portions of the final state phase space by a binned analysis. Particular emphasis will be given to the comparison between the cases of unpolarized and polarized initial beams, as regards the expected potential of ILC in identifying the Z ′ models of interest here, for M Z ′ values of the order of and beyond the limits accessible at the LHC. Indeed, concerning the Z ′ mass, there are two scenarios. 
The first one is represented by the interval in M Z ′ between the expected identification and discovery limits at the LHC: here, we can assume the Z ′ to have already been discovered at some M Z ′ (but the model not identified), so that the model identification (or equivalently the determination of the coupling constants) could be performed at the ILC, based on the deviations of cross sections from the SM predictions for the determined Z ′ mass. For earlier attempts along this line see, e.g., Ref. [25]. The second mass range is above the LHC discovery limit and, here, with M Z ′ unknown, both discovery and identification reaches should be assessed for the ILC. In the following, in Sec. 2 we give a brief introduction to the different Z ′ models considered in the analysis, and give the corresponding leading order expressions of the polarized differential cross sections for processes (1), mostly in order to establish the notations. In Secs. 3 and 4 we present the results of our analysis for the discovery and identification reaches on the individual Z ′ models at the ILC; and finally, Sec. 5 contains some concluding remarks. Polarized observables and Z ′ models The analysis at the ILC is somewhat different from the corresponding studies of Drell-Yan processes at the LHC. Deviations of the various observables from SM predictions, such as cross sections and asymmetries, due to the interference of the SM amplitude with schannel exchanges of the Z ′ , graviton resonance G or sneutrinoν, might be observed at the ILC. However, in the latter case, there is no interference ofν with the SM exchanges [5]. Conversely, in the case of the spin-2 KK graviton exchange, the interference with the SM exchanges vanishes when one integrates over the full angular range [26], whereas for differential observables such interference survives. Nevertheless, it turns out [27] that the sensitivity at the ILC with √ s = 0.5 TeV and L int = 500 fb −1 to a KK graviton resonance in processes (1) is of the order 0.8 TeV (1.9 TeV) for the graviton coupling constant c = 0.01 (c = 0.1), which is well within the expectations for discovery and identification at the LHC. Accordingly, the KK excitation would have been either discovered or excluded by the time the ILC will be operating. Therefore, from the considerations above, one can conclude that for √ s < M Z ′ , as will be the case for the ILC, only the interference of the Z ′ amplitude with the SM one could be visible at the ILC in processes (1). Accordingly, it might not be so indispensible to perform angular analyses such as those foreseen for Drell-Yan processes at the LHC, in order to differentiate the spin of the exchanged intermediate heavy quantum states, because only Z ′ should be able to lead to appreciable interference effects. The polarized differential cross section for the Bhabha process e + + e − → e + + e − , where γ and Z can be exchanged also in the t-channel, can be written at leading order as (see, e.g., Refs. [28,29]): with the decomposition In Eqs. (2) and (3), the subscripts t and s denote helicity cross sections with SM γ and Z exchanges in the corresponding channels, z = cos θ and the subscripts L, R denote the respective helicities, P − and P + denote the degrees of longitudinal polarization of the e − and e + beams, respectively. 
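A rough sketch of such a polarized decomposition (with the helicity cross sections labelled here by the e− and e+ helicities, a convention that need not coincide with the one adopted in Eqs. (2) and (3)) is

\[
\frac{d\sigma(P^-,P^+)}{dz} \;=\; \frac{1}{4}\sum_{\alpha,\beta=L,R}\bigl(1+\lambda_\alpha P^-\bigr)\bigl(1+\lambda_\beta P^+\bigr)\,\frac{d\sigma_{\alpha\beta}}{dz},\qquad \lambda_R=+1,\;\lambda_L=-1 .
\]

For s-channel annihilation with negligible fermion masses only the terms with opposite e− and e+ helicities survive, while the t-channel photon and Z exchanges present in Bhabha scattering also populate the equal-helicity configurations.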
(In the recent review [19], the opposite sign convention for positron polarization was adopted.) In terms of helicity amplitudes: according to the previous considerations, the amplitudes G^{ee}_{αβ,i}, with α, β = L, R and i = s, t, are given by the sum of the SM γ, Z exchanges plus deviations representing the effect induced by a Z′ boson. Here, u, t = −s(1 ± z)/2 (we are neglecting fermion masses), g_L = −cot 2θ_W and g_R = tan θ_W, with θ_W the electroweak mixing angle, whereas g′_L and g′_R are characteristic of the particular Z′ model. In the annihilation channel, below the Z′ mass, the Z′ interference with the SM will be destructive in the LL and RR cross sections, whereas it can be of either sign in the LR and RL cross sections. The polarized differential cross section for the leptonic channels e+e− → l+l− with l = µ, τ can be obtained directly from Eq. (2), basically by dropping the t-channel contributions. The same is true, after some obvious substitutions, for the annihilations into cc̄ and bb̄ final states, in which case also the color (N_C) and QCD correction factors, C_s ≃ N_C [1 + α_s/π + 1.4 (α_s/π)²], must be taken into account. The s-channel helicity amplitudes for the processes (1) with f ≠ e can be written as in Ref. [28], where in the latter expression α = β. As anticipated, the Z′ models that will be considered in our analysis are the following: (i) The Z′ scenarios originating from the spontaneous breaking of the exceptional group E_6, defined in terms of a mixing angle β. The specific values β = 0, β = π/2 and β = −arctan√(5/3) correspond to different E_6 breaking patterns and define the popular scenarios Z′_χ, Z′_ψ and Z′_η, respectively. (ii) The left-right models, originating from the breaking of an SO(10) grand-unification symmetry, where the corresponding Z′_LR couples to a combination of right-handed and B − L neutral currents (B and L denote baryon and lepton currents), specified by a real parameter α_LR bounded by √(2/3) ≲ α_LR ≲ √2. The particular value α_LR = √2 corresponds to a pure L-R symmetric model (LRS). (iii) The Z′_ALR predicted by the 'alternative' left-right scenario. (iv) The so-called sequential Z′_SSM, where the couplings to fermions are the same as those of the SM Z. Detailed descriptions of these models, as well as the specific references, can be found, e.g., in Ref. [1]. All numerical values of the Z′ couplings needed in Eq. (5) are collected, for example, in Table 1 of Ref. [14]. Discovery of Z′ In the absence of available data, the assessment of the expected 'discovery reaches' on the various Z′s needs the definition of a 'distance' between the NP model predictions and those of the SM for the basic observables that will be measured. The former predictions parametrically depend on the Z′ mass and its corresponding coupling constants, while the latter ones are calculated using the parameters known from the SM fits. Such a comparison can be performed by a standard χ²-like procedure. As anticipated in Sec. 1, we divide the full angular range into bins and identify the basic observables with the polarized differential angular distributions for processes (1), O = dσ(P−, P+)/dz, in each bin. Correspondingly, the relevant χ² can symbolically be defined as in Eq. (7). Notice that not only the different beam longitudinal polarizations, but eventually also the various processes 'f' in Eq. (1) are combined in the definition (7).
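Schematically, a binned χ² of the kind referred to in Eq. (7) can be written as (a sketch; the precise normalization and binning follow the definitions above)

\[
\chi^2 \;=\; \sum_{\{f;\,(P^-,P^+)\}}\;\sum_{\mathrm{bins}}\left[\frac{O^{\,\mathrm{SM}+Z'}(\mathrm{bin})-O^{\,\mathrm{SM}}(\mathrm{bin})}{\delta O(\mathrm{bin})}\right]^2 ,
\]

with O = dσ(P−, P+)/dz evaluated in each angular bin, the outer sum running over the combined processes and polarization configurations, and δO the expected experimental uncertainty on O.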
Here, one assumes to have produced a set of 'data', for example by using the dynamics specified by a given Z′ model, and δO in the denominator denotes the corresponding 'experimental' uncertainty on O, combining statistical and, if possible, systematical ones. According to the previous considerations, the χ², besides the number of degrees of freedom, is basically a function of the chosen Z′ model parameters. In particular, if the coupling constants are fixed at specific values, it will depend solely on the Z′ mass, and we vary this parameter. The discovery sensitivity to the Z′ under consideration can in this case be identified as the limiting value of M_Z′ for which the value of χ²(M_Z′) has the probability needed for exclusion of the SM at a desired confidence level (in what follows, we shall impose 95% C.L.). In the cases where cos β- or α_LR-dependent couplings are considered, SM exclusion regions can be defined analogously. To derive the expected 'discovery' limits on Z′ models at the ILC, for the 'annihilation' channels in Eq. (1), with f ≠ e, we restrict ourselves to combining in Eq. (7) the (P−, P+) = (|P−|, −|P+|) and (−|P−|, |P+|) beam polarization configurations, which are the predominant ones. For the Bhabha process, f = e, we combine in (7) the cross sections with all four possible polarization configurations, i.e., (P−, P+) = (±|P−|, ∓|P+|) and (±|P−|, ±|P+|). Numerically, following the ILC Design Report [17], we take for the electron beam |P−| = 0.8. For the positron beam, |P+| = 0.3 is discussed as possibly available 'free of charge' already in the ILC initial running conditions. However, such a small positron polarization will turn out not to affect our evaluated discovery and identification reaches on Z′s considerably. We shall therefore present numerical results for two cases, unpolarized positron beam |P+| = 0, and |P+| = 0.6 representing the 'ultimate' upgrade. Regarding the ILC energy and the time-integrated luminosity (which, for simplicity, we assume to be equally distributed among the different polarization configurations defined above), still according to Ref. [17], we will give explicit numerical results for c.m. energy √s = 0.5 TeV with time-integrated luminosity L_int = 500 fb⁻¹, and for the 'ultimate' upgrade values √s = 1.0 TeV with L_int = 1000 fb⁻¹. The assumed final-state identification efficiencies governing, together with the luminosity, the expected statistical uncertainties are: 100% for e+e− pairs; 95% for l+l− events (l = µ, τ); 35% and 60% for cc̄ and bb̄, respectively [17,18]. As for the major systematic uncertainties, they originate from errors on beam polarizations, on the time-integrated luminosity, and on the final-state reconstruction and energy efficiencies. For the longitudinal polarizations, we adopt the values δP−/P− = δP+/P+ = 0.25%, rather ambitious, especially as far as P+ is concerned, but strictly needed for conducting the planned measurements at the permille level, see, e.g., Refs. [30][31][32]. As regards the other systematic uncertainties mentioned above, we assume for the combination the (perhaps conservative) lump-sum value of 0.5%. [Fig. 1 caption fragment: "... L_int = 500 fb⁻¹ (1000 fb⁻¹), compared to the results expected from Drell-Yan processes at the LHC at the 5-σ level [14]. Three options of polarization are considered at the ILC: unpolarized beams, P− = P+ = 0; polarized electron beam, |P−| = 0.8; both beams polarized, |P−| = 0.8 and |P+| = 0.6."]
The systematic uncertainties are included using the covariance matrix approach [33][34][35]. Concerning the theoretical inputs, for the SM amplitudes we use the effective Born approximation [36] vertices, with m top = 175 GeV and m H = 120 GeV. The numerically dominant O(α) QED corrections are generated by initial-state radiation, for both Bhabha scattering and the annihilation processes in (1). They are accounted for by a structure function approach including both hard and soft photon emission [37], and by a flux factor method [38], respectively. Effects of radiative flux return to the s-channel Z exchange are minimized by the cut ∆ ≡ E γ /E beam < 1 − M 2 Z /s on the radiated photon energy, with ∆ = 0.9. In this way, only interactions that occur close to the nominal collider energy are included in the analysis and, accordingly, the sensitivity to the manifestations of the searched-for nonstandard physics can be optimized. By numerical studies based on the ZFITTER code [39], other QED effects such as final-state and initial-final state emission are found, in the processes e + e − → l + l − (l = µ, τ ) and e + e − → qq (q = c, b), to be numerically unimportant for the chosen kinematical cuts. Finally, correlations between the different polarized cross sections (but not between the individual angular bins) are taken into account in the derivation of the numerical results, that we present in Fig. 1. The figure includes a comparison with the discovery potential of the LHC with luminosity 100 fb −1 , from the Drell-Yan processes pp → l + l − + X (l = e, µ) (at the 5-σ level). These values provide a representative overview of the sensitivities of the reach in M Z ′ on the planned energy and luminosity, as well as on beam polarization. Distinction of Z ′ models Basically, in the previous subsection we have assessed the extent to which Z ′ models can give values of e + e − differential cross sections that can exclude the SM hypothesis to a prescribed C.L. Such 'discovery reaches' are represented by upper limits on Z ′ masses, for which the observable deviations between the corresponding Z ′ models and SM predictions are sufficiently large compared to the foreseeable experimental uncertainties on the cross sections at the ILC. However, since different models can give rise to similar deviations, we would like to determine the ILC potential of identifying, among the various competing possibilities, the source of a deviation, should it be effectively observed. These ID-limits should obviously be expected to lie below the corresponding ILC discovery reaches and, for an approximate but relatively simple assessment, we adapt the naive χ 2 -like procedure applied in the previous subsection. To this purpose, we start by defining a 'distance' between pairs of Z ′ models, i and j with i, j denoting any of the SSM, SM, ALR, LRS, ψ, η, χ, but i = j. We assume for example model i to be the 'true' model, namely, we consider 'data' sets obtained from the dynamics i, with corresponding 'experimental' uncertainties, compatible with the expected 'true' experimental data. The assessment of its distinguishability from a j model, that we call 'tested' model, can be performed by a χ 2 comparison analogous to (7), with the χ 2 defined as: As an illustration, the angular behavior of the deviations in the numerator of Eq. (8) for the unpolarized annihilation e + e − → bb is depicted in Fig. 
2, for the case where the 'true' model is i = ALR, with M Z ′ = 2.5 TeV for all models, at the ILC with √ s = 0.5 TeV and L int = 500 fb −1 (actually, in this figure,∆ is the relative deviation, . Basically, considering that the ILC will start when the LHC will already be operating at the design energy and luminosity, as anticipated previously, we can envisage two cases requiring somewhat different strategies. Z ′ mass known In the first case we assume that the Z ′ mass is already measured at the LHC, but perhaps not 'identified' there, and the value is within the ILC discovery reaches for both models i and j. In this case one should set M Z ′ i = M Z ′ j ≡ M Z ′ in Eq. (8) and, accordingly, the χ 2 becomes a function of only the Z ′ i and Z ′ j coupling constants. If both the Z ′ i and Z ′ j couplings are fixed numerically, like in the example of Fig. 2, distinguishability can be assessed by varying M Z ′ , up to the point where the χ 2 ij reaches the critical value suitable for exclusion of the 'tested' model j by the 'true' model i at the desired confidence level. If the above mentioned couplings are, instead, the β-or α LR -dependent ones, 'confusion' domains between 'true' and 'tested' models can analogously be determined by means of Eq. (8) in the model parameter plane (cos β, α LR ) for fixed values of M Z ′ i = M Z ′ j ≡ M Z ′ . By definition, in these 'confusion' domains, that depend on the actual value assumed for M Z ′ , the cross sections corresponding to definite values of β and α LR cannot be distinguished from each other at the desired confidence level. Correspondingly, the 'complementary' regions in the above mentioned parameter plane can define the 'resolution' domain of the 'tested' model by the the 'true' model hypothesis, and determine in this way the identification limit on the latter. As an illustrative example of application of the ID-criteria exposed above, we evaluate the 'resolution' regions between E 6 ('true') and left-right LR ('tested') models in the plane (cos β, α LR ) for different values of M Z ′ . Figures 3-6 show the regions of 'resolution' obtained from the processes e + e − → e + e − , l + l − (l = µ, τ ), cc and bb, for M Z ′ =4.5 TeV, 3.5 TeV and 2.5 TeV at √ s = 0.5 TeV and L int = 500 fb −1 , and for different values of beam polarization. Notice that, in these figures, the horizontal axis includes also the values of β specific of the χ, ψ and η models, while the vertical axis includes the value of α LR representative of the LRS model. 3 Figures 3-5 clearly demonstrate the complementary roles of the processes with different final states, in particular, that the process e + e − → bb can potentially be the most efficient one in distinguishing E 6 and left-right models from each other (it provides the largest resolution domains). Conversely, the purely leptonic processes, f = e, µ, τ in (1), turn out to determine much less extended 'resolution' areas, in particular they cannot discriminate Also, as can be seen from these figures, the leptonic processes are found to provide 'confusion' domains (white) located in the 'central' part of the plane (cos β, α LR ), around (1/4, 3/2), whereas the processes into qq final states exhibit the opposite feature. Therefore, as shown in Fig. 6, the combination of all processes f is expected to dramatically reduce the 'confusion' area in the above mentioned plane and to determine the largest possible domain in which the considered Z ′ models can be mutually distinguished from one another. 
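A purely illustrative numerical sketch of the pairwise χ² comparison between a 'true' and a 'tested' model follows; all inputs are hypothetical placeholders, not the predictions of any of the Z′ models discussed here.

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical binned predictions for the angular distribution dsigma/dz
    # of a 'true' model i and a 'tested' model j (expected events per bin).
    n_true   = np.array([420., 380., 350., 330., 320., 325., 345., 380., 430., 500.])
    n_tested = np.array([410., 372., 346., 331., 324., 332., 355., 392., 444., 515.])

    # Per-bin uncertainty: Poisson statistics plus a flat 0.5% systematic, added in quadrature.
    delta = np.sqrt(n_true + (0.005 * n_true) ** 2)

    chi2_ij = np.sum(((n_true - n_tested) / delta) ** 2)
    crit_95 = chi2.ppf(0.95, df=len(n_true))  # 95% C.L. critical value

    print(f"chi2_ij = {chi2_ij:.1f}, 95% C.L. critical value = {crit_95:.1f}")
    print("tested model excluded" if chi2_ij > crit_95 else "models confused")

In the actual analysis this comparison is repeated while varying the masses and couplings of the two hypotheses, and over all processes and polarization configurations, which is what defines the 'confusion' and 'resolution' domains discussed in the text.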
The substantial role of electron polarization and, to a somewhat lesser extent, of positron polarization in shrinking 'confusion' domains, leading to enlarged model 'resolution' domains, can also be seen in these panels. Combining all processes, and with both beams polarized, a 'confusion' turns out to persist only in the minute corners shown in Fig. 6, for M_Z′ = 4.5 TeV, and nothing at the lower masses. It is interesting to compare these resolution regions with the corresponding ones resulting from the assumed Z′ discovery in the Drell-Yan process at the LHC, shown in the lower-right panel of Fig. 6. This figure shows that, at the LHC, for the discovery of a 4.5 TeV Z′ the corresponding resolution region is found to cover only a narrow strip, 1.4 ≲ α_LR and −0.5 ≲ cos β ≲ 0.8. Even for M_Z′ = 2.5 TeV, at the LHC, there are only modest hyperbola-like strips at |cos β| ≳ 0.5 where models can be distinguished. We now continue the above analysis in a somewhat different direction, namely, we wish to determine the limiting value of M_Z′ up to which a particular cos β-dependent E_6 model, assumed to be true, can be identified at the ILC in the sense that all the other, potentially competing, Z′ models can be excluded. The results from this analysis are shown in Figs. 7-9. [Fig. 7 caption fragment: The E_6 model is assumed to be 'true' while the others (LR, ALR, SSM and SM) are taken as tested models. The identification range is indicated as the shaded (yellow) area. Right panel: similar, but for the polarized processes.] Figure 7 exhibits the exclusion limits vs. β on the models LR, ALR, SSM and SM (recall that exclusion of the SM determines the discovery reaches), once an E_6 model is 'true' (all processes combined). For the LR 'tested' model, the corresponding curve in Fig. 7 is obtained, for each β, by varying α_LR in the full allowed range, which gives LR exclusion limits M_Z′(α_LR), and choosing the minimum value of such M_Z′s (in this way the whole class of LR models, as well as the LRS, are excluded). The solid line labelled 'SM' represents the discovery reach, i.e., the Z′ mass up to which the SM can be excluded. The overall identification range is shown as the shaded (yellow) region. One can see that, in this case, the identification of the class of E_6 models considered here is basically determined by the exclusion of the class of the LR models and, for the 'central' values of cos β, by the SM (i.e., by the discovery reach). Consequently, the ID-limit is, in a (somewhat broad) range around cos β = 0, essentially identical to the discovery limit, whereas it is substantially smaller in the two intervals close to |cos β| = 1. Figure 7 shows that, numerically, for |cos β| < 0.9 the ID-limit is as large as M^ID_Z′ ≃ 3-4 TeV, and for cos β near ±1 the E_6 models become more and more difficult to distinguish from the competitor ones. The right panel of Fig. 7 shows the corresponding identification reaches for the polarized case, |P−| = 0.8 and |P+| = 0.6, and the quantitative improvements that can be achieved in this case. Similarly, the identification limits on LR models vs. the parameter α_LR can be read off from Fig. 8. The curve labelled 'E_6' is obtained by a procedure analogous to the curve 'LR' in Fig. 7, and the solid curve 'SM' represents the exclusion limits of the SM (hence the discovery reaches).
In this case, the identification of the class of LR Z′s turns out to be determined basically by the exclusion of the class of E_6 models, generally not much below the discovery limit for all values of α_LR. On the other hand, the figure shows rather high identification limits, of the order of M^ID_Z′ ≃ 3.0-4.6 TeV in the range, say, 0.9 ≲ α_LR ≲ √2, whereas they substantially decrease for smaller α_LR. We can conclude, from Figs. 7 and 8, that the identification reach at the ILC, already at √s = 0.5 TeV and L_int = 500 fb⁻¹, exceeds the corresponding discovery reach at the LHC. In fact, the full integrated luminosity considered here might not be quite indispensable for this identification. In Table 1 we show the required integrated luminosity, at the two ILC energies of 0.5 and 1 TeV, for the identification of these different models, realized as a Z′ at 2.5, 3.5 or 4.5 TeV (within the discovery reach of the LHC). Finally, in Fig. 9 we summarize the information, of a similar kind as represented in Figs. 7 and 8, relevant to the cases where the ALR model or the SSM model is assumed 'true' (upper and lower panels, respectively). As usual, the figure shows the limiting values of M_Z′ up to which the competing models can be excluded, which are further increased if polarization is available. In a sense, the ALR and SSM models are the most 'orthogonal' ones since, if either of them is assumed 'true', the other one can be excluded up to a really high value of M_Z′. Z′ mass not known The second kind of situation is met in the case where the Z′ mass cannot be known a priori, e.g., the Z′ is too heavy to be discovered at the LHC [say, M_Z′ > 4-5 TeV], but deviations from the SM predictions can still be observed at the ILC. Actually, models with different Z′ masses and coupling constants can in principle be the source of a deviation from the SM predictions observed at the ILC. With the coupling constants held fixed numerically at the theoretical values pertinent to the Z′_i and Z′_j models under consideration, the χ²_ij of Eq. (8) becomes a function of the two masses, M_Z′_i and M_Z′_j, both assumed to lie in the respective ILC discovery ranges. In this case, one can derive a contour in the (M_Z′_i, M_Z′_j) plane, inside of which the χ²_ij of Eq. (8) is consistent with 'confusion' of i and j at the desired confidence level. The region encircled by such a contour will be the 'confusion' (or 'no distinction') domain between the 'true' model i and the 'tested' model j; correspondingly, in the complementary domain the hypothesis j could be excluded if i is assumed to be 'true'. We refer to this latter, complementary, region as the 'resolution' region. One can iterate this procedure and generate pairwise 'confusion' and 'exclusion' regions in the two-dimensional planes of parameters for all models j ≠ i. As will be illustrated graphically in the remaining part of the paper, a common feature of such 'exclusion' regions is that the relevant contours admit, for each j (and obviously fixed i), a minimum value M^(j)_Z′_i of M_Z′_i above which the 'tested' model j can be excluded regardless of M_Z′_j. We finally assume, as identification limit on the i model at the ILC, the smallest of the values M^(j)_Z′_i for j ≠ i, for which all tested models will be excluded by the hypothesis of i being 'true'. Of course, such an ID-value of M_Z′ should be smaller than (or at most equal to) the ILC discovery reach on model i. This procedure can finally be applied, in turn, to all the different Z′ models for the assessment of the corresponding ID-reaches.
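In compact form, the identification criterion just described amounts to (a paraphrase of the text above)

\[
M^{\mathrm{ID}}_{Z'_i} \;=\; \min_{j\neq i}\, M^{(j)}_{Z'_i},
\]

where M^(j)_Z′_i is the smallest mass of the 'true' model i above which the 'tested' model j is excluded for any value of M_Z′_j.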
This naive χ² procedure can also be extended in a straightforward way to estimating exclusion ranges (and corresponding identification limits) in the cases where cos β- and/or α_LR-dependent Z′ models are considered in Eq. (8). Examples of pairwise 'confusion' regions and corresponding contours, relevant to the Z′ models chosen in Fig. 1, are shown in Fig. 10. In this figure, the various steps of the procedure outlined above, as well as the final derivation of the ID-limits, can easily be followed. As an example of how to read this figure, consider the hypothesis that the η model is 'true' (lower left panel), with M_Z′ = 6 TeV. Then, if instead the ψ or χ model should be true, the mass would have to exceed 4.2 or 6.3 TeV, respectively. Finally, Fig. 11 shows the comparison of identification reaches, or distinction bounds, on the Z′ models considered in Fig. 1, together with the corresponding bounds on M_Z′ obtained from the process pp → l+l− + X at the LHC with c.m. energy 14 TeV and time-integrated luminosity 100 fb⁻¹. We assume, for the ILC, the same c.m. energy, luminosity and beam polarization as in Fig. 1. [Fig. 11 caption fragment: "...) and L_int = 500 fb⁻¹ (1000 fb⁻¹), compared to the results expected from Drell-Yan processes at the LHC at 95% C.L. [14]. Two options of polarization are considered: unpolarized beams P− = P+ = 0 and both beams polarized, |P−| = 0.8 and |P+| = 0.6."] The figure speaks for itself, and in particular clearly exhibits the roles of the ILC parameters. In summary, one might be able to distinguish among the considered Z′ models at 95% C.L. up to M_Z′ ≃ 3.1 TeV (4.0 TeV) for unpolarized (polarized) beams at the ILC (0.5 TeV) and 5.3 TeV (7.0 TeV) at the ILC (1 TeV), respectively. In particular, the figure explicitly manifests the substantial role of electron beam polarization in sharpening the identification reaches. Positron polarization can also give a considerable enhancement in this regard (if measurable with the same high accuracy as for electron polarization), although to a more limited extent in some cases. Clearly, our analysis is greatly simplified by the fact that the vector and axial-vector couplings of the considered Z′s are fixed theoretically. If we wanted to determine them in general, namely, with both masses and coupling constants a priori free variables, the χ² analysis should be five-dimensional, with, in addition, the limitation that for M_Z′ ≫ √s (contact-interaction regime) M_Z′ could not be simultaneously extracted. In principle, data at different collider energies could be utilized in this regard, for Z′ masses not too far from √s [40]. Concluding remarks We have explored in some detail how the Z′ discovery reach at the ILC depends on the c.m. energy, on the available polarization, as well as on the model actually realized in Nature. The lower part of the ILC discovery range, up to M_Z′ ≃ 5 TeV, will also be covered by the LHC, but the identification reach at the LHC is only up to M_Z′ < 2.2 TeV. In this LHC discovery range, the cleaner ILC environment, together with the availability of beam polarization, allows for an identification of the particular Z′ version realized. Actually, this ILC identification range extends considerably beyond the LHC discovery range. Specifically, the ILC with polarized beams at √s = 0.5 TeV and 1 TeV allows one to identify all considered Z′ bosons if M_Z′ ≲ (6-7) × √s. This represents a substantial extension of the LHC reach.
2009-12-15T06:27:38.000Z
2009-12-15T00:00:00.000
{ "year": 2009, "sha1": "e3af54e90c402623c7e0c4f3153da84ab27ab1ef", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0912.2806", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e3af54e90c402623c7e0c4f3153da84ab27ab1ef", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
187660915
pes2o/s2orc
v3-fos-license
Neurodevelopment in Infants with Moderate Neonatal Risk and Its Association with Biological and Environmental Factors Moderate-risk neonates (MRNs) are newborns who usually remain hospitalized in neonatal intensive care units (NICU) after birth. Although they have low rates of mortality, morbidity burden may be significant and involve neurological risk. The aim of this study was to estimate the prevalence of neurodevelopmental disorders and the influence of biological and socio-environmental factors on the neurodevelopment of MRN. A cross-sectional study was performed on a sample of 162 MRNs aged 2–24 months, who remained in NICU ≥ 72 h after birth, with gestational age (GA) ≥ 34 weeks, birth weight ≥ 1500 g, and normal neurological and clinical examinations by the time of hospital discharge. Four neurodevelopmental areas were assessed using the Argentinian test PRUNAPE: language (LG), fine and gross motor skills (FM and GM), and personal-social skills (PS). Data from biological (gestational, perinatal and postnatal) and socio-environmental factors were collected through parental questionnaires. Thirty-four percent of infants failed the test. Gross motor was the most affected area (14.2%), followed by LG (11.7%), FM (7.4%), and PS (4.3%). Among gestational factors global failure was associated with drugs and alcohol consumption (p ≤ 0.029). Language was associated with maternal smoking (p = 0.007; OR 3.5), FM (p = 0.009; OR 13.0), and GM (p = 0.002; OR 10.6) with drug use, and both LG (p = 0.000; OR 22.6) and GM (p = 0.007; OR 16.2) with alcohol consumption during pregnancy. Infants born by cesarean had a higher risk of failure than those born by vaginal delivery (p = 0.049; OR: 2.2), as well as infants with pathological complementary diagnosis (p = 0.001; OR 2.7). Mechanical ventilation was associated with FM disorders (p = 0.025; OR 4.2). Children with siblings had a higher risk of failing the test than only children (p = 0.041; OR 2.0). Rate of neurodevelopmental disorders in MRN exceeds widely that of the general population. GM was the most affected area. Maternal addictions, cesarean birth, pathological complementary studies, MV, and having siblings are factors associated with failure in the screening. group can be divided into high, moderate, and low neonatal risk. High-risk neonates are characterized by high morbidity and mortality and the need for specific care. They are more likely to develop sensory and neurodevelopmental disorders during infancy and childhood (Doyle et al. 2014;Ryckman et al. 2017). On the other hand, low-risk babies are apparently healthy newborns, born at term, with no serious family, gestational, or perinatal history, with a normal physical examination and adequate adaptation to the extrauterine environment (Cheng et al. 2008;Daga et al. 1996). Between both extremes are the moderate-risk neonates (MRNs). At birth, these infants usually require hospitalization in Neonatal Intensive Care Units (NICU). These neonates constitute an Bundefined area,ŝ ince they have low mortality but high morbidity-including several pathologies that usually do not compromise their lives and from which they recover quickly-but do not belong to normal population (Vericat and Orden 2017). It is an increasing group hardly assignable to a single nosological category, due to the large variety of involved pathologies. However, MRNs are a neurological risk group which may suffer developmental disorders, even if they had not been seriously damaged during neonatal period (McGowan et al. 
2011;Vericat and Orden 2017). The MRN population is basically composed by two kinds of patients: late preterms and another group of children born at term with low-severity pathologies, such as respiratory infections, hyperbilirubinemia, and congenital infections, among others. Their neurodevelopment outcomes and possible sequelae do not always get the same attention as seen in extreme premature or children with severe diagnosis (Cheong et al. 2017). Therefore, early detection of neurodevelopment disorders in this group should be one of the most relevant actions to be done in pediatric primary care health. Developmental screening tests can be used for this purpose, especially in those newborns that remained in NICU and require a more systematic follow-up (Orton et al. 2018;Simard et al. 2011). This study aimed to know the prevalence of neurodevelopmental disorders in MRN who remained at NICU as newborns, as well to identify the biological and socio-environmental factors associated with failures detected by a developmental screening test. Method Participants A cross-sectional study was performed on a non-probabilistic sample of infants, who were hospitalized at birth in the NICU of the San Roque Hospital in Gonnet (La Plata, Argentina). The study included infants aged 2-24 months hospitalized at birth in NICU ≥ 72h, with birth weight (BW) ≥ 1500 g, a gestational age (GA) ≥ 34 weeks, and clinically and neurologically normal by the time of hospital discharge. Children with genetic syndromes, severe physical or neurological malformations, intraventricular hemorrhage ≥ III, retinopathy of prematurity > III, infection of central nervous system (CNS), severe hyperbilirubinemia, seizures, and/or perinatal asphyxia were excluded and referred to the Neurology Department for their clinical follow-up. Procedure The sample size was estimated on 10% of prevalence rate of developmental disorders in children under 2 years old (Lejarraga et al. 2005), with a reliability of 95% and an accuracy of 5% (n = 100 patients). After reviewing 400 clinical records from the NICU of the hospital, 189 of them were selected according to the inclusion criteria. We eventually contacted the parents of 162 children, who agreed to participate in the study. They were given an appointment to conduct an interview. Measures Psychomotor development was assessed using the Prueba Nacional de Pesquisa (PRUNAPE), an Argentinian validated test (sensitivity 80%, specificity 93%, PPV 94%) for the screening of language (LG), fine and gross motor skills (FM and GM), and personal-social (PS) development in children up to 6 years old (Lejarraga et al. 2008). The PRUNAPE was the first screening test constructed on a sample of 3000 Argentinian children from all over the country. The Argentinian Society of Pediatrics (SAP) recommended its use in children with or without risk factors. Although it is not commonly used in premature babies, it can be applied by correcting the GA. The test comprises a set of 79 standardized milestones plotted in horizontal bars that represent the percentiles of the age at which the landmarks are achieved. To apply the test, the chronological or corrected age (if GA < 40 weeks) is calculated. On the graph, a vertical line is drawn crossing the chronological age (x axis) defining two types of milestones: type A, whose 90th percentile is to the left side of the age line and type B, which is crossed by the age line between the 75th and 90th percentiles (Fig. 1). 
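The age correction and the pass/fail rule described above can be summarized in a short sketch (Python; the function and variable names are illustrative and are not part of the PRUNAPE materials):

    from datetime import date

    def corrected_age_months(birth: date, visit: date, ga_weeks: float) -> float:
        """Chronological age in months, corrected for prematurity when GA < 40 weeks."""
        chronological_days = (visit - birth).days
        correction_days = max(0.0, (40.0 - ga_weeks) * 7.0)
        return (chronological_days - correction_days) / 30.44  # average month length

    def prunape_result(failed_type_a: int, failed_type_b: int, collaborated: bool = True) -> str:
        """Fail = at least one type-A milestone failed, or at least two type-B milestones failed."""
        if not collaborated:
            return "refuse to collaborate"
        if failed_type_a >= 1 or failed_type_b >= 2:
            return "fail"
        return "pass"

    # Example: infant born at 35 weeks GA, evaluated at about 6 months of chronological age
    print(corrected_age_months(date(2023, 1, 10), date(2023, 7, 12), ga_weeks=35))
    print(prunape_result(failed_type_a=0, failed_type_b=1))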
Compliance with type A milestones is more important than type B, since the former were already reached by 90% of children of that particular age. Three outcomes are possible: pass, fail, or refuse to collaborate (RC). A child fails the test when he/she fails to perform either one type-A milestone or two type-B milestones. This result may indicate a developmental disorder, and it is necessary to retest 15 days later. If the same result persists, specific follow-up and diagnosis are required. In addition to neurodevelopment, weight, length, and cephalic perimeter were measured. Information about birth date, GA, weight, length, and cephalic perimeter at birth was collected from each Personal Child Health Record. The remaining independent variables were obtained through a socio-environmental questionnaire administered to parents. These variables were grouped around four dimensions or factors, the first three related to pregnancy, birth, and the postnatal period (Table 1): (1) gestational: maternal variables related to pregnancy; (2) perinatal: variables related to delivery and stay in the NICU; (3) postnatal: including pathologies after hospital discharge, early stimulation, and breastfeeding pattern; and (4) socio-environmental: including socio-economic and environmental characteristics related to parental education and occupation, the child's siblings, and housing conditions (public services and neighborhood characteristics). [Fig. 1 caption fragment: "...to the x axis at the child's age, two types of milestones are defined: (a) those whose 90th percentile is to the left of the line, that is, 10% of the population did not achieve that milestone, and (b) those crossed by the line between the 75th and 90th percentiles, i.e., 25% of children of that age have not reached the milestone yet. Source: adapted from Lejarraga et al. (2005)."] Data Analyses Associations between neurodevelopmental disorders and exposure factors 1-4 were explored by chi-square and Fisher's tests. A logistic regression model allowed us to establish the relationship between our binary outcome variables (i.e., passing/failing the global test, and each one of the neurodevelopmental areas) and the explanatory variables. Results were expressed as odds ratios (ORs) with 95% confidence intervals. Data processing was done using the statistical program R version 3.1. Results The sample comprised 95 boys and 67 girls, with a mean age of 9.1 ± 6 months, median GA 37 weeks (range 34-41 weeks), BW 2613 ± 723 g, cephalic perimeter 33.3 ± 2.2 cm, and birth length 47.5 ± 3.5 cm. Half of them were late preterm (37 > GA ≥ 34) and more than half had low BW (2500 g > BW ≥ 1500 g). Forty-four percent of them were born by cesarean and remained approximately 12 days in the NICU (range 3-38 days). PRUNAPE Five percent of infants refused to collaborate, 61% passed, and 34% failed the test. Among those who failed the PRUNAPE, 61% failed in one area, 25% in two areas, and 14% in three or four areas. The most affected areas were GM skills (14.25%) and LG (11.7%), followed by FM skills (7.4%) and PS development (4.3%) (Fig. 2). No significant effects were found related to anthropometric characteristics at birth or current dimensions in either sex. Gestational Factors Nearly 12% of mothers were older than 35 years and 11% had been adolescent mothers. Twenty percent of them had smoked during pregnancy, 3% had drunk alcohol, and 2% had consumed illegal drugs such as marijuana or cocaine. Although smoking did not affect the global test result, failure in LG was 3.5 times higher in children of smoking mothers.
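As an illustration of how odds ratios of this kind and their 95% confidence intervals can be obtained from a 2x2 table (the counts below are hypothetical and are not the study data; the authors performed the analysis in R, but the same computation is sketched here in Python):

    import numpy as np
    from scipy.stats import fisher_exact

    # Hypothetical 2x2 table: rows = exposed (maternal smoking) / not exposed,
    # columns = failed LG area / passed LG area.
    table = np.array([[8, 24],
                      [12, 118]])

    odds_ratio, p_value = fisher_exact(table)

    # Approximate 95% CI on the OR (Woolf log method)
    a, b = table[0]
    c, d = table[1]
    se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = np.log((a * d) / (b * c))
    ci_low = np.exp(log_or - 1.96 * se_log_or)
    ci_high = np.exp(log_or + 1.96 * se_log_or)

    print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), Fisher p = {p_value:.3f}")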
Drug consumption affected significantly the global test result with an odd 4.8 times higher in children of drug users. Additionally, drug users' children failed 13.0 times more in FM and 10.6 times more in GM. Alcohol consumption was also significantly associated with higher test failure. Children whose mothers drank some alcohol during pregnancy failed 3.2 times more than those whose mothers did not drink. Alcohol affected particularly LG and GM skills. In LG, failure was 22.6 times higher and in GM 16.2 higher (Table 2). Among the maternal pathologies, TORCH reached 10.5% and chronic diseases 33%, with hypertension as the most frequent. No significant association was found between maternal pathologies and the test outcome. Perinatal Factors Children born by cesarean section failed the test two times more than those born by vaginal delivery (p = 0.049; OR 2.2). Nearly 15% of children had low Apgar score (< 7) 5 min after birth and 10% showed pathologies of metabolic or endocrinological etiology: diabetic mothers, congenital hypothyroidism, or neonatal hypoglycemia. Almost 40% of the MRN stayed at NICU due to respiratory causes such as ventilatory distress or meconium syndrome (MAS), and lower respiratory diseases, among the most important. At NICU, 29% of them had sepsis and received antibiotic treatment. Other pathologies were malformations, including cleft lip-alveolus palate (8.6%), and abnormal results for otoacoustic emissions and brain CT scans (25.3%). These children failed the test 2.7 more than those with normal studies (p = 0.001). Numbers of days in NICU, mechanic ventilation (MV), or light therapy requirements were not significantly different between children who passed or failed the test. However, failure in FM skills was 4.2 higher in children that required MV. Postnatal Factors Bronchiolitis, gastroenteritis, and pneumonias were the most frequent pathologies after hospital discharge (43%), but none of these affected significantly the test outcome. There was also no significant association between the result of the test and early stimulation (10%), exclusive breastfeeding (7%), and ferrous sulfate or vitamin supplementation (7%). Socio-Environmental Factors The parents were around 27 and 30 years old (mother and father, respectively), and their families consisted of 5 members (2 children) on average. Seventytwo percent of the fathers had not completed high school and 36% were underemployed or unemployed. Similarly, 66% of mothers had not completed high school and 85% did not work. There were no significant differences between the test result and parental age, education, or occupational status. However, children without siblings failed the test significantly less than those who had (p = 0.041; OR 2.0). Most families lived in masonry houses (74%) with electricity (100%) and drinking water (80%), but less than half had sewage (40%) and around 25% resided near open dumps, contaminated water sources or polluting factories. However, none of these variables affected the test result. Discussion Over a third of studied children failed PRUNAPE, exceeding the prevalences obtained in Argentina for healthy age peers and low-income children (Garibotti et al. 2013;Lejarraga et al. 2008Lejarraga et al. , 2018Rebollo et al. 2008). However, the results obtained in this study were more similar to other detected in preterm infants of lower GA and/or very low birth weight (Allen 2008;Jarjour 2015). 
Gross motor skills and LG were the areas with the most failures, unlike FM and PS skills, probably because of their greater complexity. The most relevant factors associated with these alterations are discussed below. Gestational Factors The relationship between maternal addictions and neurodevelopment has been the subject of multiple studies. In this study, it was found that maternal drug consumption affected both FM and GM skills. Research in animals and humans suggests that drugs act as intrauterine stressors, modifying the expression of genes involved in neuroendocrine, autonomic, and immune system functions. While these epigenetic changes might be adaptive in the short term, they could program these systems and end up being non-adaptive in the long term (Vassoler et al. 2014). Prenatal exposure to cocaine may cause specific problems of neurodevelopment and learning in primary school (Kundakovic and Jaric 2017). Neurobehavioral disorders (such as irritability or excitability), sleeping problems, and transitory neurological alterations such as tremor, hypertonicity, and extension postures have been described in infants of drug users (Conradt et al. 2018; Gkioka et al. 2016; Neri et al. 2015). Maternal alcohol consumption was also associated with greater failure of the PRUNAPE. It is well known that alcohol alters the migration of fetal neurons and glial cells. Also, children with alcoholic mothers may develop microcephaly, hydrocephalus, cerebral dysgenesis, neuroglial heterotopias, and anomalies in the corpus callosum, ventricles, and cerebellum (Del Campo and Jones 2017). In agreement with our results, a longitudinal study found a linear association between alcohol consumption and mental, sensory, and motor development in young children. Problems in behavior and the social area were found to be significantly associated with different levels of prenatal alcohol exposure (Williams Brown et al. 2010). Language disorders were significantly more frequent in children whose mothers had smoked tobacco during pregnancy, compared to children of non-smoking mothers. The mechanism of action of tobacco can be twofold, either through carbon monoxide, which causes inactivation of hemoglobin and reduction of placental flow, or through nicotine, which produces neuronal damage by constriction of uterine blood vessels. In addition, cigarette smoke contains tar, toxic gases, and cyanide that also affect the developing brain (Polanska et al. 2017). The relationship found between smoking and language disorders has been described in a follow-up of children exposed to maternal smoking, who showed lower cognitive scores (Fried and Watkinson 1990). The study by Law et al. (2003) concluded that newborns of smoking mothers were more excitable, hypertonic, showed more signs of abstinence in the CNS, gastrointestinal system, and visual area, and had trouble staying alert. However, in this study, we did not find motor alterations in such children.
This procedure is one of the main factors responsible for high prevalence of late preterm infants (Ceriani Cernadas 2015), although this figure varies. In Argentina and other Latin American countries, it reaches 30% of births and up to 50% in private healthcare centers (Villar et al. 2006). There were several causes for cesarean section in the study sample, involved both maternal and fetal risk. In the last case, it might have a secondary impact on test failure. In fact, cesarean section was associated with both a general failure in the test and a failure in FM skills. However, the effects of cesarean section on human psychomotor development have barely been explored (Kapellou 2011). In animal models, alterations in spatial memory and behavior have been observed, as well as alterations in dendritic morphology and different structures of the CNS (Juárez et al. 2017;Simon-Areces et al. 2012), suggesting that similar processes could occur in the fetal human brain. Our results support previous studies that reported adverse effects of cesarean section on LG and FM development in apparently healthy children (Deoni et al. 2019;Fraile Sánchez 2015), as well as learning difficulties, dyslexia, dyspraxia, and attention deficit disorder (MacKay et al. 2010). However, the evidence is still limited and requires additional monitoring and control of biases and confounding factors. There were no differences between children with and without respiratory pathologies, but MV was associated with LG failures, suggesting the relevance of severe respiratory pathology. The main effects of MV on the CNS are related to alterations in cerebral perfusion and higher intracranial pressure. Either of the two situations can produce neuronal damage and therefore developmental disorders (Tsai et al. 2014). Children with pathological diagnoses in otoacoustic emissions, brain scans, and visual and auditory potentials did not pass the test. The alterations in pathological ultrasounds were intraventricular hemorrhage (IVH) grade I and II and mild ventricular dilatations. The histopathology of IVH includes the destruction of the germinal matrix. Its magnitude is directly related to the extent and degree of bleeding, and the impact on development can be immediate but also in the long term (Bolisetty et al. 2014). It has been reported that severe neonatal abnormalities are predictive factors for development at school age in children with pathological brain ultrasound (Soares et al. 2017). Narberhaus et al. (2008) found that adolescents with a history of prematurity and neonatal intraventricular hemorrhage (IVH) had a lower IQ and specific dysfunctions in learning and verbal memory, not being itself attributable to prematurity but to the presence of IVH. Children who failed otoacoustic emissions (OEs) and hearing potentials (HPs) were possibly considered as hearing impaired and had greater failure in PRUNAPE than those with normal EO and HP. In accordance, the effects of hearing loss on the so-called working memory or short-term memory, as well as on reading and academic skills at later ages had been described (Fagan et al. 2007). The positive effect of early stimulation on neurodevelopment is widely known (Brazelton 2018;Cioni et al. 2016). In contrast, early stimulation did not improve the development of the children evaluated in this study, probably because of little parental assistance to early stimulation programs-less than 10%. Those who assisted stimulation probably perceived certain level of delay in their children. 
This low attendance rate could be related to the lack of association found in the study. Socio-Environmental Factors It has been widely documented that socioeconomic and environmental conditions have a marked influence on neurodevelopment (Desplats 2015; Rauh and Margolis 2016). In addition, it has been suggested that the effects of the family environment in which the child develops appear to be weak in children under 12 months but become stronger with age (McDonald et al. 2018). The studied population belonged to low-income suburban groups with a high rate of school dropout and low parental education and occupational status. These characteristics per se were not related to the outcome of PRUNAPE, either because of the homogeneous environmental characteristics or because most children were less than 12 months old. The presence of siblings was a relevant variable with adverse effects on the result of the test. Two hypotheses could explain this phenomenon. Blake (1981) proposed the "dilution model" based on family resources. Money, time, and leisure contribute greatly to children's cognitive development and educational opportunities. When these resources are limited, firstborn children usually have greater access to them than subsequent ones. On the other hand, Zajonc (2001) proposed the "confluence model" centered on the intellectual environment of the family. He argued that only firstborn children have the full attention of their parents and are more exposed to adult language, while children who are born later experience the less mature and infantile speech of their older siblings. This aspect of the theory would help explain why firstborns tend to get better scores on verbal ability tests. That is why, in families with more adults and fewer children, parents have more time and energy to create a stimulating environment for children. Accordingly, some studies in low-income Latin American children (Schonhaut et al. 2005; Urzúa et al. 2010) found an inverse relationship between motor performance and the number of siblings. In our study, we found that the number of siblings negatively influenced the four areas of development, highlighting as well the relevance of this social variable in the neurological development of low-income children. In summary, the prevalence of neurodevelopmental disorders in MRN is similar to that described in preterm infants born at less than 32 weeks of gestation with a birth weight of less than 1500 g. Gross motor skills were the most affected, followed by LG, FM skills, and the PS area. Exposure to drugs and alcohol during pregnancy was significantly associated with developmental disorders, showing specific associations between drug use and FM and GM skills and between alcohol and GM skills, while maternal smoking was associated with LG development. Among the perinatal factors, cesarean delivery and pathological studies were associated with overall failure of PRUNAPE, and MV was associated with FM disorders. The presence of siblings was a relevant variable with adverse effects on the result of the test. Limitations and Future Research Directions Finally, since the patients were all recruited from a public hospital, our results cannot be generalized to all MRN children but only to those who attend the public health system (almost 50% of the pediatric population). This is a major limitation of the current study, and future research should include children of higher socioeconomic status as well. 
Additionally, follow-up studies are required to understand the trajectories of MRN and to verify the correlation of the present results with later neurodevelopmental and behavioral disorders. Vericat, A., & Orden, A. B. (2017). Riesgo neurológico en el niño de mediano riesgo neonatal. Acta Pediátrica de México, 38(4), 255-266. Villar, J., Valladares, E., Wojdyla, D., Zavaleta, N., Carroli, G., Velazco, A., et al. (2006).
Taxonomic Identity Resolution of Highly Phylogenetically Related Strains and Selection of Phylogenetic Markers by Using Genome-Scale Methods: The Bacillus pumilus Group Case Bacillus pumilus group strains have been studied due their agronomic, biotechnological or pharmaceutical potential. Classifying strains of this taxonomic group at species level is a challenging procedure since it is composed of seven species that share among them over 99.5% of 16S rRNA gene identity. In this study, first, a whole-genome in silico approach was used to accurately demarcate B. pumilus group strains, as a case of highly phylogenetically related taxa, at the species level. In order to achieve that and consequently to validate or correct taxonomic identities of genomes in public databases, an average nucleotide identity correlation, a core-based phylogenomic and a gene function repertory analyses were performed. Eventually, more than 50% such genomes were found to be misclassified. Hierarchical clustering of gene functional repertoires was also used to infer ecotypes among B. pumilus group species. Furthermore, for the first time the machine-learning algorithm Random Forest was used to rank genes in order of their importance for species classification. We found that ybbP, a gene involved in the synthesis of cyclic di-AMP, was the most important gene for accurately predicting species identity among B. pumilus group strains. Finally, principal component analysis was used to classify strains based on the distances between their ybbP genes. The methodologies described could be utilized more broadly to identify other highly phylogenetically related species in metagenomic or epidemiological assessments. Currently, whole genome sequences are obtained in a faster, cheaper, and more reliable way than was possible previously and can be accessed via public databases [13]. Concomitantly, bioinformatics tools were developed to use these data in an attempt to circumscribe bacterial species. These include the in silico DNA-DNA hybridization H (is-DDH), average nucleotide identity (ANI) among shared genes, tetranucleotide frequency correlation coefficients, and multilocus sequence analysis (MLSA) using the core genome of a genus [14,15]. The common characteristic of these genome-scale techniques is that they relay the confidence of the genome sequence assignment used as reference. Unfortunately, to upload a genome sequence, rigorous quality control regarding its taxonomic identity is not required. While there are well-curated genomic database [16], many genomes deposited in public databases are misnamed, mainly because of the common practice of identifying strains using 16S rRNA gene sequence data alone [17]. In this study, we used information available from databases to resolve the identity of B. pumilus group strains at a species level. In order to attempt this, we first determined the identity of available genome sequences using ANI correlation and core-based phylogenomic analyses. In addition, we performed a hierarchical cluster analysis based on gene function repertoires. Moreover, the Random Forest (RF) algorithm was used to rank genes based on their performance as phylogenetic markers, and principal component analysis (PCA) was conducted to accurately predict species identities by using genetic distances for the most important genes. Nucleotide sequence data All genomes used in this work are listed in Table 1 and S1 Table. For the construction of the pipelines, we included all available genome sequences from the B. 
pumilus group (accessed January 2015). For comparative proposes, genomic data from B. amyloliquefaciens subsp. plantarum FZB42(T) and B. subtilis 168(T) were also included. ANI calculation and correlation analysis ANI values were calculated as described by Repizo et al. [18] by using the JSpecies software with the BLAST algorithm [14]. The Pearson correlation matrix was conducted using the built in R package "stats" [19], and the correlation plot was constructed and ordered by hierarchical clustering using the R package "corrplot" [20]. In silico DNA-DNA hybridization calculation Estimates of is-DDH were made using the Genome BLAST Distance Phylogeny (GBDP) 2.0 Web server (http://ggdc.dsmz.de/distcalc2.php), and whole sequence length formulae d 0 and d 6 are described in Meier-Kolthoff et al. [15]. Phylogenomic tree construction Orthologous genes were assigned using all 4175 CDS from B. subtilis 168(T) as queries for bidirectional best-hit BLAST searches [21] against the CDS of all bacterial genomes under study (Table 1) and an E-value of 1E -30 . Orthologous genes present in all microorganisms (BLAST defined core genes) were individually aligned using ClustalW2 [22], and concatenated using the Perl script catfasta2phyml.pl (http://www.abc.se/~nylander/catfasta2phyml/). The alignment was trimmed using GBlock 0.91b [23] and used to infer the evolutionary history of the strains with the Randomized Axelerated Maximum Likelihood algorithm (RAxML [24]), and the GTRGAMMAX model. This model was selected using jModelTest 2 software [25]. Reliability of the inferred tree was tested by bootstrapping with 1000 replicates. Hierarchical clustering and dendrogram comparison Biological functions of proteins were inferred by correlation with orthologous group assignment using the OrthoMCL software [26] and an E-value of 1E -5 . In the case that a particular species had more than one protein from the same group of orthologs, only the protein with the lower E-value was considered for the cluster analysis. In the case that OrthoMCL did not assign an orthologous group to a particular protein, its function was correlated from its best matching OrthoMCL-DB protein. Presence or absence of particular biological functions in the microorganisms were used as a binary scoring method (function present in a given strain = 1, absent = 0) and analyzed by average hierarchical clustering implemented using the R package "pvcluster" [27]. Distance measurements were calculated using the Manhattan distance function. Phylogenomic and functional dendrograms were compared and visualized with the R package "dendextend" [28]. Training and evaluation of decision tree forests and determination of gene importance for bacteria classification For the construction of decision tree forests, the RF algorithm was used. Distances between BLAST-defined core genes were calculated using the R package "ape" [29] and used as variables. The classes (or outputs) used were the suggested names of the species resulting from the genomic, phylogenomic and functional cluster analysis (following the pipeline described in Fig 1). Eleven strains were arbitrarily selected and used to train each forest (Table 1). For this, 100000 classification trees were constructed with a seed value of 12345. The importance of the variables was computed using internal out-of-bag estimates as described by Breiman [30]. The 13 strains from the B. 
pumilus group that had not been used to train the forests were used as a test set (Table 1) to construct a confusion table and calculate its misclassification rate. Clustering and outclass strain detection through PCA using genetic distances of most important genes For the PCA, the genetic distances of ybbP of the strains under study (listed in Table 1 and S1 Table) were computed with the R package "ape" [29]. The PCA was conducted using the R built-in package "stats" [19], and distances were used as variables. Principal component 1 (PC1) vs. principal component 2 (PC2), and 95% confidence interval ellipses for each class were plotted with the R package "ggbiplot" [31]. Results and Discussion Circumscription of Bacillus pumilus group strains in species using whole-genome data To resolve the taxonomic identity of strains of the Bacillus pumilus group, a pipeline to circumscribe them at species level was employed. This pipeline integrated genomic, phylogenomic and functional information (Fig 1). First, ANI values of any two genomes among B. pumilus group strains were calculated. As the ANI cut-off value for bacterial species demarcation is not precisely established, we performed a correlation analysis to cluster related taxa ( Fig 1A). We compared the information obtained using these analyses with the evolutionary history of the strains that was inferred using an MLSA analysis ( Fig 1B). To reduce the effect of differences in evolution rates, or the presence of horizontally acquired genes, only core genes were used in the analysis. Additionally, with the aim of filtering out horizontally acquired genes post-speciation, we also defined the gene core from strains of B. subtilis 168(T) and B. amyloliquefaciens FZB42(T). Finally, we performed hierarchical cluster analysis based on the codified function repertoires of microorganisms ( Fig 1C). It should be highlighted that assignment of proteins or genes to given orthologous groups is a critical but challenging procedure [32]. For this assignment, we used the OrthoMCL algorithm that has been shown to accurately predict protein function, and the OrthoMCL-DB database that contains 1398546 proteins and 150 genomes including eukaryotes, archaea and prokaryotes ( [33], http://www.orthomcl.org). In this pipeline, coherence between MLSA and functional dendrograms was used to reinforce a given bacterial species circumscription. Different topologies were used to recognize ecologically distinct strains of a given monophyletic group (Fig 1). The ANI approach. The ANI analysis performed as described in Materials and methods, and depicted in Fig 2 shows that B. pumilus group strains cluster in four different sub-groups. One of these groups is composed of a single strain, B. xiamenensis HYC-10(T), which shared less than 91% ANI with the other strains (S2 Table). We also observed that two strains of B. altitudinis (the type strain 41KF2b(T) and B-388) cluster together with five strains assigned as B. pumilus (B4133, INR7, MTCC B6033, BA06 and S-1), B. aerophilus (C772) and B. stratosphericus (LAMA 585) (Fig 2). These strains shared more than 98% ANI with each other and less than 90% with members of other clusters (S2 Table). A third cluster was also found composed of six and three strains assigned as B. pumilus (B4129, WP8, B4134, B4107, and CCMA-560) and B. safensis (the type strain FO-36b(T), S9, and VK), respectively (Fig 2). All cluster III members shared more than 96% ANI and less than 93% with other clusters. 
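To make the correlation-clustering step of the ANI approach concrete, the following minimal R sketch reproduces the kind of analysis described in the Methods (Pearson correlation of pairwise ANI values, ordered by hierarchical clustering with the "corrplot" package). The input file name and object names are hypothetical placeholders; actual ANI values would come from JSpecies output.

library(corrplot)

# Hypothetical symmetric matrix of pairwise ANI values (one row/column per
# genome), e.g. assembled from JSpecies output.
ani <- as.matrix(read.csv("ani_matrix.csv", row.names = 1))

# Pearson correlation between the ANI profiles of every pair of genomes.
ani_cor <- cor(ani, method = "pearson")

# Correlation plot ordered by hierarchical clustering, so that genomes of the
# same candidate species form visible blocks, as in Fig 2.
corrplot(ani_cor, order = "hclust", hclust.method = "average", tl.cex = 0.6)

Ordering the plot by hierarchical clustering is what produces the block structure from which the sub-groups described here can be read off.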
The last cluster consisted of four strains from the B. pumilus species (the type strain ATCC 7061(T), 7P, SAFR-032, and B4127) (Fig 2). However, the ANI values shared between these strains were close to the ±94% considered as the ANI boundary for the taxonomic circumscription of prokaryotic species [34] (S2 Table). Hence, is-DDH values were computed to evaluate whether cluster IV circumscribes strains from the same species. For this, DDH were predicted using the GBDP web tool and whole genome formulae, obtaining values of over 87%, which were larger than the 70% generally assumed to be the cut-off for species demarcation (S3 Table) [15]. The phylogenomic approach. One hundred and nine conserved genes (listed in S4 Table) were found with a reciprocal best-hit BLAST search, using all CDS of B. subtilis 168(T) as a search query. Our approach was based on the assumption that these genes belonged to the Bacillus genera core, were not transferred horizontally post-speciation, and have evolved concomitantly following a similar topology to the species under analysis. However, the existence of such non-transferable genes or even the concept of core genes is under discussion [35]. Nevertheless, the number of core genes found were very similar to the current 44 putative core genes identified for Eubacteria [35]. From a reliability perspective, this is more than the 20 genes proposed to be sufficient to provide high-confidence phylogenetic reconstruction [36]. Remarkably, the phylogenomic tree constructed on the basis of the alignment of these 109 core genes showed similar clustering to B. pumilus group strains obtained based on ANI values (Fig 3, left tree). The functional repertoire approach. An ecotype is defined as a genetically cohesive group that shares genetic adaptations to a particular set of habitats, resources, and conditions [17,37]. The ecotype concept has been recently proposed as rational basis for demarcating The diagram describes the informatics tools used and the pipeline integrating genomic (A), phylogenomic (B) and functional (C) approaches for bacteria circumscription. A) ANI approach. ANI values of any two genomes among strains under study were calculated and then used to perform a correlation analysis. B) Phylogenomic approach. Core genes were searched using BLAST in all bacterial genomes under study. Orthologous genes were individually aligned, concatenated, and trimmed. Finally, the best substitution model was selected, and the evolutionary history inferred. C) Encoded function repertoires approach. The functions of all codified protein analyzed were assigned, and the presence or absence of particular biological functions in each of the microorganisms was determined. Finally, this binary information was used to perform a hierarchical cluster analysis. Similarities or differences between phylogenomic (B) and functional (C) dendrograms were used to define ecologically distinct strains, or reinforce a species definition. When necessary, complementary analyses like is-DDH were performed. doi:10.1371/journal.pone.0163098.g001 bacterial taxa [17,34,38]. As different ecotypes could be identified by comparing genome content [17] the function of all CDS of strains under analysis (Table 1) were assigned and compared. 
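The functional-repertoire clustering just outlined (binary presence/absence scoring of orthologous-group functions, followed by average-linkage clustering on Manhattan distances) can be sketched in R as follows. The presence/absence matrix and its file name are hypothetical, and the package cited in the Methods as "pvcluster" is assumed here to correspond to the CRAN package "pvclust".

library(pvclust)

# Hypothetical 0/1 matrix: rows = orthologous-group functions,
# columns = strains (1 = function encoded by that strain).
func_presence <- as.matrix(read.csv("function_presence.csv", row.names = 1))

# pvclust clusters the columns (here, the strains); average linkage on
# Manhattan distances matches the settings reported in the Methods.
fit <- pvclust(func_presence, method.hclust = "average",
               method.dist = "manhattan", nboot = 1000)

plot(fit)                   # functional dendrogram with AU/BP support values
pvrect(fit, alpha = 0.95)   # highlight clusters supported at the 95% level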
As a result, 3128 different functions were identified for the 88223 CDS analyzed, 2731 of which were associated with an OrthoMCL-DB orthologous group and the 397 remaining were assigned on the basis of the best hit to OrthoMCL-DB proteins without a defined orthologous group (Fig 4). Amongst the B. pumilus group strains, 1927 functions were found that represent the core functions of the phylogenetic group. When B. subtilis 168(T) and B. amyloliquefaciens FZB42(T) were included in the analysis, 1724 common functions were found (Fig 4). This value was significantly higher than the 109 core genes found using a BLAST reciprocal best-hit search. This discrepancy highlights the dissimilar criteria used by both methodologies (see Materials and methods). Moreover, the hierarchical cluster analysis constructed based on the functional repertoires among the 26 strains were mainly determined by the 1907 non-core functions, rather than by the common functions. Remarkably, in Fig 3 it is shown that both cluster analyses resulted in very similar topologies. Interestingly, strains SAFR-032 and LAMA 585 did not cluster with Cluster II and IV, respectively (right tree in Fig 3). SAFR-032 seemed to be close to the rest of Cluster IV strains, and a closer association could not be observed because of this cluster being less conserved, as the ANI and phylogenetic approach suggested. On the other hand, LAMA 585 was relatively distant from Cluster IV. This discrepancy is the [14] and used for a Pearson correlation matrix construction conducted using R [19]. The plot shows the correlation constructed and ordered by hierarchical clustering using the R package "corrplot" [20]. The minimum percentages of ANI values between strains of a given cluster are indicated in brackets. result of the loss of 171 functions that all Cluster II strains except LAMA 585 have, and the acquisition or conservation of 28 functions that were found only in the latter. Our hypothesis is that these functions could have been lost or gained because of genome deletions or horizontal gene transfer events that allow the bacterium to adapt to a specific environment. Integration of genomic, phylogenomic and functional approaches. Inconsistencies between current species assignment and the three clustering approaches described above suggested that at least 13 out of the 24 B. pumilus group strains are currently misnamed in databases. Cluster I is composed of a single Type strain of the species B. xiamenensis and therefore there are no arguments to invalidate its assignment. Conversely, according to database information, Cluster II was integrated by at least four different species. We suggest that strain members of this cluster should be assigned as B. altitudinis since group together with the Type strain 41KF2b(T). Concordantly, recently BA06 and S-1 strains were associated and proposed to be B. altitudinis species [12]. B. invictae Bi.FFUP1T, a member of the B. pumilus group was also renamed, based on its is-DDH and ANI values, and phenotypic analysis, as a B. altitudinis strain [39]. These facts clearly indicate that this taxonomic rank will continue to evolve. Regarding LAMA 585, we suggest that this strain could be classified as a different ecotype of the B. altitudinis species. To further support this, is-DDH between LAMA 585 and other members of Cluster II were calculated using GGDC 2.0 and whole sequence length formulae [15]. Phylogenomic and functional dendrogram comparisons were performed and plotted with the R package "dendextend" [28]. 
A) Phylogenomic dendrogram. 109 BLAST core genes were individually aligned, concatenated and trimmed resulting in a final alignment containing a total of 104022 residues. The evolutionary history of the indicated strains was inferred with RAxML algorithm [24]. Reliability of the inferred tree was tested by bootstrapping with 1000 replicates. When not indicated, the bootstrap support values were 100. B) Functional dendrogram. Biological functions of proteins encoded in the genome of the indicated strain (Types in bold) were inferred using the OrthoMCL software [26] and then used as a binary score for hierarchical cluster analysis implemented with the R package "pvcluster" [27]. Values of over to 91% were obtained for LAMA 585 and any Cluster II member (S3 Table) that supports our hypothesis. Moreover, Branquinho et al. have recently proposed that the species B. aerophilus and B. stratosphericus should be rejected [40]. As our three independent approaches consistently indicated that all B. safensis strains, including the Type strain FO-36b(T) [41], were only found in Cluster III, we propose that the B. pumilus strains WP8 [5], B4134, B4107, CCMA-560 [42] and Fairview [43] should be renamed as B. safensis. Interestingly, while CCMA-560 is still designated as B. pumilus in Gen-Bank and RefSeq databases, during the preparation of this manuscript, it was named as member of the B. safensis species in a recent publication [10]. Cluster IV are composed of B. pumilus species, including the Type strain ATCC 7061, and as discussed above they appear belong to the same species. However, ANI values among these strains (S2 Table) were close to the ±94% considered as the taxonomic boundary [34], and functional cluster analysis data was not consistent with a high degree of conservation among them. Therefore, the incorporation of new genome sequences may be needed to better describe the relationships between members of this cluster in terms of ecotypes, subspecies, or genomovars. Table 1 summarizes the new species assignment proposed based on our analysis. Phylogenetic marker selection using genetic distance data Selection of alternative marker genes that provide prokaryotic species boundaries at higher resolution than 16S rRNA is a challenging but necessary task to reconstruct genealogies [44,45]. Therefore, we analyzed the 109 B. pumilus group core genes defined in the phylogenomic approach described above to rank them and select the best-performing markers to circumscribe species in this taxonomic group. For this purpose, we used the RF algorithm as it is RF is a machine-learning algorithm that generates unpruned decision trees using a random subset of input variables. To classify a new object, each tree uses its input data to made a prediction (or "vote"), and the forest chooses the classification with the most votes [30]. In this study, we built a species classifier using RF and genetic distance as input data. We chose this algorithm since not only could it be used to measure variable importance, but it also runs fast and efficiently on large databases and does not require much fine-tuning of parameters, so the methodology is easily accessible for many users [30]. To create the variables to construct the forest, we first calculated genetic distances between conserved genes (G) and their homologs in all strains or microorganisms (O) under study ( Fig 5). This procedure generated a number of variables (V) equal to the total number of genes under analysis (or G x O). 
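A compact R sketch of this variable construction and of the forest training described above is given below. The per-gene alignment files, the named factor of training labels, and the choice of the "randomForest" package are illustrative assumptions; the study itself only specifies the RF algorithm, 100000 trees, a seed of 12345, and out-of-bag importance estimates.

library(ape)
library(randomForest)

# Illustrative assumption: one FASTA alignment per core gene in "core_genes/",
# and "species" is a named factor giving the proposed species of the 11
# training strains (names = the strain identifiers used in the alignments).
gene_files <- list.files("core_genes", pattern = "\\.fas$", full.names = TRUE)

# For each gene, the pairwise genetic distances between all strains form one
# block of variables; concatenating the blocks gives G x O columns per strain
# (109 genes x 26 strains = 2834 variables in this study).
blocks <- lapply(gene_files, function(f) {
  aln <- read.dna(f, format = "fasta")
  d <- as.matrix(dist.dna(aln, model = "raw"))
  d <- d[order(rownames(d)), order(colnames(d))]        # consistent strain order
  colnames(d) <- paste0(basename(f), "_", colnames(d))  # unique variable names
  d
})
X <- do.call(cbind, blocks)

set.seed(12345)
train <- rownames(X) %in% names(species)
rf <- randomForest(x = X[train, ], y = species[rownames(X)[train]],
                   ntree = 100000, importance = TRUE)

# Out-of-bag importance ranking of the variables (and hence of the genes).
head(sort(importance(rf)[, "MeanDecreaseAccuracy"], decreasing = TRUE), 10)

# Species prediction for the remaining (test) strains.
predict(rf, X[!train, ])

The same distance blocks can then feed the PCA step used later in the pipeline (for example, prcomp on the ybbP block) to flag strains falling outside the 95% confidence ellipses.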
As we used the 109 BLAST-defined core genes of B. pumilus group as input data, the number of variables defined was 2834 (or 109 G x 26 O). Distances between orthologs were calculated using the R package "ape" [29] and then used to compute their means, variances, and maximums. While it was proposed that RF avoids overfitting [30], more recently it was indicated that special attention is required for some data distributions such as those with small sample sizes [46]. Therefore, we decided to use a subset of our input data to train the forest, and another independent subset to test its error rate. Microorganisms included in each subset are indicated in Table 1. The forest of decision trees was finally constructed considering all 2834 V, and as a response to the proposed species designation for the 11 strains of the training dataset. To evaluate the forest performance, the species of the 13 strains of the test dataset was predicted with the RF classifier and used to calculate the misclassification rate. We found that the identities of the strains were predicted accurately in all cases tested. The importance of each gene was obtained by using the internal out-of-bag estimation of the RF algorithm [30]. In Table 2, the 10 most important genes are indicated, and in S4 Table a complete ranked list of all 109 core genes of the B. pumilus group are indicated. Interestingly, ribosomal protein sequences (RPS) that were proposed to be used for resolving the whole bacterial domain at subspecies level [47] had the lowest importance index for the classification of B. pumilus group species. Moreover, RPS genes are the most conserved sequences, as indicated by the means of their genetic distances (S4 Table). In Fig 6A it is shown that there is a positive correlation between the mean of the genetic distance and the importance of the gene for the classification. Genes with a more recent evolutionary history that were implied in regulation, transport or sporulation functions in Firmicutes were more important for the classification of B. pumilus group strains. For example, the most important gene listed in Table 2, ybbP (more Fig 5. Pipeline to circumscribe bacteria as well as to rank genes base on their importance. First, gen distances among all individually aligned core genes are calculated. Then, a forest of decision trees is constructed considering all variables and as classes the suggested species names that resulted from the genomic, phylogenomic and functional cluster analysis (pipeline described in Fig 1). The importance of the variables are computed using RF algorithm [30]. Finally, distances of the most important gene are used to perform a PCA to circumscribe bacteria and identify outclasses. Further analysis (base on phylogenomic, genomic, and experimental phenotypic information) have to be performed to classify those outlier strains. a. Distances of orthologs were calculated using the R package "ape" [29] and then used to compute their means, variances and maxima. b. Importance of each gene was computed using internal out-of-bag estimates as described by Breiman [30] with a forest composed by 100000 classification trees trained by the 11 strains mentioned in Table 1, and the input data from all 109 core genes. The 10 most important genes are listed. c. For error rate calculation, a new forest of 100000 classification trees was constructed for each gene and trained by the same dataset, but with distance data of the individual gene. Species identity of the 13 strains that belong to the test set ( 6. 
Ranking of genes based on their RF importance and PCA for outlier detection. A) Importance and error rate plot. Importance of each gene was computed using RF and plotted versus its gen distance mean. Symbols representing the percentage of the classification error rate are depicted. B) PCA plot. The PCA was conducted using R [19] and as variables, the distances to each of the ybbP orthologs from strains listed in Tables 1 and S4. PC1 vs. PC2 and 95% confidence interval ellipses were plotted with the R package "ggbiplot" [31]. Symbols used for strains listed in Table 1 recently called cdaA) is conserved in nearly all Firmicutes (but not all bacteria), and seems to be responsible for the synthesis of the cyclic di-AMP, an essential secondary messenger that is required for cell wall homeostasis [48]. As an attempt to evaluate the performance of each of the 109 core genes, their classification error rates were calculated. For this, 109 new forests of classification trees were constructed, one for each of the B. pumilus core genes. Each individual forest was trained by the distance data of the specific gene under analysis using the training dataset. Then, the species of the 13 strains in the test dataset were predicted using each individual classifier. The misclassification rate for each gene was estimated by comparing them with their true identities. As depicted in Fig 6A, more important, but also less conserved genes classified the strains of the test set most accurately. To resolve the identity of poorly represented species (as B. xiamenensis) or even discover species that are not represented at all in the classifier, we decided to include a PCA in our pipeline ( Fig 5). Therefore, a PCA was performed using ybbP distances to each of the 26 strains under analysis generating 26 variables. We found that strains of the same species were clustered together with 95% confidence, and that B. xiamenensis, B. subtilis 168(T), and B. amyloliquefaciens FZB42(T) were clearly identified as outliers (S1 Fig). Finally, to exemplify how the pipeline described in Fig 5 works for bacteria circumscription we used information from 15 B. pumilus group genomes that became available during the preparation of this manuscript (last accessed June 2015, S4 Table). A PCA was run including the genetic distance values of the ybbP gene of the new strains. Fig 6B shows the plot of PC1 and PC2 that resulted from this analysis, and this explained the 91.1% and 4.6% variance, respectively. We found that strains LK31, LK18, LK23, LK33, LK5, W3, and RIT380 clustered with B. altitudinis strains and consequently should be assigned as a member of this species. On the other hand, strains JPL_MERTA2, RIT372, SCAL1, and 15.1 grouped with the B. safensis species ( Fig 6B). Thus, our analysis suggested that strains SCAL1 and 15.1 should be renamed as B. safensis. It is worth mentioning that ANI, is-DDH and MLSA analyses for strains SCAL1, 15.1, LK31, LK18, W3, LK23, LK33, LK5, and RIT380 were consistent with our suggested species assignations (S5 Table and In Fig 6B it could also observed that strains LK12, LK21, LK32 and DSM 26896 were located outside of the 95% confidence ellipses defined by species with validated identity. Interestingly, DSM 26896 was the only bacterium of the B. invictae species with its genome sequence available. Hence, PCA was able to detect non-represented species that could be overlooked using RF. As strains LK12, LK21, and LK32 were named as B. 
pumilus but did not group together with strains of Cluster IV, they may belong to a different species. Noteworthy, ANI, is-DDH and MLSA analyses for LK12, LK21, and LK32 suggested that they are B. safensis strains (S5 Table and S2 Fig). The discrepancy with the PCA was due by the significance level used in the analysis. Nevertheless, it is important to note that to assign a new species identity to these strains, a comparative polyphasic analysis with reference strains that include phenotypic, genotypic, and phylogenetic approaches should be performed. Conclusions The performance of new technologies in DNA sequencing as well as their low cost has resulted in a large number of genome sequences becoming available in a short time. However, the evolution of fast, standardized, and accurate procedures to properly identify such sequences at species level has yet to be established. This challenging task is crucial since only accurate classified genome data would guaranty reliable analysis in data mining or comparative genomics. Furthermore, misnaming the source of the available sequences would generate distortion in species description. For example, if a strain was isolated from an infection event and was incorrectly identified as being a member of a particular species, it could lead to the entire species being reported as potentially unsafe. This was the circumstance in which the safety of B. pumilus species had to be reviewed by European Food Safety Authority, owing to two instances of severe sepsis in neonatal infants caused by what was presumed to be a B. pumilus strain [50]. In this study, we first proposed 13 reassignments to the B. pumilus group strains used as references. We also suggested that the existence of non-identical topologies in phylogenomic and encoded function repertoire dendrograms might contribute to the definition of ecotypes. Finally, we made use of genetic distances and RF algorithms to rank and select gene markers for the construction of a species classifier based on a PCA. This procedure could be more broadly used for the accurate and reliable identification of highly related species. Moreover, selection of specific markers could be essential when no whole-genome information is available, such as in metagenomic studies where entire genomes could not be reconstructed, or during diagnostic or screening tests in epidemiological studies where a high number of samples need to be handled. S1 Fig. Species clustered by PCA. The PCA was conducted using the R package "stats" [19] and the distances to each of the ybbP orthologs from strains listed in Table 1 were used as variables. PC1 vs. PC2 and 95% confidence interval ellipses were plotted with the R package "ggbiplot" [31]. Symbols used for the strains listed in Table 1 The evolutionary history of the indicated strains was inferred with RAxML algorithm [24]. Reliability of the inferred tree was tested by bootstrapping with 1000 replicates. Type strains are indicated in bold. (TIF) S1 Table. Proposed species names and assembly data for Strains used in PCA analysis. (XLSX) S2 Table. Percentage of ANI value between any two strains. ANI values were calculated as described by Repizo et al. [18] using the JSpecies software with BLAST algorithm [14]. Pearson correlation matrix was conducted using the R package "stats" [19], and the correlation plot was constructed and ordered by hierarchical clustering using the R package "corrplot" [20]. (XLSX) S3 Table. is-DDH analysis of LAMA 585 and Group IV strains. 
Estimates of is-DDH were made using the GBDP 2.0 Web server (http://ggdc.dsmz.de/distcalc2.php) and the whole sequence length formulae d0 and d6 described in Meier-Kolthoff et al. [15]. (XLSX) S4 Table. Statistics for the 109 core genes. a. Distances of orthologs were calculated using the R package "ape" [29] and used to calculate their means, variances and maxima. b. Importance of each gene was calculated using internal out-of-bag estimates as described by Breiman [30] with a forest composed of 100000 classification trees trained by the 11 strains mentioned in Table 1, and the input data of all 109 core genes. c. To calculate the rate of errors, a new forest of 100000 classification trees was constructed for each gene and trained by the same dataset, but with distance data for the specific gene. The species of the 13 strains mentioned as test set in Table 1 were predicted and used to calculate the misclassification rates of each gene. d. Function and locus names for each gene were obtained from the reference sequence NC_000964.3. (XLSX) S5 Table. Percentage of ANI and is-DDH values between strains with proposed new species assignations and type strains. ANI values were calculated as described by Repizo et al. [18] using the JSpecies software with the BLAST algorithm [14]. is-DDH values were estimated using the GBDP 2.0 Web server (http://ggdc.dsmz.de/distcalc2.php) and d6
Coxsackievirus B4 vertical transmission in a murine model Background Life-threatening infections with type B Coxsackieviruses (CV-B) are frequently encountered among newborns and are partly attributed to vertically-transmitted virus. Our current study investigates this alternative way of contamination by CV-B, using a mouse model. Methods Pregnant Swiss mice were intraperitoneally inoculated with CV-B4 E2 at gestational day 10(G) or 17G. Dams and offspring were monitored for mortality and morbidity, and sampled at different time-points to document the infection and explore eventual vertical transmission. Results Inoculation at day 10G induced an important rate of abortion and a decrease in the number of delivered pups per litter, whereas inoculation at day 17G was marked by preterm delivery and significant behavioral changes in dams. Only one case of spastic paralysis and one case of pancreatitis were recorded among surviving pups. Seroneutralization revealed anti-CV-B4 neutralizing antibodies in infected dams and their partial transfer to offspring. Viral genome detection by RT-PCR and viral progeny titration in several tissues (dams’ uteri, amniotic sac, amniotic fluid, placenta, umbilical cord, pancreas and heart) attested and documented CV-B4 vertical transmission to the majority of analyzed offspring. Virus detection in fetuses suggests transplacental transmission, but perinatal transmission during delivery could be also suggested. Vertically transmitted CV-B might even persist since prolonged viral RNA detection was noticed in the pancreas and heart from offspring born to dams inoculated at day 17G. Conclusion This model of CV-B4 vertical transmission in mice, in addition to allow a better understanding of CV-B infections in fetuses and newborns, constitutes a useful tool to investigate the pathogenesis of CV-B associated chronic diseases. Background Type B Coxsackieviruses (CV-B) are common encountered pathogens that, although mostly limited to asymptomatic and subclinical infections, are known for their wide tropism and for their broad spectrum of associated diseases (reviewed in [1]). Indeed, when the infection is symptomatic, it is generally localized to the gastrointestinal tract (the primary site of replication for those enteric viruses), and more rarely to the oropharynx. When virus replication persists despite the immune response, the virus reaches the blood circulation through mesenteric lymph nodes, then several target tissues such as heart, pancreas, spleen, liver, spinal cord, etc. Indeed, CV-B have been associated to several acute (meningitis, myocarditis, pancreatitis, encephalitis) and chronic diseases (chronic myocarditis, dilated cardiomyopathy, type 1 diabetes) that are often severe, even life-threatening, particularly in newborns and young children, thus constituting a serious public health problem [1][2][3]. The six CV-B serotypes (CV-B1 to 6) belong to the Enterovirus B species, from the Enterovirus genus (actually encompassing at least 271 human serotypes distributed in 7 species), of the Picornaviridae family [4,5]. They are small, non-enveloped, icosahedral, positivesense single-stranded RNA viruses. Due to their resistance in the environment, CV-B are essentially transmitted through the fecal-oral mode, and occasionally through the respiratory route [6]. The high frequency of CV-B infections among neonates however suggests a possible vertical transmission of those viruses, at least in some cases [3,7]. 
Several epidemiological, serological and virological arguments are in favor of this hypothesis. Indeed, increased levels of anti-CV-B antibodies have been found in pregnant women in association with an infection of the offspring [8,9]. The viral genome has also been detected in maternal and offspring tissues [2,9,10]. Vertical transmission of CV-B may occur either in utero (antenatally) through the transplacental way [11], or perinatally during delivery [9]. CV-B vertical transmission has been associated to an elevated risk of abortion [8,10,[12][13][14][15] and stillbirth [16,17]. In the case of live birth, vertically transmitted CV-B seem largely involved in many life-threatening diseases affecting fetuses, newborns and young infants [2,3,7,18,19]. On the basis of the presence of a viremia or the appearance of clinical symptoms, about 22% of fatal CV-B infections of the neonates, result from an intra-uterine infection [7]. Moreover, maternal CV-B infections during pregnancy would predispose offspring to the development of autoimmune diseases such as type 1 diabetes [20]. Infections with CV-B during pregnancy are however generally neglected compared to those by other pathogens such as rubella virus, Zika virus, Toxoplasma, etc.…. Considering the frequency of that mode of contamination by CV-B, the width and the severity of its consequences, CV-B vertical transmission deserves further investigation, in an attempt to develop preventive and/or therapeutic strategies. In this context, our current study aims to better explore CV-B4 vertical transmission using a mouse model. Mice All animals used in this investigation were handled in the animal facility of the Faculty of Pharmacy of Monastir, in accordance with the standards of general ethics guidelines, and maintained in specific "pathogen-free" conditions with unlimited access to food and water. Adult outbred Swiss albino mice (Pasteur Institute, Tunis) were mated (three females per male were caged together) until successful fertilization (through formation of a vaginal plug) was checked. The day the vaginal plug was observed was considered the first day of gestation (day 1G). Mice inoculation and follow-up Pregnant mice were inoculated intraperitoneally at two different time points, either at day 10 or 17 of gestation (day 10G or 17G), with 2 × 10 5 TCID 50 CV-B4 E2 units contained in 200 μl culture medium. Naïve mice served as negative controls. Pregnancy was monitored by daily weighing from day 10G until delivery. Animals were also observed for mortality, morbidity and behavioral changes. Offspring born to dams inoculated at day 10G were sacrificed, using isoflurane, at day 17G and days 0 and 5 from birth (six offspring at each time point, each three born to one different dam). By the same, those born to dams inoculated at day 17G were sacrificed (only if delivery occurred at least 2 days later (starting from day 19G)) at days 0 and 5 from birth, then later at days 21, 30, 50 and 70 (since, as suggested by the findings of Bopegamage et al., [22], inoculation at that pregnancy stage would have an effect easier to observe in older offspring). Amniotic fluids, placentas, internal organs from fetuses and newborn pups, then blood samples from the tail vein (for dams and for offspring at least 21 days old) were used for anti-CV-B4 antibody titration by seroneutralization. Dams' uteri, as well as umbilical cords, amniotic sacs, amniotic fluids and placentas were collected whenever possible (at day 17G), as key tissues in vertical transmission. 
Offspring's pancreases and hearts, spleens and small intestines were also collected. Sampled tissues were rinsed with cold PBS (to remove eventually contaminating blood) and treated for histological examination, viral RNA detection and progeny titration (spleens and small intestines were used only for histological examination). Antibody titration by seroneutralization Blood sampling being impossible in fetuses (sampled at day 17G) and newborn pups (sampled at days 0 and 5 from birth), we used the whole set of internal organs that were homogenized in an equal volume of sterile 1% penicillin/streptomycin PBS used beforehand for rinsing inside the animal after dissection. Placentas were processed separately. After 15 min of centrifugation at 900 × g, the recovered supernatants were used for antibody titration, as the amniotic fluids (two samples, each one being a pool of the amniotic fluids of the three fetuses belonging to the same litter) and the sera (obtained after centrifugation of the blood sampled from dams and offspring of 21 days and older), following a recently described procedure [24]. The neutralizing titer was defined as the reciprocal of the last dilution of sample that totally inhibited the viral cytopathic effect (CPE) as observed under an inverted microscope. Results are plotted as mean neutralizing titers ± standard deviations (SDs). Viral genome detection For each experimental condition (virus inoculation at day 10G, or 17G, and in the absence of virus inoculation), the presence of CV-B4 E2 RNA was checked at different p.i. times in the pancreas and heart of six mice (three offspring born to each of two dams) and, whenever possible (at day 17G), in placentas, amniotic fluids, and dams' uteri, according to the procedure described below. RNA extraction Washed and snap-frozen tissues were homogenized by crushing in Tri-Reagent (Sigma), and then centrifuged at 12,000 × g for 10 min at 4°C. Recovered supernatants were then subjected to total RNA extraction by the acid guanidium thiocyanate-phenol-chloroform procedure, as described by Chomczynski and Sacchi [25]. Sterile nuclease-free water and supernatant of CV-B4 E2infected HEp-2 cells, submitted to the same extraction procedure, served as negative and positive controls, respectively. Extracted RNA was then dissolved in 50 μl of nuclease-free water (Promega), quantified using the Nanodrop 2000 (UV-Vis Spectrophotometer, Thermo Scientific) and stored at −80°C until use in reverse transcription (RT)-PCR assays. cDNA synthesis was performed with about 100 ng of RNA using 0.1 μM of the anti-sense 007 primer and the M-MLV reverse transcriptase (Invitrogen), according to the manufacturer's instructions. The PCR was carried out with 3 μl of cDNA samples and 0.4 μM each primer in a total volume of 50 μl containing 2.5 U of Taq Paq5000 DNA Polymerase (Agilent technologies), 0.2 mM each dNTP and 2 mM MgCl 2 . The PCR mixture was subjected to a first denaturation step for 3 min at 94°C, followed by 30 cycles of amplification, consisting of denaturation for 20 s at 94°C, annealing for 20 s at 55°C, and extension for 30 s at 72°C, followed by a final extension step for 5 min at 72°C. All reactions were performed by using a preheated Eppendorf thermal cycler. RNA extracted from supernatant of CV-B4 E2infected HEp-2 cells was reverse transcribed, and amplified according to the same procedure described above, and served as a positive control. A negative control (no RNA) was also included in each reaction. 
Samples showing negative results were subjected to beta-actin mRNA amplification, as an internal control to ensure the integrity of extracted RNA and the absence of RT-PCR inhibitors. Semi-nested (sn)-PCR RT-PCR products showing negative results were subjected to a subsequent sn-PCR with internal primer sense 006: 5′-TCCTCCGGCCCCTGAATGCG-3′ and antisense primer 007, generating a 155 bp fragment [27]. The similar reaction mix and the same cycling program were used, except that the annealing temperature was 60°C. A positive control (DNA amplified from the RNA extract of supernatant of CV-B4 E2-infected HEp-2 cells) and a negative control (no DNA) were included in each reaction. Viral progeny titration For each experimental condition (virus inoculation at day 10G, or 17G, and in the absence of virus inoculation), infectious virus titration was performed, by the limiting dilution method, at different p.i. times (day 17G and days 0 and 5 from birth) in the pancreas and heart of six mice (three offspring born to each of two dams) and, whenever possible (at day 17G), in uteri, umbilical cords, amniotic sacs, amniotic fluids and placentas, according to the procedure described below. Briefly, snapfrozen tissues were weighed and crushed in 1% penicillin/streptomycin PBS, and then centrifuged at 12,000 × g for 10 min at 4°C. Supernatants were diluted 10-fold in MEM with 2% FCS, inoculated (100 μl) onto confluent HEp-2 cells (10 4 cells/well) in 96-well culture plates. Cultures were incubated at 37°C in a humidified atmosphere with 5% CO 2 and examined daily for CV-B4 CPE up to 7 days p.i.. Cells were then stained with crystal violet for 30 min. Finally, wells were rinsed with water and plates were examined for the highest dilution showing CPE. Viral titers were calculated according to the method of Reed and Muench [21], and expressed as mean titers (TCID 50 /mg of tissue) ± SDs. Statistical analysis Data are summarized as means ± SDs. The two-sided paired Student's t test was used to compare the mean weight, at different time points, between infected and control animals. The Wilcoxon rank-sum test was used to compare the number of pups per litter between infected and control animals, as well as neutralizing antibody titers between offspring born to dams inoculated at either day 10G or 17G. Statistical significance was defined by P-values less than 0.05. Effect of CV-B4 E2 on pregnancy outcome Pregnant mice were monitored by daily weighing from day 10G until delivery. Thus, starting from day 17G, we observed a significant decrease (p = 0.003) in the weight of dams inoculated with CV-B4 E2 at day 10G compared to negative control pregnant dams (Fig. 1a). The weight of mice inoculated at day 17G remained however, until delivery, comparable to the one of negative control mice (Fig. 1a). That decrease in the weight of dams inoculated at day 10G was associated to a high rate of abortion (53%) (Fig. 1b) and to a 37% reduction in the number of offspring per litter in the remaining cases compared to negative control dams (5.77 ± 0.93 vs. 9.25 ± 1.04, p = 0.0001, Fig. 1c). No pregnancy loss was observed among dams inoculated at day 17G and the number of offspring per litter was comparable to the one in negative control dams (8.7 ± 0.97 vs. 9.25 ± 1.04, Fig. 1c), but delivery occurred earlier since it never exceeded day 20G (days 17G, 18G, 19G and 20G in 27, 36, 27 and 10% of the cases, respectively, versus day 21G to 22G in the negative control group and in dams inoculated at day 10G) (Fig. 1a). 
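As an illustration of the Reed and Muench endpoint calculation cited in the viral titration Methods, a small R sketch follows. The helper function and the well counts are invented for demonstration only and are not taken from the study's data.

# Illustrative Reed-Muench TCID50 endpoint calculation (hypothetical counts).
# "pos" = CPE-positive wells and "n" = wells inoculated at each 10-fold dilution.
reed_muench <- function(log10_dil, pos, n) {
  cum_inf   <- rev(cumsum(rev(pos)))   # infected accumulated from the most dilute end
  cum_uninf <- cumsum(n - pos)         # uninfected accumulated from the least dilute end
  pct <- 100 * cum_inf / (cum_inf + cum_uninf)
  i <- max(which(pct >= 50))           # last dilution with >= 50% infected
  prop <- (pct[i] - 50) / (pct[i] - pct[i + 1])
  log10_dil[i] - prop * abs(log10_dil[i] - log10_dil[i + 1])
}

log10_dil <- -(1:8)                 # 10^-1 ... 10^-8 dilutions
pos <- c(8, 8, 7, 5, 2, 1, 0, 0)    # invented CPE-positive wells per dilution
n   <- rep(8, 8)                    # replicate wells per dilution

reed_muench(log10_dil, pos, n)      # log10 of the 50% endpoint dilution

The reciprocal of the endpoint dilution gives the titer per inoculated volume, which is then normalized to the amount of tissue in the homogenate to obtain TCID50/mg values of the kind reported in this study.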
In addition, as soon as delivery, 33% of dams inoculated at day 17G were characterized by an unusual behavior, even aggressivity, which respectively manifested through abandon of their litter (pups are abandoned with their placentas and their wastes and are not breastfed), and through killing and devouring their pups. Effect of CV-B4 E2 on offspring No signs of morbidity were observed among offspring from CV-B4 E2-inoculated dams, except one case of spastic paralysis in a neonate pup coming from virus inoculation at day 10G. At the microscopic level, histopathological analysis revealed a unique case of pancreatitis with fatty degeneration of acinar cells on a section of a pancreas sampled at day 30 after birth in an offspring born to a dam inoculated with CV-B4 E2 at day 17G (Fig. 2). CV-B4 E2 vertical transmission Detection of anti-CV-B4 neutralizing antibodies in virusinoculated dams and their offspring Before talking about a vertical transmission of CV-B4, it was essential to verify the infection of inoculated dams. Fig. 1 Effect of CV-B4 E2 on pregnancy outcome. a Evolution of the mean body weight of dams during pregnancy depending on whether they were inoculated or not with CV-B4 E2, and at each gestational day. (Results are representative of an experiment with n = 3 for negative control mice, n = 7 for dams inoculated with CV-B4 E2 at day 10G, and n = 5 for dams inoculated with CV-B4 E2 at day 17G). b Evolution of the mean body weight of dams inoculated at day 10G depending on whether there was pregnancy loss or not (Results are representative of an experiment with n = 4 and n = 3, respectively). c Variation of the number of offspring per litter depending on the inoculation period. (n = 9 for negative controls, n = 13 for mice inoculated at day 10G, and n = 10 for mice inoculated at day 17G) For this purpose, we searched for anti-CV-B4 neutralizing antibodies in the sera of dams inoculated either at day 10G (6 dams, 2 per time point) or day 17G (4 dams, 2 per time point) (Fig. 3a). Inoculation at day 10G resulted in important neutralizing titers that peaked by day 0 (≥1280, 11 to 12 days p.i.), then rapidly decreased thereafter (40 by day 5 after birth). Inoculation at day 17G induced low neutralizing titers (20) detectable by day 0 (only when delivery occurred at day 20G, 3 days p.i.) that increased thereafter (≥1280) at day 5 and later (day 21, data not shown). Four out of the six placentas sampled at day 17G revealed positive for anti-CV-B4 neutralizing antibodies (mean titer 226.67), whereas both pooled amniotic fluids revealed negative. Anti-CV-B4 neutralizing antibodies could also be found in offspring born to the above analyzed dams, inoculated either at day 10G or 17G (Fig. 3b). Mean neutralizing titers whilst being non-significantly higher in offspring born to dams inoculated at day 10G, where relatively low in both cases in the beginning but Fig. 2 Histopathological changes in offspring born to dams inoculated with CV-B4 E2. Pancreas, Heart, spleen and small intestine sections of six offspring (each three born to one different dam) from each of control dams and dams inoculated with CV-B4 E2, either at day 10G or 17G, were analyzed by hematoxylin/eosin staining at different p.i. times. Histopathological changes were found only in one pancreas of an offspring born to a dam inoculated at day 17G and sampled 30 days post-partum (b). Inflammatory foci and fatty degeneration of acini are indicated by little and large arrows, respectively. 
No anomalies were observed in all other analyzed sections. Representative microscopic observation of a pancreas section from a negative control offspring taken at day 30 post-partum (a). Gr ×400 Fig. 3 Kinetics of anti-CV-B4 neutralizing antibodies after virus inoculation of pregnant dams. In order to evaluate the presence and the amount of anti-CV-B4 neutralizing antibodies, seroneutralization was performed at different time-points on (a): sera from dams inoculated with CV-B4 E2 at either day 10G or 17G, (b): supernatants of homogenized internal tissues from offspring born to dams inoculated with CV-B4 E2 at either day 10G or 17G and (c): sera from offspring (at least 21 days old) born to dams inoculated with CV-B4 E2 at day 17G. Results are plotted as mean neutralizing titers ± SD, n = 2 for dams and n = 6 for offspring. The proportion of seropositive animals is indicated on each bar. Neutralizing antibodies were not detected in any sample from all negative control mice progressively increased with the increase in the proportion of seropositive animals (up to days 5 and 21 in offspring born to dams inoculated at days 10G and 17G, respectively). Mean neutralizing titers then decreased over time with the decrease in the proportion of seropositive animals, but were still measurable 90 days after birth in offspring born to dams inoculated at day 17G (not performed in those born to dams inoculated at day 10G) (Fig. 3c). No neutralizing activity could be evidenced in samples from negative control animals (dams and their offspring). Detection of CV-B4 genome in tissues from virus-inoculated dams and their offspring The presence of viral RNA was investigated in selected key tissues for vertical transmission (uterus, amniotic fluid and placenta sampled at day 17G), as well as in CV-B4 privileged targets, namely pancreas and heart. Table 1 summarizes the results of RT-PCR and sn-RT-PCR in those tissues sampled at different p.i. times, from two dams and three offspring of each, as described in the Methods section. Thus, CV-B4 E2 RNA was detected in 2/2 uteri, 1/2 amniotic fluids and 3/6 placentas, sampled at day 17G from animals inoculated at day 10G. Evidently, animals inoculated at day 17G were not sampled at that time point. As regards the privileged target tissues of CV-B4 E2, viral RNA was found in all (18/18) pancreases sampled, from day 17G through day 5 after birth, in offspring born to dams inoculated at day 10G, and in 11/12 pancreases, sampled at days 0 and 5, in offspring born to dams inoculated at day 17G. Regarding the heart, 14/18 and 10/12 tissues, sampled up to day 5, were positive for CV-B4 E2 RNA in the case of inoculation at day 10G or day 17G, respectively. These findings reveal CV-B4 E2 vertical transmission to 18/18 offspring born to dams inoculated at day 10G, and to 11/12 offspring born to dams inoculated at day 17G. However, as mentioned in the Methods section, pups born to dams inoculated at day 17G were followed for a longer period (up to day 70 after birth), to address the issue of an eventual persistence of the virus. As summarized in Table 2, CV-B4 E2 RNA was detected until 70 days after birth in 8/15 pancreases sampled from day 21 through day 70, and until 50 days after birth in 7/15 hearts sampled at the same time points, thus revealing 10/15 additional offspring infected through vertical transmission of the virus from dams inoculated at day 17G (total 21/27). 
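For readers who want to attach uncertainty to the detection rates just reported, the short R sketch below computes exact binomial 95% confidence intervals from the counts given in the text; it introduces no data beyond those counts and is purely illustrative.

# Exact binomial 95% confidence intervals for the RT-PCR detection rates
# reported above (counts as stated in the text and Tables 1-2).
rates <- data.frame(
  sample   = c("pancreas, dams 10G (d17G-d5)", "pancreas, dams 17G (d0-d5)",
               "pancreas, dams 17G (d21-d70)", "heart, dams 17G (d21-d70)"),
  positive = c(18, 11, 8, 7),
  total    = c(18, 12, 15, 15)
)

ci <- t(mapply(function(x, k) binom.test(x, k)$conf.int, rates$positive, rates$total))
cbind(rates, lower = round(ci[, 1], 2), upper = round(ci[, 2], 2))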
Some samples required a subsequent sn-PCR round to reveal positive for viral RNA (22 of a total of 74 positive samples, 29.73%). Viral RNA was not found in samples from negative control dams or offspring.

Table 1 (legend). Uteri from two pregnant dams inoculated at day 10G, together with the amniotic fluid (pool) and placentas of three offspring of each dam, were sampled at day 17G. Pancreases and hearts from six offspring (three each born to a different dam) were sampled at each of day 17G and days 0 and 5 post-partum when dams were inoculated at day 10G, and at days 0 and 5 post-partum when dams were inoculated at day 17G. Results are summarized as positive (+) or negative (-). * Result obtained by sn-RT-PCR.

Detection of infectious CV-B4 particles in tissues from virus-inoculated dams and their offspring

To better document the infection, a variety of tissues were analyzed for the presence of infectious virus at different time points, as described in the Methods section. A viral progeny could be evidenced in samples taken from several tissues of dams inoculated at day 10G and their offspring, namely the pancreas and heart but also the umbilical cord, placenta, amniotic sac, uterus and amniotic fluid (sampled only at day 17G) (Fig. 4a). The highest titers were recovered from the pancreas, heart, umbilical cord and placenta, followed by the amniotic sac, uterus and amniotic fluid. The mean viral titers decreased (and even fell to zero in the pancreas) by day 0, then increased by day 5 after birth. As regards virus inoculation at day 17G, only offspring pancreases and hearts were analyzed, and a viral progeny could also be evidenced at both day 0 and day 5 after birth (Fig. 4b). The mean viral titers increased between day 0 and day 5, and pancreases showed higher titers than hearts.

Discussion

Vertical transmission is a route of contamination by CV-B that, despite constituting a serious problem, as explained above, is neither sufficiently recognized nor thoroughly investigated. Most of our current knowledge on this subject comes from clinical observations in humans. Few investigations of CV-B vertical transmission have been carried out in mice [22, 28-33], although the mouse is an experimental tool frequently used to explore aspects of infection by these viruses that are difficult to address in humans. Different viral and mouse strains were used in those studies, together with different inoculation periods and routes, as well as different methods of analysis. Altogether, they generated numerous data, but the issue of CV-B vertical transmission is not fully elucidated and deserves further work. It is in this context that our current study was undertaken. As in a previous study [26], we performed our experiments with the outbred Swiss albino mouse strain, for a better representation of the heterogeneity within the human population. We likewise used the viral strain CV-B4 E2, which has been shown to target numerous tissues. The time of inoculation during pregnancy was an important point to consider since, according to previous studies, it appears to strongly influence the outcome of CV-B infection in offspring [22, 28, 31-33].
Indeed, Dalldorf and Gifford [28], who were the first to investigate CV-B vertical transmission in mice, noticed that a pancreatic line of CV-B1, inoculated intraperitoneally into mice of the Albany Standard strain in the third week of gestation, resulted in an increase in the morbidity/mortality rate among offspring (from 20% and 43% after inoculation in the first and second weeks, respectively, to 77% in the third week), and thus in an increase in the severity of the infection at that stage. Conversely, Lansdown [31] reported that intramuscular CV-B3 inoculation of Swiss mice during the first week of gestation (day 4G or 8G) induces more pregnancy loss than inoculation of the same virus during the second week (day 12G); there, fetal wastage was attributed to the nutritive deficit resulting from destruction of the maternal exocrine pancreas by the virus. Similarly, in the investigation carried out by Modlin and Crumpacker [32] with outbred CD-1 mice, oral inoculation with CV-B1 in the first week of gestation (day 7G), despite causing a less severe infection in dams, induced significantly more abortions than inoculation in the third week (days 14G and 16G). In another investigation, also carried out in CD-1 mice, maternal oral inoculation with CV-B4 E2 at day 4G or day 17G had little effect on pregnancy outcome, whereas infection at day 10G affected dams and/or offspring [22]. In that same study, inoculation at day 17G predisposed to an aggravation of the consequences (severe pancreatic inflammation and hyperglycemia) of a post-natal challenge of pups with the same viral strain [22]. This difference in susceptibility to infection across gestational periods was attributed to physiological changes in hormone levels that may be associated with decreased immunity [28,32]. Variations in the level of expression of the Coxsackie/Adenovirus receptor (CAR) protein, which has been shown to be an essential molecule for embryonic development [34,35], could also explain the difference in susceptibility to CV-B infection during the different stages of gestation.

Table 2 (legend). RT-PCR results for prolonged viral RNA detection in the pancreas and heart following CV-B4 E2 inoculation at gestational day 17. Pancreases and hearts from six or three offspring (three each born to a different dam) were sampled at each of days 21, 30, 50 and 70 post-partum. Results are summarized as positive (+) or negative (-). * Result obtained by sn-RT-PCR.

Fig. 4 Kinetics of viral progeny in several tissues following CV-B4 E2 inoculation at either gestational day 10 (a) or 17 (b). Uteri from two pregnant dams inoculated at day 10G, together with the amniotic sacs, amniotic fluids (pool), umbilical cords and placentas of three offspring of each dam, were sampled at day 17G. Pancreases and hearts from six offspring (three each born to a different dam) were likewise sampled at each of day 17G and days 0 and 5 post-partum when dams were inoculated at day 10G, and at days 0 and 5 post-partum when dams were inoculated at day 17G. Samples were subjected to viral progeny titration by the Reed and Muench method as described in the Methods section. Results are plotted as mean TCID50/mg (except for the amniotic fluid, where they are expressed as mean TCID50/ml) ± SD, n = 6. No trace of viral progeny could be evidenced in any tissue from the negative control animals.
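The Fig. 4 legend above refers to viral progeny titration by the Reed and Muench method, with titers expressed as TCID50 per mg of tissue. As a purely illustrative aid, and not the authors' code, the following sketch shows how a 50% endpoint is interpolated from the proportion of infected wells across a serial dilution; the dilution series and well counts used here are hypothetical.

```python
# Illustrative Reed-Muench 50% endpoint (TCID50) calculation.
# The dilution factors and well counts below are hypothetical examples,
# not data from the study.

# Ten-fold serial dilutions (log10 of the dilution) and, for each,
# the number of wells showing cytopathic effect (CPE) out of wells inoculated.
dilutions_log10 = [-1, -2, -3, -4, -5, -6]
infected        = [8,   8,  6,  3,  1,  0]   # wells with CPE
total_wells     = [8,   8,  8,  8,  8,  8]

# Reed-Muench accumulation: wells infected at a high dilution would also be
# infected at lower dilutions, so positives are summed toward the dilute end;
# uninfected wells are summed from the concentrated end downward.
cum_infected = [sum(infected[i:]) for i in range(len(infected))]
cum_uninfected = [sum(t - x for t, x in zip(total_wells[:i + 1], infected[:i + 1]))
                  for i in range(len(infected))]
percent_infected = [ci / (ci + cu) * 100 for ci, cu in zip(cum_infected, cum_uninfected)]

# Find the two dilutions bracketing 50% and interpolate between them.
for i in range(len(percent_infected) - 1):
    if percent_infected[i] >= 50 > percent_infected[i + 1]:
        prop_distance = (percent_infected[i] - 50) / (percent_infected[i] - percent_infected[i + 1])
        log_tcid50 = dilutions_log10[i] - prop_distance  # ten-fold dilution steps
        print(f"50% endpoint dilution: 10^{log_tcid50:.2f}")
        # Converting the endpoint to a titer per inoculated volume and dividing
        # by the tissue weight (mg) would give the per-mg titer reported in Fig. 4.
        break
```

With the example counts above, the endpoint falls at roughly 10^-3.71, i.e., a titer of about 10^3.71 TCID50 per inoculated volume.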
Being inspired by the results of Bopegamage et al., [22] working with the same viral strain as well as outbred mice, we chose to inoculate our mice at day 10G (second week) and 17G (third week of gestation). As illustrated in the Results section, inoculation at day 10G, but not at day 17G, was followed by a significant weight loss in pregnant dams associated to an important rate of abortion and a reduced number of offspring per litter, which is in agreement with what has been observed previously [22,28]. A reduction in the mean body weight of newborn pups was also noticed following virus inoculation at day 10G (data not shown) which is reminiscent of what was previously reported in fetuses sampled at the end of pregnancy following maternal inoculation at day 8G [31,36]. Weight loss could just reflect morbidity among infected animals or at least changes on their state of health. Such consequences were not observed following inoculation at day 17G, maybe because the infection occurs too late during pregnancy to affect its outcome. Inoculation at that gestational stage seems however to have more effect on dams, here manifesting through premature delivery, then unusual behavior (possibly because delivery occurs during the acute phase of the infection). An increase in susceptibility of dams to CV-B infection with advancing pregnancy has already been described by other teams [28,32]. Cannibalism (destruction of litters by their mothers) at birth has equally been reported in one of those studies together with evident morbidity of dams for at least 1 week postpartum [28]. No delay in the fetal growth could however been evidenced as previously reported by others [36,37], whatever was the inoculation period. By the same, morbidity among offspring manifested only in two cases in the current investigation (one case of paralysis, and one case of pancreatitis), which is negligible if we consider the total number of examined pups, and reminiscent of the results of Bopegamage et al., [22] that observed normal histology and normal blood glucose levels in offspring born to CV-B4 E2-inoculated dams. In order to document CV-B4 E2 infection, we began with a rather simplistic approach, namely the detection of anti-CV-B4 antibodies by seroneutralization. Indeed, numerous studies documented a neutralizing response in CV-B-inoculated mice, a response that, as in humans, can be considered as an indirect proof of infection [24,38,39]. It has been previously reported that 90% of pregnant CD-1 outbred Swiss mice orally inoculated with CV-B3, late in pregnancy, developed IgG antibodies to CV-B3 starting from 5 days p.i. [33]. Those antibodies seemed to protect offspring against postnatal mortality, but not against stillbirth. To the best of our knowledge, only two studies already reported maternal transfer of anti-CV-B antibodies to offspring in mice, and the effect of such antibodies seems rather contradictory and disserves further investigation [22,40]. Indeed, passively transferred maternal antibodies enhanced the infection of offspring during challenge in the investigation of Bopegamage et al., [22], whereas they protected challenged offspring from infection in the work of Larsson et al., [40]. In our current work, anti-CV-B4 antibodies were retrieved in fetuses (day 17G) and, even after birth, roughly maintained at lower levels than in dams, which suggests a partial transplacental transfer (as strengthened by antibodies detection in the placenta) of maternal antibodies. 
The progressive increase in the number of seropositive pups, then their decrease after day 21, let us equally think about an additional transfer via breastfeeding, rather than a de novo synthesis by those too young animals. Indeed, the human milk was shown to contain anti-enterovirus antibodies that can neutralize the virus in vitro [41] and protect newborns from infection [42]. Although not a common diagnostic strategy, the detection of anti-CV-B5 antibodies in the amniotic fluid has been reported [43], hence the idea of including such sample in our analysis that, all the same, gave negative results. Although maternally transferred neutralizing antibodies could be responsible for the protection of our pups from morbidity and mortality, they did not prevent vertical transmission of the virus to them, as discussed below. Considering the fact that the virus needs 1 to 2 days to reach fetuses [29,30,32] and that offspring born to dams inoculated 1 day before delivery escape from infection [28], only tissues from offspring delivered at day 19G and day 20G were considered in the analysis for virus infection following inoculation at day 17G. Here we investigated CV-B4 vertical transmission by two complementary approaches, viral RNA detection and viral progeny titration (to evaluate virus replication). Both methods were concordant since revealing the same high proportion of infected offspring (18/18 and 11/12 following inoculation at day 10G and day 17G, respectively, details not shown in the results of progeny titration). In a previous study with CD-1 mice orally inoculated with CV-B3, virus could be recovered from fetal tissue in only a small percentage (3 to 13%) of pregnancies [33]. CAR is highly expressed in several fetal organs, which explains the increased susceptibility of fetuses, but equally of pregnant dams, to CV-B infections [15,34,35]. Several tissues can be targeted during in utero CV-B4 Infection. Virus detection in fetuses is a direct proof of antenatal in utero virus transmission. This is supported by virus detection in key tissues in vertical transmission (uterus, amniotic sac, amniotic fluid, placenta and umbilical cord). Otherwise, perinatal transmission during delivery cannot be excluded and is often supported by the reincrease of viral titers after birth. Infection of dams' uteri was evidenced by viral RNA detection and virus isolation in the current study, and virus isolation in a previous one [32]. An investigation outlining the involvement of CAR in the susceptibility to CV-B3 in ICR mice, showed that CAR is highly expressed in the epithelium and glands of the uterine endometrium [15], thus, making the uterus a privileged target for CV-B, as observed in the investigation of Modlin & Crumpacker [32]. Virus detection in the amniotic fluid has already been reported in humans [2], but the current investigation gives for the first time an evidence of infection of the mouse amniotic sac and amniotic fluid. Virus detection in the placenta and the umbilical cord, and at levels comparable to those in fetal tissues, strengthens in utero virus transmission through the transplacental way. Indeed, it was outlined that the placenta is a target for CV-B3 and CV-B4 [29,30,32,33]. The failure of detecting the virus 7 days p.i. in some samples is not so intriguing since it has already been reported that virus infects the placenta 1 to 2 days after maternal infection but persists at important levels only 3 to 4 days [32]. 
CV-B4 E2 detection in the offspring's pancreas is hardly surprising for this so-called diabetogenic strain, which is well known for its pancreotropism [44]. Viral RNA and progeny detection in the offspring's hearts is equally unsurprising since, in addition to being a main target of CV-B4, the fetal heart highly expresses CAR. The latter plays an essential role in early cardiac development and regulates cardiac remodeling in the embryo [35,45]; indeed, CAR-knockout mice die on the 11th gestational day due to cardiac anomalies [35,46]. Our experimental model is the first to describe virus persistence following vertical transmission, since viral RNA could be detected until 50 and 70 days postpartum in the heart and the pancreas, respectively. In previous studies, CV-B3 [29] and CV-B4 [30] inoculated during the third week of pregnancy were found in fetal tissues for a period that never exceeded 3 to 4 days. In the investigation by Bopegamage et al. [22], who also used RT-PCR, no trace of infection was evident 30 days postpartum, and the authors did not search for the virus before that time point (which led them to conclude that vertical transmission had not occurred). Virus persistence is considered one of the main mechanisms leading to the development of chronic diseases associated with CV-B infections.

Conclusion

Finally, by addressing numerous issues and combining several approaches, the current report constitutes a fairly complete investigation of CV-B vertical transmission that provides a broader and clearer picture of this still poorly known contamination route. The experimental model described here not only allows a better understanding of CV-B infections in fetuses and newborns, but also constitutes a useful tool to investigate the genesis of CV-B-associated chronic diseases, particularly those with an autoimmune component, such as type 1 diabetes, since the autoimmune process is known to be initiated as early as fetal life.
Pledget reinforcement and traction compression as adjunctive techniques for suture-based closure of arterial cannulation sites in percutaneous endovascular aneurysm repair—initial experience Suture-based vascular closure devices are used in percutaneous endovascular procedures. However, failures are not uncommon. We have described our initial experience with two adjunct techniques to reinforce the suture-based vascular closure device (ProGlide; Abbot Vascular, Santa Clara, Calif) after percutaneous endovascular aneurysm repair. The threads of the ProGlide device (Abbot Vascular) were passed through a pledget with the help of a needle, which was secured to the puncture site to allow for traction compression. The use of the techniques can be helpful if the suture-based vascular closure devices fail to achieve immediate and complete hemostasis. The use of these adjuncts could reduce the incidence of closure-related complications after percutaneous endovascular procedures. The use of percutaneous access for endovascular aneurysm repair (EVAR) has resulted in shorter recovery times and reduced the incidence of wound complications compared with surgical femoral artery exposure. 1 Femoral artery closure during percutaneous EVAR (pEVAR) has often been performed using suture-based closure devices such as the ProGlide closure device (Abbot Vascular, Santa Clara, Calif). The failure of such devices to achieve complete hemostasis has not been uncommon, [2][3][4][5] and hemorrhage can necessitate emergency cutdown, fascial closure, or the prolonged use of external compression devices, introducing the risk of wound infection and/or damage to nearby neurovascular structures. 4,5 In the present report, we have described the use of two adjunct techniques to suturebased closure devices during pEVAR. METHODS The two adjuncts we have described were introduced as novel techniques in our unit. The first 20 patients who had undergone pEVAR with the use of these techniques were included in the present study. All the included patients provided written informed consent before the procedure. Any immediate and delayed complications were prospectively documented. Patient selection. All the patients scheduled to undergo EVAR who had had common femoral arteries (CFAs) considered suitable for the use of a percutaneous technique were considered for the two techniques. The CFAs were required to be >7 mm in diameter with no significant stenosis and to have at least an w1-cm segment of the anterior wall without calcification for consideration of the pEVAR technique. Patients with CFAs with severe calcification in their anterior wall without any gap and those with CFAs with significant stenosis were excluded. In addition, patients who had undergone previous groin open vascular surgery were not included in the present initial series, although we have subsequently performed this technique successfully for such patients requiring repeat groin surgery. Only one groin in our small series was excluded from percutaneous access, and this groin was scheduled and planned for open surgery with cutdown and femoral endarterectomy because of severe stenosis of the CFA. Surgical technique. Every CFA was punctured using ultrasound guidance with the needle at a 45 angle. An adequate size skin incision (w8 mm) was made to accommodate the larger sheath required for EVAR. 
The tissue between the skin puncture site and the intended arterial puncture site was dissected using an artery clip under ultrasound guidance along the intended needle route toward the anterior wall of the artery. The puncture needle was then advanced without any undue compression from the ultrasound probe to avoid puckering of the subcutaneous tissue. This helped the Pro-Glide knot (Abbot Vascular) to slip with less friction and allowed the polytetrafluoroethylene (PTFE) pledget (CR Bard, Tempe, Ariz) to slide over the thread toward the arterial puncture site. A 6F sheath was advanced initially over the wire to predilate the track before insertion of the ProGlide closure device (Abbot Vascular). After percutaneous arterial access for EVAR, two suturebased closure devices (ProGlide; Abbot Vascular) were deployed as a preclosure technique. 6 EVAR was then performed without modification. Closure of the femoral artery was begun using a standard method with the suture-based closure device in accordance with the instructions for use 6 ; however, the sutures were not cut. The puncture site was assessed to determine the presence of complete hemostasis. If complete hemostasis had not been achieved, the following adjunct procedure was initiated. If bleeding from the puncture site indicated that the ProGlide devices had not worked at all, we would consider placement of a third ProGlide device or the use of other methods such as surgical cutdown. We maintained the wire access until adequate hemostasis had been confirmed and we were sure we could achieve control using the adjunct techniques. We have used these techniques since June 2018 and have described the outcomes for the initial 20 patients in the present report. Pledget reinforcement. None of the threads of the closure devices was cut. The two ProGlide devices (Abbott Vascular) have four threads. We chose to use the two longer threads of the four, although different combinations can be used. A square (7.9-mm  3-mm) PTFE felt pledget (CR Bard) was pierced by an 18-gauge needle in two places (one at a time), and each pair of sutures was passed through the pledget from the sharp end of the needle toward the hub (Fig 1, A). The needle was withdrawn, and the pledget was pushed through the skin puncture site using surgical forceps. The knot pusher was then used to push down the PTFE pledget along the thread to seal the puncture site. The PTFE pledget was kept pushed against puncture site using the knot pusher for $1 minute to assess the extent of the hemostasis. The knot pusher could be pushed in different directions to achieve better hemostasis. Once hemostasis was considered to have occurred with the pledget, we removed the wire (if not previously removed). A second knot was then tied over the PTFE pledget (Fig 1, B) to secure the pledget over the arterial puncture site with six throws (Video). The knot pusher can be used for tightening after every throw of the knot over the pledget. Care should be taken to not fracture the threads during the knot tying. Traction compression. If absolute hemostasis was not achieved with the use of the pledget, traction compression was applied instead of using traditional unidirectional manual compression or an external compression device (ie, FemoStop compression system; Abbot Vascular). For traction compression, a small gauze swab was wrapped around the closure device sutures (Fig 2, A), which were used to guide the swab through the soft tissue to the puncture site. 
All four threads from the two ProGlide devices were held taut and perpendicular to skin. An artery clip was used to grip the sutures at the lowest external point with adequate tension on the threads, thereby pulling the vessel and the PTFE pledget up toward the swab, with the swab held down by an artery clip to compress the puncture site for 5 to 10 minutes (Fig 2, B). This helps to obtain fine hemostasis by compressing the tissue between the puncture site and the artery clip. We assessed the hemostasis after 5 minutes. If blood were still oozing, an additional 5-minute period of compression was applied. Once absolute hemostasis had been achieved, the artery clip was detached, the swab was removed, and all four sutures were cut. Once hemostasis was achieved, the skin was closed in accordance with surgeon preference such as the use of skin glue. Any immediate complications such as hematoma, a requirement for surgical cutdown, or fascial closure were all documented. All patients underwent duplex ultrasonography and computed tomography (CT) angiography at 1 month and were followed up in the outpatient clinic to assess for any complications related to the EVAR and for groin complications, including infection, hematoma, and pseudoaneurysm. RESULTS The first 20 patients (median age, 76 years; 15 men) who had undergone pEVAR in our unit were included in the present study. Of the 40 groins in the 20 patients, the femoral artery in 1 groin of 1 patient was heavily diseased with near occlusion and required a previously planned femoral endarterectomy. For the remaining 39 groins, percutaneous femoral access was successful. Only one ProGlide suture had broken during the initial deployment, and an additional ProGlide device was deployed in one groin. The patients had required 16F to 22F sheaths for pEVAR. All planned percutaneous access procedures were successful using the preclosure technique of two ProGlide devices and PTFE pledget in the present series. No immediate complications such as hematoma were encountered. No patient had required unexpected surgical cutdown. Although every patient who had undergone EVAR had received 5000 IU heparin during the procedure, protamine was not used for any of the patients to reverse the heparin. All 20 patients were seen in our outpatient clinic and had undergone the routine 1-month postoperative CT scan and arterial duplex ultrasound scan to assess for any endoleaks, pseudoaneurysms, and stenosis. At the 1-month follow-up examination, no groin infection, significant stenosis, or pseudoaneurysm was found in any of the patients. All 40 groins, including the 1 that had undergone endarterectomy, remained healthy without any complications. The CT scan confirmed a healthy CFA, and the mean distance between the luminal edge of the CFA and the PTFE pledget was 2.9 mm (Fig 3), although the distance was >4 mm in six groins. The median distance between the anterior wall of the femoral artery and the skin was 32 mm (interquartile range, 28-42 mm). For the six groins in which the pledget had been situated >4 mm from the arterial lumen, the distance was 36, 40, 41, 41, 43, and 44 mm. However, no pseudoaneurysm of any size had developed in any of the patients even those with the pledget >4 mm from the arterial lumen. DISCUSSION The failure of closure devices after percutaneous access increases the morbidity and potential risks associated with EVAR. 
1 The adjunctive procedures we have described have the potential to reduce the need for immediate or subsequent groin cutdown, manual compression, or the use of external compression devices. The PTFE pledget was railed over two of the four threads from the two ProGlide devices (Abbott Vascular). Although in the present series, we used the two longer threads from the two ProGlide devices, it is possible to rail the pledget over the two threads of the first ProGlide or over the threads of the second ProGlide device. It is also possible to use a second PTFE pledget over the remaining two sutures if required. The effect of the pledget is primarily to apply pressure against the puncture site to seal the bleeding site. The described technique is limited to reinforcing closure with suture-based closure devices such as the ProGlide device (Abbot Vascular) and would not be suitable for clip-or plug-based closure devices. Furthermore, it requires successful initial deployment of the suturebased device but could still be incorporated if the device does not achieve adequate hemostasis. The complications associated with pledgets are rare but include the possibility of infection. The use of biodegradable pledgets could represent an acceptable substitute in the future. The added cost of the procedure is equivalent to the cost of the pledget, which would be offset if the need for groin cutdown and surgical arterial closure is reduced. External compression devices (eg, FemoStop; Abbot Vascular) can narrow the arterial lumen, especially if high pneumatic pressure is used. External compression also has the potential to deliver an inaccurate pressure or to impinge on neighboring structures such as the femoral nerve, its branches, or the common femoral vein. When applied after a procedure in the conscious patient, external compression devices can also serve as a considerable source of discomfort. The traction compression technique we have described allows for focal application of pressure to the puncture site without restricting the arterial lumen or compromising the neighboring structures. The use of traction compression provides additional hemostasis by compressing the tissue between the puncture site and skin by pulling the arterial wall up and pressing the skin down against the puncture site. The use of this technique could be especially valuable for low punctures that lack the underlying bony stability of the femoral head, against which the CFA has traditionally been compressed. Again, its use is limited to that of an adjunct to suture-based closure devices and cannot be incorporated once the sutures have been cut. Unlike manual compression, traction compression does not occupy the operator once applied. Postoperative CT confirmed that the pledget was not in direct contact in some patients. However, the pledget helped to control the bleeding by acting as an external seal with target compression at the point of puncture. The mean distance between the anterior wall of the CFA and pledget was 2.9 mm. The distance included the arterial wall thickness and some fascial tissue, which acted as a buttress between the arterial wall and pledget. If the ProGlide knot is completely tight, the PTFE pledget might not be required to achieve prompt hemostasis. However, a common issue is that the knot will not be completely tight against arterial wall, leaving a bleeding puncture site. 
Thus, a PTFE pledget that is railed over the thread will help to exert direct compression, sealing the puncture site and achieving better hemostasis. None of the patients in our initial experience required any additional external compression beyond the maximum of 10 minutes (average, 5 minutes) of traction compression. All femoral percutaneous access sites were sealed successfully with the use of two ProGlide devices, a PTFE pledget, and the traction compression technique. These methods appear to be very promising adjunct techniques to complete percutaneous endovascular procedures that require a sheath size of #22F with confidence. CONCLUSION We have presented novel adjunct techniques (pledget reinforcement and traction compression) for suturebased percutaneous arterial closure. Their use results in minimal additional procedural costs or time and has the potential to reduce the need for external compression and surgical cutdown. Further comparative studies are underway to assess the objective benefits of these adjunct techniques.
Age and Alzheimer’s Disease-Related Oligodendrocyte Changes in Hippocampal Subregions Oligodendrocytes (OLs) form myelin sheaths and provide metabolic support to axons in the CNS. Although most OLs develop during early postnatal life, OL generation continues in adulthood, and this late oligodendrogenesis may contribute to neuronal network plasticity in the adult brain. We used genetic tools for OL labeling and fate tracing of OL progenitors (OPCs), thereby determining OL population growth in hippocampal subregions with normal aging. OL numbers increased up to at least 1 year of age, but the rates and degrees of this OL change differed among hippocampal subregions. In particular, adult oligodendrogenesis was most prominent in the CA3 and CA4 subregions. In Alzheimer’s disease-like conditions, OL loss was also most severe in the CA3 and CA4 of APP/PS1 mice, although the disease did not impair the rate of OPC differentiation into OLs in those regions. Such region-specific, dynamic OL changes were not correlated with those of OPCs or astrocytes, or the regional distribution of Aβ deposits. Our findings suggest subregion-dependent mechanisms for myelin plasticity and disease-associated OL vulnerability in the adult hippocampus. In recent years, multiple studies were performed to identify long-term, age-related changes in the OL population and myelination (Young et al., 2013;Yeung et al., 2014). However, current knowledge of adult OL changes is limited to selected brain areas, such as the corpus callosum and cerebral cortex, where OLs are either abundant (Young et al., 2013;Hill et al., 2018) or where longitudinal in vivo OL imaging is achievable (Hill et al., 2018;Hughes et al., 2018). In contrast, despite several previous studies, age-related changes in hippocampal OLs have not been clearly characterized. Most past studies relied on histological approaches with labeling of OLIG2 (Valério-Gomes et al., 2018), myelin proteins (Desai et al., 2009;Vanzulli et al., 2020) or both (Chao et al., 2020) to observe hippocampal OLs. In this study, we used mouse tools for genetic labeling of OLs and OPC-specific Cre-loxP fate tracing to follow age-related oligodendroglial changes in the hippocampus. Our analysis focused on different hippocampal subregions, each of which may represent well-established neuronal circuits composed of specific axonal relays. Moreover, we determined how hippocampal OLs change in APP/PS1 mice, a mouse model of AD-like amyloidogenesis. Our results show that different hippocampal subregions exhibit distinct patterns of age-related OL growth and OL vulnerability to AD-like disease conditions. This suggests that hippocampal myelin plasticity underlies specific aspects of age-and AD-dependent cognitive changes related to hippocampal functioning. Oligodendrocytes Increase With Age at Different Rates in Different Hippocampal Subregions To understand how hippocampal OLs change with age, we examined hippocampi of Mobp-EGFP mice at the ages of 0.5, 1, 3, 7, and 12 months. Because oligodendrogenesis and new myelination are partly regulated by neuronal activity in the mature brain (Gibson et al., 2018;Mitew et al., 2018), we assumed that different neuronal circuits are subjected to differential use or patterning of neuronal activity with age. 
Therefore, we also asked whether hippocampal OL changes are subregion-dependent by analyzing four hippocampal subregions for OL changes: stratum lacunosum-moleculare (SLM) of CA1, the dentate gyrus (DG) hilus (hereafter called CA4), stratum lucidum (SL) and stratum radiatum (SR) of CA3, and SR of CA1 (Figure 2A). The density of EGFP + OLs increased significantly with age in all examined hippocampal subregions, including adult age points (Figures 2B-F). However, the patterns of OL increase differed among those subregions. At P15, the SLM (CA1) exhibited the highest OL density compared to other regions (e.g., 106.5/mm 2 in SLM vs. 28.8/mm 2 in CA4, p = 0.03, Student's t-test) (Figures 2C,D). Except for the SR of CA1, all areas showed increased OL density, particularly from P15 to 1 month of age (Figures 2B-F Fate Analysis of Adult Oligodendrocytes Progenitor Cells in the Hippocampus To confirm the subregion specific OL changes in the hippocampus, we followed the fate of adult OPCs in the aging brain. To analyze OPC fates, we crossed Pdgfra-CreER mice, a line of OPC-specific tamoxifen-inducible Cre mice (Kang et al., 2010) with R26-EGFP (RCE) Cre reporter mice ( Figure 3A). Multiple injections of tamoxifen into Pdgfra-CreER; RCE mice between P70 and P73 resulted in EGFP expression in > 65% of NG2 + OPCs in CA1 and CA4 of the hippocampus when the brains were observed at P77 (P70 + 7) (Figure 3B). At P70 + 7, only 2 and 12% of EGFP + cells were ASPA + OLs in CA4 and CA3, respectively (Figures 3C,F). To estimate new oligodendrogenesis after P70 for the following 10 months, 12month-old Pdgfra-CreER; RCE mice were examined. Consistent with the results of EGFP + OL quantification (Figure 2), 12month-old mice had much higher densities of EGFP + cells and EGFP + ASPA + mature OLs in the CA4 and CA3, compared to younger counterparts (Figures 3C-E). 12-month-old mice also had a greater proportion of mature OLs among EGFP + cells in the CA3 and CA4 region, but not in the SLM or SR of CA1 (Figure 3F), suggesting a greater accumulation of EGFP + ASPA + OLs in CA3 and CA4 with age. Thus, both our quantitative analysis of MOBP-EGFP + OLs and adult Cre-loxP OPC fate analysis suggest that the CA3 and CA4 region are the most active areas of adult oligodendrogenesis in the hippocampus. These findings suggest that subsets of hippocampal neurons are subjected to new adult myelination at different rates. Age-Related Changes in Myelinated Processes in the Hippocampus With confocal microscopy of MBP, we noticed that MBP + processes in 12-month-old-mice were thinner or shorter than those of 1-month-old mice (Figures 4A,B). These differences suggest more compacted myelin status and shorter internodes at old age as shown before (Young et al., 2013). Similar changes were observed more clearly with EGFP-stained OL processes in Mobp-EGFP mice ( Figure 4C). The arborization patterns of EGFP + OL processes in CA4 were more complex, but processes were thinner, at 12-months than at 1-month ( Figure 4C). Of note, the differential increases of OL densities in hippocampal subregions (CA3 and CA4) were not correlated to changes in MBP immunoreactivity (pixel density) in the corresponding areas (data not shown). The disproportionate change in MBP + immunoreactivities relative to OL number may be partly related to poor access of MBP (or GFP) antibodies to the compact myelin in the aged CNS (Gonsalvez et al., 2019). 
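The subregional densities reported in this section (for example, 106.5 EGFP+ OLs/mm2 in the SLM at P15 versus 28.8/mm2 in CA4) are obtained by normalizing manual soma counts to the outlined area of each subregion. A minimal sketch of that normalization follows; the counts and region areas are invented for illustration only and are not the study's data.

```python
# Illustrative density calculation (cells per mm^2) from manual counts.
# Counts and region areas below are hypothetical, not the study's data.

counts_per_section = {"SLM_CA1": 43, "CA4": 12, "CA3_SL_SR": 21, "SR_CA1": 9}          # EGFP+ somata counted
areas_mm2_per_section = {"SLM_CA1": 0.40, "CA4": 0.42, "CA3_SL_SR": 0.55, "SR_CA1": 0.60}  # outlined ROI areas

for region, count in counts_per_section.items():
    density = count / areas_mm2_per_section[region]   # cells per mm^2
    print(f"{region}: {density:.1f} EGFP+ OLs/mm^2")

# Per-animal densities would then be averaged over 3-4 sections,
# and group means +/- SEM compared across ages (n = mice per group).
```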
However, we observed marked increases in Caspr + puncta, a paranodal marker, in the SR of CA3 from 1 to 12 months with age, indicating increases in nodes of Ranvier as can be predicted by OL density increases (Young et al., 2013) in this hippocampal subregion (Figures 4D,E). To assess a possible change in axonal density in the SR of CA3 with age, we used anti-NF200 immunostaining as a neurofilament marker. NF200 + immunoreactivity changed from a fibrous to a more granulated pattern with age ( Figure 4F), and its density was significantly reduced (160.5 ± 8.8 for 1 month vs. 71.8 ± 2.3 for 12 months; p < 0.005). We interpreted these changes as reflecting either axonal property changes or reduced density in this subregion with age. These results suggest that OL increases in CA3 are not driven by NF200 + axon increases with age. Oligodendrocytes Progenitor Cell and Astrocyte Number Change With Age in the Hippocampus To understand other glial changes with age in the hippocampus, OPC changes were analyzed in the same four hippocampal subregions at four different ages (0.5, 1, 3, and 12 months). OPCs are a major population of proliferating cells, and their density is Table 1 for p-values of pairwise comparisons for (C-F). (G) Change in EGFP + OL density from 3 to 12 months of age. One-way ANOVA and Tukey's test. **p < 0.01; ***p < 0.001. n = 3 mice for each group. known to be maintained in the adult brain (Dawson et al., 2003). Our results showed that NG2 + OPC densities decreased from 1 to 12 months of age in most subregions (Figures 5A-E), but in CA4, such an OPC decrease was not observed ( Figure 5C). In our analysis, however, different hippocampal subregions displayed differing rates of decline throughout the lifespan. For example, from 1 to 3 months of age, there was a significant decline in OPC number in the CA3 and SR of CA1, while the SLM of CA1 and CA4 do not exhibit such an early decline (Figures 5B-E). From 3 months throughout adulthood (to 12 months of age), there was virtually no significant change in OPC number in all regions (Figures 5B-E). Thus, our results indicate that hippocampal OPC number is relatively stable in adulthood after initial decreases before 3 months, consistent with previous results obtained from other CNS areas (Zhu et al., 2011). Moreover, these age-related OPC changes in adulthood did not inversely correlate with concurrent OL increases in each subregion (Figures 2C-G). We also compared GFAP + SOX9 + hippocampal astrocyte densities at 1 and 12 months of age. Except for the subgranular zone (SGZ) of the DG (Sun et al., 2017), anti-SOX9 immunoreactivities were localized in the nuclei of GFAP + astrocytes ( Figure 6A). We noted that there were prominent regional differences in densities of SOX9 + GFAP + astrocytes in the hippocampus at 1 month, and astrocytes were usually at higher density in SLM and CA4 than in CA3 and SR of the CA1 at both 1 and 12 months of age (e.g., 1098.6/mm 2 in SLM vs. 441.3 in the SR of CA1 at 1 month, p < 0.0001, Oneway ANOVA) (Figures 6A,B). However, unlike OLs and OPCs, SOX9 + cell density did not decrease significantly with age in most subregions except for the SLM of CA1 ( Figure 6C). The results (F) Percentages of ASPA + OLs among EGFP + cells. Two-Way ANOVA and Sidak's multiple comparison test. *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. n = 3 or 4 mice for each group. 
thus far suggest that each of three macroglial populations (i.e., OLs, OPCs, and astrocytes) follows a distinct regional pattern for its age-related changes in the hippocampus. Prominent Oligodendrocyte Loss in CA3 and CA4 of the Hippocampus in APP/PS1 Mice To understand how OLs are affected by AD-like disease conditions in different hippocampal subregions, OL densities were compared between 12-month-old control and APP/PS1 mice. Whereas OL densities of the two groups were comparable in the SLM and SR of CA1, there was a significant decrease in EGFP + OLs in CA3 and CA4 of APP/PS1 mice (Figures 7A,B). Interestingly, the area of Aβ plaques in CA3 was smaller than those in other hippocampal subregions in APP/PS1 mice (Figures 7C,D). Moreover, there was no loss of EGFP + OLs in the SLM of CA1 (Figure 7B), although this region had the highest levels of Aβ plaques among the examined areas in 12-month-old APP/PS1 mice (Figures 7C,D). Therefore, the regional pattern of hippocampal OL loss in APP/PS1 mice does not appear to be correlated with the levels of the extracellular Aβ plaque deposits. However, densitometric analysis of MBP + immunoreactivities did not show MBP reduction as clearly as the loss of MOBP-EGFP + OLs in APP/PS1 mice (Figures 7E,F). These results suggest that AD-like disease conditions cause hippocampal OL loss in a subregion-dependent manner, but Aβ plaque-induced cell toxicity is not a primary cause of the OL loss in 12-month-old APP/PS1 mice. Our results also raise an issue that histological analysis of MBP-immunoreactivities may not always reliably reflect pathological OL loss as noted by others (Gonsalvez et al., 2019). Hippocampal Adult Oligodendrogenesis Is Not Impaired in APP/PS1 Mice It is possible that the subregion specific OL loss in APP/PS1 mice was due to decreased adult oligodendrogenesis. To test this possibility, we crossed Pdgfra-CreER; RCE with APP/PS1 mice and administered tamoxifen into Pdgfra-CreER; RCE; ± APP/PS1 mice at P70. The fates of P70 OPCs were analyzed by examining EGFP + cells in the hippocampus at 12 months (P70 + 300). Although there was OL loss in CA3 and CA4 of APP/PS1 mice (Figure 7B), we found no significant difference between control and APP/PS1 groups in the densities of total EGFP + cells (Figures 8A,B), EGFP + ASPA + OLs (Figures 8A,C), or the percentage of mature OLs among EGFP + cells ( Figure 8D) in all hippocampal subregions. The NG2 + OPC densities were also comparable between control and APP/PS1 mice (Figures 8E,F). We also quantified newly formed myelinating OLs that are marked with anti-breast carcinoma amplified sequence 1 (BCAS1) antibodies (Fard et al., 2017), but observed very few BCAS1 + cells in the hippocampus, with no difference in their numbers between the two groups ( Figures 8G,H). These results suggest that the selective loss of CA3 and CA4 OLs in APP/PS1 mice is not attributed to impairment in adult oligodendrogenesis or loss of OPCs. However, despite the intact potential of OPCs to differentiate into mature OLs, adult oligodendrogenesis may not be sufficient to restore normal OL density in the CA3 and CA4 of APP/PS1 mice. We also analyzed disease-related changes in hippocampal astrocyte density in APP/PS1 mice. Notably, despite marked upregulation of GFAP near Aβ deposits (Figure 9A), the number of SOX9 + astrocytes were not increased in all hippocampal regions ( Figure 9B). 
Instead, SOX9 + cell densities were significantly decreased in the SLM of 12-month-old APP/PS1 mice (Figure 9B), which indicates a different regional pattern than that of OL loss in these mice ( Figure 7B). These results suggest that distinct mechanisms underlie glial cell loss for OLs and astrocytes in APP/PS1 mice. In summary, we found that CA3 and CA4 are the areas of most active adult oligodendrogenesis in the hippocampus, and that, in the same areas, OLs are more susceptible to diseased conditions in APP/PS1 mice. In contrast, astrocytes in SLM of the CA1, but not in other regions, undergo age or disease-related loss. Adult Oligodendrocyte Density Changes With Age The thickness of white matter continues to increase in the adult brain to certain mid-ages in humans (Tamnes et al., 2010;Westlye et al., 2010;Lebel et al., 2012). However, to what degree new OL addition contributes to white matter change is unclear (Walhovd et al., 2014;Lebel and Deoni, 2018). OL numbers remain stable in the human corpus callosum from childhood into late ages, with no overt signs of OL turnover, although the myelin exchange rate is high (Yeung et al., 2014). In contrast, new OLs are continuously added to the upper layers of the mouse cortex (Hill et al., 2018;Hughes et al., 2018), even up to 2 years of age (Hill et al., 2018). Genetic lineage tracing of OPCs also reveals continuing oligodendrogenesis in the adult mouse corpus callosum, although the rate of oligodendrogenesis declines with age (Rivers et al., 2008;Kang et al., 2010;Young et al., 2013). Thus, OLs increase with age in at least some brain areas-however, the rate and pattern of age-dependent oligodendroglial growth likely differ depending on species, the brain region, and the time window of OL assessment. Hippocampal Oligodendrocytes and Myelinated Processes in Normal Aging In this study, we found an age-dependent OL increase in the mouse hippocampus up to 12 months of age, and our results reveal that different hippocampal subregions exhibit different rates of OL number growth, indicating subregionspecific mechanisms for late myelination in this brain area. The hippocampus is the central brain area for learning and memory formation (Moser et al., 2015;Eichenbaum, 2017) and is an OL-sparse gray matter area in mice. The importance of axonal myelination in the region has not been clearly defined (Bonetto et al., 2021). Nonetheless, several studies noted agerelated decline in CNPase + fibers in CA1 (from 4 to 14 months) (Hayakawa et al., 2007), or MBP + processes in CA3 (from 6 to 24 months) (Vanzulli et al., 2020), emphasizing myelin loss in the aged hippocampus. However, interpreting CNP or MBP immunostaining results requires caution, because tissue penetration of antibodies against myelin proteins to aged myelin is often limited by its lipid-rich compact myelin status (Gonsalvez et al., 2019). Our confocal microscopy of MBP + or EGFP + cell processes showed increased and more complex patterns of OL processes in 12-month-old brains than in young counterparts, although OL process or internodes appeared much thinner in older mice. We interpret these observations as evidence that the number of OL processes and myelin internodes increase in parallel with OL soma in this age window. Quantitative Analysis of Oligodendrocytes Instead of assessing myelin protein antigenicity, we directly quantified discrete EGFP-labeled OL soma and profiled agedependent OL addition in the hippocampus. 
Our results reveal that hippocampal subregions CA3 and CA4 are the most active and steady areas of adult oligodendrogenesis. These results were further supported by genetic lineage tracing results of OPCs that were EGFP-labeled after P70. Compared to the CA3 and CA4, new OL addition to the CA1 was insignificant during the same age window. The SLM of CA1, where entorhinal cortex layer III neurons project their axons, does not exhibit further OL addition from 3 months, despite its abundance of OLs relative to other hippocampal subregions. In earlier studies, similar myelin increases in CA3 and CA4 regions of the adult brain have not been noted. Interestingly, Abrahám et al. (2010) observed that, unlike other hippocampal subregions, the levels of MBP + myelinated axons in the CA4 further increased from 11 years of age to adulthood in humans. Although this study did not assess myelin changes after adulthood, their results suggest late or continuous myelination in CA4. Oligodendroglial Changes Are Distinct From Those of Oligodendrocytes Progenitor Cells and Astrocytes In contrast to OLs' steady increase in CA3 and CA4, OPC densities decline until 3 months of age but either remain unaltered or decrease to a small degree afterward. Thus, the prominent OL increases in CA3 and CA4 are not inversely correlated with a change in the number of OPCs; instead, they may be caused by other extracellular mechanisms that stimulate the OPC-to-OL transition. Our OPC quantification results are different from other data suggesting that OPC density decreases with age in the hippocampus (Chacon-De-La-Rocha et al., 2020;Vanzulli et al., 2020). However, the discrepancy among studies may be due to different ages examined in previous studies (14 or 24 months as an old age point). In contrast, we quantified OPC numbers only up to 12 months of age. Astrocytes display more reactive transcriptomal changes with age in the cortex, hippocampus, and straitum (Clarke et al., 2018). Nonetheless, astrocyte densities appear to be stable throughout life on the basis of the lack of changes in S100β + cell density in the cortex and CA3 of the hippocampus (Grosche et al., 2013), or single cell-transcriptome data obtained from the whole mouse brain (Ximerakis et al., 2019). Consistent with these observations, our quantification of GFAP + Sox9 + cells suggests that astrocytes in most hippocampal subregions are stable with age. One exception is the SLM of CA1, where the astrocyte density is the highest among hippocampal subregions. The significant astrocyte decrease in the SLM with age points to a regional pattern in astroglial changes different than that of OLs. Thus, all three macroglial cell populations in the hippocampus may follow distinct patterns of number change with age. Although astrocytes and microglia play critical roles in the early OL development and survival throughout the CNS (Domingues et al., 2016), OL density changes in local brain regions at later ages may be subjected to additional regulatory mechanisms, including circuitspecific neuronal activities (Kougioumtzidou et al., 2017;Hughes and Stockton, 2021). It is unclear whether neuronal activity controls astrocyte density. Subregion-Specific Oligodendroglial Susceptibility to Alzheimer's Disease-Like Conditions Oligodendroglial abnormalities have been implicated in various age-related neurodegenerative diseases. 
In particular, abnormal changes in OL-lineage cells and myelin damages were observed in postmortem AD brain tissues (Mitew et al., 2010;Behrendt et al., 2013;Tse et al., 2018) and in mouse models of AD (Desai et al., 2009;Tse et al., 2018;Chacon-De-La-Rocha et al., 2020;Vanzulli et al., 2020). For example, in 3xTg-AD mice, significant losses of MBP + immunoreactivities were observed in CA1 (Desai et al., 2009) at different ages. Subtle changes in OPC morphology or density have also been noted in mouse models of AD (Chacon-De-La-Rocha et al., 2020;Vanzulli et al., 2020) and patients (Nielsen et al., 2013;Zhang et al., 2019), although whether morphological changes in OPCs are functionally linked to OL change is unclear. A recent study indirectly assessed OL loss by measuring the decrease in fractions of CNPase + cells among Olig2 + cells in the CA1, CA2-3, and DG of 10-monthold APP/PS1 mice (Chao et al., 2020). We also observed a loss of hippocampal OLs in the same mouse model, but when observed at 12 months, OL loss was regionally restricted to CA3 and CA4. It is not clear why the results differed between these two studies. Nonetheless, it should be noted that different methods for OL identification may have different sensitivity for OL detection. We found that the regional patterns of OL loss in the hippocampus are not inversely correlated with the levels of Aβ deposits in 12-month-old APP/PS1 mice, suggesting another mechanism driving OL loss at this stage of the disease. Indeed, OL loss precedes Aβ pathology in another AD mouse model (Tse et al., 2018). Moreover, Cre-loxP-dependent OPC fate analysis results indicate that OPC's potential for new oligodendrogenesis was not impaired in the CA3 and CA4. Therefore, our results suggest that the OL loss in CA3 and CA4 in APP/PS1 mice may be due to the death of early-born OLs without sufficient OL replenishment, despite intact OL generation. Unlike our findings, recent studies reported increased OL turnover in the diseased hippocampus. Those conclusions were based on the observed increase of new adult born OLs without net OL density change in another AD mouse model (J20 or PDGF-APP Sw , Ind ) (Ferreira et al., 2020) and a mouse model of tauopathy (MAPT P301S ) (Ferreira et al., 2020(Ferreira et al., , 2021. Changes in hippocampal astrocytes have been observed in mouse models of AD, including reduced astrocyte number (Olabarria et al., 2011;Beauquis et al., 2013) and atrophy or hypertrophy of astrocytes, dependent on their association with Aβ deposits (Rodríguez et al., 2009;Olabarria et al., 2010). We also identified a decrease in astrocyte density that was specific to the SLM of CA1. Our findings suggest that the OLs and astrocytes are subjected to different AD-related stresses which are subregion-specific. Significance of Continuous Oligodendroglial Addition to CA3 and CA4 With Age It is unclear why CA3 and CA4 in the hippocampus are particularly active in adult oligodendrogenesis, but also vulnerable to AD-like conditions in APP/PS1 mice. The significance of the hippocampal subregion-dependent differential myelination will also be an important question. DG and CA3 are known to be critical for pattern separation, the formation of distinct memories from experiences with overlapping elements (Leutgeb et al., 2007;Bakker et al., 2008;Duncan and Schlichting, 2018). With age, there can be region-specific changes to the DG and CA3 activity, associated with deficits to pattern separation (Yassa et al., 2011a,b;Reagh et al., 2018). 
Thus, active adult myelination in these regions may support such memory distinctions, and its loss may lead to confused memory. Collectively, our results reveal subregion-specific and dynamic adult OL changes in the hippocampus through healthy aging or in AD-like disease conditions. These findings warrant further studies to identify specific subgroups of hippocampal neurons as the targets of adult myelination and determine how this region-specific myelination shapes memory processes and other aspects of cognition. Tamoxifen Administration Cre activity was induced with tamoxifen (Sigma-Aldrich, Cat# T5648) administration to Pdgfra-CreER mice. Tamoxifen was dissolved (20 mg/ml) in a mixture of sunflower seed oilethanol (10:1), and then ethanol was evaporated in a vacuum concentrator for 30 min. Forty mg/kg (b.w.) of tamoxifen was intraperitoneally (i.p.) injected twice daily with at least a 6-h interval between injections. A total of 10 doses of tamoxifen was injected into the Pdgfra-CreER mice; R26-EGFP; ± APP/PS1 mice between P70 and P74. Tissue Preparation Mice were deeply anesthetized with sodium pentobarbital (70 mg/kg, i.p.) and subjected to brief trans-cardiac perfusion with PBS and subsequently 4% paraformaldehyde (PFA, in 0.1 M phosphate buffer, pH 7.4). Brains were isolated and incubated in 4% PFA at 4 • C overnight for post-fixation. Fixed brains were then incubated in 30% sucrose (in PBS) at 4 • C for at least 36 h for cryoprotection. Tissues were embedded and frozen on Tissue-Tek optimum cutting temperature (O.C.T.) compound with dry ice. Twenty or 35 µm-thick coronally-cut brain sections were prepared using a cryostat (Leica) and kept in PBS complemented with 0.1% sodium azide. Selection of Hippocampal Subregions for Cell Quantification Four hippocampal subregions were selected for OL quantification: (1) stratum lacunosum-moleculare (SLM) of CA1, (2) the dentate gyrus (DG) hilus (CA4), (3) stratum lucidum (SL) and stratum radiatum (SR) of CA3, and (4) SR of CA1 (see Figure 2A). The rationale for this regional selection is related to the importance of major hippocampal circuits and their axonal projection. The SLM of CA1 includes axons of the direct perforant pathway originating from layer III of the entorhinal cortex (EC) (Suh et al., 2011). The DG, CA3, and SR + SO of CA1 constitute the indirect perforant pathway and represent relays of three synaptic connections. In the indirect perforant path, layer II neurons of the EC project to dendrites of DG granule cells in the DG molecular layer. The dentate granule cells project their axons to CA3 through CA4, the DG hilar region, forming mossy fibers. However, the CA4 and CA3 areas also include various types of interneurons (Markwardt et al., 2011;Pelkey et al., 2017). The CA3 pyramidal neuron axons project to SR of the CA1, which forms the Schaffer collateral pathway. Image Acquisition Fluorescent images were captured using Axio-Imager M2, an epifluorescence microscope (Zeiss), and the Axiovision (7.0) software (Zeiss). Three to 4 sections were assessed per mouse, and at least three mice per group were used for analysis. Hippocampal subregions were determined based on DAPI-based nuclear staining patterns and The Mouse Brain in Stereotaxic Coordinates (Franklin and Paxinos, 2008). Confocal images were obtained with a laser scanning microscope TCS SP8 (Leica) and processed with LAS X software (Leica). 
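The tamoxifen regimen described earlier in this section (a 20 mg/ml stock given i.p. at 40 mg/kg body weight per injection, twice daily, 10 doses between P70 and P74) translates into a small injection volume per mouse. The sketch below simply performs that dose-to-volume conversion; the body weight shown is an arbitrary example, not a value from the study.

```python
# Illustrative dose-to-volume conversion for tamoxifen injections.
# Stock concentration and per-dose amount follow the text; the body
# weight is an arbitrary example value.

stock_mg_per_ml = 20.0      # tamoxifen dissolved at 20 mg/ml in sunflower oil
dose_mg_per_kg = 40.0       # 40 mg/kg body weight per injection
body_weight_g = 25.0        # example adult mouse weight (hypothetical)

dose_mg = dose_mg_per_kg * body_weight_g / 1000.0      # mg of tamoxifen per injection
volume_ul = dose_mg / stock_mg_per_ml * 1000.0         # injection volume in microliters

print(f"Per injection: {dose_mg:.2f} mg tamoxifen = {volume_ul:.0f} ul of 20 mg/ml stock")
# A 25 g mouse receives 1.0 mg per injection (50 ul), twice daily with at
# least 6 h between injections, for 10 doses in total.
```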
Quantification and Statistical Analysis

Cell counting and hippocampal subregion outlining were manually performed using the ZEN software (Zeiss). The percentage of the area of Aβ plaques and the mean pixel values of MBP+ or NF200+ signals were determined using ImageJ. Prism 9.0 (GraphPad) was used for the statistical analysis and graph drawing. Statistical significance was determined with a two-tailed unpaired Student's t-test (for two-group comparisons), one-way ANOVA with the Tukey post-hoc test (for comparisons of more than two groups), or one-way ANOVA with Dunnett's test (for comparisons with a common control). When the effects of age or AD-like conditions were compared across hippocampal subregions, two-way ANOVA and Sidak multiple comparison tests were performed. Error bars represent the standard error of the mean (SEM). "n" represents the number of mice used in each experiment, and three or four mice per group were used unless otherwise stated.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The animal study was reviewed and approved by the Temple University IACUC.

AUTHOR CONTRIBUTIONS

LD performed immunostaining and quantitative analysis, interpreted results, and wrote the manuscript. EG-F sampled mice and performed confocal microscopy. IC sampled mice and performed immunostaining and quantitative analysis. SK conceived, designed, and supervised the study, interpreted results, and wrote the manuscript. All authors contributed to the article and approved the submitted version.
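As a rough illustration of the comparisons described in the Quantification and Statistical Analysis section, a minimal Python sketch of a two-group t-test and a one-way ANOVA with Tukey's post-hoc test is shown below. This is not the authors' analysis code (the study used Prism), and the group labels and density values are hypothetical placeholders rather than data from the study.

```python
# Minimal sketch (not the authors' code) of the two kinds of comparisons
# described above, applied to hypothetical per-mouse density values.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-mouse OL densities (cells/mm^2); n = 3-4 mice per group.
wt  = np.array([210.0, 198.0, 225.0, 205.0])   # e.g., wild-type
app = np.array([162.0, 150.0, 171.0])          # e.g., APP/PS1

# Two-group comparison: two-tailed unpaired Student's t-test.
t_stat, p_val = stats.ttest_ind(wt, app)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# More than two groups: one-way ANOVA followed by Tukey's post-hoc test.
g3m  = np.array([210.0, 198.0, 225.0])
g6m  = np.array([190.0, 184.0, 201.0])
g12m = np.array([160.0, 151.0, 170.0])
f_stat, p_anova = stats.f_oneway(g3m, g6m, g12m)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

values = np.concatenate([g3m, g6m, g12m])
labels = ["3m"] * 3 + ["6m"] * 3 + ["12m"] * 3
print(pairwise_tukeyhsd(values, labels))
```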
Osteoporosis: the current status of mesenchymal stem cell-based therapy Osteoporosis, or bone loss, is a progressive, systemic skeletal disease that affects millions of people worldwide. Osteoporosis is generally age related, and it is underdiagnosed because it remains asymptomatic for several years until the development of fractures that confine daily life activities, particularly in elderly people. Most patients with osteoporotic fractures become bedridden and are in a life-threatening state. The consequences of fracture can be devastating, leading to substantial morbidity and mortality of the patients. The normal physiologic process of bone remodeling involves a balance between bone resorption and bone formation during early adulthood. In osteoporosis, this process becomes imbalanced, resulting in gradual losses of bone mass and density due to enhanced bone resorption and/or inadequate bone formation. Several growth factors underlying age-related osteoporosis and their signaling pathways have been identified, such as osteoprotegerin (OPG)/receptor activator of nuclear factor B (RANK)/RANK ligand (RANKL), bone morphogenetic protein (BMP), wingless-type MMTV integration site family (Wnt) proteins and signaling through parathyroid hormone receptors. In addition, the pathogenesis of osteoporosis has been connected to genetics. The current treatment of osteoporosis predominantly consists of antiresorptive and anabolic agents; however, the serious adverse effects of using these drugs are of concern. Cell-based replacement therapy via the use of mesenchymal stem cells (MSCs) may become one of the strategies for osteoporosis treatment in the future. Background Osteoporosis, a bone disease involving the appearance of porous bone, is characterized by low bone mass and microarchitectural deterioration of bone tissues, leading to reduced bone strength and a consequent increase in fracture risk [1,2]. Osteoporosis is increasingly recognized as a major public health concern that affects more than 200 million people worldwide and causes more than 8.9 million fractures, and mainly hip fractures, per year [3]. The incidence of osteoporosis has dramatically risen because the life expectancy of the population has been increasing in every geographic region [4]. The consequences of osteoporosis are significant, as is the financial burden, estimated at approximately US $17 billion for more than 2 million fractures in the US [5]. The prevention of this disease and its associated fractures is considered essential to health maintenance, quality of life, and independence in the elderly population. According to the World Health Organization (WHO) criteria, osteoporosis is defined as having a bone mineral density (BMD) value that is 2.5 standard deviations or more (T-score ≤ -2.5) below the average value for young healthy women, as measured by dual-energy x-ray absorptiometry (DXA), which is the most validated technique (i.e., the gold standard) [1,2]. A low BMD not only is a major risk factor for fractures but also is an independent risk factor for death [6]. BMD testing of the hip and spine is required for a densitometric diagnosis of osteoporosis. Measurements of bone strength other than bone density at these sites may predict fracture risk but cannot be used to diagnose osteoporosis [7]. BMD remains the best tool to assess fracture risk, but it cannot predict the fracture risk in certain cases, particularly in type 2 diabetes patients, who usually have a higher BMD and an increased fracture risk [8]. 
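The WHO densitometric definition above is simple arithmetic on the DXA measurement, and a minimal sketch may make the thresholds concrete. The young-adult reference mean and standard deviation used below are illustrative placeholders, not values from any particular DXA reference population.

```python
# Minimal sketch of the WHO densitometric classification described above.
# The young-adult reference mean/SD are illustrative placeholders, not values
# from any specific DXA reference database.

def t_score(bmd: float, young_adult_mean: float, young_adult_sd: float) -> float:
    """Number of SDs the measured BMD lies from the young-adult reference mean."""
    return (bmd - young_adult_mean) / young_adult_sd

def who_category(t: float) -> str:
    """WHO categories: osteoporosis (T <= -2.5), osteopenia (-2.5 < T < -1), else normal."""
    if t <= -2.5:
        return "osteoporosis"
    if t < -1.0:
        return "osteopenia (low bone mass)"
    return "normal"

# Example with a hypothetical lumbar-spine BMD of 0.72 g/cm^2.
t = t_score(bmd=0.72, young_adult_mean=1.00, young_adult_sd=0.11)
print(f"T-score = {t:.1f} -> {who_category(t)}")
```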
The majority of osteoporotic fractures occur in individuals with a BMD level within the osteopenic range (-2.5 < T-score < -1) or even with normal BMD levels [9]. The Fracture Risk Assessment (FRAX) tool integrates BMD and clinical risk factors such as age, gender, the history of fracture, the parental history of hip fracture, current smoking, excessive alcohol intake, rheumatoid arthritis, glucocorticoid use, and other causes of secondary osteoporosis. FRAX has been shown to be a more applicable prediction tool to estimate the risk and probability of fracture in an individual over the next 10 years. FRAX can also provide general clinical guidance for treatment decisions [10]. Over the past few decades, there have been great advances in understanding of the physiologic process of bone remodeling, together with associated pathologic conditions. The underlying pathogenesis of osteoporosis involves an imbalance of bone homeostasis that results from many causes, such as hormone deficiency, genetic disorders, use of certain medications, and medical conditions. Osteoporosis is characterized by low bone mass and density, which lead to an increased fracture risk. The aims of treatment for osteoporosis are to reduce bone loss and maintain bone density, especially in patients who have fractures or a high risk of fractures. Many therapeutic drugs for treating osteoporosis are available in the market, most of which have relied on bone resorption inhibition, such as bisphosphonate, and several drugs are being developed for treatment in the future. However, controversies have confounded the treatment of osteoporosis. Thus, new treatments based on the promotion of bone regeneration or alternative cell-based therapy for osteoporosis patients are expected to be investigated. Stem cells are expected to have great therapeutic potential, particularly in regenerative medicine. Specifically, stem cells could be a promising cell source for cell-based therapy for osteoporosis. In the present review, the current understanding of mesenchymal stem cells (MSCs) and their roles in osteoporosis, the genetics and transcriptional regulation involved in the pathogenesis of osteoporosis, the signaling pathways associated with osteoporosis and trends in using stem cells as cell-based therapy for osteoporosis is summarized. Imbalance of bone homeostasis Bone formation generally comprises three basic steps: synthesis of extracellular protein matrix (osteoid) by osteoblasts; matrix mineralization by coating the protein matrix with a layer of mineral, and predominantly calcium phosphate in the form of crystals of hydroxyapatite; and bone remodeling, which is a process that occurs throughout human life. Bone remodeling is essential to maintain the integrity of the skeleton and serves as storage for mineral homeostasis [11]. Via an interactive process called coupling, this process is balanced by the functions of bone-resorbing osteoclasts and bone-forming osteoblasts in early adulthood. When the bone loses its mineral content and density and develops osteopenia, this culminates in osteoporosis, which is associated with a risk of bone fractures [11,12]. Osteoporosis is normally related to increasing age, consistent with the fact that most of the older population is affected by this condition. It has been shown that genetics could be another explanation for the pathogenesis of osteoporosis. 
The results of laboratory studies have indicated that osteoporosis is caused by an imbalance of the coupling interactive process, with increased bone resorption relative to bone formation. In this regard, the imbalance is a consequence of changes at the cellular level, by which osteoclast development is enhanced but osteoblast differentiation is insufficient because of impaired activity and enhanced apoptosis [13,14]. Underlying transcriptional regulation and genetics Although MSCs have the ability to undergo multipotent differentiation, cell fate determination and differentiation toward either osteoblasts or adipocytes are well regulated by lineage-specific transcription factors such as runt-related transcription factor 2 (Runx2) and osterix (Osx) for osteoblasts and peroxisome proliferatoractivated receptor gamma (PPARγ) for adipocytes, suggesting an inverse correlation between osteogenesis and adipogenesis [15][16][17][18][19]. During these processes, intrinsic (genetic) and/or environmental (local and/or systemic) conditions interplay to specify cell fate toward one of the possible lineages. Several lines of evidence have demonstrated that osteoporotic MSCs have defects in intrinsic signals that cause functional alterations, leading to poor osteogenic differentiation capacity and favoring increased adipogenesis [13,20]. A recent study of microarray analyses demonstrated that MSCs from elderly patients with primary osteoporosis have a distinct transcriptome compared with control MSCs and elderly donor non-osteoporotic MSCs, as shown by enhanced mRNA expression of osteoporosis-associated genes (RUNX2, lipoprotein receptor-related protein 5; LRP5, collagen type 1 alpha1; COL1A1), genes involved in osteoclastogenesis (CSF1, PTH1R), and genes coding for inhibitors of wingless-type MMTV integration site family (Wnt) and bone morphogenetic protein (BMP) signaling, indicating intrinsic deficiencies in self-renewal and differentiation potential in osteoporotic MSCs [21]. Interestingly, transcriptional alterations may reflect epigenetic changes as part of the process of age-related osteoporosis [21]. Nevertheless, the regulatory mechanisms underlying the pathogenesis of osteoporosis have been linked to genetics. Several approaches, including linkage analysis in families, animal studies, candidate gene association studies, and genome-wide association studies (GWAS), have been used to identify the genes responsible for osteoporosis [22]. Linkage analysis is the classical approach that is used to study BMD variation [23,24]. Linkage studies in animals such as mice, rats and primates provide another way to identify genes that regulate bone density and other phenotypes relevant to osteoporosis. The Alox15 gene has been found to regulate bone density in mice, and this finding was confirmed in Alox15-knockout mice showing increased BMD [25]. Candidate gene association studies have been widely used in the field of osteoporosis, analyzing polymorphic variants in candidate genes and relating them to the carriage of a specific allele or haplotype. Candidate genes such as sclerostin (SOST), COL1A1, ESR1, LRP5, TGF-β1 and VDR have been extensively investigated on a large scale [26][27][28][29][30][31][32][33][34][35][36][37][38][39][40]. Due to advances in genotyping technologies, GWAS have been applied to study osteoporosis, and large numbers of single-nucleotide polymorphisms (SNPs) have been identified. A GWAS by Richards et al. 
reported the identification of SNPs that are significantly associated with decreased BMD and increased risks of osteoporotic fractures and osteoporosis when they are located near the TNFRSF11B (osteoprotegerin or OPG) and LRP5 genes [41]. Another study, by Styrkarsdottir et al., used an extended GWAS to identify four new genome-wide significant loci; this loci were near the SOST gene at 17q21, the MARK3 gene at 14q32, the SP7 gene at 12q13 and the TNFRSF11A (receptor activator of nuclear factor kB or RANK) gene at 18q21 and were associated with the heritability of BMD [42]. However, genetic studies of osteoporosis-susceptibility genes need to be further explored. Signaling pathways associated with osteoporosis Over several decades, signaling pathways in bone homeostasis have been extensively studied. Dysregulation of these signaling pathways is associated with bone diseases, including osteoporosis. Major signaling pathways that govern the bone regenerative process are OPG/RANK/RANK ligand (RANKL), Wnt, and BMP signaling. Bone homeostasis is maintained by the balanced function of osteoblasts and osteoclasts. The key regulators involved in this balancing process, equilibrating between bone formation and bone resorption, have been extensively explored. The OPG/RANK/ RANKL system is one of the most important signaling pathways in bone metabolism ( Fig. 1). Dysregulation of the OPG/RANK/RANKL system has been reported in osteoporosis. OPG, recently designated as TNFRSF11B and serving as a member of the tumor necrosis factor (TNF) receptor family, was first identified as a crucial component that is secreted by osteoblasts; bone marrow stromal cells [43]; and other cells, such as regulatory T (T reg) cells [44]. OPG protects the skeleton from excessive bone resorption by acting as a soluble decoy receptor that can bind to RANKL [45]. The binding of OPG and RANKL subsequently prevents RANKL from binding to its receptor, RANK [43]. The overexpression of the gene encoding OPG results in the development of high bone mass and reduced osteoclast numbers and activity [46]. OPG-deficient mice demonstrate osteoporosis, with an excessive number of osteoclasts [47,48]. RANKL functions as an osteoclast-activating factor secreted by activated T cells and represents a potent molecule that binds to RANK, which is expressed on osteoclast precursors, known as preosteoclasts [49]. RANKL-RANK binding drives osteoclast differentiation and maturation. The activation of RANK through the binding of RANKL induces the activation of transcription factors such as c-fos, NFAT, and nuclear factor kappa B (NF-kB) in preosteoclasts and initiates several downstream signaling pathways, and especially the NF-kB pathway [50,51]. RANKL-deficient mice exhibit osteopetrotic bones, or thickened bones, due to a defect in osteoclast development [52]. Moreover, RANKL relies on the presence of macrophage colony-stimulating factor (M-CSF), which is a cofactor for RANKL/ RANK-mediated osteoclastogenesis [53]. However, experimental data revealed that RANKL alone could stimulate bone resorption in mice lacking M-CSF [54]. In contrast, M-CSF alone is insufficient to activate osteoclasts [55]. Therefore, RANKL plays a crucial role in osteoclastogenesis, and this phenomenon is required for bone resorption. Under physiologic conditions, OPG/RANKL is in equilibrium and preserves bone homeostasis. The OPG/RANKL ratio is an important factor to use to determine bone mass and skeletal integrity [56,57]. 
Under osteoporotic conditions, RANKL is upregulated, which is associated with downregulation of OPG [58]. Moreover, several cytokines, and particularly TNF-α, IL-1, IL-4 and IL-6, are elevated in osteoporosis [59]. These proinflammatory cytokines modulate the RANKL/RANK ratio by stimulating and upregulating RANKL expression on T cells.

Figure 1 Bone homeostasis regulation by the OPG/RANK/RANKL system. RANKL, which is secreted by activated T cells, functions as an osteoclast-activating factor by binding to its receptor, RANK, which is expressed on preosteoclasts. RANKL-RANK binding induces the activation of several transcription factors in preosteoclasts and initiates several downstream signaling pathways that drive osteoclast differentiation and maturation. OPG, which is secreted by osteoblasts, bone marrow stromal cells, and T reg cells, acts as a soluble receptor that can bind to RANKL and subsequently prevents RANKL-RANK binding. Under physiologic conditions, OPG/RANKL is in equilibrium and preserves bone homeostasis. Under osteoporotic conditions, RANKL is upregulated, which is associated with downregulation of OPG. Several proinflammatory cytokines secreted from T helper cells (Th1/Th2/Th17) stimulate and upregulate RANKL expression and mediate osteoclast formation and activity, which are linked to increased bone resorption.

Interestingly, this emerging role of the OPG/RANK/RANKL system is not only relevant to bone biology but also extends into the immune system. The cross-regulation between bone and immune cells is considered as a bone immunological niche [60]. Considering bone resorption, data in the literature have revealed that impairment of T cell subpopulations and their proinflammatory cytokine patterns are implicated in the pathogenesis of osteoporosis. At the bone tissue level, Th1 and Th2 cells play a role through their secreted cytokines, including RANKL, mediating osteoclast formation and activity, which are linked to bone resorption [61]. Furthermore, Th17 cells, a distinct lineage of proinflammatory T helper cells, were more recently identified as a potent T cell subpopulation that has a role in bone destruction [62]. Th17 cells have been found to increase in number in many bone diseases, and particularly osteoporosis [63]. Th17 cells produce IL-17, which functions in mediating osteoclast differentiation [63,64]. It has been shown that Th17 cells also produce RANKL, directly contributing to bone loss [62]. Additionally, the Th17 population in the bone marrow and peripheral blood is large in estrogen-deficient osteoporosis [65]. Collectively, Th1/Th2/Th17 cells and their cytokines might play a key role as potent pro-osteoclastogenic mediators underlying the pathogenesis of osteoporotic development.

Wnts are secreted glycoproteins that, when bound to their cognate receptors, can stimulate intracellular signaling cascades that play important roles in cell developmental processes, including osteogenesis [66,67]. The binding of Wnt ligand to the Frizzled receptor and the LRP6 coreceptors to form a complex stimulates the canonical Wnt/β-catenin pathway, whereas binding to the ROR2/RYK coreceptors stimulates noncanonical Wnt signaling [68]. In fact, signaling induced by Wnt/β-catenin is well established and generally plays a role in osteoblastogenesis by promoting the commitment and differentiation of MSCs toward the osteoblast lineage, which in turn suppresses adipogenesis through the inhibition of PPARγ-induced genes [66,69].
Wnt/ β-catenin signaling plays a role in osteoblast maturation and indirectly reduces osteoclastogenesis by stimulating the secretion of OPG, a natural inhibitor of RANKL [70,71]. Considering the components of Wnt signaling, humans and mice with altered expression of LRP5 and Wnt10b have alterations in bone mass [72,73]. Loss-of-function LRP5 mutation causes abnormality in bone formation [72]. A genetic study found that WNT10B polymorphisms have an impact on low bone mass and osteoporosis risk [74]. As previously stated, Wnt10b seems to be the most positive modulator of bone regeneration and homeostasis. Supporting these findings, a decreased number and decreased function of osteoblasts have been found in Wnt10b -/knockout mice, coupled with a 30 % reduction in bone volume and BMD [75,76]. Stevens et al. have found that heterozygous Wnt10b +/mice showed a significant reduction in trabecular bone at 6 months of age, and both the number of bone marrow-derived MSCs and osteoblast differentiation were affected [76]. In another study, signaling through Wnt2, Wnt3 or Wnt3a induced proliferation and maintained the self-renewal of MSCs, whereas Wnt5a, Wnt5b or Wnt11 supported osteogenesis [77,78]. β-catenin deficiency arrests osteoblast development at an early stage in mesenchymal osteoblastic precursors and impairs the maturation and mineralization of committed osteoblasts [67,79]. Rodríguez et al. reported that osteoporotic MSCs had a diminished proliferation rate as well as decreased mRNA expression of Wnt signaling and the downstream components GSK-3β, LRP6 and OSX [28,80]. In addition, dickkopf-1 (DKK-1) and SOST are endogenous inhibitors of the canonical Wnt/β-catenin pathway that is specific to bone [70,81]. Genes coding for these inhibitors show enhanced expression in osteoporotic MSCs in humans [35]. Clinically, the serum DKK-1 level has been found to be significantly higher in patients with low BMD and postmenopausal osteoporosis [82]. Several findings have revealed crosstalk between Wnt signaling and other signaling factors, such as BMPs. In particular, BMP-2 has a synergistic effect with Wnt ligands and β-catenin, inducing bone formation through Wnt/β-catenin signaling and downstream T cell factor/lymphoid enhancer factor (TCF/LEF) transcriptional activity [83,84]. In addition to Wnt signaling, the BMPs, belonging to the transforming growth factor beta (TGF-β) superfamily, are responsible for numerous cell regulatory processes, including osteogenic differentiation and regulation of bone formation [85]. Upon binding of BMP ligand, signal transduction is initiated through the interaction between two serine-threonine kinase cell surface receptors (BMP receptors (BMPRs)). In particular, BMPR-IA and BMPR-IB are involved in MSC differentiation [86]. BMP-2, BMP-4, BMP-7, BMP-9, and BMP-13 are commonly studied in the context of MSC differentiation-related osteoblastogenesis and bone formation [87,88]. Notably, BMP-2 promotes Runx2 expression in mesenchymal osteoprogenitors and also promotes Osx and distal-less homeobox 5 (Dlx5) expression in osteoblasts [89][90][91][92][93]. BMP-3 is an exception because it inhibits osteogenesis [94]. BMPs function as both autocrine and paracrine factors, and their synthesis is induced by BMPs themselves via local feedback mechanisms. Evidence has shown that MSCs from osteoporosis patients are impaired in function and this alteration is associated with BMP signaling [95,96]. 
However, BMP antagonists have been described, including noggin (NOG) and gremlin (GREM). Overexpression of NOG, as shown in transgenic mouse studies, results in decreased BMD because of increased inhibition of bone formation [97,98]. SNPs in the NOG gene are associated with osteoporosis-related phenotypes in humans [99]. GREM is detectable in the skeleton, and its overexpression causes osteopenia and fractures [100]. Genetic variants of GREM2 are associated with BMD, and GREM2 is considered a susceptibility gene for osteoporosis [101]. Osteoporosis treatments Current options for the treatment of osteoporosis are predominantly drug-based agents that either inhibit bone resorption or directly stimulate bone generation to increase bone mass. Non-pharmacological treatments via calcium and vitamin D consumption have been given to patients who have a high risk of osteoporosis related to insufficient calcium and vitamin D intake and postmenopause [102,103]. Pharmacological treatments are given to patients who are diagnosed with osteoporosis who have already had a fracture or who have a high risk of osteoporotic fracture or re-fracture. Bisphosphonates, which are synthetic compounds that decrease bone resorption by promoting osteoclast apoptosis [104,105], are the most common medications prescribed as firstline drugs for osteoporosis treatment. Several bisphosphonates have been approved as drugs for the treatment of osteoporosis, including alendronate [106], ibandronate [107], risedronate [108], and zoledronate [109]. However, serious side effects, such as osteonecrosis of the jaw and atypical femoral fractures, have been described in patients under long-term bisphosphonate treatment [110,111]. Although serious adverse events are rare for current antiresorptive compounds and do not represent a major concern, the development of drugs with higher efficacy in improvement of bone quality and prevention of fractures is still necessary. Other antiresorptive drugs that can serve as alternatives for osteoporosis treatment include denosumab, a RANKL inhibitor that blocks the main pathway involved in osteoclast formation and activation [112], and calcitonin, a naturally occurring peptide [113]. Hormone therapy, such as therapy with estrogen [114] and with selective estrogen receptor modulators (SERMs) acting as estrogen agonists, such as raloxifene [115], has been used in postmenopausal women to slow the bone breakdown process, maintain bone density, and reduce fracture risk. However, long-term side effects, and particularly the development of breast cancer, and risks of cardiovascular events and thromboembolism limit the use of estrogen and SERMs as treatment strategies for osteoporosis [116,117]. In contrast with antiresorptive drugs, anabolic drugs that can increase bone formation, rather than preventing bone loss, are of interest to rebuild bone, increase bone strength, and reduce the risk of fractures in osteoporosis patients. To date, approved anabolic drugs have been limited to parathyroid hormone (PTH) and its analog, teriparatide (recombinant human PTH(1-34)), which are considered as treatments for patients with severe osteoporosis. Nevertheless, it was reported that administering a high dose of teriparatide for a long period increased the incidence of osteosarcoma in an animal study [118]. Although evidence of osteosarcoma has not been reported in patients taking teriparatide, treatment with teriparatide is not allowed beyond 2 years according to the FDA. 
One therapeutic drug, strontium ranelate, which is thought to have dual actions on bone metabolism, both increasing bone formation and decreasing bone resorption, represents as a potential agent for the treatment of postmenopausal osteoporosis to reduce the risk of vertebral and hip fractures [119]. Considering the costs and disadvantages of prolonged treatment with drugs and hormones in osteoporosis patients, cell therapy may be a good alternative candidate therapeutic strategy to treat osteoporosis in the future. Stem cell-based therapy for osteoporosis Cell therapy has attracted considerable clinical attention for the treatment of various diseases for many decades. Stem cells are believed to be an ideal source of cells for cell replacement therapy for bone diseases due to their properties of self-renewal and plasticity, which can repair or regenerate damaged tissues. Candidate stem cell types include embryonic stem (ES) cells, induced pluripotent stem (iPS) cells and somatic stem cells such as MSCs. The use of ES and iPS cells is limited due to ethical issues and virusbased derivation methods [120]. It seems likely that the use of MSCs overcomes such limitations and is more practical in other disease models. In recent years, MSCs have become dramatically interesting for the treatment of osteoporosis. MSCs have the ability to self-renew and to grow into specific tissues, such as cartilage, bone, and adipose tissue. Human MSCs are defined by their phenotypic expression of CD105, CD73 and CD90; their absence of expression of hematopoietic markers such as CD45, CD34, and CD14; and their ability to differentiate into osteogenic, adipogenic and chondrogenic lineages under permissive conditions [121]. It has been reported that MSCs can avoid allogeneic rejection by being hypoimmunogenic, modulating the T cell phenotype, and creating an immunosuppressive locus [122]. Moreover, MSC-derived osteogenic cells show immunoprivileged and immunomodulatory properties similar to those of their parental MSCs [123]. With regard to the pathogenesis of osteoporosis, resulting in bone mass reduction, transplantation of MSCs might promote new bone formation and strengthen the bone, contributing to improvement of bone quality and prevention of fractures. After transplantation, MSCs contribute to bone formation by two possible mechanisms of action: (1) MSCs' homing to a damaged site or pathologic area and then differentiating into bone-forming cells to repair the degenerated tissue and (2) MSCs' acting in a paracrine manner by secreting certain growth factors that modify the environment and recruit resident cells to repair the degenerated tissue [124,125]. Sources of mesenchymal stem cells: advantages and disadvantages Bone marrow-derived MSCs Bone marrow is the most commonly used tissue source of adult MSCs. Bone marrow-derived MSCs (BM-MSCs) have been extensively studied in bone regeneration and repair due to their high efficiency in osteogenic differentiation. Studies in animal models have revealed that both allogeneic and autologous BM-MSC transplantation is applicable in the treatment of osteoporosis. Ichioka et al. demonstrated that normal allogeneic BM-MSCs could increase trabecular bone and attenuate the loss of BMD after being directly injected into the bone marrow cavity of an irradiated P6 substrain of senescence-accelerated mice (SAMP6), an osteoporotic mouse model that exhibits age-dependent inhibition of osteoblastogenesis and osteoclastogenesis along with enhanced adipogenesis [126]. 
A similar result was also observed in an ovariectomy (OVX)-induced rat model of osteoporosis after receipt of allogeneic BM-MSCs isolated from healthy rats [127]. Autologous BM-MSC transplantation was reported to improve bone formation and to strengthen osteoporotic bone in an OVX-induced rabbit model of osteoporosis [128] as well as in goats with long-term estrogen deficiency, mimicking the postmenopausal osteoporosis that occurs in humans [129]. However, use of autologous BM-MSCs for osteoporosis treatment in elderly patients is limited due to the age-related decline in the overall BM-MSC number [130]. Recently, use of autologous BM-MSCs for the treatment of osteoporosis has been performed in clinical trial study. Autologous BM-MSCs were collected 30 days before infusion, and the cells were cultured under GMP conditions to establish the dose range. In this study, the cells were subjected to the process of fucosylation before intravenous infusion into osteoporosis patients. However, this study is still in the process of recruiting participants and is thus not yet completed (ClinicalTrials.gov Identifier: NCT02566655). Adipose tissue-derived MSCs Adipose tissue provides an attractive source of MSCs that has become increasingly popular in many stem cell applications. Adipose tissue-derived MSCs (AD-MSCs) are isolated from white adipose tissues via a minimally invasive approach and can be expanded and differentiated into classical mesenchymal lineages involved in adipogenesis, osteogenesis, and chondrogenesis [131,132]. AD-MSCs are more easily isolated and more abundant and produce higher yields in terms of cell number compared with BM-MSCs [133]. However, the yield of AD-MSCs and their proliferative and differentiation capacities vary depending on the tissue harvesting site [134] and the age of the donor [135]. For application in cell therapy for osteoporosis, AD-MSCs were reported to function as an effective autologous cell-based approach for the treatment of osteoporosis. SAMP6 osteoporosis mice showed significant improvement in several trabecular bone parameters after a single intratibial transplantation of isogenic AD-MSCs [136]. A preclinical study of the in vivo function of human AD-MSCs by Cho et al. revealed that human AD-MSCs could prevent OVX-induced bone loss in nude mice over 8 weeks, even though there was no evidence of long-term engraftment of infused human AD-MSCs in the bone of recipient mice [137]. The effect of human AD-MSC therapy likely occurs in a paracrine manner by the secretion of various bone-related growth factors, e.g., hepatocyte growth factor, BMP-2, and RANKL, and extracellular matrix (ECM) proteins, e.g., fibronectin, which might promote osteogenic differentiation, bone remodeling and repair in the recipients [124]. Moreover, Xinhai et al. demonstrated that autologous AD-MSCs enhanced bone regeneration in OVX-induced rabbit models of osteoporosis due to not only their own osteogenic differentiation but also their promotion of osteogenesis and inhibition of adipogenesis by osteoporotic BM-MSCs through activation of BMP-2 and the BMPR-IB signaling pathway [125]. Recently, a clinical trial has studied the use of human AD-MSCs for the treatment of proximal humeral fractures in individuals over 50 years old, representing a model for fractures of osteoporotic bone. In this study, AD-MSCs were wrapped around hydroxyapatite microgranules embedded in a fibrin gel to allow cellularized composite graft augmentation. 
Clinical/radiological follow-up was performed after 6, 9 and 12 months, and functional assessment was performed after 6 weeks and 6 and 12 months using the Quick DASH score and the Constant score. Unfortunately, the study was terminated, and no results are available (ClinicalTrials.gov Identifier: NCT01532076). Perinatal-derived MSCs Although BM-and AD-MSCs are effective sources, the therapeutic potential of these adult MSCs can be affected by the donor's lifestyle and age. Perinatal tissues are alternative sources of MSCs that have attracted growing interest in bone regenerative medicine [138]. Not only are these cells younger than adult MSCs, but perinatalderived MSCs also have the major advantage of an easy and noninvasive harvesting procedure without any risk to the donor. A comparative study of MSCs isolated from different perinatal tissue sites, including the umbilical cord, umbilical cord blood (UCB), amnion, and chorion, revealed that these tissues exhibit similar characteristics to BM-MSCs, including similar phenotypic features, growth properties, differentiation capacities, secretory protein profiles, and low immunogenic properties [138]. However, these stem cell sources are still limited by their low capacity to differentiate compared with BM-and AD-MSCs, and they have not been clearly examined in preclinical studies. Placenta-derived MSCs The placenta is an easily accessible source of perinatal MSCs that provides a high yield of MSCs. Placenta-derived MSCs (PL-MSCs) express common markers of MSCs and exhibit adipogenic, osteogenic, and neurogenic differentiation capacities [139]. Sanvoranart et al. demonstrated that PL-MSCs responded to bortezomib, a chemotherapeutic agent that improves osteolytic lesions in multiple myeloma, via enhancement of osteogenic differentiation, similarly to BM-MSCs [140]. This finding suggests the potential therapeutic application of PL-MSCs in osteopenia and osteoporosis patients. Umbilical cord-derived MSCs The umbilical cord contains various cell types, including vessels, connective tissues, and Wharton's jelly. After isolation, these heterogeneous cells are observed to possess differential vimentin and cytokeratin expression in culture, but not variable capacities to differentiate into chondrogenic, adipogenic, and osteogenic lineages [141]. In vivo bone formation by umbilical cord-derived MSCs (UC-MSCs) was demonstrated by Diao et al., who loaded human UC-MSCs into scaffolds, implanted the scaffolds into BALB/c nude mice subcutaneously and found that the human UC-MSCs could efficiently form bone after implantation for 12 weeks [142]. Wharton's jelly-derived MSCs Wharton's jelly is the mucoid connective tissue that surrounds the umbilical cord vein and that functions in the protection of the vasculature from pressure. Fibroblast-like cells were first isolated from Wharton's jelly by McElreavey et al. in 1991 [143]. These fibroblast-like cells were characterized as MSCs due to their expression of MSC phenotypic markers and their capacities to differentiate into osteogenic, adipogenic, and chondrogenic lineages [144]. A comparative study of human derived-MSCs demonstrated that Wharton's jelly-derived MSCs (WJ-MSCs) exhibited the strongest inhibitory effects on T cell proliferation and the weakest expression of immune-related genes, such as genes encoding major histocompatibility complex (MHC) II and human leukocyte antigen (HLA), compared with BM-, AD-, and PL-MSCs [145]. 
These immunomodulatory and immunosuppressive properties of WJ-MSCs make them more applicable for clinical use as cell therapy. A study in canines by Kang et al. revealed that canine WJ-MSCs were capable of forming new bone in recipients with bone defects after orthotopic implantation with beta tricalcium phosphate (β-TCP) for 20 weeks. The capacity of WJ-MSCs to undergo osteogenic differentiation in vitro and new bone formation in vivo was similar to that of other MSCs isolated from canine bone marrow, adipose tissue, and UCB [146]. Hence, WJ-MSCs can potentially be used in clinical bone engineering for further treatment of bone defect diseases. Trends in stem cell therapy for osteoporosis The main hurdles for stem cell-based therapy for osteoporosis are long-term engraftment and the uncertainty of stem cell fate after transplantation. Certain reports have revealed that long-term engraftment of MSCs appears to be low and that the function of MSCs might be mediated through a paracrine mechanism, rather than through sustained engraftment in injured tissues [137,147,148]. Senescence of MSCs has been investigated as one of the key factors affecting the growth of MSCs in vitro, possibly hampering the cells' long-term survival after transplantation [149]. Many ongoing studies are aiming to develop high-quality in vitro MSC cultures to increase the survival and engraftment rates. These developing methodologies include modification of MSCs by certain factors and improvement of in vitro MSC culture systems and differentiation procedures. The adjustment of culture conditions before transplantation, such as hypoxic preconditioning of MSC cultures in vitro, has been performed to increase the proliferation rate and to enhance the differentiation potential as well as to induce mobilization and homing of MSCs following transplantation [150,151]. Genetically modified MSCs have been developed to ensure their homing, differentiation capacity, survival, and long-term engraftment at the injury sites of recipients. Immortalization of MSCs by knockdown of p53, a cell cycle regulator, in combination with overexpression of human telomerase reverse transcriptase (hTERT), the catalytic component of telomerase that leads to telomere elongation, could promote proliferation and increase the lifespan of MSCs while retaining the cells' differentiation properties [152]. A combination of cell and gene therapy by overexpression of certain growth factors in MSCs has been promoted as being advantageous for MSC-based therapy [153,154]. For example, the ectopic expression of basic fibroblast growth factor (bFGF) and platelet-derived growth factor B (PDGF-B) enhanced the in vitro proliferation and osteogenesis of BM-MSCs while inhibiting their adipogenesis [154]. MSCs are also an attractive cellular vehicle for the in vivo delivery of therapeutic genes, such as the genes encoding BMP-2 and RANK-Fc (a soluble inhibitor of RANKL), which could increase bone formation in osteoporosis animal models [155,156]. Upon transplantation in vivo, the expressed transgene exerted its effect on both the host mesenchymal tissue (paracrine effect) and the transplanted MSCs (autocrine effect), contributing to the induction of bone formation in the recipients. These strategies of MSC modification are advantageous for the treatment of osteoporosis, which is characterized by increased bone resorption, and the therapies aim to maintain bone density and reduce the risk of fractures. 
To achieve effective MSC-based therapy for osteoporosis, the poor bone marrow homing and engraftment of MSCs after their systemic transplantation have to be improved. One emerging approach to overcome these limitations involves the overexpression of molecules involved in the bone homing of transplanted MSCs. Ectopic expression of α4 integrin on MSCs greatly increased bone marrow homing after systemic injection through the tail vein in immunocompetent mice. α4 integrin forms a heterodimer with endogenous β1 integrin and functions as a cell adhesion molecule, interacting with ECM proteins such as fibronectin and vascular cell adhesion protein 1 (VCAM-1) and thereby mediates the bone marrow homing and engraftment of MSCs [157]. Another study demonstrated that genetic modification of MSCs with CXCR4, the receptor for stromal-derived factor 1 (SDF-1), which mediates the bone marrow homing and engraftment of hematopoietic stem cells (HSCs), could also increase the bone marrow homing of MSCs and restore bone formation in mice with glucocorticoid-induced osteoporosis [158]. The development of in vitro differentiation procedures is quite important for MSCs used as cell therapy, especially for the treatment of localized osteoporosis and healing fractures resulting from osteoporosis. Technology consisting of three-dimensional (3D) in vitro culture models using biomaterial scaffolds has been developed, with the aim of mimicking the in vivo microenvironment to induce efficient tissue formation in vitro [159]. The biomaterial scaffolds must be slowly biodegradable and can act as a biocompatible matrix to support cell growth. In a recent preclinical study, Müller et al. demonstrated the combination of osteoconductive biomaterials with genetically modified human BM-MSCs in a bone defect rat model. The BM-MSCs were transduced with BMP-2 and loaded into β-TCP scaffolds before implantation into recipient rats. The researchers showed that when combined with BMP-2-transduced BM-MSCs, the scaffolds provided better results than scaffolds with recombinant BMP-2-treated BM-MSCs did [160]. This combination may represent a promising strategy for healing large-area bone defects in osteoporosis. Alternative approaches involving improvement of native BM-MSCs or the local biologic environment at defect sites are of interest and are under investigation. Using a biomaterial scaffold combined with gene delivery for BMP-7 and PDGF-B expression has been shown to enhance the recruitment of BM-MSCs to defect sites and to promote their differentiation into osteoblasts, resulting in increased new bone formation in segmental femoral defects in ovariectomized rats [161]. α5β1 integrin, which mediates osteoblast differentiation in adult human MSCs through ECM-integrin interaction, is considered to be a target for promoting the osteogenic differentiation of BM-MSCs. The use of agonists that target α5β1 integrin can promote MSC recruitment and differentiation into osteoblasts and can also increase the survival of mature osteoblasts, leading to increased bone formation and repair in vivo [162]. Small molecules and microRNAs (miRNAs) are topics of interest in this area and may be applicable for osteoporosis treatment. Many miRNAs have been found to regulate the osteogenic differentiation of MSCs by various mechanisms [163]. 
Several miRNAs, e.g., miR-27a, miR-346, and miR-1423p, have been demonstrated to directly target inhibitors of the Wnt/β-catenin pathway, such as glycogen synthase kinase 3 beta (GSK3-β), SFRP1, and APC [164][165][166], resulting in modulation of the Wnt/β-catenin pathway and promotion of osteogenic differentiation of MSCs. Certain miRNAs, e.g., miR-20a, promote osteogenic differentiation by downregulating genes involved in adipogenic lineages, such as the gene encoding PPARγ [167]. By contrast, certain miRNAs negatively regulate osteogenic differentiation by targeting osteogenic genes, e.g., RUNX2, OSX, and SATB2. Inhibition strategy using an antagomir sequence against these miRNAs might attenuate the expression of the osteogenic genes and subsequently induce osteogenic differentiation [168][169][170]. The discovery of small molecules that target MSCs for fate determination by using high-throughput screening (HTS) techniques provides an advantage in drug development for osteoporosis treatment [171]. Small molecules may directly stimulate signaling pathways or target genes involved in osteogenic differentiation of MSCs [172][173][174]. For example, simvastatin, a 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitor, could promote osteogenic differentiation by activating the BMP-2 pathway in an ovariectomized rat model, leading to increasing BMD and bone volume [173,175]. Given this finding together with their other advantages, i.e., their small size, high stability, non-immunogenicity, and, most of all, cell permeability, small molecules have undoubted potential for treating osteoporosis. Conclusion Osteoporosis is a systemic bone disorder defined by low BMD occurring due to an imbalance of osteoclastic and/or osteoblastic activities. The current therapeutics for osteoporosis are based on medicine for the prevention of further bone loss. The serious side effects caused by prolonged treatment have led to a need for an alternative approach for the life-long treatment of osteoporosis. Cell therapy appears to fulfill this demand, and MSCs provide a promising source of cells for clinical application in the treatment of osteoporosis. MSCs have been widely used for osteoporosis research as well as in other bone diseases due to not only their intrinsic ability to differentiate into osteoblasts but also their availability and ease of isolation, with high cell yields, from various tissues. Moreover, the immunoprivileged and immunosuppressive properties of MSCs make them more applicable in allogeneic cell replacement therapy. To date, over 400 clinical trials of MSC therapies have been registered with ClinicalTrials.gov (http:// www.clinicaltrials.gov/); these trials have involved many diseases and conditions, such as bone disorders (osteoarthritis, osteogenesis imperfecta, osteoporosis, and rheumatoid arthritis), diabetes mellitus, graft-versus-host disease, and spinal cord injury. However, many questions remain unanswered, and many features have to be validated, such as the long-term engraftment and senescence of MSCs and suitable sources of MSCs for transplantation. In conclusion, much more work is needed to clarify the clinical applications of MSCs; however, the evidence certainly indicates that MSCs will play an important role in cell therapy for osteoporosis in the near future.
SURVIVAL OF LACTOBACILLUS BULGARICUS AND BIFIDOBACTERIUM ANIMALIS IN YOGHURTS MADE FROM COMMERCIAL STARTER CULTURES DURING REFRIGERATED STORAGE

All over the world, fermented dairy products have been consumed for nutrition and maintenance of good health for a very long time. This study evaluated the survival of Lactobacillus delbrueckii ssp. bulgaricus and Bifidobacterium animalis ssp. lactis BB-12 in yoghurts after manufacture, over a shelf-life of up to 21 days at 4 °C, which is the storage period most widely accepted by consumers. The titratable acidity and pH showed the same patterns of increase or decline after manufacturing and storage of the yoghurts. There was a significant difference (p <0.05) in acidity between yoghurts in glass bottles and plastic cups. The increase in the numbers of lactobacilli and bifidobacteria and their survival during storage depended on the species and strain of the associative yoghurt bacteria (control: yoghurt lactic acid bacteria only; experimental: yoghurt culture plus Bifidobacterium animalis ssp. lactis BB-12) and on the packaging material (glass bottle versus plastic cup). Counts of bifidobacteria were lower than counts of Lactobacillus delbrueckii ssp. bulgaricus (190 to 434 × 10⁷ cfu.g⁻¹ at day 1), slowly increased (p <0.001) to a maximum level on day 7 (294.3 to 754 × 10⁶) and then slowly declined to 6.33 × 10⁷ cfu.g⁻¹ in glass bottles and 2.33 × 10⁷ cfu.g⁻¹ in plastic cups, respectively. Lactobacillus delbrueckii ssp. bulgaricus multiplied better in glass bottles than in plastic cups, as observed during the experimental period in the group containing Bifidobacterium animalis ssp. lactis BB-12. At the end of the storage period at 4 °C, viable counts of lactobacilli were higher (p <0.001) in glass bottles. All the yoghurts contained the recommended levels of lactobacilli and bifidobacteria (10⁷ cfu.g⁻¹) at the end of the storage period (21 d).

INTRODUCTION

The first foods with probiotic bacteria were yogurts, and fermented milks are still the most important food vehicle for the delivery of probiotic bacteria. However, other foods have now appeared which carry probiotic bacteria. Numerous entries in the functional food market are linked to beverages, such as unfermented milk and fruit juices. Cheese is also gaining acceptance in the market. In addition to these commercial products, many research projects have been carried out which propose the addition of probiotics to chocolate, sausages, cereal products, dried products and vegetables. A multitude of food products contain lactic cultures and are subject to enrichment by probiotic bacteria (Hamann and Marth, 1983; Kailasapathy et al., 2008; Lovayová, 2007; Champagne, 2009).

Probiotics are applied as supporting nutritional supplements in the majority of chronic gastrointestinal diseases, where modulation of the microbial flora can positively affect the health status and quality of life of these patients. Considering the safety of probiotics, exploring their still broader preventive and therapeutic use, starting as early as childhood, has made them increasingly attractive as foods and dietary supplements (FAO/WHO, 2002; Lovayová et al., 2008). At present, most known probiotic organisms are bacteria belonging to the Lactobacillus and Bifidobacterium genera.

The viability of lactobacilli and Bifidobacterium spp. in yogurts depends on a number of factors, such as the strain of probiotic bacteria incorporated and the yogurt starter cultures used, as well as the fermentation time and storage conditions, pH of the yogurt (post-acidification during storage), sugar concentration (osmotic pressure), milk solids content, availability of nutrients, the presence of hydrogen peroxide, dissolved oxygen content (especially for Bifidobacterium spp.), buffering capacity and beta-galactosidase concentration in the yogurt (Dave and Shah, 1998; Shihata and Shah, 2000). Lactobacillus delbrueckii ssp. bulgaricus is one of the two bacteria necessary for the production of yoghurts (Kandler and Weiss, 1984; Heller, 2001). Shah (2000) reported that it is important to monitor the survival of probiotic lactobacilli because a number of products contain only a few viable bacteria by the time they reach the market.

Interest in the bifidobacteria started more or less contemporaneously, when Tissier described in the feces of breastfed infants the predominance of bacteria that produced lactic and acetic acid; these bifurcated bacteria, which he named Bacillus bifidus (Mitsuoka, 1990), were later assigned to the genus Bifidobacterium.

Bifidobacterium BB-12® (BB-12®) is a catalase-negative, rod-shaped bacterium. It was included in the cell culture bank of Chr. Hansen in 1983. At the time of isolation, BB-12® was considered to belong to the species Bifidobacterium bifidum. Modern molecular classification techniques reclassified BB-12® as Bifidobacterium animalis and later to a new species, Bifidobacterium lactis. The species B. lactis was later shown not to meet the criteria for a separate species and was instead included in Bifidobacterium animalis as a subspecies. Today, BB-12® is classified as Bifidobacterium animalis subsp. lactis. Despite the change in the name over the years, the strain BB-12® has not changed (Garrigues et al., 2005).

It is a strain that was specially selected by Chr. Hansen for the production of probiotic dairy products. BB-12® has been used in infant formula, dietary supplements and fermented milk products worldwide. This strain is technologically well suited, expressing good fermentation activity, high aerotolerance, good stability and a high acid and bile tolerance, also in freeze-dried form in dietary supplements. Furthermore, BB-12® does not have adverse effects on taste, appearance or the mouth feel of the food and is able to survive in the probiotic food until consumption (Garrigues et al., 2010).
Sample preparation and yoghurt technology

Control yoghurt (yoghurt culture: Streptococcus thermophilus and Lactobacillus delbrueckii ssp. bulgaricus) and experimental probiotic yoghurt (Streptococcus thermophilus, Lactobacillus delbrueckii ssp. bulgaricus and Bifidobacterium animalis ssp. lactis BB-12), both cultures from Chr. Hansen, were made from raw cow's milk after pasteurization at 85 °C for 15 seconds and cooling to 45 °C. To raise the total solids content of the yoghurt to 21%, skim milk powder was added to the milk and stirred at high speed. After thorough mixing, the mixture was heated to the high-pasteurization temperature described above and kept at this temperature for 20 minutes. Then, the mixture was cooled to 43 ±2 °C, and the yoghurt starter culture [2 g per 100 g (w/w)] and, for the experimental yoghurt, Bifidobacterium animalis ssp. lactis BB-12 at a concentration of 10⁷ CFU.g⁻¹ [1 g per 100 g (w/w)] were added to the milk. Thereafter, after thorough mixing, the mixtures were filled into 150 mL cups (both glass and plastic), sealed, labeled and incubated at 43 ±2 °C for 3.0–3.5 hours until the titratable acidity of the final product reached a maximum of 60 °SH. Then, the products were cooled in an ice-water bath and stored at refrigeration temperature (4 °C) for 1, 7, 14, and 21 days. The cultures used in this study were in freeze-dried (DVS) form and were used according to the manufacturer's recommendations. Analyses of milk and yoghurt samples were done according to Commission regulation (EC) No 213/2001.

Chemical analysis of raw milk

For the experiment, raw cow's milk free of antibiotic residues was used. Antibiotic residues in milk were determined before yoghurt manufacturing with the Beta star 25 test, a commercial screening test (Neogen Food Safety, USA). The antibiotic residue test was performed as described in the manufacturer's instructions. The basic components of the milk samples (milk proteins, fat, lactose, solids-non-fat (SNF), milk density and added water) were determined using a LactiCheck ultrasonic milk analyzer (Page & Pedersen International, Ltd., USA). The temperature of the milk samples was 20 ±1 °C. Titratable acidity of the milk samples was determined by titration of milk with 0.25 mol.L⁻¹ NaOH with phenolphthalein as indicator and expressed in degrees of Soxhlet-Henkel (°SH). The somatic cell count was determined with the Fossomatic 90 (Denmark). The total bacteria count in milk was detected by the standard plate method using Plate count agar (Oxoid) at 30 °C for 72 hours.

Chemical analysis of yoghurt

The pH of the yoghurts was determined with a digital pH meter (pH 340i/SET). The pH meter was calibrated using reference pH 4.0 and 7.0 buffered solutions as described in the manufacturer's instructions. Titratable acidity of the yoghurt samples was determined after mixing the yoghurt sample with 10 mL of hot distilled water (~90 °C) according to Soxhlet-Henkel and expressed in Soxhlet-Henkel degrees (°SH). All the analyses were performed in triplicate.
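The titratable acidity determinations above report results in Soxhlet-Henkel degrees. As a rough sketch only, and assuming the usual convention that 1 °SH corresponds to 1 mL of 0.25 mol.L⁻¹ NaOH per 100 mL (or 100 g) of sample, the conversion from a titration reading is simple arithmetic; the sample size and titrant volume below are made-up examples, not measurements from this study.

```python
# Rough sketch: converting a titration reading to Soxhlet-Henkel degrees (°SH),
# assuming 1 °SH = 1 mL of 0.25 mol/L NaOH per 100 mL (or 100 g) of sample.
# The sample amount and titrant volume below are illustrative, not study data.

def soxhlet_henkel(naoh_ml: float, sample_amount: float) -> float:
    """Titratable acidity in °SH for a sample of `sample_amount` mL (or g)."""
    return naoh_ml * 100.0 / sample_amount

# Example: 25 g of yoghurt slurry requiring 11.8 mL of 0.25 mol/L NaOH.
print(f"{soxhlet_henkel(naoh_ml=11.8, sample_amount=25.0):.1f} °SH")  # -> 47.2 °SH
```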
Microbiological analysis of yoghurt For enumeration of Lactobacillus delbrueckii ssp.bulgaricus Lactobacillus MRS agar (Hi Media, India) agar was used.After suspension appropriately and dilution in sterile saline, the ten-fold dilutions were spread into selective medium as described above and incubated at 37 °C for 24 hour anaerobically.Enumeration of Bifidobacterium animalis ssp.lactis BB-12 was carried out through pour plate technique by using Bifidobacterium agar with L-cysteine hydrochloride (Hi Media, India) and incubated in modified atmosphere at 37 °C for 48 hours (Fávaro-Trindade and Grosso, 2004).In cases where no growth was detected, plates were re-incubated at 37 °C for an additional 24 hours.Numbers of bacteria stated for each sample are the means of replicated counts. Depending on the number and morphological types of colony on a plate, three to five colonies of each type were randomly selected.After purification, isolates were examined for their morphology, Gram staining, and observed under a light microscope (Olympus BX 50, Japan) with a magnification of 1 000 x.For confirmation of bacteria present in yoghurts, one loop of the selected purified bacteria was mixed in a sterile vial containing No. 1/2017 porous beads kept in glycerol as cryopreservative and serves as carriers to support microorganisms (Microbank) and stored at -20 °C for MALDI-TOF MS analysis.MALDI-TOF MS analysis was performed on a Microflex MALDI Biotyper (Bruker Daltonik) according to a standard sample preparation protocol of Bruker Daltonik (Freiwald and Sauer, 2009).MALDI-TOF mass spectra were subjected to numerical analysis (BioTyper 3.1 software, Bruker Daltonik). Statistical analysis For statistical comparison of the results, statistical methods of processing and evaluation of the results were used to compare data processed into the tables and graph (MS Excel 2013).ANOVA parameter test and induction statistics methods un-pair t-test for testing means of related parameters.Correlation phi coefficient was used to assess the dependence of the relationship between the two nominal variables (IBM SPSS statistics 23). Scientific hypothesis The goal of the study, was to analyses a surviving Lactobacillus delbrueckii ssp.bulgaricus and Bifidobacterium animalis ssp.lactis BB-12 added for the manufacturing of probiotic yoghurt during the shelf-life up to 21 days which is mostly accepted by the consumers and which were packed into the screw glass bottles and plastic cups. RESULTS AND DISCUSSION Raw milk used for the manufacturing process was acceptable for yoghurt manufacturing process (data not shown). The pH and titratable acidity changes during yoghurt storage are shown in Figure 1.An overall decline in the pH of all the stored yoghurts occurred during the study.The initial pH (day 1) ranged between 4.53 and 4.79 in plastic cup and glass bottle, respectively.There was a significant difference (p <0.05) in pH between yoghurts in glass bottle and plastic cup during the experimental period.Titratable acidity increased significantly (p <0.05) on day 21 of storage period at 4 °C.Higher lactic acid content was observed in yoghurt in plastic cup (47 ºSH vs. 43 ºSH on 1 d and 54 ºSH vs. 
49 ºSH on day 21). There were no notable differences in acidity between the control and experimental groups of yoghurts. These results are in agreement with Tarakci and Erdogan (2003), who reported increased acidity of yoghurt over the storage period. Guler and Mutlu (2005) also observed an increase in titratable acidity during the storage period.
Changes in the viable counts of Lactobacillus delbrueckii ssp. bulgaricus and Bifidobacterium animalis ssp. lactis BB-12 during manufacturing and the storage period (21 d) of the yoghurts are listed in Table 1. All lactic acid bacteria used in this study were confirmed by numerical analysis (MALDI-TOF MS) to be Lactobacillus delbrueckii ssp. bulgaricus and Bifidobacterium animalis ssp. lactis BB-12.
It was observed that the initial counts of Lactobacillus delbrueckii ssp. bulgaricus at day 1 were 280.3 x 10^7 cfu.g-1 in control yoghurts packed into glass bottles and 283.3 x 10^7 cfu.g-1 in plastic cups, respectively. The count of lactobacilli in the experimental group of yoghurt with the probiotic strain Bifidobacterium animalis ssp. lactis BB-12 was higher at day 1 both in glass bottles (899 x 10^7 cfu.g-1) and in plastic cups (724.3 x 10^7 cfu.g-1). This difference could possibly be due to the difference in pH (4.79 vs. 4.53). After the day-1 storage period, the counts of Lactobacillus delbrueckii ssp. bulgaricus increased in the control group of yoghurt samples and reached a maximum at day 3 (p <0.001) for both the glass bottle and the plastic cup. This could be due to the residual activity of Lactobacillus delbrueckii ssp. bulgaricus during this experimental period, and it is in agreement with the rise in titratable acidity and the drop in pH for this culture (Figure 1). Over the following storage periods, the counts of Lactobacillus delbrueckii ssp. bulgaricus showed a sharp decline, which indicated an advantage for the viability of the probiotic bacterium Bifidobacterium animalis ssp. lactis BB-12 used in this experiment (Table 1).
Our results are in agreement with data reported by Cruz et al. (2010), who determined the shelf-life of probiotic flavored yoghurt supplemented with Bifidobacterium animalis DN 173010 W. As shown in Table 1, the counts of bifidobacteria were lower than the counts of Lactobacillus delbrueckii ssp. bulgaricus (190 to 434.7 x 10^7 cfu.g-1 at day 1), increased slowly (p <0.001) to a maximum on day 7 (294.3 to 754 x 10^7 cfu.g-1) and then slowly declined to 6.33 x 10^7 cfu.g-1 in glass bottles and 2.33 x 10^7 cfu.g-1 in plastic cups, respectively. Similar results were also observed by Dave and Shah (1997) and by Lovayová and colleagues.
Figure 1. Change in pH and titratable acidity of experimental yoghurt during 21 days.
In general, both in yoghurts manufactured only with the yoghurt lactic acid bacteria and in yoghurts to which the tested Bifidobacterium was added, the viable counts of all enumerated bacteria were well above the recommended limit of 10^7 cfu.g-1 during the storage period of 21 days at 4 ºC. Our results are also comparable with other studies (Martin and Chou, 1992; Lankaputhra and Shah, 1996). It seems that the multiplication of Bifidobacterium animalis ssp. lactis BB-12 in the experimental groups of yoghurts was supported by the presence of Lactobacillus delbrueckii ssp. bulgaricus in this mixed culture, because the free amino acids produced by these lactic acid bacteria in yoghurt could have promoted the growth of the bifidobacterium, which requires free amino acids for its growth and development in yoghurt (Klaver et al., 1993).
As shown in Table 1, Lactobacillus delbrueckii ssp. bulgaricus multiplied better in glass bottles than in plastic cups, as observed during the experimental period in the group with Bifidobacterium animalis ssp. lactis BB-12. Also, at the end of the storage period at 4 ºC, the viable counts of lactobacilli were higher (p <0.001) in glass bottles. Likewise, the counts of Bifidobacterium animalis ssp. lactis BB-12 were also significantly higher (p <0.001) in yoghurts stored in glass bottles.
These differences could be associated with the limitation of oxygen permeation in yoghurts filled into screw-capped glass bottles, because the dissolved oxygen content can affect titratable acidity, pH and the viable counts of LAB, as reported by Dave and Shah (1997). According to these authors, bifidobacteria prefer an environment with a low dissolved oxygen content and therefore multiplied better in glass bottles than in plastic cups, which agrees with the results of our experimental study.
As reported by Burdová and Lovayová (2009), more carefully controlled studies in which energy intake and expenditure are measured need to be conducted before any conclusions can be drawn regarding the positive effect of cultured dairy foods in humans and on weight gain and feed efficiency in animals.
CONCLUSION
The presence of Lactobacillus delbrueckii ssp. bulgaricus and Bifidobacterium animalis ssp. lactis BB-12 was confirmed in each of the yoghurt samples packaged in both glass bottles and plastic cups during the complete experimental period of 21 d, at total counts of more than 10^7 cfu.g-1 of yoghurt. Although the counts of the tested lactobacilli and bifidobacteria were significantly higher in yoghurts packaged in glass bottles, our experimental results show that plastic cups are also suitable as a packaging material. All manufactured yoghurts had high qualitative properties and contained lactic acid bacteria above the recommended limit stated for these bacteria.
According to Nemcová et al. (2009), bacteria of dairy fermentation, mainly of the Lactobacillus genus, create, apart from the known substances, many presently unidentified substances that are effective against harmful microorganisms. They have a protective influence in food storage, which can be used clinically. Starter cultures are a part of the useful microorganisms, and their enzymes carry out important biochemical changes during the production process.
Table 1. Survival of Lactobacillus delbrueckii ssp. bulgaricus and Bifidobacterium animalis ssp. lactis BB-12 in yoghurts during the storage period at 4 °C; counts are reported in columns headed Lactobacillus delbrueckii ssp. bulgaricus (x 10^7 cfu.g-1), Bifidobacterium animalis ssp. lactis BB-12 (x 10^7 cfu.g-1) and Lactobacillus delbrueckii ssp. bulgaricus (x 10^7 cfu.g-1). Results are the average of three independent assays and are expressed as means ± standard deviations. Unpaired t-tests were done to compare the control and experimental groups of yoghurts and glass bottles versus plastic cups; the probability (p) of a significant difference between two values is identified with the following symbols: * p <0.05, ** p <0.01 and *** p <0.001, and all other comparisons were n.s. Values in the same row with different superscript lowercase letters are significantly different (a: p <0.001; b: p <0.01). Control yoghurt: yoghurt starter culture; experimental yoghurt: yoghurt starter culture and Bifidobacterium animalis ssp. lactis BB-12.
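The glass-versus-plastic comparisons summarised in Table 1 rely on unpaired t-tests between replicate counts. A minimal sketch of that comparison is given below; it assumes a Python environment with SciPy, and the triplicate counts used here are hypothetical placeholders, not the values measured in this study.

```python
from scipy import stats

# Hypothetical triplicate counts (x 10^7 cfu/g) for one storage day,
# standing in for the replicate assays behind Table 1.
glass_bottle = [6.1, 6.5, 6.4]
plastic_cup = [2.1, 2.4, 2.5]

# Unpaired (independent, two-sided) t-test, as used for the Table 1 comparisons.
t_stat, p_value = stats.ttest_ind(glass_bottle, plastic_cup)

# Map the p-value onto the significance symbols used in the table footnote.
if p_value < 0.001:
    symbol = "***"
elif p_value < 0.01:
    symbol = "**"
elif p_value < 0.05:
    symbol = "*"
else:
    symbol = "n.s."
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({symbol})")
```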
2019-04-02T13:11:54.293Z
2017-10-27T00:00:00.000
{ "year": 2017, "sha1": "ca19fe3887603f3346d9b6dda5fe939c3fce57b7", "oa_license": "CCBY", "oa_url": "http://www.potravinarstvo.com/journal1/index.php/potravinarstvo/article/download/758/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ca19fe3887603f3346d9b6dda5fe939c3fce57b7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
203655700
pes2o/s2orc
v3-fos-license
Cognitive Virtual Network Topology Reconfiguration Method Based on Traffic Prediction and Link Importance With the increase of network services, how to avoid link congestion and make full use of limited bandwidth resources in network virtualization environment have become the key challenges. In this paper, we introduce the cognitive method into virtual network topology reconfiguration and modify the connectivity to reach the target topology by sensing the traffic. Then, we formulate an optimization problem to maximize the ratio of cumulative saved resources to the square of changed link number (CSR/SCLNR) and determine which virtual link to be added or deleted. Finally, a heuristic algorithm called cognitive virtual network topology reconfiguration method based on traffic prediction and link importance (CVNTRM-TPLI) is proposed to solve this optimization problem. In the CVNTRM-TPLI, the link importance is presented in link deletion as the topological factor to avoid the ping pang effect. Also, a hybrid traffic prediction algorithm based on optimal parameter selection is put forward, where immune optimization is introduced to select the optimal parameters and the virtual network topology can react promptly to the traffic fluctuation without adding or deleting virtual link frequently. Simulation results show that the CVNTRM-TPLI not only has the highest CSR/SCLNR, but also solves the link congestion and makes full use of the limited bandwidth resources. I. INTRODUCTION As an important technology in next generation Internet, the network virtualization allows multiple heterogeneous virtual networks (VNs) to share substrate network (SN) and offer flexible manageability [1]- [3]. The characteristics of network virtualization facilitate the introduction of new types of network services (e.g., visual telephone and high-definition video distribution) that require a large amount of traffic and cause traffic fluctuation frequently [4]. The static virtual network topology (VNT) needs to spend more resources to adapt changes and restrict the development of network virtualization [5], [6]. The dynamic VNT can adapt to changes in the direction of the traffic, which has become a vibrant research area. The associate editor coordinating the review of this manuscript and approving it for publication was Thanh Ngoc Dinh . To support the dynamic VNT, a VNT reconfiguration is proposed, serving as a typical type of VN reconfiguration. The VN reconfiguration can be divided into two policies [7]. In the first policy, the VN is only reconfigured in exceptional cases, such as failures. It is usually permanently established and widely used in survivable VN against node failure [8]- [10], link failure [11] and hybrid failure [12], [13]. However, the bandwidth used reaches the maximum traffic value in a virtual link. The alternative policy is also called VNT reconfiguration that reconfigures the VNT according to the current network status and traffic demand. This case is used to solve the link congestion and improve the link utilization [14]- [23]. To perceive the current status and make full use of the network resources, the cognitive method has been used in network. A cognitive network is defined as ''A network with a process that can perceive current network conditions, and then plan, decide, and act on those conditions. The network can learn from these adaptations and use them to make future decisions, all while taking into account end-to-end goals.'' [24]. 
Hence, the cognitive method is also introduced into VNT reconfiguration in this paper. In cognitive VNT reconfiguration, the traffic data can be collected to support the reconfiguration process and determine whether the current virtual topology should be kept or reconfigured. The virtual link can be added to solve the link congestion and deleted to release the bandwidth resources of an underutilized virtual link to embed more VNs or add more virtual links. The traffic is changed dynamically and this VNT reconfiguration is active to adapt it. While a virtual link may be deleted to handle the underutilized status in current traffic situation, but it may be added in the next traffic situation. This case is denoted as ping pang effect [15]. To avoid the ping pang effect, all new added virtual links cannot be deleted in next several periods [16]. Although this method can weaken the ping pang effect, many resources are occupied by all new added virtual links that may cause the waste of resources. Another way to solve the ping pang effect is to introduce traffic prediction methods and select their prediction results as inputs of VNT reconfiguration [17]- [19]. However, most traffic prediction methods are large scale. Also, traffic prediction requires much time and it cannot adapt the fluctuations on small scale traffic in a prediction period. In this paper, a cognitive virtual network topology reconfiguration method based on traffic prediction and link importance (CVNTRM-TPLI) is proposed to avoid the ping pang effect and make full use of the limited bandwidth resources dynamically. At first, we introduce the cognitive method into VNT reconfiguration and formulate the VNT reconfiguration as an optimization problem. Then, a heuristic algorithm is proposed to solve this optimization problem. In CVNTRM-TPLI, the link importance and hybrid traffic prediction algorithm based on optimal parameter selection (HTPA-OPS) are presented in link deletion. The HTPA-OPS is a small scale traffic prediction method that can predict the fluctuations on small scale traffic within a prediction period. In HTPA-OPS, the optimal combination parameter selection algorithm based on immune optimization (OCPSA-IO) is introduced to select the optimal parameters and the VNT can react promptly to the traffic fluctuation without adding or deleting virtual link frequently. The main contributions of this paper can be summarized as follows. (i) We introduce the cognitive method into VNT reconfiguration. In cognitive VNT reconfiguration, the current traffic is monitored and obtained by the network controller. The link addition and deletion are triggered according to the change of current traffic. It is formulated into an optimization problem with some constraints and we propose a heuristic algorithm called CVNTRM-TPLI to solve it. (ii) We regard the link importance as topological factor in link deletion. The virtual link with high link importance can become the busy path by routing and it cannot be deleted frequently to avoid the ping pang effect. (iii) We propose the HTPA-OPS to predict the future traffic that can improve the adaptability to the traffic fluctuation. As a small scale traffic prediction method, the HTPA-OPS is only triggered in link deletion when the resources of virtual link are underutilized. In the HTPA-OPS, the local projection and phase space reconstruction are used to denoise the traffic and restore its inherent chaos characteristic. Then the RBF neural network is used to predict the traffic. 
(iv) We propose the OCPSA-IO to select the optimal combination parameters with the help of immune optimization method in the HTPA-OPS. The OCPSA-IO is not only used to select the initial parameters, but also adjust the parameters when the prediction error is higher than the threshold to ensure the accuracy of the prediction results. The rest of this paper is organized as follows. In Section II, we discuss the related work. In Section III, we present the problem statement and optimization model. The CVNTRM-TPLI is presented and its details are shown in Section IV. In Section V, we evaluate the proposed algorithm through extensive simulations and experiments. We conclude this paper in Section VI. II. RELATED WORK In network virtualization, the VN reconfiguration is a typical method to improve the survivability of VN. The VN reconfiguration can be triggered by exceptional cases, such as failures. In this case, the VN reconfiguration is complex that includes the virtual node migration and virtual link re-embedding. Also, the VN reconfiguration can be triggered by traffic cases, such as link congestion and link underutilization. This case can be called VNT reconfiguration that adapts the VNT by link addition and deletion. In common, the current traffic status is as the input of the VNT reconfiguration. In [15], a VNT adaptation for wavelength division multiplexing (WDM) mesh networks under dynamic traffic was proposed. It adapted the optical links according to the actual traffic load continuously and reacted promptly to the fluctuations on the traffic by adding or deleting lightpath. However, the ping pang effect was controlled by adjusting the watermark values in this paper and the performance was limited. In [16], a VNT reconfiguration for mixed-line-rate optical WDM networks under dynamic traffic was proposed to follow the changes in traffic without a priori knowledge of the future traffic pattern. It could optimize resource utilization and network traffic performance by adjusting, adding or deleting one or more lightpaths. In link deletion, all new added virtual links could not be deleted in next time to avoid the ping pang effect. Consequently, more resources were wasted if the bandwidth utilization was low in next time. In [20], a gradually reconfiguring VNT based on estimated traffic matrices was proposed. It reconfigured the VNT gradually by dividing it into multiple stages and limiting the number of optical layer paths reconfigured in each stage to reduce the estimation errors. This algorithm adapted the current traffic status. However, it could not predict the future bandwidth utilization and solve the ping pang effect. In [21], a VNT reconfiguration with adaptability to traffic changes was proposed. A new index called flow inclusive relation modularity was introduced to reduce the number of optical paths, which had to be added when there were significant traffic changes. This method was proposed to avoid adding a large number of optical paths, it could not solve the ping pang effect and adapt the traffic in next time. In [22], a noiseinduced method was proposed to adapt to traffic changes and accommodate traffic demand. It repeatedly reconfigured a VN that leaded to over-reconfiguration and network services disruption. In [23], a VN reconfiguration framework based on the Bayesian attractor model was proposed that used certain patterns of incoming and outgoing traffic at edge routers to characterize the traffic situation. 
The over-reconfiguration could be reduced by identifying the stored traffic situation that was closest to the current one and retrieving a suitable VN. However, this method could not solve the traffic situations that were not in the obtained VN candidate set well and the ping pang effect was not avoided. The VNT reconfiguration is very relevant to current traffic and a VNT may become inappropriate to it after a certain time. Hence, the VNT reconfiguration may frequently adapt to the changing traffic and the ping pang effect may occur. To avoid the ping pang effect, a traffic prediction solution called autoregressive integrated moving averages technique was introduced, and a new transition method was also proposed to reduce the impact of instable routing tables during a reconfiguration process [17]. The ARIMA technique was a typical large scale traffic prediction method and could not predict the small scale traffic. In [18], a VNT reconfiguration approach based on data analytics for traffic prediction was proposed. The artificial neural network was used to provide robust and adaptive traffic models. The VNT was regularly reconfigured based on the current and predicted traffic. Although the performance of artificial neural network was better than ARIMA technique, it could not predict the small scale traffic well. In [19], big data analytics was applied for IP traffic prediction. Predicted traffic was used as input for VNT re-optimization. Machine learning algorithms were employed to predict traffic conditions periodically (e.g., every hour). However, the prediction period was too large to adapt the fluctuations on the small scale traffic. As can be seen above, although the traffic prediction methods have gradually introduced into VNT reconfiguration, most of them are large scale traffic prediction methods and the traffic is predicted at each period that consumes more time and resources. Also, the VNT reconfiguration with large scale traffic prediction methods cannot adapt the fluctuations on small scale traffic within the prediction period. III. PROBLEM STATEMENT AND OPTIMIZATION MODEL In this section, we formulate the VNT reconfiguration problem and design an optimization model. A. PROBLEM STATEMENT The SN is modeled as a graph G S = (N S , E S ), in which the substrate node set and substrate link set are represented by N S and E S , respectively. Similar to SN, the VN can be modeled as a graph G V = (N V , E V ). N V represents the virtual node set and E V represents the virtual link set. The following problems should be solved in VNT reconfiguration. • Whether the current VNT is efficient for the current traffic. • Whether the current VNT should be changed. • How to change the current VNT by link addition or deletion. In VNT reconfiguration, the first objective is to save more resources. Bandwidth resources are limited and relevant to the number of VNs that can be embedded successfully. By sensing the traffic and link utilization, the underutilized links can be deleted and the saved resources are used in link addition or VN embedding. Another objective is to minimize the cost that is related to the total number of additional and deleted links. Link addition and deletion lead the disruption of network service and consume the node resources for calculation and memory. Therefore, adding or deleting links frequently increase the total cost. The high threshold of link utilization is denoted as W H . 
If the link utilization is higher than W_H, the link is considered to be overloaded or congested. W_L is the low threshold of link utilization. If the link utilization is lower than W_L, the link is considered to be underutilized. The threshold of virtual link importance is W_I. The link utilization matrix is denoted as U = {u_ij}. u_ij is the utilization of virtual link vl(i, j), defined as the ratio of the current traffic value between i and j to the link bandwidth. v is the number of predicted traffic values within a prediction period. R_c is the bandwidth resource consumed in link addition. R_r is the bandwidth resource released in link deletion. R_s is the bandwidth resource saved in the VNT reconfiguration. CN_a is the cumulative number of added links, CN_d is the cumulative number of deleted links, and CN_c is the cumulative number of changed links, with CN_c = CN_a + CN_d.
2) VARIABLES
A binary topology indicator equals 0 if there is no virtual link between nodes i and j, and 1 otherwise. P_ef is the shortest link set between virtual nodes e and f, where the shortest link means the link with the smallest hop count between the originating and terminating nodes. ϕ_p^{i,j} is the number of shortest links that traverse the virtual link vl(i, j). CR_ij, defined in terms of these quantities, denotes the importance of the virtual link vl(i, j). If a virtual link with a high CR_ij is deleted, it has a high probability of being re-added. Hence, CR_ij is important for avoiding the ping pang effect.
3) OBJECTIVE
The objective function is to maximize the ratio of cumulative saved resources to the square of the changed link number (CSR/SCLNR), which takes both resources and cost into consideration. The numerator is the cumulative saved resources in the VNT reconfiguration, and CN_c^2(t) in the denominator is the square of the cumulative number of changed links in the VNT reconfiguration. The saved resources can be used to embed more VNs or to add more virtual links, and the cost of VNT reconfiguration is directly related to CN_c.
Equations (1)-(3) are the SN topology constraints. In (1), at most one outgoing substrate link of the source node is assigned to one virtual link. In (2), at most one incoming substrate link of the destination node is assigned to one virtual link. In (3), the number of incoming and outgoing links reserved for a substrate link of any intermediate node is equal. Equation (4) is the flow conservation constraint used for routing traffic flows on virtual links. Equation (5) is the link addition constraint: the rerouted traffic flow Tf_sd must traverse the virtual link vl(i, j) in the new topology without congestion. Equations (6)-(8) constrain link deletion: the virtual link cannot be deleted if its CR_ij is not less than W_I (7), and in (8) we predict the future traffic of the virtual link to decide whether it should be deleted; if at least one future link utilization is higher than W_H, the virtual link cannot be deleted, in order to avoid the ping pang effect.
IV. COGNITIVE VIRTUAL NETWORK TOPOLOGY RECONFIGURATION METHOD BASED ON TRAFFIC PREDICTION AND LINK IMPORTANCE
Solving this optimization model is computationally intractable. Most researchers address such models by proposing a corresponding heuristic algorithm that has a short computational time and obtains an approximately optimal solution [15], [18], [20]. Therefore, we propose a novel heuristic algorithm called CVNTRM-TPLI to solve the optimization model of cognitive VNT reconfiguration. The block diagram is shown in Fig. 1.
As seen from Fig. 1, the network virtualization technology allows the SN to be shared by multiple heterogeneous VNs and provides different kinds of services. A network controller monitors the VNs to obtain the current traffic and the link utilization u_ij. Then, the link utilization u_ij is compared with the thresholds W_H and W_L. If W_L ≤ u_ij ≤ W_H, there is no change to the VNT. If u_ij > W_H, another virtual link is added. If u_ij < W_L, we predict the future traffic of the link with the help of HTPA-OPS and calculate the predicted link utilization to decide whether this virtual link should be deleted. If none of the future link utilizations is higher than W_H and the other constraints of link deletion are satisfied, this virtual link is deleted. Finally, the network controller controls the topology based on the information from the VNT design. Consequently, link addition, HTPA-OPS and link deletion are the three important algorithms in CVNTRM-TPLI. Link addition is designed to solve link congestion. HTPA-OPS and link deletion are used to increase the link utilization by deleting underutilized virtual links. The HTPA-OPS is introduced to predict the traffic, and its prediction results are used as the inputs of the link deletion algorithm to reduce the ping pang effect. The details of these three algorithms are given as follows.
A. LINK ADDITION
In CVNTRM-TPLI, link addition and link deletion are the core parts. The main idea of link addition is to decide how to transfer traffic flows from a congested link to other available links. The main steps of link addition are shown in Algorithm 1. In Algorithm 1, the overloaded traffic is calculated and the traffic flows that traverse the congested virtual link are selected in lines 1-4. In lines 5-15, if there is only one hop between nodes e and f, the bandwidth resources of the link are increased and the link is re-embedded. In lines 16-23, if there is more than one hop between nodes e and f, a direct virtual link between the originating and terminating nodes is added. In lines 24-31, the matrices B, U and H are updated. If there is no virtual link whose utilization is higher than W_H, the process stops. For example, three traffic flows traverse the virtual nodes D and E in Fig. 2.
Algorithm 1 Link Addition Algorithm (excerpt):
  Select the traffic flows that traverse vl(i, j) and save them into TF
  Select the Tf(e, f) in TF that has the maximum traffic value
  if h_ef = 1: B_ef = B_ef + (B_min + T_y / W_H)
  Re-embed the new virtual link vl(e, f)
  if the re-embedding of vl(e, f) is successful ...
B. HTPA-OPS
To delete underutilized virtual links accurately without causing the ping pang effect, traffic prediction is necessary before link deletion. In this paper, we propose a new traffic prediction method called HTPA-OPS that takes into account the characteristics of small scale traffic, such as nonlinearity, mutation and chaos. The block diagram of HTPA-OPS is shown in Fig. 3. As seen from Fig. 3, the main steps of the HTPA-OPS can be summarized as follows.
Step 1: Collect the historical traffic samples and initialize the parameters. The original traffic flow is sampled at the available intervals to obtain the historical traffic sequence. Then, we normalize the historical traffic sequence.
Step 2: Denoise the historical traffic sequence with the local projection method. The chaos characteristic of network traffic is usually affected by high-dimensional noise. Local projection is a classical technique to denoise the network traffic and restore its inherent chaos in the nonlinear dynamic system.
Step 3: Reconstruct the historical traffic sequence with the phase space reconstruction method. Network traffic has complex dynamic characteristics and is hard to describe with traditional low-dimensional coordinates. As an important method in chaotic time series prediction, phase space reconstruction can accurately describe the evolution of the hidden chaotic attractor and incorporate the existing values into a descriptive framework. Based on the chaos of network traffic, we transform the network traffic prediction into a nonlinear time series prediction problem. For the traffic prediction problem, {x_1, x_2, ..., x_q} is a historical traffic sequence, b is the mapping (embedding) dimension, and τ is the delay time. Assuming that the length of the traffic data is q, a set of training samples can be obtained after phase space reconstruction.
Step 4: Select the optimal samples. In chaotic time series, the Euclidean distance is used to describe the closeness between the state whose traffic value is to be predicted and the training samples. We select the k training samples that have the nearest Euclidean distances to that state (a minimal sketch of Steps 3 and 4 is given after the OCPSA-IO description below).
Step 5: Select the initial optimal parameters with OCPSA-IO. The parameters k, b and τ are related to the performance and complexity of the traffic prediction method. To select the optimal parameters, the OCPSA-IO is introduced; its details are described in the next subsection.
Step 6: Train the traffic samples with the RBF neural network and predict the future traffic.
Step 7: Output the prediction results and calculate the prediction error.
Step 8: If the prediction error is higher than the threshold, update the optimal parameters with OCPSA-IO.
2) OCPSA-IO
The parameters b and τ in phase space reconstruction and the parameter k in optimal sample selection have an important effect on the performance of the prediction results. In order to select the optimal combination of parameters, the OCPSA-IO is proposed. The block diagram of OCPSA-IO is shown in Fig. 4. As seen from Fig. 4, the main steps of the OCPSA-IO can be summarized as follows.
Step 1: Initialize the network traffic information. The ranges of parameters b, τ and k are set to M, N and K, respectively.
Step 2: Generate the initial antibody population An. Every antibody D_i ∈ An indicates one scheme of parameter selection. We set D_i = (d_i1, d_i2, d_i3), where d_i1, d_i2 and d_i3 denote the values of parameters b, τ and k, respectively.
Step 3: Calculate the fitness value A(D_i), the antibody affinity S(D_i, D_j) and the concentration C(D_i) of D_i. Among them, y(t) denotes the current traffic values in one prediction period, ŷ_Di(t) denotes the predicted traffic values in one prediction period, and v is the number of predicted traffic values in one prediction period. ξ(D_i, D_j) denotes the number of identical elements between D_i and D_j, the length of D_i is |N_c|, |An| denotes the total size of the antibody population, and ω is the threshold.
Step 4: Calculate the reproduction probability of the antibody, P(D_i), in which E is a constant, and select the elitist.
Step 5: Produce the new antibody population. Crossover, selection and mutation of antibodies are used to produce the new antibody population. Then return to Step 2 until the threshold Er is satisfied.
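Returning to Steps 3 and 4 of HTPA-OPS, the sketch below builds delay-embedding vectors from a traffic sequence and keeps the k training samples closest (in Euclidean distance) to the most recent state. It is an illustrative sketch only: the paper pairs this with local-projection denoising, OCPSA-IO parameter selection and an RBF network, none of which are shown here, and the embedding parameters b, τ and k are placeholders rather than values tuned by OCPSA-IO.

```python
import numpy as np

def delay_embed(x: np.ndarray, b: int, tau: int):
    """Phase space reconstruction: build delay vectors X_i = (x_i, x_{i+tau}, ..., x_{i+(b-1)tau})
    and the one-step-ahead targets y_i = x_{i+(b-1)tau+1} used to train the predictor."""
    n = len(x) - (b - 1) * tau - 1
    X = np.array([x[i : i + (b - 1) * tau + 1 : tau] for i in range(n)])
    y = np.array([x[i + (b - 1) * tau + 1] for i in range(n)])
    return X, y

def nearest_training_samples(X: np.ndarray, y: np.ndarray, query: np.ndarray, k: int):
    """Step 4: keep the k embedded samples with the smallest Euclidean distance to the query state."""
    dist = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(dist)[:k]
    return X[idx], y[idx]

# Placeholder traffic sequence and parameters (b, tau, k would come from OCPSA-IO in the paper).
traffic = np.sin(np.linspace(0, 20, 500)) + 0.05 * np.random.randn(500)
X, y = delay_embed(traffic, b=5, tau=2)
query = X[-1]                      # most recent reconstructed state
Xk, yk = nearest_training_samples(X[:-1], y[:-1], query, k=20)
print(Xk.shape, yk.shape)          # (20, 5) (20,)
```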
C. LINK DELETION
After link addition, the utilization of each virtual link is lower than W_H. To decrease the number of underutilized virtual links, link deletion is proposed: an underutilized virtual link can be deleted to release the resources it occupies. To avoid the ping pang effect, the link importance and the traffic prediction method HTPA-OPS are introduced into link deletion. The main steps of link deletion are shown in Algorithm 2.
Algorithm 2 Link Deletion Algorithm (excerpt):
  Input: T, B, V, U, H
  if at least one predicted utilization is higher than W_H: do not delete vl(i, j)
  else, if CR_ij > W_I: ...
  Select other shortest links between i and j and save them into RVL
  Calculate the residual bandwidth
  Delete vl(i, j) and transfer its traffic flows to the jth link in RVL
In lines 1-6, the traffic values and the future utilization are predicted by the HTPA-OPS to decide whether this link should be deleted. In lines 7-9, the link importance is calculated to avoid the ping pang effect. In lines 10-27, the total traffic value of the deleted link must be less than the residual bandwidth of the link that its traffic flows will traverse in the new VNT.
In this paper, the optimization objective is to maximize the CSR/SCLNR. The CVNTRM-TPLI is proposed to solve the optimization model. It contains the link addition, HTPA-OPS and link deletion algorithms. In link addition, the traffic flows that traverse the congested virtual link are selected and a direct virtual link between the originating and terminating nodes is added to reduce the resource consumption of link addition. The traffic prediction results of HTPA-OPS and the virtual link importance factor CR_ij are introduced into link deletion to avoid the ping pang effect and satisfy constraints (7)-(8). The underutilized virtual link that satisfies the constraints is deleted to reduce the cost of VNT reconfiguration. Therefore, the CVNTRM-TPLI can obtain approximately optimal performance.
V. SIMULATION
To validate the performance of the CVNTRM-TPLI proposed in this paper, four comparative experiments are established in this section. The performance of the CVNTRM-TPLI is compared with the other three VNT reconfiguration algorithms in the first simulation experiment. Next, we simulate the performance of our traffic prediction method called HTPA-OPS. Then, we simulate the effect of different thresholds. Finally, we evaluate the effect of the link importance and traffic prediction factors.
Initial VN: In this paper, the VNT is generated by the improved Salam network topology random generation algorithm. The VN is composed of 20 nodes and 80 links. The bandwidth of each virtual link is in the range [0.4, 0.6] Gbps.
Traffic: The traffic flow used in this paper is selected from the real traffic data LBL-tcp-3.tcp [25]. The sample interval of the original traffic flow is 1 second. The traffic sequence is obtained and normalized to get the network traffic used in this simulation. The traffic flow between each node pair is a continuous subsequence selected randomly from the above traffic series. This selection is not vital for the CVNTRM-TPLI, and the CVNTRM-TPLI should work for any input traffic. The routes over the VN are calculated by the constrained shortest path first algorithm.
Parameters: The range of b is set to [1, 30]. The range of τ is set to [1, 10] and the range of k is set to [10, 400]. The high threshold W_H is set to 0.8 and the low threshold W_L is set to 0.2. The link importance threshold W_I is set to 3. The number of predicted traffic values v is set to 3 and the traffic prediction error threshold Er is set to 0.05.
Simulation Environment: The computer used in our simulations is a Lenovo Tianyi 510Pro with the Windows 10 operating system.
The hardware platform is composed of an Intel Core i7-7700 3.6 GHz processor with 8 GB of RAM. The analysis software is Matlab R2007a. In all simulation cases, the results are averaged over 50 simulations, and we show the margin of error with a 95% confidence level. A. COMPARISON OF DEFFERENT VNT RECONFIGURATION ALGORITHMS In this paper, we compare the CVNTRM-TPLI with the other VNT reconfiguration algorithms and their details are listed in Table 1. All these algorithms use the same VNT. As seen from Figs. 5(a) and 5(b), the GRVNT-ETM saves more resources than the other three algorithms and it has the highest cumulative number of changed links. In link deletion, the link importance and traffic prediction are not taken into consideration in the GRVNT-ETM and it deletes more virtual links to release the bandwidth resources. The resources saved by IW are the lowest. Comparing with the other typical VNT reconfiguration methods, it does not delete the new added virtual links to solve the ping pang effect. Although it wastes some resources that can be released, it also reduces the cumulative number of changed links. In VENTURE, the integer linear programming and corresponding heuristic algorithm are proposed to minimize both the unserved traffic and used transponders. Also, the artificial neural network is employed to predict the traffic and decide whether the current VNT needs to be reconfigured. It saves more resources and decreases the cumulative number of changed links. However, the artificial neural network can predict the large scale traffic and it cannot solve the ping pang effect well. The CVNTRM-TPLI saves more resources than IW. In link deletion, the virtual links are deleted accurately with the help of the HTPA-OPS and some important virtual links in VNT are avoided to be deleted with the help of link importance. Also, the CVNTRM-TPLI has the lowest cumulative number of changed links. As seen from Fig. 5(c), the CVNTRM-TPLI has the highest CSR/SCLNR and the GRVNT-ETM is the lowest. It demonstrates that the CVNTRM-TPLI has the best performance. B. THE PERFORMANCE OF TRAFFIC PREDICTION ALGORITHM As seen from Fig. 6, the prediction result and actual traffic are accurately fitted. In the HTPA-OPS, the high-dimensional noise in network traffic is filtered by local projection to restore its chaos. The phase space reconstruction, optimal parameter selection and RBF neural network are all used to improve the performance of traffic prediction. In Fig. 7, the probability of prediction error is mainly distributed around zero. The probability is about 0.79 when the prediction error is zero. When the absolute error of prediction is above 0.02, the probability is quite small and even can be ignored. Consequently, the predicted traffic value can be used in the virtual link deletion. C. THE EFFECT OF DIFFERENT THRESHOLDS 1) THE EFFECT OF W H The effect of changing the value of W H on the performance of the CVNTRM-TPLI for a fixed value of W L = 0.2 is shown in Fig. 8. The W H and W L are denoted by (W H , W L ) in Fig. 8. As seen from Figs. 8(a) and 8(b), with the increase of W H , both the consumed bandwidth resources and cumulative number of added links decrease. When the parameter W H increases, the number of congested virtual links decreases which reduces the number of added virtual links. From Fig. 8(c), with the increase of W H , the CSR/SCLNR increases. In this case, the cumulative number of added links has become the main factor affecting the CSR/SCLNR. 
2) THE EFFECT OF W L The effect of changing the value of W L on the performance of the CVNTRM-TPLI for a fixed value of W H = 0.8 is shown in Fig. 9. The W H and W L are denoted by (W H , W L ) in Fig. 9. With the increase of the W L , the released bandwidth resources and cumulative number of deleted links both increase. The virtual links that meet constraints in link deletion increase gradually and the number of deleted virtual links also increases. As seen from Fig. 9(c), with the increase of W L , the CSR/SCLNR decreases. D. THE EFFECT OF LINK IMPORTANCE AND TRAFFIC PREDICTION FACTORS In this section, we simulate the performance of different VNT reconfiguration algorithms with different factors. To make a fair comparison, two baseline algorithms called CVNTRM-TPLI1 and CVNTRM-TPLI2 are proposed. The CVNTRM-TPLI1 is developed from the CVNTRM-TPLI, and it does not take the link importance and traffic prediction factors into consideration. The CVNTRM-TPLI2 is developed from the CVNTRM-TPLI, and it only takes the traffic prediction factor into consideration by introducing the HTPA-OPS. From Figs. 10(a)-10(c), the CVNTRM-TPLI1 adds and deletes the virtual links frequently. It has the lowest CSR/SCLNR. The CVNTRM-TPLI2 reduces the cumulative number of changed links and saves more resources than CVNTRM-TPLI1. Its CSR/SCLNR is higher than CVNTRM-TPLI1 that evaluates the effect of HTPA-OPS. The CVNTRM-TPLI uses the HTPA-OPS and link importance and its CSR/SCLNR is higher than CVNTRM-TPLI2. It evaluates the effect of link importance factor. VI. CONCLUSION In this paper, we propose the VNT reconfiguration method to modify the topology by introducing the cognitive method. We formulate it as an optimization problem to determine which virtual link to be added or deleted. A heuristic algorithm called CVNTRM-TPLI is proposed to solve this optimization problem. In CVNTRM-TPLI, the link importance is proposed in link deletion as the topological factor to avoid the ping pang effect. Also, the HTPA-OPS is presented to predict the future traffic accurately. In HTPA-OPS, the OCPSA-IO is introduced to select the optimal parameters and the VNT can react promptly to the traffic fluctuation without adding or deleting virtual link frequently. Finally, four experiments are designed to demonstrate the performance of CVNTRM-TPLI. The first experiment results show that the performance of the proposed CVNTRM-TPLI is better than that of the other typical VNT reconfiguration methods. The second experiment assesses that the HTPA-OPS has good performance of traffic prediction. The third experiment assesses the effect of different thresholds. The last experiment evaluates the influence of link importance and traffic prediction factors on CVNTRM-TPLI. The link importance and traffic prediction factors are both useful to improve the performance of the CVNTRM-TPLI. The next step is to study the VNT reconfiguration method based on deep learning method in complex traffic environment.
2019-09-26T08:51:37.825Z
2019-09-23T00:00:00.000
{ "year": 2019, "sha1": "6616d0e8c4e0f159b43f38c6aa8dc944ee5e4763", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/8600701/08846212.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0191bf8aca4713605302290fc86089ef09904eac", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
244715854
pes2o/s2orc
v3-fos-license
Transcriptional Profiling and Machine Learning Unveil a Concordant Biosignature of Type I Interferon-Inducible Host Response Across Nasal Swab and Pulmonary Tissue for COVID-19 Diagnosis Background COVID-19, caused by SARS-CoV-2 virus, is a global pandemic with high mortality and morbidity. Limited diagnostic methods hampered the infection control. Since the direct detection of virus mainly by RT-PCR may cause false-negative outcome, host response-dependent testing may serve as a complementary approach for improving COVID-19 diagnosis. Objective Our study discovered a highly-preserved transcriptional profile of Type I interferon (IFN-I)-dependent genes for COVID-19 complementary diagnosis. Methods Computational language R-dependent machine learning was adopted for mining highly-conserved transcriptional profile (RNA-sequencing) across heterogeneous samples infected by SARS-CoV-2 and other respiratory infections. The transcriptomics/high-throughput sequencing data were retrieved from NCBI-GEO datasets (GSE32155, GSE147507, GSE150316, GSE162835, GSE163151, GSE171668, GSE182569). Mathematical approaches for homological analysis were as follows: adjusted rand index-related similarity analysis, geometric and multi-dimensional data interpretation, UpsetR, t-distributed Stochastic Neighbor Embedding (t-SNE), and Weighted Gene Co-expression Network Analysis (WGCNA). Besides, Interferome Database was used for predicting the transcriptional factors possessing IFN-I promoter-binding sites to the key IFN-I genes for COVID-19 diagnosis. Results In this study, we identified a highly-preserved gene module between SARS-CoV-2 infected nasal swab and postmortem lung tissue regulating IFN-I signaling for COVID-19 complementary diagnosis, in which the following 14 IFN-I-stimulated genes are highly-conserved, including BST2, IFIT1, IFIT2, IFIT3, IFITM1, ISG15, MX1, MX2, OAS1, OAS2, OAS3, OASL, RSAD2, and STAT1. The stratified severity of COVID-19 may also be identified by the transcriptional level of these 14 IFN-I genes. Conclusion Using transcriptional and computational analysis on RNA-seq data retrieved from NCBI-GEO, we identified a highly-preserved 14-gene transcriptional profile regulating IFN-I signaling in nasal swab and postmortem lung tissue infected by SARS-CoV-2. Such a conserved biosignature involved in IFN-I-related host response may be leveraged for COVID-19 diagnosis. INTRODUCTION The novel coronavirus disease 2019 induced by SARS-CoV-2 infection has resulted in a sustained threat to human life and economic growth. As of September 2021, around 4.6 million SARS-CoV-2-infected deaths have been reported by WHO, showing an unprecedented challenge and need for controlling the COVID-19 pandemic. Respiratory dysfunction is the main complication of COVID-19, including diffused alveolar damage and fulminant respiratory failure (1). Notably, clinical manifestations of SARS-CoV-2 infection vary from asymptomatic to severe symptoms (2). Such a wide range of clinical features make it difficult to establish a highly-conserved diagnostic profile of COVID-19. Although scientists have made great progress on COVID-19 management, progress in the viral diagnosis seems to be inferior to the development of therapy and prevention for COVID- 19. Till now, diagnostic measurement of COVID-19 mainly relies on the reverse transcription quantitative polymerase chain reaction (RT-PCR) due to excellent sensitivity and specificity for detecting SARS-CoV-2 (3). 
However, using RT-PCR alone may yield false-negative results due to fluctuated viral loads and evolution (4). This adverse situation is detrimental for hampering COVID-19 outbreak. Improving the accuracy of viral testing remains urgent demand. Apart from the direct recognition for SARS-CoV-2, deciphering the host response, especially the virusrelated fluctuated genomic profile, may be pivotal for serving as a supplementary approach for COVID-19 diagnosis. For the genetic profile of host response for COVID-19, multiple studies have reported various characteristics of immune/inflammatory actions in response to COVID-19. For example, activation of immune cells was observed in lung and bronchoalveolar lavage fluid (5). Cytokine or chemokines-related host inflammatory responses were involved in SARS-CoV-2 infected bronchoalveolar lavage and peripheral blood mononuclear cells (PBMCs) (6). Besides, COVID-19 progression is driven by the populations of myeloid-lineage cells with distinct inflammatory transcriptional features in blood, lung, and airway (7). These findings suggested that identification of a specific genomic profile of host response may be served as a supplementary method for COVID-19 diagnosis. For discovering COVID-19 related genomic trait for diagnosis, it may have three features. 1) It is representative for COVID-19 diagnosis and different from other respiratory diseases (such as Influenza, Measles, and Respiratory Syncytial Viral). 2) The specific gene expression feature detected in body substance is prefer to non-invasively collect for diagnosis, such as nasal swab. Of note, although nasal swab was extensively used for direct virus detection by RT-PCR, genomic profile of specific genes in nasal fluid may also unveil the feature of host response to COVID-19, such a molecular feature may be used as a biological basis for reducing the false-negative diagnosis. 3) Since lung is the main-affected organ in COVID-19, the diagnostic feature in extrapulmonary substances (e.g., nasal fluid) may be highly-preserved with that in pulmonary tissue. Recently, scientists have engaged in discovering the COVID-19 immune landscape. Dramatic transcriptomic changes were detected in virus-positive cells in severity-dependent manner. These differential genes were enriched in specific pathways, including "response to virus" and "response to type I interferon" (8). SARS-CoV-2 induced transcriptomic changes in the peripheral blood is varied with those detected in other respiratory infections, including interferon-driven genes (9). Besides, Nature News announced the top 10 awesome science discoveries in 2020, including "Interferon deficiency can lead to severe COVID-19, especially the IFN-I" (10,11). These studies suggested the strong correlation between "Type I interferon and COVID-19". As a first-line innate host defense mechanism, human Type I interferons (IFN-I) are a large family of interferon proteins (IFNa and IFNb, etc.) that regulate the immune system, such as the inhibition of virus proliferation and transmission (12,13). Besides, it is documented that robust cellular secretion of IFN-I is indispensable for suppressing SARS-CoV-2 replication (14). Although anti-SARS-CoV-2 effect of IFN-I has been widely reported, the potential diagnostic role of IFN-I for COVID-19 is under investigation. Recently, several reports have discussed IFN-I-related host defense in COVID-19 (15,16). 
These valuable studies may potentially indicate molecular clues that SARS-CoV-2 affected IFN-I-dependent gene profile may be used as a complementary diagnosis of COVID-19. Hereby, we listed the evidenced descriptions and our own postulations are as below: 1) It was reported that high density of receptors of ACE2 (an enter-receptor of SARS-CoV-2) causes the high SARS-CoV-2 viral load in nasopharyngeal fluid in the initial stage of COVID-19 but lowered during sustained viral infection in nasal swab (17). It may suggest that testing SARS-CoV-2 viral load alone in nasal swab is defective because of the potentially low viral content and evolution. 2) Nasal swab samples remain to be the main source of RT-PCR-based SARS-CoV-2 nucleic acids testing due to the high viral load in nasopharyngeal fluid before severe COVID-19 (18). Nasal swab remains to be an easily handled and non-invasive reliable testing approach worldwide (19). Additionally, the time window for virus detection via nasal swab lasts around 4 weeks and peaks in the second week from the onset of infection (20). The 4-week detectable cycle of genetic profile is not too transient and enables us to timely conduct COVID-19 diagnosis. These reports revealed that understanding the genetic host response in nasal swab may be useful for improving COVID-19 diagnosis. 3) In SARS-CoV-2 infected tissues, lung is the most vulnerable organ responsible for the mortality of COVID-19, which includes upper respiratory tract infections (in early stage) and acute respiratory distress syndrome (in late-stage) (21), suggesting that molecular profile in pulmonary host immune response is vital for COVID-19 diagnosis. It is critical to clarify the molecular correlation between nasal swab and lung tissue in response to COVID-19, because COVID-19 diagnosis by nasal swab may still be the common way due to the non-invasive collection and relatively higher virus load. 4) Interferon is one of the regulators of ACE2 receptor (22). Additionally, host IFN-I possesses high sensitivity to both SARS-CoV-2 virus and ACE2 receptors, especially in COVID-19 patients with asymptomatic manifestation (23), suggesting that understanding the genetic signature of IFN-I-associated genes in SARS-CoV-2 infection may serve as indicators of COVID-19 diagnosis. 5) Recent reports suggested that detection of IFN-I gene expression may be of great significance to measure the severity of COVID-19 (21), indicating that varied IFN-I-related gene profile may stratify the severity of COVID-19. Taken together, these five clues provide prerequisites for mining a molecular feature of IFN-I-related host response for the diagnosis of COVID-19 and its severity. Global scientists have conducted a series of clinical trials not only for COVID-19 diagnosis, but also the comparative analysis between COVID-19 and other respiratory infectious diseases, including SARS and MERS (24). Generally, these reported outcomes have two features as follows: 1) Generally, the results are independently generated from homogeneous samples. Due to the complexity of body tissues, the correlation of genetic feature among heterogeneous tissues needs to clarify. A highly-preserved profile has diagnostic potential for COVID-19. 2) Gene relationship in homogeneous or heterogeneous samples are usually determined via geometric distance (commonly by differential expression). 
Measuring adjacency-related similarity in a scale-free network from heterogeneous samples may make the results more biologically significant, since real-world networks are often claimed to be scale free (25). Based on these ideas, our study aims to identify a common diagnostic host characteristic from nasal swabs and lung tissues that can supplement the diagnostic strategy for COVID-19. Highly-conserved functional gene modules are commonly related to the central characteristic of a disease (26). Combined with RT-PCR-based virus detection, the specific profile of the host response may provide additional information for distinguishing COVID-19 from other respiratory diseases; here we focused on IFN-I-related gene modules. Herein, eight independent RNA-sequencing datasets were retrieved from NCBI-GEO, including three nasal swab datasets (GSE163151, GSE162835, and GSE182569-nasal swab part); two lung tissue datasets (GSE171668 and GSE150316), one lung bronchioalveolar fluid dataset (GSE182569-lung bronchioalveolar fluid part), one lung bronchial epithelial cell dataset (GSE147507), and one lung bronchoalveolar carcinoma cell dataset (GSE32155). For the analytic methods, we adopted computational language R-based unsupervised analysis for clarifying the genetic polymorphisms and highly-preserved functional gene modules. As an analytical machine-learning language in computer science, the R language offers a wide variety of statistical techniques for life science, including WGCNA, homological and high-dimensional multivariate analyses (27). Breakthroughs in interdisciplinary technologies between computer science and life science may permit a holistic view of the transcriptomic profile and delineate gene modules with pathophysiologic relevance for COVID-19 diagnosis. In this study, we made an attempt to identify highly-preserved genes/modules regulating IFN-I signaling in SARS-CoV-2-infected heterogeneous samples (nasal swab and lung tissue) using transcriptional and machine-learning analysis. The intent is to complement the current diagnostic strategy for COVID-19, since accurate SARS-CoV-2 detection is a significant starting point to counter the COVID-19 pandemic.
Construction of Co-Expression Modules
The consensus gene modules of the nasopharyngeal and pulmonary specimens were constructed by WGCNA analysis. In brief, the pairwise similarity of the co-expression matrices was measured by the coefficient resulting from Pearson's correlation analysis for the whole genome (equation: s_ij = cor(i, j)). A weighted adjacency matrix was constructed to improve the consensus similarity of gene modules by the following equation: a_ij = |(1 + cor(i, j))/2|^b, in which a_ij is the adjacency value evaluating the strength of weighted connectivity and b is the soft-thresholding power resulting from a scale-free topological analysis by the R function "pickSoftThreshold()" from the WGCNA package. Then, a topological overlap matrix (TOM) was established from the adjacency matrix, which was in turn converted into a dissimilarity TOM. Afterwards, a gene clustering dendrogram was built by hierarchical clustering, with gene modules shown in various colors according to the adjacency-based dissimilarity. All the modules were constructed by clustering the close-distance modules according to the resulting Module Eigengene (ME) values. MEs were based on the first principal component of gene modules by PCA analysis using the R function "signedKME()" from the WGCNA package. A higher absolute value of ME represents a more intense relationship between genes and their corresponding modules.
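The study performs these steps with the R WGCNA package. Purely as an illustration of the adjacency formula a_ij = |(1 + cor(i, j))/2|^b described above, the sketch below computes a signed adjacency matrix and a rough scale-free fit in Python with NumPy. The expression matrix, the candidate powers and the binning of the connectivity distribution are placeholders, not the actual GEO data or the WGCNA implementation used in the paper.

```python
import numpy as np

def signed_adjacency(expr: np.ndarray, power: float) -> np.ndarray:
    """expr: samples x genes matrix. Returns a_ij = ((1 + cor(i, j)) / 2) ** power."""
    cor = np.corrcoef(expr, rowvar=False)          # gene-by-gene Pearson correlation
    return np.abs((1.0 + cor) / 2.0) ** power

def scale_free_fit(adj: np.ndarray, n_bins: int = 10) -> float:
    """Signed R^2 of log10(p(k)) vs log10(k), mimicking the criterion behind pickSoftThreshold()."""
    k = adj.sum(axis=0) - np.diag(adj)             # whole-network connectivity per gene
    hist, edges = np.histogram(k, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 0
    x, y = np.log10(centers[keep]), np.log10(hist[keep] / hist.sum())
    slope, _ = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return -np.sign(slope) * r2                    # positive when the log-log slope is negative

# Placeholder expression matrix (e.g., 73 samples x a reduced set of most-variable genes).
rng = np.random.default_rng(0)
expr = rng.normal(size=(73, 200))
for power in (2, 5, 10):
    adj = signed_adjacency(expr, power)
    print(power, round(scale_free_fit(adj), 3))
```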
Relevance Analysis of Co-Expression Modules
To determine the reliability of and the underlying correlations among the constructed gene modules, module eigengenes were used to assign expression to gene modules for the association study. A TOM-based topological overlap plot was produced in accordance with the dissimilarity of gene expressions and visualized by the R function "TOMplot()" in the WGCNA package. In a topological overlap plot, rows and columns correspond to genes, while the colors at the top and left side indicate the gene modules. Darker or lighter blocks in the figure represent low or high correlation, respectively. Besides, in terms of ME values, the module interactions were observed multi-dimensionally by both a 3D scatter plot (R function "ScatterPlot3D") and t-SNE analysis (R function "Rtsne"). As an intersection algorithm, the R function "UpsetR" was used to detect potential overlapping targets among modules for evaluating the reliability of the constructed modules, since an intramodular gene cannot be a shared target of two or more different modules. In addition, both the intermodular Pearson's R and p-value (one-way ANOVA with Tukey's multiple comparison) were given for analyzing the correlation between modules. Highly connected gene modules were identified if the p-value was no larger than 0.05.
Functional Analysis For Gene Modules Across Nasal and Lung Samples
To identify the gene modules characterized by regulation of the Type I interferon signaling pathway, both gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were adopted to analyze the biological function of each module. The annotation analysis was performed with the online database DAVID (https://david.ncifcrf.gov/tools.jsp). A p-value less than 0.05 was considered significant. Gene modules with the enriched annotation "Type I interferon pathway" were collected as the filtered ones for further cross-specimen study. In particular, enriched annotations associated with both "Defense response to virus" and "Type I interferon signaling pathway" were filtered with UpsetR analysis for further intensive study. The gene expression atlases (3D view) of the nasal and lung specimens were visualized by the R function "plot3D". Additionally, the key interactive genes functionally representing both "Type I interferon pathway" and "Defense response to virus" were gathered and visualized by a Sankey diagram using the R functions "dplyr" and "networkD3". Moreover, the QIAGEN Ingenuity Pathway Analysis was used for mining the pathophysiologic relationships among targets based on reported experimental evidence. The pathway figure was plotted with BioRender and Adobe Illustrator (Supplementary Material 6).
Adjusted Rand Index-Based Similarity Analysis For Homogeneous Modules
The Rand index (RI) is an accuracy value for similarity analysis between actual or predicted clusters under the Permutation Model, given by the equation RI = (a + b) / C(n, 2), where "a" represents the number of gene pairs placed in the same module in both assignments, "b" the number of gene pairs placed in different modules in both assignments, n the total number of genes, and C(n, 2) the total number of gene pairs. However, the premises of the permutation model are violated under certain clustering conditions, such as a fixed cluster number with differently interpreted data. Thus, the Adjusted Rand Index (ARI), which corrects for the agreement expected under random assignment, symmetrically measures the similarity and co-expression between assignments.
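The contingency-table formula for the ARI is given in the next paragraph. As a quick computational illustration, the same index can be obtained from two module-label vectors; the study computes it with the R package "fossil", and the scikit-learn call below is an equivalent sketch in Python. The label vectors here are hypothetical toy assignments, not the nasal- and lung-derived module labels.

```python
from sklearn.metrics import adjusted_rand_score

# Hypothetical module assignments for the same ordered set of eight genes,
# one labelling from the nasal-swab network and one from the lung network.
nasal_modules = [1, 1, 2, 2, 2, 3, 3, 3]
lung_modules = [8, 8, 6, 6, 6, 6, 7, 7]

ari = adjusted_rand_score(nasal_modules, lung_modules)
print(round(ari, 3))  # values close to 1 mean the two module partitions are highly similar
```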
The Adjusted Rand Index was calculated as ARI = [Σ_ij C(n_ij, 2) − Σ_i C(a_i, 2) · Σ_j C(b_j, 2) / C(n, 2)] / [(Σ_i C(a_i, 2) + Σ_j C(b_j, 2))/2 − Σ_i C(a_i, 2) · Σ_j C(b_j, 2) / C(n, 2)], where C(x, 2) = x(x − 1)/2, and n_ij, a_i, and b_j are entries and marginal totals of the contingency matrix built from the two assignments, computed with the R package "fossil". The range of the ARI varies from −1 to 1, in which negative values represent independent modules and a positive ARI stands for similar modules (1 indicating an essentially perfect match between two modules). Transcriptional Factors For Identified Type I Interferon-Related Genes Transcriptional factor analysis examines the sequence upstream of the transcriptional start sites of interferon-regulated genes, colored in blocks, to predict the potential binding transcription factors in the promoter. The prediction procedure was based on the MATCH algorithm with TRANSFAC 2012 matrices and a minimum false-positive cut-off. The transcriptional factor analysis was performed on the Interferome Database V2.0 (www.interferome.org). Constructing Weighted Co-Expression Gene Modules in Nasal Specimen and Pulmonary Tissue Infected by SARS-CoV-2 Identification of highly correlated consensus genes can disclose regulatory mechanisms that may be mediated by biologically or pathologically relevant genes. Since WGCNA establishes a scale-free gene network in terms of expression correlation, it can detect interconnected genes and modules characterized as functionally co-expressed units with a specific biological profile (28). Therefore, the hub genes and modules may play a dominant role as representative diagnostic or therapeutic targets for COVID-19 management (Figure 1). For dataset acquisition, we adopted the available RNA-sequencing data (H. sapiens) with SARS-CoV-2 infection from GSE163151 (nasal swabs) and GSE171668 (lung tissue) in the NCBI GEO database. Of note, surviving or postmortem samples with SARS-CoV-2 infection may also unveil the severity of COVID-19 in patients. The nasal swab specimen (GSE163151) may correspond to early/moderate SARS-CoV-2 infection, while the postmortem pulmonary tissues (GSE171668) probably point to the late infective stage, suggesting that the underlying specific gene/module profiles may be demonstrated in a stage-dependent manner across heterogeneous specimens. For data processing, after DESeq2-based data normalization and outlier elimination as described in the Methods, we included COVID-19-related nasopharyngeal swabs (n = 73 in GSE163151) and lung autopsy tissues (n = 16 in GSE171668) for WGCNA analysis, as shown in black lines in Figures 2A, F. The filtered genes (n = 2,649 in nasal swab; n = 2,369 in lung tissue), whose expression variance exceeded that of 90% of the whole genome, were included as well. For establishing a scale-free network for clustering gene modules, the weighting power β applied to the Pearson correlation was selected according to a relatively high signed R^2 from the scale-free topological analysis; therefore, β = 5 (nasal swab) and β = 10 (lung tissue) were used for constructing the scale-free clustering dendrograms (Figures 2B, G). The signed R^2 from a log-log linear model of the connectivity distribution was R^2 = 0.94 (nasal swab) and R^2 = 0.74 (lung tissue), with the mean connectivity close to 0 for both specimens, suggesting the successful construction of a scale-free correlation network for WGCNA analysis (Figures 2C, H). Thus, after hierarchical clustering and merging of close-distance gene patterns, eight gene modules were identified for both the nasopharyngeal specimens and the lung tissues (Figures 2D, I).
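A brief sketch of how the power value β and the scale-free fit indices quoted here (signed R^2 and mean connectivity) are typically obtained with WGCNA's pickSoftThreshold(); the candidate powers and the 0.8 cut-off are illustrative choices, not those of the study.

```r
library(WGCNA)

powers <- c(1:10, seq(12, 20, by = 2))
sft <- pickSoftThreshold(expr, powerVector = powers,
                         networkType = "signed", verbose = 0)

# Signed scale-free fit R^2 and mean connectivity for each candidate power
fit <- data.frame(
  power    = sft$fitIndices$Power,
  signedR2 = -sign(sft$fitIndices$slope) * sft$fitIndices$SFT.R.sq,
  meanK    = sft$fitIndices$mean.k.
)

# Pick the smallest power whose signed R^2 exceeds the chosen threshold
beta <- min(fit$power[fit$signedR2 > 0.8])
```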
More specifically, eight gene modules in nasopharyngeal dataset were as follows: ME-A (191 genes), ME (module)-B (741 genes), ME-C (558 genes), ME-D (208 genes), ME-E (23 genes), ME-F (145 genes), ME-G (194 genes), and ME-H (589 genes), while eight gene modules in postmortem lung tissue were shown as below: ME-A (165 genes), ME-B (215 genes), ME-C (67 genes), ME-D (147 genes), ME-E (161 genes), ME-F (403 genes), ME-G (177 genes), and ME-H (1034 genes). The relationships among clustered gene modules with high adjacency were visualized in the dendrograms ( Figures 2E, J). All the supporting data for gene module classification were shown in the Supplementary Materials 1, 2. Herein, weighted co-expression gene modules in SARS-CoV-2 infected nasal specimen and pulmonary tissue were established for further analysis. Impact of SARS-CoV-2 Infection on the Intramodular Correlation of Co-Expression Gene Modules in Nasal Swabs and Lung Tissues COVID-19-induced host response in nasal and lung tissues may share common regulated pathway with antiviral effects and innate immunity. Highly correlated co-expression gene modules may be pivotal in mediating pathological actions. Thus, we measured the module relationship in accordance with their Module Eigengene (ME) values as described in Methods section. To begin with, it was straightforward to illustrate the pairwise association of gene modules in a topological overlap plot ( Figures 3A, E). Co-expressed gene modules were colored with yellow, which are widely shown in the topological plot. Notably, intramodular genes cannot lie "intermediate" across distinct modules, which would fail to be strong connected intramodular targets in either module. Taken advantage of geometric data analysis in multiple dimensions, we adopted the 3D scattering approach (Figures 3B, C) and t-SNE dimension reduction analysis (Figures 3F, G) to observe the distribution of gene modules, indicating that the intramodular genes were potentially spread separately. It was demonstrated that there was no gene intersection across modules, suggesting the qualified composition of heterogeneous gene modules ( Supplementary Materials 1, 2). Also, the intramodular genes may be potentially dedicated together to a specific pathway. For examples, ME-H-Lung is related to "GO:0016032 viral process (p = 4.95E−07) and "hsa04330: Notch signaling pathway (p = 3.20E−04)", while ME-H-Nasal swab points to "GO: 0003341 cilium movement (p = 4.41E−17)" and "hsa05016: Huntington's disease (p = 4.90E −06)". Besides, ME-B-Nasal swab stands for "GO:0001569 patterning blood vessels (p = 1.52E−04)" and "hsa04974: Protein digestion and absorption (p = 1.51E−3)". Before conducting detailed functional analysis, we would like to initially measure the correlation profile among gene modules, which was further analyzed by ME-dependent Pearson's R shown in whole pairwise scatterplots in color with a p-value in the lower panel (one-way ANOVA with Tukey's multiple comparisons) (Figures 3D, H). Surprisingly, in 28 times linear regression calculation among eight modules in either nasal or lung specimens, the pairwise p-value lower than 0.05 were 24/28 for nasal swab and 28/28 for lung tissue, indicating the strong coexpression correlations among modules in the same sample. Taken together, our findings in this part were concluded as follows: 1) The scale-free network of consensus modules may be well-established without the noise of cross-module genes. 
2) Activation of a specific functional module may result in a cascaded fluctuation of certain genomic functions regulated by the remaining dependent modules. 3) As a molecular strategy against COVID-19, screening out a highly-preserved functional module in a scale-free network from heterogeneous samples may be meaningful for COVID-19 diagnosis. High Preservation of Type I Interferon Pathway Specific Gene Modules From Nasal Swab and Postmortem Lung Tissue Infected With SARS-CoV-2 To evaluate the similarity of specific co-expression gene modules in nasal and lung specimens, we measured the intersective genes, expression profiles, biofunctions, and reproducibility (adjusted Rand index) of modules across the distinct samples with the R functions "UpsetR", "pheatmap", "plot3D", and "fossil", respectively. A two-dimensional matrix was plotted to show the overlapping genes in modules from nasal swab or lung tissue (Figure 4A). It indicated that "ME-H-Lung and ME-H-Nasal swab" (120 genes), "ME-A-Lung and ME-F-Nasal swab" (48 genes), and "ME-H-Lung and ME-B-Nasal swab" (47 genes) have the most intersective genes (Top 3). However, only approximately one-ninth of the genes overlap between ME-H-Lung and either ME-H-Nasal swab or ME-A-Lung, suggesting the limited functional contribution of these intersection genes to "ME-H-Lung". Furthermore, the functional overlap is absent among these three modules, as shown in the functional analysis of the paragraph above. On the other hand, the overlapping genes (n = 48) in "ME-A-Lung and ME-F-Nasal swab" (also shown in the Venn diagram) make up a large proportion of ME-F-Nasal swab (48/145), showing that approximately one-third of the genes in either ME-F-Nasal swab or ME-A-Lung are shared between the two modules. The gene names and normalized expression (gene atlas) are shown in the right panel of Figure 4A. To measure the correlation between ME-F-Nasal swab and ME-A-Lung, linear regression and adjusted Rand index analyses were performed. As a result, the mean expression-based Pearson's R^2 (R^2 = 0.995) and p-value (p = 1.35E−7) indicated a significant correlation and similarity between these two modules from heterogeneous samples (Figure 4B, left panel). The highly-preserved gene profile was further validated by the adjusted Rand index-based homogeneity analysis (adjusted Rand index = 0.91616) (Figure 4B, right panel). The R package "SVA" was used to eliminate batch effects in the normalized gene expression of both datasets prior to the adjusted Rand index analysis. Both outcomes suggest high preservation between ME-F-Nasal swab and ME-A-Lung. Prompted by these findings, both GO and KEGG analyses were adopted for functional enrichment analysis (Figures 4C, D). Strikingly, the host responses for COVID-19 enriched in ME-F-Nasal swab are as follows: "GO:0051607 Defense response to virus (p = 9.72E−31)", "GO:0060337 Type I interferon signaling pathway (p = 7.58E−30)", and "hsa05164: Influenza A", with closely matching annotations shared between ME-F-Nasal swab and ME-A-Lung. Of note, both host responses, "Type I interferon signaling pathway" and "Defense response to virus", were predicted as the two common regulated pathways detected in SARS-CoV-2-infected nasal swab and lung tissue, suggesting the high functional preservation of Type I interferon-related genes between nasal and lung tissues in response to COVID-19. The summary of this section is as follows: 1) The SARS-CoV-2-stimulated "Type I interferon pathway" is an underlying, highly-preserved signaling pathway in nasal and lung samples.
2) A deep understanding of the COVID-19-induced fluctuation of the key co-expression genes related to both the "Type I interferon pathway" and "Defense response to virus" in the highly preserved modules (ME-F-Nasal swab and ME-A-Lung) across heterogeneous samples may benefit COVID-19 diagnosis and therapy. Herein, GO/KEGG enrichment with Sankey diagram analysis was adopted to functionally select the concordant genes associated with the "Type I Interferon signaling pathway" and at least one of the two annotations "Defense response to virus" and "Response to virus" (Figure 4E). As a result, 14 common genes were identified as the preserved co-expression genes mediating Type I interferon signaling in nasal swab and lung tissue in COVID-19, and the pairwise analysis of STAT1 against the other 13 genes is shown in Figure 5A. All the p-values of the pairwise analysis (STAT1 vs the other 13 genes) in nasal swab were lower than 0.05 (13/13), while the p-values for most of the STAT1-dependent paired comparisons in lung tissue were less than 0.05 (10/13). Although these 14 IFN-I-related genes, as a host response to COVID-19, were potentially preserved across nasal and lung specimens, the differential transcriptional expression of the genes in samples with or without SARS-CoV-2 infection remained obscure. Understanding the differential profile of the genes is conducive to understanding the relationship between the vulnerability of interferon activity and COVID-19 severity. Since GSE171668 (lung tissue) lacked negative controls for a COVID-19 comparison, we analyzed the differential expression of the 48 intersective genes in GSE163151 (nasal swab) and GSE150316 (lung tissue). As shown in the volcano plots (COVID-19 vs non-COVID-19 in homogeneous samples), the 48 intersective genes were highlighted as red points, in which gene names in red point to the 14 genes regulating the IFN-I pathway. Interestingly, the volcano plots indicated a significant increase of the expression of the 48 intersective genes (including the 14 IFN-I genes) in nasal swab (COVID-19 vs non-COVID-19), but a decreased profile in postmortem lung tissue (Figure 5B). More specifically, the transcriptional changes of the 14 IFN-I-related genes with or without COVID-19 infection are further shown in Figure 5C, indicating that SARS-CoV-2 infection induced a robust IFN-I response in nasal swab (14/14, p < 0.05), but a decreased response in postmortem lung tissue (0/14, p < 0.05). The nasal swab specimen (GSE163151) may relate to early/moderate SARS-CoV-2 infection, while the postmortem pulmonary tissues (GSE171668) probably point to the late infective stage. To further address this issue, GSE162835, which contains transcriptional data annotated with disease severity, was used to measure the "gene expression-severity" relationship of these 14 genes (Figure 5D). As a result, a 14-gene-based linear relationship was detected between mild and severe COVID-19 (p = 0.038). Additionally, the expressions of the genes (13/14, except OASL) were negatively correlated with COVID-19 severity (areas below the diagonal line in Figure 5D). The normalized expressions of the 14 genes in the nasal swab of mild or severe COVID-19 are further shown in a heatmap (Figure 5E). Consistent with our result, it has been reported that the increased level of IFN-I-related genes at the onset of COVID-19 is reversed in the late stage due to the enhanced load of SARS-CoV-2 virus (29). Taken together, the 14-gene profile may be suitable for stratifying COVID-19 severity.
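A minimal sketch of the kind of per-gene comparison behind the volcano and severity plots described above. It assumes a normalized genes-by-samples matrix `mat` and a label vector `group` with values "COVID" and "control", and it uses a Wilcoxon test purely for illustration; this is not the exact pipeline of the study.

```r
# The 14 IFN-I-related genes named later in the Discussion
ifn14 <- c("BST2", "IFIT1", "IFIT2", "IFIT3", "IFITM1", "ISG15", "MX1",
           "MX2", "OAS1", "OAS2", "OAS3", "OASL", "RSAD2", "STAT1")

# log2 fold change and Wilcoxon p-value per gene (COVID vs control)
de <- t(sapply(ifn14, function(g) {
  covid <- mat[g, group == "COVID"]
  ctrl  <- mat[g, group == "control"]
  c(log2FC = log2(mean(covid) + 1) - log2(mean(ctrl) + 1),
    p      = wilcox.test(covid, ctrl)$p.value)
}))

de <- as.data.frame(de)
de$padj <- p.adjust(de$p, method = "BH")   # Benjamini-Hochberg correction
de[order(de$padj), ]
```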
Additionally, to gain an in-depth understanding of the regulatory role of these 14 genes, we further investigated the potential transcriptional factors capable of binding to the promoters of these 14 key genes using the Interferome database (30). The predicted transcriptional factors mainly belong to the NF-kB, STAT, and IRF families (Figure 5F). In sum, on top of RT-PCR virus detection, the 14 IFN-I-related genes that are highly preserved between nasal swab and pulmonary tissue may further complement the diagnosis of COVID-19 and the assessment of its severity. Differential Diagnosis of COVID-19 With Other Respiratory Diseases by a 14-Gene Expression Profile Since the 14 IFN-I-related genes are classical STAT-IRF-associated genes, which can be triggered by other viruses and not by SARS-CoV-2 alone, it is essential to conduct a differential diagnosis between COVID-19 and other respiratory infectious diseases. To address this issue, we further included RNA-sequencing data of Influenza A, Influenza B, respiratory syncytial virus (RSV), and Measles (GSE32155, GSE163151, and GSE171668) for the differential analysis. Firstly, the consistency of the 14-gene profile was determined in heterogeneous COVID-19 samples. Apart from the previously used datasets (GSE171668-lung tissue and GSE150316-lung tissue), four additional datasets were included for pairwise analysis. These four additional COVID-19-related datasets comprise GSE147507 (lung bronchial epithelial cells), GSE163151 (nasal swab), GSE162835 (nasal swab), and GSE182569 (lung bronchioalveolar fluid and nasal swab). As a result, the minimum Pearson's R-square and the maximum p-value over all paired tests were R^2 = 0.952 and p = 2.6E−9, respectively, suggesting a highly conserved transcriptional profile of the 14 IFN-related genes in heterogeneous samples (nasal swab, lung tissue, bronchioalveolar fluid) (Figures 6A, B). To establish a molecular reference for COVID-19 diagnosis, we quantitatively mapped a trendgram of the Z-score-quantified 14-gene expression for COVID-19-specific diagnosis. In Figure 6C, the genes shown in black rectangles (OASL, STAT1, and MX1) exhibit the largest variance from the baseline (Top 3). The data used for plotting the 14-gene Z-score trendgram were retrieved from the COVID-19 datasets shown in Figure 6A. Besides, trendgrams of the 14-gene normalized expression in various diseases, including COVID-19, Measles, respiratory syncytial virus (RSV), and Influenza A/B, were plotted for detecting expression differences; red circles indicate peaks with a trend distinct from that of COVID-19, suggesting the existence of expression distinctions between COVID-19 and other infections (Figure 6D). To deepen the differential diagnosis, linear regression analysis was used to further distinguish the type of viral infection. In Figure 6E, firstly, the 14-gene profile of Measles is not linearly related to that of COVID-19 (R^2 = 0.35, Pearson's p = 0.52), suggesting the feasibility of a 14-gene-based differential diagnosis between COVID-19 and Measles. For COVID-19 in comparison with the other respiratory infections (Influenza A/B or RSV), the R^2 between COVID-19 samples was higher (R^2 > 0.9) than that of COVID-19 vs Measles/RSV/Influenza A/B (all R^2 < 0.9). Besides, the Pearson's p-value for COVID-19 vs COVID-19 samples (p = 6.5E−10) is at least 1,000 times lower than that of COVID-19 vs Influenza A/B and RSV (all p ≥ 4.9E−7), suggesting the high preservation of the 14 genes among COVID-19 samples.
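A hypothetical sketch of the two comparisons just described, the per-gene Z-score trendgram and the pairwise Pearson correlation between two datasets. It assumes two normalized genes-by-samples matrices `matA` and `matB` and the `ifn14` gene vector from the earlier sketch; it is illustrative rather than the study's exact code.

```r
# Z-score profile of the 14-gene panel within one genes-by-samples matrix
zprofile <- function(mat, genes) {
  m <- rowMeans(mat[genes, ])      # mean expression per gene
  (m - mean(m)) / sd(m)            # standardize across the panel
}

zA <- zprofile(matA, ifn14)
zB <- zprofile(matB, ifn14)

# Pearson correlation between the two 14-gene profiles
ct <- cor.test(zA, zB)
c(R2 = unname(ct$estimate)^2, p = ct$p.value)

# Simple trendgram overlaying both profiles
matplot(cbind(zA, zB), type = "b", pch = 1:2, xaxt = "n",
        xlab = "", ylab = "Z-score")
axis(1, at = seq_along(ifn14), labels = ifn14, las = 2, cex.axis = 0.7)
```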
However, although the Pearson's correlation of "COVID-19 vs COVID-19" is much stronger than that of "COVID-19 vs Influenza A/B or RSV", the correlative level is still significant in "COVID-19 vs Influenza A/B or RSV" (P <0.05). It is therefore essential to discover a diagnostic host classifier that can distinguish among COVID-19 and Influenza A/B/RSV. Notably, although the Z-scores of OASL, STAT1 and MX1 were far away from the Z-score baseline within a 14-gene profile, their relative expression (locations) in linear models are approximately similar, showing the invalid potential for differential diagnosis ( Figure 6E). However, among the IFN-I related 14 genes, OAS1 may play a complementary role in the differential diagnosis of COVID-19 and Influenza A/B/RSV. The reason is that the expression value (point location) of OAS1 is on the linear regression line in the comparison of "COVID-19 vs COVID-19" in heterogeneous samples, but it is an outlier of the regression line and 95% confidence interval in comparison of "COVID-19 vs InfluenzaA/B/RSV" (Figure 6E), indicating an underlying diagnostic classifier of OAS1 in a 14-gene regression model. The normalized expression of representative genes, including OAS1, OASL, STAT1, and MX1, were shown in the violin plots Figure 6F, which aims to visualize the transcriptional expression of characteristic genes from 14-gene regression models and a Z-score trendgram. In sum, it is prospected that based on the direct determination of SARS-CoV-2 virus by RT-PCR, the detection of transcriptional profile of these 14 IFN-I related genes may be a promising molecular reference for COVID-19 diagnosis. DISCUSSION Accumulating evidence has indicated that direct testing of SARS-CoV-2 virus may cause false-negative results by RT-PCR due to unstable viral loads and evolution. Besides, genetic profiles of host defense have been demonstrated to be able to recognize the specific bacterial or viral infection (31). Thus, discovering the unique transcriptional feature of COVID-19 may supplement the diagnostic strategy of COVID-19, especially the signature of host response. Notably, SARS-CoV-2-induced changes of gene expressions has reported to potentially distinguish COVID-19 from other infections (e.g., MERS-CoV and SARS-CoV), in which IFN-I genes is involved in the unique biosignature in response to SARS-CoV-2 infection (32). However, ISGs can be triggered by various stimulators. Thus, even used as a supplementary molecular reference, the diagnostic signature of specific ISGs should be representative for COVID-19. In this study, using transcriptional data RNA-Seq from GEO datasets, we identified highly-preserved genes/modules regulating IFN-I pathway in SARS-CoV-2-infected nasal swab and lung tissue by R-dependent machine-learning analysis. It intends to provide a complementary understanding of IFN-I-related host response as a diagnostic indicator of COVID-19 and its severity. For explicitly recapitulating our findings, firstly, we constructed a scale-free co-expression gene network without any preliminary assumption in terms of the normalized transcriptional data (RNA-seq datasets from GEO database) by WGCNA, since the real-world biological relevance commonly represents scale-free behaviors. After establishing the gene modules shown in the gene clustering dendrograms from either nasal or lung samples, the scale-free network was wellestablished with the absence of cross-module genes. 
Almost all the constructed pairwise module combinations in nasal swab (24/28) and lung tissue (28/28) were significantly correlated, indicating that screening out a predominant functional module may play a representative role in a genomic network. In homology research, highly-preserved genes/modules in heterogeneous specimens usually possess significant biological functions with similar drivers or regulators, which may be meaningful for demonstrating susceptible gene targets in an identical disorder. Using adjusted Rand index-based similarity analysis and UpsetR, we identified a highly-preserved genetic profile (n = 48) across the nasal swab and lung samples infected by SARS-CoV-2. Coincidentally, the biological function of the 48 intersective genes pointed to both the "Type I interferon signaling pathway" and "Defense response to virus", suggesting that "regulating the IFN-I-related signaling pathway against virus" may be highly conserved in nasal swab and lung tissue. Based on these findings, we identified 14 IFN-I-related genes as the most dominant functional genes among these 48 genes, which resulted from the highly-enriched annotations including "Type I Interferon signaling pathway" and at least one of the two annotations "Defense response to virus" and "Response to virus". The highly preserved IFN-I-related 14 genes are as follows: BST2, IFIT1, IFIT2, IFIT3, IFITM1, ISG15, MX1, MX2, OAS1, OAS2, OAS3, OASL, RSAD2, and STAT1. These 14 genes are also documented to be involved in the host response to COVID-19 (29). The potential interactions between these 14 genes and viruses have been reported as follows: BST2 was found to inhibit viral egress, an effect antagonized by the SARS-CoV-2 accessory protein Orf7a (33); IFIT1/2/3 may inhibit the translation and replication of viruses (36); MX1 has a potential suppressive effect on the activity of the viral ribonucleoprotein complex and its GTPase (37); MX2 may be effective in repressing viral replication, transcription, and nucleocapsid shuttling (38); OAS1/2/3 are mainly implicated in inhibiting viral replication, while OASL is associated with viral translation (39); RSAD2 has underlying antiviral effects on viral egress and replication (40); and IFN-related STAT1 nuclear translocation is an indispensable process for antiviral signal transduction (41). Additionally, SARS-CoV-2 is documented to suppress interferon and STAT activity, resulting in the clinical manifestations of COVID-19. In our findings, STAT may play the central role in this 14-gene network for COVID-19 diagnosis (Supplementary Material 6). On the other hand, using seven independent RNA-sequencing datasets from NCBI-GEO, we further mathematically validated the high preservation of this 14-gene profile in homogeneous and heterogeneous samples infected by SARS-CoV-2, supporting the diagnostic role of the 14-gene profile for COVID-19. Besides, consistent with previous results and based on GSE162835 (RNA-sequencing data labeled with COVID-19 severity), our study suggests that these 14 IFN-I genes are abundantly expressed in the early stage of COVID-19, whereas their expression decreases in the severe period (Figure 5D). Moreover, a clinical trial further reported that the expression of 8 of the 14 genes (BST2, MX1, OAS1, IFIT1, IFITM1, ISG15, RSAD2, and STAT1) was relatively decreased in patients with advanced-stage COVID-19 compared with the early stage, which further supports the diagnostic value of this 14-gene profile for COVID-19 severity (42).
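The module-preservation check referred to above can be reproduced in outline with the fossil package used in the Methods. This sketch assumes `labelsNasal` and `labelsLung` are module assignments for the same ordered set of shared genes in the nasal-swab and lung networks.

```r
library(fossil)

# Module assignments for the shared genes, recoded as integers
g1 <- as.integer(factor(labelsNasal))
g2 <- as.integer(factor(labelsLung))

rand.index(g1, g2)       # raw Rand index
adj.rand.index(g1, g2)   # chance-corrected; values near 1 indicate
                         # nearly identical module membership
```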
Apart from these findings, we reviewed the transcriptional factors binding to 14 IFN-I gene-related promoter sites in the Interferome Database, showing that a series of NF-kB, STAT, and IRF family members have potential regulatory capability for these 14-IFN genes. For biological interpretation, the activity of NF-kB is a doubleedged sword. Normal binding between NF-kB and IFN is functioned by regulating cell survival and innate/adaptive immune responses, while aberrant NF-kB may contribute to inflammation (43). IFN activity is linked with JAK/STATdependent innate antimicrobial immunity (44). All types of IFNs are able to produce STAT by JAK-induced tyrosine phosphorylation (45). STAT1 mutations can result in IFNrelated infections and inflammation (46). Nevertheless, IFNdependent anti-pathogen effect is potentially associated with the absence of STAT1 (47). STAT1 and STAT2 are both regarded as the primordial signal regulators of IFN-I as functioned by genetic ablation, hypomorphic mutation or abnormal function of impaired antiviral IFN-related genes (48). IRF9 form ISGF3 complex can transactivate IFN-related genes for antiviral response as well (49). STAT3 can be activated by IFN-I stimulation in numerous cell types (50). An antiinflammation role has been reported in IFN and Toll-like receptor response. Both IRF3 and IRF7 serve as a critical role in IFN-I for combating viral infection if adequate IRF3 and IRF7 bind to the promoter sites of IFN-I genes. IRF3 degradation is highly related with the repression of IFN-b (51). ISG15 expression is highly related with IFN stimulation. Overexpression of ISG15 can accelerate DNA replication fork progression followed by abundant DNA damage and chromosomal breakage (52). Apart from these findings, we further provided additional information for differential diagnosis between COVID-19 and other respiratory infections such as COVID-19 vs Influenza A/B, RSV, and Measles. As the results, the distinction of COVID-19 and Measles can be performed according to the IFN-I related 14-gene expression. Moreover, the expression value (point location) of OAS1 in a 14gene linear regression model can be used as a diagnostic classifier between COVID-19 and Influenza A/B/RSV, since the expression of OAS1 is on the regression line in the comparison of "COVID-19 vs COVID-19" in heterogeneous samples, but it is an outlier of the regression line and 95% confidence interval in comparison of "COVID-19 vs InfluenzaA/B/RSV. Taken together, these biological and statistical interpretations for the results may further suggest a potential molecular strategy for COVID-19 diagnosis in terms of IFN-I associated 14-gene profile. For the current COVID-19 diagnosis, the most definitive and accurate approach for measuring genetic profile and virus may be the high-throughput sequencing. However, this method is relatively disadvantage to large-scale application due to the expensive equipment and skillsets required. Moreover, identification of too many differential expression genes may not be representative and precisive enough to COVID-19 diagnosis. Thus, the 14-gene-based transcriptional profile may significantly cut down on manpower and equipment expenditure to the diagnosis. On the other hand, as a sensitive and precise approach widely used in hospitals and laboratories, RT-PCR remains the gold standard for COVID-19 diagnosis (53). Since February 2020, the US Food and Drug Administration (FDA) approved licensed laboratory to detect SARS-CoV-2 virus (20). 
The procedure includes the isolation of viral RNA and its conversion to cDNA, followed by amplification of the cDNA using Taq DNA polymerase. Afterwards, RT-PCR with specific primers quantitatively detects parts of the SARS-CoV-2 genome. Such a procedure can also be used to measure the transcriptional levels of genes of interest. Notably, among asymptomatic COVID-19 patients, 38% are PCR negative for virus detection (54,55), which underlines the urgent need for supplementary diagnosis based on other biological references. Because RT-PCR remains an effective and sensitive way to monitor the transcriptional alterations of IFN-I-related genes in asymptomatic COVID-19 patients (23), detecting these gene candidates by RT-PCR is clinically meaningful as a supplementary diagnostic. In this paper, to be rigorous, we recommend a diagnostic strategy for COVID-19 that simultaneously detects the expression profile of the 14 IFN-I-related genes (Figure 6C) and the SARS-CoV-2 viral load. Meanwhile, the severity of COVID-19 is inversely proportional to the transcriptional level of these 14 IFN-I-related genes. Taken together, our study may provide a molecular reference for supplementary COVID-19 diagnosis, consisting of 14 highly-preserved genes that regulate the IFN-I-dependent host response in heterogeneous specimens, including nasal swab and lung tissues.
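One way such a 14-gene reference could be combined with RT-PCR in practice, sketched purely for illustration; the scoring function, the reference profile `zRef`, and the 0.9 cut-off are hypothetical and not derived from the study.

```r
# zRef: reference 14-gene Z-score profile built from COVID-19 datasets (assumed)
# newExpr: normalized expression of the same 14 genes in a new sample (assumed)
score_profile <- function(newExpr, zRef) {
  z <- (newExpr - mean(newExpr)) / sd(newExpr)
  cor(z, zRef)   # high correlation -> profile consistent with the reference
}

combined_call <- function(pcr_positive, profile_cor, cutoff = 0.9) {
  if (pcr_positive) {
    "COVID-19 (PCR-confirmed)"
  } else if (profile_cor > cutoff) {
    "host profile consistent with COVID-19; consider retesting"
  } else {
    "no molecular evidence of COVID-19"
  }
}
```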
2021-11-30T14:16:29.298Z
2021-11-22T00:00:00.000
{ "year": 2021, "sha1": "b66ecfb12174bef26712da1dfe1a0ec1f06d2660", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8647662", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "b66ecfb12174bef26712da1dfe1a0ec1f06d2660", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
41128631
pes2o/s2orc
v3-fos-license
Spectral density in a nematic state of iron pnictides Using cluster-perturbation theory, we calculate the spectral density A(k,w) for a nematic phase of models describing pnictide superconductors, where very short-range magnetic correlations choose the ordering vector (pi,0) over the equivalent (0,pi) and thus break the fourfold rotation symmetry of the underlying lattice without inducing long-range magnetic order. In excellent agreement with angle resolved photo-emission spectroscopy (ARPES), we find that the yz bands at X move to higher energies. When onsite Coulomb repulsion brings the system close to a spin--density-wave (SDW) and renormalizes the band width by a factor of approx. 2, even small anisotropic couplings of 10 to 15 meV strongly distort the bands, splitting the formerly degenerate states at X and Y by approx. 70 meV and shifting the yz states at X above the chemical potential. This similarity to the SDW bands is in excellent agreement with ARPES. An important difference to the SDW bands is that the yz bands still cross the Fermi level, again in agreement with experiment. We find that orbital weights near the Fermi surface provide a better characterization than overall orbital densities and orbital polarization. I. INTRODUCTION In recent years, iron-based superconductors have been intensely studied, 1,2 because of their high superconducting transition temperatures.As in the cuprates, antiferromagnetic (AFM) order is present in the phase diagram and phonons are not believed to be strong enough to explain the high transition temperatures. 3However, in the pnictides the AFM phase is a metallic spin-density wave (SDW) rather than a system of localized Heisenberg spins as is the case in the cuprates.At temperatures slightly above the transition to the SDW with ordering vector (π, 0) or (0, π), many weakly doped compounds show an orthorhombic phase without long-range magnetic order, but with broken rotational symmetry.This phase has slightly different lattice constants along the in-plane ironiron bonds, 4 but the anisotropy that develops in electronic observables such as resistivity 5,6 or angle-resolved photoemission spectroscopy (ARPES) [7][8][9][10] of detwinned samples appears considerably more pronounced. A number of competing scenarios have been proposed for this phase and can be broadly categorized as "magnetism", "orbital", and "lattice" driven.In the first case, the symmetry between equivalent magnetic ordering vectors (π, 0) and (0, π) is broken and the system chooses one of them without immediately establishing long-range magnetic order. 11,12In the second picture, it is the degeneracy between two d orbitals of the iron ion, the xz and yz states providing the greatest contribution to the states at the Fermi surface (FS), that is spontaneously broken; 13 the resulting orbital occupation then determines the effective magnetic exchange constants that generate the SDW order.Both pictures were first discussed in insulating spin and spin-orbital models and have since been generalized to take into account electron itineracy.Stud-ies in several models have shown that nematic phases can indeed develop between structural and magnetic transition temperatures. 
14,15 While a definite answer about the driving mechanism may be hard to nail down, as spin, orbital, 13,14 and lattice [15][16][17] degrees of freedom are naturally coupled and interact with each other, one may nevertheless try to identify the dominant ingredient(s). To this end, it is instructive to establish how each type of symmetry breaking manifests itself in observables. If the symmetry breaking is assumed to mostly concern the xz and yz orbitals, one can introduce it explicitly by adding a phenomenological energy splitting between the orbitals and evaluating its impact on observables such as the optical conductivity or the spectral density. These signatures were found to qualitatively agree with experiments, 18 where states of yz character are found to be higher in energy than those of xz character in several different pnictide compounds from the two structurally slightly different "111" and "122" families. On the other hand, ARPES data taken above the Néel temperature have alternatively been interpreted in terms of "band folding" due to magnetic order 9 or by emphasising the coupling between magnetic order and the orbital states near the Fermi level. 10 Short-range magnetic order 19 and the spin-nematic scenario 20 likewise reproduce the anisotropic conductivity, and the latter has been argued to lead to an effective orbital splitting. 14 However, a direct calculation of the spectral density in a nematic phase is so far lacking. We use here a method that combines real and momentum space, cluster-perturbation theory (CPT), 21,22 to calculate the one-particle spectral density A(k, ω) for a spin-nematic phase where rotational symmetry is broken via (very) short-range spin correlations that are AFM in the x and ferromagnetic (FM) in the y direction [corresponding to ordering vector (π, 0)], but without long-range magnetic correlations beyond second neighbors. [10] If onsite interactions bring the system close to the SDW transition, a small phenomenological magnetic anisotropy leads to large anisotropies in A(k, ω). Thus, we can theoretically describe the astonishing ARPES result that the overall band shifts characterizing A(k, ω) in the SDW phase are nearly fully developed already above the Néel temperature. 8 In order to be able to solve the problem exactly on a small cluster, we use variants of models with three 23 and four 24 orbitals; models and method are introduced and discussed in Sec. II. Section III A discusses the anisotropic band shifts induced by (strong) anisotropic magnetic couplings in the non-interacting models; in Sec. III B, we show that in the presence of onsite interactions and near the SDW transition, smaller magnetic anisotropies have a large impact. In Sec. IV, our results are summarized and discussed. II. METHOD AND MODEL The aim of the paper is to calculate the spectral density A(k, ω) in a phase where (short-range) magnetic correlations break the fourfold symmetry of the lattice, but without long-range magnetic order. The latter requirement prevents us from carrying out our calculations directly in momentum space, as it has been done for the paramagnetic and AFM phases. As an alternative approach, we choose here cluster perturbation theory. 21,22 In this method, the ground state and one-particle Green's function are evaluated (almost) exactly (with Lanczos exact diagonalization) for a fully interacting quantum model on a small cluster, and hoppings between clusters are treated in perturbation theory [for an illustration see Figs.
1(a) and 1(b)]. Apart from the limit of small intercluster hoppings, this approximation also becomes exact in the opposite limit of vanishing interactions, as can be seen by considering that it amounts to replacing the self energy of the full system by that of the small cluster. 26 Long-range order can be treated with the related variational cluster approach (VCA), 26,27 as it has been done for a two-orbital model for pnictides. 28,29 The biggest drawback of the VCA is that correlations are only included exactly within the small cluster, while longer-range effects are treated at a mean-field level. For nematic phases with at most short-range order, this limitation turns into a huge advantage: We can break the symmetry between the x and y directions locally on the small cluster, see below, but without imposing long-range order by a symmetry-breaking field. If the small cluster is, e.g., an AFM coupled dimer, its ground state is thus still given by a singlet, i.e., a superposition of "up-down" and "down-up", which removes long-range correlations. When using a dimer as the directly solved cluster, we find instabilities, i.e., poles of the one-particle Green's function that are on the wrong side of the chemical potential. While this does not necessarily invalidate the results (which are in fact similar to the more stable results described below), it may indicate that the self energy of a dimer differs too strongly from that of a large two-dimensional system to provide a reliable approximation. In order to be able to use three- (four-) site clusters instead, which lead to stable results, we restrict the Hamiltonian to the four (three) orbitals that contribute most of the weight at the FS. The results presented here were obtained with the cluster decompositions shown in Figs. 1(a) and 1(b), but equivalent results were found for the three-orbital model when using a "brick-wall" arrangement of 2 × 2 clusters instead of the "columns" in Fig. 1(a). The momentum-dependent tight-binding Hamiltonian in orbital space can be written as H_0 = Σ_{k̃,σ,µ,ν} T_{µν}(k̃) d†_{k̃µσ} d_{k̃νσ}, where d_{k̃,ν,σ} (d†_{k̃,ν,σ}) annihilates (creates) an electron with pseudo-crystal momentum k̃ and spin σ in orbital ν. The three-orbital model used here is based on the model of Ref. 23, but a few longer-range hoppings were added to provide a better fit of the bands near the FS, because the original three-orbital model has magnetic instabilities too far from (π, 0)/(0, π). 30 The T_{µν}(k̃) give the hoppings between orbitals µ and ν; in their explicit expressions, a bar denotes the complex conjugate. Hopping parameters are t_1 = −0.08, t_2 = 0.1825, t_3 = 0.08375, t_4 = −0.03, t_5 = 0.15, t_6 = 0.15, t_7 = −0.12, t_8 = −t_7/2 = 0.06, t_10 = −0.024, t_11 = −0.01, t_12 = 0.0275, ∆_xy = 0.75, µ = 0.4745; Fig. 1 shows the uncorrelated tight-binding bands. The four-orbital model was obtained by removing the 3z²−r² orbital 24 from the five-orbital model of Ref. 25 and slightly changing the onsite energy and third-neighbor hopping of the x²−y² orbital, to alleviate the fact that removing the 3z²−r² orbital moves it too close to the Fermi level, see Fig.
1(d). In principle, hoppings can be extended to three dimensions and parameters could be fitted to model specific compounds, at least in the more detailed four-orbital model. The features we aim to study here - an anisotropy between the X and Y points - have been experimentally observed in different compounds, and we are going to see that both the three- and the four-orbital models lead to similar results despite their somewhat different dispersions, suggesting that fine-tuning of the kinetic energy is not crucial. We use a unit cell with one iron atom to distinguish between momenta (π, 0) and (0, π), which would both map to (π, π) for a two-iron unit cell. Due to an internal symmetry of the two-iron unit cell, 31,32 it is always possible to use a one-iron unit cell for tight-binding models restricted to an Fe-As plane. However, the xz and yz orbitals with momentum k couple to the other orbitals at momentum k + (π, π). Thus, one writes the tight-binding Hamiltonians in terms of a pseudo-crystal momentum k̃, which is k̃ = k for xz/yz and k̃ = k + (π, π) for xy/x²−y²/3z²−r². In real space, such a notation corresponds to a local gauge transformation, where replacing, e.g., the xy_i orbital at site i = (i_x, i_y) by (−1)^(i_x+i_y) xy_i (and analogously for x²−y² and 3z²−r²) leads to a translationally invariant Hamiltonian with a one-iron unit cell. For comparison with ARPES experiments, however, this gauge transformation has to be undone, which implies that spectral weight at k̃ with orbital character xy, x²−y² or 3z²−r² is plotted at k = k̃ + (π, π). 18,33 In order to study a nematic phase, the four-fold lattice symmetry is explicitly broken by introducing a phenomenological Heisenberg interaction that couples the electron spins S_iµ and S_jν in all orbitals µ, ν on nearest-neighbor (NN) bonds i, j. For J_anis > 0, the coupling is AFM (FM) along the x (y) direction. The electron-spin operators are given by S_iν = (1/2) Σ_{s,s′} d†_{iνs} σ_{ss′} d_{iνs′}, where σ = (σ_x, σ_y, σ_z) is the vector of Pauli matrices. These interactions act only within the small cluster that is solved exactly. We are here not going to investigate the origin of such a breaking of rotational symmetry, which has been shown to occur in several models, 11,12,14,15 but we will study its impact on the system. We find that when the system is close to the spin-density wave, very small values of J_anis trigger highly anisotropic band distortions, suggesting that short-range correlations, as observed in a spin-fermion model, 19 indeed favor such a symmetry breaking. When onsite interactions 34,35 are taken into account, the same values of intra-orbital Coulomb repulsion U, inter-orbital repulsion U′, Hund's rule coupling J, and pair hopping J′ = J were used for all orbitals, along with the standard relation U′ = U − 2J, giving the standard multi-orbital onsite interaction in which α, β denote the orbital and S_i,α (n_i,α) is the spin (electronic density) in orbital α at site i. While the parameters relating to the xy and x²−y² orbitals can in principle be slightly different from each other and from the xz/yz doublet, symmetric interactions were chosen for simplicity. III. RESULTS A. Band anisotropy in the three- and four-orbital models In order to study the effects of phenomenological short-range magnetic correlations, the Hamiltonian given by Eqs.
( 1) and ( 7), was initially treated with the VCA on a four-site cluster, with AFM interactions along x and FM ones along y but without onsite Coulomb and Hund interactions.A fictitious chemical potential was optimized as a variational parameter, but did not have a large impact on the results.No tendencies towards long-range order were found, which agrees with expectations: Since the AFM Heisenberg interaction only acts within the cluster, it favors a total cluster spin of S tot = 0.In the large system, consisting of many noninteracting clusters, there is no magnetic order.Rather large J anis 0.3 eV has to be chosen to induce appreciable signatures of the anisotropy, which is a very large energy scale compared to the other parameters of the Hamiltonian.The reason is that the non-interacting model with four electrons per site does not contain any net unpaired spins that can directly be coupled by a Heisenberg interaction; the interaction first needs to be strong enough to induce a local spin. The spectral density for J anis = 0.5 eV is shown in Fig. 2(a).Apart from the fact that the Heisenberg interactions make the spectrum more incoherent, the bands are most strongly modified near X = (π, 0), which corresponds to the ordering vector that would be favored by the NN AFM interaction along x.One clearly sees that the yz states around X are moved to higher energies, while the xz states at Y = (0, π) are shifted to slightly lower energies in agreement with experiments.The energy shifts are momentum dependent: While the differences between X and Y are large, changes around Γ = (0, 0) are far less pronounced.The corresponding orbital-resolved FS can be seen in Fig. 2(b).Like the spectral density, it shows some features that are similar to those resulting from band folding in a (π, 0) SDW; for example, the xz electron pocket at Y has a "mirror pocket" at M = (π, π) = Y + (π, 0).However, the FS is still qualitatively different from the FS found in the long-range ordered SDW, where folding leads to additional features and largely suppresses the yz weight, 36 which dominates the hole pockets here.Such differences related to longrange order are consistent with ARPES experiments in NaFeAs, 8 see also the discussion in Sec.III B. The same behavior as in the three-orbital model is seen for the four-orbital case, see Fig. 3, where A(k, ω) is shown for increasing J anis = 0.2, 0.3, 0.4 eV.In the last case, the splitting between the states at X and Y is ≈ 150 meV.Taking into account that the overall band width has to be renormalized by a factor of 2-3, this is consistent with the order of magnitude of the 60 meV splitting reported for Ba(Fe 1−x Co x ) 2 As 2 . 7This can be compared to an explicit orbital splitting, similar to the mechanism proposed in Ref. 18.The splitting can be written as ∆ = (n yz −n xz )/2, where n xz (n yz ) is the density in the xz (yz) orbital, and was set to ∆ = 0.15 eV, which approximately reproduces the energy difference between the X and Y points indicated by the dashed lines in Fig. 
3(c).A momentum-independent splitting large enough to reproduce the energy differences between the X and Y points substantially distorts the features near the Γ point as well.[9][10] Total orbital densities do not turn out to be a reliable way to characterize the impact of the nematic order on states near the Fermi energy.Densities in the xz and yz orbitals differ only slightly in the four-orbital model with n xz − n yz ≈ 0.02 for J anis = 0.4 eV.This value is not strongly affected by 4% hole or electron doping, in contrast to a proposed sign change for hole doping 14 and it is broadly consistent with the small orbital polarizations found in mean-field analyses for the SDW state. 23,36In the three-orbital model, the orbital polarization is even opposite with n xz − n yz ≈ −0.1, because spectral weight with yz character is transferred below the Fermi level [see the density of states shown in Fig. 4(a)], and in contrast to the four-orbital model [see Fig. 4(b)], this weight is not balanced by xz states further away from µ.Nevertheless, the band reconstruction near the Fermi level and the band anisotropy are very similar in the two models.AFM correlations along x always bring the yz states around X closer to the Fermi level, even when the total orbital densities satisfy n yz > n xz , in contrast to a naive expectation that the yz bands should be lowered in energy in this case.When onsite interactions bring the three-orbital model closer to the SDW transition (see Sec. III B below), the orbital densities become almost equal with n xz − n yz ≈ −0.012 for U = 1 eV.Total orbital densities can determine magnetic properties via the Goodenough-Kanamori rules in Mott insulators, which do not have a FS.On the other hand, since the pnictides are more metallic with strongly hybridized orbitals, a more consistent and clearer picture can here be obtained if one concentrates on spectral weight near the Fermi level as will be discussed below. B. Impact of onsite Coulomb interaction In this subsection the impact of onsite interactions will be investigated.The full Eq.( 8) including spin-flip and pair-hopping terms can easily be included in the VCA.Interaction strengths were chosen below the critical values for the onset of long-range order because we want to focus on short-range correlations here.As can be seen in Fig. 5 for the two models considered here, lower values of J anis ≈ 0.2 is now sufficient to induce substantial asymmetries, in contrast to the larger J anis ≈ 0.4 to 0.5 eV needed for the noninteracting models.Onsite interactions favor local magnetic moments, even in the absence of long-range order, that can then be coupled even by weaker J anis . Finally, we study the three-orbital model very close to the SDW transition by setting U = 1.02 eV.In a mean-field treatment as used in Ref. 23, one finds an SDW with long-range magnetic order, but the optimal VCA solution does not yet show long-range order due to the presence of quantum fluctuations.However, the system is so close to a magnetically ordered state that very small J anis ≈ 0.015 eV = 15 meV already introduces strong short-range order and corresponding band anisotropies.Several occupied low-energy bands [e.g. between Γ = (0, 0) and Y = (0, π) as well as around M = (π, π)] in the spectral density, which is shown in Fig. 
5(c), have energies reduced by a factor of ≈ 2, consistent with the renormalization factor ≈ 2-3 needed to reconcile density-functional bands with ARPES. Bands above the Fermi level do not have reduced widths. This asymmetric impact of correlations is in agreement with dynamical mean-field studies. 37 In addition to the renormalization, J_anis = 15 meV induces an energy splitting of ≈ 70 meV between the X and Y points. In fact, the band at X has moved slightly above the chemical potential, as expected for the SDW phase. The fact that this happens even in the absence of long-range order is in excellent agreement with recent ARPES data for NaFeAs, where it was likewise found that the overall band positions at X and Y nearly reach their "SDW values" above the Néel temperature. 8 Nevertheless, the corresponding FS, see Fig. 5(d), clearly shows important differences to that of the SDW state: As the yz states cross the chemical potential here with a rather low Fermi velocity (leading to an elongation of the hole pocket at Γ along the x direction), they contribute substantial weight to the FS. In fact, both of the strong features along the Γ-X line are of yz character. In the SDW phase, in contrast, the yz orbital dominates the AFM order parameter and is thus mostly gapped out. 36 Related effects have likewise been observed in ARPES, where these yz bands open gaps at the Néel temperature. 8,38 IV. SUMMARY AND CONCLUSIONS The variational cluster approach was used to study the spectral density of a nematic phase in three- and four-orbital models for iron-based superconductors. We found that the method is well suited for problems involving short-range correlations without long-range (magnetic) order. The correlations considered in this study were extremely short-range, going only over NN sites, the minimum to break rotational invariance. While this is a somewhat extreme scenario, it has been argued that magnetic correlations that are effective only on a very short range lead to the linear temperature dependence of the magnetic susceptibility at high temperatures. 39 Nuclear quadrupole resonance 40 indicates that there are As ions seeing different electronic surroundings in the "underdoped regime", which would be in agreement with the present scenario of As ions involved in "magnetic" vs. "non-magnetic" bonds. When the symmetry between the x and y directions is broken by a phenomenological magnetic interaction that is AFM in the x direction, the bands with yz character around momentum X = (π, 0) move to higher energies, i.e., closer to the Fermi level. This is in agreement with ARPES on detwinned samples above the magnetic transition temperature, in both the "122" compound Ba(Fe1−xCox)2As2, 7 and the "111" compound NaFeAs. 8,10 The latter is not expected to have surface states 41 that might complicate the analysis of ARPES in 122 compounds.
42 The changes in the band structure due to the nematic order not only depend on the orbital, but also on momentum. [10] Total orbital densities and their difference are model dependent and not a reliable predictor of reconstructions of low-energy states. However, the orbital-resolved spectral weight and the bands near the Fermi level are affected in the same way both in a three- and a four-orbital model, with and without onsite interactions, indicating that they are more universal and less dependent on details of the model Hamiltonian. In agreement with previous findings on the orbital polarization of the FS in the SDW phase 36 and on transport properties, 19,43 this suggests that total (orbital) densities are here less important than in Mott insulators, as the metallic character of the pnictides makes states near the Fermi level far more important than those further away. When onsite interactions are strong enough to bring the system close to the SDW transition, very small anisotropic couplings can deform the bands until their broad features resemble bands in the SDW regime, i.e., bands are renormalized by a factor of ≈ 2 and the yz states at X move above the chemical potential, as seen in ARPES on NaFeAs just above the Néel temperature. 8 Nevertheless, the Fermi surface still differs from that of a state with full long-range magnetic order, where the yz states are mostly gapped out, 36,38 again in agreement with ARPES. 8
FIG. 1. (Color online) Schematic illustration of the cluster decomposition into (a) four-site and (b) three-site clusters used for the three- and four-orbital models. Ground-state energies and Green's functions of the small cluster - as connected by thick solid lines - are obtained by exact diagonalization. Clusters are then connected in CPT along the thinner dashed bonds. Within the cluster, AFM (FM) Heisenberg exchange acts between electrons along the x- (y-) bond. (c) Spectral density A(k, ω) of the non-interacting three-orbital model Eq. (4). Solid lines indicate the bands in terms of the pseudo-crystal momentum k̃, shading the spectral weight in terms of the "real" momentum k; the difference is that weight with xy character shifts by (π, π). [Note that along (0, 0)-(π, π), the bands with dominant xz/yz character contain these two orbitals with identical weight, even though the yz character, here drawn on top, dominates the figures.] (d) A(k, ω) of the non-interacting four-orbital model. Dashed lines indicate the results for the model obtained by removing the 3z²−r² orbital 24 from the five-orbital model of Ref. 25. Solid lines are for the model used here, where the x²−y² orbital is then somewhat removed from the Fermi level by changing t^33_xx from −0.02 to 0.03 and the onsite energy of the x²−y² orbital from −0.22 to −0.12 (notation as in Ref. 25). As in (c), shading indicates the spectral weight for the "real" momentum instead of the pseudo-crystal momentum. In the online version, red refers to xz, blue to yz, and green to all other orbitals. In all spectra, peaks are broadened by a Lorentzian δ/((ω − ω_0)² + δ²) with δ = 0.05, except for Fig. 1(d), where δ = 0.025. All energies are in eV.
FIG. 2. (Color online) (a) Spectral density A(k, ω) and (b) Fermi surface of the three-orbital model (four-site cluster) with parameters as given for Fig. 1(c) and an explicit symmetry-breaking J_anis = 0.5 eV, see Eq. (7), but without Coulomb repulsion and Hund's rule coupling. Shadings are for the "real" momentum, lines indicate the non-interacting model in pseudo-crystal momentum k̃.
FIG. 3. (Color online) Spectral density A(k, ω) of the four-orbital model (three-site cluster), see Fig. 1(d), and an increasing explicit symmetry-breaking term Eq. (7) of (a) J_anis = 0.2 eV, (b) J_anis = 0.3 eV, and (c) J_anis = 0.4 eV. Coulomb repulsion and Hund's rule coupling are not included. Shadings are for the "real" momentum, solid lines indicate the non-interacting model in pseudo-crystal momentum k̃. In (c), dashed lines are for a non-interacting model with an energy difference ∆ = 0.15 eV between the xz and yz orbitals, which was fitted to approximately reproduce the difference between the X and Y points.
FIG. 4. (Color online) Density of states for (a) the three-orbital model with J_anis = 0.5 eV and (b) the four-orbital model with J_anis = 0.4 eV. U = J_Hund = 0 in both cases.
FIG. 5. (Color online) Spectral density with anisotropic short-range magnetic order and onsite interactions. (a) For the four-orbital model and U = 0.3 eV, J_Hund = 0.075 eV and J_anis = 0.2 eV. (b) For the three-orbital model with U = 0.6 eV (J_Hund = 0.15 eV) and J_anis = 0.2 eV and (c) U = 1.02 eV (J_Hund = 0.255 eV) and J_anis = 0.015 eV. For these last values of U and J_Hund, the three-orbital model is very close to the SDW. (d) FS corresponding to the parameters in (c); it captures spectral weight within 1 meV of the Fermi level; broadening of the spectral weight is consistent with (c). Onsite interactions are lower for the four-orbital model, because it is at half filling and Hund's rule thus moves it closer to a Mott transition, while it partly compensates U away from half filling, as in the three-orbital model. Shadings are for the "real" momentum; lines indicate the non-interacting model in pseudo-crystal momentum k̃.
2012-05-03T07:56:40.000Z
2012-02-16T00:00:00.000
{ "year": 2012, "sha1": "e79c0bbcd6b2c8fb581aaf36f3222ada59c3c6bf", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.85.184515", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "e79c0bbcd6b2c8fb581aaf36f3222ada59c3c6bf", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
4931820
pes2o/s2orc
v3-fos-license
Relationship between diet and ankylosing spondylitis: a systematic review
The question of whether diet plays a role in the onset of ankylosing spondylitis (AS) or can affect the course of the disease is an important one for many patients and healthcare providers. The aims of this study were to investigate whether: 1) patients with AS report different diets to those without AS; 2) amongst patients with AS, diet is related to severity; 3) persons with particular diets are less likely to develop AS; 4) specific dietary interventions improve the AS symptoms. The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Medline, Embase, Cochrane Library, and reference lists of relevant articles were searched. Two authors independently selected eligible studies, assessed the quality of included trials, and extracted the data. Sixteen studies (nine observational and seven interventions) were included in the review. Due to the heterogeneity of the study designs and analyses, the results could not be aggregated. Evidence on a possible relationship between AS and diet is extremely limited and inconclusive due to the majority of included studies being small, single studies with moderate-to-high risk of bias, and insufficient reporting of results.
Introduction
Ankylosing spondylitis (AS) is a chronic inflammatory rheumatic disease with estimated prevalence per 10,000 of 23.8 in Europe, 16.7 in Asia, 31.9 in North America, 10.2 in Latin America, and 7.4 in Africa (1). AS adversely affects patients in terms of symptoms such as pain and fatigue, leading to impaired function and diminished quality of life (2,3). Despite the development of biological therapy, which has revolutionized the treatment of AS, many patients explore complementary treatments such as dietary therapy (4).
There is overwhelming evidence of the importance of diet in the etiology of a wide range of diseases such as rheumatoid arthritis (RA), cardiovascular disease, and cancer (5)(6)(7). An examination of dietary patterns in a large cohort of nurses in the United States found that dietary patterns characterized by high intakes of fruit, vegetables, legumes, whole grains, poultry, and fish were associated with a reduced risk of RA. In contrast, dietary patterns typical of industrialized countries (high intake of red meats, processed meats, refined grains, French fries, desserts and sweets, and high-fat dairy products) were associated with an increased risk of RA (8). A meta-analysis of placebo-controlled trials in patients with RA reported that dietary fish oil has a modest effect in reducing tender joint count and morning stiffness, an effect attributed to the anti-inflammatory mechanism of omega-3 polyunsaturated fatty acids (9).
It has been suggested that a low starch diet leads to lower AS disease activity and that Klebsiella pneumoniae, which can be influenced by starch consumption, is a triggering factor involved in the initiation and development of AS (10)(11)(12)(13).
Material and Methods
The review protocol was registered with PROSPERO, an international register of systematic reviews (registration number: CRD42015026699) (20). We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (21).
Literature search strategy The search terms relating to AS (AS, spondyloarthropathy, spondylitis, and spondyloarthritis) were combined with terms relating to diet (diet, nutrition, food, food habits, nutritional status, vitamins, antioxidants, fatty acids, carbohydrates, dietary protein, calcium, fish oils, fruit, vegetables, and micronutrients) to find articles published in Embase and Medline up to August 2016.Additionally, two journals (Annals of the Rheumatic Diseases and Annual Review of Nutrition) were searched manually from 2010 to 2015.The references of the retrieved manuscripts were screened for further relevant papers. Inclusion and exclusion criteria We included all observational studies on humans (cross-sectional, cohort, case-control, and case series studies), but we excluded case reports.We also excluded case series with a small number of study participants (<5).Uncontrolled treatment outcome studies and randomized controlled trials (RCT) were also included.Participants had to be at least 18 years old.We considered studies published in English that evaluated the presence of AS (or axial spondyloarthritis (axSpA)) using established criteria or clinical diagnosis, included diet assessment, and quantified an association between AS (or axSpA) and diet.In this review, we did not consider alcohol consumption. Data extraction Two independent reviewers screened the title and the abstract of each study following the inclusion criteria.If disagreement occurred between the two reviewers, a third reviewer was consulted. For eligible studies, data extraction was performed by two independent reviewers using a specially designed data collection form. Assessment of study quality We used the Scottish Intercollegiate Guidelines Network (SIGN) methodology checklists to assess the quality of individual studies (22).Two reviewers independently conducted the quality assessment.If a disagreement occurred, a third reviewer was consulted. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach was used to rate the quality of evidence (23,24). Search results The search of the databases yielded 582 publications (Figure 1).After the removal of irrelevant papers (n=512) and duplicates (n=18) and having found 10 additional papers from other sources including searching the references of full-text papers, 58 full-text published papers, 2 letters, and 5 conference abstracts were considered.After further consideration, 3 abstracts were removed because they did not include the necessary information and 46 full-text articles were removed because they did not report AS (n=3), did not look at relationship between AS and diet (n=2), included children (n=1), had a very small sample of AS patients (n=4), contained case reports (n=3), did not report information on diet (n=1), was in the Chinese language (n=1), did not report diet (n=25), did not report information on AS (n=2), and were non-systematic reviews (n=3) and thesis (duplicate publication, n=1).There were a total of 16 studies included in the review, 10 of which were full-text papers, two were letters, two studies were summarized in review articles, and two were conference abstracts (25)(26)(27)(28)(29)(30)(31)(32)(33)(34)(35)(36)(37)(38)(39)(40). 
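The search strategy described above combines the AS terms and the diet terms as two OR-blocks joined by AND. The short Python sketch below only illustrates how such a Boolean query string can be assembled from the two term lists; it is not the authors' actual Medline or Embase syntax, which uses database-specific field tags and subject headings.

```python
as_terms = ["ankylosing spondylitis", "spondyloarthropathy",
            "spondylitis", "spondyloarthritis"]
diet_terms = ["diet", "nutrition", "food", "food habits", "nutritional status",
              "vitamins", "antioxidants", "fatty acids", "carbohydrates",
              "dietary protein", "calcium", "fish oils", "fruit",
              "vegetables", "micronutrients"]

def or_block(terms):
    # Quote multi-word phrases so they are searched as phrases, then join with OR.
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

# AND the two blocks together: any AS term combined with any diet term.
query = or_block(as_terms) + " AND " + or_block(diet_terms)
print(query)
```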
Participation rate was reported by three studies describing a case series (range: 46%-93%) and one case-control study (25,29,32,34) (89%).None of the treatment outcome studies reported a follow-up rate, while the follow-up rate in RCTs was between 65% and 100% (28,30).The minimum duration of the follow-up was 3 months and the maximum, 10 months (Table 1, Appendix 1). Availability of information on diet and nutrition In the observational studies, the assessment of diet was conducted using a questionnaire with three studies using a validated, 84-question, semi-quantitative, food-frequency questionnaire (FFQ).Other methods used were maintaining a food diary, face-to-face interview, interview with a dietician, and telephone survey (Table 1, Appendix 2) (25, 26, 29, 31-34, 39, 40, 52, 53). Studies examined different types of food and nutrients in relation to AS, most commonly regarding the consumption of starch, dairy, and different types of diet (Table 2, 3, Appendix 3). Quality of studies As studies included in this systematic review had different designs, their quality was assessed separately as case series (Appendix 4), treatment outcome studies (Appendix 5), case-control study (Appendix 6), and RCTs (Appendix 7). The quality could not be fully assessed in studies published as abstracts, letters, or described in reviews (35)(36)(37)(38)(39)(40).Overall, the GRADE quality of evidence was low or very low with only two studies, both of them RCTs, fully satisfying the quality criteria (Table 2, 3) (28,30).In studies of intervention, compliance was assessed by questioning the participants about adherence to the diet, by counting the remaining capsules, by asking the participants to report on the number of study capsules that they had taken during the previous week, or by asking the participants to return all the study drug containers for weighing (27,28,30,35,37). Most studies, especially more recent, used modern statistical methods and investigated the effects of potential confounding factors such as age, gender, smoking, and body mass index (BMI) (Appendix 8). None of the studies investigated data to answer whether persons with particular diets are less likely to develop AS (Objective 3). Objective 1: Comparison of diet in patients with AS and those without AS Only one case-control study investigated whether diet differs among AS patients as compared to persons without AS (Table 2, 3) (34).The calculated energy intake was significantly higher among AS patients as compared to the controls (1,940 vs. 1,819 kcal, p<0.05).The difference in the calculated energy intake persisted even after adjusting for physical activity level, weight, sex, and age.Apart from differences that were not significant, i.e., a lower intake of monounsaturated fats (p=0.07) and total fat (p=0.07)among the patients, there were no other differences in diet (consumption of dairy products, fish, meat, fruits, and vegetables) when compared with the controls (Appendix 10-13). Ge et al. (31) performed an association study examining the gene-environment interaction between IL-1F7 gene polymorphisms and measures of dietary exposure.There was an interaction with the type of cooking oil with an increased risk for cooking using half plant-half animal fats (OR 4.27, 95% CI 1.59-11.48,p=0.004).The interaction between IL-1F7 alleles and other factors such as salt, meat, or vegetable consumption in AS patients was not statistically significant (all p>0.05) (Table 2, 3; Appendix 12-14). 
Objective 2: Diet and severity of AS (observational studies)
Overall, the evidence interlinking diet and AS severity was limited, and we were unable to perform a meta-analysis due to the lack of reports with data, diversity in outcome, and definition of exposure.
Haugen et al. (25) reported that 78% of AS patients believed that diet influenced the symptoms of their disease, and one-third of the patients reported worsening symptoms after the intake of certain foods, with 35% mentioning increased swelling of the joints. Foods most frequently implicated were meat, coffee, sweets, sugar, chocolate, citrus fruits, and apples. Sixteen percent of the AS patients had been through a fasting period on their own initiative, with a majority of them reporting less pain, less stiffness, and less joint swelling. Twenty-two percent of patients with AS, in an attempt to alleviate disease symptoms, had previously tried diets such as lactovegetarian or vegan diets (Appendix 16).
Of the four studies reporting data on the relationship between foods high in starch and AS severity, two were conference abstracts (Appendix 9, Table 2) (26,32,39,40). While one study reported a significant association of daily starch intake with BASDAI, BASFI, and BASG, other studies did not find an association between the consumption of foods high in starch and BASDAI (26,39,40). There was no association of daily starch intake with SF-36, CRP, or ESR (39). A small proportion of patients (1.8%) reported aggravation of symptoms associated with food rich in flour (32). Silva (39) reported that the average starch intake was significantly and positively associated with BASDAI, BASFI, and BASG, but not with SF-36, CRP, or ESR. The linear regression showed increases of 3%, 3.9%, and 2.9% in BASDAI, BASFI, and BASG scores, respectively, by milligram of ingested starch. The authors concluded that the higher intake of starch was related to increased disease activity and greater functional impairment.
Three studies that reported data on the relationship between the consumption of dairy products and AS did not find any association with BASDAI (Table 2, Appendix 10) (26,32,40).
One study that reported a case series did not find an association between the consumption of fish and dietary omega-3 fatty acid and BASDAI. A single-arm intervention study of 25 patients (35) investigated whether a diet that excluded dairy products was beneficial for the course of the disease or not. The results after six weeks of follow-up showed relatively good compliance to the diet (72%). Amongst the participants, 52% reported good improvement, out of which 62% could discontinue their nonsteroidal anti-inflammatory drug (NSAID) therapy. When follow-up of the responders was carried out for 80% out of the 15 patients at 3 months, all the 10 patients at 6 months, and 89% out of 9 patients at 9 months, it was found that they were satisfied and had continued the dietary regime. The authors reported that six patients were still observing the diet after two years of follow-up and remained free from any other therapy. We did not perform a meta-analysis due to the diversity of outcomes and types of probiotic supplements (Table 2, Appendix 15).
Discussion This is the first systematic review to examine the association between AS and diet.It has shown that only a few, relatively small, and mainly observational studies have been conducted in this field.From the 16 articles included in the review, there is little evidence regarding the fact that aspects of diet influence the severity of AS or are part of its etiology.In particular, there is no evidence that a reduction in starch intake, exclusion of dairy products, consumption of fish and fish oil or probiotic supplementation reduce susceptibility toward AS or diminish AS symptoms. This review has many methodological limitations.Firstly, there is scarce literature on the topic and 6 out of 16 studies were not published as full reports and, therefore, limited data were available for data extraction.Several studies did not report the actual figures and analysis results.The studies included in this review vary extensively in design, AS diagnostic criteria, measures of disease severity, exposure measured, measurement instruments, intervention, and duration of follow-up.Therefore, it was not possible to conduct a meta-analysis. Although we limited our search to publications in the English language, the studies included in this review were from 10 countries.Most studies were conducted in a hospital setting, except one study that used a patient society Web site (28).The participation rate and the participants' selection method were not stated in the majority of the studies and, therefore, it was difficult to determine how representative they were, limiting the generalizability of the findings.Most studies, especially more recent, used appropriate statistical methods and investigated the effects of potential confounding factors. Retrospective assessment of dietary exposure may introduce recall bias.However, observational studies included in this review seem to evaluate the current dietary habits, except one study that collected information on special diets and food avoidance in the past three months and one study that used a food diary over five consecutive days (29,39).In addition, when assessing dietary risk factors in prevalent cases of AS, it is difficult to ascertain if the diet influences the development of AS over the course of the disease.It is also common for people to change their diets soon after the onset of disease and, therefore, the current diet may not actually represent past dietary intake. A validated FFQ was used in three observational studies from the same research group, and reproducibility was assessed in the study by Haugen et al. (25); however, the reliability and validity of the dietary data collected in the other studies was not clear (32,34). The majority of AS patients (78%) as well patients with other rheumatic diseases (RA, 64%; juvenile rheumatoid arthritis (JRA), 88%; psoriatic arthropathy, 71%; osteoarthrosis, 65%) believe that diet influences their disease symptoms (25).This suggests that if diet is important, it may influence the inflammatory process across rheumatic diseases. Studies involving AS and other rheumatic diseases report dietary interventions such as fasting, vegan diet, and lactovegetarian diet (25,54).Clinical dietary therapy studies of AS have focused on some form of dietary elimination such as low starch diet and diet that excludes dairy products (35,37). 
Gut involvement in the pathogenesis of rheumatic diseases was proposed by Smith (55), and more recent studies have investigated this further, suggesting that as the intestinal bacterial flora may be affected by diet, a diet that could influence the intestinal flora might have an effect on disease activity (54,56). Klebsiella pneumoniae was suggested as a trigger for AS and Crohn's disease based on molecular mimicry, and a low starch diet was proposed as a means of reducing Klebsiella bacteria in the gut and, hence, further pathological damage (10)(11)(12)(13). The gut microbiome can also be altered using probiotics, live bacteria, and yeasts, which are considered as having possible health benefits (57).
Animal models have shown that Lactobacillus casei can reduce joint damage in mouse models of arthritis, while HLA-B27 transgenic rats have been shown to be less likely to have a relapse of colitis when given Lactobacillus rhamnosus GG (58). However, while one uncontrolled study of probiotics included in this review showed an improvement in BASDAI and VAS scores, two RCTs did not confirm this association (28,30,36). Several small trials of probiotics in RA patients have reported marginal, non-significant, beneficial effects on the RA disease activity (59)(60)(61).
A recent case-control study showed that breastfeeding, which influences microbiota, reduces the risk of the development of AS (62).
A core set of recommendations for patients with AS proposed by Feldtkeller et al. (19) advises a reduction in meat consumption and increase in the consumption of fish and vegetarian meals. In addition, it is recommended that sufficient vitamin D and calcium intake are important to prevent osteoporosis.
Conclusion
In this systematic review, we have determined, from a relatively small number of studies, that the evidence on the relationship between diet and AS is extremely limited, and we have highlighted important methodological weaknesses in the studies reviewed.
Many AS patients believe that aspects of diet affect their symptoms and/or have altered their diets in an attempt to improve symptoms. However, well-designed studies of dietary patterns and nutrients are required before any AS-specific recommendations can be made. Future prospective, population-based studies using validated dietary assessment methods should focus on dietary patterns that have been implicated in other inflammatory conditions, including cardiovascular disease, to determine whether diet plays a role in the susceptibility to AS and AS severity.
Clinical and research consequences
• Information on relationship between diet and AS is extremely limited;
• Evidence on a possible relationship between AS and diet is inconclusive;
• There is a need for large population-based epidemiological studies investigating the relationship between AS and diet.
Table 1. Description of studies. AS: ankylosing spondylitis.
Table 2. Summary of findings (types of food/diet).
Çınar et al. (40) reported no association between the consumption of salt and fast food and the BASDAI. Claudepierre et al. (26) reported that among the dietary factors, the frequency of meals taken out of home was the only variable related (negatively) to disease activity. The mean (SD) BASDAI score among those eating out of home twice per week or less was 5.1 (2.1) as compared to 4.1 (2.1) among those who ate out of home more than twice per week (p<0.001) (Table 2, Appendix 14).
[...] intake: 1 g/day), multivitamin and/or multimineral (n=12), and iron (n=4) supplements (Table 2, Appendix 15). Chatfield et al. (29) reported that 82.7% of AS patients used complementary and alternative medicine (CAM) and out of these patients, 16 [...]
Sundström et al. (27) reported a randomized trial of high- versus low-dose fish oil with 21 weeks of follow-up with participants blinded to the dose. At the end of the study, there was a statistically significant decrease in the BASDAI scores (p=0.038) in the high-dose group and a statistically signif- [...]
Appendix 2. Availability of dietary information in the studies included in the review.
Case-control study quality checklist:
• State specific objectives, including any pre-specified hypotheses: Yes
• Case population is clearly and fully described, including a case definition and inclusion and exclusion criteria: Yes
• Cases are clearly differentiated from controls: Yes
• Sample size is based on pre-study considerations of statistical power? No
• Cases and controls are taken from comparable populations: Yes
• Same exclusion criteria are used for cases as well as controls: Yes
• Participation rate for cases as well as controls is reported: No (cases only)
• Comparison is made between participants and non-participants to establish their similarities or differences: Yes
• Measures will have been taken to prevent knowledge of primary exposure influencing case ascertainment: Not mentioned
• Exposure status is measured in a standard, valid, and reliable way: Yes
• Main potential confounders are identified and taken into account in the design and analysis: Yes
[...] V.M., G.J.M.; Data Collection and/or Processing - J.H., H.M.A.; Analysis and/or Interpretation - T.V.M., G.J.M., K.G.; Literature Search - J.H., H.M.A.; Writing Manuscript - T.V.M., J.H., G.J.M., E.P., K.G.; Critical Review - T.V.M., H.M.A., E.P., K.G., J.H., G.J.M.
Appendix 3.
2018-04-27T03:18:16.319Z
2017-10-25T00:00:00.000
{ "year": 2018, "sha1": "6fe1eaec562edf80932611a904e620f513d15bae", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5152/eurjrheum.2017.16103", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "9c232b9f862dd5b628a1d0e68b9e49bdc4e015ca", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
158619485
pes2o/s2orc
v3-fos-license
Scarcity and Environmental Impact of Mineral Resources — An Old and Never-Ending Discussion A historical overview shows that mankind has feared the scarcity of mineral resources, especially metals, for many centuries. In the first half of the 20th century, this discussion was marked by the great military demand for raw materials, followed by the growing world population, increasing consumption and environmental awareness. From then on, there was less talk of regional shortages, but more discussion of a global scarcity or even a drying up of raw material sources worldwide. Although these forecasts are still controversially discussed today, the assessment of resource depletion has become an integral element of Life Cycle Assessments (LCA) or Life Cycle Impact Assessments (LCIA) of product systems. A number of methodological approaches are available for this purpose, which are presented and applied in a series of articles as part of a special issue of “Resources”. The fundamental question is also addressed, namely to what extent the assessment of resource depletion in the context of an environmental study such as LCA is appropriate. Introduction In October 2015, a workshop entitled "Mineral Resources in LCIA: Mapping the path forward" took place in London.Richard Herrington of the Natural History Museum London and Johannes Drielsma of Euromines organized a meeting in which geologists, mining experts, and environmental scientists came together to present their different views on how to handle mineral resources, but also to look for commonalities.A fruitful discussion arose and the idea came up to record some of the thoughts.These records lead to a special issue of the magazine "Resources", which was published online in 2016.At the same time, there were other activities on this topic, e.g., various studies of the German Federal Environmental Agency, which contributed to the magazine with an article and delivered another paper in 2017 [1].Meanwhile, there are other important publications on the subject, e.g., a statement and a joint report by the German Academy of Sciences of Technology, the National Academy of Sciences Leopoldina, and the Union of German Academies of Sciences [2,3], as well as a previously unreleased report by an international working group of experts of Life Cycle Assessment (LCA) [4].There is still no uniform opinion on this important issue, but there is an intensive debate involving many different disciplines.The topic of scarcity and supply reliability has been discussed for a very long time, again and again, which proves a look into history.Therefore, in this introduction, not only are the contributions of the special edition briefly presented, but a reference to the long history of the discussion is made, and some rare sources are quoted by way of example. 
Resource Scarcity in the Past
Two things make it so difficult to supply the industrial society with mineral raw materials and with metals in particular-they are limited on earth, and their extraction is associated with great effort and environmental pollution, both of which have concerned mankind for many centuries. In the 16th century, however, even a renewable resource threatened to become scarce-forests were cleared all over Europe because wood was the predominant raw material for mining and for the fires of melting furnaces. The Italian metallurgist Vannoccio Biringuccio (1480-1539) already warned in his "De la Pirotechnia" in 1540: "I rather believe that someday people can no longer use the fire for the melting furnaces due to the lack of ores, because they process so much of it" (Chapter 10 in [5]).
It was not sure if the metal ores were a non-renewable resource at all. From the Italian island of Elba, which was an important iron ore deposit at the time, the following was said: "With the quantities of ore that have been gained in so many centuries and still are gained, the mountains and islands would have to be completely leveled. Nevertheless, today more and better ore is produced than ever. Therefore, many believe that the ore, where it is mined, regenerates in the ground in a certain amount of time. If it is true, it would be something great, and it showed the great wisdom of nature and the great power of heaven" (Chapter 6 in [5]).
Unfortunately, it is not true, at least not in the time scale that is relevant to mankind. Thus, the search for the rare resource deposits has always been a great challenge to mankind. In this search, a variety of means has been employed, such as divining rods, which were already graphically demonstrated by the great German mining expert Georgius Agricola (1494-1555) (Figure 1). It was also Agricola who cited the critics of his time and described the environmental impact of mining and smelting: "By mining for ore, the fields are devastated. By clearing the forests and groves, the birds and other animal species are eradicated. The ores are washed; but by this washing, because it poisoned the streams and rivers, the fish are either expelled from them or killed" (1st book in [6]).
But Agricola, of course, defended mining, for it was already an important basis of civilization at that time. Previously, the use conflicts and the interventions in the landscape by mining were addressed by Paulus Niavis (1455-1517) [7].
The real scarcity of metal ores came during industrialization, when demand for metals sharply increased. Three hundred years ago, local shortages in England led to a nationwide ore trade [8]. The famous British economist Stanley Jevons posed the "coal question" in 1865. He saw limited coal supplies in face of rampant economic growth and advocated more moderate growth [9].
Critical and Strategic Metals
With modern times, the demand for metals increased immeasurably. In 1820, 1.65 million tons of pig iron was produced worldwide compared to 41 million tons in 1900, 250 million tons in 1960, and 1.2 billion tons today [10,11]. At the beginning of the twentieth century, a broad conservation movement was emerging in the U.S., focusing primarily on the limitations and protection of natural resources, including minerals, forests, soil, and fisheries, especially in the face of the rapidly growing US economy [12,13].
With the First World War, there was a growing concern in the United States that the supply of strategic raw materials could become difficult because international trade came partially to a halt [14]. An initial list of materials (Table 1), the supply of which could be of concern to the U.S., was published by C.K. Leith in 1917 for the War Industries Board [15]. The boundaries between military and industrial significance were still blurring. A second list of 42 materials was produced after World War I in 1921 by a committee led by General Harbord with a primarily military orientation [16]. The distinction between strategic and critical materials was first made in 1932. In 1939, the War Department compiled a list that included the term "essential material" [17]. The definitions were:
• Strategic Materials are those materials essential to the national defense for the supply of which in war dependence must be placed in whole, or in large part, on sources outside the continental limits of the United States, and for which strict conservation and distribution control measures will be necessary.
• Critical Materials are those materials essential in the national defense, the procurement problems of which in war, while difficult, are less serious than those of strategic materials, because they can be either domestically produced or obtained in more adequate quantities or have a lesser degree of essentiality, and for which some degree of conservation and distribution control will be necessary.
• Essential Materials are those materials essential to the national defense for which no procurement problems in war are anticipated, but whose status is such as to require constant surveillance because future developments may necessitate reclassification as strategic or critical.
In the 1930s, several of the U.S.-governmental institutions' other authors recommended the creation of strategic stocks of so-called scarce minerals [17][18][19]. In 1939, the first federal law authorizing stockpiling of strategic materials was enacted in the U.S. This stockpiling exists still today in the U.S. and is operated by the National Defense Stockpile (NDS). The total inventory of the NDS represented a market value of $1.15 billion in 2016 [20].
Thus, the concept of critical materials was introduced, as well as the academic attention to the scarcity of industrial or defense-related raw materials. It was always more about the topic of which raw materials were available for the U.S. economy (or military forces) and less about how many raw materials were available worldwide.
The scarcity and availability of resources was then repeatedly addressed, e.g., with the "Road of Depletion", which was presented in a hearing of the U.S. Senate 1949 by James Boyd, the director of the U.S. Department of Mining (Figure 2) [21]. At that time, it was already very clear that only 7% of the world's population, namely in the U.S., use 50% of the world's minerals and 70% of the world's oil. The U.S. president installed a Materials Policy Commission, which in 1952 submitted a major report titled "Resources for Freedom" [22]. The Cold War was also a contest for economic power and access to natural resources. In 1963, a large systematic empirical study by Barnett and Morse of historic trends for various natural resources between 1870 and 1958 eventually supported the hypothesis of a decreasing (rather than an increasing) scarcity [23]. They represented a critical but nevertheless optimistic picture of the resource question. They believed in technical progress and in raising efficiency.
This optimistic picture changed fundamentally in the 60s through wake-up calls such as Ehrlich's book, "The Population Bomb" [24], but especially with the Club of Rome study by Meadows, "The Limits to Growth" in 1972 [25]. Limited natural resources would be confronted with an almost rampant growth of world population and global economic output. Now, it was increasingly about the global development, and the careless handling of the resources was criticized. For example, a study by the U.S. National Academy of Sciences (NAS) asked for increased recycling in 1969: "The automobile is a prime target for improvement. The copper content of the average car should be reduced from about 1.4 percent to 0.4 percent or less of the total carcass and problems of metal recovery simplified" [26]. Recycling became a guiding theme of environmental policy in the following decades.
In 1975, the NAS prepared another report on "Mineral Resources and the Environment" [27]. Not only was the scarcity of raw materials-both energetic and non-energetic-addressed, but also the environmental impact in particular, which was demonstrated by the example of coal extraction and use. A "conservation ethic" was demanded, which could just as well have been formulated today: "Because of limits to natural resources as well as to means for alleviating these limits it is recommended that the Federal Government proclaim and deliberately pursue a national policy of conservation of material, energy and environmental resources, informing the public and the private sectors fully about needs and techniques for reducing energy consumption, the development of substitute materials, increasing the durability and maintainability of products, and reclamation and recycling" (page 37 in [27]).
The NAS pointed out that the stockpiling of materials in the past was mainly for military reasons. It was stated that "similar considerations can often be applied to the protection of the U.S. economy and the essential needs of the civilian sector" (page 34 in [27]). This had changed little until today.
The two updates to the Barnett & Morse study, "Scarcity and Growth Reconsidered" [28] and the study of Menzie, Singer, and DeYoung, Jr. in "Scarcity and Growth Revisited" [29] essentially confirmed the old results that there is no geological scarcity.Menzie et al. noted that the physical availability of resources in itself does not constitute a growth limit.However, the effort required to obtain them is growing, although many resources remain abound.It is obvious that supplies of mineral resources were first used most intensively in the areas closest to their use.As demand increased, exploration and eventually extraction across oceans in inhospitable climates, always deeper into land and water, occurred.Thus, costs, energy input, and the destruction of the environment associated with the extraction increased.Menzie et al. directed the attention to the fact that it is not the limited quantities of raw materials but the accompanying circumstances of their extraction that are the real problem. Nevertheless, the image of the ebbing raw material sources became apparent to the public.The study by Meadows, which has made popular the very descriptive concept of resource lifetime [25], has contributed significantly to this.The Meadows team introduced the "static reserve life index", which states how many years the known reserves of a given resource will last when the current annual consumption is assumed.With the exponential index, a continuously increasing consumption is expected, which again significantly reduces the time the reserves are available.It hit a nerve with the public and was quoted from time to time, but it was also discussed controversially.Yet, Gerling and Wellmer found out that raw material lifetimes did not decrease over the decades, but mostly stayed the same or even increased [30].The indicator describes the economic effects of exploration in the mining industry rather than a geological scarcity. The discussion of the past decades was also marked by reports from the U.S. For its first report in 1988, the U.S. National Critical Materials Council, founded by president Ronald Reagan, selected seven key commodities from three basic categories into which strategic and critical materials were broadly divided [31].These included: (1) critical alloys-cobalt, chromium, and ferrosilicon; (2) potential high growth security materials-germanium and titanium; and (3) high-volume materials-aluminum and copper.Again, the strategic importance of supply and demand, the current status of the so-called National Defense Stockpile, and the global situation of the import dependence and vulnerability of the U.S. economy were discussed. 
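The static and exponential lifetime indices discussed above can be written down compactly. The Python sketch below uses the textbook formulation (static index s = R/C for reserves R and current annual consumption C; exponential index ln(r·s + 1)/r for a constant consumption growth rate r); the article does not reproduce the exact formulas, so this is an illustration of the concept with made-up numbers, not Meadows' own figures.

```python
import math

def static_index(reserves, annual_consumption):
    """Years the known reserves last at constant current consumption (R / C)."""
    return reserves / annual_consumption

def exponential_index(reserves, annual_consumption, growth_rate):
    """Years the reserves last if consumption grows exponentially at `growth_rate`
    (e.g. 0.03 for 3 % per year); reduces to the static index as growth_rate -> 0."""
    s = static_index(reserves, annual_consumption)
    if growth_rate == 0:
        return s
    return math.log(growth_rate * s + 1.0) / growth_rate

# Illustrative, made-up numbers: 700 units of reserves, 10 units consumed per year.
print(static_index(700, 10))             # 70.0 years
print(exponential_index(700, 10, 0.03))  # about 37.7 years
```

The comparison makes the point quoted from the Meadows study concrete: even modest growth in consumption cuts the apparent lifetime of a fixed reserve roughly in half in this example.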
Recently, various incidents have come together; countries in Asia, South America, and Africa are claiming an ever-increasing share of resources to build their economies and to supply their populations.The global commodity prices rose rapidly in the first decade of the 21st century, causing a public "resource shock".At the same time, technical innovation has become increasingly dependent on the quantity and variety of raw materials.Many high-tech products have become indispensable in today's life, but they cannot be produced without certain raw materials.A possible scarcity of raw materials endangers not only the military-strategic position of nations, but the way-of-life of the previously wealthy countries and their primacy in the technological development of new products.In addition, there is a global ecological conscience that questions the social and ecological consequences of the use of resources. These thoughts have been reflected in the report "Minerals, Critical Minerals, and the U.S. Economy" of the Committee on Critical Mineral Impacts of the U.S. Economy of the National Research Council, which was published in 2008 [32].The report developed the current method of the semi-qualitative description of the criticality of raw materials with a multi-dimensional evaluation matrix.The impact of supply restriction is plotted against the supply risk as a two-dimensional graph and determined individually for the various raw materials. An important boost provided the work of the International Resource Panel of United Nations Environmental Programme (UNEP), which published several reports on the subject of metal resources between 2009 and 2014 [33] and in particular called for increased efforts to recycle.In 2016, the U.S. National Science and Technology Council (NSTC) published a report that provided a systematic methodology for screening potentially critical minerals [34,35].Another detailed report was issued by the U.S. Geological Survey in 2017 [36].In Europe, a corresponding list was issued by the European Commission.The first list of "Critical Raw Materials" was prepared in 2010; updates were made in 2014 and 2017 [37][38][39].Most recently, the 2017 assessment included a total of 78 individual materials. The disadvantage of these presentations is that they are only short to medium term aligned, and thus the long-term supply situation is not taken into account for the obvious and above mentioned reasons.Furthermore, it is purely economically oriented and ecological aspects are largely missing. It was Thomas Graedel and his team who developed a three-dimensional criticality system in which the environmental impact has its own dimension [40].A similar approach was recently published by the German Federal Environment Agency [41].It is currently being applied to a variety of chemical elements.Results can be expected for 2019.However, it is already evident that the environmental impact associated with the extraction and processing of raw materials can hardly be described with cardinal scales, as is known from LCA.For this purpose, too many site-specific qualitative aspects, at the mining sites for example, have to be considered.This makes the implementation in the framework of LCA difficult. 
What can we learn from the past?The scarcity of resources is not new.Concern for the drying up of raw materials is probably as old as mankind itself.The striking of raw materials has always been associated with labor and effort.The estimation of scarcity in each epoch was always done against the background of the respective knowledge available, but the interests connected with the raw materials were also very decisive.It becomes very clear that, especially in the past 100 years, the military interests played an important role and still do today.Many high-tech products that require special raw materials are indispensable to the military.They have a strategic meaning.In the public and scientific discussion, however, it is argued as increasingly "civil" and linked to the material and energy-intensive "way-of-life".An important role is played by the LCA of products and services, which quantifies the impact on the environment.The use of resources is an integral part of the analysis and evaluation. Abiotic Resources in Life Cycle Assessment (LCA) When an LCA is carried out for products or services, it is now standard practice to include and quantify the use of natural resources.The Life Cycle Inventory still does this on a physical and quantitative basis, i.e., the amount of required raw materials and the withdrawals from nature is quantified.For example, the use of water as a natural resource is included.It has also become customary to consider the required land use.However, the pure quantities (m 3 of water or km 2 of area) are not sufficient to describe the environmental quality of the resource input, yet this is needed in the following step of an LCA, the Life Cycle Impact Assessment (LCIA), where the ecological relevance of the energy and material flows is quantified.To get from the amount of a substance to the effect of the substance, so-called characterization factors are used in the LCA.They are a simplification for the LCA calculation, and all the knowledge about the ecological effect of a substance is hidden behind them.Their investigation is therefore always in the focus of the interests of many authors from the LCA community.This task also arises for the mineral resources taken from the lithosphere.The input of metals that originate from nature and enter the technosphere is one thing, but what is the ecological relevance of the volume flows of iron, copper, tantalum, indium, gold, etc.? The energy demands, the wastewater, and the emissions associated with the extraction and processing of raw materials are already included in an LCA.These environmental aspects of mining and metalworking are automatically considered; thus double counting must be avoided.Rather, it is about the question of how the extraction of raw materials from the lithosphere "in itself" can be evaluated. 
In the field of Life Cycle Assessments, the safeguard objects and the so called "Areas of Protection" (AoP) have been discussed for many years [42][43][44]. It is not only interesting to know what impact a human action has on the climate, the acid rain, or the eutrophication, but what that impact means for the safeguard objects, especially for human health and for the integrity of nature, which is often circumscribed with the preservation of biodiversity. In addition, there is a third safeguard object, namely the preservation of natural resources [45][46][47]. Strictly speaking, this is not an ecological aspect, but it is more subject to the idea of sustainability. The consumption of a limited natural resource eventually leads to its depletion. What is not kept in the cycle of nature disappears at some point and is no longer available for future generations, which would not meet the idea of sustainability.
However, does the mining of minerals and possibly the depletion of metals really belong to an ecological analysis like the Life Cycle Assessment? Are these not rather socio-economic aspects that cannot be adequately illustrated with the methodological instruments of the LCA? This issue has been the subject of much controversy for many years. In an attempt to hierarchize the safeguard objects in the life cycle assessment, Hofstetter and Scheringer (1997) based the LCA on human welfare and divided it into the social welfare of today's generations and the material welfare of future generations [48]. They identified additional safeguard objects related to resource supplies, human health, biodiversity, and ecological health (Figure 3).
The safeguard object "resources" could be interpreted in such a way that a reduction of the resource supply or the lower quality of future mineral deposits restricts the freedom of action of coming generations and imposes higher efforts on them. Especially with regard to sustainability and intergenerational justice, the protection of mineral resources would then be worth considering. It is controversial if this should be considered in an environmental analysis such as the LCA. Social issues, for example, are excluded from the LCA and treated with their own instrument, the "Social LCA". If there were a suitable instrument to assess economic sustainability, such as influencing the welfare of future generations through current activities, the resources would have to be considered in this assessment. However, this instrument does not exist. This can be seen as a justification for today's explicit inclusion of resources as safeguard objects by the LCA methodology as a stopgap for the inability of economics, so to speak.
The basic aspect of whether or not resources should be considered as safeguard objects is hardly questioned by the LCA community today [4].One may criticize this because it is based on the claim to model nearly everything with the LCA that concerns the metabolism of the technosphere and its exchange with the biosphere.There are far more relevant questions to be asked, such as whether the simple linear LCA approach can adequately address the dynamic effects of technological innovation or market developments.Another difficulty is finding suitable indicators for a quantitative assessment within the LCA.Finally, the amount of remaining inventory or usage restriction for future generations would have to be quantified in some way.However, this is largely unknown today, just as in the past (see Chapter 1) it was unknown what resources would be available to us in the 20th or 21st century. In summary, three questions play a role at the interface between the resource topic and the Life Cycle Assessment: 1. How scarce are the mineral resources, and in particular the metals, and do we really know the amount of mineral resources left in the earth? 2. Do we have to understand the mineral resources in nature as protective goods that, in addition to health and biodiversity, have to be protected and preserved for reasons of sustainability?3. Which environmental impacts occur through the mining of resources and the extraction of metals, and are they adequately reflected in the method of the Life Cycle Assessment? Contributions to the Main Topic The discussion at the 2015 workshop in London triggered this discussion and is well documented by some articles from participants.There are also additional contributions that round off the topic altogether.Drielsma et al. [49] gave an overview of the discussion in London that included the points of contact of the various scientific disciplines, the different perspectives on the subject, and the difficulties of definition that sometimes complicate scientific exchange. Meinert et al. 
[50] advanced a point of view that can be found among many geologists.They described how mineral resources are explored and discovered, which leads to predictions about known deposits and which definitions are used for them.They used the example of copper because very different opinions about the scarcity of this metal exist and it has been mined in large quantities for a long time.In this example, they tried to show that the lifetime concept-or the peak concept-is based on wrong assumptions and therefore leads to misinterpretations.According to their statements, by 2050, only half of the previously known and already economically degradable stocks will be needed, and the undiscovered copper deposits are not yet included.By estimating these deposits, their optimistic prognosis was that primary copper will still be available for many generations.The authors argued that less concern should be paid to the depletion of primary resources but rather what happens to resources after their extraction and how they are used with regard to dissipation.On this point, there is certainly a broad consensus with other experts who have a more "resource-pessimistic" attitude.But the most important point of their contribution may be to suggest that society is investing too little in education, research, and development to ensure the supply of raw materials for future generations.The mining sector in particular needs stronger state support.When more environmentally compatible and efficient mining methods are used, there will certainly be broad consensus on this point. The article by Oers and Guinée [51] was particularly special because it was a kind of update and reflection on an approach that is widely used in the LCA community.In 1995, Guinée and Heijungs proposed characterization factors for abiotic depletion potential (ADP), which were widely used in the application of the LCA [52].Again, the use of terms played an important role.Do we talk about depletion, scarcity, or criticality?Should the ultimate reserves, the reserve base, or the crustal content be used as a basis for comparisons between different metals?The authors emphasized that there is probably more than a dilution problem, namely when resources are released into the environment through emissions or wastes and are irretrievably lost.However, they also pointed out that it is difficult to define the correct method because the parameters to be chosen depend on the question and cannot be empirically verified in practice. Calvo et al. [53] drew attention to an interesting aspect-what does the mineral capital of various countries look like?Through the mining and export of minerals, this capital is changing.What exergy would be needed to rebuild this lost mineral capital?An average concentration of minerals in the earth's crust is assumed.The authors created a kind of a mineral balance based on exergy replication costs.Evaluating the exergy with prices (for electricity or coal) leads to an economic statement.Using the examples of Colombia and Spain, the authors showed that mining and exporting minerals produces a lower gross domestic product than one would have to pay for exergy replacement costs to rebuild the mineral capital.The figures could also be used to produce net balances between countries.This issue has a high economic and developmental importance due to the unequal distribution of raw materials among countries and the question of fair pay for raw materials supplies. Vieira et al. 
Vieira et al. [54] calculated the surplus costs that current resource extraction imposes on future situations and used these as a basis to calculate characterization factors for 12 metals and the platinum group metals. In the tradition of many environmental scientists, they assumed the absolute finiteness of mineral resources. They further assumed that with increasing mine production the ore grades decrease, so that a grade-tonnage relationship can be set up. They derived a function for the operating costs per unit of metal extracted, which depends on the amount of metal previously extracted and on the total amount of a metal that can be mined on earth or has already been mined. The interesting thing about the approach is that the choice of this last value has only a limited effect on the result. The authors pointed out that they had not yet considered many cost drivers and therefore need to gather more data. The focus of the article by Henßler et al. [55] was the application of a method called ESSENZ, which was developed at the Technical University of Berlin to evaluate many aspects of the use of resources. These aspects included physical availability as well as socioeconomic availability and environmental impacts. A total of 18 categories were taken into account, some examples being abiotic resource depletion, the political stability of producing countries, and the impact of summer smog. The authors presented a case study in which ESSENZ was applied to the comparison of a conventional car and an e-car from Mercedes. The method provides highly differentiated results for the different categories. In particular, it allows the comparison of tradeoffs that may occur when environmental impacts are reduced but the use of resources increases at the same time. In their article, Martin-Gamboa and Iribarren [56] examined and compared the performance of wind turbines, taking into account the use of raw materials. The starting point was, of course, data from the LCA, but they used a method that goes far beyond the pure LCA approach. They used emergy as an indicator, which is the solar energy that is or was ultimately required to manufacture a product by extracting resources from the geo-biosphere. This approach is similar to corresponding exergy approaches in resource use assessment. This is connected, as they wrote, with a departure from the purely anthropocentric perspective. Martin-Gamboa and Iribarren went even further and used data envelopment analysis (DEA) for time-dependent efficiency measurements of various wind farms. Alvarenga et al. [57] analyzed, in a very extensive study, the different methods with which abiotic raw materials are evaluated in LCA. They also dealt with the question of the area of protection and which impact assessments are useful. In total, they considered 19 different Life Cycle Impact Assessment methods, some of which treat the topic of resources very differently. They tested the methods using a case study comparing fossil and bioethanol-based ethylene production.
Finnveden et al. [58] made a very important contribution by considering the crucial question of what is actually depleted or consumed. Their answer was that it is neither matter nor energy, but usable energy, i.e., exergy. The big advantage of this is that the evaluation of matter and energy can be integrated into a unified concept. However, this thermodynamic approach requires extensive calculations. The question that remains open is what practical relevance the calculation of the rather theoretical exergy values has. This must be discussed further in future case studies. Müller et al. [1], in a very committed, detailed, and far-reaching article, dealt with the fundamental questions of which goals one wants to achieve with dematerialization and which indicators are meaningful for this purpose. They dealt with normative aspects in the discussion, cited examples from German and European resource efficiency policies, and came to the conclusion that mass-based indicators are rather unsuitable to describe the desired area of protection (AoP) of environmental protection. At the same time, however, they also advocated an environmental policy that does not stop at national borders but takes a life cycle perspective. Outlook The presented articles show that the question of the depletion of resources and their assessment in the LCA is by no means conclusively answered. There are many opinions and different approaches. Above all, however, it is an interdisciplinary question that has a lot to do with methods and experiences from environmental sciences, but most significantly with geology, resource economics, and mining. The positions on the depletion question depend very much on the knowledge of different specialists, but also on the normative positions of the various disciplines. It is therefore quite understandable that representatives from the environmental sciences assume that resources will soon be exhausted. Representatives from the mining sector, on the other hand, must start from the inexhaustibility of resources. This is the basis of their business model. Nevertheless, recommendations for politics and for the economy are necessary, such as which criteria are used to assess recycling strategies or a circular economy. If the depletion of resources is an independent and urgent issue, a circular economy would have absolute priority. In that case, it might have to be weighed against other objectives, e.g., climate protection. Conflicting goals could arise here, since the achievement of the 1.5 °C goal requires an enormous expansion and restructuring of energy supply structures worldwide. This would require a considerable use of resources that could only be covered by additional primary resources. If, however, the problem of resource depletion is postponed or perceived only as an exergy or entropy problem, then the energy balance and the environmental impacts of resource supply come to the fore, both from mining and from recycling. These can be dealt with very well in today's LCA framework. This is why the discussion on this topic is more important than ever. However, this time it must lead to a result and must not get stuck in different schools of thought as it did 40 years ago. Above all, the exchanges between the various scientific disciplines and applications in industry are important. Therefore, the meeting in London, which resulted in most of the articles presented in this paper, was an important event that urgently needs to be repeated on an interdisciplinary and open platform.
Figure 1. Searching the lodes with the divining rod in the 16th century [6].
Figure 2. James Boyd: "The chart indicates the number of years of normal requirements our present known reserves of critical materials will supply" [21].
Figure 3. Proposal for the hierarchy of safeguard objects in the Life Cycle Assessment according to [48].
Table 1. The first lists of strategic and critical materials in the U.S.
2019-05-20T13:06:52.403Z
2018-12-21T00:00:00.000
{ "year": 2018, "sha1": "f690320ccf8c880aa58361bfb9e40d509c75cdc8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9276/8/1/2/pdf?version=1547089512", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c4094cd0ed1e6a03d6800808b488d65b6e2fb322", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
259089742
pes2o/s2orc
v3-fos-license
RhoA/ROCK1 regulates the mitochondrial dysfunction through Drp1 induced by Porphyromonas gingivalis in endothelial cells Abstract Porphyromonas gingivalis (P. gingivalis) is a pivotal pathogen of periodontitis. Our previous studies have confirmed that mitochondrial dysfunction in the endothelial cells caused by P. gingivalis was dependent on Drp1, which may be the mechanism of P. gingivalis causing endothelial dysfunction. Nevertheless, the signalling pathway that induces the mitochondrial dysfunction remains unclear. The purpose of this study was to investigate the role of the RhoA/ROCK1 pathway in regulating mitochondrial dysfunction caused by P. gingivalis. P. gingivalis was used to infect EA.hy926 cells (endothelial cells). The expression and activation of RhoA and ROCK1 were assessed by western blotting and pull-down assay. The morphology of mitochondria was observed by mitochondrial staining and transmission electron microscopy. Mitochondrial function was measured by ATP content, mitochondrial DNA and mitochondrial permeability transition pore openness. The phosphorylation and translocation of Drp1 were evaluated using western blotting and immunofluorescence. The role of the RhoA/ROCK1 pathway in mitochondrial dysfunction was investigated using RhoA and ROCK1 inhibitors. The activation of the RhoA/ROCK1 pathway and mitochondrial dysfunction were observed in P. gingivalis-infected endothelial cells. Furthermore, RhoA or ROCK1 inhibitors partly prevented mitochondrial dysfunction caused by P. gingivalis. The increased phosphorylation and mitochondrial translocation of Drp1 induced by P. gingivalis were both blocked by RhoA and ROCK1 inhibitors. In conclusion, we demonstrate that the RhoA/ROCK1 pathway was involved in mitochondrial dysfunction caused by P. gingivalis by regulating the phosphorylation and mitochondrial translocation of Drp1. Our research illuminated a possible new mechanism by which P. gingivalis promotes endothelial dysfunction. | INTRODUCTION The primary pathogen of periodontitis, 1 Porphyromonas gingivalis (P. gingivalis), is also intimately linked to the progress of other chronic inflammatory diseases in the body, including atherosclerosis, 2 rheumatoid arthritis 3 and Alzheimer's disease. 4 In human clinical trials and mouse models, P. gingivalis is often detected in atherosclerotic plaques. 5,6 Generally, P. gingivalis has been identified as an independent risk factor for atherosclerosis in some studies. [7][8][9] Mitochondria are highly dynamic organelles that continuously perform coordinated fusion and fission movements. Alterations in mitochondrial structure and functions will result from the imbalance between fission and fusion. The previous study of our group found that P. gingivalis infection leads to an increase in endothelial mitochondrial fission. In addition, it was determined that Drp1 mediates P. gingivalis-induced mitochondrial dysfunction, but the specific mechanism is unclear. 10 Mitochondrial dysfunction is currently recognized as an essential factor of atherosclerosis. 11,12 Lu et al. found that platelet-derived growth factor type BB can induce phenotypic switching, proliferation, migration and neointima formation in vascular smooth muscle cells (VSMCs) by activating the ROS/NFκB/mTOR/P70S6K signalling pathway, which is one of the pathological processes of atherosclerosis. 13 Yu et al. found that mitochondrial DNA damage accelerates the progression of atherosclerosis using human aortic specimens and mouse models of atherosclerosis.
14 Rho family proteins are small G proteins with GTPase activity, which are widely present in eukaryotic tissues. RhoA (Ras homologous gene family member A) is one of the most critical Rho family members. RhoA serves as a molecular switch that cyclically regulates intracellular signalling between an inactive GDP-bound conformation and an active GTP-bound conformation. 15 Rho-kinase 1 (Rho-related coiled-coil containing protein kinase, ROCK1) is the direct downstream and primary effector substrate of RhoA. 16 Phosphorylation of myosin phosphatase targeting subunit 1 (MYPT1), one of the important physiological substrates of ROCK1, facilitates interaction with and phosphorylation by the catalytic domain of ROCK1. 17 RhoA/ROCK signalling mediates the progression of cardiovascular diseases by regulating biological processes such as inflammation, differentiation and apoptosis. In addition, some research has suggested that the RhoA/ROCK1 pathway regulates mitochondrial fragmentation through Drp1, a large GTPase, 18 which is activated and transported to the surface of mitochondria to regulate mitochondrial fission. Shen et al. found that the RhoA/ROCK1 pathway was engaged in phosphorylating Drp1 at serine 616 in cardiomyocytes pretreated with TNFα, which promoted mitochondrial fragmentation. 19 Another report showed that in LPS-pretreated mice, a ROCK1 inhibitor could improve mitochondrial function by restricting excessive mitochondrial fission through inhibiting Drp1(Ser616) phosphorylation. 20 Although we have learned that mitochondrial dysfunction caused by P. gingivalis infection depends on Drp1, the signalling pathway that regulates mitochondrial fragmentation and dysfunction in P. gingivalis-infected endothelial cells remains elusive. Here, the role of the RhoA/ROCK1 pathway in the mitochondrial dysfunction induced by P. gingivalis was explored. Our findings would provide new clues to understand how P. gingivalis facilitates the formation of atherosclerotic lesions. Our studies were carried out on cells in passages 4 to 6. CCG-1423 (APExBIO) selectively inhibits SRF-mediated transcription downstream of Rho signalling pathway activation. 21 Y-27632 (AbMole Bioscience) is a pharmacologically specific inhibitor of ROCK. 22 They were used to determine the regulatory role of the RhoA/ROCK1 pathway in P. gingivalis-infected cells. Control cells were those exposed to DMSO only. | Bacterial culture Porphyromonas gingivalis ATCC 33277 was inoculated in brain heart infusion broth containing 5% defibrinated sheep blood, 0.1% vitamin K1 and 0.5% hemin. The bacteria were grown in an anaerobic environment with 80% N2, 10% CO2 and 10% H2. The cells were treated with P. gingivalis at different time points with a multiplicity of infection (MOI) of 100 in the following experiments. Cells that grew under the same conditions without infection were considered as a control. | Determination and quantification of the opening of mPTP by fluorescence staining and flow cytometry The openness of mPTP in EA.hy926 cells was tested using the Mitochondrial Permeability Transition Pore Detection Kit. Flow cytometry (FACS, Becton-Dickinson) was used to gather fluorescence intensity, which was then analysed using FlowJo 10 analytic software. | Determination of ATP contents ATP Assay Kits (Beyotime) were utilized to determine the ATP production in the whole lysate of EA.hy926 cells. Cellular ATP levels of every group were computed according to the standard curves and then normalized to the control.
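As an illustration of the ATP quantification step just described, the sketch below shows how luminescence readings might be converted via a standard curve and normalized to the control group. The standard concentrations, the sample readings and the assumption of a linear luciferase response are hypothetical; they are not taken from the kit's manual or from the original data.

import numpy as np

# Hypothetical luciferase standard curve: luminescence (RLU) vs. ATP concentration.
std_atp = np.array([0.01, 0.1, 1.0, 10.0])        # known standards (nmol/mL)
std_rlu = np.array([1.2e2, 1.1e3, 1.2e4, 1.2e5])  # measured signal (made up)

# Assumed linear response over the working range.
slope, intercept = np.polyfit(std_atp, std_rlu, 1)

def rlu_to_atp(rlu):
    """Convert a luminescence reading to an ATP concentration via the fit."""
    return (rlu - intercept) / slope

samples = {"control": [9.5e3, 1.0e4, 9.8e3], "P.g. 2 h": [5.1e3, 5.4e3, 4.9e3]}
atp = {k: np.mean([rlu_to_atp(v) for v in vals]) for k, vals in samples.items()}

# Normalize every group to the control (control = 100%), as stated in the Methods.
norm = {k: 100 * v / atp["control"] for k, v in atp.items()}
print(norm)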
| Western blotting The protein concentration of cells was measured by a BCA assay. SDS-polyacrylamide gel electrophoresis was used to separate equal amounts of protein, which were then transferred to a polyvinylidene fluoride membrane with GAPDH (1:3000; Affinity Biosciences) as an internal control. After 5% skim milk blocking, specific primary antibodies were used to detect the target proteins, including rab- | RhoA activity assay The RhoA activation kit (STA-403A; Cell Biolabs) was employed to evaluate whether RhoA was activated. The GTP-bound form of RhoA was pulled down by incubating equal amounts of protein with a predetermined amount of GST-rhotekin-RBD on a rotator at 4°C for 1 h. The beads were then centrifuged, washed, resuspended in 20 μL loading buffer and boiled. Western blotting was carried out to determine the amount of pulled-down GTP-bound RhoA using an anti-RhoA antibody. | Observation of mitochondrial morphology by transmission electron microscopy (TEM) Cells were fixed using 2.5% glutaraldehyde for 24 h, followed by 1% osmium tetroxide for 2 h at room temperature. Subsequently, the sample was dehydrated, immersed, embedded, ultrathin sectioned and stained with lead citrate and uranyl acetate. Afterwards, the TEM (H7650; Hitachi) was used to observe the mitochondrial morphology. | Quantitative analyses of mitochondrial networking The mitochondrial network refers to the formation of a highly interconnected network of tubular mitochondria. Cells were plated on the confocal petri dish for 24 h before being stained with MitoTracker Red (Solarbio) at 37°C. Then, confocal laser scanning microscopes (CLSM; GeneTimes) were used to observe the mitochondrial network. Image-Pro Plus 6.0 software was used to spatially process the obtained image with the 'top hat' filter to obtain a binary image free of artefacts. Quantitative analyses of mitochondria were performed to obtain aspect ratio (AR: major axis/minor axis), shape factor value (FF: perimeter²/(4π·area)) and mitochondrial length. 23 Smaller values indicated an increase in mitochondrial fragmentation, while higher values indicated that the mitochondria had become longer and more complex in shape. | Statistical analysis The mean ± SD was used to summarize the results of three separate tests. In SPSS 17.0 software, one-way ANOVA and the SNK test for multiple group comparisons were employed for statistical analysis. A P-value < 0.05 indicated a statistically significant difference. | Mitochondrial dysfunction induced by P. gingivalis According to our earlier research, P. gingivalis caused an accumulation of mtROS, a depolarization of the mitochondrial membrane potential (MMP), and a drop in ATP levels. 10 In the current exploration, we continued to investigate how P. gingivalis affects mPTP opening and mtDNA copy number by Calcein AM staining and RT-PCR, respectively. The images showed that the fluorescence intensity of mPTP declined over time in infected cells (Figure 1A). Flow cytometry analysis further corroborated these findings. As shown in Figure 1B, when mPTP fluorescence intensity was compared with controls, it was substantially reduced by 57.19%, 75.11% and 80.63% (p < 0.05) at 2, 12 and 24 h following P. gingivalis attack, respectively. Therefore, the conclusion that P. gingivalis attack conspicuously induced mPTP opening was confirmed. RT-PCR results showed that a significantly reduced mtDNA copy number was present in cells exposed to P. gingivalis (Figure 1C).
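Relative mtDNA copy numbers of this kind are typically derived from real-time PCR Ct values; one common scheme is the 2^(-ddCt) calculation sketched below. The choice of mitochondrial and nuclear reference genes and all Ct values here are assumptions made only for illustration, since the exact quantification protocol is not described in this excerpt.

def relative_copy_number(ct_mt, ct_nuc, ct_mt_ctrl, ct_nuc_ctrl):
    # ddCt = (Ct_mt - Ct_nuclear)_sample - (Ct_mt - Ct_nuclear)_control
    d_ct_sample = ct_mt - ct_nuc
    d_ct_control = ct_mt_ctrl - ct_nuc_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Invented Ct values; hypothetical mitochondrial vs. nuclear reference genes.
control = relative_copy_number(15.0, 22.0, 15.0, 22.0)      # 1.0 by definition
infected_6h = relative_copy_number(15.6, 22.1, 15.0, 22.0)

print(f"control: {control:.2f}")
print(f"6 h infection: {infected_6h:.2f} "
      f"({100 * (1 - infected_6h):.1f}% reduction vs. control)")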
From 2 h of infection onward, P. gingivalis had reduced the mtDNA copy number, and the reduction was most significant at the 6-h time point (30.27% reduction). After 12 and 24 h of infection, the mtDNA copy number rebounded slightly, but it was still lower than in the control group, decreased by 28.42% and 24.94% (p < 0.05), respectively. | RhoA activity and RhoA/ROCK1 pathway were activated by P. gingivalis EA.hy926 cells, which were shown by western blotting to express RhoA and ROCK1, were employed to examine the impact of P. gingivalis on the RhoA/ROCK1 pathway. The pull-down assay was utilized to evaluate the activation of RhoA following P. gingivalis exposure. It was revealed that the levels of RhoA and RhoA-GTP increased significantly and reached a peak 6 h following P. gingivalis challenge. | The RhoA/ROCK pathway was engaged in P. gingivalis-induced mitochondrial fragmentation To survey and evaluate the regulatory function of RhoA/ROCK1 signalling in mitochondrial morphology, cells were observed by TEM. We observed that pretreatment with CCG-1423 or Y27632 could inhibit the endothelial mitochondrial swelling and vacuole-like changes caused by P. gingivalis, and most of the mitochondria of the cells returned to a normal, rod-like shape (Figure 3A). Confocal imaging also indicated the inhibition of the fragmentation and punctate changes of mitochondria by pretreatment with CCG-1423 or Y27632. | Mitochondrial dysfunction induced by P. gingivalis was dependent on RhoA/ROCK pathway The CLSM images in Figure 4A Figure 4B, C, p < 0.05) compared with that in the infected group. Additionally, it was found that CCG-1423 and Y27632 had similar effects on ATP contents. Figure 3. Effect of RhoA/ROCK pathway inhibition on Porphyromonas gingivalis (P. gingivalis)-induced mitochondrial fragments. The cells were pretreated with DMSO, 10 μM CCG-1423 and 10 μM Y27632 for 30 min, respectively, and then exposed to P. gingivalis for 6 h. Cells pretreated with DMSO only and then cultured in the medium were set as a control. (A) Transmission electron microscopy was used to observe the mitochondrial morphology. Magnification 30,000; Scale bars: 1 μm. Arrowhead: mitochondria. (B) Before observation with a confocal laser scanning microscope, MitoTracker Red CMXR was used to label the mitochondrial network. Magnification 2400; Scale bars: 20 μm. (C-E) Summary data of B. Mitochondrial length, aspect ratio and form factor were calculated to estimate the mitochondrial size. Data were presented as the mean ± SD of three independent determinations. *p < 0.05. Compared with the control group, ATP contents were decreased by 46.62% (p < 0.05) 2 h after infection. However, in comparison with the infected group, pretreatment with CCG-1423 and Y27632 increased the ATP contents by 55.17% and 61.22%, respectively, according to Figure 4C, D (p < 0.05). Figure 4. Effect of RhoA/ROCK pathway inhibition on mitochondrial dysfunction induced by Porphyromonas gingivalis (P. gingivalis). The cells were pretreated with DMSO, 10 μM CCG-1423 and 10 μM Y27632, respectively, for 30 min before being infected by P. gingivalis (2 h for ATP, 24 h for mPTP and 6 h for mtDNA). The cells pretreated with DMSO only were considered as a control. (A) A confocal laser microscope was used to observe the openness of mPTP. Magnification 400; Scale bars: 50 μm. (B) The openness of mPTP was analysed quantitatively using flow cytometry. (C) mtDNA copy number was determined using real-time PCR. (D) ATP contents were quantified. The data were represented as a change relative to the control group, which had been designated as 100%. Results were presented as the mean ± SD of three independent experiments. *p < 0.05.
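The group comparisons behind the asterisks in these figure legends correspond to the one-way ANOVA with SNK post hoc testing described in the statistical-analysis section. A minimal sketch of such a comparison is given below; the ATP values are invented, and Tukey's HSD is used only as a readily available stand-in for the SNK procedure, which the authors ran in SPSS.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical ATP readings (% of control) from three independent experiments.
control = np.array([100.0, 98.2, 101.5])
infected = np.array([52.0, 55.1, 50.3])
ccg = np.array([81.4, 85.0, 83.2])

f_stat, p_value = f_oneway(control, infected, ccg)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post hoc comparison (Tukey's HSD as an accessible substitute for SNK).
values = np.concatenate([control, infected, ccg])
groups = ["control"] * 3 + ["P.g."] * 3 + ["P.g.+CCG-1423"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))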
These results illustrated that RhoA and ROCK1 inhibitors effectively prevented the enhanced mitochondrial permeability, bioenergy deficiency and mitochondrial loss caused by P. gingivalis. | DISCUSSION Atherosclerosis is widely known to be the pathological basis of many cardiovascular diseases. Multiple studies have demonstrated an association between the activation of the RhoA/ROCK1 pathway and the onset and progression of atherosclerosis. It has been reported that ROCK mRNA is increased in arteriosclerotic arterial lesions of animals and humans. 24,25 The elevated level of ROCK1 inhibits the activity of eNOS, an effector substrate of ROCK, and lowers NO levels, resulting in impaired endothelial function and vasodilation, thereby accelerating the progression of atherosclerosis. 26,27 In addition, the RhoA/ROCK pathway plays a vital role in the formation of plaques and accelerates the process of atherosclerosis by mediating the differentiation of monocytes into macrophages, which then secrete a series of inflammatory mediators. 28 Generally, RhoA/ROCK1 pathway activation leads to the progression of cardiovascular diseases through inflammation, 29 endothelial dysfunction, 30 VSMC contraction, 31 proliferation and migration. 28 Nonetheless, the exact role of RhoA/ROCK in endothelial dysfunction induced by P. gingivalis is still unknown and needs further investigation. Mitochondria serve as the cell's energy factory and are essential organelles for cell survival. They have received extensive attention from many scholars. Recent research has found that the imbalance of mitochondrial division and fusion leads to changes in mitochondrial dynamics, which is closely associated with atherosclerosis onset and progression. 32 In the early stage of atherosclerosis, there will be changes in cellular inflammation, oxidative stress, endothelial dysfunction and VSMC proliferation. Interestingly, mitochondrial dysfunction is thought to be related to these changes. 33 The excessive production of ROS caused by mitochondrial dysfunction will oxidize cellular proteins, lipids and DNA. 34 In the mouse model, mitochondrial DNA damaged by the excessive accumulation of ROS will cause endothelial cell dysfunction and the proliferation of VSMCs, thereby accelerating the progression of atherosclerosis. 35 It is also reported that mtDNA damage in leukocytes is related to vulnerable plaques in coronary arteries. By assessing mtDNA in leukocytes and plaques of coronary patients, high-risk plaques were associated with leukocyte mtDNA damage, and mtDNA damage was greater in atherosclerotic plaques than in normal arteries. 36 As the RhoA/ROCK1 pathway is pivotal in atherosclerosis progression, and mitochondrial dysfunction is a recognized pathogenic mechanism of atherosclerosis, the relationship between the RhoA/ROCK1 pathway and mitochondrial dynamics has attracted our attention. It has been reported that inhibition of ROCK can recover mitochondrial function, which is followed by the alleviation of the Hutchinson-Gilford progeria syndrome phenotype. 37 Furthermore, it has been found that profilin-1 (a small actin-binding protein induced by advanced glycosylation end products), which is extensively dispersed in various cells, leads to the accumulation of ROS, thereby activating the RhoA/ROCK1 pathway.
38 These findings have shown that activation of the RhoA/ROCK1 pathway is closely associated with mitochondrial dysfunction. Recently, some academics have explored the specific mechanism by which the RhoA/ROCK1 pathway regulates mitochondrial dysfunction. Drp1, a GTPase, mediates mitochondrial fission by being transferred to mitochondria, which has attracted wide attention. 39 Based on all the above viewpoints, it is supposed that the RhoA/ROCK1 pathway possibly regulates Drp1-mediated mitochondrial dysfunction. However, whether the RhoA/ROCK1 pathway plays a role in endothelial mitochondrial dysfunction in the context of periodontal infection is unclear. Growing evidence supports the opinion that P. gingivalis is an independent risk factor for atherosclerosis. 48 Its pili and LPS are involved in atherosclerosis formation by supporting the differentiation of monocytes into pro-inflammatory macrophages and their migration. 49 Effective manipulation of adaptive immunosuppression through virulence factors is an essential mechanism of atherosclerosis associated with P. gingivalis infection. 50 Here, P. gingivalis was used to observe the activation of the RhoA/ROCK1 pathway and to explore the regulatory effect of RhoA/ROCK1 on Drp1, which has not been reported before. The elevated expression and activation of RhoA by P. gingivalis were shown in this study. We observed ROCK1 activation by quantifying the expression of p-MYPT1 (Thr696), although ROCK1 expression remained unchanged. Because MYPT1 is a downstream target of ROCK1, an increase in its phosphorylation level indicates ROCK1 activation. 51 Our findings supported the opinion that the activation of RhoA/ROCK1 signalling was induced by P. gingivalis infection. Drp1 has previously been demonstrated to be a crucial protein for maintaining the balance of mitochondrial fission and fusion, essential for sustaining mitochondrial morphological characteristics and functions. 10 Some other scholars have found that Drp1 is a direct substrate of ROCK1 and regulates mitochondrial fission. We hypothesized that RhoA/ROCK1 was the critical signalling pathway connecting P. gingivalis and mitochondrial fission. As predicted, inhibiting the RhoA/ROCK1 pathway downregulated the Drp1 phosphorylation and mitochondrial translocation, significantly alleviating mitochondrial fragmentation and dysfunction. This is in line with the RhoA/ROCK pathway's influence on fibroblast and glomerular endothelial cell mitochondrial fragmentation. 47,52 According to the report, the infection of bovine mammary epithelial cells with Escherichia coli increases mitochondrial fission mediated by Drp1, resulting in decreased MMP, the continuous opening of mPTP, and calcium ion disturbance. 53 56 mtDNA is more vulnerable to ROS attack than nuclear DNA when exposed to oxidative damage. Interestingly, a decrease of mtDNA copy number has been shown to lead to endothelial cell dysfunction, which is characteristic of early events in the pathogenesis of atherosclerosis. 57 Although we established an infection model with viable P. gingivalis and found that P. gingivalis induced mitochondrial dysfunction, we did not further confirm which virulence factor of P. gingivalis was responsible. It is well accepted that gingipains, LPS, peptidoglycan and flagellin are fundamental virulence factors of P. gingivalis. Among them, gingipains provide about 85% of the proteolytic activity and have been considered an essential virulence factor for P. gingivalis. 58 Cao et al.
treated myocardial cells with gingipains and observed disruption of mitochondrial integrity, inducing mitochondrial pathway apoptosis. 59 P. gingivalis can degrade platelet endothelial cell adhesion molecule 1 and vascular endothelial cadherin through gingipains, leading to vascular injury, increased endothelial permeability and endothelial dysfunction. 60 Here, we conjectured that P. gingivalis expanded the intercellular gaps and enhanced the permeability of endothelial cells through gingipains, opening up a channel for the invasion of P. gingivalis and its toxic products. Subsequently, the RhoA/ROCK1 signalling pathway was activated, leading to mitochondrial dysfunction and endothelial damage. However, further research is required to confirm our assumption. We have determined here that in P. gingivalis-infected endothelial cells, RhoA/ROCK1 had a new role in mitochondrial morphology and function depending on Drp1. The findings indicated a potential new mechanism by which P. gingivalis promotes the occurrence and progression of atherosclerosis. Drugs that inhibit the RhoA/ROCK1 pathway are expected to become new targets for treating atherosclerosis with P. gingivalis infection. To verify these viewpoints, however, further research is required. | CONCLUSION Our findings revealed that the RhoA/ROCK1 pathway was activated by P. gingivalis, and it was involved in the mitochondrial dysfunction depending on phosphorylation and mitochondrial translocation of Drp1. ACKNOWLEDGEMENTS This research was supported by the National Natural Science Foundation of China (No. 81970943). CONFLICT OF INTEREST STATEMENT The author declares no conflict of interest. DATA AVAILABILITY STATEMENT On reasonable request, the corresponding author could provide the data that support the findings of this article.
2023-06-07T06:17:50.340Z
2023-06-06T00:00:00.000
{ "year": 2023, "sha1": "5246e76f4241eba0a081dc92eddf92bf1dec61c7", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.17796", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "f97fd4bcd11e742036d47192ea5f6216405c9839", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56522450
pes2o/s2orc
v3-fos-license
Comparison of the Effect of Pomegranate Juice and Orange Juice on the Level of pH of Dental Plaque ARTICLE INFO Article Type Original Article Background and Aim: Reduction of dental plaque pH is an effective factor in the incidence of dental caries. One of the common methods for assessment of the cariogenic potential of food products is the study of plaque pH changes in the oral environment. The present study was performed because of the importance of dental plaque and its known complications, the increasing consumption of industrial fruit juices, which are nowadays promoted as healthy drinks, and the positive effect of pomegranate juice on the amount of dental plaque that has been mentioned in previous reports. Methods and Materials: This clinical trial was performed with a crossover design. Complete prophylaxis was performed during the first session. Then, the participants were asked to refrain from oral hygiene methods for 48 hours and not to eat or drink for at least 2 hours before the experiment. The baseline plaque pH was measured, and afterwards 10 cc of fruit juice was kept in the mouth for 2 minutes and then swallowed. Afterwards, plaque pH was measured at time intervals of 2, 5, 7, 10 and 30 minutes. After a wash-out period of one week, the participants were again evaluated by the same method with the other type of fruit juice. The measurement of plaque pH was performed with the microtouch method using a Metrohm electrode. The data were analyzed by repeated measures ANOVA. Results: pH in the pomegranate juice group before fruit juice intake equaled 6.73 ± 0.24, reached 5.57 ± 0.34 at the fifth minute and finally reached 6.19 ± 0.32 at the 30th minute (p < 0.01). Also, in the orange juice group, pH before intake equaled 6.16 ± 6.8 and reached 5.62 ± 0.17 at the seventh minute and 6.15 ± 0.2 at the 30th minute (p < 0.01). The maximum fall in pH for both fruit juices occurred at the fifth and seventh minutes. pH after consumption of both fruit juices began to increase from the tenth minute. The two fruit juices were not significantly different regarding plaque pH at the zero minute, at the time of maximum pH fall, or at the 30th minute (p < 0.08). Conclusion: The results showed that plaque pH after consumption of both fruit juices falls below the critical level for seven minutes and this decline is similar for both fruit juices. Introduction: Dental caries is one of the most common and costly infectious diseases. (1) Reduction of salivary and dental plaque pH is one of the effective factors in the incidence of dental caries. (2) Upon encountering dental caries, a dentist needs to be familiar with its etiologic factors and preventive measures in addition to symptomatic treatments. (1) Estimation of the relative cariogenic potential of foods is of special importance due to the multiplicity of nutritional factors, and great effort has been made for many years to assess the relative cariogenic potential of different food products. (3) Multiple factors play a role in the assessment of cariogenic potential, including the amount of fermentable carbohydrates, adherence, the physical form of carbohydrates and their degree of oral clearance, the effect of mixed consumption of food products, order of consumption, frequency of consumption, etc. Lack of awareness regarding the cariogenic potential of food products leads to inappropriate food intake, dental caries, tooth loss, malnutrition, etc.
(4) Acid production in the oral cavity during bacterial fermentation of a food product is a prognostic factor in the assessment of the role of that food product in cariogenicity. One of the common methods for assessment of the cariogenic potential of food products is the assessment of dental plaque pH changes in the oral environment. (5) Saliva has a key role in maintaining the health of the oral cavity and teeth; protecting teeth against caries, cleaning the mouth and buffering are among its duties. (6,7) Recently, considering the use of healthy food products and changes in nutritional patterns, there is a tendency towards the use of industrial fruit juices, especially in children, so that their use is often encouraged as healthy drinks. This claim regarding the safety of these fruit juices for teeth is doubtful considering the findings in the literature. (8) In a study by Toumba et al, a reduction in pH similar to or worse than that after consumption of a sucrose solution was reported after consumption of four types of black currant juice. (9) But in a study by Witjaksono et al in 2013, a more severe reduction in pH was reported after consumption of edibles containing sucrose in comparison with edibles containing maltitol. (10) Pomegranate juice has recently attracted a lot of attention as a product with antioxidant properties and was assessed in a study by Zarban in 2007. In the mentioned study, pomegranate juice concentrate without additives was compared with other commercial Sun Ich fruit juices (pomegranate juice, red grape juice, cherry juice, orange juice, pineapple juice, apple juice and mango juice), and the authors reported that pomegranate juice had the highest absolute antioxidant capacity. (11) But this product has not been evaluated regarding its cariogenic potential. Therefore, the aim of the present study was to compare the effect of sugar-free pomegranate juice and orange juice on the level of dental plaque pH. Materials and Methods: This was a randomized crossover clinical trial. The individuals involved in this study were dental students who volunteered to participate after being fully informed of the study protocol. The individuals were examined, and healthy subjects without any systemic disease based on their medical history, who had not taken any medications for the last two weeks, were not following any special diet, had no xerostomia, orthodontic appliances or dental prostheses, and were non-smokers without periodontal disease or active dental caries, were included in the study. (8) The inspection site was between the distal surface of the upper right second premolar and the mesial surface of the first molar. If this site had any restorations, the surfaces between the first and second premolars were selected, and if these surfaces were also restored, the individual was excluded from the study. The stages of the study were verbally explained to the participants, and they signed informed consent forms before the experiment. Determining the pH level of dental plaque The participants were asked not to use any fluoride-containing products or antimicrobial mouthwashes.
(8) In order for the dental plaque to reach the appropriate acid production ability, and yet not to create conflict with dental and periodontal health, in the first session total mouth prophylaxis was performed, and then the participants were asked to refrain from oral hygiene procedures such as toothbrushing, dental floss or antimicrobial mouthwashes for the next 48 hours and not to eat or drink for at least 2 hours before the experiment (except water). (12) Afterwards, the two experimental drinks were coded as A and B. For each participant in each experimental session, a box of juice was opened after shaking and 10 cc of the juice was poured into a disposable cup. In this stage, the baseline pH level of dental plaque (before intervention) was measured. Afterwards, the participant held 10 cc of the juice in his/her mouth for 2 minutes and then swallowed the juice. (12) Then the pH of dental plaque was measured immediately and at time intervals of 2, 5, 7, 10 and 30 minutes after drinking the juice. The values were recorded after the number on the pH meter had remained fixed for 30 seconds. Measurement of the pH of dental plaque was performed in vivo with the microtouch method using a Metrohm glass microelectrode connected to a Metrohm digital pH meter (Metrohm Ltd., CH-9101 Herisau, 781 pH/Ion Meter, Switzerland). (12) In the microtouch method, thin glass or metal probes which penetrate the depth of the dental plaque are placed in contact with the dental surface, and the connected pH meter shows the value of pH. Before each experiment and also between readings, the microelectrode was calibrated with a 3 mol/L KCl solution with pH 7, and the electrode was washed with a gentle flow of distilled water. Glutaraldehyde 2% was used for disinfection for 20 minutes. (12) The minimum pH after consuming each juice, the difference between the rest pH (base pH) and the minimum pH (ΔpH), and also the time period when the pH was below the critical pH of 5.5 were determined for all participants. After a one-week wash-out, the participants in each group were assessed again using the same method with the other type of juice (crossover). Afterwards, a pH curve was drawn for each experimental juice for all the participants at the mentioned time intervals.
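The quantities defined above (minimum pH, ΔpH relative to the baseline, and the time spent below the critical pH of 5.5) can be computed from the discrete pH readings as in the following sketch. The pH values are invented for illustration and only loosely mimic the reported curves, and linear interpolation between sampling points is an assumption rather than the method stated by the authors.

import numpy as np

# Sampling times (minutes) used in the study and invented pH readings.
t = np.array([0, 2, 5, 7, 10, 30], dtype=float)
ph = np.array([6.70, 5.80, 5.40, 5.45, 5.70, 6.20])

ph_min = ph.min()
delta_ph = ph[0] - ph_min                     # rest (baseline) pH minus minimum pH
print(f"minimum pH = {ph_min:.2f}, delta pH = {delta_ph:.2f}")

# Time below the critical pH of 5.5, estimated on a dense, linearly interpolated grid.
critical = 5.5
t_dense = np.linspace(t[0], t[-1], 3001)
ph_dense = np.interp(t_dense, t, ph)
dt = t_dense[1] - t_dense[0]
minutes_below = float(np.sum(ph_dense < critical)) * dt
print(f"time below pH {critical}: {minutes_below:.1f} min")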
The findings were analyzed with repeated measures ANOVA with a 95% confidence interval and 80% power. The specifications of the tested products: After visiting the supermarkets in Tehran in search of different types of fruit juice, we found that the Sun Ich Company produces the widest variety of fruit juices. Two types of sugar-free fruit juices without additives produced by this company were selected for this experiment. The specifications and ingredients of the fruit juices are summarized below (*ingredients in each serving; serving size 8 oz (240 ml); **percent daily value). Results: This study was performed on 20 participants and, considering the crossover design, on 40 samples. Six participants (30%) were male and 14 participants (70%) were female, with an average age of 26 ± 1.22 years, and all of them were dental students who met the inclusion criteria. Based on the performed analyses, in each of the groups the level of pH at 0 and 30 minutes (end of the experiment) was not significantly different. Moreover, the initial and final pH values in the two groups were not significantly different (0.57 < p < 0.63). Based on the findings, the level of pH in the pomegranate juice group began to decrease from the zero minute and had decreased by 17% at the fifth minute, which was significantly different from the first minute (p = 0.01); afterwards the level of pH increased. In the orange juice group, the decrease in pH at the seventh minute equaled 17.3%, which was significantly different from the zero and 30th minutes (p = 0.01). According to the performed analyses, no significant differences existed between the pomegranate juice and orange juice groups during the mentioned time intervals (0.46 < p < 0.78). The level of pH according to follow-up time and type of fruit juice is summarized in Table 1. The coefficient of variation (CV) showed that the variability of plaque pH was low during all the mentioned time intervals, with a maximum of 11%. Discussion: The aim of the present study was the in vivo assessment of changes in the pH level of dental plaque after consuming sugar-free orange juice and pomegranate juice using the microtouch method with a crossover design. Critical dental plaque pH is the pH range at which dissolution of the hydroxyapatite crystals of dentin and enamel is initiated. (9) In the present study, consumption of orange and pomegranate juices similarly reduced the pH of dental plaque. In a study by Saha et al, the reduction of dental plaque pH after consumption of apple juice was greater than that after drinking guava, pomegranate and lemon juice. (13) In a study by Moeiny et al, the effects of orange juice, orange concentrate and pineapple concentrate on plaque pH were assessed, and consumption of orange concentrate with pulp had the greatest effect on pH decline during the majority of the time intervals in comparison with the other products. (12) In a study by Toumba et al, black currant juice mixed with sugar-free apple juice, black currant juice mixed with citrus juice with a higher concentration of carbohydrates than the other products, a black currant drink containing 7% concentrate, and a black currant drink containing 10% concentrate with a new formula were compared. Among these four types of fruit juice, the black currant drinks with the new formula, which contained little carbohydrate, had lower acidogenic potential and prevented the fall of pH to below the critical level. (9) By evaluating four types of fresh and packaged fruit juices, Chaly et al reported that plaque pH did not fall below the critical level. (14) Nowadays, a significant transition has taken place towards the consumption of healthy food products and drinks in different societies. Fruit juices are one of the food products with a great market as healthy drinks. (4) Dental caries is a process during which the mineral tissue of teeth deteriorates gradually due to the acid produced by the action of microorganisms on fermentable carbohydrates. (15) In the present study, the maximum fall in plaque pH occurred at the fifth and seventh minutes after intake of the fruit juices. Afterwards, plaque pH followed an ascending course for both fruit juices, which showed the compensating process of pH. In the study by Toumba et al, the maximum fall in plaque pH occurred following the intake of the mixture of citrus juices at the fifth minute and remained below the critical level for five minutes. This suggests that preliminary sampling of the plaque five minutes after consumption may not always record the minimum plaque pH, and that initial time intervals of 2 or 3 minutes may better capture the maximum fall in plaque pH after intake of fruit juices. (9) Also, in a study by Azrak et al, after drinking orange juice, milk, mineral water and instant fennel tea, the maximum fall was shown to be during the fifth to 10th minutes, except for mineral water, which exhibited high pH at all the time intervals. (16) In the present study, at the 30th minute after the intake of both fruit juices, plaque pH had almost returned to the initial level. The difference between the pH decline time and recovery time was similar in these two studies. Johansson and colleagues stated that a significant decline of the baseline pH following the consumption of acidic drinks also occurs after the intake of sugar-free types of these drinks, which can be an important risk factor for the initiation of dental erosion. (5) On the other hand, Beighton believes that the rapid preliminary decline of plaque pH (less than 5 minutes) after consumption of fruit juices is related to the acidic contents of these drinks rather than to sugar fermentation by plaque bacteria. (18) In the present study, both fruit juices were sugar-free and caused a significant reduction in plaque pH. It seems that the acidic content of fruit juices (citric, acetic, malic, and ascorbic acid, depending on the type of fruit juice) is washed away rapidly by saliva, and after this cleansing, the acids produced by bacterial fermentation of carbohydrates (lactates and succinates) reach their maximum concentration. It has been stated that exposure of bacteria to foodstuffs with low pH decreases their ability for carbohydrate fermentation and acid production. (18) Johansson et al have stated that, overall, the pH curve after consumption of foods such as fruit juices follows a typical pattern (Stephan curve), which is due to three main factors: 1) the ingredients of a product, such as acids and sugar; 2) individual factors, such as salivary conditions, the amount of dental plaque and the type of microflora; and 3) the food and drink consumption pattern. In the present study, which was a crossover experiment, the last two factors were homogenized as far as possible. Therefore, the results are highly reliable. Moreover, the method for pH measurement used in the present research is an up-to-date and reliable method.
(10,19) A single test with the ability to unequivocally determine the cariogenicity of foods and drinks in vivo is not yet available, but if the acidogenic etiologic theory of caries is accepted, measurement of pH before, during and after food consumption can be considered a guide for determining the cariogenic potential of food products. (17) In addition, the glass microelectrode used in the present experiment is much more accurate and reliable than the types which need a separate reference electrode and salt bridge and which were used in previous studies (9,17). In more recent studies which have used digital glass electrodes, the speed of readings and the accuracy of pH measurements have been higher. (10,18) One of the limitations of our study was the type of fruit juices tested. The results of this study are only valid for these two types of fruit juice of a particular brand; due to the limitations of the experimental use of natural juices and the impossibility of procuring similar samples, we could not use fresh fruit juices. However, the results are valid for these two commercial products on the market and for the mentioned time intervals. Conclusion: The results of the present study showed that plaque pH declined below the critical pH until the seventh minute after the intake of the fruit juices, and this decline was similar for both fruit juices. Plaque pH returned to the baseline value 10 minutes after fruit juice consumption.
2018-12-17T21:05:00.064Z
2016-07-01T00:00:00.000
{ "year": 2016, "sha1": "52df248e5f73abf608f14d62f9eaefaa324b536a", "oa_license": "CCBY", "oa_url": "http://jrdms.dentaliau.ac.ir/files/site1/user_files_d1a2ed/fattahi-A-10-275-1-c6e665b.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5ba0a2447d8f6943b677111292706ea16a9bc5e7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
123426485
pes2o/s2orc
v3-fos-license
Low Energy Dynamics in Spin-Liquid and Ordered Phases of S=1/2 Antiferromagnet Cs2CuCl4 Cs2CuCl4 realizes a spin-1/2 quantum antiferromagnet on a distorted triangular lattice. It remains in a quantum spin-liquid state far below the Curie-Weiss temperature of 4 K and exhibits an incommensurate spin ordering at TN=0.6 K. We studied Cs2CuCl4 by means of electron spin resonance (ESR) at temperatures down to 0.05 K in the frequency range 9<f<140 GHz. An unexpected energy gap of 14 GHz and a splitting of the ESR were found in the spin-liquid phase. We quantitatively describe both the shift and the splitting of the ESR line for different orientations of the applied magnetic field by accounting for the effect of a uniform Dzyaloshinskii-Moriya (DM) interaction on spinon excitations of weakly coupled Heisenberg chains. On cooling below TN, we observe at lower frequency f<40 GHz a gradual crossover of the signal from the above spinon-type ESR toward a resonance of a spiral-AFM type. However, for higher frequency f>60 GHz, we observe that the above spinon-type ESR survives deep in the ordered phase. These novel phenomena are consequences of fractionalized spinon excitations of spin chains, which are effectively decoupled in Cs2CuCl4 due to strong geometric frustration. Introduction Magnetic crystals with S = 1/2 ions coupled by antiferromagnetic exchange provide numerous quantum phases with specific spin structures and excitations. The quasi-2D antiferromagnet Cs2CuCl4 has a stacked layered magnetic structure with a distorted triangular lattice in the layers. It was thoroughly investigated by means of elastic and inelastic neutron scattering, which, in particular, uncovered an extensive two-spinon continuum [1,2]. The spinon continuum was observed below the Curie-Weiss temperature TCW = 4 K but still above the Néel temperature TN = 0.62 K, below which the system orders into a two-dimensional incommensurate spiral in the bc-plane. The two-spinon continuum was uncovered in a q-space region near the Brillouin zone boundary. This continuum, which is a distinctive feature of a quantum critical S=1/2 spin chain, was found to survive well below the ordering temperature and to coexist with the low-energy spin-wave excitations. The 1D nature of excitations in this layered structure is attributed to a special structure of exchange bonds, with the strongest exchange bond (J = 0.375 meV) coupling magnetic ions along the b-direction, parallel to the bases of the isosceles triangles of the lattice. A weaker exchange integral J' = 0.34J corresponds to the c-direction (lateral sides of the triangles). The interlayer exchange J'' = 0.045J is the weakest one. The spin chains along the b-axis should be practically decoupled due to the geometric frustration of the exchange bonds J', as shown by numerical simulations [3] and an analytical approach [4]. This decoupling is the reason for the quasi-1D character of the spectrum of spin excitations. It also makes the system very sensitive to the remaining weak interlayer and Dzyaloshinsky-Moriya interactions [5]. In this paper we describe the investigation of spin excitations in Cs2CuCl4 in the low-energy range by means of electron spin resonance (ESR) spectroscopy. A new kind of ESR signal of a spin S=1/2 antiferromagnet was found both in the spin-liquid and ordered phases.
Spin-liquid phase The magnetic resonance signals were recorded as the dependence of the transmitted microwave power on the magnetic field, using resonator-type microwave spectrometers for the range 9-140 GHz, combined with cryostats with 4He and 3He pumping and a Kelvinox-400 dilution refrigerator. The ESR signal at T > 10 K is a typical single-mode resonance corresponding to g-factor values of g_a,b,c = 2.20, 2.08, 2.30 for the orientation of the magnetic field along the crystallographic axes a, b and c, respectively. This kind of resonance is typical for magnetic crystals containing S=1/2 Cu2+ magnetic ions. On cooling the sample below the temperature of about 6 K, a strong evolution of the ESR line occurs. At H ∥ b, a single ESR mode is observed in the frequency range above 20 GHz, with the frequency shifted to lower fields. The frequency of this spin resonance may be fitted by a standard gapped antiferromagnetic resonance form 2πhf = ((gµB H)² + ∆²)^1/2. At T = 1.3 K the value of the gap is ∆/2πh = 14 GHz. At H ∥ a, c the ESR line splits into a doublet which is resolved below 4 K. The splitting of the 27 GHz ESR line is about 0.5 T at T = 1.3 K, as shown in Fig. 1. The frequency-field dependence for these split ESR resonances is given in Fig. 2. The details of the temperature evolution of the ESR signals and of the ESR spectra are described in Ref. [8]. The observation of the nonzero gap in zero field and of the splitting of the ESR line in the paramagnetic phase is unusual for S=1/2 chains. In the ideal case of the Heisenberg chain the ESR frequency is not renormalized and remains at the standard Zeeman energy value 2πhf = gµB H. In the presence of perturbations such as a staggered g-factor, an alternating Dzyaloshinski-Moriya interaction or an anisotropic exchange, a field-induced gap may occur due to the generation of a staggered magnetic field in the presence of the external magnetic field (see theory [9] and experiment, e.g., [10]). In contrast to these known cases, we observe a gap in the absence of an external magnetic field and, even more strangely, the splitting of the ESR line. Ordered phase The next point of our investigation is to follow the transformation of the unusual ESR shift and splitting on cooling through the ordering temperature TN = 0.62 K. We consider the temperature evolution of the ESR signals for the simplest case of H ∥ a. Here the magnetic field is perpendicular to the spiral plane and the structure should undergo only a gradual transformation to the cone configuration and further saturation, without intermediate phase transitions, in contrast to the cases of H ∥ b, c [11]. We observed two kinds of temperature evolution: i) For low frequencies (f < 40 GHz) the doublet described above is frozen out and at low temperature the antiferromagnetic resonance mode is formed, as shown in Fig. 4. ii) For higher frequencies, f > 60 GHz, the doublet found in the spin-liquid phase survives deep in the ordered phase. Thus, in addition to the antiferromagnetic resonance, a spin-liquid mode continues to exist (see the evolution in Fig. 5). The frequency-field dependence for the resonance modes of the ordered phase is shown in Fig. 6. Discussion The observations in the spin-liquid phase may be explained and quantitatively described by considering the influence of the in-chain uniform Dzyaloshinsky-Moriya (DM) interaction on the spinon continuum. This interaction is a distinct feature of Cs2CuCl4 [5].
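As a point of reference for the discussion that follows, the two-spinon continuum of an ideal isotropic S = 1/2 Heisenberg chain is bounded by the des Cloizeaux-Pearson branches ε_L(q) = (πJ/2)|sin(qa)| and ε_U(q) = πJ|sin(qa/2)|. A uniform DM interaction with the DM vector parallel to the applied field can be absorbed into a gradual rotation of the spin quantization axis along the chain, which shifts this continuum in momentum by an amount of order D/J; the lower and upper boundaries then take different values at q = 0, so the q = 0 ESR response acquires two maxima instead of one. This is a textbook-level reminder added for orientation; the quantitative theory specific to Cs2CuCl4 is developed in Refs. [6-8].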
The uniform DM interaction in a classical antiferromagnet should result in a spiral ground state (compare to the canted antiferromagnetic ground state due to the alternating DM interaction). In the simple model case when the magnetic field is parallel to the DM vector D, this interaction modifies the spinon continuum of S=1/2 chains by simply shifting the spectral density by the amount D/J along the q-axis [7,8], see Fig. 3 (the initial, i.e. unshifted, spectrum has zero energy as a lower limit at qa = π). As a result, the ESR absorption, which is determined by q = 0 transitions, acquires maxima at two frequencies, corresponding to the upper and lower boundaries of the continuum. (See [6] for the most recent theoretical investigation of this striking phenomenon.) Note that in the absence of the uniform DM interaction this continuum collapses to a single frequency. Detailed theoretical interpretation [7,8] is based on the following spin Hamiltonian: H = Σ_{x,y,z} [ J S_{x,y,z} · S_{x+1,y,z} + D_{y,z} · (S_{x,y,z} × S_{x+1,y,z}) − µ_B H · ĝ · S_{x,y,z} ] + ... Here the first term describes the intrachain exchange J (x runs along the crystal b axis), the second the uniform DM interaction D_{y,z} between chain spins, and the third is a Zeeman term allowing for the anisotropic g-factor. The dots correspond to the omitted interchain exchange and DM interactions on interchain bonds. Detailed symmetry analysis of the allowed DM interactions [5] shows that there are four different orientations of the DM vector (see Fig. 6 in [5]) depending on the chain's integer coordinates y, z: D_{y,z} = D_a (−1)^z â + D_c (−1)^y ĉ. Here z indexes the magnetic bc layers, while y numerates the chains within a layer. The crystal symmetry forbids the DM vector from having a component along the b axis [5]. The ESR frequencies resulting from this Hamiltonian are derived in Ref. [8]. In the ordered phases one could expect the formation of the standard ESR spectrum of a planar spiral spin structure with two anisotropy axes, which should have two gapped resonance modes and a third mode with zero frequency, at least in low magnetic fields. These expected frequencies, derived from a macroscopic analysis of the low-frequency dynamics of such a system [12], are plotted in Fig. 6. The experiments show that our expectations are indeed true at sufficiently low frequencies. As described above in Section 2.2, at higher frequencies comparable to the main exchange energy J = 90 GHz, we observe an additional mode which may be interpreted as a component of the "spinon doublet", which survives deep in the ordered phase and coexists with a mode of the antiferromagnetic resonance. This observation is probably related to a similar feature of the inelastic neutron scattering experiments of Ref. [2]. In this experiment the spinon continuum was found to remain practically unchanged at the ordering transition and to coexist with the spin-wave mode at temperatures far below TN. In our experiments we observe that the spinon-type ESR remains approximately undistorted at high frequency, while it freezes out (disappears) at low frequency. Conclusion We observe a splitting of the electron spin resonance line, originating from the modification of the spinon continuum by the uniform Dzyaloshinsky-Moriya interaction, in the frustrated 2D antiferromagnet Cs2CuCl4. The spinon-type resonance at zero wavevector was found to coexist with the low-frequency mode of the antiferromagnetic resonance at temperatures far below the Néel point.
2019-04-20T13:05:42.039Z
2012-12-17T00:00:00.000
{ "year": 2012, "sha1": "004d42ca67714268c6b52b11ccbde9a9349c140e", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/400/3/032091", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c0cd310d6542e6850ad46366bb4f64c2271cd388", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
738173
pes2o/s2orc
v3-fos-license
THE FINE STRUCTURE AND ELECTROPHYSIOLOGY OF HEART MUSCLE CELL INJURY

From the Department of Biological Structure, University of Washington, Seattle, Washington 98105

ABSTRACT Injured frog heart cells electrically uncouple from their uninjured neighbors within 30 min after injury. This uncoupling process can be shown by the disappearance of an injury potential measured between such injured and uninjured cells. In the present study, the time course of the decline of injury potentials, and thus of electrical uncoupling, in bullfrog atrial trabeculae was determined. Tissue was fixed with glutaraldehyde and osmium tetroxide at various times after injury to determine the morphological changes which accompany this uncoupling process. In some cases, ruthenium red was included in the fixatives. Normal atrial cells are long and narrow, with intercellular junctions located along the lateral surfaces of the cells. Two types of intercellular junctions have been observed: cardiac adhesion plaques (CAPs), and close junctions. Close junctions occur only infrequently. Ruthenium red penetrates all around the cells, leaving only small areas within the CAPs unstained. After injury, the cells are very dense and the myofilaments disarranged. Both types of intercellular junction remain intact, and only slight changes within CAPs are observed. The results are discussed in relation to current concepts of intercellular communication.

INTRODUCTION The exact mechanism of intercellular impulse conduction in cardiac muscle is still in question. It is known that conduction of excitation proceeds electrotonically through low electrical-resistance pathways between cardiac cells (Barr and Berger, 1964; Barr et al., 1965; Tillie, 1966; Van der Kloot and Dane, 1964; Weidmann, 1952, 1965, 1966; Woodbury, 1962; Woodbury and Crill, 1961; Woodbury and Gordon, 1965), but the nature and location of such pathways is uncertain. Barr et al. (1965) suggest that tight junctions are the sites of low electrical resistance in cardiac muscle. Loewenstein (1966, 1968) and Loewenstein and Penn (1967) propose that specific patterns of Ca++ distribution at tight and septate junctions are responsible for the low electrical resistance between epithelial cells.
In the present study, the problem of electrical communication between cardiac cells was investigated by comparing the morphology of electrically coupled cells to that of electrically uncoupled cells . Injured frog heart cells become electrically isolated from their uninjured neighbors usually within 15-30 min after the injury is inflicted (Baldwin et al ., 1963 ;Henry et al ., 1961) . This uncoupling process can be measured by the decrease of an injury potential, because injury potentials depend upon the existence of lowresistance connections between the injured and uninjured cardiac cells, and their decline with time is due to the gradual loss of such connections . The experimental plan was to injure one-half of a bullfrog atrial trabecula, measure the resulting injury potential, fix the trabecula at some time during the decline of the injury potential, and then examine it by electron microscopy . Since no detectable morphological change occurred in synchrony with the injury-potential decline, cells were examined only in the coupled state (when the injury potential is at its maximum), and in the uncoupled state (when the injury potential has decreased to zero) . Bullfrog atrium is a convenient tissue to use for this study because the wall is composed largely of trabeculae which run more or less freely for several millimeters. It is therefore possible to remove a strip of myocardium (for electrical recordings) with damage only at the ends, and to insure that nearly all the injury sustained by the tissue is that which is purposely inflicted . Dissection Procedure Adult bullfrogs (Rana catesbeiana) were chemically pithed by injecting 0 .1 ml of 2% xylocaine hydrochloride into the brain case . The isolated heart was placed in Ringer's solution, and a trabecula was dissected away from the opened atrial wall . One end of the trabecula was tied with a fine silk thread and the other left free . The trabeculae used for recording injury potentials were 4-8 mm long and approximately 0 .1 mm in diameter . Frequently, one or more trabeculae were injured deliberately in situ after the heart wall was cut open, but before the trabeculae were dissected away from the heart wall . The injury was produced by squeezing the tissue several times with a pair of jeweler's forceps so that approximately one-half of the length of the trabecula was injured in each case . Injury potentials were recorded from 10 trabeculae isolated from eight different frog hearts . Three of these trabeculae were subsequently fixed for morphological examination . Two of these were fixed after the injury-potential decay was complete, and the third immediately after the initial value of the injury potential was recorded . Trabeculae were injured in situ in eight bullfrog hearts ; 15 such trabeculae were fixed and examined by electron microscopy . Electrophysiology The experimental arrangement for measuring injury potentials is shown schematically in Fig . 1 . The recording chamber was a polyethylene trough which had been cut in half and glued back together with 456 THE JOURNAL OF CELL BIOLOGY • VOLUME 46,1970 rubber cement, so that a thin rubber diaphragm stretched between the two halves . The two halves of the chamber were filled with amphibian Ringer's solution, and the dissected trabecula was placed on one side of the diaphragm . The free end of the silk thread, which had been tied to the trabecula, was threaded through a needle which was used to pierce the diaphragm and to pull about half the trabecula through the hole. 
This half was used as the injured part because a small amount of injury may have occurred in these manipulations. The major injury was crushing, produced in each case by squeezing the tissue several times with a pair of forceps. The difference in potential between the solutions bathing the injured and uninjured muscle was measured with the aid of Ringer's-agar salt bridges, Hg-HgCl half-cells in saturated KCl, and an amplifier and recorder (Fig. 1). The base line for injury-potential measurements was obtained by placing the two salt bridges in the same side of the trough.

Electron Microscopy The fixative was applied to the tissue at one of three different stages: (a) the fixative was injected into the sinus opening of the still beating heart before the heart wall was cut open, and the atria were then opened and the tissue flooded with additional fixative; (b) the opened atria were flooded with fixative at various times after some trabeculae had been injured in situ, but before any trabeculae had been cut away from the heart wall; or (c) the fixative was added to the polyethylene chamber after recording injury potentials. In all cases, the initial fixative used was 0.88% glutaraldehyde in 0.067 M cacodylate buffer at pH 7.2, which is isotonic for amphibians. A standard procedure was then followed: the isolated trabeculae were fixed for 1 hr in the cacodylate-buffered glutaraldehyde at 0°C, rinsed with 0.125 M cacodylate buffer for 15 min, and then postfixed in cacodylate-buffered OsO4 for 1 hr at 0°C. The tissue was dehydrated in ethanol and embedded in Epon (Luft, 1961). Unless otherwise specified, sections were double-stained with saturated uranyl acetate and lead citrate (Reynolds, 1963). Two variations of the above procedure were also used. Some tissue was stained en bloc with 0.5% uranyl acetate in collidine buffer at pH 6.1 (Trelstad et al., 1966). Other tissues were fixed in solutions containing ruthenium red (Luft, 1965). For these experiments, the tissue was fixed for 1 hr in a solution containing equal volumes of 2.62% glutaraldehyde, 0.188 M cacodylate buffer, and ruthenium-red stock solution (1500 ppm in distilled water) at 0°C. It was then rinsed for 15 min in 0.125 M cacodylate buffer and postfixed in a solution containing equal volumes of 5% OsO4, 0.188 M cacodylate buffer, and ruthenium-red stock solution (1500 ppm in distilled water) for 3 hr at room temperature. After a brief rinse in the buffer, the tissue was then dehydrated as in the standard procedure (Luft, 1965).

Electrophysiology Injury potentials recorded from single atrial trabeculae showed a time course of decay similar to that reported for frog ventricle (Baldwin et al., 1963; Engelmann, 1877; Henry et al., 1961; Rothschuh, 1951). Initial injury potentials of 4-19 mV usually decayed completely within 1/2 hr (Fig. 2). Once the injury potential had reached a low value, reinjury of the previously injured area had no effect on the injury potential. However, the injury potential could be renewed to near its former value if a new injury was made close to the old injury. The injury potential was positive when the injured side was grounded.

Electron Microscopy DEFINITION OF TERMS: Trabeculae fixed initially by injecting buffered glutaraldehyde into the sinus opening of the heart are referred to as normal tissue.
Tissue fixed in this way provided a basis on which to judge control tissue (see below) as to quality of fixation, damage due to mechanical manipulation, and the effect of leaving the tissue in Ringer's solutions for the various times required by the experimental procedures. Injured tissue was injured either in situ or in the recording chamber, and was fixed at different times after the injury was inflicted . A control sample was obtained for every injured sample . Control tissue was taken either from an adjacent uninjured trabecula (when the injury was produced in situ) or from the uninjured end of an 1 J . H . Luft . 1965. Personal study. The time course of injury potentials from five different trabeculae . Trabeculae I and II were fixed at the times indicated (arrows) and examined in the electron microscope . injured trabecula (when the injury was produced either in situ or in the recording chamber) . NORMAL AND CONTROL TISSUE : As control tissue did not differ significantly from normal tissue, these two will be described together . The endocardial surface of the atrial wall was composed of many trabeculae which ran in different directions at different levels, forming a contractile meshwork . Trabeculae were composed of many small bundles of several muscle cells whose long axes were oriented parallel to the long axis of the trabecula . The bundles were each surrounded by a basal lamina (Fig. 3), and they formed branching and anastomosing strands within the trabecula . The individual cells did not branch . The fusiform cells were usually 3-12 µ in diameter and contained 1-4 myofibrils in cross-section . They were too long to be measured easily by electron microscopy, and their borders could not be seen by light microscopy . Barr et al . (1965) isolated atrial muscle cells from Rana pipiens by using ethylenediaminetetraacetate (EDTA) and found that they were 175-250 µ long . Intercellular junctions oriented perpendicular to the myofibrillar direction, which are typical of the mammalian intercalated disc, were rare in frog atrial muscle . Most intercellular junctions were located along the longitudinal surfaces of the cells, often in association with Z bands . Such junctions, whether associated with myofibrils or not, were all similar morphologically and will be referred to as cardiac adhesion plaques, or . Occasionally CAPs inter-FIGURE 3 Normal tissue . Cross-section of a trabecula near its surface, illustrating the cells grouped together in bundles, each of which is surrounded by a basal lamina (BL) . Endothelial cells (E), collagen fibers (co), nerve fibers (N), and cardiac adhesion plaques (CAP) are also shown . Unless otherwise indicated, the scale markers represent 1 µ . X 7500. 458 TIIE JOURNAL OF CELL BIOLOGY • VOLUME 46, 1970 FIGURE 4 Control tissue, fixed 1 hr after injury to an adjacent trabecula . This longitudinal section illustrates the typical appearance of CAPs . Note the close association of the CAPs with Z bands . X 40,200 . FIGURE 5 Control tissue, fixed 1 hr after injury to an adjacent trabecula . In this case a CAP is shown which is not associated with any myofibrils. Note the dense bars which can be seen within this CAP (small arrows) . The intercellular dense line can be seen at large arrow . A close junction (cj) and a diadic coupling between the sarcoplasmic reticulum and the plasma membrane (D) are present. X 43,300. rupted myofibrils in a tangential or oblique manner . 
The area of cell surface that was involved in CAPs varied, but it appeared that CAPs preferentially located in rows along the length of the cell, rather than randomly around the cell (see Fig . 11) . The cell membrane appeared as a single dense line in all material fixed by the standard technique . When measured from the center of one cell membrane to the center of the adjacent cell membrane, the intercellular gap at a CAP was 270 f 25 A . 2 With uranyl-acetate block staining, the trilaminar structure of the plasma membrane could be seen (Fig. 9) . In uranyl-acetate blockstained material the spacing between the centers of the inner (cytoplasmic) leaflets of the two unit membranes at the CAP was 300 f 30 A, but between the centers of the outer leaflets of the two unit membranes the spacing was 190 f 30 A . This suggests that only the inner leaflets of the unit membranes were seen after using the standard technique . CAPs were characterized by an accumulation of dense material both intracellularly and extracellularly . The extracellular density was finely granular in nature and showed no organization in sections perpendicular to the cell membrane, except for an occasional faint suggestion of a dense line in the center of the extracellular space ( Fig . 5) . Since this dense line could be seen in both longitudinal and cross-sectioned material, it probably represents a plate-like structure . The intracellular density associated with the CAP was usually finely granular also . However, a certain amount of organization, which was enhanced by uranyl-acetate block staining, could sometimes be detected (Figs. 5,9) . It appeared as dense bars (about 300 A long), which radiated into the cell perpendicular to the cell membrane . The reason for the variable, finely granular or structured, appearance of the CAP with the standard fixation is unknown . The dense materials comprising CAPs (intracellular portion) and Z bands were morphologically similar and often contiguous (Figs . 4,11) . In tangential sections the CAP had a mottled appearance (Fig . 15) . As illustrated in Fig . 16, the central part of this mottled region was extracellular, but at its periphery there appeared to be an intracellular portion as well . The intra-2 Values are expressed as f the standard deviation . 460 THE JOURNAL OF CELL BIOLOGY • VOLUME 46, 1970 cellular mottling may be due to the dense bars mentioned above, cut in cross-section . CAPSs which have been pulled apart are rarely seen . In addition to CAPs, another type of intercellular junction was infrequently observed . These close junctions had a slight increase in density of the adjacent cytoplasm and close apposition of the two cell membranes (Fig . 5, cj) . The plasma membranes of the adjacent cells came very close together, but a 10-20 A gap was present between them ( Fig . 9) . Close junctions were rare in normal and control tissue, and when present were very short in extent (less than 0 . 1 u) . However, in four (out of a total of 51) blocks of tissue, close junctions were exceptional in two respects : they were seen relatively frequently (though much less often than CAPs), and were longer than usual (up to 0 .4 p) . In addition, all four cases were characterized by swollen mitochondria and/or widely dilated sacs of endoplasmic reticulum (Fig . 19) . 
In three of the four cases, the other blocks of tissue which had been processed together with the exceptional tissues (in the same fixation bottle) did not have this aberrant appearance ; in the other case, only one portion of the tissue block was abnormal . ELECTRICALLY UNCOUPLED INJURED TIS-SUE : Injured tissue which was fixed % hr after injury appeared morphologically similar to that fixed at later times. This was true regardless of whether the injury and fixation were carried out in situ or whether the injury-potential decline was measured and the trabecula fixed in the recording chamber. Since the injured tissues from the in situ cases would be expected to be physiologically uncoupled, as were those in the trabeculae where injury potentials were actually recorded, all the tissues which were fixed % hr or more after injury will be described together . The injured cells increased in density and their myofilaments were in disarray (Fig . 6) . Often their outline was irregular and the intercellular boundaries formed tortuous paths . The normal banding pattern was either indistinct or lost altogether. The normal spacing between the thick filaments of 525 A as seen in cross-section decreased to 400 A in the injured cells . Mitochondria enlarged and clumped together . Two changes with the CAP were observed in some, but not all, cases. A : dense plaque appeared parallel to and approximately 60 A away from FIGURE 6 Electrically uncoupled injured tissue, fixed 50 min after injury (see curve 1, Fig . 2) . The cell in the center of the field is typical of cells in the injured part of the trabecula . The particular field shown was taken from the boundary region between the injured and uninjured tissue . Typical of such boundary regions, cells with marked morphological alterations, such as the cell in the center of the field, were found adjacent to cells with little or no morphological alterations, such as the cell at the bottom of the field . A CAP and close junction (ci) can be seen. X 37,100 . the inside of the cell membrane (Fig . 7), and the dense line in the center of the extracellular space became more apparent (Fig . 7) . There appeared to be no difference in the structure of the CAP whether it was located between two injured cells or between an injured cell and an uninjured cell, except that the course of the cell surface, and thus of the CAP, was often more tortuous between two injured cells . There were no significant differences in the intercellular gap width at the CAP between the injured material and control tissue . After the standard technique, the intercellular gap (measured from the center of one cell membrane to the center of the adjacent cell membrane) was 270 f 35 A (control, 270 f 25 A) . After uranyl-acetate block staining, it was 310 f 40 A and 190 f 10 A when measured between the inner (cytoplasmic) and outer leaflets of the unit membrane, respectively (control, 300 f 30 A and 190 t 30 A) . The percentage of total cell-surface area involved in CAPs was not known for either control or FIGURE 7 Electrically uncoupled injured tissue, fixed 30 min after injury. Two CAPs are shown at high magnification. Note the dense line in the center of the intercellular space (small arrow) and the dense plaque just inside the cell membrane (large arrow) . X 100,500. FIGURE 8 Electrically uncoupled injured tissue, fixed 1 hr after injury . CAPs and a close junction (cj) are illustrated . These CAPS are similar to those seen in control tissues . X 58,600. 
injured tissue and therefore a quantitative comparison could not be made . However, no obvious change in area was noted, and only a very few CAPS were observed which had been pulled apart . Because close junctions were small and infrequent, a quantitative comparison was not attempted ; however, close junctions did occur between the injured cells, and they seemed to occur at least as 462 THE JOURNAL OF CELL BIOLOGY . VOLUME 46,1970 frequently in the uncoupled tissues as in the controls . ELECTRICALLY COUPLED INJURED TISSUE : Tissue which was fixed within 1 min after injury, while at least some cells were still coupled electrically with a large injury-potential, was indistinguishable from that fixed at later times . The very dense cells had irregular outlines, the myofilaments were disarranged, and the mitochondria were swollen and clumped together . In addition, the intercellular junctions were essentially un-FIGURE 9 Normal tissue after uranyl-acetate block stain, showing a CAP and a close junction (small arrow) . The trilaminar structure of the plasma membranes and cross bridges on thick filaments are prominent . Note the dense bars (large arrows) within the CAP . The inset is a higher magnification of the marked rectangle . X 81,900 ; inset, X 200,500 . FIGURE 10 Electrically uncoupled injured tissue, fixed 45 min after injury . Uranyl-acetate block stain . CAPs and a close junction (small arrow) are seen . Note the dense plaque just inside the cell membrane (large arrow) . The inset is a higher magnification of the marked rectangle. X 84,500 ; inset, X 272,000 . Unstained sections showed very dense ruthenium red outlining individual cells near the surface of the tissue block (Fig . 11) . Ruthenium red did not diffuse more than about 15 µ into the block, except along large connective tissue septa. It stained the external surface of the cell membrane up to and including the outer leaflet of the unit membrane (Luft, 1964 ;and Fig. 13) and did not pass into the cell unless the membrane was damaged (Luft, 1966) . The extracellular part of the CAP was stained heavily with the ruthenium red, as was much of the rest of the intercellular space (Figs . 11,13) . In a tangential section (Fig . 16), one can see that ruthenium red did not completely fill the extracellular space of the CAP, but left globular areas unstained, giving the area a mottled appearance . Close junctions showed ruthenium red in the small gap between the two opposed cell membranes (Fig . 13) . There was no junctional area between cells which could be shown to exclude ruthenium red, except the globular areas in the extracellular space between CAPS . Since no recognizable close junctions were seen in a tangential section, it is not known whether to those of control tissues (Fig . 14), and CAPs cut tangentially show the same mottled appearance characteristic of those in the controls (Fig . 17) . EXPERIMENTS WITH RUTHENIUM RED, ELECTRICALLY COUPLED INJURED TISSUE : The cells which were fixed within I min after injury resembled those fixed at later times . As shown by the unstained section in Fig . 12, the cell membranes were impermeable to the ruthenium red even at this time . CAPs and close junctions were similar to those seen in control and electrically uncoupled tissues . Electrophy8iology A simplified equivalent circuit for an injury current is shown in Fig. 18 . 
E is the difference in potential between the injured cells and the uninjured cells (if the injury causes complete depolarization, then E is the true transmembrane resting potential); r_t is the resistance of the current path in the tissue; r_e is the resistance of the current path in the external medium; and V is the measured injury potential, where V = E·r_e/(r_e + r_t) (Stampfli, 1954). A decrease in the injury potential (V) with time must be due to an increase in r_t or a decrease in E or r_e. Henry et al. (1961) found in frog ventricular muscle that E does not decrease during the time that the injury potential is decreasing (Table I); the decline of the injury potential is therefore attributed to the gradual electrical uncoupling of the injured cells from the surrounding uninjured cells. Deleze (1965, 1967) found that the cut end of a strip of mammalian cardiac muscle behaves as if a high-resistance barrier is present within a few minutes after the injury is inflicted, and that Ca++ is required for this "healing over" to occur. He suggests, without morphological evidence, that the healing over is accomplished by new membrane formation (recovery of the injured cells). It could, however, be due equally well to electrical uncoupling of the injured cells, as demonstrated in the present study. The immediate depolarization of the injured cells is probably due to mechanical damage to the membrane at the time of injury. Continued depolarization, on the other hand, may be due to permanent loss of the membrane's selective ionic permeability or to loss of the metabolic machinery required for maintaining the Na-K pump, even if the membrane heals itself. The abnormal appearance of the mitochondria after injury is consistent with the idea that their normal biochemical functions have been disrupted (Bovis et al., 1966; Hackenbrock and Brandt, 1965; Luft and Hechter, 1957; Weinbach et al., 1967). However, there is no way to determine, from the results obtained in this study, the effects of injury on membrane or mitochondrial functions, except to note that the plasma membranes are impermeable to the large ruthenium-red ions, even short times after injury.

FIGURE 13 Control tissue, fixed immediately after injury to the other end of the trabecula, ruthenium red exposure. A CAP (large arrow) and a close junction (small arrow) are present. Inset is a higher magnification of the marked rectangle illustrating ruthenium red in the gap at the close junction and the trilaminar structure of the cell membranes. X 61,400; inset, X 221,000. FIGURE 14 Electrically uncoupled injured tissue, fixed 45 min after injury, ruthenium red exposure. Several CAPs (large arrows) and a close junction (small arrow) are visible. Inset is a higher magnification of the marked rectangle. X 71,300; inset, X 266,000. FIGURE 15 Control tissue, fixed 30 min after injury to an adjacent trabecula, standard fixation. This tangential section of a CAP illustrates its mottled appearance. X 37,200. FIGURE 16 Control tissue, fixed 45 min after injury to the other end of the trabecula, ruthenium red exposure. The tangential section of a CAP shows ruthenium red-negative areas within the extracellular space at the CAP. X 38,800. FIGURE 17 Electrically uncoupled injured tissue, fixed 45 min after injury, ruthenium red exposure. Tangential section of a CAP shows ruthenium red-negative areas within it. X 38,000. FIGURE 18 Equivalent circuit of the path of an injury current (from Stampfli, 1954). See text for explanation of symbols.
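To make the equivalent-circuit relation above concrete, here is a small illustrative calculation; the values of E and the resistances are hypothetical, chosen only to show the trend (the paper reports no numerical resistances), while the formula V = E·r_e/(r_e + r_t) is the one quoted above. As the tissue-path resistance r_t rises during uncoupling, the measured injury potential V falls toward zero even though E is unchanged:

    # Illustrative only: E and the resistances are made-up values, not measurements
    # from the paper; the formula V = E * r_e / (r_e + r_t) is the one quoted above.
    def injury_potential_mv(e_mv, r_external, r_tissue):
        """Measured injury potential V for the simple series circuit of Fig. 18."""
        return e_mv * r_external / (r_external + r_tissue)

    E_MV = 80.0          # hypothetical transmembrane potential difference, mV
    R_EXTERNAL = 10.0    # hypothetical external-path resistance, arbitrary units
    for r_tissue in (40.0, 100.0, 1000.0, 10000.0):  # uncoupling raises r_t
        v = injury_potential_mv(E_MV, R_EXTERNAL, r_tissue)
        print(f"r_t = {r_tissue:7.0f} -> V = {v:5.2f} mV")

With these illustrative numbers the initial V lies in the few-to-tens-of-millivolts range reported for the trabeculae and decays toward zero as r_t grows, which is the behavior the uncoupling argument requires.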
(Muir, 1965(Muir, , 1967 or interfibrillar portions of the intercalated disc (Sjöstrand et al . 1958) . The cardiac adhesion plaque (CAP) appears to be analogous to both types of junction . CAPs oriented parallel to the fiber direction are referred to as desmosomes by Barr et al . (1965, frog Henry et al ., 1961 . andCox (1960, mouse pulmonary vein), and Staley and Benson (1968, frog ventricle) . Since these junctions appear to differ primarily in their orientation with respect to the fiber direction, a single, common term can refer to all such junctions, at least in the above-mentioned tissues . This type of junction is morphologically distinct from the typical epithelial desmosomes as described by Farquhar and Palade (1965 Palade, 1963, 1964) . Revel and Karnovsky (1967) have shown, using a very fine lanthanum oxide sol as an extra-cellular tracer, that those junctions which previously were described as tight junctions should be separated into two groups ; the true tight junctions or zonulae occludentes ; and the close or gap junctions . The former type is impermeable to lanthanum (Revel and Karnovsky, 1967 ;Brightman and Reese, 1969), ruthenium red (Martinez-Palomo, 1968) 3 and horseradish peroxidase Reese, 1967, 1969) and can be considered to be a true membrane fusion Palade, 1963, 1965) . The latter type of junction is permeable to the lanthanum sol (Revel and Karnovsky, 1967), ruthenium red (Martinez-Palomo, 1968),2 and horseradish peroxidase (Brightman and Reese, 1967 . Using these extracellular tracers or a uranyl-acetate block stain, a gap of approximately 20 A can be demonstrated between the outer leaflets of the two opposed cell membranes (Revel and Karnovsky, 1967 ;Reese, 1967, 1969) . 468 The present results show that the close membrane appositions of frog atrium are close, or gap, junctions . If such close junctions were cut in a tangential section, areas within the extracellular space of the junction might not be penetrated by the ruthenium red . However, close junctions are so small, so infrequent, and so randomly distributed that there is no way to recognize with confidence a close junction except when it is cut perpendicularly . In contrast to these results, Barr et al . (1965) have reported tight junctions (nexuses) in frog atrium after permanganate fixation . Reese (1967, 1969) Although large areas of close membrane apposition can be found in mammalian ventricular muscle (Barr et al ., 1965 ;Karrer, 1960 ;Muir, 1965Muir, , 1967Sjöstrand et al ., 1958 ;Sjöstrand and Andersson-Cedergren, 1960 ;Sommer and Johnson, 1968 ;Fawcett, 1965), it appears that this is not true of all types of cardiac muscle . A paucity of such junctions has been reported for frog ventricular muscle (Staley and Benson, 1968 ;Som-mer, 1968), for chicken ventricular muscle (Sommer, 1968 ;Scott, 1969),4 and for Purkinje fibers of a variety of mammals (Sommer and Johnson, 1968) . In addition, James and Sherf (1968) have been unable to find close membrane contacts between P (Pacemaker?) cells in dog and human hearts . It has been claimed that a lack of close membrane appositions in frog atrium is a result of shrinkage during tissue preparation (Barr et al ., 1965 ;Dewey and Barr, 1964), but in the results reported here there is no evidence of such shrinkage in the control tissue . 
Furthermore, close junctions still remain intact, even when the cells shrink in contracture after injury (see below), and Cobb and Bennett (1969) have reported that nexuses between smooth muscle cells of taenia coli, vas deferens, and gizzard were unaffected by shrinking the cells in hypertonic solutions . Moreover, variations in amounts of close membrane appositions can be seen in different tissues of the same heart . For example, in chicken heart, there are many such areas between Purkinje cells, but very few between ventricular cells (Sommer, 1968 ;Scott, 1969) . 4 It therefore seems unlikely that differences in amounts of close membrane appositions can be ascribed solely to differences in fixation or processing . The few observations of relatively large areas of close junction in association with abnormal mitochondria and/or sarcoplasmic reticulum (Fig . 19) may be interpreted as additional evidence that electron microscope are electrically coupled or not . It seems reasonable, however, to assume that injured cells seen in the electrically uncoupled tissues are in fact electrically uncoupled from their surrounding cells as most cells in such tissues must be electrically isolated . At the very least, these cells must be typical of such uncoupled cells since they were typical of the injured cells which were uniformly and consistently observed in uncoupled tissues. It is thought that even the infrequent close junctions are likely to be present between electrically uncoupled cells. There was no obvious decrease in their frequency in the electrically uncoupled tissues as surely would be the case if close junctions came apart when cells became electrically isolated . In the electrically coupled injured tissues it is less tenable that the injured cells observed were in fact electrically coupled . One knows only that enough injured cells are electrically coupled to uninjured cells to give rise to the injury current . However, it seems likely that cells such as those shown in Fig . 12 are typical of electrically coupled injured cells since they were typical of the whole mass of cells in the electrically coupled tissues. Two types of intercellular junctions have been proposed to be electrical couplings because of their frequent occurrence between electrically coupled cells : (a) septate junctions of invertebrates (Wood, 1959), which are present between salivary gland cells of insects, have been suggested as the sites of electrical coupling in this tissue (Loewenstein and Kanno, 1964) ; (b) "tight" junctions are often found between electrically coupled vertebrate cells, and this type of junction has been proposed as being involved in electrical communication by Bennett et al . (1963), Robertson (1961Robertson ( , 1963, Barr (1962, 1964), Weidmann (1965Weidmann ( , 1966, Palade (1964, 1965), Loewenstein et al . (1965), Penn (1966), Potter et al . (1966), Revel and Karnovsky (1967), and Revel and Sheridan (1968) . However, in many cases it is not known whether these tight junctions are true tight junctions or are gap or close junctions since the gap junction was first described after many of these references were published . For the purpose of convenience and clarity in this discussion, all such junctions will be referred to as tight junctions since this term has been so commonly used in the literature . Rosenbluth (1965), and Martin and Veale (1967) . A further characteristic of tight and septate junctions is that they are not sufficient in them-5 G . D. Pappas . 1968 . Personal communication . 
47 2 THE JOURNAL OF CELL BIOLOGY . VOLUME 46,1970 selves to assure low electrical-resistance between cells . On one hand, Barr et al . (1965) Loewenstein and coworkers (Loewenstein, 1966(Loewenstein, , 1968Loewenstein and Penn, 1967) account for the properties of the lowresistance pathways between cells by proposing that plasma membranes have low electrical-resistance when exposed to low Ca++ solutions on both sides of the membrane . They assume that electrical coupling depends upon the structural integrity of the plasma membrane in excluding Ca++ from the internal surfaces of the junctional membranes, and on the structural integrity of the functional area in excluding Ca++ from the external surfaces of the cell membranes at the junction . Uncoupling of the cells, then, is due to Ca++ gaining access to the junctional membranes and in some way making them impermeable to ions . Uncoupling with injury (as in this study) would therefore result from Ca++ gaining access to the inside of the cell through the damaged membrane and sealing off the junctional membrane from the inside of the cell Loewenstein and Penn, 1967) . The pattern of distribution of ruthenium red around cells is of interest in light of the hypothesis that low Ca++ concentration is responsible for low-resistance membranes at cell junctions . Ruthenium red has a molecular weight of 858 .5 and a charge of +6 (Luft, 1965) . 1 It is therefore reasonable to assume that Ca++ can diffuse into any intercellular space in which ruthenium red is found, although it is not necessarily true that an absence of ruthenium red means an absence of Ca++. Since the theory proposed by Loewenstein's group calls for intercellular junctions which exclude Ca++ from the external surfaces of the cell membrane at these junctions, any such junctions would be expected to exclude ruthenium red also . In the results of this study, the only possible areas which fit this requirement are the small (300 A in diameter) areas seen within tangentially cut CAPs, and perhaps small areas within the close junctions. Although it is not necessarily true that areas which are ruthenium red-negative also exclude Ca++ this is a possibility . If this is true, then these small areas may be sites of low membrane-resistance and thus of electrical coupling between cells . Although each of these ruthenium red-negative areas is small, there are many within each CAP (approximately one-fourth the total area of the CAP is ruthenium red-negative), so that their total area on a particular cell might be quite large . There is no change in these ruthenium red-negative areas after injury . However, if the junction is sealed off from the inside of the cell membrane as proposed by and Loewenstein and Penn (1967), then one would not necessarily expect to see a change here . The evidence that Ca++ is involved in the mechanism of electrical coupling of cells is well established (Loewenstein, 1966(Loewenstein, , 1968Deleze, 1967 ;and de Mello et al ., 1969), but it is difficult to know just how its effects are mediated . Even though Ca++ is known to effect membrane permeability to ions (Weidmann, 1955 ;Caputo and Gimenez, 1967 ;and Curtis, 1963), it is also known to effect intercellular adhesion (de Mello, 1969 ;Muir, 1967 ;and Sedar and Forte, 1964) as well as take part in a variety of metabolic reactions . Thus Ca++ may not be directly affecting cell membranes as proposed by Loewenstein, but instead affecting intercellular communication in some indirect way . 
It is interesting to note that Jochim et al. (1935) found that they could maintain injury potentials in heart for long periods of time (the injured cells did not uncouple) by injuring the cells with a slight pressure on the recording electrode itself. Similar effects have been obtained with suction electrodes (Churney and Ohshima, 1964; Sjöstrand, 1966). These long-lasting injury potentials can be explained by assuming that the injury is not so great that the metabolic activity of the cell is destroyed, and that the injured region is confined to the inside of the electrode. If such conditions were met, then no Ca++ could enter the cell to cause uncoupling.

Conclusions Any changes in intercellular relationships which occur when cells uncouple are beyond the resolution of the techniques used in this study. However, the results are incompatible with the idea that close membrane appositions per se will ensure electrical coupling between cells.
2014-10-01T00:00:00.000Z
1970-09-01T00:00:00.000
{ "year": 1970, "sha1": "5aa323f5f91e0326b1ed43622e4a66f08dc96a93", "oa_license": "CCBYNCSA", "oa_url": "https://rupress.org/jcb/article-pdf/46/3/455/1385344/455.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "470b6b4f80a1093094bba9f72227cdca7b01d3c7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
674549
pes2o/s2orc
v3-fos-license
The evolution of monogamy in response to partner scarcity The evolution of monogamy and paternal care in humans is often argued to have resulted from the needs of our expensive offspring. Recent research challenges this claim, however, contending that promiscuous male competitors and the risk of cuckoldry limit the scope for the evolution of male investment. So how did monogamy first evolve? Links between mating strategies and partner availability may offer resolution. While studies of sex roles commonly assume that optimal mating rates for males are higher, fitness payoffs to monogamy and the maintenance of a single partner can be greater when partners are rare. Thus, partner availability is increasingly recognized as a key variable structuring mating behavior. To apply these recent insights to human evolution, we model three male strategies – multiple mating, mate guarding and paternal care – in response to partner availability. Under assumed ancestral human conditions, we find that male mate guarding, rather than paternal care, drives the evolution of monogamy, as it secures a partner and ensures paternity certainty in the face of more promiscuous competitors. Accordingly, we argue that while paternal investment may be common across human societies, current patterns should not be confused with the reason pairing first evolved. while paternal investment may be common across human societies, current patterns should not be confused with the reason that pairing evolved in the first place. Consequently, the question becomes, why did men originally pairbond? Why do men pairbond? Two common alternative arguments for the evolution of monogamy exist in the literature. The first argument suggests that monogamy evolved due to selective pressure favoring males that protected their offspring from attacks by infanticidal competitors 22,23 . However, recent phylogenetic analyses cast doubt on this claim and find that across animal taxa the evolution of monogamy is unassociated with the risk of infanticide 21 . The second argument focuses on patterns of female distribution 24 . Solitary females spread across a landscape, due to resource dispersal and/or female intolerance, limit multiple mate monopolization opportunities for males and favor monogamy as a consequence 21,25,26 . While this argument may hold for many mammals, this explanation is incongruous for many group-living species, including humans and other primates 27 . A growing body of theoretical and empirical research highlights another pathway to monogamy, and a possible crucial step in the evolution of paternal care, through male mate guarding (defined as the close association between a male and female prior to and/or after copulation for paternity assurance 19 ). While research on the evolution of reproductive strategies in humans often reports that the optimal male mating strategy is the pursuit of multiple partners, these modeling approaches typically assign fitness payoffs based on the effort males devote to a particular strategy rather than on the availability of partners 5,6 . Consequently, there may be overlooked conditions under which, instead of mating multiply, it may be in the best interest of a male to achieve high paternity with a single female 28 . This trade-off is particularly acute in response to partner availability. When the mating pool is male-biased, males face difficulty in finding additional mates and a current partner becomes a valued resource, favoring mate guarding 8,9,29 . 
Accordingly, the adult sex ratio (number of sexually mature males to females in a population; ASR) becomes a key determinant to fitness payoffs of a particular male strategy 7 . Following recent theoretical and empirical findings, we seek to examine a largely overlooked and possible intermediate step in the evolution of humans from an ancestral multi-male/female mating system with promiscuous mating 1 to monogamy and paternal care: male mate guarding in response to partner scarcity. We model the response of three male strategies -multiple mate seeking, mate guarding, and paternal care -to fluctuating paternity certainty, benefits to paternal care, and partner availability. In doing so, we seek to offer insight and evaluate key claims for the emergence of monogamy and paternal care in humans. Model Specification Verbal description of the model. We investigate the selective advantage of three male strategies: Multiple Mating (MM), Mate Guarding (MG), and Paternal Care (PC). Male reproductive success depends on the ratio of available males to females at a particular time and the frequency of the three male strategies. In the model, all males stay in the mating pool during their lifetime except for MG males who leave the mating pool with an encountered female. The probability that a male mates with a female at a particular time is dependent on the ASR, upper-bounded by 1 (i.e., all males will mate when females are in excess), with some variation across the three male strategies. PC males provide a survival benefit (c) to offspring through provisioning. They pair with a female but do not mate guard, therefore there is a probability of cuckoldry (k). MM males will attempt to mate with multiple females, with reproductive benefits reflecting shared paternity across available females. If there are PC males in the population, MM males may gain additional reproductive benefits through cuckoldry. MG males, if they meet a female, will guard their partner to prevent cuckoldry, forgoing other mating opportunities 9,30 , and mate with the same female during their lifetime, which is determined by the probability of male survival (u). These dynamics are represented in schematic form in Fig. 1. In step 1, males encounter females randomly, irrespective of their strategy, where the probability of a male encountering a female is solely dependent on the ASR. After an encounter, PC and MG males form pairbonds with females. MM males share paternity across females not paired with MG males and may additionally mate with females paired with PC males, who are at risk to cuckoldry. This risk is determined by three key factors: 1) the willingness of females to engage in extra-pair mating, 2) the possibility that females have not yet become pregnant by a PC male, which is determined by a conception rate parameter in the model (b; set to 0.3 following data on monthly conception rates among women without fertility problems 31 ) and 3) the frequency of MM males (i.e., if they are rare the probability that they encounter a female paired with a PC male is low). Moving to step 2, MG males paired in step 1 have effectively removed their partners from the mating pool, thereby reducing the number of available females, and the process begins again. Over multiple steps the summed fitness benefit for each strategy reaches an asymptote, which informs the evolutionary dynamics of the three male strategies. 
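The logic of a single pass through these steps can be sketched as follows. This is an illustration only: the function and variable names are ours, payoff bookkeeping is omitted, and only the encounter probability y_t = min(F_t/M_t, 1) and the removal of newly guarded pairs are taken from the model description.

    # Sketch of the mating-pool dynamics described above (names and loop structure
    # are ours; no fitness accounting is attempted here).
    def run_pool(n_males, n_females, n_mg, steps=10):
        """Track how mate-guarding (MG) pairs drain the mating pool over time."""
        m, f, mg = float(n_males), float(n_females), float(n_mg)
        history = []
        for t in range(steps):
            y_t = min(f / m, 1.0) if m > 0 else 0.0   # P(a male meets a female), from the text
            newly_guarded = min(y_t * mg, f)          # MG males who met a female this step
            mg -= newly_guarded                       # they leave the pool...
            m -= newly_guarded
            f -= newly_guarded                        # ...taking their partners with them
            history.append((t, round(m, 1), round(f, 1)))
        return history

    # Male-biased pool (ASR = 1.2) with half the males mate guarding:
    for step, males_left, females_left in run_pool(120, 100, 60):
        print(step, males_left, females_left)

Run with a male-biased pool and half the males mate guarding, the available females are drawn down quickly over the first few steps, which is the mechanism that later makes a retained partner valuable.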
Results We begin by presenting the evolutionary outcomes due to selection of the three male strategies in response to a mammalian typical female-biased ASR and adult survival probability (u = 0.9; see ref. 32), as well as human conception rates (b = 0.3; Fig. 2). With weak and moderate benefits to care, a 1% and 50% increase in offspring survival respectively, we find that MM males are strongly favored regardless of paternity certainty ( Fig. 2a,b,d,e). However, when we maximize benefits to care, the level of cuckoldry plays an important role (Fig. 2c,f). Doubling the benefits to care (c = 1) and ensuring paternity certainty yields a mixed strategy equilibrium of PC and MM males (Fig. 2c). Low paternity certainty, however, returns our previous findings favoring MM males (Fig. 2f). In sum, under most conditions, males that pairbond have depressed genetic fitness outcomes relative to MM males due to their restricted sexual strategy in the face of partner abundance. However, if we assume female faithfulness to a heavily investing male 6 , PC males are expected to be as abundant as MM males (Fig. 2c). Next, we move to a male-biased ASR to examine the relative payoffs of the three strategies in response to partner scarcity (Fig. 3). Here, with weak and moderate benefits to care, and regardless of cuckoldry level, we find that MG males are most successful (Fig. 3a,b,d,e). However, as shown above, we again see variation emerge in the optimal male strategy when we double the benefits to care. Instead of a mixed-strategy, as in the female-biased condition, we find that both MG and PC males represent evolutionary stable strategies when paternity certainty is high (Fig. 3c). However, the domain of attraction for PC males decreases with lower paternity certainty, though a mixed strategy between MM and PC males will remain unless a certain threshold of MG males are present ( Fig. 3f; see SI Fig. 2 for the role of conception rates that are lower and higher than 0.3 on male strategies). In sum, with low and moderate benefits to care, MG males perform best. However, with maximal benefits to care and no cuckoldry, the PC and MG strategies become competing equilibria 33 . Lastly, we evaluate male fitness outcomes across a range of ASR values (Fig. 4). In particular, we seek to present the relative payoffs for the three strategies when the ASR is at or near parity (1 male for every 1 female) and MM males are initially common (following the typically assumed ancestral condition), while adjusting the benefits to care (c) and rate of cuckoldry (k). When returns to caring are low and cuckoldry common (Fig. 4a), we find that MM is the favored strategy at female-biased ASRs. However, moving right along the x-axis, as the ASR approaches parity, and continues to becomes increasingly male-biased, we see a mixed ESS of MM and MG males, with the equilibrium frequency of MG males increasing until it becomes the (pure) ESS at an ASR of ~1.1 (i.e., 110 males for every 100 females). The story is very similar if we keep payoffs to care low, but remove the risk of cuckoldry Scientific RepoRts | 6:32472 | DOI: 10.1038/srep32472 (Fig. 4c). While we find some increase in the frequency of PC males when MG males become more common, MG males become the favored strategy more quickly under these conditions. However, when we maximize payoffs to care, we find a mixed equilibrium between PC and MM males across a wider range of ASR values (Fig. 4b). Nonetheless, in male-biased populations (~1.3+ ), MG males again become most common. 
This scope for MG males disappears however when we remove cuckoldry risk (Fig. 4d). In sum, we show that under conservative conditions (low benefits to care and high rates of cuckoldry), MG males perform best across male-biased sex ratios. However, under opposite conditions (high benefits to care and low rates of cuckoldry) PC male perform best under a wide range of ASR values (please visit the link https://abell.shinyapps.io/SexRatioSimulation/ to construct your own version of the model to see how varying parameters beyond what we present here affect the relative frequencies of the three strategies and see the Supplementary Information for the R code used to build our models). Discussion Here we are interested in exploring possible pathways for the evolution of monogamy and paternal care in our lineage. To make this question tractable, we allow males to engage in one of three strategies. This is a limitation of our modeling approach, however, with a polymorphic model specification, we can interpret the evolutionary dynamics more clearly in terms of the characteristics of each strategy. Accordingly, we show that when partners are abundant, multiple mating, and not pairbonded, males generally see the greatest fitness returns to their strategy. On the other hand, when males are abundant and partners are rare, males that pairbond generally do best. While we do find scope for the evolution of care in our models, the parameter estimates require paternal care to increase offspring survival considerably in the presence of limited cuckoldry (Figs 2c and 3c). Therefore we believe our findings support previous work that challenges straightforward arguments of a promiscuity to paternal care transition in human evolution. Instead, we offer mate guarding as a possible pathway to elevate paternity certainty and allow for monogamy to evolve in humans. Once pairbonding becomes established, this then allows selection to operate on variation in the amount of care provided by males. Below we discuss the applications of our findings to: (i) current criticisms of the classical model of sexual selection, (ii) recent research on frequency dependent reproductive decision making, (iii) the evolution of human life history and its sex ratio consequences, and (iv) interpretations for the evolution of paternal care in humans. Classical sexual selection theory predicts that the relative parental investment of the sexes leads to sex differences in optimal mating rates 12,16,34 . It is argued that because males invest less in any one reproductive event, as a consequence of anisogamy, they benefit more from mating multiply than do females. Moreover, a persistent theme in the literature is the claim that a shortage of females results in elevated mating effort among males 34 . However, our results do not support claims that male benefits to mating multiply are always high and highlight the importance of frequency dependent dynamics patterning reproductive behavior. We show that it is often in the best interest of a male to forgo pursuing multiple mating opportunities and instead achieve high paternity certainty with a single partner. These results are in line with a growing body of theoretical and empirical work in the biological and social sciences showing reproductive decisions in response to partner availability that counter conventional assumptions 7,36 . 
For example, among humans, an abundance of men is associated with higher rates of relationship commitment 35 , monogamy [38][39] , lower reproductive skew among males 40 , and less promiscuity in both sexes 40,42 . These findings are consistent with a recent analysis of 187 bird species 41 and other studies across diverse animal taxa showing that male-biased sex ratios are consistently associated with higher rates of pairbonding [44][45][46][47][48] . Thus, a growing body of literature highlights that partner rarity intensifies male commitment to pairbonded strategies, rather than multiple mate-seeking, across populations of both human and nonhuman animals. When modeling the evolution of monogamy, we begin by assuming a typical mammalian female-biased ASR 49 . In line with patterns of mammalian mating, we find that across most conditions the optimal strategy for males is to pursue multiple mates. Males that attempt to pairbond do not take advantage of the relative female abundance and as a consequence cannot compete with MM males. However, when we examine male strategies in response to a male-biased ASR, monogamous males perform best. Our findings support empirical research, primarily among birds, showing that where male-biased sex ratios dominate, monogamy is most common 43,50 . The relevance of these typically bird-like male-biased sex ratios to human evolution is currently an open question. However, while our closest relatives have typical mammalian female-biased sex ratios 51 , human sex ratios are considerably more male-biased 8,36,52 and bird-like 49,53 . Thus the question becomes, did our ancestors experience a change in the direction of the sex ratio bias, resulting in a shortage of women? This is a possibility. Because of menopause and our exceptionally long lives, the sex ratio of reproductive-aged individuals in humans is generally male-biased 52 . Therefore, as a consequence of the evolution of increasing longevity, coupled with reproductive cessation in women 54 , ancestral males likely faced an increasingly male-biased sex ratio 8 , altering the selective arena for payoffs to mating strategies. Our findings also speak to the current debate regarding the relative emergence of monogamy and cooperative breeding in human evolution 54 . The monogamy hypothesis suggests that pairbonding and male care preceded the emergence of cooperative breeding in our lineage 55 . Thus, much of our unique life history can be attributed to male investment. However, cooperative breeding proponents argue that the assistance of others (not the father) increased the fertility of mothers, decreased the mortality of offspring, and allowed for the suite of human life-history traits to evolve 3,56,57 . While we do not engage this debate directly, we find monogamy in response to partner scarcity. If female scarcity in humans arose primarily due to the extension of the lifespan, then cooperative breeding likely preceded monogamy in humans. Lastly, for paternal care to emerge in our models, the benefits to care need to be high (e.g., a doubling of offspring survivorship). There are a couple of reasons to doubt how biologically appropriate this parameter estimate is. First, until paternal care has been under selection, it is likely to be inefficient, quite variable in returns, unreliable and possibly of little benefit 58 . Therefore, requiring such high payoffs for care to emerge raises concerns about its relevance to the evolution of monogamy. 
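A toy calculation shows how menopause plus long adult lifespans can tilt the reproductive-aged sex ratio toward males. All numbers here are invented for illustration (they are not data from the paper or from any census): with a flat adult age distribution, counting women as reproductive only up to menopause while men remain in the pool longer mechanically yields a male-biased ASR.

    # Hypothetical illustration: equal numbers of men and women at every age,
    # women counted as reproductive from 15 to 49 (menopause), men to 59.
    # The age cutoffs are assumptions for the example, not values from the paper.
    def reproductive_asr(male_ages=(15, 59), female_ages=(15, 49), people_per_year=100):
        men = (male_ages[1] - male_ages[0] + 1) * people_per_year
        women = (female_ages[1] - female_ages[0] + 1) * people_per_year
        return men / women

    print(f"ASR among reproductive-aged adults ~ {reproductive_asr():.2f}")  # ~1.29

Even this crude accounting gives an ASR near 1.3, in the male-biased range where, in the model results above, mate guarding outperforms multiple mating.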
Second, in a recent cross-cultural review looking to paternal effects on child survival, only a third of studies found any beneficial outcome to father presence 59 . This is obviously quite surprising given how human pairbonds are typically described as a cooperative system of joint production to meet household needs 2,17 . With this example we do not seek to challenge whether or not males invest (e.g., paternal care may have benefits other than increasing offspring survival, such as elevating quality 60 ) but use it to simply highlight the requirements for paternal care to evolve in our models. Accordingly, in humans, we contend that the transition from males mating multiply to providing paternal care possibly passed through an intermediate step of male mate guarding in response to partner rarity. This interpretation is consistent with recent phylogenetic analyses of primate social organization, indicating that bonded relationships (i.e., pair-living) derived from an earlier state of multi-male/multi-female groups 61,62 . Pairbond formation through mate guarding provides a mechanism to ensure paternity certainty and a possible avenue to open up paternal care to selection. Once pairbond duration lengthens, the reproductive interests of males and females may become aligned. As a derived trait, monogamy may be stabilized through payoffs to infanticide protection 23 as well as by increasing the interdependence of pairbonded individuals and evolved social mechanisms to maintain the sexual division of labor and the specialization of care tasks 63 . Upon considering the competing equilibria of mate-guarding and parental-care (Fig. 3c), one intriguing possibility is that once PC populations with elevated female fertility and greater degrees of social cooperation emerge, these strategies may become favored and increase in frequency through multilevel selection 32 . In conclusion, as members of the hominin line began living longer, a transition from a female-to male-bias in the sex ratio was a likely outcome. In response to male abundance, following our model results, mate guarding by males was favored. As males consistently pairbonded and paternity certainty was assured, it became possible for selection to operate on variation across males in the amount of paternal care offered. Methods Mathematical description of the model. As mentioned above, here we investigate the evolution of three male strategies: PC, MG, and MM, with respective frequencies p, q, and 1 − p − q. The mating success of each male will depend on the ratio of males and females in the mating pool at time t, (M t /F t ), and the frequency of the three strategies in a large population. Our approach diverges from previous work on this topic, particularly that mating opportunities are dependent on partner availability. The reproductive payoffs are structured such that each male enters the mating pool multiple times during its lifetime (Fig. 1), with future benefits discounted by the probability of survival. The sum of these benefits is the fitness of each strategy. Paternal Care strategy. For each time period, PC males will pair with a female with probability y t = min [F t /M t , 1] and give an added survival benefit to offspring, c. Since PC males do not guard there is a possibility of cuckoldry by MM males. If u is the probability of male survival from one period to the next, the fitness of the PC strategy becomes, where h t is the probability of the PC male providing care to offspring not his own. 
The expression simplifies to, To find h t , the probability of cuckoldry, we account for the relative frequency of PC and MM males (p' and (1−p'), respectively), the probability of conception per mating bout (b), the willingness of females to engage in extra-pair mating (k), and the probability of a female encountering another PC or MM male (a t ). In the Supporting Information we show the resulting expression for h t . Multiple Mating strategy. MM males attempt to mate with multiple females in each time period, such that the fitness benefit z t is gained through shared paternity across the available females that are not being guarded or paired to a PC male. Further, if there are PC males in the population, then cuckoldry provides added benefits to the MM strategy, with g t being the expected number of cuckoldry events per MM male. The fitness of the MM strategy follows accordingly. Since the number of females paired with a PC male is y t p t M t , this quantity sets the maximum possible paternity that an MM male may gain through cuckoldry, and adjusting for the probability that a PC male becomes a cuckold yields g t . Since the number of females paired to PC and MG males is y t p t M t and y t q t M t , the number of females for which paternity is shared among MM males is F t − y t M t (p t + q t ), from which z t follows. Mate Guarding strategy. Following previous theoretical work on the topic 9 and empirical results linking paternity certainty to time spent guarding 30 , fitness benefits to MG males depend solely on the probability of finding one female, y t , after which the male guards the female successfully through his lifetime. When simulating the fitness of all strategies as they accumulate through time (Fig. 1), they approach an asymptote well before t = 100. Therefore, in all simulations below, we calculate fitness up to this time point. Adult Sex Ratio dynamics. We assume a discrete time process t = [0, 1, …] in which the fraction of available males to females may change throughout a male's lifetime. It changes if there are MG males in the population; otherwise it remains as initially determined in each time period. Thus, without a significant number of Mate Guarding males (q ≈ 0), M t = M 0 and F t = F 0 , and the fitness of the strategies specified above simplifies greatly. If there are MG males in the population, then M t and F t change through time as females encountering MG males leave the mating pool. Once all MG males have left the mating pool, the sex ratio of available females to males is constant. Importantly, the effects MG males have on the operational sex ratio (i.e., those available to mate; OSR) vary both with the ASR and with their frequency. Under a male-biased ASR, when mate-guarding males are common, they reduce the number of available females to near 0. However, the effect MG males have on the OSR is very different at female-biased ASRs: these males, by removing themselves from the population, increase the relative numbers of females available to, for example, MM males (see SI). If there are F t females and q t M t MG males in the population and M t > F t , then the probability of a female becoming newly guarded by a male is q t , so the average number of females that become newly guarded is q t F t .
With M t and F t being the number of males and females in the mating pool at time t, and q t the frequency of non-paired MG males, then In Figs 2 and 3 we investigate contrasting initial sex ratios that are female-biased (M 0 = 100, F 0 = 150) and male-biased (M 0 = 150, F 0 = 100). In Fig. 4 we vary the ASR along a continuous scale.
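The bookkeeping described in this section can be prototyped in a few lines. The sketch below is in Python rather than the R used for the published model and Shiny app, and it only reproduces what is stated explicitly in the text: the per-period pairing probability y_t = min[F_t/M_t, 1], discounting of future payoffs by the survival probability u, and the removal of newly guarded females (and the males guarding them) from the mating pool. The per-period payoff used here is a placeholder, not the paper's strategy equations; the fixed guarding fraction q and the update rule for the female-biased case are our simplifying assumptions.

```python
import numpy as np

def pool_dynamics(M0, F0, q, T=100):
    """Track the mating pool when a fraction q of males mate-guard (MG).
    Each period, on average q*F_t females become newly guarded (the M_t > F_t case
    described in the text); the guarded females and their guards leave the pool.
    The female-biased branch and the fixed q are simplifying assumptions."""
    M, F = float(M0), float(F0)
    M_t, F_t = [], []
    for _ in range(T):
        M_t.append(M)
        F_t.append(F)
        newly_guarded = q * F if M > F else q * M
        M = max(M - newly_guarded, 0.0)
        F = max(F - newly_guarded, 0.0)
    return np.array(M_t), np.array(F_t)

def discounted_fitness(per_period_payoff, u=0.95, T=100):
    """Fitness = sum over periods of the payoff discounted by survival u**t."""
    return sum((u ** t) * per_period_payoff(t) for t in range(T))

# Example: pairing probability y_t = min[F_t/M_t, 1] under a male-biased start.
M_t, F_t = pool_dynamics(M0=150, F0=100, q=0.3)
y_t = np.minimum(F_t / np.maximum(M_t, 1e-9), 1.0)
print(discounted_fitness(lambda t: y_t[t]))   # placeholder payoff: pairing probability only
```

Swapping in the paper's payoff expressions for each strategy (PC, MG, MM) in place of the placeholder lambda would reproduce the fitness curves summarised in Figs 2–4.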
2018-04-03T03:06:48.160Z
2016-09-07T00:00:00.000
{ "year": 2016, "sha1": "43b973c139eb90fc8b2981519634a579760f98c9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep32472.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "43b973c139eb90fc8b2981519634a579760f98c9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258444030
pes2o/s2orc
v3-fos-license
Self-Reported Systemic Diseases and Periodontal Status: A Cross-Sectional Study from Turkey Background: Individuals applying for periodontal treatment often have systemic diseases that can adversely affect the periodontal disease course and treatment response. Objectives: To determine the prevalence and types of systemic disease in patients referred for periodontal treatment living in Turkey, and to investigate the association between systemic disease and periodontal status, according to the new classification of periodontal disease (2017). Methods: A total of 800 randomly selected dental files were evaluated among patients who had attended the periodontology department of a university hospital between January 2021 and January 2022. Demographic data (age and gender), self-reported medical history, smoking habits, daily tooth-brushing frequency, periodontal status, and the number of missing teeth were recorded. Full-mouth periodontal examinations were undertaken, and the patients were classified according to the American Association of Periodontology/European Federation of Periodontology 2017 case definitions. Results: The prevalence of systemic disease was 48% among the study participants. Hypertension (HT), diabetes, and cardiovascular diseases (CVD) were the most common systemic diseases identified. Periodontitis was present in 32% of the study population. When periodontitis patients were classified according to their stages; 42% had severe (stageIII\IV) periodontitis, 35% moderate (stage II), and 23% mild (stage I). The prevalence of systemic disease increased according to the severity of periodontal disease (p = 0.000). A significant correlation was also present between the presence of systemic disease and missing teeth (r = 0.120, p = 0.001). On logistic regression analysis, self-reported diabetes (OR = 3.12, 95% CI: 1.90-5.12), HT (OR = 2.49, 95% CI: 1.68-3.68) and CVD (OR = 1.73, 95% CI: 1.01-2.96), age (OR = 1.05, 95% CI: 1.03-1.06), low tooth brushing habits (OR = 2.63, 95% CI: 1.81-3.83) and tobacco use (OR = 1.92, 95% CI: 1.33-2.78) were identified as significant predictors of periodontitis. Conclusions: The present study results suggest an association between self-reported systemic diseases, as well as tobacco use, and periodontal disease severity. Introduction Periodontal disease is a common public health problem, affecting approximately half of the adult population worldwide [1]. The disease often begins as gingivitis, a reversible inflammatory process involving only the gingiva and, if left untreated, can progress to irreversible periodontitis. Periodontitis is characterized by chronic destruction of the tooth-supporting structures, as a result of a complex interplay between the host's immune system and polymicrobial biofilm [1]. Numerous clinical and epidemiological studies have focused on identifying specific risk factors and indicators that make an individual more susceptible to periodontal disease. Current evidence suggests that the presence of certain systemic diseases [2][3][4][5], smoking [6], stress [7], and aging [8] can influence periodontal disease progression and severity. Studies have reported an association between periodontal disease and a wide range of systemic diseases including hypertension [9] (HT), rheumatoid arthritis [4], cardiovascular disorders [5] (CVD), diabetes mellitus [2], and respiratory infections [4]. 
The possible biologic mechanisms mediating the link between periodontal and systemic diseases include systemic level ( = 0.95), the number of predictors (n = 16), and the probability level ( = 0.05) indicated that the minimum sample size needed in the present study was 204 participants. To increase the generalizability and minimize the error, a larger sample size of 800 (400 matched pairs for gender) was taken. Data extraction Patient's information regarding age, gender, selfreported medical history, smoking habits (present or absent) and the number of cigarettes smoked per day, daily tooth-brushing frequency (as brushing their teeth ≤ once/day and ≥ twice/day), periodontal status and the number of missing teeth were collected. The systemic diseases were classified into 15 categories as follows; cardiovascular diseases (CVD) (congenital heart defects, mitral valve prolapse, congenital and valvular defects, angina, atherosclerotic disease, and bypass surgery), diabetes. hyper tension (HT), thyroid disease, gastrointestinal disease, musculoskeletal disease, infectious disease (hepatitis B and acquired immune deficiency syndrome), liver disease, kidney disease, respiratory disease, anemia, neoplasm, osteoporosis, psychological disease and allergy. Case definition of periodontal disease Diagnostic criteria and clinical case definitions for periodontal disease (gingivitis and periodontitis) were based on the Classification of Periodontal and Peri-Implant Diseases and Conditions of the 2017 World Workshop [12]. In brief, the following periodontal parameters were evaluated by two trained investigators (ÖD and ZVS) around six sites (mesiobuccal, mesiolingual, distobuccal, buccal, lingual, and distolingual) in all teeth present using William's periodontal probe; probing pocket depth (PPD), plaque index [13], bleeding on probing (BOP) [14], clinical attachment loss (CAL), and radiographic bone loss (RBL). Patients were diagnosed with gingivitis if they had PPD of ≤ 3 mm and BOP at ≥ 10% [12]. Further, all gingivitis patients were subcategorized as gingivitis on an intact and reduced periodontium. Periodontitis patients were diagnosed if they have more than two detectable interproximal CAL, and sub-categorized as having stage; mild (stage I) periodontitis (CAL = 1-2 mm with RBL affecting < 15% of root length) moderate (stage II) periodontitis (CAL = 3-4 mm with RBL affecting 15-33% of root length), and severe periodontitis (stages III and IV) (CAL ≥ 5 mm with RBL extending to the middle or apical third of the root) [10]. Statistical analysis Data were analyzed using using a statistical software (IBM SPSS Statistics, Version 23.0, Armonk, NY, USA). Descriptive statistics were calculated including percentages and numbers for categorical variables and means and standard deviations for continuous variables. The normality of data was evaluated by inflammation, microbial dysbiosis, and altered immune response [4,9]. In addition, periodontal diseases can also affect the pathogenesis and course of various systemic diseases by triggering a series of chronic inflammatory events [3]. The presence of severe periodontal disease increases the risk of multiple tooth loss and edentulism, which can negatively affect people's quality of life from functional and psychological perspectives [1]. Moreover, the economic burden resulting from the management of both periodontal and systemic diseases underlines the need for implementing strategies to prevent their initiation and progression. 
In this context, the new periodontal classification system provides a multidimensional framework for risk assessment and personalized treatment protocols, especially for periodontitis patients [10]. Currently, there is limited information regarding the systemic profile in patients with periodontal disease with respect to this new periodontal case definition. Thus, this study aimed to determine the prevalence and types of systemic diseases in patients referred for periodontal treatment, and to investigate the association between systemic disease and periodontal status, according to the new classification of periodontal disease. Study design The present retrospective study was conducted in full accordance with the Declaration of Helsinki of 1975 (as revised in 2013) and approved by the Ethics Committee of the Medical Faculty of Akdeniz University (KAEK-344\11.05.2022). This cross-sectional gender-matched study included records of patients diagnosed with periodontal disease at the department of Periodontics, Faculty of Dentistry, Akdeniz University, from 1 st January 2021 to 1 st January 2022. As a standard protocol of our clinic, all patients who attended for periodontal treatment undergo a detailed interview about their information on current medical and dental histories, followed by a routine periodontal examination, and these data are recorded on patients' standardized charts. Study inclusion criteria were: 1) Adult patients (>18 years old) and 2) Charts with complete information about study variables. Approximately 900 charts were evaluated for inclusion criteria, and when a randomly selected chart achieved the full inclusion criteria, that chart's data were recorded in a computer database file. Thirty-six patients were excluded due to age limitation, forty-two incomplete medical history, and twenty-two missing data. Sample size estimation The study sample size was determined by power calculation [11] considering multiple regression; the anticipated effect size (f2 = 0.15), the statistical power dependent variable, and the parameters that reached statistical significance with the univariate analysis. Mild periodonitis was selected as a reference category for the comparisons with moderate and severe groups. For all the logistic regression models, odds ratios (OR) and 95% confidence intervals (CI) were calculated. P value was considered statistically significant if less than 0.05. Results The general characteristics of all patients, divided according to gender, are presented in Table 1. The mean age of the females and males was statistically similar Kolmogorov-Smirnov test. The differences of categorical variables in different subgroups were analysed using Chi-square test, and continuous variables with Mann Whitney U test or Kruskal-Wallis test. Spearman's rank correlation coefficient (rho) was used to assess correlations between variables. To determine the risk of occurrence of periodontitis (0 = non-periodontitis (gingivitis), 1 = periodontitis) with the presence of the different risk factors was evaluated by the binary logistic regression analysis. Multinominal logistic regression analysis was also performed to determine the severity of periodontitis (as mild, moderate and severe) as a disease, gastrointestinal diseases (predominantly ulcers), respiratory diseases (asthma), osteoporosis, anemia and psychological disease. On the other hand, HT, and infectious diseases were more prevalent in males. 
The median age of patients with systemic diseases was significantly higher compared to patients without the disease (47.60 ± 13.70 vs. 38.99 ± 14.03, p = 0.000). Moreover, the frequency of systemic disease increased with increasing age (r = 0.296, p = 0.000) ( Table 2). (p = 0.101). Of the 800 subjects, 40.9% were smokers with a high percentage of male smokers than females (p = 0.000). When concerning the self-reported systemic diseases; 48.1% of the study population reported having at least one systemic disease, with females more likely to be affected (p = 0.002). The most common systemic diseases reported were HT (28.6%), followed by diabetes (13.4%) and CVD (13%). Systemic diseases found to be significantly more prevalent in females included thyroid patients was statistically higher than the gingivitis group (p = 0.000) and the prevalence of periodontitis severity increased with age (p = 0.000). Of the gingivitis patients, 77.1% were gingivitis on the intact periodontium, and 22.9% on the reduced periodontium. When The results of the relationship between the frequency of systemic diseases and the severity of periodontal disease are presented in Table 3. The prevalence of gingivitis and periodontitis were 68% and 32% respectively. The mean age of the periodontitis Table 4). The results of multinominal logistic regression analysis of the severity of periodontitis and systemic diseases are also presented in Table 5. Diabetes, HT, and tobacco use were significantly associated with severe periodontitis, and only diabetes in moderate periodontitis patients were classified according to the severity of periodontitis (stage); 23% of the patients had mild (stage I) periodontitis, 35% had moderate (stage II), and 42% of patients had severe (stage III and IV), respectively. Periodontitis was more prevalent in males than in females (p = 0.017). The frequency of having at least one systemic disease was higher in the periodontitis patients than in the gingivitis patients (p = 0.000). A significant correlation was also found between the presence of systemic disease and missing teeth (r = 0.120, p = 0.001). The frequency of smoking was higher in the periodontitis group compared to the gingivitis group (p = 0.001). Similarly, the mean number of cigarettes smoked per day in the periodontitis group was significantly higher than in the gingivitis group (p = 0.002). Significant differences were observed between gingivitis and periodontitis patients with regard to toothbrushing habits (p = 0.000). There were also significant differences between the periodontal status and the mean number of missing teeth with the severe periodontitis group having the highest number (p = 0.003). According to binary logistic regression analysis; age (OR = 1.05, 95% CI: 1.03-1.06), low tooth brushing habit (OR = 2.63, 95% CI: 1.81-3.83) tobacco use (OR = 1.92, 95% CI: 1.33-2.78), the presence of diabetes (OR = 3.12, 95% CI: 1.90-5.12), the presence of HT (OR = 2.49, 95% prevalence of systemic disease increases with increasing age [15,20]. It has been reported that age-related organ deterioration is a biological process and this may lead to the development of many health disorders [21]. The present study has identified that 32% of the study population had periodontitis with a high prevalence of severe periodontitis (stage III and IV) and a male gender predominance (Table 3). According to the 2017 case definitions, the prevalence of mild periodontitis was 23%, moderate 35%, and severe 42%. 
Due to the novelty of the classification, studies using this case definition are limited. But this result is similar to an epidemiological study carried out on a population in the north of Portugal in 2022 [22]. Moreover, a crosssectional study published by Chatzopoulos, et al. [23] also observed gender-related discrepancies in 262 older subjects from Greece, with more periodontal treatment needs for males compared to females. Another important finding of this study was that the periodontal disease severity was found to increase with the presence of systemic disease (Table 3). Moreover, there was a significant positive correlation between the number of missing teeth and the presence of systemic disease (p = 0.001). In line with the present results, Madi, et al. [24] also reported that the risk of both alveolar bone and tooth loss was statistically higher in periodontitis patients with systemic diseases including HT and diabetes. On the other hand, Sperr, et al. [25] reported that periodontitis severity was not significantly associated with systemic diseases. The main reason for the difference from that study may be related to the study design, and different periodontal disease classifications. Results of the present study showed that selfreported HT, diabetes, and CVD, tobacco use, age, and low tooth brushing habits were significantly associated with periodontitis (Table 4). Evidence from epidemiologic studies showed that the prevalence and severity of periodontal disease tend to increase with the age of patients, which has been attributed to the cumulative effect of time [8]. It is also well known that periodontal health depends on proper oral hygiene habits and a meta-analysis from observational studies reported that fair to poor oral hygiene habits increase the risk of periodontitis by two-to five-fold [26]. In the present study, self-reported diabetes was significantly associated with periodontitis (OR = 3.12, 95% CI: 1.90-5.12), and individuals who reported diabetes were 3.5 times more likely to have severe periodontitis than mild (Table 5). Chatzopoulos, et al. [27] also reported that self-reported diabetes was significantly associated with severe bone loss than mild in older adults (OR = 1.8, 95% CI: 1.2-2.7). Periodontitis is defined as the sixth complication of diabetes, and epidemiological studies have confirmed that patients with diabetes have a three to four-fold increased risk when compared to mild periodontitis. The logistic regression model was significant (p = 0.001); explained 32% (Nagelkerke R 2 ) of the variance with a sensitivity 87.7% and specificity 48.8%. Discussion The present study is one of the first that provides comprehensive data regarding the relationship between systemic diseases and periodontal conditions in Turkey, using the new classification system of periodontal disease. The most important finding of this study was that nearly half of the patients with periodontal disease had reported having one or more systemic diseases which can be attributed to their association with a public university hospital. This suggestion is supported by the findings of Georgiou, et al. [15], who investigated a group of 1000 Australian periodontal patients and found that patients attending the public system have an increased prevalence of systemic disease compared to those seeking treatment in private practice. 
The most frequently self-reported systemic diseases of the study population were hypertension, diabetes, and CVD, which also reflect the most prevalent chronic diseases in Turkey [16]. These results are in agreement with previous studies on this topic [17,18]. But, the findings differ from the data of a Western Australian tertiary institution study where CVD, allergy, and mental health disorders were the most common diseases determined [19]. The differences may be due to the different study designs and differences in race, and socioeconomic status of the study populations. In the present study, patients with systemic diseases had a significantly higher median age when compared to the group without systemic disease, and the frequency of systemic diseases increased significantly from 18.4 percent in the young age group to 48.3 percent in the old age group ( Table 2). The current results are in accordance with recent studies reporting that the OR: Odds Ratio classification system has also determined smoking as a modifier for periodontitis severity [10]. The current results may also explain the significant increase in the prevalence of periodontitis from 7.8% in young patients (18-35 years) to 38.3% and 53.9% in adults (36-49 years) and elderly (> 50 years), respectively. Therefore, smoking cessation strategies are urgently required, and periodontologists can play a critical role in providing advice on the benefits of smoking cessation on oral and general health. Some limitations are present in the present study. Firstly, the study was conducted on a single institution, therefore, the findings cannot be generalized to the general population in Turkey. Secondly, the evaluated parameters were taken retrospectively from patients' medical records but no laboratory or physical examinations were performed on patients. Thirdly, the cross-sectional study design makes it difficult to infer causal relationships for the outcome. Therefore long-term prospective studies are needed to confirm a causal relationship between systemic conditions and periodontal disease. On the other hand, the most current classification of periodontal disease was used, thus allowing comparability with future studies related to this topic worldwide. The findings of this study are also beneficial for public health professionals in terms of community-based preventive actions, especially for diabetics and smokers. Conclusion Within the limitations, the results of this study, that self-reported diabetes, hypertension, and tobacco use significantly increased the risk of developing severe periodontitis, may indicate an association between systemic disease and periodontal disease. However, large and longitudinal studies are needed to better understand this association. Nevertheless, identifying the systemic disease of patients through a detailed medical history along with any required medical consultation from the medical professional is essential before making any periodontal treatment plan. Data sharing statement The data that support the findings of this study are available from the corresponding author upon reasonable request. The data are not publicly available due to privacy or ethical restrictions. Conflict of interest The authors declare that they have no conflict of interests. of periodontitis [2]. Conversely, periodontal disease can also affect glycemic control and increase the risk for diabetic complications [3]. 
This bidirectional relationship between two diseases has been explained by the high levels of systemic markers of inflammation, the accumulation of advanced glycation end products, and the altered immune-inflammatory responses [2,3]. Accordingly, metabolic control of diabetes through glycosylated hemoglobin (HbA1c) levels has been added to the new classification of periodontal disease as a degree modifier to predict the risk of future progression of periodontitis [10]. Therefore, collaborations between periodontologists and medical professionals are essential in patients with diabetes. These patients should be informed about the risk of periodontal disease, and receive long-term preventive periodontal treatment to maintain optimal plaque control and prevent the risk of further periodontal disease progression. Existing evidence from case-control studies indicates an association between HT and periodontal disease [9,28] in consistence with the current study findings that HT was strongly associated with periodontitis (OR = 2.49, 95% CI: 1.68-3.68) ( Table 4). Clinical and experimental studies suggest that the association between periodontal disease and the cardiovascular system could be mediated through endothelial dysfunction, oxidative stress, increasing levels of systemic inflammation, and proinflammatory cytokines/or altered microbial composition of the dental biofilm [9,28,29]. Moreover, both CVD and periodontal disease share common risk factors that may also explain this link, such as tobacco use, stress, aging, and socioeconomic factors [9]. The results of this study also suggests that having HT was 2.5fold more likely to increase the risk of developing severe periodontitis (Table 5). In accordance with our results previous studies reported that hypertensive patients showed more severe periodontal conditions than healthy ones [28]. Moreover, tooth loss in periodontal disease was shown to be related to high blood pressure levels [9], and successful periodontal treatment has been reported to have a positive effect on decreasing blood pressure [29]. It is well documented that smoking is an important risk factor in the development of periodontitis [6,30]. In the present study, smokers were 1.92 times more likely to have periodontitis compared to non-smokers. The frequency of smoking was also higher in the periodontitis group (48%) compared to the gingivitis group (37.5%) (p = 0.001), and smokers were 2.5 times more likely to have severe periodontitis than mild. Evidence from numerous studies documented that smoking results in a chronic inflammatory process by promoting the secretion of radical oxygen species and proinflammatory mediators that play a role in the destruction of periodontal tissues, ultimately resulting in tooth loss [6]. Moreover, the new periodontal disease
2023-05-03T15:11:27.937Z
2023-06-30T00:00:00.000
{ "year": 2023, "sha1": "1cb79b8719eeb0db92abc06721517102a3246e31", "oa_license": "CCBY", "oa_url": "https://clinmedjournals.org/articles/ijodh/international-journal-of-oral-and-dental-health-ijodh-9-151.pdf?jid=ijodh", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b5d23d82d58e1af4a8a6b8befbb9b30742c016fc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
56269076
pes2o/s2orc
v3-fos-license
CO-REGISTRATION AIRBORNE LIDAR POINT CLOUD DATA AND SYNCHRONOUS DIGITAL IMAGE REGISTRATION BASED ON COMBINED ADJUSTMENT Aim at the problem of co-registration airborne laser point cloud data with the synchronous digital image, this paper proposed a registration method based on combined adjustment. By integrating tie point, point cloud data with elevation constraint pseudo observations, using the principle of least-squares adjustment to solve the corrections of exterior orientation elements of each image, high-precision registration results can be obtained. In order to ensure the reliability of the tie point, and the effectiveness of pseudo observations, this paper proposed a point cloud data constrain SIFT matching and optimizing method, can ensure that the tie points are located on flat terrain area. Experiments with the airborne laser point cloud data and its synchronous digital image, there are about 43 pixels error in image space using the original POS data. If only considering the bore-sight of POS system, there are still 1.3 pixels error in image space. The proposed method regards the corrections of the exterior orientation elements of each image as unknowns and the errors are reduced to 0.15 pixels. INTRODUCTION Nowadays, Light Detection and Ranging (LiDAR ) system can get high accuracy height information rapidly, however, LiDAR data is lack of semantic information compared to images which have higher precision plane accuracy and rich texture information (Baltsavias, 1999).So in order to effectively utilize these two kinds of data, part of airborne LiDAR system has been recently equipped with camera, which makes it capable to obtain point cloud data and high resolution aerial images simultaneously.Through the camera calibration parameters and POS (Position and Orientation System) system, the internal and exterior orientation elements at the exposure time of the images can be calculated and served as parameters for subsequent photogrammetric processing.However, due to installation error of camera or some other factors during shooting, it is hardly to measure the precise relative orientation between POS system and the camera, which leads to significant registration error between point cloud data and synchronized image and affect subsequent processing of orthorectification, and etc (Zhang et al., 2014).In order to make good use of these two kinds of complementary data sources, it is necessary to co-register airborne laser point cloud data with synchronous digital image.Existing methods can be roughly divided into two categories: one kind is the direct registration method based on feature matching (González-Aguilera et al., 2009;Renaudin et al., 2011); the other is by generating 3D point cloud from images, and then register two types of point cloud by Iterative Closest Points method (Zhao et al., 2005;Habib et al., 2006) .The former one depends on automatic feature matching, which is still a difficult problem; the latter one is less efficiency due to the time-consuming photogrammetry processing for producing 3D point cloud.Different from these two kinds of methods, Zhong et al. (2011) and Chen et al. 
(2012) proposed a bore-sight calibration method to determine the orientation angles of the synchronous camera and thus co-register the two kinds of data without ground control points. However, human-computer interaction is needed for the selection of corresponding points, and the precision is limited since only the angle error is taken into account. To address these problems, this paper proposes a method for co-registering airborne laser point cloud data with the synchronous digital image that considers not only the placement angle error of the POS system but also the GPS offset. Overview of the workflow The workflow of airborne LiDAR point cloud data and synchronous digital image co-registration based on combined adjustment is shown in Figure 1; it is composed of three steps. 1) Point cloud constrained SIFT matching (Lowe, 2014): firstly, the corresponding area on the image is obtained via the initial values of the exterior orientation parameters (EOPs), then the corresponding points in these areas are matched by the modified SIFT matching method, and thereafter the RANSAC algorithm is adopted to remove possible mismatches (Fischler and Bolles, 1981). 2) Correspondence optimization: the distribution of the corresponding points is optimized to satisfy the demands of the combined adjustment. 3) Combined adjustment: the error equation is formulated and the parameters are solved iteratively by the least-squares adjustment method; consequently, the LiDAR point cloud is co-registered with the synchronous digital image. Combined adjustment model The corresponding relationship between the LiDAR point cloud and the images can be described by the collinearity equations as equation (1): x = −f [a1(X − Xs) + b1(Y − Ys) + c1(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)], y = −f [a2(X − Xs) + b2(Y − Ys) + c2(Z − Zs)] / [a3(X − Xs) + b3(Y − Ys) + c3(Z − Zs)], where (x, y) are the image coordinates of a point, f is the focal length, (X, Y, Z) are the object coordinates of the corresponding point, (Xs, Ys, Zs) are the line elements of the EOPs, and the coefficients a_i, b_i, c_i (i = 1, 2, 3) are the elements of the rotation matrix R derived from the angle elements φ, ω, κ of the EOPs. Equation (1) can be expanded using a Taylor series and linearized to the error equation (2), whose unknowns are the corrections x = [dφ, dω, dκ, dXs, dYs, dZs, dX, dY, dZ]^T, where dφ, dω, dκ are the corrections of the angle elements, dXs, dYs, dZs the corrections of the line elements, and dX, dY, dZ the corrections of the object coordinates of the tie points. Two kinds of error equations, combined as equation (3), are included in the combined adjustment: the first is relevant to the tie points (the image observations and the 3D coordinates of the tie points), and the second is the error equation of the pseudo observations relevant to elevation, whose unknown is the elevation correction of the tie points. Least-squares adjustment is used to calculate the corrections iteratively after formulating equation (3). Elevation constraint The elevation of the tie-point object coordinates calculated by forward spatial intersection via the collinearity equation will be close to the DSM elevation interpolated from the LiDAR point cloud when the EOPs provided by the POS system contain no errors. However, errors generally exist in the EOPs, which causes a large difference between the forward-intersected elevation and the LiDAR-interpolated elevation. Therefore, the elevation difference is selected as the pseudo observation, and the combined adjustment eliminates the difference between the forward-intersected and interpolated elevations. Denoting by Zinter the elevation interpolated from the surrounding LiDAR points, the constraint reads Z0 − Zinter = 0 (4). Due to the errors in the EOPs of the digital images, equation (4) is modified to the error equation (5). Equation (5) is the pseudo-observation equation constrained by the point cloud elevation, and Z0 is the elevation value of the tie point calculated by forward spatial intersection using the initial EOPs. The accuracy of the interpolated elevation declines when the tie point is located at the edge of buildings or in vegetated areas. Because the forward-intersected object coordinates of the tie points and the LiDAR-interpolated elevations are selected as the pseudo observations, only the corresponding points located in flat areas are employed as tie points, to ensure that the pseudo observation correctly constrains the combined adjustment.
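To make the elevation constraint concrete, the sketch below projects an object point through the standard collinearity equations and forms the pseudo observation as the difference between the LiDAR-interpolated elevation and a forward-intersected elevation. The function names, the φ–ω–κ rotation convention and the inverse-distance interpolation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rotation_matrix(phi, omega, kappa):
    """Rotation matrix built from the three angle elements (one common convention)."""
    Rphi = np.array([[ np.cos(phi), 0, -np.sin(phi)],
                     [ 0,           1,  0          ],
                     [ np.sin(phi), 0,  np.cos(phi)]])
    Rom  = np.array([[1, 0,             0            ],
                     [0, np.cos(omega), -np.sin(omega)],
                     [0, np.sin(omega),  np.cos(omega)]])
    Rka  = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                     [np.sin(kappa),  np.cos(kappa), 0],
                     [0,              0,             1]])
    return Rphi @ Rom @ Rka

def project(P, eop, f):
    """Collinearity projection of object point P = (X, Y, Z) into image coordinates (x, y)."""
    Xs, Ys, Zs, phi, omega, kappa = eop
    a = rotation_matrix(phi, omega, kappa).T @ (np.asarray(P) - np.array([Xs, Ys, Zs]))
    return -f * a[0] / a[2], -f * a[1] / a[2]

def elevation_pseudo_obs(tie_xyz, lidar_xyz, radius=5.0):
    """Pseudo observation Zinter - Z0: LiDAR elevation interpolated around the tie point
    (inverse-distance weighting as a stand-in) minus the forward-intersected elevation."""
    d = np.linalg.norm(lidar_xyz[:, :2] - tie_xyz[:2], axis=1)
    nb = lidar_xyz[d < radius]
    if len(nb) == 0:
        return None                      # no constraint if there are no neighbours
    w = 1.0 / (np.linalg.norm(nb[:, :2] - tie_xyz[:2], axis=1) + 1e-6)
    z_inter = np.sum(w * nb[:, 2]) / np.sum(w)
    return z_inter - tie_xyz[2]
```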
Point cloud supported modified SIFT matching and optimization The extraction and optimization of tie points are the key problems of the proposed method. Collecting corresponding points manually is time-consuming, and the traditional SIFT matching method is also time-consuming while its distribution is affected by the image texture; it is therefore not appropriate for our purpose, and in this research SIFT matching is constrained by the point cloud. The steps are as follows: 1) LiDAR points are projected onto the images, and the overlapping area is obtained from the projected points to produce a coarse disparity map. 2) SIFT points are extracted and matched in the overlapping area constrained by the derived disparity map, and a double-way matching strategy is used to ensure the reliability of the corresponding points. 3) Possible mismatches are removed based on the relative orientation incorporated with the RANSAC approach. Although a large number of corresponding points can be obtained by SIFT extraction and matching, and the double-way matching removes most of the erroneous matches, a small number of errors may still remain. Therefore, the RANSAC algorithm is employed to calculate the fundamental matrix, and the remaining errors are removed by the geometric constraints of the stereo image pair (Wu et al., 2011). In this research, to ensure that the elevation constraint remains available during the iterations of the adjustment, only the corresponding points that have a certain number of LiDAR points within a circular region of radius R (set to 50 pixels in this paper), and for which the elevation standard deviation of these points is less than a given threshold (set to 0.1 m), are retained. Due to the uncertainty of the distribution of the systematic error, the distribution of the neighbourhood is also checked, and only corresponding points whose four quadrants each contain at least one LiDAR point are kept. Through the above steps, appropriate corresponding points are screened out; thereafter, a regular grid is set and one corresponding point in each grid cell is selected as a tie point for the combined adjustment. Weight determination In equation (3), the tie points can be considered as observations of equal weight, so P1 is set as a unit matrix. The second equation has a mandatory constraint function: it keeps the Z value of the forward-intersected 3-dimensional coordinates constrained around the true elevation value and, through a number of iterations, the forward-intersected elevation becomes very close to the true value. In this research, the weight P2 of the second equation is two orders of magnitude greater than the weight P1 of the first equation. Experimental data description For validation of the proposed method, the airborne laser point cloud data shown in Figure 3(a) and the corresponding stereo pair shown in Figure 3(b)(c) are used for the experiments. There are 16,475,402 points, with a density of about 10 points/m2, in the point cloud data; the camera parameters of the digital aerial images are rigorously calibrated, so the distortion error and the offset of the principal point have been corrected. The size of the images is 9000 pixels × 6732 pixels, and the focal length is 93.071 mm. For comparison, this paper also implements the registration method that only takes the rotation angle error into consideration. Results of tie point extraction For the stereo images shown in Figure 4(b)(c), 11413 and 11686 feature points are extracted from the left and right images, respectively, using the SIFT feature extraction algorithm; 1846 matches are obtained by the proposed method, and after the RANSAC approach only 1348 corresponding points are kept, as shown in Figure 5. After the distribution optimization of the corresponding points, the correct rate of the corresponding points is 100%, as shown in Figure 6, and it can be seen that all the corresponding points are distributed in flat areas.
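A rough prototype of this matching pipeline (SIFT extraction, double-way matching, RANSAC on the fundamental matrix, and the flat-area check on neighbouring LiDAR elevations) could look as follows with OpenCV. The helper names, the reprojection threshold and the minimum neighbour count are illustrative assumptions, the quadrant check is omitted, and the projected LiDAR point coordinates are assumed to be given; this is a sketch, not the authors' implementation.

```python
import cv2
import numpy as np

def match_sift(img_left, img_right):
    """SIFT extraction + double-way (cross-checked) matching + RANSAC on the
    fundamental matrix, returning matched pixel coordinates in both images."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_left, None)
    k2, d2 = sift.detectAndCompute(img_right, None)
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)        # double-way matching
    matches = bf.match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0, 0.99)
    if mask is None:
        return p1, p2
    keep = mask.ravel().astype(bool)
    return p1[keep], p2[keep]

def flat_area_filter(pts_img, lidar_img_xy, lidar_z, radius=50, min_pts=4, sigma=0.1):
    """Keep only tie points whose neighbouring projected LiDAR points (within `radius`
    pixels) are numerous enough and have an elevation standard deviation below `sigma` m."""
    keep = []
    for p in pts_img:
        d = np.linalg.norm(lidar_img_xy - p, axis=1)
        z = lidar_z[d < radius]
        keep.append(len(z) >= min_pts and np.std(z) < sigma)
    return np.array(keep)
```

The radius of 50 pixels and the 0.1 m threshold follow the values stated in the text; min_pts is a placeholder for the unspecified "certain number" of neighbouring LiDAR points.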
Registration results In order to verify the registration results, the point cloud data shown in Figure 4(b)(c)(d) are back-projected onto the corresponding images using the exterior orientation elements before and after co-registration. As shown in Figure 7, the back-projected image points are marked in red and green for before and after co-registration, respectively. There is a large misalignment before registration, while after co-registration the projected regions marked in green are aligned strictly in both the left and right images, which shows that the method is capable of registering the two kinds of data. From these figures it is obvious that the point cloud data and the images, which initially show an obvious matching deviation, become accurately registered after the co-registration based on combined adjustment; the projection of the point cloud data moves from the inaccurate area to the accurate area, and the accuracy is greatly improved. For quantitative validation, 17 identical point pairs between the two images are selected as check points by human-computer interaction under the guidance of the airborne laser point cloud. Then Δz (the difference between the elevation of the point generated by forward spatial intersection and the elevation interpolated from the airborne point cloud) and Δs (the projection difference obtained when the point generated by forward spatial intersection is projected back onto the left and right images) are calculated. After that, the maximum value, the minimum value and the root mean square error are computed. The results are shown in Table 1, where the proposed method is compared with the traditional bore-sight calibration method that only considers rotation angles. There is an error of about 43 pixels in image space when the initial POS data are used. If only the bore-sight of the POS system is considered, an error of about 1.3 pixels remains in image space. The proposed method regards the corrections of the exterior orientation elements of each image as unknowns, and the error is reduced to 0.15 pixels. The experimental results reveal that the proposed method can register the airborne point cloud and the synchronous digital image with higher accuracy. CONCLUSIONS Aiming at the problem of co-registering airborne laser point cloud data with the synchronous digital image, this paper proposed a registration method based on combined adjustment. So that the corresponding points provide a better constraint, and to ensure the reliability of SIFT matching, a point cloud supported modified matching algorithm is proposed. At the same time, by analysing the height distribution of the point cloud around the corresponding SIFT feature points, and keeping a corresponding point only if there are enough neighbouring points with a similar height, points located on vegetation or at the edges of buildings, which would cause large interference, can be filtered out. With the corresponding points optimized automatically, and based on the collinearity equation with the LiDAR point cloud data as a constraint, the combined adjustment method is capable of registering the airborne laser scanning data and the synchronized digital images. The experiment shows that the method achieves higher accuracy than the method that only considers the rotation angles. Due to the limitation of the experimental data, only the registration of the airborne laser scanning data with a single synchronized digital image pair is conducted in this paper. The model in this paper will be extended to multi-view images. Figure 1. Workflow of the proposed co-registration method.
Figure 2. Illustration of the elevation constraint.
Figure 3. Experiment data.
Figure 5. SIFT matching result.
Figure 6. Optimization result.
Figure 7. Back-projection results before and after co-registration.
Table 1. Comparison of back-projection of 3D points (pxs is an abbreviation of pixels).
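The statistics reported in Table 1 (maximum, minimum and RMSE of the check-point differences Δz and Δs) reduce to a few array operations once the differences have been computed. A minimal sketch, with made-up residual values standing in for the 17 check points:

```python
import numpy as np

def checkpoint_stats(dz, ds):
    """Max, min and RMSE of the check-point differences (dz in metres, ds in pixels)."""
    stats = {}
    for name, v in (("dz [m]", np.asarray(dz)), ("ds [px]", np.asarray(ds))):
        stats[name] = {"max": float(v.max()), "min": float(v.min()),
                       "rmse": float(np.sqrt(np.mean(v ** 2)))}
    return stats

# Example with hypothetical residuals for 17 check points
rng = np.random.default_rng(0)
print(checkpoint_stats(rng.normal(0, 0.05, 17), rng.normal(0, 0.15, 17)))
```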
2018-12-15T04:03:03.476Z
2016-06-03T00:00:00.000
{ "year": 2016, "sha1": "c4525966c747fc0bdb4b7b9cc03ee296409cc9c0", "oa_license": "CCBY", "oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLI-B1/259/2016/isprs-archives-XLI-B1-259-2016.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c4525966c747fc0bdb4b7b9cc03ee296409cc9c0", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Geography" ] }
216333564
pes2o/s2orc
v3-fos-license
Factors influencing in implementing rice farming conservation on slope land Farmers in Pamekasan Regency, especially in Pakong Subdistrict, use sloping land as agricultural land. These farmers have generally implemented conservation practices, but at different levels. This study aims to examine the factors that influence the application level of rice farming conservation on sloping land. The method used in this study is a questionnaire covering the level of conservation application and the factors that influence the adoption level of conservation farming. The conservation farming variables include making terraces, making infiltration channels, planting terrace reinforcement plants, making water drainage channels, and using natural mulch. The influencing factors include land area (X1), farmer's age (X2), formal education (X3), non-formal education (X4), land slope (X5), conservation farming knowledge (X6), knowledge of conservation methods (X7), and knowledge of the importance of conservation (X8). The results showed that 65.38% of the sampled farmers had applied conservation at a high level. Multiple linear regression analysis of the factors showed that X4, X5, X7 and X8 had a significant effect on the level of conservation farming implementation for rice plants on sloping land. Introduction New land clearing, by expanding the area of food crops for productive purposes and added economic value, causes changes of land function. This can decrease the capacity of the land so that, as a result, it can no longer support farming activity [1]. The Ministry of Agriculture [2] states that the use of a land must be in accordance with the capability of the land. Agricultural cultivation on mountainous land includes two main activities: farming and conservation. Farmers in Pamekasan District, especially in Waru Sub-District, use a lot of sloping land as agricultural land, and the most widely cultivated crop is rice. According to Sulistyono et al. [3], conservation farming can prevent excessive erosion on sloping agricultural land and maintain land fertility. The majority of farmers have applied conservation practices at different levels. In line with this, Suwarto and Anantanyu [4] explained that farmers carry out land conservation to different degrees; some farmers have made bench terraces, but without reinforcement plants. The difference in the level of application of innovative conservation farming technology is based on the characteristics of each farmer. Nuraeni et al. [5] stated that the effort to implement the principles of land resource conservation in the crop cultivation system depends on the perception and participation of farmers as the actors who determine the management of their farming. Based on this, it is necessary to examine the factors that influence farmers' decisions in implementing conservation farming, so that the results can be used as a source of information for improving the implementation of conservation on sloping lands.
Some countries that adopted CA are found in the southern cone of Latin America (Argentina, Brazil and Paraguay), North America, Australia, Eastern Europe, East Asia and Africa [7,8]. Location of the Research The research was carried out in Waru Subdistrict, Pamekasan Regency, considering that the area is mountainous land where many sloping plots are used for seasonal crops such as rice. Administratively, Pamekasan Regency is located at 6°51'-7°31' south latitude and 113°19'-113°58' east longitude. Waru Subdistrict is located in the northern part of Pamekasan Regency and has potential agricultural land because it lies in the highlands. Level of Application of Conservation Farming To determine the level of implementation of conservation, a scoring rubric is applied to each respondent, as presented in Table 1, covering: 1) terrace treatment, 2) making of infiltration channels, 3) planting terrace reinforcement plants, 4) making water drainage channels, and 5) the use of natural mulch. Analysis of the Factors Affecting the Level of Conservation Application To examine the factors that influence the level of application of conservation, multiple linear regression analysis was used. The function model of conservation farming adoption used in this study is Y = α0 + α1X1 + α2X2 + ... + α8X8, where α0 is the intercept (constant), α1-α8 are the adoption coefficients, Y is the adoption or application level of conservation farming, X1 is the land area (ha), X2 is the age of the farmer (years), X3 is formal education, X4 is non-formal education, X5 is the slope of the land, X6 is knowledge of conservation farming, X7 is knowledge of conservation techniques, and X8 is knowledge of the importance of conservation. Identification of the Application Level in Conservation Farming The level of implementation of conservation farming on sloping land is based on the treatments of making terraces, making infiltration channels, making water drainage channels, planting annual crops, planting terrace reinforcement plants and using natural mulch. The highest level of application of terrace making is 50-75% (Table 2) of the total area of land planted with rice. Fifty percent of farmers did not make water infiltration channels; according to these farmers, making drainage channels is enough to replace the infiltration channel, but other farmers argue that making infiltration channels is very important to reduce the water discharge on the land, especially in the rainy season, so as not to damage the rice plants. Meanwhile, for water drainage channels, the average farmer has applied them at a rate of 50-75% (Table 2). The level of conservation farming implementation is classified into two groups, namely low and high. Farmers who apply conservation in the high category make up 65.38% of the farmers (Table 3). According to the farmers, implementing conservation requires more money than not applying conservation, especially for labor wages; however, applying conservation makes work easier for farmers, especially during weeding. Conservation also minimizes water loss caused by the sloping land condition, which is why accompanying treatments are required. Factors Affecting the Level of Adoption of Conservation Farming The factors influencing the level of implementation of rice farming conservation were analyzed using multiple regression, with the level of conservation adoption applied by farmers as the dependent variable and eight variables included in the analysis model.
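A minimal sketch of the multiple linear regression described above, using hypothetical survey data and statsmodels (the original analysis presumably used a dedicated statistics package); the column names X1–X8 follow the variable definitions in the text, and the simulated values and sample size are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

n = 60                                   # hypothetical number of farmer respondents
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "X1": rng.uniform(0.1, 1.5, n),      # land area (ha)
    "X2": rng.integers(25, 70, n),       # farmer's age (years)
    "X3": rng.integers(0, 4, n),         # formal education level
    "X4": rng.integers(0, 3, n),         # non-formal education (group participation)
    "X5": rng.uniform(5, 40, n),         # land slope (%)
    "X6": rng.integers(1, 5, n),         # knowledge of conservation farming
    "X7": rng.integers(1, 5, n),         # knowledge of conservation methods
    "X8": rng.integers(1, 5, n),         # knowledge of the importance of conservation
})
# Hypothetical adoption score Y (e.g., the sum of the five conservation-practice scores)
df["Y"] = 2 + 0.05 * df["X5"] + 0.4 * df["X7"] + 0.3 * df["X8"] + rng.normal(0, 1, n)

X = sm.add_constant(df[[f"X{i}" for i in range(1, 9)]])
model = sm.OLS(df["Y"], X).fit()
print(model.summary())                   # t-statistics indicate which factors are significant
```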
From the regression results, it was found that the land area variable (X1) had no significant effect on the level of implementation of conservation farming, since the t-table value is greater than the t-count (Table 4); this means that farmers with narrow and wide land areas have the same opportunity to reach a given level of conservation farming implementation. Research conducted by Susanti [9] shows that land tenure variables do not affect farmers in the application of organic rice farming, and [10] shows that land area does not affect farmers' decisions on the level of adoption of rice varieties. The t-count for the farmer's age variable (X2) is -0.689, so it can be concluded that the age of the farmers does not have a statistically significant effect at the 10% significance level; this means that young and old farmers have the same potential in applying conservation farming levels. In line with this, Jailanis et al. [11] explained that age has no effect on farmers' technology adoption decisions, and Apriliana and Mustadjab [12] showed that age did not affect farmers' decisions to use hybrid corn seeds. For the education of farmers (X3), the t-count is smaller than the t-table, so it can be concluded that formal education had no significant effect at the 10% significance level (Table 4). Most of the farmers' education in this study is at the junior high school level. The level of education does not significantly influence the decision-making of farmers in adopting the use of single varieties; Amala et al. [13] and Burhansyah [14] showed that farmer education has no influence on the adoption of agricultural innovation, and Van Bac et al. [15] mentioned that formal education did not have a real influence on farmers' decision-making in adopting tea farming systems. The non-formal education coefficient (X4) has an influence on farmers' decision-making in applying conservation levels at the 15% significance level. Non-formal education in this case is the participation of farmers in farmer groups. The extension staff in this research did not only discuss the pest and disease problems that are often complained about by farmers, but also the treatment of land preparation for sloping land in order to obtain maximum production. Indraningsih [16] revealed that the role of extension workers and membership of farmer groups influence farmers in adopting farming technology innovations. The role of extension within the farmer group, in delivering innovations related to conservation farming on sloping land to the group members, is important; the active participation of the members of the farmer group becomes a driving force for farmers in adopting conservation farming. The land slope (X5) has a positive effect on the level of implementation of conservation farming for rice farming: the greater the slope of the land used for rice farming, the higher the level of application of conservation, in the treatments of terrace use, water disposal, drainage, terrace plants and annual crop planting. Conservation treatments serve to reduce the rate of water flow so as to minimize erosion and water loss. Darmadi et al. [17] suggested that land with a high slope requires land conservation technology in order to maintain land fertility and crop productivity. Farmers' knowledge of conservation methods (X7) and of the importance of land conservation (X8) affect the level of implementation of conservation farming.
Farmers who understand how to carry out conservation and why conservation matters on sloping land will increase their level of conservation practice in order to achieve maximum productivity. The farmers in this study understand the benefits of applying conservation, one of which is avoiding erosion, so sloping land must be treated with shading. Meanwhile, farmers' knowledge of conservation farming in general (X6) does not affect the level of application of conservation in rice farming.

Conclusion
The level of implementation of conservation farming falls into the high category for 65.38% of the total respondents, which means that the majority of farmers have understood the importance of conservation on sloping land. The factors that have a significant effect on the level of implementation of conservation farming for rice on sloping land are non-formal education (participation in farmer groups), land slope, knowledge of conservation techniques, and knowledge of the importance of conservation farming. Counseling is needed on the use of crop residues, whether as mulch or allocated as material for organic fertilizer. Extension workers also need to address problems in terrace construction, because some terraces are still not oriented against the flow of water along the contour and some are still tilted.
2020-04-16T09:02:20.675Z
2020-04-04T00:00:00.000
{ "year": 2020, "sha1": "c704c53b60309b58564f88593e1a3232dea778c7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/458/1/012038", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "add113ab09db69e363b05ef8fd007f49c297870f", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Geography", "Physics" ] }
119293051
pes2o/s2orc
v3-fos-license
Adiabatic Limit and the Fr\"olicher Spectral Sequence Motivated by our conjecture of an earlier work predicting the degeneration at the second page of the Fr\"olicher spectral sequence of any compact complex manifold supporting an SKT metric $\omega$ (i.e. such that $\partial\bar\partial\omega=0$), we prove degeneration at $E_2$ whenever the manifold admits a Hermitian metric whose torsion operator $\tau$ and its adjoint vanish on $\Delta''$-harmonic forms of positive degrees up to $\mbox{dim}_\C X$. Besides the pseudo-differential Laplacian inducing a Hodge theory for $E_2$ that we constructed in earlier work and Demailly's Bochner-Kodaira-Nakano formula for Hermitian metrics, a key ingredient is a general formula for the dimensions of the vector spaces featuring in the Fr\"olicher spectral sequence in terms of the asymptotics, as a positive constant $h$ decreases to zero, of the small eigenvalues of a rescaled Laplacian $\Delta_h$, introduced here in the present form, that we adapt to the context of a complex structure from the well-known construction of the adiabatic limit and from the analogous result for Riemannian foliations of \'Alvarez L\'opez and Kordyukov. Introduction Let X be a compact complex manifold of dimension n. It is well known that the existence of a Kähler metric ω on X implies the degeneration at E 1 of the Frölicher spectral sequence that relates the complex structure of X (encapsulated in the Dolbeault, i.e. the∂-, cohomology H p,,q (X, C), the start page of this spectral sequence) to the differential structure of X (encapsulated in the De Rham, i.e. the d-, cohomology H k (X, C), the limiting page of this spectral sequence). However, since Kähler metrics exist only rarely when n ≥ 3, it is natural to search for weaker metric conditions on X that ensure a (possibly weaker) degeneration property of the algebro-geometric object that is the Frölicher spectral sequence of X. The best we can hope for in the non-Kähler context is the degeneration at the second page. To this end, we proposed the following conjecture in [Pop16]: Conjecture 1.1. If a compact complex manifold X admits an SKT metric ω (i.e. a Hermitian metric ω such that ∂∂ω = 0), the Frölicher spectral sequence of X degenerates at E 2 . There is evidence that this ought to be true. The statement holds true on all the examples of compact complex manifolds that we are aware of, namely all the 3-dimensional nilmanifolds, the 3-dimensional solvmanifolds that are currently classified, the Calabi-Eckmann manifold S 3 × S 3 , etc. In [Pop16], we proved this statement under the extra assumption that the SKT metric ω which is supposed to exist has a small torsion in the sense that the upper bound of its torsion operator of type (0, 0) (defined in a precise way) does not exceed a third of the spectral gap of the elliptic, self-adjoint and non-negative, differential operator ∆ ′ + ∆ ′′ in every bidegree (p, q). As usual, ∆ ′ = ∆ ′ ω = ∂∂ ⋆ ω + ∂ ⋆ ω ∂ and ∆ ′′ = ∆ ′′ ω =∂∂ ⋆ ω +∂ ⋆ ω∂ are the ∂-, resp.∂-Laplacians on smooth differential forms on X. While Conjecture 1.1 remains elusive at the moment, we give in this paper a different sufficient metric condition for degeneration at E 2 that does not assume the fixed Hermitian metric ω to be SKT. As usual (see e.g. [Dem84] or [Dem97, VII, §.1]), we consider the torsion operator τ = τ ω := [Λ ω , ∂ω ∧ ·] of type (1, 0) defined on smooth differential forms on X, where Λ ω is the adjoint of the multiplication by ω w.r.t. 
the inner product defined by ω, while [A, B] = AB − (−1) ab BA is the graded commutator of any two endomorphisms A, B of respective degrees a, b of the bi-graded algebra C ∞ •, • of smooth differential forms on X. Specifically, we prove Theorem 1.2. Let (X, ω) be a compact Hermitian manifold with dim C X = n such that the inclusion of kernels ker ∆ ′′ ⊂ ker [τ, τ ⋆ ] (1) holds for the operators ∆ ′′ , [τ, τ ⋆ ] : C ∞ k (X, C) −→ C ∞ k (X, C) in every degree k ∈ {1, . . . , n}. Then, the Frölicher spectral sequence of X degenerates at the second page E 2 . Hypothesis (1) is of a qualitative nature and it is comparatively easy to check on concrete examples of compact Hermitian manifolds (X, ω) whether it holds or not. For example, S 3 × S 3 equipped with the Calabi-Eckmann complex structure and the Iwasawa manifold do not satisfy it when they are given the natural non-Kähler metrics (easy verifications that are left to the reader). Intuitively, (1) requires the torsion of ω to be "small" since, for non-negative operators, the smaller one has a larger kernel. (We will use throughout the paper the usual order relation for linear operators A, B: A ≥ B will mean that Au, u ≥ Bu, u for all forms u, where , stands for the L 2 inner product induced by the fixed Hermitian metric ω on X.) Hypothesis (1) is obviously satisfied if ω is Kähler since τ = 0 in that case. We do not know whether there exist compact complex non-Kähler manifolds that satisfy hypothesis (1). Inspired by the extensive literature on the adiabatic limit associated with a Riemannian foliation (see e.g. [Wi85], [MM90], [For95], [ALK00] and the references therein), we adapt that construction to the case of the splitting d = ∂ +∂ defining the complex structure of X. Thus, for every constant h > 0 that is eventually let to converge to 0, we define in section §.2 two rescalings of the usual d-Laplacian ∆ = dd ⋆ + d ⋆ d acting on the smooth differential forms on an arbitrary compact Hermitian manifold (X, ω): where d h := h∂ +∂ modifies d by rescaling ∂ while keeping∂ fixed, but its formal adjoint d ⋆ h is computed w.r.t. the given Hermitian metric ω, and ∆ ω h := dd ⋆ ω h + d ⋆ ω h d, where d = ∂ +∂ is kept unchanged, but its formal adjoint d ⋆ ω h is computed w.r.t. a rescaled metric ω h that modifies the original ω by multiplying the pointwise inner product of (p, q)-forms by h 2p . So, the anti-holomorphic degree q of (p, q)-forms does not contribute to the definition of ω h . Although strongly inspired by the adiabatic limit construction in the presence of a Riemannian foliation, this partial rescaling of a Hermitian metric seems to be new and to hold further promise for the future. In section §.2, we study these two rescaled Laplacians and the relationships between them. As in the foliated case of [ALK00], ∆ h and ∆ ω h are seen to have the same spectrum and to have eigenspaces that are obtained from each other via a rescaling isometry. A key ingredient in the proof of Theorem 1.2 is the following formula for the dimensions of the vector spaces featuring on each page of the Frölicher spectral sequence of X in terms of the number of small eigenvalues of the rescaled Laplacian ∆ h (or, equivalently, ∆ ω h ). "Small" refers to the eigenvalues' decay rate to zero as h ↓ 0. This result and its proof are strongly inspired by the analogous result for foliations proved byÁlvarez López and Kordyukov in [ALK00]. 
However, to our knowledge, this particular form of the result in the context of the Frölicher spectral sequence seems new and is of independent interest. Theorem 1.3. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every r ∈ N ⋆ and every k = 0, . . . , 2n, the following identity holds: where E k r := ⊕ p+q=k E p, q r is the direct sum of the spaces of total degree k on the r th page of the Frölicher spectral sequence of X, ) acting on k-forms. As usual, ♯ stands for the cardinal of a set. The proof of this statement proceeds along the lines of the one given in [ALK00] for the analogous statement in the foliated case with some simplifications, adjustments and inevitable differences in detail. We spell it out in section §.4. In the proof of Theorem 1.3, we also use our pseudo-differential where p ′′ is the orthogonal projection onto ker ∆ ′′ ) constructed in every bidegree (p, q) in [Pop16] and shown there to induce a Hodge isomorphism between its kernel and the space E p, q 2 of bidegree (p, q) featuring on the second page of the Frölicher spectral sequence. Along with Theorem 1.3 and the pseudo-differential Laplacian ∆, the third main ingredient in the proof of Theorem 1.2 is the following formula of the Bochner-Kodaira-Nakano type for Hermitian (not necessarily Kähler) metrics ω established by Demailly in [Dem84] (see also [Dem97, VII, §.1): where [•, •] is the usual graded commutator (see e.g. Notation 1.4 below), Λ = Λ ω is the adjoint of the multiplication operator ω ∧ ·, τ = τ ω := [Λ, ∂ω ∧ ·] is the torsion operator of ω and ∆ ′ τ := [∂ + τ, (∂ + τ ) ⋆ ]. This formula enables us to compare various Laplacians and finish the proof of Theorem 1.2 in section §.6. This paper owes much to the ideas and techniques in our main source of inspiration [ALK00] and to the treatment given to the Leray spectral sequence in [MM90] and [For95], although the setting and the objectives are different. In the Appendix, we give an estimate of the discrepancy between the Laplacians ∆ ′ and ∆ ′′ under the SKT assumption on the metric ω (cf. Lemma 7.1). This is of independent interest and leads to the lower bound −Ch 2 for the operator ∆ h − h 2 ∆ for all 0 < h < 1 when ω is SKT, where C ≥ 0 is a constant independent of h that can be chosen to be any upper bound of the non-negative bounded torsion operator [τ ,τ ⋆ ] (cf. Lemma 7.2). In view of Theorem 1.3 and some minor extra arguments, if the lower bound −Ch 2 could be improved to 0, Conjecture 1.1 would be solved, but at the moment we are unfortunately short of arguments to perform this improvement. Notation 1.4. For a given Hermitian metric ω on a given compact complex manifold X, , = , ω will stand for the L 2 inner product defined by ω on the spaces C ∞ p, q (X, C) (resp. C ∞ k (X, C)) of smooth differential (p, q)-forms (resp. k-forms) on X, while || || = || || ω will denote the corresponding L 2 -norm. For self-adjoint linear operators A, B on the bi-graded algebra ⊕ p, q C ∞ p, q (X, C), by A ≥ B we shall mean (as is the standard convention) that Au, u ≥ Bu, u for every form u lying in the space on which A and B are defined. We shall also use the usual bracket [A, B] := AB − (−1) ab BA for graded linear operators A, B of respective degrees a, b on the algebra ⊕ k Λ k T ⋆ X of differential forms on X. of this paper and for suggestions for section 5. Thanks are also due to S. Rao and Q. Zhao for stimulating discussions. Rescaled Laplacians Let X be a compact complex manifold with dim C X = n. We fix a Hermitian metric ω on X. 
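The displayed formulas defining the rescaled operators studied in this section do not appear legibly in the extracted introduction above; restated in LaTeX purely from the prose definitions given there (a reconstruction, not a quotation of the original display), they read

$$d_h := h\,\partial + \bar\partial, \qquad \Delta_h := d_h d_h^{\star} + d_h^{\star} d_h, \qquad \Delta_{\omega_h} := d\, d^{\star_{\omega_h}} + d^{\star_{\omega_h}} d,$$
$$\langle u, v\rangle_{\omega_h} := h^{2p}\, \langle u, v\rangle_{\omega} \quad \text{for all } (p,q)\text{-forms } u, v,$$

where $d_h^{\star}$ denotes the formal adjoint of $d_h$ with respect to the original metric $\omega$ and $d^{\star_{\omega_h}}$ denotes the formal adjoint of $d$ with respect to the rescaled metric $\omega_h$.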
Rescaling the metric The first operation we will consider is a partial rescaling of ω in a way that depends solely on the holomorphic degree of forms. Definition 2.1. For all p, q ∈ {0, . . . , n}, all (p, q)-forms u, v and every constant h > 0, we define the following pointwise inner product where , ω stands for the pointwise inner product defined by the original Hermitian metric ω. Note that, for every h > 0, we obtain in this way a Hermitian metric ω h on every vector bundle Λ p, q T ⋆ X of (p, q)-forms on X. The maps induce an isometry of Hermitian vector bundles θ h : In particular, we have defined a Hermitian metric ω h = 1 h 2 ω, h > 0, on the holomorphic tangent bundle T 1, 0 X of vector fields of type (1, 0), or equivalently, a rescaled C ∞ positive-definite (1, 1)-form ω h = h −2 ω on X. This induces a C ∞ positive volume form h 2n dV ω on X, which in turn gives rise, in conjunction with the above pointwise inner product , ω h , to the following L 2 inner product for all forms u, v ∈ C ∞ p, q (X, C) and all bidegrees (p, q). Formula 2.2. For all (p, q)-forms u, v, we have Proof. The formula follows at once from the last identity and from the fact that θ h u = h p u for all (p, q)-forms u. Definition 2.3. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every k = 0, . . . , 2n and every constant h > 0, we consider the d-Laplacian w.r.t. the rescaled metric ω h acting on C ∞ k-forms on X: , ω h and , ω h has been extended from the spaces C ∞ p, q (X, C) to C ∞ k (X, C) = ⊕ p+q=k C ∞ p, q (X, C) by sesquilinearity and by imposing that u, v ω h = 0 whenever u ∈ C ∞ p, q (X, C) and v ∈ C ∞ r, s (X, C) with (p, q) = (r, s). Rescaling the differential The second operation we will consider is a partial rescaling of d = ∂ +∂ that applies solely to its component of type (1, 0). Definition 2.4. Let X be a compact complex manifold, dim C X = n. For every constant h > 0, let Some basic properties of the rescaled differential d h are summed up in the following h on pure-type forms, so this identity extends to arbitrary forms by linearity. (ii) On the one hand, so we have the equivalence: These equivalences show that the linear map H k is well defined and bijective. In particular, the spectral sequences induced by the pairs of differentials (∂,∂) and (h∂,∂) are isomorphic, so degenerate at the same page. The first of them is the Frölicher spectral sequence of X. Definition 2.6. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every constant h > 0 and every degree k ∈ {0, . . . , 2n}, we consider the d h -Laplacian w.r.t. the given metric ω acting on C ∞ k-forms on X: the L 2 inner product induced by ω. Comparison of the two rescaled Laplacians We now bring together the above two operations by comparing the corresponding Laplace-type operators. Note that ∆ ω h was defined by the rescaled differential d h and the original metric ω, while ∆ h was induced by the rescaled metric ω h and the original differential d. In particular, the second-order Laplacians ∆ ω h and ∆ h are elliptic since the second-order Laplacians ∆ ′ and ∆ ′′ are and the deviation terms Note that [∂,∂ ⋆ ]u, u = [∂, ∂ ⋆ ]u, u = 0 whenever the form u is of pure type and whatever metric is used to define , (because pure-type forms of different bidegrees are orthogonal w.r.t. any metric), so (This fails, in general, if u is not of pure type, unless the metric ω is Kähler.) 
(iii) The rescaled Laplacians ∆ ω h and ∆ h are related by the formula The second identity in (i) follows by taking conjugates in is proved in the same way by using the fact that∂ acts only on the anti-holomorphic degree of forms which is unaffected by the change of metric from ω to ω h . Using these formulae, we get On the other hand, we know from [Dem84] (or [Dem97, VII, §.1]) that and, by conjugation, we get [∂, So, the terms measuring the deviations of ∆ ω h and ∆ h from h 2 ∆ ′ + ∆ ′′ are of order 1 and we get the alternative formulae for ∆ ω h and ∆ h spelt out in the statement. (iii) For any smooth (p, q)-form α, we have on pure-type forms and this identity extends to arbitrary forms by linearity. Corollary 2.8. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every constant h > 0 and every degree k ∈ {0, . . . , 2n}, the spectra of the rescaled Laplacians ∆ h , ∆ ω h : and their respective eigenspaces are obtained from each other via the rescaling isometry θ h : where E ∆ω h (λ), resp. E ∆ h (λ), stands for the eigenspace corresponding to the eigenvalue λ of the operator ∆ ω h , resp. ∆ h . Thus, ∆ h and ∆ ω h have the same eigenvalues with the same multiplicities. . These implications also hold in reverse order, so we get the equivalences: These equivalences amount to (6) and (7). Another consequence of the above discussion is a Hodge Theory for the d h -cohomology and the resulting equidimensionality of the kernels of ∆ and ∆ h in every degree. Corollary 2.9. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every constant h > 0 and every degree k ∈ {0, . . . , 2n}, the operator d h : C ∞ k (X, C) −→ C ∞ k (X, C) induces the following L 2 ω -orthogonal direct-sum decomposition: This, in turn, induces the Hodge isomorphism Proof. Since X is compact and ∆ h is elliptic and self-adjoint, a standard consequence of Gårding's inequality (see e.g. [Dem97, VI]) yields the two-space orthogonal decomposition The same consequence of Gårding's inequality ensures that ker ∆ h is finite-dimensional and that the images in C ∞ k (X, C) of d h and d ⋆ h are closed. The differentials in the Frölicher spectral sequence We begin by recalling the well-known construction of the Frölicher spectral sequence in order to fix the notation and to point out the key features for us. Let X be a compact complex manifold with dim C X = n. Recall that the zero-th page E 0 of the Frölicher spectral sequence of X consists of the spaces E p, q 0 := C ∞ p, q (X, C) of smooth pure-type forms on X and of the type-(0, 1) differentials d 0 :=∂ forming the Dolbeault complex: Thus, in every bidegree (p, q), the inclusions Im d p, q−1 are the Dolbeault cohomology groups of X. The first page E 1 of the Frölicher spectral sequence consists of the spaces E p, q where d p, q 1 is d 1 acting in bidegree (p, q), while the spaces E p, q 2 := ker d p, q 1 /Im d p−1, q 1 form the cohomology of the page E 1 . The remaining pages are constructed inductively: the differentials d r = d p, q r : E p, q r −→ E p+r, q−r+1 r are of type (r, −r + 1) for every r, while the spaces E p, q r := ker d p, q r−1 /Im d p−r+1, q+r−2 r−1 on the r th page are defined as the cohomology of the previous page E r−1 . On every page E r and in every bidegree (p, q), the inclusions Im d p−r, q+r−1 where E p, q r+1 := ker d p, q r /Im d p−r, q+r−1 r . It is worth stressing that (8), (9) and (10) only assert that the vector spaces on either side of ≃ are isomorphic, but no choice of preferred isomorphism is possible at this stage. 
A classical result of Frölicher [Fro55] asserts that this spectral sequence converges to the De Rham cohomology of X and degenerates after finitely many steps. This means that there are (noncanonical) isomorphisms: N for all p, q and where N ≥ 1 is the positive integer such that the spectral sequence degenerates at E N . 3.1 Identification of the d r 's with restrictions of d Summing up (8), (9), (10) over r = 0, . . . , N −1, we get (infinitely many, non-canonical) isomorphisms for every bidegree (p, q). Note that the isomorphisms (8), (9), (10) identify the spaces Im d p−r, q+r−1 r , E p, q r (including for r = ∞) and E p, q r / ker d p, q r with certain subspaces of C ∞ p, q (X, C). However, these subspaces have not been specified yet since multiple choices (and no canonical choice) are possible for the isomorphisms (8), (9), (10). These choices can only be made unique once a Hermitian metric has been fixed on X. Thus, under these isomorphisms, the operator where the isomorphism d p, q r : to the third piece on the r.h.s. of (10). The fact that d r is of type (r, −r + 1) will play a key role in the sequel. On the other hand, summing up the splittings of C ∞ p, q (X, C) over p ≥ s for any given s, we get where we set m k r := l≥r p+q=k dim (E p, q l / ker d p, q l ). (ii) For every r and every k, let L p, q r := l≥r (E p, q l / ker d p, q l ) and L k r := p+q=k L p, q r . Then, dim L k r = m k r (obvious) and, under the identifications defined by the isomorphisms (8), (9), (10), the following inclusions hold: where d(L p, q r ) := ⊕ l≥r d p, q l (E p, q l / ker d p, q l ) in keeping with identification (12). Proof. (i) For every fixed r, summing up the splittings (10) with l in place of r over l ≥ r and then summing up over p + q = k for every fixed k, we get for all p, q, l, if we set p ′ := p − l and q ′ := q + l − 1, we have p ′ + q ′ = k − 1 when p + q = k and the above isomorphism translates to for every k. Now, dim ⊕ p+q=k E p, q ∞ = b k (the k th Betti number of X) thanks to (11), so taking dimensions in the above isomorphism, we get (13). (ii) Since d p, q l : E p, q l / ker d p, q l −→ Im d p, q l is an isomorphism of type (l, −l + 1) for all l, p, q, we get for all l ≥ r: p+r under the identification of each space E p+l, q−l+1 l with a subspace of C ∞ p+l, q−l+1 defined by the isomorphisms (8), (9), (10). This proves (14). Explicit description of the above identifications We take this opportunity to point out an explicit description of the differentials d r in cohomology and of their unique realisations induced by a given Hermitian metric on X. Lemma 3.2. Let X be a compact complex manifold with dim C X = n. (i) For every r and every bidegree (p, q), the vector space of type (p, q) featuring on the r th page of the Frölicher spectral sequence of X can be explicitly described as the following set of multicohomology classes (i.e. each of these is the d r−1 -class of a d r−2 -class . . . of a d 1 -class of a∂-class): where condition (P r ) on α requires the existence of forms u l ∈ C ∞ p+l, q−l (X, C) for l ∈ {1, . . . , r − 1} such that∂ α = 0, ∂α =∂u 1 , ∂u 1 =∂u 2 , . . . , ∂u r−2 =∂u r−1 . These identities imply the identities ∂ζ 1 = 0, ∂ζ 1 =∂ζ 2 , . . . , ∂ζ r−2 =∂ζ r−1 , which, in turn, imply that ζ 1 satisfies condition (P r−1 ) (hence defines a multi-cohomology class lying in E p+1, q−1 r−1 ) and that Thus, the result we get by formula (17) . (12)). Now, recall that infinitely many choices are possible for the isomorphisms (8), (9) and (10). 
However, any fixed Hermitian metric ω on X selects a unique realisation of each of these isomorphisms and, implicitly, identifies each space E p, q r with a precise subspace H p, q r (depending on ω) of C ∞ p, q (X, C) via an isomorphism E p, q r ≃ H p, q r depending on ω. These harmonic subspaces H p, q r ⊂ C ∞ p, q (X, C) are constructed by induction on r ≥ 1 as follows. p, q (X, C) be the orthogonal complement for the L 2 ω -norm of Im d p−r, q+r−1 r in ker d p, q r (viewed as subspaces of H p, q r ). Due to (10), H p, q r+1 is isomorphic to E p, q r+1 . Note that we have where ∆ = ∂p ′′ ∂ ⋆ + ∂ ⋆ p ′′ ∂ + ∆ ′′ is the pseudo-differential Laplacian constructed in [Pop16]. Also note that standard Hodge theory (for the elliptic differential operator ∆ ′′ ) is used to ensure that Im d p, q−1 0 is closed in C ∞ p, q (X, C) and that H p, q 1 is finite-dimensional. However, all the other images Im d p−r, q+r−1 r are automatically closed since they are (necessarily finite-dimensional) vector subspaces of a finite-dimensional vector space. It is also possible to construct pseudo-differential operators ∆ (r) : C ∞ p, q (X, C) −→ C ∞ p, q (X, C) whose kernels are isomorphic to the spaces H p, q r (cf. forthcoming joint work of the author with L. Ugarte, where the Hodge theory found in [Pop16] for the second page of the Frölicher spectral sequence is extended to all the pages), making these spaces into harmonic spaces for these pseudo-differential Laplacians, but the mere spaces H p, q r suffice for our purposes in this paper. When the vector space C ∞ p, q (X, C) is endowed with the L 2 -norm induced by ω, every subspace H p, q r inherits the restricted norm. On the other hand, every space E p, q r has a quotient norm induced by the L 2 ω -norm. The isomorphisms E p, q r ≃ H p, q r constructed above are isometries when E p, q r and H p, q r are endowed with the quotient, resp. L 2 norms. In particular, the maps α → u 1 and u l−1 → u l are linear. For all r, p, q, we define the linear operator is finite-dimensional, T r is bounded, so there exists a constant C p, q r > 0 such that ||T r (α)|| = ||∂u r−1 || ≤ C p, q r ||α|| for all α ∈ H p, q r . It is easy to see that T r (α) need not belong to H p+r, q−r+1 r when α ∈ H p, q r . If we let P p, q r : C ∞ p, q (X, C) −→ H p, q r be the L ω -orthogonal projection onto H p, q r , we get ||(P p, q r • T r )(α)|| = ||P p, q r (∂u r−1 )|| ≤ ||∂u r−1 || ≤ C p, q r ||α|| for all α ∈ H p, q r . Use of the rescaled Laplacians in the study of the Frölicher spectral sequence In this section, we prove Theorem 1.3. As in [ES89], [GS91], [ALK00], we consider the spectrum distribution function associated with any of the rescaled Laplacians ∆ h , ∆ ω h in our context. Its definition and its study are made far simpler in this setting than in those references by the manifold X being compact and by the Laplacians ∆ ′ , ∆ ′′ being elliptic. Definition 4.1. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every k ∈ {0, . . . , n} and every constant λ ≥ 0, let N k h (λ) stand for the number of eigenvalues (counted with multiplicities) of ∆ h that are ≤ λ. Replacing ∆ h with ∆ ω h does not change the spectrum distribution function N k h : [0, +∞) −→ N since ∆ h and ∆ ω h have the same eigenvalues with the same multiplicities (cf. Corollary 2.8). 
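For reference, the displayed identity of Theorem 1.3 does not appear legibly in the extracted statement above; reconstructed from its restatement in the arguments below (where $\dim_{\mathbb{C}} E^k_r$ is described as the number of eigenvalues $\lambda^k_i(h) \in O(h^{2r})$ of $\Delta_h$ in degree $k$), it reads

$$\dim_{\mathbb{C}} E_r^k \;=\; \sharp\Big\{\, i \in \mathbb{N}^{\star} \;:\; \lambda_i^k(h) \in O(h^{2r}) \ \text{ as } h \downarrow 0 \,\Big\},$$

where $0 \le \lambda_1^k(h) \le \lambda_2^k(h) \le \cdots$ are the eigenvalues, counted with multiplicities, of $\Delta_h$ (equivalently, of $\Delta_{\omega_h}$) acting on $k$-forms.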
Theorem 1.3 can be reworded as ensuring the existence of a constant C > 0 independent of h such that, for all r and k, we have The Efremov-Shubin variational principle The main technical ingredient we will need is the following variant of the variational principle proved in a more general context in [ES89] and used extensively thereafter (e.g. [GS91], [ALK00]) in settings different from ours. We adapt to our situation the result of [ES89]. Proposition 4.2. (see e.g. Efremov-Shubin [ES89]) Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every k = 0, . . . , 2n and every λ ≥ 0, the following identity holds where b k is the k th Betti number of X and the function F k h : [0, +∞) −→ N is defined by where L ranges over the closed vector subspaces of the quotient space C ∞ k (X, C)/ ker d on which the operator d : (The understanding is that ||dζ|| ω h stands for the usual L 2 -norm induced by the metric ω h , while ||ζ|| ω h stands for the quotient norm induced on C ∞ k (X, C)/ ker d by the L 2 ω h -norm.) We will present a detailed proof of this statement along the lines of [ES89] with a few minor simplifications afforded by our special setting where the manifold X is compact and the operator ∆ h is elliptic. While a more general version for unbounded operators on L 2 spaces was needed in [ALK00], we stress that, in this context, we can confine ourselves to the case of operators on spaces of C ∞ differential forms. The main step is the following statement (a version of the classical Min-Max Principle) that was proved in a more general setting in [ES89]. Then, for every λ ≥ 0, the spectrum distribution function N k of P (i.e. N k (λ) is defined to be the number of eigenvalues of P , counted with multiplicities, that are ≤ λ) is given by the following identities (in which the suprema are actually maxima): where L (k) λ stands for the set of closed vector subspaces L ⊂ C ∞ k (X, C) such that P u, u ≤ λ||u|| 2 for all u ∈ L, while P (k) λ stands for the set of all bounded linear operators E : C ∞ k (X, C) −→ C ∞ k (X, C) satisfying the conditions: e. E is an orthogonal projection w.r.t. the L 2 ω inner product); (ii) P u, u ≤ λ||u|| 2 for all u ∈ Im E. (In other words, E is the orthogonal projection onto one of the subspaces L ∈ L (k) λ , so L = Im E for some L ∈ L (k) λ .) Proof. The second identity in (23) follows at once from the fact that the dimension of any closed subspace L ⊂ C ∞ k (X, C) equals the trace of the orthogonal projection onto L. So, we only have to prove the first identity in (23). Since X is compact and P is elliptic, self-adjoint and non-negative, the spectrum of P is discrete and consists of non-negative eigenvalues, while there exists a countable orthonormal (w.r.t. the L 2 ω -inner product) basis of C ∞ k (X, C) (and of the Hilbert space L 2 k (X, C) of L 2 k-forms) consisting of eigenvectors of P . For every µ ≥ 0, let E P (µ) ⊂ C ∞ k (X, C) be the eigenspace of P corresponding to the eigenvalue µ (with the understanding that E P (µ) = {0} if µ is not an actual eigenvalue). The spaces E P (µ) are finite-dimensional and consist of C ∞ forms since P is assumed to be elliptic (hence also hypoelliptic) and X is compact. The second step towards proving Proposition 4.2 is the standard 3-space decomposition used in Hodge theory. For every k = 0, . . . , 2n, the operator ∆ ω h : C ∞ k (X, C) −→ C ∞ k (X, C) is elliptic and since the manifold X is compact and d 2 = 0, we have the L 2 ω h -orthogonal decomposition: and where H k C)) . 
Moreover, each of the three subspaces into which C ∞ k (X, C) splits in (24) is ∆ ω h -invariant, i.e. because ∆ ω h commutes with d and with d ⋆ ω h . The invariance implies that an L 2 ω h -orthonormal basis {e k i (h)} i∈N ⋆ of C ∞ k (X, C) consisting of eigenvectors for ∆ ω h (whose existence follows from the standard elliptic theory) can be chosen such that each e k i (h) belongs to one and only one of the subspaces H k . . be the corresponding eigenvalues, counted with multiplicities, of the rescaled Laplacian ∆ h : k (X, C) satisfying the above properties. Lemma 4.4. The functions F k h and G k h are the spectrum distribution functions of the restrictions . In other words, they are described as follows: stands for the set of closed vector subspaces L ⊂ E ⋆ k (X, C) such that and L ′ (k) λ stands for the set of closed vector subspaces L ⊂ E k (X, C) such that Proof. This is an immediate application of the variational principle of Proposition 4.3 to the restric- . Estimates (26) and (27) are consequences of the identity ∆ ω h u, u ω h = ||du|| 2 ω h +||d ⋆ ω h u|| 2 ω h and of the fact that d ⋆ ω h u = 0 whenever u ∈ E ⋆ k (X, C) (since Im d ⋆ ω h ⊂ ker d ⋆ ω h ) and that du = 0 whenever u ∈ E k (X, C) (since Im d ⊂ ker d). The last ingredient we need is the following very simple observation. Lemma 4.5. For every λ ≥ 0 and every k ∈ {−1, 0, . . . , 2n}, we have Proof. We know from the orthogonal decompositions (24) that the restriction of d to E ⋆ k (X, C) is injective, so . Combined with the above isomorphism, with the invariance of E ⋆ k (X, C) under ∆ ω h and with the definitions of F h k (λ) and G h k+1 (λ), this implies the contention. . Proof of Proposition 4.2. Putting together (24), the definitions of F k h (λ) and G k h (λ) and the fact that the Hodge isomorphism H k for all k and all λ ≥ 0. Using Lemma 4.5, this is equivalent to (20). On the other hand, the descriptions (25) and (26) of F k h (λ) coincide with the descriptions (21) and (22) thanks to the isomorphism E ⋆ k (X, C) ≃ C ∞ k (X, C)/ ker d, which is another consequence of the decompositions (24). Metric independence of asymptotics Although the following statement has no impact on either the statement of Theorem 1.3 or its proof, we pause briefly to show, exactly as in the foliated case of [ALK00], that the asymptotics of the eigenvalues λ k i (h) and of the spectrum distribution function N k h as h ↓ 0 depend only on the complex structure of X. The proof is an easy application of the Variational Principle of Proposition 4.2. Proposition 4.6. The asymptotics of the λ k i (h)'s and of N k h as h ↓ 0 are independent of the choice of Hermitian metric ω. Proof. We adapt to our setting the proof of the corresponding result in [ALK00]. Let ω and ω ′ be two Hermitian metrics on X. They induce, respectively, rescaled metrics (ω h ) h>0 and (ω ′ , written as in (20). Since X is compact, there exists a constant C > 0 such that the respective L 2 -norms satisfy the following inequalities in every bidegree (p, q): The constant C is independent of h > 0 thanks to Formula 2.2. Thanks to Proposition 4.2, this implies that , so putting the last two inequalities together, we get 4.3 Proof of the inequality "≤" in Theorem 1.3 This proves (31) after setting C := max 0≤r≤N 0≤p,q≤n C 2 r > 0. The proof is complete. Note that L k r is a vector space of classes of cohomology classes, rather than of differential forms, so what is meant by L k r in the above proof is its image in C ∞ k (X, C)/ ker d under the isometries explained in §.3.2. 
We can use these isometries, the identification of d acting on H p, q r with d r and Conclusion 3.4 in the following way to make the above proof even more explicit. If we choose ζ p, q to be the ω h -harmonic representative of its class (also denoted by ζ p, q ) and to play the role of α of Conclusion 3.4, we can re-write the above inequalities in a more detailed form as follows: where P and T are the linear maps P p, q r and T p, q r (with indices removed) of Conclusion 3.4 that was used above, while || || ω h stands for the L 2 ω h -norm when applied to a form and for the induced quotient norm when applied to a class. Preliminaries to the proof of the inequality "≥" in Theorem 1.3 We will need a few simple observations. Lemma 4.8. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every bidegree (p, q) and every (p, q)-form u on X, the following identities hold: Proof. The latter identity is obvious, so we will only prove the former one. Since u is of pure type, (4) yields the first identity below, while the second identity follows from Formula 2.2: The last identity followed again from (4). Lemma 4.9. Let u ∈ C ∞ p, q (X, C) be an arbitrary form. Considering the splitting (12)) and the splitting with u r ∈ E p, q r / ker d p, q r (see §.3 and recall that d r : E p, q r / ker d p, q r −→ Im d p, q r ⊂ C ∞ p+r, q−r+1 (X, C) is an isomorphism), the following identity holds: Proof. Since d r is of type (r, −r + 1), d r u r is of type (p + r, q − r + 1), so the d r u r 's are mutually orthogonal (w.r.t. any metric) when r varies. We get where for the last identity we used Formula 2.2. Lemma 4.10. For every r and every bidegree (p, q), the formal adjoints of d r w.r.t. the metrics ω h and ω compare as follows: Consequently, for every form u ∈ C ∞ p, q (X, C), the following counterpart of Lemma 4.9 for the adjoints holds. Considering the splitting with v r ∈ Im d p−r, q+r−1 r (see §.3.1), the following identity holds: Proof. For every (p, q)-form v and every (p − r, q + r − 1)-form u, we have This proves (34). Using the mutual orthogonality of the (d r ) ⋆ ω h v r 's (due to bidegree reasons) and Formula 2.2, we get This proves (35). Putting together (32), (33) and (35), we get Corollary 4.11. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every bidegree (p, q) and every (p, q)-form u on X, the following identity holds: where u splits uniquely (cf. §.3.1) as 4.5 Proof of the inequality "≥" in Theorem 1.3 Following again the analogy with the foliated case of [ALK00], we will actually prove a stronger statement from which the following result will follow as a corollary. Theorem 4.12. Let (X, ω) be a compact Hermitian manifold with dim C X = n. For every r and every k = 0, . . . , 2n, the following inequality holds: The first main ingredient we will use is the pseudo-differential Laplacian C). The pseudo-differential Laplacian ∆ gives a Hodge theory for the second page of the Frölicher spectral sequence in the sense that there is a Hodge isomorphism E p, q 2 ≃ −→ H p, q ∆ (X, C) := ker( ∆ : C ∞ p, q (X, C) −→ C ∞ p, q (X, C)) for all p, q = 0, . . . , n. Note that (p ′′ ) 2 = p ′′ = (p ′′ ) ⋆ , so ∂p ′′ ∂ ⋆ = (p ′′ ∂ ⋆ ) ⋆ (p ′′ ∂ ⋆ ) and ∂ ⋆ p ′′ ∂ = (p ′′ ∂) ⋆ (p ′′ ∂). Thus, ∆ is a sum of non-negative operators, so its kernel is the intersection of the respective kernels. Since ker(A ⋆ A) = ker A for any operator A, we get The second main ingredient we will use is the following lower estimate of the rescaled Laplacian ∆ h . 
It is the analogue in our context of a result in [ALK00]. The coefficients 3/4 and 1 are not optimal, but they suffice for our purposes and the proof provided below shows that they can be made optimal if this is desired. We are now ready to state and prove a general result that will imply Theorem 4.12. Theorem 4.14. Let (X, ω) be a compact Hermitian manifold with dim C X = n. Let k ∈ {0, . . . , 2n} and r ≥ 1 be fixed integers. Suppose there exist a sequence (h i ) i∈N of constants h i > 0 such that h i ↓ 0 and a sequence (u i ) i∈N of k-forms u i ∈ C ∞ k (X, C) such that ||u i || ω = 1 for every i and Then, there exists a subsequence (u i l ) l∈N of (u i ) i∈N such that (u i l ) l∈N converges in the L 2 ω -topology to some k-form u ∈ H k r := ⊕ p+q=k H p, q r ≃ E k r , where the H p, q r ⊂ C ∞ p, q (X, C) are the "harmonic" vector subspaces of Definition 3.3 induced by the metric ω. Proof. • Case r = 1. In this case, Hypothesis (39) means that ∆ h i u i , u i ω −→ 0 as i → +∞. Then also ∆ h i u i , u i ω + Ch 2 i −→ 0 as i → +∞. Since, by Lemma 4.13, we have we get Meanwhile, the∂-Laplacian ∆ ′′ is elliptic and the manifold X is compact, so the Gårding inequality yields constants δ 1 , δ 2 > 0 such that the first inequality below holds: where || || W 1 stands for the Sobolev norm W 1 induced by the metric ω. The second inequality above holds for some constant C 1 > 0 since the quantity ∆ ′′ u i , u i ω converges to zero (cf. (40)), hence is bounded, and ||u i || ω = 1 by the hypothesis of Theorem 4.14. Consequently, the sequence (u i ) i∈N is bounded in the Sobolev space W 1 (a Hilbert space), so by the Banach-Alaoglu Theorem there exists a subsequence (u i l ) l∈N that converges in the weak topology of W 1 to some k-form u ∈ W 1 . In particular, the following convergences hold in the weak topology of distributions:∂ u i l −→∂u and∂ ⋆ u i l −→∂ ⋆ u as l → +∞. On the other hand, in the L 2 -topology as i → +∞. Comparing this with the above convergences in the weak topology of distributions, we get∂ u = 0 and∂ ⋆ u = 0, which, by (18), is equivalent to u ∈ ker (∆ ′′ : Note that by the Rellich Lemma (asserting the compactness of the inclusion W 1 ֒→ L 2 ), the convergence of (u i l ) l∈N to u in the weak topology of W 1 implies that (u i l ) l∈N also converges in the L 2 -topology to u. Moreover, the ellipticity of ∆ ′′ and the relation u ∈ ker ∆ ′′ imply that u is C ∞ . On the other hand, we know from the discussion of the case r = 1 (whose weaker assumption is still valid in the case r = 2) that there exists a subsequence (u i l ) l∈N that converges in the weak topology of W 1 to some k-form u ∈ W 1 . Thus, ∂u i l −→ ∂u ∈ L 2 in the weak topology of L 2 as l → +∞. This means that as l → +∞. (The second convergence follows from the first since ||p ′′ v|| ≤ ||v|| for all v ∈ L 2 , so p ′′ (L 2 ) ⊂ L 2 .) Now, p ′′ is self-adjoint, so the last convergence translates to This means that p ′′ ∂u i l converges to p ′′ ∂u in the weak topology of L 2 as l → +∞. However, we know from (42) that p ′′ ∂u i l converges to 0 in the L 2 -topology. Hence p ′′ ∂u = 0. The same argument run with ∂ ⋆ in place of ∂ yields that p ′′ ∂ ⋆ u = 0. On the other hand, we know from the discussion of the case r = 1 that u ∈ ker∂ ∩ ker∂ ⋆ = ker ∆ ′′ , so we get u ∈ ker(p ′′ ∂) ∩ ker(p ′′ ∂ ⋆ ) ∩ ker∂ ∩ ker∂ ⋆ = H k 2 ≃ E k 2 5 Consequences of Theorem 1.3 The following consequences of Theorem 1.3 are of independent interest. Proposition 5.1. Let X be a compact complex manifold with dim C X = n. 
For every r ∈ N ⋆ and every k = 0, . . . , 2n, the following identity (a kind of numerical Poincaré duality extended to all the pages of the spectral sequence) holds: where, as usual, E k r = p+q=k E p, q r is the direct sum of the spaces of total degree k on the r th page of the Frölicher spectral sequence of X. This is an immediate consequence of Theorem 1.3 and of the following Proposition 5.2. Let (X, ω) be a compact complex Hermitian manifold with dim C X = n. Fix an arbitrary constant h > 0. In particular, the operators ∆ h : C ∞ k (X, C) −→ C ∞ k (X, C) and ∆ h : C ∞ 2n−k (X, C) −→ C ∞ 2n−k (X, C) have the same spectra and their corresponding eigenvalues have the same multiplicities for all h > 0 and all k = 0, . . . , 2n. (ii) Using the formula under (i) and ⋆⋆ = (−1) k on k-forms, we get the following equivalences: , where (a) was obtained by conjugating and then applying the isomorphism ⋆. This shows the well-definedness of the linear map under consideration. Both the conjugation and ⋆ are isomorphisms, hence so is that linear map. Proof of Proposition 5.1. By Theorem 1.3, dim C E k r , resp. dim C E 2n−k r , is the number of eigenvalues λ k i (h) ∈ O(h 2r ), resp. λ 2n−k i (h) ∈ O(h 2r ), counted with multiplicities, of ∆ h in degree k, resp. 2n−k. Since, by Proposition 5.2, λ k i (h) = λ 2n−k i (h) for all i ∈ N ⋆ and all h > 0, the statement follows. The last consequence of Theorem 1.3 that we notice in this section is the following degeneration criterion for the Frölicher spectral sequence. We shall now give a sufficient condition for the r.h.s. of (43) to be non-negative. If the lower bound −Ch 2 in (54) could be improved to 0, then we would have ∆ h ≥ h 2 ∆ for all 0 < h ≪ 1 (as in Corollary 6.3) and Conjecture 1.1 would follow by the argument spelt out at the end of section §.6.
2017-09-13T13:58:24.000Z
2017-09-13T00:00:00.000
{ "year": 2017, "sha1": "cb4f267f53aa0b123328a8fbf3954e33dbf46718", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1709.04332", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cb4f267f53aa0b123328a8fbf3954e33dbf46718", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
238641561
pes2o/s2orc
v3-fos-license
Influence of copper content and thermal treatment on shape memory effect in rapidly quenched TiNiCu alloys

Amorphous alloys of the TiNi-TiCu system with a copper content of 25 to 40 at.% were prepared by planar flow casting at a melt cooling rate of 10^6 K/s. Crystallization of the alloys was carried out by isothermal annealing and by the action of a single electric pulse with a duration of 5 ms. The shape memory behavior and structure of the alloys were investigated by means of bending tests and X-ray phase analysis. It was found that increasing the copper content above 30 at.% considerably reduces the plasticity and shape memory effect of the alloys. However, sharply decreasing the annealing duration significantly improves the shape memory performance by preventing the formation of brittle Ti-Cu phases in the alloy structure.

Introduction
Shape memory alloys (SMA) are a prime example of so-called intelligent functional materials [1,2]. The ability of SMAs to retain their unique characteristics at the microscale makes it possible to create, on their basis, effective devices for micro-electromechanical systems (MEMS) [3][4][5], which are currently one of the most innovative and fastest growing technologies. Alloys of the TiNi-TiCu quasi-binary intermetallic system, produced by rapid quenching from the liquid state in the form of thin ribbons about 40 μm thick, are an attractive material for creating microactuators due to the narrow temperature hysteresis of the martensitic transformation (MT) and the shape memory effect (SME) and the relatively large recoverable strain [6][7][8][9][10]. Recently, we have developed a number of microtweezers, micromechanical instruments that grip and hold micro- and nano-objects for spatial manipulation [11][12][13]. The possibility of manufacturing composite amorphous-crystalline microgrippers with the SME using pulsed laser action has been shown, and the operation of the devices and their compatibility with most known micro- and nano-positioning systems have been demonstrated experimentally. It was found that the best characteristics are possessed by TiNiCu alloys crystallized from the amorphous state. Amorphization is achieved in alloys with a high copper content (more than 20 at.%) at a melt cooling rate of about 10^6 K/s [6,9,[14][15][16][17]. Recently, it was shown that an increase in the copper content up to 38 at.% significantly affects both their structural and functional properties [18][19][20]. In this work, we investigated the shape memory behavior of rapidly quenched TiNiCu alloys depending on the copper content, as well as on the method and duration of crystallization from the amorphous state.

Samples
To obtain test samples, we used the planar flow casting method [21], by means of which alloys of the TiNi-TiCu quasi-binary system with a copper content of 25, 30, 35 and 40 at.% (denoted below 25Cu, 30Cu, 35Cu and 40Cu, respectively) were manufactured in the amorphous state, in the form of 30-50 µm thick ribbons, by rapid quenching from the liquid state at a melt cooling rate of 10^6 K/s.

Techniques
Thermal treatment of the alloys was carried out in two ways: by isothermal annealing in a muffle furnace in air at 500°C for 100-300 s, and by passing a short (5 ms) electric current pulse through the sample. The annealing temperature was selected on the basis of the data of [18], and the current amplitude and duration of the electric pulse heat treatment in accordance with the procedure presented in [22].
X-ray phase analysis was performed on a PANalytical Empyrean diffractometer in Cu-Kα radiation using Bragg-Brentano focusing with a hybrid monochromator. The thermomechanical properties and the parameters of the SME in the alloys were studied using the bending test method [23]. A ribbon sample of thickness d, to which a rectilinear shape memory had been given, was bent through 180 degrees in the martensitic state and placed between fixed and movable pressure plates. The distance D between the plates was set so that the sample acquired a given initial deformation ε_i = d/D, which corresponds to the maximum deformation on the sample surface. After removing the load and subsequently heating above the temperature A_f of the end of the reverse MT, the sample, as a result of the SME, restored the specified rectilinear shape completely or partially, recovering part of the accumulated strain, ε_sme, which characterizes the magnitude of the SME. Observation and measurement of the shape change of the sample were implemented using a special video recording system and a data processing program.

Results and Discussion
For each alloy, we studied the thermomechanical characteristics of four samples, subjected to isothermal crystallization for 100 s, 200 s and 300 s, as well as to dynamic crystallization by a single electric current pulse with a duration of 5 ms. Measurements of the maximum deformation ε_f, at which fracture of the ribbon occurs, showed that an increase in the copper content leads to sharp embrittlement of the alloys after isothermal treatment. While in alloys 25Cu and 30Cu the strain ε_f reaches 8-11%, depending on the duration of thermal treatment, in alloy 35Cu it decreases to 1.5-3%. The 40Cu alloy, regardless of the duration of isothermal treatment, fractures at ε_f of about 0.2% and is consequently unable to exhibit the SME. Moreover, an increase in the time of isothermal treatment from 100 s to 300 s causes a decrease in ε_f by several percent. The use of high-rate electro-pulse crystallization increases the plasticity of all the alloys, increasing the value of ε_f especially sharply for alloys 35Cu (up to 6%) and 40Cu (up to 4%). The study of the SME parameters showed that for the isothermally crystallized alloys 25Cu and 30Cu, the strain ε_sme increases almost linearly with the initial strain ε_i (Fig. 1). However, at ε_i above 3-5%, this increase noticeably slows down, probably because the characteristic value of pseudoplastic strain in alloys of the TiNi-TiCu system [23] is exceeded and noticeable plastic strain occurs. It is important to note that when the duration of isothermal treatment is decreased from 300 s to 100 s, the value of ε_sme increases at the same ε_i. The increase in the maximum value of ε_sme is especially noticeable; in the 30Cu alloy, for example, it increases from 4.4 to 5.5%. At the same time, an increase in the copper content noticeably decreases the maximum value of ε_sme, and the 40Cu alloy is so brittle that no shape memory behavior is observed in it at all.
Figure 1.
Dependence of strain sme, recovered due to SME, on initial strain i for alloys 25Cu (a), 30Cu (b) and 35Cu (c) after isothermal treatment with different durations (100 s, 200 s and 300 s) The dependences of sme on i for alloys 25Cu and 30Cu, exposed to dynamic electro-pulse crystallization with a duration of 5 ms, have a similar character, however, the maximum strain sme increases significantly (from 5.2% to 6.5% for the 25Cu alloy and from 4.6% to 7.1% for 30Cu alloy). At the same time, dynamic crystallization of samples with higher copper content (35Cu and 40Cu) dramatically changes their thermomechanical properties (Fig. 2). In the 35Cu alloy, a sharp increase in plasticity and the maximum value of sme is observed, and in the 40Cu alloy, a significant SME appears after electro-pulse crystallization. In this case, the strain sme, restored due to the SME, continuously increases up to the fracture itself, and the alloy is able to withstand high strains until fracture. Figure 2. Dependence of strain sme, recovered due to SME, on initial strain i for TiNiCu alloys with different copper contents after isothermal crystallization for 100 s (a) and dynamic crystallization for 5 ms (b) To clarify the reasons for the observed shape memory behavior, X-ray diffraction studies of crystallized alloys were carried out. It is known that the SME in alloys of the TiNi-TiCu system originates from thermoelastic martensitic transformation В2  В19 [9,10]. The high-temperature austenitic B2phase has a bcc lattice of the CsCl type, which upon cooling transforms into the martensite B19-phase (orthorhombic lattice). It has been established in this work that in alloys 25Cu and 30Cu after isothermal heat treatment for 300 s, a martensitic state with the B19 structure is formed, which is illustrated by characteristic diffraction patterns in Fig. 3a. A decrease in the time of isothermal crystallization from 300 s to 100 s does not lead to noticeable changes in the diffractograms. It is obvious that these structural features explain the large value of the SME in these alloys. The isothermal treatment of the 35Cu alloy for 300 s predominantly forms a B11-type structure (Ti-Cu phase) (Fig. 3b), which embrittles the alloy and prevents the appearance of SME. A decrease in the duration of crystallization to 100 s causes a noticeable decrease in the intensity of the peaks of the B11-phase and the appearance of reflections of the B19 phase, as a result of which a two-phase structure (B19+B11) is formed. As a result, the alloy exhibits the SME, but its value is rather small (Fig. 1c). In the 40Cu alloy, after holding for 300 s, only reflections from the B11-phase are visible in the diffractogram (Fig. 3c). With a decrease in the processing time to 100 s, the shape of the diffraction patterns slightly changes, but, in contrast to the 35Cu alloy, the peaks of the B11-phase are completely retained and peaks of reflections from the planes of the B19-phase do not appear. Thus, the structural state of the 40Cu alloy after isothermal treatment with any duration is determined by the brittle B11phase, which explains its extremely low plasticity and the inability to exhibit SME. The microstructure of 35Cu and 40Cu alloys radically changes after high-rate electro-pulse crystallization in comparison with isothermal heat treatment. 
The main difference is that at room temperature these alloys are almost completely in the martensitic state with the B19 structure, which is confirmed by the presence of B19-phase reflections in the X-ray diffraction patterns and the absence of pronounced peaks of the brittle B11-phase (Fig. 4). When the 40Cu alloy is heated to 75°C (above A_f), the B19-phase peaks disappear and only B2-phase reflections are present; that is, the alloy passes into a completely austenitic state as a result of the B19↔B2 MT, which ensures the appearance of the SME.

Conclusions
In the present work, amorphous alloys of the TiNi-TiCu quasi-binary system with a copper content of 25 to 40 at.%, obtained by rapid quenching from the melt (by the method of planar flow casting), were crystallized in two ways: isothermal annealing at 500°C with a holding time varied from 300 to 100 s, and electro-pulse heat treatment with a duration of 5 ms. Thermomechanical bending tests and X-ray
2021-10-13T20:07:32.938Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "9f1df8997a5687d9b9d71c67e365f7465cb807b9", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/2036/1/012013", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "9f1df8997a5687d9b9d71c67e365f7465cb807b9", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
235663383
pes2o/s2orc
v3-fos-license
Differently PEGylated Polymer Nanoparticles for Pancreatic Cancer Delivery: Using a Novel Near-Infrared Emissive and Biodegradable Polymer as the Fluorescence Tracer

In this study, a chemically synthetic polymer, the benzo[1,2-b:4,5-b′]difuran (BDF)-based donor-acceptor copolymer PBDFDTBO, was individually coated with amphiphilic poly(ethylene oxide)-block-poly(ε-caprolactone) (PEO-PCL) and 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-methoxy(polyethylene glycol) (DSPE-PEG or PEG-DSPE) to form stably fluorescent nanoparticles emitting in the near-infrared (NIR) window. The physicochemical properties of the synthesized nanoparticles were characterized and compared, including their size, surface charge, and morphology. In addition, in vitro studies were performed using two pancreatic cancer cell lines, assessing the cell viability of the PBDFDTBO-loaded PEGylated nanoparticle formulations. Moreover, in vivo studies were conducted using subcutaneous murine cancer models to investigate the polymeric nanoparticles' circulation time, tumor accumulation, and preferential organ biodistribution. The overall results demonstrated that, even with the same PEGylated surface, the hydrophobic composition anchored on the encapsulated PBDFDTBO core strongly affected the biodistribution and tumor accumulation of the nanoparticles, to a degree possibly determined by the hydrophobic interactions between the hydrophobic segment of the amphiphilic polymers (the DSPE or PCL moiety) and the enwrapped PBDFDTBO. Both PEGylated nanoparticles were compared to obtain an optimized coating strategy for the desired biological features in pancreatic cancer delivery.

INTRODUCTION
Pancreatic cancer remains one of the most challenging diseases worldwide. Pancreatic ductal adenocarcinoma (PDAC) arises in the ductal epithelium of the exocrine pancreas and is mostly diagnosed only at a late stage, thereby not being resectable upon diagnosis (Kleeff et al., 2016). In the United States in 2020, 57,600 new cases were expected to develop and 47,050 people to die from this aggressive disease, making it the fourth leading cause of cancer death, with a 5-year survival rate after diagnosis of less than 5% (Bengtsson et al., 2020;Zavala et al., 2021). Despite technological advances, the poor prognosis of PDAC has remained unimproved over the past two decades. This seemingly untamable nature of PDAC mainly results from the unusual scarcity of specific biomarkers for diagnosis and the usual resistance to chemotherapeutic reagents during treatment. In this context, gemcitabine, a chemically synthesized nucleoside analog, is used as a first-line treatment in patients with pancreatic cancer who have previously undergone tumor resection. This treatment approach has a reported survival of about 6 months on average (Burris et al., 1997). The survival advantage of postoperative gemcitabine treatment was only observed in patients with lymph node metastases (Skau Rasmussen et al., 2019). Gemcitabine in combination with either erlotinib (a tyrosine kinase inhibitor) or protein-bound paclitaxel (a mitotic inhibitor) did not add a significant benefit to the patient, although an additional 1 or 2 months of survival could be gained with a costly treatment (Moore et al., 2007;Von Hoff et al., 2013).
Recently, compared with gemcitabine monotherapy, a chemotherapeutic cocktail (FOLFIRINOX) containing folinic acid (leucovorin), 5-fluorouracil, irinotecan, and oxaliplatin substantially prolonged the survival of patients diagnosed with metastatic pancreatic cancer (11.1 versus 6.8 months) (Conroy et al., 2011). However, this new regimen is only suitable for patients with good physical status and high tolerance to the drugs' toxicity. Ineffective diagnosis and treatment are strongly associated with the extraordinary heterogeneity of the pancreatic tumor microenvironment. Pancreatic tumors are enclosed by a dense stroma comprising a diversity of cellular and acellular components, so that the pancreatic cancer cells are hardly accessible via conventional pharmaceutical delivery (Liang et al., 2017; Hosein et al., 2020). In addition, unlike many other angiogenic cancers, which develop irregular blood vessels de novo, PDAC is histologically characterized as poorly vascularized (Ryan et al., 2014). The unusual richness of stroma and deficiency of vasculature, together with the common presence of hypoxia in PDAC, further reduce intratumoral blood flow, elevate interstitial pressure, and decrease the delivery of therapeutic drugs to tumor sites through the bloodstream (Ryan et al., 2014). Thus, effective and specific delivery of theranostic agents that overcomes the stromal barrier and targets the pancreatic cancer cells is imperative for precise diagnosis and specific treatment of PDAC (Han et al., 2018; Hu et al., 2021). Nanomedicine offers the unique advantage of enhanced permeability and retention in pancreatic cancers with hypovascular and poorly permeable features (Cabral et al., 2011; Meng et al., 2015). Moreover, facile conjugation of various targeting moieties, matched to different cancer origins, has enabled targeted and concentrated nano-drug delivery (Zhang et al., 2021). Among nanocarriers, polymeric nanoparticles have been widely adopted in biomedical and clinical research because of their organic nature and favorable properties, such as bioavailability, biocompatibility, and biodegradability (Tao et al., 2013; Hong et al., 2014). In particular, surface modification of nanoparticles with polyethylene glycol (PEG) has become the most common strategy to prolong the blood circulation of nanoparticles and reduce their immunogenicity, thereby increasing their accumulation in the targeted organs or tumors (Suk et al., 2016). Here, two different PEGylated amphiphilic polymers were selected to modify a novel near-infrared (NIR) emissive and biodegradable polymer for fluorescent imaging of pancreatic cancers in two subcutaneous murine models. A donor–acceptor (D–A) copolymer, PBDFDTBO, built on a benzo[1,2-b:4,5-b′]difuran (BDF) donor unit, was synthesized (Liu et al., 2012; Gao et al., 2014) and applied in this study for the first time as a fluorescent probe with bright and stable emission in the NIR-I window.
Choosing this polymeric dye rather than the smallmolecule dye is mainly due to the following reasons: (i) the polymeric dye with a conjugated system owns a larger Stokes shift, which results in much better signal-to-noise ratio (Dutta et al., 2009;Ren et al., 2018); (ii) the polymeric dye has much enhanced photostability in a variety of chemical environments with improved resistance to photobleaching (Mao et al., 2015); (iii) the polymer with high molecular weight usually enables a longer circulation time in vivo than the small-molecule substance (Ghezzi et al., 2021). Moreover, due to the hydrophobic nature of PBDFDTBO polymer, we employed the commonly adopted PEGylation strategy to modify it for a good availability in the biological system. Thus, 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-methoxy(polyethylene glycol) (PEG-DSPE; Bedu-Addo and Huang, 1995) or poly(ethylene oxide)-block-poly(ε-caprolactone) (PEO-PCL; Grossen et al., 2017) were chosen to differently PEGylate PBDFDTBO to compare their biological features for pancreatic cancer delivery. Intriguingly, it also remains unexplored whether the hydrophobic segment in PEG-based amphiphilic polymers has a modulatory effect on their pharmacokinetics, biodistributions, and tumor accumulations. The results would help reveal an optimized PEGylation strategy for nanoparticulate delivery into pancreatic tumors. Materials, Cells, and Animals PEO(5000)-b-PCL(5000) and PEG(5000)-DSPE were purchased from Xi'an Ruixi Biological Technology Co., Ltd. (Xi'an, China) and Avanti Polar Lipids Inc. (Alabaster, AL, United States), respectively. PBDFDTBO (P for short in the following figures) was synthesized and characterized as previously reported (Liu et al., 2012). Tetrahydrofuran (THF) and phosphate-buffered saline (PBS) were purchased from Vicente Biotechnology Co., Ltd. (Nanjing, China). Ultrafiltration centrifuge tubes (molecular weight cutoff or MWCO = 100 kDa) were purchased from Millipore (MA, United States). Cell Counting Kit-8 (CCK-8) assay was obtained from Dojindo (Kumamoto, Japan). Mouse pancreatic cancer cell line Panc02 was obtained from Chinese Academy of Sciences (Shanghai, China) and cultured in high glucose Dulbecco's Modified Eagle's Medium (H-DMEM, BioInd) with a 10% fetal bovine serum (FBS, Excell). Human pancreatic cancer cell line PATU-8988T was obtained from Shanghai YuBo Biotech Co., Ltd. (Shanghai, China), being cultured in H-DMEM with 10% FBS. Nude mice (6week-old female) were purchased from Changzhou Cavins Laboratory Animal Co., Ltd. (Changzhou, China). All the animal experiments were carried out following the guidelines of the Experimental Animal Administrative Committee of Jiangsu University. Syntheses of PEO-PCL-P and PEG-DSPE-P Nanoparticles In typical synthesis, PEO-PCL or PEG-DSPE were dissolved in the THF at a concentration of 20 mg/ml, and 200 µl of the solution (i.e., 4 mg PEG-based amphiphilic polymer) was then pipetted into a glass bottle, mixed thoroughly with 10 µl of 4 mg/ml PBDFDTBO solution in the THF, before the mixture was added dropwise into 5 ml dH 2 O during ultrasonication with the energy of 800-900 Joule applied continuously for 1 min to form the nanoparticles. Subsequently, the mixture was transferred to the ultrafiltration tube (MWCO = 100 kDa) and centrifuged at 1500 g for 30 min to remove the organic solvent. The residues were washed three times by PBS to ensure the complete removal of the organic solvent. 
The obtained nanoparticles were dissolved in 200 µl of PBS and filtered using a 0.22-µm polyethersulfone filter (Sigma-Aldrich, United States) for sterilization purposes before in vitro or in vivo experiments. Characterizations of Polymeric Nanoparticles One hundred microliters of the synthesized PEO-PCL-P or PEG-DSPE-P nanoparticles was added to a 96-well plate (CellStar, United States). First, the absorption wavelength of the two particles was measured by the microplate reader (CellStar, United States) and then the absorption peaks were determined. The corresponding wavelength under peak absorbance was selected as the excitation wavelengths to further acquire the emission spectra. The size and ζ-potential of PEO-PCL-P or PEG-DSPE-P nanoparticles were measured by Nanoparticle Tracking Analysis (NTA ZetaView R PMX120) or Zetasizer Nano Analyzer (Malvern ZS90). A cryogenic transmitting electron microscopy (cryo-TEM) observation of polymeric nanoparticles in solutions was carried out in a controlled-environment vitrification system. The climate chamber temperature was 25-28 • C, and the relative humidity was kept close to saturation to prevent the sample evaporation during the preparation. The samples at room temperature were placed on a carbon-coated holey film supported by a copper grid and gently blotted with filter paper to obtain a thin liquid film (20-200 nm) on the grid. The grid was quenched rapidly in liquid ethane at −180 • C and then transferred to liquid nitrogen (−196 • C) for storage. Then, the vitrified specimen stored in liquid nitrogen was transferred to a Tecnai G2 F20 cryo-microscope, using a Gatan 626 cryoholder and its workstation. The acceleration voltage was 200 kV, and the working temperature was kept below −170 • C. The images were digitally recorded with a charge-coupled device camera (Gatan) under low-dose conditions. Cytotoxicity Tests The pancreatic cancer cell lines, 4000 Panc02 cells/well and 6000 8988T cells/well, were placed into 96-well plates and six duplicate wells were set for each condition overnight, before the supernatant was then carefully discarded and the freshly cell culture solutions with pre-added PEO-PCL-P and PEG-DSPE-P nanoparticles at different concentrations were added. The cell plates were gently shaken and moved to the incubator for 24-h treatment. For the measurements, the supernatants were removed by leaving adherent cells undisturbed, and the fresh culture solutions containing 10% CCK8 reagent were next supplied for 2 h incubation at 37 • C. The absorbance of each well was measured in a microplate reader. In vivo Animal Experiments Six-week-old female nude mice were subcutaneously injected with 100 µl of 5.0 × 10 7 /ml Panc02 cells or 5.0 × 10 7 /ml 8988T cells on both left and right flanks during anesthesia. Mice were closely monitored after the tumor cells were implanted, and animal experiments were initiated when the tumor size reached ∼0.5-0.8 cm in diameter. PEO-PCL-P or PEG-DSPE-P nanoparticles were injected into the tail vein and mouse blood was sampled at the time points as indicated. At the same time, the fluorescence intensity of the tumor on the left and right sides of nude mice was also measured when co-localized through bright field observations of subcutaneous tumor regions. At the end of animal experiments, mice were sacrificed, and the major organs were extracted and measured for their ex vivo fluorescence distributions. 
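The CCK-8 readout described above is usually converted to percent viability by normalizing background-corrected absorbance of treated wells to untreated controls. The exact formula is not spelled out in the text, so the following is only a minimal sketch using the common convention and hypothetical absorbance values.

```python
import numpy as np

def cck8_viability(a_treated, a_control, a_blank):
    """Percent viability from CCK-8 absorbance: background-corrected treated
    wells divided by the background-corrected untreated control."""
    a_treated = np.asarray(a_treated, dtype=float)
    return (a_treated - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical readings for the six duplicate wells of one nanoparticle dosage.
treated_wells = [0.92, 0.95, 0.90, 0.93, 0.91, 0.94]
viability = cck8_viability(treated_wells, a_control=1.05, a_blank=0.10)
print(f"viability = {viability.mean():.1f} +/- {viability.std(ddof=1):.1f} %")
```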
Animal experiments were performed using IVIS R Spectrum In Vivo Imaging System (PerkinElmer, United States). Statistical Analysis All data were shown as the mean ± standard deviation (SD). All statistical analysis was performed with GraphPad Prism 5.0 software (GraphPad Software Inc., United States). The statistical differences between groups were analyzed using Student's t-test. A p-value of <0.05 was considered to indicate a statistically significant difference. Synthesis of NIR-I Emissive Biodegradable Nanoparticles The chemical structure of PBDFDTBO (P) is represented in Figure 1A. At first, different concentrations of PBDFDTBO were prepared in THF and measured for their absorbance in a range of wavelength from 300 to 900 nm, with findings that this polymeric substance has two absorbance peaks, one at 401 nm and the other at 574 nm ( Figure 1B). Peak values were plotted versus the corresponding PBDFDTBO concentrations, exhibiting an excellent linearity (R 2 > 0.99). Due to the strong hydrophobic nature of PBDFDTBO, we adopted two PEGbased amphiphilic polymers to encapsulate this D-A copolymer inside the hydrophobic core, making it more water-soluble. The synthetic scheme was illustrated in Figure 1C. PEG-DSPE or PEO-PCL was dissolved in THF, where PBDFDTBO solution was added to mix well before these compounds were dispersed into dH 2 O during sonication. Organic solvents were later removed, and the synthesized particles were resuspended in aqueous solutions (i.e., PBS) for the further characterizations. Absorption spectra of both synthesized PEO-PCL-P and PEG-DSPE-P nanoparticles showed the highest absorbance peak at 570 nm ( Figure 1D). Both nanoparticle solutions demonstrated fluorescence emission spectra Figure 1E, where substantial emission emerged since 750 nm with peak values at 830 nm, using 570 nm as an excitation wavelength. To optimize the encapsulation of PBDFDTBO in the amphiphilic polymer, different initial concentrations of this D-A copolymer were employed with a fixed amount of PEO-PCL or PEG-DSPE, to prepare water-soluble nanoparticles and test their fluorescent properties in different dilutions. The results were summarized and shown in Figure 2. With an initial mass ratio between amphiphilic polymer (PEO-PCL or PEG-DSPE) and PBDFDTBO = 1: 0.005, 0.01, 0.02, and 0.04, respectively, nanoparticles were synthesized as shown in section "Materials and Methods" and diluted in PBS with a dilution factor = 2, 5, 10, 20, 50, and 100, respectively. Fluorescence intensity of each formulation was measured and plotted versus the concentration of nanoparticles (in dilution as shown in Figure 2). As a result, the mass ratio of PEO-PCL or PEG-DSPE: PBDFDTBO = 1:0.01 was chosen for the nanoparticle synthesis, due to the finding that after 10 times dilution of as-synthesized nanoparticles in aqueous solutions, fluorescence intensity became linearly altered when nanoparticle concentration changed (a standard injection of 200 µl via mouse tail vein would be diluted about 10 times by presuming the whole mouse blood volume of 2 ml). In this range, detecting the fluorescence intensity of nanoparticles represents a valid measure in evaluating the nanoparticle of an unknown concentration. Characterization of Differently PEGylated Nanoparticles To calculate the composition of as-made nanoparticles, nanoparticles after typical synthesis were lyophilized and redissolved in THF, and the PBDFDTBO contained in PEO-PCL-P or PEG-DSPE-P was measured at 5.4 ± 0.3 and 7.4 ± 1.5 µg/mg nanoparticles. 
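The mass-ratio selection described above rests on checking that fluorescence stays linear in nanoparticle concentration over the relevant dilution range. A minimal sketch of such a check with scipy follows; the dilution factors are taken from the text, while the intensity values are invented for illustration (the actual readings appear only in Figure 2).

```python
import numpy as np
from scipy.stats import linregress

# Dilution factors from the text; the intensities below are illustrative only.
dilution = np.array([10, 20, 50, 100], dtype=float)
concentration = 1.0 / dilution                  # relative nanoparticle concentration
intensity = np.array([1.00, 0.52, 0.21, 0.10])  # normalized fluorescence (a.u.)

fit = linregress(concentration, intensity)
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue ** 2:.4f}")
# Accept the working range only if the fit is close to linear,
# analogous to the R^2 > 0.99 reported for the PBDFDTBO absorbance calibration.
```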
At the same time, the as-synthesized PEO-PCL-P or PEG-DSPE-P after typical synthesis and purification was measured at 3.6 ± 0.3 and 2.7 ± 0.2 mg, respectively; that is, 18.0 ± 1.5 mg/ml PEO-PCL-P and 13.5 ± 1.0 mg/ml PEG-DSPE-P in 200-µl aqueous solutions. The PEO-PCL-P or PEG-DSPE-P nanoparticles were then resuspended in dH 2 O, PBS, or serum at the same concentration as above, and tested for their photostability in different chemical environments. By normalizing each fluorescence intensity under all the tested condition (n = 5) to the highest one acquired, the results were shown in mean ± SD in Supplementary Figure 1A. For PEG-DSPE-P nanoparticles, the normalized fluorescence intensity became 0.97 ± 0.02, 0.96 ± 0.02, and 0.93 ± 0.01 in dH 2 O, PBS, and serum solutions, respectively. For PEO-PCL-P nanoparticles, the normalized intensity read was 0.98 ± 0.02, 0.98 ± 0.01, and 0.95 ± 0.02 in dH 2 O, PBS and serum, respectively. Therefore, the fluorescence intensity in serum dropped significantly for both nanoparticles compared with the corresponding results in water or PBS conditions (p < 0.05). Then, the synthesized nanoparticles were incubated in serum at 37 • C and protected from light for 6 days, and their fluorescence intensity was measured at days 0, 2, 4, and 6, and then normalized to the highest one acquired. Results are shown in Supplementary Figure 1B, as the normalized fluorescence intensity of each nanoparticle formulation was plotted over time. For both nanoparticles incubated in serum solutions, the fluorescence intensity became gradually lower over time and remained 70% of the original value after 6 days suspension. Furthermore, the synthesized nanoparticles in dH 2 O, PBS, or serum were subjected to continuous photobleaching, as formulations were exposed to the excitation of 570 nm and emission intensity at 830 nm was recorded every 30 s. The fluorescence intensity was acquired by being normalized to the highest one under each condition and plotted versus time, as shown in Supplementary Figure 1C. It was observed that the photoluminescence was stable over continuous exposure to excitation light, demonstrating a good resistance to photobleaching. Further characterizations were investigated using TEM and NTA measurements, which corroborated the size, morphology, and surface charge of both nanoparticles (Figure 3). The nanoparticle size of PEO-PCL-P was measured with a peak value of 121 nm, with a surface potential of −24 mV. Similarly, the particle size of PEG-DSPE-P was measured with a peak value of 118 nm with a surface potential of −26 mV. Their hydrodynamic diameters were visualized and verified by cryo-TEM images with a spheric morphology. Particle stability was next tested by suspending PEO-PCL-P or PEG-DSPE-P in dH 2 O and measuring their size, polydispersity index (PDI), and zeta potential over time (0, 2, and 4 days after syntheses, measured by Zetasizer Nano Instrument). Results were shown in Supplementary Figure 1D. Upon synthesis, PEO-PCL-P had a particle size of 126.3 ± 0.8 nm with a PDI of 0.231 ± 0.022 and a zeta potential of −25.3 ± 0.6 mV, whereas PEG-DSPE-P had a particle size of 123.0 ± 5.1 nm with a PDI of 0.224 ± 0.046 and a zeta potential of −29.5 ± 1.6 mV. Compared to PEO-PCL-P, PEG-DSPE-P owns indistinguishable particle size and PDI, but a slightly more negative surface charge. Moreover, during a period of 4 days, both nanoparticles showed great stability in aqueous solutions. 
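The serum-versus-buffer comparison above (p < 0.05) relies on the Student's t-test named in the Statistical Analysis section. A minimal sketch follows, with hypothetical replicate values chosen only to roughly match the reported means for PEG-DSPE-P.

```python
import numpy as np
from scipy import stats

# Hypothetical normalized-intensity replicates (n = 5 per condition),
# roughly matching the reported 0.96 +/- 0.02 (PBS) and 0.93 +/- 0.01 (serum).
pbs = np.array([0.95, 0.97, 0.98, 0.96, 0.94])
serum = np.array([0.93, 0.92, 0.94, 0.93, 0.93])

t_stat, p_value = stats.ttest_ind(pbs, serum)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant drop
```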
In vitro Toxicity of PEGylated Nanoparticles The synthesized nanoparticles were studied for their potential cytotoxicity to ensure the biosafety when applying these PEGylated nanoparticles in biological systems. Different concentrations of PEG-DSPE-P or PEO-PCL-P nanoparticles were added into the cell cultures (i.e., murine PDAC cell line PANC02 and human PDAC cell line 8988T) and tested for their toxicity at 24-h incubation. The results are shown in Supplementary Figure 2. For PANC02 cells, dosages of 0-2000 µg/ml nanoparticles were examined for cytotoxicity, in which PEO-PCL-P showed no noticeable toxicity, and PEG-DSPE-P remained non-toxic until 892 µg/ml (cell viability then dropped to 85.4 ± 7.0%). In comparison, the toxicity for 8988T cells showed a much-elevated sensitivity, as dosage of nanoparticles over 442 µg/ml started to lower the cell viability to 92.2 ± 5.1% in the presence of PEG-DSPE-P and to 80.9 ± 5.8% in the presence of PEO-PCL-P nanoparticles (p < 0.05 when compared to untreated cells), respectively. Therefore, both PEGylated nanoparticles presented a minimal toxicity to pancreatic cancer cells studied, while for the same nanoparticle, PEO-PCL-P or PEG-DSPE-P, it showed different toxicity profiles to cells with different origins. In vivo Pharmacokinetics, Biodistribution, and Tumor Accumulation Mouse (PANC02) or human (8988T) PDAC cells were transplanted into the left and right flanks of nude mice, allowing for a subcutaneous tumor growth of a 2-week period. A tumor with a diameter of approximately 0.5 or 0.8 cm was formed for 8988T or PANC02 cells, respectively. Before in vivo administration of nanoparticles, the mouse blood was obtained, and different concentrations of PEO-PCL-P or PEG-DSPE-P were added into the mouse blood for ex vivo measurement of fluorescence intensity. Results are shown in Supplementary Figure 3, and it was observed that nanoparticle concentrations less than 10 times dilution of the highest concentration tested (i.e., 18 mg/ml for PEO-PCL-P and 13.5 mg/ml for PEG-DSPE-P) exhibited a good linear function versus fluorescence intensity. Presuming a whole mouse blood volume of 2 ml, 200 µl of 18 mg/ml for PEO-PCL-P and 13.5 mg/ml for PEG-DSPE-P can be the intended injection dosage for the next in vivo study. The nanoparticles were then intravenously administered via tail-vein injection in the subcutaneous pancreatic cancer models at a dosage of 180 mg/kg mouse for PEO-PCL-P or 135 mg/kg mouse for PEG-DSPE-P. Mouse blood was withdrawn at different time points for the pharmacokinetics study and the polymeric dye-included nanoparticles were checked by their fluorescence intensities in various organs or tumors for biodistribution and tumor accumulation studies. For PANC02 cell-transplanted tumor models, both flanks bearing tumors were monitored by co-localizing the tumor region under the bright field with fluorescent imaging (Figure 4A, only showing left flank as example). The fluorescence intensities of both left and right tumors were recorded over time, being normalized to the first fluorescence reading in each tumor and plotted against time as shown in Figure 4E. Simultaneously, mouse blood was collected at different time points post-injection (p.i.) as indicated in Figure 4B, with the fluorescence intensity of mouse blood plotted versus the withdrawal time, fitting into an exponential function where the exponential decay constant was obtained and converted into the circulation half-time (t 1/2 ). 
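The circulation half-time is obtained from a mono-exponential fit to the blood-fluorescence readings. Below is a sketch of that fit with scipy.optimize.curve_fit; the sampling times and intensities are invented placeholders, since the real values are only shown in Figure 4B.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, i0, k):
    """Mono-exponential blood clearance model I(t) = I0 * exp(-k * t)."""
    return i0 * np.exp(-k * t)

# Hypothetical blood-fluorescence readings (a.u.) at hypothetical sampling times (h).
t = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)
intensity = np.array([0.95, 0.92, 0.85, 0.75, 0.62, 0.52, 0.31])

(i0, k), _ = curve_fit(exp_decay, t, intensity, p0=(1.0, 0.05))
t_half = np.log(2) / k  # decay constant converted to circulation half-time
print(f"I0 = {i0:.2f}, k = {k:.3f} 1/h, t1/2 = {t_half:.1f} h")
```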
At 24 h p.i., the mice were sacrificed, and the major organs were extracted and analyzed by ex vivo fluorescent imaging (Figure 4C). The fluorescence intensity of each organ was recorded per tissue area and normalized to that of liver (the largest organ), as shown in Figure 4D. As a result, t 1/2 for PEO-PCL-P nanoparticles in nude mice were calculated to be 14.5 ± 2.4 h. The tumor uptake at the left flank was 13.7 ± 8.9% (normalized to liver uptake) at a rate of 0.20 h −1 to reach the highest accumulation, that is, 5.7 ± 1.7 times the original at 24 h. Similarly, the tumor accumulation at the right flank was 14.8 ± 7.5% at a rate of 0.24 h −1 to reach the highest accumulation 6.6 ± 1.6 times the original at 24 h. PEG-DSPE-P nanoparticles were administered into PANC02 cell-transplanted subcutaneous cancer model via tail-vein injection, and the experiments were performed as above-mentioned. As a result, t 1/2 for PEG-DSPE-P nanoparticles in nude mice was calculated as 12.4 ± 1.4 h (Supplementary Figure 4A). The tumor uptake at the left flank was 19.7 ± 3.9% (normalized to liver uptake) at a rate of 0.06 h −1 to reach the highest accumulation 2.3 ± 0.3 times the original at 24 h (Supplementary Figure 4). The tumor accumulation at the right flank was 15.0 ± 5.8% at a rate of 0.06 h −1 to reach the highest accumulation 2.4 ± 0.4 times the original at 24 h p.i. (Supplementary Figure 4 and Table 1). PEG-DSPE-P showed a similar circulation time and tumor accumulation, but their tumor accumulation was much slower in comparison to PEO-PCL-P nanoparticles in the same mouse cancer model transplanted with PANC02 cells. The same in vivo experiments were next conducted in 8988T cell-transplanted subcutaneous cancer models of nude mice, using PEO-PCL-P (Supplementary Figure 5) or PEG-DSPE-P (Supplementary Figure 6) nanoparticles. The results are summarized in Table 1. t 1/2 was calculated as 10.6 ± 0.8 h for PEO-PCL-P and 9.4 ± 1.0 h for PEG-DSPE-P, respectively, each shorter than that of the same nanoparticles in PANC02 cell-transplanted models. Compared to PEO-PCL-P nanoparticles, PEG-DSPE-P nanoparticles achieved a much lower tumor accumulation (normalized by liver uptake) (3.8 ± 2.5 versus14.9 ± 2.3 in the left tumor, and 3.5 ± 0.7 versus 13.3 ± 3.3 in the right tumor). However, the accumulation rate and the highest accumulation between two nanoparticles in this 8988T cancer model were similar to each other. DISCUSSION In this study, a novel benzo[1,2-b:4,5-b ]difuran(BDF)-based D-A copolymer PBDFDTBO was synthesized and applied as a polymeric dye, which owned the emission in the NIR-I range upon excitation at 570 nm. Two differently PEGylated amphiphilic polymers, PEO-PCL and PEG-DSPE, were used to encapsulate the hydrophobic PBDFDTBO in the core to form water-soluble nanoparticles with a stable emission of NIR-I fluorescence upon excitation. The physicochemical properties of the synthesized PEO-PCL-P and PEG-DSPE-P nanoparticles were investigated, showing a close similarity to each other in many physical and chemical parameters, including their size, shape, and surface properties. For human and murine pancreatic cancer cell lines, both PEO-PCL-P and PEG-DSPE-P nanoparticles at the studied dosages exhibited a similar biocompatibility with no apparent toxicity, although the sensitivity to the cell type could be different based on the exact cell origin. 
In subcutaneous pancreatic cancer models in nude mice, both PEGylated nanoparticles displayed a prolonged circulation time and enhanced tumor accumulation, although the accumulation rates and the retention of the nanoparticles in tumors differed. In particular, PEO-PCL-P nanoparticles reached a higher accumulation in 8988T cell-transplanted subcutaneous tumors and a faster accumulation in PANC02 cell-transplanted subcutaneous tumors than PEG-DSPE-P nanoparticles. Poly(ethylene oxide)-block-poly(ε-caprolactone) and PEG-DSPE are among the common strategies used to cover hydrophobic substances with hydrophilic PEG and thereby fabricate stealth nanoparticles with enhanced blood circulation (Jokerst et al., 2011; Suk et al., 2016). The density and the molecular weight of the PEG chains bound to the nanoparticle surface could also contribute to the efficacy of this shielding effect. Here, both surface-PEGylated particles, PEO-PCL-P and PEG-DSPE-P, showed a similar circulation in vivo but different profiles of tumor accumulation. External PEG on the nanoparticle surface can lower the surface energy and increase the steric distance between nanoparticles (Jokerst et al., 2011), thereby minimizing particle aggregation. Furthermore, when PEG stacks in a "brush" rather than a "mushroom" conformation, it reduces non-specific cellular uptake by recruiting clusterins into the protein corona growing around the particle (Schöttler et al., 2016), which in turn suppresses further protein adsorption and nanoparticle-mediated complement activation (Pannuzzo et al., 2020). Notably, the same nanoparticle (PEO-PCL-P or PEG-DSPE-P) was cleared slightly more quickly from the bloodstream in the 8988T cell-transplanted mouse model than in the PANC02-transplanted one. The underlying reason remains unclear, but it is possible that different tumor burdens influence the overall metabolism of the mice and thereby alter blood flow (Komar et al., 2009). We observed much more rapid growth of the PANC02 cell-transplanted tumors than of the 8988T ones, suggestive of higher metabolic activity and slower blood flow; this deserves further research to be understood in depth. As a phospholipid, DSPE has a hydrophobic tail with higher affinity for the lipid bilayer of biological membranes than PCL, allowing faster and easier cellular penetration and transport (Yu et al., 2013; Zhao et al., 2013). In parallel, hydrolysis of PCL may be accelerated at acidic and alkaline pH (White et al., 2021), making PEO-PCL-P nanoparticles less stable once endocytosed and entrapped in acidic endosomes. Nevertheless, in the present study of pancreatic cancers with rich stroma and poor vasculature, PEO-PCL-P nanoparticles exhibited faster and higher tumor accumulation than PEG-DSPE-P nanoparticles of similar size, shape, and surface charge. This highlights a valuable feature: despite the similar PEGylated surface, hydrophobic interactions between the included PBDFDTBO and the hydrophobic end of the amphiphilic PEG-based polymers (DSPE or PCL) may fine-tune the biodistribution of the nanoparticles in vivo. Although the mechanism remains undetermined, the interaction between PCL and PBDFDTBO favored tumor accumulation in this study, and further experiments are ongoing to uncover it.
Nanomedicine research has now extended to targeted molecular therapy, involving not only small-molecule chemical drugs but also a variety of biologics, including nucleotide or protein drugs and cellular immunotherapy (El-Zahaby et al., 2019; Li et al., 2020; Gong et al., 2021). These therapies are important steps toward overcoming some of the unique challenges of pancreatic cancers that prevent conventional treatments from being effective. At the same time, pancreatic cancer, still the most lethal human malignancy, harbors a hypoxic and hypovascular extracellular matrix microenvironment with a dense stroma made of proliferating myofibroblasts, collagen, hyaluronic acid, and other components. Moreover, factors produced by the stroma may further support tumor survival and growth. The presence of this stroma is the major barrier against effectively treating pancreatic cancer in patients (Adiseshaiah et al., 2016). For cancers with rich stroma and poor vasculature such as PDAC, a rational design of the physicochemical characteristics of polymeric nanoscale platforms, aimed at enhancing drug delivery and accumulation within tumors, is a prerequisite and remains in high demand.

CONCLUSION
In summary, two types of PEGylated nanoparticles were compared here to identify an optimized coating strategy for a desired biological behavior in pancreatic cancer delivery. With the same PEGylation on the outer surface, the hydrophobic segments that anchor onto the encapsulated core may affect the biodistribution and tumor accumulation of the PEGylated nanoparticles, to a degree that could be determined by the hydrophobic interactions between the hydrophobic ends of the amphiphilic polymers and the enwrapped substances. This study paves a new path toward adjusting nanoparticulate systems for enhanced permeability and retention in pancreatic tumors.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
The animal study was reviewed and approved by the Jiangsu University Animal Ethics Administration Committee.

AUTHOR CONTRIBUTIONS
HC, YZ, GL, and ZT conceived the idea of the study. HC, YC, LX, and XZ performed the experiments. HC, YC, LX, YZ, GL, and ZT conducted the data analysis. HC, YC, LX, YZ, XZ, GL, DW, and ZT contributed to the writing of the manuscript and agreed on this submission and publication. All authors contributed to the article and approved the submitted version.
Geometric complexity theory and matrix powering Valiant's famous determinant versus permanent problem is the flagship problem in algebraic complexity theory. Mulmuley and Sohoni (Siam J Comput 2001, 2008) introduced geometric complexity theory, an approach to study this and related problems via algebraic geometry and representation theory. Their approach works by multiplying the permanent polynomial with a high power of a linear form (a process called padding) and then comparing the orbit closures of the determinant and the padded permanent. This padding was recently used heavily to show no-go results for the method of shifted partial derivatives (Efremenko, Landsberg, Schenck, Weyman, 2016) and for geometric complexity theory (Ikenmeyer Panova, FOCS 2016 and B\"urgisser, Ikenmeyer Panova, FOCS 2016). Following a classical homogenization result of Nisan (STOC 1991) we replace the determinant in geometric complexity theory with the trace of a variable matrix power. This gives an equivalent but much cleaner homogeneous formulation of geometric complexity theory in which the padding is removed. This radically changes the representation theoretic questions involved to prove complexity lower bounds. We prove that in this homogeneous formulation there are no orbit occurrence obstructions that prove even superlinear lower bounds on the complexity of the permanent. This is the first no-go result in geometric complexity theory that rules out superlinear lower bounds in some model. Interestingly---in contrast to the determinant---the trace of a variable matrix power is not uniquely determined by its stabilizer. Statement of the result Let per m := σ∈Sm m i=1 X i,σ(i) denote the m × m permanent polynomial and let Pow m n := tr(X m ) denote the trace of the mth power of an n×n matrix X = (X i,j ) of variables. The coordinate rings of the orbits and orbit closures C[GL n 2 Pow m n ] and C[GL n 2 per m ] are GL n 2 -representations. Let λ be an isomorphism type of an irreducible GL n 2 -representation. In this paper we prove that if n ≥ m+2 ≥ 12 and λ occurs in C[GL n 2 per m ], then λ also occurs in C[GL n 2 Pow m n ], see Theorem 2.12 below. Introduction Valiant's famous determinant versus permanent problem is a major open problem in computational complexity theory. It can be stated as follows, see Conjecture 2.1: For a polynomial p in any number of variables let the determinantal complexity dc(p) denote the smallest n ∈ N such that p can be written as the determinant p = det(A) of an n × n matrix A whose entries are affine linear forms in the variables. Throughout the paper we fix our ground field to be the complex numbers C. The permanent is of interest in combinatorics and theoretical physics, but our main interest stems from the fact that it is complete for the complexity class VNP (although the arguments in this paper remain valid if the permanent is replaced by any other VNP-complete function, mutatis mutandis). Valiant famously posed the following conjecture. For an affine subvariety Z ⊆ A n (e.g., Z = GL n 2 det n or Z = GL n 2 per n m ) the coordinate ring C[Z] is defined by restricting the functions in C[A n ] to Z. Since in our case Z will always be a cone (i.e., closed under vector space rescaling) it follows that C[Z] inherits the grading from C[A n ]. In each degree d both coordinate rings split: C[GL n 2 det n ] d = λ V ). Such λ are called occurrence obstructions. 
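To make the two polynomials concrete, here is a small numerical sketch (not part of the paper) that evaluates the m x m permanent by brute force and the trace of the m-th power of a matrix. It only illustrates the definitions, not the complexity-theoretic reduction.

```python
import numpy as np
from itertools import permutations

def permanent(a):
    """Brute-force permanent of an m x m matrix: sum over permutations sigma
    of the products a[0, sigma(0)] * ... * a[m-1, sigma(m-1)]."""
    m = a.shape[0]
    return sum(np.prod([a[i, s[i]] for i in range(m)]) for s in permutations(range(m)))

def pow_trace(x, m):
    """Pow evaluated at x: the trace of the m-th matrix power, tr(x^m)."""
    return np.trace(np.linalg.matrix_power(x, m))

rng = np.random.default_rng(0)
a = rng.integers(0, 3, size=(3, 3))
x = rng.integers(0, 3, size=(4, 4))
print("per_3(a) =", permanent(a))
print("tr(x^3)  =", pow_trace(x, 3))
# tr(x^m) is the m-th power sum of the eigenvalues of x, the symmetry
# exploited throughout the paper.
```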
It was recently shown that no lower bounds better than dc(per m ) > m 25 can be proved with occurrence obstructions [BIP16]. Mulmuley and Sohoni proposed even further to use the following upper bound for a ′ λ (d[n]) coming from the coordinate ring of the determinant orbit: The algebraic group GL n 2 is an affine variety and acts on itself by left and right multiplication. Hence GL n 2 × GL n 2 acts on the coordinate ring C[GL n 2 ]. The algebraic Peter-Weyl theorem (see e.g. [Kra85, II.3.1 Satz 3], [Pro07, Ch. 7, 3.1 Thm.], or [GW09, Thm. 4.2.7]) tells us how its coordinate ring splits as a GL n 2 ×GL n 2 -representation: C[GL n 2 ] ≃ λ V λ ⊗V λ * , where the sum is over all isomorphism types of GL n 2 and λ * is the type dual to λ. If p ∈ A n has a reductive stabilizer S ⊆ GL n 2 , then the orbit GL n 2 p is an affine variety whose coordinate ring C[GL n 2 p] is the ring of right S-invariants: C[GL n 2 p] = C[GL n 2 ] S , see [BIP16,Sec. 4.1 & 4.2]. For the determinant the stabilizer was already calculated by Frobenius [Fro97]. Functions on the orbit closure restrict to the orbit and since the orbit is dense in its closure this gives an embedding C[GL n 2 p] ⊆ C[GL n 2 p] and in each degree d we have that (2.5) C[GL n 2 p] d ⊆ C[GL n 2 p] d is a GL n 2 -subrepresentation; see also [BI17] for a study of the relationship between the two coordinate rings. The multiplicities that arise in C[GL n 2 det n ] d are much more accessible than those in C[GL n 2 det n ] d . Indeed, C[GL n 2 det n ] d = λ V ⊕sk(λ,n×d) λ . Here the nonnegative integer sk(λ, n × d) is the so-called rectangular symmetric Kronecker coefficient, a quantity that can be described completely in terms of the symmetric group as follows. The irreducible representations of the symmetric group S D are indexed by partitions λ of D into arbitrarily many parts and denoted by [λ]. For partitions λ ✤ D and µ ✤ D the group S D × S D acts irreducibly on the tensor product [λ] ⊗ [µ], but the embedding S D ֒→ S D × S D , g → (g, g) makes [λ] ⊗ [µ] an S D representation that decomposes: , where the sum is over all partitions of D and the nonnegative integers g(λ, µ, ν) are called the Kronecker coefficients. Finding a combinatorial expression for g(λ, µ, ν) is a famous open problem in algebraic combinatorics (see Problem 10 in [Sta00] [BIP16]. A natural upper bound for sk(λ, n × d) is the Kronecker coefficient g(λ, n × d, n × d). Mulmuley and Sohoni conjectured that the vanishing of g(λ, n × d, n × d) suffices to find sufficiently good orbit occurrence obstructions that prove Conj. 2.1, but recently [IP16] proved that no lower bounds better than 3m 4 can be proved in this way. Note that even the small polynomial 3m 4 would be a highly nontrivial lower bound: The best lower bound on dc(per m ) is m 2 2 by Mignon and Ressayre [MR04]. The paper [IP16] does not rule out that this lower bound could be improved using orbit occurrence obstructions and their proof is tightly optimized to yield an exponent as small as possible. Notably [IP16] does not make a statement about symmetric Kronecker coefficients because they are more challenging than Kronecker coefficients. We will see in Section 6 how trivial statements about Kronecker coefficients can become interesting if one studies symmetric Kronecker coefficients. 
The results in [IP16] (g(λ, n × d, n × d) > 0) and [BIP16] (a ′ λ (d[n]) > 0) are proved using the semigroup property in the following way: One decomposes λ into a sum of smaller partitions, then shows positivity for the smaller partitions, and then uses the semigroup property. In both papers Prop. 2.4 is heavily used because it enables us to assume that the smaller partitions have an almost arbitrarily chosen first part. This simplifies the construction of these positive building blocks considerably. Prop. 2.4 crucially uses that the permanent is padded with a high power of a linear form. Moreover, also crucially using this padding, [ELSW16] showed that the method of shifted partial derivatives applied to Prop. 2.2 cannot be used to prove Conj. 2.1. In the light of these no-go results we remove the necessity of the padding in the next section. 2.B. The homogeneous setting. Using a result by Nisan [Nis91] Prop. 2.2 and the whole geometric complexity theory approach can be reformulated without padding the permanent: Let A m n denote the space of homogeneous degree m polynomials in n 2 variables. Let Pow m n := tr(X m ) ∈ A m n , where X = (X i,j ) is the n × n variable matrix. Let pc(per m ) denote the smallest n such that per m can be written as p = tr(A m ), where A is an n × n matrix whose entries are homogeneous linear forms. It In Section 4 we calculate how the coordinate ring of the orbit GL n 2 Pow m n splits. This is based on knowing the stabilizer of Pow m n : 2.9. Theorem. Let X = (X i,j ) be an n × n variable matrix. Then tr(X m ) = tr((X t ) m ) and tr(X m ) = tr((gXg −1 ) m ), where g ∈ GL n , and tr(X m ) = tr((ωX) m ), where ω is an m-th root of unity. Moreover, if n, m ≥ 3, the whole stabilizer S of Pow m n is generated by these symmetries. Theorem 2.9 is proved in Section 5. , where the sum is over all λ ✤ md and sm(λ, n) := µ ✤ n dm sk(λ, µ) is a sum of symmetric Kronecker coefficients. Note that sm(λ, n) does not depend on d and m independently, but only on their product dm = |λ|, therefore the notation sm(λ, n) is justified. We call these λ orbit occurrence obstructions. We prove that no superlinear lower bounds can be proved with orbit occurrence obstructions: 2.12. Theorem (Main Result). Let m ≥ 10 and n ≥ m + 2. For every λ ✤ dm that satisfies This is the first time that the possibility of superlinear lower bounds is ruled out in geometric complexity theory. Note that in contrast to [IP16] we work directly with the multiplicities in the coordinate ring of the orbit and not with any upper bound. The methods used to prove this result differ greatly from [IP16], in particular [BIP16] lifts the result in [IP16] to the closure, which appears to be challenging in the homogeneous setting because of the absence of the padding. 2.13. Remark. We remark that even though the homogeneous setting is equivalent to the padded setting in terms of algebraic complexity theory in a very natural way, Pow m n is not characterized by its stabilizer (see Prop. 5.9), unlike the determinant. Obtaining a homogeneous setting in which the computational model is characterized by its stabilizer is also possible: one has to study the orbit closure of the m-factor iterated n × n matrix multiplication, a polynomial in mn 2 variables, which seems to be even more challenging. Its stabilizer has been identified in [Ges16]. [BLMW11], just because the per m has only m 2 variables. We prove a slightly more general result in Section 3 (Proposition 3.7, where L = m 2 ). 
Acknowledgments We thank Neeraj Kayal, Michael Forbes, and Pierre Lairez for helpful discussions on arithmetic circuits. We thank Peter Bürgisser for helpful insights on the subgroup restriction problem at hand. The third author was partially supported by NSF. Occurrence in the coordinate ring of the orbit Here we prove that the relevant multiplicities sm(λ, n) are positive in all cases of interest, and in particular we prove Proposition 2.16. We list the necessary facts, their proofs appear in the corresponding sections. If |λ| is a multiple of some m ≥ 3, i.e., |λ| = dm, then by Theorem 2.10 sm(λ, n) is the multiplicity of λ in C[GL n 2 Pow m n ] d and thus part (1) holds by multiplying two highest weight vector functions, provided both |λ| and |ν| are divisible by the same number m ≥ 3. The approach to proving the general case is very similar, as we will see next. Proof of Proposition 3.2. We write λ ✤ n d to denote that λ is a partition of d into at most n parts. Let V ≃ C n and let V * be its dual. The space V * ⊗V is naturally isomorphic to V * ⊗V * * = V * ⊗V . This gives rise to a natural automorphism on V * ⊗ V that has order 2, i.e., this gives an S 2 action on V * ⊗ V . Thus we get an S 2 action on V ⊗ V * ⊗ V keeping the first tensor factor fixed. This induces and S 2 action on d (V ⊗ V * ⊗ V ). Since the S 2 action commutes with the natural action In this way G acts of Sym d (V ⊗ V * ⊗ V ) and the actions of S 2 and G commute. Thus we have an action of S 2 on the highest weight vector space HWV λ,µ (Sym d 3 V ). Schur-Weyl duality says as GL 3 -modules, where µ * = (−µ n , −µ n−1 , . . . , −µ 1 ), so that {µ * } is the GL-module dual to {µ}. We write ρ n k for a nonincreasing sequence of n integers that sum up to k. As G-modules we have where c ρ µ,ν * is the Littlewood-Richardson coefficient (which is naturally defined not only for partitions, but for nonincreasing sequences of integers). Recall G = GL × GL. We want to distinguish between the left and the right factor and therefore we denote by H the right factor, i.e. G = GL×H. Going to H-invariants we see that the Littlewood-Richardson coefficients are either 1 or 0, so Since {λ} contains a unique highest weight vector line of type λ and no other highest weight vector, going to GL-highest weight vector spaces yields because there is a unique HWV in every GL-representation. S 2 acts on this space and we take invariants: Since the action of S d commutes with the actions of GL 3 and S 2 , we can take S d invariants and obtain Completely analogously we can take S d skew-invariants (denoted by skew-S 2 ) and obtain We conclude the proof by analyzing what happens when we multiply two highest weight vector The inequality that we need to show follows from multiplying a basis of observing that the resulting vectors are still linearly independent. Clearly we can also switch the roles of λ and ν, so part (1) is proved. We proceed completely analogously for part (2) and (3). Next, in order to prove the positivity of sm we need some positivity results for particular symmetric and skew-symmetric Kronecker coefficients. For a self-conjugate partition λ we consider the number of boxes that are not on the main diagonal of its Young diagram. Since λ is self-conjugate, this number is even. Half of them are above the main diagonal and half of them below. For a self-conjugate partition define its sign sgn(λ) to be 1 if the number of boxes above the main diagonal is even, −1 otherwise. 
This is proved in Section 6 using the tableaux basis for the irreducible representations of the symmetric group S D . We now consider the positivity of sm and show that it is positive for almost all cases. First, we prove it when λ is a single column. When λ has more columns we apply the semigroup property to the sum of its columns to derive positivity. Set X s := {2, 3, 4, 7, 8, 12} and X a := {1, 2, 5, 6, 10, 14}, as the next statement shows these are exactly the sets of exceptional column lengths, for which sm, respectively am, is 0. Let now c 1 = 1, and set d : . We have that δ ⊢ a, ℓ(δ) ≤ d + 4 = b + 3, and δ = δ t . Moreover the number of boxes above the diagonal of these partitions is 1 2 (a − d) and 1 2 (a − d + 2) = 1 2 (a − d) + 1, so again one is even and one odd, and we set them to µ and ν respectively. Finally, when a ≤ 99, so b ≤ 9, we treat the cases as above, noting that the problematic places arise when c 1 = 1 and some of the inequalities r 1 − d + 7 ≤ d − 2 or 5 > d − 2 fails. In these cases we replace the problematic 1 r 1 −d+7 or 1 5 by thicker partitions with at most 12 − (d + 2) = 11 − b parts. Proof. We use a program written by Harm Derksen and adjusted by Jesko Hüttenhain that was already used to generate the computational data in [Ike12]. A direct computation for partitions λ with ℓ(λ) ≤ 12 and λ 1 ≤ 3 shows that sm(λ, 7) > 0 except for the cases listed above. We also verify that sm(λ, 7) > 0 for the partitions with ℓ(λ) = 13, 14 and λ 2 = 1, 2 or λ 1 = 4 and ℓ(λ) ≤ 4. If ℓ(λ) = 13, 14 and λ 1 = 3, then the cases when the second 2 columns of λ form one of the exceptional partitions listed above, we have |λ| ≤ 23 and we check by direct computation. Otherwise the second 2 columns have positive sm and adding them to the first column by the semigroup property we have sm(λ, 7) > 0. Finally, we consider the positivity of the classical Kronecker coefficients, as they are needed to derive sm positivity in some other exceptional situations. Proof. Let X := (2 a 2 , 3 a 3 , 4 a 4 , 7 a 7 , 8 a 8 , 12 a 12 ) be the multiset of columns in λ which are of the exceptional lengths X s , and let β be the partition formed by them. Let x := a 2 + a 3 + a 4 + a 7 + a 8 + a 12 , and let α be the partition formed by the nonexceptional columns of λ, so λ = α + β. By Proposition 3.4 we have that each column 1 k in α, sm(1 k , ℓ) > 0 and so by the semigroup property adding these columns we get sm(α, ℓ) > 0. Suppose for the rest of the proof that x = 1, so there is exactly one column of length r ∈ X s . Since λ is not one of the exceptional partitions, it must have at least one more column k and since x = 1, we must have k ∈ X s . Let first r = 2, then r ∈ X a . Suppose that k ∈ X a as well. By Proposition 3.4 we have am(1 k , ℓ) > 0 and by the am semigroup property we have sm(1 k + 1 r , ℓ) > 0. The remaining columns of λ are ∈ X s , so also have positive sm, and we can add them all to obtain sm(λ, ℓ) > 0 by the sm-semigroup. If k ∈ X a , then k ≤ 14 and so sm(1 k + 1 r , ℓ) > 0 by Proposition 3.5. We can now derive the proof of Proposition 2.16. Stabilizer-invariants in the Schur modules In this section, we prove Thm. 2.10. We introduce the notation that we will need in this section. Let E be a vector space of dimension n, let E * be its dual space. Define V = E * ⊗ E = End(E). We have that Pow m n ∈ S m V is defined by Pow m n (X) = tr(X m ) for any X ∈ V * . For any two vector spaces W, W ′ and any invertible linear map f : W → W ′ , f −T : W * → W ′ * denotes its transpose inverse. 
We are interested in the stabilizer of Pow m n in GL(V ), that is S := {g ∈ GL(V ) | g · Pow m n = Pow m n }. It is characterized by the following theorem. 4.1. Theorem. If n, m ≥ 3, The stabilizer of Pow m n in GL(V ) is − → E is a vector space isomorphism identifying a basis of E with its dual basis. The proof of Theorem 4.1 is given in Section 5. Denote S 0 = PGL(E) ⊆ S. Let π be a partition, π ✤ d with length ℓ(π) ≤ n 2 . The space of S-invariants in the Schur module S π V will be determined in two steps. First, we will determine the space of S 0 × ω m Id V invariants in S π V : this space is 0 if d is not a multiple of m and it is the space of It is immediate that, if d is not a multiple of m, then S π V does not contain non-zero invariants, because ω m Id V acts on S π V by multiplication by ω d m , that is 1 if and only if d is a multiple of m. We proceed as in [BI11] to determine the space of S 0 -invariants. Proof. S 0 is the image of GL(E) in GL(V ) via the adjoint representation, so the S 0 -invariant subspace in S π V coincides with the GL(E)-invariant subspace. We have the following decomposition under the action of GL(E) (see e.g. [Ike12,Sec. 4.4]): where K µ,ν π is a multiplicity space whose dimension is the Kronecker coefficient g(π, µ, ν). In particular, the action of GL(E) on K µ,ν π is trivial. Moreover, if ℓ(µ) > n or ℓ(ν) > n, then The dimension of this space is µ ✤ d ℓ(µ)≤n g(π, µ, µ). In order to determine the space of τ -invariants in [S π V ] S 0 , we will study the action of τ on the right-hand side of (4.3). We follow the discussion of [BLMW11, Sec. 5.2]. If W is a vector space of dimension n and λ is a partition λ ✤ d, ℓ(λ) ≤ n, then, by Schur-Weyl duality S λ W = Hom S d ([λ], V ⊗d ), where [λ] is the Specht module associated to λ. Given partitions π, µ, ν ✤ d, by definition of the Kronecker coefficient K µ,ν π = Hom S d ([π], [µ]⊗[ν]). For every π, µ, ν, the following GL(E)-equivariant map realizes a summand on the right-hand side of (4.3) as submodule of S π (E * ⊗ E): where we use the reordering (E * ⊗ E) ⊗d ≃ E * ⊗d ⊗ E ⊗d (maintaining the relative order of the copies of E and of the copies of E * ). Notice that the isomorphism δ : E * ∼ − → E induces a vector space isomorphism E * ⊗d ∼ − → E ⊗d and that restricts to S λ E * ∼ − → S λ E for every λ ✤ d. Similarly, the map τ ∈ GL(V ) acts on (E * ⊗ E) ⊗d : its action commutes with the action of S d , so it passes to the components S π (E * ⊗ E). More precisely, if ψ ∈ S π (E * ⊗ E) = Hom S d ([π], (E * ⊗ E) ⊗d ) then τ (ψ) = τ ⊗d • ψ, that is the composition For every π, µ, ν, there is an isomorphism σ π µ,ν : K µ,ν π → K ν,µ π obtained via the composition of an element ψ with the canonical isomorphism [µ] ⊗ [ν] ≃ [ν] ⊗ [µ]; in particular σ π µ,ν is the inverse of σ π ν,µ and σ π λ,λ is an element of order 2 acting on K λ,λ π . Consider the diagram where the horizontal arrows from left to right are the GL(E * ) × GL(E)-equivariant embeddings as in (4.5), the horizontal arrows from right to left are the corresponding projections, the vertical arrow on the right is the τ ⊗d as in (4.6) and the vertical arrow on the left is the map sending . A straightforward calculation shows that the diagram commutes. In particular, the action of τ restricts to the summands of (4.3) where µ = ν as In particular, we need to show that τ ⊗d (Id SµE ) = Id SµE (up to scale). But this is clear as τ , by definition, preserves Id E and so Id SµE . Now, we can conclude 4.9. Theorem. 
If π ✤ d, and d is a multiple of m, then the space of S-invariants in S π V is ). In particular, its dimension is sm(π, n) = µ⊢d,ℓ(µ)≤n sk(π, µ). Proof. The entire S π V is invariant under the cyclic group ω m ⊆ S, therefore the space of Sinvariants in S π V coincides with the subspace of τ -invariants in [S π V ] S 0 . Restricting to the space of GL(E)-invariants, from Lemma 4.8, τ acts on each summand K π λ,λ ⊗ Id SµE as in (4.7). ) is the invariant subspace under the action of τ and by definition its dimension is sk(π, µ). Theorem 2.10 follows from Theorem 4.9 via Peter-Weyl Theorem, as explained in Section 2. Proof of the stabilizer Theorem 4.1 In this section m, n ≥ 3. Fix a basis e 1 , . . . , e n of E and its dual basis η 1 , . . . , η n . Write The expression of Pow m n in coordinates is Write ξ j i for the dual basis of x i j : we can identify ξ j i with the differential operator ∂ ∂x i j . If G is a group and H is a subgroup, we denote by N G (H) = {g ∈ G : gHg −1 = H} and C G (H) = {g ∈ G : ∀h ∈ H ghg −1 = h}, respectively, the normalizer and the centralizer of H in G. For a group G, let Aut(G) denote the group of automorphisms of G. There is a natural group homomorphism G → Aut(G), given by h → (φ h : g → hgh −1 ); the kernel of this homomorphism is Z(G), the center of G; the image of this homomorphism is denoted by Inn(G), the group of inner automorphisms of G. Inn(G) is a normal subgroup of Aut(G): let Out(G) = Aut(G)/ Inn(G) be the quotient group, called the group of outer automorphisms of G. The stabilizer S of Pow m n inherits the Zariski topology of the space End(V ); let S 0 denote the connected component of the identity in S. In this section we prove Thm. 4.1. First, we state the following standard fact: 5.1. Lemma (e.g. [Ges16], Lemma 2.1). Let f ∈ S d W be a polynomial and let G be a connected Lie group acting on W . Let G f be the stabilizer of f in G and let G 0 f be the connected component of the identity in G f . Then G f ⊆ N G (G 0 f ). Applying Lemma 5.1 to f = Pow m n (in the setting of the lemma we have The outline of the proof is as follows: first we will determine the connected subgroup S 0 of S; the second step is determining N GL(V ) (S 0 ) that will be obtained by studying the action of N GL(V ) (S 0 ) on S 0 via conjugation; finally we will determine S exploiting its action on S 0 via conjugation. The following observation is important to determine the connected subgroup S 0 . The subgroup S 0 will be given by the image of the adjoint representation of GL(E), that is the homomorphism ad : The kernel of ad is the center of GL(E) and its image is denote by PGL(E) ⊆ GL(E * ⊗ E). Proof. Let Ad : End(E) → End(V ) be the differential of ad. We will prove that ann End(V ) (Pow m n ) = Im(Ad). Observation 5.3 and the universality of the exponential map (see e.g. [Hal15,Prop. 3.28]) will allow us to conclude that S 0 = ad(GL(E)) = PGL(E). Finally, suppose ξ 1 2 ⊗ x 2 1 appears in L. In (ξ 1 2 ⊗ x 2 1 ) · Pow m n , we obtain monomials of the form We observe that the only other basis elements that can generate this monomial are ξ 1 ℓ ⊗ x ℓ 2 and ξ ℓ 1 ⊗ x 1 ℓ . In the first case, we already saw that L has to contain a term generated by elements in the image of Ad. In the second case, we can repeat the argument as we did above, and we observe that L contains all the terms in Ad(η 1 ⊗ e 1 ). This concludes the proof that Im(Ad) = ann End(V ) (Pow m n ) and so the proof of the Proposition. Recall from (5.2) that S 0 ⊆ S ⊆ N GL(V ) (S 0 ). 
The next step toward the proof of Theorem 4.1 is to determine N GL(V ) (S 0 ). We will prove that, as an abstract group, where, PGL(E) = S 0 , C * ×2 is the centralizer C GL(V ) (S 0 ) and τ is an element of order 2 acting on S 0 as in the statement of Theorem 4.1 and on C * ×2 via (c 1 , c 2 ) → (c −1 1 , c −1 2 ). In order to determine the factors of N GL(V ) (S 0 ), the following general observation will be useful. This allows us to determine N GL(V ) (S 0 ) by determining first its centralizer and then realizing the outer automorphisms of S 0 via conjugation by an element of GL(V ). is the subspace of traceless endomorphisms in End(E). The fact that g ∈ C GL(V ) (S 0 ) is equivalent to the fact that g : V → V is S 0 -equivariant. By Schur's Lemma, g acts by non-zero scalars on the irreducible components of V under the action of S 0 : we conclude C GL(V ) (S 0 ) = C * Id CId E × C * Id sl(E) . Moreover, it is known that, if n ≥ 3, then Out(S 0 ) ≃ Z 2 and an outer automorphism can be realized as follows. Consider the automorphism of SL(E) defined as follows: It is easy to observe thatτ 0 is an isomorphism. If we fix coordinates and we identify SL(E) with the group of n × n matrices whose determinant is 1, thenτ 0 : A → A −T . In particular, it maps the center of SL(E) to itself and therefore it descends to the quotient, defining an isomorphism It turns out that τ 0 is an outer automorphism and that it is unique up to conjugation by an inner automorphism (corresponding to the choice of the identification δ). See [Die71, Ch. 3] for details. Now, we can characterize N GL(V ) (S 0 ). 5.8. Proposition. The normalizer N GL(V ) (S 0 ) is An element (c 1 , c 2 ) ∈ C * ×2 acts as c 1 Id Id E × c 2 Id Id sl(E) and τ acts via τ : η ⊗ e → δ −1 (e) ⊗ δ(η). Proof. It is straightforward to verify that, τ is an element of GL(V ) of order 2 and if s ∈ S 0 , then τ sτ −1 = τ 0 (s). This proves that (S 0 × C * ×2 ) ⋊ τ ⊆ N GL(V ) (S 0 ). Passing to the quotient modulo S 0 × C * ×2 , we obtain τ ⊆ Out(S 0 ) and since they both have order 2 we conclude that they are the same. The polynomial Pow m n is not characterized by its stabilizer S. But we can characterize the subspace of polynomials that are stabilized by S. 5.9. Proposition. Let f ∈ S m V . Then f is stabilized by the action of S if and only if it is a homogeneous symmetric polynomial of degree m in the eigenvalues of the elements of V * . The space of these polynomials has dimension #{γ ⊢ m, ℓ(γ) ≤ n} -the number of partitions of m in at most n parts. When n ≥ m this is asymptotically ∼ 1 4m √ 3 exp π 2m 3 . Proof. After fixing coordinates, V * is identified with the space of n × n matrices, f is a polynomial in matrix entries and g ∈ PGL(E) ⊆ S acts via conjugation by any element S g ∈ GL(E) whose image in PGL(E) is g. We will prove that f coincides with a symmetric function of the eigenvalus of the elements of V * on the dense subset of diagonalizable matrices. Passing to the closure we conclude. Let A be a diagonalizable matrix in V * , namely there exists S ∈ GL(E) such that D = S −1 AS is diagonal and its diagonal entries are the eigenvalues of A. In particular f (A) = f (D); the eigenvalues of D are the same as the eigenvalues of A and f is a polynomial in the entries of D, so f is a polynomial in the eigenvalues of A (and clearly it is homogeneous of degree m). Moreover, conjugation by a permutation matrix permutes the diagonal entries of D, therefore f is a symmetric polynomial. 
Conversely, for A ∈ V * , denote by Σ A the set of the eigenvalues of A. Let g ∈ S: we have that Σ gA = ω ′ m Σ A , where ω ′ m is an m-th root of 1. A symmetric polynomial of degree m has the same value on Σ A and Σ gA ; in particular f (A) = f (gA). The space of symmetric polynomials of degree m is spanned by the basis {e α |α ⊢ m}, where e α := e α 1 e α 2 · · · and e k (x 1 , x 2 , . . .) = i 1 <i 2 <···<i k x i 1 x i 2 · · · x i k are the elementary symmetric polynomials, see e.g. [FH91]. When the number of variables is n we must have α i ≤ n, else e α i = 0, and the dimension is given by #{α ⊢ m|α 1 ≤ n}, via conjugation γ = α t , this is equivalent to the number of partitions γ with ℓ(γ) ≤ n. If m ≤ n, then we have α 1 ≤ m ≤ n, and there is no further restriction on these partitions. The asymptotics is then given by the classical formula of Hardy-Ramanujan for integer partitions. 5.10. Observation. If t 1 , . . . , t n are the eigenvalues of A ∈ V * , then Pow m n (A) = t m 1 + · · · + t m n , that is indeed a symmetric polynomial in t 1 , . . . , t n . Moreover, the argument used in the first part of the proof of Prop. 5.9 applies to every degree, showing that f is invariant under the action of PGL(E) if and only if it is a symmetric function of the eigenvalues. In particular, the k-th elementary symmetric function of the eigenvalues (namely the coefficients of t n−k in the characteristic polynomial) is stabilized byS = (S 0 × ω k ) ⋊ τ ; in factS is the entire stabilizer [LP01, Thm. 3.4]. Symmetric Kronecker coefficients of columns In this section we prove Theorem 3.3. The irreducible S D representation of type λ ✤ D has a concrete description as follows [Ful97,p. 110], see also [Ike12,Sec. 4.1]. A tableau of shape λ is a filling of the boxes of the Young diagram corresponding to λ with entries 1, 2, . . . , |λ|. Let T (λ) denote the set of all tableaux of shape λ. Then C T (λ) is a finite dimensional vector space with an action of S D . We will quotient out a linear subspace K(λ) as follows: • Given tableaux T 1 and T 2 of shape λ. Then T 1 + T 2 ∈ K(λ) if T 2 arises from T 1 by switching two entries in a column. This relation is called the Grassmann relation. • Given a tableau T . Then T + S S ∈ K(λ), where the sum goes over all tableaux S that arise from T by exchanging for some j and k the top k elements from the (j + 1)th column with any selection of k elements in the jth column, preserving their vertical order. This relation is called the Plücker relation. Our argument will only need the Grassmann relation. In the light of Theorem 6.1 we identify [λ] with T (λ)/K(λ). We will always think of tableaux of shape λ as being representatives of cosets in T (λ)/K(λ) = [λ]. In particular [λ] is generated as a vector space by tableaux of shape λ and [π] is a triple of tableaux of shape (π, λ, λ). Proof. This immediately follows from the fact that the set {T ′ 1 ⊗ T ′ 2 ⊗ T ′ 3 | T ′ 1 of shape π, T ′ 2 of shape λ, T ′ 3 of shape λ} is a generating set of [π] ⊗ [λ] ⊗ [λ] and that P is the linear projection onto the S D invariant subspace. For a shape λ there is a unique tableau whose entries increase from top to bottom, left to right, columnwise. We call it the column standard tableau of shape λ. Analogously, for a shape λ there is a unique tableau whose entries increase from top to bottom, left to right, rowwise. We call it the row standard tableau of shape λ. For example, if λ = (4, 3, 1), its column standard tableau is 1 4 6 8 2 5 7 3 , and the row standard is 1 2 3 4 5 6 7 8 . 
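The column standard and row standard fillings described above are easy to generate programmatically; the following small helper (illustrative only) reproduces the λ = (4, 3, 1) example:

```python
def row_standard(shape):
    """Fill the Young diagram row by row, left to right, with 1..|lambda|."""
    tableau, next_entry = [], 1
    for row_len in shape:
        tableau.append(list(range(next_entry, next_entry + row_len)))
        next_entry += row_len
    return tableau

def column_standard(shape):
    """Fill the Young diagram column by column, top to bottom, with 1..|lambda|."""
    tableau = [[None] * row_len for row_len in shape]
    next_entry = 1
    for c in range((shape[0])):                  # columns, left to right
        for r, row_len in enumerate(shape):
            if c < row_len:
                tableau[r][c] = next_entry
                next_entry += 1
    return tableau

print(column_standard((4, 3, 1)))   # [[1, 4, 6, 8], [2, 5, 7], [3]]
print(row_standard((4, 3, 1)))      # [[1, 2, 3, 4], [5, 6, 7], [8]]
```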
If λ is just a column or a row, then the row standard and column standard tableaux coincide and we call it the standard tableau. 6.3. Lemma. Let π = (D × 1) and let λ be self conjugate. If T 1 is the standard tableau of shape (D × 1), T 2 is the row standard of shape λ, and T 3 is the column standard of shape λ, then P (T 1 ⊗ T 2 ⊗ T 3 ) = 0. switching boxes above the main diagonal with the corresponding box at the transpose position. Therefore sgn(σ) = sgn(λ), which concludes the proof. Proof of Theorem 3.3. Let T 1 be standard of shape (D × 1), T 2 be row standard of shape λ, and T 3 be column standard of shape λ. Vanishing of plethysm coefficients In this section we prove Prop. 2.15. Proof of Prop. 2.15. Let λ ✤ md with λ 1 < m. We want to show that a λ (d[m]) = 0. An known upper bound for a λ (d[m]) are the so-called Kostka numbers K λ,d×m : which are quantities for which a classical combinatorial description is known. We will prove Prop. 2.15 by proving the following stronger statement: If λ 1 < m, then K λ,d×m = 0. The upper bound (7.1) follows for example directly from [Gay76], see also the exposition in [Ike12, Thm. 4.3.8]. The Kostka numbers have a combinatorial interpretation as follows. A semistandard Young tableau of shape λ and content µ is a filling of the boxes of the Young diagram of λ with entries 1, 2, . . . , ℓ(µ) such that every entry i appears exactly µ i times and such that • the entries are strictly increasing in each column from top to bottom and • the entries are nondecreasing in each row from left to right. For example 1 1 1 2 2 2 3 is a semistandard Young tableau of shape (4, 2, 1) and content (3, 3, 1). The Kostka number K λ,µ counts the number of semistandard Young diagram of shape λ and content µ. Given a partition λ ✤ md with λ 1 < m. We claim that K λ,d×m = 0. Indeed, the pigeonhole principle says that for every placement of m 1s to the boxes of λ we will end up with at least one column containing the number 1 at least twice. Therefore if µ 1 > λ 1 there is no semistandard Young tableau of shape λ and content µ, so we have K λ,µ = 0. Setting µ = d × m and observing that µ 1 = m we conclude that µ 1 > λ 1 . Therefore K λ,d×m = 0. Kronecker positivity Here we consider the positivity of the Kronecker coefficients when one partition is a 2-row or 2-column, which is used to derive some of the positivity results for sm in Section 3. 8.1. Proposition. We have that g ((a, b), ν, ν) > 0 for all partitions ν ⊢ a + b, such that d(ν) ≥ √ 2b + 1 and d(ν) ≥ 7, where d(ν) is the Durfee size of ν (i.e. the length of main diagonal of ν).
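The vanishing argument of Section 7 is easy to test on small cases, since the Kostka number K λ,µ can be computed by brute-force enumeration of semistandard Young tableaux. The sketch below is purely illustrative (the function name and backtracking scheme are chosen here, not taken from the paper); it checks the (4, 2, 1)/(3, 3, 1) example and the pigeonhole vanishing K λ,d×m = 0 when λ1 < m:

```python
def kostka(shape, content):
    """Count semistandard Young tableaux of the given shape and content by
    backtracking: rows weakly increase, columns strictly increase, and
    entry i (1-based) is used exactly content[i-1] times."""
    assert sum(shape) == sum(content)
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    remaining = list(content)
    tableau = [[None] * row_len for row_len in shape]

    def fill(i):
        if i == len(cells):
            return 1
        r, c = cells[i]
        count = 0
        for v in range(len(content)):
            if remaining[v] == 0:
                continue
            if c > 0 and tableau[r][c - 1] > v:    # row must weakly increase
                continue
            if r > 0 and tableau[r - 1][c] >= v:   # column must strictly increase
                continue
            remaining[v] -= 1
            tableau[r][c] = v
            count += fill(i + 1)
            remaining[v] += 1
            tableau[r][c] = None
        return count

    return fill(0)

print(kostka((4, 2, 1), (3, 3, 1)))     # the example shape/content above: positive
print(kostka((2, 2, 2), (3, 3)))        # lambda_1 = 2 < m = 3, d = 2: gives 0
```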
2016-11-04T21:38:15.820Z
2016-11-02T00:00:00.000
{ "year": 2017, "sha1": "4b9443fc3ba6987dcb08692487d3d9fcc5b83aa9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1611.00827", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "265b062891ddad5755d819a2e35446d8f77ef0d1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
29165629
pes2o/s2orc
v3-fos-license
Successful Formulation and Application of Plant Growth-Promoting Kosakonia radicincitans in Maize Cultivation The global market for biosupplements is expected to grow by 14 percent between 2014 and 2019 as a consequence of the proven benefits of biosupplements on crop yields, soil fertility, and fertilizer efficiency. One important segment of biosupplements is plant growth-promoting bacteria (PGPB). Although many potential PGPB have been discovered, suitable biotechnological processing and shelf-life stability of the bacteria are challenges to overcome for their successful use as biosupplements. Here, the plant growth-promoting Gram-negative strain Kosakonia radicincitans DSM 16656T (family Enterobacteriaceae) was biotechnologically processed and applied in the field. Solid or liquid formulations of K. radicincitans were diluted in water and sprayed on young maize plants (Zea mays L.). Shelf-life stability tests of formulated bacteria were performed under 4°C and −20°C storage conditions. In parallel, the bacterial formulations were tested at three different farm level field plots characterized by different soil properties. Maize yield was recorded at harvest time, and both formulations increased maize yields in silage as well as grain maize, underlining their positive impact on different agricultural systems. Our results demonstrate that bacteria of the family Enterobacteriaceae, although incapable of forming spores, can be processed to successful biosupplements. Introduction Within the next 40 years agricultural production must increase by 60% to meet worldwide demands for food. However, arable land will only increase by five percent by 2050, and to date 25% of the arable land is already severely degraded. Both increasing food demands and diminishing arable land call for strategies to intensify agricultural systems without harming the environment. Maize, rice, and wheat provide at least 30% of food calories to more than 4.5 billion people in developing countries [1]. Maize mainly serves as fodder for livestock, where maize silage is an important forage component for ruminants, especially dairy cows. A minor portion of silage maize, 0.8 billion hectares, is used for producing ethanol and biogas. Grain maize serves as forage for pig fattening in stock farming. Since maize contains high amounts of starch, it is also of particular interest to the food industry as maize meal in human food, as well as for the paper producing industry. In 2016, the total land area used for maize cultivation in Germany was 4.163 million hectares, and a total of 4,017,800 tonnes of maize was produced in Germany [2]. Maize cultivation is highly nutrient demanding, particularly for mineral nitrogen such as nitrate and ammonia. However, synthetic production (and field application) of mineral nitrogen is cost-and energy-intensive. Complementary procedure such as using biological atmospheric nitrogen fixing leguminous crops like beans or clover in crop rotation is one strategy to provide additional nitrogen to plants. Another strategy is the application of organic fertilizers as manure or slurry. Both strategies require microorganisms that convert atmospheric nitrogen or organic nitrogen into mineral nitrogen. Drawbacks associated with organic fertilization include significant nitrogen losses through ammonia volatilization, nitrates leaching into groundwater, and denitrification, as reviewed by Cameron et al. [3]. 
The application of plant-supporting microorganisms such as arbuscular mycorrhizal (AM) fungi, other fungi, and plant growth-promoting bacteria (PGPB) offers an attractive alternative strategy [4][5][6][7], since their application increases crop yields without adding additional mineral nitrogen. In maize, studies have shown how the application of microorganisms contributes to the growth of plants [8][9][10][11][12]. Among the microorganisms that promote growth and yield of maize is the Gram-negative, rod shaped bacterium Kosakonia radicincitans from the family of Enterobacteriaceae [13,14]. In previous studies we tested K. radicincitans strain DSM 16656 T , isolated from the phyllosphere of wheat, for its plant growth-promoting capacity. In vitro analyses showed that DSM 16656 T is able to fix atmospheric nitrogen [15,16] and solubilize rock phosphates [17]. Moreover, this strain produces phytohormones as auxin and cytokine-like compounds [18]. The in vitro characterization of DSM 16656 T was followed by several glasshouse and field experiments where seeds or young plants were inoculated with the strain at various concentrations to assess the most appropriate way and amount to exploit the plant growth-promoting effect of K. radicincitans. Besides maize, among the species that responded positively to inoculation with K. radicincitans we identified wheat [19], tomato [20], pea [21], and different members of the cabbage family [22,23]. Significant increases in growth and yield promoted by K. radicincitans in greenhouse and field trials were confirmed [14,15,20,22,24], highlighting the potential of this strain for benefitting different cultivation management systems. The successful transfer of biological supplements based on living microorganisms such as K. radicincitans DSM 16656 T from controlled greenhouse pot experiments to field cultivation approval is highly challenging. Problems are due to the huge variability not only in natural soil parameters, such as composition, graininess, water holding capacity and pH value, and the associated microbiota, but also in environmental conditions such as precipitation, air humidity, and temperature; all these factors interfere with the bacteriaplant interaction rendering the outcome of field experiments often difficult to predict. Even more challenging is effective biotechnological processing and formulation of a potential strain like our K. radicincitans DSM 16656 T to transform it into a biosupplement suitable for application in agricultural systems. The biological supplement must be produced cost-efficiently, retain its positive traits during biotechnical processing steps, and be stable over a period of at least six months. Additionally, handling the product must be easy and the product must be robust enough for practical use in the field at the farm level. So far, biotechnological processing approaches include preparation of polymeric biodegradable low-cost foams [25], liquid formulation [26], or powders [27]. Still, it is a challenge to generate a robust biological supplement from Gramnegative bacteria. The inability of Gram-negative bacteria to form spores, as Gram-positive bacteria do under detrimental conditions, requires more sophisticated freeze-drying and biotechnological processing strategies. So far, successful formulation of Gram-negative bacteria is mostly described for Pseudomonas spp. [28,29] and Azospirillum spp. [26], but not for Kosakonia spp. 
Ultimately, the final proof as to whether the bacterial formulation is sufficient to persist and have a successful impact on plant hosts still has to be tested in the field under real farm conditions [30]. Here, we present a study that describes the positive effects of K. radicincitans DSM 16656 T based biosupplement prototype, AbiVital5, on maize growth and yield in three different field plots in Germany. Bacterial Strain Description and Biotechnological Processing. Aliquots of K. radicincitans DSM 16656 T were kept lyophilized at −80 ∘ C. For further cultivation bacteria were plated on ENDO agar and maintained at 4 ∘ C. Formulation of K. radicincitans was carried out by ABiTEP GmbH, Berlin. The product is listed as AbiVital soil auxiliary supplies in the German FiBL list, which regulates the implementation of microorganisms as resources in sustainable agriculture [31]. The AbiVital product comprises 64% (percent by weight) of centrifuged (7200 rpm) K. radicincitans DSM 16656 T cells from liquid culture and 36% cryoadditives. It contains less than 1.5% N, less than 0.5% P 2 O 5 , and less than 0.75% K 2 O to meet requirements of soil auxiliary supplies. Shelf-Life Studies of Processed Bacteria. Formulations of solid and liquid AbiVital were tested for their shelf-life properties. Viable cell concentrations for each formulation were determined directly after fermentation by colony counting on agar plates according to ISO 4833-2 and in parallel performing an electrooptical analysis of bacterial cells using EloTrace5 [32,33]. Four biological replicates per formulation were analyzed subsequently for their shelf-life; independent subsamples were stored after the formulation process either at 4 ∘ C or at −20 ∘ C over a period of six months and monitored for viable cells using the most probable number (MPN) method, two, three, four, and six months after formulation. For the solid formulation, 1 g was weighed to an Eppendorf tube, ten clean glass pearls were added, and for the liquid formulation 1 mL was pipetted into an Eppendorf tube. Each formulation was transferred to a 50 mL Erlenmeyer flask containing a 9 mL standard nutrition broth for microbial cultivation (Merck, Germany). The flask was placed on a shaker at 200 rpm at 4 ∘ C for 1 h to dissociate the cells homogenously in the solution. The homogenous solution of each formulation was serially diluted by a factor of ten. The optical density as an indicator of the most probable number of viable cells was measured at 620 nm in a Tecan plate reader (Tecan, Germany). After adding 180 L of standard nutrient broth to each well of a microtiter plate, 20 L of the diluted formulation was added to the respective wells, incubated for 72 h at 30 ∘ C, and shaken for ten seconds just before measurement. Three dilutions of each sample were measured in two technical replicates. Wells containing 200 L of standard nutrient broth and no formulation served as a blank. Gadsdorf 2015 and 2016 Field Experiments. Field experiments in Gadsdorf (soil type loamy sand) were conducted under conditions of organic farming. Zea mays var. P 7902 (Pioneer Hi Breed, Buxtehude, Germany) was sown on April 28, 2015. Two different formulations, solid (1) and liquid (2), were tested in Gadsdorf field trials for their impact on corn maize yield. Both formulations were added directly into the tank of the field spraying device (Amazone UX 5200 Super). Altogether, six hectares (approx. 
570,000 plants) were sprayed on May 16, 2015, with formulation (1), formulation (2), or control liquid (=water) (2 hectares each). Additionally, formulations and control liquid were applied to neighboring subareas of the same field. Firstly, the control field site was sprayed with water. Secondly, the solid and liquid formulations of the bacterial strain K. radicincitans were diluted in water to a final concentration of 10 7 cfu mL −1 , and each plant received approximately 1 mL. Maize was harvested on November 2, 2015. The total weight of harvested kernels per subarea was determined. Dannenberg 2015 and Sanitz 2016 Field Experiments. Field experiments in Danneberg and Sanitz were conducted to test the effect of the liquid formulation on silage maize grown in conventional cultivation systems. In 2015, a farm level field experiment was performed in Dannenberg, Lower Saxony. The plant cultivar was Zea mays var. Ronaldino (KWS, Germany); the soil was loamy sand. Plants were not treated with pesticides or fungicides. In contrast to field experiments in Gadsdorf, plant growth in Dannenberg and Sanitz was determined on smaller areas and then extrapolated to hectare sizes. Plant growth in Dannenberg was determined by harvesting subplots within the field from six randomly chosen spots (seven plants each) in the Kosakonia-treated section of the field and the control section of the field without bacterial application. Consecutively, in 2016 the Kürzinger GbR-agro nord experimental station conducted an exact trial field experiment with Zea mays var. Colisee (KWS, Germany) on a loamy soil under good experimental practice (GEP) certificated conditions. Per hectare, 2.5 L of formulated AbiVital was sprayed. Plant growth was determined on four lots of 18 square meters per treatment. Statistical Data Analysis. Data from field trials in Sanitz and Dannenberg were analyzed with SigmaPlot Version 12.3. Normality of data was tested by Shapiro-Wilk before using the -test to compare the two treatments of noninoculated and AbiVital inoculated maize plants grown either in Sanitz or in Dannenberg. Experiments in Gadsdorf were performed in strip vials and analyzed with the adjusted mean value according to the guidelines of Michel and colleagues [34]. Shelf-Life Studies on AbiVital Formulations . Shelf-life was tested by the most probable number method in four individual subsamples of formulated K. radicincitans. Bacterial viable counts in both solid and liquid formulations of K. radicincitans remained stable over the period of six months when they were stored at −20 ∘ C. In contrast, at 4 ∘ C only the solid formulation was stable during the 6-month period; viable counts in the liquid formulation decreased drastically during this time frame by >99% (Figure 2). Grain Corn Treatment. In a second approach we tested the formulated K. radicincitans product AbiVital on grain maize grown in organic cultivation management in Gadsdorf, Brandenburg, Germany. We tested two formulations, solid and liquid, in two consecutive years. In 2015, we found an increase in grain corn yield of 18.7% when using solid and an increase of 32.8% when using the liquid formulations of AbiVital. For the solid formulation, we obtained a similar effect of 20% increase in 2016, while the liquid formulation promoted an increase of 9.7% (Figure 4). Discussion Microorganisms represent a tremendous source of plant growth-promoting additives for application in agriculture. 
However, only a minority of potential microorganisms have been used in agriculture as yet. This is due either to the limited cultivation and isolation of bacteria from environmental samples [35], or to the failure to follow up processing towards a stable and efficient product [36]. Therefore, successful formulation under large-scale production conditions is crucial for commercial bacterial inoculants. Experiments in 1990-1992 already documented the positive effects of K. radicincitans DSM 16656 T on maize cv. Bekenova: grain yield increased by 8-15%, and shoot dry matter by 3-7% after inoculation with the bacteria [37]. Importantly, our results document for the first time the successful development of a bacterial isolate into a biosupplement for maize cultivation. The AbiVital formulation of the Gram-negative bacterium K. radicincitans preserves its plant growth-promoting properties, as shown in our field experiments with maize. Differences in growth promotion were also described for a Bacillus sp. in two lima bean varieties [38], and inoculation with the same mycorrhiza on three rice ecotypes also resulted in different responses [39]. In Phaseolus vulgaris, Azospirillum spp. affects the Rhizobium-legume symbiosis, according to the plant's genotype [40]. However, we observed a positive promotion effect in all maize varieties tested with the newly formulated K. radicincitans DSM 16656 T biosupplement, suggesting no trade-offs in maize. The fact that maize was found to be the native host of a plant growth-promoting strain of K. radicincitans (GXGL-4A) [41] strongly supports the potential of this species as a biosupplement in maize cultivation. Several reports on plant growth-promoting K. radicincitans strains from different crops in different habitats around the world have been published in the past few years (Becker et al. submitted). Among the factors that interfere with the effect of exogenously applied microorganisms on plants are soil composition and tillage management. According to ascertainments of the Federal Statistical Office in 2016, German farming is mostly conventional (92.8%), and only a minor part is organic farming (7.2%) [42]; but the latter is increasing since demands for organic farming products are growing rapidly. However, organic farming relies on strict guidelines. In general, conventional management systems allow not only more tillage than organic farming, but also the application of chemicals for weed and pest control. It is essential to know whether preprocessing, fertilization management, use of pesticides, or other differences between the farming systems would elevate or depress the effect of the growth-promoting bacteria before the commercialization of the "AbiVital" formulation. For instance, soil disturbances by tillage can cause qualitative and quantitative changes in soil microbiota and biological nitrogen fixation [43,44]. Knowledge about how soil management changes microbial community structures is a prerequisite for optimized management practices, since soil microbial communities constitute a major factor controlling soil processes and plant growth [45][46][47]. To our knowledge, this is the first formulation and successful application of Kosakonia spp. in field grown maize plants. Liquid formulations are often preferred by the user because the product is easy to mix in a tank and cheaper to produce. Powder formulations are easier to transport and more stable, but the dry formulations must be easy to dissolve. 
The formulated carrier supplement plays an important role in delivering the bacteria to the field, and carriers can mainly be divided into the following categories: soils, inert material such as polymers or vermiculite, liquid formulation with additives, oil-dried bacteria, or just the plain lyophilized microbial culture. Biochar as an inoculant carrier has been proposed for developing new formulations [48]. Our product is free of genetically modified organism (GMO) carriers and complies with all economic and farm level application demands. It was shown to be easily manageable by the farmers in the field (Figure 1) and resulted in the same efficiency as previously used cultures produced under experimental laboratory conditions. Rapid decrease in shelf-life for liquid products is a severe problem in the biotechnological processing of microorganisms for application in agriculture. A period of at least six months without drastic losses of vital cells is required in industrialized countries. During this period, loss-free storage should be achievable in already existing devices such as fridges or freezers. In developing countries the shelf-life requirements are even higher. Some studies claim a shelf-life of one or two years at room temperature [49,50]. Our objective was to achieve shelf-life stability over six months at 4°C or −20°C with both formulations. Although the AbiVital biosupplement formulation shows promising results for storage at −20°C, further investigations will be needed into how to ensure a stable product during storage at 4°C. Exhibiting clear growth- and yield-promoting effects on crop plants, microbial products are of interest in both conventional and organic farming systems. However, variable outcomes from applying microbial supplements have damaged their reputation as an environmentally friendly additive in agriculture. Advocates of conventional farming practices applying synthetic pesticides, as well as microbial supplements sold in ineffective concentrations, have further contributed to the poor reputation of plant growth-promoting microorganisms. Nonetheless, the number of reliable microbial products on the market is increasing. To determine the benefits of microorganisms in crop farming and the circumstances under which they tap into their full potential, a combination of basic and applied research on the same strains of microorganisms is required. Deciphering the complex interactions of microorganisms, host plants, and the environment will require interdisciplinary collaboration of botanists, microbiologists, biotechnologists, molecular biologists, bioinformaticians, and farmers. Conclusions We present the formulation of the Gram-negative bacterium K. radicincitans as a marketable product for application in silage and grain maize production. We show that the same bacterial strain is able to increase yields of silage and grain maize. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this article. Authors' Contributions Matthias Becker and Silke Ruppel designed the work; Matthias Becker and Beatrice Berger analyzed data and wrote the manuscript. Matthias Becker, Sascha Patz, and Silke Ruppel conducted the experiments in Gadsdorf. Kristin Dietel, Sebastian Faetke, and Helmut Junge developed the marketable product AbiVital and processed the Kosakonia radicincitans strain for the field experiments.
2018-05-30T00:47:29.412Z
2018-03-28T00:00:00.000
{ "year": 2018, "sha1": "d8b24a5280532b9f1eff0c05e39807910f909aa7", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2018/6439481.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6f7a6d6f98441e6b363e919bbb1832804c1c95a7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
211732509
pes2o/s2orc
v3-fos-license
Taxonomy of Dual Block-Coordinate Ascent Methods for Discrete Energy Minimization We consider the maximum-a-posteriori inference problem in discrete graphical models and study solvers based on the dual block-coordinate ascent rule. We map all existing solvers in a single framework, allowing for a better understanding of their design principles. We theoretically show that some block-optimizing updates are sub-optimal and how to strictly improve them. On a wide range of problem instances of varying graph connectivity, we study the performance of existing solvers as well as new variants that can be obtained within the framework. As a result of this exploration we build a new state-of-the art solver, performing uniformly better on the whole range of test instances. INTRODUCTION Discrete graphical models, one of the most sound and powerful frameworks in computer vision and machine learning, is still used in many applications in the era of CNNs. Graphical models effectively encode domain specific prior information in the form of a structured cost function, which is often hard to learn from data directly. With an increase in parallelization, fast dual block-coordinate ascent algorithms (BCA) have been developed that allow their application e.g. in stereo [1], optical flow [2], 6D pose estimation [3]. Combined and jointly trained with CNNs they can create more powerful models [4,5]. They can also provide efficient regularization for training of CNN models [6,7,8]. Applications where structural constraints must be fulfilled (e.g. [9]) or the optimality is required also significantly benefit from fast computation of good lower bounds by such methods [10,11]. In this work we systematically review the existing BCA Proceedings of the 23 rd International Conference on Artificial Intelligence and Statistics (AISTATS) 2020, Palermo, Italy. PMLR: Volume 108. Copyright 2020 by the author(s). methods. Despite being developed for different dual decompositions, they can be equivalently formulated as BCA methods on the same dual problem. We contribute a theoretical analysis showing which block updates are sub-optimal and can be improved. We perform an experimental study on a corpus of very diverse problem instances to discover important properties relevant to algorithm design. Such as, which types of variable updates are more efficient, or whether a dynamic or static strategy in sub-problem selection is better, etc. One observation that we made is that there is currently no single algorithm that would work well for both, sparse and dense problems. With this new comparison and theoretical insights, we synthesize a novel BCA method, that selects subproblems automatically adapting to the given graph structure. It applies the type of updates that are more expensive but which turn out to be more efficient and performs universally better across the whole range of the problems in the datasets we used. Related Work Inference in graphical models is a well-known NP-hard problem. A number of solvers with different time complexities and guarantees, utilized in different applications, is surveyed in [19,20]. The linear-programming approach and the large family of associated methods is well covered in [21,22]. In this work we focus on BCA methods, which appear to offer the best lower bounds with a limited time budget for pairwise models with general pairwise interactions. These methods can be used to obtain fast approximate solutions directly, or to efficiently reduce the full combinatorial search [10,11]. 
Many BCA methods have been proposed to date and we selected in Table 1 a mostly complete and representative list of the state-of-the-art BCA algorithms. Some of these methods were originally obtained for different dual formulations, based on the decompositions into larger subproblems (TRW-S, TBCA, DMM). Although, it is known that these duals are equivalent in the optimum [22,23], it has been believed that optimizing a stronger dual can be more efficient. Works [24,25] proposed a unified view of several MAP and sum-product arXiv:2004.07715v1 [cs.LG] 16 Apr 2020 [18] Dynamic Tree Block Coordinate Ascent Dynamic trees, sequential algorithms as BCA methods. However, the dual objectives were different per method (derived from different region graphs [24], resp. splittings [25]) and the algorithms operate with messages and beliefs. We consider a single dual for all methods, following the more recent understanding of TRW-S [15], and all algorithms are explicitly updating the same dual variables. We study the issue of non-uniqueness of the block maximizers in BCA methods and their influence on the overall algorithmic efficiency. Tourani et al. [3] shows that MPLP method can be significantly improved by a small modification in the choice of block maximizers. We generalize these results to chain and tree subproblems. Werner and Průša [26] study the effect on fixed points. Our code is available at https://gitlab.com/ tourani.siddharth/spam-code. Proofs of all mathematical statements can be found in the appendix. MAP INFERENCE WITH BCA MAP-Inference Problem Let G = (V, E) be an undirected graph with the node set V and edge set E. A labeling y : V → Y assigns to each node u ∈ V a discrete label y u ∈ Y, where Y is some finite set of labels, w.l.o.g. assumed the same for all nodes. For brevity we will denote edges {u, v} ∈ E as just uv. For each node u ∈ V and edge uv ∈ E there are associated the following local cost functions: θ u (s) ≥ 0 is the cost of a label s ∈ Y and θ uv (s, t) ≥ 0 is the cost of a label pair (s, t) ∈ Y 2 , where the non-negativity is assumed w.l.o.g. Let also Nb(u) denote the set of neighbors of node u in G. In the well-known paradigm of MRF / CRF models, the posterior probability distribution is defined via the energy E(y) as p(y) ∝ exp(−E(y)) and the maximum a posteriori (MAP) inference problem becomes equivalent to finding a labeling which minimizes the energy (total labeling cost): y * = arg min Reparametrizations The representation of the energy function E(y | θ) as the sum of unary and pairwise costs is not unique: there exist many cost vectors θ such that E(y | θ) = E(y | θ ) for all labelings y ∈ Y V . Such cost vectors are called equivalent. All cost vectors equivalent to θ can be obtained as (e.g., [21]): . This reparametrization is illustrated in Fig. 1(a). It is straightforward to see that when substituting (2) into (1) all contributions from φ cancel out and thus any reparametrized θ φ is equivalent to θ (for the converse, that all equivalent costs do have such a representation see [21]). Dual Problem The basic idea, pioneered in pattern recognition by [27], is the following. In practice there exist oftentimes a reparametrization with the property that by selecting the label in each node independently as y u ∈ arg min θ φ u a good, or even optimal, solution is recovered. From an optimization perspective, this is captured by the lower bound: obtained by applying the reparametrization in (1) and using the min-sum swap inequality. 
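To make the reparametrization and the resulting bound concrete, here is a minimal numerical sketch (not the authors' code; the sign convention chosen for φ is one common variant and may differ from the one in eq. (2)). It builds a tiny chain model, checks that an arbitrary reparametrization leaves the energy of a labeling unchanged, and evaluates a lower bound of type (3):

```python
import numpy as np
from itertools import product

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]
L = 2                                          # labels per node
rng = np.random.default_rng(0)
theta_u = {u: rng.random(L) for u in nodes}
theta_uv = {e: rng.random((L, L)) for e in edges}

def energy(y, th_u, th_uv):
    return (sum(th_u[u][y[u]] for u in nodes)
            + sum(th_uv[(u, v)][y[u], y[v]] for (u, v) in edges))

def reparametrize(th_u, th_uv, phi):
    # One common sign convention: phi_{u,v}(s) is moved from edge uv to node u.
    new_u = {u: th_u[u].copy() for u in nodes}
    new_uv = {e: th_uv[e].copy() for e in edges}
    for (u, v) in edges:
        new_u[u] += phi[(u, v)]
        new_u[v] += phi[(v, u)]
        new_uv[(u, v)] -= phi[(u, v)][:, None] + phi[(v, u)][None, :]
    return new_u, new_uv

def dual_bound(th_u, th_uv):
    # Lower bound of type (3): sum of the minima of all reparametrized tables.
    return (sum(t.min() for t in th_u.values())
            + sum(t.min() for t in th_uv.values()))

phi = {(u, v): rng.random(L) for (u, v) in edges}
phi.update({(v, u): rng.random(L) for (u, v) in edges})
new_u, new_uv = reparametrize(theta_u, theta_uv, phi)

y = (0, 1, 0)
assert np.isclose(energy(y, theta_u, theta_uv), energy(y, new_u, new_uv))
opt = min(energy(y, theta_u, theta_uv) for y in product(range(L), repeat=len(nodes)))
print(dual_bound(new_u, new_uv), "<=", opt)
```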
If there is a reparametrization such that the lower bound is tight and the minimizer y u ∈ arg min θ φ u in each node is unique, then y is the unique global optimum of (1). To tighten the lower bound we seek to maximize it in φ. It is known ( [21,22]) that this maximization problem is dual to the natural linear programming relaxation of (1). The dual problem has the following advantages: (i) it is constraint-free; (ii) it is composed of a sum of many simple concave terms, each of which is straightforward to optimize. BCA algorithms Block-coordinate ascent methods exploit the structure of the dual by iteratively max-(a) Dual Variables (b) Dual Blocks Figure 1: (a) An edge block and its corresponding reparametrization components in the graphical notation of [21,22]: nodes u, v are shown as grey ovals and circles representing possible labels. The lines connecting the labels represent label pairs (s, t) with associated pairwise costs θ uv (s, t). For each set of pairwise costs connected to a particular label, there is one reparametrization coordinate φ u,v (s) shown by a blue arc. (b) Different variable blocks. Highlighted are block sub-graphs and arcs indicating the variables considered. imizing it w.r.t. different blocks of variables (subset of coordinates of φ) such that the block maximization can be solved exactly. Formally, let φ F be the restriction of φ to a subset of its coordinates , s ∈ Y}, BCA algorithms perform the update: with different blocks F in a static or dynamic order. Constrained Dual For the purpose of this work, it is convenient to work with the constrained dual: The equivalence can be shown by constructing for any solution φ to the unconstrained dual, a correction preserving the objective value and satisfying the constraints [22]. We will formulate all BCA algorithms in this paper in a way that they maintain the feasibility to the constrained dual. TAXONOMY OF BCA METHODS We survey a number of BCA methods, listed in Table 1. Many of these methods are derived for different dual objectives and work with different sets of parameters. We reformulate them all as BCA methods on the dual (5) and identify the following important design components: • Type of blocks used. This has a significant impact on algorithm efficiency. Larger blocks (such as chains or trees) lead to greater dual improvement, but optimizing over them requires more computations. • Strategy of selecting which block to optimize at every step. A dynamic strategy may be more advantageous for some problems but has additional overhead costs. • Type of the update applied. This is not systematically studied in the literature. The maximizer for each block is non-unique but instead it is any point in the optimal facet. One may obtain algorithms with drastically different behaviour, depending on the choice of the maximizer. Choice of Variable Block BCA algorithms (Table 1) exploit the following types of blocks that are tractable to be optimized over: , s ∈ Y} consist of coordinates of the reparametrization vector that are "adjacent" to a node u, Fig. 1(b, red). These blocks are used in TRW-S, MSD and CMP algorithms. containing all variables associated with an edge uv, see Fig. 1(b, blue). These are used in MPLP and MPLP++ algorithms. • Chains and Trees For a sub-graph (V , E ) ⊂ G we select variables associated to all its edges: F E := ∪ uv∈E F uv , see Fig. 1(b, green). To optimize over such blocks, a dynamic programming subroutine is needed. Chain blocks are used e.g. in DMM (rows and columns of a grid graph). 
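The generic BCA template (4) can be sketched in a few lines. The block update used below is a deliberately naive edge aggregation (all current node costs of the two endpoints are pulled into the edge): it is block optimal, but, as the following subsections discuss, the choice of maximizer matters and this one redistributes nothing back to the nodes. The toy model and names are assumptions for illustration, not the paper's notation:

```python
import numpy as np

nodes, edges, L = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], 3
rng = np.random.default_rng(1)
th_u = {u: rng.random(L) for u in nodes}
th_uv = {e: rng.random((L, L)) for e in edges}

def dual(th_u, th_uv):
    return (sum(t.min() for t in th_u.values())
            + sum(t.min() for t in th_uv.values()))

def aggregate_edge_block(u, v):
    """Pull the current node costs of u and v into edge uv (block optimal)."""
    th_uv[(u, v)] += th_u[u][:, None] + th_u[v][None, :]
    th_u[u][:] = 0.0
    th_u[v][:] = 0.0

bound = dual(th_u, th_uv)
for _ in range(3):                        # a few passes over a static block order
    for (u, v) in edges:
        aggregate_edge_block(u, v)
        new_bound = dual(th_u, th_uv)
        assert new_bound >= bound - 1e-12     # BCA is monotone in the dual
        bound = new_bound
print("dual lower bound:", bound)
```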
The TRW-S algorithm, which we introduced above as a node-adjacent BCA, simultaneously achieves optimality over a large collection of chains. Spanning trees are used in TBCA variants. We call edge, chain and tree blocks collectively as subgraph blocks. We will investigate which type of blocks and respective updates are more efficient. Static vs. Dynamic Blocks In dynamic TBCA [18] the trees are found dynamically by estimating where the dual can be increased the most (so-called local primal-dual gap [18]), which showed a significant practical speed-up in some applications [18]. In other methods, the blocks are fixed in advance: e.g. rows and columns for grid graphs in DMM, single edge blocks in MPLP, spanning trees, selected greedily to cover the graph, in the static TBCA. We will investigate static and dynamic strategies for several update types. Choice of The Local Maximizer With the same blocks one could get very different algorithms depending on how the block maximizer is selected from the polyhedron of possible optimizers, which we refer to as update type. We can systematize all used update types for node-adjacent blocks and subgraph-based blocks using several elementary operations. We now review them one by one. Node-Adjacent Updates The update of blocks F u works in two operations performing aggregation (6) and distribution (7) for every label s of u: where coefficients w u,v are non-negative and satisfy v∈Nb(u) w u,v ≤ 1. After the aggregation, the reparametrized costs θ φ uv stay non-negative and label pairs that have zero cost are consistent with the minimizers of the unary reparametrized costs θ φ u . This step achieves block maximum (4). The purpose of the distribution step is to redistribute the cost excesses back to the edges of the block (leaving a fraction 1 − v∈Nb(u) w u,v at the node u) while preserving block optimality. In effect, the neighbouring nodes receive information about good labels for u. The MSD, CMP, dynamic programming (DP) and TRW-S algorithms are obtained by the respective setting of weights: are Iverson brackets and the other details follow. The MSD and CMP algorithms do not express any preferences in direction (are isotropic) and the order of updating blocks is not as important. Updates of DP and TRW-S are anisotropic and depend on the order of the vertices. Let us see how DP updates work. Consider a chain graph and the chain ordering of nodes. For an inner node u there are two neighbouring nodes: u − 1 and u + 1. By choosing w u,u+1 = 1 and w u,u−1 = 0, we let all the excess costs be pushed forward and implement the forward pass of the Viterbi algorithm. TRW-S considers some order of processing of the nodes and applies coefficients w u,v such that for v < u it is zero and for v > u the coefficients are distributed evenly based on the numbers N in (u), N out (u) of incoming and outgoing edges in u w.r.t. the node order. Note that when there are more incoming edges than outgoing, these weights sum to less than one, i.e. some cost excess is left at the node u. It is clear that the choice of the block update and the order may be crucial in BCA methods. In contrast to node-adjacent blocks, subgraph blocks overlap only in the nodes of the graph. Therefore, different redistribution strategies have been proposed in order to make the excess costs visible in all nodes of the processed block. Edge Updates MPLP and MPLP++ methods consider edge blocks. 
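Before the edge updates are detailed below, the node-adjacent aggregation/distribution step (6)–(7) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; with the uniform weights used here it corresponds to an MSD/CMP-style isotropic update, while TRW-S would use anisotropic, order-dependent weights:

```python
import numpy as np

nodes, edges, L = [0, 1, 2], [(0, 1), (1, 2), (0, 2)], 3
rng = np.random.default_rng(2)
th_u = {u: rng.random(L) for u in nodes}
th_uv = {e: rng.random((L, L)) for e in edges}

def edge_view(u, v):
    """Cost table of edge {u, v} with u indexing the rows (shared memory)."""
    return th_uv[(u, v)] if (u, v) in th_uv else th_uv[(v, u)].T

def node_adjacent_update(u, weights):
    nbrs = list(weights)
    for v in nbrs:                                  # aggregation (6)
        tbl = edge_view(u, v)
        marg = tbl.min(axis=1)                      # min-marginal towards u
        tbl -= marg[:, None]
        th_u[u] += marg
    for v in nbrs:                                  # distribution (7)
        tbl = edge_view(u, v)
        tbl += weights[v] * th_u[u][:, None]
    th_u[u] *= 1.0 - sum(weights.values())          # excess left at the node

def dual():
    return (sum(t.min() for t in th_u.values())
            + sum(t.min() for t in th_uv.values()))

bound = dual()
for u in nodes:
    nbrs = [v for v in nodes if (u, v) in th_uv or (v, u) in th_uv]
    node_adjacent_update(u, {v: 1.0 / len(nbrs) for v in nbrs})  # MSD-style weights
    assert dual() >= bound - 1e-12
    bound = dual()
print("dual after one isotropic pass:", bound)
```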
MPLP performs the following symmetric The aggregation step (9) achieves that the costs in the nodes u, v become aggregated in the edge uv. Repa- does not depend on the initial reparametrization components φ uv , φ vu . At this point the maximum over the edge block is found. The distribution step (10a)-(10b) divides the aggregated cost in two halves and pushes the excesses from each half back to two nodes u, v, to make the preferred solution for the edge visible in the nodes. See Fig. 2c. The Handshake (HS) update φ := HS u,v (θ, φ) is used in MPLP++ and DMM. It differs in the distribution step. Let φ u,v (s) be computed as in MPLP and φ v,u (t) set to an arbitrary value. The Handshake update additionally performs: This step pushes the still remaining cost excess from the edge to the nodes as illustrated in Fig. 2d. It leads to a strictly better improvement of the dual objective after the pass over all blocks and performs considerably better in experiments [3]. The step (11a) does not depend on the value of φ v,u (t), which can be seen by moving φ v,u (t) under the min and expanding the reparametrization. Therefore step (10b) may be omitted when computing HS. Chain / Tree Updates The optimality over an edge, chain or a tree can be achieved by applying the following dynamic programming update φ := DP u,v (θ, φ) (in the order of the chain or from leaves to the root of a tree): The step (12a) aggregates the cost excess from node u to the edge uv and the step (12b) pushes the cost excess to note v. Observe that it can be written in the form of a node-adjacent update (6)-(7) by grouping the push step (12b) into v with the aggregation step (6) at v when processing the next edge vw. TBCA algorithm uses DP to achieve optimality over a tree and then performs a pass in the reverse order, redistributing the costs with the following rDP update. where 0 ≤ r ≤ 1 is a constant similar to the weights in the node-adjacent updates. The fraction r of cost excess is pushed forward to v and the fraction 1 − r is left in the node u. TBCA detailed in Algorithm 1 and Fig. 3 redistributes cost excesses based on the size of the tree branch remaining ahead. TBCA was originally proposed for the dual decomposition with trees [17] and works with its Lagrange multipliers. DMM works with chain subproblems and performs the redistribution hierarchically as explained in Algorithm 2 and Fig. 4. It was also originally proposed for the dual decomposition with chains [1]. One advantage of this method is that when the chain contains an edge with zero pairwise costs (no interactions), the processing becomes equivalent to redistribution in two chains independently. In contrast, the TBCA method would be confused in its estimate of the size of the subtree to push the excess to. ANALYSIS Subgraph-based blocks usually overlap over the nodes only (horizontal / vertical chains) or have a small overlap over the edges (spanning trees). Consider two blocks that overlap over nodes only. The representation of the information (costs of different solutions) which is available to one block about the other is limited to the reparametrized node costs θ φ u for all shared nodes u. We identify this reparametrized unary potentials with modular minorants [1], having clear analogies with minorants/majorants in pseudo-Boolean optimization [28]. 3: for i = n, . . . 
, 2 do Distributes the costs in reverse order to the nodes 4: Modular Minorants A function f : Y n → R of n discrete variables is called modular, if it can be represented as a sum of n functions of one variable: The function f (y) = u∈V θ φ u (y u ) is modular for any subset of nodes V ⊆ V and any reparametrization φ. Definition 1. A modular function g is called a (tight) minorant of f : Y n → R, if (i) g(y) ≤ f (y) asnd (ii) min y∈Y n f (y) = min y∈Y n g(y). For the rest of this section we will assume that G = (V , E ) is a subgraph defining a block of variables optimized at one step of a BCA algorithm and E G is the restriction of energy E to graph G with the reparametrized costs θ φ . A reparametrization φ is called dual optimal on G , if it is block-optimal in the sense of (4) w.r.t. block F E . Minorants and dual optimal reparametrizations are closely related: The function g(y) = u∈V θ φ u (y u ) is a minorant for the energy E G (y) if and only if φ is dual optimal on G . To put it differently, if G defines a sub-graph block for a block-coordinate ascent method, then choosing amongst block optimal reparametrizations φ is equivalent, up to a constant, to choosing a modular minorant for the energy E G . Observe that, for a sub-graph block (V , E ), there are 2|E ||Y| reparametrization variables but only |V ||Y| coordinates are needed to define a minorant. The minorant naturally captures the degrees of freedom that are important for subgraph-based BCA methods. Handshake update (red HS) is applied. Then the same is applied recurrently to the two formed sub-chains. The messages that have been already computed are kept from preceding levels. When recurrence completes, the HS operation has been applied to every edge, resulting in a maximal-minorant. Minorants can be partially ordered with respect to how tightly they approximate the function. For two minorants g, g we write g ≥ g if g (y) ≥ g(y) for all y ∈ Y V . Since our minorants are modular, the condition is equivalent to component-wise inequality g u (s) ≥ g u (s) ∀u ∈ V , ∀s ∈ Y. The greater the minorant, the tighter it approximates the function. Hence, of interest are maximal minorants: For the best performance of a BCA method, it makes sense to select a maximal minorant and not just any minorant. To actually apply this idea to BCA methods, we show how the maximality property of a minorant translates back to reparametrizations: Theorem 2. Let G be a tree and reparametrization φ be dual optimal on G . The function g(y) = u∈V θ φ u (y u ) is a maximal minorant if and only if ∀uv ∈ E and ∀s, t ∈ Y: With these results we can now draw conclusions about algorithms updating subgraph blocks. All BCA methods considered, as they achieve block optimality, construct minorants. However, many of them are not maximal. Minorants constructed by MPLP and TBCA are non-maximal. The change introduced in MPLP++ achieves maximality as illustrated in Fig. 2d. This minor change brings more than an order of magnitude speed-up to the algorithm in some problem instances [3]. The correction can be extended to TBCA, also leading to improvements without any further changes, Sec. 5.2. On the other side, the connection we established allows to interpret DMM as a BCA method working on the dual (5) and identify its reparametrization form as presented. 
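The difference between the MPLP split and the Handshake split, and the role of the maximality condition (14), can be seen directly on a single aggregated edge cost ρ. The sketch below is an illustrative reconstruction with a hand-picked ρ, not the authors' code; it verifies that both splits are block optimal, but only the Handshake leaves a zero in every row and every column of the remaining edge costs:

```python
import numpy as np

rho = np.array([[2.0, 7.0],
                [3.0, 5.0]])

def mplp_split(rho):
    th_u = 0.5 * rho.min(axis=1)            # half of the row min-marginal to u
    th_v = 0.5 * rho.min(axis=0)            # half of the column min-marginal to v
    return th_u, th_v, rho - th_u[:, None] - th_v[None, :]

def handshake_split(rho):
    th_u = rho.min(axis=1)                  # push everything towards u first ...
    rest = rho - th_u[:, None]
    th_v = rest.min(axis=0)                 # ... then push the remainder to v
    return th_u, th_v, rest - th_v[None, :]

def satisfies_maximality(edge):
    """Condition (14): every row and every column contains a zero."""
    return np.allclose(edge.min(axis=1), 0) and np.allclose(edge.min(axis=0), 0)

for name, split in [("MPLP", mplp_split), ("Handshake", handshake_split)]:
    th_u, th_v, edge = split(rho)
    # both splits are block optimal: the three minima add up to min(rho)
    assert np.isclose(th_u.min() + th_v.min() + edge.min(), rho.min())
    print(name, "maximal:", satisfies_maximality(edge))
# prints: MPLP maximal: False / Handshake maximal: True
```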
SYNTHESIS Based on the above analysis and the experimental comparison of individual components of BCA methods, we synthesize the following BCA algorithm that appears to perform universally better in terms of achieved dual objective value versus time on a corpus of diverse problems. Here are the design choices that we made: • We utilize chain blocks and the hierarchical minorant (HM, Algorithm 2 and Fig. 4) updates. These updates are the most expensive ones, but the maximality property and better redistribution of the excess costs pays off in practice. • We observed that with the hierarchical minorant updates, selecting chain blocks dynamically does not give an improvement over a static set of chains, unlike in [18]. • We select chains automatically for a given graph by a new heuristic. This heuristic behaves favourably in both sparse regular graphs as well as dense graphs. This automatic choice allows the method to achieve a uniformly good performance over problems with different graph structure and connectivity. Next we present specifically designed experiments that led to these choices. Experimental Setup For a uniform evaluation over different problem types we formed three datasets grouping problems from different domains by their graph connectivity, which is the proportion of number of edges to the maximal possible number of edges |V||V − 1|/2. These datasets are detailed in Fig. 9. For objectiveness of comparison, we measure the computation cost in messages, the updates of the type Sparse Denser Complete Figure 5: Comparison between TRW-S efficiently optimizing all monotonic chains with subgraph-based updates on a covering subset of monotonic chains. See description of datasets in Fig. 9. The number of messages is scaled by the ratio |E|/|E|, where |E| is the number of edges in an instance and |E| is the average over the dataset. These normalizations allow us to show average performance on the whole datasets. TRWS vs. Subgraph Updates TRW-S is selected as representing the most efficient node-adjacent update. In particular, it is much faster than CMP and MSD as shown e.g. in [15,19]. It was originally derived as a method for optimizing the dual decomposition of (1) with monotonic chains [14]. We compared it to subgraph-based updates running on the same set of chains. Such direct comparison over datasets of different sparsity has not been conducted before. For a given graph we took a subset of maximum monotonic chains to cover all edges (exact details can be found in Appendix B). TRW-S is very efficient and takes O(|E|) messages to achieve optimality on all monotonic chains, including our covering subset. Two subgraph-based updates can be applied to optimize over chains in the covering subset sequentially: TBCA, taking O(|E|) messages as well and hierarchical minorant (HM, Algorithm 2) taking O(|E| log |E|) messages. The comparison in Fig. 5 on our broad corpus of problems shows that while TBCA is clearly inferior in performance to TRW-S, HM is actually performing significantly better than TRW-S. It works on the same subproblems as TBCA but the maximal minorant property justifies the extra computation time. In this experiment we also evaluated an improved version of TBCA, denoted TBCA++, which modifies TBCA as follows: After each rDP update on uv the operation (11a) pushes the remaining cost back to v and thus achieves the maximal minorant conditions (14). This small change leads to a noticeable improvement, see Fig. 5. 
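For completeness, here is a sketch of the hierarchical minorant on a chain, reconstructed from the description of Algorithm 2 and Fig. 4 (DP messages from both ends towards a middle edge, a Handshake split there, then recursion on the two sub-chains). It is an illustration under this reading, not the authors' code; on a random chain it reaches the chain optimum and leaves every edge satisfying the maximality condition (14):

```python
import numpy as np
from itertools import product

N, L = 5, 3                                  # nodes 0..N-1, edge i joins i, i+1
rng = np.random.default_rng(3)
th = [rng.random(L) for _ in range(N)]
E = [rng.random((L, L)) for _ in range(N - 1)]
chain_opt = min(sum(th[i][y[i]] for i in range(N))
                + sum(E[i][y[i], y[i + 1]] for i in range(N - 1))
                for y in product(range(L), repeat=N))

def push_forward(u):                         # DP step (12a)-(12b): node u -> u+1
    E[u] += th[u][:, None]; th[u][:] = 0
    m = E[u].min(axis=0); E[u] -= m[None, :]; th[u + 1] += m

def push_backward(v):                        # mirrored DP step: node v -> v-1
    E[v - 1] += th[v][None, :]; th[v][:] = 0
    m = E[v - 1].min(axis=1); E[v - 1] -= m[:, None]; th[v - 1] += m

def handshake(c):                            # HS split of the aggregated edge (c, c+1)
    rho = E[c] + th[c][:, None] + th[c + 1][None, :]
    th[c][:] = rho.min(axis=1)
    rest = rho - th[c][:, None]
    th[c + 1][:] = rest.min(axis=0)
    E[c][:] = rest - th[c + 1][None, :]

def hierarchical_minorant(lo, hi):
    if lo >= hi:
        return
    c = (lo + hi) // 2
    for u in range(lo, c):                   # messages from the left end
        push_forward(u)
    for v in range(hi, c + 1, -1):           # messages from the right end
        push_backward(v)
    handshake(c)
    hierarchical_minorant(lo, c)             # recurse, reusing earlier messages
    hierarchical_minorant(c + 1, hi)

hierarchical_minorant(0, N - 1)
dual = sum(t.min() for t in th) + sum(e.min() for e in E)
assert np.isclose(dual, chain_opt)           # block optimality over the chain
for e in E:                                  # maximality condition (14) on every edge
    assert np.allclose(e.min(axis=0), 0) and np.allclose(e.min(axis=1), 0)
print("chain optimum reached:", dual)
```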
However, we can conclude that a better redistribution of cost excess done by HM is more important than the maximality alone. Static vs. Dynamic In the TBCA and HM methods, the choice of subproblems is not limited to monotonic chains. In [18] it was proposed to select spanning trees dynamically favouring node-edge pairs with the most disagreement as measured by the local primal-dual gap. We verified whether this strategy is beneficial with HM. Fig. 6 shows the comparison of dynamic spanning trees versus static spanning trees (a fixed collection selected greedily to cover all edges, see Appendix B). We reconfirm observations [18] on our corpus of problems that dynamic strategy is beneficial with TBCA updates. It does not however have a significant impact on the performance of HM updates. We therefore propose to use a static collection of subgraphs, optimized for a given graph. Graph Adaptive Chain Selection Tourani et al. [3] have shown that for densely connected graphs, edge-based updates are much faster than other methods. Since the MPLP++ update is equivalent to HM update on chains of length 1, this suggests that shorter chains are more beneficial in dense graphs. Intuitively, when there is a direct edge between nodes, the longer connections through other nodes become increasingly less important. On the contrary, in grid graphs MPLP++ is found inferior to TRW-S [3] and the natural choice of row and column chains seems to be the best selection of sub-problems. Sparse Denser Complete Figure 8: Algorithm comparison for sparse, denser and complete graphs following the experimental setup in Sec. 5.1. The minor difference between MPLP++ and SPAM for complete graphs is explained by different order of computations. The corresponding runtime plots look qualitatively the same and are provided in the appendix. Complete: Problems with fully connected graphs (100% connectivity): protein-folding model [31] instances of OpenGM benchmark [19] (11 instances with 33 − 40 nodes and up to 503 labels per node); pose 6D object pose estimation model [32] instances of [3] (32 instances with 600-4800 variables and 13 labels). Based on these observations there is a need for the sub-graphs to be chosen adaptively to the graph topology. We use chain subproblems for their simplicity and better parallelization utility and propose the following informed heuristic: • Select subproblems sequentially as shortest paths from the yet uncovered part of the graph; • Chose strictly shortest paths, such that no other path of the same length connects the same nodes; • Find the most distant pair of nodes connected by a strict shortest path. An example of strict shortest path is given in Fig. 7. The algorithm implementing this heuristic, detailed in Appendix B, Algorithm 4, randomly picks a starting vertex, finds all strictly shortest paths from it (a variant of Dijkstra search), removes the longest traced shortest path from the graph and reiterates. This heuristic has the following properties: (i) in a complete graph, it selects edge subproblems; (ii) in a grid graph, irrespective of the input data ordering, it is likely to select large pieces of rows and columns (with some distortions due to greediness); (iii) in graphs with bottleneck long connections, these connections are very likely to be covered with long chains. The SPAM Algorithm The synthesis of block selection via graph adaptive chain selection, as described in 5.4, and the hierarchical minorant updates we call the Shortest Path Adaptive Minorant (SPAM) algorithm. 
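The flavour of the chain-selection heuristic of Section 5.4 can be conveyed by a simplified sketch: repeatedly run a shortest-path search that also counts shortest paths over the not-yet-covered edges, trace the longest strictly shortest path from the source, and remove its edges. The authors' Algorithm 4 (Appendix B) is a Dijkstra-style variant and differs in details; the BFS version below is only meant to illustrate why complete graphs yield single edges while grid graphs tend to yield straight row/column segments:

```python
from collections import deque

def select_chains(num_nodes, edge_list):
    uncovered = {frozenset(e) for e in edge_list}
    adj = {u: set() for u in range(num_nodes)}
    for u, v in edge_list:
        adj[u].add(v); adj[v].add(u)
    chains = []
    while uncovered:
        src = next(iter(next(iter(uncovered))))       # endpoint of an uncovered edge
        dist, count, parent = {src: 0}, {src: 1}, {src: None}
        queue = deque([src])
        while queue:                                   # BFS over uncovered edges only
            u = queue.popleft()
            for v in adj[u]:
                if frozenset((u, v)) not in uncovered:
                    continue
                if v not in dist:
                    dist[v], count[v], parent[v] = dist[u] + 1, count[u], u
                    queue.append(v)
                elif dist[v] == dist[u] + 1:
                    count[v] += count[u]               # another shortest path found
        strict = [v for v in dist if v != src and count[v] == 1]
        tail = max(strict, key=lambda v: dist[v])      # most distant strict target
        path = [tail]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        path.reverse()
        chains.append(path)
        for a, b in zip(path, path[1:]):
            uncovered.discard(frozenset((a, b)))
    return chains

# Try it on a 3x3 grid and on the complete graph K4.
grid = [(r * 3 + c, r * 3 + c + 1) for r in range(3) for c in range(2)] + \
       [(r * 3 + c, (r + 1) * 3 + c) for r in range(2) for c in range(3)]
print(select_chains(9, grid))
print(select_chains(4, [(i, j) for i in range(4) for j in range(i + 1, 4)]))
```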
The adaptive chain selection has a linear complexity w.r.t. the size of the graph and takes only a fraction of a single iteration time of the main algorithm. Final Experimental Evaluation We tested the proposed SPAM algorithm against existing methods. Fig. 8 shows the summarized evaluation results. One can see that across all graph types SPAM consistently does well. It automatically adapts to the density of the graph, reducing to MPLP++ for complete graphs, where TRW-S struggles. In grid graphs, where TRW-S uses the natural ordering, SPAM automatically finds sub-problems similar to rows and columns and achieves a significant improvement while MPLP++ becomes inefficient. Detailed results per dataset and speed-up factors with confidence intervals are included in Appendix C. CONCLUSION We have reviewed, systematized and experimentally compared different variants of block-coordinate-ascent methods proposed to date. We have shown the updates for subgraph-based methods take the form of modular minorants and that maximal minorants outperform non-maximal ones. We experimentally compared existing methods as well as new combinations of basic components of BCA algorithms and synthesized a novel algorithm that is a synthesis of the best aspects of all methods. It additionally adopts blocksize to the graph structure and delivers uniformly best performance across the tested datasets. Designing an efficient dual solver for discrete energy minimization Appendix Contents: A Proofs of Theorems 1,2. B Algorithms details and description of monotonic chains used in experiment Fig. 5. C Detailed experimental results. A. PROOFS Theorem 1. Let G be a tree and uv∈E min s,t θ φ uv (s, t) = 0. The function g(y) = u∈V θ φ u (y u ) is a minorant for the energy E G (y) if and only if φ is dual optimal on G . Proof. The "if" part Let φ be a dual optimal reparameterization for E G on the graph G = (V , E ). We need to show that g(y) is a minorant. i.e. • g(y) ≤ E G (y) for all y. (lower-bound property) • g(y * ) = E G (y * ) is the cost of a minimizing labeling y * . (same-minima property) We have by the definition of reparametrization We assume w.l.o.g. θ φ uv (y u , y v ) ≥ 0. Substituting this in (15) we have where the left hand side matches the definition of g(y) in the theorem. With this we have proved the lower bound property. Now we prove the same minima property. Comparing the dual function (3) with g(y) and g(y) with (1) we have the following inequalities: Since G is a tree-subgraph, strong duality holds and we have for all pairs of an optimal labeling y * and an optimal dual φ that D(φ) = E G (y * ) and there holds complementarity slackness conditions. It follows that min yu θ φ u (y u ) is attained at y * u and min yu,yv θ φ u,v (y u , y v ) is attained at (y * u , y * v ) (there is an optimal solution composed of minimal nodes and edges). It follows that the next inequalities are satisfied: Using (19) in g(y) we obtain Thus as E G (y * ) ≤ g(y * ) ≤ E G (y * ), g(y * ) = E G (y * ), proving the equal-minima property. The "only if" part We have to show that if g(y) is a minorant of E G , then φ is an optimal reparameterization, i.e. D(φ) = E G (y * ) = g(y * ), where y * is the optimal labelling for E G . Due to the minorant equal-minima property, we have As we assume θ φ uv (s, t) ≥ 0, for all s, t ∈ Y 2 and uv ∈ E , this would imply all terms θ φ uv (y * u , y * v ) are identically zero, i.e. Our initial objective was to show D(φ) = g(y * ) = E(y * | θ φ ). 
As we assume uv∈E min s,t θ φ uv (s, t) = 0, we just have to show Following a proof by contradiction argument, we claim Assume the above statement is false and let D * be the optimal dual. Further, w.l.o.g. let's assume the min s θ φ u (s) = y * u for all u ∈ V \ k and min s θ φ k (s) = y + k . As strong duality holds, we have Thus by assumption D(φ) ≥ D * , But θ φ k (y + k ) = min y k θ φ k (y k ) ≤ θ φ k (y * k ), this is therefore a contradiction and D(φ) = D * = g(y φ ) = E(y φ | θ φ ). Theorem 2. Let G be a tree and reparametrization φ be dual optimal on G . The function g(y) = u∈V θ φ u (y u ) is a maximal minorant if and only if ∀uv ∈ E and ∀s, t ∈ Y: Proof. "Only if part" . For an optimal reparametrization φ, its corresponding tight minorant by Theorem 1 is g(y) = u θ φ u (y u ). We need to prove the statement that minorant g is maximal only if the conditions in the theorem are fulfilled. Recall that we are working with the constrained dual so that θ φ ≥ 0 component-wise. Assume for contradiction that one of the two zero minimum conditions is violated. Let it be the one with minimum over s . Then ∃uv ∈ E ∃t such that λ(t) := min s θ uv (s , t) > 0. We can then add λ(t) to φ vu (t). This will not destroy optimality of φ but will strictly increase θ φ v (t), therefore leading to a strictly greater minorant, which contradicts maximality of g. "If part" We need to show that if the conditions of the theorem are fulfilled then g is maximal. Assume for contradiction that g is not maximal, i.e. there is a modular function h(y) such that it is also a minorant for E G and it is strictly greater than g: h(y) ≥ g(y) for all y and h(y ) > g(y ) for some y . The inequality h(y) ≥ g(y) for modular functions without constant terms is equivalent to component-wise inequalities: h u (y u ) ≥ g u (y u ), ∀u, ∀y u . From the inequality h(y ) > g(y ) we conclude that there exists u and y u such that h u (y u ) > g u (y u ). By the conditions of the theorem, and assuming a tree graph, a labeling y can be constructed such that it takes label y u in u and all costs θ φ u,v (y u , y v ) are zero. The construction starts from y u , finds labels in the neighbouring nodes such that edge costs with them is zero and proceed recurrently with the neighbours and their unassigned neighbouring nodes. For the labeling y constructed in this way we have that At the same time, h(y ) > g(y ) and therefore h(y ) > E G (y ), which contradicts that h is a minorant of E G . B.1 Maximal Monotonic Chains In this section we describe how we selected a collection of monotonic chains (MMC), on which TRW-S can run in its full efficiency and at the same time subgraph-based updates of TBCA and HM can be computed. A chain is a subgraph of graph G = (V, E) that is completely defined by enumerating the sequence of nodes it contains, i.e. a chain C is denoted as C = (n 1 , . . . , n M ), n i ∈ V, with (n i , n i+1 ) ∈ E for i = 1 : M − 1 denoting the edges it contains. Therefore, for every pair of consecutive nodes (n i , n i+1 ) there must also exist a corresponding edge in E for a chain to be a subgraph of G. Let there be a partial order defined on the nodes V such for each edge uv ∈ E the nodes are comparable: either u > v or v < u. This can be always completed to a total order as was used for simplicity in [14]. A chain C is said to be monotonic if n i < n i+1 holds for its nodes. A chain C is maximal monotonic if it is monotonic and not a a proper subgraph of some other monotonic chain. 
For a given ordering, we select a collection of edge disjoint monotonic chains covering the graph by greedily finding and removing from the edge set maximal monotonic chains. Finding and removing one chain is specified by Algorithm 3. The algorithm works on the graph adjacency list representation. Let Ad be the adjacency list corresponding to the directed version of directed the graph G: Ad(i) contains all neighbours of node i in G that are greater than i, i.e. ∀j ∈ Ad(i), j > i. The operation Ad(i).remove(j) removes element j from the list Ad(i). The algorithm is executed until all Ad lists are empty (all edges have been covered). Algorithm 3 Compute Maximal Monotonic Chain 1: function (C, Ad)=computeMMC(Ad) Ad is the adjacency list of G as defined above. 2: C = ∅, tail = ∅, done = f alse C is initially empty., tail is the last node added to the chain. 3: Find the smallest in the order i such that Ad(i) is not empty. 4: C.add(i), tail = i. Add node i to C. Update tail. 5: while !done do 6: Find j in Ad(tail) such that j > tail. 7: if j is found then 8: C.add(j), Ad(tail).remove(j), tail = j The node j is added to C, removed from Ad(tail). tail is updated. 9: else if j is not found then 10: The loop exit condition is satisfied. The result of the algorithm is a collection of chains that are monotonic w.r.t. to the ordering. TRWS running on the respective ordering of nodes as introduced in Sec. 3.3 can be viewed also as optimizing the dual decomposition with monotonic chains [14]. It can be shown that the number max(N in (u), N out (u) used to calculate weights in TRWS is exactly the number of different chains containing node u for any collection of monotonic chains found as above. Hence such a collection natively represent subproblems associated with TRWS. B.2 Message Passing in Spanning Trees The hierarchical minorant for chains involves passing messages from the ends of the chain to the central nodes, as shown in 2. For trees, the process is similar. Messages are passed from the leaf nodes to the central nodes. The centroid of a tree of size n is the node whose removal results in subtrees of size ≤ n 2 . The central nodes of a tree are defined as nodes connected by an edge whose removal gives trees that are similar in length. One of the central nodes is always the tree-centroid. The other inode s selected keeping in mind minimum deviation between the different sub-trees that arise from the removal of this node. As the hierarchical minorant is recursive, the recursion is repeated with a subtree. B.3 Generation of Spanning Trees in TBCA For the static strategy, we compute a sequence of minimum weight spanning trees with the weights being the number of times an edge has already been included in a spanning tree. This weighing scheme ensures that un-sampled edges are prioritized in building spanning trees. The sampling is stopped when all the edges are covered. In the experiments (below) we observed that with the block update strategy that we chose, dynamic updates were not advantageous any more and performed slower overall. Algorithm 4 Compute Strictly Shortest Path 1: function (C)=computeSSP(G = (V, E),src) src is the source node from which to grow the shortest path. 2: Create Vertex Set Q from graph G The x-axis shows the normalized dual value and the y-axis the speed-up to achieve the same dual. The statistics are computed over all instances in a dataset. We show asymmetric confidence intervals with the equal percentage around the mean.
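The strictly-shortest-path chain selection of Sec. 5.4 (Algorithm 4, truncated above) can be sketched as follows. This is a minimal Python illustration under simplifying assumptions: hop-count (unit-length) shortest paths on a simple undirected graph, a level-synchronous BFS in place of a general Dijkstra search, and illustrative function names (`strict_shortest_paths`, `adaptive_chain_cover`) that do not come from the authors' code. It is a sketch of the heuristic, not the reference implementation.

```python
import random
from collections import defaultdict

def strict_shortest_paths(adj, src):
    """BFS from src (unit edge lengths) that also tracks whether the shortest
    path to each reached node is unique ('strict')."""
    dist, parent, strict = {src: 0}, {src: None}, {src: True}
    frontier = [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:                 # first time reached: a shortest path
                    dist[v], parent[v], strict[v] = dist[u] + 1, u, strict[u]
                    nxt.append(v)
                elif dist[v] == dist[u] + 1:      # second shortest path found: not strict
                    strict[v] = False
        frontier = nxt                            # level-synchronous, so strictness is final per level
    return dist, parent, strict

def trace(parent, v):
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]

def adaptive_chain_cover(edges, seed=0):
    """Greedily cover the edge set with strictly-shortest-path chains."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    remaining = {frozenset(e) for e in edges}
    chains = []
    while remaining:
        src = rng.choice([u for u in adj if adj[u]])      # random start with uncovered edges
        dist, parent, strict = strict_shortest_paths(adj, src)
        far = max((v for v in dist if strict[v] and v != src), key=lambda v: dist[v])
        chain = trace(parent, far)                        # most distant strictly-reached node
        chains.append(chain)
        for a, b in zip(chain, chain[1:]):                # remove the traced chain, reiterate
            adj[a].discard(b); adj[b].discard(a)
            remaining.discard(frozenset((a, b)))
    return chains

# Toy example: a 3 x 3 grid graph.
edges = [((i, j), (i, j + 1)) for i in range(3) for j in range(2)] + \
        [((i, j), (i + 1, j)) for i in range(2) for j in range(3)]
for c in adaptive_chain_cover(edges):
    print(c)
```

On a complete graph this sketch reduces to picking single edges (every non-neighbouring pair is reachable by several equally short paths), and on a grid the strictly reachable nodes lie in the start node's row and column, so it tends to trace long row/column segments — matching properties (i) and (ii) stated for the heuristic above.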
2020-03-03T07:54:12.658Z
2020-04-16T00:00:00.000
{ "year": 2020, "sha1": "27b87572b46a51aaa774527e69bddc7b9df62b92", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1f9065a378e56f8326c4856207e3fb159b16c604", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
11663415
pes2o/s2orc
v3-fos-license
Cryptochrome-dependent magnetic field effect on seizure response in Drosophila larvae The mechanisms that facilitate animal magnetoreception have both fascinated and confounded scientists for decades, and its precise biophysical origin remains unclear. Among the proposed primary magnetic sensors is the flavoprotein, cryptochrome, which is thought to provide geomagnetic information via a quantum effect in a light-initiated radical pair reaction. Despite recent advances in the radical pair model of magnetoreception from theoretical, molecular and animal behaviour studies, very little is known of a possible signal transduction mechanism. We report a substantial effect of magnetic field exposure on seizure response in Drosophila larvae. The effect is dependent on cryptochrome, the presence and wavelength of light and is blocked by prior ingestion of typical antiepileptic drugs. These data are consistent with a magnetically-sensitive, photochemical radical pair reaction in cryptochrome that alters levels of neuronal excitation, and represent a vital step forward in our understanding of the signal transduction mechanism involved in animal magnetoreception. M any animals sense the Earth's magnetic field. Of the proposed biophysical mechanisms, the best described are a magnetite-based system 1,2 and chemical magnetoreception based on a photoinitiated radical pair reaction 3,4 . Both have credible experimental and theoretical foundations, and may not be mutually exclusive. Much of the behavioural work in this area has been conducted using complex animals that migrate (e.g. species of bird, turtle and lobster) 5 . However, simpler animals that don't migrate, including the fruit fly Drosophila melanogaster 6-10 , also possess a magnetic sense. This significantly broadens the type of biophysical, neurobiological and genetic investigation available to establish primary receptor mechanism and signal transduction. The magnetic sense of Drosophila is dependent on the presence of the flavin adenine dinucleotide (FAD)containing, circadian clock photoreceptor protein, cryptochrome (DmCRY) 7 , and the presence and wavelength of light to which the flies are exposed [6][7][8][9] . CRY are closely related to the light-dependent DNA repair enzymes, the photolyases. A second, UV-harvesting pterin chromophore is also present in members of the CRY/photolyase family, but the residues involved in binding differ significantly in DmCRY such that pterin-binding is thought unlikely [11][12][13] . CRY-dependent magnetoreception is currently proposed to be a result of light-initiated electron transfer chemistry in the protein, which is magnetically-sensitive by virtue of the radical pair mechanism 3,4 . Spin correlated radical pairs can undergo coherent mixing between singlet and triplet spin states, which have different reactive fates, and this mixing process can be modulated by magnetic fields 14,15 . The exact identity of the magnetically-sensitive radical pair in CRY is currently unknown. Presumably the influence of the magnetic field in some way affects the concentration of a CRY signalling state that, in turn, results in a neurophysiological response. However, there exists very little evidence of the signal transduction mechanism that might link magnetically-sensitive chemistry in CRY to an organism response. Fogle et al. 
have shown that expression of DmCRY in central neurons in Drosophila is sufficient to bestow photosensitivity to those neurons, such that illumination with blue light (450-490 nm) increases action potential firing 16 . Thus, we hypothesise that a light-induced change to neuron activity levels, mediated by DmCRY, might be modified by external magnetic fields. To date no physical mechanism in a primary magnetoreceptor (CRY or magnetite) has been demonstrated to unequivocally produce a magnetically-induced response in neuronal activity 17,18 . As part of a study investigating the importance of patterned activity for the development of robust neural circuitry in the developing Drosophila embryonic CNS, we noted that exposing embryos to pulsed blue light (,470 nm) resulted in a heightened seizure-phenotype when tested post-embryonically at the third instar larval stage. Such a phenotype has been associated with, and is an indicator of, increased synaptic excitation in the locomotor circuitry 19,20 . We show in this study, that the effect of blue light pulses during embryogenesis is significantly potentiated by the presence of a magnetic field. The effect of both light and applied magnetic field is blocked by prior ingestion of typical antiepileptic drugs, indicative of a change to neuronal activity level. Moreover, the effect of both light and light 1 magnetic field requires the presence of DmCRY. Thus, we conclude that an applied magnetic field alters the ability of lightactivated DmCRY to influence levels of synaptic excitation in the Drosophila CNS. Results To identify a magnetic field effect (MFE) on the CNS of Drosophila, we employed an established assay designed to probe how manipulation of neuronal activity during embryogenesis in Drosophila affects the probability of seizure in subsequent third instar larvae (,3 days later) 19,20 . Seizure duration is measured as the mean recovery time (MRT) of Drosophila larvae from a DC electric shock across the anterior-dorsal surface (approximating the position of the underlying CNS). Single gene mutations of the bang-sensitive (i.e. seizure-sensitive) grouping of Drosophila show a significantly extended MRT compared to wildtype. Electrophysiological analysis shows that this effect observed in larvae is associated with increased levels of synaptic excitation in the CNS of these mutants during embryogenesis 19 . Exposing wildtype embryos to pulsed blue light (470 nm, 100 ms on/900 ms off) during 11-19 h of embryogenesis (when the locomotor neural circuits form) 21 results in subsequent larvae that show significantly increased seizure duration compared to control embryos that developed in constant darkness (using a 30 V/3 s electroshock, Figure 1a). We found this effect of light to be DmCRYdependent; it is neither observed in a cry 03 loss-of-function mutation (cry -/-), nor is it produced when using pulsed orange light of 590 nm peak wavelength (Figure 1a). It is known that light-activated DmCRY results in increased action potential firing in Drosophila arousal neurons 16 , which we hypothesise is sufficient to destabilise the development of the CNS leaving it prone to seizure 19 . Significantly, repeating these experiments in the presence of a 100 mT magnetic field from a pair of NeFeB permanent magnets during the same period of embryogenesis substantially increased the effect of blue light on seizure severity in larvae compared to lightpulses alone ( Figure 1a). 
We reproduced this MFE when a different researcher conducted equivalent experiments using a different population of flies, a different blue LED (from the same manufacturer) and a different electric shock stimulator (10 V/1.5 s). Qualitatively similar relative mean recovery times were recorded: control, 35.0 6 7.2 s; blue light, 76.1 6 13.3 s (P 5 0.02 vs. control); blue light 1 magnetic field, 139.0 6 15.5 s (P 5 0.003 vs. control and blue light). The MFE on seizure duration was shown also to be DmCRY-dependent: being abolished in a cry 03 null (cry -/-) background and rescued by transgenic expression of UAS-cry in a cry null (BL/MF/cry -/-/cry 1 , Figure 1a). Prolongation of seizure duration was also prevented by prior ingestion of typical antiepileptic drugs (e.g. phenytoin and gabapentin), consistent with an effect on neuronal activity (Figure 1a). Although magnetic field exposure in the absence of light results in a marginally longer MRT than for the dark controls, the difference is not statistically significant (P . 0.99). Moreover, the MRT after exposure to a combination of blue light and magnetic field (137.3 6 15.7 sec) is significantly longer than the MRT after exposure to blue light alone added to the MRT after exposure to magnetic fields alone (75.1 6 9.1 sec, Figure 1b). The MFE is therefore dependent on light and is not simply an additive effect. The effect of antiepileptic drugs (phenytoin and gabapentin) is a strong indication that the increased seizure after exposure to blue light and magnetic field is related to increased synaptic excitation in the CNS. This result therefore represents an important initial step in unravelling the neuronal circuitry involved in CRY-dependent magnetoreception in Drosophila. Dark OL-exposed BL-exposed MF-exposed Discussion We present a significant MFE on seizure duration in Drosophila larvae. These data were acquired using an established proxy measurement for perturbations to neuronal activity. A change in neuronal activity that results from the response of any primary magnetosensor is considered necessary to produce an organism response 17,18 . The effect we observe requires light and is DmCRYdependent, the spin dynamics of which are potentially magneticallysensitive via a photochemical radical pair mechanism 3,4 . Indeed, both low (,5 mT) and moderate (5-30 mT) magnetic fields have been reported to produce changes in quantum yield of flavin semiquinone radicals in photoreceptor CRY from Arabidopsis thaliana (AtCRY1) 22 . By analogy, magnetically sensitive radical pair reaction dynamics in DmCRY may influence the concentration of FAD N2 .There is evidence to suggest that this oxidation state of the flavin activates the protein in its role as circadian photoreceptor 23 . Conformational changes in the C-terminal tail of DmCRY, which is well placed to respond to the flavin oxidation state, were observed to be kinetically coupled to the single electron reduction (by light or chemically) of oxidised FAD to FAD N2 . This conformational change appears to allow interaction with its partner protein, Timeless (TIM), which ultimately leads to degradation of TIM and resetting of the clock. Interestingly, DmCRY was found to revert back to its dark state conformation with the same kinetics as flavin reoxidation 23 . Consistent with this picture is the observation that clock neurons overexpressing DmCRY result in Drosophila with free-running circadian periods that show an enhanced response to a 300 mT applied MF 8 . 
However, the conserved triad of tryptophans thought to act as an electron transfer chain to the flavin to generate the photoinitiated radical pair are not necessary for DmCRY-dependent magnetic orientation of adult flies 9 . Moreover, light-induced conformational changes have also been observed in DmCRY where the flavin was reduced chemically to FAD N2 prior to illumination 24 , and in variants containing tryp-triad mutations 25 . The authors of these studies argue that photoexcitation alone (of FAD N2 or even oxidised FAD), without any subsequent electron transfer chemistry, might be sufficient to trigger activation of DmCRY 24-26 . Alternatively, MFs might influence radical pair photochemistry in Drosophila via a mechanism that is independent of the CRY-TIM interaction that initiates signal transduction in the circadian clock. Light-activated DmCRY is also known to result in an increased firing rate of arousal neurons 16 . This pathway is a consequence of light initiated redox chemistry in DmCRY (which is likely to proceed via radical pair intermediates) that modulates potassium channels and results in membrane depolarisation. The fact that the MFE we observe is negated by prior ingestion of anti-epileptic drugs indicates the magnetically-sensitive activity of DmCRY is similarly impacting neuronal firing activity. However, whether our observations, or Drosophila magnetoreception in general, is dependent or independent of the flies' circadian clock is yet to be determined. Exposing the embryos to a 100 mT magnetic field in the first instance has a range of benefits over the mT field exposures immediately relevant to animal magnetoreception. First, the radical pair mechanism predicts that fields of this magnitude will saturate the Zeeman effect of typical organic radical pairs 14 . This is likely to produce a magnetically-induced change in spin selective product yield and reaction kinetics that is larger than those expected from mT fields, and therefore might produce a larger physiological and organism response. Second, potential variations in background field are much less significant when using mT exposure conditions compared to mT conditions. Finally, the use of permanent magnets removes the confounding variables of vibration and heating that are possible when using the electromagnets necessary for mT exposure. These factors may have been significant in the history of conflicting reports in the context of biological MFEs from exposure to mT fields, which includes examples concerning the radical pair/CRY model of magnetoreception 27,28 . Moderate (mT) field exposure was therefore chosen as a rational and reliable starting point, which has provided us with greater confidence in the observed effect before mT exposure experiments are conducted. Our results represent an important initial step in elucidating the signal transduction mechanism between the response of the putative primary magnetoreceptor, cryptochrome, and a behavioural response in a genetically tractable organism. This study paves the way for assessing the influence of the amplitude and orientation of Earth strength magnetic fields (,mT) on seizure duration in Drosophila larvae. Using similar methods to those employed by Fogle et al. 16 , we can also confirm whether the MFE observed here is mediated through light-dependent redox chemistry in DmCRY that is known to increase action potential firing in central brain neurones. 
This combined approach will provide a platform from which to detail the underlying electrophysiology of Drosophila magnetoreception. Methods Flies were maintained on standard corn meal medium at 25uC. Embryos were collected by allowing females to lay on grape-agar (Dutscher, Essex, UK) plates supplemented with a small amount of live yeast paste at 25uC. Flies used were Canton-S wildtype and cry 03 homozygotes 29 . For rescue of DmCRY expression, the following stocks were crossed: ElaV C144 -GAL4;;cry 03 females crossed to UAS-Dmcry;cry 01 males. Magnetic field and light exposure during embryogenesis. Embryos (,100, 1-3 h after egg laying) were aligned in a central region (1 cm 2 ) on a grape-agar plate in rows of 10 such that all had the same anterior-posterior orientation, which was aligned, where applicable, parallel to the magnet separation axis (Figure 2). The magnetic field within the 1 cm 2 region containing the embryos was measured to be 100 6 5 mT. The plate was placed in a humidified atmosphere inside a 25uC incubator and, where applicable, exposed to collimated light from an overhead LED (Cairn Research Ltd, UK, Figure 2). LEDs were used with peak emission at 470 nm (bandwidth 25 nm, irradiance 466 6 14 nW cm 22 ) or 590 nm (bandwidth 18 nm, 1094 6 18 nW cm 22 ). Embryos were exposed to light for 100 ms every second between 11-19 h after egg laying, but exposed to the magnetic field throughout embryogenesis. After hatching, larvae were transferred to vials and maintained in complete darkness and in the absence of any applied magnetic field until ,3 days later when wall climbing third instar larvae were tested for seizure-like behaviour. Electroshock. Prior to stimulation, third instar larvae were washed to remove food residue and gently dried using paper tissue. Larvae were then allowed to recover on a plastic dish until normal crawling behaviour resumed. A stimulator, comprising two tungsten wires (0.1 mm diameter, ,1-2 mm apart) was placed across the anteriordorsal surface, over the approximate position of the CNS. A DC pulse, generated by either a Grass S88 stimulator (30 V/3 s, Grass instruments, RI, USA) or constant current stimulator (10 V/1.5 s, DS2A, Digitimer, UK), was applied. The animal responded by tonically contracting and ceasing normal, motile behaviour. Time to resumption of normal motile behaviour was recorded (see [19] for more details). Results were analysed for significance using a one-way ANOVA with a Bonferroni post-hoc test. Drug-feeding. Mated adult females were fed with phenytoin (0.4 mg per ml) or gabapentin (0.1 mg per ml) for 2 days by adding flies to food vials containing the drug. Drugs (Sigma, UK) were prepared in DMSO, which has no effect on MRT 19 . apparatus used during embryogenesis. Embryos were aligned in a central region (1 cm 2 ) on a grape-agar plate in rows of 10 such that all had the same anterior-posterior orientation, which was aligned parallel to the magnet separation axis. The plate was placed in a humidified atmosphere inside a 25uC incubator and exposed to collimated light from an overhead LED (e.g. 470 nm).
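The group comparison described in the Methods (one-way ANOVA with a Bonferroni post-hoc test on mean recovery times) can be reproduced along the following lines. This is only an illustrative sketch: the recovery-time values below are placeholders rather than the published data, and the post-hoc step is implemented as Bonferroni-corrected pairwise t-tests, which is one common reading of "Bonferroni post-hoc test".

```python
import numpy as np
from scipy import stats

# Mean recovery times (seconds) per larva and exposure group; placeholder values only.
groups = {
    "dark":          np.array([30.0, 41.2, 28.5, 39.9, 35.4]),
    "blue_light":    np.array([70.1, 82.3, 66.0, 90.5, 71.7]),
    "blue_light_MF": np.array([120.4, 150.2, 133.8, 141.0, 148.6]),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests with a Bonferroni-adjusted significance threshold.
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
alpha = 0.05 / len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at Bonferroni alpha = {alpha:.4f}: {p < alpha}")
```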
2018-04-03T02:12:44.318Z
2014-07-23T00:00:00.000
{ "year": 2014, "sha1": "918aa8dc25f0c5a8929a161145737caf4367b7d1", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep05799.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "918aa8dc25f0c5a8929a161145737caf4367b7d1", "s2fieldsofstudy": [ "Biology", "Physics" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119281665
pes2o/s2orc
v3-fos-license
Hybrid membrane resonators for multiple frequency asymmetric absorption and reflection in large waveguide We report that Hybrid membrane resonators (HMRs) made of a decorated membrane resonator backed by a shallow cavity can function as Helmholtz resonators (HRs) when mounted on the sidewall of a clear waveguide for air ventilation. When two single-frequency HMRs are used in the same scheme as two frequency-detuned HRs, asymmetric total absorption/reflection is demonstrated at 286.7 Hz with absorption coefficient over 97 % in a waveguide 9 cm x 9 cm in cross section. When two multiple-frequency HMRs are used, absorption in the range of near 60 % to above 80 % is observed at 403 Hz, 450 Hz, 688 Hz, 863 Hz and 945 Hz. Theoretical predictions agree well with the experimental data. The HMRs may replace HRs in duct noise reduction applications in that at a single operation frequency they have stronger strength to cover a much larger cross section area than that of HRs with similar cavity volume, and they can be designed to provide multiple frequency absorption band. Total absorption in linear dissipative systems within the subwavelength scale had been a challenge until recently [1 -8]. Several schemes have been theoretically proposed via coherent perfect absorber (CPA), where two coherent waves with specific amplitude and phase were incident in opposite direction to an absorption core [1,2]. For one beam incidence scenario, maximum absorption by a single dipole or monopole unit was shown to be at most 50 % [3]. Dark acoustic metamaterials backed by a hard wall could achieve total absorption via curvature energy at resonance to maximize energy dissipation while achieving impedance match to eliminate reflection [4], while meta-surface made of hybrid resonators [5] could provide an effective perfect absorption area several times larger than the physical area of the active devices. Perfect absorption by a meta-surface consisting of a perforated plate and a coiled coplanar air chamber has also been demonstrated [6]. These types of devices all contained a hard wall to eliminate transmission. The hard wall was replaced in a monopoledipole co-resonance scheme to eliminate transmission, while the individual resonances ensure impedance matching and elimination of reflection [7]. No airflow was allowed in the direction of wave propagation, except for the second device reported in Ref. 7, but the dipole sub-component blocked part of the waveguide cross section so it still impeded airflow. Most recently, two Helmholtz resonators (HRs) mounted on the sidewall of a waveguide were reported to have nearly perfect absorption at low frequency [8]. For devices with physical sizes several times the relevant wavelength, perfect absorption could be achieved by slowing down the waves [9,10] and in broadband [11]. However, up to now, no broadband subwavelength absorbers mounted on the sidewall of clear waveguide have been experimentally realized. In this letter, we report ventilated perfect absorbers comprising two sub-wavelength hybrid membrane resonators (HMRs) which form part of the sidewall in line with the rest of the ventilation waveguide. For two single-resonant-frequency HMRs with one being slightly detuned from the other, total absorption is achieved when waves with a frequency somewhere in-between the two HMR resonant frequencies are incident from the high resonant frequency HMR side, while total reflection is achieved if the waves are incident from the other side. 
For two multiple-resonant-frequency HMRs with slightly detuned resonant frequencies from one another, strong asymmetric absorption and reflection are experimentally realized at five frequency bands below 1000 Hz. Consider first a decorated membrane resonator (DMR) backed by a sealed cavity, forming a HMR [5]. The surface response function of the DMR can be expressed in terms of its eigenmodes [12] is the displacement-weighted mass density, ρ is the local mass density, A m is the area of the decorated membrane, ω n is the angular frequency of the n-th resonant mode of the DMR, and ω is the angular frequency of the excitation. The dissipation coefficients β n are fitting parameters. Usually only the eigenmodes close to the frequency of interest are included in Eq. (1). The acoustic impedance of the HMR cavity is given as Z c = −iγp 0 /(Vω) [5], where γ is the adiabatic index, p 0 is the atmospheric pressure, and V is the volume of the cavity. The total acoustic impedance of the HMR is When the HMR is mounted on the sidewall of a waveguide, as shown in Fig. 1(a), the acoustic impedance across the waveguide (waveguide impedance) is 1 where Z 0 = ρ 0 c 0 /A is the acoustic impedance of air in the waveguide, ρ 0 is the density of air, c 0 is the speed of sound and A is the cross section area of the waveguide. Similar to a sidemounted HR [8], when the frequency reaches its resonance, the HMR will generate a monopole resonance that pushes or sucks the air in the axial direction along the waveguide through the normal movement of the membrane. The waveguide impedance of an HMR behaves like a soft boundary with near zero impedance, leading to minimum transmission and maximum reflection. When two HMRs are mounted on the sidewall of the waveguide as shown in Fig. 1(a), and based on the impedance transfer method [13], the waveguide impedance in front of the second HMR (HMR-2) can be transferred to the surface in front of the first HMR (HMR-1). The combined waveguide impedance (CWI) at HMR-1 is then where Z r1 and Z r2 denote the acoustic impedance of HMR-1 and HMR-2 given by its response function of DMR and cavity, and L is the distance between the two HMRs measured from their centers. The CWI at HMR-2 ( 2 CW Z ) can be obtained by exchanging Z r1 and Z r2 in Eq. (2). Let the resonant frequencies of the two HMRs be f 1 and f 2 , with f 1 being slightly higher than f 2 . For waves incident from the HMR-1 side, at a frequency near f 2 the impedance of HMR-2 is almost zero, so 11 . It is seen from the expression that HMR-2 generates a perfect-reflection boundary in the waveguide, and the air column between the two HMRs serves as a cavity that is in parallel to HMR-1. Similar to the case of a HMR [5], it is possible for 1 CW Z to be equal to Z 0 , leading to zero reflection. As the transmission is also small, this leads to total absorption of the incident waves. For waves incident from the HMR-2 side at the same frequency, Z r2 is near zero so which leads to a soft boundary and total reflection. Therefore, when the waves at frequency close to f 2 are incident from the HMR-1 side, near total absorption will occur. Near total reflection will occur if the same waves are incident from the HMR-2 side. As DMR's usually have several resonant modes, especially for those with multiple platelets [4] shown in Fig. 1(b), one expects that asymmetric absorption and reflection could occur at multiple frequencies. 
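The side-branch picture above (HMR impedance, combined waveguide impedance obtained by impedance transfer, and the resulting reflection/transmission) can be illustrated numerically with a standard plane-wave transfer-matrix calculation. The sketch below is not the authors' model: it replaces the modal DMR response function of Eq. (1) with a single-mode lumped approximation (resistance R, effective mass M, stiffness K in series with the cavity compliance -i*gamma*p0/(V*omega)), and all numerical parameter values are placeholders. Only the duct transfer-matrix and anechoic reflection/transmission formulas are standard results.

```python
import numpy as np

rho0, c0 = 1.21, 343.0                 # air density (kg/m^3), sound speed (m/s)
A = 0.09 * 0.09                        # duct cross section (m^2), 9 cm x 9 cm
Z0 = rho0 * c0 / A                     # characteristic acoustic impedance of the duct
L = 0.065                              # HMR-1 / HMR-2 spacing (m)
gamma, p0 = 1.4, 1.013e5

def hmr_impedance(omega, R, M, K, V):
    # Lumped single-mode stand-in for the DMR response, in series with the
    # backing-cavity compliance Z_c = -i*gamma*p0/(V*omega).
    return R + 1j * (omega * M - K / omega) - 1j * gamma * p0 / (V * omega)

def duct(k, length):
    # Plane-wave transfer matrix of a duct segment of the given length.
    return np.array([[np.cos(k * length), 1j * Z0 * np.sin(k * length)],
                     [1j * np.sin(k * length) / Z0, np.cos(k * length)]])

def shunt(Zb):
    # A side-mounted resonator acts as a shunt: pressure is continuous and
    # part of the volume velocity is diverted into the branch.
    return np.array([[1.0, 0.0], [1.0 / Zb, 1.0]])

def r_t_alpha(f, branch_params):
    # Reflection, transmission and absorption for a plane wave hitting the
    # elements in branch_params, ordered from the incidence side.
    omega, k = 2 * np.pi * f, 2 * np.pi * f / c0
    T = np.eye(2, dtype=complex)
    for n, params in enumerate(branch_params):
        if n > 0:
            T = T @ duct(k, L)
        T = T @ shunt(hmr_impedance(omega, *params))
    a, b, c, d = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    denom = a + b / Z0 + c * Z0 + d
    r = (a + b / Z0 - c * Z0 - d) / denom
    t = 2.0 / denom
    return abs(r), abs(t), 1 - abs(r) ** 2 - abs(t) ** 2

# Placeholder lumped parameters (R, M, K, V) for two slightly detuned HMRs.
hmr1 = (2.0e4, 400.0, 1.0e6, 37e-3 * 120e-3 * 25e-3)
hmr2 = (2.0e4, 415.0, 1.0e6, 37e-3 * 120e-3 * 25e-3)

freqs = np.linspace(150, 450, 601)
alpha_fwd = [r_t_alpha(f, (hmr1, hmr2))[2] for f in freqs]   # incidence from the HMR-1 side
alpha_bwd = [r_t_alpha(f, (hmr2, hmr1))[2] for f in freqs]   # incidence from the HMR-2 side
i = int(np.argmax(alpha_fwd))
print(f"forward absorption peak {alpha_fwd[i]:.2f} at {freqs[i]:.1f} Hz; "
      f"backward absorption there {alpha_bwd[i]:.2f}")
```

Sweeping the two incidence directions with the same element sequence reversed reproduces the qualitative asymmetry discussed above: transmission is reciprocal, but reflection and hence absorption differ between the two sides.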
To understand clearly the mechanism of asymmetric absorption/reflection of HMRs we first investigate a pair of HMRs, each having just one resonance in the frequency range of interest, and demonstrate asymmetric absorption/reflection at a single frequency. The HMRs were mounted on the sidewall of a waveguide at 65 mm distance measured from center to center ( Fig. 1(a) The reflection for waves incident from the HMR-1 side and HMR-2 side can be obtained are shown as solid curves in Fig. 2(a) and (b), respectively. It is seen that when the waves are incident from the HMR-1 side, the real part of the CWI increases to 0.85Z 0 while the imaginary part becomes zero at 286.7 Hz. The reflection reaches minimum due to near-match of impedance with air, as indicated by the blue curve in Fig. 2(c), which agrees well with the experimental reflection spectrum (blue circles in Fig. 2(c)). As the transmission (green circles for experimental data and green curve for theory) is small due to the monopole excitation of the HMRs, the absorption reaches the peak value of 97.5 %, as indicated by the red circles for experimental data and the red curve for theory in Fig. 2(c). As shown in Fig. 2(b), for waves incident from the HMR-2 side, the CWI is nearly zero around 286.7 Hz, resulting in high reflection, as shown in Fig. 2(d). The transmission is identical to that for the other incident direction, as expected. In all, the theoretical results agree very well with the experimental data. The numerical air velocity fields at the peak absorption frequency of 286.7 Hz are depicted in Fig. 2(e) and 2(f) for the two incident directions, respectively. When the wave was incident from the HMR-1 side, both HMRs were highly excited, as the air velocities inside both cavity reached over 9 times the incident wave. The directions of the air velocity near the two HMRs, however, were opposite. The incident sound waves were trapped between the HMRs and eventually dissipated by the HMRs, leading to near total absorption. When the waves were from the HMR-2 side, only HMR-2 was excited at 286.7 Hz. The air velocity in the cavity of HMR-2 was about 5 times the incident wave, while in the cavity of HMR-1 the air velocity was only about 1.5 times the incident wave. The directions of the air velocity near the two HMRs were about the same. The wave was mostly reflected with little absorption. For multiple frequency absorption, we made two devices HMR-3 and HMR-4. The schematics of the device structure are shown in Fig. 1(b). Both cavities were made of a 37 mm  120 mm front plate and 40 mm in depth. An opening of 35 mm  78 mm in dimension was made on the front plate and sealed by a rectangular membrane 0.15 mm in thickness. Two semicircle hard platelets were mounted on the membrane. The semicircle radius and the thickness were 6 mm and 0.5 mm, respectively. The mass of the top platelet in HMR-3 was 37 mg, while that of the bottom platelet was 39 mg. The corresponding ones in HMR-4 were enlarged portion of Fig. 1(b) that was drawn in scale as the real HMR-3 device. The top platelet of HMR-4 was moved up by 1 mm as compared to that of HMR-3, while the bottom platelet of HMR-4 was moved down by 1 mm as compared to that of HMR-3. The crosssection of the waveguide was 90 mm  90 mm. A thin layer of grease was applied onto the membrane of HMR-3 to introduce extra dissipation. A single dissipation coefficient in the same value as HMR-1 and HMR-2 was used in the simulations for all the eigenmodes of HMR-3 and HMR-4, respectively. 
The same membrane parameters as for HMR-1 were used for HMR-3 and HMR-4. The separation between the centers of the two devices was 60 mm when both were mounted on the waveguide. from the HMR-3 side is shown as circles in Fig. 3 Fig. 3(c). They all have large average displacement to generate a strong monopole resonance in the waveguide impedance. These five eigenmodes were then used in the calculations for the waveguide impedance following the same way as for HMR-1 and HMR-2. The best fit shown in Fig. 3 agrees well with the experimental data except for Peak-I, where the resonant frequency is off by about 120 Hz. Through theoretical investigations, we found that it is extremely difficult to optimize the delicate balance between impedance matching and dissipation cancelation for each individual absorption peak, even though we are able to match four out of the five resonant peak frequencies. What we have presented here is a preliminary success in the attempt to predict the properties of the devices, which omitted some of the structural details, such as the contact areas between the membranes and the platelets, the mass distribution of each platelet, and the stress distribution in the membrane. In summary, we have demonstrated that an HMR functions like an HR when mounted on the sidewall of a clear waveguide, in that it also creates a soft boundary in the waveguide that causes strong reflection and transmission loss. When two slightly detuned HMRs are mounted in series on the sidewall along the waveguide without any significant impediment of airflow, asymmetric total absorption and reflection can be realized. At a single working frequency the HMRs with comparable cavity volume as the corresponding HRs have significantly stronger strength than HRs, as the waveguide cross section area in this work is over three times of that for the HRs [8]. The HMRs can also provide multiple frequency asymmetric absorption/reflection, which the HRs cannot. Acknowledgement -We sincerely thank P. Sheng for invaluable discussions. This work was supported by AoE/P-02/12 from the Research Grant Council of the Hong Kong SAR government.
2019-04-13T16:14:49.801Z
2016-10-12T00:00:00.000
{ "year": 2016, "sha1": "78b370aec95354c5053fac714dda51cf85b273b2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1610.03754", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3ac4861ca33a81008c193e8c97f05983cb374b76", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Chemistry", "Physics" ] }
245129062
pes2o/s2orc
v3-fos-license
Using the paleotsunami data for the tsunami hazard assessment The article is focused on the development of statistical methods of the tsunami recurrence evaluation using paleotsunami data. The new key moment is the creation of a model to quantify the preservation potential of paleotsunami deposits. The article includes a brief overview of the results of studies of the variability and preservation of tsunami deposits. The model was tested on materials about paleotsunami on the coast in the Khalaktyrka area (a village within the city of Petropavlovsk-Kamchatsky), obtained earlier, for four time intervals set by the key-marker volcanic tephra layers in Kamchatka (Ksudach in 1907, Avachinsky in 1855 and 1779, Opala in 606). The maximum likelihood estimates of the number of tsunamigenic horizons for the indicated time intervals are given. The restrictions of the considered model are analyzed. Introduction. Geological tsunami traces on the coasts Tsunami is a dangerous natural phenomenon, at times attacking the Far East coast of Russia. Therefore, hazard and risk assessments are necessary for planning the development of tsunami-prone coasts and urban planning in the coastal zone [1]. However, the lack of data in the catalogs [2][3] of historical tsunamis does not allow to quantify the tsunami hazard with an acceptable accuracy of the order of 5-10 %. This requires 250-500 years of tsunami registration without gaps. The only alternative to this expectation is the use of paleotsunami data. Research, started independently in different countries, has shown that strong tsunamis leave geological traces on the attacked coasts. Figure 1 shows that tsunami deposits the most beach material in the "middle third" of the inundated coastal area. Near the shore, the material is weakly deposited from the high-speed flow, and in the zone of maximum run-up height, a little material remains in the flow. Fresh deposits of tsunami can be traced almost to the boundary of inundation zone, but in the case of paleotsunami deposits, the situation is complicated by the incomplete preservation of thin layers. The aim of this work is to create a method for obtaining the tsunami recurrence assessments using the data on paleotsunami deposits, taking into account their preservation potential. Deposits of modern tsunamis and paleotsunami, their formation and preservation When interpreting data on paleotsunami sediments on the Far East coast of Russia, the results of studying the features of sedimentation associated with modern large historical tsunamis of the 20th and early 21st centuries are of great importance [5][6][7][8][9]. Tsunami deposits on slopes are usually washed away by atmospheric precipitation over time, but in the conditions of coastal peatlands, the slopes are very small, and the beach material deposited by the tsunami is fixed with vegetation over the years. In the long-term process of fixation, tsunami deposits are deformed by plants growing from below and roots of a new vegetation cover, as well as by insects living in this environment (bioturbation). One of the urgent tasks in the study of tsunami deposits is the analysis of their preservation potential on the scale of geological time. For several years, аuthors [10][11][12] traced the dynamics of deposits of the largest recent tsunamis in relation to their preservation potential. The most important factor was the initial thickness of the sediments. 
The probability of preservation of deposits less than 10 cm thick is quite small, but deposits with a thickness of more than 10 cm were well preserved. In a number of regions, anthropogenic impact turned out to be an important factor. The final conclusion drawn in the cited papers is following: the data on tsunami deposits require correction for variability and preservation. Features of statistical accounting of data on paleotsunami on the example of the Khalaktyrka region, Kamchatka For a long time, the main type of natural materials used by tsunami specialists were data on maximum tsunami run-up heights collected in catalogs [2][3]. The main feature of such material is the data on the heights that were reached by the tsunami waves. The presence of paleotsunami deposits found at a certain level above the ocean indicates that the tsunami has exceeded this level. What the maximum run-up height was in this case, is unknown. The method of joint analysis of data on historical tsunamis and paleotsunamis is considered on the example of the coast in the Khalaktyrka region, Kamchatka (figure 2). Figure 2 shows typical features of paleotsunami layers. As a rule, tsunami deposits at wave heights of less than 10 m are located in spots. Surface processes reduce the preservation of the layers, and as a result, each section contains its own set of deposits, with partial overlapping sets from adjacent sections. As can be seen from figure 2, volcanic tephra deposits can be traced along the profile much more clearly and reliably. Accordingly, for the correlation and dating of the paleotsunami deposits, the tephrostratigraphy and tephrochronology method was used [13], based on the previously studied Holocene key-marker tephra layers in Kamchatka. This approach made it possible to determine the stratigraphic position, relative and absolute age of tsunamigenic layers in geological sections. In the study area on the coast of the Khalaktyrka beach, 13 tsunamigenic horizons (Ts1-Ts13) were identified [14]. Tsunami deposits are usually represented by thin (0.5 to 20 cm) layers of dark gray sea sands. It is obvious that the formation of deposits of various tsunamis took place in different conditions. Therefore, the average probability of sediment preservation should be used in conditions that are close to homogeneous. For this purpose, the entire time interval in which tsunamigenic layers were identified is divided into four intervals of shorter duration (k = 1, 2, 3, 4), within which the conditions for the formation of deposits are more uniform. 2). For the same purpose, a two compact groups of sections in the "middle" parts of profiles 1 and 2 were studied (figure 2). There are sections 302-306 (at a distance of 270-500 m from the shoreline) and 309-313 (at a distance of 250-440 m from the shoreline) [14]. Figure 2. Paleotsunami study area on the Pacific coast of Kamchatka and the location of profiles and sections. Geomorphological profile 2 and section diagrams [14]. The data on the number of tsunamigenic layers in each of the studied sections that fall within the time interval between the horizons of key-marker volcanic ash are summarized in Table 1. The positions of sections 302-306 and 309-313 are of different heights. However, the height of the beach ridge h = 8 m is associated with tsunami deposits in these sections, since the tsunami reached them, having overcome the high beach ridge (figure 2). 
Assume that there were N paleotsunamis at some coastal location during the time period T, with the deposits of which the height h above the sea level is associated. It is known that the sequence of large tsunamis is close to Poissonian one [15]. Therefore, the probability of such an event is given by formula: where the parameter φ (h), depending on the threshold height h, is the average frequency of tsunami manifestations, which is called the tsunami recurrence function (RF). According to the definition, tsunami recurrence function (RF) is the average frequency of tsunami occurrence in a given place x, with maximum run-up height being equal to or more than the threshold height h where N (run-up height ≥h) is the number of tsunamis with maximum height ≥ h occurring during the time period T. Moreover, the exponential approximation of the tsunami recurrence function is acceptable for large tsunami heights h ≥ 0.5 m [16]: ( Parameter f is the asymptotic frequency of large tsunamis in the region, which generally slowly changes along the coast and can be considered to be a regional constant. Parameter H* is the characteristic tsunami height for selected location x, which is proportional to the average coefficient K(x) of the tsunami height transformation (amplification) from the open ocean to the coastal location x. Tsunami activity parameters f and H * must be determined from historical tsunamis and paleotsunami data. Accordingly, substituting the values of these parameters into formulae (3) and (1), we can estimate all the necessary probabilistic values of hazard and risk associated with a tsunami. In fact, we cannot be sure that the identified n deposits of paleotsunami are all paleotsunami N at some coastal location, belonging to the time period T and associated with the height h above sea level, because not all traces of paleotsunami are preserved. Let us assume, that the i-th section contains n ik deposits from N k paleotsunami that actually took place during the k-th time interval. The corresponding probability P ik (n ik ) can be estimated by the binomial distribution [17]: where q k = 1 -p k is the probability of "erasing" the paleotsunami traces. The likelihood function for the k-th time interval is equal to the product of the probabilities related to each i-th section [17]: For the k-th time interval, the values of the number of paleotsunami N k and probabilitiy p k that maximize the value of the likelihood function L k should be used as the maximum likelihood estimates. According to (5), these maximum likelihood values are related by a simple analytical formula: where m is the number of sections. The value of the maximum likelihood estimates of paleotsunami interlayers N k is found by a numerical method. The obtained values of the estimates of the number of tsunami deposits N k and probabilitiy p k of their preservation in the given time intervals, obtained by maximizing the likelihood function, are included in Table 1. The analysis of the likelihood function (5) showed that the amount of actually identified deposits Тs1, Тs2 and Тs3 for the time intervals 2020-1907 and 1907-1855 is consistent with their distribution in 10 sections and are maximally probable. It is interesting that the estimates of the probability of preservation of tsunamigenic interlayers for these time intervals, р 1 = 0.6 and р 2 = 0.2, characterizing the conditions of their formation and preservation, differ significantly. This probability is less for the more ancient events of the period 1907-1855. 
For the other two time intervals 1855-1779 and 1779-606, the numbers of identified paleotsunami interlayers Тs4 -Тs5 and Тs6 -Тs8 does not agree with their distribution in 10 sections, and the maximum of the likelihood function does not correspond to these quantities. There may be several reasons for this discrepancy. It is possible that 10 investigated sections are not enough to statistically compensate for the low probability of the preservation of traces of some ancient tsunamis. It is also possible that the conditions for the formation of tsunamigenic deposits during these longer periods of time could change significantly, which violates the requirement of uniformity of the conditions assumed in the model. As an example, figure 3 shows the joint function of the recurrence of tsunami heights for Khalaktyrka, constructed earlier [18] by the least squares method based on the data on the historical tsunamis of 1841, 1952, and 1960, and 11 paleotsunami identified in [14] for all considered sections for the period starting from 236 BC. Figure 3. Empirical tsunami recurrence function for Khalaktyrka using historical and paleotsunami data [18]. Figure 3 demonstrates the significance of the paleotsunami data: exactly 11 paleotsunami correspond to one value of the recurrence function with the smallest value of the standard deviation (apriori error), compared with large standard deviations for (total) 3 large historical tsunamis. Conclusion Data on paleotsunami are very important for obtaining estimates of tsunami hazard (recurrence and possible heights) and risk with acceptable accuracy, which is necessary both for solving purely scientific problems and for planning the development of the coastal zone. However, the direct use of data on paleotsunami can lead to underestimations of the recurrency frequency, and therefore it is necessary to take into account the peculiarities of their formation and changes that occur in tsunami sediments prior to their fixation. Using the maximum likelihood method, a model was built to estimate the real number N of paleotsunami during a certain time period T (which characterizes the frequency of tsunamis) and the probability p of the preservation of their traces based on data on the number of paleotsunami deposits in several sections associated with the same height h above ocean level. The model was tested on data on paleotsunami on the coast near Khalaktyrka, for four time intervals set by clear deposits of key-marker volcanic tephra layers. Estimates of the number of tsunamigenic horizons for time intervals after 1855 are the most probable. For older events prior to 1855, the number of identified paleotsunami interlayers does not agree with their distribution in 10 sections, and the maximum likelihood function does not correspond to these quantities. This is explained both by the limitations of the constructed model, associated with the conditions assumed in it, and by the low probability of preserving traces of some tsunamis. Despite some limitations of the considered method, the developed quantitative approach to assessments of the formation parameters of tsunamigenic deposits can be successfully used as a starting point for obtaining adequate quantitative assessments of tsunami hazard and risk.
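The maximum-likelihood step of the binomial preservation model described above (Eqs. (4)-(6)) can be sketched as follows. The per-section counts used here are made-up illustrative numbers, not the values of Table 1, and the function names are ours; a known caveat, independent of this paper, is that the maximum-likelihood estimator of a binomial N can be unstable for small samples.

```python
import numpy as np
from math import comb, log

def profile_log_likelihood(N, counts):
    """Log-likelihood of N actual events given per-section observed counts,
    with the preservation probability profiled out as p_hat = mean(counts)/N."""
    counts = np.asarray(counts)
    if N < counts.max():
        return -np.inf
    p = counts.mean() / N
    if p <= 0 or p >= 1:
        return -np.inf  # degenerate (all-zero or saturated counts) needs separate handling
    return sum(log(comb(N, n)) + n * log(p) + (N - n) * log(1 - p) for n in counts)

def mle_paleotsunami(counts, N_max=30):
    """Maximum-likelihood estimate of (N, p) over integer candidate N."""
    N_hat = max(range(max(max(counts), 1), N_max + 1),
                key=lambda N: profile_log_likelihood(N, counts))
    return N_hat, float(np.mean(counts)) / N_hat

# Hypothetical counts of tsunamigenic layers in 10 sections for one time interval.
counts = [2, 1, 2, 2, 1, 2, 0, 1, 2, 1]
N_hat, p_hat = mle_paleotsunami(counts)
print(f"N_hat = {N_hat}, p_hat = {p_hat:.2f}")
```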
2021-12-14T20:17:28.885Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "ff6c24c95eef0edf98f857ec1a07a9eb1f139845", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/946/1/012021/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ff6c24c95eef0edf98f857ec1a07a9eb1f139845", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Physics" ] }
261064628
pes2o/s2orc
v3-fos-license
Meta-Stock: Task-Difficulty-Adaptive Meta-learning for Sub-new Stock Price Prediction Sub-new stock price prediction, forecasting the price trends of stocks listed less than one year, is crucial for effective quantitative trading. While deep learning methods have demonstrated effectiveness in predicting old stock prices, they require large training datasets unavailable for sub-new stocks. In this paper, we propose Meta-Stock: a task-difficulty-adaptive meta-learning approach for sub-new stock price prediction. Leveraging prediction tasks formulated by old stocks, our meta-learning method aims to acquire the fast generalization ability that can be further adapted to sub-new stock price prediction tasks, thereby solving the data scarcity of sub-new stocks. Moreover, we enhance the meta-learning process by incorporating an adaptive learning strategy sensitive to varying task difficulties. Through wavelet transform, we extract high-frequency coefficients to manifest stock price volatility. This allows the meta-learning model to assign gradient weights based on volatility-quantified task difficulty. Extensive experiments on datasets collected from three stock markets spanning twenty-two years prove that our Meta-Stock significantly outperforms previous methods and manifests strong applicability in real-world stock trading. Besides, we evaluate the reasonability of the task difficulty quantification and the effectiveness of the adaptive learning strategy. Introduction Sub-new stocks are stocks listed for less than one year. Compared to stocks listed for longer periods, the price trends of sub-new stocks are more volatile, allowing investors to profit from short-term trading (Mingli et al. 2022). Consequently, predicting the price of sub-new stocks can be valuable for both stock traders and quantitative finance researchers. Due to short listing time, data scarcity is the main challenge for sub-new stock price prediction, and introducing supplement information is the most direct approach to address this issue. Previously, textual data such as social media information (Sawhney et al. 2021c,d) and company relations extracted via graph neural networks (Sawhney et al. 2021b) have been examined to facilitate stock price prediction. Although these methods can tackle limited data availability in theory, high-quality supplementary data remains difficult to √ × Lookback window Lookback window To Predict To Predict Figure 1: The price series of Welsbach Technology Metals Acquision reflect larger temporal volatility, and is harder to predict than the price trend of Jovo Energy. The red flag represents the date on which the model predicts the stock price trend. obtain and assess (Dong et al. 2020;Batra and Daudpota 2018). Apart from introducing supplement data, transfer learning and meta-learning are two machine learning techniques employed to tackle data scarcity. Specifically, transfer learning obtains a pre-trained model via large amounts of source data and fine-tunes the model on limited target domain data (Li et al. 2022). Acquiring useful features via source data, transfer learning, to some extent, allows the fine-tuned model to solve target tasks with smaller datasets. However, when data distributions of source tasks and target tasks significantly differ, transfer learning underperforms since the pre-trained models may overfit source data and thus fail to adapt to target data . 
In contrast to data-focused transfer learning, meta-learning is task-focused, emphasizing generalizing practical learning strategies instead of transferring low-level features (Chang et al. 2021). By learning "how to learn" on source tasks, meta-learning enables a model to quickly adapt and generalize to unseen tasks, relying less on task similarity and domain match. However, these two learning techniques have not been applied to subnew stock price prediction, with their performance remaining unknown. Given the non-stationarity of stock price series (Zhang et al. 2017;Wen et al. 2019), significant differences exist between old and sub-new stock data. Therefore, meta-learning is a more reasonable approach for sub-new stock price prediction. However, applying meta-learning to sub-new stock price prediction still faces challenges. Firstly, meta-learning is a task-based approach, but most existing methods for stock price prediction are data-focused (Ang and Lim 2022;Sawhney et al. 2021a,b), with little research on task construction for stock price prediction. Therefore, constructing tasks in stock price prediction contexts is crucial before employing meta-learning. Additionally, the difficulty levels of prediction tasks are disparate due to the varying volatility of price series. For example, as shown in Figure 1, the price trend of Welsbach Technology Metals Acquisition (higher volatility) may be more difficult to predict than that of Jovo Energy (lower volatility). However, traditional meta-learning performs equivalent training on each task (Finn et al. 2017;Chang et al. 2021), failing to deal with tasks based on their difficulty levels. This weakness undermines the effectiveness of meta-learning in capturing task-specific knowledge and acquiring fast generalization ability vital for sub-new stock price prediction. To address the above issues, we propose a task-difficultyadaptive meta-learning model: Meta-Stock. With numerous old stock price prediction tasks, Meta-Stock employs meta-learning to adapt the generalization ability acquired from these tasks to the sub-new stocks, overcoming sub-new stock data scarcity. Besides this traditional meta-learning process, we incorporate an adaptive learning strategy to tackle disparate task difficulty levels, thereby enhancing meta-learning effectiveness. Specifically, task difficulty levels can be measured by price volatility in stock price prediction contexts (Xiang et al. 2022). Based on this assumption, we employ wavelet transform to measure the volatility of stock price series. Instead of using wavelet transform to extract low-frequency components for capturing general price trends (Teng et al. 2020;Luo 2021;Wu et al. 2021), we employ it to extract high-frequency coefficients manifesting irregular volatility (Lahmiri 2014) and utilize them to measure task difficulty levels. Consequently, the optimized meta-learning model can assign gradient weights according to varying task difficulties. With such an enhanced metalearning process, Meta-Stock can acquire the generalization ability adapted to predict sub-new stock prices more effectively. The main contributions of this paper can be summarized as follows: • We propose Meta-Stock, a task-difficulty-adaptive metalearning approach to address the price prediction problem targeting sub-new stocks, flexible to different backbones. Meta-Stock adapts the generalization ability acquired from old stock price prediction tasks to those of sub-new stocks, thus overcoming sub-new stock data scarcity. 
• We introduce a task-difficulty-adaptive learning strategy to enhance the meta-learning process. We define task difficulty as price volatility measured by high-frequency coefficients extracted via wavelet transform. • We show that Meta-Stock outperforms previous methods and demonstrate its applicability in real-world trading via extensive experiments on three stock markets spanning twenty-two years. Given the high profitability of sub-new stocks, Meta-Stock is valuable for stock traders and finance professionals. Related Work Stock Price Prediction Modern methods based on the Efficient Market Hypothesis (Malkiel 1989) leverage natural language features to analyze market sentiment (Sawhney et al. 2020c), supplementing original price data. The textual features can be extracted from news (Sawhney et al. 2021d), social media (Xu and Cohen 2018), and public earning calls (Qin and Yang 2019). For instance, Sawhney et al. (Sawhney et al. 2021c(Sawhney et al. , 2020b propose hierarchical temporal attention and cross-modal attention fusion for NLP-enhanced stock prediction. The efforts show how natural language data can complement pricebased methods in capturing the effect of events like market surprises, mergers and acquisitions over stock returns. Recent work also attempts to model company relations using stock prices (Matsunaga et al. 2019;Kim et al. 2019;Feng et al. 2019b) and text data (Sawhney et al. 2021b(Sawhney et al. , 2020a with the GNNs (Graph Neural Networks). For example, Sawhney et al. (Sawhney et al. 2021a) and Ang and Lim (Ang and Lim 2022) propose the hyperbolic stock graph attention network and guided attention multimodal multitask network respectively to capture the inter-company relationship and temporal dependencies in stock prices, promoting accurate stock prediction. However, despite these competitive results, text-based approaches require a large-scale, high-quality corpus to extract helpful information accurately (Dong et al. 2020;Batra and Daudpota 2018). The demand in quantity and quality can result in significant time and money. Moreover, most existing approaches only focus on old stocks that have been listed for over a year and consume substantial training data. They ignore the significance of sub-new stocks for quantitative trading and thus fail to take the challenge of sub-new stock price prediction into consideration. To address this problem, we have to uncover the price series' characteristics: volatility, and utilize the valuable information effectively. The volatility manifests the stock prediction difficulty, which motivates our task-difficulty-adaptive meta-learning design. Meta-learning Meta-learning, also known as learning to learn, emerges as an efficient method for learning to solve a new task with a limited amount of data by leveraging the generalization capability acquired from previous tasks (Hospedales et al. 2021). The idea of meta-learning has been taken to solve the data scarcity problems in many areas, such as recommendation system ) and text classification (Lei et al. 2022).For stock price prediction, Shin-Hung et al. (Chang et al. 2021) adopt MAML (Model-agnostic Metalearning) for model training (Finn et al. 2017). However, despite its great success in solving the data scarcity problem for stock price prediction, MAML has to differentiate through the SGD steps, consuming lots of time. Moreover, this method treats tasks with different difficulties equally and fails to consider the inherent volatility of the stock price series (Chang et al. 2021). 
Unlike the existing method, we choose Reptile, an efficient meta-learning algorithm that does not differentiate through the SGD steps (Nichol et al. 2018). To tackle the sub-new stock price prediction problem, we incorporate old stocks and construct meta-learning tasks for the model to acquire a fast generalization ability. We further improve the meta-learning process with an adaptive learning strategy that assigns weights to tasks according to their difficulty measured by volatility.

Figure 2 presents an overview of our proposed Meta-Stock. In the following subsections, we first describe the formulation of the sub-new stock price prediction problem, then articulate the construction of stock price prediction tasks and elaborate the quantification of task difficulties. Adaptive Meta-training from abundant old stock prediction tasks is introduced next, which enables the model to generalize fast across homogeneous tasks. Lastly, we introduce Sub-new Stocks Adaptation to adapt the model with task-agnostic knowledge to sub-new stock prediction with limited samples.

Problem Formulation

Stock price prediction can be formulated as a time-series classification problem. Given the i-th stock sample X_i in a stock dataset D, the stock sample can be denoted as X_i = [X_i^1, X_i^2, ..., X_i^U], where U denotes the time window length of a stock sample. The feature of stock sample X_i on the u-th day can be denoted as X_i^u ∈ R^d, where u ∈ (0, U] and d denotes the feature dimension of the corresponding timestep. Following (Feng et al. 2019a), the label of stock sample X_i is defined by the movement of the adjusted closing price on the day following the window: Y_i = 1 if p_i^{U+1} > p_i^U and Y_i = 0 otherwise, where p_i^u denotes the adjusted closing price of X_i on the u-th day. Y_i = 1 denotes that the adjusted closing price rises, and Y_i = 0 denotes that it drops. We denote the old stock dataset as D_old and the sub-new stock dataset as D_sub-new, and our model Meta-Stock aims to acquire the generalization ability from D_old to facilitate the prediction on D_sub-new.

Task Construction

By learning from a diverse set of tasks in the source domain, our model acquires the ability to adapt to target tasks with limited data. To achieve this, we devise a task construction strategy that ensures diversity in the meta-training tasks by randomly sampling from the old stock data D_old with different distributions to form tasks T_j = {(X_i, Y_i)}_{i=1}^W, where W denotes the number of samples in a task. Likewise, we also construct a few sub-new stock price prediction tasks from D_sub-new. With the sub-new stock tasks, the meta-learning model can adapt to these tasks after meta-training. Specifically, the data samples are obtained via a sliding window over the stock feature series calculated from the adjusted closing price and volume (more details are provided in Section A of the appendix). By updating the model with gradient descent on both old and sub-new stock tasks that share the same task size W, Meta-Stock achieves improved generalization for sub-new stock price prediction.

Task Difficulty

The difficulty of stock price prediction tasks lies in the inherent volatility of stock price series. To measure this, we compute a difficulty score S_j for each training task T_j ∈ T. To determine the scores S_j, we calculate a sample difficulty score S_i for each data sample X_i ∈ T_j with the following approach. For each data sample X_i ∈ T_j, S_i reflects the temporal volatility in the price and volume signals present in X_i. However, quantifying price volatility in the time domain is challenging. Therefore, we use wavelet transform techniques to analyze the volatility in the frequency domain instead.
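As a concrete illustration of the sample, label, and task construction described in the Problem Formulation and Task Construction subsections above, the sketch below slides a U-day window over one stock's daily series and randomly draws W samples to form a task. This is a hedged reconstruction, not the authors' released code; the column names ("adj_close", "volume") and the use of pandas/NumPy are assumptions.

```python
import numpy as np
import pandas as pd

U = 5  # lookback window length of a stock sample, as used in the paper

def build_samples(stock_df: pd.DataFrame, window: int = U):
    """Slide a window over one stock's daily series and emit (X_i, Y_i) pairs.

    X_i holds `window` days of price/volume features; Y_i is 1 if the adjusted
    closing price rises on the day after the window, and 0 if it drops.
    """
    feats = stock_df[["adj_close", "volume"]].to_numpy(dtype=float)
    price = stock_df["adj_close"].to_numpy(dtype=float)
    X, y = [], []
    for t in range(window, len(stock_df)):
        X.append(feats[t - window:t])              # shape (window, d)
        y.append(int(price[t] > price[t - 1]))     # rise -> 1, drop -> 0
    return np.asarray(X), np.asarray(y)

def build_task(X_pool: np.ndarray, y_pool: np.ndarray, task_size: int, rng=None):
    """Randomly draw `task_size` samples from a pool to form one prediction task."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(X_pool), size=task_size, replace=False)
    return X_pool[idx], y_pool[idx]
```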
One common approach is the Fourier transform (FT), which creates a representation of the signal in the frequency domain. However, the wavelet transform (WT) provides more localized information about the signal in both the time and frequency domains. Hence, we employ the Discrete Wavelet Transform (DWT) to decompose the multivariate time-series stock sample X_i into its smooth (low-frequency) coefficients L_{λ,μ} and its detail (high-frequency) coefficients H_{λ,μ}. During DWT, the original multivariate time series X_i is convolved with a low-pass filter and a high-pass filter, and their outputs are downsampled to obtain the smooth (low-frequency) coefficients L_{λ,μ} and the detail (high-frequency) coefficients H_{λ,μ}, respectively. The frequency-domain volatility of the time series X_i can now be quantified with the DWT coefficients as follows:

L_{λ,μ} = Σ_t X_i(t) Φ_{λ,μ}(t),   H_{λ,μ} = Σ_t X_i(t) Ψ_{λ,μ}(t),

where Φ and Ψ are, respectively, the father and mother wavelets, and λ and μ are, respectively, the scaling and translation parameters. The father wavelet approximates the smooth (low-frequency) components of the signal, and the mother wavelet approximates the detail (high-frequency) components. The father wavelet Φ and the mother wavelet Ψ are defined as follows:

Φ_{λ,μ}(t) = 2^{-λ/2} Φ(2^{-λ} t − μ),   Ψ_{λ,μ}(t) = 2^{-λ/2} Ψ(2^{-λ} t − μ).

The two wavelets Φ and Ψ satisfy the following condition:

∫ Φ(t) dt = 1,   ∫ Ψ(t) dt = 0.

The detail coefficients along the temporal dimension contain high-frequency information and indicate the volatility of the sample.

[Figure 2: Overview of Meta-Stock. Old stock samples (S1, S2, ..., Sn) are grouped through task construction into meta-training tasks (Task 1 to Task M), while sub-new stock samples (S1, ..., Sm) form the adaptation and test data.]

We thus quantify the sample difficulty S_i with the volatility measured by the high-frequency components c^i_{λ,μ} after the discrete wavelet transform. Once we obtain every sample's difficulty c^i_{λ,μ} in the task T_j, we can measure the task difficulty S_j by their root sum of squares:

S_j = sqrt( Σ_{i=1}^{W} (S_i)^2 ),

where the task T_j contains W samples.

Adaptive Meta-training

As Figure 2 shows, our model learns from numerous old stock price prediction tasks to extract task-agnostic knowledge and acquires the fast generalization capability, which can be measured by the model's average predicting performance over K meta-training steps on task T_j. Therefore, we establish the objective of Adaptive Meta-training to minimize the expected loss given a selected task T_j:

min_φ E_{T_j} [ L_{T_j}( U^K_{T_j}(φ) ) ],

where U^K_{T_j}(φ) = φ_k denotes the model after learning on task T_j. When learning on task T_j, Reptile optimizes the model φ_k, where k ∈ (0, K], with gradient descent for K meta-training steps as follows:

φ_k = φ_{k−1} − α ∇_{φ_{k−1}} L_{T_j}(φ_{k−1}),

where α denotes the fixed learning rate and L_{T_j} represents the loss on task T_j. In contrast to the Reptile algorithm, Meta-Stock aims to capture a better learning strategy for stock price prediction by assigning a weight w_j to a given stock price prediction task T_j according to the varying task difficulty S_j. However, if we retain the extreme values or outliers in the weights w_j, the weighted gradients can be too large and thus bypass the local minimum and overshoot; otherwise, they can be too small and hence increase the total computation time to a very large extent. Therefore, we normalize the task difficulties S = [S_1, ..., S_N] by a softmax function to get a weight vector w = [w_1, ..., w_N] for all the old-stock tasks T_old = [T_1, ..., T_N], where there are N old-stock tasks in total. The softmax normalization is a way of reducing the influence of extreme values or outliers in the weight vector without removing data points from the set.
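A minimal sketch of the volatility-based difficulty score is given below, using PyWavelets. The choice of the 'haar' wavelet, a single decomposition level, and the L2 energy of the detail coefficients are assumptions for illustration; the paper only specifies that the high-frequency (detail) coefficients quantify the sample difficulty.

```python
import numpy as np
import pywt  # PyWavelets

def sample_difficulty(x: np.ndarray) -> float:
    """x: (U, d) window of price/volume features for one sample X_i.

    A single-level DWT is applied along the time axis of each feature, and the
    energy of the detail (high-frequency) coefficients is used as the
    volatility proxy S_i.
    """
    energy = 0.0
    for j in range(x.shape[1]):
        _, detail = pywt.dwt(x[:, j], "haar")   # smooth (cA) and detail (cD) parts
        energy += float(np.sum(detail ** 2))
    return float(np.sqrt(energy))

def task_difficulty(task_X: np.ndarray) -> float:
    """Root sum of squares of the per-sample difficulties within one task T_j."""
    s = np.array([sample_difficulty(x) for x in task_X])
    return float(np.sqrt(np.sum(s ** 2)))
```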
For task T_j, the weight w_j of this task, computed from all W samples in T_j, is given as:

w_j = exp(S_j) / Σ_{n=1}^{N} exp(S_n).

During the inner-loop updates above, we train the model by minimizing the cross-entropy loss L_{T_j}, given as:

L_{T_j} = −(1/W) Σ_{i=1}^{W} [ Y_i log y_i + (1 − Y_i) log(1 − y_i) ],

where Y_i denotes the true price movement of a stock sample X_i from the training data of the task T_j, and y_i denotes the prediction of the model φ_k for the stock sample X_i. After learning from the task T_j, we can optimize the meta-learning objective as shown below:

φ ← φ + β w_j (φ_k − φ),

where β denotes the meta-learning rate. Here, we aggregate the K meta-training task gradients to obtain a meta-gradient φ_k − φ. With the meta-gradient, we move the initial parameters of the model φ in the direction of the average of the task model parameters φ_k. Hence, the model converges towards a solution φ_k close (in Euclidean distance) to each task T_j's manifold of optimal solutions (Nichol et al. 2018). Because the meta-learning model parameters φ are close to the optimal parameters of each task T_j, only a few gradient updates are required to obtain the optimal solutions for each task T_j. Therefore, Meta-Stock enables the model φ to generalize on different tasks with task-agnostic knowledge. To show more details, we outline the optimization process in Algorithm 1.

Algorithm 1: Adaptive Meta-training
Require: Z(T): distribution of the tasks T
Require: α, β: learning rate hyperparameters
Require: w_j: weight measured by task T_j's difficulty
1: randomly initialize φ, the vector of initial parameters
2: for all T_j ∼ Z(T) do
3:   for every meta-training step k do
4:     Evaluate ∇_{φ_k} L_{T_j}(f_{φ_k}) with respect to the task samples
5:     Compute adapted parameters with gradient descent: φ_k ← φ_{k−1} − α ∇_{φ_{k−1}} L_{T_j}(φ_{k−1})
6:   end for
7:   Update φ ← φ + β w_j (φ_k − φ)
8: end for

Sub-new Stocks Adaptation

After acquiring the generalization ability on old-stock tasks, our model φ can generalize efficiently to sub-new stock tasks with a handful of training data via a few gradient steps and obtain the adapted parameters. This fast adaptation comes from the fact that we have already simulated fast learning on multiple tasks with limited data in the Adaptive Meta-training phase. In particular, we minimize the loss of φ on the sub-new stock price prediction tasks through gradient descent:

φ ← φ − γ ∇_φ L_{T_sub-new}(φ),

where L_{T_sub-new} denotes the cross-entropy loss on the sub-new stock price prediction task and γ refers to the learning rate.

Experiments and Setup

Dataset
For the dataset, we choose the stock markets in the US, mainland China, and Hong Kong due to their large capitalization and numerous companies. We then collect the dataset from AKShare (King 2019) on the three real-world stock markets, from 01/01/2000 to 22/02/2022, and denote the markets as US-STOCKS, CN-STOCKS, and HK-STOCKS, respectively. We preprocess the data and shift a 5-day lookback window along the trading days to generate samples, following (Sawhney et al. 2021c).

Training Setup
We perform all experiments on an Nvidia GeForce GTX 1080Ti GPU. We train Meta-Stock for 50 epochs with the AdamW optimizer. We use grid search to find optimal hyperparameters for Meta-Stock based on validation performance. We set the length of a stock sample T = 5, training steps K = 6, meta batch size B = 6, batch size C = 4096, weight decay rate σ = 1e−5, and learning rates α, β, γ ∈ (1e−4, 1e−1) for Meta-Stock. Here, the number of samples in a task is W = 24576, which is equal to B * C, and the numbers of old stock tasks N and sub-new stock tasks M can be calculated from W, the total number of old stock samples, and the total number of sub-new stock samples. We repeat each experiment 5 times and record the average performance.
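The adaptive meta-training loop can be sketched in PyTorch as below. The backbone, data handling, and hyperparameter values are placeholders; only the difficulty-weighted Reptile-style update follows the procedure described above.

```python
import copy
import torch
import torch.nn.functional as F

def adaptive_meta_train(model, tasks, difficulties, alpha=1e-3, beta=1e-3, K=6):
    """tasks: list of (X, y) tensors per task; difficulties: list of scores S_j."""
    # Softmax-normalize the task difficulties into per-task weights w_j.
    w = torch.softmax(torch.tensor(difficulties, dtype=torch.float32), dim=0)
    for j, (X, y) in enumerate(tasks):
        task_model = copy.deepcopy(model)          # phi_0 initialized from phi
        opt = torch.optim.SGD(task_model.parameters(), lr=alpha)
        for _ in range(K):                         # K inner gradient steps on T_j
            loss = F.cross_entropy(task_model(X), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Reptile-style outer update scaled by the task-difficulty weight w_j:
        # phi <- phi + beta * w_j * (phi_K - phi)
        with torch.no_grad():
            for p, q in zip(model.parameters(), task_model.parameters()):
                p.add_(beta * w[j] * (q - p))
    return model
```

In the paper's setting, the inner loop would iterate over mini-batches of the W = B × C task samples rather than over the full task tensor at once; the single-tensor version above is kept only for brevity.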
For model evaluation in stock price prediction, we follow the metrics in (Feng et al. 2019a) and (Keating and Shadwick 2002). We formulate the evaluation metrics and explain their details in Section A of the appendix.

Baselines and Backbones
We choose the following baseline approaches to train different backbones and compare their performances with Meta-Stock:
• Train on sub-new stocks: The backbones are trained on the training set of sub-new stocks and then tested on the test set of sub-new stocks.
• Transfer Learning: The backbones are pre-trained on the old stocks and then fine-tuned on the training set of sub-new stocks. Finally, we test the backbones on the test set of sub-new stocks.
• Reptile: The backbones are meta-trained on the old stocks and then adapted to the training set of sub-new stocks. Finally, we test the backbones on the test set of sub-new stocks.
For backbones, we choose LSTM-FCN (Čeponis and Goranin 2020), ResCNN (Zou et al. 2019), ResNet (Li et al. 2020), and InceptionTime. We provide backbone details in Section A of the appendix. Note that recent NLP-based stock price prediction models are not considered for comparison due to the limited availability of text data for sub-new stocks.

Performance Comparison
We compare Meta-Stock with various approaches. Meta-Stock achieves state-of-the-art performance on the evaluation metrics (Table 1). We conduct Wilcoxon's signed-rank tests (Groggel 2000) and reveal significant improvements (p < 0.01) of Meta-Stock over the compared methods. With such an advance, Meta-Stock validates its effectiveness, though facing both bullish and bearish conditions in the three markets.

[Table 1: Meta-Stock performance comparison against baseline models and methods. Except for Train on sub-new stocks, all methods introduce old stock data to assist sub-new stock prediction. Values in bold and underline denote the best and second-best results, respectively. * indicates that improvements over the same backbone but with other methods are statistically significant (p < 0.01) under Wilcoxon's signed-rank test.]

We attribute the improvement of Meta-Stock over other approaches to three reasons. First, Meta-Stock formulates the sub-new stock prediction problem from a new task-based perspective, allowing our model to learn from various tasks and capture a better learning strategy on stock price prediction tasks. With the mastered strategy, Meta-Stock improves generalization across homogeneous tasks and thus learns faster than many state-of-the-art methods. Second, we design a strategy to construct training tasks with various data distributions, which enables Meta-Stock to better learn the homogeneous data patterns in different distributions. By perceiving similar data patterns between old and sub-new stocks, Meta-Stock can utilize old stock data more efficiently and generalize better to sub-new stock price prediction. Third, when fine-tuning the meta-learning model on the sub-new stock data, we keep the training strategy on sub-new stock data the same as that on old stock data, which enables the model to better apply the obtained task-agnostic knowledge to the prediction of sub-new stocks.

Profit Analysis
We examine the practical applicability of Meta-Stock to real-world stock trading by analyzing the pure returns (Annual Return Rate), risk-adjusted returns (Sharpe Ratio, Sortino Ratio, Calmar Ratio, Omega Ratio), and the maximum risk (Maximum Drawdown) associated with the trades using ResNet across stocks in the US, CN, and HK markets.
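The significance claims in the Performance Comparison above rest on Wilcoxon's signed-rank test; with SciPy it can be run on paired per-run scores as follows. The accuracy values here are made-up placeholders, not results from the paper.

```python
from scipy.stats import wilcoxon

# hypothetical paired accuracies over five repeated runs
meta_stock_acc = [0.561, 0.558, 0.565, 0.553, 0.559]
baseline_acc = [0.542, 0.536, 0.547, 0.540, 0.544]

stat, p_value = wilcoxon(meta_stock_acc, baseline_acc)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p_value:.4f}")
```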
We follow a trading strategy: if the model predicts the rise of a stock's price the next day, we buy the stock at the closing price and sell it at the closing price when the model speculates a price fall. We first train ResNet with Reptile, which is Meta-Stock without adaptive learning, for stock trading, and observe poor performance in terms of profits and a high risk for all markets, as shown in Table 2. This observation indicates that Reptile takes riskier trading decisions and often experiences enormous losses. However, when we train ResNet using Meta-Stock, we observe significant improvements in risk-adjusted returns (781.21%) and a substantial reduction in maximum losses (73.75%). Such improvements indicate the efficacy of Meta-Stock in enhancing the real-world applicability of neural stock prediction methods. We further elucidate the benefits of Meta-Stock via a qualitative study.

Probing Task Difficulty
In this study, we investigate the performance improvements achieved by training on samples of varying difficulty levels. To this end, we divide our dataset into three groups of tasks with different levels of difficulty: easy, medium, and hard. Specifically, we distribute the tasks into three groups with the same number of tasks according to their difficulty scores. For instance, the tasks with the top 1/3 difficulty scores are assigned to the hard group, and those with the bottom 1/3 are assigned to the easy group. The resulting performance gains obtained through the Reptile algorithm are presented in Table 3. Our results highlight the effectiveness of Reptile in improving the performance of stock price prediction tasks over data with varying levels of difficulty, with improvements observed for both easy and medium difficulty levels. However, the algorithm exhibits a decline in predicting ability for the hard-level group of tasks. Interestingly, we also note that at the beginning of the learning process, the relative improvement for stock price prediction increases as the task difficulty decreases from hard to medium to easy. These findings are consistent with the typical learning curve of humans, that is, learning with increasing difficulty. Learning complex tasks ahead of time can be frustrating for humans if they cannot solve simple tasks. Therefore, the observations validate the effective quantification of task difficulty.

Analyzing the Effectiveness of Meta-Stock
We now study the performance improvements obtained via Meta-Stock over Reptile against samples of varying difficulty levels. In Table 4 we divide the dataset into groups of easy, medium, and hard tasks according to the task difficulty score. We observe significant improvements over all three difficulty levels on all evaluation metrics, demonstrating that Meta-Stock improves performance across sub-new stock price prediction tasks with varying difficulty levels (more results on the improvements in MCC and F1 scores are provided in Section B of the appendix). We attribute these improvements to Meta-Stock's adaptive learning strategy that assigns more weight to complicated tasks. Once Meta-Stock can better handle complicated tasks, the easier ones can also be solved better.

Qualitative Analysis
We further conduct an extended study to elucidate the benefits of Meta-Stock for stock prediction, as shown in Figure 3. The price series of sample C possesses a volatile trend, making it hard to analyze the future trend of the stock.
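A simplified backtest of the long-only rule described in the Profit Analysis above might look as follows: hold the stock while a rise is predicted for the next day and stay in cash otherwise, then compute the annualized return, Sharpe ratio, and maximum drawdown from the resulting equity curve. Transaction costs and the remaining ratios (Sortino, Calmar, Omega) are omitted, and the inputs are placeholders rather than the paper's data.

```python
import numpy as np

def backtest(close, pred_rise, trading_days=252):
    """close: daily closing prices; pred_rise[t]: 1 if a rise is predicted for day t+1."""
    close = np.asarray(close, dtype=float)
    position = int(pred_rise[0])           # holding decided on day 0 for day 1
    equity = [1.0]
    for t in range(1, len(close)):
        daily_ret = close[t] / close[t - 1] - 1.0
        equity.append(equity[-1] * (1.0 + daily_ret * position))
        position = int(pred_rise[t])       # holding for the following day
    equity = np.array(equity)
    years = len(close) / trading_days
    annual_return = equity[-1] ** (1.0 / years) - 1.0
    daily_rets = np.diff(equity) / equity[:-1]
    sharpe = np.sqrt(trading_days) * daily_rets.mean() / (daily_rets.std() + 1e-12)
    max_drawdown = np.min(equity / np.maximum.accumulate(equity) - 1.0)
    return annual_return, sharpe, max_drawdown
```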
We show that for a moderately complex test-data sample C, its movement is incorrectly classified when training ResNet without the adaptive learning. However, when training with Meta-Stock, its price trend is classified accurately. We attribute Meta-Stock+ResNet's overall improved performance to the generated task weights that ameliorate the efficiency of the learning process.

Conclusion
In this paper, we propose Meta-Stock, a task-difficulty-adaptive meta-learning approach to predict sub-new stock price trends. Our meta-learning approach seeks to solve the data scarcity of sub-new stocks by leveraging old stocks and acquiring the fast generalization ability that can be extended to sub-new stock price prediction. Furthermore, we improve the entire meta-learning process by introducing adaptive learning according to volatility levels. We display Meta-Stock's applicability in sub-new stock price prediction and real-world trading through extensive quantitative and qualitative experiments on real market data. In future work, we intend to extend Meta-Stock's architecture to enhance its scalability in cross-market scenarios.
2023-08-23T06:45:34.725Z
2023-08-22T00:00:00.000
{ "year": 2023, "sha1": "d82ec877a1553006baa06026f4a4fc2638651190", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d82ec877a1553006baa06026f4a4fc2638651190", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
249628478
pes2o/s2orc
v3-fos-license
Potassium Sulfate Spray Promotes Fruit Color Preference via Regulation of Pigment Profile in Litchi Pericarp Fruit color is a decisive factor in consumers’ preference. The bright red color of litchi fruit is associated with its high anthocyanin; however, poor fruit coloration is a major obstacle in litchi plantation. The role of spraying mineral nutrient salts such as KH2PO4, KCl, K2SO4, and MgSO4 on litchi pericarp pigmentation was examined by a field trial, and the relation between human visual color preference versus pericarp pigments and hue-saturation-brightness (HSB) color parameters was investigated. K2SO4-sprayed litchi fruit gained the maximum popularity for its attractive red color. Spray of K and Mg salts decreased the buildup of yellowish pigments, but increased the accumulation of red ones, with the exception of slightly reduced anthocyanins in KH2PO4-sprayed fruit, by regulating the activities of enzymes involved in anthocyanidin metabolism and decreasing pericarp pH, leading to varied pericarp pigment composition. K2SO4 spray generated the highest percentage of cyanidin-3-glucoside over all pigments in pericarp. Correlation analysis shows the percent of cyanidin-3-glucoside, superior to anthocyanin concentration and HSB color parameters, was a reliable indicator to fruit color preference. This work demonstrates that spray of suitable mineral salt can regulate pericarp pigment profile, and is an effective approach to improve fruit pigmentation and promote its popularity. INTRODUCTION Color typically plays a vital role in the evaluation of aesthetic quality, and has been emphasized in different fields, such as psychology, physics, chemistry, optics, vision, engineering, visual arts, graphic design, urban studies, architecture, and so on (Green-Armytage, 2006). Fruit color is a pivotal commercial quality trait for fruit and a decisive factor for consumer's preference. Fruit pigmentation can affect the taste, flavor, and smell of the fruit as well (Lewinsohn et al., 2005;Paauw et al., 2019). Pigment accumulation is responsible for fruit color (Honda and Moriya, 2018;Hu et al., 2019). Flavonoids are well recognized as the characteristic pink, red, blue, and purple anthocyanin pigments of plant tissues. For example, the major pigments present in tomatoes are the carotenoids (Chattopadhyay et al., 2021), anthocyanin accumulation is responsible for the red color of the skin and flesh of apple fruits (Honda and Moriya, 2018), carotenoids, and anthocyanin jointly contribute to the diverse colors of citrus fruits (Rodrigo et al., 2013). Litchi (Litchi chinensis Sonn.), a tropical and subtropical Sapindaceae fruit tree with a lifespan of centuries, is widely cultivated in China and Southeast Asia. Commercial litchi plantations are developed in Africa, America, Europe, and Oceania as well. Litchi fruit is popular for its bright red color, and succulent, sweet, and unique taste and health-related nutrients (Wall, 2006;Pareek, 2016). The red color of litchi pericarp is ascribed to high concentrations of anthocyanin accumulation (Lee and Wicker, 1991;Zhang et al., 2005). However, poor and/or uneven pigmentation of fruit pericarp is a common impact in litchi production. Improvement of litchi pericarp coloration is greatly beneficial to raise the market value of litchi. 
Pigmentation enhancement of fruit and vegetable before harvest is highly affected by light (Yoo et al., 2020), temperature (Balcerowicz, 2020), and their interaction (Azuma et al., 2012), as well agronomic measures such as bagging (Ma et al., 2019), fertilization (Jezek et al., 2018), and so on. These environmental factors can regulate the expression of genes in biosynthesis of pigment, leading to altered fruit pigmentation. It is widely known that litchi loses its brilliant red appearance soon at ambient condition after harvest and extensive efforts are made to solve this phenomena (Zhang et al., 2005;Fang et al., 2015), however, the role of pericarp pigmentation enhancement through foliar nutrient application preharvest is scarcely investigated in litchi. Potassium (K) is frequently supplemented to enhance the pigmentation in plants due to its multiple reactive functions in higher plants (Nguyen et al., 2010;Sulistiani et al., 2020). Thus, this study is to evaluate the role of foliar nutrient spray on pericarp pigmentation, and explore the relation between human visual color preference and pigment composition in litchi fruit pericarp, with the objective to identify an effective mineral nutrient salt to improve litchi pericarp color. Field Experiment A spray experiment with five treatments (four mineral salts and the control) was conducted in a commercial litchi orchard in Haikou city, Hainan province southern China. The soil in this orchard was volcanic ash soil, with a pH of 6.1. Soil alkali-hydrolyzable N, available P, and K were 325.7, 32.2, and 439.0 mg/kg, respectively. The cultivar "Ziniangxi" was used in this trial. "Ziniangxi, " a unique litchi variety called "the king of litchi" in southern China, is well accepted for its great fruit size, but simultaneously castigated by its poor pigmentation. The litchi trees were planted at the spacing of 5 m × 5 m in the spring of 2015. Twenty uniform, healthy trees were selected and divided into five groups for the spray experiment. The three K salts were sprayed at the concentration of 900 mg/L (calculated as K), and MgSO 4 was used at the same level with K 2 SO 4 (calculated as S). Each salt solution was sprayed to four trees, with each tree as a repeat. Spraying water was used as the control. All the solutions were evenly sprayed to the leaves and the fruits in the afternoon. There was no surfactant to be used. These trees were treated four times during fruit swelling stage from early April (35 days after fruit set) to mid-May in 2021, with intervals of 8-9 days. Fruit Sample Collection and Preparation In the early morning, approximately 3 kg fruits with good quality and uniform maturity were harvested from each tree at economic ripening stage in late May. Four fruits were randomly collected from each 3 kg fruits and wrapped with paper tissues, then put into a plastic bag and sealed for color preference assessment. Then, all the fruits were immediately bagged into iced bubble chambers to keep cold, then delivered back to the laboratory on the same day by air transportation. All the fruits for chemical analysis were rinsed with tap water and dried with a clean cotton tower in the afternoon of the same day in the laboratory. Then, the fruit epicarp (the endocarp was not included) was manually peeled and divided into four parts. The first part was frozen immediately in liquid nitrogen, and further lyophilized (Christ, Alpha 1-4 LD plus, German) and ground to a fine powder for phenolic compound detection. 
The epicarp powder samples were stored at −80°C until further analysis. The second part was frozen immediately in liquid nitrogen and then stored at −80°C for enzyme activity assessment. The third part was oven-dried at 105°C for 30 min and then at 65°C to constant weight for nutrient (N, P, K, Ca, Mg, and S) analysis. The fourth part was used immediately for epicarp pH determination.

Visual Color Preference Evaluation on Litchi Fruit
Twenty fruits from the four repetitions of each treatment were placed into a porcelain dish laid with clean white cotton cloth for color preference assessment in a well-lighted and air-conditioned room. Five dishes were prepared and aligned for the five treatments. Thirty-two Chinese participants aged 19-49 (students and teachers at the university, none of them color blind), who came from all over China, half male and half female, took part in the evaluation. All participants were asked to assess the fruits in the five dishes and then put the rated numbers in front of the dishes to label their preference for fruit color one by one. The five rated numbers, 1, 2, 3, 4, and 5, were used as the color preference index, referring to the first-, second-, third-, fourth-, and fifth-favorite color, respectively. The choice of each participant was recorded.

Digital Measurement of Fruit Skin Color
After the color preference evaluation, the image of each fruit was immediately captured by a digital single-lens reflex camera (Canon EOS-1D X, Japan). The imaging was taken under controlled and well-distributed light conditions in a mini photo studio to avoid color cast caused by environmental light. A total of 80 images were obtained. The colorimetric values of each image were extracted with Adobe Photoshop 2018 (Adobe Systems Inc., California, United States) using the hue-saturation-brightness (HSB) color model. The HSB model, namely HSB space, a cylindrical coordinate system, is regarded as similar to the human visual system (Jih-Gau and Yang, 2016; Agahchen and Albu, 2017) and more concordant with human perception, especially for surfaces with granular protuberances, because it ignores the shading effect (Dal Grande et al., 2008). Litchi epicarp consists of tubercles, which makes it more suitable for the HSB system. In the HSB model, hue (H) values are defined as an angle from 0 to 360°, representing various colors: 0° stands for red, 45° for yellow, and 90° and 135° point to yellow-green and green, respectively. Saturation (S) is measured as a percent from 0 (white) to 100% (fully saturated color), and brightness (B) as a percent from 0 (black) to 100% (fully bright color) (Cubukcu and Kahraman, 2008).

Extraction and Detection of Phenolic Compounds in Epicarp
Approximately 0.35 g of the lyophilized epicarp powder was mixed with 7 mL of pre-frozen methanol/water (70:30, v/v, pH 0.5) solution and sonicated at 5°C for 30 min, then centrifuged at 4,000 rpm at 5°C for 10 min, and the supernatants were collected. The residues were re-extracted twice with 4 mL of methanol/water solution each. All three supernatants were combined and subjected to rotary evaporation to remove the methanol, then transferred to a 25 mL volumetric flask and diluted to constant volume with ultrapure water. The extract solution was stored at 4°C and filtered through a 0.22 μm membrane prior to phenolic detection by HPLC.
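The study extracted HSB values with Adobe Photoshop; purely as an illustration of the same color model, the snippet below computes mean hue, saturation, and brightness from an RGB fruit image in Python. The file name is a placeholder and the image is assumed to be cropped or masked to the fruit surface; a circular mean is used for hue because red sits at the 0°/360° boundary.

```python
import colorsys
import numpy as np
from PIL import Image

img = np.asarray(Image.open("litchi_fruit.jpg").convert("RGB"), dtype=float) / 255.0
pixels = img.reshape(-1, 3)[::20]                      # subsample pixels for speed

hsv = np.array([colorsys.rgb_to_hsv(r, g, b) for r, g, b in pixels])
angles = hsv[:, 0] * 2.0 * np.pi                       # hue as an angle
hue_deg = np.degrees(np.arctan2(np.sin(angles).mean(),
                                np.cos(angles).mean())) % 360.0
saturation_pct = hsv[:, 1].mean() * 100.0              # 0 (white) to 100 (saturated)
brightness_pct = hsv[:, 2].mean() * 100.0              # 0 (black) to 100 (bright)
print(f"H = {hue_deg:.1f} deg, S = {saturation_pct:.1f}%, B = {brightness_pct:.1f}%")
```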
Enzyme Activity Assay
The activities of phenylalanine ammonialyase (PAL) and chalcone isomerase (CHI) were determined by biochemical methods, and the kits were purchased from the Suzhou Keming Biological Technology Co., Ltd (China). PAL activity is defined as an absorbance variation of 0.1 units per mg tissue per minute in 1 mL reaction solution at 290 nm. CHI activity is measured as an absorbance variation of 0.1 units per mg tissue per hour in 1 mL reaction solution at 381 nm. The activities of both enzymes were expressed as U/g.

Epicarp pH and Nutrients
The epicarp was manually torn apart into pieces and placed into a beaker, and ultrapure water was added (epicarp:water = 1:5, w:w). The solution was stirred for 10 min using a magnetic stirrer, and then the supernatant pH was measured by a pH meter. The epicarp sample was digested with concentrated H2SO4 + H2O2, then N content in the digested solution was detected by Kjeldahl determination. The epicarp sample was digested with concentrated HNO3 + HClO4, and P concentration in the digested solution was determined by the Mo-Sb colorimetric method, K concentration by flame photometer, Ca and Mg concentrations by atomic absorption spectrophotometer, and S concentration by ICP-OES (Varian 710-ES, United States) (Lu, 1999). Standard materials of GBW07603 were used to assure the analysis quality.

Data Analysis and Statistics
The mean color preference rating score of each treatment was calculated by averaging the rating scores assigned by all 32 participants. All the data are expressed as mean ± standard deviation. The color preference rating scores were compared by non-parametric analysis with the Kruskal-Wallis test, and all the other data were subjected to analysis of variance, followed by Duncan's multiple comparisons (P < 0.05) in SAS 9.2. The Pearson correlation analysis was conducted with SPSS 22.0.

Fruit Peel Color Preference Rating
Among all the participants, 50% (16/32) chose the K2SO4-sprayed fruit as their favorite for its more attractive and even red color, and 37.5% (12/32) and 12.5% (4/32) of the participants preferred the fruit applied with KCl and KH2PO4, respectively, over all others (Table 1). None selected the control or MgSO4-sprayed fruit as their favorite. For preference rating scores, a significant discrepancy was observed among treatments (p < 0.0001). KCl- and K2SO4-sprayed fruits were rated with similar preference scores (1.8 and 1.8, p = 0.9541), indicating a significantly higher preference than that for KH2PO4-treated fruit (p < 0.0001). The control- and MgSO4-sprayed fruits scored similarly (4.3 and 4.0, p = 0.1961) as well, and their preference was significantly lower than that for fruits applied with KH2PO4 (3.0, p = 0.005). Overall, the control fruit was the least preferred by all the participants. The above indicates that spraying K salts, in particular K2SO4, was an effective approach to improve red color development in litchi fruit pericarp.

TABLE 1 | The numbers of participants who gave their preference rating for the pericarp color of litchi fruits sprayed with K and Mg salts and the means of the color preference rating score (preference index 1-5 denotes popularity from the best to the least).
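To make the rating summary and the non-parametric comparison concrete, the sketch below computes a count-weighted mean rating per treatment and a Kruskal-Wallis test across treatments. The rating counts are placeholders and the simple mean is an assumption, since the exact formula is not reproduced in the text.

```python
import numpy as np
from scipy.stats import kruskal

ranks = np.array([1, 2, 3, 4, 5])          # preference index: 1 = favorite ... 5 = least
counts = {                                  # hypothetical counts of participants per rank
    "K2SO4": np.array([16, 9, 4, 2, 1]),
    "KCl": np.array([12, 11, 5, 3, 1]),
    "control": np.array([0, 1, 3, 12, 16]),
}

for name, c in counts.items():
    mean_score = float(np.sum(ranks * c)) / c.sum()    # count-weighted mean rating
    print(name, round(mean_score, 2))

# expand counts back into individual ratings for the Kruskal-Wallis test
samples = [np.repeat(ranks, c) for c in counts.values()]
h_stat, p_value = kruskal(*samples)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4g}")
```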
Color Parameters of Fruit Epicarp Although it is well recognized that hue, saturation, and brightness positively affect human preference for color (Camgoz et al., 2002;Wilms and Oberfeld, 2018), frequently, hue is still the dominant for color choice, regardless of saturation and brightness (Camgoz et al., 2002;Cubukcu and Kahraman, 2008;Fortmann-Roe, 2013 Flavonols contribute to pale yellow to dark brown color, and anthocyanins endow colors from pink through red and to purple in a range of plants. The amalgamation of pigment endues versatile colors for plants. The pigment profile shows that although all the treated litchi fruits were characterized by a red tone in the epicarp; spray of K and Mg salts did alter the distribution of visible pigments in the epicarp (Figure 1). The allotment of cyanidin-3-glucoside with bright red color among treatments decreased in the order: K 2 SO 4 (18.9%) > KCl (18.5%) > KH 2 PO 4 (17.9%) > control (16.9%) > MgSO 4 (16.7%), and that of cyanidin-3-O-rutinoside with dark red color was K 2 SO 4 (63.4%) > MgSO 4 (60.0%) > KCl (57.0%) > KH 2 PO 4 (56.1%) > control (53.2%). Similarly, the allocation of yellowish pigments in the epicarp was decreased by the spray of the four salts. Rutin, an extremely pale yellow pigment, was the largest ingredient of the yellow-hue pigments in litchi fruit epicarp. The K 2 SO 4 spray generated the lowest percentage of rutin (11.4%), followed by KCl (16.0%) and MgSO 4 (16.5%). Further, K 2 SO 4 spray reduced the allotment of quercetin-3-glucoside and kaempferol-3-glucoside as well. Activities of Enzymes Involving in Anthocyanin Synthesis Phenylalanine ammonialyase is the first key enzyme in the phenylpropanoid pathway and plays a vital role in the synthesis of anthocyanins (Boudet, 2007). The expression of PAL increases from green to yellow and to red stages in litchi (Zhao et al., 2012), and epicatechin content, regulated by PAL activity, decreases during litchi fruit development (Sun et al., 2009) due to its polymerization to procyanidins (Liu et al., 2007). CHI is the key enzyme involved in anthocyanin biosynthesis in litchi pericarp as well (Qu et al., 2021). The spray of K and Mg salts, K 2 SO 4 and MgSO 4 in particular, significantly increased PAL activity (p < 0.05) (Figure 2A), which might lead to enhanced biosynthesis of epicatechin. Meanwhile, the application of the three K salts, superior to MgSO 4 spray, significantly raised CHI activity (Figure 2B), which might promote the synthesis of cyanidins. However, epicatechin and both procyanidin derivates were decreased by spraying all the K and Mg salts ( Table 3), irrespective of increased PAL activity. The discrepancy between increased PAL activity and decreased epicatechin and procyanidins might be explained that although more precursors of procyanidins were synthesized by increased PAL activity, and more of them were transformed to anthocyanidins owing to increased CHI activity, leading to lower accumulation of flavanols and higher buildup of cyanidins in litchi pericarp in the present study. The metabolism of colorants in litchi pericarp is not completely illustrated yet (Sun et al., 2009;Qu et al., 2021); however, it is well recognized that the color of litchi pericarp is determined by both synthesis and degradation or conversion of pigments; therefore, the phenolic profile in epicarp of litchi fruit sprayed with K and Mg salts is the joint effect of K and Mg salts on phenolic metabolism. 
Epicarp pH and Nutrients
The spray of KH2PO4 increased litchi epicarp pH, whereas both MgSO4 and K2SO4 supplementation significantly decreased it (p < 0.01), and KCl spray reduced it insignificantly (Figure 3). The response of epicarp nutrients to the spray of K and Mg salts differed greatly (Table 4), which was probably associated with the varied mobility of mineral nutrients from pericarp to pulp and then to seed in litchi (Su et al., 2022). Epicarp N, K, and S in litchi fruits were slightly raised by the spray of K and Mg salts, in contrast to the control. Spray of KH2PO4 and KCl significantly enhanced epicarp P (p < 0.05), and K2SO4 spray increased it insignificantly, while MgSO4 spray did not affect it. Spray of all K and Mg salts, with the exception of K2SO4, significantly increased epicarp Ca (p < 0.01). Spray of the three K salts had no effect on epicarp Mg and S, whereas MgSO4 spray slightly increased them.

Relation Between Visual Color Preference and Pigment Palette and pH in Litchi Fruit Skin
The attractive red color of litchi fruit is believed to be ascribed to high contents of anthocyanins (Lee and Wicker, 1991; Zhang et al., 2004). Strikingly, in the present study the color preference rating score was solely correlated with the percentage of cyanidin-3-glucoside over total visible pigments in the epicarp (r = −0.973**, p = 0.005) (Table 5). K2SO4 spray raised not only the concentration of cyanidin-3-glucoside, but also its allotment over total visible pigments, leading to a more attractive red color. Intriguingly, MgSO4-treated fruit had a higher skin H value than K2SO4-treated fruit but a lower one than KCl-treated fruit; nevertheless, none of the participants chose it as their favorite and only a few of them ranked it second, leading to a preference rating score similar to that of the control fruit. The lower preference rating of MgSO4-sprayed fruit, despite its relatively low pericarp H value, is probably associated with the lowest percentage of cyanidin-3-glucoside among the visible pigments (Figure 1). The low epicarp H value in MgSO4-sprayed fruit might be ascribed to the high percentage of cyanidin-3-O-rutinoside with dark red color. In addition, despite the advantage of the HSB color model, the human eye can sense trace differences in the pigment palette of litchi pericarp more subtly than a digital camera did in the current investigation, as demonstrated by the discrepancies between the simulated pericarp color (Table 2) and the actual fruit color (Figure 1). This implies that the allotment of cyanidin-3-glucoside, rather than anthocyanin concentration or the HSB color parameters, is a more reliable indicator of visual color preference. Correlation analysis shows that both derivatives of cyanidin were negatively correlated with epicarp pH (p < 0.05) (Table 6). The protection of red pigments in litchi pericarp conferred by low acidity (Zhang et al., 2005; Fang et al., 2013) and the color enhancement effect of low cell-sap acidity on cyanidin-3-glucoside (Mizuno et al., 2019) have been documented; therefore, the decreased epicarp pH in K2SO4-sprayed fruit promotes the display of red color in litchi fruit. Thus, the high percentage of cyanidin-3-glucoside in the epicarp and the pigmentation enhancement by low epicarp pH are jointly responsible for the popularity of K2SO4-sprayed fruit in the color preference evaluation. Meanwhile, a close relation between phenolic compounds and mineral nutrients was observed as well.
For example, epicatechin, and procyanidin A2 were negatively correlated with N, respectively (p < 0.05), and both Ca and Mg were negatively correlated with yellowish pigments like rutin, kaempferol-3glucoside, and ferulic acid. The above indicates that mineral nutrients in the epicarp did alter the pigment profile in litchi fruit, leading to varied fruit color. Intriguingly, spraying of K salts, with the exception of K 2 SO 4 , significantly reduced epicarp Ca (p < 0.01). The antagonism between K versus Ca and Mg in higher plants (Garcia et al., 1999;Papadakis et al., 2004) and in litchi (Yang et al., 2015) implies that spraying of K salts might regulate the pigment composition by altering epicarp Ca and Mg, rather than by a sole and direct role of K itself. However, how the interaction between K, Ca, and Mg to affect the pigment development in litchi, needs to be further investigated. Differential Role of K and Mg Salts on Litchi Fruit Skin Coloration The role of K fertilizers on anthocyanin accumulation is described in a range of plants. However, the effect is highly plant species-dependent. For example, the positive effect is observed in apple (Solhjoo et al., 2017) and olive (Zivdar et al., 2016), whereas no effect is reported in red cabbage (Piccaglia et al., 2002), purple corncob (Jing et al., 2007), and Melastoma malabathricum (Koay et al., 2014). In addition, a quadratic function of K fertilizer to anthocyanin is documented in batatas (Sulistiani et al., 2020) as well. The discrepancy of anthocyanin accumulation for plants might be ascribed to the varied anthocyanin composition of a specific plant per se. In the present work, spray of KCl, K 2 SO 4 , and MgSO 4 decreased the concentrations of flavanols and flavonols, but increased the levels of anthocyanins as compared to the control, indicating that anthocyanin synthesis was enhanced because these both compounds were the precursors of the latter. Unlike KCl and K 2 SO 4 , KH 2 PO 4 spray reduced not only the values of flavanols and flavonols as well, but also the concentrations of anthocyanins. Increased synthesis of anthocyanins is a typical response of plants to P deficiency and lead to dark-brown to purple color, while P supplementation induces primary metabolism and inhibits anthocyanin synthesis by regulation of enzymes like PAL, anthocyanidin synthase, and so on, in the phenolic and flavonoid synthesis pathways (Vance et al., 2003;Misson et al., 2005). It implies that the significantly increased epicarp P by KH 2 PO 4 spray contributes to the reduced pericarp anthocyanins in the present study. The influence of K forms on anthocyanins is compared in a few plants. No significant difference is found on anthocyanin concentrations of purple corncobs (Jing et al., 2007) and "Red delicious" apple (Solhjoo et al., 2017) receiving K 2 SO 4 , KNO 3 , and KCl, respectively. In the current work, K 2 SO 4 spray, superior to KCl spray enhances the production of both red pigments, implying the role of accompanying anions on anthocyanin synthesis. However, to our knowledge, the mechanism has not been investigated to date and is worthy to be revealed in the future. In addition, K 2 SO 4 addition does not affect the concentrations of most of the phenols in litchi epicarp, but increases the levels of both red pigments by approximately 2-folds as compared to MgSO 4 spray, highlighting the role of K + on synthesis and protection of anthocyanin. 
The significantly decreased flavanols and flavonols and slightly increased anthocyanins by MgSO 4 spray show that Mg treatment not only promoted anthocyanin synthesis, but also inhibited its catabolism as observed in other plants (Shaked-Sachray et al., 2002;Sinilal et al., 2011). These results refer to the importance of combination suitability of mineral cation and anion. CONCLUSION A spray of K and Mg salts can alter the pigment profile in litchi fruit pericarp. K 2 SO 4 spray leads to the maximum allotment of cyanidin-3-glucoside over all pigments and lower acidity in fruit epicarp, both of which jointly contributes to the highest visual color preference. This work highlights the role of spraying suitable mineral salt on improvement of fruit color and commercial value enhancement in litchi. DATA AVAILABILITY STATEMENT The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors. AUTHOR CONTRIBUTIONS XS: field experiment, chemical analysis, and writing of original draft. CB: phenol analysis and visual color preference evaluation on fruit. XW: field experiment. HL: field experiment and sample preparation. YZ: field experiment and sample chemical analysis. LW: photographing of litchi fruit. ZC: processing of fruit color parameters. LY: funding acquisition, methodology, field experiment, and writing -review and editing. All authors contributed to the article and approved the submitted version.
2022-06-14T13:39:42.675Z
2022-06-14T00:00:00.000
{ "year": 2022, "sha1": "e313419f86eebdcee8b49d6fdd0db8c9b44c82a8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "e313419f86eebdcee8b49d6fdd0db8c9b44c82a8", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
210709783
pes2o/s2orc
v3-fos-license
Potential gains in life expectancy by attaining daily ambient fine particulate matter pollution standards in mainland China: A modeling study based on nationwide data

Background
Ambient fine particulate matter pollution (PM2.5) is one leading cause of disease burden, but no study has quantified the association between daily PM2.5 exposure and life expectancy. We aimed to assess the potential benefits in life expectancy by attaining the daily PM2.5 standards in 72 cities of China during 2013–2016.

Methods and findings
We applied a two-stage approach for the analysis. At the first stage, we used a generalized additive model (GAM) with a Gaussian link to examine the city-specific short-term association between daily PM2.5 and years of life lost (YLL); at the second stage, a random-effects meta-analysis was used to generate the regional and national estimations. We further estimated the potential gains in life expectancy (PGLE) by assuming that ambient PM2.5 has met the Chinese National Ambient Air Quality Standard (NAAQS, 75 μg/m3) or the ambient air quality guideline (AQG) of the World Health Organization (WHO) (25 μg/m3). We also calculated the attributable fraction (AF), which denoted the proportion of YLL attributable to a higher-than-standard daily mean PM2.5 concentration. During the period from January 18, 2013 to December 31, 2016, we recorded 1,226,849 nonaccidental deaths in the study area. We observed significant associations between daily PM2.5 and YLL: each 10 μg/m3 increase in three-day–averaged (lag02) PM2.5 concentrations corresponded to an increment of 0.43 years of life lost (95% CI: 0.29–0.57). We estimated that 168,065.18 (95% CI: 114,144.91–221,985.45) and 68,684.95 (95% CI: 46,648.79–90,721.11) years of life lost can be avoided by achieving WHO's AQG and Chinese NAAQS in the study area, which corresponded to 0.14 (95% CI: 0.09–0.18) and 0.06 (95% CI: 0.04–0.07) years of gain in life expectancy for each death in these cities. We observed differential regional estimates across the 7 regions, with the highest gains in the Northwest region (0.28 years of gain [95% CI: 0.06–0.49]) and the lowest in the North region (0.08 [95% CI: 0.02–0.15]). Furthermore, using WHO's AQG and Chinese NAAQS as the references, we estimated that 1.00% (95% CI: 0.68%–1.32%) and 0.41% (95% CI: 0.28%–0.54%) of YLL could be attributable to the PM2.5 exposure at the national level. Findings from this study were mainly limited by the unavailability of data on individual PM2.5 exposure.

Conclusions
This study indicates that significantly longer life expectancy could be achieved by a reduction in the ambient PM2.5 concentrations. It also highlights the need to formulate a stricter ambient PM2.5 standard at both national and regional levels of China to protect the population's health.
Author summary

Why was this study done?
• Ambient fine particulate matter (PM2.5) pollution is a severe environmental health concern in China.
• Both short-term and long-term exposure to PM2.5 have been found to be associated with increased mortality and years of life lost.
• A few studies have estimated the association between annual PM2.5 concentration and life expectancy, but there is no report on the effects of daily PM2.5 exposure on life expectancy.

What did the researchers do and find?
• This nationwide time-series study collected data on more than 1 million nonaccidental deaths in 72 Chinese cities from January 18, 2013 to December 31, 2016.
• We used a generalized additive model to explore the city-specific association between daily PM2.5 and years of life lost and then conducted random-effects meta-analyses to generate the regional and national estimates.
• During the study period from January 18, 2013 to December 31, 2016, we estimated that 168,065.18 (about 1.00% of the total) years of life lost can be avoided by achieving WHO's guideline on daily PM2.5 concentrations (25 μg/m3) in the study area, which corresponded to 0.14 years of gain in life expectancy for each death.

What do these findings mean?
• This is the first study to report the potential gains in life expectancy by attaining the daily standards of PM2.5, which provides important and useful information on the burden caused by ambient PM2.5 pollution.
Considering the widely reported effects of air pollution exposure on premature mortality and increased years of life lost [7,18], it was reasonable to hypothesize that high levels of air pollution exposure could lead to losses in life expectancy; however, only a few studies have investigated this association, and most of those studies focused on the long-term air pollution exposure [19,20]. For example, one study from the United States and two studies from China reported long-term exposure to higher levels of particulate pollution was associated with reduced life expectancy [21][22][23]. However, to the best of our knowledge, the evidence is lacking on the effects of short-term (e.g., daily) PM 2.5 exposure on life expectancy. Furthermore, there is a need to estimate the potential benefits of reduction in daily ambient PM 2.5 concentration by attaining the air quality standards. As such, we used potential gains in life expectancy (PGLE) to investigate the benefit on life expectancy by assuming the PM 2.5 concentration was in compliance with certain ambient air quality standards. Compared with other indicators such as excess mortality and YLL, PGLE is a more informative indicator for epidemiological research [24]. Through directly quantifying the health benefits by attaining the air quality standards, PGLE is more relevant to air pollution controlling and formulation of air quality standards. Another advantage of PGLE is that it can be easily compared across different areas, while excess deaths and YLL are somewhat influenced by the age structure and size of the study population [25]. Although this limitation can be solved by several standardization techniques, the YLL was subject to one important issue of its sensitivity to competing risks of death [25,26]. In this study, we firstly examined the associations between daily PM 2.5 and YLL after adjusting for potential confounders at both national and regional levels of mainland China from 2013 to 2016, based on which we estimated the PGLE by postulating that ambient PM 2.5 concentrations were successfully controlled under the Chinese National Ambient Air Quality Standards (NAAQS), as well as WHO's AQG and its Interim Targets (ITs). Mortality data and YLL calculation This is a nationwide modeling study based on a time-series analysis. The daily time-series mortality data on nonaccidental causes in 72 Chinese cities (S1 Table) for the period of January 18, 2013 through December 31, 2016 were selected for this study, and a total of 1,226,849 nonaccidental deaths were recorded. The data were extracted from the Disease Surveillance Points (DSP) System of China, which is operated by the National Center for Chronic and Noncommunicable Disease Control and Prevention, Chinese Center for Disease Control and Prevention [27]. The data from the DSP System have been widely used in the assessment of health risk factors or disease burden and policy formulation [28,29]. These cities were selected based on the following process: (1) they were randomly selected using a multistage stratification approach that took the sociodemographic characteristics of the Chinese population into consideration; (2) the daily mortality counts in these cities were temporally stable without large fluctuations, and no change in the administrative divisions occurred during the study period; and (3) their air pollution and meteorological records were accessible during the study period. 
The completeness and accuracy of the death data in the DSP System were strictly checked by different administrative levels of the Chinese Center for Disease Control and Prevention network. Practitioners in the health facilities were responsible for checking the accuracy, completeness, and data quality of the death data, and they then reported that information to the DSP System. Staff in the district-level CDC reviewed all new information to ensure the data quality (i.e., to check that the ICD codes were maintained and to exclude the duplicate records and redundant information) in the system within 7 days, as well as returning the unclear or uncertain records back to the reporting health facilities. Then, practitioners in those health facilities asked the physicians to correct and confirm the data. Staff in the district-level CDC also collected nonaccidental death information from the security department and civil affairs bureau (the other government departments collecting the death information for the purpose of residence) every month. Then, the staff of the provincial- or regional-level CDC would conduct a second round of checking and reviewing. Finally, data were sent to the national-level CDC to undergo a further round of review, which included the duplication, logic, data analysis, and investigation of misreported data. The 72 cities in our study were divided into the following 7 regions: Northwest, North, Northeast, Central, East, Southwest, and South (Fig 1), and cities in the same region usually incorporated similar features in terms of geographical, meteorological, and cultural conditions. We used the life expectancy in the corresponding years to calculate the YLL for each death by matching age and sex to the Chinese national life table [30], which was obtained from WHO's website, and then summed the YLL for all deaths on each day of the study period to compute the daily YLL of each city. This study was based on one project aiming to examine the short-term health effects of air pollution in China, which has been approved by the Ethical Review Committee of the Institute for Environmental Health and Related Product Safety, Chinese Center for Disease Control and Prevention. No individual consent was required because all data were analyzed at an aggregated level. The present study is reported as per the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines (S1 STROBE Checklist). The data analyses were performed following a prospective analysis plan (S1 Text), and the model structure of this study is provided as a diagram in S1 Fig.
Air pollution and meteorological factors
Daily air pollution data were obtained from a national real-time publishing platform (37.208.233:20035), which delivered the real-time concentrations of ambient air pollutants that were measured by state-controlled air-monitoring stations [31]. The 24-hour mean concentrations of ambient PM 2.5 , SO 2 , and NO 2 and the maximum 8-hour mean levels for O 3 were averaged from all available monitoring data within each city. In addition, daily meteorological data on mean temperature (˚C) and relative humidity (%) were obtained from the National Meteorological Data Service Center of China (http://data.cma.cn), which is publicly accessible.
Statistical analysis
Descriptive analysis. For descriptive analysis, the number of cities and mean air pollutant concentrations, meteorological conditions, and mortality and YLL in the 7 regions during the study period were summarized.
In addition, the Spearman correlation was performed to quantify the correlation between air pollutants and weather variables. Analyses were based on complete mortality records during the study period.
Analysis for the PM 2.5 -YLL association. We examined the national and regional short-term association between daily PM 2.5 and YLL using two-stage models. At the first stage, we applied a generalized additive model (GAM) with a Gaussian link to explore the city-specific short-term association between daily PM 2.5 and YLL. In the GAM model, the daily mean concentration of PM 2.5 in each city was incorporated as the independent variable while daily YLL was used as the dependent variable, and all the quantitative variables were treated as continuous variables. We controlled for public holidays and day of the week in the form of categorical variables, while long-term and seasonal trends, temperature, and relative humidity were adjusted using penalized smoothing splines [32]. A complete list of model parameters was provided as a supplemental table (S2 Table). We selected the model specifications and the degrees of freedom (df) for the smoothers according to previous experiences of similar studies [33]. For example, we applied a df of 6 per year for long-term trends to filter out the information at time scales of about 2 months, a df of 6 for the moving average temperature of the current day and previous 3 days (lag 03 ) for the potential nonlinear relationship, and 3 df for the same day's relative humidity. We explored the associations with different lag structures from the current day (lag 0 ) up to 3 days before (lag 3 ), and we also evaluated the effects of moving averages for the current day and the previous 1, 2, and 3 days (lag 01 , lag 02 , lag 03 ). Based on the covariates described above, the statistical model can be specified as

YLL_t = α + β × PM2.5_t + s(time_t, 6 df per year) + s(Temp_lag03,t, 6 df) + s(RH_t, 3 df) + DOW_t + Holiday_t + ε_t,

where YLL_t is the daily years of life lost on day t, β is the coefficient for the daily PM 2.5 concentration, s(·) denotes a penalized smoothing spline with the stated degrees of freedom, DOW_t and Holiday_t are the categorical terms for day of the week and public holidays, and ε_t is the error term. At the second stage, we used a random-effects meta-analysis to generate the regional and national estimates. This approach provided a useful tool to pool risk estimates while accounting for within-city statistical error and between-city heterogeneity of the genuine risks [34].
Sensitivity analyses. We conducted a series of sensitivity analyses to check the robustness of the findings. Two-pollutant models were used to examine the associations between daily PM 2.5 and YLL after adjusting for other air pollutants. Specifically, PM 2.5 was included alone in the single-pollutant models, while PM 2.5 and SO 2 (or NO 2 , O 3 ) were included simultaneously in the two-pollutant models. In addition, we observed that the Northwest and Southwest regions covered a rather large area, which may have wide variation in basic characteristics, and relatively fewer cities were included in these two regions. Considering the uncertainty and the complex geospatial correlation between the cities, we performed a spatial statistical model by adjusting for the longitude and latitude of the cities in the model using a penalized smoothing splines function [35]. Furthermore, we also used a mixed-effect GAM as a one-stage approach to examine the regional and national estimates, in which we included the variable of city as a random term.
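To make the two-stage procedure concrete, the following is a minimal sketch in R using the mgcv and metafor packages named in the software note below; the data frame layout, variable names (city_df, yll, pm25_lag02, temp_lag03, rh, dow, holiday), and basis dimensions are illustrative assumptions rather than the authors' exact settings.

library(mgcv)
library(metafor)

# Stage 1: city-specific Gaussian GAM of daily YLL on lag02 PM2.5, with splines for
# the time trend (about 6 df per year), lag03 temperature (6 df) and same-day
# relative humidity (3 df), plus day-of-week and holiday indicators.
fit_city <- function(city_df, n_years) {
  m <- gam(yll ~ pm25_lag02 +
             s(time, k = 6 * n_years + 1, fx = TRUE) +
             s(temp_lag03, k = 7, fx = TRUE) +
             s(rh, k = 4, fx = TRUE) +
             factor(dow) + factor(holiday),
           family = gaussian(), data = city_df)
  co <- summary(m)$p.table["pm25_lag02", ]
  # Coefficient is per 1 ug/m3; multiply by 10 for the per-10 ug/m3 increments reported.
  c(beta = unname(co["Estimate"]), se = unname(co["Std. Error"]))
}

# Stage 2: pool the 72 city-specific coefficients with a random-effects meta-analysis.
pool_cities <- function(city_fits) {
  rma(yi = city_fits[, "beta"], sei = city_fits[, "se"], method = "REML")
}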
In addition, we performed a meta-regression to evaluate whether the observed PM 2.5 -YLL relationship could be explained by some city-level variables: Gross Domestic Product (GDP), population density, GDP per capita, elevation, precipitation, poverty, education, annual PM 2.5 concentration, annual CO concentration, annual O 3 concentration, annual SO 2 concentration, annual NO 2 concentration, air pressure, annual temperature, and annual relative humidity. The potential interaction between annual PM 2.5 and GDP was checked by including an interactive term of PM 2.5 and GDP in the meta-regression model.
Estimating the avoidable YLL, PGLE, and attributable fraction. Based on the established associations between ambient PM 2.5 and YLL, we further estimated the avoidable YLL by assuming the ambient PM 2.5 had been controlled at specified concentrations as in China's NAAQS or WHO's AQG and its ITs. We further estimated the PGLE, which was the average number of years longer each deceased person would have lived if ambient PM 2.5 were kept under a certain standard in the study area. We also calculated the attributable fraction (AF) that denoted the proportion of YLL due to a higher-than-standard daily PM 2.5 concentration. The two indicators can be calculated using the following formulas:

PGLE = avoidable YLL / overall mortality count
AF = (avoidable YLL / overall YLL) × 100%

where avoidable YLL is the sum of estimated YLL that can be prevented in the study area if ambient PM 2.5 were kept under a certain concentration, overall mortality count is the total mortality number during the study period, AF is the attributable fraction, and overall YLL is the sum of the YLL for all deaths that occurred during the study period. The reference levels of PM 2.5 included WHO's AQG (25 μg/m 3 ) and its ITs, including IT-1 (75 μg/m 3 , which was the same as China's NAAQS), IT-2 (50 μg/m 3 ), and IT-3 (37.5 μg/m 3 ). Our main analyses were performed using R (version 3.5.1; R Foundation for Statistical Computing, Vienna, Austria) with the "mgcv" and "metafor" packages. All statistical tests were two-sided, and values of p < 0.05 were considered statistically significant.
Descriptive results
During the study period, a total of 1,226,849 nonaccidental deaths were recorded in the 72 cities across the 7 regions of China; 44.0% of the study population were females. The average age at death of the subjects included in this study was 71.72 ± 16.74 years. Table 1 summarizes the air pollutant concentrations, meteorological conditions, and mortality and YLL in the 7 regions. The correlation analyses showed low to moderate correlation coefficients between air pollutants and weather variables. For example, PM 2.5 had moderate positive correlations with NO 2 (correlation coefficient = 0.50), had relatively lower correlations with SO 2 and O 3 (correlation coefficients of 0.26 and 0.29, respectively), and had a negative correlation with mean temperature and relative humidity (correlation coefficients of −0.15 and −0.02, respectively) (S3 Table).
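Returning to the two formulas above, one plausible implementation is sketched below in R; the rule used for the avoidable YLL (scaling the pooled per-10 μg/m3 coefficient by each day's exceedance over the chosen standard and summing over city-days) and all object names are assumptions for illustration, not the authors' exact code.

# daily: hypothetical data frame of city-days with columns pm25 (ug/m3) and yll
avoidable_yll <- function(daily, beta_per10, standard) {
  excess <- pmax(daily$pm25 - standard, 0)   # exceedance over the reference level
  sum(beta_per10 * excess / 10)              # YLL that could have been avoided
}

pgle <- function(avoid_yll, total_deaths) avoid_yll / total_deaths        # years per death
af   <- function(avoid_yll, overall_yll) 100 * avoid_yll / overall_yll    # percent of total YLL

# Example with WHO's AQG of 25 ug/m3 and the national lag02 estimate:
# a <- avoidable_yll(daily, beta_per10 = 0.43, standard = 25)
# pgle(a, total_deaths = 1226849); af(a, overall_yll = sum(daily$yll))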
At the national level, we estimated that each 10 μg/m 3 increase in the PM 2.5 concentrations of lag 02 was associated with an increment of 0.43 (95% CI: 0.29-0.57) YLL (S4 Table). The plot of residuals at the national level suggested that these residuals were generally independent, and there were no obviously discernible autocorrelation or patterns (S5 Fig). The region-specific results showed that the associations varied by region. For example, the Northwest region was found to have the highest association (β = 0.94, 95% CI: 0.21-1.68), while the North region had the lowest association, with a regression coefficient of 0.12 (95% CI: 0.03-0.22). In the two-pollutant models, the significant associations between PM 2.5 and YLL generally remained (S4 Table). For instance, at the national level, each 10 μg/m 3 increase in lag 02 PM 2.5 concentration was associated with an increment of 0.41 (95% CI: 0.27-0.55), 0.32 (95% CI: 0.19-0.45), or 0.41 (95% CI: 0.27-0.55) in YLL after controlling for SO 2 , NO 2 , and O 3 , respectively. The spatial statistical models for the Northwest and Southwest regions, which additionally adjusted for the longitude and latitude of each city, also produced significant effect estimates (see the supplementary tables). In addition, we evaluated whether the observed PM 2.5 -YLL relationship could be explained by some city-level factors (S6 Table). The analysis showed that the associations between PM 2.5 and YLL were relatively higher in cities with lower annual mean concentrations of PM 2.5 . Each IQR (39.40 μg/m 3 ) increase in annual concentrations of PM 2.5 was associated with a 0.59 decrease in the regression coefficient. Furthermore, we did not find a significant interactive effect of PM 2.5 and GDP on the associations between PM 2.5 and YLL (p = 0.89).
Avoidable YLL, PGLE, and the AF
Based on the established relationship between daily PM 2.5 and YLL, we estimated the avoidable YLL and AF in different regions of China. We further estimated that 0.41% (95% CI: 0.28%-0.54%) and 1.00% (95% CI: 0.68%-1.32%) of the YLL could be attributable to daily PM 2.5 exposure when using China's NAAQS and WHO's AQG as the reference, respectively (Table 2). In addition, different effect estimates were observed among these regions, with the largest being observed in the Northwest region (1.69% [95% CI: 0.37%-3.02%]) and the minimum in the South region (0.24% [95% CI: 0.10%-0.38%]). Fig 3 shows the regional and national estimates of the PGLE using different air quality standards. Overall, we estimated that 0.14 (95% CI: 0.09-0.18) and 0.06 (95% CI: 0.04-0.07) years in life expectancy can be potentially gained according to WHO's AQG (25 μg/m 3 ) and China's standard (75 μg/m 3 ), respectively. Among the 7 regions, the largest value of 0.28 (95% CI: 0.06-0.49) years was observed in the Northwest region, and the minimum value of 0.08 (95% CI: 0.02-0.15) years was found in the North region by using WHO's AQG as the reference.
Discussion
To our knowledge, this might be the first study to quantify the short-term association between ambient PM 2.5 and life expectancy in China. Using a large data set covering 72 Chinese cities, we estimated that about 0.14 years in life expectancy could be prolonged based on the hypothetical situation that the daily ambient PM 2.5 concentration was in compliance with WHO's ambient AQG.
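The headline figures can be reproduced directly from the reported totals; the short R check below simply re-derives them from numbers already given in the Results and the author summary, with no new data or assumptions.

avoid_yll <- 168065.18    # avoidable YLL under WHO's AQG (25 ug/m3), 2013-2016
deaths    <- 1226849      # nonaccidental deaths in the 72 cities
avoid_yll / deaths        # ~0.137, i.e., the reported PGLE of about 0.14 years per death
avoid_yll / 0.01          # ~16.8 million, the implied overall YLL given an AF of 1.00%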
Previous studies have well-documented the health effects of ambient air pollutants using a series of health outcomes such as premature mortality, excess morbidity, and YLL, which provided crucial information to measure the harmful effects of ambient air pollutants [36][37][38]. A few studies further examined the effects of long-term air pollution exposure on life expectancy [39][40][41]; however, little has been done to address the association of short-term PM 2.5 exposure with life expectancy, and no studies, to our knowledge, have quantified the potential benefits in life expectancy due to short-term air quality improvement [42,43]. Such evidence will be helpful for policy-making, risk management, and resource allocation. A few studies have reported the association between long-term exposure to ambient particulate matter pollution and life expectancy. For example, one study reported that a reduction of 10 μg/m 3 in annual PM 2.5 concentration could increase the life expectancy by about 0.61 years in the United States [21]. Another study similarly reported that an increase of 10 μg/m 3 in long-term PM 10 exposure was associated with a decrease of 0.64 years in life expectancy in China, and it may save 3.7 billion life-years in the whole country if the concentrations of PM 10 reached the Class I standard of 40 μg/m 3 [23]. In the present study, we estimated that 0.14 years in life expectancy can be potentially gained by reaching WHO's AQG on daily PM 2.5 concentrations in China. This finding was in line with previous observations that the shortterm health effects of PM 2.5 were relatively smaller than those from long-term exposure [44], and this may be due to the cumulative effects of prolonged exposures [45]. Nevertheless, findings from this study provided valuable evidence for the potential benefits in life expectancy of improved daily air quality, indicating that exposure to higher levels of air pollution even for a short time could reduce life expectancy. The underlying biological mechanisms linking short-term PM 2.5 exposure to life expectancy included a range of pathophysiological pathways. For example, one reason was that short-term PM 2.5 exposure could lead to increased mortality and morbidity of cardiopulmonary diseases through formation of atherosclerotic plaque, systemic oxidative stress, and inflammation [46,47]. This explanation was supported by an intervention study that reduction of particle exposure by indoor air filtration could improve microvascular function in the elderly [48]. We observed a larger potential health benefit when using WHO's AQG (25 μg/m 3 ) as the reference than using China's NAAQS (75 μg/m 3 ), indicating that a stricter ambient quality standard would lead to more health benefits and therefore should be considered in future revision of China's air quality standards. We observed some evidence for spatial heterogeneity in the association between PM 2.5 and YLL across different regions. This finding was in line with previous studies [31,49]. Generally, we found relatively weaker associations in the North, East, and South regions, whereas the associations were stronger in the Northwest and Southwest regions. The underlying reasons remained unclear. One possible underlying reason might be the differences in emission sources and chemical constituents of ambient PM 2.5 among the different regions. 
The PM 2.5 in the Northwest and Southwest regions may be more hazardous than that in other regions; most of the ambient fine particles were related to biomass combustion, which was more toxic than other sources [50]. Our meta-regression analysis showed that the areas with higher annual concentrations of PM 2.5 tended to have a lower PM 2.5 -YLL association, indicating a better adaptation to the local environmental conditions in the areas with higher levels of air pollution. It was possible that people living in highly polluted areas have higher self-protection awareness, which could lead to taking better protective actions such as wearing masks, reducing outdoor activities, and use of air purifiers [31]. Moreover, considering that the cities with a higher PM 2.5 concentration may also be wealthier and have better healthcare access, we cannot rule out the possibility that there may be some protective effect of economic development level. We therefore included the interactive term of PM 2.5 and GDP in the meta-regression model and did not find a significant interactive effect. Additionally, in light of previous studies that reported varying effects of PM 2.5 constituents on human health, we suspect that the differences in chemical components of PM 2.5 in different areas may be a potential explanation [51,52]. The observed associations between PM 2.5 and YLL were generally robust in the sensitivity analyses. In particular, the associations remained consistent in the two-pollutant models with adjustment for other air pollutants, indicating that the associations were not confounded by these air pollutants. However, we observed a relatively smaller estimate when adjusting for NO 2 , which could be partly explained by the moderate positive correlation between PM 2.5 and NO 2 (r = 0.50). It was also possible that PM 2.5 and NO 2 shared similar emission sources and biological pathways in their health effects [53,54]. Furthermore, spatial autocorrelation might be one issue in this analysis; however, this concern should be minimal because the cities were sparsely distributed in different areas, and our spatial model controlling for longitude and latitude of the cities yielded a consistent result. This study applied a novel, to our knowledge, indicator, namely PGLE, to measure the potential health benefits by controlling air pollution to a certain level. This indicator estimated the average years a person would have lived longer through air quality improvement. This measurement took into consideration of the age of the deceased and the population size of the study area, making it comparable across different areas [55]. A few limitations should be noted for this study. This was an ecological time-series study, which used the city-averaged concentrations of ambient air pollutants as the exposure measurement. It might have led to ecological fallacy and thus limited our ability of causal inference. However, it is not feasible to measure every participants' exposure directly for such a largescale study, and this strategy has been widely used in previous time-series studies [56,57]. Relatively fewer cities were included in some regions such as the Northwest and Southwest regions, which might have limited the representativeness of these two regions; however, our sensitivity analyses based on a spatial statistical model produced consistent results, suggesting that the issue did not affect the result estimate to a great extent. 
The findings from this study have some important implications for both public health and environment management. We suggest applying this indicator in future efforts. For example, the PGLE can be applied to estimate the effects of other air pollutants on life expectancy, as well as for conducting studies in different populations. The average life expectancy in China was 76.25 years in 2016. The Chinese government released the Healthy China (HC 2030) blueprint in 2016 as a national strategy. One goal of this plan is to increase the average life expectancy to 79 years by 2030. To achieve that goal, a series of action plans were suggested such as health education, diet control, and sufficient physical exercise [58]. In this respect, our study provided some new evidence that the life expectancy can be prolonged by controlling the concentrations of air pollution, and we suggest that this finding should be considered in future policy-making. In conclusion, this study indicates that ambient PM 2.5 might be a risk factor for YLL that should not be neglected, and significantly longer life expectancy could be achieved by a reduction in the pollution level.
The social evaluation of faces: a meta-analysis of functional neuroimaging studies. Neuroscience research on the social evaluation of faces has accumulated over the last decade, yielding divergent results. We used a meta-analytic technique, multi-level kernel density analysis (MKDA), to analyze 29 neuroimaging studies on face evaluation. Across negative face evaluations, we observed the most consistent activations in bilateral amygdala. Across positive face evaluations, we observed the most consistent activations in medial prefrontal cortex, pregenual anterior cingulate cortex (pgACC), medial orbitofrontal cortex (mOFC), left caudate and nucleus accumbens (NAcc). Based on additional analyses comparing linear and non-linear responses, we propose a ventral/dorsal dissociation within the amygdala, wherein separate populations of neurons code for face valence and intensity, respectively. Finally, we argue that some of the differences between studies are attributable to differences in the typicality of face stimuli. Specifically, extremely attractive faces are more likely to elicit responses in NAcc/caudate and mOFC. INTRODUCTION Within a single glance of a face, people automatically appraise face attractiveness and make a host of social attributions (Olson and Marshuetz, 2005;Bar et al., 2006;Willis and Todorov, 2006;Rule et al., 2009;Todorov et al., 2009). For example, 33-ms exposure to a face is sufficient for people to make trustworthiness decisions . Additional time exposure simply increases confidence in these decisions (Willis and Todorov, 2006). As one of the founding fathers of modern social psychology, Solomon Asch (1948, p. 258), put it, 'We look at a person and immediately a certain impression of his character forms itself in us. A glance, a few spoken words are sufficient to tell us a story about a highly complex matter. We know that such impressions form with remarkable rapidity and with great ease. Subsequent observations may enrich or upset our view, but we can no more prevent its rapid growth than we can avoid perceiving a given visual object or hearing a melody'. Despite the importance of first impressions for social interactions, research on their neural basis is in its infancy. Researchers began to use social neuroscience methods to investigate this basis only a decade ago (Adolphs et al., 1998;Nakamura et al., 1998;Aharon et al., 2001;Winston et al., 2002). Although a number of neuroimaging studies have been published on the topic, many of the results have been inconsistent . The objective of this article is to provide a quantitative summary of the major findings across studies on face evaluation. The neural basis of face evaluation Neuroimaging research on the social evaluation of faces has usually focused on evaluations along the trait dimensions of trustworthiness and attractiveness. Although these are separable dimensions, psychometric studies of social judgments from faces show that these judgments are highly inter-correlated with each other, with correlations ranging from 0.60 to 0.80 (Oosterhof and Todorov, 2008;Todorov et al., 2008a,b). For example, principal components analyses show that (i) the first component, which indicates general face valence, accounts for >60% of the variance of judgments; and (ii) trustworthiness and attractiveness judgments are highly correlated with this valence component. Given these behavioral data, one would expect to observe overlapping regions in neuroimaging studies on attractiveness and trustworthiness. 
For the purposes of this meta-analysis, we focus on studies on attractiveness and trustworthiness. Typically, such studies present participants with facial stimuli that vary on the respective dimensioneither systematically manipulated via computer modeling, or confirmed by independent behavioral ratingsand subsequently report brain activity that shows a linear relationship with changes in facial appearance along that dimension. For example, some studies have observed increased responses in the amygdala for untrustworthy faces (Winston et al., 2002), while other studies have observed increased responses in the nucleus accumbens (NAcc) and medial orbitofrontal cortex (mOFC) for attractive faces (Aharon et al., 2001;O'Doherty et al., 2003). More recent studies have sought to identify regions that show a quadratic relationship between brain activity and changes in attractiveness or trustworthiness. Researchers have observed non-linear responses in the amygdala for both attractive and unattractive faces (Winston et al., 2007), as well as for both trustworthy and untrustworthy faces . While there is convergence between the linear and non-linear approaches, there exists the possibility that these analyses are tapping distinct processes, wherein areas that show a linear pattern of activity are coding for face valence, while areas that show quadratic patterns are coding something more like face intensity. The first objective of this article is to systematically explore the pattern of observed brain activations across published neuroimaging studies on face evaluation as a function of face valence. The second objective is to examine possible dissociations between linear and non-linear responses. The third and final objective is to explore potential differences between trustworthiness and attractiveness studies. Multilevel kernel density analysis Meta-analysis is a powerful statistical tool that allows researchers to combine the data sets of a collection of similar studies to provide a more accurate, robust estimate of the effect-size of a given phenomena. This approach is widespread within behavioral research, and in recent years, meta-analyses of neuroimaging studies have become more common (Fox et al., 1998;Phan et al., 2002;Wager and Smith, 2003;Wager et al., 2004;Laird et al., 2005;Nielsen et al., 2005). Meta-analyses of neuroimaging data typically compute how frequently studies examining a given psychological phenomenon report activity in a specific brain area (Kober and Wager, 2010). This approach can be used to confirm the prevailing thinking regarding what brain areas are associated with a particular psychological phenomenon or experience. At the same time, meta-analysis can serve a more exploratory purposeidentifying regions that are consistently activated across a large number of studies of the same psychological phenomenon, but that are not typically associated with that phenomenon. Indeed, a meta-analysis of the social evaluation of faces has been recently published (Bzdok et al., 2011), and in part, motivated the analyses herein. While we ultimately employed slightly different selection criteria in choosing studies to include, we also sought to perform several more targeted analyses, as noted above. Perhaps more importantly, while Bzdok and colleagues conducted an activation likelihood estimation (ALE) meta-analysis, we use a different statistical procedure. 
Specifically, we use a Multi-level Kernel Density Analysis (MKDA), which represents an advance in meta-analytic methods for neuroimaging data, because it accounts for the fact that individual activation peaks are nested within contrast maps (maps of particular comparisons within studies), making these maps the unit of analysis, and not the peaks . Further, MKDA models contrast maps as a random effect, eliminating the possibility of one contrast dominating the meta-analysis. We conduct several analyses. First, we analyze activations across all contrasts showing (i) stronger brain responses to negativeuntrustworthy and unattractivethan positive trustworthy and attractivefaces; (ii) stronger responses to positive than negative faces; and (iii) stronger responses to positive and negative faces than to neutral faces. Second, within these contrasts, we also explore potential differences between trustworthiness and attractiveness studies. Data collection We searched for neuroimaging studies of the social evaluation of faces using the online databases PsycINFO and PubMed, as well as the scholarly article search engine Google Scholar. We limited our search using combinations of keywords including 'faces', 'social evaluation', 'social judgment', 'fMRI', 'trustworthiness' and 'attractiveness'. To be included in our meta-analysis, studies had to involve fMRI or PET investigations of healthy adults, 1 report activations in a standard coordinate systemeither Talairach or Montreal Neurological Institute (MNI) coordinates, and explicitly state whether their analyses were performed with fixed or random effects. With respect to in-scanner tasks, we only included studies in which subjects either made explicit judgments regarding the trustworthiness or attractiveness of faces, or were presented with faces that varied on one of these two dimensions during an implicit or a passive viewing task, based upon normative ratings, computer modeling or some other form of categorization. In the case of some studies (Hampshire et al., 2011;Pochon et al., 2008;Zaki et al., 2011), relevant contrasts were not originally reported, but were obtained through personal communication with the respective authors. We excluded studies that did not report specific coordinates arising from relevant contrasts, but instead referred to various ROIs from a functional localizer being more or less active during specific contrasts (Kranz and Ishai, 2006). In some instances, multiple studies were found which presented analyses of the same data sets (Todorov and Engell, 2008;Pinkham et al., 2008b). In these cases, we only included one study's reported coordinates, and this choice was made based upon which version of the study ultimately presented the more relevant analyses. Finally, we excluded some studies whose research questions bordered on ours (for instance, aesthetic judgments of paintings of faces, as in Kawabata and Zeki (2004) or neural responses to faces similar to the self varying in trustworthiness, as in Verosky and Todorov (2010) as they ultimately did not report contrasts that were appropriate for inclusion in our analyses. These choices are not trivial, as they represent some of the differences between our meta-analysis and the one conducted by Bzdok and colleagues (2011), in terms of study selection. This search yielded 28 published papers comprising 29 2 neuroimaging studies on the social evaluation of faces. 
Seventeen of these studies were on attractiveness evaluations and 12 were on trustworthiness or related evaluations (i.e. 'would you approach or avoid this person'). The latter were included because such approach/avoidance evaluations are highly correlated with trustworthiness evaluations (Todorov, 2008). This set of studies accounted for 52 separate contrasts (Table 1). Notes to Table 1: (a) In Winston et al. (2002), both explicit and implicit paradigms were employed, but only collapsed analyses were reported. (b) 'Collapsed' analyses refer to analyses in which neural activity was aggregated across both explicit and implicit tasks. (c) In this column, we note if a given study reported coordinates arising from ROI-based analyses. In some cases, these studies only reported such ROI-based analyses (for instance, Pinkham et al., 2008). As such, these studies have only been able to impact our supplementary analyses, which incorporate ROI-based analyses in addition to whole-brain contrasts. Throughout, we consider a single study to represent an investigation of the neural responses to a given set of stimuli in the context of one or potentially multiple psychological tasks within the same set of subjects. For contrasts to be included in our database, they had to be representative of neural activity that varied parametrically with either facial attractiveness or trustworthiness, and furthermore, the direction and linearity of this relationship had to be clearly stated. We excluded coordinates derived from complex interaction-based analyses (for instance, stimulus type and gender interactions, as seen in O'Doherty et al. (2003)), as well as coordinates arising from analyses that were not relevant to our research questions (e.g. effects of face novelty in Kim et al. (2007)). Further, overlapping contrasts are often reported in the articles surveyed. For instance, Aharon and colleagues (2001) report separate contrasts detailing neural activity associated with facial attractiveness for male stimuli, female stimuli, and collapsed across both kinds of stimuli. In these cases, we only included the most general reported contrast; for instance, for Aharon et al. (2001), we used the collapsed contrast. Studies that report separate results for explicit and implicit paradigms presented a unique problem (see Winston et al., 2002; Baas et al., 2008; Chatterjee et al., 2009). On the one hand, both analyses are relevant to our main research question, and favoring one paradigm over the other in these three cases would bias our results in favor of that task design. On the other hand, the contrasts are undeniably non-independent of each other. Ultimately, we chose to run our analyses using both sets of coordinates for these three studies, which were entered into our database as separate contrasts. To confirm that this approach had no demonstrable impact on our results, we ran complementary analyses that only included one contrast per study (i.e. only the explicit task contrast from the three studies in question). We observed no practical differences in either the size or localization of consistent activations. We tabulated the design particulars and parameters of each study, as well as the reported activation points for all relevant contrasts.
Specifically, we coded each study in terms of which coordinate system activations were reported in, number of participants, whether a fixed or random effects analysis had been performed, whether activations represented linear or non-linear effects, whether the task was explicit or implicit in nature, and whether the reported activations were the result of a whole-brain or region-of-interest (ROI) analysis. This coding scheme served two purposes. Primarily, this information was fed into the MKDA toolbox and used to determine the proper weighting scheme for the different studies. Secondarily, it served as the basis for contrasting studies against each other on relevant variables. This coding scheme was initially entered by the first author, with subsequent confirmation and complete agreement from the second and third authors. Entered coordinates were checked and re-checked against their original sources numerous times throughout the course of setting up our database. The studies compiled in our database used a variety of face stimuli. Some studies used computer-generated faces (for instance, Chatterjee et al., 2009), others used standardized photograph sets of volunteer subjects (for instance, Engell et al., 2007), and still others used photographs culled from magazines and newspapers (for instance, O'Doherty et al., 2003) . These faces likely differ in terms of their typicalityfaces in standardized photographs are more typical than the extreme faces seen in photographs of models and actresses. Given recent work suggesting that face typicality can partially account for the amygdala's response to the valence of face stimuli (Said et al., 2010), it is possible that different types of face stimuli (e.g. extremely attractive faces that are less typical) could lead to different patterns of neural responses. As such, while we did not exclude studies based on the sources of the face stimuli, we did keep track of the source of each study's stimuli. This allowed for the possibility of comparing the more typical faces (computer-generated and standardized sets) against the more atypical faces (photos of models and actors from print media). It is important to note that contrasts containing ROI-based analyses pose a problem for inclusion in meta-analyses. On the one hand, including coordinates from ROI-based analyses may bias the results by introducing researchers' a priori predictions about which regions are involved in trustworthiness and attractiveness evaluation. On the other hand, such analyses represent theoretically motivated prior research. Further, because some ROIs like the amygdala and NAcc are relatively small and often difficult to image, excluding ROI-based analyses may miss important findings that are consistent across studies. Given that, we chose to run each of our analyses twiceonce limited to whole-brain contrasts, and once with ROI-based contrasts included. In the interest of space, we chose to report the whole-brain analyses in the main text, as well as to note whether or not adding ROI-based contrasts substantially affected the results. (In all cases but one, adding ROI-based contrasts did not have a substantial effect on analyses. For the contrast that produced divergent results, we chose to explicitly note in the text how the two approaches differed.) The specific results for the ROI-based analyses are reported in supplemental material. 
We note that while some studies reported ROI-based analyses side by side with whole-brain analyses (for instance, Van Rijn et al., 2011), there are a small number of studies that reported only ROI-based analyses (for instance, Pinkham et al., 2008a). Data analysis Our MKDA of 'negative' contrasts comprised all contrasts in which brain activity increases as facial stimuli decrease in either trustworthiness or attractiveness. 'Positive' contrasts comprised all contrasts in which brain activity increases as facial stimuli increase in either trustworthiness or attractiveness. Non-linear, quadratic contrasts comprised all contrasts in which brain activity increases as facial stimuli increase or decrease in either trustworthiness or attractiveness relative to faces at the middle of the continuum. Not all studies included in our database reported both negative and positive contrasts. Therefore, neither of our primary MKDAs contains contrasts from every study. When performing these analyses, the peak coordinates from all relevant contrast maps were first separately convolved with a 10-mm spherical kernel, yielding comparison indicator maps (CIMs). Previous meta-analytic work suggests that this is an appropriate default kernel size (Wager et al. 2003;Salimi-Khorshidi et al., 2009). These CIMs were subsequently weighted based upon the number of participants and what type of analysis was performed in each study, following the same parameters used by Kober and colleagues (2008). Specifically, each map was first weighted by the square root of its study's sample size and subsequently multiplied by an adjustment factor accounting for the type of analysis used in the respective study. Random effects studies were multiplied by an adjustment factor of 1; fixed effects studies were multiplied by an adjustment factor of .75. In this fashion, studies received higher weighting if they had large sample sizes and performed random effects analyses. Second, the weighted CIMs were averaged together, producing a density map. Each voxel of this density map attains a density statistic, P, which is the weighted proportion of contrasts included in the MKDA that yield activity within 10 mm of that voxel. To identify voxels whose P-statistic exceeds the frequency expected by chance, a Monte Carlo simulation was conducted. Over 5000 iterations of the Monte Carlo simulation, the observed activation blobs (contiguous regions of activation within the CIMs, holding shape constant) from each CIM were randomly shuffled within a gray-matter mask. Following each iteration, we recorded both the maximum whole brain density statistic (P, across all studies) and the largest cluster of contiguous voxels. These values were used to create null-hypothesis distributions for the density statistic and the expected size of clusters, respectively. Third, the weighted P was subsequently tested against the resulting null-hypothesis P 0 produced by the Monte Carlo simulation. A similar procedure was used to test for the significance of the size of the clusters, allowing us to identify a size threshold at which a certain number of voxels must be activated contiguously for a given cluster to be deemed significant. Hence, we used two types of thresholdsa density height-based threshold and a cluster size threshold, the latter derived from a non-parametric cluster-based thresholding procedure (Nichols and Hayasaka, 2003). 
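The density-map and Monte Carlo logic just described can be illustrated with the simplified R sketch below; it is not the MKDA toolbox itself, it shuffles individual peaks rather than whole activation blobs, and all object names (peaks_by_contrast, voxels, n, random_fx) are hypothetical.

# peaks_by_contrast: list with one matrix of peak coordinates (x, y, z in mm) per contrast
# voxels: matrix of gray-matter voxel coordinates; n, random_fx: per-contrast study info
mkda_density <- function(peaks_by_contrast, voxels, n, random_fx, radius = 10) {
  # Comparison indicator maps: 1 where any peak of the contrast lies within 10 mm
  cims <- sapply(peaks_by_contrast, function(pk) {
    apply(voxels, 1, function(v) any(sqrt(colSums((t(pk) - v)^2)) <= radius))
  })
  # Weight each contrast map by sqrt(sample size), times 0.75 for fixed-effects studies
  w <- sqrt(n) * ifelse(random_fx, 1, 0.75)
  as.vector(cims %*% w) / sum(w)   # weighted proportion P at every voxel
}

# Null distribution of the maximum density statistic: randomly relocate each
# contrast's peaks within the gray-matter mask and record the whole-brain maximum.
mkda_null_max <- function(peaks_by_contrast, voxels, n, random_fx, n_iter = 5000) {
  replicate(n_iter, {
    shuffled <- lapply(peaks_by_contrast, function(pk)
      voxels[sample(nrow(voxels), nrow(pk)), , drop = FALSE])
    max(mkda_density(shuffled, voxels, n, random_fx))
  })
}
# Voxels whose observed P exceeds the 95th percentile of this null distribution pass
# the familywise-error-corrected height threshold; an analogous null for the largest
# contiguous cluster gives the extent threshold.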
For P, the resulting familywise error rate (FWER)-controlled threshold is the proportion of studies reporting activity within 10 mm of a given voxel that exceeds the maximum P-statistic across 95% of the resulting Monte Carlo maps. These voxels appear on resulting maps colored in yellow and will be referred to in our results as exceeding the height-based threshold of the MKDA. For the cluster size threshold, the resulting FWER-controlled threshold is the clusters observed at P < .001 and P < 0.01 whose size exceeds the maximum cluster size across 95% of the Monte Carlo maps. These voxels appear on resulting maps in orange and pink, respectively, and will be referred to as exceeding the extent-based threshold. The thresholded maps were overlaid on a canonical MRI image (colin27.img, the single-subject template in SPM5; http://www.fil.ion.ucl.ac.uk/spm/software/spm5/), which was co-registered to the MNI brain template. When reporting areas of consistent activation in our tables, we provide information on whether each area withstood height-based thresholding, extent-based thresholding, or both. Some areas of activation were sizable enough to pass extent-based thresholding but not height-based thresholding. Conversely, other areas were highly consistent across the database and passed height-based thresholding, but were not sufficiently large to pass extent-based thresholding. XYZ-coordinates reported in our tables reflect the peak activation foci which withstand height-based thresholding, or, if activations are less consistent, the center of mass of the cluster at the most stringent level of extent-based thresholding. Further, we report the number of voxels in each cluster which withstood height-based thresholding, or if activations are less consistent, the number of voxels at the most stringent level of extent-based thresholding. We also performed several smaller, more targeted MKDAs exploring differences between trustworthiness and attractiveness studies and performed several additional exploratory analyses based on stimulus typicality. To perform such analyses, a simple subtraction yields the relative difference in the distribution of peaks between the respective contrasts, which is subsequently thresholded as explained above. Results across negative contrasts Eleven studies reported 13 negative contrastswhere brain activity increased as attractiveness or trustworthiness decreasedacross the whole brain. The MKDA results for these contrasts are summarized in Table 2 and Figure 1. We observed highly consistent activation in right amygdala (withstood height-and extent-thresholding, P < 0.001), as well as activation in left amygdala that survived extent-but not height-thresholding (P < 0.001). Four studies reported ROI-based coordinates for negative contrasts. When we included these additional coordinates in our analysis as well, we continued to observe highly consistent activation in right amygdala (withstood height-and extentthresholding, P < 0.001), as well as activations which survived extent-but not height-thresholding in left amygdala (P < 0.001), right globus pallidus (P < 0.01) and a large region of consistent activation encompassing right anterior insula, right inferior frontal gyrus (IFG) and right ventrolateral prefrontal cortex (vlPFC, P < 0.01, additional results summarized in Supplementary Table S1). Results across positive contrasts Twenty-one studies reported 23 positive contrastswhere brain activity increased as attractiveness or trustworthiness increasedacross the whole brain. 
The MKDA results for these contrasts are summarized in Table 2 and Figure 2. We observed highly consistent activation in left caudate extending into NAcc and mOFC, right thalamus, vmPFC and dACC/pgACC (withstood height- and extent-thresholding, P < 0.001), as well as portions of right amygdala, right anterior insula, right IFG (P < 0.001), and bilateral vlPFC that survived extent- but not height-thresholding (P < 0.01). Four studies reported ROI-based coordinates for positive contrasts. Including these additional coordinates in our analysis yielded similar results (summarized in Supplementary Table S1).
Non-linear responses
Nine studies within our database conducted non-linear analyses testing for stronger responses to both negative- (unattractive or untrustworthy) and positive-looking (attractive or trustworthy) faces than to faces at the middle of the continuum. Collapsed across both sets of stimuli, we observed consistent non-linear activation across seven whole-brain contrasts in the right amygdala extending into right putamen (withstood height- and extent-thresholding, P < 0.001, Table 3). Including two additional ROI-based contrasts in the analysis yielded similar results (Supplementary Table S2). We note that given the relatively small number of contrasts documenting non-linear responses, this analysis is underpowered. Nevertheless, five of the seven whole-brain contrasts reported activity in right amygdala. We also compared non-linear responses against linear responses, though these comparisons are, by virtue of the smaller number of non-linear contrasts, unavoidably unbalanced. Contrasting negative linear contrasts (13 contrasts) against non-linear contrasts (seven contrasts), we observed a ventral portion of the right amygdala that was more consistently active in negative linear contrasts (withstood height- but not extent-thresholding, Supplementary Table S3), while a more dorsal portion of the right amygdala was more consistently active in non-linear contrasts. Including ROI-based contrasts in the analysis yielded similar results (Supplementary Table S4). (As this contrast is unbalanced, we have provided information regarding the frequency of activation at the peak voxels of those areas that withstood height-thresholding, Supplementary Table S5A.) Contrasting positive linear contrasts (23 contrasts) against non-linear contrasts (seven contrasts), we observed a set of regions that were more consistently active in positive linear contrasts, including bilateral caudate, vmPFC/OFC, and dACC/pgACC (withstood height- and extent-thresholding).
Negative linear responses in attractiveness and trustworthiness studies
We contrasted negative linear responses in attractiveness (six contrasts) and trustworthiness studies (seven contrasts), observing one activation in the right amygdala that was more consistently active for negative linear responses to trustworthiness than attractiveness (withstood height- and extent-thresholding, P < 0.001). We observed no regions that were consistently more active for negative linear responses to attractiveness than trustworthiness. (Results are summarized in Supplementary Table S5.) Including ROI coordinates (from one unattractiveness study and three untrustworthiness studies) in the analysis yielded similar results (noted in Supplementary Table S6).
Positive linear responses in attractiveness and trustworthiness studies
We contrasted positive linear responses in attractiveness (18 contrasts) and trustworthiness (5 contrasts).
We observed activations in left caudate extending into NAcc, vmPFC/OFC and pgACC extending dorsally into dACC (withstood height- and extent-thresholding, P < 0.01) that were more consistent for positive linear responses to attractiveness than trustworthiness. We observed no regions that were more consistently active for positive linear responses to trustworthiness than attractiveness.
Separating attractiveness studies by stimulus type
The differences between trustworthiness and attractiveness studies are interesting but also puzzling given that evaluations on these two dimensions are highly correlated. There were no obvious differences between these two sets of studies (for instance, they were well-balanced between implicit and explicit tasks) except for the nature of the face stimuli used in the studies. Whereas eleven of the attractiveness studies used atypical, extremely attractive faces (culled from magazines and print media, often of models), none of the trustworthiness studies used such faces (typically, these were standardized sets of faces or computer-generated faces). If the differences between attractiveness and trustworthiness studies are partly due to differences in stimuli, then the regions that differentiate these studies should also appear in contrasts involving the extremeness of faces. We can test this proposition by splitting attractiveness studies into two groups: those that used extremely attractive stimuli and those that used average or computer-generated stimuli. Comparing extreme attractiveness studies to the set of trustworthiness studies should yield areas of consistent activation in NAcc/caudate and mOFC, for example, while there should be fewer differences between average attractiveness and trustworthiness studies. Indeed, when contrasting extreme attractiveness (11 contrasts) against trustworthiness (8 contrasts), we observed consistent activation in left caudate and NAcc, extending into mOFC, pgACC, and vmPFC (withstood height- and extent-thresholding, p < .01; see Figure 3, results summarized in Supplemental Table S7). Further, we observed a consistent pattern of activation centered in pgACC and extending broadly into both vmPFC and vlPFC that withstood extent-thresholding (p < .01) but not height-thresholding. Contrasting average attractiveness against trustworthiness produced no areas of consistent activation. Similarly, within the set of attractiveness studies, when contrasting studies that used extremely attractive faces (11 contrasts) against studies that used more typical faces (7 contrasts), we observed consistent activation in left caudate, vmPFC/mOFC, and pgACC/dACC (withstood height- and extent-thresholding, p < .01), while a larger activation extending broadly through mOFC, vmPFC and vlPFC withstood extent-thresholding (p < .01), but not height-thresholding. The reverse contrast produced no areas of consistent activation. Further, differences due to face stimuli should be apparent in studies that used implicit evaluation paradigms. Because no evaluative dimension is specified in such paradigms, stimulus properties should drive the neural responses. Contrasting implicit paradigm studies that used extremely attractive faces with implicit paradigm studies that used more typical faces produced consistent activation in right amygdala, left caudate extending into NAcc and right inferior frontal gyrus (withstood height- but not extent-thresholding; results are summarized in Supplemental Table S7).
DISCUSSION Using multi-level kernel density analysis, a statistically rigorous method of meta-analysis that treats contrasts as the unit of analysis instead of individual activation peaks, we performed a meta-analysis on 29 neuroimaging studies of the social evaluation of faces. We split these studies by valence into two MKDAs, one focusing on brain responses to negative evaluations like unattractiveness and untrustworthiness, and the other focusing on brain responses to positive evaluations like attractiveness and trustworthiness. Our negative MKDA revealed the most consistent activation in right amygdala. Less consistent areas of activation were observed in left amygdala, right anterior insula, right IFG, right vlPFC and right globus pallidus. These results are remarkably consistent with previous findings regarding the neural responses to angry faces (Morris et al., 1998;Whalen et al., 2001;Monk et al., 2006;Dannlowski et al., 2007). Amygdala responses to angry faces have been widely observed and characterized (Morris et al., 1998;Whalen et al., 2001;Nomura et al., 2004;Taylor et al., 2006;Dannlowski et al., 2007;Monk et al., 2008;Vrticka et al., 2008). Furthermore, a functional connectivity between the amygdala and vlPFC has been proposed and demonstrated (Nomura et al., 2004;Taylor et al., 2006;Monk et al., 2006Monk et al., , 2008, suggesting that in response to angry faces, the vlPFC may serve to modulate amygdala reactivity, effectively regulating emotional responses. Right IFG (Dannlowski et al., 2007), right insula (Dannlowski et al., 2007;Vrticka et al., 2008) and right globus pallidus (Jackson et al., 2008) have also all been implicated in the neural response to angry faces. The consistent activation in the left caudate nucleus, extending broadly into the nucleus accumbens, suggests that positive evaluation of faces may depend, in part, on the recruitment of structures implicated in reward-processing (Knutson et al., 2001a,b;Haruno et al., 2004). However, we note that consistent activation in this area was almost entirely driven by attractiveness contrasts, and, therefore, may not be part of a general network for face evaluation. Nonetheless, the highly consistent presence of these areas in our meta-analysis suggests that under certain task and stimulus conditions, attractive faces modulate activity in reward-related regions of the brain. The similarities between the neural correlates of negatively and positively evaluated faces and angry and happy faces, respectively, parallels perceptual similarities between these types of faces. In computer models of facial trustworthiness, extreme untrustworthiness resembles anger and extreme trustworthiness resembles happiness Todorov, 2008, 2009;Todorov et al., 2008a,b). Further, behavioral adaptation studies suggest common neural underpinnings for evaluations of trustworthiness and anger/ happiness (Engell et al., 2010). These observations are consistent with the emotion overgeneralization hypothesis (Montepare and Dobish, 2003;Todorov et al., 2008a,b;Zebrowitz and Montepare, 2008;Said et al., 2009a,b), according to which evaluative judgments of faces are based on configurations of facial features resembling emotional expressions. In the context of positive and negative evaluation, these configurations signal approach and avoidance behaviors, respectively (Todorov, 2008). Our meta-analysis findings are also consistent with the hypothesis that novel faces are automatically evaluated with respect to their approach/avoidance value. 
The role of the amygdala in face evaluation
The amygdala is critical for adaptive social behavior (Adolphs, 2010; Sander et al., 2003) and, possibly, for normal face perception and evaluation (Todorov, 2011). Large meta-analyses of PET and fMRI studies on emotional processing show that faces are one of the most potent stimuli for eliciting responses in the amygdala (Costafreda et al., 2008; Sergerie et al., 2008). The role of the amygdala in face evaluation is also consistent with neurophysiology findings of face-selective responses in the amygdala (Nakamura et al., 1992; Rolls, 2000a,b; Gothard et al., 2007). The amygdala receives input from the inferior temporal (IT) cortex and projects back not only to IT cortex but also to extrastriate and striate visual areas (Amaral et al., 1992). The amygdala also has strong interconnections with rACC, OFC, mPFC, basal ganglia and anterior insula. This anatomical position allows the amygdala to serve as an affective hub of information. The current findings, together with the findings of a recent ALE-based meta-analysis of a smaller and only partially overlapping set of 16 studies on face evaluation (Bzdok et al., 2011), further buttress the importance of the amygdala in face perception and evaluation. Importantly, the amygdala responded not only to negatively evaluated faces but also to positively evaluated faces, consistent with meta-analyses of its responses to the valence of emotional expressions (Sergerie et al., 2008). Interestingly, we observed different loci of activation within the amygdala for linear and non-linear responses (Figure 4). Whereas a ventral portion responded more consistently to negative faces only, a dorsal portion of the amygdala responded more consistently to both negative and positive faces than to neutral faces. This dissociation of linear and non-linear responses in the human amygdala parallels the findings of a high-resolution fMRI study on non-human primates (Hoffman et al., 2007). Hoffman and colleagues observed a linear response in ventral portions of the amygdala (comprising the basolateral amygdala): specifically, stronger responses to threatening faces and progressively weaker responses to neutral and appeasing faces. However, in a more dorsal portion (comprising the central nucleus and the bed nucleus of the stria terminalis), they observed a non-linear response: stronger responses to both threatening and appeasing faces than to neutral faces. This ventral/dorsal distinction also parallels a distinction made by Whalen and his colleagues (Whalen et al., 2001; Kim et al., 2003; Somerville et al., 2006; Davis et al., 2010). They have argued that while the ventral portion of the amygdala is involved in processing valence, the dorsal portion of the amygdala is recruited in determining the value of ambiguous information (e.g. expressions of surprise) in a given context. These authors suggest further that given the dorsal amygdala's response to surprised (Kim et al., 2003), fearful (Whalen et al., 1998, 2001) and happy faces (Breiter et al., 1996; Whalen et al., 1998), it may be tracking the salience of these faces, more generally. This hypothesis is consistent with the current findings. These findings open the door to future work along those lines. One possibility is that there exist separate populations of neurons within the amygdala that code for stimulus valence and stimulus salience, respectively.
Ultimately, the findings are in line with previous work proposing a shift away from conceptualizing the amygdala as simply a fear or threat module and instead toward an account of the amygdala as also tracking stimulus intensity (Anderson et al., 2003;Small et al., 2003) or motivational salience (Sander et al., 2003;Cunningham et al., 2008;Adolphs, 2010;Todorov, 2011). These findings also serve as an excellent reminder that one of the additional benefits of the meta-analytic method is the possibility of generating new, testable hypotheses for future research. Faces that are tagged as affectively significant in the amygdala can be further processed in prefrontal regions, which, in turn, can serve to modulate amygdala activity. Prefrontalamygdala connections have been explored in the vmPFC (Quirk et al., 2003, Heinz et al., 2004, as well as the pgACC (Pezawas et al., 2005;Stein et al., 2007;Zink et al., 2010), both of which were observed as consistently activated across our set of positive contrasts. Stimulus effects on neuroimaging findings We also performed several smaller MKDAs to compare between study type, within negative and positive linear responses. These more targeted MKDAs offered evidence that our negative and positive analyses were driven by untrustworthiness and attractiveness, respectively. These two sets of studies were associated with different loci of activations: the right amygdala was more consistently active as facial trustworthiness decreased, while the NAcc/caudate and vmPFC/pgACC were all more consistently active as facial attractiveness increased. This distinction mirrors the results we observed in our primary analyses. In contrast, no brain regions were consistently activated across contrasts where facial attractiveness decreased or facial trustworthiness increased, respectively, even when including ROI coordinates in the analyses. We should note that these contrastsespecially the comparison between positive linear responses in attractiveness and trustworthiness studiesare certainly unbalanced, rendering the results more suggestive than confirmative. As noted in the introduction, given that attractiveness and trustworthiness judgments from faces are highly correlated, this pattern is puzzling. These differences between trustworthiness and attractiveness studies cannot be explained by researchers' a priori focus on different regions because our results hold even for whole brain analyses that did not include ROI coordinates from individual studies. The apparent differences in the neural bases of attractiveness and trustworthiness are also puzzling in the context of studies that used the same set of faces to examine responses to facial attractiveness and trustworthiness (Todorov and Fig. 4 Linear and non-linear response patterns in right amygdala. Blue indicates voxels more consistently active across non-linear contrasts, red indicates voxels more consistently active across negative evaluations, and green indicates voxels consistently active across positive evaluations. Blue and red clusters withstood height-based thresholding, while the green cluster withstood extent-based thresholding (P < 0.001). Engell, 2008). Specifically, Todorov and Engell re-analyzed the data from Engell et al. (2007), using 14 different social judgments of the same set of faces. Most of the brain responses were accounted by a general valence dimension rather than by specific dimensions such as attractiveness and trustworthiness (both of these were highly correlated with this dimension). 
What could be driving the differences in neuroimaging studies on attractiveness and trustworthiness? One possibility is that the type of faces used in these studies may lead to different responses. Specifically, a third variable that is correlated with both trustworthiness and attractiveness but could vary across sets of faces may account for such differences. One candidate is face typicality. Recently, Said and colleagues (2010) showed that coding face typicality is a more parsimonious explanation of prior findings of the involvement of the amygdala in face evaluation than coding face valence. Face typicality could vary across data sets and lead to different results. For example, in many standardized data sets of natural faces, typicality is positively correlated with both attractiveness and trustworthiness judgments ( Figure 5A). In studies using these stimuli, the amygdala shows stronger responses to more atypical faces that happened also to be more negative (Todorov and Engell, 2008). In studies using artificial stimuli created by a statistical model, the most atypical faces are faces at the extremes of the dimension. In such studies, the amygdala responds to more atypical faces that happened to be more positive or more negative (Said et al., 2010). The important distinction here is not between real and artificial faces. Judgments of artificial faces that have not been manipulated to exaggerate differences along social dimensions are linearly correlated with their perceived typicality. Finally, in attractiveness studies that use extremely attractive faces (e.g. Aharon et al., 2001), the most attractive faces may be the least typical ( Figure 5B). In such studies, the amygdala may respond to both extremely attractive and extremely unattractive faces as observed in Winston et al. (2007). The typicality hypothesis predicts that faces that systematically differ in their perceived typicality may lead to different neural responses. In fact, in the contrast of attractiveness studies using extreme faces and studies using more typical faces, focusing specifically on studies employing implicit paradigms, we observed consistent patterns of activation in right amygdala and NAcc/caudate. This suggests that when task demands are controlled for, the driving force behind NAcc and caudate activations observed in these studies was the usage of extremely attractive, atypical faces. This result also lends additional support to the suggestion that extreme, atypical faces will drive amygdala activity, regardless of their trustworthiness. Further, the consistent activations in vmPFC and pgACC that were observed across attractiveness contrasts but not trustworthiness contrasts can also be accounted for by face typicality. Contrasting extreme attractiveness and trustworthiness continued to produce consistent activation in these regions, while contrasting more typical attractiveness against trustworthiness did not. It may be the case that in the context of face evaluation, some of the regions implicated in reward processing are only activated upon the presence of real and extremely attractive faces, or the goal to evaluate face attractiveness, or some combination of stimulus features and task demands. Unfortunately, we do not have a sufficient number of studies to test for more specific effects. Recommendations for studies using face stimuli Our findings suggest that the type of face stimuli selected for a particular study matters a great deal. 
For example, using more 'extreme' faces resulted in more consistently observed activation in the NAcc. Given that stimuli are often selected in an ad hoc fashion and rarely shared among research groups, this complicates comparisons across studies. Moreover, it undermines the generalizability of results. To overcome these problems, researchers need to use a shared set of stimuli, not necessarily the same stimuli but stimuli sampled from a common pool. One approach is to use parametrically manipulated faces generated by an explicitly specified statistical model (e.g. Oosterhof and Todorov, 2008). This approach has the benefit of providing researchers with a full spectrum of faces, one that is not biased towards one portion of a given dimension. Our laboratory has made a number of such databases available for academic research (http://webscript.princeton.edu/~tlab/databases/). However, artificial faces may not be the best stimuli for many investigators. In this case, it would be best to create a common bank of stimuli that are shared with other research groups. These stimuli could be validated on a number of important variables such as typicality, and these variables could be further used to facilitate comparisons across studies.
SUPPLEMENTARY DATA
Supplementary data are available at SCAN online.
Multi-scale agent-based modeling on melanoma and its related angiogenesis analysis Background Recently, melanoma has become the most malignant and commonly occurring skin cancer. Melanoma is not only the major source (75%) of deaths related to skin cancer, but also it is hard to be treated by the conventional drugs. Recent research indicated that angiogenesis is an important factor for tumor initiation, expansion, and response to therapy. Thus, we proposed a novel multi-scale agent-based computational model that integrates the angiogenesis into tumor growth to study the response of melanoma cancer under combined drug treatment. Results Our multi-scale agent-based model can simulate the melanoma tumor growth with angiogenesis under combined drug treatment. The significant synergistic effects between drug Dox and drug Sunitinib demonstrated the clinical potential to interrupt the communication between melanoma cells and its related vasculatures. Also, the sensitivity analysis of the model revealed that diffusivity related to the micro-vasculatures around tumor tissues closely correlated with the spread, oscillation and destruction of the tumor. Conclusions Simulation results showed that the 3D model can represent key features of melanoma growth, angiogenesis, and its related micro-environment. The model can help cancer researchers understand the melanoma developmental mechanism. Drug synergism analysis suggested that interrupting the communications between melanoma cells and the related vasculatures can significantly increase the drug efficacy against tumor cells. performance. Thus, one of the aims of this study is to develop such indexes and tools that can estimate drug effects on melanoma cells. It is known that angiogenesis [4][5][6][7] is a significant transforming phase in tumor growth. A drug's distribution inside a tumor is highly heterogeneous due to the tumor vasculature's tortuous, chaotic structure compared to fine, nearly parallel blood vessels in normal tissue. Drugs delivered to tissues will not only change the behavior of melanoma cells (secretion of cytokines, proliferation, differentiation, apoptosis, or migration) in the intracellular drug-triggered cell division process, but also inhibit the development of new capillary sprouts by preventing sprouts from receiving vascular endothelial growth factors (VEGF). In turn, inadequate glucose and oxygen transported from the blood vessel will drive even more melanoma cells towards apoptosis. Therefore it is of great necessity to take tumor-induced angiogenesis into consideration and simulate the irregular vasculature inside tumor in order to further study the drug distribution and drug therapeutic effects. Many mathematical models [8][9][10][11][12][13][14][15][16][17][18][19] have been proposed to address the current challenges mentioned above. These models studied one or more phases of cancer progression, including tumor growth, angiogenesis, and drug treatment, with the purpose of better understanding the pathophysiology of cancer, mechanisms of drug resistance, and the optimization of treatment strategies. Although biologists have already obtained many experimental data sets at the molecular, cellular, micro-environmental and tissue levels, only a few scientists have integrated these data into a multi-scale platform to investigate the tumor progression with regard to its related angiogenesis and drug treatment. Studies on the anti-angiogenesis drug effects and the drug combination treatment responses are still rare. 
Hence, this research presents a 3D multi-scale agent-based model to investigate the role of the tumor-angiogenesis interactions in melanoma tumor progression by extending our previously well-developed 2D agent-based tumor growth models [20][21][22]. The multi-scale system is comprised of intracellular, intercellular, and tissue levels to describe the melanoma growth with angiogenesis. As a rule based model, this study developed a set of rules to determine the melanoma cell's phenotypic switch. These rules not only underline the migration of endothelial cells and the branching of vessel sprouts, but can also be more easily integrated into the agent based tumor growth model than previous Hybrid Discrete-Continuum (HDC) rules [23]. The model also can be employed as the test bed to predict the in vivo tumor responses to the combined drug pair: one for anti-angiogenesis and the other for the tumor. In general, the multi-scale model can not only simulate melanoma tumor expansion with related angiogenesis, but also explore the best drug combination for tumor treatment and the dual role of angiogenesis (transporting both nutrient and drugs). Mathematical models In order to describe tumor growth with angiogenesis and study melanoma's response to given drug pairs, our model defines two types of agents: the melanoma cell and the endothelia cell. The melanoma cell and the endothelia cell agents represent the progression of tumor and vasculature, respectively. The aforementioned multi-scale model consists of three biological levels: the intracellular, intercellular, and tissue levels. The intracellular level describes the fundamental mechanism for cell's phenotypic switch. The intercellular level bridges the tissue and intracellular scale as follows. (a) The vasculature delivers oxygen, cytokine, and glucose to the tumor microenvironment in the tissue level; (b) The melanoma cells uptake the glucose for metabolism as well as switch the phenotype under the stimulation of specific cytokine in the intercellular scale; (c) In turn, the inadequate glucose and oxygen will stimulate the tumor cell to secrete the VEGF in the intercellular level to induce angiogenesis. In the tissue level, blood vessel sprouts migrate and branch via tip endothelial cells' migration in response to the diffused VEGF and drugs. Initialization We use a 100×100×100 cube ( Figure 1) with four sub-compartments to represent a slice of the virtual tumor extracellular matrix (ECM). The lattice size is 10 μm, which is approximately the same as the radius of the tumor cells. A hundred active melanoma cells are initialized in the center of the lattice like a sphere and the age of each tumor cell is randomly initialized between 0-24 hours. Sixteen tip endothelia cells are initialized on the surface of the 3D ECM as the main blood vessels. The VEGF and glucose are normally distributed in the cube 1 to 4 at the start, respectively. Intracellular level: the phenotypic switch of melanoma cells At every simulation step (Δt = 2 hours), each melanoma cell determines its phenotype according to the following rules as shown in Figure 2. Apoptosis If the concentration of the glucose is less than the cell's survival threshold, the starving cancer cell will secrete VEGF to induce angiogenesis for nutrition deliver. If the Figure 1 The 3D lattice represents the tumor extracellular matrix. starving cancer cell stays in an environment with inadequate nutrition for too long, it may go to apoptosis phenotype as Equation 1. 
where Δt is the time interval and λ 0 is the normal death rate of the melanoma cell. λ 1 denotes the impact of the cytotoxic drug as Equation 2. where A sDox and A sglucose denote the average drug (Dox, which is a cytotoxic drug directly to melanoma cell) and glucose concentrations on the current site and its Von Neumann neighbors, respectively. w 1 , w 2 are the regulatory factors. Proliferation: Equation 3 describes the proliferation probability of the melanoma cell. where λ 2 is the normal proliferation rate of the melanoma cell equal to the reciprocal of average proliferation time of the cell. Die function (Equation 4) is determining whether the cell enters the cell cycle or not. with a die C rand ∈ [0, 1), if the die C rand falls into the interval [0, p prol ), the cell enters the cell cycle and starts to proliferate. Migration If the cell is neither in the cell cycle nor dividing, it will migrate. The detail of migration will be discussed in the next section. Quiescence After the cell determines its phenotype, it will look for a free place that is of least resistance, most permission, and highest attraction [24] to divide or migrate. The cell will enter a reversible quiescent state in the absence of a free space. Intercellular level Three major extracellular micro-environmental factors, such as glucose, VEGF, and drugs, are discussed in this model. A set of reaction-diffusion equations describes the diffusion, degrading, and uptake of these factors. Glucose diffuses, degrades, and it is consumed by tumor cell as described in Equation 5 where G ijk (t + 1) is the glucose concentration on the location P ijk in the (t + 1) time step, λ g is the diffusion constant of glucose. G l ijk t ð Þ; l ¼ 1; 2; …; 6 are the glucose concentrations of P ijk 's at its six immediate neighbors (Von Neumann neighbors) in the current time step. The time dependent characteristic functions χ endo (t, P ijk ) and χ tumor (t, P ijk ) relate to the occurrences of an endothelia cell or a melanoma cell at P ijk . If a related cell is located at P ijk , the value of the function χ equals to 1; otherwise 0. Pe g is the vessel permeability for glucose. U g represents the glucose uptake rate of melanoma cell. D g represents the natural decay rate of glucose. The melanoma cell secretes VEGF to induce angiogenesis for the delivery of nutrients. VEGF diffuses in the surrounding tissue and is also consumed by the endothelial cells. This process is described by the following equation: Where V ijk (t + 1) is the VEGF concentration at the location P ijk in the (t + 1) time step, λ v is the diffusion constant of VEGF. V l ijk t ð Þ; l ¼ 1; 2; …; 6 are the VEGF concentrations of P ijk 's at its six immediate neighbors in the current time step. Se v is the secretion rate for VEGF. Pe v represents the vessel permeability rate of VEGF. D v represents the natural decay rate of VEGF. There are two drugs involved in our model. One is Doxorubicin (Dox) [25], which directly kills the tumor cells. The other is Sunitinib [26], which inhibits the growth of endothelial cells by preventing its receptor from receiving VEGF secreted from fast growing melanoma cells. After the drug is injected into the blood vessels, it is delivered through the vasculature and diffusing into the surrounding tissue. Finally, it is taken by the tumor cells and the endothelial cells. 
We model this process with the following equation: where DR ijk (t + 1) is the drug (Dox or Sunitinib ) concentration at the location P ijk in the (t + 1) time step, λ d was the diffusion constant of drug. DR l ijk t ð Þ; l ¼ 1; 2; …; 6 are the drug concentrations of P ijk 's at six immediate neighbors in the current time step. Pe d is the vessel permeability for drug. U d represents the drug uptake rate. As discussed before, once an agent (melanoma cell or endothelia cell) has determined its biomechanical phenotype, it will look for a free space to proliferate, migrate, or become quiescent. Each living melanoma cell chooses the "best" location to proliferate or migrate by the following rules: 1) Since the tumor cell always looks for a place with more nutrition to migrate to or to deliver its offspring to, we use the mean (M g ) and the standard deviation (σ g ) of the glucose concentrations on the place and its Moore neighbors [27] to locate candidate locations. Here, G(P l ijk ) represents the glucose concentration of the lth Moore neighbors of the current site. If G(P l ijk ) − M g > 3σ g, , we consider it as an abnormally high nutrition location for a tumor cell to migrate to or deliver its offspring to. 2) If G(P l ijk ) − M g ≤ 3σ g , the model needs to evaluate all candidate locations nearby. All candidate locations were ranked through Equation 8. where A l mglucose is the average glucose concentration of the lth candidate site and its Moore neighbors of this site. A l mDox is the average Dox drug concentration of the lth candidate site and its Moore neighbors of this site. w 3 is the regulator factor. The tumor cell always prefers a location that has a high nutrition concentration (A l mglucose ), a low drug concentration (A l mDox ), and few neighborhoods (V l ). The preference of neighborhoods (V l ) is denoted by Equation 9 Ranks of candidates were normalized as Equation 10. Normalized ranks formed the scale as Equation 11 S ¼ S l : S is an ordered set of S l . Each S l is a region in the [0,1] and relates to the lth candidate site. The die casting generates a random valued ∈ [0, 1). If d falls in S l , the candidate location relates to theR l will be chosen as the next migration or proliferation stop. 3) If no space is available, the cell will become reversible quiescent. Tissue level The starving melanoma cells secrete VEGF to induce angiogenesis and the induced vasculature transports nutrient for the tumor growth in the tissue level. Here, we employ the motion of the tip individual endothelial cell ("EC agent") to represent vasculature progression. The algorithm for angiogenesis ( Figure 3) is described as follows: 1) At each time step, each EC agent evaluates the VEGF concentration in its surrounding tissue. If there is no VEGF, the EC agent becomes quiescent. 2) Degeneration: The purpose of the drug Sunitinib is to inhibit tip endothelia cell's ability to receive the VEGF signal as well as increase the apoptosis rate of tip endothelia cell. The threshold of a tip endothelial cell's apoptosis rate (Apop endorate ) is computed by Equation 12. where λ 3 is the normal death rate of the endothelia cell and λ 4 denotes the impact of the Sunitinib which is described by Equation 13. where A sSuni and A sVEGF denote the average Sunitinib drug and VEGF concentrations, respectively, on the current site and its Von Neumann neighbors. w 4 ,w 5 are the regulatory factors. At each time step, a uniformly random number is generated by the die function. 
If it is less than the apoptosis threshold, the endothelia cell becomes apoptotic and its parent cell is set as the tip endothelia cell. Progression (migration): The living tip endothelia cell will proliferate or branch. The tip cell usually looks for the location with higher VEGF to branch to or proliferate to. The mean value m svegf and the standard deviation σ svegf of the VEGF concentration on the cell's current site and its von Neumann neighbors are used to determine behavior of the tip endothelia cell P(i,j,k). Here, V(P l ijk ) represents the VEGF concentration of the lth von Neumann neighbor of the current site. If V(P l ijk ) − m svegf > 3σ svegf , we consider the VEGF is so strong that the blood vessel will directly grow toward this direction. If there are more than one candidate directions that meet the condition, the blood vessel will randomly select a direction to grow toward. If the V(P l ijk ) − m svegf ≤ 3σ g , the blood vessel tended to search valid spaces to branch. 3) Branching: For each EC agent, which tends to branch, their Moore neighbors are employed as candidate locations and ranked by Equation 14. where A l mvegf and A l mSuni are the average VEGF and average Sunitinib drug concentrations of the lth candidate site and its Moore neighbors of this site. w 6 is the regulator factor. The endothelia cell always moves to a location with high VEGF, low drug concentration, and low crowdedness (V l ) as described in Equation 9. All R lendo were normalized by Equation 10. All normalized ranks were incorporated to form a scale S as specified by Equation 10. The die casting generates two random value sd 1 ∈ [0, 1), d 2 ∈ [0, 1). If d 1 , d 2 fall in S l1 , S l2 , the candidate location which relate to theR l1 ,R l2 will be chosen as the branching sites. If both d 1 , d 2 fall in the same region, the algorithm will repeat the die casting process. 4) If no space is available, cells would remain in a reversible quiescent state and try again in the next round. This multi-scale agent based melanoma cancer model with angiogenesis is summarized as follows (Figure 4). At the intracellular level, it employs exponential functions (Equations, 1-4) to describe the phenotypic (migration, proliferation, or apoptosis) switch of the cancer and endothelia cells. At the intercellular level, a set of reaction-diffusion equations (Equations, 5-7) is employed to describe the spatial concentration changes of glucose, VEGF, and drugs. Cancer cells compete for the best location in the 3D extracellular matrix in order to migrate or proliferate depending on the gradient of glucose, drugs, and cell density (Equations, [8][9][10][11]. At the tissue level, the spatial concentration distributions of VEGF and drug concentrations play an important role to impact the tip endothelial cells' migration and sprout branching (Equations, [12][13][14]. In turn, the dynamic vasculature at the tissue level remodels the tumor microenvironment by changing the important factors (spatial concentration distributions of glucose and drugs) in the intracellular level. And the behaviors of melanoma cells (secretion of cytokines, proliferation, migration, or apoptosis) are greatly influenced by these changes at the intracellular level. The parameters of the model are listed in Table 1. Results We have implemented the above model in the VC++ programming environment. It includes a 3D melanoma-angiogenesis interaction model and its related drug combination treatment. 
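To give a concrete feel for the update cycle summarized above, the following is a heavily simplified Python sketch of one pass through the three levels: explicit lattice diffusion of the glucose and VEGF fields, the melanoma phenotype switch, and roulette-wheel ("die casting") selection of a neighbouring site. It is not the VC++ implementation; the rate forms stand in loosely for Equations 1-11, and every parameter value, boundary condition, and helper name is an illustrative assumption.

```python
# Highly simplified sketch of the update cycle described above. None of the
# paper's Equations 1-14 are reproduced exactly; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 20                                   # toy lattice (the paper uses 100^3)
glucose = np.full((N, N, N), 1.0)
vegf = np.zeros((N, N, N))
lam_g, uptake, decay = 0.1, 0.02, 0.001  # assumed constants, reused for both fields

def diffuse(field, lam):
    """One explicit diffusion step over the six Von Neumann neighbours
    (periodic boundaries are used here purely for brevity)."""
    nb_sum = sum(np.roll(field, s, axis=a) for a in range(3) for s in (-1, 1))
    return field + lam * (nb_sum - 6.0 * field)

def apoptosis_prob(avg_dox, avg_glucose, dt=2.0, lam0=0.001, w1=0.5, w2=0.5):
    """Assumed exponential form for the phenotype switch: more drug and less
    glucose raise the effective death rate (loosely stands in for Eqs. 1-2)."""
    lam1 = w1 * avg_dox + w2 / (1.0 + avg_glucose)
    return 1.0 - np.exp(-(lam0 + lam1) * dt)

def pick_site(candidates, scores):
    """Roulette-wheel ('die casting') selection over ranked candidate sites,
    loosely standing in for Eqs. 8-11: higher score -> larger slice of [0, 1)."""
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()
    return candidates[rng.choice(len(candidates), p=p)]

cells = [(N // 2, N // 2, N // 2)]       # toy tumour seed
for step in range(10):                   # a few 2-hour steps
    glucose = diffuse(glucose, lam_g) * (1.0 - decay)
    vegf = diffuse(vegf, lam_g)
    new_cells = []
    for (i, j, k) in cells:
        glucose[i, j, k] = max(glucose[i, j, k] - uptake, 0.0)
        if glucose[i, j, k] < 0.2:
            vegf[i, j, k] += 0.05        # starving cell secretes VEGF
        if rng.random() < apoptosis_prob(0.0, glucose[i, j, k]):
            continue                     # cell dies
        nbrs = [((i + di) % N, (j + dj) % N, (k + dk) % N)
                for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
        scores = [glucose[n] + 1e-6 for n in nbrs]
        new_cells.append((i, j, k))
        new_cells.append(pick_site(nbrs, scores))   # proliferate into chosen site
    cells = new_cells
print("cells after 10 steps:", len(cells))
```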
We can employ this tool to predict the responses of melanoma and its related angiogenesis under drug combination treatment. Volumetric growth dynamics We measured the tumor system's (total) volume by counting the number of the lattice sites occupied by a tumor cell regardless of its phenotype, hence lumping together both proliferative and migratory expansion. Figure 5 shows the increase in melanoma system volume over time for 20 simulation runs. The volume increase is not smooth. The volume quickly increased and slowed down during time interval between 50 h and 150 h. After that, the volume kept increasing rapidly. The simulated data is presented by a red line of Figure 5. As reported by Khodadoust et al. [35], there are two experimental data sets related to human melanoma, which were estimated at the indicated times by manual counting. The mean value of these two experimental data sets is the blue line of Figure 5, which has similar growth trend as the simulated data (red line). Tissue scale behavior Figure 6 shows the three-dimensional snapshots of the tumor at time points 0, 40, 80, 120, 160 and 200. Note that different colors denote various tumor cell states: active (both proliferative and migratory) (yellow), quiescent (blue), dead (grey). The endothelial cells are red. Each time step is 2 hours. Figure 6 shows that melanoma cells tend to proliferate or migrate to the locations near the vessels, where the glucose concentration level is the highest. In the beginning, the tumor was comprised of active cells and some quiescent cells. The blood vessels were far away from the tumor. At around 40 hours, Figure 5 The tumor system volume (y-axis) over time (x-axis) derived from simulations (red line, for t = 0-400 hours) and from in vitro experimental observations (blue line, for t = 0-300 hours). some dead cells appeared in the center of tumor and the blood vessels moved to the tumor. From 80 to 200 hours, not only did the number of both dead and active cells keep increasing, but also did the number of blood vessels that were approaching the tumor. Furthermore, more and more quiescent cells near the blood vessels switch the phenotype back to migration or proliferation. Finally, the blood vessels become a tree bunching microvasculature and were much denser near the tumor. Figure 7 shows the mean value and standard deviation of the numbers of different types of melanoma cells and endothelia cells with respect to time with 20 simulation runs. Also, Figure 8 presents the proliferation rate of tumor cells. Figure 7a shows that the number of active cells increased rapidly from 0 to 50 hours and decreased in [50 h, 150 h]. After that, the number of active cells increased monotonically with time until 250 hours. The increasing trend of active cells became mild after 250 hours. The number of apoptotic cells increased abruptly at around 50 hours and kept increasing until the tumor microvasculature developed around 150 hours. There was a significant decrease of apoptotic cells at around 150 hours. After that, the number of apoptotic cells kept to a relatively flat, monotonically increasing rate until 400 hours. Figure 7b shows that the number of quiescent melanoma cells kept increasing from 0 to 400 hours with a similar curve as the total melanoma cells. Figure 7c shows the comparison between the dynamics of the simulated and experimental number of the endothelial cells under angiogenesis condition [36]. 
The 200 hours' experimental data (blue line) shows similar growth trend as the simulated data (red line). Combined drug effects on melanoma treatment We employed the drug Dox to directly increase the apoptosis rate of melanoma as well as used Sunitinib to treat melanoma by interrupting the VEGF signal communications t = 0 hour t = 40 hour t = 80 hour t = 120 hour t = 160 hour t = 200 hour between the melanoma and its related angiogenesis. Figure 9 shows 3D snapshots of the tumor system at time steps 40, 80, 120, 200 hours under Dox, Sunitinib, and the combined drugs (Dox and Sunitinib) treatments, respectively. The different colors represent cell types in the same manner as Figure 6. Figure 9 shows that Sunitinib is more effective than Dox at decreasing the tumor expansion, and the effect of combined drugs treatment is the best. Cell dynamics change under drug treatment Figure 10 shows the average cell dynamics under different drug treatments in 20 simulations. Figure 10a shows a sharp drop of total melanoma cells at around 100 hours regardless of what the drug treatment was. Figure 10b shows that the number of the active melanoma cells started to drop around t = 50 hours which is earlier than the total tumor cells. In general, Figure 10 demonstrates that single Dox/Sunitinib therapy cannot kill all the tumor cells like the combined drugs therapy during the simulation. Drug effect test There were 12 doses for each drug, with 1 being the control and having 12 levels ranging from 0.1× to 10× in geometric sequence relative to the original dose. Then, we explored the drug efficacies for various combinations of the two drugs. We used the total melanoma cell death rate as the indicator of drug efficacy. Dramatic synergistic effects of the Dox and Sunitinib were observed if the total elimination of melanoma cells (E100 at time point 400 hr) was used as the criterion for Loewe combination index [37][38][39]. Figure 11a shows the whole effect of drug combinations, and the red line indicates that the number of the total tumor cells is lower than the initial number 100. The metric level of the color bar in Figure 11 is 1:10. That means that the value 10 indicates that the number of cancer cells is 100. Since Figure 11a cannot accurately describe the details of drug effect when the number of the total tumor cells is less than 100, Figure 11b is used to describe the dramatic synergistic effects of the Dox and Sunitinib when both drug doses are greater than 6. In Figure 11b, the blue line indicates the E50 isobole (the total number of the melanoma cells is lower than 50), and the red line marks the E100 isobole (the total elimination of melanoma cells). The yellow dashed line AB in Figure 11b represents the Loewe additivity criteria. The two end points A, B represent the concentrations of single Dox or Sunitinib with respect to the total elimination of melanoma cells. According to the Loewe additivity concept [37][38][39], if the E100 isobole is lower than the Loewe additivity line AB, the combination of the drugs has a synergistic effect; if the E100 isobole is higher than the Loewe additivity line AB, the combination of the drugs has an antagonistic effect; otherwise we say that the combination of the drugs has an additive effect. 
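Stated compactly, the Loewe criterion used above reduces to a combination index CI = d1/D1 + d2/D2, where (d1, d2) is a combination dose pair reaching the target effect (here the E100 isobole) and D1, D2 are the single-drug doses reaching the same effect; CI < 1 indicates synergy, CI of approximately 1 additivity, and CI > 1 antagonism. A minimal sketch follows; the dose values are placeholders rather than the doses simulated in this study.

```python
# Minimal sketch of the Loewe additivity check; the dose values are placeholders.
def loewe_index(d1, d2, D1, D2):
    """d1, d2: doses of the two drugs in a combination reaching the target effect
    (e.g. the E100 isobole); D1, D2: single-drug doses reaching the same effect."""
    return d1 / D1 + d2 / D2

ci = loewe_index(d1=3.0, d2=4.0, D1=10.0, D2=10.0)   # illustrative numbers
if ci < 1.0:
    verdict = "synergistic (isobole below the additivity line)"
elif ci > 1.0:
    verdict = "antagonistic (isobole above the additivity line)"
else:
    verdict = "additive"
print(f"CI = {ci:.2f}: {verdict}")
```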
Parameter sensitivity analysis To evaluate the impact of the parameter values on the behavior of the melanomaangiogenesis system, we analyzed the sensitivity of our model to the following parameters: We varied each parameter individually over the ranges shown in Table 1, while fixing other parameters constant at their base values. The ranges of the parameters are from the previous literatures listed in Table 1. Limited by the relatively high computing cost of the ABM, we did 10 simulations for each set of parameters. To assess the influence of the parameters, we calculated the Spearman rank-order correlation [40] of each parameter versus the number of total melanoma cells, the number of active melanoma cells, and the number of endothelia cells. Table 2 shows the Spearman rank-order correlations ρ, and p-values for each parameter. Also, it explores three parameters closely related to the endothelial cells number, total, and active melanoma cells number: the permeability for glucose (Pe g ), the diffusion constant of drug (λ d ), and the drug uptake rate, U d . The secretion rate for VEGF (Se v ) closely correlates with endothelial cells number and active melanoma cells number, and it turns out that VEGF plays important role in the tumor growth and angiogenesis. However, the natural decay rate of glucose (D g ) is only closely related to active melanoma. Discussion and conclusions This study proposed a 3D multi-scale agent-based cancer model by integrating a novel angiogenesis module into a tumor growth module. The major aims of the research are investigating the relationship between the melanoma-induced angiogenesis, melanoma development, as well as exploring the optimum synergistic drug combinations to treat melanoma cancer. The inadequate nutrition in the microenvironment will decrease the tumor proliferation rate ( Figure 8) and make the melanoma cell undergo apoptosis ( Figure 5 and Figure 7). Starving melanoma cells will release VEGF to induce the angiogenesis for nutrition. In turn, the angiogenesis will promote the tumor growth ( Figure 6). This study developed such a platform that can estimate optimum drug dose combinations for tumor treatment. Figure 9 and Figure 10 intuitively showed that the anti-angiogenesis drug (Sunitinib) has much better effect than the tumor-specific cytotoxic drug (Dox), which directly kills the melanoma cells. Moreover, the synergistic use of both Sunitinib and Dox can significantly decrease melanoma progression and inhibit tumor-induced angiogenesis, since the drug combination therapy can not only kill the cancer cells by increasing the apoptosis rate, but also inhibit the cancer cells' ability to obtain the enough nutrients by interrupting communication between the cancer cells and the vasculature. The Drug effect test based on the Loewe drug combination analysis also strongly suggested that combination of cytotoxic drug (Dox) and the angiogenesis inhibitor (Sunitinib) are of high clinical potentials. Classical Loewe combination analyses often use isobole at effect level 50 (E50) as standard index, which is convenient to monitor in animal models and clinical chemotherapy evaluations. Due to the advantages of simulations, the E100 isobole was employed to evaluate the performance of drug combinations in killing all melanoma cells in a given treatment time. Simulations of the combination effects indicated the drug combinations could successfully kill almost all melanoma cells in the given treatment time (the red lines in Figure 11a and b). 
When it was evaluated by the E100 isobole against the melanoma cells, the simulations also suggested strong synergistic effects (the red line in Figure 11b is lower than the Loewe additivity criteria dashed line). As shown in Figure 11b, without the aid of Sunitinib, Dox alone cannot extinguish all melanoma cells and thus enabling the disease to relapse soon. Taken together, angiogenesis is highly targetable during melanoma treatment, and inhibitors of the interactions between cancer initiating cells and their related angiogenesis are promising co-drugs for traditional cytotoxic agents. The sensitivity analysis not only explored the high correlation between simulation outcomes and blood vessel delivery rates of glucose and drugs, but also demonstrated that interrupting communication between the tumor and its related angiogenesis can significantly amplify the effect of the treatment. This is the first time a 3D multi-scale agent-based cancer model was employed to describe the communication between the melanoma and the vasculature around the tumor and investigate how to employ anti-angiogenesis drugs to cure melanoma by breaking this communication. This study also indicated that angiogenesis plays a very important role in the transportation of nutrients for the tumor growth. Drug synergism analysis indicated that inhibiting communications between melanoma cells and their related vasculature could increase the efficacy of the treatment, decrease the tumor progression, and finally reduce the cancer cell survival rate. In the distant future, we are going to develop a predictable cancer model by considering more realistic biological and physical data and features, such as blood flow, the influence of focal adhesion kinases, complicated signaling pathway, and the oxygen pressure [41].
Study on the Energy Dependence of the Radii of Jets by the HBT Correlation Method in e+e- collisions The energy dependence of the radii size of jets are studied in detail by the HBT correlation method using Monte Carlo Simulation generator Jetset7.4 to produce 40,000,000 events of e$^+$e$^-$ collisions at $\sqrt s =30$, 50, 70, 91.2, 110, 130, 150 and 170 GeV. The radii of jets are measured using the HBT correlation method with the indistinguishability of identical final state pions. It is found that the average radii of quark-jets and gluon-jets are independent of the c.m. energy of e$^+$e$^-$ collisions. The average radius of quark-jets are obviously larger than that of gluon-jets. The invariable average radii of quark-jets and gluon-jets in e$^+$e$^-$ collisions are obtained at the end of parton evolvement. Introduction It was well-known that Hanbury-Brown and Twis had brought forward HBT correlation in the process of measuring the angular radii of the emitting sources in 1956 1,2,3 . Later, the HBT correlation was widely used in subatom studies 4,5,6 . The HBT correlation method has been an important way to measure the size of the emitting source in high energy collisions. Due to "color confinement", we cannot observe free quarks and gluons and cannot yet measure the size of them directly. However, the HBT correlation method offers a viable indirectly method, and applying this method into the high energy collisions we can obtain some characteristics of strong interaction for quarks and gluons. Historically, the discovery in 1975 of a two-jet structure 7 in e + e − collisions at center of mass (c.m.) energy ≥ 6 GeV had been taken as an experimental confirmation of the parton model 8 , and the observation in 1979 of a third jet in e + e − collisions at 17 − 30 GeV had been recognized as the first experimental evidence of the gluon 9,10,11,12 . In the early 1990s, the production of jets in hadron-hadron collisions was widely studied 13,14,15 and had been considered as an efficient way to obtain the strong coupling constant α s 16 . How to distinguish jets and the study on jets are also very important, in relative high energy ion collisions 17,18 . Based on this idea we can get information about quarks and strong interactions from the study of jets by using the HBT correlation method. In e + e − collisions 19 , firstly, the e + e − pair is annihilated into a virtual γ * /Z 0 resonance. The virtual γ * /Z 0 , in turn, decays into a qq pair. Then the initial qq may radiate other gluon and qq pairs, giving rise to a cascade process. This stage is responsible for the formation of hadronic jets. Further, the unstable hadrons decay into experimentally observable particles (mostly pions). It has been found that the majority of e + e − collision events have a 2-jet structure. If an initial quark or anti-quark emits a hard gluon with sufficiently large transverse momentum, a 3-jet structure can be formed. Thus, the source of a single jet is from a single initial quark (or anti-quark) or gluon. Although the quark and gluon, before being observed, have been fragmented into the final state hadrons, the final state particles inside the jets still carry a lot of information about the parent quark and gluon. The quark and gluon are two different types of particle. For example, the quark is a fermion with colour charge equal to 4/3, while the gluon is a boson, carrying colour charge 3. 
These differences will certainly influence their fragmentation, resulting in different properties of quark-jets and gluon-jets.Some characteristics of quarks (anti-quarks) or gluons is reflected by the geometrical characteristics of jets. So, the study of the geometrical characteristics of the jets is helpful in the understanding of the perturbative/nonperturbative properties of QCD. In the ref. 20 , the geometrical characters of quark-jets and gluon-jet have been studied with the HBT correlation method using MC generator producing quarkjets and gluon-jets in 3-jet events of e + e − collisions at √ s = 91.2GeV. However, do the size of quark-jets and gluon-jets depend on the c.m. energy of e + e − collisions producing these jets? Are the size of quark-jets measuring in 3-and 2-jets events of e + e − collisions the same? Our work will focus on these questions. The paper is organized as follows: In Sec. II, we briefly introduce the method of identification jets and the HBT correlation function. In Sec. III, the average radius of quark-jets and gluon-jets in 2-jet Events are calculated. In Sec. IV, the average radius of quark-jets and gluon-jets in 3-jet Events are calculated. A short summary is the content of Sec. V. The method of identification jets and HBT Correlation Function In our work, the data of e + e − collision events are produced by Monte Carlo Simulation generator Jetset7.4. The 2-jet events and the 3-jet events are selected using the Durham jet algorithm 21 . In these methods, there is a cutting parameter y cut , which, in the case of the Durham algorithm, is related to the relative transverse momentum k t as 22 where √ s is the c.m. energy of the collision. From the experimental point of view, k t can be taken as the transition scale between the hard and soft processes. Its value depends on the definition of "jet". The single quark-jet and single gluon-jet are identified from 3-jet events using the angular rule 23 .We assume that the three jets in a 3-jet event come from quark, antiquark and gluon, respectively. Because of energy-momentum conservation, the three jets in one event must lie in a plane, which is shown in Fig.1, where P i (i = 1, 2, 3) is the total momentum of all particles in jet-i. The jets are tagged using the angles between them: where the largest angle, θ 3 , faces the gluon jet; the smallest angle, θ 1 , faces the jet formed by an initial quark without emitting a hard gluon; and the middle one, θ 2 , faces the mother-quark-jet. According to the requirement of momentum conservation, the three jets should be in one plane, and we add the condition: It should be noticed that the angle opposite to the mother-quark-jet is very close to that opposite to the gluon-jet, i.e. θ 2 ≈ θ 3 , so they are easily confused. Therefore, we demand a cut condition: θ 3 − θ 2 ≥ θ cut , here θ cut = 10 • . This cut rejects about 12% events 23 . The HBT correlation, also called the Bose-Einstein correlation, results from the indistinguishability of identical final state particles. Most of the final state particles produced in e + e − collisions are π mesons, so we choose π mesons (π + , π − , π 0 ) as the identical particles to study. If P (k 1 , k 2 ) is defined as the probability of observing two identical pions at the same time with momentum k 1 and k 2 , and P (k 1 ) and P (k 2 ) are defined the probability of observing pions with the momentum k 1 and k 2 , respectively. 
The correlation function C 2 (k 1 , k 2 ) is defined as: If the equivalent density function of the source is parameterized to Gaussian form, we have: If only the spatial part of the source is considered and assume that the distribution of the source is isotropic, the correlation function can be simplified as: According to the definition, jets do not possess spherical symmetry, but are axially symmetric instead. So the parameter R characterizing the geometrical properties of the jets is actually the average radius of jets. In this paper, we study the average radius R of the pion source only through the spatial distribution function of the source, which is taken as spherically symmetric. Then, the information about the average size of the emitting source for the final state particles can be obtained. The three-dimensional momentum interval region chosen is Q = 0 ∼ 2.5 GeV/c, and is equally divided into 50 cells. We use Monte Carlo simulation generator Jetset7.4 to produce e + e − collision events both with and without HBT correlation, and then select out suitable events for study. Identical π mesons are selected from the final state particles to make pion pairs after any two π mesons are grouped with each other. The three-dimensional momentum difference of the π meson pairs are calculated. The correlation function (also called correlation coefficient) with statistical method is: where F c (Q) is the three-dimensional distribution function of the identical particles with HBT correlation inside the jets and F (Q) is the three-dimensional distribution function of the identical particles without HBT correlation inside the jets. Since the correlation among identical particles with large momentum difference is quite weak, the distribution here with the HBT correlation should be almost the same as the distribution without the HBT correlation. Thus the C 2 (Q) can be multiplied by a coefficient to make the value of it equal to 1. Thus, using Eq(5) to calculate the average radius R, the Eq(6) can be expressed as: where η is the value of the correlation function C 2 (Q) at a large momentum interval. The Measurement of source radii inside jets of 2-jet events We use Monte Carlo Simulation generator Jetset7.4 to produce 40,000,000 events of e + e − collisions both with and without HBT correlation, which the c.m. energies are √ s = 30, 50, 70, 91.2, 110, 130, 150, 170 GeV,respectively. The final state π mesons (π + , π − , π 0 ) as the identical particles are chosen from event samples to study. We just select out the final state identical π mesons emitted from vertex at origin, because some of them are secondary emitted or even multistage emitted. Thus, the calculated result is able to reflect the characteristics of the source of jets properly. The 2-jet events are selected using the Durham jet algorithm, and the cutting parameters at different c.m. energies are selected as Table 1. The 2-jet event is constituted by the two jets which are formed by the fragmentation of the back-to-back qq. And the two jets are called quark jet-1 and quark jet-2, respectively. Due to the back-to-back symmetry of the two jets formed by the fragmentation of the qq, the distribution patterns of the two jets should be totally the same. So, we just need to study one of the two jets. We will choose quark jet-1 which is referred to as quark-jet. The correlation function is produced both with and without HBT correlations when the three-dimensional momentum interval region of the identical π mesons is chosen as Q = 0 ∼ 2.5 GeV. 
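The procedure just described can be sketched numerically as follows: histogram Q for identical-pion pairs from samples with and without the Bose-Einstein correlation, take the bin-wise ratio as C2(Q), and fit it with the standard Gaussian parameterization C2(Q) = η[1 + λ exp(−R²Q²)] to extract the average radius R. The exact expressions of Eqs. (5)-(7) are not reproduced here, and the toy pair sample below merely stands in for the Jetset-generated events; it is a sketch under those assumptions.

```python
# Sketch assuming the standard Gaussian form C2(Q) = eta*(1 + lam*exp(-R^2 Q^2)).
# The toy pairs stand in for Jetset pairs with/without Bose-Einstein correlation.
# Q is in GeV/c, so the fitted R is in (GeV/c)^-1 and is converted to fm with
# hbar*c ~ 0.1973 GeV*fm.
import numpy as np
from scipy.optimize import curve_fit

def c2_model(Q, eta, lam, R):
    return eta * (1.0 + lam * np.exp(-(R * Q) ** 2))

rng = np.random.default_rng(2)
edges = np.linspace(0.0, 2.5, 51)                   # 50 cells over Q = 0-2.5 GeV/c
centers = 0.5 * (edges[:-1] + edges[1:])

# toy "reference" pairs (no correlation) and correlated pairs obtained by re-weighting
q_ref = rng.uniform(0.0, 2.5, 200_000)
weights = c2_model(q_ref, eta=1.0, lam=0.6, R=3.5)  # toy values to be recovered
F_c, _ = np.histogram(q_ref, bins=edges, weights=weights)   # with correlation
F, _ = np.histogram(q_ref, bins=edges)                      # without correlation

C2 = F_c / F
popt, _ = curve_fit(c2_model, centers, C2, p0=(1.0, 0.5, 2.0))
R_fit = popt[2]
print(f"fitted R = {R_fit:.2f} (GeV/c)^-1 = {R_fit * 0.1973:.2f} fm")
```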
According to equation (6) we calculate the value of the correlation function C 2 (Q) of π (π + , π − and π 0 ) mesons inside quark-jets from 2-jet events for all the 8 c.m. energies. Then, the average radii size of emitting source of pion meson inside single jet can be obtained through fitting the correlation functions C 2 (Q) using Eq.(7) for π + , π − and π 0 , as shown in Table 2,respectively. As an example, the results for 3 c.m. energies are shown in Fig.2 It is clear to see from Fig.2 that the distributions of correlation functions of the π meson inside quark-jets at different c.m. energies are similar. Especially, the distributions of particle both for π + and π − mesons at the same energies are in superposition. However, the distributions of identical particles for π 0 mesons is different to π + and π − mesons owing to the electromagnetic interaction among π + and π − mesons in the process of hadronization. The mean radii R of jets for all various c.m. energies are listed in Table 2 at the last line. For the convenience of comparison, we draw the source radii of the three kinds of π mesons inside quark-jets for all energies in Fig.3. It can be seen from Table 2 or Fig.3 that the values of the radii R of jets for one meson from different c.m. energies are nearly the same within the error range; the radii R of jets for the same c.m. energy both π + and π − mesons are approximately the same within the error range, i.e. their meansR q,π + = 0.693 ± 0.003 fm,R q,π − = 0.690 ± 0.004 fm; But there are some difference between the π + or π − mesons and π 0 mesons for theR q,π 0 = 0.6000 ± 0.006 fm. This is due to the electromagnetic interaction among π + and π − mesons in the process of hadronization. So, the results for π 0 mesons are more authentic. The Measurement of source radii inside jets of 3-jet events In the same way, we use Monte Carlo Simulation generator Jetset7.4 to produce 40,000,000 events of e + e − collisions both with and without HBT correlation for 8 c.m. energies. And we choose the final state π mesons (π + , π − , π 0 ) emitted from the vertex at origin as the identical particles to study. The 3-jet events are selected using the Durham jet algorithm, which the cutting parameters for 8 energies are listed in Table 3. After the quark-jet, mother quark-jet and gluon-jet are identified from 3-jet events using the angular rule, the correlation function is produced both for the case with and without HBT correlation when the three-dimensional momentum interval region of the identical π mesons is chosen as Q = 0 ∼ 2.5 GeV/c. We calculate the value of C 2 (Q) according to formula (6), then the average radius size of emitting source of pion inside single jet can be obtained through fitting the correlation functions C 2 (Q) using Eq.(7) with π + , π − and π 0 meson, respectively. The results are listed in Table 4. As an example, the results for 3 c.m. energies are shown in Fig.4. It is easy to come to the conclusion from Fig.4 that the three type of π meson correlation functions for quark-jets, mother quark-jets and gluon-jets are all similar. And the distributions of different identical particles, for π + and π − mesons, formed at the same energy are nearly in superposition. However, the distributions of identical particles for π 0 mesons are different to π + and π − mesons. For the convenient of comparison, we draw the average source radii of the three kinds of jets at different c.m. energies for the three types of π mesons in Fig.5. 
It can be seen from Table 4 and Fig.5 that: the values of radii R of quark-jets or mother quark-jets and or gluon-jets at different c.m. energies are all nearly the same within the error range, respectively; the average radii of quark-jets are obviously larger than that of gluon-jets, and the average radii of quark-jets are also larger than that of mother quark-jets. The mean radius of quark-jets for π + mesons is 0.71 ± 0.02 fm, that of mother quark-jets is 0.67 ± 0.01 fm, and that of gluon-jets is 0.59 ± 0.01 fm. The mean radius of quark-jets for π 0 mesons is 0.61 ± 0.02 fm, that of mother quark-jets is 0.55 ± 0.02 fm, and that of gluon-jets is 0.43 ± 0.01 fm. The mean radius of quark-jets for π − mesons is 0.74 ± 0.01 fm, that of mother quark-jets is 0.68 ± 0.02 fm, and that of gluon-jets is 0.60 ± 0.01 fm,respectively. GeV. The 2-jet events and 3-jet events are selected using the Durham jet algorithm. The geometrical characters of quark-jets and gluon-jets are studied in detail using the HBT correlation method. The conclusions are as follows: • The radii of quark-jets or gluon-jets measured at different c.m. energies are approximately the same within the error range, which shows that the radii of quarkjets and gluon-jets reflect some intrinsic properties of quarks and gluons. • The values of the mean radii of quark-jets, mother quark-jets and gluon-jets are measured by calculating the final state identical π mesons using the HBT correlation method are obtained, shown in Table 5. • The mean radii of quark-jets measured is obviously larger than that of gluon, which indicates that the size of quark is larger than that of gluon. However, the mean radii of mother quark-jets measured is less than that of quark-jets. This may be due to the mixture of a small amount of gluon-jets and mother-quarkjets in the process of measurement which makes the radii of mother quark-jets measured smaller. • The results for π 0 mesons are more authentic than for π + and π − mesons for there are no electromagnetic interactions among π 0 mesons in the process of hadronization.
Optimal Sizing of Energy Storage System for Operation of Wind Farms Considering Grid-Code Constraints Transmission system operators impose several grid-code constraints on large-scale wind farms to ensure power system stability. These constraints may reduce the net profit of the wind farm operators due to their inability to sell all the power. The violation of these constraints also results in an imposition of penalties on the wind farm operators. Therefore, an operation strategy is developed in this study for optimizing the operation of wind farms using an energy storage system. This facilitates wind farms in fulfilling all the grid-code constraints imposed by the transmission system operators. Specifically, the limited power constraint and the reserve power constraint are considered in this study. In addition, an optimization algorithm is developed for optimal sizing of the energy storage system, which reduces the total operation and investment costs of wind farms. All parameters affecting the size of the energy storage systems are also analyzed in detail. This analysis allows the wind farm operators to find out the optimal size of the energy storage systems considering grid-code constraints and the local information of wind farms. Introduction Wind energy is a renewable energy source that has been dramatically exploited in recent years. The Global Wind Energy Council has estimated that the cumulative installed wind power has reached approximately 540 GW in 2017, and this number may reach up to 840 GW by 2022 [1,2]. To convert the wind's kinetic energy into electricity, a huge number of wind turbine generators (WTGs) are installed and grouped to form a wind farm (WF) system. The WF system is operated by an energy management system [3][4][5] to achieve the common objectives for the whole WF system. For instance, an optimization method is proposed in [6] to adjust the set-points of WTGs for maximizing the output power of the entire system. The output power of the WF system usually fluctuates and is highly dependent on the variations in the wind speed. This may not cause any major issue to the operation of the power system with small WF systems; however, WF systems have been recently developing, and they usually have a large capacity with a vast number of WTGs. A small change in wind speed can cause large fluctuations in the output power of WF systems, which can cause several difficulties in the operation of the power system, even causing instability of the whole system [7,8]. Therefore, transmission system operators (TSOs) often impose different requirements for the operation of large WF systems to ensure the stability of the power system; so-called grid-code constraints [9][10][11]. WF systems need to meet these requirements during the connection time to the power system. However, wind energy is variable and cannot be predicted accurately. There is always an uncertainty factor in determining the output power of the WF system. To mitigate the effect of the uncertainty in wind speed, energy storage systems (ESSs) are often installed in WF systems [12,13]. An optimal structure of a hybrid photovoltaic and wind power supply is presented in [14], an off-grid mode using ESSs to improve power supply reliability. Similarly, an optimal structure of a WF presented in [15] uses kinetic energy storage to enhance the reliability of the power supply. 
ESSs can also play a role of an energy buffer to maintain the power balance between load demand and the output power of the WFs [16], and also to maintain frequency control [17,18]. The authors in [19,20] have proposed an optimization dispatching strategy to eliminate the forecasting error and to maintain the output power smoothing under normal operation. The authors in [21] proposed an optimal control strategy for ESSs to reduce wind power curtailment in a WF. The authors in [22] introduced a management policy for ESS installation to support the WF system in grid-connected mode and maximize the economic benefit for this integrated system. Most of the existing studies available in literature only use ESSs to support WFs in power balancing [16], frequency control [17,18], power fluctuations smoothing [19,20], reducing wind power curtailment [21], or increasing economic benefits [22]. In the operation of WF systems, a violation of grid-code constraints may result in the WF operator being subject to paying a high amount of penalties. The penalties usually come from the power mismatch between the amount of commitment power and the actual output power, which is caused by the uncertainty of the wind speed in WF systems. Furthermore, the requirement of reserve power significantly reduces the amount of output power of the WF system. This is because a large amount of spinning reserve might be required at the WTGs. The ESS can play a role of a reserve capacity source and can potentially reduce the amount of required spinning reserve power and thus, increase the output power of the WF. However, the use of ESSs in supporting WFs to fulfill grid-code constraints has not been considered in existing studies [16][17][18][19][20][21][22]. Therefore, this study proposes an operation strategy to optimize the operation of the WF system using ESSs. The ESSs can support the WF systems in handling different grid-code constraints issued by the TSO, including reserve power and limited power constraints. These constraints ensure that a certain amount of reserve capacity is maintained in the WF system and the injected power from the WF into the power system is always less than or equal to a predetermined limited power. In addition, to reduce the investment costs of ESSs, a mixed-integer linear programming (MILP)-based optimization model is formulated to find out the optimal size of an ESS considering the two different gridcode constraints. The cost function includes the penalty of the power mismatch between the committed power and the actual output power, the investment and operation costs of the ESS, and the profit of selling power to the power system. The total yearly cost of the ESS includes four different cost categories (i) cost of power conversion system, (ii) cost of battery, (iii) cost for the balance of the plant, (iv) annual O&M cost of the ESS. The optimal solution is found by taking a trade-off between the investment and operation costs of the ESS and the profit gained by the WF system. In addition, the effects of different parameters on determining the optimal size of the ESS are analyzed in detail with two constraints from the TSO. WF operators typically collect the entire information of the WF system, including wind speed data, the uncertainty of wind speed, the requirement of the TSO, penalty value, and market price. 
With this detailed information of a WF system, the detailed analysis in this study allows WF operators to easily determine whether or not to install an ESS and find the optimal size of the ESS. Operation of WF System Wind power plays an important role in coping with load growth in a power system and at the same time in cutting capacity from fossil fuel power plants. However, the uncertainty of wind power also causes many difficulties in the operation of the power system. This section presents the operation of WFs considering wind uncertainty and different grid-code constraints. In addition, the effects of ESSs on the operation of WFs are also analyzed in detail. WF System Configuration and Operation A WF usually consists of many WTGs and is installed over a huge area. Therefore, all adjacent WTGs are often grouped to form a cluster. In a WF system, there might be several clusters, and the output power of each cluster is adjusted to help the WF to meet all grid-code constraints. The set-points of each cluster or WTG are controlled by a center controller (i.e., a WF operator). The control scheme is summarized as follows: • Firstly, wind speed information is measured locally at each WTG. The available power is then computed and communicated to the WF operator; • The WF operator calculates the total available power in the WF and determines the optimal amount of output power considering grid-code constraints; • Finally, the set-points of each cluster/WTG are determined and implemented at each WTG unit. In this study, a WF with 20 WTGs is used to evaluate the proposed method and the 20 WTGs are grouped into four clusters, as shown in Figure 1. 
An ESS is used to improve the performance of the WF. In order to reduce the investment and operation costs for the WF system, different optimization algorithms have been investigated to find out the optimal size of the ESSs using the Voltra model [23], the hybrid particle swarm optimization-genetic algorithm [24], or an MILP [25]. In Section 4, the optimal sizing of the ESS is also discussed in various scenarios using an MILP-based model. Grid-Code Constraints Grid-code constraints are the requirements for WF operation that are issued by the TSO at the point of common coupling. The output of the WTGs is adjusted to help the WFs fulfill these requirements. In the Korean power system, grid-code constraints are applied to all renewable energy sources with an installed capacity exceeding 20 MW, and the requirements for renewable energy systems include different aspects [26]. In the operation of WF systems, all grid-code constraints should be satisfied. In this study, we focused on maximizing the active output power of the WF with the ESS. Therefore, only the active power control is considered and analyzed in detail, while the other constraints are assumed to be fulfilled during the WF operation. The Korean grid-code often requires three constraints for active power control, including (i) the absolute production constraint (i.e., limited power mode); (ii) the delta production constraint (i.e., reserve power mode); and (iii) the power gradient constraint [26]. In the absolute production constraint, a predefined limited power is required to avoid overloading. In the delta production constraint, a certain amount of reserve capacity is maintained in the WF to support the power system in emergencies. In the power gradient constraint, ramp-up and ramp-down for the output of active power are bounded by 10% of the rated power per 1 min. In this paper, the three above-mentioned active power constraints are considered for the operation of the WF with the ESS. Figure 2a shows the constraint for the output power of the WF in limited power mode. The actual output power is limited by a predefined value determined by the TSO. It can be seen that if the available power of a WF is less than the limited power, the set-point of the WF is the available power, while if the available power of the WF exceeds the limited power, the WF must shed the surplus power and maintain output power at the limited value. The second grid-code constraint is the requirement of the total reserve power in a WF system. WFs must be able to support the power system in an emergency. Therefore, a certain amount of reserve power is required in the WF system, as shown in Figure 2b. The WF operator needs to adjust the output power of the WTGs to maintain a required amount of reserve power during the grid-connected time of the WF system. This reserve capacity is usually set to a fixed percentage of the available power of the WF. The third grid-code constraint is the requirement of ramp-up/ramp-down for the output power of the WF. This constraint can be easily met by setting the ramp-up and ramp-down of each WTG in the WF system. 
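To make the three active-power requirements concrete, the following is a minimal sketch (not taken from the paper) of how a WF operator could screen a candidate set-point series against them; the function name, parameter values, and the hourly example data are assumptions for illustration only.

```python
# Illustrative check of a wind-farm set-point series against the three active-power
# grid-code constraints described above. All names and numbers are assumptions for
# the example, not values taken from the paper.

def check_grid_code(p_out, p_avail, p_rated, p_limit, reserve_frac, dt_min=60.0):
    """p_out, p_avail: WF output / available power (MW) per interval.
    p_rated: installed WF capacity (MW); p_limit: absolute production limit (MW);
    reserve_frac: required reserve as a fraction of available power;
    dt_min: interval length in minutes."""
    max_ramp = 0.10 * p_rated * dt_min  # 10% of rated power per minute
    violations = []
    for t, (po, pa) in enumerate(zip(p_out, p_avail)):
        if po > p_limit:                                 # absolute production constraint
            violations.append((t, "limited power"))
        if pa - po < reserve_frac * pa:                  # delta production (reserve) constraint
            violations.append((t, "reserve power"))
        if t > 0 and abs(po - p_out[t - 1]) > max_ramp:  # power gradient constraint
            violations.append((t, "power gradient"))
    return violations

# Example: a 200 MW farm, 168 MW limit, 10% reserve requirement, hourly intervals
print(check_grid_code([150, 168, 120], [170, 200, 125], 200, 168, 0.10))
```

Run on the toy data, the check flags the third interval, where the spinning-reserve headroom falls below 10% of the available power.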
Operation of WF with ESS As mentioned earlier, the uncertainty of wind speed can lead to large fluctuations in the output power of WFs. Furthermore, the total output power of WFs is also significantly reduced due to the grid-code constraints. The use of ESSs is intended to increase the output power of WFs in both operation modes (i.e., reserve power and limited power modes), thereby improving the efficiency of the WFs. The benefits of ESSs are analyzed in detail in different scenarios. Limited Power Mode Figure 3a shows the actual output power of WFs in limited power mode. It can be seen that WFs must curtail a large amount if the available power is higher than the limited power at t1, while the output power of the WF is much smaller than the limited power a few intervals later at t2. Using ESSs allows the WF operator to shift the surplus power from the high output power periods to the low output power periods. Therefore, with ESSs, the WF not only reduces the amount of curtailment power but also increases the amount of output power, as shown in Figure 3b. Reserve Power Mode In this operation mode, a certain amount of spinning reserve capacity is maintained in the WF, which depends on the available power of the WF. This can be used in emergencies in the power system. Without an ESS, the WF operator must reduce the output power of many WTG units to maintain a required reserve capacity, as shown in Figure 4a. To reduce the amount of spinning reserve capacity, an ESS can store energy and play a similar role to the spinning reserve capacity, as shown in Figure 4b. As a result, the amount of output power is significantly increased compared to the case without ESSs. 
System Model This section presents an MILP-based mathematical model for the optimal sizing of an ESS and the optimal output power of a WF. The optimal size of an ESS is determined by taking a trade-off between the investment costs and its profit in the operation of the entire WF system. Wind Data Wind data is assumed to follow the Weibull distribution [28,29]. Equations (1) and (2) show the formula of the probability density function (PDF) and cumulative distribution function (CDF) of the Weibull distribution, respectively, where k is the Weibull shape factor, λ is the Weibull scale parameter (m/s), and v is the wind speed (m/s). Each couple of values {k, λ} is determined to fit the historical wind data in a certain period of time (e.g., a day, a week, a month, a season, a year). Fitting the wind data during a short period of time can increase the accuracy of determining wind speed/wind power. However, the optimization model is developed with one-year data. To reduce the complexity of the optimization model and the computation cost, four Weibull distribution functions were estimated to describe the seasonal wind data in a year. This selection not only reduces the complexity of the proposed model, but it also ensures the tracking of the significant changes in the wind speed in each season. WTGs locally control the blade angle and tip speed ratio to maximize their output power. The output power can be computed using Equation (3), considering the cut-in and cut-out speed of the WTG. In the operation of WFs, the operation of upstream WTGs can affect the operation of the downstream WTGs due to wake effects. This reduces the wind speed/wind force of the downstream WTGs, and therefore reduces the amount of output power of the entire WF system. In this study, the main objective is to analyze the effects of an ESS on the operation of a WF and to find out the optimal size of an ESS in different scenarios. Therefore, we assume that the location of all WTGs has been optimally determined to minimize the wind speed reduction and this factor can be neglected in this study. The available power of a WF is calculated by the WF operator by accumulating the available power from the WTGs, as shown in Equation (4). The WF operator determines the optimal set-point of the WF in each operation mode, which fulfills the constraint (5). Optimization Model for Sizing of ESS The objective function (6) aims to find out the optimal set-point of the WF and the optimal size of the ESS to reduce the penalty for shortage power in different operation modes, where P is the rated output power of the ESS and P_out,t^WF is the output power of the WF at t. The first term of (6) represents the total yearly cost of an ESS. The detailed cost calculation of an ESS is presented in Equations (7)-(12), similar to [30], including four different cost categories: (i) cost of power conversion system; (ii) cost of battery; (iii) cost for the balance of the plant; and (iv) annual O&M cost of ESS. The factor φ is the capital recovery factor representing the weights between the four types of costs, as given in (7). The second term of (6) shows the penalty of shortage power in the limited power mode due to wind uncertainty. The third term of (6) shows the penalty for the shortage of reserve capacity in the reserve power mode. The fourth term of (6) shows the profit by selling power from WFs to the power system. 
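As a rough illustration of how the four terms of objective (6) and the cost categories of Equations (7)-(12) fit together, the sketch below assembles an annualized ESS cost and a total yearly cost in Python. The way the named quantities are combined, and all unit costs and helper names, are assumptions for this example, not the paper's model or data.

```python
# Illustrative sketch of the objective structure described above: yearly ESS cost
# plus penalties for shortage power in the two modes, minus revenue from sold energy.

def annualized_ess_cost(p_ess_mw, h_hours, eta_dis,
                        c_pcs, c_storage, c_bop, c_om_fixed,
                        lifetime_years, interest_rate):
    """Yearly ESS cost built from the four categories named in Equations (7)-(12):
    power conversion system, battery, balance of plant, and fixed O&M. The exact
    combination of the quantities is an assumption for illustration."""
    # Capital recovery factor (Equation (11) as described in the text)
    crf = (interest_rate * (1 + interest_rate) ** lifetime_years) / \
          ((1 + interest_rate) ** lifetime_years - 1)
    cost_pcs = c_pcs * p_ess_mw                            # Equation (8): PCS cost
    cost_batt = c_storage * p_ess_mw * h_hours / eta_dis   # Equation (9): battery cost
    cost_bop = c_bop * p_ess_mw * h_hours                  # Equation (10): balance of plant
    cost_om = c_om_fixed * p_ess_mw                        # Equation (12): fixed O&M
    return crf * (cost_pcs + cost_batt + cost_bop) + cost_om

def total_yearly_cost(ess_cost, shortage_lim, shortage_res, sold_energy,
                      penalty_lim, penalty_res, selling_price, alpha, beta):
    """Objective (6) as described: ESS cost + weighted penalties - selling profit."""
    return (ess_cost
            + alpha * penalty_lim * shortage_lim
            + beta * penalty_res * shortage_res
            - selling_price * sold_energy)
```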
The coefficients α, β represent the weight of each grid-code constraint in the operation of the WF. Equation (8) is used to compute the total cost of the power conversion system using the rated output power of the ESS (P) and the unit cost of power electronics. Equation (9) is used to calculate the total cost of the battery by using the length of the discharge cycle (H), the unit cost of the storage devices, P, and the discharging efficiency of the ESS (η_dis). Equation (10) is used to determine the total cost of the balance of the plant using P, H, and the unit cost of the balance of the plant. Equation (11) is used to compute the capital recovery factor using the lifetime of the component (y) and the annual interest rate (i_r). Finally, Equation (12) is used to calculate the annual O&M cost using the fixed O&M cost and P. Constraints in Limited Power Mode In this mode, a WF may be charged a penalty amount for its mismatch of power between the commitment power and the actual output power. The uncertainty of wind power causes the actual amount of output power to be significantly lower than the commitment power. Using an ESS significantly reduces this mismatch power. Figure 5 shows the shifting process of the surplus power from the high available power periods to low available power periods. For instance, if the available power is higher than the limited power at t, the set-point of the WF is at the limited power and the surplus power is charged to the ESS. At t', we assume that the available power is at the lower bound of wind power uncertainty (i.e., the worst case), as shown in Figure 5. Without an ESS, the shortage of power is dP_neg,t, and the WF must pay a penalty for this amount. With an ESS, the value of dP_neg,t can be reduced to 0 by discharging power from the ESS. If the discharging amount is greater than the maximum value of dP_neg,t, the WF can sell surplus power (dP_pos,t) and earn more profit. The output power of the WF is calculated based on the shortage and surplus power (i.e., dP_neg,t, dP_pos,t), as given in Equation (13):
P_out,t^WF = P_avl,t^WF − dP_neg,t + dP_pos,t, ∀t ∈ T. (13)
The calculation of the amount of surplus/shortage power is summarized in detail in Algorithm 1 based on the amount of discharging power from the ESS.
Algorithm 1: Determine output power of WF with limited power constraint
for t = 1 to T do // during a year
  if u·P_avl,t^WF ≥ P_lim,t then dP_neg,t = 0, dP_pos,t = 0
  // apply charging/discharging bounds
  P_out,t^WF = min( min(P_lim,t, P_avl,t^WF) + dP_pos,t − dP_neg,t, P_lim,t )
end
Constraints in the Reserve Power Mode In this operation mode, the ESS plays the role of a major power capacity reserve in the WF system. The required reserve capacity can be fulfilled from two sources: (i) spinning reserve power from WTGs and (ii) stored power in the ESS. The shortage of the reserve capacity is determined by Equation (14). The storage capacity in the ESS is calculated based on the current state of charge (SOC), as given in Equation (15). The output power of the WF is determined using Equation (16), based on the amount of spinning reserve power, charging, and discharging power of the ESS. Constraints for the Operation of ESS Equations (17) and (18) bound the charging and discharging power of the ESS, respectively. The SOC of the ESS is computed at the end of each interval using the amount of charged/discharged power, as shown in Equation (19). The SOC of the ESS is set to the initial value at the first interval, as given in Equation (20). The operation bound of the ESS is shown in Equation (21). 
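A minimal sketch of the per-interval ESS bookkeeping implied by the constraints just described is given below; the efficiency model, bounds, numerical values, and variable names are illustrative assumptions rather than the paper's exact formulation of Equations (17)-(21).

```python
# Per-interval ESS bookkeeping in the spirit of the described constraints:
# charging/discharging power limits, an SOC update, and SOC operating bounds.
# Efficiencies, limits, and names are assumptions for illustration only.

def step_soc(soc, p_charge, p_discharge, p_rated, soc_min, soc_max,
             eta_ch=0.95, eta_dis=0.95, dt=1.0):
    """Advance the ESS state of charge (MWh) by one interval of length dt (hours)."""
    # Charging/discharging power limits (in the spirit of Equations (17) and (18))
    p_charge = min(p_charge, p_rated)
    p_discharge = min(p_discharge, p_rated)
    # SOC update from charged/discharged energy (in the spirit of Equation (19))
    soc_next = soc + eta_ch * p_charge * dt - (p_discharge / eta_dis) * dt
    # Operating bounds on the SOC (in the spirit of Equation (21))
    return min(max(soc_next, soc_min), soc_max)

# Example: start at 5 MWh and charge 3 MW for one hour with a 17.5 MW converter
print(step_soc(5.0, 3.0, 0.0, 17.5, 0.0, 20.0))  # -> 7.85
```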
Numerical Results In this section, the optimal output power of a WF is determined in different operation modes using an ESS. The optimal size of an ESS is also analyzed in detail with various parameters of penalty, selling price, uncertainty, and so on. This allows the WF operator to determine the suitable size of the ESS with a given WF system. Input Wind Data and WF Layout Firstly, the wind speed parameters for each season following the Weibull distribution are tabulated in Table 1. The two model parameters (i.e., the Weibull shape and Weibull scale) are estimated by fitting an historical wind speed dataset to a Weibull distribution function. Different numerical methods, such as the modified maximum likelihood method, maximum likelihood method, and energy pattern factor method, can be used to determine the two model parameters [29]. The PDFs and CDFs of wind speed are demonstrated in Figures 6 and 7 for different seasons, respectively. The wind data is generated for a year using the given parameters in Table 1. In order to reduce the complexity of the proposed model and computation cost, the time interval is set to 1 h and the total number of intervals is 8760 for a year. The available power at each WTG is determined based on wind speed data using Equation (3) and the available power in the WF is determined by Equation (4). The detailed parameters for the ESS are given in Table 2. The test WF system consists of 20 WTGs and is divided into four clusters, as shown in Figure 1. All WTGs have the same operation parameters. The rated power is 10 MW with a minimum set-point of 1 MW. The maximum ramp-up and ramp-down in two consecutive intervals are 2 MW [28]. As mentioned earlier in Section 3.2, the value of α, β in the objective function (6) represents the weight of each grid-code constraint in the WF operation. The following three cases are analyzed in detail in this study: i. {α = 1, β = 0}, the WF operates with limited power constraint only; ii. {α = 0, β = 1}, the WF operates with reserve power constraint only; iii. {α = 0.5, β = 0.5}, the WF operates with both limited power and reserve constraints and the WF tries to satisfy both constraints with the same priority. The MILP-based optimization model is implemented in Visual Studio C++ and the optimal solution is determined using commercial software, i.e., IBM ILOG CPLEX 12.6. This optimizer is capable of solving linear programming, MILP, or quadratic programming problems with a high-performance and flexible mathematical programming solver [32]. Figure 6. The probability density function of wind data in different seasons [28]. 
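Since Table 1 and Equation (3) are not reproduced here, the following sketch shows one possible way to generate a year of hourly wind power from seasonal Weibull parameters and a generic power curve; the seasonal parameters, the cut-in/rated/cut-out speeds, and the cubic curve shape are placeholder assumptions, not the paper's values.

```python
# Sketch of the data-generation step: draw hourly wind speeds from seasonal Weibull
# distributions and map them to WTG power with a generic power curve. All parameter
# values below are placeholders for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder seasonal Weibull parameters (shape k, scale lam in m/s)
seasons = {"spring": (2.0, 7.5), "summer": (1.8, 6.0),
           "autumn": (2.2, 8.0), "winter": (2.4, 9.0)}

def wtg_power(v, p_rated=10.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Generic power curve: zero below cut-in and above cut-out, cubic ramp, then rated."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * ((v - v_in) / (v_rated - v_in)) ** 3

hours_per_season = 8760 // 4
wf_power = []
for k, lam in seasons.values():
    v = lam * rng.weibull(k, hours_per_season)           # Weibull(k, lam) samples
    wf_power.extend(20 * wtg_power(vi) for vi in v)      # 20 identical 10 MW WTGs

print(len(wf_power), round(max(wf_power), 1))            # 8760 intervals, capped at 200 MW
```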
Optimal Size of ESS in Limited Power Mode In this mode, the set-point of the WF is limited by a predefined set-point (i.e., limited power). Firstly, we assume that the uncertainty of the wind speed is 2% and the limited power is set to 168 MW. This means that the output power of the WF is always less than or equal to 168 MW. The optimal size of the ESS is determined to minimize the amount of power shortage in the worst case when the wind speed is at the lower bound of uncertainty. The optimal size of the ESS turned out to be 17.5 MW by considering the trade-off between its profit and investment cost. The costs corresponding to the optimal size of the ESS are calculated based on the unit cost of each component, as tabulated in Table 3. The optimal output power of the WF is determined for one year, and to show the results unambiguously, the optimal set-point is only extracted for the first three days, as shown in Figure 8. It can be observed that the output power of the WF is maintained lower than or equal to the limited power (168 MW). Without the ESS, the output power of the WF is always at the lower bound of the uncertainty (blue line). The use of the ESS increases the output power of the WF significantly (red line), thus reducing the amount of the power mismatch between the actual and the committed output power. The detailed operation of the ESS is shown in Figure 9. The ESS plays an important role in shifting the surplus power from high output power periods to low output power periods. The charging/discharging amount of the ESS is shown in Figure 9a. It can be seen that the ESS always tries to charge as much power as possible when the amount of available power is greater than the limited power. Then the ESS discharges the power to fulfill the power mismatch between the actual capacity and committed power caused by uncertainty. The amount of storage power in the ESS is shown in Figure 9b, and is calculated by using the amount of charging/discharging power, as shown in Figure 9a. In this section, the impact of various parameters on determining the size of the ESS is analyzed in detail. Firstly, the ESS is used to reduce the cost of penalties of shortage power. 
Therefore, the value of the penalty is the main factor affecting the size of the ESS, as shown in Figure 10. The size of the ESS increases significantly if the penalty increases. The optimal size of the ESS is 17.5 MW as the penalty value is approximately 40,000 KRW. In addition, the values of uncertainty and limited power also affect the size of the ESS. Figure 11a shows the effect of both uncertainty and penalty value on the size of the ESS. It can be seen that when uncertainty increases to approximately 2%, the size of the ESS is the largest. If uncertainty is too high, the available power of the WF in the worst case is less than the limited power at most time intervals. As a result, the WF often generates maximum output power and there is no surplus power for charging the ESS. Similar to Figure 10, with an increase in the penalty, the size of the ESS increases. Figure 11b shows the effect of limited power and the value of the penalty on the ESS size. The size of the ESS increases significantly as the value of the penalty increases and the limited power is set at approximately 168 to 170 MW. This is because if the limited power is too high, the available power of the WF is always lower than the limited power. Therefore, the WF always generates its maximum power and does not charge the ESS. Conversely, when the limited power is too low, the output power of the WF is set at the limited power and the ESS does not need to discharge power. Therefore, the size of the ESS is usually very low in these cases. Finally, based on the given information of each WF system, this detailed analysis allows WF operators to find out the optimal size of the ESS. 
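For readers who want to reproduce this kind of sensitivity analysis without the full MILP, the sketch below simply enumerates candidate ESS ratings and keeps the one with the lowest assumed total yearly cost; the cost and penalty models are placeholders chosen only to show the trade-off, not the paper's data or results.

```python
# A simplified stand-in for the MILP: enumerate candidate ESS ratings and keep the one
# with the lowest total yearly cost (annualized ESS cost plus remaining penalties).
# All numbers and the penalty model below are illustrative assumptions.

from math import exp

def yearly_ess_cost(p_mw, unit_cost_per_mw=120e6, crf=0.1, om_per_mw=2e6):
    # Assumed annualized cost: capital recovery on an overnight cost plus fixed O&M (KRW/year)
    return crf * unit_cost_per_mw * p_mw + om_per_mw * p_mw

def remaining_penalty(p_mw, base_penalty=9e9, k=0.15):
    # Assumed: yearly penalties shrink with ESS size, with diminishing returns
    return base_penalty * exp(-k * p_mw)

best = min(range(0, 41), key=lambda p: yearly_ess_cost(p) + remaining_penalty(p))
print("candidate optimal ESS rating:", best, "MW")
```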
Figure 11. The optimal size of ESS: (a) Different value of uncertainty and penalty (in KRW/kWh); (b) Different value of limited power and penalty (in KRW/kWh). Optimal Size of ESS in Reserve Power Mode In reserve power mode, the WF must maintain a certain amount of reserve power. Figure 12 shows the available power and output power of the WF when the required reserve capacity is 10% of the available power. Without the ESS, the output power is always lower than the available power by a certain amount, as shown in Figure 12a. With the ESS, the amount of the storage power in the ESS can play a role in the reserve capacity in the WF system. Therefore, the WF can generate extra output power while ensuring the requirement of reserve power, as shown in Figure 12b. The total reserve capacity in the WF is illustrated in Figure 13. It can be observed that the major reserve capacity comes from the stored energy in the ESS. The remaining is fulfilled by the spinning reserve capacity from the WTGs. The optimal size of the ESS turned out to be 16.3 MW by considering a trade-off between the investment costs and profit of selling power. The detailed cost of the ESS is computed using the unit cost in Table 2 and the detailed ESS cost is summarized in Table 4. In the reserve power mode, the selling price is the main parameter that affects the size of the ESS. A high selling price encourages the WF operator to use the ESS to increase the output power of the WF and sell it to the power system. 
It can be seen that the size of the ESS increases significantly if the selling price is greater than 540 KRW/kWh. Figure 14 shows the effects of the selling price and the requirement of the reserve capacity on the size of the ESS. Optimal Size of ESS Considering Several Grid-Code Constraints Sections 4.2 and 4.3 present the optimal size of the ESS in only limited power or reserve power mode. It can be seen that the WF operator only decided to use an ESS if the penalty for shortage power and the selling price are quite high for the limited power and reserve power mode, respectively. In this section, we consider both requirements of limited power and reserve power. Firstly, the input data is presented as follows: (i) The requirement of reserve power is 10% of the available power; (ii) Limited power is 168 MW; (iii) The value of the penalty in both the limited power and reserve power modes is 1000 KRW/kWh; (iv) The selling price for surplus power is 150 KRW/kWh. The optimal size of the ESS turned out to be 16.7 MW. The detailed cost of the ESS is summarized in Table 5. The output power of the WF is shown in Figure 15. This set-point is always lower than or equal to the limited power (168 MW). The storage capacity in the ESS is shown in Figure 16a; this amount can play a role of reserve power in the WF system. Figure 16b shows the total amount of reserve power, including the amount of stored energy in the ESS and the amount of spinning reserve power at the WTGs. It can be seen that sometimes the reserve capacity requirement is not met. This is because the investment cost of the ESS is too expensive compared to the profits from the ESS. Therefore, the WF operator decides to pay the penalty instead of using a large size ESS. Finally, the effects of the various parameters, including the value of the penalties for the two modes, on the size of the ESS are clearly shown in Figure 17. It can be seen that the size of the ESS increases if the value of the penalty increases in both operation modes. 
The size of the ESS is maximum at approximately 32 MW when the penalty values are approximately 40,000 KRW and 30,000 KRW in the limited power and reserve power modes, respectively. Finally, the detailed analysis in this study helps the WF operator to determine the optimal size of the ESS corresponding to each given WF system. Discussion and Future Extensions In this study, a multi-objective optimization model is proposed to optimize the operation of an integrated WF-ESS considering different constraints, i.e., (i) the limited power constraint and (ii) the reserve power constraint issued by the TSO, where the weight coefficients α, β represent the priority for each of these constraints. The change in values of α and β can impact the outcome of the optimization model. Therefore, depending on the preference of the WF operators, their values can be decided. For example, if the value of α > β, more preference will be given to fulfilling the limited power constraint, and vice versa. The optimal output power of the WF and the optimal size of the ESS are presented with different operation scenarios. However, there are still some limitations of this research, which open up several approaches for future extension, as follows: a. Since the life of all the equipment of the ESS is taken as the same, the replacement cost of storage devices was not considered in this study. However, the ESS model can be improved by considering the replacement cost and recycling/disposal cost of the ESS; b. The wind speed at each WTG is determined using a Weibull distribution and this may not be very accurate. Therefore, a deep neural network can be adopted to learn from historical data and be used to predict wind speed with higher accuracy; c. In small WFs, WTGs might be located in a restricted space. This greatly reduces the wind speed/wind force of the downstream WTGs due to wake effects and thus decreases the amount of output power of the entire WF system. Developing a detailed WF model considering the wake effect can increase overall WF performance; d. This study analyzes the operation of WFs in detail and the optimal size of ESSs under different operation scenarios. In the next step, the optimal results might be evaluated and implemented in a real WF system. In summary, approaches (a)-(c) can be the possible solutions to improving the optimization model. Then, the final optimal solution can be analyzed and applied to some real WF systems in South Korea. Conclusions In this study, an optimization algorithm was proposed to find out the optimal size of an ESS to support a WF in fulfilling different grid-code constraints, including the reserve power and the limited power modes. Furthermore, the major parameters affecting the size of the ESS have also been analyzed in detail. 
In the limited power mode, the optimal size of the ESS was approximately 17 MW with a penalty for the power mismatch between the actual power and the commitment power of approximately 40,000 KRW/kWh and an uncertainty of 2% in the available power of the WF. In the reserve power mode, the optimal size of the ESS was approximately 16 MW with a required reserve capacity of 10% of the available power. In the case of considering multiple grid-code constraints, the optimal size of the ESS can go up to approximately 32 MW if the penalty values are approximately 40,000 KRW/kWh and 30,000 KRW/kWh in the limited power and reserve power modes, respectively. Finally, based on the detailed analysis in this paper, the WF operator can easily find out the optimal size of an ESS that is suited to the WF capacity, wind power information, uncertainty, and requirements from the TSOs (i.e., limited power, reserve capacity, penalty of mismatch power, and selling price).
2021-10-22T15:45:49.543Z
2021-09-02T00:00:00.000
{ "year": 2021, "sha1": "07568c83c396ff90fc89a66fbf05111ed20b66b2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/14/17/5478/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "be4630b20a3de348b120e3ce799c4c58dda2669f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
248695557
pes2o/s2orc
v3-fos-license
Trends in COVID-19 Objectives: To detect the epidemiological trend of coronavirus disease-19 (COVID-19) in Iraq, the distribution of cases by age, gender, and governorates, and to assess its burden on the health system by estimating morbidity and mortality rates. Methods: This biometric study was carried out in 2021. The distribution, incidence, mortality, and case fatality rates in a 17-month period were sketched in a biometric design. A semi-structured questionnaire was distributed to a number of decision makers in the Ministry of Health regarding health system challenges that have been faced during this pandemic. Results: More than half (55.1%) of the cases were among males, and 67.5% were in the age group 30-60 years. Mortality was also predominant among males (62.7%), and 50.0% of the deaths were in the age group >50 years. The predominant age group for both genders was 30-60 years. Case fatality rate was 1.2%; again higher among males (1.3% versus 1.1%). Conclusion: The trend of COVID-19 in Iraq showed 2 peaks, August-October 2020 and March-July 2021, with males being more affected by morbidity, mortality, and fatality. The main challenge faced by the Iraqi health system was the rapid increase of COVID-19 cases with limited bed capacity and medical equipment. Disasters have occurred worldwide in the last decades, causing major disruption to life and health. The unpredictable nature of these disasters urged health professionals to adapt to the resulting rapid and unprecedented changes of the environment. 1 Pandemics leave a memorable mark on the history of populations by causing them to live in a state of anxiety and fear and disrupting the natural flow of their life. 2 In contrast to natural disasters, which usually have a known onset and size of population affected, pandemics usually start insidiously and propagate rapidly depending on the route of transmission, virulence of the agent, and other human and environmental circumstances. 3 The first rise of coronavirus disease-19 (COVID-19) cases was around the end of December 2019 or early January 2020. 4,5 The rapid increase in infections and deaths resulted in anxiety, panic, stigma, mistrust, and rumor-mongering among people. This incident was labeled on January 30, 2020, as a Public Health Emergency of International Concern. On March 11, 2020, the World Health Organization (WHO) formally declared the outbreak of the new coronavirus COVID-19 a pandemic, after it had spread to more than 100 countries and led to several thousand cases in the first few months of its appearance. 6 As of October 2021, more than 240 million people globally had contracted the disease and approximately 5,000,000 people had died. Three countries (United States of America [USA], India, and Brazil) comprised more than 42.0% of the world cases and 37.0% of the world deaths. 7 Initially, the case fatality rate (CFR) for coronavirus was 2-3.0% globally; however, the age group of 70-79 years, which is the high risk group, had an 8.0% CFR. 8 The Centers for Disease Control and Prevention found that the CFR increased with age, from 0.2% in patients aged below 39 years to 14.8% in those over 80 years, with the death risk being greater among males (2.8%) than females (1.7%). 
9 The prevalence of the infection in the population has a considerable influence on the overall mortality rate; once the number of infected individuals in the population reaches a large size, it will overwhelm the health care systems, which may lose their capacity to treat all reported patients. 10 The numbers of cases and deaths in many African countries were low compared to European and American countries; some possible reasons may be related to low test capacity, a relatively young population, and, probably, underreporting, which may result in delayed action against the pandemic. 11,12 However, CFR differed between countries; 2.0% in Pakistan, 1.45% in India, 4.7% in Iran, 3.4% in the United Kingdom (UK), and 3.5% in Italy. This difference could be due to a multifactorial combination of viral immunogenicity, genetic factors of the host, and demographic differences. 13 The reduction in workforces and the increasing unemployment, attributed to COVID-19, caused a significant economic burden worldwide that was superimposed on the health impact. 14 The coronavirus disease-19 pandemic created global public awareness and panic, as, in addition to the morbidity and mortality, it has major adverse psychological, social, and economic sequels. 15 The features of this highly infectious and fatal disease have forced governments to adopt unfamiliar measures in most countries, such as declaring a state of emergency, general incarceration of the population, imposition of social distancing, and the application of restrictive social (and health care) visiting policies. 16 The purpose of this study was to recognize the epidemiological trend of COVID-19 in Iraq, the distribution of cases by age, gender, and governorates, and to assess the burden on the health system by estimating the morbidity, mortality, and case fatality rate, in addition to the health authorities' opinion on the obstacles that emerged in dealing with this pandemic. Methods. This study was designed as 2 main parts; first, as a biometric descriptive study that involves treatment and processing of already available data from the Ministry of Health and related facilities to sketch the epidemiological trend of COVID-19 including distribution, morbidity, mortality, and CFR in a 17-month period (March 2020 through July 2021). The second part was designed as a qualitative study to throw light on the burden of the pandemic on the health system via interviewing the decision makers in the Ministry of Health and Baghdad health directorates. A semi-structured questionnaire (with some open-ended questions) was distributed to a number of decision makers regarding the health system challenges that have been faced during this pandemic, such as health resources, people's commitment to health authorities' instructions, availability of vaccine, the main reasons for recurrent peaks in the disease trend, and the possible solutions for any future scenarios, including the plans to overcome any emerging situation. The challenges include comprehensiveness of the health services, access (geographic accessibility, acceptability, and affordability), referral systems, vertical integration and coordination of health services, and continuity of care. Data were categorized by age and gender, then plotted against time to see the trend of the disease for each age group and gender. Data were examined and triple-checked for missing information and conformity with WHO reports. This study did not include any sort of intervention on humans. 
Hence, ethical considerations were not needed apart from the approvals of the Ethical Committee in the College of Medicine, Mustansiriyah University, Baghdad, Iraq, and the permissions from the Ministry of Health. However, the participants were assured that the information they declare would not be used for any purpose other than research work. Disclosure. Authors have no conflict of interests, and the work was not supported or funded by any drug company. Statistical analysis. The data analysis was carried out using the Statistical Package for the Social Sciences, version 26.0 (IBM Corp, Armonk, NY, USA). Data were presented in simple measures of frequency and percentages. Linear regression was used to sketch the trend of the disease. Results. Figures 1 & 2 show the trend of COVID-19 in Iraq during the period from March 2020 through July 2021. There are 2 peaks; a moderate one (August-October 2020) and a high one (March-July 2021). Figures 2 & 3 show the trend of mortality during the same period. Figure 4 demonstrates that males are affected more than females; 55.1% of the cases were among males, with the predominant age group for both genders (67.5%) being 30-60 years. The trend of deaths is illustrated in Figure 5 with a predominance of the male gender; mortality was also predominant among males (62.7%) and 50.0% of the deaths were among the age group >50 years. Case fatality rate was 1.2%; again higher among males than females (1.3% versus 1.1%). A comparison between the number of cases and deaths in each governorate showed that Diyala, Anbar, and Wasit (Middle region) have low numbers of deaths compared to cases, while Sulaimaniyah (North) has a high number of deaths (not tabulated). Table 1 describes the challenges and obstacles that faced the Iraqi health system during this pandemic and the possible reasons stated by decision makers in the Ministry of Health and health directorates; the main challenges were: shortage of health personnel, limited bed capacity due to old buildings of hospitals, and insufficient oxygen supply. Discussion. More than half of the cases were males, and two-thirds were aged 30-60 years. This is consistent with a study in Victoria, Australia 17 regarding the age group of cases, although the vulnerable age group for death was ≥60 years, but it disagreed with a study in India where males constituted 65.4% and the age group 18-35 years formed 37.5% of the cases. 18 The results of a meta-analysis in 8 countries showed that the highest frequency of cases (in Germany, Chile, Portugal, South Korea, New Zealand, Turkey, Canada, and USA) was in the age group 20-39 years, while in Italy, Netherlands, and UK the highest frequency was among the age group >80 years. 19 The differences in access to COVID-19 testing could be the main reason, rather than differences in the actual number of cases, as testing the elderly was a priority. They were labeled as a high risk group, due to the severe complications they might face, taking into consideration the limited testing capacities at the beginning of the pandemic. However, younger age groups began to receive a parallel consideration as testing capacities improved, especially with increasing numbers of asymptomatic patients in most populations that may facilitate the transmission of the infection. 19,20 Two-thirds of the total deaths were males, half of whom were aged more than 60 years. 
Certain groups, especially those above 65 years of age or those with previous medical illnesses, are more affected by COVID-19 complications, which can result in more fatalities. Physiological effects of the aging process, such as waning of the immune system and dysfunction and degeneration of body tissues, can be the leading cause of deterioration and death. 21 Coronavirus infection is now labeled as "the third leading cause of death for children and adults (697.5 deaths/million)", coming only after heart problems (1287.7 deaths/million) and cancer (1219.8 deaths/million). 22 Gender could play a part in vulnerability to coronavirus disease; however, this is still unclear. Several reports have indicated that men have a higher number of infections and case fatality rates than women. Reasons behind that might include variability in gender norms and habits, or differences in social roles in each society. 23 Islam et al 24 attributed that to several factors, such as occupational and lifestyle factors that may increase the likelihood of exposure among men or differences in underlying comorbidities between both genders. The disease trend in the current study showed 2 peaks; a moderate one (August-October 2020) following the "Eid al Fitr" celebration, and a higher one (March-July 2021), while Saudi Arabia reported a single peak (June-July 2020). In Egypt, the highest peak was in June 2020 with 2 moderate peaks in January 2021 and in June 2021. Iran experienced 2 moderate peaks in November 2020 and in April 2021, and its highest peak in August 2021. 25 Although large gatherings of people like religious ceremonies with massive overcrowding can contribute to the disease trend fluctuation, small and informal social gatherings are thought to be an important source of transmission; birthday, wedding, and funeral occasions can empirically quantify the impending role of small social gatherings in COVID-19 spread. 26 Case fatality rate seems to be relatively low in the current study compared to other countries; a study that included 20 European countries (severely affected with COVID-19), in addition to USA and Canada, concluded that the country-specific CFR showed a broad spectrum, ranging from 0.6% (Iceland) to 18.1% (France). 27 However, CFR in the current study was higher among males, which was consistent with most of the countries in Hoffmann and Wolf's study. 27 Many heterogeneous reasons related to country-specific differences in cases and deaths have been evaluated, including genetic, socioeconomic, and environmental factors. 28 The distribution of coronavirus disease-19 cases, deaths, and CFR by months during the period of the study showed that the highest number of deaths was in July 2020, and the maximum number of cases was in July 2021, whilst the CFR reached its peak at the beginning of the pandemic. These results were in line with the WHO COVID-19 weekly global report, in which the elevated trend in the last 2 months (June-July 2021) was largely attributed to an increased number of cases in the Western Pacific region (14.0% increase) and the Americas (8.0% increase), while at the country level, the peak numbers of new cases in the same period were reported by USA, Iran, and India. 29 The highest global death rate reported by WHO was in January 2021, while the highest global CFR was in April 2020. 30 Hasan et al 31 in a meta-analytic study concluded that the weekly worldwide coronavirus CFR peaked at 7.2% during the 17th epidemiological week (April 22-28, 2020). 
The top 5 countries for coronavirus CFR were Yemen (28.9%), Italy (13.2%), UK (12.4%), Belgium (11.6%), and France (11%). Case fatality rate is a key measure of disease burden that is crucial for effective pandemic monitoring and control. 17,32 However, many factors restrict obtaining an accurate estimate of the CFR and mortality rate of COVID-19. 33 The decline in CFR in this study was consistent with other studies in the USA (New York), where the death rate among inpatients declined 18-20.0% in a 3-4 month period, from 25.6% in March to 7.6% in June 2020. 34 In another study in England, the death rate decreased among patients admitted in May relative to those admitted in March 2020 (from 11.2% to 9.0%). 35 The relative drop in CFR could be due to several reasons, such as increased public awareness, widespread testing that detects even asymptomatic or mild cases, more precise management of severely ill patients, favorable outcomes of infection in younger people, and experience gained by health professionals. 31 The morbidity and mortality statistics by governorate demonstrated that the highest incidence rates were in Wasit, Duhok, and Baghdad, whilst Sulaimaniyah recorded the highest mortality and case fatality rates. Despite the geographical variability between Iraqi governorates, it is generally expected that increased population density raises the susceptibility of some regions to infection due to the high occurrence of social and economic interactions. Population density is certainly a crucial hint in studying virus spread; however, limited access to medical services due to a shortage of health personnel and hospital beds (especially intensive care beds), testing intensity and access to testing, in addition to low income levels in more densely inhabited areas, could be possible factors. 21 The main challenges faced by the Iraqi health system during this pandemic, as stated by decision makers in the Ministry of Health, were a shortage of medical personnel as a result of migration of doctors due to unfavorable security conditions, as well as the limping health system and services represented by a shortage of medicines and an insufficient oxygen supply. In a study in Nepal, the most challenging aspects were the availability of testing kits, medical supplies, and personal protective equipment. 36 In a study in France, the main challenges were increasing health workers' awareness regarding management of suspected and confirmed cases, and preparedness through education and training regularly organized for frontline health workers. 37 Moreover, the burden on the healthcare system was also attributed to the rapidly increasing number of cases. Governments have to take several measures to improve the capacity of the health system in order to tackle the prevailing healthcare crises; some healthcare professionals advise that the government should actively work on the security of health workers. Lockdowns must be focused on places where clusters of cases are detected. 38 Many obstacles emerged during the process of pandemic control, especially the sudden flare-up in the number of cases beyond hospitals' capacity (due to poor adherence of people to health instructions), in addition to the delay in closing the borders with neighboring countries, and the slow and weak implementation of vaccination campaigns.
The main reasons for the loss of people's role and cooperation during the pandemic were economic problems in making a living, especially among those with daily-wage work, which contributed to a negative outcome, in addition to panic fed by rumors from social media, especially with respect to vaccination. People generally rely on social media to gain information on the virus. 39,40 The interviewed decision makers made some suggestions to overcome the obstacles, such as establishing modern hospitals in accordance with international health standards and intensifying community sensitization and engagement to encourage COVID-19 vaccine demand and uptake. Organizational and institutional efforts and coordination approaches are compulsory for management of any global health crisis. 40 Important suggestions to overcome psychological problems that affect people or health personnel were also raised, such as psychological education of health workers, in addition to periodic home visits by psychiatrists to provide psychological/social support to patients. The findings of this study form baseline information that would be helpful for quality improvement and governmental work to improve medical services. Study limitation. Data collection depended on the numbers from the surveillance system of the Ministry of Health; there might be some underestimation, as many people with COVID-19 may not go to health facilities, especially if the disease is not severe, because they were afraid of being quarantined (or of their family contacts being quarantined). In conclusion, the trend of COVID-19 in Iraq showed 2 peaks (August-October 2020 and March-July 2021), with males being more affected in terms of morbidity, mortality, and CFR. The main challenge faced by the Iraqi health system was the rapid increase of coronavirus cases with limited bed capacity and medical equipment, while the main obstacle was poor compliance of the people. Enhancing the number and quality of critical care units and hospital bed capacity to cope with the increasing numbers of patients is an urgent need.
2022-05-12T06:18:01.968Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "6324fb002237bba6c95789c37fe2ee91fb2da265", "oa_license": "CCBYNC", "oa_url": "https://smj.org.sa/content/smj/43/5/500.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "93e934c717e1f12d35a42e5d3b3c282f3ef98976", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
49208872
pes2o/s2orc
v3-fos-license
Multitasking During Simulated Car Driving: A Comparison of Young and Older Persons Human multitasking is typically studied by repeatedly presenting two tasks, either sequentially (task switch paradigms) or overlapping in time (dual-task paradigms). This is different from everyday life, which typically presents an ever-changing sequence of many different tasks. Realistic multitasking therefore requires an ongoing orchestration of task switching and dual-tasking. Here we investigate whether the age-related decay of multitasking, which has been documented with pure task-switch and pure dual-task paradigms, can also be quantified with a more realistic car driving paradigm. 63 young (20–30 years of age) and 61 older (65–75 years of age) participants were tested in an immersive driving simulator. They followed a car that occasionally slowed down and concurrently executed a mixed sequence of loading tasks that differed with respect to their sensory input modality, cognitive requirements and motor output channel. In two control conditions, the car-following or the loading task were administered alone. Older participants drove more slowly, more laterally and more variably than young ones, and this age difference was accentuated in the multitask-condition, particularly if the loading task took participants’ gaze and attention away from the road. In the latter case, 78% of older drivers veered off the road and 15% drove across the median. The corresponding values for young drivers were 40% and 0%, respectively. Our findings indicate that multitasking deteriorates in older age not only in typical laboratory paradigms, but also in paradigms that require orchestration of dual-tasking and task switching. They also indicate that older drivers are at a higher risk of causing an accident when they engage in a task that takes gaze and attention away from the road. INTRODUCTION In everyday life, we often must perform multiple cognitive and motor tasks concurrently. For example, we steer a car along the road while watching for other traffic, responding to street signs and planning our route. As another example, we stroll on a sidewalk while avoiding obstacles, obeying traffic lights and chatting with another person. Experimental research about human multitasking began with a study by Jersild (1927), who reported that task performance deteriorates when two tasks are executed in an interleaved rather than in a blocked fashion. These performance decrements, later called "switching costs, " were attributed to the effort involved in disengaging from one task and adjusting to another task (Rogers and Monsell, 1995). In another line of research, two tasks were presented simultaneously or with a small stimulus onset asynchrony (Telford, 1931), which again led to performance decrements, called "dual-task costs." The latter costs were attributed to a central processing bottleneck (Welford, 1952), task competition for a limited pool of attention (Kahneman, 1973) or competition for limited pools of specific processing resources (Wickens, 2002). These costs implicate a deterioration of performance, when the required attentional resources exceed the available ones. When participants have to handle very complex tasks or several tasks that require attention from the same pool, structural interferences impair the simultaneous handling of those tasks (Duncan et al., 1997). 
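As a rough illustration of how the two kinds of costs introduced above are usually quantified (this is not analysis code from the study, and the reaction times are invented placeholder values), the sketch below computes a dual-task cost as the difference between dual- and single-task performance, and a switch cost as the difference between switch and repeat trials:

```python
# Illustrative arithmetic for dual-task and switch costs (invented reaction times, not study data).
from statistics import mean

rt_single_task   = [450, 470, 460, 455]   # ms, task performed alone
rt_dual_task     = [610, 640, 620, 600]   # ms, task performed concurrently with a second task
rt_repeat_trials = [480, 495, 470, 490]   # ms, same task as on the previous trial
rt_switch_trials = [560, 590, 575, 565]   # ms, different task than on the previous trial

dual_task_cost = mean(rt_dual_task) - mean(rt_single_task)        # "dual-task cost"
switch_cost    = mean(rt_switch_trials) - mean(rt_repeat_trials)  # "switching cost"

print(f"Dual-task cost: {dual_task_cost:.1f} ms")
print(f"Switch cost:    {switch_cost:.1f} ms")
```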
In real-life car driving, for example, a driver who passes a construction zone with narrow lanes must tightly control the car's lateral position while at the same time keeping his distance to the preceding car. This forces the driver to direct his gaze at two spatially distinct locations concurrently, which is physically not possible, i.e., structural interference emerges (Heuer, 1996). In contrast, driving in narrow lanes without a leading car while listening to traffic announcements should lead to less structural interference, because the tasks don't share sensory modalities. Five decades of research provided indisputable evidence that abilities in cognitive (reviews in Craik, 1977;Verhaeghen et al., 2003) and motor-cognitive (Hahn et al., 2010;Beurskens and Bock, 2013) multitasking decline with advancing age. This age-related decline is not uniform, however. It affects mainly task combinations which draw heavily on working memory (Voelcker-Rehage et al., 2006;Voelcker-Rehage and Alberts, 2007;Chu et al., 2013) and/or visuo-spatial processing (Beurskens and Bock, 2012), and/or postural control (Boisgontier et al., 2013), and it emerges even if multitasking is limited to singular events such as an unexpected stimulus (Bock and Beurskens, 2011) or an unexpected error (Voelcker-Rehage et al., 2006). The decay of multitasking abilities in older age is also correlated with a decay of task-switching and memory-updating abilities (Kray and Lindenberger, 2000;Holtzer et al., 2005;Iersel et al., 2008;Liu-Ambrose et al., 2009), which suggests that it is at least partly due to an age-related impairment of executive functions. It should be noted, that the age-related decline of multitasking abilities was observed in traditional laboratory paradigms and may not generalize unconditionally to real life. Laboratory research typically uses a limited number of well-defined stimuli (e.g., colored shapes on an otherwise blank screen), prescribes a limited number of elementary response alternatives (e.g., button presses) and associates those responses with no ecologically valid purpose. In contrast, everyday life offers an ever-changing flow of complex stimuli to which we respond by complex behavior in order to achieve a desirable goal. Furthermore, virtually all laboratory research was concerned with 'multi' tasking but actually presented only two tasks. This work therefore neglects the fact that in real life, we face an ever-changing sequence of concurrent tasks and must adjust to all of them in sequence. In other words, realistic multitasking incurs both dual-task costs and switching costs. Summing up, traditional laboratory paradigms suffer from behavioral impoverishment, lack of purpose and absence of the natural interplay between dualtasking and task switching. The ecological validity (Chaytor and Schmitter-Edgecombe, 2003) of those paradigms may therefore be limited. Several studies avoided behavioral impoverishment and lack of purpose by implementing realistic and immersive virtual-reality tasks such as car driving, street crossing or grocery shopping. Some of those studies dealt with dual-tasking: they combined virtual car driving or street crossing with a concurrent, cognitive or motor loading task. 
For example, simulated car driving has been combined with mobile texting (Drews et al., 2009), pattern detection or color memorizing (Cassavaugh and Kramer, 2009), and simulated street crossing with mobile internet use (Byington and Schwebel, 2013), listening to music or cellphone conversation (Neider and Kramer, 2011). The few studies which administered more than one concurrent task did so in separate blocks (Cassavaugh and Kramer, 2009;Neider and Kramer, 2011) and therefore still dealt with dual-tasking only; they didn't address the natural interplay of dual-tasking and task switching encountered in everyday life. The present research goes beyond those studies by including such an interplay: our participants drove in a car driving simulator and concurrently performed not just one repetitive loading task, but rather an ever-changing sequence of loading tasks that involved different stimulus modalities, different cognitive processes and different output channels. To our knowledge, ours is the first study to introduce such a multitude of intermixed loading tasks. Earlier virtual-reality studies reported a range of performance deficits under dual-task conditions. Thus, braking reaction times increased (Lamble et al., 1999;Lee et al., 2002;Strayer et al., 2003), gap estimations became less optimal (Brown et al., 1969), steering wheel control deteriorated (Kubose et al., 2006) and drivers responded to road hazards less often (Horberry et al., 2006). Findings were similar when loading tasks were administered while participants drove a real car on a closed-road circuit (Chaparro et al., 2005). The detrimental effects of loading tasks persisted even when drivers were encouraged to ignore them and to prioritize car braking (Levy and Pashler, 2008). Some of the available studies on dual-tasking in virtual reality dealt with older participants (Chaparro et al., 2005;Horberry et al., 2006;Anstey and Wood, 2011), but they didn't sufficiently compare their performance to that of young persons. The effects of old age on realistic dual-tasking, let alone on the natural interplay of dualtasking and task switching, are therefore still largely unknown. The main purpose of the present study was to close this gap in our knowledge. It is well established that divided and selective attention deteriorate with advancing age (e.g., Rabbitt, 1965;McDowd and Shaw, 2000;review in Verhaeghen et al., 2003), especially when the tasks are complex (Zanto and Gazzaley, 2014) and that this downward trend is associated with poorer driving safety (Ball et al., 1993). It therefore is quite conceivable that the natural interplay of dual-tasking and task switching in realistic scenarios deteriorates as well. However, it has also been shown that age-related deficits observed in the laboratory may be absent under more natural conditions (Bock and Beurskens, 2010;Verhaeghen et al., 2012), possibly because older persons capitalize on their lifelong experience (Salthouse, 1984;Neider and Kramer, 2011). We therefore hypothesized that both young and older persons will show multitasking deficits when driving, that these deficits will be more pronounced when the loading task requires substantial visual processing and thus introduces structural interference, and that the magnitude of those deficits will be only moderately higher in older compared to young persons because of lifelong experience. 
Summing up, our study is the first to compare young and older participants' driving skills when exposed to a natural interplay of dual-tasking and task switching. Participants Sixty-three young (age 20-30 years; M = 23.17, SD = 2.83, females = 40) and 61 older (age 65-75 years; M = 69.97, SD = 2.96, females = 22) adults were recruited via postings at public places, social media, contacts with local senior networks as well as the website of the German Sport University Cologne and the Chemnitz University of Technology. Inclusion criteria were: -A driving history of at least one trip per week during the last 6 months (self-report) -No experience in multitasking research or simulator driving by self-report -Good physical and mental health by self-report -No history of stroke or brain surgery and no red-green color blindness by self-report -A physician's health clearance based on an exercise ECG within the last 6 months -Visual acuity better than 20/60 (as assessed by the Freiburg Vision Test "FrACT", Version 3.9.0); although the minimum requirement for a drivers' license is 20/40 in most jurisdictions, driving safety is not degraded with a visual acuity of 20/60 (Keeffe et al., 2002). Those who met these criteria underwent screening tests to assure that they don't suffer from: cognitive impairment (assessed by the Mini-Mental State Examination; cutoff: 27/30 points), language comprehension deficits (assessed by the "Freiburger Sprachverständlichkeitstest"; cutoff: 50% word recognition at best hearing level) or obesity (cutoff: BMI ≥ 30). The Edinburgh Handedness Inventory (cf. Oldfield, 1971) was used to determine hand dominance. Five Participants were left-handed, all others were right-handed. One participant was ambidextrous but used the right hand for the typing task. Persons who usually wore contact lenses, prescription glasses or hearing aids did so as well while participating in our study. Participants were informed about the possibility to experience simulator sickness, and about their right to interrupt or abort the session at any time. Among the recruited persons, six young ones dropped out without giving a reason, three older ones because of simulator sickness and an older one because of reasons unrelated to our study. Registrations therefore were completed, and data were analyzed, from 63 young adults and 61 older ones. This study was carried out in accordance with the recommendations of the Ethics Commission of the German Sport University with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the Ethics Commission of the German Sport University. Participants received 15 € per session (60€ in total). Figure 1 shows a schematic top view of the setup, Figure 2 shows a photo of the realization and the environment. Participants sat in a conventional car seat in front of three 48 TV screens, which rendered the driver's view of cockpit and surrounds with a total viewing angle of 195 • . A steering wheel and pedal set (Logitech G27) were mounted in locations similar to a real car, and a numeric keypad ('K' in Figure 1) was mounted within easy reach. Participants wore a headset with microphone (shark zone H10, Sharkoon) not shown in Figure 1. Driving Task Commercially available driving simulator hard-and software (Carnetsoft R version 8.0) was used to display a softly winding rural road without traffic lights or intersections. 
The driving environment was realistically portrayed with road signs, buildings and other vehicles (cars, busses, and trucks) which traveled in the opposing lane at constant speed. The landscape contained animals, trees, bushes, fences, straw bales, mountains and clouds in a blue sky. Participants drove a VW Golf with automatic transmission, and had full front and side view out of the cockpit. The dashboard displayed the typical devices including a speedometer. Two side-view and one rear-view mirrors were located in the usual locations, and presented the expected views. Participants were instructed to follow a lead car which drove at a constant speed of 70 km/h. At irregular intervals, the lead car approached a construction site or a speed-restricted zone and slowed down to 40 km/h within 7 s. It kept this speed for 6 s, and then returned within 9 s to 70 km/h. Thus, participants had to slow down in order to avoid a collision, and to speed up afterwards in order to keep up with the leading car. We will refer to this maneuver as 'braking task.' Each driving trip was 25.7 km long, included 10 braking tasks and took about 25 min to drive. When drivers didn't keep up with the leading car and intervehicle distance exceeded 100 m, the leading car slowed down to 70% of the participants' current speed until inter-vehicle distance decreased to 50 m, and then sped up again. This ensured comparable inter-vehicle distances for all participants and conditions. Loading Tasks A battery of loading tasks was presented in a mixed order, at unpredictable times. Task presentation was identical for every participant. Tasks were modeled after natural activities, involved different sensory modalities and required different types of responses. A given type of any task was not presented twice in succession via the same modality. The sound volume of auditory stimuli was individually adjusted for each participant. Each of the three following types of task was presented 20 times during a driving trip: 10 times visually for 5 s in the middle of the windshield and 10 times auditorily over headphones (Example in Figure 3). -Typing: a three-digit number was presented, and participants responded by typing that number into the keypad. This task simulates operating, e.g., a radio receiver or GPS navigator. -Reasoning: a question which couldn't be answered by "yes" or "no" was presented, e.g., "What would be an argument against the taxation of sugar?" Participants responded verbally, and their response was registered by the headset microphone. This task simulates conversation with a car passenger or via a hands-free mobile phone. -Memory: In the visual version, participants passed a gas station equally often appearing on the right or left side of the road and were asked over headphones whether the displayed price for premium gas was the same as at the preceding gas station immediately after (Example in Figure 4). In the auditory version, participants heard a traffic announcement over headphones and were then asked whether the reported congestion (highway number, location, length) was the same as in the preceding traffic announcement. In both task versions, participants respond verbally "yes" or "no" into the headset microphone. Procedures Each participant completed four experimental sessions on separate days, with at least 1 day off in-between. This took between 8 and 28 days, depending on the participants' availability. 
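The lead-car behaviour described under "Driving Task" above (deceleration from 70 to 40 km/h within 7 s, a 6 s hold, re-acceleration within 9 s, plus the 100 m/50 m catch-up rule) amounts to a simple state-based controller. The sketch below is a plausible reconstruction of that logic, not the Carnetsoft implementation; all function and variable names are invented for illustration:

```python
# Hypothetical reconstruction of the lead-car logic described under "Driving Task"
# (not the Carnetsoft implementation; all names are invented for illustration).
# Speeds in km/h, distances in m; the simulator logs at 10 Hz.

def braking_profile(t_since_event):
    """Lead-car target speed during one braking event (t in seconds since event onset)."""
    t = t_since_event
    if t < 7:                        # decelerate from 70 to 40 km/h within 7 s
        return 70 - 30 * (t / 7)
    if t < 13:                       # keep 40 km/h for 6 s
        return 40
    if t < 22:                       # return to 70 km/h within 9 s
        return 40 + 30 * ((t - 13) / 9)
    return 70                        # cruise speed between braking events

def lead_car_target_speed(t_since_event, gap_m, driver_speed, catching_up):
    """Overlay the 100 m / 50 m distance-keeping rule on the braking profile."""
    if gap_m > 100:
        catching_up = True           # driver fell behind: slow to 70 % of the driver's speed
    elif gap_m < 50:
        catching_up = False          # gap closed again: resume the normal profile
    target = 0.7 * driver_speed if catching_up else braking_profile(t_since_event)
    return target, catching_up
```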
The first session included screening tests (to meet our inclusion criteria), driving simulator practice and practice of the loading tasks. Before the practice trials, participants received instructions and were encouraged to ask questions. Driving was practiced for 3-4 min, on the same course used for data collection. Loading tasks were practiced for 3-4 min on the same course as well, while the car drove in autopilot mode. The multitask condition (MT) was not practiced. The subsequent three sessions were administered in an order that was balanced across participants. In one session, participants drove behind the leading car with no additional tasks (singletask driving, ST D ). In another session, they drove behind the leading car while concurrently responding to the loading tasks (MT). In yet another session, the car drove in autopilot mode to provide a similar visual stimulation as in the other two sessions, and participants only responded to the loading tasks (ST L ). The driving course was identical in all three conditions. Before the practice trials and at the beginning of the 2nd, 3rd, and 4th session, the examiner read aloud the pertinent instructions and explained every task separately. (S)he then withdrew from the participants' view; during the remainder of the session, (s)he took notes and supervised the procedure without disturbing or interacting with the driver. Participants also underwent cognitive and physical testing, and their street-crossing behavior was examined in a separate virtual-reality setup. This paper focuses on driving, a separate contribution in this issue deals with street crossing, and the other outcomes will be communicated later. Data Analysis Driving performance in MT was analyzed within road segments of interest. Each segment started with the presentation of a loading task and ended 1 s before presentation of the next loading task. Segment duration varied, in dependence on driving speed and loading-task distance, in the range 17.46 ± 2.45 s (Mean duration ± standard deviation). We adopted this particular definition of road segments in order to analyze driving performance even when responses required substantial time for pondering and verbalizing. On rare occasions, reasoning took longer than the duration of the pertinent road segment; we then decided case by case whether the response was substantially completed and if not, marked it as 'invalid.' Since the driving course was identical in all three conditions, we could analyze participants' performance in each condition within the same road segments (i.e., same road curvature and visual scenery). However, this similarity of the driving environment does not extend to the individual loading tasks: it is conceivable that on the average, one loading task was presented on curvier road segments and/or in a more cluttered visual scenery than another loading task. Differences between tasks are therefore confounded by differences between road conditions. By the same token, differences between modalities are confounded by differences between road conditions. Scattering of loading tasks along therefore added to the realism of our paradigm, but hinders comparisons between tasks and modalities. The simulator software registered a range of continuous signals at a rate of 10 Hz. Among them were the lateral position of the driven car (0 m: car centered in its lane; <−0.78 m: right wheels off the road), and its distance from the lead car (0 m: bumpers touch). 
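A minimal sketch of how the 10 Hz simulator log could be cut into the road segments defined above (from loading-task onset to 1 s before the next onset); the data layout and the handling of the final segment are assumptions for illustration, not the authors' analysis code:

```python
# Illustrative segmentation of the 10 Hz simulator log into the road segments of interest
# (loading-task onset to 1 s before the next onset); the data layout is an assumption,
# not the authors' analysis code. The last segment is assumed to run to the end of the trip.
SAMPLE_RATE_HZ = 10

def cut_segments(samples, task_onsets_s, trip_end_s):
    """samples: one 10 Hz signal (e.g., velocity); task_onsets_s: loading-task onset times in s."""
    segments = []
    for i, onset in enumerate(task_onsets_s):
        next_onset = task_onsets_s[i + 1] if i + 1 < len(task_onsets_s) else trip_end_s + 1.0
        start = int(onset * SAMPLE_RATE_HZ)
        stop = int((next_onset - 1.0) * SAMPLE_RATE_HZ)   # segment ends 1 s before the next task
        segments.append(samples[start:stop])
    return segments
```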
From these signals, we calculated the following parameters for each road segment of interest: -Mean velocity -Standard deviation of velocity (SD velocity) -Mean lateral position -Standard deviation of the lateral position (SD lateral position). Furthermore, we calculated the following parameters for the typing and the memory task: -Reaction time (RT): Interval between task presentation and response onset -Correctness (COR): Proportion of all correct key presses in the typing task [0.00 (all wrong), 0.33 (one correct), 0.67 (two correct) or 1.00 (all correct)]; response correctness in the memory task [0 (wrong) or 1 (correct)]. Reaction time and COR in the typing task were determined by a software algorithm. RT in both other tasks was determined manually, by setting a cursor in the visually displayed voice tracks. COR in both other tasks was determined by listening to the voice tracks. We noticed during data analysis that in the memory task, older participants often started to respond even before the verbal question was completed. We therefore decided to exclude RT in the memory task from further analyses. All other parameters were averaged across the 10 repetitions of each task, excluding outliers as identified by the ± 3.29 SD criterion (Tabachnick et al., 2001). Statistical Analyses Averaged scores were submitted to four-way analyses of variance (ANOVAs) with repeated measures on the factors Condition (ST and MT), Task (memory, reasoning, and typing) and Modality (visual and auditory) and the between-factor Group (young and older). We interpreted ηp² values < 0.06 as small, 0.06-0.14 as medium and >0.14 as large effects (Cohen, 1992). P < 0.05 was set for statistical significance. When the assumption of sphericity was violated in Mauchly's tests, degrees of freedom were Greenhouse-Geisser corrected. We used IBM SPSS Statistics, version 25 (IBM Corp., Armonk, NY, United States) for those calculations.
Figure 5 illustrates the driving parameter mean velocity of both age groups in ST D and in MT, separately for all six combinations of loading task and modality. ANOVA (see Table 1) yielded a significant main effect for Condition: participants drove more slowly in MT compared to ST D (F = 12.07, p = 0.00, ηp² = 0.09, df = 1, 122). The mean difference between MT and ST D was 1.35 ± 0.74 km/h. Slowing was least pronounced for the memory task and most pronounced for the reasoning task (significant ANOVA effect for Condition × Task), particularly when the latter was presented visually (significance for Condition × Modality, Task × Modality and Condition × Task × Modality). We further found a significant main effect for Group: older participants drove more slowly than young ones (F = 15.62, p = 0.00, ηp² = 0.11, df = 1, 122). The mean difference between young and older persons was 3.89 ± 0.41 km/h. We also observed significant main effects for Task (F = 78.98, p = 0.00, ηp² = 0.39, df = 1.92, 244) and for Modality (F = 22.25, p = 0.00, ηp² = 0.39, df = 1, 122): participants drove more slowly with the reasoning compared to the memory and the typing task, and they drove more slowly when tasks were presented visually rather than auditorily.
FIGURE 5 | Mean velocity ± SE of both age groups in single task (ST D ) and multitask (MT) conditions. Memory, reasoning and typing task were presented auditorily (_a) and visually (_v).
Figure 6 illustrates corresponding data for the parameter SD velocity. ANOVA (see Table 2) revealed a significant main effect for Condition: speed variability scores were 0.75 ± 0.48 km/h higher in MT compared to ST D (F = 32.60, p = 0.00, ηp² = 0.21, df = 1, 122). This increase was particularly pronounced for the visually presented reasoning task and when the typing task was presented auditorily (significance for Condition × Task, Modality, Condition × Modality, Task × Modality and Condition × Task × Modality). We further found a significant main effect for Group (F = 30.70, p = 0.00, ηp² = 0.20, df = 1, 122): variability scores were −1.87 ± 0.19 km/h higher in older compared to young persons. We also found a significant main effect for Task (F = 230.39, p = 0.00, ηp² = 0.65, df = 1.69, 206.19): variability scores were higher for the reasoning task compared to the memory and the typing task.
FIGURE 6 | Standard deviation of velocity ± SE of both age groups in single task (ST D ) and multitask (MT) conditions. Memory, reasoning and typing task were presented auditorily (_a) and visually (_v).
Figure 7 shows the parameter mean lateral position of both age groups in ST D and in MT, separately for all six combinations of loading task and modality. ANOVA (see Table 3) yielded a significant main effect for Condition: participants drove more laterally in MT compared to ST D (F = 11.10, p = 0.00, ηp² = 0.08, df = 1, 122). Mean difference between MT and ST D was 0.12 ± 0.05 m. This shift toward the curb was larger when the memory task was presented visually and when the reasoning and typing tasks were presented auditorily, more so in older than in young persons [significance for Modality (F = 61.91, p = 0.00, ηp² = 0.34, df = 1, 122), Group × Modality, Task × Modality and Condition × Group × Modality]. The main effect for Group was not significant, but a significant effect of Task (F = 79.79, p = 0.00, ηp² = 0.40, df = 1.72, 209.82) and Group × Task emerged: participants drove more laterally when performing the memory task and this shift toward the curb was much more pronounced in older persons.
FIGURE 7 | Mean lateral position ± SE of both age groups in single task (ST D ) and multitask (MT) conditions. Memory, reasoning and typing task were presented auditorily (_a) and visually (_v).
Figure 8 illustrates corresponding data for the parameter SD lateral position. ANOVA (see Table 4) revealed a significant main effect for Condition: scores were higher for MT compared to ST D (F = 10.53, p = 0.00, ηp² = 0.08, df = 1, 122), but this was limited to older participants performing the typing task (significance for Task (F = 93.68, p = 0.00, ηp² = 0.43, df = 1.88, 244), Condition × Task, Condition × Group, Group × Task and Condition × Group × Task). Mean absolute difference between MT and ST was 0.03 ± 0.12 m. We also found a significant main effect for Group: scores were higher in older compared to young participants, with a mean difference of 0.17 ± 0.10 m (F = 20.82, p = 0.00, ηp² = 0.15, df = 1, 122). Finally, there was a significant main effect for Modality (F = 53.60, p = 0.00, ηp² = 0.31, df = 1, 122): scores were higher with auditory rather than visual task presentation, particularly for older participants in the typing task (significance for Group × Modality, Task × Modality and Group × Task × Modality).
FIGURE 8 | Standard deviation of lateral position ± SE of both age groups in single task (ST D ) and multitask (MT) conditions. Memory, reasoning and typing task were presented auditorily (_a) and visually (_v).
We noticed that participants sometimes veered off their driving lane when they engaged in the typing task.
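The age comparison of such off-road events reported immediately below (roughly 78% of the 61 older versus 40% of the 63 young drivers reaching the curb at least once) can be checked with a standard two-proportion z-test; the sketch below only illustrates that test with approximate counts and is not the authors' analysis script:

```python
# Illustrative two-proportion z-test for the off-road rates (not the authors' script).
# Counts are approximated from the reported percentages: ~78% of 61 older and ~40% of 63 young drivers.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal approximation
    return z, p_two_sided

z, p = two_proportion_z(round(0.78 * 61), 61, round(0.40 * 63), 63)
print(f"z = {z:.2f}, two-sided p = {p:.2g}")   # p is well below 0.001, as reported
```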
78% of older participants but only 40% of young ones reached the curb with their right wheels during at least one presentation of the typing task; this age difference is statistically significant (test of proportions: p < 0.001). Furthermore, 15% of older participants but 0% of young ones crossed the median with their left wheels at least once; this age difference is again statistically significant (p < 0.01). Figure 9 depicts the RT in the typing task. ANOVA (see Table 5) yielded a significant main effect for Condition (F = 30.70, p = 0.00, ηp² = 0.20, df = 1, 122): RT was higher in MT compared to ST L ; however, this finding was limited to older participants (significance for Group × Condition). The mean difference between MT and ST L was −0.32 ± 0.29 s. We further found a significant main effect for Group: RT of older participants was 0.37 ± 0.18 s higher than that of young ones (F = 10.22, p = 0.00, ηp² = 0.08, df = 1, 122). We also observed a significant main effect for Modality (F = 124.44, p = 0.00, ηp² = 0.50, df = 1, 122): RT was higher with auditory compared to visual presentation, more so in MT (significance for Condition × Modality) and in young persons (significance for Group × Modality).
FIGURE 9 | Reaction time (RT) ± SE in the typing task for both age groups in single task (ST D ) and multitask (MT) conditions. The typing task was presented auditorily (_a) and visually (_v).
Figure 10 shows COR in the typing task. ANOVA (see Table 6) revealed a significant main effect for Condition (F = 66.00, p = 0.00, ηp² = 0.35, df = 1, 122): COR was lower by 0.036 ± 0.007 in MT compared to ST L , but this difference only occurred for older participants (significance for Condition × Group). There also was a significant main effect for Group (F = 8.56, p = 0.00, ηp² = 0.07, df = 1, 122) and for Modality (F = 8.78, p = 0.00, ηp² = 0.07, df = 1, 122), as COR was lower by 0.026 ± 0.012 in older compared to young persons, and lower for auditory compared to visual presentation.
FIGURE 10 | Correctness (COR) ± SE in the typing task for both age groups in single task (ST D ) and multitask (MT) conditions. The typing task was presented auditorily (_a) and visually (_v).
Loading Tasks Reaction time data from the memory task were not analyzed (see above), and COR data were not complete since the data sets of two older persons were lost for technical reasons. The remaining data are shown in Figure 11. ANOVA (see Table 7) revealed only a significant main effect for Group: COR was lower by 0.057 ± 0.011 in older compared to young participants (F = 19.31, p = 0.00, ηp² = 0.14, df = 1, 122).
FIGURE 11 | Correctness (COR) ± SE in the memory task for both age groups in single task (ST D ) and multitask (MT) conditions. The memory task was presented auditorily (a) and visually (v).
DISCUSSION This study deals with multitasking in simulated car driving. It differs from earlier work on this topic in two ways. First, we use not just one repetitive loading task but rather a mixed sequence of different loading tasks, to simulate the natural interplay of dual-tasking and task switching. Second, we compare driving performance of young to that of older persons. Our work addressed three hypotheses. According to one, performance of young and older persons will decrease under multitasking conditions. Indeed, we found significant main effects of Condition for all six outcome parameters. According to our second hypothesis, the effects of multitasking will be larger with visual compared to auditory loading tasks, because of structural interference.
We found significance of Condition × Modality for only three of our six parameters; we also observed three significant effects of Condition × Task × Modality, since effects of multitasking were sometimes smaller rather than larger with a visual loading task. We therefore found no unanimous support for the second hypothesis. Our third hypothesis stipulates that multitasking deficits may not be much larger in older compared to young persons, since cognitive decay is compensated by lifelong experience. Indeed, significance of Condition × Group emerged for only one driving parameter and was qualified by significance of Condition × Group × Task: when multitasking, lateral lane variability increased in older persons more than in young ones, but only with the typing task. Accordingly, significance of Condition × Group also emerged for both parameters related to typing. Our data therefore indicate that age-related deficits of multitasking emerge for some but not for other loading tasks, which adds partial support to our third hypothesis. Compared to single-task driving, participants in MT drove at a lower speed, with a higher speed variability and at a more lateral lane position. Similarly, Chaparro et al. (2005), Horberry et al. (2006), Horrey and Wickens (2006), and Strayer et al. (2006) reported lower speed and deficient lane keeping under dual- compared to single-task driving. As an example, Strayer et al. (2006) found driving speed to decrease by about 2.2 km/h when participants were talking on a mobile phone, while the decrease was about 1.4 km/h in the present multitasking study. More research is needed to find out whether our loading tasks were less disruptive than the task of Strayer et al. (2006) or, alternatively, whether multiple loading tasks are less disruptive than one single loading task. The observed reduction of driving speed and the more lateral lane position could represent compensatory strategies, implemented to avoid collisions with the leading car and with oncoming traffic in high-demand driving situations. The observed increase of speed variability could be a more direct marker of high demand: possibly, participants slowed down when their attention was focused on the loading task, and sped up to catch up with the leading car when attention was redirected to the driving task. We further found that compared to young participants, older ones drove at a lower speed, with a higher speed variability and at a more lateral lane position. In other words, old age and multitasking had similar effects on driving, and possibly so for similar reasons, namely, a higher cognitive demand of driving. We also observed that older persons' performance on the memory task was poorer than that of young ones, which concurs with the known age-related deficits of working memory (Salthouse and Babcock, 1991;Waters and Caplan, 2001;Voelcker-Rehage et al., 2006). Chaparro et al. (2005) reported that a loading task had stronger effects on driving when it was presented visually rather than auditorily.
We can't confirm this observation unanimously, and therefore can't claim unequivocal support for the structural-interference model (Gopher and Donchin, 1986;Duncan et al., 1997). Although we hypothesized that age related deficits of multitasking are compensated by experience (see section "Introduction") differential effects of age on multitasking were observed. Performance of older persons suffered more than that of young ones with the loading task 'typing, ' not with 'reasoning' or 'memory.' Critically, this often let especially older persons veer off the lane when typing. The detrimental effect 'typing' on older persons could reflect the known age-related problems of attention engagement/disengagement (D'Aloisio and Klein, 1990), gaze control (Maltz and Shinar, 1999;Bock et al., 2015) and/or limb coordination (Darling et al., 1989;Ketcham et al., 2002). Since the keypad was located near the steering wheel, participants had to shift their attention, gaze and arm toward a new location in task 'type, ' but not in the other two loading tasks. In any case, our finding could be of substantial relevance for the driving safety of older persons since activities similar to task 'type' are quite common in driving: drivers often operate radios, navigation systems and other dashboard instruments, open and close windows, adjust side and rear mirrors, and on longer trips may even reach for drinks and food located elsewhere in the car cabin. It would be interesting to know whether multitasking skills can be improved by practice. Previous work has shown that dualbut not single-task training improves dual-task performance (Silsupadol et al., 2009) and accordingly, multitask-but not dualor single-task training may improve performance on a realistic multitask. Future research should determine whether the effects of multitasking in our study, and their modulation by age, are similar, larger or smaller than those documented by traditional dual-task studies which disregarded the natural interplay of dual-tasking and task switching (see section "Introduction"). Furthermore, our present multitasking paradigm should be expanded to allow for more than two tasks at a given time; for example, participants could drive a car, memorize events in the environments and keep up a conversation all at the same time, then switch to driving, memorizing and typing, etc. DATA AVAILABILITY The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher. AUTHOR CONTRIBUTIONS CV-R and OB contributed conception and design of the study. KW wrote the first draft of the manuscript. CV-R, CJ, MH, UD, OB, and KW wrote sections of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.
2018-06-15T13:08:03.506Z
2018-06-15T00:00:00.000
{ "year": 2018, "sha1": "5938e3fe705364c348610ea63ed92ce0e4ac5c56", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fpsyg.2018.00910", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5938e3fe705364c348610ea63ed92ce0e4ac5c56", "s2fieldsofstudy": [ "Psychology", "Engineering" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
247524874
pes2o/s2orc
v3-fos-license
Standards and quality of care for older persons in long term care facilities: a scoping review Background Caring for older persons has become a global necessity to ensure functional ability and healthy ageing. It is of paramount importance that standards of care are monitored, especially for older persons who live in long term care facilities (LTCF). We, therefore, scoped and summarised evidence relating to standards and the quality of care for older persons in LTCFs in gerontological literature globally. Methods We conducted a scoping review using Askey and O’Malley’s framework, including Levac et al. recommendations. PubMed, CINAHL, Health Sources, Scopus, Cochrane Library, and Google Scholar were searched with no date limitation up to May 2020 using keywords, Boolean terms, and medical subject headings. We also consulted the World Health Organization website and the reference list of included articles for evidence sources. This review also included peer-reviewed publications and grey literature in English that focused on standards and quality of care for older residents in LTCFs. Two reviewers independently screened the title, abstract, and full-text of evidence sources screening stages and performed the data extraction. Thematic content analysis was used, and a summary of the findings are reported narratively. Results Sixteen evidence sources published from 1989 to 2017 met this study’s eligibility criteria out of 73,845 citations obtained from the broader search. The majority of the studies were conducted in the USA 56% (9/16), and others were from Canada, Hong Kong, Ireland, Norway, Israel, Japan, and France. The included studies presented evidence on the effectiveness of prompted voiding intervention for urinary incontinence in LTCFs (37.5%), the efficacy of professional support to LTCF staff (18.8%), and the prevention-effectiveness of a pressure ulcer programme in LTCFs (6.3%). Others presented evidence on regulation and quality of care (12.5%); nursing documentation and quality of care (6.3%); medical, nursing, and psychosocial standards on the quality of care (6.3%); medication safety using the Beer criteria (6.3%); and the quality of morning care provision (6.3%). Conclusion This study suggests most studies relating to standards and quality of care in LTCFs focus on effectiveness of interventions, few on people-centredness and safety, and are mainly conducted in European countries and the United States of America. Future studies on people-centerdness, safety, and geographical settings with limited or no evidence are recommended. Supplementary Information The online version contains supplementary material available at 10.1186/s12877-022-02892-0. above is projected to rise to about 2.1 billion in the next 15 years [1]. This surge in ageing will potentially increase the demand for long term care due to a deterioration in functional capacity experienced by older persons [2]. Long term care encompasses a diversity of services, including rehabilitative, restorative and ongoing-nursing care to address individualised health, social or personal care of the aged [2]. The services are planned to help the individuals live independently and safely while performing their daily activities, which would have been difficult/ impossible if they had lived alone [2,3]. Long term care can be rendered in formal or informal settings by a variety of trained or untrained caregivers, including family members, depending on the setting. 
Formal long term care facilities (LTCFs) for the aged such as nursing homes and residential care homes, supplement family members' support for their ageing relatives by providing diverse professional services [2]. These formal LTCFs offer tailored services to their residents to meet their changing needs and that of their family members. The services in response to the need of the resident are often administered by diverse trained staff attached in the LTCFs to ensure that older people who are with or at risk of a significant or ongoing loss of intrinsic capacity can maintain a level of functional ability as stipulated by the World Health Organization (WHO) [4]. Functional disability is the prime reason for using LTCFs [2,3]. To this end, it is of the utmost importance to assess the quality of care delivered to the aged living in LTCF by monitoring their safety, efficiency and effectiveness of practices, and people-centeredness with regard to timeliness and fairness of interventions [5]. However, to enable the delivery of quality care to residents in LTCFs, it is essential to adhere to standards and ensure that the caregiving professionals are adequately trained to adhere to set criteria. Standards for clinical and non-clinical care are critical to maintaining healthy ageing for residents in LTCFs. A scoping review focusing on standards and the quality of care delivered to older people as residents in LTCFs is needed to synthesize and highlight gaps in the literature to facilitate or direct future research to ensure healthy ageing in line with the WHO in its global strategy and action plan [6], and the sustainable development goals Plan of Action for older persons [7]. Although several prior scoping reviews have been conducted [8][9][10][11][12][13][14][15][16][17], none of these previous reviews focused on standards and the quality of care for older residents in LTCFs. Therefore, this study aimed to scope and summarise the evidence relating to standards and the quality of care for older persons in LTCFs in the English gerontological literature worldwide. Scope of review This study adopted Arksey and O'Malley's framework, including Levac et al. recommendations as a guide [18,19]. This study used five of the six steps outlined in the framework as follows: identifying the research question; identifying relevant evidence sources; selecting evidence sources; charting the data, and collating, summarizing, and reporting the results [18]. This study's protocol was developed a priori and published [20]. This study population included individuals aged 65 years or more resident in LTCFs, the concept included standards (a duty determined by a given set of circumstances that present in a particular patient, with a specific condition, at a definite time and place) for care of older persons, and the context was quality of care as per the WHO definition (the extent to which health services for individuals and populations increase the likelihood of desired health outcomes. That is, safety, effectiveness, and people-centerdness through timely, efficient, integrated, and equitable health care) [5]. Evidence sources published globally and grey literature relating to standards and quality of care of older persons in the LTCFs were included [20]. Limits included only English language publications and primary study designs [20]. 
Identifying the research question This study sought to answer the main research question: To date, what evidence and knowledge gaps exist relating to standards and the quality of care for older persons in LTCFs? The population, concept, and context framework was used to define the eligibility of this review question. Identifying relevant studies/evidence We systematically searched the literature to retrieve grey literature and published studies relating to standards and the quality of care for older persons in LTCFs. We used a combination of keywords ("older person", "aged", "elderly", "aging", "ageing", geriatric, standard of care", "standard", "care", "clinical practice guideline", "quality of care", "long term care facility", "long term care setting", "nursing home"), Boolean terms (AND/OR), and Medical Subject Headings (MeSH) terms during the search [20]. This study limited the search on PubMed, EBSCOhost (CINAHL with full text and Health Sources), Scopus, Cochrane Library, and Google Scholar for relevant peerreviewed publications in English from inception to May 2020. We also consulted the WHO website and the reference list of included articles. Each search was adequately documented (Supplementary file 1). The Peer Review of Electronic Search Strategies statement [21] guided this study's electronic search strategy. EndNote X9 reference manager was used to compile all relevant sources of evidence and identify and remove duplicates. Screening and selection of studies LK conducted the database searches and titles screening assisted by JvW after the search strategy and screening methods were piloted to calibrate operators, increase consistency, and fine-tune the methods. PG reviewed the retrieved titles to ensure completeness prior to abstract screening. Subsequently, the cleaned EndNote library was shared among the review team following the removal of duplicate titles. Using an electronic screening tool developed in Google forms, LK and JvW independently screened the abstracts, and full texts and categorized them into an "include" or "exclude" category based on the study's eligibility criteria (scope of review). The review team resolved all discrepancies (relating to the eligibility of an evidence source) between LK and JvW at the abstract screening stage through discussions until consensus was reached, whilst PG resolved the discrepancies between LK and JvW at the full-text screening phase. Cohen's kappa coefficient (κ) statistic was calculated to determine the inter-rater agreement between the reviewers at the full-text screening phase and Kappa statistic less than 50, 50-70%, and greater than 70% were respectively interpreted as poor, moderate, and substantial agreement. We adapted the PRISMA flow diagram to present the screening results [22]. Charting the data LK and DK independently extracted all relevant data from the evidence sources using a form developed in Microsoft Excel. Prior to the data extraction, the form was piloted by LK and DK using 10 % of the evidence sources to ensure accuracy and reliability of the data. LK thoroughly read the full texts and extracted all relevant data from the included studies. Inductive and deductive approaches were employed to extract relevant data. We extracted data that described the characteristics of the study: author(s) and publication year, methodological details, standards/interventions, standard procedure, and the study findings relating to the quality of care. 
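Inter-rater agreement of the kind reported for the full-text screening stage can be computed as Cohen's kappa from the two reviewers' include/exclude decisions; the sketch below is a generic illustration with made-up decisions, not this review's screening data:

```python
# Generic Cohen's kappa for two reviewers' include/exclude decisions (made-up example data,
# not the review's screening records).
def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (observed - expected) / (1 - expected)

a = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude", "exclude"]
print(f"kappa = {cohens_kappa(a, b):.2f}")   # values above 0.70 would count as substantial agreement
```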
Quality appraisal We employed the mixed method quality assessment tool to conduct a quality appraisal of each included primary study [23]. Methodological quality appraisal of individual studies is not required for a scoping review study. Still, we considered it essential for inclusion to enable this study to assess the validity of conclusions drawn by each included study. LK and JvW independently conducted the methodological quality assessment and scored each included study using the screening questions and the set of quality appraisal questions prescribed by the MMAT for the study design employed (randomized controlled trial, non-randomized study, and quantitative descriptive studies). Then, an overall quality score was calculated for each retained study using the MMAT. The quality score was generated into a percentage and graded as low (less than 50%), average (50 to 75%) and high (greater than 75% as published in previous study [24]. Collating, Summarising, and reporting the results A content analysis [25] of the extracted studies was performed to categorized the reported standards of care into themes. A further content analysis of the findings reported for each theme was performed to link each theme to the quality component been addressed based on the WHO definition quality of care (safety, effectiveness, and people-centrerdness through timely, efficient, integrated, and equitable health care). A narrative summary of the findings for each standard (quality component been addressed) was reported. The characteristic of the included articles was described using frequencies and percentages. Results The broader electronic search yielded 73,845 citations, of which 167 potentially eligible titles were identified with eight duplicates. Subsequently, 123 and 22 evidence sources were removed at the abstract and fulltext screening stages, respectively. Finally, 16 evidence sources, including two articles obtained from reference list searches met the inclusion criteria and were included for data extraction and review. There was substantial agreement between the reviewer's responses at the full article screening stage (Kappa statistic = 0.85, p < 0.01). Twelve were clinical practice guidelines with no human participants [26][27][28][29][30][31][32][33][34][35][36]. Five of the evidence sources excluded at the full-text stage were other review studies [37][38][39][40][41], three did not include this study's population [42][43][44], one was a hospital-based study [45], and one did not have any standard of care [46] (Fig. 1). Practice guidelines/criteria for older residents in LTCFs Aside from the 16 included studies, this review revealed 12 practice guidelines for care of older persons in LTCFs. Namely; practice guidelines for evaluation of fever and infection [26], practice guidelines for improving medication management [27], oral health care guidelines [28,31], and standard guidelines for specialized nutrition support [63]. Clinical practice guidelines for the evaluation of fever and infection [30], guidelines for reducing the risk of aspiration pneumonia through oral health care [31], standards for psychological services [32], infection prevention and control [33], prevention of influenza [64], and recommendations for the management of Clostridium difficile [34] were also revealed. The remainder was a framework to combat antimicrobial resistant bacteria [35] and criteria for determining inappropriate medication use [36]. 
Most of these guidelines focused on clinical care for older residents in LTCFs. This finding suggests a dearth of guidelines for non-clinical care for older people resident in LTCFs.

Findings from the primary studies
Prompted voiding interventions (effectiveness)
Six of the 16 included studies highlighted evidence on prompted voiding interventions/standards [47,50-53,57]. Burgio et al. indicated a significant improvement on the two-hourly schedule in one of the four groups involved, and two groups appeared to improve on the less intensive three-hour schedule (P < 0.05) [47]. Moreover, the authors indicated that during training, self-initiated toileting decreased (P < 0.05) and voided volumes in a suitable receptacle increased (P < 0.05) [47]. Lai et al. investigated the effectiveness of the use of prompted voiding by nursing home staff in managing urinary incontinence among residents in Hong Kong over 6 months [57]. Significant differences in wet episodes, incontinence rate, and total continent toileting per day between the control and intervention groups were noted 6 months after the intervention, with a 9% reduction in incontinence in the intervention group [57]. Schnelle et al. appraised a prompted voiding treatment for patients presenting with urinary incontinence in nursing homes in the USA by checking patients hourly, ascertaining whether they required toileting assistance, and socially reinforcing proper toileting [50]. In their study, the frequency of incontinence per 12 h decreased from an average of 3.85 at baseline to an average of 1.91 during the treatment [50]. In another article aimed at providing a controlled experimental evaluation of prompted voiding procedures in 126 patients, Schnelle et al. reported that prompted voiding treatment significantly reduces incontinence frequency in patients who can initiate voiding when prompted [51]. They found no differences between the immediate and delayed treatment groups at baseline (Phase 1), but found significant differences in Phase 2 (F(1,125) = 33.64, P < 0.001) [51]. Nonetheless, the treatment effects were replicated in Phase 3 when both groups of patients received treatment, with no significant differences between Phase 2 and Phase 3 (F(1,125) = 0.008, P < 0.931) [51]. Schnelle et al. also used a statistical quality-control process to assess the effectiveness of incontinence management procedures by indigenous nursing staff in four nursing homes [52]. Their study revealed that 36 out of the 81 patients were responsive to the toileting procedures [52]. The overall average expected wetness for all toileting patients was 18% (SD 16%) [52]. Furthermore, Schnelle et al. reported that nearly 75% of the 344 residents significantly improved in wetness, and 35% (120/344) decreased wet episodes to less than 1 per 12-h period, in a study aimed at providing a specific illustration of how such management technologies can improve nursing aides' incontinence care [53].

Provision of professional support to LTCF staff (effectiveness)
Three of the 16 included studies presented evidence on professional support to LTCF staff and quality of care. Rolland et al. investigated the effects of a global intervention that included professional support and education for nursing home staff on quality indicators as well as on functional decline and emergency department transfers of residents [62]. At the outset, they reported that quality indicators in nursing homes in France were generally low [62].
The annual rate of transfer to the emergency department was found to be high (about 20%) in both the intervention and control groups [62]. The global intervention was found to have a significant positive effect on the prevalence of assessment of pressure ulcer risk, depression, and pain, and on the prevalence of emergency department transfers, but had no significant impact on functional decline [62]. Ryden et al. investigated the impact on clinical outcomes when advanced-practice gerontological nurses collaborated with nursing home staff in the United States to implement evidence-based protocols for incontinence, pressure ulcers, depression, and aggressive behavior [49]. Eighty-six residents who received input from gerontological advanced practice nurses (GAPNs) in their care improved significantly more (less decline in incontinence, pressure ulcers, and aggressive behavior, and higher mean composite trajectory scores) compared with 111 residents who received standard care [49]. As a result, Ryden et al. proposed that GAPNs can serve as important bridges between current scientific knowledge about clinical problems and nursing home staff [49]. The effectiveness of the second tier of interventions in a two-tiered nursing intervention model designed to improve the quality of care for residents in LTCFs in the United States was tested by Krichbaum et al. [48]. The first tier of the model required GAPNs to provide direct care and teach staff how to implement care protocols for residents with incontinence, pressure ulcers, depression, and aggression, while the second tier required GAPNs to add a set of organization-level (OL) interventions such as membership on the LTCF quality assurance committee and collaboration with staff on problem-solving teams [48]. In the first tier, there was a significant improvement in resident outcomes for incontinence, pressure ulcers, and aggression [48]. The addition of OL interventions also revealed a significant improvement in both depression scores and depression trajectory in LTCF residents who received OL interventions [48].

Effect of regulation of LTCFs (effectiveness)
Two of the 16 included studies reported evidence relating to regulation and quality of care in LTCFs. Bravo et al. compared the mortality rate in regulated and unregulated facilities and concluded that quality of care has a much stronger influence on resident outcomes in Canada than regulation [56]. The study found that a resident's length of survival in an LTCF is unaffected by the regulatory status of the facility where he or she lived at the start [56]. Nonetheless, residents who received low-quality care at the outset had shorter survival times than those who received good care [56]. The median survival time for residents receiving inadequate care was 28 months, compared to 41 months for those receiving adequate care (p = 0.0217) [56]. The Kirkevold and Engedal study described the extent to which nursing homes provided services in accordance with the 'Regulation of quality of care' and reported that the majority of residents in Norwegian nursing homes received good basic care [59]. However, the study found that residents had fewer opportunities to participate in leisure activities such as going for a walk [59]. Low function in mental capacity, low function in activities of daily living, and aggressive behavior in residents were found to have a strongly negative association with acceptable quality of care [59].
Documentation of nursing care (people-centeredness)
One of the 16 included studies examined nursing documentation and quality of care. Broderick et al. investigated nursing care documentation in Ireland's long-term care facilities and described aspects of person-centered care as evidenced in nursing records [58]. In their study, they revealed that many nursing records were incomplete and contained infrequent information about psychosocial aspects of care [58]. The nurses interacted with the residents and worked with their beliefs and values, but nursing documentation was not completed in consultation with the patient, and there was little evidence that patients were involved in care decisions [58].

Medical, nursing, and psychosocial standards of care (people-centeredness)
One of the 16 included studies also reported findings on medical, nursing, and psychosocial standards of care quality. Fleishman et al. assessed the quality of care in Israeli LTCFs, focusing on medical, nursing, and psychosocial standards of care (using tracers such as hypertension, vision difficulties, hearing difficulties, oral health problems, mobility problems, difficulty in washing, difficulty in dressing, difficulty in brushing teeth, urinary incontinence, loneliness, and lack of autonomy) [61]. According to the Fleishman et al. study, residents in good units were more satisfied than residents in poor units [61]. Residents in independent and frail units, on the other hand, were more satisfied than residents in nursing units [61]. Loneliness, autonomy satisfaction, satisfaction with staff attitudes, and living conditions were all significant predictors of overall satisfaction (R² = 0.478, p < 0.001) [61].

Medication safety (Beers criteria) (safety)
One of the 16 included studies reported on medication safety. Niwata et al. assessed inappropriate medication in LTCFs in Japan based on the Beers criteria. The study indicated that 356 (21.3%) of the 1669 patients were treated with potentially inappropriate medication independent of disease or condition [61]. Ticlopidine was the most commonly inappropriately prescribed medication (107 patients, 6.3%) [61]. The study further indicated that 300 (18.0%) patients were treated with at least one inappropriate medication dependent on the disease or condition [61]. Factors related to inappropriate medication use independent of disease or condition included psychotropic drug use (OR = 1.511), medication cost per day (OR = 1.173), number of medications (OR = 1.140), and age (OR = 0.981) [61].

Provision of morning care (staff assistance with either transfer out of bed, dressing, and/or incontinence care) (people-centeredness)
One of the 16 included studies reported on morning care. Simmons et al. examined three aspects of morning care (staff assistance with either transfer out of bed, dressing, and/or incontinence) and reported that 40% of the observations showed a lack of morning care provision, including any staff-resident communication about care, during the 4-h observation period [55]. The findings of that study reported that residents who were physically more dependent and required two members of staff for transfer were more likely not to receive morning care [55].

Prevention of pressure ulcers (effectiveness)
Shannon et al. (2012) [54] assessed the comparative prevention-effectiveness of a pressure ulcer prevention programme (PUPP) against the standard practice of prevention using the Agency for Health Care Policy and Research guidelines and an assortment of commercial skin care products, briefs, pads, and mattresses in the USA [54]. The study indicated that the PUPP strategy resulted in a 67% reduction in the incidence of nosocomial pressure ulcers over a 6-month period for the residents [54].

Discussion
This study scoped and summarised published evidence relating to standards and the quality of care for older persons in LTCFs in the gerontological literature globally. This review found 16 studies relating to standards and quality of care in LTCFs, which were published between 1989 and 2017. The included studies mostly focus on the effectiveness of interventions, with few on people-centeredness and safety, and they were mainly conducted in European countries and the United States of America. The majority (37.5%) of the included literature demonstrated the effectiveness of prompted voiding interventions for urinary incontinence in LTCFs, the provision of professional support to LTCF staff, and the PUPP strategy. Within the LTCF context, this study also revealed literature on regulation and quality of care; nursing documentation and quality of care; medical, nursing, and psychosocial standards relating to the quality of care; inappropriate medication use according to the Beers criteria; and the quality and provision of morning care in LTCFs. To the best of our knowledge, this scoping study is the first comprehensive review of standards and quality of care for older residents in LTCFs. Nevertheless, our study findings are consistent with a previous review study on the financing and regulation of oral care in LTCFs, which noted that the majority of studies originated from the USA. In this study, 56% of the publications were conducted in the USA between 1989 and 2017. Similarly, MacEntee et al. also reported that 28 of the 68 references included in their review were from the USA [8]. The included references that focused on the use of prompted voiding interventions for incontinence in older residents in LTCFs provide evidence of their effectiveness, although this evidence is limited. This finding corroborates the report by Roe et al. that evidence on the effectiveness of voiding programmes is limited [65]. Our study findings have implications for practice and research. For instance, prompted voiding interventions for urinary incontinence, the provision of professional support to LTCF staff, and the PUPP strategy in LTCFs were shown to be effective by the included studies. Hence, the implementation or scale-up of these interventions for older people resident in LTCFs will be useful towards maintaining healthy ageing in keeping with international goals. The adoption and implementation of these interventions in all LTCFs on a global scale would be beneficial, although potential contextual challenges may need to be prioritized. This study's findings also showed infrequent documentation of nursing care. This is worrying since documentation of care is essential for subsequent assessment and care planning. Besides, documentation of care also helps evaluate and monitor the quality and standards of care to ensure possible improvement where needed. Moreover, records of care are useful when a legal case arises against the LTCF. This study further revealed a dearth of research on psychosocial standards of care.
Anxiety, depression, delirium, dementia, personality disorders, and substance abuse are common psychological issues that often affect older residents [66,67]. Social and emotional issues may lead to loss of autonomy, grief, fear, loneliness, financial constraints, and lack of social networks [66,67]. Therefore, standards or guidance on psychosocial care for residents in LTCFs are critical and should be considered in future research. Moreover, this study suggests limited primary research focusing on standards and quality of care for older residents in LTCFs. Most (9/16) of the included studies were from a single country (USA); hence, this study's findings cannot be generalized to older populations resident in other countries due to differences across health systems and socioeconomic variations. Therefore, future international studies involving standards and care for older persons, using varying study designs, are needed to provide contextualized evidence relating to the quality of care. This review also suggests a dearth of research on standards and quality of care for older residents, especially in low- to middle-income countries (LMICs). This is a concern considering that the population of older persons in LMICs is said to be on the rise [1]. In addition, owing to the nature of work and the migration of people, the traditional extended family system is no longer a dominant social structure, making older persons in LMICs vulnerable. Hence, older people may have to relocate to LTCFs to facilitate healthy ageing due to inadequate or absent home-based care. To this end, several primary studies are needed to provide evidence on the standards and quality of care of older persons in LMICs, and on the lived experiences of older residents living in LTCFs in LMICs. The evidence emanating from such future studies will help improve the quality of care delivered to older residents in LTCFs in LMICs. Research on areas such as oral and nutritional standards of care is needed, since no study in these areas met this study's inclusion criteria. Such research, alongside political will and commitment to improving the quality of care for older persons in LTCFs, is essential to enable healthy ageing in line with the WHO global strategy and action plan [6] and the SDG Action Plan for older persons [7]. This scoping review study has several strengths. It is potentially the first exhaustive review to focus on standards and quality of care for older residents in LTCFs. This study has demonstrated the available evidence in the literature and the knowledge gaps. This study included literature worldwide. This scoping review followed most of the steps required of a systematic review, including the methodological appraisal of the included references. Despite these strengths, our review has several limitations. Only a few databases were searched. It is possible that other useful articles relating to standards and quality exist in databases not included in this study. Our eligibility criteria, such as the restriction to English-language publications, may also have excluded useful evidence published in other languages. Moreover, we included only primary studies, which resulted in the exclusion of many other review studies and guideline documents. Notwithstanding these limitations, this scoping review has synthesized the knowledge from the existing literature relating to the care of older residents in LTCFs. It has also provided useful evidence to guide future research.
Conclusion
This study synthesized evidence on useful standards and highlighted gaps in the literature on quality of care. The findings suggest that most studies relating to standards and quality of care in LTCFs focus on the effectiveness of interventions, with few on people-centeredness and safety, and were mainly conducted in European countries and the United States of America. Future studies focusing on people-centeredness, safety, and geographical settings with limited or no evidence are recommended. Research using various primary study designs is needed to inform the standards and quality of care for older people resident in LTCFs, particularly in LMICs.
2022-03-19T13:45:44.437Z
2022-03-19T00:00:00.000
{ "year": 2022, "sha1": "3501012c87bfbc748ce264936663253e89d3f1a3", "oa_license": "CCBY", "oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-022-02892-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3501012c87bfbc748ce264936663253e89d3f1a3", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
265562621
pes2o/s2orc
v3-fos-license
Local delivery of EGFR+NSCs-derived exosomes promotes neural regeneration post spinal cord injury via miR-34a-5p/HDAC6 pathway

Spinal cord injury (SCI) causes severe axon damage, usually leading to permanent paraparesis, which still lacks effective regenerative therapy. Recent studies have suggested that exosomes derived from neural stem cells (NSCs) may hold promise as attractive candidates for SCI treatment. Epidermal Growth Factor Receptor positive NSC (EGFR+NSC) is a subpopulation of endogenous NSCs, showing strong regenerative capability in central nervous system disease. In the current study, we isolated exosomes from the EGFR+NSCs (EGFR+NSCs-Exos) and discovered that local delivery of EGFR+NSCs-Exos can effectively promote neurite regrowth in the injury site of spinal cord-injured mice and improve their neurological function recovery. Using the miRNA-seq, we firstly characterized the microRNAs (miRNAs) cargo of EGFR+NSCs-Exos and identified miR-34a-5p which was highly enriched in EGFR+NSCs derived exosomes. We further interpreted that exosomal miR-34a-5p could be transferred to neurons and inhibit the HDAC6 expression by directly binding to its mRNA, contributing to microtubule stabilization and autophagy induction for aiding SCI repair. Overall, our research demonstrated a novel therapeutic approach to improving neurological functional recovery by using exosomes secreted from a subpopulation of endogenous NSCs and providing a precise cell-free treatment strategy for SCI repair.

Bioinformatic analysis of the single-cell transcriptomic dataset
The single-cell RNA sequencing data of the neural stem cells from the mouse subventricular zone (SVZ) samples was retrieved from the GEO database (GSE67833). CellRanger (v6.1.2) software was used to transform the Illumina output into gene-barcode count matrices for further analysis in our study. Data processing and analysis were performed using the R package "Seurat" (v5.0). Cells having >6000 genes and >20 % of mitochondrial transcripts were removed. The gene expression matrix was normalized and scaled. We selected the top 50 principal components by performing Principal Component Analysis (PCA) based on 5000 variable genes. The "harmony" algorithm was applied to correct the batch effect. The FindNeighbors and FindClusters functions were used to cluster cells on a shared-nearest-neighbor graph. Clusters were annotated to each cell type based on markers from the previous study. A Uniform Manifold Approximation and Projection (UMAP) plot was used to visualize annotated cell clusters. The markers of each cell type were calculated using the FindAllMarkers function, which compares gene expression between clusters by performing the Wilcoxon rank-sum test. While analyzing the NSC clusters, 2 main clusters were identified. Among them, the cluster with relatively high expression of EGFR was defined as the aNSCs. The GO, KEGG pathway enrichment and GSEA of the marker genes were performed as above.
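The clustering workflow above is described in R/Seurat; purely as an illustration, a roughly equivalent Python/Scanpy sketch is given below. The QC thresholds, number of variable genes, and number of principal components follow the text; the input file name and the "batch" column are assumptions, not part of the original analysis.

```python
# Hedged Scanpy analogue of the Seurat pipeline described above.
# "GSE67833_counts.h5ad" and the "batch" key are hypothetical; thresholds follow the text.
import scanpy as sc

adata = sc.read_h5ad("GSE67833_counts.h5ad")           # gene-barcode count matrix

# QC: flag mitochondrial genes, drop cells with >6000 genes or >20% mitochondrial reads
adata.var["mt"] = adata.var_names.str.startswith("mt-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], inplace=True)
adata = adata[(adata.obs["n_genes_by_counts"] <= 6000) &
              (adata.obs["pct_counts_mt"] <= 20)].copy()

# Normalize, log-transform, select 5000 variable genes, scale
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=5000)
adata = adata[:, adata.var["highly_variable"]].copy()
sc.pp.scale(adata)

# PCA (top 50 PCs), Harmony batch correction, neighbor graph, clustering, UMAP
sc.tl.pca(adata, n_comps=50)
sc.external.pp.harmony_integrate(adata, key="batch")   # assumes a "batch" column in adata.obs
sc.pp.neighbors(adata, use_rep="X_pca_harmony")
sc.tl.leiden(adata, key_added="cluster")
sc.tl.umap(adata)

# Cluster markers via Wilcoxon rank-sum test (FindAllMarkers analogue)
sc.tl.rank_genes_groups(adata, groupby="cluster", method="wilcoxon")
```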
Animal ethical statement
Animals were maintained in standard, specific pathogen-free conditions of the Department of Laboratory Animals, Central South University (CSU) with 12 light/12 dark cycles and 4-5 mice per cage. All mice in this study were kept on a standard normal chow diet. All animal experiments were approved by the Ethics Committee of CSU for Scientific Research.

Cell culture
The primary culture of NSCs was performed as described previously [1,2]. Briefly, the cortex and spine were dissected from E14 C57BL/6 mice, cut into small pieces (~1 mm³), and digested with Accutase™ (Invitrogen, United States) for 30 min at 37 °C. After centrifugation, cells were suspended in an NSC medium composed of Neurobasal-A (Gibco, United States), 0.24 % GlutaMAX™ Supplement (Gibco, United States), 2 % B27 without vitamin A (Gibco, United States), 10 ng/mL EGF (Sigma, United States), 10 ng/mL bFGF (Sigma, United States), and 1 % penicillin-streptomycin, and seeded in ultralow-adhesion 6-well plates (Corning, United States). Three days later, neurospheres were collected via centrifugation, digested with Accutase into single cells, and passaged and maintained in neurosphere culture media until experimental use.

Primary cortical neurons were extracted from the cerebral cortices of fetal C57BL/6 mice and washed twice with phosphate buffer saline (PBS). After removal of the dura mater, the cortex was separated in Dulbecco's modified Eagle's medium (DMEM, HyClone, United States) and pipetted with a Pasteur pipette for homogenization. After centrifugation at 1000 rpm, the cells were resuspended and cultured with neurobasal medium containing 2 % B27, 1 % glutamine (Gibco, United States), and 1 % penicillin-streptomycin in a poly-lysine pre-coated plate [3,4]. HEK293T cells were purchased from Procell (Gibco, United States) and were cultured in Dulbecco's modified Eagle medium (DMEM) containing 10 % fetal bovine serum and 1 % penicillin-streptomycin.

Microfluidic culture assay
The SND450 culture plate was used to detect axon extension [5]. The compartments for axons and somas are separated by a physical partition that contains a number of embedded micrometer-sized grooves. The axons generated from neurons plated into the soma compartment can extend across the barrier through the microgrooves. Briefly, the primary isolated neurons were seeded in the somal compartment. After culturing for 7 days, the axons crossing the 450-μm microtunnel were cut off using a 200 μL pipette. 24 h later, the regenerative axons were stained and the total regenerative axonal length was calculated.

Flow cytometry and cell sorting
The cortex tissues were dissected from 14-day-embryonic C57BL/6 mice after removing the surrounding meninges. The cell sorting process was performed as previously reported [6]. Briefly, the remaining tissues were pipetted until tissue pieces were dissociated into single cells in Hanks' Balanced Salt Solution (Life Technologies, United States) containing 10 mM HEPES (Gibco, United States). After washing, an anti-mouse CD16/CD32 antibody (Fc Receptor Blocking Solution) (BD, United States) was added to the single-cell suspension at a 1/50 dilution ratio and incubated at 4 °C for 15 min. Next, the harvested cells were resuspended in staining buffer and stained with antibodies for 30 min at 4 °C. The antibodies were as follows: Anti-CD133-APC (Thermo, United States), Anti-Glast-PE (Miltenyi, Germany), EGF-Alexa 647 (Thermo, United States), Anti-CD45-Alexa700 (Thermo, United States), Anti-Ter119-PE/Cy7 (Thermo, United States). After washing, cells were resuspended in PBS with 2 mM EDTA (Thermo, United States) and 1 μg/mL 4′,6-diamidino-2-phenylindole (DAPI, BD, United States). The sample was sorted using a FACS Aria II SORP cell sorter (BD Biosciences) and then analyzed utilizing FlowJo software (TreeStar). DAPI+ cells and doublets were excluded during the analysis.
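As an illustration only, the following minimal Python sketch expresses the gating strategy described above (DAPI−, CD45−, Ter119−, GLAST+CD133+EGFR+) as boolean masks over a per-event table exported from the cytometer. The column names and fixed intensity cutoffs are hypothetical; in practice, gates are set against appropriate controls in the cytometry software.

```python
# Hedged sketch of the described gating logic applied to a per-event table.
# Column names and intensity cutoffs are hypothetical.
import pandas as pd

events = pd.read_csv("sorted_events.csv")   # hypothetical export, one row per event

positive = 1_000   # illustrative fluorescence cutoffs
negative = 300

egfr_nsc_gate = (
    (events["DAPI"] < negative) &           # living cells
    (events["CD45"] < negative) &           # exclude leukocytes
    (events["Ter119"] < negative) &         # exclude erythroid cells
    (events["GLAST_PE"] > positive) &
    (events["CD133_APC"] > positive) &
    (events["EGF_AF647"] > positive)        # EGFR+ (EGF-Alexa 647 binding)
)

egfr_nscs = events[egfr_nsc_gate]
print(f"EGFR+ NSC events: {len(egfr_nscs)} / {len(events)} "
      f"({100 * len(egfr_nscs) / len(events):.2f}%)")
```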
Isolation and identification of exosomes
Exosomes were isolated from the primary and EGFR+NSCs supernatants by differential centrifugation/ultracentrifugation protocols [7]. In our experiment, after the first passage and 3 days of continuous culture, the cell supernatant was collected from the primary and EGFR+NSCs. The obtained medium was centrifuged at 300 g for 15 min and 2,000 g for 30 min to remove the remaining debris. The resulting solution was further processed by centrifugation at 10,000 g for 1 h to remove any remaining cellular debris. The supernatant was then subjected to ultracentrifugation at 100,000 g for 2 h to obtain the exosomal pellet, which was subsequently resuspended in PBS. This suspension was then subjected to another ultracentrifugation at 100,000 g for 2 h. Finally, the pelleted exosomes were resuspended in PBS. The morphology of the exosomes was characterized by transmission electron microscopy (TEM) using fresh samples that were loaded onto a continuous carbon grid, fixed in 3 % (w/v) glutaraldehyde, and stained with 2 % uranyl acetate. The size and concentration of the exosomes were assessed using flow NanoAnalyzer instruments, following the manufacturer's instructions. The presence of the exosomal markers CD9, TSG101, and CD63 and the cell marker Calnexin was detected by immunoblotting analysis.

Exosome loading and sustained release test in vitro
To load exosomes, we introduced the exosome suspension into the hydrogel solution, resulting in a final concentration of 10 mg/mL. Afterward, we bio-fabricated a 3D-printed hydrogel patch using dynamic projection stereolithography based on the shape of the removed lamina [11]. This hydrogel containing PKH67-labeled exosomes was then utilized for confocal imaging (LSM780, Zeiss) with multilayer scanning and subsequent 3D reconstruction.

To generate the release profile for primary and EGFR+NSCs-Exos loaded hydrogel, we immersed 100 μL of exosome-loaded hydrogel (10 mg/mL, post UV irradiation) in 500 μL of PBS dissolution medium at 37 °C. At regular intervals, we replenished the same volume of preheated PBS dissolution medium after collecting 500 μL of the sample. To quantitatively determine exosome release, we employed the Micro BCA™ Protein Assay Kit (ThermoFisher, Rockford, USA) [12]. The release profile was constructed based on the release data.
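A minimal sketch, under stated assumptions, of how a cumulative release curve can be built from the sampling scheme described above: because the entire 500 μL of dissolution medium is collected and replaced at each time point, the cumulative release is the running sum of the protein measured at each sampling. The time points, BCA concentrations, and total loaded protein below are hypothetical.

```python
# Hedged sketch of cumulative exosome release from the sampled/replaced medium.
# All numeric values are hypothetical illustrations.
sample_volume_ul = 500.0
total_loaded_ug = 1000.0     # e.g., 100 uL of 10 mg/mL exosome-loaded hydrogel

timepoints_days = [1, 3, 7, 14, 21, 28]
concentrations_ug_per_ml = [400, 350, 300, 250, 150, 100]   # hypothetical BCA readings

cumulative_released_ug = 0.0
for day, conc in zip(timepoints_days, concentrations_ug_per_ml):
    released_this_interval = conc * sample_volume_ul / 1000.0   # ug in the collected 500 uL
    cumulative_released_ug += released_this_interval            # full medium exchange -> simple sum
    percent = 100.0 * cumulative_released_ug / total_loaded_ug
    print(f"day {day:2d}: cumulative release = {percent:5.1f}%")
```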
Exosome fluorescent labeling and uptake assay both in vitro and in vivo
Exosomes were labeled with DiR and PKH67 (Sigma, United States) according to the manufacturer's protocol. In brief, 4 μL of dye was mixed with the exosome suspension in Diluent C and incubated for 10 min at 37 °C. The labeling reaction was stopped by adding 20 mL of chilled PBS. Labeled exosomes were then ultracentrifuged at 100,000 g for 70 min, washed with PBS, ultracentrifuged again at 100,000 g, and the pellet was resuspended in PBS. For the in vitro uptake experiment, PKH67-labeled neural stem cell exosomes were added to the neuron cell culture medium (10 μg/mL) and incubated for 12 h at 37 °C and 5 % CO2. After the set time, the neuron-related marker Tuj1 was immunofluorescently stained to label the cultured neurons. For in vivo tracking of neural stem cell exosomes, a mixture of DiR- or PKH67-labeled exosomes and a photosensitized hydrogel patch was placed above the lesion of the spinal cord injury. Three days post-injury, the spinal cord tissue was harvested and sectioned for immunostaining to analyze the uptake of exosomes by injured and peripheral neurons. The mice treated with DiR-labeled exosomes were anesthetized at 1, 3, 7, 14, and 28 days post-SCI transplantation and placed in a Xenogen IVIS Imaging System (Caliper Life Sciences).

Exosomal miRNA microarray assay
Total RNA was extracted from exosomes using TRIzol reagent (Invitrogen, United States) following the manufacturer's instructions. The quality of the RNA samples was evaluated using the NanoDrop ND-1000 and Agilent 2100 Bioanalyzer. After passing the quality check, a small RNA sample pre-kit was used to construct the library. Starting from total RNA, the library was constructed by directly ligating adapters to the two ends of the small RNAs, exploiting the special structure of the 3′ and 5′ ends of small RNA (a complete phosphate group at the 5′ end and a hydroxyl group at the 3′ end). The cDNA was synthesized by reverse transcription after the ligation, followed by Polymerase Chain Reaction (PCR) amplification to obtain the target DNA fragments. The cDNA library was obtained by gel excision and recovery of the target fragments. The quality and quantity of the library were checked carefully using automated electrophoresis on an Agilent 2100 Bioanalyzer. The different libraries were pooled based on the effective concentration and target data output, followed by HiSeq/MiSeq sequencing.

Fluorescent miRNA in situ hybridization
A biotin-labeled miR-34a-5p probe (Sangon Biotech, China) was constructed for fluorescent miRNA in situ hybridization as previously described [13]. Briefly, the paraffin-embedded tissues were cut into 8 μm thick sections. After being dewaxed with xylene and rehydrated in an ethanol gradient, the sections were treated with proteinase K (20 μg/mL, Ambion) for 30 min. After washing with PBS, the sections were incubated with the hybridization buffer at 37 °C for 1 h for prehybridization. Next, the sections were incubated with the hybridization buffer containing the miR-34a-5p-biotin probe (sequence: 5′-biotin-ACAACCAGCUAAGACACUGCCA-biotin-3′) at 60 °C for 8 h. After incubation, the slices were sequentially washed with 2 × SSC, 1 × SSC, and 0.5 × SSC buffer. An anti-NeuN antibody was used to stain the spinal neurons, and 4′,6-diamidino-2-phenylindole (DAPI) solution was used to stain the nuclei. The region of interest (ROI) was selected from the anterior horn of the spinal cord at 500 μm rostrally adjacent to the injured site in the horizontal spinal cord sections. For quantification, the relative immunofluorescent intensity ratio of miR-34a-5p in the NeuN-positive area was measured by ImageJ software (National Institutes of Health, USA).
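Purely as an illustration of the ROI quantification described above (mean miR-34a-5p FISH intensity within the NeuN-positive area, relative to controls), a hedged Python sketch is shown below. The study used ImageJ; the file names, thresholding choice, and control value here are hypothetical.

```python
# Hedged sketch: relative miR-34a-5p FISH intensity within the NeuN-positive area.
# File names, Otsu thresholding, and the control value are hypothetical.
from skimage import io, filters

mir34a = io.imread("roi_mir34a_channel.tif")     # FISH probe channel (2D image)
neun = io.imread("roi_neun_channel.tif")         # NeuN immunostaining channel

neun_mask = neun > filters.threshold_otsu(neun)  # NeuN-positive area
mean_intensity = mir34a[neun_mask].mean()

control_mean = 412.0                              # hypothetical control-group mean intensity
print(f"Relative miR-34a-5p intensity in NeuN+ area: {mean_intensity / control_mean:.2f}")
```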
In vitro neurite outgrowth assay
Primary neurons were isolated from 14-day embryos of C57BL/6 mice and cultured as previously described. These neurons were then cultured with primary NSCs-Exos and EGFR+NSCs-Exos to investigate their ability to promote neurite outgrowth. To test functional recovery, HDAC6 overexpression plasmids and their corresponding control vectors were transfected into primary cultured neurons before treatment with exosomes. At least 5 random fields per group were measured, and all measurements were performed by experimenters blinded to each treatment condition. A neurite outgrowth assay was carried out on cortical neuron cells under various treatment conditions using β-III-tubulin (Tuj1) immunofluorescence staining. DAPI counterstaining was done to visualize neuron cell nuclei. Fluorescent red images (showing cell bodies and neurites) were merged with fluorescent blue images (showing cell nuclei) to analyze a complete image. Tuj1-positive cells and nuclei with blue fluorescent signals from stained neurons were imaged with an immunofluorescence microscope (Zeiss, Germany) and quantified using ImageJ software.
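As a rough illustration of the image-based neurite quantification described above, the following hedged Python sketch estimates total neurite length from a Tuj1 channel by thresholding and skeletonization. The study performed its measurements in ImageJ; the file name and pixel calibration below are hypothetical.

```python
# Hedged sketch: estimate total neurite length from a Tuj1 immunofluorescence channel.
# File name and pixel size are hypothetical; this is only an illustrative analogue of
# the ImageJ-based measurement used in the study.
from skimage import io, filters, morphology

tuj1 = io.imread("tuj1_channel.tif")             # single-channel 2D image
um_per_pixel = 0.65                               # hypothetical calibration

mask = tuj1 > filters.threshold_otsu(tuj1)        # segment Tuj1-positive signal
mask = morphology.remove_small_objects(mask, min_size=50)
skeleton = morphology.skeletonize(mask)           # 1-pixel-wide neurite traces

total_length_um = skeleton.sum() * um_per_pixel   # rough total-length estimate
print(f"Estimated total neurite length: {total_length_um:.1f} um")
```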
Histology and immunofluorescence analysis
The spinal cords and bladders of mice were harvested after perfusion with 4 % paraformaldehyde. Following gradient alcohol dehydration, the spinal cords and bladders were embedded in paraffin, horizontally sectioned at a thickness of approximately 8 μm, and subjected to H&E staining as per the instructions of the H&E staining kit (Solarbio, China). Images were captured using an Olympus photomicroscope. To define "the injury area", the normal tissue replaced by vacuoles and gliomatoid tissues was outlined. The injury area of the spinal cord and the thickness of the bladder's muscular layer were manually outlined and calculated using ImageJ. For immunohistochemistry analysis, after dehydration, the paraffin spinal sections were stained using an immunohistochemical kit (Yeason, China). The antibody dilutions were as follows: anti-HDAC6 (1:500, Sigma), anti-acetylated α-tubulin (1:100, Sigma), anti-LC3 (1:100, Proteintech). For immunofluorescence analysis, the spinal cord slices were immersed in PBS buffer for 15 min to remove the O.C.T. compound. They were then permeabilized with 0.1 % Triton X-100 in PBS buffer for 15 min and blocked with 5 % bovine serum albumin (BSA) for 30 min. The cell samples were fixed with 4 % paraformaldehyde (PFA) in PBS for 20 min, permeabilized with 0.1 % Triton X-100 for 15 min, and blocked with 5 % BSA for 30 min. The diluted primary antibodies anti-GAP43 (1:200, CST), anti-NF (1:200, CST), anti-NeuN (1:500, Abcam), anti-HDAC6 (1:500, Proteintech), anti-acetylated α-tubulin (1:500, Sigma), anti-tyrosinated α-tubulin (1:500, Sigma), anti-LC3 (1:500, Proteintech), anti-Nestin (1:500, Abcam), and anti-SOX2 (1:500, Abcam) were then added and incubated with the samples for 8-10 h at 4 °C. After rinsing the slices with PBS, the samples were incubated with the respective diluted fluorescent secondary antibody (1:500; Abcam) for 1.5 h at room temperature. Finally, a DAPI (GeneTex, United States) solution containing a sealing agent was added dropwise to stain the nuclei. The region of interest (ROI) for HDAC6, miR-34a-5p, and acetylated/tyrosinated α-tubulin was selected from the anterior horn of the spinal cord at 500 μm rostrally adjacent to the injured epicenter. Images were captured with an Axio Imager microscope (Zeiss, Germany).

Establishment of contusive spinal cord injury mouse model and experimental design
All of the mice were anesthetized with 0.3 % pentobarbital sodium before undergoing thoracic spinal cord contusion injury. A moderate contusion injury was induced using a modified Allen's weight-drop apparatus, dropping a 10 g weight from a vertical height of 25 mm after laminectomy at T10 [13]. Mice in the sham group underwent laminectomy without contusion. Bladders were manually massaged twice daily until full voluntary or autonomic voiding was achieved. The animals were randomly divided into different intervention groups (N = 5) according to the study design, to analyze the effect of exosomes on functional recovery after spinal cord injury. The sham group was subjected to laminectomy without spinal cord injury. Mice in the experimental groups received solid hydrogel patches containing exosomes (10 mg/mL) or SW-100 (10 mg/mL), placed upon the injury site according to the group assignment. The vehicle group was given an equally sized hydrogel patch mixed with an equal volume of PBS. At the scheduled time point post-SCI, mice were euthanized, and the spinal cord tissue containing the lesion site was collected and processed for the analyses described in the methods.

Behavioral functional evaluations
The Basso Mouse Scale (BMS) score was assessed independently by 2 observers who were blinded to the experimental group information. The assessments were made before the injury, immediately after the injury, and on day 1, day 3, day 7, day 14, day 21, and day 28 post-injury [14]. Specifically, the study involved placing mice in an open field for 4 min to evaluate the motor recovery capabilities of mice with spinal cord injuries. The evaluation was based on a scoring system that ranged between 0 and 9. This system was determined by hind limb movement in the open field, such as joint movement in the hind limb, weight support, foot stepping, coordination, paw positioning, and trunk and tail control. A score of 0 indicated complete hind limb paralysis, while a score of 9 indicated normal movement.

MEPs recording
The mice were anesthetized using 0.3 % pentobarbital sodium. They were shaved and disinfected before being secured in the stereotactic apparatus, and their body temperature was maintained with a heating pad. The motor cortex area was exposed by craniotomy, and the stimulation electrode was guided to a depth of 700-1000 μm from the brain surface using a stereotactic instrument. The target was the corticospinal neurons in the sensorimotor cortex. The recording electrode was positioned at the distal end of the contralateral thigh sciatic nerve to capture the muscle action potential triggered by electrical stimulation. The BL-420F biological function experimental system was utilized to amplify and record the bioelectrical signal. The stimulation parameters included fine voltage stimulation, single stimulation mode, 100 ms delay, and 100 Hz frequency, with a stimulation intensity of 14 V [3].
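A minimal sketch, under stated assumptions, of how MEP latency and peak-to-peak amplitude could be extracted from a single recorded sweep is shown below. The file name, sampling rate, and onset criterion are hypothetical; in the study, MEPs were recorded and amplified with the BL-420F system.

```python
# Hedged sketch: MEP latency and peak-to-peak amplitude from one sweep.
# File name, sampling rate, and the 3-SD onset criterion are hypothetical.
import numpy as np

fs = 20_000                                     # sampling rate in Hz (hypothetical)
trace_mv = np.load("mep_sweep.npy")             # one sweep in mV, stimulus at t = 0
t_ms = np.arange(trace_mv.size) / fs * 1000

baseline = trace_mv[t_ms < 1.0]                 # pre-response segment
deviation = np.abs(trace_mv - baseline.mean())

# Onset = first sample exceeding 3 SD of baseline (argmax returns the first True index)
onset_idx = int(np.argmax(deviation > 3 * baseline.std()))
latency_ms = t_ms[onset_idx]

amplitude_mv = trace_mv.max() - trace_mv.min()  # peak-to-peak amplitude

print(f"MEP latency: {latency_ms:.2f} ms, peak-to-peak amplitude: {amplitude_mv:.2f} mV")
```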
Quantitative RT-PCR
Total RNA was extracted from exosomes and cultured neurons (3 days post-transfection or exosomal treatment) using TRIzol reagent (Invitrogen, United States). The cDNA was then reverse transcribed using the PrimeScript™ RT Reagent kit (Promega, United States) according to the manufacturer's instructions. Additionally, the reverse transcription of miRNAs was performed using the miRNA First Strand cDNA Synthesis kit (Tailing Reaction, Sangon Biotech, China). The pri-miRNAs were reverse transcribed into cDNA using the miRNA 1st Strand cDNA Synthesis Kit (by stem-loop, Sangon Biotech, China). For the measurement of mRNAs and miRNAs, qRT-PCR was carried out using a GoTaq qPCR Master Mix kit (Promega, United States) and a quantitative PCR system (ABI, United States). To evaluate the relative expression levels of mRNA or miRNA, the 2^−ΔΔCT method was employed, with GAPDH and U6 serving as internal references. The primers used for qRT-PCR can be found in Table S1.
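A short worked example of the 2^−ΔΔCT relative quantification described above is given below. The Ct values are hypothetical; in the study, U6 served as the reference for miRNAs and GAPDH for mRNAs.

```python
# Hedged worked example of 2^-ddCt relative quantification.
# Ct values are hypothetical illustrations only.
def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Return the fold change of a target (treated vs. control) by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: miR-34a-5p in neurons after exosome treatment vs. control, U6 as reference
fold = relative_expression(ct_target_treated=24.1, ct_ref_treated=18.0,
                           ct_target_control=27.4, ct_ref_control=18.1)
print(f"miR-34a-5p fold change: {fold:.2f}")   # ~9-fold with these hypothetical Cts
```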
Autophagy detection
According to the manufacturer's instructions, the previously isolated primary cortical neurons were seeded on confocal dishes and subsequently transduced with the mRFP-GFP-LC3 adenovirus (Hanbio Biology, China) [16]. After transduction, cells were fixed in 4 % paraformaldehyde and photographed using a laser confocal microscope (Zeiss, Germany). The numbers of red and yellow dots, corresponding to autolysosomes and autophagosomes, respectively, were measured. For transmission electron microscope (TEM) assessment, the cell pellet was fixed with a pre-cooled 2 % glutaraldehyde solution and then stained with a 2 % uranyl acetate solution. The cells were finally dehydrated in acetone, embedded, and cut into thin slices for examination under an electron microscope (FEI Tecnai, Hillsboro, OR, USA).

Statistical analysis
Results are represented as mean values ± SD, and n values represent the number of animals in the experiments. Statistical analysis was carried out using Prism 8 software. Differences were considered statistically significant at the values listed as follows: *P < 0.05, **P < 0.01, and ***P < 0.001. Statistical comparisons of two groups were conducted using a two-tailed unpaired Student's t-test. Differences among multiple groups were analyzed with one-way ANOVA followed by Tukey's post hoc tests. BMS scores and tactile sensory tests at different time points were analyzed with repeated-measures two-way ANOVA.

Introduction
Spinal cord injury (SCI) is a devastating event that results in neurologic deficits such as motor, sensory, and autonomic (sexual, urinary, and gastrointestinal) dysfunction [17]. SCI has a high incidence globally, affecting an estimated 40-80 per million people every year [18,19]. Long-term disability places huge financial, physical, and psychological burdens on both families and society [20]. Although the treatment of spinal cord injury has made significant progress in the laboratory [21], there is currently no effective clinical treatment that can improve the poor prognosis after SCI.

The adult mammalian central nervous system (CNS) is considered an organ with limited regeneration capability [22,23]. However, the discovery of endogenous NSCs in specific niches of the adult brain and spinal cord provides evidence that the CNS may be regenerated by inducing the differentiation of endogenous NSCs into neurons [24-26]. There are also endogenous neural precursor cells, called ependymal cells, in the spinal cord tissue of adult mice. However, compared with NSCs in the brain, most ependymal cells located in the central canal of the spinal cord tend to differentiate into astrocytes rather than mature neurons after SCI [23]. Exogenous NSC-based therapy may be a feasible option to promote functional recovery after SCI. In a phase I clinical trial, human spinal cord-derived NSCs were found to have good histocompatibility and could be safely transplanted into the damaged spinal cord of patients with thoracic SCI [27]. The favorable effects of NSC grafts have been attributed to their unique neuroprotective effects, including effects on axon plasticity, regeneration, sprouting, and the establishment of neuronal relays at the injury site [28-30]. However, there are also concerns that direct transplantation of NSCs may have deleterious effects such as cell necrosis, immune rejection, and even tumor formation, which may limit their application in SCI treatment [31,32].

Exosomes are a form of extracellular vesicle with a diameter of 30-150 nm, serving as a medium for intercellular communication [33]. Due to their low immunogenicity, exosomes are increasingly being used as a potential cell-free therapy for CNS injury healing [34,35]. Adult NSCs can be divided into two states: a quiescent state (outside the cell cycle, in a low metabolic state) and an active state (within the cell cycle), mainly located in the SVZ region. The quiescent state of NSCs is closely related to astrocytes, whereas the active state of NSCs shows an obvious tendency toward neurogenesis [36]. In craniocerebral injury, active NSCs (CD133+EGFR+) are increasingly activated and repair the damaged area, suggesting that active NSCs play an important role in the repair of nerve injury [25]. However, whether these active NSCs, as a subset of stem cells with neurogenic potential, could play a role in the treatment of SCI remains to be studied.

In a mouse model of thromboembolic stroke, intravenous treatment with NSC-derived extracellular vesicles (EVs) improved functional outcomes [37]. Rong et al. also found in a rat model that NSC-derived EVs could promote functional recovery after SCI by activating autophagy [38,39]. Our previous study also showed that exosomes produced by NSCs improved functional recovery after SCI by promoting angiogenesis [2]. However, it is unclear whether exosomes produced by EGFR+NSCs, a subtype of NSCs, are comparable or superior to primary NSC-derived exosomes in terms of neurological recovery after SCI. Exosomes alter the biological activity of target cells by transferring messenger RNAs, miRNAs, lipids, and proteins [33]. MicroRNAs (miRNAs) are components of exosomal cargo and play a key role in mediating therapeutic effects in various neurodegenerative diseases [40]. Through high-throughput sequencing, we found that EGFR+NSC-derived exosomes were enriched in miR-34a-5p. However, whether exosomes affect neurite growth and functional repair through miRNAs still needs to be further explored.
However, the stability and retention of exosomes released in vivo are major barriers, as they are rapidly eliminated by the innate immune system or accumulate in the liver and lungs via body fluids or the blood circulation [44]. Various biomaterials have been employed to create tissue engineering scaffolds for spinal cord injury (SCI), including natural materials such as hyaluronic acid (HA), collagen-based matrices, chitosan, agarose, and alginate [41-43]. Loading exosomes into a protective hydrogel, which is a hydrophilic polymer network, may shield them from degradation and offer a prolonged reservoir for therapeutic actions [45]. Thus, in the current study, exosome-loaded hydrogels were printed into patches based on the shape of the removed lamina using 3D printing technology and placed to cover the injury lesion of the spinal cord. We found that exosomes secreted by EGFR+NSCs showed a more powerful therapeutic effect on the functional recovery of SCI than primary NSC-derived exosomes. The EGFR+NSC-derived exosomes could deliver miR-34a-5p to neurons to downregulate histone deacetylase 6 (HDAC6), which in turn activated the autophagy process and enhanced neurite growth. Our study proposes a new type of exosome derived from EGFR+NSCs with the potential capability to improve functional recovery after SCI, providing a new cell-free therapy for the treatment of SCI (Fig. 1).

Identification of primary and EGFR-positive NSCs and their secreted extracellular vesicles
First, we projected the GLAST+CD133+ cells onto a Uniform Manifold Approximation and Projection (UMAP) plot to visualize clustering according to the published data (GSE67833) (Fig. 2A). The NSCs were clustered into aNSCs and qNSCs; the aNSCs showed high expression of Egfr, a known marker of active NSCs (Fig. 2B-C). To obtain functional insights into the gene classes that define each cell subtype, we performed gene ontology analysis on the four clusters. From this, we determined that the aNSCs expressed genes highly enriched in GO biological process pathways related to neural development and repair (microtubule organizing center localization, negative regulation of neuron death, negative regulation of oxidative stress-induced neuron death, positive regulation of neuron projection development) (Fig. 2D-E). To explore the therapeutic effects of the aNSCs in SCI treatment, the living EGFR+ neural stem cells (EGFR+NSCs) were sorted by flow cytometry using the panel DAPI− (living cells), CD45− and Ter119− (non-blood cells), and GLAST+CD133+EGFR+ (active NSCs) from the subventricular zone of E14 fetal mouse cerebral cortex (Fig. 2F). About 2.159 ± 0.315 % of cells from the fetal mouse cortex were GLAST, CD133, and EGFR triple-positive. The stem cell markers Nestin and Sox2 were expressed in the sorted EGFR+NSCs (Fig. 2G). To verify whether all cells continued to express EGFR, we passaged EGFR+NSCs to P1, P2, and P3 and detected EGFR expression in NSCs by flow cytometry and immunofluorescent staining. We found that the EGFR-positive rate of NSCs still reached up to 90 % even after three passages, indicating that the cell phenotype we extracted did not change over time (Fig. 2H-I). Then the exosomes were extracted from primary NSC and EGFR+NSC culture media. According to high-resolution transmission electron microscopy (TEM) images, exosomes released by primary NSCs (primary NSCs-Exos) and EGFR+NSCs (EGFR+NSCs-Exos) exhibit characteristic rounded and cup-like architectures (Fig. 2J).
Primary NSCs-Exos and EGFR+NSCs-Exos have a homogeneous size distribution with an average single peak at 75.21 nm and 74.2 nm, respectively, as revealed by nanoparticle tracking analysis (Fig. 2K). Western blotting showed that both primary NSCs-Exos and EGFR+NSCs-Exos express the exosomal surface markers CD63, CD9, and TSG101 without the negative biomarker calnexin (Fig. 2L), revealing that the separated particles were exosomes released from primary or EGFR+NSCs.

The exosomes obtained from EGFR+NSCs promoted better functional recovery in mice with spinal cord injury
To effectively maintain the active concentration of local exosomes after spinal cord injury, a photo-cured hydrogel based on GelMA and HA-NB was used to embed the exosomes. Firstly, we characterized the physicochemical properties of the hydrogel; the SEM image of the sol-gel transformation after UV irradiation suggested that the porous hydrogel is a good exosome carrier (Fig. 3A). The cytocompatibility test did not show any significant toxicity to neuron cells (Fig. 3B). We also found that the hydrogel treatment did not result in appreciable cardiotoxicity, hepatotoxicity, spleen cell toxicity, pulmonary toxicity, or nephrotoxicity by HE staining (Figs. S1A-E). These results demonstrated that the photo-cured hydrogel exhibited good biocompatibility. As illustrated in Fig. 3C, the hydrogel degraded by approximately 70 % after 28 days of in vitro analysis. The results of the swelling test showed that the hydrogel exhibited rapid swelling behavior within 24 h, and the swelling rate reached nearly 1200 % (Fig. 2D). The release curve demonstrated a continuous release of loaded exosomes from the hydrogel, reaching approximately 80 % of the cumulative amount after 28 days (Fig. 3E). Next, we used confocal multi-layer 3D scanning to visualize the PKH67-labeled exosomes retained within the hydrogel. The results demonstrated that primary NSCs-Exos and EGFR+NSCs-Exos were uniformly distributed in the hydrogel (Fig. 3F). Exosome-loaded hydrogels were printed into 2 × 2 × 0.5 mm patches based on the shape of the removed lamina using 3D printing technology and placed over the injury lesion in spinal cord-injured mice (Fig. S1F). Using in vivo imaging techniques to assess the release of hydrogel-coated exosomes, we found that most of the DiR signal was detected in the 10 mg/mL group 7 days post-implantation (Figs. S3A-B). The DiR signal was still retained within the spinal cord injury zone 14 days post-SCI (Fig. 3G-H). To confirm that the hydrogel patch was maintained on the surface of the spinal cord, we obtained the spinal cord tissues 3, 7, and 14 days after the injury and found that even 14 days after the injury, some hydrogel remained on the surface of the spinal cord (partially degraded), indicating that the current molding method could ensure the fixation of the hydrogel patch within 14 days (Figs. S1G-H). BMS scores and BMS sub-scores were evaluated pre-operation and on days 0, 1, 3, 7, 14, 21, and 28 post-SCI after treatment with primary NSCs-Exos or EGFR+NSCs-Exos. The functional behavior of all groups deteriorated following SCI. However, in comparison to the control group, the group treated with primary NSCs-Exos displayed a significant improvement in post-SCI motor dysfunction. Furthermore, the EGFR+NSCs-Exos group demonstrated an even better functional recovery when compared to the primary NSCs-Exos group (Fig. 3I-J).
Additionally, the motor-evoked potentials (MEPs) were analyzed, and the results revealed that the amplitude in primary NSCs-Exos-treated mice had significantly increased compared to the vehicle-treated control mice at day 28 post-SCI (Fig. 3K-L). Histological investigation with H&E staining was carried out to further determine the therapeutic benefits of NSCs-Exos on neurological functional recovery after SCI. It was demonstrated that 28 days after SCI, the injured area was noticeably smaller in the primary NSCs-Exos-treated animals than in the mice that had received vehicle treatment (control groups). Additionally, mice treated with EGFR+NSCs-Exos had smaller lesion areas than those treated with primary NSCs-Exos (Figs. S2A-B). The bladder function assessment following SCI was consistent with the histological evaluation (Figs. S2C-D). These data provide evidence that the mice in the EGFR+NSCs-Exos group had better functional recovery as compared to the primary NSCs-Exos group.

EGFR+NSC-derived exosomes promoted neural regrowth both in vivo and in vitro
To identify the cellular mechanisms underlying functional recovery after SCI, exosomes were labeled with PKH67 dye and embedded in the photo-cured hydrogel.
Fig. 1. In this study, we identified a specific subtype of neural stem cells (NSCs) known as EGFR+NSCs and isolated their exosomes. By employing transcriptomic and miRNA profiling, we characterized both EGFR+NSCs and their exosomes. Using an in vitro cellular model, we explored the mechanisms through which exosomes derived from EGFR+NSCs promote axonal outgrowth. In conjunction with 3D printing technology and a hydrogel scaffold, we designed and implemented a hydrogel-coated exosomal patch for the treatment of SCI, demonstrating superior efficacy in enhancing neural regeneration. The exosomes derived from EGFR+NSCs deliver miR-34a-5p to neurons, resulting in the downregulation of HDAC6, which in turn activates the autophagy process and enhances microtubule stabilization. Our study introduces a novel class of exosomes derived from EGFR+NSCs with the potential to enhance functional recovery following SCI, offering a promising cell-free therapy approach for SCI treatment.
24 h post SCI, fluorescence microscopy was used to detect exosomal uptake. As shown in Fig. S3A, we discovered that exosomes produced by primary and EGFR+NSCs could infiltrate the blood-brain barrier (BBB) after administration and be taken up by neurons and phagocytes around the lesion site, while they were rarely absorbed by astrocytes, stem cells, and endothelial cells (Figs. S4B-E). Consistent with the previous functional assessment, we also detected robust neurite regrowth in the EGFR+NSCs-Exos group using anti-GAP43 and anti-NF immunofluorescence to label neural axons (Fig. 3M-P).
To confirm the therapeutic effects of EGFR+NSCs-Exos on neural regeneration in SCI treatment, cerebral neurons were isolated and cultured as a cell model to investigate the bioactivity of the NSC-derived exosomes. Using SND450 neural culture plates to separate neuronal somas and axons, the distal axons were severed to simulate axonal rupture after spinal cord injury. The axonal regeneration assay revealed that the EGFR+NSCs-Exos promote axonal regrowth in a dose-dependent manner (Figs. S3A-B). However, simple administration of primary NSCs-Exos and EGFR+NSCs-Exos could not further promote neurite growth, and neurons under stable culture conditions did not show the capability to absorb the additional NSCs-Exos and EGFR+NSCs-Exos (Figs. S4F-H). To simulate the microenvironment of neurite growth inhibition, we treated the cultured neurons with chondroitin sulfate proteoglycan (CSPG). Under CSPG administration, the neurites stained with Tuj1 showed an inhibition of outgrowth (Figs. S4I-J). It was found that the PKH67-labeled NSCs-Exos and EGFR+NSCs-Exos could be taken up by neurons treated with CSPG (Fig. S4K). Quantitative measurements revealed that, after exosomal treatment, the cortical neurons featured elongated neurites, and the EGFR+NSCs-Exos had a stronger capability for alleviating the inhibitory effects of CSPG on neurite growth (Fig. 3Q-R). We further cultivated neurons derived from the spinal cord and employed CSPG intervention to mimic the environment of growth inhibition; EGFR+NSCs-Exos also possessed a superior capability in promoting spinal cord neuron regeneration (Fig. S2L-M). The axonal regeneration assay likewise revealed that the EGFR+NSCs-Exos exhibited a stronger ability to promote axonal regeneration (Fig. 3S-T). These findings indicated that, compared to primary NSCs-Exos, the EGFR+NSCs-Exos have a stronger capability for triggering neural regeneration, according to both in vivo and in vitro evidence.

miR-34a-5p was enriched in EGFR+NSCs-Exos and could be delivered to neurons
Exosomes play a vital role that depends mainly on their internal contents. First, we examined the transfer of EGFR by exosomes derived from EGFR+NSCs and found that EGFR expression was relatively low in the EGFR+NSCs-Exos (Fig. S5A). Additionally, we assessed the expression levels of EGFR in neurons following EGFR+NSCs-Exos intervention and observed that exosomal intervention did not lead to an increase in EGFR levels in neuronal cells (Figs. S5B-C). One of the key cargos mediating the biological function of exosomes is miRNA. Both in vivo and in vitro studies revealed that EGFR+NSC-derived exosomes have a stronger capability for promoting neurite outgrowth compared to primary NSC-derived exosomes. Therefore, total RNA was extracted from the EGFR+NSCs-Exos and analyzed using microRNA sequencing. The miRNA-seq demonstrated that a total of 1143 miRNAs were detected (Table S2), and the top 20 expressed miRNAs are listed in the heatmap (Fig. 4A). Among these highly expressed miRNAs, the top 10 miRNAs were quantified using qRT-PCR in primary and EGFR+NSCs-Exos. Among them, miR-34a-5p showed abundant expression in EGFR+NSCs-Exos, with levels 10-fold greater than in primary NSCs-Exos (Fig. 4B).
4B).To determine the role of miR-34a-5p in mediating neural regrowth, the miR-34a-5p mimic and miR-34a-5p inhibitor were administrated to cultured cortical neurons.It was demonstrated that mimics of miR-34a-5p alleviated the inhibitory effects of CSPG on neurite growth, and the miR-34a-5p-inhibitor offset the ability of EGFR + NSCs-Exos for promoting neurite growth (Fig. 4C-D).The axonal regenerative assay also revealed the vital role of miR-34a-5p in promoting neural axonal regrowth (Fig. 4E-F).In the meantime, we found a significant increase of miR-34a-5p level in neurons in spinal cord injured tissue after administration with EGFR + NSCs-Exos (Fig. 4G-H).The qRT-PCR result also revealed that the EGFR + NSCs-Exos upregulated the miR-34a-5p expression in cultured neurons (Fig. 4I) without influencing the pri-miR-34a-5p expression (Fig. 4J), which indicated that the expression of the mature miR-34a-5p in cultured neurons was delivered from the EGFR + NSCs-Exos.These results indicated that the EGFR + NSCs-Exos can transfer exosomal miR-34a-5p to the target neuron cells and affect their bioactivity. miR-34a-5p mediated the effect of EGFR + NSCs-Exos on promoting neurological functional recovery and neurite regrowth To further investigate the mechanistic insight into the role of exosomal miR-34a-5p in mediating the positive effect of EGFR + NSCs-Exos on promoting neurological functional recovery and neurite regrowth following SCI, we constructed miR-34a-5p knockdown EGFR + NSCs using miR-34a-5p-inhibitor, along with negative control of miR-34a-5p as a control.Exosomes were isolated from EGFR + NSCs and miR-34a-5p knockdown EGFR + NSCs, respectively.miR-34a-5p expression was lower in exosomes derived from miR-34a-5p-knockdown EGFR + NSCs (named miR-34a-5p IN -Exos) than exosomes derived from negative control of miR-34a-5p treated EGFR + NSCs (named NC-Exos) (Fig. 5A).As illustrated in Fig. 5B-C, the downregulation of miR-34a-5p in EGFR + NSCs-Exos could eliminate the effects of functional recovery noticed by NC-Exos treatment.Electrophysiological analysis of MEPs showed that the miR-34a-5p IN -Exos treated mice show lower amplitude and longer latent period than the NC-Exos treated mice (Fig. 5D-E).The histological investigation with H&E staining showed that the knockdown of miR-34a-5p could offset the effects of NC-Exos in reducing the injury area (Figs.S6A-B), indicating that EGFR + NSCs-Exos promotes functional behavior recovery via transfer miR-34-5p into the neuron.We then evaluated the functional role of exosomal miR-34-5p derived from EGFR + NSCs in the mediation of neurite regrowth in vivo.As shown in Fig. 5F-I, administering miR-34a-5p IN -Exos attenuated the neurite regrowth besides the lesion core and decreased the amount of GAP43 and NF positive signal in spinal cord sections compared to the NC-Exos groups.It was shown that NC-Exos alleviated the inhibitory effects of CSPG on neurite growth, while the beneficial effects were abolished by miR-34-5p IN -Exos administrating (Figs.S6C-D).Similarly, the neural axonal regrowth assay showed that the miR-34a-5p inhibition weakened the therapeutic effect of EGFR + NSCs-Exos (Fig. 5J-K).Collectively, these results demonstrated that miR-34-5p is essential for the EGFR + NSCs-Exos in promoting neurite regrowth. 
EGFR + NSCs-Exos-derived miR-34a-5p promoted the neurite regrowth and reduces HDAC6 expression To further explore the targeted regulated molecular mechanisms of exosomal miR-34a-5p secreted by EGFR + NSCs in modulating the neurite regrowth, the bioinformatic tools: miRDB and microT were combined used to identify a putative target gene of miR-34a-5p.According to the online database of miRNA target genes and the GO term-0030,517 (negative regulation of axon extension), we identified four potential target genes of the miR-34a-5p, including the Aatk, Sema4b, Ntn1 and HDAC6 (Fig. 6A).Among the target genes, the histone deacetylases 6 (HDAC6) is the only HDAC localized in the cytoplasm, which can regulate the deacetylation of nonhistone structures such as autophagy and deacetylase of microtubules.To verify, which could be significantly inhibited after administrating with EGFR + NSCs-Exos, the expression level of HDAC6 in cultured neurons was the most significantly downregulated gene among these genes (Fig. 6B).The luciferase reporter experiment was used to determine if the HDAC6-3′UTR is a direct target for miR-34a-5p, we found that miR-34a-5p mimics dramatically decreased the luciferase activity of the wild type (WT), while the luciferase activity of the mutant type (Mut) of 3′UTR reporter constructs was not altered (Fig. 6C).These results indicated that miR-34a-5p could directly bind to the 3′UTR of HDAC6 and repress its expression.The immunofluorescent staining and western blotting confirmed that EGFR + NSCs-Exos could significantly reduce the HDAC6 expression in cultured neuron cells (Fig. 6D-G).For verifying the effects of EGFR + NSCs-Exos on regulating HDAC6 expression, the NC-Exos and miR-34-5p IN -Exos treated mice were satisfied for HDAC6 expression detection.The results showed that the mice in NC-Exos treated groups demonstrated a lower HDAC6 expression level in neurons, as revealed by immunofluorescent as compared with the control groups, and these effects were reversed followed by miR-34a-5p inhibited (Fig. 6H and J).Similar results of HDAC6 expression change trend followed by miR-34a-5p inhibition were also observed by Western blot analysis (Fig. 6I and K).Collectively, our findings indicated that EGFR + NSCs-Exos derived miR-34a-5p promoted neurite regrowth and reduced the HDAC6 expression in neurons. miR-34a-5p mediated the effect of EGFR + NSCs-Exos on regulating neural microtubule stabilization and autophagy induction By analyzing the expression level of the miR-34a-5p in neurons in the spinal cord injured section, we can see notable upregulated expression of miR-34a-5p in neurons with the treatment of NC-Exos, as compared with the control groups.However, these effects were attenuated following the inhibition of miR-34a-5p (Fig. 7A and C).As HDAC6 was reported to regulate neural microtubule stability and neural autophagy in axonal regeneration [46][47][48], the expression of acetylated and tyrosinated α-tubulin, as well as autophagy-related proteins in the injured tissue, was analyzed using immunofluorescent staining and western blotting.The results of the study revealed that the administration of NC-Exos increased the A/T ratio, indicating improved stability.Meanwhile, inhibition of exosomal miR-34-5p decreased microtubule stiffness (Fig. 7B and D, Fig. 
7G and H).Meanwhile, NC-Exos were found to have the capability to enhance the expression of autophagy-related proteins LC3 expression in neurons as evidenced by immunofluorescence via miR-34a-5p when compared with the control group (Fig. 7E-F).Similar results were found in the Western blot experiment, the LC3A/B and Beclin1 were increased expression in the NC-Exos treatment groups as compared to the control groups, while the P62 as an autophagy negative regulator was downregulated.After treatment with miR-34-5p IN -Exos, there was a marked decline in the protein level expression of LC3A/B and Beclin1, along with the upregulation of autophagy arresting P62 (Fig. 7G and I).The results of immunohistochemistry also indicated miR-34a-5p which was highly expressed in EGFR + NSCs-Exos is required and essential for mediating the function of exosome on microtubules stabilization and neural autophagy (Figs.S7A-C). EGFR + NSCs-Exos stabilized the microtubules and activated autophagy via the miR-34a-5p/HDAC6 pathway To further determine the role of miR-34a-5p/HDAC6 in the mediating effect of EGFR + NSCs-Exos on promoting neurite regrowth, we generated miR-34a-5p IN -Exos and HDAC6 siRNA to reveal the role of miR-34a-5p/HDAC6 in regulating neural regrowth.As evidenced by the neurite and neural axonal regrowth assay, the promoting effects of EGFR + NSCs-Exos were suppressed by miR-34a-5p silencing.However, the inhibition of HDAC6 restored the promotion effects of EGFR + NSC- -B).The fluorescent staining of markers Ace-tub and Tyr-tub was administrated to evaluate microtubule stability, the results showed that HDAC6 knockdown upregulated the A/T ratio, reversed the effects of miR-34a-5p IN -Exos (Fig. 8C-D).The autophagy detected assay and TEM results also showed the HDAC6 knockdown in neurons rescued the autophagy flux following the miR-34a-5p IN -Exos treatment (Fig. 8E-G).Furthermore, we performed western blotting to detect autophagy-relative proteins (LC3A/B, Beclin1 and P62) and microtubule stability markers (Ace-tub and Tyr-tub) of the aforementioned treatment groups, and the results were consistent with the fluorescence staining data (Fig. 8H-K).These data indicated that EGFR + NSCs-Exos promotes neural regrowth via miR-34a-5p/HDAC6 pathway by mediating microtubule stabilization and autophagy activation. EGFR + NSC-Exos promoted functional recovery post SCI via miR-34a-5p/HDAC6 pathway To investigate whether miR-34a-5p/HDAC6 is involved in EGFR + NSC-Exos mediation of recovery SCI.miR-34a-5p IN -Exos and SW-100 (a selective HDAC6 inhibitor) were combined and administrated to SCI mice [49].The immunofluorescent and immunohistochemistry staining of the spinal cord sections showed that the miR-34a-5p inhibition attenuated the microtubule acetylation and neural autophagic activation around the lesion core.and the further SW-100 administration reversed the effects of miR-34a-5p inhibition (Figs.S9A-F).BMS and BMS sub-scores evaluation demonstrated that miR-34a-5p IN -Exos administration attenuated the positive effect of neuroprotection with the EGFR + NSCs-Exos.However, the presence of SW-100 could rescue the functional effects of miR-34a-5p IN -Exos on enhancing neurological functional recovery as compared to the miR-34a-5p IN -Exos treated alone (Fig. 9A-B).The electrophysiological analysis confirmed the significant effects of the miR-34a-5p/HDAC6 pathway in regulating the neurologic connectivity recovery of the EGFR + NSC-Exos treated mice (Fig. 
9C-D).The H&E staining also demonstrated that silence of HDAC6 by SW-100 can abolish the inhibitory role of miR-34a-5p IN -Exos on suppressing the injured spinal cord tissue healing after SCI (Figs.S10A-B).In addition, the bladder functional analysis after SCI showed the same results as the neurological functional test (Figs.S10C-D).The immunofluorescent staining revealed decreased GAP43 and NF-positive signaling in the spinal cord of the miR-34a-5p IN -Exos treated mice.The administration of SW-100 could rescue the inhibition effects of miR-34a-5p IN -Exos on promoting intrinsic neurite growth (Fig. 9G-J).The in vitro neural regrowth assay was also conducted and found similar effects as the in vivo data (Fig. 9E-F and Figs.S10E-F).These results indicated that EGFR + N-SC-Exos promoted functional recovery post-SCI via the miR-34a-5p/HDAC6 pathway. Discussion In our current study, we identified a subtype of NSCs, named EGFR + NSCs, and extracted their secreted exosomes, which were superior to primary NSCs derived exosomes in aiding SCI repair.In the meantime, we generated a 3D-printed hydrogel patch loading with EGFR + NSCs-Exos, and demonstrated that it can be sustained by releasing exosomes, which could cross the blood-spinal cord barrier and downregulate HDAC6 in neurons by locally delivering exosomal miR-34a-5p.Inhibiting HDAC6 by miR-34a-5p activated the autophagy pathway and promoted microtubule stability to facilitate neurite growth and neurological functional recovery after SCI.It highlights that the local delivery of EGFR + NSCs-derived exosomes can serve as a novel therapeutic agent for spinal cord injury repair. Previous studies have shown that SCI can lead to severe disruption of neural networks and loss of neurological function.Due to their ability to self-renew and differentiate into different cell types, NSCs hold great promise in cell replacement therapy for SCI.Transplanted neural stem cells exert their neuroprotective effects by promoting neurite outgrowth [28,50].However, in terms of clinical application and ethical issues, direct transplantation of NSCs is highly controversial and raises safety concerns [31,32].As a novel "cell-free" therapy, stem cell-derived exosomes have attracted therapeutic attention due to their unique biological properties [51].Our previous research demonstrated for the first time that the primary NSCs-derived exosomes have a neuroprotective effect for promoting SCI repair by enhancing angiogenesis [2].In our subsequent investigations, we have explored a specific subtype of immune cell-derived exosomes, which may contribute to enhancing the efficiency and quality of exosome-based therapies.The results demonstrated that exosomes derived from M2 macrophage could deliver OTULIN to vascular endothelial cells in the injured spinal cord, and promote vascular regeneration through activation of the Wnt/β-catenin pathway [8]. 
Adult NSCs in the mammalian central nervous system are characterized by maintaining an undifferentiated and quiescent state (qNSCs), and occasionally transitioning from a quiescent state into an active state to generate new neurons [24].Upon activation, qNSCs upregulate EGFR and become highly proliferative.Single-cell RNA-sequencing analysis revealed that there are two distinct subsets of NSCs, named quiescent and active NSCs, with distinct molecular and functional properties.Active NSCs (aNSCs) were enriched in genes involved in transcription, translation and DNA repair, while quiescent cells were enriched in transcripts associated with cell adhesion and extracellular matrix [25].Notably, neurogenic transcription factors such as Dlx 1, Dlx 2, Sox 4 and Ascl1, which are hallmarks of NSCs, are predominantly expressed in aNSCs.qNSCs produced few neurospheres and did not increase their neurosphere formation efficiency during regeneration.In contrast, aNSCs extensively formed neurospheres and adherent colonies [26].These results suggest that the biological functional properties of aNSCs may be stronger than that of qNSCs.In our research, we performed a series of experiments to validate that a subtype of NSCs, EGFR-positive NSCs-derived exosomes have a better therapeutic effect than primary NSCs exosomes in SCI repair.We found that EGFR + NSC derived exosomes could be taken up by neurons in the injured area after SCI, and significantly promote the growth of neurites after spinal cord injury, suggesting that EGFR + NSC derived exosomes may play a role in neurite growth by affecting neurogenesis-related-pathways in neurons. Exosomes can transfer a variety of biologically active components (miRNAs, mRNAs, DNAs, proteins, etc.) to recipient cells, among which, exosome-loaded miRNAs are the most critical cargo mediating biological functions [33].miRNAs are small non-coding regulatory RNAs that bind to the complementary sequence of the 3-untranslated region (3′UTR) of target mRNAs and can inhibit translation or lead to degradation of target mRNAs [40].In this study, we screened the highly expressed miRNA components in exosomes derived from EGFR + NSCs using miRNA-seq.We found that miR-34a-5p was highly expressed in EGFR + NSCs-derived exosomes, and could mediate the effect of EGFR + NSCs-derived exosomes for promoting neurite regeneration.miR-34 has been identified as a key regulator of neural differentiation and proliferation.Pandey et al. found that the miR-34 family was one of the most significantly elevated miRNAs during neural differentiation of PC12 cells treated with nerve growth factor (NGF) [52].Jauhari et al. discovered that miR-34a-5p could target p53, protecting cells from p53-induced death [53].Aranha et al. 
confirmed that miR-34a could control the differentiation of NSCs.Overexpression of miR-34a-5p resulted in increased neurite extension in differentiated neurons of NSCs, whereas decreased expression of miR-34a prevented neuronal differentiation [54].Another research showed that miR-34a affected Notch signaling, possibly by targeting the Notch ligand Dll 1, which controlled neuronal differentiation and cell growth [55].However, there is no relevant literature on the application of exosomal miR-34a-5p in SCI management.To clarify the role of miR-34a-5p in EGFR + NSCs-derived exosomes, we used the inhibitor of miR-34a-5p to intervene in the expression of miR-34a-5p in the obtained EGFR + NSCs derived exosomes.We found that the positive effect of EGFR + NSCs derived exosomes on SCI treatment was attenuated after reducing the expression of miR-34a-5p in exosomes, suggesting that miR-34a-5p was a key component in EGFR + NSCs derived exosomes for promoting neurite growth after SCI.Considering that exosomes carry a variety of biologically active molecules, culturing NSCs with miR-34a-5p inhibitor may lead to the accumulation of other bioactive factors in the derived exosomes.Therefore, whether EGFR + NSCs derived exosomes promote SCI repair through other factors remains to be further investigated. The integrity of the spinal cord parenchyma, including axon connectivity, is disrupted immediately after trauma.Due to the limited neuroplasticity, impaired axons are hard to grow across the lesion [56].Therefore, any therapeutic strategy aimed at promoting neuroplasticity, neurogenesis, and reconstruction of damaged neural circuits is necessary to improve neurological healing after SCI.Neurons utilize microtubules as their main structural element to produce elongated axons.Acetylation, which plays a crucial role in the stability of microtubules, primarily takes place in the luminal region of microtubules at the Lys 40 residue of α-tubulin.This process modifies the interaction between protofibrils, increasing stiffness and protecting them from damage caused by mechanical stress [57,58].The maturation of microtubules requires detyrosinylation.Tyrosinated α-tubulin is indicative of the formation of nascent microtubules and reflects the dynamics of microtubules.Promoting the expression of tyrosinated tubulin can lead to microtubule destabilization.Therefore, the ratio of acetylated to tyrosinated α-tubulin (A/T ratio) was used as a reliable method for quantitatively comparing microtubule stability properties [59,60].And the non-histone protein acetylation influences a myriad of cellular and physiological processes, including transcription, autophagy, mitosis, differentiation and neural function [61], in which neural autophagy plays a beneficial role of autophagy in axon regeneration [62].Therefore, targeting the microtubule stability after neural damage may be a potential target for spinal cord injury treatment.HDACs are proteases that regulate chromosomal structure and the expression of various genes.Among HDACs, HDAC6 is the only HDAC located in the cytoplasm, which regulates the deacetylation of non-histone structures or substrates and is involved in various physiological and pathological processes such as microtubule stabilization and autophagy [48].HDAC6 has been reported to promote neurite growth by inhibiting the level of autophagy and affecting the acetylation level of neural microtubules, thereby improving the stability of microtubules [57,58].In the present study, we found that EGFR + NSCs derived 
exosomal miR-34a-5p could regulate the expression of HDAC6 to increase the acetylation level of neurite microtubules, while decreasing the level of tyrosination of neurite microtubules, and finally promote the extension of neurites. After adult SCI, insufficient axon regeneration will lead to poor recovery, which is the most urgent problem to be solved in the treatment of SCI.Unlike neurite outgrowth during development, adult axon regeneration is characterized by insufficient intrinsic neuronal growth capability, lack of adequate neurotrophic factor support, and lack of guidance from the nerve growth matrix [63].Previous studies suggested that NSC-derived exosomes promoted angiogenesis through the delivery of VEGF, and these remodeled vessels could provide scaffolds for neurite extension [2].NSC-derived exosomes could alleviate neuroinflammation to promote the production and accumulation of neurotrophic factors, thus providing a favorable microenvironment for the extension of neurites [38,64].In the present study, we explored the effect of EGFR + NSCs derived exosomes and demonstrated it could trigger the intrinsic neurite outgrowth ability of neurons.In addition, we also found that EGFR + NSCs-Exos could internalized by the phagocytes and other tissue cells (astrocytes, endothelial cells, etc.).However, whether EGFR + NSCs derived exosomes could improve functional recovery after SCI through other mechanisms, needs to be further explored. Systemically delivered exosomes (Intravenous, Oral administration, Intraperitoneal injection, etc.) may cause exosome accumulation at the non-injured sites or rapidly eliminated from the body through fluids [44].Direct injection of exosomes could rapidly clear and caused secondary injuries to the spinal cord [65].These approaches limit their therapeutic effects.The impact of GelMA-based hydrogels is emerging as a promising material on the preclinical SCI landscape due to their advantageous properties such as plasticity, biocompatibility, and biodegradability [10,66,67].To overcome rapid clearance and sustain exosome bioactivity, we encapsulated these small vesicles in GelMA hydrogels to achieve protection and sustained release.We characterized the physicochemical properties of this GelMA-based hydrogel and confirmed that this exosomes-hydrogel control release system possesses excellent biocompatibility and gradual degradation.Our 3D-printed tissue-engineered patches with high plasticity enable local and sustained delivery of exosomes in the injured site of the spinal cord after SCI with long-term maintenance, which can minimize the problems caused by the rapid administration of exosomes with the above-mentioned methods, and provides us with novel prospects for the treatment of spinal cord injury. In conclusion, this study demonstrates that local administration of 3D-printed hydrogel patch coated EGFR + NSCs-Exos could transfer miR-34a-5p into neurons to inhibit the expression of HDAC6.Reduction of HDAC6 increased the stability of microtubules and activated autophagy, which in turn promoted neurite regrowth in the injury site of the spinal cord after SCI and improved their neurological function recovery.Our research provided a novel and precise cell-free therapeutic strategy for SCI repair. Fig. 1 . Fig. 
1.Schematic of the project.In this study, we have identified a specific subtype of neural stem cells (NSCs) known as EGFR + NSCs and isolated their exosomes.By employing transcriptomic and miRNA profiling, we characterized both EGFR + NSCs and their exosomes.Using an in vitro cellular model, we explored the mechanisms through which exosomes derived from EGFR + NSCs promote axonal outgrowth.In conjunction with 3D printing technology and a hydrogel scaffold, we designed and implemented a hydrogelcoated exosomal patch for the treatment of SCI, demonstrating superior efficacy in enhancing neural regeneration.These results showed that the exosomes derived from EGFR + NSCs were found to deliver miR-34a-5p to neurons, resulting in the downregulation of HDAC6.This, in turn, activated the autophagy process and enhanced microtubule stabilization.Our study introduces a novel class of exosomes derived from EGFR + NSCs with the potential to enhance functional recovery following SCI, offering a promising cell-free therapy approach for SCI treatment. Fig. 2 . Fig. 2. Characteristics of single-cell Transcriptome of neural stem cells and identification of their exosomes.(A) Uniform Manifold Approximation and Projection (UMAP) of the cells from mice SVZ.Colors indicate assigned activation states and cell types.(B) Violin plots showing the course of expression for genes in the neurogenic lineage (NSCs and neuroblasts).(C) Expression distribution of selected marker genes.(D) Selected GO biological processes were significantly overrepresented in aNSCs subtype ranked by enrichment foldchange.(E) Average expression of selected marker genes in (D) for each cell type in (A).(F) Flow cytometry sorting of the living CD235 − CD45 − CD133 + EGFR + NSCs.(G) Identification of EGFR + NSCs using immunofluorescent analyses of the expression of specific stem cell markers Nestin and Sox2, scale bar, 100 μm.(H) Flow cytometry analysis of EGFR as a cell marker.The unstained blank control is shown as a grey dashed curve and the test sample is shown as the blue, red, pink and orange curve.(I) immunofluorescent analyses of the expression of EGFR in the passaged NSCs.scale bar, 50 μm.(J) TEM image of exosomes secreted by primary and EGFR + NSCs, scale bar, 100 nm.(K) Nanoparticle tracking analysis of exosomes isolated from primary NSCs and EGFR + NSCs.(L) Western blotting analysis of exosome-specific markers CD9, CD63, and TSG101, as well as the negative exosomal marker CALNEXIN. Fig. 3 . Fig. 3. 
EGFR + NSCs-derived exosomes promoted functional recovery and neural remodeling both in vivo and in vitro.(A) SEM images showed the microstructures of the hydrogel.Scale bar: 50 μm.(B) The survival and cell viability of neurons co-incubated with hydrogel, were detected by the CCK-8 assay.N = 3 per group.(C) In vitro degradation curves of hydrogels over time based on gravimetric measurements.N = 3 per group.(D) Swelling ratios of hydrogels over time.(E) Release curves of Exosomes at different time points.N = 3 per group.(F) Confocal multi-layer scanning and 3D reconstruction of PKH67-labeled exosomes retained within the hydrogel.(G) In vivo tracing of the distribution of DiR-labeled exosomes embedded in hydrogels in the injured spinal cord at 1, 3, 7, 14, and 28 days post-SCI.(H) Quantification of average fluorescence efficiency of (G).N = 3 per group.(I) BMS scores in sham, control, Primary NSCs-Exos, and EGFR + NSCs-Exos-treated groups at different time points post-SCI.N = 5 per group.* Control vs Primary NSCs-Exos, # Primary NSCs-Exos vs EGFR + NSCs-Exos.(J) BMS sub scores in sham, control, Primary NSCs-Exos, and EGFR + NSCs-Exos treated groups at different time points post-SCI.N = 5 per group.(K) Representative electrophysiological trace images were recorded in each group at 28 days post-SCI.(L) Measurement of the MEP amplitude and latent period in (K).(M) Representative immunofluorescent stains of GAP43 images of the spinal cord at 28 days post-injury in each group.The Z1-Z4 indicates the area sequential 500 μm roster to the epicenter.Scale bar, 500 μm.(N) Quantification of GAP43 positive signals in different areas roster to the epicenter of sham, control, primary NSCs-Exos, and EGFR + NSCs-Exos treated mice in (M).(O) Representative immunofluorescent stains of NF images of the spinal cord at 28 days post-injury in each group.The Z1-Z4 indicates the area sequential 500 μm roster to the epicenter.Scale bar, 500 μm.(P) Quantification of NF positive signals in different areas roster to the epicenter of sham, control, primary NSCs-Exos, and EGFR + NSCs-Exos treated mice in (O).(Q) Representative immunofluorescent stains of Tuj1 image of the cultured neurons of PBS, primary NSCs-Exos, and EGFR + NSCs-Exos treated groups.scale bar, 100 μm and 50 μm.(R) Quantification of Tuj1 positive neurite length in PBS, primary NSCs-Exos, and EGFR + NSCs-Exos treated neurons in (Q).(S) Representative immunofluorescent images of the regenerative axons on the microfluidic culture plate (green: Tuj1).(T) Quantification of total axonal length in (S).N = 5 per group.Data are presented as mean ± SD, NS, no significant difference, *P<0.05,**P<0.01,***P<0.001.
A decision-support methodology for the energy design of sustainable buildings in the early stages Abstract A holistic approach integrating all aspects of building design directly affected by its energy performance is necessary for supporting decision-making throughout the design process. In this work, a decision-support methodology for the energy design of buildings is proposed, considering the three dimensions of the concept of sustainability, and adapted to the level of information detail available in the early design stages. The 4 modules composing this methodology are (i) a set of 36 key design variables defining the building design, (ii) a set of 16 indicators covering environmental, economic and user comfort aspects of building performance affected by energy-related design decisions, (iii) a calculation method for the performance indicators composed of 7 simulation models, and (iv) a knowledge base of building elements, energy sources and meteorological data. The methodology is primarily aimed to assist architects and engineers who participate in the design of office buildings in a French context. PUBLIC INTEREST STATEMENT The energy design of buildings can be defined as the decision-making for the selection and sizing of the elements of structures that have a decisive influence on their energy performance. The specific conditions of construction and the heterogeneity of the design criteria make each building a different case: there is no formula that works for all possible situations. To simplify its task, the designer, through his expertise and experience, offers validated solutions in other projects and adapts them to the context of the new project. Nevertheless, the increasing demands in terms of energy performance push the design of buildings beyond the usual solutions, making the work of the designer more and more complex. The objective of this paper is to develop a decision support methodology for the energy design of sustainable office buildings in the early stages of a construction project. Introduction The building sector is characterized by high energy consumption rates and high emissions of greenhouse gases. In 2010, the sector accounted for 40% of final energy consumption and 36% of all carbon dioxide emissions in the European Union (European Commission, 2013). Therefore, improving the energy performance of buildings represents a cost-effective way of reducing climate change and improving energy security (Eichhammer et al., 2009). Current approaches focusing solely on the reduction of energy consumption during the operation phase of buildings may cause undesirable results. These include a degradation of user comfort conditions and an increase of both the investment cost and the environmental impact of construction products. A holistic approach integrating other aspects directly affected by the energy performance of buildings is thus necessary for supporting decision-making throughout the design process. International initiatives such as the European projects OPEN HOUSE (2013) and SuPerBuildings (Hakkinnen, 2012) as well as the works of the technical committee CEN TC350 (European Committee for Standardization, 2011Standardization, , 2010Standardization, , 2012aStandardization, , 2012b converge on integrating the concept of sustainability into the decision-making process of building design. This is done by considering criteria covering the three dimensions of sustainable development: environment, economy and society. 
This approach is more effective when applied during the early stages of a building project, when design decisions have a higher impact potential on building performance (Agency, International Energy, 2003). However, these stages are characterized by a limited availability of information, which complicates performance assessment. Various tools and methodologies are proposed in the literature to assist decision-making in the early stages of the energy design of buildings by considering multiple sustainability-related criteria simultaneously (D'Amico & Pomponi, 2018;Gan et al., 2018;Meex, Hollberg, Knapen, Hildebrand, & Verbeeck, 2018b;Moghtadernejad, Chouinard, & Saeed Mirza, 2018). Ochoa and Capeluto (2009) propose NewFacades, a decision-support tool for the design of facades that evaluates the performance of design alternatives based on their energy consumption and daylight glare index. Wang, Zmeureanu, and Rivard (2005) employ a multi-objective optimization algorithm to find Pareto solutions combining life-cycle cost and an exergy-based environmental impact indicator. Similarly, Baek, Park, Suzuki, and Lee (2013) have studied the effects of eco-architectural techniques on life-cycle costs and carbon dioxide emissions. Attia, Gratia, De Herde, and Hensen (2012) help selecting passive and active strategies using energy consumption or generation in the design of nearly zero energy buildings in an Egyptian context. Hamdy, Hasan, and Siren (2013) propose a three-stage optimization model for designing nearly zero-energy buildings using present worth, space heating demand, summer comfort, difference in life-cycle cost and primary energy consumption indicators. Requirements for the implementation of Life Cycle Assessment in early stage design of buildings have been proposed by Meex, Hollberg, Knapen, Hildebrand, and Verbeeck (2018a); their methodology is designed for architects, in order to take into account environmental impact in these early design stages. Similarly, Gervásio, Santos, Martins, and Simões da Silva (2014) presented a life-cycle approach in early design stages in which the life-cycle environmental performance is estimated by a macro component; their approach allows overcoming the lack of design data in order to guide the designers in achieving a long-term building efficiency. Furthermore, a design-decision support tool named UrbanSOLve has been proposed and tested by Nault, Waibel, Carmeliet, and Andersen (2018); it aims to design the neighborhood according to its energy and daylight performance. In this tool, the performance evaluation is based on a predictive approach using a metamodeling and an optimization procedure in order to identify the design parameters that maximize the benefits and minimize the costs; this tool can also generate different alternatives of design, leading practitioners to compare them before planning their own final design decisions. In general terms, the evaluation tools used in the early design stages are based on simplified calculation algorithms, which allow for short simulation times and require a limited amount of input data in order to be compatible with the level of information available at these design stages. However, these tools consider at most two of the three aspects of the concept of sustainability at a time, normally complemented with energy consumption criteria. 
In this context, we propose a progressive decision-support methodology for the energy design of buildings which considers the three dimensions of sustainability and which is adapted to the level of information detail available in the early design stages. Furthermore, we propose a model for structuring decision-making to be integrated into the decision support methodology in order to take into account the progression of design decisions throughout the building design process. The methodology is primarily aimed to assist architects and engineers who participate in the design of office buildings in a French context. Progressive decision support methodology The aim of the proposed methodology is to provide an overview of the impact of energy-related decisions on all aspects of the building influenced by energy performance, in order to assist through the selection and sizing of the related building elements. The methodology is based on a comparative assessment of design alternatives through the concept of sustainability and based on a whole life-cycle approach. Four modules compose the proposed methodology: (1) A set of 36 key design variables defining the building design, representative of the decisions taken during the early stages, and a set of 16 indicators covering environmental, economic and user comfort aspects of building performance affected by energy-related design decisions. (2) A calculation method for the performance indicators composed of seven simulation models and adapted to the level of information detail available in the early design phases. (3) A progressive logic of the design decisions given as a model of sequential distribution of the choices to be made at each phase of the project. This logic is based on a basic building configuration, representing values to be considered by default for the choices that have not yet been decided in a given phase. (4) A knowledge base of building elements, energy sources and meteorological data, used to effortlessly translate design decisions into their corresponding simulation parameters. An overview of the proposed methodology and its composing modules is shown in Figure 1. Key design variables Based on an analysis of the decision-making process during the early design stages, 36 key design variables have been identified as having a decisive impact on the energy performance of the building. These variables cover building geometry, materials and technical systems, and are presented in Table 1. A series of hypotheses and simplifications were considered for the definition of the key design variables. In order to simplify geometrical input requirements, identified as one of the main barriers towards decision-support tools use during early design stages (Attia et al., 2012), a rectangular building plan and a typical distribution of interior spaces, both representative of office buildings, were considered as presented in Figure 2. Table 2 proposes the building spatial dimensions. While energy consumption of the building is calculated based on its heating, cooling, mechanical ventilation and artificial lighting needs, no design variables regarding artificial lighting or system operation parameters were taken into account at this point, since these are normally defined during later stages of the design process. Thus, typical values for these variables in the office building were considered. 
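To make the structure of this first module concrete, the sketch below shows one possible way of grouping a handful of such variables in code. The field names and default values are illustrative assumptions only; they do not reproduce the 36 variables of Table 1 nor the default configuration of Table 5.

```python
# Illustrative grouping of a few early-stage key design variables
# (field names and values are assumptions for illustration only).
from dataclasses import dataclass

@dataclass
class EarlyDesignVariables:
    # Building geometry
    orientation_deg: float = 0.0          # azimuth from South
    storey_count: int = 3
    south_glazing_ratio: float = 0.40     # glazed fraction of the south facade
    # Envelope materials
    wall_type: str = "concrete_block"
    insulation_thickness_m: float = 0.12
    glazing_type: str = "double_glazing_air_16mm"
    # Technical systems
    heating_system: str = "gas_condensing_boiler"
    ventilation_system: str = "single_flow"

# A design alternative only overrides the decisions already taken; the remaining
# fields keep their default values, mirroring the default-configuration logic.
default_design = EarlyDesignVariables()
alternative = EarlyDesignVariables(orientation_deg=30.0, insulation_thickness_m=0.20)
```

Comparing `default_design` and `alternative` is then simply a matter of feeding both variable sets to the same indicator calculation chain.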
Performance indicators and calculation methods

In order to characterize the impact of energy-related design decisions, a set of 16 indicators was selected covering the three dimensions of sustainability, based on the performance-based approach followed by the OPEN HOUSE and SuPerBuildings projects as well as the works of the CEN TC350. These performance indicators are compatible with the limited level of information detail available at the early design stages and are calculated using simple algorithms with a reduced cost in simulation time. Furthermore, their selection is consistent with the current availability of sources of simulation parameters, including environmental profile databases, considered as one of the main obstacles for analysis in France (LEBERT et al., 2013). The selected performance indicators are presented in Table 3, and further information regarding their selection criteria and calculation is given in the following sections. The scope of analysis of the environmental and economic evaluations includes (i) the building components used both during the construction phase and the operation phase (due to component replacement), and (ii) the energy resources consumed by technical systems during the operation phase. To evaluate the environmental impact at the whole-building level, the individual contributions of all material and energy flows considered in the scope of analysis are aggregated. Finally, the values of the environmental profile of the building are expressed per unit of floor area and per year of the life-cycle analysis period (set as 50 years in this work), in order to facilitate comparison between different building design alternatives.

Economic indicators

According to ISO 15,686-5 (International Organization for Standardization, 2008), the contribution of a type of flow to the overall cost of the building $C_{flow}$ is simply given as the product of the number of functional units of the flow used throughout the analysis period $Q_{flow}$ and its associated cost per unit $c_{flow}$:

$$C_{flow} = Q_{flow} \cdot c_{flow}$$

This method applies for the calculation of the construction cost, given at the time of the building's delivery. In contrast, when calculating operating costs, which take place at a later stage in the building's lifetime, these expenses must be updated by projecting their future value to its equivalent value at the reference date. This is done through the discount rate $a$ as shown below:

$$C_{flow} = \frac{Q_{flow} \cdot c_{flow}}{(1+a)^{t}}$$

where $t$ is the time (in years) between the building's delivery date and the date of the operating expense.

Social indicators

2.2.3.1. Hygrothermal comfort model. Although hygrothermal comfort may depend on a large number of parameters, a thermal model based on an interval of acceptable air temperatures is prevalent in current practice because of its implementation simplicity. Such a simple model has been retained in this work, in accordance with the chosen building thermal model, which does not include any equation for humidity transfers. In this model of thermal comfort, two temperatures are specified: the indoor air temperature and the operative temperature of the indoor environment. The estimation of the operative temperature of an indoor space $T_{op}$ usually depends on the conditions of relative humidity and air velocity and on the activity level and clothing of the occupants.
However, in the case of office buildings, this temperature can be approximated as the average of the temperature of the walls surrounding a space $T_{wall-in}$ and the air temperature inside the space $T_{air-in}$ (American Society of Heating, 2004):

$$T_{op} = \frac{T_{wall-in} + T_{air-in}}{2}$$

The percentage of time of thermal comfort in each thermal zone of interest is then calculated as the ratio between the time of comfort during occupancy and the total number of hours of annual occupancy. Finally, this hygrothermal comfort indicator is given as the weighted average, based on surface area, over the spaces of interest (offices and meeting rooms).

2.2.3.2. Visual comfort model. In general, a space can be considered autonomous in terms of daylight when the illuminance inside it is higher than the level required by the type of activities. The level of illuminance required for an office space is normally considered equal to 500 lux, as proposed in the EN 15,251 standard (European Committee for Standardization, 2007). In this work, the average Daylight Factor DF is used to calculate the illuminance level in each of the spaces of interest. This Daylight Factor, commonly used to evaluate the potential of access to daylight, is given as the ratio between the indoor illuminance level on a reference plane and the corresponding outdoor illuminance level under an overcast sky (Université catholique de Louvain, 2014). Its calculation involves the geometry of the space as well as the optical properties of the internal surfaces enclosing this indoor space. Based on this Daylight Factor and the required indoor illuminance level of 500 lux, a corresponding minimum value of the outdoor illuminance is calculated for each space. This minimum level is then compared at each time step of the simulation to the outdoor illuminance level for the building location, thus determining the number of occupancy hours during which the space is autonomous in terms of daylight. As in the case of the hygrothermal comfort indicator, the daylight autonomy at the building level is given as the weighted average, based on surface area, over offices and meeting rooms.

2.2.3.3. Acoustic comfort model. In this work, the weighted standard level difference of the building facades is obtained using the calculation method proposed by the French certification Qualitel (HOUSE 2012). This method condenses the complex analysis of spectral profiles defined in the EN 12,354 standard into a single parameter characterizing each façade component: the reduction index. This value depends on the type, thickness and position of each wall layer composing the facade. As specified in the Qualitel certification, three types of acoustic transmission are to be considered in the calculation of the weighted standard level difference: direct, indirect and equipment transmissions. Indirect transmissions are negligible when the required insulation is less than 35 dB, which is usually the case. Furthermore, acoustic transmissions through equipment such as air intakes are not considered in this work because they are not yet defined in the early stages of the design project.
Thus, only direct acoustic transmissions are considered here. The weighted standard level difference of a composite façade $D_{nT,A,tr}$ is then expressed in terms of the volume of the space $V_{space}$ as well as the surface area $S_j$ and reduction index $R_{A,tr-j}$ of the windows and opaque walls composing the façade. In the case of windows, the reduction index $R_{A,tr-win}$ is usually given by the manufacturer based on the type and number of glazing elements. The reduction index of an opaque wall $R_{A,tr-wall}$ is calculated as the sum of the value related to the structural wall $R_{A,tr-str}$ and its correction due to the thermal insulation $R_{A,tr-therm}$, both values given in the Qualitel certification for a wide range of materials, as shown below:

$$R_{A,tr-wall} = R_{A,tr-str} + R_{A,tr-therm}$$

2.2.3.4. Indoor air quality model. The calculation method of the expected percentage of user satisfaction with the quality of indoor air is given as a generalization of the comfort levels proposed in EN 15,251 (European Committee for Standardization, 2007). In this standard, 4 categories of comfort related to indoor air quality are proposed as a function of the air change rate per occupant. Each category is associated with a given percentage of user dissatisfaction with the air quality. The indicator selected in this work represents the complement of this percentage; it thus represents, for each air change rate, the ratio of occupants considered to be satisfied with the indoor air quality. In order to generalize the assignment of a percentage of user satisfaction for any given value of the ventilation rate per occupant $\dot{q}$, an equation defining the relationship between these two quantities has been obtained through a regression analysis. The expected percentage satisfied $n_{sat}$ is thus estimated as follows:

$$n_{sat} = 100 - 224.9 \cdot \dot{q}^{-0.75} \quad (7)$$

Progressive decision-making model

Decision-making through the building design process follows a natural progression from general decisions, such as building geometry and facade glazing ratio, to more specific decisions, such as glazing type and thermal insulation thickness. In order to aid decision-making throughout the early design stages, we propose in this work a progressive decision-making model as a support guide to structure decision-making in the energy design of buildings. This progressive model is to be integrated into the performance assessment methodology described in the previous sections, aimed at the energy design of sustainable buildings. The design process of a building varies from one country or region to another. In France, the progression of a construction project is dictated by the legal framework on public works performed by private providers (JORF, 1994). Among the various project phases identified in this framework, it is in the first three that the decisions most critical to the energy design of the building are taken. These design stages ("Esquisse", "Avant-Projet Sommaire" and "Avant-Projet Détaillé") are translated in this work as Schematic Design, Outline Design and Detailed Design, as shown in Figure 3. During the schematic design phase, an initial but complete proposal for the design of the building is defined in broad terms. All aspects of the building are considered, at least in very general terms, to define the overall outline of the energy strategy. In order to introduce a linear logic in the design process, the schematic design phase is divided into four sequential sub-stages, as shown in Figure 3.
These steps are mostly related to or have an impact on one of the following building aspects: spatial composition, thermal envelope, thermal mass and technical equipment. At each of these sub-stages, design decisions are primarily related to or have an impact on the aspect in question. After an initial definition of the building design has been declared, the following two stages are aimed at increasing the precision of variable declaration. In the outline design phase, the initial proposal defined in the previous phase is complemented by increasing the level of accuracy of design decisions, especially in terms of building envelope composition and the choice of technical systems. The detailed design phase is characterized by a higher level of accuracy in the declaration of all design variables from the two previous stages; in this third phase, the designer makes specific decisions for the dimensioning of building elements, mainly concerning the composition of the building envelope and the choice of materials. Table 4 shows the design decisions to be made at each of the design stages as proposed in the progressive decision-making model. Two levels of precision are identified in the decision-making model proposed here: • On a general level, typologies of either numerical values or types of building components are proposed, given as typical design choices representing a family of components or a representative interval of values. • On a more precise level, specific numerical values or types of building components are to be declared. In order to evaluate the performance indicators at the earlier phases, where only general decisions are fixed, a set of default values describing a default initial building configuration will be used to fill in the blanks (specific values which have not been fixed). This set of default values has to represent the most common values for a given building type in the corresponding implementation context. These default values are to be identified from surveys of high-performance buildings as to serve as proven starting points for good energy building design (see sections 3.2 and 4.2). Basic knowledge database A basic knowledge database containing the simulation parameters describing the key design variables (see Table 1) has been compiled. It consists of three types of information: • Environmental and economic data describing the energy sources used by the technical systems. • Meteorological data, including solar irradiation, wind speed and illumination levels of various geographical locations. Definition of the basic thermal model The selected thermal model is based on a multizone building nodal network model, as proposed in the works of Roux (1984) and Caccavelli, Roux, and Brau (1987). In this model, the main building elements, including walls, windows, floor slabs and inside air mass, are modeled as a network of resistances and capacities, as shown in Figure 4. Based on this principle, building elements are represented as follows: • the elements composing the envelope of the thermal zones, including roof, bottom floor, walls and intermediate floors between thermal zones, are modeled using a thermal resistance R wall as well as 2 thermal capacitances C wallÀin and C wallÀout on each side of the element, as shown in Figure 4, • partitions within the same thermal zone are modeled by a thermal capacitance, • glazing elements, whose thermal inertia is considered negligible compared to that of opaque walls, are represented by a thermal resistance. 
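As a rough illustration of this resistance-capacitance representation, the following minimal single-zone sketch advances the three temperature nodes of one wall-air assembly by one explicit time step. All numerical values and the node layout are simplified assumptions for illustration only; they do not reproduce the multizone network of Figure 5.

```python
# Minimal single-zone sketch of the nodal (resistance-capacitance) thermal model.
# Parameter values are illustrative assumptions, not calibrated building data.

def step_zone(T_air, T_wall_in, T_wall_out, T_out, gains, dt=60.0):
    """Advance the air node and the two wall-surface nodes by one explicit step.

    T_air, T_wall_in, T_wall_out : current node temperatures [degC]
    T_out  : outdoor air temperature [degC]
    gains  : solar and internal loads absorbed by the zone air [W]
    dt     : time step [s] (kept short so the explicit scheme stays stable)
    """
    R_wall = 2.0e-3   # conduction resistance between the two wall-surface nodes [K/W]
    R_win = 2.0e-2    # window resistance between zone air and outdoor air [K/W]
    H_in = 200.0      # convective/radiative coupling, zone air <-> inner surface [W/K]
    H_out = 400.0     # convective/radiative coupling, outer surface <-> outdoor [W/K]
    C_air = 5.0e5     # zone air capacitance [J/K]
    C_in = 2.0e7      # inner wall-surface capacitance [J/K]
    C_out = 2.0e7     # outer wall-surface capacitance [J/K]

    # Heat balance at each node [W]
    q_air = H_in * (T_wall_in - T_air) + (T_out - T_air) / R_win + gains
    q_in = H_in * (T_air - T_wall_in) + (T_wall_out - T_wall_in) / R_wall
    q_out = (T_wall_in - T_wall_out) / R_wall + H_out * (T_out - T_wall_out)

    # Explicit update of the capacitive nodes
    return (T_air + dt * q_air / C_air,
            T_wall_in + dt * q_in / C_in,
            T_wall_out + dt * q_out / C_out)


# Example: one simulated hour with constant boundary conditions
T = (20.0, 19.0, 10.0)
for _ in range(60):
    T = step_zone(*T, T_out=0.0, gains=800.0)
```

In the full model, an analogous balance is written for every envelope element and thermal zone represented in Figure 5.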
Additionally, each thermal zone is considered at a homogeneous air temperature value. Figure 5 presents the whole thermal-electrical representation of the elements composing the building, as well as all the thermal exchanges considered in the building thermal model. For each of these nodes, different heat exchange phenomena are considered in the thermal balance: • convection and radiation between the air inside the thermal zone and the inner surface of the walls, • convection, radiation and conduction between the air inside the thermal zone and the outdoor air, through the windows, • convection due to air flow between the thermal zones and the outdoor air, • absorption of solar radiation by the air inside the thermal zone and by the inner and outer surfaces of the walls, Figure 4. Principle of the thermal-electrical analogy for a wall toward a nodal network model. • absorption of heat flow from internal loads (occupancy, office equipment and lighting) by the air inside the thermal zone and the inner surfaces of the walls, • convection between the air inside the thermal zone and the heating and cooling technical systems, • conduction between the inner and outer surfaces of the envelope walls (excluding thermal bridges), • conduction between the air inside the thermal zones and the air outside through the thermal bridges of the envelope, • convection and radiation between the outer surface of the envelope walls of thermal zones and the outdoor air, • conduction between the outer surface of the envelope walls of the thermal zones and the ground. Definition of a basic default initial configuration The choice of the default values that characterize the initial default configuration were identified through a study of the BBC Observatory database (Collectif Effinergie, 2014) and are shown in Table 5. This database compiles energy-efficient project statistics, with High Environmental Quality (HQE) and Low Consumption Building (BBC) certifications as well as other exemplary buildings in France. All of the office building projects, documented in this database in February 2013 (a total of 63 projects), were considered for the identification of the possibilities of choice in this work. Implementation methodology of a complementary system: illustration through the example of a double skin facade It could be desirable from an architect or engineer point of view, to add a complementary thermal and/ or architectural system to the basic building thermal model succinctly described in the last section. In order to illustrate the corresponding implementation methodology, we consider a novel energy system, namely a double skin facade. This energy system can be defined as a traditional façade doubled by the exterior by a second, essentially glazed façade (see Figure 6). The appeal of the implementation of such building element is double: (i) during winter, solar heat energy is captured by the cavity between the traditional and the glazed facades, which allows for the creation of a buffer space; this buffer space thus insulates the building from the low temperatures of the exterior environment, (ii) during summer, the heated air inside the cavity leads to a natural air circulation cycle caused by a chimney effect, bringing fresh air to the building façade or interior, through passive means. 
Modeling of the Double skin facade

In accordance with the principle of electrical-thermal analogy, in this double skin model, the space created between the building envelope and the glass facade is characterized by a representative air temperature node located inside the cavity (T_air-cav). The temperature characterizing this node is estimated from a simplified formulation of the thermal balance, considering the heat exchange with the thermal zones contiguous to the cavity and with the outside. The exchange phenomena considered in the modeling of the double skin glass facade are (see Figure 7):

• conduction, convection and radiation between the air inside the cavity and the outside air, through the glazing which constitutes the facade of the double skin (h_cav-out);
• conduction, convection and radiation between the air inside the cavity and the air inside the building, through the glazing composing the thermal envelope in contact with this cavity (h_win);
• convection and radiation between the air inside the cavity and the outer surface of the opaque walls composing the envelope (h_cav-wall);
• convection due to the renewal of the cavity air with outside air, by wind action and by thermal draw (h_cav-vent);
• exchange between the air inside the cavity and the air inside the thermal zones in contact with it, through the thermal bridges of the building envelope (h_bridges);
• absorption of heat flux due to solar radiation by the air inside the cavity (n_rec · S_wall · G_glaz · φ_sun).

The regulatory method for the modeling of sun-tempered buffer spaces has been considered, in coherence with the principles of the present work, in order to model the double skin façade (see the black added lines and nodes in Figure 7), because this method allows a good balance between simplification of calculation and representativeness of the phenomena involved. In order to take into account in a more complete way the impact of the decisions associated with the design of the glazed facade on its thermal behavior, an adaptation of this modeling (Th-BCE 2012 rules, (Centre Scientifique et Technique du Bâtiment, 2006)) is proposed in this work with regard to the efficiency of recovery of solar contributions in the cavity. This efficiency is defined as the portion of the solar heat flux remaining in the buffer space relative to the incoming solar heat flux to the cavity. It is usually considered constant and equal to 0.8 in the Th-BCE 2012 rules (n_rec = 0.8); in this work, the value of this output, namely the n_rec coefficient (see Figure 7), is estimated from the geometric and thermal characteristics of the solar protections and the building envelope, so as to model the influence of these elements in the thermal dynamics of the glazed double skin facade. Note that the model considered in this work does not include the possibility of using the preheated air in the cavity of the double skin facade to supply the ventilation system, either directly or through a double-flux heat exchanger.

Addition to the basic default initial configuration

For the design of a double skin facade, the decision values describing the default configuration were obtained from the analysis of a study of 55 office building projects in 10 different countries (Poirazis, 2004). These values are shown in Table 6.

Addition to the knowledge database

As shown in Figure 6, the basic components of a double skin facade are the glazing and the façade structure.
Since the glazing types used in double skin facades are practically the same as the ones used in windows, the only component which is not documented in the knowledge database of the base decision-support tool is the façade structure. The environmental, economic and technical characteristics of a metallic structure for such purposes have been added to the knowledge database. Qualification of the decision-support tool Decision-making for the design of energy-efficient buildings requires a global vision, which integrates, in parallel, the functions of the system (physical comfort of the occupants) and the constraints (environmental costs and impacts). The currently available methodologies and tools that consider these three aspects of building performance are not suitable for design in the early stages of a construction project. It is nevertheless in these first stages of reflection that the most decisive decisions in terms of energy performance are taken. For this reason, the novelty of the proposed method is to support decision-making in a building energy design process by integrating both: the logic of knowledge progression, the social, economic and environmental dimensions of sustainable development. In order to show the interest of the application of the proposed decisionsupport tool in the design of an office building, a design scenario composed of 3 progressive decisions allocated along the early design stages is further presented. The positioning of these design decisions in the progressive decision-making model is shown in Figure 8. In the following three sub-sections, the results, in terms of performance indicator values, are given as the variation in percentage of each performance indicator (see Table 3) for a given building configuration (X 1 ) compared to the considered default configuration (X 0 ). First decision: building orientation In the schematic design phase, one of the first decisions is to choose the building orientation, relative to the South (for the northern hemisphere) or to the North (for the southern hemisphere). This simple design choice plays a decisive role in the energy performance of the building, particularly in the distribution of incident solar contributions on the building façades (Liébard & De Herde, 2005). Figure 9 shows the variation of performance indicators of the initial default building by varying its orientation from the South: 0°(South orientation), 30°, 60°and 90°(East orientation). Results of the variations of performance indicators versus building orientation towards the West (-30°, -60°,-90°) are not presented because they are similar to results versus building orientation towards the East (30°, 60°, 90°), for reasons of symmetry. One can note that the energy performances of the building, and thus the operating cost of this building, are significantly impacted for orientation variation values (from South) less or equal than 30°. These variations did not affect the cost of the construction and thus of the investments. In the same way, beyond this zone, a significant increase in the majority of environmental can be observed: • non-renewable primary energy use • •freshwater consumption • non-hazardous solid waste • radioactive solid waste • depletion potential of stratospheric ozone layer Other indicators are also affected by the variation of this design variable, but to a lesser degree: • hazardous solid waste • Global warming potential Figure 9. Variation of performance indicators for all building orientations from a south orientation. 
• Acidification potential of land and water • Formation potential of tropospheric ozone • •Life-cycle cost Two indicators present a minimum degree of sensitivity with respect to the variation of the azimuth: the construction cost and the percentage of time of thermal comfort (this last performance indicator being slightly enhanced). Finally, one can point out that the rest of the user comfort indicators (visual, acoustic and air quality) are not affected by the building orientation. Figure 10 illustrates the variation of performance indicators with the implementation of a double skin facade configuration compared to the default-building configuration. As shown in this figure, the presence of the double-glazed facade has two different effects on the energy performance of the building. As expected, it reduces the building's heating needs by approximately 20% compared to the basic building configuration; this involves a decrease in the environmental indicators associated with the consumption of electrical energy. At this stage, an analysis in terms of time of ROI (Return On Investment) should be done to support the environmental decision-making. Nevertheless, the largescale use of glazing and the supporting metal structure that constitute this double skin facade, both components that are associated with high environmental impacts and high costs, have very negative effects on the building's performance; the most negatively affected indicators are of course the environmental indicator "hazardous solid waste" because of the use of glassing and metal structure, and the three economic indicators, due to the increase in construction materials needs for building. Third decision: type of double skin façade glazing The third design decision concerns the type of glazing composing the double skin facade. Two types of glazing are proposed in the knowledge database for "Thermal mass" stage of the schematic design phase (see Figure 3): single glazing unit with 6 mm thick clear glass (default building configuration), double glazing which consists of two sheets of glass 4 mm thick, separated by a space filled with 16 mm of air. Results presented in Figure11 concern the comparative impact of this default single glazing and of the double glazing for the double skin façade. It points out that some of the environmental indicators are better for a facade with a single glazing, while others are better results for a facade with double glazing. To understand these indicator behaviors, two different effects have to be considered: indeed, a double glazing allows the reduction of the thermal needs, because of the increase of the thermal performance of the envelope; nevertheless, the implementation of such a type of glazing induces larger environmental impacts and larger investments. Thus, the choice of double glazing leads to reduce the indicators that are more sensitive to the energy consumption of thermal systems, while the choice of single glazing limits those that are more sensitive to the use of building materials. Conclusion In this work, a progressive decision-support methodology for the energy design of buildings during the early stages has been developed. This methodology provides a quantitative assessment of the impact of technological-related and material-related decisions on the overall building performance, characterized by a set of indicators covering the three dimensions of sustainability and considering the whole building life cycle. 
The three dimensions concern environmental criteria, comfort criteria and three economic costs and are evaluated by the way of dedicated performance indicators. The selected performance indicators and their calculation algorithms are compatible with the limited level of building detailed information available during the early design stages. Their calculations are based on standard technical databases and on improved energy performance models. Through the case studies presented here, the use of the evaluation tool helped to guide decision-making by offering a global vision of the energy performance of the building. This global vision makes it possible to fully consider the effects of design decisions taking place in different stages of the building's life cycle. An energy strategy may be attractive because it has positive effects on the building's performance during operation; however, the implementation can bring negative effects that can mitigate these gains, or even cancel them completely. Consideration of the effects of design decisions on the entire building life cycle and on the set of performance criteria is thus necessary to respond to these types of situations. The proposed progressive decision-making model will lead in the future to a complete decision-support tool for the energy design of buildings throughout the early design stages, considering the three dimensions of sustainability (Velázquez, 2015). HIGHLIGHTS • A progressive decision-support methodology for the energy design of buildings. • Assessing building performance considering the three dimensions of sustainability. • 16 performance indicators covering environmental impacts, cost and user comfort. • 36 key design variables used to describe the building in the early design stages. • Performance assessment compatible with information detail in early stages.
2019-11-14T17:12:39.794Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "ab1850c8462056ce3bd12292855602cb793c2772", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1080/23311916.2019.1684173", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "c05ff736b9989ec7a09bd465dca0a88ba3927b8a", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
55387575
pes2o/s2orc
v3-fos-license
Diversity did not influence soil water use of tree clusters in a temperate mixed forest
Compared to monocultures, diverse ecosystems are often expected to show more comprehensive resource use. However, with respect to diversity–soil-water-use relationships in forests, very little information is available. We analysed soil water uptake in 100 tree clusters differing in tree species diversity and species composition in the Hainich forest in central Germany. The clusters contained all possible combinations of five broadleaved tree species in one-, two- and three-species clusters (three diversity levels), replicated fourfold (20 one-species, 40 two-species and 40 three-species clusters). We estimated soil water uptake during a summer dry period in 0–0.3 m soil depth, based on throughfall and soil moisture measurements with a simple budgeting approach. Throughout the whole vegetation period in 2009, soil water uptake was additionally determined at a higher temporal resolution and also for a greater part of the soil profile (0–0.7 m) on a subset of 16 intensive clusters. During the dry spell, mean soil water uptake was 1.9 ± 0.1 mm day−1 in 0–0.3 m (100 clusters) and 3.0 ± 0.5 mm day−1 in 0–0.7 m soil depth (16 clusters), respectively. Besides a slightly higher water use of Fraxinus clusters, we could not detect any effects of species identity or diversity on cluster water use. We discuss that water use may indeed be a conservative process, that differences in tree-species-specific traits may be compensated for by other factors such as herb layer coverage and tree spatial arrangement, and that diversity-driven differences in water use may arise only at a larger scale. We further conclude that with respect to stand water use “tree diversity” alone is not an appropriate simplification of the complex network of interactions between species traits, stand properties and environmental conditions that have varying influence on stand water use, both in space and time.
Introduction
Little information is available on the relationship between tree diversity and stand water use in temperate forests; but water use is most likely related to productivity in forest stands (Law et al., 2002). For grasslands, an increase in productivity with species diversity has been widely recorded (e.g. Hector et al., 1999). Evidence for a positive relationship between productivity and tree species diversity in forests is accumulating, indicated by a modelling exercise of competitive interactions of randomly chosen species (Tilman et al., 1997). From a forest succession model dealing with "real" species, the conclusion was derived that "tree diversity strongly influences primary productivity in European temperate forests across a wide range of sites with different climates through a strong complementarity effect" (Morin et al., 2011). Similar findings are also supported by some field studies: a positive relationship between tree species diversity and productivity was indicated in early successional and disturbed sclerophyllous and conifer forests before canopy closure (Vilà et al., 2005). In a Panamanian experimental plantation, mixed-species plots yielded on average 30-58 % higher summed tree basal area compared to monocultures after 5 yr (Potvin and Gotelli, 2008). On 12 000 permanent forest plots in Canada, a strong positive effect of biodiversity on tree productivity (controlled for environmental conditions) 
was obtained (Paquette and Messier, 2011).Another largescale study in Sweden across 400 000 km 2 found approximately 50 % higher biomass productivity comparing one and five species plots (Gamfeldt et al., 2013).Also a large-scale cross-European modelling study indicated that tree wood productivity was positively related to species richness (Vilà et al., 2013). However, mainly due to the longer life cycle of trees, and possible changes in biodiversity-productivity relationships with tree age, experimental approaches in forests remain complicated (Pretzsch and Schütze, 2009).Pretzsch (2005) reported that productivity of mixtures of Norway spruce (Picea abies) and European beech (Fagus sylvatica) trees may differ from the respective monocultures by −20 to 10 %, dependent on site conditions.In addition climatic variables influenced wood production in varying direction and magnitude dependent on forest type (Vilà et al., 2013).Even a weak negative relationship between tree species diversity and above-ground biomass was found on several sites across Central European forests (Szwagrzyk and Gazda, 2007) and also at our study site (Jacob et al., 2010). In grasslands, it has been observed that plant species diversity enhances transpiration rates (Verheyen et al., 2008).In addition, in an experimental tree plantation in Panama, transpiration increased with increasing tree species diversity (Kunert et al., 2012).In both studies, complementarity of water uptake was discussed as an underlying mechanism.This would imply water resource partitioning and, consequently, more effective utilization of water resources (Hagger and Ewel, 1997;Hooper et al., 2005).Hence, biodiversity-rich stands may be more susceptible to drought events since they extract water "more efficiently" than less diverse stands.This coherence has already been demonstrated for grasslands (Van Peer et al., 2004;Verheyen et al., 2008). It is important to study if a water-use-diversity relationship also exists for forests, since there is an ongoing trend in Central European silviculture towards more naturalness or close-to-nature forestry (O'Hara, 2001), which implies a transformation of monocultural stands of narrow tree diameter range into stands composed of several tree species with a broader range of diameters.In addition to improving ecological, commercial and recreational purposes of forests, it is believed that this forest transformation might increase the resilience to extreme climatic conditions (L ÖWE, 2011).Climatic extremes are predicted to occur more frequently for large parts of Central Europe (Rowell and Jones, 2006;Christensen et al., 2007).Now if the results from grasslands are valid for forests too, the anticipated effect of forest restructuring might not be achieved. 
First studies on the relationship between tree species diversity and forest water use were carried out in the broadleaved Hainich forest in Germany: here increased water extraction from the topsoil during a summer drought in diverse plots compared to Fagus-sylvatica-dominated plots was observed (Krämer and Hölscher, 2010).Canopy transpiration was also found to differ among diverse and less diverse stands in certain years (Gebauer et al., 2012).However, none of the outcomes could clearly be attributed to a biodiversity effect, as increasing biodiversity was paralleled by decreasing Fagus admixture, and no monocultures of any other species involved were studied.In order to differentiate between the effects of tree diversity and of species identity, we applied a new experimental design in the same study area, where all observed tree species occur in monospecific study plots and in admixture.We selected 100 groups of three neighbouring trees, hereafter named tree clusters, which contained all possible combinations of five tree species (Acer pseudoplatanus, Carpinus betulus, Fagus sylvatica, Fraxinus excelsior, and Tilia sp.).All species occurred in single-species clusters (n = 20), as well as in two-and three-species mixtures (n = 40, each).We asked whether stand water use is related to tree diversity.Our hypothesis was that water uptake in tree clusters increases with increasing species diversity. Study area The study was conducted in the deciduous Hainich forest in central Germany close to the village of Weberstedt (51 • 05 28 N, 10 • 31 24 E).The forest has remained free from harvesting or thinning for almost 50 yr, and it was estimated that the area has hosted a deciduous forest for over 200 yr (Mölder, 2009;Mölder et al., 2006).The study sites are located on level terrain in the south-eastern part of the forest area (Fig. 1a) at an elevation of approximately 350 m a.s.l.The park receives a mean annual precipitation of 544-662 mm (average of 30 yr of precipitation records from four climate stations around the national park; DWD, 2008) and has a mean temperature of 7.5 • C. Soil texture is characterized by high clay content of ∼ 25 % at a soil depth of 0-0.3 m and 33-41 % at 0.4-0.6 m, respectively (Guckland et al., 2009).Limestone already occurred at shallow soil depths (0.6-1.0 m) limiting the rooted soil volumes.Stand fine root biomass in the area decreased exponentially with soil depth with 63-77 % being concentrated in the upper 20 cm (Meinen et al., 2009). In 2008, tree clusters were selected in two mixed forest stands within the Hainich forest area (sub-areas Lindig and Thiemsburg, Fig. 
1b).All clusters were located in close vicinity to the study plots of Krämer andHölscher (2009, 2010).Each cluster consisted of three co-dominant trees arranged in a triangular shape with their surrounding neighbours.Observed tree species on these clusters were Acer pseudoplatanus (sycamore maple), Carpinus betulus (hornbeam), Fagus sylvatica (European beech), Fraxinus excelsior (ash) and Tilia sp.(lime).In this forest, the two Tilia species cordata and platyphyllos often form hybrids, which are phenotypically difficult to differentiate.Hence we did not differentiate at the species level, and we refer to them as Tilia sp.Table 1.Soil properties (0-0.3 m soil depth) and structural characteristics of the one-to three-species tree clusters (means ± sd).Similar letters indicate no significant differences between the three diversity levels (p ≤ 0.05, ANOVA and Tukey's HSD or Kruskal-Wallis test, canopy openness). Cluster characteristics Diversity level 1-species (n = 20) 2-species (n = 40) 3-species (n = 40) Cluster selection was based on a predetermined combination of tree species comprising all possible neighbourhood combinations of the five tree species.This resulted in five different single-species, ten two-species and ten three-species cluster combinations, with each combination being replicated four times (twice replicated in each sub-area, Thiemsburg and Lindig).In the two species combinations, it was assured that not one species dominated the mixture in all four replicates.From the 100 clusters, we selected a subset of 16 clusters containing the species Fagus sylvatica, Tilia sp. and Fraxinus excelsior in monoculture and in three-species clusters.The selected clusters were used to monitor soil water content in the subsoil, to increase the temporal resolution of soil water content measurements and to conduct throughfall measurements (Fig. 1b). Since the clusters of the two forest sub-areas were statistically not different with regard to soil properties and tree structural characteristics, they were pooled in the subsequent analysis.Soil and stand structural characteristics, such as soil bulk density (g cm −3 ), clay content (%), tree diameter at breast height (dbh in m), cluster ground area (m 2 ) and openness (%), were also not significantly different among diversity levels (Table 1). Meteorological data, soil water content and throughfall measurements Data on air temperature (C • ), gross precipitation (Pg, mm), global radiation (MJ m −2 day −1 ) and wind speed (m s −1 ) were recorded hourly at the meteorological station Weberstedt (Meteomedia, Germany), 2-3 km northwest of our study area at an altitude of 270 m a.s.l.On all 100 clusters we conducted measurements of soil volumetric water content (θ in m 3 m −3 ) at four points with a time domain reflectometer (TDR) probe (CS616, Campbell Scientific) at a depth of 0-0.3 m.Water content was assessed monthly throughout the vegetation period in 2009 (30 April to 31 October) and on four occasions during a dry spell in summer (30 July, 10 and 24 August, 1 September). 
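As an aside on the combinatorial cluster design described above, the cluster count can be reproduced with a few lines of Python (species names as in the text; the code is only a counting sketch and not part of the original study):

```python
from itertools import combinations

species = ["Acer", "Carpinus", "Fagus", "Fraxinus", "Tilia"]

one_sp = list(combinations(species, 1))     # 5 single-species combinations
two_sp = list(combinations(species, 2))     # 10 two-species combinations
three_sp = list(combinations(species, 3))   # 10 three-species combinations

replicates = 4   # each combination replicated twice in each of the two sub-areas
n_clusters = replicates * (len(one_sp) + len(two_sp) + len(three_sp))
print(len(one_sp), len(two_sp), len(three_sp), n_clusters)   # 5 10 10 100
```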
The 16 intensive clusters were equipped with PVC access tubes, enabling measurement of θ with a portable frequency domain reflectometry (FDR) sensor (Diviner 2000, Sentek Pty Ltd., Stepney, Australia) in addition to the TDR measurements. Access tubes were installed to a maximum depth of 0.7 m, in which sensor readings were taken at depth intervals of 0.1 m. Volumetric soil water content was measured weekly throughout the vegetation period. The FDR sensor had already been soil- and depth-specifically calibrated for the local soil conditions in the field (Krämer and Hölscher, 2010). By correlating 72 FDR readings at different soil water contents with corresponding TDR readings in the direct vicinity of the FDR, we established a site-specific calibration for the TDR probes.
Throughfall was monitored weekly throughout the whole vegetation period on the 16 clusters with rainfall collectors consisting of a plastic bottle screwed to a funnel attached to a metal rod at a height of 1 m. To reduce evaporation from the rain gauge, a table tennis ball was placed in the funnel. The instrumental set-up within a tree cluster is shown in Fig. 2.
Soil water budgeting
Daily water uptake, Wu (mm day−1), between two consecutive measurements of soil water content was calculated by Eq. (1):
Wu = (Tf + Sf − ∆S) / ∆t, (1)
where Tf is throughfall (mm), Sf stemflow (mm), ∆S the change in soil water storage between two successive measurements (mm) and ∆t the elapsed time between the two successive measurements (days). ∆S (mm) was calculated for each cluster from θ (m3 m−3), measured by TDR sensors, multiplied by the depth of the soil layer in which θ was measured and converted to mm. Tf was either measured directly (16 cluster subset) or calculated from an established relationship with average cluster dbh (Tf = 81.7 − 0.2 dbh) for the remaining clusters. Sf for each rainfall event during our study period was estimated from findings of an earlier study in the same area using 50 stemflow collectors on all five tree species during two successive years (Krämer and Hölscher, 2009). The magnitude of Sf in the Hainich forest in general is usually relatively low (∼ 0.4 to 6.3 % of Pg), varying more between seasons than between plots of differing tree species diversity/Fagus admixture. It was highest on Fagus trees of large dbh and during high rainfall events, but even then stemflow was lower compared to other Fagus-dominated forests (Krämer and Hölscher, 2009). We quantified intensity and duration of single rainfall events from hourly data on gross precipitation automatically recorded at the nearby weather station. We then calculated Sf for given rainfall intensities for each of our cluster trees, dependent on tree species and dbh, based on raw data from the study of Krämer and Hölscher (2009). For Fagus and Carpinus, Sf was calculated as 1 % of gross precipitation for trees with dbh > 10 and < 30 cm; for trees > 30 cm, Sf was 3 % at rainfall intensities > 2.0 and < 6.0 mm h−1. For Acer, Fraxinus and Tilia, 0.5 % of Pg was added to the water budget for trees with dbh > 30 cm, at rainfall intensities > 4 mm h−1. All incoming water (Tf and Sf) is regarded to infiltrate the soil. Hence evaporation from understorey vegetation and litter layer enters the budget as root water uptake. 
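A minimal numerical sketch of this budgeting (assuming Eq. (1) in the reconstructed form above, and interpreting the dbh relationship as giving throughfall in percent of gross precipitation, consistent with the 62–80 % range reported below) could look as follows; the numbers in the example call are invented for illustration and are not measured values:

```python
def daily_water_uptake(Tf_mm, Sf_mm, theta_start, theta_end, layer_depth_m, days):
    """Soil water budget of Eq. (1): Wu = (Tf + Sf - dS) / dt, water terms in mm."""
    dS = (theta_end - theta_start) * layer_depth_m * 1000.0   # storage change [mm]
    return (Tf_mm + Sf_mm - dS) / days

def throughfall_from_dbh(Pg_mm, mean_dbh_cm):
    """Throughfall from the dbh regression, read here as a percentage of Pg."""
    return (81.7 - 0.2 * mean_dbh_cm) / 100.0 * Pg_mm

# Invented example: a 32-day dry spell, theta falling from 0.40 to 0.34 m3 m-3 in 0-0.3 m
wu = daily_water_uptake(Tf_mm=32.0, Sf_mm=0.3,
                        theta_start=0.40, theta_end=0.34,
                        layer_depth_m=0.3, days=32)
print(round(wu, 2), "mm per day")   # about 1.6 mm per day
```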
Water use of all 100 clusters was only calculated during the dry spell for the soil layer in which the TDR was inserted (0.3 m in depth; hereafter referred to as Wu 30d ).With regard to the 16 cluster subsets on which water content was additionally measured down to 0.7 m by FDR sensors, Wu was determined for all 0.1 m wide subsections of soil according to Eq. ( 1) and then summed to yield Wu 70 .Average water use measured on the 16 clusters during the dry spell is referred to as Wu 70d .Wu 70 was determined on several occasions during the vegetation period only where trees were fully in leaf.Drainage or surface runoff could be neglected here (Bittner et al., 2010 and personal communication with the author, 2010).Also the soil parameters (high residual water content and low saturated hydraulic conductivity in the subsoil) lead to very slow water movement rates (Bittner et al., 2010) from which we gain further confidence in our no-drainage assumption. Statistical analysis All statistical analyses were done with R version 3.0.0(R Core Team, 2012).We fitted linear mixed effect models (LME, lme4 package) using maximum likelihood estimation (MLE) to determine the influence of the 25 possible species combinations, the 3 diversity levels or absence/presence of the 5 species (set as fixed effects respectively).For the analysis of Wu 30d , the sub-areas (Tiemsburg/Lindig) served as a random effect.We included the covariates cluster area and cluster dbh and, in the case of the three-diversity-level model, also their interactions, since they are likely to influence cluster water use. Another LME was used to judge the influence of the four possible species combinations (Fagus, Tilia, Fraxinus and their mixture), dbh and area on Wu 70 with the date of measurement (n = 11) as a random effect.To ensure homoscedasticity, Wu 70 was log-transformed here.Wu 70 was further modelled as a smoothing function of radiation and the factor species combination with generalized additive models (GAMs, mgcv package) employing thin plate regression splines.The model was supplied with weights (Wu −1 70 ) to ensure homoscedasticity.Model comparison and the assessment of the significance of the smoothers and the factor species combination within a model were done with F tests. Residuals of all models were visually checked for homoscedasticity and normality by box plots, residuals against fitted values plots and Q-Q plots.Non-significant effects (p > 0.05) in LMEs were discarded from the full model stepwise by comparing models with the same random effect structure fitted with MLE.To this end, likelihood ratio tests were used, since both t statistic provided by "anova (LME)" and F statistic provided by "summary (LME)" are only approximate (Zuur et al., 2009).Differences between species combinations or diversity levels, whenever significant in the mixed model, were further investigated using Tukey's HSD post hoc tests (glht, multcomp package). The relationship between Tf and dbh was determined using a linear regression model.Differences in soil properties and structural characteristics among the three diversity levels were assessed with ANOVA or Kruskal-Wallis tests.We further used Spearman's rank correlation analysis to relate selected stand structural variables and Wu 30d on all 100 clusters.All statistical tests were considered significant where p ≤ 0.01 and marginally significant where p ≤ 0.05. 
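The analyses above were carried out in R with lme4 and mgcv. Purely as an illustration of the basic mixed-model structure, a rough Python analogue is sketched below; the file name and column names are hypothetical, and this is not the analysis code of the study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table: one row per cluster with water uptake and covariates.
df = pd.read_csv("cluster_water_use.csv")   # columns: wu30d, diversity, dbh, area, subarea

# Fixed effects: diversity level, dbh and cluster area; random effect: forest sub-area.
model = smf.mixedlm("wu30d ~ C(diversity) + dbh + area", data=df, groups=df["subarea"])
fit = model.fit(reml=False)   # maximum likelihood, so nested models can be compared
print(fit.summary())
```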
Meteorological conditions Rainfall in 2009 totalled 773 mm, which was higher than the long-term average rainfall measured at four stations around the park (544-662 mm yr −1 ).This is mainly attributed to two heavy storms in July.The rather wet July was followed by an August of below-average rainfall, the month on which we mainly focused our study.Here, the average maximum and minimum air temperatures were about 25 • C and 12 • C, respectively.The global radiation average was 17.5 MJ m −2 day −1 .During the dry spell, Pg was about 9 mm per week. Soil water content Throughout May and June 2009, volumetric soil water content averaged over the 16 intensively studied clusters was continuously high at around 0.40 m 3 m −3 (Fig. 3).Two storms at the end of July were not found to notably increase soil water content.Thus, we assume that drainage or overland flow could have possibly occurred here, and therefore we did not include these occasions in the calculations of Wu 70 .In a following period of low rainfall, soil water content decreased continuously from the end of July through to the beginning of September.For this dry spell, soil water budgeting was conducted on both the 16 intensive and the 100 cluster groups (see Fig. 1) yielding Wu 70d and Wu 30d . Soil water budget -16 clusters Average Wu 70 (n = 16 clusters) calculated for all occasions within the vegetation period (trees fully in leaf) ranged overall between 0.8 and 4.0 mm day −1 (Fagus 1.2-4.0;Tilia 0.9-3.8;Fraxinus 1.3-3.9 and mix 0.8-3.1 mm day −1 ) and was closely related to average daily global radiation (Rg) during these occasions.For illustration simple linear regression models are given for all species combinations (Fig. 4).Modelling Wu 70 as a smoothing function of Rg and species combination (levels: Fagus, Tilia, Fraxinus and mix) with a GAM revealed a highly significant smoothing term (F = 16.21,p < 0.001) but no effect of the factor species combination (F = 2.38, p = 0.07).A model with a species-specific smoothing term was not significantly different from a model with one smoothing term for all species. Comparing a full MLE (explanatory variables: species combination, dbh and area and all two-way interactions) with MLEs with selectively dropped two-way interactions indicated no significant two-way interactions.Dropping species, dbh or area selectively from a new full MLE fitted without interaction terms indicated no effect of either one of these variables on Wu 70 .However, having a full model with species as only an explanatory variable and comparing it to a model including only random effects revealed that species had slight effects on Wu 70 (L ratio = 9.07, p = 0.03).Tukey's HSD post hoc tests showed that monospecific Fraxinus clusters had higher water use (average of 11 measurement occasions = 0.35 mm day −1 ) compared to the mixed species clusters (t = −2.67,p = 0.04).Neither Fraxinus nor mixed clusters were any different from the other species combinations. The measured throughfall component of Wu 70 was not related to the species composition or diversity throughout the measurement occasions in the year 2009.The same result was found during the dry spell (30 July-1 September 2009), where average Tf was low (32 ± 9.5 mm) ranging between 62 and 80 % of Pg.Tf during the dry spell however declined with increasing average dbh of each cluster (Fig. 
5).Therefore we used this relationship to calculate Tf for all 100 clusters here.Estimated Sf input on the clusters was 0.3±0.25 mm during the whole desiccation period and played therefore only a marginal role. Soil water budget -100 clusters The 25 possible species combinations, cluster dbh and cluster area had no influence on Wu 30d .An LME with species combination as the only explanatory variable was not different from a model with only random effects (L ratio = 3.96, p = 0.14; Fig. 6).Likewise, testing for presence or absence of the 5 species resulted in no effect on daily water uptake.A similar picture was found when Wu 30d of the 100 tree clusters was grouped according to diversity levels (Fig. 7): LMEs showed no significant main or interaction effect of the explanatory variables on Wu 30d .As there were three subsequent measurement intervals throughout the dry spell (Fig. 3), we also calculated water uptake for each interval.Mean water uptake across all clusters was 2.2 ± 0.7 for 30 July-6 August, 1.9 ± 0.2 for 6-24 August, and 1.6 ± 0.7 mm day −1 for 24 August-1 September.Again, no significant differences between tree species combinations or diversity levels were found employing an LME with date of measurement as a random effect. Further correlation tests between Wu 30d and selected stand structural variables from all 100 clusters showed only a slight correlation between bulk density and water use (Table 2).Still, certain stand characteristics correlated with cluster area.Average dbh as well as canopy openness increased with increasing ground area of the clusters (r = 0.39, p < 0.01 and r = 0.25, p = 0.01).Table 2. Relationship between Wu 30d during soil desiccation period from 30 July to 1 September 2009 and selected stand structural variables on the clusters.All 100 clusters were included in the analysis (Spearman's rank correlation). 
The approach The temporal frequency of measurements in hydrological studies is often very high considering that data can be logged automatically at almost any desired rate.At the same time, it is barely possible to establish a similar level of measurement replication on a broader spatial scale due to restrictive costs for instrumentation or logistical issues.As a result, the number of spatial replicates is often disproportionate to the frequency of sampling, and it is questionable whether such data can be spatially representative.With our 100-cluster approach and 400 measurement points overall, we tried to compensate for the lack of spatial resolution at the cost of a finer temporal resolution.However, a subset of 16 intensive clusters, for which data were gathered more frequently, served to support the 100-cluster approach.During cluster selection, care was taken to ensure clusters were as homogenous as possible in terms of ground area, soil physical properties, tree height, dbh, and terrain inclination.As such, it was not a randomized selection.Moreover, there is still uncertainty around how one can account for stemflow values in water budget calculations, as there is no understanding on how stemflow water distributes through the soil.In our approach, measurement devices were arranged along the median line between each tree pair and in the cluster centre, which made it possible for stemflow water not to be measured where the distance from the device to the next respective stem was too far.However, as we concentrated our measurements on a period of soil water desiccation with low rainfall, the water budget was only very marginally affected by stemflow anyway. An analysis of the relative fine root contribution at 0-0.2 m also showed that below-ground cluster space was not exclusively occupied by roots of tree species forming the respective cluster but also by neighbouring trees outside the cluster (Jacob et al., 2013, supplemental data).However, across all clusters the target tree species contributed 84.2±10.2% to the standing fine root biomass.Single-species clusters of Carpinus and three-species clusters including Fagus and Carpinus appeared to be more affected by root space occupation of non-cluster trees compared to other species.In addition, the fine root biomass on two-and three-species clusters was not always homogenously distributed among the cluster forming tree species.As such, the identification of possible species identity effects on soil water uptake was further complicated.We nonetheless assume that our high number of spatial replicates, which is quite unusual in ecohydrological studies, represents a special advantage of this design over others and that it may be very helpful in unravelling possible effects of species composition and diversity.Additionally, the strong relationship between cluster water use and global radiation gave us confidence in the data. 
Throughfall and stemflow Throughfall as the main input of water to the system under consideration was not related to species identity in the 16 clusters, nor did the mixed clusters differ from the monocultures.In addition, stand structural parameters only explained Tf during some measurement occasions (e.g.average cluster dbh explained Tf during the dry spell).This finding may have several reasons: first of all, we set up our experiment to test for effects of differing diversity levels or species combinations.Thus, clusters were selected to minimize variations in ground area, tree size and tree age, etc., and a lack of correlation between tree or stand structural variables and Tf was expected.Secondly, Tf is not only driven by tree architecture (leaf inclination, nature of the bark, branch angle) but also by stand structural characteristics such as stand height, crown length, and canopy roughness (Krämer and Hölscher, 2009).Consequently, it is expected that these parameters influence rainfall partitioning at a much larger scale than on the rather small tree clusters.All our study clusters were embedded in a larger mixed forest stand, and possible differences between single-and mixed-species stands could only have been detected at a larger scale.However, respective large-scale monocultures of all tree species are not likely to be found in unmanaged mixed forests of advanced age.Thirdly, climatic conditions such as rainfall intensity and duration, wind and relative humidity which affect Tf (Crockford and Richardson, 2000) might additionally work unequally on diverging species.Therefore it depends very much on the nature of the respective rainfall event or the season under consideration if a diversity or species identity effect is detectable (Krämer and Hölscher, 2009).Fourthly, 3-D laser scans on the clusters showed that canopy space exploration, which is highly influential on throughfall, was not influenced by species diversity (Seidel et al., 2013).However, denser canopy crowns were found where Fagus was present, which might also partly explain why Krämer and Hölscher (2009) found decreasing Tf with increasing proportion of Fagus trees present for some of their measurement occasions.However, none of the relationships between Tf and tree diversity, proportions of tree species present or stand characteristics established by them at our research site were stable during different seasons or over years.Indeed, their measured Tf correlated with tree diversity only for half of the seasons for which data were gathered. Hence, we conclude that a clear relationship between Tf and tree diversity and Tf and species identity or other parameters could not be found at our site.This implies that the relationship found for dbh and Tf during the dry spell should only be taken as an aid to transfer Tf measured in a certain period from the 16 clusters to the 100-cluster approach and not as a general rule for the given stand.The second input to our system, stemflow, is of small magnitude compared to the water input to the soil via throughfall and, as our focus was on the dry spell during which precipitation was generally low, Wu 30d was only marginally influenced by Sf .In summary, the water inputs to the soil were not driven by tree diversity or species identity in our study. 
Soil water uptake Measured Wu 70d (dry spell) on the 16 clusters ranged from about 2.6 to 3.5 mm day −1 and was higher compared to values obtained for the plots with differing diversity levels at our research site based on sap flux estimates for the years 2005 and 2006 (1.1 to 2.5 mm day −1 ; Gebauer et al., 2012).However, in contrast to our method, sap flux studies do not account for understorey transpiration, evaporation from the topsoil and transpiration of trees with dbh below 10 cm (Gebauer et al., 2012).In addition, a species-specific calibration for Fraxinus (Herbst et al., 2007) was not applied by Gebauer et al. (2012) which leads to an underestimation of water use by Fraxinus trees and thus to an overall lower water use of plots with strong Fraxinus presence. Calculated amounts of daily soil water uptake for the whole period from 30 July to 1 September agree well with model calculations for the adjacent plots of differing diversity levels in Hainich forest (Bittner et al., 2010).We also found positive relationships between the calculated volume of daily water uptake of the 16 clusters throughout the season and the average daily global radiation during the respective measurement intervals (Fig. 4), giving us further confidence in the applied water uptake calculation. Our data did not indicate an influence of species diversity (Fig. 7), nor of species composition (Fig. 6) on Wu 30d of the 100 clusters during the dry spell.Further, cluster dbh and area or the presence of any certain species had no effect on water use.Recognizing that the input of water (Tf, Sf ) was alike for all diversity levels, water uptake by roots per unit soil volume must also have been similar.However, this result is in contrast to findings obtained in monocultures and two-, three-and five-species mixtures in a Panamanian tree plantation (Kunert et al., 2012) and in advanced forest plots of two species and their mixture (Schume et al., 2004).We also tested for possible effects of the wider neighbourhood on calculated water uptake on the clusters.Thus, Shannon biodiversity index was determined for a 20 m radius surrounding the centre point of each cluster (Seidel et al., 2013) and correlated with water uptake.As no significant relationship was found (data not shown), we are confident that the ascertained findings remain similar even at a wider spatial resolution. However, in Fig. 6 it can be seen that water use of Fraxinus monocultures during the dry spell was at the upper end of the range of water use rates measured.Also the analysis of the 16 clusters that were monitored intensively in time showed that the water use of Fraxinus was about 0.35 mm day −1 higher compared to the mixture (marginally significant), but not significantly different from Fagus and Tilia clusters.But since the degrees of freedom used in the calculation of Tukey's tests can only be approximated for LMEs (see also Bates, 2006) and given the fact that the differences found are only marginally below our significance level of p = 0.05, this statement should be interpreted with care.Since we did not find any other indication for a diverging water uptake of mixtures compared to monocultures, we suppose that the difference between Fraxinus and mixed clusters is based on Fraxinus properties rather than on specific properties of the mixture.Indeed, Fraxinus differs in many characteristics from other tree species.Herbst et al. 
(2007) mention a considerably higher magnitude of sap flux densities of Fraxinus, compared to diffuse-porous species with calibrated sap flux sensors.Also a higher transpiration per unit leaf area of Fraxinus was reported for our area (Hölscher et al., 2005).But the higher water use could also result from the water use of the undergrowth in Fraxinus clusters. It is somewhat remarkable that the water use of the monospecific plots only differed marginally from one another (slightly higher water use of Fraxinus clusters), since many authors found strongly differing hydraulic parameters and sap flux densities for the trees grown at our site (Hölscher et al., 2005;Gebauer et al., 2008;Köcher et al., 2009).Moreover, trees in our 16 clusters were shown to take up water from different soil depths when tree species were mixed and varied in dbh (Meißner et al., 2012), despite the lack of vertical fine root stratification among the species under consideration from the Hainich forest (Meinen et al., 2009).These findings lead to the assumption that if a species-dependent water use of trees as supported by physiological measurements exists, the spatial arrangement of different species might override such an effect (in particular below ground) and yield similar water uptake per unit soil volume among the monospecific plots and the diversity levels of the clusters.This balancing effect could not be found in the Panamanian plantation (Kunert et al., 2012), since this plantation was newly established (7 yr old) and arranged in regular planting schemes. The same would be valid if, in contrast to species identity, simple size effects of trees governed their water use ("functional convergence"), which means that large trees should use more water than smaller ones, irrespective of species identity (Meinzer et al., 2005).This implies that large trees, having a higher water use per individual, must occupy more ground area compared to smaller ones if the water uptake per unit soil volume is not affected by the size of cluster trees.In our clusters we found a positive correlation between average cluster dbh and cluster area (R 2 ad j = 0.3; p ≤ 0.01), which could indicate the latter.Likewise, there was a positive correlation between canopy openness and cluster area, which could additionally lead to higher amounts of throughfall input on large clusters.Krämer and Hölscher (2009) found a relationship between canopy gap fraction and Tf (r = 0.74) in one out of three seasons for which gap fraction was determined.In our short measurement period, however, this could not be found. 
Furthermore, it was stated that the understorey in forests can effectively buffer differences in tree canopy transpiration (Roberts, 1983).Since both cover and species richness of the herb layer increased with tree diversity in our clusters (Vockenhuber et al., 2011), it is likely that some sort of feedback between herb and tree layer exists.Still there is much uncertainty in the estimation of the contribution of understorey (evapo-)transpiration to the overall cluster water use because the density of herb layer cover varies during the vegetation period and, under prolonged desiccation, herb layer cover is diminished, because most herbaceous plants are droughtsensitive.Moreover, the thickness of the litter layer was negatively related to tree species diversity/decreasing Fagus abundance in our area (Mölder et al., 2008).A thick litter layer would intercept much of the throughfall but prevent water from evaporating from the soil and suppress competition for water by the undergrowth.A closed herb layer on the other hand would intercept rainfall as well, but it would also transpire water taken up from the soil. Nevertheless, the effects of (evapo-)transpiration differences between different trees of a cluster and among trees and the cluster understorey might cancel each other out.In basic terms, in mature forests with less human interference, trees with differing demands for resources as well as the herb layer of the understorey might "arrange" according to resource availability.Stand transpiration may therefore be more extensively controlled by other stand structural variables; it is not by stand species composition or species diversity in our case.This is in line with conclusions drawn by Roberts (1983), who states that forest transpiration is a rather "conservative process" with little variation of transpiration among (differently composed) stands. In addition, one might also argue that besides a mere tree diversity effect, interactions between tree diversity and certain environmental conditions (e.g.rainfall intensity and duration, evaporative demand, soil water availability, etc.) are crucial.That would explain why relationships between species composition/diversity and throughfall seem to be dependent on prevalent rainfall and weather conditions (Krämer and Hölscher, 2009), and canopy transpiration only differed among diverse and less diverse stands in certain years (Gebauer et al., 2012).This is further supported by the fact that diversity effects on soil water extraction only occurred in certain periods (Krämer and Hölscher, 2010).These findings indicate that it is not only that there is no "magic effect" of biodiversity per se (Hector et al., 2000) (the characteristics of underlying species determine whether tree diversity matters or not), but that it also seems that an ecosystem needs to be subject to specific environmental conditions under which tree diversity can accomplish importance. Furthermore, more than one characteristic or trait of a species can influence a single ecosystem process (such as water use).These traits may additionally be linked or may counteract each other: the variability of drought sensitivity (high to low: F. sylvatica > A. pseudoplatanus > T. cordata > C. betulus > F. excelsior) and water consumption (high to low: F. sylvatica > A. pseudoplatanus > C. betulus > T. cordata > F. 
excelsior) among tree species in Hainich (Hölscher et al., 2005;Köcher et al., 2009) reveals an almost similar behaviour of species in both parameters.However, it still depends very much on the severity and duration of a given drought event if a certain species uses much water, because it is a big water consumer or because it is very drought-tolerant.In addition, the volume of soil water extraction of a stand is strongly dependent on the percentage mixture of drought-tolerant and high water-using trees, because both act on stand transpiration in differing ways under certain soil water availability.These complex relationships between traits within one species, the combined traits of a mixture as well as between traits and environmental conditions were discussed in a simplified modelling exercise of water use in artificial stands of Fagus, Tilia and Fraxinus (Bittner et al., 2010): Fraxinus was parameterized to have half of the transpiration of Fagus under wet soil conditions (based on findings with uncalibrated sap flux sensors; Gebauer et al., 2012).However, Fraxinus was also set up to maintain high transpiration at much drier soil conditions compared to Fagus.It was observed that, at times of high potential transpiration rates accompanied by soil water depletion, modelled Fraxinus monocultures maintained higher water uptake rates compared to times with low evaporative demand and sufficient soil water supply.Modelled Fagus monocultures showed the opposite behaviour: transpiration in wet years was higher compared to dry years, despite the lower evaporative demand during these times, since it was more sensi-tive to declining soil water availability.Thus the differences in soil water uptake between modelled Fagus and Fraxinus monocultures were lower in the dry years than in the wet years.The authors conclude further that, depending on the mixture and the climatic conditions, drought-tolerant species may even exert damage to drought-sensitive species depending on the severity of the drought.We have confidence that no pronounced water stress occurred during the dry spell in 2009 since there was no drop in water uptake during periods of high evaporative demand (Fig. 4), and water uptake from the topsoil layer continued throughout the whole dry spell.Therefore we believe that not "drought tolerance" but "maximum water use rate under wet soil conditions" of the trees was the trait influencing measured soil water uptake by trees here.It remains questionable whether we could have detected an influence of tree diversity on water uptake under more severe drought since Krämer and Hölscher (2010) found that differences in soil water extraction rates of diverse and Fagus-dominated stands in our area disappeared as soil drought advanced. 
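The trait contrast discussed here can be made concrete with a toy calculation. In the sketch below, the piecewise-linear reduction functions and all threshold values are illustrative assumptions and not the parameterization of Bittner et al. (2010): a "Fagus-like" species with high maximum water use but early down-regulation is compared with a "Fraxinus-like" species with half the maximum use that keeps transpiring in drier soil, so that their uptake rates converge, and eventually cross, as the soil dries.

```python
def transpiration(t_max, theta, theta_wilt, theta_crit):
    """Potential rate t_max, reduced linearly between theta_crit and theta_wilt."""
    if theta >= theta_crit:
        return t_max
    if theta <= theta_wilt:
        return 0.0
    return t_max * (theta - theta_wilt) / (theta_crit - theta_wilt)

# Illustrative contrast: high-consuming but drought-sensitive vs. lower-consuming but tolerant
for theta in (0.40, 0.30, 0.22):
    fagus_like = transpiration(3.0, theta, theta_wilt=0.20, theta_crit=0.32)
    frax_like = transpiration(1.5, theta, theta_wilt=0.12, theta_crit=0.20)
    print(f"theta={theta:.2f}: Fagus-like {fagus_like:.2f}, Fraxinus-like {frax_like:.2f} mm/day")
```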
In summary, we did not find differences in water uptake among single species clusters besides a marginally higher water use of Fraxinus clusters or among tree clusters of differing diversity levels throughout the vegetation period of 2009.We discuss that water use may indeed be a conservative process, that differences in tree-species-specific traits do not necessarily translate into neighbourhood or stand level and that they can be compensated for by one another or by stand parameters such as herb layer and tree spatial arrangement.Furthermore, species identity or diversity effects on stand water use may only arise under certain environmental conditions.Thus, considering effects of tree diversity on stand water use exclusively may not be an appropriate simplification of the complex network of interactions between species traits, stand properties and environmental conditions that have varying influence on stand water use, both in space and time. Figure 1 . Figure 1.Location of the 100 tree clusters in the two forest areas.The grey dots and black rectangles indicate cluster positions.The 16 black rectangles represent intensively measured clusters (figure based on Seidel, 2011). Figure 2 . Figure 2. Schematic study plot design (tree cluster) with locations of FDR, TDR sensors and throughfall samplers. Figure 3 . Figure 3. Average volumetric soil water content (FDR sensor) at 0.1 m soil depth during the study period in 2009.Values are means ± sd (n = 16 clusters).Dotted lines indicate the occasions where θ was measured on all 100 clusters with TDRs; the shaded area represents the dry spell (three subsequent measurement intervals) for which Wu 30d and Wu 70d were determined. Figure 4 . Figure 4. Average Wu 70 as a function of daily global radiation (Rg) for 4 different species combinations (n = 4 per species combination).Shown are data of 11 measurement occasions from June to mid-September 2009 when trees were fully foliated and linear regression models between average Wu 70 and radiation. Figure 6 . Figure 6.Wu 30d for all possible species combinations of Fagus, Tilia, Fraxinus, Acer and Carpinus during the soil desiccation period from 30 July to 1 September 2009.Values are means ± sd (n = 4); same letters specify no significant difference between species (LME and Tukey's HSD).
2018-12-07T19:13:17.592Z
2013-07-01T00:00:00.000
{ "year": 2013, "sha1": "9f0a6ef48e167061b366708e4574f74381c0fa54", "oa_license": "CCBY", "oa_url": "https://we.copernicus.org/articles/13/31/2013/we-13-31-2013.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "9f0a6ef48e167061b366708e4574f74381c0fa54", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
248834199
pes2o/s2orc
v3-fos-license
Chemical Case Studies in KeYmaera X Safety-critical chemical processes are the backbone of multi-billion-dollar industries, thus society deserves the strongest possible guarantees that they are safe. To that end, models of chemical processes are well-studied in the formal methods literature, including hybrid systems models which combine discrete and continuous dynamics. This paper is the first to use the KeYmaera X theorem-prover to verify chemical models with differential dynamic logic. Our case studies are novel in combining the following: we provide strong general-case correctness theorems, use particularly rich hybrid dynamics, and have particularly rigorous proofs. This novel combination is made possible by KeYmaera X. Simultaneously, we tell a general story about KeYmaera X: recent advances in automated reasoning about safety and liveness for differential equations have enabled elegant proofs about reaction dynamics. Introduction Modern industry relies critically on all kinds of chemical processes: some occur in computer-controlled reactors, some occur free of control. Chemical engineering has provided many classical insights about both: safe and optimal control [12] of reactors [32] is a field in its own right, as are reaction kinematics (dynamics) even in the absence of control [30]. Because both controlled and uncontrolled reactions are crucial, we consider both: an irreversible exothermic reaction with a model-predictive bang-bang controller (Sec. 3.1) and an uncontrolled reversible reaction (Sec. 3.2). Both have verification challenges which make for good benchmark problems. The nonreversible reaction's nuanced dynamics entail nontrivial correctness arguments for model-predictive controllers. The reversible reaction's long-term asymptotic behavior, though classic, tests the ability of current-generation tools to verify asymptotic properties, e.g., stability [22] or persistence [31]. Safe reactions are crucial to human safety. Properties like persistence, stability, and optimality are crucial to human productivity. Thus, formal methods for chemical reactions are extensively studied [3,28,20,14,24]. To our knowledge, however, the reaction models and proofs presented here are the first-ever in a hybrid systems theorem prover. Specifically, we use the KeYmaera X [11] prover for differential dynamic logic (dL) [26] to achieve a unique arXiv:2205.08270v1 [cs.LO] 17 May 2022 combination of expressive dynamics, general-case guarantees, and rigor for the first time. The tradeoffs between theorem-proving and other formal methods are well-known; see Sec. 4 for detailed discussion. Our contribution was enabled by new stability [33], variant [33], and Darboux polynomial [27] proof tools in KeYmaera X, simplifying our proof arguments. Our case studies make essential use of these features and thus demonstrate the impacts of the latest advances in proof automation. Background All our proofs are computer-checked in the KeYmaera X prover, which carefully prevents the use of unsound reasoning [5]. This rigor is crucial in practice: many techniques used here had predecessors [33, Table 1] which were found to be unsound, which is unacceptable for safety-critical systems. In KeYmaera X, correctness properties are stated and proved in differential dynamic logic (dL) [26], where hybrid systems are written in hybrid program notation. We discuss dL, then KeYmaera X usage. Differential Dynamic Logic We provide a primer on dL syntax and semantics (meaning); see the literature [26] for details. 
Semantics are state-based: a state ω maps every variable x to a real-number value ω(x) : R. The syntax consists of terms (with a numeric meaning in each state), hybrid programs (which can nondeterministically change the state when run), and formulas (which are true or false in each state). Hybrid programs and formulas may both contain each other. We use standard notation to define syntax, e.g., B ::= C | D means every B is either a C or a D. Definition 1 (Terms). Terms e, ẽ of dL are defined by: e, ẽ ::= q | x | e + ẽ | e · ẽ where q ∈ Q. Rational-valued literal numbers are written q. Real-valued variables are written x. Sum e + ẽ is the sum of terms e and ẽ. Product e · ẽ is the product of e and ẽ. In every state, the meaning of every term is some real number. Definition 2 (Hybrid Programs). Hybrid programs α, β are defined by: α, β ::= x := e | ?Q | {x' = e & Q} | α ∪ β | α; β | α*. Assignment x := e instantly sets variable x to the current value of term e. Test ?Q checks that formula Q holds and discards the run otherwise. Differential equation {x' = e & Q} evolves x continuously along the ODE x' = e, where e is a term. The duration of evolution is nondeterministic. If an evolution domain constraint Q is provided, Q is tested continuously, and evolution must stop before Q ever becomes false. Choices α ∪ β nondeterministically run either α or β, as opposed to running both. Composition α; β runs α, then β in the resulting state(s). Duration of loops α* is nondeterministically-chosen but finite: zero, one, or many repetitions can occur. If desired, standard conditional and looping constructs are derivable (where P is a formula, ¬P is its negation, and α is a hybrid program): if(P){α}else{β} ≡ (?P; α) ∪ (?¬P; β) and while(P){α} ≡ (?P; α)*; ?¬P. Definition 3 (Formulas). There are many formulas P, Q in dL. We only use: P, Q ::= e ≥ ẽ | ¬P | P ∧ Q | P → Q | [α]P | ⟨α⟩P. Formulas represent true/false questions about the state ω. Comparison e ≥ ẽ is true whenever the value of e is at least that of ẽ in a given state. All other comparisons e > ẽ, e = ẽ, e ≠ ẽ, e ≤ ẽ, e < ẽ are definable using e ≥ ẽ and other logical connectives, so we use them freely. Negation ¬P is true when P is false. Conjunction P ∧ Q is true when both P and Q are. Implication P → Q is true when P's truth would imply Q's truth. The defining formulas of dL, [α]P and ⟨α⟩P, are respectively true in state ω if every or some run of α starting from state ω ends in a state where P is true. When α is an ODE, all runs equate to all time, e.g., these readings apply: -P → [α]Q assumes P at first, then proves Q forever -P → ⟨α⟩Q assumes P at first, then proves Q eventually -P → ⟨α⟩[α]Q assumes P at first, then proves Q eventually becomes true, then stays true forever. KeYmaera X proves truth in every state, called validity. Definition 4 (Validity). A dL formula is valid if it is true in every state. We use standard notation for axioms and proof rules. Definition 5 (Proof Rules). Each rule has a horizontal line and means: if all premise formulas above the line are valid, so is the conclusion formula below the line. Rules can use schema variables such as P or α when the rule applies to all programs or formulas, respectively. For example, the loop rule means for all P, Q, J, α that if premises P → J, J → [α]J, and J → Q are all valid, so is P → [α*]Q. Formula J is proved true for all iterations, thus we call J the loop invariant. This proven loop invariant should not be confused with use of the word invariant in hybrid automata to mean an assumed constraint on ODE evolution. We call such constraints evolution domain constraints. KeYmaera X We briefly discuss the user experience of KeYmaera X [23]. The user interface is displayed in Fig. 1. KeYmaera X is an interactive, tactic-based prover. This means that the user interactively tells the prover which proof technique to use, but each technique is implemented as a tactic [10], i.e., a program.
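To make the state-based reading of Definitions 1-4 concrete, the following small Python sketch (an illustration added here, not part of the original development) interprets the discrete fragment of the language: states are dictionaries from variable names to numbers, programs nondeterministically transform states, and [α]P checks the postcondition in every reachable final state. ODEs, loops, and the diamond modality are deliberately omitted, since handling those soundly is precisely what dL and KeYmaera X provide.

    # Hypothetical illustration (not from the paper): a toy interpreter for the
    # loop- and ODE-free fragment of dL.
    def evaluate(term, state):
        """Terms: numbers, variable names, ('+', e1, e2), ('*', e1, e2)."""
        if isinstance(term, (int, float)):
            return term
        if isinstance(term, str):
            return state[term]
        op, e1, e2 = term
        v1, v2 = evaluate(e1, state), evaluate(e2, state)
        return v1 + v2 if op == '+' else v1 * v2

    def runs(prog, state):
        """All final states reachable by running the program from 'state'."""
        kind = prog[0]
        if kind == 'assign':                     # x := e
            _, x, e = prog
            return [{**state, x: evaluate(e, state)}]
        if kind == 'test':                       # ?Q, with Q a Python predicate
            _, q = prog
            return [state] if q(state) else []
        if kind == 'choice':                     # either branch may run
            _, a, b = prog
            return runs(a, state) + runs(b, state)
        if kind == 'seq':                        # run b after each run of a
            _, a, b = prog
            return [t for s in runs(a, state) for t in runs(b, s)]
        raise ValueError(kind)

    def box(prog, post, state):
        """[prog]post: the postcondition holds after every run."""
        return all(post(s) for s in runs(prog, state))

    # A bang-bang-style choice of control mode never leaves {0, 1}:
    ctrl = ('choice', ('assign', 'isOn', 0), ('assign', 'isOn', 1))
    print(box(ctrl, lambda s: s['isOn'] in (0, 1), {'isOn': 0}))   # True

Where this toy evaluator must enumerate the finitely many runs of a finite discrete program, KeYmaera X establishes validity for the full language, including continuous dynamics, by deductive proof.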
A proof technique can be a simple, specific rule or a complex proof search procedure. For example, there is a default (or auto) proof procedure which attempts many proof techniques and can solve many simpler problems fully-automatically. In summary, the amount of user effort can vary greatly between proofs. Throughout this paper, we will discuss the level of interaction needed for each proof and discuss how new rules and automation helped keep the level of user effort manageable. The tactic-based approach also means that no matter how complex proof methods are, they are implemented using simple steps from the small trusted core of the prover, thus proofs stay rigorous. Results We contribute case studies on two classic kinds of chemical reactions. The first is an irreversible reaction in a well-mixed adiabatic batch reactor, which we chose because batch reactors [30, §2.10] are a foundational technology for chemical plants throughout industry. The second case study is a reversible reaction between two compounds, i.e., where the output can react again and form the input. We chose reversible reactions because they too are essential to industry. Notably, ammonia synthesis is a reversible reaction that provides the backbone for modern fertilizer-based, industrial-scale agriculture [16]. Both case studies emphasize recent advances in KeYmaera X proof automation, which contributed to highly general results. Remaining limits on generality are discussed in each subsection. Controlled Irreversible Reactions We formalize a classic scenario: an irreversible, exothermic reaction in an adiabatic, well-mixed batch reactor. Irreversible [30, §2.1] means the reaction is one-way: outputs do not react to create inputs. Adiabatic [30, §2.14] means heat does not leave or enter the reactor. Well-mixed [30, §2.12] means the reaction occurs evenly in space throughout the reactor. In this basic synthesis reaction, two (first-order) reactants react to form a third, plus heat: The case study contains four models, each with proof. The first shows conservation of energy, validating that adiabatic reactors are closed systems. The remaining three models add a model-predictive bang-bang controller [12], which predicts future behavior according to the model, then applies an all-or-nothing control action. It is proved that the control ensures a safety property: overheating is prevented. We use this standard control approach in order to focus on the continuous reaction dynamics. The driving difference between the last three models is their increasingly complex reaction dynamics, which mandate increasingly complex controls and proofs. In the second model, the reaction rate is constant. In the third model, the rate depends linearly on temperature, changing exponentially with respect to time. In the final model, the rate is proportional to the product of temperature and each concentration, with resulting dynamics beyond a simple exponential, yet still approximate. Approximate results are the best that can be expected. We discuss why, including verification challenges. Each model approximates textbook [30, Eq. 2.93] reaction dynamics, where the reaction rate is proportional to the product of concentrations of each reactant A and B multiplied by a coefficient. Recall that the concentration of a reactant in a mixture is the quantity of that reactant per unit quantity of the mixture. The rate equation is rate = kAB where k is an exponential given by the Arrhenius equation [30,Eq. 5.1]. 
That is, k(T) = k0·e^(−E/(R·T)) where T is temperature, R is the ideal gas constant, E is the reaction's activation energy and k0 a constant. Analysis of the reaction rate dynamics is nontrivial: rate is a product of three continuously-changing quantities, resulting in a non-linear ODE. Moreover, k(T) is exponential in T, resulting in a non-polynomial ODE. KeYmaera X handles non-linear ODEs well, but is restricted to polynomial ODEs, as is standard. We thus reach our first limitation: to ensure a polynomial ODE, we approximate the temperature dependence as linear. This assumption is reasonable because polynomial ODEs are a standard assumption, and our nonlinear dynamics are still richer than prior models [36,28,20,14,24]. Our second limitation is that the reactants are first-order, so their influence on rate is linear. We do so because such reactions are common and lead to elegant equations. KeYmaera X supports polynomials of any degree, so we expect the approach to work for higher-order reactions, so long as the order is fixed. Notwithstanding these limitations, the results are fully general in the sense that they are fully parametric, e.g., the results can be applied to any reactants in any amount by plugging in new coefficients and concentrations. Energy Conservation The basic dL model for energy conservation is presented in Fig. 2. Energy conservation is interesting in its own right, because it implies the system is closed. This helps support our claim that the model is adiabatic: heat energy does not leave nor enter. The variables A, B, and C stand for the current concentration of each reactant present in the reactor. Reactor temperature is written T. In our analysis, we decompose energy into kinetic (heat) and potential (chemical) energy: E ≡ KE + U. Potential energy U ≡ min(A/kA, B/kB)·kT is the product of the amount (concentration) of C remaining to be produced (the reaction ends when either A or B is exhausted) with the heat released per unit amount (C). That is, we model C as if it possesses no potential energy, since we are interested only in energies relevant to the current reaction. We model the reaction rate as Ts·A0·B0·kra + krb, which makes two intentional simplifications. First, we approximate the current concentrations A, B with the initial concentrations A0, B0. Secondly, we simplify the temperature factor to Ts, which is a constant even as temperature T changes, thus the influence of heat is static throughout the reaction. We determine the reaction rate as a product of the concentration factor and temperature factor. For generality, the coefficients kra, krb let the rate be any linear function of the product. Formula const simply specifies the signs of constants. The ode indicates that all concentrations A, B, C and the reactor temperature T all change proportional to the reaction rate; A and B are lost as C and heat are gained. Coefficients kA, kB, kC, kT indicate the rates at which each changes, which may depend respectively on the stoichiometric coefficients of the reaction or how strongly exothermic it is. Finally, the theorem statement (P → [α]Q) states that under the simple constant assumptions, energy is conserved because at all times the current energy E remains equal to its initial value E0. We now describe the proof of the theorem in KeYmaera X. Proof. The default proof procedure of KeYmaera X (Sec. 2.2) proves the theorem automatically with differential invariants [26,Lem. 11.3], demonstrating the capabilities of this standard dL rule.
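As an informal, numerical companion to this automated proof (the formal differential-invariant rule it relies on is presented next), the following Python sketch integrates the Fig. 2 dynamics with made-up parameter values and checks that E = KE + U stays constant; it also spells out the true Arrhenius dependence that the models deliberately avoid. The specific numbers, and the identification of KE with T up to a constant factor, are assumptions made only for this illustration.

    import numpy as np

    # True Arrhenius temperature factor (exponential, hence non-polynomial).
    # The dL models replace it by a constant (here) or a linear factor (later).
    def arrhenius(T, k0=1.0e3, E=5.0e4, R=8.314):
        return k0 * np.exp(-E / (R * T))

    # Hypothetical coefficients for the Fig. 2 energy-conservation model.
    kA, kB, kC, kT = 1.0, 1.0, 1.0, 2.0
    kra, krb, Ts = 0.3, 0.05, 1.0
    A0, B0, T0 = 2.0, 1.5, 300.0
    rate = Ts * A0 * B0 * kra + krb          # constant, as in the model

    def energy(A, B, T):
        # KE taken as T (heat, up to a constant factor); U is the heat still
        # stored chemically in the unreacted material.
        return T + min(A / kA, B / kB) * kT

    A, B, C, T = A0, B0, 0.0, T0
    dt, E0 = 1e-4, energy(A0, B0, T0)
    for _ in range(5000):
        A -= rate * kA * dt
        B -= rate * kB * dt
        C += rate * kC * dt
        T += rate * kT * dt
        assert abs(energy(A, B, T) - E0) < 1e-6   # conservation, numerically
    print("energy stays at", E0, "throughout the simulated run")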
We present (the relevant case of) differential invariant [26,Lem. 11.3], which shows e = ẽ is true throughout an ODE if it holds initially and differentials are equal throughout. We prove E0 = E thus: E0 is constant, so proving that the differential (E)' is 0 throughout suffices. Expanding the definition of E, the heat gained in (KE)' exactly cancels the potential energy lost in (U)', so (E)' = 0. Due to KeYmaera X's automation, the entire proof is automatic. On-Off Reactions This model keeps the basic heating dynamics but adds bang-bang control. Fig. 3 describes the model in full. Parts unchanged from Fig. 2 are grayed out to aid comparison. The impact of this theorem is that the reactor is provably safe under idealistic assumptions, i.e., when concentrations and temperatures change very little or have little impact on reaction rate. The greatest change is the addition of a time-triggered controller: the system now repeats in a loop, with the controller guaranteed to run at least every ε > 0 time units. The controller (ctrl) is model-predictive because it predicts whether it would be dangerous to keep the reaction running for ε time. If so, the reaction shuts off (isOn := 0), else it turns on (isOn := 1). Note isOn is an indicator variable; its only possible values are 0 and 1. Specifically, the controller linearly predicts the maximum temperature change as ε·rate·kT and shuts off if the safe temperature would be exceeded. Importantly, this approach predicts unsafe events before they occur and shuts off before the damage is done. Either way, the timer t is reset to 0. The ode is updated so that each reaction equation is multiplied by isOn, causing no physical changes to occur when the reactor is turned off. This model is best-suited for situations where it is possible to quickly halt a reaction. The ode gains an evolution domain constraint, which serves to restrict its duration of evolution: an ODE may evolve only while the constraint remains true. Our constraint serves two purposes. Firstly, t ≤ ε implements time-triggering: if each iteration takes at most ε time, there is at most ε delay between control cycles. Secondly, the constraints A ≥ 0 ∧ B ≥ 0 ∧ C ≥ 0 model the physical assumption that concentrations cannot be negative. For example, the reaction would end if A or B reach zero. Finally, the updated theorem statement (P → [α]Q) is now a safety statement, stating that the reactor never exceeds its maximum safe temperature. Proof. As the model now contains a loop, the proof uses loop invariant reasoning in addition to differential invariant reasoning, both distinct concepts from evolution domain constraints. We prove that the safety condition T ≤ Tmax is a loop invariant, meaning it holds before and after every loop repetition. We use the standard loop rule from Sec. 2.1. Already, a lemma arises in the ODE proof. Certain differential invariant proofs can only succeed by first proving lemmas, called differential cut formulas, which are then available as assumptions in the invariant proof. Specifically, we prove the following cut: Tmax − T > (ε − t)·rate·kT, meaning the remaining safe temperature gap exceeds the projected temperature change during the remaining time. The cut proves automatically by differential invariant, from which the loop invariant, then safety condition, follow by automatic proof. Fixed Exponents For the next model, the first fundamental change is that we update the definition of rate to use the current temperature, so that the reaction rate evolves exponentially over time.
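Before turning to that model, the on-off scheme just described can be illustrated with a small simulation sketch. This is not the verified dL model: the numbers are hypothetical, the integration is a crude Euler loop, and the evolution domain constraint on nonnegative concentrations is imitated by simply stopping the reaction when a reactant runs out. It only shows the shape of the time-triggered, model-predictive argument: predict the worst-case temperature rise over the next ε time units, and shut off before the limit can be crossed.

    # Illustrative sketch (hypothetical numbers), in the style of the on-off model.
    eps, Tmax = 0.1, 350.0
    kA, kB, kC, kT = 1.0, 1.0, 1.0, 2.0
    rate = 0.95                                # constant-rate reaction (model 2)

    A, B, C, T, elapsed = 2.0, 1.5, 0.0, 300.0, 0.0
    dt = 1e-3
    while elapsed < 20.0:
        # ctrl: shut off if running for eps more time could exceed Tmax
        is_on = 0 if T + eps * rate * kT > Tmax else 1
        # ode: evolve for (at most) eps time in the chosen mode
        for _ in range(int(eps / dt)):
            if is_on and (A <= 0 or B <= 0):   # domain constraint: reaction ends
                break
            A -= is_on * rate * kA * dt
            B -= is_on * rate * kB * dt
            C += is_on * rate * kC * dt
            T += is_on * rate * kT * dt
        elapsed += eps
        assert T <= Tmax                       # the safety property of interest
    print("temperature stayed at or below", Tmax, "; final T =", T)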
Because dynamic reaction rates are an increase in complexity, we simplify other aspects of the reaction rate formula by dropping kra and krb. The remaining changes follow from that one: amts is a helper definition used in definitions such as taylor+(x, t), which is an upper bound on temperature over time, constructed as a Taylor series approximation. This use of a Taylor series approximation represents a fundamental change in proof approach for a fundamentally more complicated dynamics: for exponential dynamics, polynomial approximations are a crucial tool to simplify reasoning. However, this Taylor bound is only provably an upper bound on a limited time interval which happens to be 1/(2·amts), which we thus take as our upper limit on ε. In practice, we hypothesize that the time limit is artificial: time could be expressed in any desired units, increasing the interval. The constants are updated to include assumptions on initial values of amounts and the controller is updated to use the Taylor approximation. The ode is updated to explicitly assume nonnegative temperature, which is a safe assumption since our goal is to avoid high, not low, temperatures. This new result shows safety with idealized modeling of concentrations under more realistic heating assumptions. Proof. The loop invariant is unchanged. We add several differential cuts; order matters since each one can serve as an assumption in following proofs: t ≥ 0 just means time moves forward, A0·B0·T·kT ≥ 0 ensures a forward reaction rate, and taylor+(Told, t) − T ≥ 0 bounds temperature T above with taylor+() in terms of the old temperature Told. The final cut requires advanced proof techniques because the term taylor+(Told, t) − T decreases; differential invariants alone are provably [25, Thm 6.1] insufficient for such terms. The earliest suitable techniques required defining new (ghost) variables; Darboux (in)equality reasoning [27] now handles such terms directly. (In the figure for this model, not reproduced here, the controller becomes ctrl ≡ {if(Tmax ≤ taylor+(T, ε)){isOn := 0}else{isOn := 1}}; t := 0, and each reaction equation in ode is again multiplied by isOn, e.g., A' = isOn · −rate·kA, B' = isOn · −rate·kB, C' = isOn · rate·kC.) The Darboux rule concludes p ⪰ 0 → [x' = f(x) & Q] p ⪰ 0 from the premise Q → (p)' ≥ g·p. Here, both instances of ⪰ are replaced uniformly with one of > or ≥, where (e)' is the differential of e, for polynomials p, g where p is called a Darboux polynomial if the premise holds and g is called its cofactor. It is natural to ask what power is gained by the addition of this proof rule. Certainly it is stronger than differential invariant reasoning which would require Q → (p)' ≥ 0, because g·p is allowed to be negative. Yet its full usefulness goes deeper, as the rule serves as a basis for differential radical invariant reasoning which is provably complete for semianalytic invariants [27,Thm. 4.5], a large class of invariants. Darboux-based rules are complete for large classes of theorems, yet it is challenging to automatically find suitable polynomials in every case. For our example model, KeYmaera X did not find a suitable polynomial, but performing algebra by hand did result in a suitable polynomial: using the definition of the ODE, solve for a polynomial that satisfies the proof goal, in this case: g ≡ A0·B0·kT. After choosing a suitable Darboux polynomial, the remaining proof goals completed using KeYmaera X's default proof method. Further applications of Taylor approximations are discussed in Sec. 4. Dynamic Exponents Even our final controlled model, below, makes some important simplifying assumptions. Note that our model makes the impact of temperature on reaction rate a linear one, whereas the true Arrhenius equation [30,Eq. 5.1] implies an exponential effect on reaction rate.
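One common construction of such a polynomial bound (not necessarily the paper's exact taylor+ definition, which appears in the figure) is the quadratic 1 + a·t + (a·t)², which upper-bounds e^(a·t) on the interval t ≤ 1/(2a), mirroring the 1/(2·amts) limit on ε above. The short numerical check below uses a made-up growth rate a.

    import numpy as np

    # Numeric illustration with a hypothetical growth rate ("amts" stand-in).
    a = 3.0
    t = np.linspace(0.0, 1.0 / (2.0 * a), 1000)
    exact = np.exp(a * t)                        # true exponential growth
    taylor_upper = 1.0 + a * t + (a * t) ** 2    # quadratic upper bound

    assert np.all(taylor_upper >= exact)         # holds on all of [0, 1/(2a)]
    print("maximum slack of the bound:", float(np.max(taylor_upper - exact)))

Beyond t = 1/(2a) the exponential eventually overtakes any fixed polynomial, which is why the bound, and hence ε, is only valid on a bounded interval.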
Linear functions can locally approximate exponential ones, but exponentials remain of future interest. Despite these limitations, the final model is important because it shows safety with both non-trivial heating dynamics and nontrivial concentration dynamics. The core change in the final model is a more advanced reaction rate dynamics, where the reaction rate dynamically changes in response to the concentration of each reactant. Definitions amts and ε are updated for the same reason. The timestep ε now changes dynamically: as the reaction proceeds, the acceptable delay increases, thus becoming easier to satisfy. It simplifies the analysis to have ε change only at each loop iteration rather than continuously, so we introduce variables A1, B1 to stand for the values of A, B at the start of each ODE evolution. The changes to the model are modest, but the dynamic changes are notable: the reaction rate is now a product of three changing variables, no longer an exponential with a fixed base. Likewise, additional proof steps will be required to account for changing concentrations, but the core proof approach is unchanged. Proof. In this proof, the reaction rate changes as the concentration of each reactant changes, so we strengthen the loop invariant to capture the status of the reactant concentrations: 0 ≤ T ∧ T ≤ Tmax ∧ A ≤ A0 ∧ B ≤ B0. The differential cuts are similar to before, with an additional lemma that the concentrations of the first two reactants decrease. The differential cut for the Taylor series is unchanged, and the same Darboux polynomial g ≡ A0·B0·kT suffices. Uncontrolled Reversible Reactions We study reversible reactions, which are crucial to society. For example, ammonia synthesis is critical to modern agriculture [16]. We consider a textbook scenario where two reactants A and B can each react to form the other: A ⇌ B. To our knowledge, we provide the first computer-checked proofs for the asymptotic behavior of this classic, widely-used textbook scenario. Specifically, our final model shows persistence [31], a relative of stability: the system eventually gets arbitrarily close to its equilibrium state, then stays close forever. We build up to this result with lemmas: the system is always moving toward equilibrium and can arbitrarily approach equilibrium in finite, bounded time. To complete the story, we show that although the equilibrium can always be arbitrarily approximated, it can never be reached exactly. Pure Reactant Decreases We consider a scenario where we start with pure reactant A, which then becomes a mixture. We show the current amount of A never exceeds the initial amount, which is intuitive by conservation of mass. The lemma might be of practical use in its own right, e.g., to verify that a container never overflows, but we mainly use the lemma as a building block for persistence. Here, the two reactants are named A and B, with initial values A = A0 > 0 and B = 0. It is classical [30, Ch. 3] that the system asymptotically approaches an equilibrium state, called a dynamic equilibrium, in which the forward and reverse reactions perfectly cancel out. We define ode using a classic textbook model of a reversible reaction, which does not model heat: the reaction rates are based solely on concentrations and constants. Proof. This proof completes automatically: the automatic prover successfully reasons by differential invariant. Equilibrium Avoidance We show that the amounts of the reactants never exactly reach the equilibrium.
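As an informal numeric companion (not the dL proof), the sketch below integrates the textbook reversible dynamics A' = −kF·A + kR·B, B' = kF·A − kR·B from pure A with made-up rate constants, and checks the two facts at issue: A never exceeds A0, and A approaches, but never reaches, its equilibrium value A0·kR/(kF + kR).

    kF, kR, A0 = 2.0, 1.0, 3.0                # hypothetical rate constants
    A_eq = A0 * kR / (kF + kR)                # equilibrium amount of A

    A, B, dt = A0, 0.0, 1e-4
    for _ in range(100_000):                  # 10 time units
        dA = (-kF * A + kR * B) * dt
        A, B = A + dA, B - dA                 # mass conserved: A + B = A0
        assert A <= A0 + 1e-9                 # "pure reactant decreases"
        assert A > A_eq                       # equilibrium approached, never hit
    print("A after 10 time units:", A, "equilibrium value:", A_eq)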
Though not directly used in the persistence proof, we prove this because it is a fundamental property in its own right which tacitly influences how a chemical plant is designed and operated. An operator would never wait for perfect equilibrium to occur, only for the system to get close to equilibrium, because perfect equilibrium (provably) never occurs. The initial condition and ODE are unchanged, only the postcondition changes, which mandates a new proof approach. To state the new postcondition, we define the amounts Ã ≡ A0·(kR/(kF + kR)) and B̃ ≡ A0·(kF/(kF + kR)) of A and B present at the equilibrium. The above definitions of Ã and B̃ can be found by solving for equilibrium (A' = 0 ∧ B' = 0) in ode subject to conservation of mass (A + B = A0). Proof. A simple change in postcondition creates a major increase in proof complexity, because we now wish to show a lower bound instead of an upper bound. We use multiple differential cuts, one of which uses Darboux reasoning. One such cut, A − A0·(kR/(kF + kR)) > 0, means A's rate of change is always in the direction of the equilibrium. Once these cuts are proved, automation suffices to finish the proof. Equilibrium Approach We show that we get arbitrarily close to the equilibrium, given sufficient time. For every positive epsilon (ε > 0), there exists a time when we get that close to the equilibrium. The assumption changes slightly; the theorem statement changes more: we prove a diamond modality ⟨ode⟩ A ≤ Ã + ε because we want to show we eventually approach the equilibrium. The practical impact of this result is that if an engineer desires an almost-perfect equilibrium, that can be attained, but the cost is time. Proof. Previous proofs highlighted advances in proof automation for box properties of ODEs; this proof relies on advances in proof automation for diamond properties of ODEs. A differential variant proof is the diamond counterpart to differential invariant reasoning for box properties. The differential variant principle [33,Corr. 24] says: if there is a lower bound on the rate of progress we make toward our goal at all times, we will get there eventually. In the formal statement of the rule, ⪰ stands for either > or ≥, d is a fresh variable giving the lower bound on the rate of progress, and x' = f(x) must provably have a global solution (i.e., for all time). The key insight behind our proof is that the rate of progress is proportional to our current displacement from the equilibrium. Since we seek to get the displacement within some ε, we can assume without loss of generality that the current displacement is at least ε, giving a bound d on the progress rate: d = ε·(kF + kR). This progress rate also confirms standard intuitions about the system dynamics: higher rates of progress are made when far away from the equilibrium and when reaction rates are high. Persistence Persistence means there exists a point after which we forever remain within ε of the equilibrium. Persistence is of practical importance because it shows both the system can get arbitrarily close to equilibrium and that the system stays that way indefinitely. In short, this result is important from a control perspective because it shows the system is well-controlled, even without a controller. As a theorem-proving case study, persistence is an excellent comprehensive test case because it combines boxes and diamonds. Only the theorem statement need be updated; all other definitions are unchanged. Proof. We combine proof techniques, first showing we eventually approach the equilibrium (variant reasoning), then showing the concentration of A never increases again (invariant reasoning).
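The variant argument guarantees "eventually" without saying how long; using the closed-form solution of this two-species linear system (an aside, not part of the dL development), the waiting time can be computed explicitly, which also makes the persistence claim concrete: once A(t) is within ε of Ã it never increases again.

    import math

    # A(t) = A_eq + (A0 - A_eq) * exp(-(kF + kR) * t) for the textbook system,
    # so the first time the displacement from equilibrium shrinks to eps is:
    def time_to_reach(eps, A0, kF, kR):
        A_eq = A0 * kR / (kF + kR)
        gap0 = A0 - A_eq
        if gap0 <= eps:
            return 0.0
        return math.log(gap0 / eps) / (kF + kR)

    # Hypothetical numbers: the smaller eps or the total rate kF + kR, the longer.
    print(time_to_reach(eps=1e-3, A0=3.0, kF=2.0, kR=1.0))   # about 2.53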
A major strength of logic is compositionality: complex proofs are but combinations of simple parts. A dL proof of α [α]P can be divided into a variant proof and invariant proof, for example. At a high level, KeYmaera X lived up to this compositionality promise. At a low level, there is always room for improvement: the [α]P proof assumes const, i.e., it assumes constants never change. Due to limitations of the differential variant rule, we had to prove the constants never change, albeit with a simple proof. The limitation appears incidental to KeYmaera X's implementation, not fundamental. It speaks well of the implementation used in these case studies that this was the only instance where the automation added new proof challenges. This serves as a reminder that theoremproving case studies are dually important, showing both the gains from new automation and which features deserve future optimization. Related Work Related work includes hybrid systems verification, reactor design, and reaction kinetics. We begin with theorem-proving approaches to verification, specifically. Hybrid Systems Theorem Proving. Specialized hybrid systems theorem-provers [11,35] provide a high degree of generality and rigor, while making efforts to mitigate the high degree of user effort typical of theorem-proving. For example, generality in our case study means many different reactions and reactors are supported by modifying parameter values, with no new proof effort. Rigor is not merely of theoretical interest: in many hybrid systems reasoning techniques which do not share our rigorous logical foundations, many soundness edge cases have recently been identified [33,Tab. 1]. Soundness violations are unacceptable in verification. We use the KeYmaera X [11] prover for its exceptional rigor: its axioms have been proved sound in a theorem-prover [5] and it soundly derives its advanced proof methods [33,27, Tab. 1] from sound axioms. Hybrid Hoare Logic (HHL) [17,35] is another notable hybrid prover; an HHL case study similar to ours could be interesting future work. HHL Prover and KeYmaera X both base their ODE invariant automation on the same core algorithm [18], so this aspect of automation is likely comparable in both. Other Logical Approaches We are aware of only one prior logical proof [36] of a chemical process with nontrivial hybrid dynamics. Unlike ours, it is not in a theorem-prover and does not address persistence nor reactions, but rather a mixing process. General-purpose theorem-provers [1,8,21,29] have formalized hybrid systems, including stability [29,21], but not applied them to reactions. Reachability Model-checkers based on reachability analysis [6,2,7,9] are the primary competitors to hybrid systems theorem-provers. They provide greater automation at the cost of accepting restrictions in generality. Details vary, but common restrictions include special-case guarantees (is a specific reaction safe?), time-bounded analyses (am I safe for a time?) or conservative approximations of dynamics. Their trusting computing base is typically larger than a theoremprover's, complicating rigor. Taylor approximations, particularly Taylor models [4], are broadly useful in reachability analysis, e.g., in Flow* [6] and CORA [2]. We have shown that Taylor approximations are equally useful in KeYmaera X, where they come with proofs. Stability and Persistence Hybrid system stability is well-studied both inside [34,21,29] and outside [15,22,19] theorem-provers, with persistence also studied [31]. 
Lyapunov functions have shown stability of a chemical reaction on paper, but not in a prover [13]. Stability and its relatives in KeYmaera X specifically are a new topic [34]; we contribute the first worked KeYmaera X case study for an application of industrial interest. Chemical Engineering. The chemical engineering results we formalized are classical; our innovation is the generality and rigor with which we formalize them in KeYmaera X. Standard textbooks provided kinetics for well-mixed adiabatic batch reactors [30,Eq. 2.93], uncontrolled reversible reactions [30,Ch. 3], and the Arrhenius equation [30,Eq. 5.1]. Standard control theory textbooks introduce model-predictive control and bang-bang control [12]. Although basic models of reactors are widely-used in formal methods, ours is the first in a theorem-prover. It additional overcomes others' limitations: -Previous chemical proofs ignored persistence and reactors [36] -Optimal scheduling [28] and safety arguments [20] have used simplistic finite state machines -A verified plant design used simple piecewise-constant dynamics [14] -CEGAR verification of tanks [24] ignored reactors Though we build on such broad related work, our contribution of generalpurpose proofs about chemical reactors and reactions in a theorem-prover fills a significant gap in the verification literature. Conclusion We used the KeYmaera X theorem prover for differential dynamic logic to formalize two case studies: a batch reactor and a reversible reaction, each of which consisted of four models and their proofs. This work served two purposes: -To our knowledge, we provide the first proof in a theorem prover of these foundational chemical engineering results -We demonstrate how recent advances in KeYmaera X's automation, such as its implementation invariant checking, Darboux reasoning, and differential variants, contribute to the proofs One direction for future work is verifying reactors with more advanced controllers such as PID (proportional-integral-derivative) controllers [32,Ch. 13]. However, potential future work is broad in nature, reaching well beyond chemical reactor design. Techniques such as invariant checking and Taylor series are of general applicability using various tools, though KeYmaera X provides a rigorous implementation of both. Differential variants are widely useful for proving ODE properties that are true eventually, but not at every moment. We have shown one significant application for all these proof techniques; their are certainly others because the applications of hybrid systems models are diverse.
2022-05-18T06:47:03.560Z
2022-05-17T00:00:00.000
{ "year": 2022, "sha1": "9eef583125452ac4c7a26ad03a3af2631bedfe6b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9eef583125452ac4c7a26ad03a3af2631bedfe6b", "s2fieldsofstudy": [ "Chemistry", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
16618399
pes2o/s2orc
v3-fos-license
The elements of human cyclin D1 promoter and regulation involved Cyclin D1 is a cell cycle machine, a sensor of extracellular signals, and plays an important role in G1-S phase progression. The human cyclin D1 promoter contains multiple transcription factor binding sites such as AP-1, NF-κB, E2F, Oct-1, and so on. The extracellular signals function through the signal transduction pathways converging at the binding sites to activate or inhibit the promoter activity and regulate cell cycle progression. Different signal transduction pathways regulate the promoter at different times to get the correct cell cycle switch. Disordered regulation or special extracellular stimuli can result in the cell cycle getting out of control through the regulation of promoter activity. Epigenetic modifications such as DNA methylation and histone acetylation may be involved in cyclin D1 transcriptional regulation. Introduction During the G1 phase, cells respond to the extracellular signals that influence cell division, growth, and differentiation. Cyclin D1 is thought to play pivotal roles in G1-S phase transition. Mistakes in G1 phase may lead to the cell cycle getting out of control and cause tumorigenesis. Cyclin D1 is a sensor to integrate extracellular signals with the cell cycle machinery, which functions through CDK4/6 to trigger cell cycle progression. In recent years, accumulating evidence suggests cyclin D1 also conveys cell cycle- or CDK-independent functions, and cells can do without cyclin D1 (Coqueret 2002; Fu et al. 2004; Lamb et al. 2003; Pestell et al. 1999). The cyclin D1 promoter sequence was studied and subcloned in several different laboratories (Albanese et al. 1995; Herber et al. 1994b; Motokura and Arnold 1993; Nagata et al. 2001). The promoter sequence, GenBank number Z29078 (Herber et al. 1994b), contains no obvious TATA box, with TF (transcription factor) binding sites such as AP-1, SP-1, E2F, OCT-1, and so on. In this review, the structure of the cyclin D1 promoter is discussed together with such binding sites, and the regulation from signal transduction pathways converging at the binding sites. The elements of the cyclin D1 promoter The popularly studied cyclin D1 promoter is about 1,810 bp long, with many cis-elements that can mediate signals to activate or inactivate the promoter activity. From −1,309 (NFAT binding site) to −10 (Ets binding site), there are many regulatory elements reported. And if searching by computer program, there are some more elements that have not yet been studied. Comparing the cyclin D1 promoter with the rat and mouse promoters, homologous regions were found (Eto 2000), which can lead us to find new elements in the human cyclin D1 promoter. The elements reviewed here only include the elements that have been studied (Fig. 1; Table 1). (Fig. 1 also shows the NF-κB and ARE sites and the sequences of Egr-1 and Sp1; the transcription start sites are indicated by two arrows according to the data of the referenced papers, and the sequences are shown in the manuscript. Table 1 lists each element with its location and reference, e.g., CREB/ATF2 at −58, J Biol Chem 1999, 274(11).) AP-1 The AP-1 complex can inhibit (Albanese et al. 1995) or activate (Brown et al. 1998; Cicatiello et al. 2004; Watanabe et al. 1996b) the cyclin D1 promoter, depending on its composition. JunB usually inhibits the cyclin D1 promoter and can antagonize the c-Jun activation of the cyclin D1 promoter (Shaulian and Karin 2001). A change of AP-1 composition toward an increase of JunB results in downregulation of cyclin D1 (Grosch et al. 2003). So generally c-Jun is an activator and JunB a repressor of the cyclin D1 promoter.
c-Fos is expressed rapidly and transiently (Balmanno and Cook 1999), so the inhibition effect by c-Fos overexpression (Albanese et al. 1995) probably cannot function in real cell cycle, except for c-Fos prolonged binding by some stimulation, e.g., oxidative stress (Burch et al. 2004). Not only protein level but also the phosphorylated modification status is important to AP-1 proteins. c-Jun activation of cyclin D1 promoter requires phosphorylated on Ser63/73-Pro motifs (Wulf et al. 2001). Phosphorylation of JunB results in decreased JunB protein levels in mitotic and early G1 cells. In contrast, c-Jun levels remain constant with N-terminal phosphorylation. And the modifications of AP-1 proteins may regulate cyclin D1 transcription temporally to control cell cycle progression (Bakiri et al. 2000). Some TFs in addition to Ap-1 family may regulate cyclin D1 promoter activity through AP-1 site directly (Roche et al. 2004) or indirectly, e.g., by protein interaction , cooperation with other TF binding sites such as CREB (Watanabe et al. 1996a). GAS Among the STATs, only STAT3 and STAT5 can bring about the activation of cyclin D1 (Bromberg et al. 1999;Calo et al. 2003;Leslie et al. 2006). Literatures showed that activated form of STAT3 was accompanied by increased expression levels of cyclin D1 (Bromberg et al. 1999;Kijima et al. 2002;Leslie et al. 2006;Masuda et al. 2001Masuda et al. , 2002. And some paper showed that STAT3 can inhibit cyclin D1expression . And during the liver regeneration after partial hepatectomy, the cyclin D1 induction was repressed, but STAT3 was unchanged in mice (Chen et al. 2004), which may suggest that modification of STAT3 is important to its activity. Data also showed cyclin D1 overexpression and STAT3 activation were, mutually exclusive events in MM (Quintanilla-Martinez et al. 2003). But there was no evidence showing STAT3 can directly function through the cyclin D1 promoter, lacking data such as EMSA, ChIP and so on (Masuda et al. 2001(Masuda et al. , 2002. Moreover, cyclin D1 repression may due to CDKN1A or CDKN1B promoter induction. There are some evidences shows that STAT3 can active CDKN1B or CDKN1A promoter through PI3K pathway. Clearly, PI3K pathway can induce cyclin D1 promoter, and new evidences (Bienvenu et al. 2005) show that cyclin D1 is recruited to the CDKN1A promoter by a STAT3-NcoA complex leading to an inhibition of the p21waf1 gene (Bienvenu et al. 2005). In conclusion, in some context STAT3 and cyclin D1 balanced in cell cycle regulation but generally the relation between cyclin D1 and STAT3 may due to cell type and now is unclear. Unlike STAT3, STAT5 can directly bind cyclin D1 promoter in which there are two STAT binding sites, one called GAS1 the other is GAS2 (Magne et al. 2003). The GAS1 site (distal) can bind stat5a/b which can activate cyclin D1 promoter (Brockman et al. 2002;Magne et al. 2003;Matsumura et al. 1999).The phosphorylated modification of STAT5b at Tyr679 induces STAT5b activation and then activate cyclin D1 promoter through interaction with other transcription factors, such as LEF1 and CREB/ATF2 (Kabotyanski and Rosen 2003). STAT5a lacks the Tyr679 site which can explain why only STAT5a/5b heterdimer or STAT5b/5b homodimer but not STAT5a/5a homodimer bind to the cyclin D1 promoter (Magne et al. 2003). Unlike GAS1, the GAS2 site, accurately composite Oct-GAS element, may be masked by Oct-1 protein which binding site overlap with GAS2. 
The binding of STAT5 to this site is required both GAS2 and OCT-1 element, with the interaction between STAT and PAU domain of Oct-1 (Magne et al. 2003). E2F The E2F binding sites in cyclin D1 promoter illustrate Fig. 1. Among five members of the E2F family, including E2F1, 2, 3, 4, and 5, only E2F1and E2F4 can bind this promoter (Watanabe et al. 1998). E2F transcription factors are bound to RB protein, and when RB is phosphalated by cyclin D1/ CDK4, 6, E2Fs are released free. The free E2Fs then regulate their target genes promoting cell cycle progression. Cyclin D1/CDK4, 6, RB and E2F cooperate together to enter cell cycle and progression in normal cell or to be transformed in tumor cell lines. Although cyclin D1 is upstream upon E2F protein during cell cycle, there are three feedbacks loop between cyclin D1 and E2F to facilitate the progression. E2F4 expresses at early G1 phase (Muller et al. 1997) which can activate cyclin D1 and results in more E2F4 protein level. This is a positive feedback loop, which occurs at early phase of cell cycle and let cell enter cell cycle quickly. There are also two other feedback loops, which respectively result in cell cycle arrest or progression depending cell types. E2F4 and E2F1 are functionally different which also express at different time in cell cycle (Muller et al. 1997). Contrast to E2F4, E2F1 expresses at late G1 phase (Muller et al. 1997). E2F1 regulates a set of genes that can let cell cycle progression. Depending different cell context, E2F1 can activate (Inoshita et al. 1999) or depress (Watanabe et al. 1998) cyclin D1 expression. High level of free E2F1 protein can induce proliferation then apoptosis (Knezevic and Brash 2004). Transgenetic mice expressing high level E2F1 also induce apoptosis (Pierce et al. 1998a). In this context, free E2F1 can depress cyclin D1, which formed a negative feedback loop to avoid apoptosis (Watanabe et al. 1998). The last feedback loop is that free E2F1 proteins can active cyclin D1 (Fan and Bertino 1997). The high level free E2F1 protein can activate another set of genes, e.g., FGFR which let cell cycle progression or transformed cells (Tashiro et al. 2003). The affinity of E2F1 to cyclin D1 promoter is higher than E2F4 (Lee et al. 2000). E2F1 has more potent activator activity than E2F4 (Pierce et al. 1998b). E2F-4 is located in nucleus from G0 until mid-G1 phase and mainly cytoplasmic in late G1, S, and G2 phases. In contrast, endogenous E2F-1 is absent from resting cells and is predominantly nuclear in late G1 and S (Muller et al. 1997). Due to the different affinity, at early stage E2F4 bounding that induce cell cycle entrance, and at late stage E2F1 take place of E2F4 results in cell cycle progression or transformation. CREB Ser 133 phosphorylation is necessary for induction of cyclin D1 promoter through this site (D'Amico et al. 2000;Lee et al. 1999;Sharma et al. 2004), but the POU domain of oct-1 can potent its activation without Ser 133 phosphorylation by protein interaction (Boulon et al. 2002). But CREB Ser 133 phosphorylation may result in repression of cyclin D1 due to cell type (Musa et al. 1999). E box There is an E box element at −558 in human cyclin D1 promoter (Eto 2000; Magne et al. 2003;Zhang et al. 2002). The E box can bind Myc or other transcription factor, so some paper may assigned it c-myc element. Myc proteins bind to cyclin D1 promoter to inhibit its activity (Chien et al. 2008;Gonzalez-Mariscal et al. 2009;Philipp et al. 1994), probably inducing DNA methylation (Hervouet et al. 
2009). The element may activate cyclin D1 promoter by different protein interaction with myc, e.g., Max ). Ets In the proximal region of cyclin D1 promoter, an Ets (c-Ets2) site was first identified in 1995 (Albanese et al. 1995). There several putative Ets binding site in cyclin D1 promoter. Tetsu and McCormick (1999) demonstrated four other Ets sites which they named Ets A B C D, but only the B box is mediated by P21RAS. Zhao et al. (2001) demonstrated that the EtsB binding site mediated cyclin D1 promoter regulation by FAK. The proximal box can mediated PKC delta activity (Page et al. 2002), and RAS induced MAPK signal transduction (Albanese et al. 1995). CSL Notch, an evolution-conserved membrane crossed-signal molecular (for review, see Artavanis-Tsakonas et al. 1999) encoding a family of transmembrane proteins that are involved in many cellular processes such as differentiation, proliferation, and apoptosis, can activated cyclin D1 promoter transcription through a CSL site (Jeffries et al. 2002;Ronchini and Capobianco 2001;Stahl et al. 2006). GT box There are four GT box in cyclin D1 promoter but only the GT box A was active which was responsible for the inhibition effect of KLF8 to cyclin D1 promoter (Zhao et al. 2003). Sp1 The transcription factor SP1 is a DNA-binding protein which interacts with a variety of gene promoters containing GC-box elements. Among many possible SP1 sites, the site studied in the promoter overlaps with Egr-1. Induction of the cyclin D1 promoter activity in the early to mid G 1 phase is via the SP1 sites by the Ras-dependent pathway (Nagata et al. 2001). NeuT can induce cyclin D1 promoter by Sp1/3 binding in cooperation with E2F site (Lee et al. 2000). In PC12 cells NGF can induce neurite outgrowth and cyclin D1 transcription via Sp1 and NF-.B binding site in the proximal region of the cyclin D1 promoter (Marampon et al. 2008). Complex motif Complex motif here means that two elements in a promoter are very close, sometimes joined together. In this promoter, e.g., E2F and sp1, stat and oct-1 are close to form complex motifs. More often, the proteins that bound to complex motif could interact with each other. So we can deduce that proteins which can interact with each other may result in DNA sequence rearrangement. The protein and DNA sequence can co-evolve. Starting site Different groups studied the transcription star site with different methods (Herber et al. 1994a;Hsiang and Straus 2002;Motokura and Arnold 1993;Philipp et al. 1994). Among these, CCTCCAGAGGGCTGT (Motokura and Arnold 1993) and CCTCCAGAGGGCTGT (Hsiang and Straus 2002; transcription star site is underlined) were prevalently accepted. In this review, elements positions were normalized to CCTCCAGAGGGCTGT (Motokura and Arnold 1993). Signal transduction pathway There are mainly three signal transduction pathways involved in cyclin D1 promoter regulation, which are MAPK, PI3K/ Akt, and Wnt. Others such as ER, NF-κB, JAK/STAT, Rac1/ NADPH oxidase are also involved. Here, we discuss the main three pathways: MAPK, PI3K/Akt and Wnt including its molecules, response elements and cross-talk points (Fig. 2). The Wnt signaling pathway is conserved in various organisms from worms to mammals, and plays important roles in development, cellular proliferation, and differentiation. Wnt stabilizes cytoplasmic β-catenin and then βcatenin is translocated into the nucleus where it stimulates the expression of genes including cyclin D1 (Kikuchi 2000;Shtutman et al. 1999;Tetsu and McCormick 1999). 
PI3k/ Akt signal transduction pathway can inhibit GSK3β and then promote β-catenin to activate cyclin D1 promoter via the TCF site (Albanese et al. 2003). PI3k/Akt signal transduction pathway can also activate cyclin D1 promoter by modulating CREB via its binding site. But this may be weaker than that by inhibition of GSK3β (Xie et al. 2003). ILK and PDK1 can activate Akt by phasphation at different amino acid site, ser-473 and ser-308, respectively (Persad et al. 2001a), which all take part in Akt activation which consequently then inhibits GSK3β at ser-9 (Troussard et al. 2003). In some cell type, PKC but not Akt can inhibit GS3Kβ (Xie et al. 2003). Rac1 which can form positive regulation loop with PI3k (Welch et al. 2003), can activate cyclin D1 by NF-.B (Joyce et al. 1999) and CREB site (Bauerfeld et al. 2001;Joyce et al. 1999;Page et al. 2000) independent of ERK (Page et al. 1999b(Page et al. , 2000. Wnt pathway regulation whereby activation of Rac1 amplifies the signaling activity of stabilized/mutated β-catenin by promoting its accumulation in the nucleus, and synergizing with β-catenin to augment TCF/LEF-dependent gene transcription (Esufali and Bapat 2004). PI3k/Akt signal transduction pathway plays important role in regulation of cyclin D1 promoter. The pathway may induce cyclin D1 by CREB site in the promoter and can modulate GSK3β to activate β-catenin, which can induce cyclin D1. There are many interlinks between PI3k and wnt pathway in regulation of cyclin D1 promoter. It is usually thought that MAPK, unlike wnt, distinct from PI3k signal transduction pathway (Page et al. 2000), but there are still many cross-talks between them. The ERK pathway modulated AKT phosphorylation by acting on the PTEN levels (Marino et al. 2003). Persad et al. (2001b) define a pathway that ILK and GSK-3 can regulate βcatenin stability, nuclear β-catenin expression, and its transcriptional activity. Wnt-transactivated ErbB1 was responsible for MAPK activation and the increased levels Civenni et al. 2003). TGF-β1 also first decreases and later potentiates the levels of EGF-activated MEK1/MAPK and PKB, which results in initially suppresses EGF-induced cyclin D1 expression then later releases the inhibition (Yan et al. 2000) implying there are other cross-talks between MAPK and PI3k/Akt. Taken together, there are cross-talks between Wnt and PI3k usually converging at GSK3β. MAPK pathway is generally distinct from PI3k, but they can cross-talk, e.g., by PTEN (Marino et al. 2003;Weng et al. 2001), TGF-β1 (Yan et al. 2000), PAK (Nheu et al. 2004), or others. PTEN is also involved in the regulation of nuclear β-catenin accumulation and TCF transcriptional activation in an APC-independent manner (Persad et al. 2001b). Sometimes in the mammary gland Wnt pathway can activate cyclin D1 by MAPK activation (Civenni et al. 2003). The temporal expression of cyclin D1 Cell cycle progression requires different signal molecules function at the right time. The stimulation from growth factor is temporal, biphasic (Jones and Kazlauskas 2001). So what pathway function at what time is critical for cell cycle progression. Rac/Cdc42 signaling induces cyclin D1 expression in an early G1 phase. In the mid-G1 phase, cyclin D1 is induced by sustained ERK, which can be promoted by Rho kinase. At the same time, Rho kinase suppresses Rac/Cdc42 activity Welsh et al. 2001). MKP, as an inhibitor of ERK, can form a feedback loop to a flexibly balanced ERK activity (Bennett and Tonks 1997;Bhalla et al. 2002;Ryser et al. 2004). 
MKP overexpression can result in downregulation of cyclin D1 (Kawanaka et al. 2001;Lavoie et al. 1996;Qin et al. 2005). In the later stages of G1, PI3k pathways instead of ERK to sustain cyclin D1 expression to perform S phase entry (Gille and Downward 1999;Marino et al. 2003). Akt/PKB, an important downstream of PI3k, is expressed in late G1phase (Gille and Downward 1999;Paramio et al. 1999), but it only influences partly cyclin D1 expression (Gille and Downward 1999). So there may be multiple signal molecules involved. Epigenetic regulation of the cyclin D1 transcription Epigenetic regulation means a heritable alteration in gene expression without the primary DNA sequence changing. The major mechanisms involved in epigenetic changes are modification of DNA and histone protein such as DNA methylation at cytosine bases and histone acetylation. Epigenetic modification sites involved in cyclin D1 transcriptional regulation include (1) GC-rich Sp1/CRE binding site, (2) remote upstream region mainly in chromosome translocation, a common cause of blood tumor, (3) 1 kb upstream including E-box element, and (4) other DNA methylation sites which have not been studied. Actually, function of DNA methylation and histone modification are commonly studied together. DNA methylation at Sp1/CRE binding sites of rat cyclin D1 promoter may be essential for keeping a number of the stromal cells in the basal layer live (Kitazawa et al. 1999). In hamster cell, using human cyclin D1 promoter, data showed that DNA methylation was found at Sp1/CRE binding sites (Hilton et al. 2005). However, the epigenetic modification including DNA methylation at cytosine bases and H3/H4 acetylation at Sp1/CRE binding sites may not be essential for transcriptional regulation of cyclin D1 (Krieger et al. 2005). Chromosome translocation, a common cause of blood tumor, is thought to transcriptional regulation of cyclin D1. Data showed that such epigenetic modifications mainly were found in the translocation region, distal upstream region of cyclin D1 promoter (120 kb from the transcriptional start site; Liu et al. 2004) and demethylation may due to CTCF and NPM (Liu et al. 2008a). Different group found the DNA methylation or histone acetylation in this region from different blood tumor including MCL, MM, and NHL and so on. Although the epigenetic modification may be essential in gene transcriptional regulation, it was thought that the epigenetic modification have no effect on cyclin D1 transcription. No DNA methylation was found in cyclin D1 promoter by genomewide methylation analysis in MCL patients (Leshchenko et al. 2010). The endogenous cyclin D1 promoter may be inaccessible to the transcription factor and cyclin D1 transcription may be control through other different manner. Actually the MYEOV gene which located approach to cyclin D1 was transregulated by this epigenetic modification (Janssen et al. 2002), which showed that epigenetic regulation may need a proper transcriptional status. Interestingly, in some MM and MCL samples that did not express cyclin D1, the cyclin D1 promoter was hypomethylated and hyperacetylated, which suggested that DNA methylation in the promoter may be related to malignant phase rather than to cyclin D1 regulation . And this agreed with the data in NHL research, in which the DNA methylation was identified as a tumor maker, although it is not involved in cyclin D1 transcription (Shi et al. 2007), which showed that the region was proven to be methylated. 
Genes other than cyclin D1 may be regulated by DNA methylation which can then regulated cylcin D1 including CDKN2A (Vonlanthen et al. 1998;Kawauchi et al. 2004;Takahira et al. 2004;Hutter et al. 2006;Liu et al. 2008b;Matsuda 2008;Takahira et al. 2004;Kawauchi et al. 2004;Hashiguchi et al. 2001;Hutter et al. 2006;Dominguez et al. 2002), wnt (Fox et al. 2008Martin et al. 2009), and miRNA (Ilnytskyy et al. 2008). Data from blood tumor, epigenetic modification in 1 kb region upstream from transcription start site may not affect cyclin D1 transcription . Although data from blood tumor cell mainly showed that epigenetic modification may not involved in cyclin D1 transcription, in glioma cells Hhervouet et al. (2009) showed a DNA methylation mechanism in depression of cyclin D1 transcription via Ebox, a site-specific DNA methylation site in the 1 kb upstream region of cyclin D1.And different from blood tumor research which showed treatment of TSA or 5-Aza had no effect on cyclin D1 transcription (Krieger et al. 2005), data showed that the epigenetic regent can regulate its transcription or translation in glioma cell,H1299 cell, follicular lymphoma (also blood tumor) cell and MCF-7 cell (Alao 2007;Alao et al. 2006a, b;Bennett et al. 2009;Hervouet et al. 2009;Rocha et al. 2003). Data from HCC (primary liver cancer) showed DNA methylation in cyclin D1 promoter (Matsuda 2008) and in lung cancer, DNA methylation of CDKN2A promoter in regulation of cyclin D1 may be different (Zhou et al. 2001). So epigenetic regulation may be different due to cell types. Other than histone acetylation, histone methylation of H3k9 may inhibit human (Krieger et al. 2005) and mouse (Shirato et al. 2009) cyclin D1 transcription, and this may function in development (Ait-Si-Ali et al. 2004). Considering CpG islands identifying, other sites may be studied to reveal the epigenetic regulation mechanism involved in cyclin D1 for example there are many other CpG inlands (Krieger et al. 2005) except for the region mentioned above. In conclusion, epigenetic modification (DNA methylation and histone modification) involved in cyclin D1 transcriptional regulation may be cell type-specific. In most blood tumor, cyclin D1 transcription is not due to DNA or histone modification, but this was not the barrier for the DNA methylation to be used as a putative tumor marker. Other gene (especially CDKN2A) may be regulated by epigenetic modification. There may be other epigenetic modification which can be studied to provide insight into a new mechanism of epigenetic transcriptional regulation of cyclin D1, for there are other CpG islands not studied yet. Conclusion Cell cycle control is complex, in which cyclin D1 transcription regulation may be important. But firstly, cell cycle control is not only in transcription level but also in post-transcriptionally regulated manner, e.g., protein degradation, modification, which all play an important role in cell cycle control. For example, GSK3β can also increases cyclin D1 protein degradation (Hamelers et al. 2002;Jirmanova et al. 2002;Kim et al. 2002;Zou et al. 2004), and cyclin D1 mRNA half-life becomes shorter when serum is removed (Guo et al. 2005). And secondly, much study got from synchronized cell by serum deprivation, which cannot reflect the real cycle. In actively cycling cells, cyclin D1 may be induced to high levels in G2 phase, and the expression levels of cyclin D1 in G2 phase determine the fate of the next cell cycle (Guo et al. 2005;Stacey 2003). 
Thirdly, some cells can proliferate and organs can develop without cyclin D1 (Kozar et al. 2004; Malumbres et al. 2004). Taken together, the regulation of the cyclin D1 promoter is important in cell cycle control, but it is not all of it.
Kinetic Studies on the Cure Kinetics of DGEBA (Diglycidyl Ether of Bisphenol-A) with a Terephthalamide Hardening System Generated from PET Waste

An aromatic amide hardening system for epoxy resin based on diglycidyl ether of Bisphenol-A was developed through ammonolysis of PET waste. The ammonolysis of PET waste was carried out at ambient temperature and pressure. The end product, characterized as terephthalamide, was used as a hardener for the epoxy resin (diglycidyl ether of Bisphenol-A), with triethylamine and sodium hydroxide as catalysts. Several samples with varying amounts of the catalysts were used to study the curing kinetics by differential scanning calorimetry (DSC), and isothermal and dynamic DSC characterizations of the formulations were performed. The curing of terephthalamide with the epoxy resin shows a high activation energy of 50.18 kJ/mol in the absence of catalyst, which was lowered towards negative values in their presence. Optimum curing of the epoxy resin with the aromatic hardener can be obtained in 28 minutes at 320 °C; the use of catalysts reduced the curing time to 2.0 minutes at 60 °C.

INTRODUCTION

Epoxy resins are polymer materials used for a wide range of applications, either unmodified or as matrix materials for composites. Their advantageous properties include good adhesion to many substrates, no emission of volatiles upon cure, enhanced mechanical properties, high electrical insulation, good chemical resistance, low shrinkage and a broad formulating range [1]. Epoxies find use as adhesives, caulking compounds, casting compounds, sealants, varnishes and paints, as well as laminating resins for a variety of industrial applications. It is essential to control the degree of cure of the resin in order to achieve the end properties required by each application. Many studies in the literature [2-8] blend epoxy resins with other materials in order to achieve desired end properties. Different types of hardening systems for epoxies have been developed and their curing kinetics studied with the help of differential scanning calorimetry [9-13], which has been widely recognized as a useful method for determining the cure kinetics of thermoset resins. Several workers have investigated the cure kinetics of DGEBA with different amine systems, such as poly(oxypropylene) triamine [14], poly(oxypropylene) diamines [15], 4,4'-diaminodiphenylsulphone [16] and 4,4'-diaminodiphenylmethane [17], by means of DSC.

*Address correspondence to this author at the Department of Polymer Science, Bhaskaracharya College of Applied Sciences, University of Delhi, Delhi-110075, India; Tel: +919891401243; E-mail: kkdchauhan@gmail.com
The objective of the current research is to develop an aromatic amide hardening system for epoxy resin through ammonolysis of PET waste with liquor ammonia [18,19] at ambient temperature and atmospheric pressure. The end product of the ammonolysis of PET waste was characterized as terephthalamide with the help of spectroscopic techniques, SEM and differential thermal analysis. To our knowledge, the cure kinetics of this system has not yet been investigated. Several samples were prepared using epoxy resin and terephthalamide with varying amounts of triethylamine and sodium hydroxide, which were used as catalysts. Dynamic characterization of the samples was performed at a constant heating rate within the temperature range 40 °C - 360 °C. The isothermal cure kinetics of diglycidyl ether of Bisphenol-A using terephthalamide as hardener was investigated by means of DSC both in the absence and in the presence of catalysts.

Materials and Characterization

The PET waste used in this study was obtained from various post-consumer sources such as soft drink bottles and water bottles. The PET waste bottles were collected manually and processed before use. Liquor ammonia solution (sp. gravity 0.91) was of A.R. grade and procured from M/s Qualigens. The ammonolysed end products were synthesized and characterized as per the procedure given in our earlier publications [18,19] and used in the present study. The epoxy resin used in this study is diglycidyl ether of Bisphenol A (DGEBA) of commercial grade (LY 556), procured from M/s Huntsman. The number average molecular weight Mn = 360 was obtained from the epoxy equivalent of 180 g/eq, which was determined by chemical titration of the end groups. Triethylamine and sodium hydroxide were obtained from M/s Qualigens and used as such without further purification.

Methods

The cure of the epoxy resin with the aromatic amide hardening system generated from PET waste, in the absence and presence of catalysts, was studied by DSC on a Mettler differential scanning calorimeter (STARe SW 9.01), using an empty aluminum pan as reference. Prior to the DSC runs, the temperature and heat flow were calibrated using indium and zinc standards. The measurements were conducted under a nitrogen atmosphere. Dynamic DSC experiments were performed to determine the curing temperature of each sample at a constant heating rate of 10 °C/min within the temperature range of 40 °C - 360 °C. The sample mass was kept in the 10 to 25 mg range. Since the reaction rate constant is a function of temperature, the calculation of kinetic parameters needs at least three isothermal experiments at different temperatures. For the isothermal experiments, samples were placed in the preheated DSC cell and the scan was started when temperature equilibrium was regained. The reactions were conducted at three different temperatures for each sample. The recorded isothermal thermograms were analyzed with the help of the DSC kinetic software (STARe).
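Before presenting the individual formulations, it may help to illustrate the two standard steps behind the isothermal analysis just described: integrating the exotherm to obtain the degree of conversion α(t), and fitting a rate law to dα/dt. The sketch below is a minimal illustration of those steps, not the actual STARe routine; the nth-order rate law and the synthetic heat-flow trace are assumptions introduced only for the example.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical isothermal DSC trace: time (s) and exothermic heat flow (W/g).
t = np.linspace(0, 600, 601)
q = 0.8 * np.exp(-t / 120.0)          # stand-in for a measured exotherm

# Degree of conversion: cumulative heat released / total heat of reaction.
dH = np.cumsum((q[:-1] + q[1:]) / 2 * np.diff(t))   # trapezoidal integration
alpha = np.concatenate(([0.0], dH / dH[-1]))

# Fit an nth-order rate law dalpha/dt = k * (1 - alpha)**n to the numerical derivative.
dalpha_dt = np.gradient(alpha, t)

def nth_order(a, k, n):
    return k * np.clip(1.0 - a, 1e-9, None) ** n

(k_fit, n_fit), _ = curve_fit(nth_order, alpha, dalpha_dt, p0=[1e-3, 1.0])
print(f"k = {k_fit:.3e} 1/s, reaction order n = {n_fit:.2f}")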
Sample Preparation

Table 1 shows the composition of the different samples comprising epoxy resin, terephthalamide (used as hardener) and catalysts (triethylamine and sodium hydroxide). These formulations are designated S0, S1, S2, S3 and S4, where S0 is the control sample without any catalyst. In these samples the epoxy resin and the hardener were used at 67% and 33% respectively. The concentrations of epoxy resin and terephthalamide were kept constant and the catalysts were used in the range 1 g - 4 g in the different samples. All the components were mixed thoroughly by stirring with a glass rod at room temperature. The thoroughly mixed solutions of epoxy resin, hardener and catalysts were used to perform the dynamic and isothermal DSC characterizations.

Dynamic DSC Characterization

DSC curves of the investigated systems (S0, S1, S2, S3 and S4) at a heating rate of 10 °C/min within the temperature range of 40 °C - 360 °C are shown in Figure 1, and Table 2 summarizes the dynamic characterization of all samples. All the exothermic peaks are symmetrical and give the curing temperature of each reaction mixture. The S0 formulation, the control sample, cured between 295 and 335 °C in the absence of catalysts; its thermogram shows a quite symmetrical exothermic peak at 320 °C, suggesting that the epoxy resin cures with terephthalamide by following an autocatalytic cure. The S1 formulation contains 1 g of triethylamine and 1 g of sodium hydroxide as catalysts; this reaction mixture cured within a temperature range of 50 - 88 °C with a peak temperature of 70 °C, so the curing temperature was markedly decreased. The S2 sample cured between 40 and 75 °C with a peak temperature of 64 °C, and the S3 sample had a peak temperature of 76 °C. The S4 sample cured between 50 and 80 °C with a peak temperature of 60 °C. These results emphasize that the catalysts lower the activation energy of curing, and hence the curing temperature decreases as the catalyst concentration increases. In the S3 formulation the amount of NaOH is the highest among all the samples, yet it shows a higher peak temperature than S2; however, when the amount of triethylamine was increased in the S4 sample, a marked decrease in peak temperature was observed.

Isothermal DSC Characterization

The curing kinetics of the samples was studied by the isothermal DSC method, keeping three constant temperatures for each sample (Table 3).

Curing Kinetics of Terephthalamide with Epoxy Resin in the S0 Sample

Figure 2 shows the DSC thermogram of the sample, which was prepared by mixing 4.5 g terephthalamide, recorded as three isothermal curves at 290, 300 and 310 °C. The curves show that the rate of reaction increases with temperature. These thermograms were analyzed with the help of the STARe software; the activation energy was found to be quite high, of the order of 50.18 kJ/mol at 320 °C, and the order of reaction was observed to be near one, i.e. 0.57. Figure 3 shows the conversion plot of percentage conversion versus time at constant temperatures of 50,

Curing Kinetics of Terephthalamide with Epoxy Resin in the S1 Sample

Figure 5 shows the DSC thermogram of sample S1, with isothermal curves at three different temperatures, i.e.
125, 140 and 150 °C. The curves show that the rate of reaction increases with temperature. These thermograms were analyzed with the help of the STARe software; the activation energy was found to be quite low, of the order of 2.56 kJ/mol, and the order of reaction was near one, i.e. 0.45. Figure 6 shows the conversion plot of percentage conversion versus time at constant temperatures of 0, 50, 100, 150, 200, 250 and 300 °C. It was found that at 250 °C, 70% conversion takes place in just 9 minutes, while more than 90% conversion can be obtained in 12 minutes at a cure temperature of 300 °C. Figure 7 shows the iso-conversion plot of curing time versus temperature; 90% iso-conversion takes place at 290.4 °C in 10 minutes and at 267.7 °C in 16 minutes.

Curing Kinetics of Terephthalamide with Epoxy Resin in the S2 Sample

Figure 8 shows the DSC thermogram of the sample, which was prepared by mixing 4.5 g terephthalamide, recorded as three isothermal curves at 160, 180 and 200 °C. These thermograms were analyzed with the help of the STARe software; the activation energy was found to be quite low, of the order of -0.29 kJ/mol, and the order of reaction was near one, i.e. 0.51. Figure 9 shows the conversion plot of percentage conversion versus time at constant temperatures of 0, 50, 100, 150, 200, 250 and 300 °C. 80% conversion takes place at 250 °C in 25 minutes, whereas 90% conversion can be obtained in 22.67 minutes at a cure temperature of 300 °C; here the effect of temperature is not significant. Figure 10 shows the iso-conversion plot of curing time versus temperature; the slope lines show 10, 20, 30, 40, 50, 60, 70, 80 and 90% conversion at different temperatures from 0 - 300 °C. In this case, 90% iso-conversion takes place at 0 °C in 19.5 minutes; this figure shows that on increasing the temperature, the curing time also increases.

Curing Kinetics of Terephthalamide with Epoxy Resin in the S3 Sample

Figure 11 shows the DSC thermogram of the sample, which was prepared by mixing 4.5 g terephthalamide, recorded as three isothermal curves at 60, 70 and 80 °C. The curves show that increasing the temperature increases the rate of reaction. These thermograms were analyzed with the help of the STARe software; the activation energy was found to be quite low, of the order of 3.49 kJ/mol at 76 °C, and the order of reaction was near one, i.e. 0.42. Figure 12 shows the conversion plot of percentage conversion versus time at constant temperatures of 0, 50, 100, 150, 200, 250 and 300 °C. 90% conversion occurs at 250 °C in 11 minutes, while more than 90% conversion can be obtained in 10.33 minutes at a curing temperature of 300 °C. Figure 13 shows the iso-conversion plot of curing time versus temperature; the slope lines show 10, 20, 30, 40, 50, 60, 70, 80 and 90% conversion at temperatures from 0 - 300 °C. It is observed that 90% iso-conversion takes place at 0.4 °C and 100 °C in 19 and 13 minutes respectively.
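Each isothermal analysis above yields a rate constant at its cure temperature; repeating the fit at the three temperatures used per sample allows the apparent activation energy to be extracted from an Arrhenius plot of ln k against 1/T. The following is a minimal sketch of that step, using invented rate constants rather than the values reported here, and it is not necessarily the exact routine applied by the STARe software.

import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate constants from three isothermal runs (temperatures in degrees C).
T_C = np.array([290.0, 300.0, 310.0])
k   = np.array([2.1e-3, 3.4e-3, 5.2e-3])   # 1/s, illustrative only

# Arrhenius: ln k = ln A - Ea/(R T), so the slope of ln k vs 1/T equals -Ea/R.
T_K = T_C + 273.15
slope, intercept = np.polyfit(1.0 / T_K, np.log(k), 1)
Ea = -slope * R

print(f"Apparent activation energy: {Ea / 1000:.1f} kJ/mol")
print(f"Pre-exponential factor A:   {np.exp(intercept):.3e} 1/s")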
Curing Kinetics of Terephthalamide with Epoxy Resin in the S4 Sample

Figure 14 again shows that increasing the temperature increases the rate of reaction. These thermograms were analyzed with the help of the STARe software; the activation energy was found to be 5.22 kJ/mol at 60 °C and the order of reaction was near one, i.e. 0.42. Figure 15 shows the conversion plot of percentage conversion versus time at constant temperatures of 50, 100, 150, 200, 250 and 300 °C. At 250 °C the conversion is 80%, whereas more than 90% conversion can be obtained in 12.67 minutes at a cure temperature of 300 °C. Figure 16 shows the iso-conversion plot of curing time versus temperature; the slope lines show 10, 20, 30, 40, 50, 60, 70, 80 and 90% conversion at temperatures between 0 - 300 °C. In this case, 90% iso-conversion was achieved at 290.4 °C in 10 minutes and at 267.7 °C in 20 minutes.

Reaction Mechanism of the Curing Kinetics

The curing of epoxy resin with amines is well established and has been shown by several authors to follow an autocatalytic cure. The synthesized amide has free amine groups at the ends of the molecule, which can react with the epoxy groups of the resin as per Reaction Scheme 1.

CONCLUSIONS

The curing kinetics of the DGEBA-terephthalamide system was studied by isothermal DSC, and the catalytic influence on the reaction rates has been described. An efficient methodology for an aromatic amide hardening system for epoxy resin, which works at ambient temperature and pressure, has been developed. The terephthalamide used in the present study was generated from PET waste, which offers another application for products obtained from PET waste recycling. In the present DGEBA-terephthalamide system, the activation energy was markedly lowered from 50.18 kJ/mol to -0.29 kJ/mol (formulation S2) in the presence of catalysts.

Figure 2: DSC thermogram of the isothermal curves of the S0 formulation at 290, 300 and 310 °C.
Figure 4: Iso-conversion plot of the S0 formulation, curing time versus temperature.
Figure 7: Iso-conversion plot of the S1 formulation, curing time versus temperature.
Figure 10: Iso-conversion plot of the S2 formulation, curing time versus temperature.
Figure 13: Iso-conversion plot of the S3 formulation, curing time versus temperature.
Figure 16: Iso-conversion plot of the S4 formulation, curing time versus temperature.
Geometrical description of non-linear electrostatic oscillations in relativistic thermal plasmas We develop a method for investigating the relationship between the shape of a 1-particle distribution and non-linear electrostatic oscillations in a collisionless plasma, incorporating transverse thermal motion. A general expression is found for the maximum sustainable electric field, and is evaluated for a particular highly anisotropic distribution. Introduction High-power lasers and plasmas may be used to accelerate electrons by electric fields that are orders of magnitude greater than those achievable using conventional methods [1]. An intense laser pulse is used to drive a wave in an underdense plasma and, for sufficiently large fields, non-linearities lead to collapse of the wave structure ("wave-breaking") due to sufficiently large numbers of electrons becoming trapped in the wave. Hydrodynamic investigations of wave-breaking were first undertaken for cold plasmas [2,3] and thermal effects were later included in non-relativistic [4] and relativistic contexts [5][6][7] (see [8] for a discussion of the numerous approaches). However, it is clear that the value of the electric field at which the wave breaks (the electric field's "wave-breaking limit") is highly sensitive to the details of the hydrodynamic model. Plasmas dominated by collisions are described by a pressure tensor that does not deviate far from isotropy, whereas an intense and ultrashort laser pulse propagating through an underdense plasma will drive the plasma anisotropically over typical acceleration timescales. Thus, it is important to accommodate 3-dimensionality and allow for anisotropy when investigating wave-breaking limits. The sensitivity of the wave-breaking limit to the details of the plasma model suggests that it could depend on the anisotropy of the pressure tensor. One method for investigating the wave-breaking limit of a collisionless anisotropic plasma is to employ the warm plasma closure of velocity moments of the 1-particle distribution f satisfying the Vlasov-Maxwell equations [7]. Successive order moments of the Vlasov equation induce an infinite hierarchy of field equations for the velocity moments of f and at each finite order the number of unknowns is greater than the number of field equations. The warm plasma closure scheme sets the number of unknowns equal to the number of field equations by assuming that the terms containing the third order centred moment are negligible relative to those including second, first and zeroth order centred moments. Our aim is to uncover the relationship between wave-breaking and the shape of f . In general, the detailed structure of f cannot be reconstructed from a few low-order moments so we adopt a different approach based on a particular class of piecewise constant 1-particle distributions. Our choice of distribution, although somewhat artificial, reduces the Vlasov equation to that of a boundary in the unit hyperboloid bundle over spacetime. Combining the equation for the boundary with the Maxwell equations yields an integral for the wave-breaking limit in terms of the shape of the boundary. Our approach may be considered as a multi-dimensional generalization of the 1-dimensional relativistic "waterbag" model employed in [5]. Vlasov-Maxwell equations The brief summary of the Vlasov-Maxwell equations given below establishes our conventions. Further discussion of relativistic kinetic theory may be found in, for example, [9,10]. 
We employ the Einstein summation convention throughout and units are used in which the speed of light c = 1 and the permittivity of the vacuum ε 0 = 1. Lowercase Latin indices a, b, c run over 0, 1, 2, 3. Preliminary considerations Let (x a ) be an inertial coordinate system on Minkowski spacetime (M, g) where x 0 is the proper time of observers at fixed Cartesian coordinates (x 1 , x 2 , x 3 ) in the laboratory. The metric tensor g has the form Let (x a ,ẋ b ) be an induced coordinate system on the total space T M of the tangent bundle (T M, Π, M) and in the following, where convenient, we will write x instead of x a andẋ instead ofẋ b . We are interested in the evolution of a thermal plasma over timescales during which the motion of the ions is negligible in comparison with the motion of the electrons. We assume that the ions are at rest and distributed homogeneously in the laboratory frame. Their worldlines are trajectories of the vector field N ion = n ion ∂/∂x 0 on M where n ion is the constant ion number density measured in the laboratory frame. The electrons are described statistically by a 1-particle distribution f (x,ẋ) which induces a number 4-current vector field One may write the Maxwell equations on M as where F ab are the components of the electromagnetic field tensor, F ab = η ac η bd F cd , q is the charge on the electron (q < 0) and (η ab ) is the matrix inverse of (η ab ). The scalar field f satisfies the Vlasov equation, which may be writteṅ Exterior formulation In this section we recast the above using the tools of exterior differential calculus as it affords a succinct and powerful language for subsequent analysis. We make extensive use of Cartan's exterior derivative d, the exterior product ∧ and the Hodge map ⋆ on differential forms (see, for example, [11,12]). The spacetime volume 4-form ⋆1 is and the Maxwell equations (4, 5) can be written where F = 1 2 F ab dx a ∧ dx b is the electromagnetic 2-form, and the 1-forms N , N ion are the metric duals of the vector fields N , N ion respectively. (The metric dual Y of a vector field Y satisfies Y (Z) = g(Y, Z) for all vector fields Z.) Introduce the vector fields L, X, on T M and the 6-form ω, on T M where ι Y is the interior operator on forms with respect to vector Y , the 4-form ⋆1 V is the vertical lift of the spacetime volume 4-form ⋆1 from M to T M and the 4-form #1 The integral (3) can be written where E x = Π −1 (x) is the fibre of (E, Π, M) over x ∈ M, and it can be shown that the Vlasov equation (6) can be written where ≃ denotes equality under restriction to E by pull-back. Thus, it follows where B is a 6-dimensional region in E and using the generalized Stokes theorem on forms (see, for example, [12]) we obtain where ∂B is the boundary of B. Piecewise constant distributions We consider distributions for which f = α is a positive constant inside a 6-dimensional region U ⊂ E and f = 0 outside. In particular, we consider U to be the union over each point x ∈ M of a domain W x whose boundary ∂W x in E is topologically equivalent to the 2-sphere. Such distributions are sometimes called "waterbags" in the literature. Choosing B in (20) to be a small 6-dimensional "pill-box" that intersects ∂W x and taking the appropriate limit as the volume of B tends to zero, we recover a jump condition on f ω that leads to where λ = 0 is the union over x of the boundaries ∂W x . 
If λ = 0 is the image of the embedding map Σ, where ξ = (ξ 1 , ξ 2 ) is a point in S 2 , then it follows from (10,11,12) that (21) is equivalent to Here, V ξ and Ω ξ are families of vector fields and 2-forms on M respectively, defined by where dx a = η ab dx b . Note that since the image of Σ lies in E, it follows that, for each ξ ∈ S 2 , V ξ is timelike, unit normalized and future-directed: We adopt (23) as the equation of motion for ∂W x . It may be shown that a particular class of solutions to (23) satisfies and using (9) we obtain the field equation on M with the condition that d V ξ is independent of ξ. For simplicity, we have neglected the direct contribution of the laser pulse to the total electromagnetic field in (27). Electrostatic oscillations Before analysing (28, 26) further it is useful to briefly discuss their analogue on 2-dimensional spacetime for facilitating comparison with the approach adopted in [5]. Electrostatic oscillations in 1 spatial dimension Although formulated on 4-dimensional spacetime, equations (28, 26) have a similar structure for any number of dimensions. In particular, we now consider 2-dimensional Minkowski spacetime (M, g) where (t, z) 1 is a Cartesian coordinate system in the laboratory inertial frame. An induced coordinate system on T M is (t, z,ṫ,ż) and note that in this sub-section of the article the fibre space of (E, Π, M) is 1-dimensional, whereas in the rest of the article it is 3-dimensional. Furthermore, ξ is now an element of the 0-sphere {+, −} and Ω ξ = 1 is a constant 0-form. Thus, the analogue to (23) is where V ± satisfy the conditions and the only non-trivial Maxwell equation for the 2-form F is where N ion = n ion ∂/∂t is the ion number 2-current and F = Edt ∧ dz where E is the electric field along the z-axis. On E,ṫ = √ 1 +ż 2 and the components of the electron number 2-current N = N t ∂/∂t + N z ∂/∂z corresponding to (17) are where with α a positive constant and {X + , X − } scalar fields over spacetime. The 2-velocity fields {V + , V − } satisfy and it follows Unlike their 4-dimensional analogue, which may include transverse electromagnetic fields, (31) are uniquely 2 solved by and using (33) subject to the condition d V + = d V − . Alternatively, one may follow the approach adopted in [5] employing a warm fluid model: Here, where U is the bulk 2-velocity of the electron fluid, 2 Proper incorporation of transverse fields requires at least 2 spatial dimensions. and, in the electron fluid's rest frame, ρ is the fluid's energy density and p is the fluid's pressure defined as It may be shown Thus, (31, 32) may be replaced by an equivalent field theory expressed in terms of a finite set of moments of f on 2-dimensional spacetime. However, the situation is more complicated for waterbags over 4-dimensional spacetime where the moment hierarchy is not automatically closed. We will now use (39) to obtain a non-linear oscillator describing 1-dimensional electrostatic oscillations. Let all field components with respect to the laboratory frame (dt, dz) be functions of ζ = z − vt only (the "quasi-static assumption"), where 0 < v < 1, and let (e 1 , e 2 ) be the basis The coframe (γe 1 , γe 2 ) is an orthonormal basis adapted to observers moving at velocity v along z (i.e observers in the "wave frame") where γ = (1 − v 2 ) −1/2 is the Lorentz factor of such observers relative to the laboratory. For example, γe 2 (N ion ) = −γn ion v is the ion 1-current in the wave frame. 
In the basis (e 1 , e 2 ), V ± can be decomposed as Note that this is the most general decomposition compatible with equation (38) and the quasi-static assumption. Solving (32) for ψ 2 ± gives and additional physical information is needed to fix the sign of ψ ± . Here, we demand that all electrons described by the waterbag are travelling slower than the wave so ψ ± = − (µ + A ± ) 2 − γ 2 and (50) is Substituting (50) into equation (38) yields and equation (39) yields the nonlinear oscillator equation with the algebraic constraint Longitudinal electrostatic oscillations in 3 spatial dimensions We now consider electrostatic waves in 3 spatial dimensions by closely following the above description of 1 dimensional electric waves. To proceed further we seek a form for W x axisymmetric aboutẋ 3 whose pointwise dependence in M is on the wave's phase ζ = x 3 − vx 0 only, where 0 < v < 1. As before, the following results are applicable only if the longitudinal component of V ξ in the wave frame is negative (no electron described by W x is moving faster along x 3 than the wave). Decompose V ξ in the wave frame as for 0 < ξ 1 < π, 0 ≤ ξ 2 < 2π where R > 0 is constant and Here, (γe 1 , γe 2 , dx 1 , dx 2 ) is an orthonormal basis (the wave frame) with γ = 1/ √ 1 − v 2 . In the wave frame the relativistic energy of P ξ = mV ξ is m(µ + A)/γ and it follows that µ + A > 0. The component ψ is determined using (26), where the negative square root is chosen because no electron is moving faster along x 3 than the wave. Substituting (56) into equation (27) leads to and (28, 17, 56, 58) yield (c.f. equation (54)) and (c.f. equation (55)) where α is the value of f inside W x . The form of the 2nd order autonomous non-linear ordinary differential equation (60) for µ is fixed by specifying the generator A(ξ 1 ) of ∂W x subject to the normalization condition (61). Electrostatic wave-breaking The form of the integrand in (60) ensures that the magnitude of oscillatory solutions to (60) cannot be arbitrarily large. For our model, the wave-breaking value µ wb is the largest µ for which the argument of the square root in (60) vanishes, because µ < µ wb yields an imaginary integrand in (60) for some ξ 1 . The positive square root in (62) is chosen because, as discussed above, µ + A(ξ 1 ) > 0 and in particular µ wb + A(ξ 1 ) > 0. The electric field has only one non-zero component E (in the x 3 direction). Using F = E dx 0 ∧ dx 3 and (56, 57, 59) it follows and the wave-breaking limit E max is obtained by evaluating the first integral of (60) between µ wb where E vanishes and the equilibrium 3 value µ eq of µ where E is at a maximum. Using (61) to eliminate α it follows that µ eq satisfies since α, v > 0. Equation (60) yields the maximum value E max of E, (71) 3 Note that the equilibrium of µ need not coincide with the plasma's thermodynamic equilibrium. For a ≪ R ≪ 1 equations (67, 68, 72) yield where mcω p 2(γ − 1)/|q| is the usual relativistic cold plasma wave-breaking limit of E (see, for example, [13]) and ω p = n ion q 2 /(mε 0 ) is the plasma angular frequency. Note that the speed of light c and the permittivity ε 0 of the vacuum have been restored. The parameter R may be eliminated in favour of an effective transverse "temperature" T ⊥eq defined as T ⊥eq = 1 2k B n ion (P 11 eq + P 22 eq ), where W eq is the support of the distribution with µ = µ eq (see footnote 3) and k B is Boltzmann's constant. It follows where the speed of light c has been restored. 
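For orientation, the cold-plasma reference value quoted above can be evaluated numerically. The sketch below computes the plasma angular frequency omega_p = sqrt(n_ion q^2 / (m eps0)) and the standard relativistic cold wave-breaking field E_wb = m c omega_p sqrt(2(gamma - 1)) / |q|; the chosen density and wave Lorentz factor are illustrative assumptions, not values taken from this analysis, and the thermal correction discussed above is not included.

import numpy as np

# Physical constants (SI)
m    = 9.109e-31      # electron mass, kg
q    = 1.602e-19      # elementary charge, C
c    = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12      # vacuum permittivity, F/m

def plasma_frequency(n_ion):
    """Plasma angular frequency omega_p = sqrt(n q^2 / (m eps0))."""
    return np.sqrt(n_ion * q**2 / (m * eps0))

def cold_wave_breaking_field(n_ion, gamma):
    """Relativistic cold-plasma limit E_wb = m c omega_p sqrt(2(gamma-1)) / |q|."""
    return m * c * plasma_frequency(n_ion) * np.sqrt(2.0 * (gamma - 1.0)) / q

# Illustrative underdense-plasma parameters (assumptions for the example)
n_ion = 1e24          # ions per m^3 (1e18 cm^-3)
gamma = 10.0          # Lorentz factor of the wave frame

print(f"omega_p ~ {plasma_frequency(n_ion):.3e} rad/s")
print(f"cold wave-breaking field ~ {cold_wave_breaking_field(n_ion, gamma):.3e} V/m")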
Conclusion We have developed a method for investigating the relationship between the shape of a 1-particle distribution and electrostatic non-linear thermal plasma waves near breaking. An approximation to the wave-breaking limit of the electric field was obtained for a particular axisymmetric distribution. Further analysis of (66, 64, 62) will be presented elsewhere.
A Short Virtual Reality Mindfulness Meditation Training For Regaining Sustained Attention

The ability to focus one's attention underlies success in many everyday tasks, but voluntary attention cannot be sustained for long periods of time. Several studies indicate that attention training using computer-based exercises can lead to improved attention in children and adults. A major goal of recent research is to create a short (10-minute) and effective VR mindfulness meditation designed specifically for regaining or improving sustained attention. In this study, we created a custom, relaxing virtual environment including an archery game with multiple targets. In the experiment, the attention span of 12 adults was tested before and after the virtual reality session using a non-action video game score ([19]) and Muse headband EEG signals. After the 10-minute virtual reality session, participants' game scores increased (according to gaming experience): for beginners by 275%, for intermediates by 107%, and for experts by 17%. For the Muse headband data, calm points increased by 250% irrespective of the participants' gaming experience. After the experiment, all participants reported feeling recharged to continue their daily activities.

INTRODUCTION

Sustained attention is "the ability to direct and focus cognitive activity on specific stimuli over a long period of time." In order to complete any cognitively planned activity, any sequenced action, or any thought, one must use sustained attention. It is what makes it possible to concentrate on an activity for as long as it takes to finish, even if other distracting stimuli are present. Examples of sustained attention include listening to a lecture, reading a book, playing a video game, or fixing a car. Problems occur when a distraction arises: a distraction can interrupt, and consequently interfere with, sustained attention. Cohen R.A. (2011) [1] describes sustained attention as one of the primary elements or component processes of attention: it enables the maintenance of vigilance, selective and focused attention, response persistence, and continuous effort despite changing conditions. DeGangi and Porges (1990) [2] indicate there are three stages to sustained attention: attention getting, attention holding, and attention releasing. The level of sustained attention varies from person to person; however, a key aspect of sustained attention is the ability to refocus on the task after a distraction arises. Mindfulness meditation training has been linked to a broad range of cognitive, affective, and health outcomes. Some of the most robust findings in the cognitive domain concern how mindfulness meditation training can foster on-task, sustained attention and reduce mind-wandering (see e.g. [3] [4] [5] [6] [7] [8]). Hayley A. Rahl (2016) [9] tested two competing accounts of how mindfulness training reduces mind-wandering. Virtual reality technologies have been used successfully in many therapies, especially those that rely on mental imagery: to elicit and modulate psychophysiological symptoms of anxiety and fear reactions, in both patients with anxiety disorders and healthy individuals [10]. Patients with phobias who traditionally have attempted to desensitize themselves within their imagination can face their fears in a controllable virtual environment [11]. Similarly, patients with eating disorders who suffer from a distorted body image can change their self-image through a virtual body [12].
We propose a "10-minute Virtual Reality Mindfulness Meditation Training" aimed specifically at regaining sustained attention, which anyone can take even in the middle of a busy day to recharge and regain sustained attention at their best.

Most effective existing methodologies/systems of Sustained Attention Training

Traditionally, meditation has been considered something that needs to be exercised almost daily and for long periods. Recently, however, there has been growing interest in short-duration meditation or mindfulness programs, which could provide results quickly. There are now several studies showing that brief mindfulness meditation training reduces mind-wandering and improves sustained attention (see e.g. [4] [5]). Neuroadaptive virtual reality meditation systems (which combine virtual reality with neurofeedback) provide very effective meditation and mindfulness training. Shaw et al. [13] introduced the Meditation Chamber, an immersive virtual environment for meditation training; the system used skin conductance as the biofeedback mechanism in three guided meditation and relaxation exercises. RelaWorld [14] measures participants' brain activity in real time via EEG and calculates estimates of the level of concentration and relaxation, which are then mapped into the virtual reality. Similarly, PsychicVR [15] non-invasively monitors and records the electrical activity of the brain and incorporates this data into the VR experience using an Oculus Rift and the MUSE headband: when the participant is focused, they are able to make changes in the 3D environment and control their powers (focusing ability). A study by Zeidan et al. [16] hints that a brief mindfulness meditation intervention of only three sessions leads to reduced heart rate and increased heart rate variability (which is related to well-being and positive affect [17]) immediately after or during meditation tasks, whereas Steffen and Larson [18] ... Existing brief trainings ([8]) are effective but not attractive enough to motivate the participant to take a session in a busy daily schedule. Neuroadaptive virtual reality meditation training is enjoyable and effective but time-consuming (EEG or skin conductance sensors require at least 10 minutes to record good-quality signals before the training can start). These systems also involve complex procedures, such as attaching EEG sensors or skin conductance electrodes to the body, which are usually not comfortable during meditation. To minimize the downsides of these systems, in this research we contribute a short, simple but effective VR mindfulness meditation training aimed specifically at improving or regaining sustained attention.

P1: Hand-eye coordination training is not available

Hand-eye coordination is a complex cognitive ability, as it calls for us to unite our visual and motor skills, allowing the hand to be guided by the visual stimulation our eyes receive. Most activities we do in day-to-day life use some degree of eye-hand coordination, and it is a crucial aspect of sustained attention training.

P2: Reaction time training is not available

Reaction time refers to the amount of time between when we perceive something and when we respond to it. It is the ability to detect, process, and respond to a stimulus. Reaction time necessarily includes a motor component, unlike processing speed; therefore, good reaction time is associated with good reflexes. This means distractions can be avoided without dividing attention.
P3: Traditional video games are not enjoyable enough

Some of this research is effective, but it is based on traditional computer/mobile video game (action or non-action) training. These video games are not interesting enough in the modern VR world to motivate participants to play them.

Methodology for solving the problems

We described the three problems (P1-P3) above; our solutions are as follows.

Solution 1 for P1: We introduced hand-eye coordination training in the VR archery game by placing five targets in a systematic arrangement (four on the corners and one in the center of a virtual cube). Additionally, the archer must shoot in a way similar to real-life archery; otherwise the shot will not be completed. In these situations, hand-eye coordination is very important.

Solution 2 for P2: We also included reaction time training in the VR archery game. If the arrow hits a target, the target flashes out (disappears) and reappears after five seconds. There are five targets in total, so the participant needs five seconds to finish all of them. If the participant takes one second per target, then after 5 seconds only one target will be visible, and this continues as long as the participant maintains the cycle (one target per second). Participants are advised to finish the targets so that only one target is visible at a time; to achieve this, the archer must shoot each target in one second or less. In these situations, reaction time plays an important role.

Solution 3 for P3: An immersive and calm virtual environment was created where the participant can walk around and feel refreshed by observing the mountains, trees, fireplace, paintings, and aquarium.

Strong points of the 10-minute Virtual Reality Mindfulness Meditation Training

1. Hand-eye coordination training with an archery game in a very calm and enjoyable virtual environment.
2. Reaction time training by creating a competitive situation in the archery game.
3. Peaceful music, beautiful scenery, and the realistic feeling of air flowing through the mountains and of jellyfish moving in an aquarium, giving the participant an immersive and refreshing virtual experience.

These three points together make an effective VR sustained attention training through a mindfulness meditation session.

Our Method vs Other Archery Games

Ordinary archery games are designed just for fun or general brain training. Our archery game is designed specifically for sustained attention training, and VR immersion provides an enjoyable and calm environment. The main advantages are as follows (a small sketch of the target-cycle logic follows this list):

(1) Easy shooting: the participant does not have to make any extra effort to align the arrow with the bow, because the arrow appears and is aligned automatically when the archer pulls the string.

(2) Easy adjustment with short-distance motion to the next target: the archery game has six degrees of freedom, so the participant can easily change position and orientation over short distances to shoot the next target.

(3) Rhythmical shooting motivation: there are five targets in total. If the arrow hits a target, it flashes out (disappears) and reappears after five seconds. This creates a cycle and motivates the archer to keep focused and continue the cycle (one target per second or less), because if a single target is missed, the cycle is broken.
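To make the "one target per second" cycle concrete, the timing model can be summarized as follows: each hit target reappears five seconds later, so the cycle is unbroken only while the archer keeps hitting one of the five targets at least once per second. The sketch below is an illustrative, non-Unity simulation of that logic; the function names, respawn delay handling and the simulated archer are our own simplifications, not the actual game code.

import itertools

RESPAWN_DELAY = 5.0   # seconds until a hit target reappears
NUM_TARGETS = 5

def visible_targets(last_hit, now):
    # A target stays hidden for RESPAWN_DELAY seconds after its last hit.
    return [i for i, t in enumerate(last_hit)
            if t is None or now - t >= RESPAWN_DELAY]

def run_session(hit_interval, duration=30.0):
    # Simulate an archer who shoots the next visible target every hit_interval seconds.
    last_hit = [None] * NUM_TARGETS
    shoot_order = itertools.cycle(range(NUM_TARGETS))
    breaks, now = 0, 0.0
    while now < duration:
        vis = visible_targets(last_hit, now)
        if not vis:                          # nothing on screen; wait for a respawn
            now += 0.1
            continue
        if now >= RESPAWN_DELAY and len(vis) > 1:
            breaks += 1                      # more than one target visible: cycle broken
        target = next(i for i in shoot_order if i in vis)
        last_hit[target] = now
        now += hit_interval
    return breaks

print("cycle breaks at 1.0 s per hit:", run_session(1.0))   # cycle maintained
print("cycle breaks at 1.5 s per hit:", run_session(1.5))   # too slow: cycle breaks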
System Feature and Structure

The underlying motivation and design principle of this system was to create a short training that combines the proven methods of mindfulness meditation and virtual reality into one package, allowing novice or even experienced meditators to regain sustained attention even in the middle of a busy day.

Virtual Reality Environment: A virtual reality environment was created consisting of a room with three walls. In place of the fourth wall there is a balcony, from which the participant has a wonderful view of mountains and trees. The room is decorated with a fireplace, some paintings, and chandeliers. We created a large aquarium on the balcony; standing beside it, the participant can shoot light bubbles (the archery targets) with a bow. In the aquarium there are jellyfish, which make a realistic sound while moving, providing a calm environment. The participant can also hear the sound of the wind and tree leaves while standing or playing the archery game on the balcony. This setup was created to provide a very calm environment and a feeling of freshness for the participant.

Hand-eye Coordination Training: In the archery game, the participant stands beside the aquarium, oriented towards the mountain view. There are five light bubbles (archery targets): four are placed on the corners and one in the center of a cube. The archer does not have to make any extra effort to align the arrow with the bow, because the arrow appears and is aligned automatically when the archer pulls the string. However, both of the archer's hands must be aligned in the right direction (as in real archery); otherwise the shot will not be completed. This type of situation was created for hand-eye coordination training.

Reaction Time Training: In the archery game, if the arrow hits a target, the target flashes out (disappears) and reappears after five seconds. Participants must finish the targets in such a way that only one target is visible at a time; to achieve this, the archer must shoot each target in one second or less. This creates a cycle and motivates the archer to keep focused and continue the cycle, because if a single target is missed, the cycle is broken and more than one target appears. These situations were created for reaction time training. This training helps the participant increase their attention span, and the virtual reality environment provides an enjoyable and calm setting. All three parts together make an effective sustained attention training.

Evaluation

To evaluate the system, we were looking for a game that could check response time and sustained attention while providing a very good participant experience. We finally selected a non-action video game [19] after playing more than 30 games of a similar type. All you must do in this game is guess the right color and tap on the right answer out of three given options. The questions are words spelling a color, although the color in which each word is written is different, and you must select the color filling that word. What makes this game interesting but confusing at the same time is the strict time limit for each answer; it therefore becomes very difficult, and any break in attention forces you to start over. Your score increases with every right answer, and you start over every time you answer incorrectly.
Figure 3: Home and play screens of the game [19].

We let participants practice the video game [19] before the evaluation to become familiar with it. In the evaluation, participants were instructed to play this game before (three times) and after (three times) the VR training, and the game scores were recorded. We had them play the game three times before and after the training to obtain a precise (average) score and to reduce accidental errors. Participants also wore a Muse headband (5 EEG sensors) while playing the game, and we captured their mental state by recording eye blinking and EEG signals. Data were recorded at intervals of 100 ms (10 samples per second).

Participants

Subjects: 12 participants (eight male and four female) aged 20 to 24 years were selected at random. During selection, we also considered the gaming experience of the participants (four beginner, four intermediate, and four expert gamers). Two participants wore glasses (-1 < eyesight < +1.25) on a daily basis. Participants were invited to our lab in the middle of the day, when their morning freshness had faded and their sustained attention was not at its best. Dominant arm: two participants were left-handed; the remaining 10 were right-handed.

Game name: VR Archery for sustained attention.

Procedure: On arriving at the lab, the participant sat in a comfortable chair. We checked the heart rate with a wristband (Fitbit Charge 3) and waited until the heart rate was steady, in order to get a true picture of heart rate variation during the experiment (training and evaluation). (1) Participants experienced a VR demo (Oculus First Contact) before the VR training. (2) Before and after the VR training, the participant was instructed to play a computer-based non-action video game [19] to evaluate attention span. (3) We let the participant practice the video game [19] before the evaluation to become familiar with it. To evaluate the calmness of the participant during the evaluation game, a Muse headband was used to record brain signals and eye-blink data.

Hardware

This mindfulness meditation session is a 3D virtual reality environment designed for sustained attention training. It uses the Oculus Rift DK2 head-mounted display and its Touch controllers. A Fitbit Charge 3 wristband was used to check the heart rate variability of the participant during VR training and evaluation, and a Muse headband was used to capture eye-blink data and EEG signals during evaluation.

Results: Every participant played the game [19] for evaluation before (3 times) and after (3 times) the training, and the scores were recorded. Using the Muse headband, three types of data were recorded: a) calm points, b) recoveries, and c) bird counts. Using the wristband (ECG sensors), heart rate variability (HRV) was recorded during the entire experiment (training as well as evaluation). In this section we define the terms in which the recorded Muse data are presented. Participants' state of mind can be divided into three states (neutral, calm, and active). Calm points are calculated with the formula [(neutral (seconds) * 1 + calm (seconds) * 3) / 8]. Recoveries are the total count of returns from the active state to the neutral state, and bird counts stand for remaining in a calm state for a long time. A minimal sketch of this scoring is given below.
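The following sketch illustrates the calm-point and recovery bookkeeping defined above. The per-second state labels are invented for the example (the actual headband samples every 100 ms), and the helper names are ours, not part of the Muse software.

def calm_points(neutral_seconds, calm_seconds):
    # Calm points as defined above: (neutral*1 + calm*3) / 8
    return (neutral_seconds * 1 + calm_seconds * 3) / 8

def count_recoveries(states):
    # Recoveries: transitions from the 'active' state back to 'neutral'
    return sum(1 for prev, cur in zip(states, states[1:])
               if prev == "active" and cur == "neutral")

# Hypothetical per-second Muse state labels for a short evaluation run
states = ["neutral"] * 20 + ["calm"] * 30 + ["active"] * 5 + ["neutral"] * 5

print("calm points:", calm_points(states.count("neutral"), states.count("calm")))
print("recoveries :", count_recoveries(states))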
The evaluation game [19] is endless, so to obtain the highest possible score we let 3 participants (1 from each category) play the game until they were satisfied with their score and picked the highest of all their game scores; they played the game around 45-50 times to achieve these scores. After the training, the game [19] score increased on average across the 12 participants: for beginners from 32 to 120, for intermediates from 98 to 203, and for experts from 278 to 323. For the three types of data recorded by the Muse headband, calm points increased from 6 to 21, bird counts increased from 3 to 10, and recoveries decreased from 2 to 0, indicating that after the training participants always remained in either the neutral or the calm state (they did not enter the active state). The headband data were irrespective of the participants' gaming experience.

Figure 5: Average data of all participants.

We found that participants' game scores increased by 275% for beginners, 107% for intermediates, and 17% for experts. For the Muse headband data, calm points increased by 250%, bird counts increased by 233%, and recoveries decreased from two to zero. Participants were able to regain sustained attention at their best and recharge for the rest of the day. Heart rate variability (HRV) also increased slightly after the experiments.

Discussion

The evaluation game [19] requires sustained attention, and even a second of mind-wandering can force the participant to restart the game. After the training, participants were recharged and their sustained attention was at its best, which is why they were able to perform better. The Muse headband data also show that they were fully calm and their mind always remained in a calm or neutral state during the evaluation game. Regarding heart rate changes during the VR training: in the beginning, when participants were observing the immersive virtual environment, heart rate increased but slowly returned to normal as the participant became familiar with the environment. During the VR archery game, heart rate kept increasing as the participant came close to achieving the game's objectives, and these changes persisted throughout the game. We relate these changes to the competitive element of the game (the reaction time training), and they will be present whenever a participant plays the game; because of this, we consider these changes an improvement in HRV (heart rate variability). Furthermore, during the evaluation after the training, heart rate was near the participant's resting heart rate, indicating that participants were relaxed and focused after the VR training.

Poor vision can affect the VR experience. In our experiments, three participants use glasses on a daily basis: the first had eyesight of +1.25 in the right eye and +1.00 in the left, the second +0.75 in the right eye and +1.00 in the left, and the third -6.00 in the left eye and -5.00 in the right. We asked the participants to close their eyes one at a time and found that eyesight between ±1 does not affect the VR experience; these participants reported the same experience as participants with normal vision. However, eyesight above +1 or below -1 affected the experience at a significant level, which is why we did not include the third participant's evaluation scores in our average. In general, high scorers cannot improve their scores after any type of short training.
Also, in our experiments the expert participants' game scores increased by only 17%, even though they felt refreshed and their biological data improved (improved sustained attention and reduced mind-wandering) compared with a typical participant. This is where we achieved our objective of improving sustained attention and recharging the participant for the rest of the day's activities after only a short 10-minute training. In general, left-handed people do better in games; in our experiments the two left-handed participants became familiar with the Oculus Touch easily, and their target hit rate in the archery game was also slightly better than that of the right-handed participants. When we asked participants about the difference between this VR archery game and other similar games (VR or non-VR), most replied that the game was enjoyable and that they could even play it daily, because of the combination of competitiveness, the immersive VR environment, hand-eye coordination training, and ease of use.

CONCLUSION

In this research we contribute a short (10-minute) and effective VR mindfulness meditation designed specifically for regaining or improving sustained attention. Anyone can use it in the middle of a busy day to recharge and regain sustained attention at their best. We evaluated our training using a game [19] score, Muse headband data, and ECG sensor data. Participants' mind-wandering was significantly reduced (no one entered the active state), and the evaluation game [19] score increased after the training.
Predicting Factors of Depression and Anxiety in Mental Health Nurses: A Quantitative Cross-Sectional Study

Introduction: The nursing profession is characterized as one of the most stressful and emotionally demanding professions, and it is widely agreed that many nurses experience anxiety and depression as a result of their work. Purpose: The purpose of this study was to assess the prevalence and associated factors of depression and anxiety among mental health nurses working in public psychiatric hospitals, in order to identify independent predictors of mental health disorder risk. Material and Methods: A descriptive, cross-sectional study was conducted in which 110 mental health nurses working in public psychiatric hospitals in Greece participated. The Patient Health Questionnaire-2 (PHQ-2) and the Generalized Anxiety Disorder-2 (GAD-2) questionnaire, along with a sheet covering basic demographic, social and work characteristics (gender, age, marital status, educational level, working experience in nursing, working position and shift), were used as instruments for data collection. Results: The mean age of the nurses was 42.64 years (SD = 5.87 years) and the mean working experience in nursing was 15.73 years (SD = 5.64 years). Most participants were women (64.5%), married (59.1%) and nursing assistants (53.6%), while 48.2% held a higher education degree. A very large percentage were classified as depressed (52.7%) and anxious (48.2%), and the associated factors were age, marital status and educational level (for depression and anxiety) and working experience (for depression only). Conclusions: Mental health nurses are at high risk of developing psychiatric disorders such as depression and anxiety. Being single, divorced or widowed, older, having many years of experience and holding a higher education degree can be predicting factors associated with depression and anxiety in mental health nurses.

INTRODUCTION

Depression and anxiety are considered the most frequent mental disorders in human life (1,2). According to the WHO, depression and anxiety disorders show an increasing tendency in different countries of the world. In today's conditions of human development, they are associated with severe consequences and therefore affect not only the quality of life and social functioning of individuals but also contribute to the growth of social problems and economic losses (3,4). Among the wide range of issues around this problem, the question of determining predicting factors requires particular research (5). The multidimensional nature of the manifestations of depressive and anxiety disorders in modern conditions of societal development means that there is still no consensus regarding the definition of predicting factors. Most scholars associate the personal factors of mental health with harmony, mental balance, creativeness, personality integration and the semantic regulation of life, spirituality, moral and social principles, and the ability to self-actualize oriented towards the meaning of life (6-12). Psycho-educational cognitive behavior therapy, internet-based preventive interventions and stepped-care interventions are the most used preventive interventions for depression and have been implemented in many countries (13,14).
At the same time, our analysis of the scientific literature has revealed that the system for developing, preserving and maintaining the mental health of staff of psychiatric institutions during their professional activity is not sufficiently developed (15,16). The relationship between the specifics of the profession and socio-axio-genesis, as a condition for the full development of the nurse's personality, requires particular attention. The notion of "mental health" is defined as an integrative characteristic of the social, psychophysiological, personal and spiritual levels of human development that opens up a way to fully realize one's internal and social potential in the direction of social and moral choice (16). Studying the peculiarities of the development of mental health of nurses in mental health facilities also requires studying their age, gender, socio-psychological and individual personality characteristics under the conditions of a given meso- and microsocial environment. Detecting associated factors that could predict depression and anxiety in nurses in mental health facilities will contribute to a better understanding of the mechanisms of their mental health and, at the same time, will be useful for defining an individual approach to assessing, analyzing, forecasting and creating favorable mental health conditions, not only for the nurses but also for their patients (17,18). Therefore, we want to stress the topicality of the outlined problem in today's conditions. The purpose of this study was to assess the prevalence and associated factors of depression and anxiety among mental health nurses working in public psychiatric hospitals, in order to identify independent predictors of mental health disorder risk.

STUDY DESIGN AND SAMPLE

This study utilized a descriptive, cross-sectional research design. The source population was all mental health nurses working in public psychiatric hospitals of Greece. The sample consisted of 110 mental health nurses who were randomly selected from two public psychiatric hospitals in Athens, the capital of Greece. A stratified random sampling procedure per hospital was used in recruiting the sample, which corresponds to approximately 10% of the source population of nurses. The response rate was 73.3% (110 of 150 questionnaires). Nurses were chosen using the following criteria: (1) being a nurse or nurse assistant with direct patient contact, (2) having at least 1 year of working experience in nursing, (3) having adequate knowledge of the Greek language and a satisfactory level of communication, and (4) consenting to participate in the study. Data collection took place from 1 April to 31 May 2017. Local ethics committees approved the study protocol.

ASSESSMENT INSTRUMENTS

The Patient Health Questionnaire-2 (PHQ-2) and the Generalized Anxiety Disorder-2 (GAD-2) questionnaire were developed as ultra-brief screening instruments for depression and anxiety, suitable for use in epidemiological studies. The PHQ-2 includes two questions and was found to have good sensitivity and specificity for detecting depressive disorders (19). Likewise, the GAD-2 questionnaire, which comprises two questions, appears to have acceptable accuracy for detecting generalized anxiety, panic, social anxiety and post-traumatic stress disorder (20).
In both questionnaires, each question requires respondents to rate on a four-point scale ranging from «0 = not at all» to «3 = nearly every day». PHQ-2 and GAD-2 total scores are calculated by adding the scores of the two questions, resulting in a range from 0 to 6 for each questionnaire, with higher scores indicating greater mental health disorder severity. According to receiver-operating characteristic curve analysis, the optimal cutpoint is ≥ 3 on both the PHQ-2 and GAD-2 scales (11,21). In primary and secondary care settings, these ultra-brief tools can be used as an initial screening method. Basic demographic, social, and work characteristics, including gender, age, marital status, educational level, working experience in nursing, working position and shift, were collected. STATISTICAL ANALYSES Means and standard deviations for continuous data and frequencies and percentages for categorical data are presented to describe the nurses' characteristics (independent variables). The occurrences of depression and anxiety were used as the outcomes (dependent variables) of the correlations under investigation. Odds ratios with 95% confidence intervals were used as measures of association. Associations between potential prognostic determinants and outcomes were examined using univariate logistic regression analysis. Predictors univariately associated with the outcome (p-value < 0.10) were included in a multivariate logistic regression model. The fit of the multivariate model was assessed by the Hosmer-Lemeshow goodness-of-fit test. All reported p-values were two-tailed, and a p-value under 0.05 was considered statistically significant. Statistics of the research's empirical data were processed with IBM SPSS for Windows (version 21.0, SPSS Inc., Chicago, IL, USA). SAMPLE CHARACTERISTICS The mean age of the nurses was 42.64 years (SD = 5.87 years) and the mean working experience in nursing was 15.73 years (SD = 5.64 years). Most participants were women (64.5%), married (59.1%) and nursing assistants (53.6%), while 48.2% of them held a higher education degree. Table 1 shows the demographic, social, and work data of the respondents. DEPRESSION AND ANXIETY IN THE MENTAL HEALTH NURSES Total scores of the study scales are provided in Table 2. The mean total scores of the PHQ-2 and GAD-2 were 2.57 (SD = 1.82) and 2.66 (SD = 1.90), respectively. According to the cut-off points for depression (PHQ-2 score ≥ 3) and anxiety (GAD-2 score ≥ 3), a high percentage of the mental health nurses were at risk for a psychiatric disorder. The screening method showed a prevalence of 52.7% (58/110) for depression and 48.2% (53/110) for anxiety. FACTORS ASSOCIATED WITH DEPRESSION Univariate analyses showed that the factors associated with elevated depression symptoms in mental health nurses were age, marital status, educational level, working experience in nursing and working position. The crude odds ratios are presented in Table 3. A stepwise logistic regression (backward method based on maximum likelihood) was conducted to predict the possibility of depression using the significant factors from the univariate analyses. After two steps, the final model included the following four significant predictors of depression risk: age, marital status, working experience in nursing and working position. The adjusted odds ratios are presented in Table 3. In particular, the risk of developing depression increased by 13% for each additional year of age and by 16% for each additional year of working experience in nursing.
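For illustration only, the PHQ-2/GAD-2 scoring and the ≥ 3 cut-point described in the Assessment Instruments section can be expressed as a short Python sketch; the item values and column names below are invented and are not taken from the study's data set.

import pandas as pd

# Hypothetical item-level responses (0 = not at all ... 3 = nearly every day)
df = pd.DataFrame({
    "phq_item1": [0, 2, 3], "phq_item2": [1, 2, 3],
    "gad_item1": [0, 1, 3], "gad_item2": [0, 2, 2],
})

# Totals range from 0 to 6; the optimal cut-point is >= 3 on both scales
df["PHQ2_total"] = df["phq_item1"] + df["phq_item2"]
df["GAD2_total"] = df["gad_item1"] + df["gad_item2"]
df["depressed"] = df["PHQ2_total"] >= 3
df["anxious"] = df["GAD2_total"] >= 3

print(df[["PHQ2_total", "depressed", "GAD2_total", "anxious"]])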
Also, single and divorced/widowed mental health nurses were 10.30 times and 10.21 times, respectively, more likely to be at risk of a depressive disorder compared with married nurses. In addition, nurses were 2.93 times more likely than nursing assistants to be at risk for depression, while no significant difference was found with heads of departments. The multivariable model as a whole explained 46.5% of the variance in depression risk and correctly classified 78.2% of cases. According to the Hosmer-Lemeshow test, the data fit the model well (p=0.829). FACTORS ASSOCIATED WITH ANXIETY Simple and multiple logistic regression analyses were performed to explore the relationship between anxiety and the mental health nurses' characteristics. Table 4 presents both the crude and the adjusted odds ratios. Results of the univariate analysis showed that age, marital status, educational level, working experience in nursing and working position were significantly correlated with the risk of anxiety. After three steps in the multivariate analysis (stepwise with backward method), age, marital status and educational level, but not working experience in nursing and working position, emerged as significant predictors of elevated anxiety symptoms. Age was a positive predictor of anxiety disorder. Specifically, the risk of developing anxiety increased by 11% for each additional year of age. Likewise, single mental health nurses were 4.63 times more likely to be at risk of an anxiety disorder compared with married nurses, while no significant difference was found with divorced or widowed nurses. In addition, an interesting finding is that tertiary education nurses and nurses with a postgraduate degree were more likely (3.44 times and 4.24 times, respectively) to have elevated anxiety symptoms compared with secondary education nurses. This model explains 26.9% of the variation in anxiety risk and correctly classifies 70.0% of the study cases. Finally, the fit of the multivariate model was good (p=0.854). DISCUSSION The purpose of this study was to investigate the prevalence of depression and anxiety among mental health nurses and the importance of associated factors, such as age, educational status and working experience, for the development of such disorders. Overall, a very large percentage was classified as depressed (52.7%) and anxious (48.2%), and the factors found to be associated were age, marital status and educational level (for both depression and anxiety) and working experience (for depression only). The health care sector is characterized as one of the most stressful and emotionally demanding fields of work (22), especially for nurses, who are constantly exposed to various stressful situations such as pain, death, grief and conflicts. That can lead to the experience of anxiety, negative emotions and depressive symptoms (23,24). In addition, the rates and prevalence of anxiety and depression in Greece have been increasing over the years, especially after 2009, when the country entered the financial crisis. Moreover, the budget cuts and the lack of nursing staff that came as a consequence of this crisis may have increased stressors in the healthcare setting. That can partially explain the high rates of depression and anxiety among our participants (25,26).
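The adjusted odds ratios reported in the Results follow, in principle, from exponentiating logistic-regression coefficients (an OR of 1.13 per year of age corresponds to 13% higher odds for each additional year). A hedged sketch with simulated data, not the study's data set, and with statsmodels chosen purely for illustration:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data only: a binary depression outcome with age and experience predictors
rng = np.random.default_rng(0)
n = 110
X = pd.DataFrame({"age": rng.normal(42.6, 5.9, n),
                  "experience": rng.normal(15.7, 5.6, n)})
y = (rng.random(n) < 0.5).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)        # e.g. OR = 1.13 -> +13% odds per unit increase
ci = np.exp(model.conf_int())             # 95% confidence intervals on the OR scale
print(pd.DataFrame({"OR": odds_ratios, "CI_low": ci[0], "CI_high": ci[1]}))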
Psychiatric wards are considered very stressful departments and the working conditions can be very demanding, leading the nursing personnel who work there to experience high levels of work stress, depressive and anxiety symptoms, as well as high levels of burnout (17,(27)(28)(29). Other studies have found that the extended stress in such departments can often lead nurses to suicide attempts (30). According to our results, almost 50% of the participants are experiencing anxiety and depressive symptoms. A previous study found a strong relation between depression and anxiety, and nurses who reported elevated levels of anxiety also reported depression, a fact that supports the coexistence of these two disorders (31). In a recent study conducted in Taiwan, in which 156 psychiatric nurses participated, the depression level was considered moderate for the total sample and 15.6% of the psychiatric nurses had a distinctly depressed mood (32). Although 15.6% is lower than our finding, it is still notable. Similar findings were reported in Greece in a study conducted by Papathanasiou (24), in which 240 health care employees participated, the majority of them nurses (n=183). More specifically, the percentages considered to be depressed and anxious were 14.20% and 17.60%, respectively (24). The variation in the classification of depression levels can be explained by the different psychometric instruments used for the assessment of depression. Previous studies in nursing personnel indicate that nurses exhibit higher rates of anxiety, somatization and social dysfunction, among other psychiatric problems, and that anxiety and depression are the most common among them (33). The high prevalence of depression and anxiety reinforces findings from previous studies conducted in different departments and different cultures, revealing the universality of the stressors, difficulties and emotional demands that nurses face (34). According to Firth-Cozens et al. (35), at least one-third of nursing personnel experience high levels of occupational stress from the beginning of their career and are vulnerable to psychiatric disorders. In addition, psychiatrists and nurses have among the highest suicide rates across professions. These findings are reinforced by our results as well as by a recent study conducted among Hong Kong nurses, of whom 35.8% and 37.3% were experiencing depression and anxiety, respectively (36). Regarding factors associated with anxiety and depression in mental health nurses, we found that age, educational status and working experience contributed to the development of such disorders. These findings are in agreement with previous national and international studies conducted among nurses (23,(37)(38). This reinforces the assumption that various demographic factors, educational status and perceived support in nurses may be associated with mental health status (22). In addition, we observed that depression and anxiety increase as age increases. In a previous study in Greece, in which the anxiety levels of nurses working in the NHS were examined, it was found that anxiety and age indeed had a positive correlation (38). These findings are in contrast to part of the existing literature.
Results from several international studies argue that age is a protective factor and that younger nurses seem to be more vulnerable to depression and anxiety (36,(39)(40); this may be attributed to the fact that older nurses are more experienced and therefore experience less work-related stress, and they may have greater social support from a spouse or children, which can be protective against psychosocial problems (22,37). Furthermore, regarding educational status, our findings indicate that university graduates and those who hold a postgraduate degree are more likely to have depression and anxiety (3.44 and 4.24 times, respectively). A study conducted in China in 2012, in which 1437 nurses participated, found that higher education may indeed be a factor associated with depression and anxiety among nurses (37). In general, such findings are consistent with the existing international literature (41) and may be explained by responsibility, which can be a stressor and increases along with educational status. Moreover, higher-educated nurses usually have high expectations for their profession and can feel disappointed when experiencing a lack of progress or improvement in clinical practice (42). This can lead to the experience of depressive and anxiety symptoms. On the other hand, gender and rotating shifts were not associated with depression and anxiety. Previous studies support that gender can be an important factor in the development of depression and anxiety. A study conducted in 2011 by Uwaoma et al. (43) in nurses working in various departments found that women experienced more anxiety than men. The same conclusions were drawn by Kourakos et al. (38) and Karanikola et al. (44) in studies conducted in Greece among NHS nurses. Regarding rotating shifts, our finding reinforces the results of a very large study in which 1437 nurses participated and no association was found between night shifts and depression or anxiety (37). According to the existing literature, nurses working on rotating or night shifts need special attention and frequent health checks because they are considered to be at high risk for adverse health effects (45). Among other outcomes, night shifts in nursing have been associated with poor psychological well-being and quality of life, lower job satisfaction, increased levels of burnout, elevated levels of depression, anxiety and stress, decreased resilience and negative coping (46)(47)(48)(49). Limitations of the study: The study has some limitations. First, the sample size is small; however, the study was conducted in a specific population of nurses, those working in psychiatric hospitals of Greece. Finally, a cross-sectional study does not provide statistical information about how these disorders may vary over time. In addition, self-administered psychometric instruments cannot replace a clinical interview conducted by a specialized psychiatrist. Strengths of the study: As already indicated, there is little literature assessing the prevalence of depression and anxiety among mental health nurses and the associated factors. In this way, nursing managers as well as nursing personnel can recognize the factors related to depression and anxiety and how they are related. CONCLUSIONS To sum up, a significant number of mental health nurses in the study were found to have elevated levels of depression and anxiety.
Depression and anxiety have been well researched and well documented over the years at both the national and international level, in the general population as well as in nurses. The current study was conducted in a specific population of nurses, those working in two of the three remaining psychiatric hospitals of the Greek NHS. Socio-demographic and occupational variables, apart from age, educational level and working experience, do not seem to influence the prevalence of depression and anxiety. Almost 50% of the respondents have active symptomatology of depression and anxiety, and in many cases of both, a statistic which should be improved to avoid additional health problems that may lead to absence from work and, of course, poor quality of the patient care provided. • Authors contributions: KT, IP, MK and EF designed the study, and wrote the initial draft of the manuscript. VV, AP, MAK, and MK contributed to analysis and interpretation of data, and assisted in the preparation of the manuscript. KT, IP, MK, VV, AP, MAK and EF reviewed and approved the final version of the manuscript. • Conflict of interest: none declared. • Acknowledgments: The authors would like to thank all staff nurses who participated in the study.
2018-04-03T02:00:21.989Z
2018-02-01T00:00:00.000
{ "year": 2018, "sha1": "40f55e9918856c9ec5f35061f449cf1164bf76e9", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc5789556?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "40f55e9918856c9ec5f35061f449cf1164bf76e9", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
30152525
pes2o/s2orc
v3-fos-license
Microbial Succession and Flavor Production in the Fermented Dairy Beverage Kefir Traditional fermented foods represent relatively low-complexity microbial environments that can be used as model microbial communities to understand how microbes interact in natural environments. Our results illustrate the dynamic nature of kefir fermentations and microbial succession patterns therein. In the process, the link between individual species, and associated pathways, with flavor compounds is revealed and several genes that could be responsible for the purported gut health-associated benefits of consuming kefir are identified. Ultimately, in addition to providing an important fundamental insight into microbial interactions, this information can be applied to optimize the fermentation processes, flavors, and health-related attributes of this and other fermented foods. drive microbial succession, or changes in the microbial population structure over time, in these environments (4). It has been proposed that microbial communities from fermented foods could provide a useful model for elucidating the determinants of microbial succession, given that they are considerably less complex than, for example, those from the gut or soil (5). Indeed, cheese rind communities have previously been used to great effect for this purpose (6). Here, we show that kefir provides an alternative model microbial community that is less complex and provides results even more quickly. Kefir is a traditional fermented milk beverage that is typically produced by inoculating a kefir grain, a cauliflower-like exopolysaccharide (EPS) matrix containing a symbiotic community of bacteria and yeast (7), into milk and incubating it at room temperature for approximately 24 h, resulting in a beverage that has been described as having a pleasantly sour or yogurt-like taste (8). This flavor can vary, depending on the microbial composition of the grain that is used (9). High-throughput sequencing investigations have demonstrated that kefir grains are typically dominated by the bacterial genus Lactobacillus and the fungal phylum Ascomycota (9,10). In contrast, kefir milk is dominated by the bacterial genera Lactobacillus, Lactococcus, Acetobacter, and Leuconostoc and the fungal genera Kazachstania, Kluyveromyces, Naumovozyma, and Saccharomyces (9,11,12). The consumption of kefir has been associated with numerous health benefits, including anticarcinogenic, anti-inflammatory, and antipathogenic effects (13)(14)(15), as well as the alleviation of the symptoms of lactose intolerance and the reduction of cholesterol (16,17). There is mounting evidence to suggest that the microorganisms present in kefir exert at least some of these health benefits (18)(19)(20)(21)(22), but there is a lack of understanding of the mechanisms by which they do so. In this work, amplicon sequencing and whole-metagenome shotgun sequencing were combined with metabolomics and flavor analysis to highlight how the microbial composition, gene content, and flavor of kefir change over the course of 24-h fermentations. We demonstrate that the integration of multiple omics data can predict the contribution of individual microorganisms to metabolite production in a microbial environment, using flavor formation as an example, and we validate these findings through supplementation with specific microbes. 
To our knowledge, this is the first study to combine metagenome binning and metabolic reconstruction to determine the microbial composition, at both the species and strain levels, and the functional potential of a fermented food, respectively, at different stages of fermentation. In addition, this is the first study to combine whole-metagenome shotgun sequencing with metabolomics to link microbial species with volatile-compound production in kefir. Our findings reveal a dynamic flux from Lactobacillus kefiranofaciens domination during the early stages of fermentations to Leuconostoc mesenteroides domination during the latter stages, establish a causal relationship between microbial taxa and flavor, and highlight genes that likely contribute to kefir's purported health-associated attributes. RESULTS Microbial composition of kefir. 16S rRNA and internal transcribed spacer (ITS) gene sequencing was used to determine the changes in the microbial population of kefirs over the course of 24-h fermentations initiated with three separate grains, designated Fr1, Ick, and UK3, from distinct geographic locations, namely, France, Ireland, and the United Kingdom. Analysis of the grains showed that Lactobacillus was the dominant bacterial genus and constituted >92% of the populations of all three grains (see Fig. S1 in the supplemental material). Acetobacter was subdominant and accounted for between 1 and 2% of the population of each grain. In addition, Leuconostoc was present in all three grains, although its abundance varied from 0.2 to 1.5%. Other genera that were detected at a relative abundance of >1% were Propionibacterium, in Fr1 (4.6%) only, and Bifidobacterium, in UK3 (3.4%) only. A fungal population was detected in the grains Fr1 and Ick but not in UK3. Saccharomyces and Kazachstania were the only fungal genera present (see Fig. S1). Analysis of milk samples revealed that an initially relatively high bacterial diversity decreased over time, with a small number of genera becoming dominant by 8 and 24 h (see Fig. S2 in the supplemental material). On average, at 0 h, or immediately before the grains were added to the milk, the bacterial genera present at a relative abundance of ≥1% were Pseudomonas (16.9%), Anoxybacillus (7.1%), Thermus (6.5%), Acinetobacter (5%), Streptococcus (4.5%), Geobacillus (3.2%), Clostridium (2.4%), Butyrivibrio (2.2%), Serratia (2.1%), Enterobacter (1.3%), Turicibacter (1.3%), and Lactococcus (1%). A further 46.5% of the bacterial genera had a relative abundance of <1% (Fig. 1A). This microbial profile is consistent with that of pasteurized milk, as reported previously by Quigley et al. (23). We were unable to generate an ITS amplicon for the three samples collected at 0 h, and quantitative PCR (qPCR) indicated that fungal DNA was present at <2 pg/µl. qPCR measurements revealed that the total bacterial and fungal levels increased after kefir grains were added to milk (see Table S1 in the supplemental material). At 8 and 24 h in Fr1, Ick, and UK3, Lactobacillus, Leuconostoc, and Acetobacter accounted for >98% of the total bacterial population, while Saccharomyces and Kazachstania accounted for >99% of the fungal population. No other bacterial or fungal genera were present at a relative abundance of >1%. Although there were some differences in their compositions at each time point, the bacterial communities of the three kefirs all followed the same pattern of succession (Fig. 1A).
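The genus-level percentages quoted above are relative abundances, i.e., read counts normalized within each sample. A generic pandas sketch of that normalization is given below; the counts are invented for illustration and do not reproduce the study's data.

import pandas as pd

# Hypothetical genus-level read counts per sample/time point
counts = pd.DataFrame(
    {"0h": [10, 170, 70], "8h": [900, 40, 20], "24h": [600, 350, 30]},
    index=["Lactobacillus", "Pseudomonas", "Acetobacter"],
)

# Convert to relative abundance (% of reads within each sample)
rel_abund = counts.div(counts.sum(axis=0), axis=1) * 100
print(rel_abund.round(1))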
Between 0 and 8 h, there was an increase in the relative abundances of Lactobacillus, Leuconostoc, and Acetobacter. Lactobacillus was the dominant genus at 8 h. However, between 8 and 24 h, the relative abundance of Lactobacillus decreased. Concurrently, the relative abundances of Leuconostoc and Acetobacter increased. On average, Leuconostoc accounted for approximately one-third of the bacterial population at 24 h. In contrast to the bacterial communities of the three kefirs, the respective fungal communities displayed various patterns of succession (see Fig. S3A in the supplemental material). 16S rRNA and ITS compositional data were supplemented by composition-based analysis of shotgun metagenomic data. Kraken (24) was used to determine the bacterial composition of kefir after 0, 8, and 24 h of fermentation and yielded results that corresponded well to amplicon sequencing results at the genus level but which could be further assigned to the species level. It was established that the kefir milk was dominated by L. kefiranofaciens at 8 h (Fig. 1B). However, between 8 and 24 h, the relative abundance of L. kefiranofaciens decreased, whereas the relative abundance of Leuconostoc mesenteroides increased. During the same period, there were also increases in the relative abundances of Acetobacter pasteurianus, Lactobacillus helveticus, Leuconostoc citreum, Leuconostoc gelidum, and Leuconostoc kimchii. These results were generally consistent with those generated by MetaPhlan2 (25) (see Fig. S3B in the supplemental material), except that MetaPhlan2 did not detect some of the species present in lower abundance (i.e., A. pasteurianus, L. citreum, L. gelidum, or L. kimchii). MetaPhlan2 predicted that Saccharomyces cerevisiae was the dominant fungal species and that it accounted for 0.9% and 0.2% of the microbiota in kefir at 8 and 24 h of fermentation, respectively. However, it did not detect Kazachstania species. In addition, PanPhlAn (26) was used to provide strain level characterization of the most dominant bacterial species identified by Kraken and MetaPhlan2. The results indicated that, across all of the kefirs tested, the strains present were most closely related to L. kefiranofaciens DSM 10550, L. mesenteroides ATCC 8293, and L. helveticus MTCC 5463 (see Fig. S5 in the supplemental material). Despite this relative homogeneity, it was still apparent that the strains in a particular kefir were more closely related to each other than they were to strains from other kefirs (see Fig. S4 in the supplemental material). Gene content of kefir. Whole-metagenome shotgun sequencing was used to characterize the functional potential of the kefir microbiome at different stages of fermentation, and the HUMAnN2 pipeline (https://bitbucket.org/biobakery/humann2) was used for metagenomic metabolic reconstruction. The default HUMAnN2 pathway abundance table was regrouped by using a custom mapping file to assign individual MetaCyc pathways (27) to a hierarchy of 534 gene product categories to achieve an overview of the kefir microbiome (Fig. 2). The statistical tool LEfSe (28) was used to identify changes in the abundances of genetic pathways over the course of fermentation. Notably, we observed that pathways involved in carbohydrate metabolism, carboxylate degradation, and unsaturated fatty acid biosynthesis were most prevalent at 8 h, whereas those involved in amino acid metabolism and 2,3-butanediol degradation were most prevalent at 24 h (Fig. 2). 
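The regrouping of the pathway abundance table into higher-level categories can be mimicked generically with a table join and aggregation. The sketch below is a hedged illustration with invented pathway identifiers and category labels; it does not reproduce the study's custom mapping file.

import pandas as pd

# Hypothetical inputs: per-sample pathway abundances and a pathway -> category map
abund = pd.DataFrame({
    "pathway": ["PWY-A", "PWY-B", "PWY-C"],
    "kefir_8h": [120.0, 30.0, 15.0],
    "kefir_24h": [40.0, 95.0, 60.0],
})
mapping = pd.DataFrame({
    "pathway": ["PWY-A", "PWY-B", "PWY-C"],
    "category": ["carbohydrate metabolism", "carbohydrate metabolism",
                 "amino acid metabolism"],
})

# Sum pathway abundances within each higher-level category, per sample
regrouped = (abund.merge(mapping, on="pathway")
                  .groupby("category")[["kefir_8h", "kefir_24h"]]
                  .sum())
print(regrouped)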
Inspection of the default pathway abundance table revealed that pathways involved in fatty acid beta oxidation were present in kefirs. The pathways mentioned here are of particular interest because they are potentially involved in the production of volatile compounds (Table 1). In addition, the default HUMAnN2 gene family table was regrouped to Gene Ontology (GO) terms (gene product categories [29]) and, in total, we detected 1,288, 1,006, and 947 GO terms associated with carbohydrate, amino acid, and lipid metabolism in the kefir microbiome. Interestingly, pathways involved in aromatic amino acids and proline biosynthesis were assigned to L. mesenteroides but not to L. kefiranofaciens. Similarly, pathways involved in arabinose, maltose, pentose, sucrose, xylose, and xylulose metabolism were present in L. mesenteroides but not in L. kefiranofaciens. Finally, the HUMAnN2 gene family table was inspected for genes associated with probiotic functionalities to better understand the basis of the health benefits of kefir. We observed that L. kefiranofaciens in Fr1, Ick, and UK3 contained genes encoding EPS synthesis proteins (UniRef50_W5XGS2, UniRef50_F6CC46, and UniRef50_F0TGY1), bile salt transporter proteins (UniRef50_Q74LX5 and UniRef50_F6CE74), adhesion proteins (UniRef50_F6CFB4 and UniRef50_Q040W2), mucus binding proteins (UniRef50_F6CE70, UniRef50_F6CE69, UniRef50_F6CDG7, and UniRef50_F6CBX6), and the type III bacteriocins/bacteriolysins helveticin J (UniRef50_D5GYX2) and enterolysin A (UniRef50_D5GXY3 and UniRef50_F6CAP6). On the basis of these findings, we downloaded publicly available metagenome sequences from cheeses and kimchi (Table 2) to determine the prevalence of similar genes in other fermented foods. HUMAnN2 indicated that genes encoding EPS synthesis proteins, adhesion proteins, mucus binding proteins, bile salt hydrolases, bile salt symporters, and bacteriocins/prebacteriocins were widespread in the 14 cheese varieties investigated (Fig. 3). In addition, we observed several instances where multiple genes were assigned to individual species (see Table S2 in the supplemental material). We identified similar genes in kimchi (Fig. 5), although HUMAnN2 was unable to assign them to individual species because of the lower sequencing depth of those samples. Volatile-compound profiling and sensory analysis of kefir milk. Gas chromatography-mass spectrometry (GC-MS) was used to determine the volatile-compound profile of kefir milk after 0, 8, and 24 h of fermentation. Thirty-nine volatile compounds that could contribute to flavor were identified and semiquantified in kefir milks produced with each of the three kefir grains. These consisted of nine ketones, seven aldehydes, six esters, eight alcohols, five carboxylic acids, and two sulfur compounds (Table 1). The results of the volatile-compound analysis are presented in Fig. 4. The levels of all of the compounds detected increased after 0 h, apart from 1-pentanol, pentanal, hexanal, heptanal, heptanol, acetone, and 2-butanone (Fig. 4). Sensory acceptance evaluation and ranking descriptive analysis (RDA) of the Fr1 and Ick kefir milks were performed after 24-h fermentations. These revealed perceptible differences between the milks. Specifically, Fr1 samples had a more likeable, buttery flavor whereas Ick samples had a less likeable but fruity flavor (see Fig. S5 in the supplemental material). These results confirm that the volatile-compound profile data are consistent with subsequent flavor.
Correlations between microbial taxa and volatile compounds. The Spearman rank correlation test was used to identify correlations between the levels of individual taxa and flavor compounds. At the genus level, based on amplicon sequencing results, there were strong correlations between Lactobacillus and carboxylic acids, esters, and 3-methyl-1-butanol; between Saccharomyces and carboxylic acids and esters; between Acetobacter and acetic acid, 2-methyl-1-butanol, and 2,3-butanedione; between Leuconostoc and 2,3-butanedione; and between Kazachstania and acetic acid, 2-methyl-1-butanol, 2,3-butanedione, 2,3-pentanedione, and 2,3-hexanedione (see Table S3 in the supplemental material). At the bacterial species level, on the basis of Kraken results, there were strong correlations between L. kefiranofaciens and carboxylic acids and esters; between A. pasteurianus and carboxylic acids and 2,3-butanedione; and between L. mesenteroides and 2,3-butanedione. At the fungal species level, on the basis of MetaPhlan2 results, there were strong correlations between S. cerevisiae and alcohols and esters (Table 3; Fig. 5). In summary, correlations were found between compounds associated with vinegary flavors and A. pasteurianus, those associated with cheesy flavors and L. kefiranofaciens, those associated with buttery flavors and L. mesenteroides, and those associated with fruity flavors and L. kefiranofaciens and S. cerevisiae. Impact of supplementing kefir with kefir isolates. The consequences of adding L. kefiranofaciens NCFB 2797 to Fr1, a kefir with a low indigenous L. kefiranofaciens population level, were investigated. GC-MS revealed that this addition caused increases in the levels of the esters ethenyl acetate (by 59.15%), ethyl acetate (100%), methyl-3-butyrate (26.83%), and 2-methylbutyl-acetate (11.44%) and the ketone 2-heptanone (65.86%). In contrast, the addition of L. mesenteroides 213M0 to Ick, a kefir with a low indigenous L. mesenteroides population level, resulted in increases in the levels of acetic acid (168.28%) and 2,3-butanediol (14.91%), a precursor to 2,3-butanedione (see Table S4 in the supplemental material). Despite changes in the volatile-compound profile, there were no perceptible changes in flavor (see Fig. S5 in the supplemental material). DISCUSSION Many traditional fermented foods have been reported to have health benefits (30,31). These foods are often produced on a small-scale, artisanal basis. However, the increased demand for health-promoting foods among the public presents an opportunity to bring traditional fermented foods to a wider audience and serves as an incentive to optimize starter cultures for the mass production of fermented foods with enhanced sensory qualities (32). In recent years, genetic characterization has been increasingly employed to guide starter culture development for numerous fermented foods, including wines, beers, cocoa, and meats (33)(34)(35)(36). Similarly, integrated molecular omics approaches (37) have emerged as powerful methods of investigating the microbial dynamics of food fermentations with the aim of optimizing processes like flavor production (38). In this study, we combined compositional and shotgun DNA sequencing with GC-MS and flavor analysis to predict microbes involved in the production of different flavor compounds in kefir.
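The taxon-volatile screening described at the start of this section can be reproduced with the Spearman rank test; a minimal SciPy sketch follows, using invented abundance and compound vectors rather than the study's measurements.

import numpy as np
from scipy.stats import spearmanr

# Illustrative vectors across samples (e.g. 0-h, 8-h and 24-h replicates)
leuconostoc_abundance = np.array([0.2, 5.1, 8.3, 30.2, 28.7, 33.1])
butanedione_level = np.array([0.0, 0.4, 0.6, 2.1, 1.9, 2.3])   # 2,3-butanedione

rho, p = spearmanr(leuconostoc_abundance, butanedione_level)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")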
We identified significant correlations between the abundances of particular microbial genera and species and the levels of different volatile compounds and showed that the microbes in kefir had genes necessary for the production of these compounds. Specifically, Acetobacter pasteurianus correlated with acetic acid, which is associated with vinegary flavors; L. kefiranofaciens correlated with carboxylic acids and ketones associated with cheesy flavors and with esters associated with fruity flavors; L. mesenteroides correlated with 2,3-butanedione, which is associated with buttery flavors, and with acetic acid; and S. cerevisiae correlated with esters. Sensory analysis revealed that Fr1, a kefir high in L. mesenteroides, had a likeable buttery flavor, whereas Ick, a kefir high in L. kefiranofaciens, had a less likeable but fruity flavor. Thus, our data suggested a causal relationship between specific taxa and flavor characteristics that was subsequently supported by experimentally manipulating the kefir community. In line with predictions, adding L. kefiranofaciens NCFB 2797 to Fr1 resulted in increases in the levels of 2-heptanone and esters, whereas the addition of L. mesenteroides 213M0 to Ick resulted in increases in the levels of acetic acid and 2,3-butanediol, a precursor of 2,3-butanedione. However, sensory analysis indicated that these changes were imperceptible and therefore higher inoculum levels might be necessary to change flavor. On the basis of these results, we predict that the final flavor of kefir can be manipulated by altering the ratio of microbes in the grain. Unfortunately, to date, it has not been possible to artificially reconstruct kefir grains in the laboratory and this might hamper the practical application of our findings. However, we propose that the approach outlined here can be used to accelerate the development of superior multistrain starter cultures to improve the flavor of a variety of fermented foods. From a systems biology perspective, our work confirms that kefir is suitable as a model microbial community. There are two advantages to using the kefir model, rather than other fermented foods, in this way. First, kefir contains fewer species and so is a simpler environment in which to investigate how microbial communities are formed. Second, kefir is quick and easy to produce, with the fermentation taking just 24 h when it is incubated at room temperature. In addition, others have demonstrated that kefir is a highly culturable system and, indeed, all of the species that were detected at a relative abundance of Ͼ1% at 8 and 24 h across the kefirs examined have been isolated previously (39). Ultimately, Kraken and MetaPhlAn2 showed that the microbial population of kefir was dominated by L. kefiranofaciens at 8 h of fermentation. However, between 8 and 24 h, there was a fall in the relative abundance of L. kefiranofaciens and L. mesenteroides superseded it as the dominant species. The shift from L. kefiranofaciens to L. mesenteroides is similar to patterns of microbial succession seen in other fermented foods (40,41). We propose that kefir could be a particularly appropriate model community in which to determine the driving forces behind microbial succession. Early colonizing bacteria in other fermentations have been reported to modify the environment in such a way as to make it more suitable for the growth of other bacteria, thus driving succession (5), and this could explain the observed shift that occurs during kefir fermentation. 
Our HUMAnN2 results revealed that genes involved in aromatic amino acid biosynthesis were assigned to L. mesenteroides but not to L. kefiranofaciens. This may be significant because free amino acid analysis showed that there was a significant decrease in the levels of tyrosine in kefir between 8 and 24 h (see Text S1 in the supplemental material). It is possible that its ability to synthesize tyrosine underlies the increased prevalence of L. mesenteroides, relative to L. kefiranofaciens, in the latter stages of fermentation. Future work will focus on investigating the effect of modifying the levels of tyrosine on the microbiota and volatilecompound profile of kefir. Thus, a "kefir model" has the potential to yield insights into the effects of nutrient availability on microbial succession and metabolite production in other, more complicated, environments. Finally, we showed that L. kefiranofaciens has genes that encode proteins that are considered to be important for probiotic action, including EPS synthesis proteins, bile salt transporters, mucus binding proteins, and bacteriolysins (42,43). The presence of these genes suggests that the L. kefiranofaciens strains present in these kefirs have the potential to survive gastric transit, colonize the gut, and inhibit the growth of pathogens. Indeed, previous studies using mice have shown that L. kefiranofaciens protects against enterohemorrhagic Escherichia coli infection (44). Further analysis of shotgun metagenomic data from cheese and kimchi indicated that similar genes are present in other fermented foods. Our findings are consistent with previous observations that fermented food-borne microbes can colonize the gut (45) and support designating some fermented foods, like kimchi, "probiotic foods" (31). In summary, in this study, it has been demonstrated that a combined metagenomic and metabolomic approach can potentially be used to identify the microbes from a particular environment that are responsible for the production of certain metabolites, using the production of flavor compounds during kefir fermentation as a model. Furthermore, we have provided additional evidence of the use of microbial fermentations to provide valuable insights into the dynamics of microbial succession and, in the process, identified genes in L. kefiranofaciens that potentially confer important probiotic traits. To conclude, our analyses confirm the value of using kefir as a model microbial community, while also providing a valuable insight into the microbiology of this natural health-promoting beverage. MATERIALS AND METHODS Kefir fermentations. Three kefir grains, Fr1, Ick, and UK3, from distinct geographic locations, France, Ireland, and the United Kingdom, respectively, were used for kefir fermentations. The grains were weighed and inoculated in full-fat pasteurized milk at a concentration of 2% (wt/vol) in separate fermentation vessels. The milk was incubated at 25°C for 24 h. A 20-ml volume of milk was collected after 0, 8, or 24 h. In total, there were 15 2% (wt/vol) kefir milk samples: three 0-h samples that were collected immediately before the addition of Fr1, Ick, or UK3; three 8-h samples (one each from Fr1, Ick, and UK3); and nine 24-h samples (one from each of the three replicate fermentations with Fr1, Ick, or UK3). The samples were stored at Ϫ20°C until DNA extraction and volatile-compound analysis. Kefir grains were washed with sterile deionized water between fermentations. 
Additional fermentations were performed in which milk inoculated with specific kefir grains was supplemented with kefir isolates to assess the consequences of increased levels of these taxa on volatile-compound levels and flavor. Specifically, L. kefiranofaciens NCFB 2797 and L. mesenteroides 213M0 were grown overnight in 10 ml of de Man, Rogosa, and Sharpe broth; pelleted at 5,444 × g; and resuspended in 5 ml of pasteurized milk. L. kefiranofaciens NCFB 2797 cells were added to Fr1 milk, and L. mesenteroides 213M0 cells were added to Ick milk. Unspiked Fr1 and Ick served as negative controls. As described above, milk was incubated at 25°C for 24 h and the fermentations were carried out in triplicate. A 5-ml volume of milk was collected for volatile-compound analysis, and the samples were stored at −20°C. A 400-ml volume of milk was collected for sensory evaluation, and the samples were stored at −80°C. Volatile-compound profiling of kefir by GC-MS. For volatile-compound analysis of kefir, 1 g of the sample was added to a 20-ml screw-cap solid-phase microextraction (SPME) vial with a silicone/polytetrafluoroethylene septum (Apex Scientific, Maynooth, Ireland) and equilibrated to 75°C for 5 min with pulsed agitation for 5 s at 400 rpm with a GC Sampler 80 (Agilent Technologies Ltd., Little Island, Cork, Ireland). A single 50/30-µm Carboxen-divinylbenzene-polydimethylsiloxane SPME fiber (Agilent Technologies Ltd., Ireland) was used; it was exposed to the headspace above the samples for 20 min at a depth of 1 cm at 75°C. The fiber was retracted and injected into the GC inlet and desorbed for 2 min at 250°C. After injection, the fiber was heated in a bakeout station for 3 min at 270°C to cleanse the fiber. The samples were analyzed in triplicate. Injections were made on an Agilent 7890A GC apparatus with an Agilent DB-5 column (60 m by 0.25 mm by 0.25 µm) with a multipurpose injector with a Merlin microseal (Agilent Technologies Ltd., Ireland). The temperature of the column oven was set at 35°C, held for 0.5 min, increased at 6.5°C·min⁻¹ to 230°C, and then increased at 15°C·min⁻¹ to 325°C, yielding a total run time of 36.8 min. The carrier gas was helium held at a constant pressure of 23 lb/in². The detector was an Agilent 5975C MSD single-quadrupole mass spectrometer detector (Agilent Technologies Ltd., Ireland). The ion source temperature was 230°C, the interface temperature was set at 280°C, and the MS mode was electronic ionization (−70 V) with the mass range scanned between 35 and 250 atomic mass units. Compounds were identified by mass spectrum comparisons to the National Institute of Standards and Technology 2011 mass spectral library, the automated mass spectral deconvolution and identification system, and an in-house library created in TargetView software (Markes International, Llantrisant, United Kingdom) with target and qualifier ions and linear retention indices for each compound. Autotuning of the GC-MS system was carried out prior to the analysis to ensure optimal GC-MS performance. A set of external standards was also run at the start and end of the sample set, and abundances were compared to known amounts to ensure that both the SPME extraction and MS detection were performing within specifications. Volatile-compound profiling of spiked and unspiked kefir samples was done by a slightly modified GC-MS protocol (see Text S1 in the supplemental material).
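As a quick arithmetic check of the oven program described above (an initial hold followed by two linear ramps), the stated 36.8-min total run time follows directly:

hold = 0.5                      # min at 35 °C
ramp1 = (230 - 35) / 6.5        # 30.0 min at 6.5 °C/min
ramp2 = (325 - 230) / 15        # ~6.3 min at 15 °C/min
print(round(hold + ramp1 + ramp2, 1))   # 36.8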
Sensory analysis of spiked and nonspiked kefir. Twenty-five naive assessors were recruited for sensory acceptance evaluation, and 10 trained assessors were recruited for RDA. Analysis of variance-partial least-squares regression (APLSR) was used to process the results of the sensory acceptance evaluation test and RDA with Unscrambler software version 10.3. See Text S1 in the supplemental material for a more in-depth description of the sensory analysis methods used. Total DNA extraction from kefir (milks and grains). DNA was extracted from 15 ml of kefir milk as follows. Milk was centrifuged at 5,444 × g for 30 min at 4°C to pellet the microbial cells in the liquid. The cell pellet was resuspended in 200 µl of PowerBead solution from the PowerSoil DNA Isolation kit (Cambio, Cambridge, United Kingdom). The resuspended cells were transferred to a PowerBead tube (Cambio, Cambridge, United Kingdom). A 90-µl volume of 50 mg/ml lysozyme (Sigma-Aldrich, Dublin, Ireland) and 50 µl of 100 U/ml mutanolysin (Sigma-Aldrich, Dublin, Ireland) were added, and the sample was incubated at 60°C for 15 min. A 28-µl volume of proteinase K (Sigma-Aldrich, Dublin, Ireland) was added, and the sample was incubated at 60°C for a further 15 min. DNA was then purified from the sample by the standard PowerSoil DNA Isolation kit protocol (Cambio, Cambridge, United Kingdom). Total DNA was also extracted from each of the three grains. Fragments of 50 mg were removed from different sites on each of the grains and added to separate PowerBead tubes (Cambio, Cambridge, United Kingdom). The grain fragments were homogenized by shaking the PowerBead tube on the TissueLyser II (Qiagen, West Sussex, United Kingdom) at 20 Hz for 10 min. Following homogenization, DNA was purified from the sample by the method outlined above. Total DNA was initially quantified and qualified by gel electrophoresis and the NanoDrop 1000 (BioSciences, Dublin, Ireland) before more accurate quantification with the Qubit High Sensitivity DNA assay (BioSciences, Dublin, Ireland). Bacterial and fungal abundances were determined by qPCR by the protocol described by Fouhy et al. (46) and the Femto Fungal DNA Quantification kit (Cambridge Biosciences, United Kingdom), respectively. Amplicon sequencing. 16S rRNA gene libraries were prepared from extracted DNA by the 16S Metagenomic Sequencing Library Preparation protocol from Illumina (47). ITS gene libraries were prepared for the samples with a modified version of the 16S rRNA gene extraction protocol. Briefly, the initial genomic DNA amplification was performed with primers specific to the ITS1-ITS2 region of the ITS gene (48), but they were modified to incorporate the Illumina overhang adaptor (i.e., ITSF1 primer 5′ TCGTCGGCAGCGTCAGATGTGTATAAGAGACAGCTTGGTCATTTAGAGGAAGTAA 3′ and ITS2 primer 5′ GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAGGCTGCGTTCTTCATCGATGC 3′). After amplification of the ITS1-ITS2 region, PCR products were treated as described in the Illumina protocol. Samples were sequenced on the Illumina MiSeq in the Teagasc sequencing facility, with a 2 × 250 cycle V2 kit, in accordance with standard Illumina sequencing protocols. Whole-metagenome shotgun sequencing. Whole-metagenome shotgun libraries were prepared in accordance with the Nextera XT DNA Library Preparation Guide from Illumina (47). Samples were sequenced on the Illumina MiSeq sequencing platform in the Teagasc sequencing facility, with a 2 × 300 cycle V3 kit, in accordance with standard Illumina sequencing protocols. Bioinformatic analysis.
16S rRNA gene sequencing data were processed with the pipeline described by Fouhy et al. (49). Briefly, sequences were quality checked, clustered into operational taxonomic units, and aligned and diversity (both alpha and beta) was calculated with a combination of the Qiime (1.8.0) (50) and USearch (v7-64bit) (51) pipelines. Taxonomy was assigned with a BLAST search (52) against SILVA SSURef database release 1 (53). ITS gene sequencing data were processed with a slightly modified pipeline. Taxonomy was assigned by using a BLAST search against the ITSoneDB database (54). Raw reads from whole-metagenome shotgun sequencing were filtered on the basis of quality and quantity and trimmed to 200 bp with a combination of Picardtools (https://github.com/broadinstitute/picard) and SAMtools (55). Subsequently, function was assigned to reads with the HUMAnN2 suite of tools (56), which assigned function based on the ChocoPhlan databases and genes based on UniRef (57). The HUMAnN2 gene abundance table was regrouped by a mapping of MetaCyc pathways and a mapping of GO terms for amino acid, carbohydrate, and lipid metabolism. MetaPhlAn2 and Kraken were used to profile changes in the microbial composition of kefir milk at the species level (24,25). Statistical analysis of metagenomic and metabolomic data. Statistical analysis was done with R-3.2.2 (58) and LEfSe (28). The R packages ggplot2 and gplots and the cladogram generator Graphlan (59) were used for data visualization. Accession number(s). Sequence data have been deposited in the European Nucleotide Archive (ENA) under the project accession number PRJEB15432. ACKNOWLEDGMENTS We thank Ben Bourrie and Mairéad Coakley for providing us with the strains used in this study. In addition, we thank Eric Fransoza for providing us with mapping files to regroup the HUMAnN2 gene abundance table. This research was funded by Science Foundation Ireland in the form of a center grant (APC Microbiome Institute grant no. SFI/12/RC/2273). Research in the Cotter laboratory is also funded by Science Foundation Ireland through the PI award "Obesibiotics" (11/PI/1137). Orla O'Sullivan is funded by Science Foundation Ireland through a Starting Investigator Research Grant award (13/SIRG/2160).
2018-04-03T02:22:21.817Z
2016-10-04T00:00:00.000
{ "year": 2016, "sha1": "7337bad227c1202777ce4df9a9c93c8f7dbf6cfb", "oa_license": "CCBY", "oa_url": "https://msystems.asm.org/content/msys/1/5/e00052-16.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5634de8f252733e8d04e1980988ae2182ad660f1", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
233204716
pes2o/s2orc
v3-fos-license
Privacy-Preserving Supervisory Control of Discrete-Event Systems via Co-Synthesis of Edit Function and Supervisor for Opacity Enforcement and Requirement Satisfaction This paper investigates the problem of co-synthesis of edit function and supervisor for opacity enforcement in the supervisory control of discrete-event systems (DES), assuming the presence of an external (passive) intruder, where the following goals need to be achieved: 1) the external intruder should never infer the system secret, i.e., the system is opaque, and never be sure about the existence of the edit function, i.e., the edit function remains covert; 2) the controlled plant behaviors should satisfy some safety and nonblockingness requirements, in the presence of the edit function. We focus on the class of edit functions that satisfy the following properties: 1) the observation capability of the edit function in general can be different from those of the supervisor and the intruder; 2) the edit function can implement insertion, deletion, and replacement operations; 3) the edit function performs bounded edit operations, i.e., the length of each string output of the edit function is upper bounded by a given constant. We propose an approach to solve this co-synthesis problem by modeling it as a distributed supervisor synthesis problem in the Ramadge-Wonham supervisory control framework. By taking the special structure of this distributed supervisor synthesis problem into consideration and to improve the possibility of finding a non-empty distributed supervisor, we propose two novel synthesis heuristics that incrementally synthesize the supervisor and the edit function. The effectiveness of our approach is illustrated on an example in the enforcement of the location privacy. I. INTRODUCTION With the development of Internet and mobile devices, we are now in an era of information explosion and big data, which not only brings about tremendous advantages, for example, better decision-making capability, increased productivity, and improved agility, but also results in many challenges when implementing big data analytics initiatives. Storing big data, particularly sensitive data, can make the system a more attractive target for cyber attackers, one kind of which aims to infer the secret states of the system. To defend against such attackers and enforce security property, opacity came into being and was first introduced in computer science to analyze cryptographic protocols [1]. In the context of DES, opacity is an attribute that expresses system security in a general language based theoretical framework. Its parameters are a predicate, given as a subset of runs of the system, and an observation function, from the set of runs into a set of observables [2]. If the secret cannot be inferred through observation by the external intruder, then the information is opaque. Depending on the type of behaviour that is considered secret, two families of opacity properties are usually considered: state-based opacity [3], [4] and languagebased opacity [5], [6], where the difference is that, for the state-based opacity, one is given a subset of secret states, while for the language-based opacity, one is given a subset of secret strings. Then the question is whether there exists some information sequences such that the observations generated by these sequences enable an external intruder to infer that the system state has transited to a secret state or a secret string has been executed by the system. 
For the state-based opacity, five kinds of derived opacity notions are mostly studied: 1) currentstate opacity [3], [4]; 2) initial-state opacity [3]; 3) initial-andfinal-state opacity [7]; 4) K-step opacity [4]; 5) infinite-step opacity [8]. In the DES community, a substantial amount of studies have been focusing on opacity, including verification and enforcement. In this work, we shall focus on the opacity enforcement. For the opacity verification, [9] provides algorithms for checking strong and weak language-based opacity and verification algorithms for state-based opacity properties are proposed in [4], [7], [10] - [14]. By modeling the system as a Petri net, [15], [16] address the verification of statebased opacity, where the decidability issue is considered in [16]. In addition, by modeling the system as a probabilistic finite state automaton, [17]- [20] investigate the verification of opacity in the context of stochastic DES, where the violation of opacity is characterized by the probability, not a binary value (0 or 1) as in the case of a non-stochastic DES. Furthermore, the investigation on the verification of opacity has also been extended to networked DES recently in [21], [22], where the communication delays and losses in the observation channel and the control channel have been taken into consideration. Readers could refer to [23], [24] for a more comprehensive literature review. For opacity enforcement, there are typically three approaches, which are shown in Fig. 1: 1) Supervisory control, which restricts the system's behavior such that the secret can be preserved; 2) Edit function, which modifies the information flow such that the external intruder cannot infer the system secret; 3) Mask, which turns on/off the associated sensors to enforce the opacity. In [25]- [31], the technique of adopting supervisory control for opacity enforcement is investigated, where maximally permissive controllers are synthesized. Specifically, [29] specifies the finite transition systems as modal transition systems to ensure opacity of a secret predicate on all labeled transition systems. To mitigate the complexity of the synthesis procedure, [26] proposes abstraction-based synthesis of opacityenforcing controllers by using alternating simulation relations for labeled transition systems. The topic of opacity enforcement by using edit functions is investigated in [13], [32]- [40], all of which assume that the edit function and the external intruder have the same observation capability and could observe all the observable events fired by the plant. [32] considers the problem of enforcing currentstate opacity and language-based opacity by using insertion functions. [33] deals with current-state opacity and proposes an enforcer to change the order of observations in the event occurrences. [34]- [37] study the problem of enforcing currentstate opacity under the assumption that the intruder either knows or does not know the structure of the insertion function. In addition, deletion functions are considered in [35], which is also extended to nondeterministic insertion and deletion functions in [36]. To reduce the computational complexity, [38] proposes abstraction based methods to synthesize edit functions for current-state opacity enforcement and then [39] extends the work in [38] by taking the synchronous composition into consideration under modular DES. [13], [40] adopt runtime enforcer, which enforces opacity by using delays, to ensure K-step opacity. 
For the techniques of adopting masks to enforce opacity, [41] designs masks to restrict the observable outputs of the system either in a static or dynamic way to ensure currentstate opacity. [42] investigates the problem of synthesizing dynamic masks that preserve infinite-step opacity. [43] studies the problem of maximum information release while ensuring (weak or strong) language-based opacity. As we have described above, lots of fruitful works have been dedicated to opacity enforcement of DES. However, existing research only considers either synthesis of supervisor to restrict the system behavior or synthesis of edit function or mask to ensure that the information flow is opaque when the system behavior is not restricted. In reality, it is more likely that the system behavior is restricted meanwhile we need to enforce opacity w.r.t such restricted behavior. Thus, the following privacy-preserving control problem needs to be solved. On one hand, the system needs to fulfill some specific requirement, which might not satisfy the opacity property, by adopting supervisory control, and on the other hand, we expect that the information sequences generated by the system would not expose the system secret to the external intruder by adopting the edit functions or masks. In this work, we choose to adopt edit functions. To achieve the goals in this privacy-preserving control problem, the edit function and the supervisor ought to cooperate to control the system and confuse the intruder. However, the difficulty is that what the supervisor observes is the information sequence altered by the edit function, which is originally used to deceive the intruder but it might also confuse the supervisor. Thus, the edit function and the supervisor should be designed carefully enough such that only the intruder would be confused and the supervisor could still issue the appropriate control commands under the altered information sequences. In this work, besides the opacity enforcement that should be guaranteed in the above-mentioned privacy-preserving control issue, we also take the covertness into consideration when we synthesize the edit function. In the previous works, it is usually assumed that the external intruder has the full knowledge of the plant as its prior knowledge, based on which it could infer the system secret. In this paper, we consider a more powerful intruder that could not only infer the secret but also could discover the existence of edit function, since the intruder can compare its online observations with its prior knowledge to determine whether information inconsistency has happened. We assume that once the intruder detects such inconsistency, the existence of the edit function is exposed to the intruder, i.e., the edit function is not covert. Our goal is that the synthesized edit function should always remain covert to the intruder, making it as ambiguous as possible for the intruder, which imposes more challenges when we synthesize the edit function and the supervisor, since now the feasible edit operations initiated by the edit function should not only ensure the opacity but also cannot expose its own existence. In this work, we shall study a privacy-preserving control issue, by focusing on the problem of co-synthesizing the edit function and the supervisor for opacity enforcement in the supervisory control of DES. To the best of our knowledge, this is the first time when such a synthesis problem is investigated in the context of DES. The contributions of this work are as follows: 1. 
We consider the privacy-preserving supervisory control issue by addressing the problem of co-synthesis of the edit function and the supervisor, which is more in line with the need for the resilient control of a closed-loop system. In this work, we adopt a general setup for this privacy-preserving control problem, where the observation capabilities of the edit function, the supervisor, and the intruder could be different. This general setup has never been considered in previous works on opacity enforcement with edit functions. In addition, we also consider covertness enforcement for the edit function, so the external intruder is never sure whether there exists an edit function. 2. By formulating the system components as finite state automata, the problem of co-synthesizing the edit function and the supervisor for opacity enforcement is addressed. The solution methodology proposed in this work is to model the co-synthesis problem as a distributed supervisor synthesis problem in the Ramadge-Wonham supervisory control framework. 3. To solve the co-synthesis problem, which has been modelled as a distributed supervisor synthesis problem, we propose two incremental synthesis heuristics that exploit the structure of the distributed control architecture arising from the co-synthesis of the supervisor and the edit function. Different from the existing incremental synthesis approaches, which attempt to synthesize a nonblocking local supervisor at each step and can immediately result in an empty solution for our problem, our approach avoids this pitfall and attempts to synthesize a local supervisor ensuring marker-reachability first, which can thus increase the possibility of generating a feasible solution for the distributed supervisor synthesis problem studied in this work. This paper is organized as follows. In Section II, we provide some basic notions which are needed in this work. In Section III, we introduce the component models that help us to model the co-synthesis problem as a distributed supervisor synthesis problem. Section IV proposes a method to synthesize the edit function and the supervisor for opacity enforcement. An example is given to show the effectiveness of the proposed method in Section V. Finally, conclusions are drawn in Section VI. II. PRELIMINARIES Given a finite alphabet Σ, let Σ * be the free monoid over Σ with the empty string ε being the unit element and the string concatenation being the monoid operation. For a string s, |s| is defined as the length of s. Given two strings s, t ∈ Σ * , we say s is a prefix substring of t, written as s ≤ t, if there exists u ∈ Σ * such that su = t, where su denotes the concatenation of s and u. A language L ⊆ Σ * is a set of strings. The prefix closure of L is defined as L̄ = {u ∈ Σ * | (∃v ∈ L) u ≤ v}. The event set Σ is partitioned into Σ = Σ c ∪ Σ uc = Σ o ∪ Σ uo , where Σ c (respectively, Σ o ) and Σ uc (respectively, Σ uo ) are defined as the sets of controllable (respectively, observable) and uncontrollable (respectively, unobservable) events, respectively. As usual, P o : Σ * → Σ * o is the natural projection defined such that (1) P o (ε) = ε; (2) for any σ ∈ Σ, P o (σ) = σ if σ ∈ Σ o , and P o (σ) = ε otherwise; (3) for any s ∈ Σ * and σ ∈ Σ, P o (sσ) = P o (s)P o (σ). A finite state automaton G over Σ is given by a 5-tuple (Q, Σ, ξ, q 0 , Q m ), where Q is the state set, ξ : Q × Σ → Q is the (partial) transition function, q 0 ∈ Q is the initial state, and Q m is the set of marker states. We write ξ(q, σ)! to mean that ξ(q, σ) is defined and also view ξ ⊆ Q × Σ × Q as a relation. En G (q) = {σ ∈ Σ | ξ(q, σ)!}.
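To make the preceding notions concrete, the following minimal Python sketch encodes a finite state automaton with a partial transition function, together with En_G, the extension of ξ to strings, the natural projection P_o, and the prefix closure. It is an illustration only, not the authors' implementation; all identifiers are ours.

```python
# Minimal sketch (illustrative only) of the preliminaries defined above:
# a finite state automaton with a partial transition function, En_G,
# the extension of xi to strings, the natural projection, and the
# prefix closure of a finite language.
class FSA:
    def __init__(self, states, alphabet, trans, init, marked):
        self.Q, self.Sigma, self.xi = set(states), set(alphabet), dict(trans)
        self.q0, self.Qm = init, set(marked)

    def en(self, q):
        """En_G(q): events sigma with xi(q, sigma) defined."""
        return {s for (p, s) in self.xi if p == q}

    def run(self, string):
        """Extended partial transition function xi(q0, s); None if undefined."""
        q = self.q0
        for s in string:
            if (q, s) not in self.xi:
                return None
            q = self.xi[(q, s)]
        return q

def P_o(string, Sigma_o):
    """Natural projection: erase events outside the observable set Sigma_o."""
    return tuple(s for s in string if s in Sigma_o)

def prefix_closure(L):
    """All prefixes of the strings in a finite language L."""
    return {s[:i] for s in L for i in range(len(s) + 1)}

# toy usage
G = FSA({0, 1}, {"a", "b"}, {(0, "a"): 1, (1, "b"): 0}, 0, {0})
print(G.en(0), G.run(("a", "b")), P_o(("a", "b"), {"a"}), len(prefix_closure({("a", "b")})))
```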
ξ is also extended to the (partial) transition function ξ : Q × Σ * → Q and the transition function ξ : 2 Q × Σ → 2 Q [49], where the latter is defined as follows: for any Q' ⊆ Q and any σ ∈ Σ, ξ(Q', σ) = {q' ∈ Q | (∃q ∈ Q') q' = ξ(q, σ)}. Let L(G) and L m (G) denote the closed-behavior and the marked behavior of G [49], respectively. When Q m = Q, we shall also write G = (Q, Σ, ξ, q 0 ) for simplicity. The "unobservable reach" [49] of the state q ∈ Q under the subset of events Σ' ⊆ Σ is given by UR G,Σ−Σ' (q) := {q' ∈ Q | [∃s ∈ (Σ − Σ') * ] q' = ξ(q, s)}. We define P Σ' (G) to be the finite state automaton (2 Q , Σ, δ, UR G,Σ−Σ' (q 0 )) over Σ, where the unobservable reach UR G,Σ−Σ' (q 0 ) of q 0 is the initial state, and the (partial) transition function δ : 2 Q × Σ → 2 Q is defined as follows: (1) For any ∅ ≠ Q' ⊆ Q and any σ ∈ Σ', δ(Q', σ) = UR G,Σ−Σ' (ξ(Q', σ)), where UR G,Σ−Σ' (Q'') := ∪ q∈Q'' UR G,Σ−Σ' (q) for any Q'' ⊆ Q; (2) For any ∅ ≠ Q' ⊆ Q and any σ ∈ Σ − Σ', δ(Q', σ) = Q'. We here remark that P Σ' (G) is over Σ, instead of Σ', and there is no transition defined at the state ∅ ∈ 2 Q . A finite state automaton G = (Q, Σ, ξ, q 0 , Q m ) is said to be nonblocking if every reachable state in G can reach some marker state in Q m [49], and marker-reachable if some marker state in Q m is reachable. As usual, for any two finite state automata G 1 = (Q 1 , Σ 1 , ξ 1 , q 1,0 , Q 1,m ) and G 2 = (Q 2 , Σ 2 , ξ 2 , q 2,0 , Q 2,m ), their synchronous product is G 1 ||G 2 := (Q 1 × Q 2 , Σ 1 ∪ Σ 2 , ζ, (q 1,0 , q 2,0 ), Q 1,m × Q 2,m ), where the (partial) transition function ζ is defined as follows: for any (q 1 , q 2 ) ∈ Q 1 × Q 2 and σ ∈ Σ 1 ∪ Σ 2 , ζ((q 1 , q 2 ), σ) = (ξ 1 (q 1 , σ), ξ 2 (q 2 , σ)) if σ ∈ Σ 1 ∩ Σ 2 , ξ 1 (q 1 , σ)! and ξ 2 (q 2 , σ)!; ζ((q 1 , q 2 ), σ) = (ξ 1 (q 1 , σ), q 2 ) if σ ∈ Σ 1 − Σ 2 and ξ 1 (q 1 , σ)!; ζ((q 1 , q 2 ), σ) = (q 1 , ξ 2 (q 2 , σ)) if σ ∈ Σ 2 − Σ 1 and ξ 2 (q 2 , σ)!; and ζ((q 1 , q 2 ), σ) is undefined otherwise. Notation. Let Z denote the set of integers, N the set of nonnegative integers, and N + the set of positive integers. Let Γ = 2 Σ c − {∅} denote the set of all the possible control commands, deviating from the standard definition of Γ, where each control command only contains the controllable events that it will enable. It is assumed that uncontrollable events could be executed independently of a control command. For an alphabet Σ, we use Σ # to denote a copy of Σ with superscript "#" attached to each element in Σ. Intuitively speaking, "σ # " denotes the message edited by the edit function and the specific meanings of the relabelled events will be introduced later in Section III. III. COMPONENT MODELS WITH EDIT FUNCTION AND SUPERVISOR The architecture of the privacy-preserving supervisory control system with an edit function for opacity enforcement is illustrated in Fig. 2, where the components are listed as follows: • Plant G. • Edit function E (subject to edit constraints EC). • Supervisor S (subject to supervisor constraints SC). • Command execution component CE. • Intruder I. In the following subsections, we shall explain how we model the above-mentioned five components. A. Edit function The set of observable events for the edit function is denoted as Σ o,E ⊆ Σ o , where Σ o denotes the set of observable events for the supervisor. The set of editable events for the edit function is denoted as Σ s,E ⊆ Σ o,E , that is, the edit function could only delete, insert, and replace events in Σ s,E . The basic assumptions of the edit function in this work are given as follows: • The edit function could implement insertion, deletion, and replacement operations. • In Fig. 2, any event σ ∈ Σ o,E fired by the plant G will first be observed by the edit function; then the output of the edit function, if observable to the intruder, would be eavesdropped by the intruder. • The edit function carries out an edit operation each time it observes some event in Σ o,E . Each time the edit function observes one event in Σ o,E , the number of events that it can simultaneously send to the supervisor is bounded by U , i.e., we consider a bounded edit function.
• The edit action initiated by the edit function is instantaneous. Next, we shall introduce two models that will be used in this work: 1) edit constraints; 2) edit function, where the former one serves as a "template" to describe the capabilities of the edit function and the latter one is the edit function that we aim to synthesize. Edit Constraints: The edit constraints is modeled as a finite state automaton EC, which is shown in Fig. 3. ec } The (partial) transition function ξ ec is defined as follows: 5. For any n ∈ [0 : U ], ξ ec (q n , stop) = q init ec . Next, we shall present some explanations for the model EC. In the state set Q ec , • q init ec is the initial state. It is a state denoting that 1) no edit operation has been conducted, or 2) the edit function has not observed any event σ ∈ Σ o,E since the end of the last edit operation. • q n (n ∈ [0 : U ]) is a state denoting that the edit function has sent n events since it observes some event 1 . Specifically, at the state q 0 , the edit function could either delete the observed event or replace the observed event with any editable event. At the state q n (n ∈ [1 : U − 1]), the edit function could insert any editable event or end the current round of edit operation. At the state q U , the edit function must end the current round of edit operation and cannot insert editable events anymore since the upper bound of the output is U . In the event set Σ ec = Σ∪Σ # s,E ∪Γ∪{stop, decode}, σ ∈ Σ denotes the firing of σ by plant G, any σ # ∈ Σ # s,E denotes the event of sending an editable event σ ∈ Σ s,E by the edit function, and the event stop denotes the end of current round of edit operation, which can be controlled and observed by the edit function. Any element in Γ ∪ {decode} denotes an event happening in other three components: supervisor, command execution component and intruder. Intuitively speaking, any element in Γ denotes a control command issued by the supervisor and the event decode denotes that the secret state of plant G has been inferred by the intruder; these will be introduced later in Section III-B, III-C, and III-E. All of the events in Γ ∪ {decode} are assumed to be unobservable and uncontrollable to the edit function in this work. For the (partial) transition function ξ ec , • Case 1 says that, at state q init ec , if any event σ ∈ (Σ − Σ o,E ) ∪ Γ ∪ {decode} happens, the edit function will not carry out any edit operation since it cannot observe σ, and such event will lead to a self-loop. • Case 2 says that, at state q init ec , after the edit function observes any event σ ∈ Σ s,E , it would transit to state q 0 , at which it could either delete σ or replace σ with any editable event in Σ s,E . • Case 3 says that, at state q init ec , after the edit function observes any event σ ∈ Σ o,E − Σ s,E , it would transit to state q 1 and let σ pass because it cannot edit σ. Since the number of events that the edit function can simultaneously send after it observes one event in Σ o,E is upper bounded by U , the edit function could still insert at most U − 1 events in Σ s,E after observing σ. 2 • Case 4 says that at any state q n (n ∈ [0 : U − 1]), the edit function could insert any editable event σ ∈ Σ s,E . Since the number of the events that can be sent by the edit function after observing some event is upper bounded by U , at the state q U , the edit function cannot insert any editable event. 
• Case 5 says that at any state q n (n ∈ [0 : U ]), the edit function could end the current round of the edit operation and transit back to the initial state q init ec with the event stop. Based on the model of EC, it can be seen that, in the output event set of the edit function, all the events in Σ s,E are relabelled as the copies in Σ # s,E by attaching the superscript "#". Based on the model of EC, we have |Q ec | = U + 2. 2 In this work, we shall count σ ∈ Σ o,E −Σ s,E in the events sent by the edit function. If readers prefer to not count σ in the events sent by the edit function, then only minor modifications are needed. One possible way is to replace the transition ξec(q init ec , σ) = q 1 with the transition ξec(q init ec , σ) = q 0 . Edit Function: The edit function is modeled as a finite state automaton E. where Σ e = Σ ec = Σ∪Σ # s,E ∪Γ∪{stop, decode}, that satisfies the following constraints: • (E-controllability) For any state q ∈ Q e and any σ ∈ Σ e,uc : E-controllability states that the edit function can only disable events in Σ e,c = Σ # s,E ∪ {stop}. E-observability states that the edit function can only make a state change after observing In this work, by construction, all the controllable events for the edit function are also observable to the edit function. In the following text, we shall refer to (Σ e,c , Σ e,o ) as the edit function-control constraint. B. Supervisor In this part, we shall introduce two models that will be used in this work: 1) supervisor constraints; 2) supervisor, where the former one serves as a "template" to describe the capabilities of the supervisor and the latter one is the supervisor that we aim to synthesize. Supervisor Constraints: Firstly, due to the existence of the edit function, all the events in Σ s,E are relabelled in the output of the edit function, resulting in that the set of observed events by the supervisor is Then, the supervisor constraints is modeled as a finite state automaton SC, which is shown in Fig. 4. Intuitively speaking, when the system initiates, the supervisor could issue an initial control command without observing any event in (Σ o − Σ s,E ) ∪ Σ # s,E . Then, the supervisor could issue a new control command again only after it has observed at least one event in In this work, we impose the natural assumption that the issued control command is observable to the supervisor. Fig. 4: The (schematic) model for supervisor constraints SC = (Q sc , Σ sc , ξ sc , q init sc ) The (partial) transition function ξ sc is defined as follows: Next, we shall present some explanations for the model SC. In the state set Q sc , • q init sc is the initial state. It is a state denoting that 1) the supervisor has not issued any control command since the system initiates, or 2) the supervisor has observed at least s,E since it issues the last control command. At state q init sc , the supervisor could choose to either wait for the next observable event in s,E or issue a control command. • q issue is a state denoting that the supervisor has just issued a control command. At this state, the supervisor will wait for the next observable event in In the event set, any γ ∈ Γ denotes the event of issuing a control command γ by the supervisor. For the (partial) transition function ξ sc , • Cases 1 and 2 say that, at state q init sc , the supervisor would make a transition to state q issue only after it issues a control command γ ∈ Γ. If any other event σ ∈ Σ sc −Γ happens, the supervisor would only do a selfloop transition. 
These two cases model the situation that the supervisor could either immediately issue a control command or wait for the next observation when 1) the supervisor has not issued any control command since the system initiates, or 2) the supervisor has observed at least s,E since it issues the last control command. • Cases 3 and 4 say that, at state q issue , since the supervisor has just issued a control command, it would not issue a control command again until receiving a new observation. Thus, at state q issue , no γ ∈ Γ is defined and the supervisor would make a transition to state q init sc only after it observes an event If any other event σ ∈ Σ uo ∪ Σ s,E ∪ {stop, decode} happens, the supervisor would only do a self-loop transition since such events are unobservable to the supervisor. Based on the model of SC, we have |Q sc | = 2. Supervisor: The supervisor is modeled as a finite state automaton S. • (S-observability) For any state q ∈ Q s and any σ ∈ Σ s,uo : S-controllability states that the supervisor can only disable events in Σ s,c = Γ. S-observability states that the supervisor can only make a state change after observing events in In this work, by construction, all the controllable events for the supervisor are also observable to the supervisor. In the following text, we shall refer to (Σ s,c , Σ s,o ) as the supervisor-control constraint. C. Command execution component The command execution automaton serves to explicitly describe the execution phase of a control command, where the procedure from using a control command to executing an event is shown. Since the number of all possible control commands issued by the supervisor is finite 3 , the command execution component can be modeled as a finite state automaton CE, which is illustrated in Fig. 5. ce } The (partial) transition function ξ ce is defined as follows: ce . 4. For any σ ∈ Σ uc , ξ ce (q init ce , σ) = q init ce . Next, we shall present some explanations for the model CE. In the state set Q ce , • q init ce is a state denoting that the command execution component is not using any control command. At this state, the command execution component is waiting for the arrival of a control command issued by the supervisor. It is noteworthy that at state q init ce , any uncontrollable event is always allowed to be executed. • q γ ∈ Q ce (γ ∈ Γ) is a state denoting that the command execution component has just received the control command γ. For the (partial) transition function ξ ce , • Case 1 says that, at state q init ce , if the command execution component receives a control command γ issued by the supervisor, then it will transit to the state q γ and be ready to use γ. • Case 2 says that, at state q γ , if any event σ ∈ (γ ∪ Σ uc ) ∩ Σ uo is executed by the command execution component, then the command execution component will reuse the control command γ. • Case 3 says that, at state q γ , if any event σ ∈ (γ ∪ Σ uc ) ∩ Σ o is executed by the command execution component, then it will transit back to the state q init ce and wait for the next control command to be issued by the supervisor. • Case 4 says that, at state q init ce , any uncontrollable event σ ∈ Σ uc can be executed since uncontrollable events are always allowed to be fired. Based on the model of CE, we have |Q ce | = 2 |Σc| . D. 
Plant Plant G is modeled as a finite state automaton where the set of secret states in plant G is denoted as Q sec ⊆ Q, the set of bad states to avoid in the plant G is denoted as Q avoid ⊆ Q, the set of blocking states in plant G is denoted as In this work, we consider current-state opacity (CSO), which is defined as follows. Definition E. Intruder As illustrated in Fig. 2, the intruder is an external observer that aims to infer the system secret based on its observations. In this work, the assumptions about the intruder are given as follows: • The set of observable events for the intruder is denoted as Σ o,I ⊆ Σ. It is noteworthy that Σ o,I might be different from Σ o , the set of observable events for the supervisor, and Σ o,E , the set of observable events for the edit function. • The intruder only has the full knowledge of the structure of the plant G and does not know the model of the supervisor and the specification. Based on the above assumptions, it is known that, • Due to the existence of the edit function, all the events in Σ s,E have been relabelled as the copies in Σ # s,E . Thus, in the modeling, the intruder could only observe events • Since the structure of the plant G is a prior knowledge of the intruder, the intruder is able to compare its online observation sequences during the system running with the ones that could have been observed under the absence of an edit function. Once the information inconsistency happens, the intruder will conclude the existence of the edit function. Thus, under the supervision of S and in the presence of E, the following goals should be achieved: 1. Plant G would never reach the state in Q avoid and the closed-loop system behavior is nonblocking. 2. The intruder would never infer that plant G has reached a secret state in Q sec . 3. The existence of the edit function is never exposed to the intruder. Next, we shall explain how to model the intruder, which consists of the following two steps. Step 1: On one hand, the intruder could only observe events in Σ o,I . On the other hand, the intruder could discover the existence of an edit function based on its online observations. To capture the above-mentioned two features of the intruder, we construct the following finite state automaton is essentially a state estimator, where we have the following two facts: 1) once P Σ o,I (G) transits to a state in 2 Qsec − {∅}, the intruder infers the secret state of the plant G; 2) once P Σ o,I (G) transits to state ∅, the existence of the edit function is discovered by the intruder. Step 2: We shall make some minor modifications on P Σ o,I (G) due to the following reasons: 1) The intruder can infer the secret state of plant G after P Σ o,I (G) transits to a state in 2 Qsec − {∅}. We model it in such a way that once the intruder infers that the plant G has reached a secret state, it would transit to a new state q unsaf e by a new transition labelled with the event decode, which will be introduced below; 2) Since the intruder could only observe events in (Σ o,I − Σ s,E ) ∪ (Σ o,I ∩ Σ s,E ) # , we need to replace any σ ∈ Σ s,E by σ # in the model of the intruder. Then, based on P Σ o,I (G), the model of the intruder I is generated by the following procedure. We shall give some explanations for the above procedure. 
• In Step 1 and Step 2, a new state q unsaf e and a new event decode, denoting that the intruder infers that plant G has reached a secret state, is added to the state set and the event set, respectively, which generates the new state set Q i and new event set Σ i , respectively. • In Step 3, to encode the situation that the intruder infers that plant G has reached a secret state at each state q ∈ 2 Qsec −{∅}, a new outgoing transition is added to state q, such that q ∈ 2 Qsec −{∅} ⇔ ξ i (q, decode) = q unsaf e . In this case, as long as the intruder transits to state q unsaf e , the secret state of plant G has been inferred. Since the event decode is uncontrollable to the edit function and the supervisor, thus, to enforce current-state opacity, the intruder should never transit to any state q ∈ 2 Qsec −{∅} under the supervision of S in the presence of E. • In Steps 4 and 5, all the transitions labelled by events in Σ s,E are replaced with the relabelled copies in Σ # s,E while other transitions remain the same. • In Step 6, any event in (Σ − Σ s,E ) ∪ Σ # s,E is defined as a self-loop at state ∅ or q unsaf e since now any event execution or any further observation at the intruder would not change the fact that it has already either inferred the system secret or discovered the existence of the edit function. Based on the constructed model of the intruder, from the point view of the edit function and supervisor, it should avoid the transitions to state ∅ and q unsaf e in I. Based on the model of I, we have |Q i | ≤ 2 |Q| + 1. IV. CO-SYNTHESIS OF EDIT FUNCTION AND SUPERVISOR FOR OPACITY ENFORCEMENT In this section, firstly, based on the component models presented in Section III, we shall formalize the closed-loop behavior of the system under edit function, supervisor, and intruder. Based on the closed-loop behavior, we shall introduce several definitions, including opacity and covertness. Then, we shall solve the co-synthesis problem of edit function and supervisor for opacity enforcement by modeling it as a distributed supervisor synthesis problem in the Ramadge-Wonham supervisory control framework. A. Solution Methodology In Fig. 2, given the plant G, the command execution component CE, the edit constraints EC, the supervisor constraints SC, the intruder I, the edit function E, and the supervisor S, the closed-loop system, defined as B, is the synchronous product given as follows: Next, we shall further explain our approach in modeling the problem of co-synthesis of edit function and supervisor for opacity enforcement as a distributed Ramadge-Wonham supervisory control problem. Since the closed-loop system is B = G||CE||EC||SC||I||E||S, we can view P = G||CE||EC||SC||I = (Q P , Σ P , ξ P , q init P , Q P,m ) as the new plant and treat E and S as the distributed supervisor to be synthesized over the distributed control architecture A = ((Σ e,c , Σ e,o ), (Σ s,c , Σ s,o )). Our goal is to synthesize E and S such that • B is nonblocking and plant G would never reach any state in Q avoid . • E combined with S is an opaque edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. • E combined with S is a covert edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. 
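As an illustration of the intruder construction described above (the estimator P over Σ o,I , the decode transitions into q unsafe, the relabelling of editable events with "#", and the absorbing states ∅ and q unsafe), a small self-contained Python sketch is given below. It is a simplified reconstruction under our own naming conventions, not the authors' code; in particular, the representation of estimates as frozensets and all helper names are assumptions.

```python
# Hedged, self-contained sketch (illustrative names) of the intruder model I:
# a subset-construction estimator over Sigma_o,I, a 'decode' transition into
# q_unsafe from every non-empty estimate made up of secret states only,
# relabelling of editable events with '#', and absorbing self-loops at the
# empty estimate and at q_unsafe.
def unobs_reach(xi, states, unobs):
    todo, reach = list(states), set(states)
    while todo:
        q = todo.pop()
        for (p, s), r in xi.items():
            if p == q and s in unobs and r not in reach:
                reach.add(r); todo.append(r)
    return frozenset(reach)

def estimator(xi, q0, Sigma, Sigma_oI):
    """Estimates advance on events observable to the intruder, stay put otherwise."""
    unobs = Sigma - Sigma_oI
    init = unobs_reach(xi, {q0}, unobs)
    trans, todo, seen = {}, [init], {init}
    while todo:
        Q = todo.pop()
        for s in Sigma:
            if s in Sigma_oI:
                step = {xi[(q, s)] for q in Q if (q, s) in xi}
                if not step:
                    continue
                Qn = unobs_reach(xi, step, unobs)
            else:
                Qn = Q
            trans[(Q, s)] = Qn
            if Qn not in seen:
                seen.add(Qn); todo.append(Qn)
    return init, trans, seen

def intruder(xi, q0, Sigma, Sigma_oI, Sigma_sE, Q_sec):
    init, est, estimates = estimator(xi, q0, Sigma, Sigma_oI)
    relabel = lambda s: s + "#" if s in Sigma_sE else s
    xi_i = {(Q, relabel(s)): Qn for (Q, s), Qn in est.items()}  # relabel editable events
    for Q in estimates:                                         # decode at secret-only estimates
        if Q and set(Q) <= set(Q_sec):
            xi_i[(Q, "decode")] = "q_unsafe"
    loop = {relabel(s) for s in Sigma}
    for q in (frozenset(), "q_unsafe"):                         # absorbing states
        for s in loop:
            xi_i[(q, s)] = q
    return init, xi_i

# toy plant: 0 --a--> 1(secret) --a--> 0, everything observable and editable
init, xi_i = intruder({(0, "a"): 1, (1, "a"): 0}, 0, {"a"}, {"a"}, {"a"}, {1})
print(init, (frozenset({1}), "decode") in xi_i)   # frozenset({0}) True
```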
Before we present our heuristic for solving the co-synthesis problem, we briefly discussed about some previous approaches for addressing the distributed supervisor synthesis problem, which is known to be undecidable in general [45]- [47]: 1) [48] proposes a distributed synthesis approach by adopting an coordinator, which receives part of the partial observations of the subsystems and serves to satisfy the global specification and nonblockingness. Nevertheless, in the architecture shown in Fig. 2, we do not allow such a coordinator for the privacypreserving control problem. Thus, the approach in [48] is not applicable for the synthesis problem to be solved in our work. 2) [49] summarizes the supervisor localization algorithm for the distributed control for DES. However, this algorithm often needs to lift the observation alphabets of the local supervisors, which is not suitable for the distributed synthesis problem to be solved in our work, which has a fixed distributed control architecture. 3) [50] proposes an aggregative synthesis approach that computes nonblocking distributed supervisors, which always tries to synthesize a nonblocking supervisor at each step. However, for the distributed supervisor synthesis problem to be solved in this work, since the events denoting the edit operations (respectively, the sending of control commands) are uncontrollable to the supervisor (respectively, edit function), no matter whether we synthesize S or E first, the algorithm in [50] is very likely to output an empty solution at the first step. In this work, we take the special structure of this distributed supervisor synthesis problem into consideration and propose two heuristics to generate the desired E and S to achieve the safety, opacity, covertness, and nonblockingness goal, where one heuristic first synthesizes S and then synthesizes E, and the other heuristic first synthesizes E and then synthesizes S. The details of these two heuristics would be explained in Section IV-B and Section IV-C, respectively. B. Incremental synthesis: first S and then E In this heuristic, we first synthesize the supervisor S to ensure the safety of G and the marker-reachability of the closed-loop system, and then we synthesize the edit function E to ensure the opacity, covertness and nonblockingness. The details of the synthesis procedure are as follows: Procedure 1: 1. Compute P = G||CE||EC||SC||I = (Q P , Σ P , ξ P , q init P , Q P,m ). Generate Synthesize a supervisor S 0 = (Q s,0 , Σ s,0 , ξ s,0 , q init s,0 , Q s,0,m ) over the supervisor-control constraint (Σ s,c , Σ s,o ) by treating P as the plant and P 0 S as the requirement such that P||S 0 is marker-reachable and safe w.r.t. P 0 S . If S 0 exists, go to Step 4; otherwise, end the procedure. If Q del = ∅, go to Step 7; otherwise, go to Step 9 and treat S k as the desired supervisor S, denoted as S := S k = (Q s , Σ s , ξ s , q init s , Q s,m ). q init s,k+1 , Q s,k+1,m ) over the supervisor-control constraint (Σ s,c , Σ s,o ) by treating P as the plant and P k+1 S as the requirement such that P||S k+1 is marker-reachable and safe w.r.t. P k+1 S . If S k+1 exists, let k ← k + 1 and go to Step 5; otherwise, end the procedure. . Synthesize a supervisor E = (Q e , Σ e , ξ e , q init e , Q e,m ) over the edit function-control constraint (Σ e,c , Σ e,o ) by treating P E as the plant and P r E as the requirement such that P E ||E is nonblocking and safe w.r.t. P r E . 
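The control flow of Procedure 1 can be summarized by the following Python-style outline. It is a sketch only: sync, remove_states, synth_safe_reachable, synth_safe_nonblocking, and the state-classification helpers are unimplemented placeholders standing in for standard supervisory-synthesis primitives, not the authors' actual routines, and the function is not meant to be executed as-is.

```python
# Outline of Procedure 1 (first synthesize S, then E). Sketch only: the helper
# functions are unimplemented placeholders for standard synthesis primitives.
def procedure_1(G, CE, EC, SC, I, sup_constraint, edit_constraint):
    P = sync(G, CE, EC, SC, I)                          # Step 1: the new plant
    req_S = remove_states(P, bad_states_for_S(P))       # Step 2: strip Q1, Q2, Q3
    S = synth_safe_reachable(P, req_S, sup_constraint)  # Step 3: marker-reachable + safe
    if S is None:
        return None
    while True:                                         # Steps 5-8: refine until no blocking of G
        closed = sync(P, S)
        Q_del = {q for q in states(req_S)
                 if blocks_plant(closed, q) or not visited(closed, q)}
        if not Q_del:
            break                                       # S is the desired supervisor
        req_S = remove_states(req_S, Q_del)             # Step 7: shrink the requirement
        S = synth_safe_reachable(P, req_S, sup_constraint)
        if S is None:
            return None
    P_E = sync(G, CE, EC, SC, I, S)                     # Step 9: plant seen by the edit function
    req_E = remove_states(P_E, intruder_unsafe(P_E))    # Step 10: drop q_unsafe / empty-estimate states
    E = synth_safe_nonblocking(P_E, req_E, edit_constraint)  # Step 11: opacity + covertness + nonblocking
    return None if E is None else (E, S)
```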
In the above procedure, Steps 1-8 are dedicated to synthesizing the supervisor and Steps 9-11 are dedicated to synthesizing the edit function based on the synthesized supervisor. In the part regarding the synthesis of the supervisor, for Steps 1-3, P and P 0 S are treated as the plant and the requirement, respectively. The requirement P 0 S is generated by removing three kinds of states in P: 1) the states where the plant G reaches a state in Q avoid , denoted by Q 1 in Step 2; 2) the states where the plant G reaches a state in Q block , denoted by Q 2 in Step 2; 3) the states where the plant G has not reached a marker state in Q m meanwhile the command execution component is using a control command γ such that γ ∪ Σ uc has no intersection with the enabled events of current state of G, denoted by Q 3 in Step 2. We need to delete such states because: 1) the first kind of states are the "bad" states that are not allowed by the user requirement and they should be avoided; 2) the second kind of states are those states where the nonblockingness goal of G already cannot be satisfied; 3) the third kind of states are those deadlocked, non-marked states where the supervisor issues some control commands that cannot be used by the plant. Based on P 0 S , at Step 3, we compute the supervisor S 0 that could ensure the safety w.r.t. P 0 S and the marker-reachability. At this step, the nonblockingness of the closed-loop system is hard to ensure since the events denoting edit operations are uncontrollable to the supervisor and can easily cause blockingness. It is noteworthy that although S 0 could ensure the reachability of some marker states in the closed-loop system behavior, it is still possible that the blockingness 4 can happen in the plant G under the supervision of S 0 . If so, then the nonblockingness of the closed-loop system behavior can be hard to ensure when we synthesize the edit function based on S 0 , since the events denoting the sending of control commands by the supervisor are uncontrollable to the edit function. Thus, to improve the possibility of finding a non-empty edit function, we need to iteratively perform the synthesis until the blockingness would not happen in the plant G under the supervision of such a supervisor. We shall refer to Step 1-3 as the 0-th iteration. The iterative computations are given in Steps 5-8: Firstly, for the (k+1)-th iteration, at Step 5, we compute the synchronous product of P and S k synthesized at the k-th iteration. Then, at Step 6, we identify the state q ∈ Q P k S that satisfy one of the following conditions: • For any state in P||S k , the tuple consisting of the first five terms of this state is not equal to q, denoted by (q, q s ) / ∈ Q P||S k . The first condition corresponds to the situation that blockingness happens in terms of the behavior of plant G under the supervision of S k . The second condition corresponds to the situation that P would not transit to state q under the supervision of S k , thus, such state q should also be avoided in the requirement at the (k + 1)-th iteration. Any state in Q P k S satisfying the above two conditions would be contained in Q del . If Q del = ∅, then we need to remove such states in the requirement P k S to generate a new requirement P k+1 S at Step 7, based on which we compute S k+1 to ensure safety and reachability at Step 8. If Q del = ∅, then S k is the desired supervisor and the procedure moves to Step 9. 
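The two removal conditions that define Q del in Step 6 amount to simple reachability checks. The sketch below is a simplification under stated assumptions: automata are represented as plain adjacency dictionaries, plant_component_of maps each requirement state to its G-component, and blocking of the plant is approximated by marker-unreachability in the controlled plant graph; all names are ours, not the authors'.

```python
# Hedged sketch of the Q_del test in Steps 5-6: a requirement state is marked
# for removal if it never appears in P||S_k, or if the G-component of the
# closed loop can no longer reach a marker state from it.
def can_reach(adj, start, targets):
    todo, seen = [start], {start}
    while todo:
        q = todo.pop()
        if q in targets:
            return True
        for r in adj.get(q, ()):
            if r not in seen:
                seen.add(r); todo.append(r)
    return False

def compute_Q_del(req_states, states_visited_under_S, controlled_plant_adj,
                  plant_marked, plant_component_of):
    out = set()
    for q in req_states:
        never_visited = q not in states_visited_under_S
        plant_blocked = not can_reach(controlled_plant_adj,
                                      plant_component_of[q], plant_marked)
        if never_visited or plant_blocked:
            out.add(q)
    return out

# toy usage: two requirement states mapping to plant states 0 and 1;
# only plant state 0 can still reach the marker state 2
adj = {0: [2], 1: []}
print(compute_Q_del({"r1", "r2"}, {"r1", "r2"}, adj, {2}, {"r1": 0, "r2": 1}))  # {'r2'}
```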
In the part regarding the synthesis of the edit function (Steps 9-11), P E and P r E are treated as the plant and requirement, respectively. The plant P E is generated based on the synthesized supervisor S in Steps 1-8. The requirement P r E is generated from P E by removing the states where the intruder reaches the state q unsaf e or ∅, implying that either the intruder has inferred the system secret or the existence of the edit function has been discovered, both of which should be avoided by the edit function. Finally, we compute the edit function E that could satisfy the opacity, covertness, and nonblockingness at Step 11. Theorem IV.1: Given any plant G, command execution component CE, edit constraints EC, supervisor constraints SC, and intruder I, Procedure 1 terminates within finite steps. Proof: To show this, we only need to check whether the iterative computation in Steps 5-8 can terminate within finite steps. Since the continuation of the iteration at Step 6 depends on whether Q del is equal to ∅, the worst case is that only one state is removed from P k S at each iteration. In addition, P 0 S is a finite state automaton, which implies that the iterative computation in Steps 5-8 can always terminate within finite steps. This completes the proof. Theorem IV.2: Given any plant G, command execution component CE, edit constraints EC, supervisor constraints SC, and intruder I, the computed S and E in Procedure 1, if not empty, satisfy the following properties: • G||CE||EC||SC||I||E||S is nonblocking and any state in {(q, q ce , q ec , q sc , q i , q e , q s ) • E combined with S is an opaque edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. • E combined with S is a covert edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. Proof: Based on Step 11 of Procedure 1, the synthesized E should satisfy that P E ||E is nonblocking, that is, G||CE||EC||SC||I||E||S is nonblocking. Based on Step 2 in Procedure 1, the set of states Q 1 has been removed in the requirement P 0 S , i.e., they are treated as "bad" states in the synthesis of S 0 . Thus, any state in {(q, q ce , q ec , q sc , q i , q e , q s ) ∈ Q × Q ce × Q ec × Q sc × Q i × Q e × Q s | q ∈ Q avoid } is not reachable in G||CE||EC||SC||I||E||S. In addition, the set of states Q 4 := {(q, q ce , q ec , q sc , q i , q s ) ∈ Q P E | q i = q unsaf e ∨ q i = ∅} has been removed in the requirement P r E , i.e., they are treated as "bad" states in the synthesis of E. Thus, any state in {(q, q ce , q ec , q sc , q i , q e , q s ) ∈ Q × Q ce × Q ec × Q sc × Q i × Q e × Q s | q i = q unsaf e ∨ q i = ∅} is not reachable in the closed-loop system G||CE||EC||SC||I||E||S. Based on the definition IV.1 and IV.2, E combined with S is an opaque and covert edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. This completes the proof. Next, we shall analyze the computational complexity of Procedure 1. In Steps 5-8, the worst case is that only one state is removed from the requirement at each iteration. Thus, by adopting the normality based synthesis approach in [51], the complexity is O(|Σ P |2 |Q P | + · · · + |Σ P | × 2 + |Σ P ||Q P E | 2 4 |Q P E | ) C. 
Incremental synthesis: first E and then S In this heuristic, we first synthesize the edit function E to ensure the opacity, covertness and the marker-reachability of closed-loop system. Then, we synthesize the supervisor S to ensure the safety and nonblockingness. The details of the synthesis procedure are as follows: Procedure 2: , Q e,m ) over the edit function-control constraint (Σ e,c , Σ e,o ) by treating P as the plant and P r E as the requirement such that P||E is marker-reachable and safe w.r.t. P r E . If E exists, go to Step 4; otherwise, end the procedure. 4. Compute P S = G||CE||EC||SC||I||E = (Q P S , Σ P S , ξ P S , q init P S , Q P S ,m ). 5. Generate P r S := (Q P r S , Σ P r S , ξ P r S , q init P r S , Q P r S ,m ) • Q P r S := Q P S − Q 6 -Q 6 := {(q, q ce , q ec , q sc , q i , q e ) ∈ Q P S | q ∈ Q avoid } • Σ P r S := Σ P S • (∀q, q ∈ Q P r S )(∀σ ∈ Σ P r S ) ξ P S (q, σ) = q ⇔ ξ P r S (q, σ) = q • q init P r S := q init P S • Q P r S ,m := Q P S ,m − Q 6 6. Synthesize a supervisor S = (Q s , Σ s , ξ s , q init s , Q s,m ) over the supervisor-control constraint (Σ s,c , Σ s,o ) by treating P S as the plant and P r S as the requirement such that P S ||S is nonblocking and safe w.r.t. P r S . In the above procedure, Steps 1-3 focus on the synthesis of the edit function E and Steps 4-6 focus on the synthesis of the supervisor S. In the part regarding the synthesis of edit function, P and P r E are treated as the plant and the requirement, respectively. The requirement P r E is generated from P by removing the states where the intruder reaches the state q unsaf e or ∅. Then, at Step 3, we compute the edit function E that can ensure the safety w.r.t. P r E and the markerreachability. At this step, the nonblockingness of the closedloop system is hard to ensure since the sending of control commands by the supervisor is uncontrollable to the edit function. In the part regarding the synthesis of supervisor, P S and P r S are treated as the plant and the requirement, respectively. The plant P S is generated based on the edit function E synthesized in Steps 1-3. At Step 5, the requirement P r S is generated from P S by removing the set of states Q 6 , which is not allowed by the user requirement. Finally, we compute the supervisor S that can ensure the safety w.r.t. P r S and nonblockingness at Step 6. Procedure 2 clearly terminates within finite steps. Theorem IV.3: Given any plant G, command execution component CE, edit constraints EC, supervisor constraints SC, and intruder I, the computed E in Procedure 2, if not empty, with any supervisor S = ( Q s , Σ s , ξ s , q init s , Q s,m ) is an opaque and covert edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. Proof: Based on Step 2 in Procedure 2, the set of states Q 5 has been removed in the requirement Q P r E , i.e., they are treated as "bad" states in the synthesis of E. Thus, any state in {(q, q ce , q ec , q sc , q i , q e ) ∈ Q×Q ce ×Q ec ×Q sc ×Q i ×Q e | q i = q unsaf e ∨ q i = ∅} is not reachable in P||E, which means that any state in {(q, q ce , q ec , q sc , q i , q e , q s ) Based on the definition IV.1 and definition IV.2, the proof is completed. 
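For comparison with the outline of Procedure 1 given earlier, the control flow of Procedure 2 can be sketched as follows; the helper names are again unimplemented placeholders rather than the authors' routines, so the snippet is an outline, not runnable code.

```python
# Outline of Procedure 2 (first synthesize E, then S). Sketch only; helper
# names are unimplemented placeholders, as in the outline of Procedure 1.
def procedure_2(G, CE, EC, SC, I, sup_constraint, edit_constraint):
    P = sync(G, CE, EC, SC, I)                              # Step 1
    req_E = remove_states(P, intruder_unsafe(P))            # Step 2: drop q_unsafe / empty-estimate states
    E = synth_safe_reachable(P, req_E, edit_constraint)     # Step 3: opacity + covertness + marker-reachability
    if E is None:
        return None
    P_S = sync(G, CE, EC, SC, I, E)                         # Step 4: plant seen by the supervisor
    req_S = remove_states(P_S, avoid_states(P_S))           # Step 5: drop configurations with q in Q_avoid
    S = synth_safe_nonblocking(P_S, req_S, sup_constraint)  # Step 6: safety + nonblockingness
    return None if S is None else (E, S)
```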
Theorem IV.4: Given any plant G, command execution component CE, edit constraints EC, supervisor constraints SC, and intruder I, the computed E and S in Procedure 2, if not empty, could satisfy the following goals: • G||CE||EC||SC||I||E||S is nonblocking and any state in • E combined with S is an opaque edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. • E combined with S is a covert edit function-supervisor pair w.r.t. the edit function-control constraint (Σ e,c , Σ e,o ) and supervisor-control constraint (Σ s,c , Σ s,o ) for G, CE, EC, SC, and I. Proof: Based on Step 6 of Procedure 2, the synthesized S should satisfy that P S ||S is nonblocking, that is, G||CE||EC||SC||I||E||S is nonblocking. In addition, at Step 5, the set of states Q 6 has been removed from P r S , i.e., they are treated as "bad" states in the synthesis of S. Thus, any state in {(q, q ce , q ec , q sc , q i , q e , q s ) ∈ Q × Q ce × Q ec × Q sc × Q i × Q e × Q s | q ∈ Q avoid } is not reachable in P S ||S = G||CE||EC||SC||I||E||S. Based on Theorem IV.3, the computed E in Procedure 2 with any supervisor is an opaque and covert edit function-supervisor pair, thus, the computed E with the computed S in Procedure 2 is also an opaque and covert edit function-supervisor pair. This completes the proof. Next, we shall analyze the computational complexity of Procedure 2. By adopting the normality based synthesis approach in [51], the complexity is In this section, we shall present an example to show the effectiveness of the proposed method to synthesize the edit function and the supervisor for opacity enforcement in the supervisory control of discrete-event systems. Example 5. 1 We adapt the location-based privacy example of [44] for an illustration. In this example, a batch of confidential experiment devices are transported by an autonomous vehicle to the EEE building of the Nanyang Technological University. After completing the transportation task, the vehicle is required to leave the campus. The location of the vehicle is obtained based on the Global Positioning System (GPS) and the location information acquisition channel is eavesdropped by the intruder whose target is to infer whether the confidential experiment devices have been transported to the EEE building. The Nanyang Technological University campus map is shown in Fig. 6, where we discretize the model by selecting seven locations as states, marked by blue and red circles, and several connection routes between those locations, marked by blue lines. Location (state) 5 represents the EEE building, which is the secret location (state) that the intruder intends to infer. Plant G is shown in Fig. 7, where the state 5 is the secret state. The requirement of G is shown in Fig. 8. Command execution automaton CE is shown in Fig. 9. Edit constraints EC is shown in Fig. 10. Supervisor constraints SC is shown in Fig. 11. Intruder I is shown in Fig. 12. Fig. 9: Command execution CE We use SuSyNA [53] to synthesize the edit function and the supervisor based on the procedures proposed in Section IV-B and IV-C. By adopting the incremental synthesis from S to E, the synthesized supervisor and edit function are shown in Fig. 13. By adopting the incremental synthesis from E to S, the synthesized edit function and supervisor are shown in Fig. 14. 
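The goals verified for the example (a nonblocking closed loop, avoidance of Q avoid, and the intruder never reaching q unsafe or the empty estimate) reduce to reachability checks on the composed automaton. The following self-contained sketch shows one way such checks could be automated on plain adjacency-dict graphs; it is illustrative only and not tied to SuSyNA or the authors' tooling.

```python
# Self-contained sketch of the closed-loop checks: reachability, nonblockingness
# (every reachable state can reach a marker state) and avoidance of bad states.
def reachable(adj, init):
    todo, seen = [init], {init}
    while todo:
        q = todo.pop()
        for r in adj.get(q, ()):
            if r not in seen:
                seen.add(r); todo.append(r)
    return seen

def nonblocking(adj, init, marked):
    coreachable = set(marked)                 # backward closure of the marker states
    changed = True
    while changed:
        changed = False
        for q, succ in adj.items():
            if q not in coreachable and any(r in coreachable for r in succ):
                coreachable.add(q); changed = True
    return reachable(adj, init) <= coreachable

def avoids(adj, init, bad):
    return reachable(adj, init).isdisjoint(bad)

# toy closed-loop graph: 0 <-> 1, state 0 marked, state 9 bad but unreachable
adj = {0: [1], 1: [0]}
print(nonblocking(adj, 0, {0}), avoids(adj, 0, {9}))   # True True
```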
In carrying out the incremental synthesis from E to S, we slightly restrict the capabilities of the edit function because the synthesized E at the first step might always delete the events in Σ o since Σ o = Σ o,E = Σ s,E in this example, resulting in the situation that the supervisor would not observe any information and then does not issue any control command. Thus, in this example, we shall implement the incremental synthesis from E to S by assuming that the edit function cannot delete editable events. In addition, since U = 1, the edit function can only replace editable events. It can be checked that both of the two synthesized results can achieve the following goals: 1) the closed-loop system is nonblocking and satisfy the requirement (see Fig. 8); 2) the intruder could never infer that plant G has reached the secret state; 3) the edit function always remains covert. Next, we shall present some explanations for the two synthesized results. In Fig. 13 (Fig. 14), under the cooperation of the synthesized E and S, the actual strategies adopted by S (E) are marked as the blue highlighted parts. The intuitive explanation of the two synthesized results is that: the edit function E would always alter the authentic sensor readings into different values to trick the intruder I such that I believes that the motion path of the autonomous vehicle is bcb (recall that the intruder does not know the specification), a path in the campus map that would not expose the secret state; meanwhile, based on the changed sensor readings, the supervisor S would issue the appropriate control command to guarantee that the true motion path of the vehicle is ac uo acb uc b, a path that could fulfill the specification K. The details of the synthesized strategies in Fig. 13 and 14 are as follows: In the first few steps, the strategies of the synthesized edit functions combined with supervisors by two incremental synthesis methods are the same. At the initial state, the supervisor S issues the initial control command v 1 or v 5 . After receiving the control command v 1 or v 5 , plant G would execute event a, which could be observed by the edit function E. Then E changes a to b # , triggering S to issue the control command v 5 or v 7 , after which plant G would execute event c uo . Since c uo ∈ Σ uo , v 5 or v 7 would be reused by plant G and event a is then executed. After observing a, E would change it to c # , triggering S to issue the control command v 3 or v 5 or v 6 or v 7 . Then the event c is executed by plant G. Afterwards, the strategies of the synthesized edit functions combined with supervisors by two incremental synthesis methods are different: 1. Incremental synthesis from S to E: After observing c, E would delete it, resulting in that S would not issue any control command. E would wait until the uncontrollable event b uc is fired by plant G, then it would change b uc to b # , triggering S to issue the control command v 2 or v 4 or v 6 or v 7 . Then event b is executed by plant G, after which E could either delete b or change b to anyone of a # and b # . In this case, what the intruder observes during the whole process is anyone of which would not break the opacity and covertness property. 2. Incremental synthesis from E to S: After observing c, E would change it to b # , which could be observed by S. Then, two situations might happen: a. The uncontrollable event b uc is fired immediately after E replaces c with b # , which preempts the event of issuing a control command by S. 
After observing b uc , E would replace it with b # , resulting in that S would observe b # again and issue the control command v 2 or v 4 or v 6 or v 7 ; b. The uncontrollable event b uc is not fired immediately after E replaces c with b # . Then S issues any control command from v 1 to v 7 , after which the event b uc is executed. The observation of b uc would trigger E to replace it with b # , resulting in that S would issue the control command v 2 or v 4 or v 6 or v 7 . Then event b is executed by plant G, after which E would change it to c # . In this case, what the intruder observes during the whole process is b # c # b # b # c # , which would not break the opacity and covertness property. To illustrate the advantage of our proposed incremental synthesis method, we also adopt the aggregative synthesis based approach proposed in [50] to synthesize the desired edit function and supervisor for this example. By directly using the make supervisor operation in SuSyNA, we always generate an empty distributed supervisor because the approach in [50] would always try to find a nonblocking local supervisor at each synthesis step, resulting in an empty solution even at the first synthesis step, no matter whether S or E is synthesized first. Liyong Lin received the B.E. degree and Ph.D. degree in electrical engineering in 2011 and 2016, respectively, both from Nanyang Technological University, where he has also worked as a project officer. From June 2016 to October 2017, he was a postdoctoral fellow at the University of Toronto. Since December 2017, he has been working as a research fellow at the Nanyang Technological University. His main research interests include supervisory control theory, formal methods and machine learning. He previously was an intern in the Data Storage Institute, Singapore, where he worked on single and dual-stage servomechanism of hard disk drives. Yuting Zhu received the B.S. degree from Southeast University, Jiangsu, China, in 2016. She is currently pursuing the Ph.D. degree with Nanyang Technological University, Singapore. Her research interests include networked control and cyber security of discrete event systems. Rong Su received the Bachelor of Engineering degree from University of Science and Technology of China in 1997, and the Master of Applied Science degree and PhD degree from University of Toronto, in 2000 and 2004, respectively. He was affiliated with University of Waterloo and Technical University of Eindhoven before he joined Nanyang Technological University in 2010. Currently, he is an associate professor in the School of Electrical and Electronic Engineering. Dr. Su's research interests include multi-agent systems, cyber security of discrete-event systems, supervisory control, model-based fault diagnosis, control and optimization in complex networked systems with applications in flexible manufacturing, intelligent transportation, human-robot interface, power management and green buildings. In the aforementioned areas he has more than 220 journal and conference publications, and 5 granted USA/Singapore patents. Dr. Su is a senior member of IEEE, and an associate editor for Automatica, Journal of Discrete Event Dynamic Systems: Theory and Applications, and Journal of Control and Decision. 
He was the chair of the Technical Committee on Smart Cities in the IEEE Control Systems Society in 2016-2019, and is currently the chair of IEEE Control Systems Chapter, Singapore, and a co-chair of IEEE Robotic and Automation Society Technical Committee on Automation in Logistics.
Formation of orogenic gold deposits by progressive movement of a fault-fracture mesh through the upper crustal brittle-ductile transition zone Orogenic gold deposits are comprised of complex quartz vein arrays that form as a result of fluid flow along transcrustal fault zones in active orogenic belts. Mineral precipitation in these deposits occurs under variable pressure conditions, but a mechanism explaining how the pressure regimes evolve through time has not previously been proposed. Here we show that extensional quartz veins at the Garrcon deposit in the Abitibi greenstone belt of Canada preserve petrographic characteristics suggesting that the three recognized paragenetic stages formed within different pressure regimes. The first stage involved the growth of interlocking quartz grains competing for space in fractures held open by hydrothermal fluids at supralithostatic pressures. Subsequent fluid flow at fluctuating pressure conditions caused recrystallization of the vein quartz and the precipitation of sulfide minerals through wall-rock sulfidation, with some of the sulfide minerals containing microscopic gold. These pressure fluctuations between supralithostatic to near-hydrostatic conditions resulted in the post-entrapment modification of the fluid inclusion inventory of the quartz. Late fluid flow occurred at near-hydrostatic conditions and resulted in the formation of fluid inclusions that have not been affected by post-entrapment modification as pressure conditions never returned to supralithostatic conditions. This late fluid flow is interpreted to have formed the texturally late, coarse native gold that occurs along quartz grain boundaries and in open spaces. The systematic evolution of the pressure regimes in orogenic gold deposits such as Garrcon can be explained by relative movement of fault-fracture meshes across the base of the upper crustal brittle-ductile transition zone. We conclude that early vein quartz in orogenic deposits is precipitated at near-lithostatic conditions whereas the paragenetically late gold is introduced at distinctly lower pressure. Here we report on the occurrence of extensional quartz veins from the Garrcon orogenic gold deposit in the Neoarchean Abitibi greenstone belt of Canada that show exceptionally well-preserved primary textures, although the related fluid inclusion inventory has been severely affected by post-entrapment modification. It is demonstrated that the textural evidence alone allows the reconstruction of the mechanisms of quartz vein formation and the relative timing of gold introduction, constraining the temporal evolution of the fluid flow regime at the scale of individual veins. A framework is proposed for how orogenic gold deposits can form through progressive movement of a network of interlinked shear fractures and extensional fractures-so-called faultfracture meshes 19 -across permeability barriers within the upper crustal brittle-ductile transition zone, which is a complex region of alternating brittle and ductile behavior 20 . Geological setting With a total endowment of more than ~ 200 million ounces of gold, the southern Abitibi greenstone belt of Ontario and Quebec is one of the most prolific orogenic gold provinces in the world 21,22 . 
The belt encompasses Neoarchean metavolcanic rocks formed in a submarine setting between 2795 and 2695 Ma [22][23][24] , and flysch-like metasedimentary rocks of the Porcupine assemblage deposited as a result of crustal thickening and emergence of a shallow marine or subaerial hinterland 22,25 between 2690 and 2685 Ma [22][23][24] . Large-scale folding and thrusting during a deformational event occurring prior to 2679 Ma 22,25 resulted in the development of a regional terrestrial unconformity surface that is overlain by 2679-2669 Ma 22-24 molasse-like metasedimentary rocks of the Timiskaming assemblage, which locally contain intercalated subaerial alkaline metavolcanic units 22,[26][27][28] . Crustal shortening and thick-skinned thrusting resulted in the structural burial of the molasse-like metasedimentary rocks after 2669 Ma 22,25 . Panels of the metasedimentary rocks of the Timiskaming assemblage are preserved in the footwall of these thrusts, which are today represented by major fault zones transecting the supracrustal rocks of the southern Abitibi greenstone belt. All major gold camps in the belt are located along these crustal-scale fault zones 21,22,25,29 , which include the E-trending Porcupine-Destor fault zone in the north and the Larder Lake-Cadillac fault zone in the south (Fig. 1a). The metavolcanic and metasedimentary host rocks of the orogenic gold deposits in the southern Abitibi greenstone belt were metamorphosed to prehnite-pumpellyite to lower greenschist facies prior to ore formation 22,30 . The Garrcon gold deposit (20.6 million tonnes of ore containing 636,000 oz Au 31 ) is located in the Garrison camp near the provincial border between Ontario and Quebec within a transtensional segment of the Porcupine-Destor fault zone (Fig. 1b). The deposit occurs within a ~ 600-m-wide fault block that is bound by the subvertical Munro fault zone in the north and the Porcupine-Destor fault zone in the south 32 . The ore zones consist of east-dipping sets of veins (Fig. 2a) that are hosted by massive metagreywacke of the Timiskaming assemblage and meta-intrusive rocks 32 . Abundant extensional veins (Fig. 2b) are ~ 1 mm to ~ 5 cm in width (Fig. 2c) and are locally associated with minor zones of hydrothermal brecciation (Fig. 2d). The quartz veins are surrounded by distinct beige-colored halos caused by pervasive albite alteration of the metasedimentary and meta-intrusive www.nature.com/scientificreports/ host rocks ( Fig. 2b-d). The altered wall-rocks contain abundant disseminated pyrite. Visible gold in the extensional veins appears to be paragenetically late and commonly is present in fractures cutting the quartz (Fig. 2e). Vein quartz petrography Thin section petrography shows that key primary textural relationships are preserved in samples of the extensional veins from Garrcon. The veins consist mostly of elongate to blocky quartz grains that commonly increase in grain size from the vein walls toward the vein centers (Fig. 3). The quartz grains are approximately perpendicular to the wall-rock contact along the vein margins but show no preferred shape or crystallographic orientation in the vein centers (Fig. 3a). Grain boundaries between adjacent quartz grains range from planar to interlocking (Fig. 3a). Optical cathodoluminescence (CL) imaging reveals that the quartz grains have a short-lived blue emission ( Fig. 3b) that changes to light purple during continued electron bombardment. The quartz exhibits a longlived red-brown to brown CL color (Fig. 3c). 
The elongate and blocky quartz crystals appear unzoned in plane and cross-polarized light (Fig. 3d) but exhibit complex oscillatory and sector zoning patterns in CL (Fig. 3e,f). Electron microprobe mapping shows that the growth zones in the elongate to blocky quartz grains recognized by CL vary in Al content. Growth zones showing a bright luminescence have comparably high Al concentrations (Fig. 3e,f). Electron backscatter diffraction analysis illustrates that the elongate to blocky quartz grains in the vein centers have different orientations, but that the crystallographic orientation within individual grains does not vary by more than four degrees. The lack of crystallographic orientation changes within individual grains (Fig. 3g,h) suggests that stress-induced dislocation glide or creep was minor after quartz crystal growth within the veins. Mechanical Dauphiné twins are present only in some quartz grains (Fig. 3g,h). The quantitative orientation analysis confirms that the complex oscillatory and sector zoning visible in CL is a primary characteristic of the elongate to blocky quartz grains. However, in many vein samples from Garrcon, the originally elongate and blocky quartz grains are affected by recrystallization (Fig. 4). Recrystallization is particularly pronounced surrounding microscopic ribbons of pyrite and minor arsenopyrite that cut across the vein quartz or are present along grain boundaries between larger quartz grains. Polycrystalline aggregates consisting of small (10-20 μm) and nearly equidimensional grains occur in these zones of recrystallization (Fig. 4a,b). The spatial association between recrystallized quartz and sulfide minerals suggests that the sulfides formed paragenetically after the growth of the early elongate and blocky quartz grains, contemporaneous with recrystallization of the earlier quartz. Pyrite occurring in the sulfide ribbons shows complex patchy zoning patterns in backscatter electron images that are primarily related to variations in As content (Fig. 4c). Gold occurs as microscopic inclusions in the pyrite. The gold inclusions are encapsulated by the pyrite or, more commonly, occur along small fractures within the pyrite grains (Fig. 4c). Coarse native gold is texturally late and present within microfractures transecting earlier quartz grains or at grain boundaries of the vein quartz (Fig. 4b,d). Native gold is also present in late vugs filled with clear euhedral quartz crystals or calcite and is commonly closely associated with chlorite. Grains of native gold are present along healed microfractures cutting the latest quartz and calcite. Fluid inclusion petrography The early elongate to blocky quartz and recrystallized quartz grains are cloudy in thin section due to abundant secondary fluid inclusions forming dense, wispy arrays (Fig. 5a). Decrepitation textures are common (Fig. 5b), indicating that many fluid inclusions hosted by the vein quartz were affected by post-entrapment modification (Fig. 6). However, some secondary inclusion assemblages do not show evidence of post-entrapment modification, but instead the inclusions contain consistent phase proportions (Fig. 5c,d). Such assemblages are discernible in clearer quartz locally present in the veins. These assemblages of fluid inclusions containing consistent phase proportions include three-phase fluid inclusions with double bubbles (Fig.
5c), in which an aqueous liquid wets the inclusion walls and suspends a bubble of carbonic liquid that encloses a bubble of carbonic vapor, as well as two-phase H 2 O-dominant fluid inclusions (Fig. 5d). Discussion The vein textures at Garrcon record the evolution of the hydrothermal system that formed this orogenic gold deposit. The paragenetically earliest event resulted in the deposition of the barren, elongate to blocky quartz in the extensional veins. This was followed by the formation of microscopic sulfide ribbons along many of the ubiquitous microfractures cutting the earlier quartz and along grain boundaries between the elongate and blocky quartz grains, with the earlier quartz recrystallizing along these microscale zones of fluid flow. Pyrite formed during this paragenetic stage contains gold as microscopic inclusions. Later, formation of euhedral quartz in open spaces occurred, which was accompanied by late chlorite growth. Paragenetically latest is native gold, which occurs along microfractures cutting late euhedral quartz and calcite, or along grain boundaries of the earlier elongate and blocky quartz. (Fig. 7) resulted in intermittent, catastrophic hydraulic fracturing 7,8 . Following failure, the fractures created were held open by the supralithostatically overpressured fluids until they were sealed by quartz 33,34 . The comparably large volume of flow resulted in significant quartz deposition 35 and initial vein formation through cooling of the metamorphic fluids when migrating through the fracture mesh away from the main faults controlling upflow from depth. Continuous quartz precipitation in the newly formed, gaping fractures explains the interlocking texture of the paragenetically early quartz grains that competed for space, with each vein being sealed during a single episode of fluid flow 33,[36][37][38] . The rate of separation of the vein walls caused by the high-pressure fluids must have been faster than the rate of mineral deposition to avoid formation of crack-seal textures 39 . At Garrcon and other orogenic deposits [11][12][13][14] , recrystallization of the early quartz occurred concomitantly with sulfide formation as a result of fluid advection through the earlier formed veins (Fig. 7). Local failure in the presence of hydrothermal fluids caused the dynamic recrystallization of the earlier quartz 40 and resulted in the observed textural association between the fine-grained polycrystalline quartz aggregates and the pyrite and arsenopyrite forming microscopic ribbons that cut the elongate and blocky quartz or are located along pre-existing quartz grain boundaries. In many vein samples, multiple crosscutting sulfide ribbons are present suggesting that individual veins have reopened multiple times causing repeated dissolution and recrystallization of the vein quartz. The petrography of the fluid inclusions in the quartz (Fig. 6) provides strong evidence for the hypothesis that the formation of the sulfide ribbons and associated recrystallization of the earlier quartz was associated with large fluctuations in fluid pressure. Fluid inclusions entrapped during the formation of the early elongate and blocky quartz in the gaping structures at supralithostatic pressures have been affected by post-entrapment textural modification (Fig. 6). 
Post-entrapment modification takes place when large pressure differentials occur between fluid inclusions and their surroundings [11][12][13][14] , which is the case during decompression from supralithostatic to near-hydrostatic fluid pressures. However, the early elongate and blocky quartz as well as the recrystallized quartz also contain myriads of healed microfractures defined by secondary fluid inclusions. The cyclic fluctuations in fluid pressure can be explained by temporary drainage of the geopressured fluids from the permeability barrier into the overlying near-hydrostatic realm (Fig. 7). Prior to failure, the fluid pressure builds up to supralithostatic conditions within and below the permeability barrier. During fault activation, the pressure drops to near-hydrostatic conditions and the hydrothermal fluids drain. As the fluids are drained and fractures providing throughgoing permeability are sealed, the pressure in the fluid conduit returns to near-lithostatic conditions. Textural observations at Garrcon (Fig. 4d) suggest that deposition of native gold along microfractures, grain boundaries, and in open spaces was late in the paragenesis. Similar to other orogenic gold deposits [12][13][14] , late gold introduction at Garrcon occurred subsequent to the permanent decompression of the fault-fracture mesh to near-hydrostatic conditions (Fig. 7). Secondary fluid inclusion assemblages present in recrystallized quartz, in clear quartz overgrowths with euhedral crystal terminations, or in euhedral quartz or calcite grown in open spaces have not been affected by post-entrapment modification (Fig. 6). Although these fluid inclusion assemblages show consistent liquid to vapor volumetric proportions, homogenization temperatures for these inclusions were not determined because they would only yield minimum temperatures. No evidence for phase immiscibility existed within the fluid inclusion assemblages, nor are there independent constraints on the pressures that prevailed at the time of gold deposition at Garrcon. Nevertheless, the petrography of these late inclusion assemblages showing consistent phase proportions provides unequivocal evidence that high pressure conditions were never reestablished during or after the formation of the clear quartz, after which native gold was deposited. Based on the findings of this study, it is hypothesized here that the late gold precipitation within the fault-fracture mesh at Garrcon occurred because of the pressure drop metamorphic fluids experienced as they traversed the upper crustal brittle-ductile transition zone, escaping from the geopressured regime prevailing under ductile and mixed brittle-ductile conditions into the overlying, near-hydrostatically pressured, brittle crust (Fig. 7). This pressure drop may have caused direct native gold deposition or perhaps triggered the formation of gold colloids in the hydrothermal fluids [43][44][45] . The origin of the paragenetically late gold is unknown in the case of Garrcon. Previous workers studying other orogenic gold deposits suggested that the gold might have been derived from remobilization of gold that was originally deposited with pyrite and arsenopyrite earlier in the paragenesis 46-48 . There is no petrographic evidence for this process at Garrcon although it cannot be ruled out that gold remobilization occurred at greater depth, outside of the current deposit.
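For orientation, the magnitude of the supralithostatic-to-near-hydrostatic pressure cycling invoked in this discussion can be estimated with a simple fluid-column calculation. The sketch below is purely illustrative: the depth and densities are generic assumptions, not values constrained by the Garrcon data.

# Illustrative estimate of the pressure differential between lithostatic and
# hydrostatic regimes; depth and densities are assumed, not deposit-specific.
g = 9.81            # m/s^2
depth = 8_000.0     # m, assumed depth near the upper crustal brittle-ductile transition
rho_rock = 2700.0   # kg/m^3, assumed average crustal density
rho_fluid = 1000.0  # kg/m^3, assumed aqueous fluid density

p_lithostatic = rho_rock * g * depth / 1e6   # MPa
p_hydrostatic = rho_fluid * g * depth / 1e6  # MPa

print(f"Lithostatic pressure:      {p_lithostatic:.0f} MPa")
print(f"Hydrostatic pressure:      {p_hydrostatic:.0f} MPa")
print(f"Pressure drop on drainage: {p_lithostatic - p_hydrostatic:.0f} MPa")

A differential on the order of 100 MPa or more between the two regimes is consistent with the pervasive decrepitation and post-entrapment modification of the early inclusions described above.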
Systematic changes in pressure regime during vein formation at Garrcon, as indicated by mineral and fluid inclusion petrography, suggest that the structural setting governing fluid flow changed over time. It is hypothesized here that the fault-fracture mesh progressively moved across the upper crustal brittle-ductile transition zone over time (Fig. 7). This could have been accomplished through downward cooling of the metamorphic belt allowing the brittle-ductile transition to migrate down towards the core of the cooling orogen 49 and/or regional uplift and exhumation 50,51 . Initial quartz vein formation occurred below a low-permeability barrier capping the geopressured portion of the crust. Fluid-induced recrystallization of the quartz and concomitant sulfide formation caused by the sulfidation of the wall-rocks occurred during intermittent breaching of this seal and drainage of the hydrothermal fluids into the overlying near-hydrostatically pressured portion of the crust. High pressures were never reestablished after deposition of the late clear quartz and the calcite as well as the late native gold deposition, which occurred along grain boundaries and in open spaces at near-hydrostatic pressure conditions. Conclusions The mineral paragenesis observed in the extensional quartz veins at Garrcon as well as the petrographic characteristics of the fluid inclusion inventory of the vein quartz-which includes early fluid inclusions affected by post-entrapment modification and later unmodified fluid inclusions-are similar to those recently recorded in other orogenic deposits [12][13][14] . This suggests that the origins of orogenic gold deposits can be explained by a common mechanism of progressive movement of fault-fracture meshes across permeability barriers within the upper crustal brittle-ductile transition zone. This new model has important implications with regard to exploration strategy and grade control. The textural observations indicating that gold precipitates during two different distinct paragenetic stages could explain why grade distribution in these deposits is variable. Wall-rock sulfidation during pressure cycling between supralithostatic and near-hydrostatic conditions causes the deposition of early microscopic gold within sulfide minerals. Bonanza-type gold may occur in ore shoots where near-hydrostatic conditions were permanently established late in the paragenesis. Methods Polished thin (~ 30 µm) and thick (~ 60 µm) sections of variably mineralized quartz veins from Garrcon were studied by optical petrography. Subsequent optical cathodoluminescence microscopy was conducted using a HC5-LM microscope by Lumic Special Microscopes, Germany. The microscope was operated at 14 kV and a current density of ~ 10 mA mm −2 . Images were captured using a high-sensitivity, double-stage Peltier cooled Kappa DX40C CCD camera. Small-scale textural relationships were studied by scanning electron microscopy using a TESCAN Mira 3 LHM Schottky field-emission-scanning electron microscope with an attached Bruker XFlash 6|30 silicon drift detector for energy-dispersive X-ray spectroscopy. The trace element distributions of Al and Ti in selected quartz crystals were mapped using a JEOL JXA-8900 electron microprobe following the procedure of Ref. 52 . An accelerating voltage of 20 kV and a beam current of 100 nA (measured on the Faraday cup) were employed. At a detection limit of 105 ppm, the Al distribution map yielded useful information on compositional zoning of the quartz. 
The concentration of Ti in the quartz was typically below the detection limit of 300 ppm. Representative vein quartz samples were also studied by electron backscatter diffraction analysis using a FEI Quanta 450 field emission scanning electron microscope operated at 20 kV and low vacuum. Electron backscatter diffraction patterns were acquired with an EDAX Digiview IV detector set to 4 × 4 binning. Post-acquisition data processing was performed using the Oxford Instruments software suite.
The Impact of Hospital-Based Cardiac Rehabilitation on Signal Average ECG Parameters of the Heart After Myocardial Infarction Background: Cardiac rehabilitation is a combination of integrated programs aimed at improving outcomes in patients recovering from heart events. Objectives: The present study aimed to evaluate the early benefits of supervised exercise training on electrophysiological function of post-ischemic myocardium. In this regard, signal-averaged electrocardiogram (SAECG) was used. Patients and Methods: Between May and September 2012, all patients (n = 100) admitted to our center, with the diagnosis of acute Myocardial Infarction (MI), were enrolled in this study. Every other patient was assigned to two groups receiving either inpatient cardiac rehabilitation plus standard post-MI care (cases) or only standard post-MI care (controls). Electrophysiological function was assessed by SAECG in all the patients at baseline and on the day 5. The patients were considered as having late potential if they had abnormalities in at least two SAECG indices. Results: Cardiac rehabilitation led to significant improvements in QRS duration (P < 0.001), square root of amplitude in the last 40 ms (P < 0.001) and duration of terminal signal with low amplitude (P < 0.001). Cardiac rehabilitation also resulted in amelioration of SAECG parameters; frequency of patients with late potential significantly decreased from 64% to 20% after five days (P < 0.001). Conclusions: Supervised in-hospital exercise training was associated with improvements in SAECG-measured electrical activity post-MI. Background Several mechanisms contribute to poor prognosis in post Myocardial Infarction (MI) patients, among which ventricular fibrillation and dysfunction, impaired baroreflex sensitivity and electrophysiological disturbances are associated with marked mortality and morbidity early after the event (1)(2)(3)(4)(5)(6). Aside from lethal arrhythmias as a major cause of death early after MI, patients with more subtle ECG abnormalities, including microvolt T-wave alternans and heart wave turbulence, are at increased risk of subsequent fatal events (7). Post MI patients are at increased risk of sudden death in the first 30 days following MI even despite normal ventricular function (ejection fraction > 40%) (8). Cardiac rehabilitation refers to a combination of integrated programs aimed at improving the outcomes in patients recovering from heart events. These programs involve exercise training, management of lipid abnormalities, hypertension, weight loss and nutritional and psychological education (9). Research on animal models and human subjects suggested that cardiac rehabilitation can reduce the risk of ventricular arrhythmia and sudden death (10,11). Exercise training, a core component of rehabilitation programs, has proven efficacy on both heart function and modification of the underlying risk factors (12,13). These noninvasive techniques can be implemented in patients with a wide range of heart problems, bearing favorable effects on most risk factors associated with high mortality (9). Objectives The present study aimed to evaluate early benefits of supervised exercise training on electrophysiological function of post-ischemic myocardium after MI. To this end, signal-averaged electrocardiogram (SAECG) was used. SAECG is superior to conventional ECG since it is able to remove noise signals, allowing for identification of small but significant variations in the QRS complex (14). 
Abnormalities in SAECG predicts subsequent occurrence of tachyarrhythmias with high sensitivity (15,16). Herein, we assessed whether early in-hospital cardiac rehabilitation impacts electrophysiological abnormalities detected by SAECG in patients with MI. Patients and Methods Between May and September 2012, all patients admitted to our center with the diagnosis of acute MI were enrolled into this study. Patients were found eligible if their condition had been stabilized following the acute episode; i.e. (1) they had experienced no chest pain in the past eight hours, (2) no rise in serum concentrations of cardiac creatine kinase and troponin, and (3) they did not exhibit signs and symptoms of cardiac/respiratory distress including but not limited to dyspnea and bilateral rales. Patients with orthopedic conditions or rheumatologic diseases were excluded from the study since the ability of exercise training is limited in these patients. Patients with emergent Coronary Artery Bypass Graft (CABG) and long QRS complex (> 120 ms) were excluded, as well. In this case-control study, every other patient was assigned into two groups receiving either inpatient cardiac rehabilitation (see below) plus standard post-MI care (training group) or only standard post-MI care (controls). Before entering the study, a thorough medical history, including history of diabetes, hypertension, hyperlipidemia and smoking, was obtained and recorded using pre-designed questionnaires. All the patients gave verbal informed consent prior to entering the study. Local Ethics Committee also approved the study protocol. Cardiac Rehabilitation Protocol The patients received in-hospital cardiac rehabilitation supervised by a nurse and an experienced physiotherapist. Precise monitoring of heart rhythm, heart rate and blood pressure were performed during the sessions. For each patient, exercise training for 45 minutes daily was scheduled. If the patients experienced chest discomfort, dyspnea and palpitation or if abnormalities emerged in ECG rhythm, exercise training was immediately halted. Efficacy Assessment Electrophysiological function was assessed by SAECG in all the patients at baseline and on day 5. SAECG was recorded by Cardioscan Resting 12-Lead (DM software Inc., California, US) during sinus rhythm with bipolar X, Y, and Z leads and bandpass filters at 25-250 Hz. In each assessment, three parameters were computed: (1) duration of filtered QRS complex, (2) root mean square of amplitude in terminal 40 ms and (3) duration of low amplitude signal. Abnormalities were detected if filtered QRS complex was longer than 114 ms, square of terminal signal was lower than 20 µV or low amplitude signal took longer than 38 ms. Patients were considered as having late potential if had abnormalities in at least two SAECG indices. Statistical Analysis Continuous variables, including SAECG parameters, presented as mean ± standard deviation. On the other hand, categorical variables displayed as proportions. Baseline characteristics were compared using Chi-square and Fisher's exact test. Besides, between group changes in SAECG indices were assessed using Analysis of Covariance (ANCOVA) with baseline measurement entering the model as covariates. In addition, changes in proportion of patients exhibiting late potentials between baseline and day 5 were investigated using McNemar test. 
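Read as an algorithm, the late-potential criterion defined above (filtered QRS > 114 ms, RMS amplitude of the terminal 40 ms < 20 µV, low-amplitude signal duration > 38 ms, with a late potential declared when at least two indices are abnormal) can be sketched as follows. The helper function and the example values are illustrative only and are not part of the study's analysis software.

def classify_late_potential(fqrs_ms, rms40_uv, las40_ms):
    """Return (abnormal_count, has_late_potential) for one SAECG recording.

    Thresholds follow the criteria stated in the Methods:
      - filtered QRS duration > 114 ms
      - RMS amplitude of the terminal 40 ms < 20 microvolts
      - duration of the low-amplitude (<40 microvolt) terminal signal > 38 ms
    A late potential is defined as at least two abnormal indices.
    """
    abnormal = [
        fqrs_ms > 114.0,
        rms40_uv < 20.0,
        las40_ms > 38.0,
    ]
    count = sum(abnormal)
    return count, count >= 2

# Example calls using the reported group means (group means, not individual patients)
print(classify_late_potential(fqrs_ms=113.2, rms40_uv=27.0, las40_ms=45.3))  # control, day 5
print(classify_late_potential(fqrs_ms=104.8, rms40_uv=28.0, las40_ms=37.6))  # training, day 5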
All the analyses were performed using IBM SPSS Statistics 19 for Windows (IBM Inc., Armonk, NY, USA) and P-values less than 0.05 were considered as statistically significant. Results A total of 100 patients were recruited and assigned to receive standard care (control group) (n = 50) or standard care plus inpatient cardiac rehabilitation (the training group) (n = 50). The study participants included 44 females and 56 males with the mean age of 61.41 ± 11.60 years. Baseline characteristics of the recruited patients are presented in Table 1. Accordingly, no significant difference was found between patients in training and control groups regarding age, sex, type of MI and previous history of diabetes, hypertension, hyperlipidemia and smoking. QRS complex duration. In the control group, the mean duration of filtered QRS complex was 110.5 ± 6.5 ms at baseline and increased to 113.2 ± 6.0 ms on day 5. In the training group, on the other hand, a declining trend was observed in this parameter, which diminished from 111.2 ± 6.1 to 104.8 ± 10 ms during the study course (Figure 1 A). ANCOVA revealed significant differences between the two arms of the trial regarding changes in QRS duration (P value < 0.001). Among the control group patients, 21 (42%) and 23 (46%) had abnormal QRS (longer than 114 ms) on the first and last days, respectively (P value = 0.013). In the training group, 23 subjects (46%) had abnormal QRS at baseline, which reduced to 3 (6%) on day 5 (P value < 0.001) ( Table 2). Root mean square of amplitude in the last 40 ms. The mean value of this parameter in the control group was 30.7 ± 20.8 µV at baseline and 27.0 ± 17.2 µV on day 5 (P = 0.12). In the training group, this value increased from 21.4 ± 12.2 to 28±11.9 µV after five days (P < 0.001) (Figure 1 B). Moreover, 20 patients in the control group (40%) had square amplitude less than 20 µV and this proportion did not change after five days (P value = 1). On the other hand, 29 subjects in the training group (58%) had this abnormality at baseline, but it was detectable in only 11 patients (22%) by the fifth day (P value < 0.001) ( Table 2). Duration of terminal signal with low amplitude (< 40 µV). Slight increase was noted in this parameter in the control group and the mean value increased from 42.3 ± 11.8 to 45.3 ± 11.0 ms (P = 0.039). The patients in the training group, on the other hand, experienced a decline from 44.3 ± 7.3 to 37.6 ± 5.6 ms (P < 0.001) (Figure 1 C). The proportion of patients with terminal signal longer than 38 ms is presented in Table 2. Late potential. As described earlier, patients were considered as having late potential if had at least two abnormal SAECG parameters. Among the patients who only received routine care (control group), 24 (48%) and 32 (64%) had late potential on the first and fifth days, respectively (P = 0.039). However, cardiac rehabilitation resulted in amelioration of SAECG parameters; frequency of patients with late potential decreased significantly from 64% to 20% after five days (P < 0.001) ( Table 2). Discussion The results of the current study suggested that in-hospital cardiac rehabilitation after an MI episode was associated with lower electrophysiological abnormalities as detected in SAECG. Based on the SAECG results, it is expected that patients who receive rehabilitation would have a lower risk of subsequent ventricular tachyarrhythmia and sudden death. 
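McNemar's test, used above for the within-group change in late-potential status, depends only on the discordant pairs (patients whose status changed between baseline and day 5). A minimal, self-contained sketch of the exact form of the test is given below; since the paper reports only marginal proportions, the discordant counts in the example are hypothetical.

from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on discordant pairs.

    b = pairs abnormal at baseline but normal at day 5
    c = pairs normal at baseline but abnormal at day 5
    Under H0, min(b, c) ~ Binomial(b + c, 0.5); the p-value is twice the
    lower binomial tail, capped at 1.
    """
    n = b + c
    k = min(b, c)
    p = 2.0 * sum(comb(n, i) * 0.5 ** n for i in range(k + 1))
    return min(p, 1.0)

# Illustrative discordant counts only: in the training group, 32 patients with a
# late potential at baseline and 10 at day 5 could arise from, e.g., 22 "improved"
# vs 0 "worsened" discordant pairs (the actual pairing is not reported).
print(mcnemar_exact(b=22, c=0))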
Recent studies pointed out that cardio protective effects of rehabilitation programs might be due to improved regulation of the autonomic nervous system. For instance, in a study by Malfatto et al. (17), short-term effects of exercise were investigated in 22 post-MI patients. The researchers proposed that eight weeks of exercise training could modulate cardiovascular autonomic function by increasing vagal (parasympathetic) tone, which is known to be associated with better cardiovascular outcome (18). Similar results were also obtained after rehabilitation in patients with Ischemic Heart Disease (IHD). Lucini et al. reported significant improvements in baroreflex sensitivity and increases in R-R interval in 29 patients who underwent exercise training compared to 11 individuals in the control group (19). Exercise training can also enhance heart rate variability in healthy older adults (20). Increased activity of parasympathetic nervous system paralleled with decreased sympathetic overdrive could subsequently lower the risk of sudden cardiac death due to fatal tachyarrhythmias. Exercise can also improve autonomic function indices like heart rate variability index (HRV) in patients with heart failure, but autonomic dysfunction can predict poor outcome after rehabilitation (21). In the present study, improvement in SAECG parameters was consistent with the aforementioned effects. Tanaka et al. (22) demonstrated that regular exercise helped maintaining arterial elasticity and even reversed aging-related changes. Moreover, Luk et al. (23) observed that eight weeks exercise training could increase flowmediated dilation, high density lipoprotein and decrease heart rate at rest. Beneficial effects of exercise training on endothelial function have been addressed by other research groups (24,25). It has been suggested that enhanced release and activity of Nitric Oxide (NO) resulting in improved vasodilatation might be a key event in this regard (26,27). In the same line, Hambrecht et al. (28) indicated that four weeks of physical activity could significantly improve endothelium dependent vasodilatation. Therefore, improvements in coronary artery blood flow gained by exercise training programs can limit the ischemic episodes of myocardium during future activities. Despite the fact that cardiac rehabilitation has proven effects on patients' outcome, only 10-20% of patients with MI participate in rehabilitation programs in the U.S. (29). This has been attributed to lack of experience or necessary equipment in different regions, low referral rates in women and elderly and low socioeconomic status of patients (30,31). These underlying factors also contribute to low utilization of rehabilitation programs in Iran. Yet, with increased awareness regarding short-and long-term benefits of such programs, more frequent use of cardiac rehabilitation programs is ensued. Limitation of the Study During the study, case group patients should stay in the hospital to complete their rehabilitation program. This strategy in some patients increased the cost of hospital stay and may outweigh the cost-effectiveness of complete protocol of in-hospital rehabilitation. Stratification of patients according to the duration of hospital stay may answer this question in future studies. Supervised in-hospital exercise training was associated with improvements in SAECG parameters in post-MI patients. 
Nevertheless, further studies are required to investigate whether these promising preliminary findings translate into improved long-term patient outcomes and a reduction in fatal arrhythmic events.
Optimized polymer enhanced foam flooding for ordinary heavy oil reservoir after cross-linked polymer flooding A successful cross-linked polymer flooding has been implemented in JD reservoir, an ordinary heavy oil reservoir with high permeability zones. For all that, there are still significant volumes of continuous oil remaining in place, which can not be easily extracted due to stronger vertical heterogeneity. Considering selective plugging feature, polymer enhanced foam (PEF) flooding was taken as following EOR technology for JD reservoir. For low cost and rich source, natural gas was used as foaming gas in our work. In the former work, the surfactant systems CEA/FSA1 was recommended as foam agent for natural gas foam flooding after series of compatibility studies. Foam performance evaluation experiments showed that foaming volume reached 110 mL, half-life time reached 40 min, and dimensionless filter coefficient reached 1.180 when CEA/FSA1 reacted with oil produced by JD reservoir. To compare the recovery efficiency by different EOR technologies, series of oil displacement experiments were carried out in a parallel core system which contained cores with relatively high and low permeability. EOR technologies concerned in our work include further cross-linked polymer (C-P) flooding, surfactant-polymer (S-P) flooding, and PEF flooding. Results showed that PEF flooding had the highest enhanced oil recovery of 19.2 % original oil in place (OOIP), followed by S-P flooding (9.6 % OOIP) and C-P flooding (6.1 % OOIP). Also, produced liquid percentage results indicated PEF flooding can efficiently promote the oil recovery in the lower permeability core by modifying the injection profile. Introduction Currently, chemical flooding has been widely used in ordinary heavy oil reservoirs in China (Zhou et al. 2006(Zhou et al. , 2013Gao 2011;Hou et al. 2013a, b;Zhang et al. 2014). As one of the main techniques, cross-linked polymer flooding has been widely implemented for enhancing oil recovery of high water cut reservoirs and achieved good development effect (Urbissinova et al. 2010;Renouf 2014). Yet for all that, there are still significant volumes of oil remaining in place. In most cases, however, 40-50 % of the original oil in place (OOIP) can not be easily extracted due to stronger reservoir heterogeneity and more complicated plane distribution characteristic. Along with further promotion and application of polymer flooding technology, similar remaining oil reserves will rise continually. Many cases showed that it played a limited role in oil recovery improvement to continue using polymer as a profile control and flooding agent in different chemical methods after cross-linked polymer flooding, such as secondary cross-linked polymer flooding, surfactant-polymer (SP) flooding and alkali-surfactant-polymer (ASP) flooding. That is mainly due to the further development of thief channels. The number of thief channels increased and the distribution of remaining oil became more complex after the first cross-linked polymer flooding (Maghzi et al. 2014). Besides, stronger heterogeneity of post cross-linked polymer flooding reservoirs makes injected fluid prefer to flow through highly permeable thief channels, rather than displace the remaining oil in low permeability areas,where a mass of remaining oil locates as continuous state. A successful cross-linked polymer flooding has been implemented in the target oil field (Yang et al. 
2008), JD reservoir, an ordinary heavy oil reservoir with high permeability zones. After cross-linked polymer flooding, the distribution of remaining oil became more complicated and it was more difficult to use the chemical flooding which taking only polymer as a profile control for enhancing its oil recovery. Therefore, a new kind of profile control and flooding agent with greater sweep efficiency is of great importance for the development of post-polymer flooding reservoirs. Foam flooding using foam generated by a mixture of gas (nitrogen, natural gas or other gases) and foaming agents as an oil displacement medium (Pang 2010;Chen et al. 2010;Li et al. 2010). A lot of works reported that enhanced foam flooding was widely used to further develop this kind of post-polymer flooding reservoirs (Li et al. 2009). Generally, nitrogen was used as foaming gas because of its good compatibility with foam agent (Zitha and Du 2010;Hou et al. 2013a, b). However, nitrogen supply needs to increase nitrogen production equipment, which is a big investment for the kind of high water cut reservoirs. There was rich natural gas produced by JD reservoir. Also for low cost and good economy, natural gas was used as foaming gas in our work. However, compared with nitrogen, it is more difficult to select foam agent for natural gas. In the former work (Pan et al. 2013), foam Agent CEA:FSA1(7:3) was selected for natural gas foam flooding. Considering the evaluation indexes of foaming capability, foam half-life time and interfacial tension, CEA:FSA1(7:3) and CEA:DHF-1(7:3) were chosen by laboratory experiments conducted under simulated water-oil conditions of JD Reservoir. And, further foam performance evaluation experiments showed that CEA:FSA1(7:3) has better oil tolerance ability, anti-adsorption ability and aging resistance ability. CEA:FSA1(7:3) was recommended as foam agent for natural gas foam flooding in JD Oilfield and the concentration was optimized as 0.5 %. In the work, EOR technologies concerned in our work included further cross-linked polymer (C-P) flooding, surfactant-polymer (S-P) flooding, and polymer enhanced foam (PEF) flooding. It was necessary to optimize the best EOR technology for JD oilfield, and characterize the potentiality of polymer enhanced foam (PEF) flooding for the kind of post cross-polymer flooding reservoirs. Experimental Experimental equipments Chemical flooding displacement experiment system produced by the American TEMECO was used in the experiments. The experimental equipments included a gas-liquid injection system, the temperature control and pressure sensor system, the core clamping system and the separation and measurement system of produced fluid. The gas-liquid injection system was used to provide the fluid conditions for the chemical flooding experiments. Four kinds of fluid including polymer, surfactant, oil and formation water can be simultaneously injected. Also, the gas can be injected at the designed speed. The fluid and gas can be injected to the core clamping system, which included two cores with different permeability. This kind of design can reflect the vertical heterogeneity of real reservoirs. The temperature control and pressure sensor system can make sure that the experiments were carried out in the designed condition. The separation and measurement system was used to automatically segregate and measure the fluid produced at the core outlet. 
In the chemical flooding selection experiments, the accuracy of the temperature was ±0.5°C, the flow rate was limited to 0-30 mL in the control apparatus of gas mass flow, the pressure was limited to the range of 0-10 MPa in the back-pressure valve and the pressure-controlling accuracies of both the back-pressure valve and digital pressure gauge were 0.01 MPa. Materials Materials used in experiments included natural gas, crude oil, AS-polymer, JRJL-3 crosslinker, JRJL-3 modifier, and surfactants. The natural gas and crude oil came from well G104-5 in JD reservoir. The crude oil had a viscosity of 95 mPa s and a density of 0.82 g/cm 3 . The component analysis result of natural gas was listed in Table 1. The JRJL-3 crosslinker, JRJL-3 modifier, AS-polymers were supported by JD oilfield. The JRJL-3 crosslinker, JRJL-3 modifier, AS-polymers were used in the actual cross-linked polymer (C-P) flooding process of JD reservoir. The surfactant system, CEA:FSA1(7:3), was screened in our former work. And it has good compatibility with natural gas. The physical properties of main chemical agents involved in the work were listed in Table 2. The parallel core system which contained cores with relatively high and low permeability. The parallel core system was a sand-packed tube model of 30 cm in the length and 2.5 cm in the diameter. The sand used in this work was siliceous sand, eighty percent of which in size was between 45 and 50 mesh. For each test, fresh sand was packed to ensure the same wettability. The following procedure was sand packing. And then measure the air permeability of the model. The absolute permeability of high permeability core was approximately 2500 9 10 -3 and 600 9 10 -3 lm 2 in the low permeability core. Then saturate the model with brine water. Water-saturated process continued for 5 h, and the following oil-saturated process continued for more than 10 h. The oil saturation of models was absolute 80 % after oil-saturated process. The parallel core parameters are listed in Table 3. Scheme design Four experimental schemes were designed in this study, as shown in Table 4. Each experiment had the first crosslinked polymer flooding process. After the first crosslinked polymer flooding process, EOR technologies concerned in our work included further water flooding, crosslinked polymer (C-P) flooding, surfactant-polymer (S-P) flooding, and polymer enhanced foam (PEF) flooding. Injection parameters of different chemical methods for each scheme were illustrated in Table 5. As a comparative experiment, there was no chemical agent injected in the further water flooding experiment. The chemical agent used in the further cross-linked polymer (C-P) flooding was same with the actual cross-linked polymer in JD reservoir. The surfactant agent used in the further surfactant-polymer (S-P) flooding and polymer enhanced foam (PEF) flooding was CEA:FSA1(7:3), which was selected for natural gas in our former work. Foam performance evaluation experiments showed that CEA:FSA1(7:3) had good oil tolerance ability, anti-adsorption ability and aging resistance ability. These experiments were conducted at a temperature of 65°C and the injection rate was set to 0.5 mL/min. Experimental procedure There were four procedures in the four experiments, including experimental apparatus connection, sand-packed Experimental apparatus connection Connect the experimental apparatus according to Fig. 1. 
Sand-packed model preparation Saturate the sand-packed model with water, and flood the model with oil until the irreducible water saturation was reached. First cross-linked polymer flooding (1) Water flooding Flood the model with water at a constant rate until the water cut of outlet reaches 94 %. (2) First cross-linked polymer flooding Inject cross-linked polymer system of 0.2 PV into the parallel core system at a constant rate. (3) Following water flooding Flood the model with water at a constant rate until the water cut of outlet reached 97 %. Further chemical flooding (1) Chemical slug injection Inject the chemical slug of 0.3 PV into the model at a constant rate according to Table 5. (2) Following water flooding Flood the model with water at a constant rate until the water cut of outlet reached 98 %. End the experiment and prepare for the next one. It was different for the four experiments in the post cross-polymer flooding process. As a comparative experiment, there was no chemical agent injected in the experiment (a). And the water cut curve kept increasing until the end with the final recovery of 53.7 %. Cross-polymer solution slug of 0.3 PV was injected followed by water flooding until the water cut reached 98 % in the experiment (b). During the secondary cross-polymer flooding, the water cut curve decreased to 88.1 with 8.9 % down. Compared with the recovery before the secondary crosspolymer flooding, the final recovery at the end of experiment (b) reached 59.0 with 6.1 % increased. Surfactantpolymer solution slug of 0.3 PV was injected followed by water flooding until the water cut reached 98 % in the experiment (c). During the surfactant-polymer flooding, the water cut curve decreased to 63.4 with 33.6 % dropped. Compared with the recovery before surfactant-polymer flooding, the final recovery at the end of experiment (c) reached 61.7 with 9.6 % increased. Surfactant-polymer solution slug of 0.3 PV and natural gas with 1-1 gas liquid ratio were injected followed by water flooding until the water cut reached 98 % in the experiment (d). During the enhanced foam displacement stage, the water cut curve Liquid fraction comparison Liquid fraction of cores means the flow rate ratio at the outlet of the high permeability and low permeability cores. It can characterize the chemical profile blocking effect. The liquid fraction comparison of high permeability core and low permeability core in the four experiments was shown in Fig. 4a-d. From the initial stage of four experiments, it can be seen that the liquid fraction of high permeability core remain at more than 90 %, while less than 10 % for the low permeability core. Compared with the low permeability core, the injected water preferred to flow in the high permeability core at the same pressure difference. With the thief zone formed in the high permeability, there was more and more fluid flowing in the high permeability core, and there was less and less injected fluid flowing in the low permeability. The water cut of experiments got higher and higher. Then cross-polymer fluid of 0.2 PV was injected in every experiment followed by water flooding until the water cut reached 97 %. The cross-polymer solution preferred to flow in the thief zone of high permeability core, and played plugging effect. It can be seen that the liquid fraction of the high permeability core decreased, while the liquid fraction of low permeability increased during the cross-polymer flooding process in the four experiments. 
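The strongly uneven split seen at the start of each experiment follows from Darcy's law: for two cores of equal length and cross-section under a common pressure drop and carrying the same fluid, the flow divides in proportion to permeability. The sketch below is a single-phase idealization (the function name is illustrative), so it understates the observed >90/10 split, which is plausibly further skewed by the higher water mobility in the swept thief zone.

def parallel_core_split(k_high, k_low):
    """Fractional flow into each of two parallel cores of equal length and
    cross-section under a common pressure drop, assuming single-phase Darcy
    flow (q_i proportional to k_i)."""
    total = k_high + k_low
    return k_high / total, k_low / total

# Permeabilities as reported: ~2500 and ~600 (x10^-3 um^2)
f_high, f_low = parallel_core_split(2500.0, 600.0)
print(f"High-permeability core: {f_high:.1%}")  # ~80.6 %
print(f"Low-permeability core:  {f_low:.1%}")   # ~19.4 %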
During the following water flooding stage after the first crosspolymer flooding, the new thief zone was formed and the liquid fraction of high permeability core increased to more than 90 %. Meanwhile, there was fluid of less than 10 % flowing through the low permeability core. In the post crosspolymer flooding process, liquid fraction curves of high and low permeability cores for the four experiments were different because different chemical fluid was injected in the four experiments. As a comparative experiment, there was no chemical agent injected in the experiment (a). And the liquid fraction of high permeability core kept increasing until the end with the final fraction of 100 %. The change trend of liquid fraction in the low permeability was opposite with that in the high permeability. In the post cross-polymer flooding process, further cross-polymer solution slug of 0.3 PV was injected in the experiment (b). As shown in Fig. 4b, a drop funnel of liquid fraction in high permeability core was formed during the further cross-linked polymer flooding stage. The liquid fraction of high permeability core decreased from 97.8 to 84.8 % with the biggest decline value of 13.0 %. And then the liquid fraction of high permeability core increased slowly along with the PV number of following water flooding increased. In the further cross-linked polymer flooding process, the width of the funnel was 0.98 PV. The biggest decline value and the width of the drop funnel reflect the selective plugging effect of different chemical methods. In the experiment (c), surfactant-polymer solution slug of 0.3 PV was injected. In the experiment (d), surfactant-polymer solution slug of 0.3 PV and natural gas with 1-1 gas liquid ratio were injected. The biggest decline value and the width of the drop funnel of experiment (b-d) were shown in Fig. 5. It can be seen that there was the biggest decrement (47.7 %) in experiment (d), also the decrement last the longest PV (1.46 PV). It indicated that the enhanced foam flooding had the best selective blocking effect for the post cross-polymer flooding reservoirs. Recovery comparison Liquid fraction of cores can characterize the chemical profile blocking effect. However, it cannot reflect the displacement oil ability of chemical agent in different experiments for the post-polymer flooding reservoirs. The recovery comparison of high permeability core and low permeability core in the four experiments was shown in Fig. 6a-d. It can be found that the recovery of high permeability core was obviously higher than low permeability core in each experiment. From experiment (a) to experiment (b), the recovery of high permeability core were 76.5, 74.4, 80.1, 74.6 %. Figure 7 gave the increased recovery of high permeability and low permeability cores for chemical methods after cross-link polymer flooding. Compared with the cross-polymer flooding process, the increased recovery of high permeability core were 2.8, 1.6, 7.2, 11.5 %, respectively. The enhanced oil recovery of experiment (c) and experiment (d) were higher, which indicated that CEA:A1(7:3) system screened had a good ability to displace oil because of its good ability to reduce the interfacial tension. From experiment (a-b), the recovery of low permeability core was 33.2, 44.1, 44.0, 58.4 %. Compared with the cross-polymer flooding process, the increased recovery of low permeability core were 0.7, 10.4, 11.9, 26.6 %, respectively. 
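The whole-model incremental recoveries quoted for the three chemical schemes can be reproduced from these per-core increments if the two cores hold approximately equal OOIP, so that the total is simply the average of the two per-core values. The equal-OOIP weighting is an assumption (consistent with the identical pack dimensions and saturation procedure); the reported per-core values are reused below only as a consistency check.

# Incremental recovery of each core after the first cross-linked polymer flood
# (values as reported for experiments b, c, d), in % OOIP of that core.
increments = {
    "C-P flooding": {"high": 1.6,  "low": 10.4},
    "S-P flooding": {"high": 7.2,  "low": 11.9},
    "PEF flooding": {"high": 11.5, "low": 26.6},
}

for scheme, inc in increments.items():
    # Equal-OOIP weighting of the two cores (an assumption, see text above)
    total = 0.5 * (inc["high"] + inc["low"])
    print(f"{scheme}: ~{total:.1f} % OOIP incremental recovery")

The resulting ~6.0, ~9.6 and ~19.1 % OOIP match the reported 6.1, 9.6 and 19.2 % OOIP to within rounding.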
The enhanced oil in the further cross-polymer flooding experiment mainly came from the low permeability core. Compared with the cross-polymer flooding experiment, there was no large difference in the recovery of the low permeability core in the surfactant-polymer flooding experiment. This indicated that the injected surfactant did not provide a good oil-washing effect in the low permeability core. Surfactant-polymer solution slug of 0.3 PV and natural gas with a 1:1 gas-liquid ratio were injected, followed by water flooding until the water cut reached 98 %, in experiment (d). Foam with a high apparent viscosity formed in the high and low permeability cores during the injection process. It selectively blocked the high permeability core and displaced the oil in the low permeability core at the same time. Also, as a surfactant with a good ability to reduce interfacial tension in the enhanced foam system, CEA:FSA1(7:3) can increase the capillary number and, accordingly, reduce residual oil saturation and enhance oil recovery in both the high and low permeability cores. Based on the above analysis, the enhanced foam system can simultaneously enlarge the sweep volume and increase the oil-washing efficiency in the high and low permeability cores. Considering its selective plugging feature and good oil-washing ability, polymer enhanced foam (PEF) flooding was taken as the follow-up EOR technology for the JD reservoir. Conclusion
1. To compare the recovery efficiency of different EOR technologies, a series of oil displacement experiments was carried out in a parallel core system which contained cores with relatively high and low permeability. Results showed that PEF flooding had the highest enhanced oil recovery of 19.2 % original oil in place (OOIP), followed by S-P flooding (9.6 % OOIP) and C-P flooding (6.1 % OOIP).
2. Produced liquid percentage results indicated that PEF flooding can more efficiently promote oil recovery in the lower permeability core by modifying the injection profile.
3. Considering its selective plugging feature and good oil-washing ability, polymer enhanced foam (PEF) flooding was taken as the follow-up EOR technology for the JD reservoir.
Size of bulk fermions in the SYK model The study of quantum gravity in the form of the holographic duality has uncovered and motivated the detailed investigation of various diagnostics of quantum chaos. One such measure is the operator size distribution, which characterizes the size of the support region of an operator and its evolution under Heisenberg evolution. In this work, we examine the role of the operator size distribution in holographic duality for the Sachdev-Ye-Kitaev (SYK) model. Using an explicit construction of AdS$_2$ bulk fermion operators in a putative dual of the low temperature SYK model, we study the operator size distribution of the boundary and bulk fermions. Our result provides a direct derivation of the relationship between (effective) operator size of both the boundary and bulk fermions and bulk $\text{SL}(2; \mathbb{R})$ generators. Conclusion 19 A SYK size effective action 24 Contents 1 Introduction In recent years, significant progress has been made on characterizing quantum chaos in many-body systems. Developments in the study of holographic duality [1] pointed out a close connection between chaotic many-body dynamics and gravitational physics, especially black hole dynamics [2,3]. Motivated by this connection, new characteristics of quantum chaos have been studied, such as the out-of-time-ordered correlator (OTOC) [3][4][5][6]. The OTOC can be viewed as a quantum generalization of the Poisson bracket in classical dynamics. In the classical case, the Poisson bracket of canonical coordinates x(t) with the coordinates at an earlier time x(0) determines how sensitive the trajectory is to initial conditions, which characterizes chaos and is related to Lyapunov exponents. Similarly, the OTOC provides a measure of "operator scrambling" -how operators become more complicated under Heisenberg evolution. The decrease of the OTOC corresponds to an increase of the commutator between two operators A(t) at time t and B(0) at time 0, which captures that the support of A(t) in operator space grows. Another related measure of operator scrambling is the operator size distribution [7,8]. By expanding each operator in a polynomial of simple operators, such as single Pauli operators in a spin chain, or single fermion creation/annihilation operators in a fermion system, one obtains a superposition of terms, each of which is a product of multiple simple building blocks. This leads to a definition of operator size distribution as the distribution of support over products of different lengths. A single Pauli operator in a spin chain has size 1, while a product of two Pauli's on two sites has size 2. An operator's size distribution provides a more sophisticated characteristic of its complexity than the OTOC. It was also shown that a particular average of OTOCs gives the average size, i.e. the first moment of the operator size distribution [8,9]. The operator size distribution has been studied in various models including the Sachdev-Ye-Kitaev models [8,9] and spin models [10][11][12]. Finite temperature generalizations of (effective) operator size has been discussed recently [13,14]. Interestingly, the operator size distribution has also been related to a certain momentum quantum number in the holographic dual theory [8,15,16]. Here, we investigate the role of operator size distribution in holographic duality by studying the bulk operator size in the dual theory of SYK model. 
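The displayed equation referred to by "can be written where" above appears to have been lost in extraction. A hedged reconstruction of the standard convention it points to, the expansion of an arbitrary operator in ordered Majorana strings and the weight it carries at each string length, is sketched here; the precise normalization of the weights is a convention not fixed by the surrounding text:

\[
O \;=\; \sum_{n=0}^{N} \;\sum_{j_1 < j_2 < \cdots < j_n} O^{(n)}_{j_1\cdots j_n}\, \chi_{j_1}\chi_{j_2}\cdots\chi_{j_n},
\qquad
P_n[O] \;\propto\; \sum_{j_1 < \cdots < j_n} \bigl|O^{(n)}_{j_1\cdots j_n}\bigr|^{2},
\]

with P_n[O] the (suitably normalized) projection of O onto length-n strings in the Hilbert-Schmidt inner product.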
The SYK model has been proposed to be dual to Jackiw-Teitelboim gravity [17,18] with certain bulk matter content. Although the duality is not completely proven, from the behavior of the boundary fermion in the SYK model (a conformal field with known conformal dimension, and large-N factorization) it is reasonable to apply the known dictionary and determine the corresponding bulk fermion field. In this paper, we compute the size distribution of the boundary Majorana field in the strongly-coupled regime of SYK, using a method inspired by a combination of [9] and [19] that makes clear the connection of this boundary quantity to bulk quantities for a fermion in static AdS 2 . Then, in close analogy with previous work for fields with other spin [20][21][22][23] (often referred to collectively as the Hamilton-Kabat-Lifschytz-Lowe, or HKLL, constructions), we give an explicit construction of the bulk fermion operator in terms of the boundary Majoranas. This enables us to give a direct computation of the bulk fermion size, as well as provide a direct proof of conjectured relations between operator size and certain components of bulk momentum (i.e. SL(2, R) charge) both in SYK models and their bulk duals. The remainder of the paper is organized as follows. In Sec. 2 we review the essential results on the definition of operator size distribution, and the generating function approach that will be useful for us. In Sec. 3 we review the key features of the SYK model and derive the boundary operator size distribution at strong coupling. In doing so, we directly find a relationship between the boundary size generating function at strong coupling and SL(2, R) generators. In Sec. 4 we develop a construction of bulk fermions from the boundary, and use this to study the size of bulk fields in SYK. As a cross-check, we also present numerical results based on the known large-q boundary size distribution [9]. Finally, Sec. 5 contains the conclusion and further discussion. Appendix A gives details on the derivation of the boundary size. Appendix C contains explicit expressions for expectations of momenta in static AdS 2 . Appendices D and E give the derivation of the bulk fermion reconstruction. The size operator and its distribution In this work, we will use the machinery developed in [9] to treat operator size on the boundary. In this section, we present a brief overview of the setting and main results in that work. For concreteness, imagine the space of operators in our quantum mechanical Hilbert space H is generated by some finite collection of N Majorana operators, {χ j , χ k } = δ jk . Any operator O can be written where the O (n) j 1 ···jn are complex numbers. We are interested in characterizing the "weight" of O in different n-sectors. There is an abstract Hilbert space of operators, with Hilbert-Schmidt inner product O, S = Tr O † S, and we choose to measure the size in some n-sector by the length of the projection of O to the subspace of length-n χ j strings in this inner product. It can be useful to work in the setting of a "doubled Hilbert space" H (2) = H ⊗ H, associated to two copies of the physical system, as opposed to the "operator Hilbert space", H op = H ⊗ H * . This requires a (non-unique) choice of isomorphism between the two spaces. 
Fermionic or bosonic operators that appropriately commute or anti-commute with all operators on the right or left tensor factor of H (2) are written O L or O R , respectively 1 ; if {χ j } generates the operator algebra on H, there are {χ L j } that generate the same algebra (and satisfy the same relations as the {χ j } amongst themselves). In our case, we can take an irreducible representation of the Majorana algebra for 2N fields (call this H (2) ), and arbitrarily call N of them χ L j , and the other N χ R j . This gives a factorization of H (2) = H ⊗ H, where even products of χ L j (χ R j ) act only on the left (right) factor. States in the doubled Hilbert space are written as |ψ). To implement our isomorphism, for an operator O ∈ H op , we define O L by expanding in terms of a generating set {χ j } and making the replacement χ j → χ L j . Then, we choose a state |0) ∈ H (2) and define |O) = O L |0). In order for this map to be injective, we require |0) to have full Schmidt rank between the two tensor factors, and to be proportional to an isometry, which requires the Schmidt weights must all be equal. In other words, |0) is a maximally entangled state between the two copies. For our purpose, there is a particularly useful choice of maximally entangled state. Form the fermionic annihilation operators and choose |0) such that c j |0) = 0 for all j. It can be checked that this state is maximally entangled between the two tensor factors. Furthermore, this state has Then the information about the distribution of O across operators of different size is contained in the moments of the numbers of χ L j operators, or equivalently the c j fermions, This is a state-independent measure of size, which does not allow the characterization of operator scrambling at a given energy scale or a subspace of states. To remedy this, Ref. [9] proposed to measure the size in the thermal ensemble by moments of the generating function . (2. 3) The derivatives of the logarithm of this generating function over µ are the differences between the size cumulants for Oρ 1/2 β and ρ 1/2 β , for example the first moment is The "thermal size" n β [O] depends on the reference state ρ β , which takes into account the fact that certain operators are more important than others when applying to a subspace of states. Finally, we note that Z β µ [O(t)] is related to a particular "thermal" two-point function of the O. In particular, following Ref. [9] we define the "boundary size kernel" , The function G ∂ µ can be computed as a single-sided quantity, as was discussed in Ref. [9]. The Sachdev-Ye-Kitaev model The Sachdev-Ye-Kitaev (SYK) model is an ensemble of Hamiltonians where the J j 1 ···jq are independently drawn from normal distributions, with variance This model is chaotic, in the sense that the out of time ordered four-point function grows exponentially, but is at the same time solvable in a 1/N expansion. The SYK model has an emergent approximate reparametrization symmetry, which is explicitly broken by a UV cutoff term. At low temperature, the symmetry breaking is small, suppressed by 1 βJ . The quasi-Goldstone modes of reparametrization symmetry breaking are governed by a Schwarzian action, which is also the action (in appropriate variables) for Jackiw-Teitelboim (JT) two dimensional gravity with negative cosmological constant. In this sense, the SYK model is approximately dual to JT gravity. 
The complete bulk description is not known, but there is a possibility that the SYK model is an example of the AdS/CFT duality between a d = 1 "nearly" CFT and a "nearly" AdS 2 bulk described by JT gravity coupled to interacting matter fields [24]. Given the full boundary size kernel for the χ j fermions in the SYK model at strong coupling, if we assume that there is a weakly coupled gravity dual to SYK, we can explicitly compute the size of the bulk fermions dual to the χ j operators. In fact, the boundary size kernel can be computed in two regimes; both at large q, computed in [9], and at strong coupling where the model is governed by the Schwarzian effective action. We discuss the size distribution in the strong coupling, or Schwarzian, regime in the following section. In Section 3.2 we describe a connection between the strong coupling boundary size operator and bulk AdS 2 isometry generators that will be important for understanding the bulk size. Boundary operator size in SYK models In this section we examine the boundary size of the Majorana fermions in SYK in the strong coupling limit, βJ 1. To understand the bulk size, it is helpful to examine the derivation of the boundary size distribution in some detail. We work at large N , so that in particular the non-local bulk interactions are suppressed. Then the SYK action can be written in terms of non-local fields G and Σ. In particular, Σ is a Lagrange multiplier that enforces where for convenience we take the arguments of G to be imaginary time. As they are subleading in N , we ignore the normal ordering constants in the effective action for the SYK model. We also assume that the model is totally self-averaging at leading order in N , so that we can directly use the effective action after averaging over the couplings. Then the effective action for size (derived in more detail in Appendix A) is [25,26] in the sense that the size distribution function can be computed as At large βJ , the low energy excitations of the SYK model can be thought of as reparametrizations of time φ → θ(φ), with Schwarzian action [25][26][27] where φ is related to the boundary time by φ = 2πτ /β, and α S is a q-dependent constant computed in [26]. It is useful to interpret the Schwarzian as the leading non-trivial part of the extrinsic curvature of a long curve in Euclidean AdS 2 . In particular, in Rindler coordinates, for a curve γ(φ) = (θ(φ), ρ(φ)) of large length L parameterized proportionally to arc length by an angular coordinate φ ∈ [0, 2π), the extrinsic curvature is and by the Gauss-Bonnet theorem we find where R = −2 is the Ricci scalar, and A is the area of the region bounded by γ. For this reason, the curve corresponding to the saddle point for µ = 0 is simply a circle with large length L. Under a reparametrization φ → θ(φ), the two-point function changes by When the φ are sufficiently separated in imaginary time, we approximate the saddle point G by its conformal form, and compute S µ on a reparametrization using (3.2). This shows that we can think of S µ as providing some "tension" between points on opposite sides of the circular saddle point solution, and for sufficiently small µ it is self-consistent to compute around this saddle point. Because of the symmetry of the problem, the new saddle point will be approximated by a path which consists of two segments of equal length, each of which is a portion of a circle C with the same fixed radius. 
Since we keep the length L fixed, we can parameterize the problem by a single number, the fraction of the circle C that makes up one of the two segments, in other words the angle λ. We give an illustration of the saddle point in Figure 1. (This calculation is quite similar to that in [19].) This solution is only an approximation: we consider small reparametrizations, which will keep the curve smooth, while the approximation has sharp corners at φ = 0, π. χ χ λ Figure 1: Overview of the computation of the saddle point. We show Euclidean AdS 2 with radial coordinate r = tanh ρ 2 ∈ [0, 1), with the conformal boundary drawn as a dotted line. In the first panel, we draw the saddle point at µ = 0, which is a circle of length L. We also show the operator insertions at φ = 0, π that give "tension" to the solution in the µ = 0 case. In the presence of these operator insertions, the true saddle is a shape that is "pinched" towards the center. The solution will consist of two circular segments. In the second panel, we show the top circular segment in a coordinate where its center is at r = 0, with the µ = 0 saddle for reference. The length of each segment is fixed to L/2, so for an inner angle λ > π, the radius of the circle must shrink accordingly. In the last panel, we show the µ = 0 saddle in a coordinate that is symmetric between the two segments, namely one where φ = 0, π coincide with θ = 0, π and these points are equidistant from r = 0, where φ ∈ [0, 2π) parameterized the saddle curve proportionally to arc length. The transformation of coordinates between the first and second panel is a boost in embedding coordinates that moves the endpoints of the segment to the θ = 0, π line, given explicitly in (A.9). We reiterate that on the saddle point solution, times in the boundary theory are related to points in Euclidean AdS 2 by the point on the saddle point curve at parameter φ = 2πτ /β. To compute the two-point function on this saddle, we use the relation (A.8), which gives (for τ 2 later on the thermal circle than τ 1 ) where δ is the geodesic distance between the points corresponding to τ 2 and τ 1 on the saddle point curve. For the sizes of the thermal state and boundary fermion, it remains to find the dependence of the angle λ on µ. In Appendix A we find this to be Our main interest is in the first moment of size, for which we only need the first term in this expression. In general, for small µ, δλ can be expanded order-by-order in powers of 2 tanh µ 2 L 1−2∆ . The cumulants of the size distribution of the thermal state ρ 1/2 β are the derivatives of the action N S µ . For example, the average size is We can see that the nth cumulant will be of order N , but is in general of order (βJ ) n(1−2∆)−1 in coupling. Since we take βJ N , we can consider the fluctuations in size of the thermal state to be suppressed by (βJ ) We note that this matches the results in the large-q limit [9]. Size of χ fermions and SL(2, R) generators The geometrical picture allows us to not only compute the size of boundary fermions, but also to uncover directly the relation of the size operator to bulk isometry generators. A similar relationship for the "diagonal" matrix elements of the size operator, n β [χ(u)], was found by [15]. For the discussion of bulk size, we will need the more general matrix elements (χ(u)ρ 1/2 β |n|χ(u )ρ 1/2 β ), and in finding their relationship to isometry generators we also give a direct derivation of the result in [15]. 
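Schematically, the relation to be derived (whose diagonal case is the result of [15]) is that, up to the additive contribution of ρ_β^{1/2} itself and an overall normalization fixed below,

n_\beta[\chi(u)] \;\propto\; \frac{\big(\chi(u)\rho_\beta^{1/2}\big|\,(E - B)\,\big|\chi(u)\rho_\beta^{1/2}\big)}{\big(\chi(u)\rho_\beta^{1/2}\big|\chi(u)\rho_\beta^{1/2}\big)} ,

with E and B the global-energy and boost generators of the AdS 2 isometries. This is meant only as a signpost for the computation that follows, not as its precise form.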
The key idea is that matrix elements of the size operator are determined by the change of the two-point function G ∂ µ as a function of µ. The two-point function on the saddle is approximately a function of the geodesic distance (as in (A.8)), so µ affects the two-point function by deforming the boundary curve in Figure 1, which changes the distance between these two points in the bulk. Therefore the µ dependence can be mapped to a relative motion of the two points geometrically, which can be achieved by applying the bulk isometry transformations to one of the two points while keeping the other point fixed. The key elements of the computation in this section are illustrated in Figure 2. In the first panel, we show the µ = 0 location of the point X < , which will lie on the first segment of the µ > 0 solution. In this coordinate, it lies at some θ 0 < π. We indicate the location of the point corresponding to X < with a dark dot, and show its previous location in a lighter color. Throughout, we show only the top segment of the µ = 0 saddle. A line of points at θ = 0, π is drawn for reference. In the second panel, we use a coordinate so that the segment X < lies on is centered at r = 0. The boundary time is an affine parameter for the saddle point solution, so X < now lies at angle θ 1 = λ π θ 0 . In the last panel, we have changed to the more symmetric coordinate of the third panel of Figure 1 by the boost (A.9). Note that we can approximate the first move of X < by a rotation, generated by B (E) , and the second is the coordinate transformation boost, generated by E (E) . A point X > on the second segment, with some θ 0 > π on the µ = 0, is transformed similarly (of course with the opposite boost). Once we compute the positions of the points in the symmetric coordinate system, we can transform by a final isometry to restore the position of one of the points, say X > . whose flows are shown in Figure 3. A symmetric coordinate system to consider our problem is one where the two distinct segments meet at θ = 0 and π. We must consider the motion of two points, X < on the first arc, and X > on the second, as we perturb λ. This is easiest to compute by placing the center of the circle our point is on at ρ = 0, computing the location of X ≷ in that coordinate for the given λ, then performing a boost by E (E) to move the segment to its final position. The result is that (see Figure 2 and Appendix A for more details) Alternatively, we could have considered a coordinate system where X > remains fixed as a function of λ; this amounts to the replacement X > → X < on the right side of (3.5). Analytically continuing the derivative to Lorentzian signature and using φ − = π − + it 1 , where the infinitesimal Lorentzian generators in our standard coordinates are Flows of these generators are shown in Figure 4. We can now compute the size in the strong coupling limit. For convenience, we start with the Euclidean expression (keeping only the leading order in L ∼ βJ ) As noted in [9], the size at φ = 0 should be given by 2G ∂ 0 (π, 0). We can use this to fix a UV regulator so that the size units match at φ = 0, and find = 8π ∆ 2 L . Thus we find the boundary size at strong coupling is where we have written the size in boundary time units u. In general, in terms of the behaviour of size, the regulator simply sets the units as long as it is ∼ π/βJ . We took the case u 1 = u 2 , and so were able to ignore contributions that vanish in this limit. 
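The characteristic behaviour expected for this strong-coupling size, from the general operator-growth literature, is growth at the maximal chaos rate,

n_\beta[\chi(u)] - n_\beta[\chi(0)] \;\propto\; \cosh\frac{2\pi u}{\beta} - 1 ,

with a prefactor depending on ∆ and βJ; this schematic form is quoted from the literature rather than reconstructed from the expression above.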
Evaluating the derivative for u 1 = u 2 gives the general Schwarzian contribution to the fourpoint function, which was also pointed out to be given by applying symmetry generators to one time argument of the conformal two-point function in [28]. We mention that this method extends to higher moments of size. These moments depend on higher order derivatives of G ∂ µ over λ. From the exact bulk location of X ≷ as a function of λ (computed in Appendix A), we just compute the derivatives of these points over λ to the required order. We then compute the derivatives of λ over µ to the same order, and differentiate G ∂ µ to find the size moment to the desired order. In addition to providing an analytic computation of operator size distribution in the low temperature region, the discussion in this subsection also gives a direct relation of boundary operator size with SL(2, R) generators. If we consider a bulk dual fermion, the correlation function of which reproduces that of the boundary fermion for points approaching the boundary, we can also relate the boundary operator size to an SL(2, R) quantum number of the bulk fermion. This is a warm-up calculation for the bulk operator size results in the next section, but we present it here since it does not depend on any bulk reconstruction, and is closely related to the previous part of this subsection. Since the two-point function of a bulk fermion in static AdS 2 approaches G c as ρ → ∞ (c.f. Appendix C), from (3.6) we can conclude that the boundary size at strong coupling is given by the bulk expectation of generators for a free fermion, taken to the boundary and normalized by the appropriate factors. Explicitly, define If we take the natural Rindler vielbien, there is a particular fermion component, say with index j, whose two-point function vanishes slower as ρ → ∞ (corresponding to the eigenvalue of γ 1 that does not vanish in the limit of (C.8); if we take the bulk mass positive (negative) this is the +1 (−1) eigenvector). Then we find, writing · to mean expectation values for a free fermion in the Poincare vacuum on AdS 2 , where N ∆,1 is defined in Appendix C. In the Euclidean signature, the term in the derivative of the coordinates that gives rise to J 1 is Thus the contribution of this term to the size matrix element is actually just a constant, and we can replace J 1 → − 2∆ π . Its contribution to the boundary size is subleading in (since it gives a contribution to the matrix element that is proportional to the two-point function). The J 2 generator is not as simple, but its contribution also vanishes when t 2 = t 1 . We then find that, to leading order in βJ 1, the boundary size is proportional to the expectation of E − B. In this sense we have given a direct derivation of the similar result in [15]. One important difference is that we have also computed the "off-diagonal matrix elements" of n, namely expectations like (χ(u 2 )ρ 1/2 β |n|χ(u 1 )ρ 1/2 β ) where u 2 = u 1 . These will be essential for the computation of bulk operator size, as will be discussed in next section. Size of bulk fields In order to use boundary CFT computations to determine the size of "bulk" operators, we use an explicit construction of certain bulk operators as superpositions of boundary operators of various size. After describing our construction, we present some general properties of bulk size for SYK-type models. 
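In outline, the construction used below expresses a bulk field as a smeared boundary operator; at leading order in 1/N, and with the boundary fermion approximately conformal,

\psi(x, z) \;\approx\; \int d^{d}y \; K_{\Delta}(x, z \,|\, y)\, \chi(y) ,

with the smearing kernel K_∆ supported on boundary points spacelike separated from the bulk point, and with interaction corrections suppressed by further powers of 1/N. This schematic form, together with its corrections, is spelled out in the next subsection.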
The explicit construction of bulk fields The method for constructing bulk fields we pursue is analogous to that first worked out by Hamilton, Kabat, Lifschytz, and Lowe (HKLL) for certain quantizations of scalar [20][21][22] and higher-spin fields [23]. There are different ways to understand this procedure; here, we take an approach that makes explicit corrections due to interactions. Consider a d-dimensional CFT with a bulk dual, with a spinor field χ of dimension ∆. We work in the limit of large N and strong CFT coupling, so the dual AdS d+1 theory is weakly coupled. Sources (of dimension d − ∆) for the boundary field χ correspond to boundary conditions for a bulk fermion ψ. The fluctuating modes of ψ in the absence of sources, when taken near the boundary and appropriately scaled, behave as a fermion of dimension ∆ and are identified with χ. Explicitly, if z is some coordinate that approaches zero near the conformal boundary of AdS (and x are the remaining coordinates), The behaviour of χ can then distinguish different ways of approaching the boundary. Our main example will be a d = 1 model where the boundary lies at constant Rindler ρ coordinate in AdS 2 . In this case, the explicit expression is Figure 5: Illustration of the perturbative HKLL reconstruction, in the right Rindler wedge of AdS 2 . We show a diagram that appears at lowest, and at next-to-lowest order in the interaction. We will focus on the lowest order contribution; our kernels only have support on the right boundary, so to this order the fermion is reconstructed only from boundary operators inside the right light cone (indicated in grey). In the picture with the spacelike propagator G F as in (4.2), all the interaction vertices must be contained in the gray spacelike separated region. Fermion propagators are shown with a solid line, and the propagator of some putative scalar field interacting with the fermion is drawn with a wavy line. When the fermion is weakly interacting, we have the approximate equation This, in addition to the holographic principle, inspires us to look for a bispinor with support only for spacelike separated x, x . Then we have, for any spinor ψ(x) on a d + 1-manifold M which we take to assume the boundary value ψ(x) → z ∆ χ(x), where N is the outward-point normal vector to the boundary ∂M , and / N = N µ Γ µ . This expansion provides a perturbative (in the interaction) diagrammatic approach to computing the bulk operator ψ(x) from boundary data; these will be bulk Witten diagrams with propagators replaced by ones like G F with spacelike support. Since we are in the large N limit, which suppresses interactions, we focus only on the first term in this expansion. We are assuming that in this limit, we can ignore the contribution of the interaction vertices to the bulk fermion. Regardless of the contribution of this term, as long as the boundary field χ is nearly conformal, keeping just the first term gives an approximately local bulk field. In special coordinate systems in the free limit, there is a more direct way to understand the HKLL procedure (this is the fermionic version of the "mode sum" approach taken in the original work). Essentially, the Fourier transform of the reconstruction kernel is the operator F ω that takes constant spinors to solutions of the Dirac equation and is an eigenfunction under the flow by the time coordinate t, normalized so that as the coordinate z → 0, the dependence of F ω on z and t becomes F ω → z ∆ e −iωt . 
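A schematic sketch of this mode-sum logic for a free bulk fermion (spinor indices suppressed; this is an illustration of the procedure, not the exact expressions used later): expanding

\psi(t, z) = \int_{\omega > 0} d\omega \,\Big[ a_{\omega}\, F_{\omega}(t, z) + \mathrm{h.c.} \Big]
\quad\xrightarrow{\;z \to 0\;}\quad
z^{\Delta} \int_{\omega > 0} d\omega \,\Big[ a_{\omega}\, e^{-i\omega t} + \mathrm{h.c.} \Big] = z^{\Delta}\chi(t) ,

the operators a_ω can be read off from the Fourier modes of the boundary field χ(t) and re-summed against the full bulk mode functions F_ω(t, z).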
Then in Fourier space, the reconstruction happens simply by multiplying the boundary creation and annihilation operators at each momentum by the appropriate function of frequency. We give more details in Appendix D. In particular, we will use the fact that, for time-translation invariant quadratic expectations of fermions on the boundary, the Fourier transform of a bulk function in these special coordinates is a product of the Fourier transforms of the boundary function and the kernel. In Appendix D and E we find concrete position-space expressions for these kernels for AdS 2 . There are two cases to consider, depending on the sign of ∆ − 1/2, where ∆ is the boundary spinor dimension. The simpler case is ∆ > 1/2, where we show that there exists a "smearing kernel" K ∆ (x, z|y) such that, to leading order in N and at strong coupling (so that χ(y) is nearly conformal and free), behaves exactly as a free fermion on AdS d+1 of mass |m| = ∆ − d/2. This smearing function is indeed supported on points of the conformal boundary such that (x, z) is spacelike separated from (y, z ) as z → 0. For example, in the AdS 2 Rindler coordinate (we do not normalize the kernels in any particular way, since our discussion will not depend on the normalization), with gamma matrices {γ j , γ k } = η jk . Note that although there is only a single component fermion on a d = 1 boundary, the smearing kernel has two components for the two bulk fermions. Then, given a boundary size kernel G ∂ µ (y, y ), we can simply compute The diagonal entries G B µ (x, z, x, z) jj will be the generating function for the relative size distribution of ψ(x, z), as measured in terms of χ(y) at t 0 . In the SYK model, the boundary fermions have ∆ = 1/q < 1/2, so the simpler kernel above does not apply. For a fermion of mass m in the bulk, the Dirac equation admits solutions that behave as z ∆ ± for z → 0, where ∆ ± = d 2 ±|m|. In fact, for |m| < d/2 there are two inequivalent ways to quantize a free fermion with no boundary sources, distinguished by their boundary behaviour. Since these quantizations have different behaviour (for example near the boundary), the smearing kernels must be different. We give more details on derivations of this kernel in Appendices D and E, and summarize the important points here. The simplest approach turns out to be to use the analytic form of the kernel (4.5) (except changing v −1 → v 1 if we would like to keep the mass positive in the Dirac equation), but give a prescription for handling the non-integrable divergences in the kernel on the light cone for ∆ < 1/2. One method is to use the analyticity of the free bulk modes in ∆, so that when integrating against analytic functions we just use a contour that analytically continues from the ∆ > 1/2 case. Another possibility is to write the kernel as a linear differential operator that does not depend on time, acting on a function with integrable divergences. One way to accomplish this is in this expression, the derivatives only act on t R , not u. For u away from the light cone of (t R , ρ), we can evaluate the derivatives and find exactly (4.5), except with v −1 replaced by v 1 . The prescription is to regulate divergences by formally pulling the derivatives out of integrals against the kernel, and evaluating the derivatives on the now convergent integrals. More details on the different regularizations and quantizations can be found in Appendix D. 
The important point is that even in the case that regularization is required, the reconstruction is explicitly supported on points spacelike separated to the bulk point. We also point out that for the ∆ < 1/2 case, the kernel diverges at the light cone, and there is a large weight for operators "as late as possible" in boundary time. Bulk size at strong coupling From the full boundary size matrix elements at low energy in (3.13), and a perturbative definition of bulk operators, we proceed to compute the size of the bulk fermions. A schematic formula for the size of the bulk fermions constructed by HKLL in terms of the boundary Majorana fermions is given by is the covariant AdS 2 fermion two-point function discussed in Appendix C. Note that as written, this formula does not seem to depend on the choice of coordinate, but does depend on the choice of vielbein used to define the kernel K ∆ (since we are taking particular components). The AdS 2 fermion two-point function appears in the strong coupling limit since integration of the kernel K ∆ against a conformal boundary two-point function gives the bulk function by construction. Likewise, "expectations" (i.e. expressions like ψ(x)Eψ(x ) ) of bulk symmetry generators are given by integrals against K ∆ of expectations of boundary symmetry generators, since the reconstruction acts at the operator level. To compute the numerator in (4.11), we treat the three terms in (3.9) separately. The S = E − B generator has no boundary time dependence, so we can simply integrate over the two boundary fields separately before taking the expectation value, and find that this term becomes the bulk expectation of the same generator S. As discussed at the end of Section 3.1, we can make the replacement J 1 → − 2∆ π , and so can make that replacement in the bulk as well. At this point, we need to address the question of regulating (4.11). In principle, a UV regulation in the boundary theory means that reconstructed bulk operators will have the singularities in their two-point functions smeared out as well, so we can take the two bulk points exactly equal in (4.11). One way to understand the effect of a UV regulator is to regulate the boundary conformal two-point functions by an i prescription (in the SYK model, we should take ∼ π/βJ ). Since the kernel K ∆ is time translation invariant, an i regulation of a conformal boundary correlator is the same as splitting the bulk points by i in the time coordinate, and keeping the boundary theory exactly conformal. Note that we have introduced some additional dependence on the coordinate choice. When we consider bulk points such that cosh ρ 1/ , the two points are split by a small (Euclidean) geodesic distance, so reconstructed bulk quantities at such coordinates will be dominated by the short distance divergence in the true bulk functions. Importantly, for coordinates such that cosh ρ 1/ the Euclidean geodesic distance is large, and the bulk correlation functions become conformal. Thus in the large ρ limit, the bulk size is simply the boundary size, and approximated by an expectation of the symmetry generator S = E − B. It remains to understand the contribution of the J 2 term in the bulk. Since it is time translation invariant, we can directly compute the Fourier transform of its bulk contribution. 
We will need the Fourier transform Since the bulk size matrix element is given by dividing bulk expectations of generators by the bulk two-point function, only the high-frequency behaviour contributes as we take the separation between bulk points to zero. The contribution of the J 2 term becomes 1 π (1+ω∂ ω )G(ω), where G(ω) is the Fourier transform of the boundary two-point function.G decays exponentially for large negative ω, but for large positive ω it has power law behaviour ∼ ω 2∆−1 . Consequently, at large frequency this term becomes ∼ 2∆ π G(ω), and therefore contributes a constant 2∆ π to the bulk size. Therefore we find that the bulk size (such that cosh ρ 1/ ) is given by the bulk quantity (4.12) Using (3.14) to fix the normalization constants (i.e. units of size), we conclude that the bulk size both for small and large ρ is well-approximated by where ∼ π/βJ determines what we mean by "small" and "large" ρ, and ν is some numerical constant that can be used to fix units of size. Calling the vector field V that generates the symmetry of AdS 2 associated to E −B, we have ψ( is a covariant two-point function. As mentioned, the behaviour at large ρ is just the boundary size (3.8). We compute the full expression for this expectation of generators in Appendix C, but here we note the simple behaviour in the limit → 0, where we have also given the limit in global coordinates σ, t G . At fixed , this is an accurate approximation to the size for cosh ρ 1/ . The behaviour of this function in the Rindler wedge is shown in Figure 6. : Bulk size in the limit cosh ρ 1/ (in arbitrary size units; note that in this limit, the bulk fermion size of both components has the same behaviour). The plot is shown in global coordinates σ, t G over the part of the Rindler wedge t R > 0. As discussed in Section 4.3, this remains a good approximation to the bulk size in the presence of finite βJ corrections on the boundary. Numerics at large q The boundary size distribution is also known at large q, for all values of the coupling [9]: where G µ (u) = sin γ µ sin(γ µ + iα µ u) 2/q , (4.17) and α µ and γ µ satisfy sin γ µ = α µ J and sin α µ β 2 + 2γ µ = e −qµ sin α µ β 2 . (4.18) At strong coupling βJ 1, our discussion above applies. To help further understand the effect of finite βJ corrections on the boundary, we numerically compute the bulk size using (4.9) directly 2 . We have to regulate divergent integrals against the kernel, and have checked that both methods described in Appendix D.1 agree; details on practically useful numerical versions of these schemes are given in Appendix F. First, we note that even at relatively small coupling the approximation (4.14) captures both the qualitative and quantitative behaviour of size away from the boundary. We illustrate this by showing the logarithm of the ratio between the approximation and the numerical result for a relatively small coupling βJ ≈ 61 in Figure 7. For smaller couplings, the agreement holds nearer to the boundary, as expected. In light of this, we will concentrate on the behaviour near the boundary, ρ → ∞, for the remainder of this section. Here, the size of the component of the bulk field decaying faster near the boundary, in other words the field "not present" at the boundary, asymptotes to a constant size greater than n β [χ(t R )], while the other component briefly levels off at this larger size, then rapidly drops to n β [χ(t R )] as we go further towards the boundary. 
We refer to the two components, respectively, as the "non-boundary" and "boundary" components. Figure 10 shows this behaviour for a particular temperature. The location of this rapid drop in the size is a function of βJ, with lower temperatures pushing the location of the drop in size to larger ρ. Some example sizes demonstrating this pattern are shown in Figures 8 and 9. This suggests identifying the approximate location of the boundary with this drop. Further numerical evidence for this identification is that, once we find some ρ at some fixed t R at which the bulk size of the "boundary" component approaches ñ β [χ(t R )], the boundary value is approached at the same ρ for different times t R .

Figure 7: Comparison of the approximation (4.14) and numerical results for q = 1000 and βJ ≈ 61 (π − βα µ=0 = 0.1). In particular, we show the logarithm of the ratio between these expressions for the two bulk fermion components. We show it both as a function of t G and σ, and as the profile seen from σ = 0 or t G = π/2. The ratio is constant for a large portion of the bulk. The deviations near σ = π/2, t G = 0 are significant for both components. There is an abrupt drop in the size of the component decaying slower near the boundary as the boundary is approached, which only appears as a line of points in this figure and is not present in the simple approximation. For the purpose of this plot, we have chosen size units (an overall constant multiplying the numerical size) such that the ratio approaches 1 as ρ → ∞.

Conclusion In conclusion, we have studied the operator size distribution of the bulk dual fermion of the SYK model, using a combination of the HKLL formalism and SYK calculations. Our results provide an explicit proof of the relation between operator size and the AdS 2 quantum number in the bulk. Operator size grows exponentially for operators deeper in the bulk, which therefore can be used as a measure of the bulk emergent spatial dimension. In higher dimensions, it is easier to see how operators deeper inside the bulk are more complicated, since they can only be reconstructed on a bigger region on the boundary [29]. For a 0 + 1-dimensional bulk theory, since there is no spatial locality on the boundary, it is more difficult to quantify the relation of emergent bulk spatial dimension with complexity and quantum error correction. The operator size distribution provides a useful tool to make progress along this direction. There are many open questions in this direction. One question is whether there is an analog of the quantum error correction understanding of bulk locality in higher dimensions. How local is the bulk dual theory of the SYK model at sub-AdS scales? How is sub-AdS locality related to the operator size distribution? It is also interesting to ask how to generalize the operator size measure and its dual interpretation to other models, such as the eternal traversable wormhole (i.e. global AdS 2 ) geometry that is dual to a pair of coupled SYK sites [30]. Intuitively, when a fermion moves from one boundary to the other in the global AdS 2 geometry, one expects the operator size to increase and then decrease. The temperature-dependent operator size measure (2.4) does not directly apply, because the two sites together could be at zero temperature. This suggests that a more general relation between operator size and bulk spatial dimension requires a modified operator size measure. Here, we work out some details related to the SYK effective size action. First, we derive the size effective action.
We begin by noting that e −µn j = e − µ 2 +ln cosh µ 2 (1 + 2 tanh µ 2 χ L j (−iχ R j )), and then simply expand the definition (0|ρ where in (A.4) we introduce a time-ordered path integral, whence all the fermions χ become Grassmanian variables squaring to zero, and in (A.6) we use this property of Grassmanians to introduce G. The denominator in the effective size expression is derived with similar manipulations. Next, we find the saddle point for small µ. As discussed in the main text, we really just need to find the dependence of λ on µ. We note that where δ is the geodesic distance. Thus, we can approximate in the large L limit. Thus, we should find the opening angle λ(µ) that minimizes The basic quantities in the action are easiest to find by first considering the circle to have center at ρ = 0 in Rindler coordinates, with radius r and segment angle λ. The distance between the endpoints of the circular segment is simplest to find by the inner product in embedding coordinates, cosh 2 r(1 − tanh 2 r cos λ). The area of a single segment is given by a fraction λ of the area of the circle, plus the area of the triangular wedge, which we find from the interior angles (we call the one that is not 2π − λ, γ) after another application of Gauss-Bonnet, Using these expressions, we can expand the derivative of the action to leading order in L to find an equation for λ = π + δλ, If we further expand to second order in δλ, we find In general, for small µ we can solve the equation for δλ order-by-order in a power series in 2 tanh µ 2 L 1−2∆ . The behaviour of size can be understood as the response under small changes in the angle λ, with the complication that we need to multiply by the appropriate derivative of λ over µ. For concrete computations using the geometric saddle point solution, we need to map from boundary time to location on the bulk curve, using that the former is an affine parameter for the latter. To this end, it is useful to transform between coordinate systems where one of the µ = 0 segments is centered, as in the second panel of Figure 1, and a coordinate that is symmetric between the two segments, as in the third panel of Figure 1. This transformation is given by some AdS 2 isometry, which is simplest to express in the embedding coordinate. Suppose we start in the coordinate where the first segment is centered, as in the second panel of Figure 1. Then, the endpoints of the first segment are located at (cosh ρ, − sinh ρ sin λ−π 2 , ± sinh ρ cos λ−π 2 ). The isometry that brings these points to points equidistant from the origin with θ = 0, π is the boost with parameter tanh β = tanh ρ sin λ−π 2 generated by E (E) . Explicitly, it is the embedding coordinate matrix The inverse boost gives the transformation to the symmetric coordinate from the coordinate where the second segment is centered. In this way, we can always work in centered coordinates for the appropriate segment to map boundary times to bulk points. Using this transformation, we compute the location of the bulk points corresponding to φ ± = π ± + it ± for the saddle point solution corresponding to angle λ. Start with the − coordinate, so use a coordinate system where the upper segment is centered as in the second panel of Figure 1. The radial coordinate ρ is fixed by the requirement sinh ρ = L/2λ. The angular coordinate is given by the affine parameter condition, (A.10) Forming this into a coordinateX < = (cosh ρ, sinh ρ sin θ 1− , sinh ρ sin θ 1− ), the coordinate in the symmetric system is given by X < (λ) = F βX< . 
For the + coordinate, we have X > = (cosh ρ, sinh ρ sin θ 1+ , sinh ρ cos θ 1+ ) in the coordinate where the second segment is centered, and X > (λ) = F −βX> in the symmetric system. To compute derivatives over λ, we use the definitions of θ ± , ρ, β, and the useful identities B AdS space coordinates and symmetries For convenience, we collect here some AdS d , and in particular AdS 2 , coordinate systems and related expressions. B.1 Embedding coordinates A convenient definition of AdS d+1 involves the hyperboloid X 2 = −1 in the space R d+2 , with metric η of signature (−, −, +, . . . , +). We will also refer to the two timelike coordinates as T 0 and T 1 , and in general start our numbering of embedding coordinates from 0. The global AdS d+1 space is defined as the universal covering of this hyperboloid, but we will also be interested in coordinate patches that cover only part of the global space. The Killing vectors in AdS d+1 are the suitably restricted Killing vectors of the Lorentz group in the embedding space, It will be useful in what follows to identify a particular set of "light-cone" coordinates, and to identify the Casimir B.1.1 AdS 2 Rindler coordinates The Rindler coordinate takes some boost, say K 20 , to be time translation. Orbits of this boost occur at the intersection of the constant T 1 planes with the hyperboloid; an explicit coordinate choice is with metric ds 2 = − sinh 2 ρdt 2 R + dρ 2 . Then the restrictions of the symmetry generators become B.1.2 AdS d Poincare coordinates In the Poincare coordinate, some boost, say K 21 , becomes the naive coordinate "dilatation" when restricted to the projective boundary of the hyperboloid. An explicit coordinate system is where indices are lowered on the x j with the same signature as the X j . The restricted symmetry generators are B.1.3 AdS d Global coordinates In the "global" coordinate system, we choose the T 0 − T 1 rotation to give the local time translations. The explicit coordinates are and the symmetry generators become C Position space fermion two-point function In this section, we extend the work [31,32] on geometric expressions for propagators in symmetric spaces to spinor representations in arbitrary dimension D = d+1, and curvature normalized to s R = R D(D−1) ∈ {0, 1, −1}. Call the geodesic distance from x to x , δ(x, x ), g µν the metric tensor, and Π(x, x ) µν the operator that parallel transports vectors along the shortest geodesic from x to x . We will repeat the convention established in [32] that primed (unprimed) indices correspond to indices that refer to the tangent space at x (x), and omit the arguments x, x where there is no ambiguity. The tangent vectors at the ends of the geodesic connecting x and x are n µ = ∇ µ δ (n (L) without indices), and n µ = ∇ µ δ (n (R) without indices). Define also s xx = n µ n µ = n µ n µ , which is 1 (−1) for points that are spacelike (timelike) separated. Then it can be shown [32] that any tensor acting on the tangent spaces at x and x that is invariant (i.e. has zero Lie derivative) under the flow by an isometry can be written as scalar functions of the geodesic distance multiplying tensor products of Π, g, n (L) and n (R) . We call such tensors "invariant". For example, where we have used ∇ n (L) n (L) = 0, Πn (R) = −n (L) (parallel transport of geodesic tangent vector), n µ ∇ ν n µ = 0, ∇ n (L) Π(x, x ) = 0, and that parallel transport preserves all inner products to fix the forms of the above tensors. 
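For reference, the standard maximally-symmetric-space result, written here for Euclidean AdS of unit radius and spacelike separation (the general-signature functions used in the text carry additional factors of s_R and s_xx), is

\nabla_{\mu} n_{\nu} = \coth(\delta)\,\big(g_{\mu\nu} - n_{\mu} n_{\nu}\big),
\qquad
\nabla_{\mu} n_{\nu'} = -\frac{1}{\sinh\delta}\,\big(\Pi_{\mu\nu'} + n_{\mu} n_{\nu'}\big) ,

so that in this case A(δ) = coth δ and B(δ) = −1/sinh δ, consistent with the trace relation A = (1/d)∇²δ and with the small-δ behaviour noted next.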
Note that we must have A + B → 0 as δ → 0 since the components n (L) approach −n (R) (equivalently, Π → 1). To find C, use Taking the trace of (C.2) shows A = 1 d ∇ 2 δ. Furthermore, we can find the derivatives of A, B in several ways by taking derivatives along geodesics. For example, Likewise, by considering n σ ∇ σ (∇ µ n ν ) and n σ ∇ σ (∇ µ n ν ), we find the relations Finally, since all fields arise from the same principal bundle (so only the generators change in the "spin connection"), the parallel transport operator S(x, x ) for any associated bundle satisfies where Σ νσ are the appropriate spin group generators. Now we specialize to the case of spinors. Fix some vielbien by a choice of an appropriate vector-valued one-form σ. Then if we define Γ µ = γ a σ a µ and / n = Γ µ n µ , we have For a different choice of vielbien Next, we use that there are no invariant totally antisymmetric tensors with more than 1 index (of course, we take all these indices to be at a single point, say x). This can be shown by induction, or simply by noting all invariant tensors with indices at a single point x must be built out of sums of products of g and n (L)3 . Call the invariant two-point function of spinors G ψ (x, x ). Since traces of G ψ (x, x )S(x, x ) −1 against products of the Γ µ give invariant tensors with indices at a single point, the most general form of the fermion twopoint function is 3 We need to consider discrete reflections to eliminate the totally antisymmetric tensor. for scalar functions of the geodesic distance F 0 , F 1 . It will also be useful to have the covariant derivative and Dirac operator The Dirac equation when x = x , ( / ∇ − m)G ψ (x, x ) = 0, reduces to the pair of equations In the non-flat case s R = 0, there is a useful function K = −A/B, whence from our above relations we find K (δ) = −s R s xx K and ∇ 2 K = −s R DK, so A = −s R K/K . Then in terms of y = 1+K 2 we can write a second-order equation for F 0 , From here, we consider s R = −1. Then the solution of (C.7) that is properly normalized to a delta-function singularity in Euclidean signature and has the right decay at infinity for the Poincare vacuum is , where as long as ∆ > 0, we can choose ∆ = d 2 ± m. We fix the branches of 2 F 1 for the Lorentzian Wightmann function by analytic continuation of a Euclidean time coordinate τ → + it. This is well-defined (meaning all relevant covariant derivatives continue in a consistent way) as long as we continue with respect to a Euclidean time τ such that ∂ τ is a Killing vector orthogonal to a family of hypersurfaces (in other words, the metric can be taken independent of τ , and with no cross-terms involving dτ ). The "expectations" of symmetry generators, i.e. expressions like ψ(x)Eψ(x ) , are especially simple to compute in this formalism. Given the vector field V generating the isometry of the manifold associated to E, we have where the covariant derivative acts on the unprimed coordinate. We also note that for practical computations, S(x, x ) can be found explicitly as the spinor transformation, smoothly connected to the identity, corresponding to Λ ab (x, x ) = σ µ a (x)Π µν σ ν b (x ). Π µν itself can be found from ∇ µ n ν . C.1 AdS 2 propagator and generator expectation values Our main focus is on the Rindler coordinate on AdS 2 . This is the coordinate with Euclidean metric We take the natural vielbien σ(x) = sinh ρe 0 dτ + e 1 dρ. The bulk points we consider in the main text are at the same ρ coordinate, but different τ . 
We consider imaginary point splitting by some fixed amount . In fact, for different ρ this corresponds to different regimes of geodesic distance. If we first take the ρ → ∞ limit, then we should consider the propagator in the limit of large geodesic distance. In this limit, / n → γ 1 . It remains to find S(x, x ). If we take a vielbienσ(x, x ) a µ = Π µν σ aν (x ), thenS(x, x ) = 1, since we have chosen a non-coordinate basis in which parallel transport along the geodesic from x to x is trivial. In the original σ vielbien, S(x, x ) = Λ the spinor Lorentz transformation, smoothly connected to the identity, that corresponds to Π ab (x, x ) = σ(x) ν a σ(x) ν b Π νν . If we take ρ → ∞ at fixed , then Λ, the Lorentz transformation taking n (R) → −n (L) in the non-coordinate basis given by σ, becomes a rotation by (minus) π, so S(x, x ) → e −πΣ 01 = −γ 0 γ 1 . Then the large-ρ limit becomes where the last term is a projector onto a certain eigenspace of γ 1 . This is an indication that there is only a single fermion component on the boundary. In the small geodesic distance limit, F 0 (δ) → −mG d (δ), where G d is the Green function for the flat Laplacian in dimension d + 1. We also have that F 1 (δ) → −G d (δ). The vector n → csch ρ∂ τ , and S(x, x ) → 1. The main use of this propagator in this paper is to compute the expectation of the generators E − B; call the vector generating this isometry V . Continuing the equation i∇ V G ψ to Euclidean signature, we find that we need to compute The exact expression is then given by taking the inner product of V (E) with (C.5) and analytically continuing. For this it is useful to have the (Euclidean) spinor propagator S((τ, ρ), (τ , ρ)) = 1 − cosh ρ tan τ −τ 2 γ 2 1 + cosh 2 ρ tan 2 τ −τ 2 . (C.9) D Fermion modes in AdS 2 In this section, we give fermion mode solutions corresponding to the natural time in several AdS 2 coordinates. These modes serve four roles in this work. First, they give an explicit consistent quantization of the "unusual" fermions with boundary dimension ∆ < d 2 . Second, the Fourier transform of a reconstruction kernel can be read off of the modes. Third, they are used to show that our regularization of the kernel for ∆ < d 2 is correct. Finally, they have been used to check the two-point function we derive by a mode sum. We illustrate these points in detail for the example of Poincare coordinates. D.1 Poincare coordinates We start in the simpler Poincare coordinates to illustrate some general points. The metric is with Dirac operator We are looking for modes of the equation ( / ∇ − m)ψ = 0. The boundary fermion will have dimension ∆ = d 2 ± m depending on our particular mode choice, and we allow either sign as long as ∆ > 0. To emphasize this point, when |m| < d 2 , there are two consistent quantizations, one with ∆ > d/2, and one with ∆ < d/2. As we will show, both choices give rise to normalizable modes in AdS D . Define π g z to be the projector onto some eigenvalue of γ z , π g z = 1 2 (1 + g z γ z ). Define also |p| = −p a p b η ab and / n = iγ a p b η ab /|p|. The (matrixvalued) function solves the Dirac equation with m = g z (∆ − d 2 ). In terms of this function, the normalized modes of the Dirac equation (associated to the Poincare Killing vector) are where u j g z (0) is a basis for the γ z = g z eigenspace, γ z u j g z (0) = g z u j g z (0), and p is restricted to be timelike. Λ 1/2 (p) is the Lorentz boost that takes the timelike vector (sign p 0 −p 2 , 0, . . .) → p. 
There are solutions with spacelike p, but these are not normalizable in the bulk. These modes are normalized according to As we take z → 0, the dominant behaviour is as an operator equation. A completely similar sequence of steps in other coordinate systems gives the Fourier transform of the reconstruction kernel directly from the bulk mode solutions. The key ingredient is that, near the boundary, the mode becomes proportional to the eigenspace of some γ-matrix. This particular type of decay is easiest to anticipate by examining the Dirac operator in a given coordinate system. A general feature is that the Fourier transform of the reconstruction kernel is given by the spinor operator that is The analytic continuation method is also related to a simple high-frequency regulator. This can be done by giving an exponential energy damping e − |ω| on each mode in (D.14). Instead of the sharp step in (D.15), the integral defining the regulated K ∆ (t, z) becomes, calling α = (x 0 − y 0 )/z, We can now freely take derivatives of this integral as in (D.16) to find the regulated Poincare kernel; the difference will only be non-vanishing in near α = 1. Integrating this regulated kernel against analytic functions is the same as our contour prescription in the limit → 0. Since we only consider integrals of the kernel against analytic functions, and the analyticity argument is simpler than carrying out explicit regulated integrals, in other coordinate systems we will simply use the analytic continuation as the definition of the kernel for ∆ < 1/2. D.2 d = 1 global coordinates In global coordinates, we have We choose vielbiens e a = cos σ∂ a ; with this choice the nonzero component of the spin connection is w τ 01 = −w τ 10 = − tan σ and the Dirac operator is The normalized positive frequency solutions to this equation are given by where P (α,β) n are the Jacobi polynomials, u s are eigenvectors of γ 1 with eigenvalue s, such that ∆ = 1/2 − sm, and γ 2 = γ 0 γ 1 . If we work in a basis where γ 2 and u s are real, then we can take the negative frequency modes just the complex conjugates of (D.20); in general they are dσ sec σψ s n (τ, σ) † ψ s m (τ, σ) = δ nm δ ss . D.3 d = 1 Rindler coordinates It is convenient to make the change of variable to u = − ln tanh ρ 2 . The new coordinates are T 1 = coth u = cosh ρ V ± = csch ue ±t R = sinh ρe ±t R . We also show that the the form of the kernel is fixed where it is nonzero by demanding diffeomorphism invariance for the bulk spinor, while the boundary spinor is a quasi-primary operator of dimension ∆. To find equations for the kernel, we use that the unitaries generating bulk isometries generate boundary conformal transformations. The bulk field should transform according to the flow generated by the appropriate vector field. Concretely, we fix an orthonormal frame e a , and consider the transformation generated by the flow of a Killing vector ξ. Under the pushforward by this flow, the components of e a change by −L ξ e a = −[ξ, e a ]. Since ξ is a Killing vector, the generator J (ξ)ab = e a , L ξ e b is antisymmetric. A bulk field in a representation ρ of the spin group transforms by −L B ξ ψ(x, z) = −(ρ(J (ξ)ab ) + ξ)ψ(x, z). This is the flow generated by the operatorξ, and in the case of AdS d+1 , the same operator generates conformal transformations on the boundary. 
Then if the bulk field is written ψ(x, z) = d d yK(x, z|y)χ(y), and the conformal transformation acts on χ(y) by where on the Lie derivatives we have indicated which variables the differential parts can act on. Actually, these are too restrictive in general dimension (specifically odd AdS), where the difference from (E.1) can be by terms that integrate to zero against a boundary field, but in even AdS this stronger constraint can be satisfied. The transformation of boundary primary fields is particularly simple in Poincare coordinates. We can derive the constraints on the Poincare kernel, K(x, z|y) = K(x − y, z) (2x j (∆ − d) − (x 2 + z 2 )∂ x j + 2x k Σ jk + 2zΣ jz )K(x, z) = 0 (x j ∂ j + z∂ z )K(x, z) = (∆ − d)K(x, z) where Σ jk is the generator of the Lorentz group in the desired bulk representation, Σ ∂ jk is the generator of the Lorentz group in the boundary representation, and ∆ is the dimension of the boundary field. The last equation shows that K(0, z) is an intertwiner for representations of the boundary Lorentz group. These equations can be used to find the form of the kernel for arbitrary spin. In the spinor case (we assume the spinor transforms irreducibly on the boundary), a non-zero solution where ι 1 maps the boundary spinor representation into the γ z = 1 eigenspace of the bulk spinor representation (its presence and image as a particular eigenspace of γ z is mandated by the fact that K(0, z) is an intertwiner for an irreducible representation of a Lorentz subgroup, so its image is irreducible and hence γ z is constant on the image, but the choice of sign of eigenspace is arbitrary). This is the unique non-zero solution, up to choice of scale and the sign of the γ z eigenspace for the irreducible representation. It is also straightforward to show directly from the constraints that K must satisfy a Dirac equation, F Numerics Here we note some practical formulas for numerics with our divergent kernels. If we have some boundary quantity F ∂ (u) that is linear in boundary fields, we need to compute We will also need the partial derivatives dxĨ(x, ρ) e −ρ x 1 − (e −ρ x) 2 1 − (2∆ + 1)e −ρ x + 2∂ t R F ∂ (t R − u ρ (x)). The bulk size distribution is We can symmetrize the integrand, which is the same as taking the real part in a basis where all the γ j are real. Following the above, the bulk quantity can be written in terms of and its derivatives. This function satisfies the identity H µ (t R , ρ, t R , ρ ) = H µ (t R , ρ , t R , ρ). F.1 Chebyshev polynomial method Another strategy to numerically integrate against the kernel is to expand in the complete Chebyshev polynomials T n (x), and use the analytic continuation of 1 −1 (1 − x 2 ) α T n (x)dx = 2 2α+1 B(α + 1, α + 1) 3 F 2 −n, n, α + 1; n even , where we can define the integral for α < 1 by a figure-eight contour.
2020-02-07T18:41:26.695Z
2020-02-05T00:00:00.000
{ "year": 2020, "sha1": "a3e01478bc98e7962608727403fb979508e6a58e", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP10(2020)053.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a3e01478bc98e7962608727403fb979508e6a58e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
246605338
pes2o/s2orc
v3-fos-license
Supporting safe metamodel evolution with edelta Metamodels play a crucial role in any model-based application. They underpin the definition of models and tools, and the development of model management operations, including model transformations and analysis. Like any software artifacts, metamodels are subject to evolution to improve their quality or implement unforeseen requirements. Metamodels can be defined in terms of existing ones to increase the separation of concerns and foster reuse. However, the induced coupling can give additional evolution complexity, and dedicated support is needed to avoid breaking metamodels defined in terms of those being changed. This paper presents a tool-supported approach that can automatically analyze the available metamodels and alert modelers in case of change operations that can give place to invalid situations like dangling references. The approach has been implemented in the Edelta development environment and successfully applied to metamodels retrieved from a publicly available Ecore models dataset. Introduction In Model-Driven Engineering (MDE), metamodels are typically used to encode the knowledge of analyzed domains, which get formalized in terms of identified concepts, relationships, and constraints.Metamodels are created by capturing the concepts and structures of a particular domain to construct models of that domain [41].UML is an example of metamodel for general modeling languages.Still, we can also find examples of domain-specific modeling languages created to address specific software engineering domains, e.g., Autosar for automotive [14], WebRatio [1] for web applications development.The definition of metamodels is preparatory to the development of inherently different artifacts, including models, metamodels, model transformations, and code generators.All of them contribute to the definition of modeling ecosystems [23].To foster separation of concerns and reuse, metamodels can be defined in terms of existing ones.By including elements, which are encoded in already defined metamodels, it is possible to avoid duplicating the same elements across different metamodels [31].Even though such reuse practices can speed up the definition of new metamodels, they can give place to complex dependencies, with subsequent increases of metamodel couplings and negative impacts on usage flexibility. 
Like any other software artifacts, metamodels are subject to evolutive operations because of various reasons [19] such as addressing unforeseen requirements [32], implementing design improvements [20], or removing bad smells [6,36].Examples of metamodel changes include renaming a concept, moving a property from a metaclass to another, and redirecting a relation between two metamodel elements.Moreover, atomic changes can compose more refined and complex patterns, giving place to what we call metamodel evolution.Over the last years, several approaches have been conceived to define and apply metamodel changes (see, e.g., [5,40]).In the Eclipse Modeling Framework (EMF) [35], metamodels are specified with a dedicated language called Ecore.In EMF, predefined APIs can programmatically manipulate Ecore models through Java code.Even though such techniques permit modelers to specify the evolution of individual metamodels, they do not provide first-class mechanisms to manage metamodel dependencies during the specification and application of evolution operators.Consequently, by evolving a given metamodel, e.g., by removing a metaclass MC, it can happen that the validity of other metamodels is broken, e.g., those that refer to the removed MC.Existing approaches deal with this kind of inconsistencies by performing model repair, i.e., the activity of restoring the validity of the corrupted models.Nevertheless, in this case, if the inconsistency is created by evolving or refactoring a metamodel used as an external resource, it could be problematic to restore the validity of the affected artifacts [4].This problem is common in different domains, as in the case of software package dependencies [12] or model transformation dependency analysis [30]. This paper proposes a tool-supported approach to define and apply safe metamodel evolutions, which do not break the validity of metamodels that depend on the elements being changed.In case of possibly unsafe changes, modelers get early alerts from the development environment.The approach has been implemented in Edelta [7], which is a dedicated framework to evolve Ecore models.The framework comes with a dedicated DSL providing developers with constructs to define complex evolution operators invoked on a subject metamodel.Until now, Edelta has been used on a single subject metamodel.In this paper, we show the extended version of Edelta that supports the safe co-evolution of multiple dependent metamodels simultaneously.This paper extends the Edelta framework [5,7] from both a methodological and technological point of views.In particular, the Edelta framework presented in this paper includes a new component for dependencies analysis of metamodel repositories, a graphical view to represent the result of the analysis in terms of a graph-based representation, and an Edelta template generator.The modeler can use generated Edelta specifications to start a new Edelta program, correctly including the involved metamodels.To show the effectiveness of the proposed approach, we have used Edelta to apply representative evolutions on a dataset of metamodels retrieved from GitHub. Edelta is an open-source project available at https://github.com/LorenzoBettini/edelta. Structure of the paper.The paper is structured as follows: Section 2 shows two motivating examples in which dependant metamodels are shown.Section 3 makes an overview of Edelta and highlights its limitations to avoid unsafe metamodel evolution.Such limitations are addressed by the approach proposed in Sect. 4. 
Section 5 presents the experiments that have been performed to assess the effectiveness of the proposed approach.Section 6 describes the related work, whereas Sect.7 concludes the paper with some future directions. Background and motivating examples In MDE, metamodels formalize concepts and relationships of a given application domain and underpin the definition of modeling languages and model management operations.Metamodels define the abstract syntax of domain-specific modeling languages as shown in Fig. 1b, which depicts the modeling constructs representing persons and associated credit cards. The definition of metamodels can be performed by importing existing ones as shown in Figure 1a, which represents the dependency occurring between the metamodels WebApp and Persons.The relation occurs because some metaclasses in WebApp refer to some elements in Persons.In such cases, the application of refactoring operations on used metamodels has to be carefully performed to avoid breaking the consistency of the whole ecosystem.For example, the Persons metamodel contains an instance of the Dead Classifier smell due to the metaclass NameElement (see Figure 1b), which is completely disconnected from the other elements of the metamodel [6].In object-oriented design, similar situations are referred to as dead code or oxbow code [10]. The resolution of this smell is usually faced up with the removal of the indicted metaclass.Since the Persons metamodel does not use NameElement, this could be considered as a dead class, and it could be removed.However, this metaclass is being used by the metamodel WebApp as supertype for the classifiers with the name attribute, i.e., WebApp and Service.Thus, even though the considered smell resolution can be beneficial for the Persons metamodel, it is not safe concerning the whole ecosystem: it will lead to metaclasses with null supertype in the WebApp metamodel, raising a validation error as shown in the left-hand side of Fig. 1b. 
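As an illustration of how easily this unsafe resolution can be written down, a minimal Edelta sketch might look as follows; the package name persons and the use of the standard EMF utility EcoreUtil.remove (called with its fully qualified name) are assumptions made for the example, and this is not one of the listings of the paper:

metamodel "persons"

modifyEcore removeDeadClassifier epackage persons {
	// NameElement is unused inside "persons", so it looks like a dead classifier,
	// but WebApp (a separate metamodel) relies on it as a supertype:
	// removing it here silently leaves WebApp with a null supertype
	org.eclipse.emf.ecore.util.EcoreUtil.remove(ecoreref(NameElement))
}

Nothing in this specification mentions WebApp, so the problem only surfaces when the other metamodel is validated; the approach presented in the following sections makes such dependencies explicit, so that the environment can raise an alert before the change is applied.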
Figure 2 shows an example involving the Persons and WebApp metamodels in a cross-referencing [9] scenario. In particular, the metamodel on the left is linked to the one on the right by using a weaving model [11]. In the weaving model, the relation is expressed by using the metamodel Subscription in the center. In this metamodel, we model the subscriptions by linking a person and a credit card to a specific account. Moreover, on the top center, a constraint on the weaving metamodel is defined. This constraint checks that, for the subscriptions defined in the model, if an account has activated services of the web app, then at least a credit card has to be defined in the weaving. (Here, we could also check that the service is a paid service, but we use a more straightforward constraint for simplicity.)

Figure 2: Cross-referencing implicitly involves several artifacts during the metamodel evolution.

Inlining a class is a well-known refactoring [33] in which a referenced metaclass is deleted, and the contained properties are moved to the old source of the relation. If we observe the Subscription metamodel in the center, the metaclass Subscription has three cross-references, one of which has type CreditCard, which will be a dangling reference in the evolved metamodel after the inlining. Moreover, the validation constraint defined on the weaving metamodel is also corrupted, since it predicates over the credit card defined as a reference in the weaving metamodel. Also in this example, an evolution applied to a single artifact creates inconsistencies. The evolution should instead have multiple subjects, so that the changes can be performed correctly at the various affected points, e.g., by first removing the would-be dangling reference before the inlining. The validation constraints should be adapted as well, but this regards the problem of coevolution of constraints [18], which is out of the scope of this paper.

To summarize, Fig. 1a represents the import dependency, where the metamodel WebApp imports the metamodel Persons as an external resource. In this case, the dependency exists because WebApp uses concepts of Persons, so WebApp may be corrupted if Persons is modified. Figure 2 represents the cross-reference dependency. The dependency is unidirectional from the weaving metamodel in the center to the linked metamodels. For this reason, if we modify Persons and/or WebApp, we may create inconsistencies in the Subscription metamodel. One way to manage metamodel dependencies could be to use the EMF API [35] programmatically, but this presents some limitations. In fact, the EMF Java APIs do not check for inconsistencies during the execution, making it easy to create invalid resources. For instance, even if all the related metamodels are loaded in the same resource set, EMF does not automatically check that modifying a metaclass does not introduce a dangling reference. Such validity constraints on the Ecore models are checked only when the resources are saved to disk, which might be too late. A further difficulty is related to the size and complexity of the metamodel repository being used. For instance, the dataset considered in Sect. 5 consists of approximately 2'400 metamodels, and it contains elements reused by more than 200 metamodels. Managing dependencies in such complex configurations is error-prone, and it demands dedicated support. In the following sections, we present an approach based on Edelta to address such issues.

Evolving metamodels with Edelta

In this section, we recall the main features of Edelta that we rely on to implement the approach presented in this paper. We also highlight the new features that were added to Edelta to support such an approach. Edelta [7] is a framework for refactorings and evolutions of EMF metamodels. Edelta consists of a runtime library and a DSL. It aims at providing EMF modelers with linguistic constructs for specifying basic metamodel changes (i.e., additions, deletions, and a few basic changes applied on meta-elements), and complex reusable metamodel changes by properly aggregating already declared ones in libraries (e.g., defining an operation for extracting a metaclass given a set of references). The Java API of Edelta is built on top of the standard EMF API, but it aims at providing a more statically safe set of operations that can be easily chained in a "fluent" style.

The Edelta DSL provides a syntax that is similar to Java but removes much "syntactic" noise. For example, terminating semicolons are optional and the parentheses can be omitted in a method call expression with no arguments. The return keyword is also optional: the last expression will be returned. Edelta provides syntactic sugar for getters and setters: one can simply write o.name instead of o.getName() and o.name = "..." instead of o.setName("...").

The Edelta DSL is statically typed, relying on type inference so that most types can be omitted in declarations. In particular, the type system of Edelta is completely compliant and interoperable with the Java type system, so that from an Edelta program we can access any Java type. This means that an Edelta program can seamlessly use any existing Java code and Java libraries. The Edelta compiler will translate Edelta programs into standard Java code, which uses the Edelta runtime library.

In Edelta, lambda expressions have the shape [ param1, param2 | body ]. When a lambda is the last argument of a method call, it can be moved out of the parentheses; for example, instead of writing m(..., [...]), one can write m(...)[...]. When a lambda is expected to have a single parameter, the parameter can be omitted and it will be automatically available with the name it. In general, the symbol it acts as an implicit receiver, so, just like this, it can be omitted in method invocations.

Edelta provides a specific syntax to refer to Ecore elements in a statically typed way, ecoreref(...). Indeed, Edelta programs refer directly to the classes inside of an Ecore model. Note that this approach works even in situations where the EMF Java model has not been generated at all. References to Ecore elements, such as packages, classes, data types, features, and enumerations, can be specified by their fully qualified name in an ecoreref expression using the standard dot notation, or by their simple name if there are no ambiguities (possible ambiguities are checked by the compiler).

All these features make Edelta programs much more compact than Java programs and much easier to read and maintain. The syntax of Edelta should be easily understood by Java programmers.

An Edelta program consists of a few parts, besides Java-like import statements (for importing Java types) and a Java-like package declaration (used for the generated Java code). First, existing EMF metamodels are imported using the syntax metamodel followed by the EPackage's name. (The Ecore files are searched for in the classpath of the current project.) Then, existing Edelta libraries can be imported with the syntax use ...
as .... Such libraries can then be used in the current program just like standard Java objects (e.g., for method invocation). Some reusable functions can be defined with a syntax similar to Java methods (starting with the keyword def; the return type can be omitted and it is inferred from the operation's body). Such functions can be used in the same program or imported in other Edelta programs with the above-mentioned use syntax. Finally, actual evolution operations on a specific imported EPackage are specified with the syntax modifyEcore. For further details on the Edelta syntax, we refer the interested reader to [7].

The Edelta DSL is embedded in an Eclipse-based IDE, with all the typical IDE mechanisms, such as syntax highlighting, content assist, code navigation, quick-fixes, incremental building, error reporting, and also debugging. In particular, the Edelta editor provides a "live" development environment for evolving metamodels. This feature is particularly useful for the modelers, who will receive immediate feedback on the evolved version of the metamodels in the IDE. Moreover, Edelta performs many static checks, also employing an interpreter that keeps track on-the-fly of the evolved metamodel, enforcing the correctness of the evolution right in the IDE, based on the flow of the execution of the evolution operations specified by the user. The Edelta Outline view shows the preview of the evolved metamodels, which is the result of the interpretation of the Edelta program. This way, modelers can immediately inspect the evolved metamodels before applying the actual evolutions. The interpretation is performed on an in-memory copy of the original metamodels, so modelers are free to experiment without affecting the original metamodels. These mechanisms allow for very fast development cycles since the "live" preview is available even without saving the program in the editor.

Finally, Edelta allows the users to easily introduce additional validation checks in their Edelta programs, which are taken into consideration by the Edelta compiler and the IDE. The Edelta refactoring library heavily relies on this feature, so that we can provide error and warning feedback directly from our Edelta reusable operations, without having to modify the Edelta compiler.

Edelta can be used in different ways, e.g., to directly apply metamodel evolutions or to programmatically exploit bad smell resolutions. These two applications have been explored in [5,7] and in [6], respectively. The above-mentioned Edelta refactoring library includes the bad smell resolutions.

In previous works [5,7] Edelta was used to evolve only a single metamodel. For the approach presented in this paper, we extended Edelta so that it uses all the imported metamodels when performing static checks (see next section).

For smell resolution, Edelta provides a mechanism based on three different components that work in synergy: the bad smell finder (in charge of matching a bad smell in metamodels), the refactoring library (which includes, for example, the above-mentioned inlineClass), and the resolver. This last component associates a bad smell with an operation, automatically matching and resolving the found smell. All these components are implemented with the Edelta DSL.

As an explanatory example, Listing 1 shows the operations detecting the bad smell Dead Classifier shown in Fig.
1. The implementation should be easy to read: we find the classifiers (i.e., including data types) that do not refer to other classifiers and that are not used by other classes. Note that, as said above, Edelta can access any existing Java types. In this case, we rely on the EMF EcoreUtil for finding cross-references and usage cross-references. The Edelta utility function packagesToInspect retrieves all the EPackages in the current resource set so that we can inspect all the imported metamodels when this bad smell finder is used from within an Edelta program. This function is part of the extension of Edelta that is required for the goals of this paper. When all the dependant metamodels are correctly imported in an Edelta program, Edelta will be able to inspect them all when performing static checks. In Listing 2 we report the resolver associated with the dead classifier smell. This metamodel change simply removes the offending metaclass by using the EcoreUtil.remove method. We observe that this resolver is particularly trivial since the smell is associated with an atomic operation and not with a complex evolution pattern. In Fig. 3 we show an Edelta program that executes the above-mentioned resolver for dead classifiers on the single imported metamodel Persons (see Fig. 1). The symbol it refers to the EPackage specified in the modifyEcore. (For demonstration, we also perform another basic operation, i.e., the rename of a feature.) In this case, the dead classifier is matched, as can be seen from the Outline where the class NameElement is not present anymore. Indeed, since we imported only the metamodel Persons, Edelta correctly detects that NameElement is a dead classifier. However, as stressed in Sect. 2, the WebApp metamodel would then be invalid. The modeler should be aware of WebApp depending on the currently evolved imported metamodel Persons. If the modeler also imported WebApp in the Edelta program, then Edelta, extended with the new features like the above-mentioned packagesToInspect, would be able to avoid such problems right away. As shown later in Sect. 4.2, Edelta would not consider NameElement as a dead classifier if WebApp was also imported.

For these reasons, and to avoid such problems, we propose an approach based on aligned metamodel evolutions supported by an extension of the Edelta tool. We show that co-evolving metamodels in a dependency-aware manner is safer than evolving them as single units.

Safe metamodel evolutions with Edelta

In this section, we present an approach to deal with the issues discussed in the previous sections, which are due to metamodel dependencies that are not managed during evolutions. In particular, as shown in the explanatory example in Fig. 4a, two possibly related metamodels are singularly evolved in separate stages. If the maintenance of these metamodels is conducted in a way that Stage 1 precedes Stage 2 and the evolution of the metamodel in Stage 1 touches elements cross-referenced in the metamodel in Stage 2, then the evolution might give place to inconsistencies.

Fig. 4 Metamodel evolution phases

Figure 4b shows the way the proposed approach works: it supports the evolution of all the depending metamodels in a single stage. In particular, the application of metamodel operators is restricted depending on the occurring dependencies to reduce the risk of creating inconsistencies.
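Returning to the dead-classifier check described in Sect. 3: the following plain-Java sketch (a rough, hypothetical approximation written directly against the EMF API, not the actual Edelta listing) shows the core idea of inspecting all the packages loaded in the resource set, so that a classifier used by a dependant metamodel, e.g., as a supertype, is not reported as dead.

```java
import java.util.Collection;

import org.eclipse.emf.ecore.EClassifier;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.util.EcoreUtil;

public class DeadClassifierFinder {

    /**
     * Prints the classifiers of the given package that are not referenced
     * anywhere in the loaded resources. Note: this simplified check only
     * looks at incoming usages; the check described in the paper also
     * considers the outgoing references of the classifier itself.
     */
    public static void reportDeadClassifiers(EPackage ePackage, ResourceSet resourceSet) {
        for (EClassifier classifier : ePackage.getEClassifiers()) {
            // Incoming usages (e.g., reference types, supertypes) across all loaded packages
            Collection<EStructuralFeature.Setting> usages =
                EcoreUtil.UsageCrossReferencer.find(classifier, resourceSet);
            if (usages.isEmpty()) {
                System.out.println("Possible dead classifier: " + classifier.getName());
            }
        }
    }
}
```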
In the next sections, we describe the two main phases of the proposed tool-supported approach, i.e., dependency analysis and aligned evolution. The former is in charge of automatically deriving a graph encoding the dependencies among all the metamodels available to the user. The latter employs the created dependency graph to guide users while specifying metamodel evolutions with Edelta. Early alerts are raised in case of evolutions that might produce inconsistencies.

The original Edelta framework [5,7] has been extended for this application to include: i) a new component for dependency analysis of metamodel repositories, ii) a graphical view to represent the result of the analysis, and iii) an Edelta template generator. The generated Edelta specifications can be used to compose a new Edelta program, correctly including the involved metamodels, as calculated by the analysis phase. The Edelta plugin has been extended to include these new components.

Dependency analysis

In this phase, the metamodel being evolved is analyzed with the goal of searching for cross-references or references to external resources. To this aim, the whole available metamodel repository is analyzed, and a model conforming to the metamodel shown in Fig. 5 is generated. It is inspired by the one presented in [13] and it allows us to represent model repositories as graphs (for simplicity, in this paper, repositories are considered as local projects, stored in workspaces, instead of online resources). In particular, a Repository can be represented as a graph that is composed of Nodes and Edges. Nodes can be model-based artifacts, e.g., models or metamodels. The attribute highlighted is used for visualization purposes, e.g., highlighting a node in the repository graph. The analysis mechanism that generates dependency models conforming to the metamodel in Fig. 5 starts from the package of the metamodel that is the subject of evolution and analyses all the model elements to get possible references to other packages. The interesting parts of this phase are shown in Listing 3. This way, the modeler has a double help: i) the evolution program already includes the dependant metamodels (by means of the Edelta metamodel statements, Sect. 3), so that Edelta can perform its static checks on the evolved metamodels; ii) the modeler has an immediate and graphical feedback of the dependencies. Then, by using the code of Listing 3, the procedure is iterated over all the packages of the repository, recursively, avoiding possible cycles. This way, we compute the closure of dependencies, both the outgoing and the incoming dependencies. During this procedure, we also build the model conforming to the metamodel in Fig. 5.

The dependency analysis process has been implemented in an Eclipse plugin, part of the Edelta distribution, as shown in Fig. 6. In particular, a contextual menu, enabled on the Ecore files, is provided, which invokes the above-mentioned analysis process on the selected Ecore file and on the other Ecore files in the same directory. The menu generates a model conforming to the metamodel in Fig.
5 in an output directory (analysis/results). The generated graph model is also coupled with a generated model-to-text transformation (the file with extension picto), which we do not show here. Such a transformation uses the Picto [24] view for rendering the graph of dependencies. This view represents the local repository in which the nodes are the metamodels and the edges are their dependencies. The subject metamodel is depicted in red (by using the highlighted attribute of the node), representing the metamodel of interest to the modeler. The view can be filtered, e.g., by selecting only a class, so that the metamodels connected to it will be shown. The contents of the view can be easily navigated, rotated, and zoomed. The context menu automatically opens the generated graph model and the Picto view. Another context menu is provided to generate an initial Edelta template file (also shown in Fig. 6). This file imports the metamodels to be included in the evolution program, due to occurring dependencies. In the next section, we show the use of the generated templates to support aligned metamodel evolutions by considering the explanatory examples shown in Sect. 2.

In Fig. 6 we show the example in Fig. 2 realized in the tool. The two context menus described above have been executed on the Ecore file corresponding to the metamodel Persons (PersonsMM.ecore). In fact, its node is highlighted in red. The Subscription metamodel has two dependency links to the metamodels Persons and WebApp. This way, the view offers immediate feedback w.r.t. the metamodel of interest in the repository, which in this case is quite simple for demonstrative purposes, but in general it can include a large set of models, nodes, and dependencies.

The dependency analysis generates the graph model once, and it can be used as a cached representation as long as the repository is untouched. For a medium project like the one used in Sect. 5, consisting of ≈2'400 metamodels, the dependency analysis takes ≈700 ms.

Use of generated Edelta templates to evolve dependant metamodels

In this section, we show how Edelta can be used together with the dependency analysis tool, introduced in Section 4.1, to implement the proposed approach and achieve safe evolutions of interrelated metamodels. In Fig. 7, we report a part of the Edelta program to evolve the metamodels Persons and Subscription of the example shown in Fig. 2. The metamodel imports are automatically generated in the template file, by using the contextual menu on the PersonsMM.ecore file (as described in Sect. 4.1, see also Fig. 6). The file has then been renamed to "Example.edelta". The modeler can then specify the evolution, for instance, as in the rest of the screenshot. The metamodels must be imported together so that the scenario in Fig. 2 can be evolved avoiding the dangling cross-reference. Indeed, as mentioned in Sect. 3, we extended Edelta so that it uses the entire resource set, which contains the imported metamodels. This way, when performing validation checks, Edelta can immediately detect problems such as the mentioned dangling cross-references. For example, in Fig.
7, as soon as the modeler specifies the inlineClass refactoring, an error pops up: such a refactoring cannot be applied since it requires a single usage of the class to inline. Since Edelta has both metamodels in the resource set, it can detect such an ambiguity, avoiding a possible dangling reference if the class was inlined in the class Person. The refactoring inlineClass, which is part of the Edelta refactoring library, uses the mechanism for participating in the validation of Edelta programs mentioned in Sect. 3. On the contrary, if we had not imported Subscription, the refactoring would succeed, as shown in Fig. 8. However, while the evolved metamodel Persons would still be valid, the dependant metamodel Subscription would be corrupted by a dangling reference, as anticipated in Sect. 2.

Fig. 7 The inlineClass refactoring shows an error since we imported also the Subscription metamodel that refers to CreditCard
Fig. 8 The inlineClass refactoring succeeds since we imported only the Persons metamodel

Once the modeler is notified by the system about the problem, she can decide how to fix it. For example, before applying inlineClass, the reference card to the class CreditCard (in the other metamodel) can be removed. Figure 9 shows such a situation. Note that we use the fully qualified name of the reference in the ecoreref to avoid the ambiguity with the homonymous reference in Person. Consequently, the inlineClass can be safely performed. Recall that Edelta interprets the current program on the fly, taking the order of the statements into consideration. Note that the Outline view of the Eclipse IDE shows the preview of the evolved metamodels, where the elements that were modified are highlighted in bold: we can see that the class CreditCard disappeared, its features have been inlined in Person (with the specified prefix), and that the reference card has been removed from Subscription.

In Fig. 10, we show an Edelta program that tries to apply the resolver for dead classifiers on the metamodel Persons (see Fig. 1) in a program where the dependant metamodel WebApp is also imported. Also in this example, the metamodel imports are automatically generated by clicking on the contextual menu enabled on the subject metamodel. Differently from what we showed in Fig. 3 (Section 3), the bad smell finder for the dead classifiers is not matched when the dependant metamodel WebApp is also imported: NameElement is used as a supertype in the dependant metamodel. Indeed, in the Outline, the NameElement is still present. Of course, in this case, no error is shown: the bad smell resolver simply did not detect any dead classifier.

Fig. 9 The inlineClass refactoring succeeds since we first remove from Subscription the reference to CreditCard
Fig. 10 The resolution for the smell dead classifier is NOT matched since we imported also WebApp: the NameElement is still there

The approach described in this paper is based on the abstract architecture reported in Fig. 11. Basically, the developed tool is based on the Eclipse Modeling Framework as the core for manipulating models. In particular, we rely on Epsilon [25] for the visualization part of the dependency models. Epsilon is a family of languages for automating common model-based software engineering tasks, such as code generation and model-to-model transformation. We have used the Eclipse UI extension points to create the contextual menus described in Sect. 4.1.
The contextual menus require the selection of a metamodel as the subject from which the analysis is performed. The result of the Edelta template generator can be further refined and extended by using the Edelta editor for producing the evolved metamodel resulting from the evolution. The obtained model will be stored in the initial repository.

Experiments

In this section we discuss the experiments that have been performed to assess the effectiveness of the proposed tool-supported approach with the aim of answering the following research question:

RQ: Given a metamodel to be refactored, does the proposed approach correctly generate Edelta templates so as to correctly raise errors in case of unsafe refactorings?

In the following subsections, we first explain the setup of the experiments (Sect. 5.1), and then we discuss the obtained results (Sect. 5.2). Threats to validity are discussed in Sect. 5.3, distinguishing between internal and external threats.

Experiment setup

For our experiments, we selected a dataset of metamodels publicly available and presented in [3]. This dataset contains 2'417 metamodels collected by crawling online GitHub repositories.

By exploiting the dependency analysis approach discussed in Sect. 4.1, we took as input the whole dataset and generated a dependency graph conforming to the metamodel shown in Fig. 5. The graphical representation of the whole graph is publicly available online. We randomly selected existing dependencies and the corresponding metamodels as subjects of evolution operations. By applying our approach, we have generated the Edelta specifications to manage the involved subject metamodels. For explanatory reasons, some of the selected metamodel dependencies are represented in Fig. 12; they are also shown in column Subgraph of Table 1. To check the correctness of the generated templates, and thus of the import statements that are needed to possibly detect unsafe evolutions, for each subgraph we have performed mutations consisting of two actions: i) keeping the generated import statements untouched (marked with the symbol '=' in Tab. 1) or removing some of them, as represented by the symbol '-' (e.g., concerning subgraph 1, the removal of the Workbench import has been operated for three mutations out of five); ii) applying a metamodel change to elements of the target metamodel of the considered dependency (e.g., the element store::Checkout has been removed for one of the mutations of subgraph 24).

Results

All the mutations shown in Tab.
1 have been manually analyzed to check if unsafe evolutions are correctly detected by the approach. We can have the following cases:

The metamodel import statements are untouched (=): in this case the expected results can be as follows. If the meta-elements affected by the metamodel mutation (e.g., removal of wikicontent::Wiki in the fourth mutation of subgraph 1) are part of the dependant elements (e.g., the element was used by the dependant metamodel, i.e., Workbench), then the proposed approach is effective if Edelta shows an error due to changes on the metamodel wikicontent and does not permit producing inconsistent states, because the metamodel workbench depends on wikicontent (e.g., see the Expected value marked as x for the fourth mutation of subgraph 1). Moreover, we can have the case in which the mutated meta-element is not used by the dependant metamodel, and in this case Edelta does not show any error because the removal can be operated safely, even if the metamodels are dependant (see the output x of the first mutation of subgraph 1).

The dependant metamodel import statements are dropped (-): we can have two cases: i) the mutation applied on the meta-element affects a dependant metamodel (e.g., the metaclass customer::CustomerType in the first mutation of subgraph 3); ii) the mutation does not affect a dependant meta-element (e.g., customer::AddressType of subgraph 3). In both cases, a metamodel evolution should be allowed by the tool, without raising any errors, even though in the first case we will have an invalid metamodel, which is not recognized by the tool due to the removal of the import statement.

As shown in Tab. 1, the outputs produced by the unsafe evolution detection mechanism are always as expected. Thus, this supports that the approach correctly raises errors when needed, forcing the modeler to maintain the interrelated metamodels in a valid state. When the mutation removes the required imports, the metamodels are put in an invalid state if the operated metamodel mutation affects dependant elements. This confirms the effectiveness of the approach in response to RQ.

Threats to validity

We distinguish between threats to the internal and external validity of the performed experiments, and in the following we discuss the most relevant ones.

Internal validity

Internal validity threats are the internal factors that may influence the outcomes of the experiment. We have used a relatively small number of metamodels for the experiment. The reason is that, first, we wanted to manually check the obtained results and, second, the Edelta specification has to be inspected to check the found and not-found inconsistencies. However, we considered random subgraphs of metamodels from the extracted repository to cover different domains and metamodels. The precision of the dependency analysis seems to be reliable, from the manual sample inspection. This can be considered as a threat, since the algorithm could identify non-existing dependencies or miss existing ones when a different pattern is used for referring to the external resource. This has been mitigated by exploring samples and by implementing the algorithm with the available data. We plan to refine it by importing further Ecore models and manually inspecting a larger sample.
External validity

The main threat in this category regards the generalizability of our findings, i.e., whether they would still be valid outside the scope of this paper. We considered different kinds of metamodels belonging to different domains. However, as future work we plan to evaluate the approach by considering a bigger dataset, covering more subgraphs of the extracted repository. Moreover, the metamodel mutations that have been used for the evaluation might not reflect all the possible evolutions that can be applied to metamodels. Indeed, only removal has been used, but also other complex evolutions, e.g., moving meta-elements, would lead to inconsistencies. To the best of our knowledge, model mutation is a technique that is commonly used to artificially create the artifacts needed for performing this kind of experiment. However, we will further extend the evolution operators applied as mutants to consider other possible corrupting instructions.

Related work

The section has been organized to explore refactoring approaches, automatic detection of evolutions, and dependency analysis tools.

Dependency Analysis. Various approaches work in the direction of dependency analysis in multiple domains, e.g., package dependency in OS [39] or source code analysis [37]. We limit this discussion to dependency analysis of model-based artifacts, as for instance the work in [15]. This work presents an automated approach to generating and validating trace dependencies among software development artifacts, such as model descriptions, diagrammatic languages, abstract (formal) specifications, and source code. Some of the authors of this paper previously presented an approach for reconstructing relationships among model-based artifacts in repositories. The work in [13] shares many similarities with the proposed approach and it has been used as the main source of inspiration, but the present work operates at a different level of relations, i.e., dependencies among metamodels, which was not covered in [13]. Some of the authors of this paper presented a tool for evaluating the impact of changes applied to metamodels on existing artifacts [21], by using a dependency representation between metamodeling languages to derive existing dependencies among instances. Differently from this paper, the dependencies are not computed automatically but specified by the modeler. Moreover, the dependencies are not explicitly included in the models but are semantically defined by the user.
In [31] the authors present two strategies to describe relationships between metamodels. The first one is based on the definition of explicit dependencies between concrete metamodels. The second one is based on the description of contracts for metamodel entities. This last strategy introduces a new level of indirection in the definition of the dependencies, which specifies the name of methods and events used to bind elements. The goal of that work is to propose new types of relations between metamodels, models, and model instances specifically in the Cumbia platform, and it is not in the direction of discovering metamodel dependencies as presented in our work.

Refactoring Approaches. The concept of model refactoring has been explored using UML class diagrams in [28] and applied to Ecore models in [33]. The authors of these works show how graph transformations are applied for supporting model refactoring. Indeed, every refactoring is expressed as a graph production. On the contrary, in Edelta the refactoring is directly translated into Java code, and in the Edelta editor the refactoring is applied on the fly on the subject metamodels to perform static checks, giving immediate feedback to the modelers.

A study on refactoring tools is reported in [29], where the need to address the refactoring process in a more consistent, generic, and scalable way is strongly highlighted. The authors in [8] present a metamodel for specifying atomic operations. A single change is seen as an atomic transformation, and the metamodel used in that approach is similar to the one at the base of Edelta.

In [2] a tool called EMFRefactor is presented with the intent of specifying and applying refactorings on models. This tool uses Henshin's model transformation engine for executing refactorings. The main difference with Edelta is that this tool implements the refactorings by implementing Java methods and coupling them with the UI. Edelta provides a DSL that is more extensible w.r.t. new refactorings, which in the other approach can only be implemented with more coding.
Concerning Edelta applications, in [5,6] a library of reusable metamodel refactorings has been used, following the formal definitions at https://www.metamodelrefactoring.org, originally inspired by Fowler [17]. Another work that has been part of the inspirational examples for building the Edelta refactoring catalog has been presented in [34]. This work presents a catalog of nine co-evolution operation specifications for automating the migration of ArchiMate models when the ArchiMate language is evolved. A set of refactorings preserving the behavior of UML models is also presented in [38].

Automatic detection of changes. Automatic detection of code refactoring is the topic of [27]. The authors present an approach that takes as input an external library containing a list of possible refactorings, a set of structural metrics, and the initial and revised versions of the source code. As output, it generates a sequence of detected refactorings from the input by using a search-based process. Edelta collects the refactorings in libraries that can be reused in the entire process by modelers and developers. A different approach in [42] proposes a detection mechanism for identifying refactorings by analyzing the system evolution at the design level. Also the work in [16] detects high-level model changes. The authors in [26] search for occurrences of complex refactorings within a set of detected atomic ones in a post-processing approach. Another detection mechanism is proposed in [22], where the detection of complex changes applied to metamodel evolutions is presented. In these cases, the main difference with Edelta is that our DSL is based on the programmatic application of the defined changes to produce the evolved model, whereas the above approaches start from two already available versions of models or source code and detect the changes between them.

Conclusion and future work

In this paper, we presented an extension of the Edelta framework for supporting safe metamodel evolutions. Metamodels are not often used in isolation; for this reason, when languages are interrelated, e.g., in cross-referencing, evolving them in standalone stages can create inconsistencies. The proposed approach consists of a two-phase process in which first the dependencies of the given repository of metamodels are analyzed, also considering the metamodel to be evolved. This analysis helps modelers in multiple ways. Moreover, the proposed approach generates Edelta templates, including the necessary import statements, to recognize possible unsafe evolution patterns. This way, all the dependant metamodels will be loaded in the same resource set. Thus, by exploiting its live evolution environment, the Edelta framework will consider all the dependant metamodels. Future directions are manifold: i) enriching the visual representation of the repository; ii) including a quality evaluation mechanism by considering all the dependant metamodels, which are subject to the evolution; iii) evaluating the approach with bigger datasets. Moreover, we plan to investigate the usage of Edelta to co-evolve different metamodels linked by non-physical dependencies (e.g., metamodels that underpin the definition of the same model transformations). In such cases, using a single Edelta specification to co-evolve related metamodels may prove efficient and less verbose compared to multi-stage evolution mechanisms.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 2 Evolution creating a pending reference type error
Fig. 3 The resolution for the smell dead classifier is matched since we imported only Persons
Fig. 4 (a) Evolution with uncontrolled metamodel dependencies; (b) Proposed dependency-aware metamodel evolution
Fig. 6 The developed approach at work
Fig. 11 Abstract architecture of the developed toolkit
Table 1 Experiment results
2022-02-06T16:53:26.921Z
2022-02-03T00:00:00.000
{ "year": 2022, "sha1": "08bab37ea118df925f60ef4da8f85c1cb4c73e49", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10009-022-00646-2.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b4eda64ab4d73cc230356cadc556486999f6969d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
221257469
pes2o/s2orc
v3-fos-license
Effect of hydrothermal coupling on energy evolution, damage, and microscopic characteristics of sandstone

Abstract

The heat–cool (H–C) cycle is a serious natural weathering mechanism for rock engineering in temperate desert climates; meanwhile, engineering rocks usually involve responses to impact loads arising from blasting operations, mechanized construction, and seismic oscillation. Considering the universality and destructiveness of rock failure caused by H–C cycle weathering coupled with dynamic loading, split Hopkinson pressure bar tests were conducted for sandstone with various H–C cycles. Additionally, the hydrothermal coupled damage (D) was defined based on the variation of the total import strain energy. The energy evolution, damage, and microscopic characteristics of sandstone after different H–C cycles were studied. Finally, the microscopic structure changes of sandstone after various H–C cycles are compared by means of scanning electron microscopy (SEM) and energy dispersive spectrometer (EDS) technology. Results show that the decreasing rate of total import strain energy in the high temperature group is significantly larger than that in the low and moderate temperature groups. According to the SEM and EDS results, repeated H–C cycles constantly produce thermal stress at the mineral boundaries and cause fracture along the boundaries of the mineral particles.

Introduction

The diurnal and annual temperature differences are enormous in temperate desert climates. Meteorological records indicate that rock materials will experience a typical repeated heat-cool process, which is attributed to the high-low temperature cycling between summer and winter. Rock material degradation is a typical process of rock fracture and mineral metamorphism under the combined effects of radiation, biology, and geology [1][2][3][4]. Rock materials are commonly applied building materials in construction engineering, and they constantly suffer from repeated heat-cool (H-C) cycle weathering. Additionally, engineering rock typically involves responses to dynamic impact loads arising from blasting excavation, mechanized construction, and seismic oscillation [5]. The study of the mechanical properties of rock materials under harsh weathering conditions has engineering significance for the design, construction, and application of rock materials in temperate desert climate regions. Previous investigations demonstrated that water content and freezing temperature significantly affect the damage degree of rock weathering in H-C cycles. Rock is a heterogeneous composite porous material, which contains joints, cracks, and natural flaws [6][7][8]. Rock is repeatedly eroded by water, and the clay minerals in rock are hydrophilic; water weakens their cohesion and lets them slip against each other, leading to the weakening of the cementation between mineral particles inside rock materials [9,10]. Moreover, water inside rock undergoes phase transitions among solid (ice), liquid (water), and gas (steam) as the temperature changes [11]. When the temperature is below the freezing point, a 9% volume expansion of pore water occurs inside rock with the solid-liquid phase change, resulting in damage to the pore structure inside the rock; in contrast, when the temperature is above the freezing point, the freezing water thaws and migrates between pores, inducing reduced cohesion between rock particles.
The frequent freeze-thaw (F-T) of water expands the original fissures and voids inside the rock and induces new microcracks, which causes great damage to rock engineering [12][13][14]. The temperature increment during H-C cycles leads to secondary thermal stress in rock due to the differing thermal-expansion coefficients of the mineral particles. The main mechanism of H-C cycles on rock is the loss of water and the inner structural damage caused by thermal reactions [15][16][17]. A review of previous theoretical and experimental investigations shows that there has been increasing interest in the strength and deformation characteristics of rock after F-T and thermal shock (TS) cycles. The main representative study results are as follows: Yavuz [18] performed 10 F-T cycles and 50 TS cycles on andesites, and the test results showed that with the increase of cycle number, the P-wave velocity, compressive strength, and hardness of the andesite specimens decrease, while both the porosity and water absorption increase. Demirdag [19] compared the physical and mechanical properties of filled and unfilled travertine after different F-T and TS cycles (e.g., 10, 20, 30, 40, and 50 cycles). Investigation results showed that both F-T and TS cycles affect the rock weathering degree and the mechanical resistance properties. Ghobadi and Babazadeh [20] studied the differences in the accelerated weathering processes of sandstone among F-T, salt crystallization (SC), H-C, and heating, cooling, and wetting (H-C-W) cycles. Wang and Xu [6] tested the quasi-static and dynamic splitting tensile strength of red sandstone after 10, 20, 30, and 40 TS cycles. Test results demonstrated that, based on the scanning electron microscopy (SEM) results, the antigrain fracture increased with the increase of strain rate, and TS cycles reduced the adhesion of the cement and the bond between particles, reducing the splitting tensile strength of red sandstone after TS cycles. The mechanical response of marble under coupled high temperature and dynamic loads was investigated using an SHPB device, and a temperature threshold was observed based on the changes of the stress-strain curves, peak stress, peak strain, and failure mode of marble [21]; in addition, the effects of impact load rate and high temperature on strain rate were also analyzed [22,23], and test results indicated that under the same load rate, the strain rate of rock showed an obvious increasing trend with the increase of temperature. By summarizing the theoretical and experimental achievements of predecessors, it can be found that research on rock materials after F-T and TS cycles mainly concentrates on changes in physical and mechanical properties. However, the energy dissipation and damage mechanisms have not yet been adequately understood. Moreover, study of the influence of cycle temperature on the dynamic mechanical properties of rock materials after H-C cycles is still insufficient. In this research, the influence of treatment temperature and cycle number on the physical properties (i.e., P-wave velocity, density, and porosity), dynamic energy evolution, damage, and microscopic properties of rock during the weathering process is investigated. SEM and energy dispersive spectrometer (EDS) techniques were used to detect the microscopic damage of sandstone, and the damage is defined according to the variation of the total import strain energy.
Finally, the relationships between the energy characteristic indexes and the cycling number are also discussed.

Sample preparation

The fine-grained sandstone was collected from the Zhujidong coal mine located in Huainan city, Anhui Province, China. Samples with similar P-wave velocity were selected, and cylindrical specimens with diameters of 50 mm and heights of 25 mm were obtained by boring, cutting, and polishing the rock blocks according to the suggested methods and standard requirements of ISRM [24]. According to the treatment temperature in the H-C cycles, the test sandstone samples were divided into four groups: the high temperature (H-T) group from −20°C to 400°C, the moderate temperature (M-T) group from −20°C to 100°C, the low temperature (L-T) group from −20°C to 50°C, and the control group. For the L-T and M-T groups, four experimental sets were prepared according to 10, 20, 30, and 40 H-C cycles. Because of the fast deterioration rate of the high temperature group, three H-C cycle counts (4, 8, and 12) were set. Figure 1 is the scheme of the testing procedure. There are five specimens in each experimental set to fulfill the repeatability conditions.

H-C cycling and testing equipment

In this test, an SX-5-12 box-type resistance furnace, which could heat rock specimens to 1,200°C, was adopted as the heating device; it was composed of a control box and an electric furnace. Sandstone specimens were heated at a rate of 6°C/min to the desired temperature. The low temperature was achieved with a freezing temperature test box. The test chamber needs 40 min to drop from 20°C to −20°C. The experimental samples were immersed in water for 12 h, then put into the freezing temperature test box at −20°C for 6 h, and after that placed in a high temperature box and heated at the predetermined temperature for 6 h. The above test process was treated as one H-C cycle and lasted 24 h. The basic physical properties (mass, volume, and ultrasonic P-wave velocity) of the sandstone specimens before and after cycling were tested and are shown in Table 1. Dynamic impact loading tests of sandstone were carried out with an SHPB system with a diameter of 50 mm under 0.4 MPa impact air pressure. In the SHPB test, sandstone specimens after different H-C cycle conditions are placed between the incident and transmitted bars, and two strain gauges are mounted on the incident and transmitted bars to collect the stress waves, respectively. A strain acquisition instrument is adopted to collect the original strain signals. Moreover, the calculation method of dynamic stress, strain, and strain rate has been reported in a previous investigation [5]. The stress equilibrium of the sandstone specimen after H-C cycles was checked and is shown in Figure 2. It can be clearly noticed that the sandstone specimen basically satisfies stress equilibrium during the impact process.

Test results and analysis

The laws of thermodynamics indicate that the transmission and transformation of energy accompany the entire failure process of a material. Notably, energy dissipation reflects the evolution of the microcracks inside rock, which leads to strength weakening and failure of rock materials. Hence, investigating rock damage from the energy perspective can describe the whole failure process under external load [25,26]. As an open system, the rock specimen continuously transfers and transforms energy with the environment during the loading and failure process. The environment does work on the rock system, and the imported energy is absorbed by the rock system.
As shown in Figure 3, the process of energy transfer and transformation in the loaded rock system can be divided into four main forms: energy import, energy accumulation, energy dissipation, and energy liberation.

Computational formula

Previous theory reveals that there is no heat exchange between the rock and the environment during the SHPB test. Considering the deformation of one rock element under external load, the total input energy, the dissipated energy, and the releasable elastic strain energy of a rock unit can be related based on the first law of thermodynamics [27]:

U = U_e + U_d, (1)

where U is the total import strain energy of one rock unit generated by the external load, and U_d and U_e are the dissipated energy and the releasable elastic strain energy of the rock unit, respectively. The units of U, U_d, and U_e are all MJ/m3, equivalent to the stress unit, MPa. Their calculation scope is from the initial point to the residual point; in addition, their values for the different stages are calculated from the initial point to the end of the stage point. Taking the axial stress and strain into consideration [27], the energy evolution equations of a rock unit in principal stress space can be expressed as

U = ∫σ dε (integrated from the initial strain to the current strain ε), U_e = (1/2)σε_e, U_d = U − U_e, (2)

where σ, ε, and ε_e are the axial stress, strain, and elastic strain of the rock unit, respectively.

Variation in total import strain energy with cycling number

The energy evolution curves of sandstone specimens after H-C cycles are obtained by calculation. The representative stress-strain curve and energy evolution curve without cycling are adopted for description, as shown in Figure 4. It is clearly observed that the whole process of energy evolution can be divided into four stages: the compression stage, the linear deformation stage, the yield stage, and the failure stage. In addition, Figure 4 also illustrates that the energy evolution curve of the H-C cycle damaged sandstone specimen under the impact compression process shows obvious stage characteristics. In the compaction stage, with the increase of strain, the total import strain energy, the releasable elastic strain energy, and the dissipation energy all increase, and their rise rates increase gradually. In the linear elastic deformation stage, the rise rates of the total import strain energy and the releasable elastic strain energy are relatively fast, while that of the dissipative energy is relatively slow. In the yield stage, the rise rate of the releasable elastic strain energy begins to slow down gradually and reaches its peak value, while both the dissipation energy and the total import strain energy are continuously increasing. In the failure stage, both the total import strain energy and the dissipative energy increase continuously, while the releasable elastic strain energy decreases. The dissipation energy increases continuously and approaches the total import strain energy. The results show that the total import strain energy decreases with increasing cycling number for the L-T, M-T, and H-T groups. The total import strain energy (U) under different test conditions is shown in Table 2. From the data of the three groups it is found that under the same number of H-C cycles, the decrease rate for the H-T group is larger than that of the other two groups; specifically, the value of U for the H-T group decreases to 0.60 MJ/m3 after 12 H-C cycles. Finally, from the comparative analysis it can be noticed that with the increase of cycling number, both the total import strain energy and the total dissipation energy of sandstone in the whole impact compression and failure process gradually reduce.
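For a quick numerical illustration of how equations (1) and (2) above are applied (the figures used here are hypothetical and chosen only for illustration, not taken from the tests): if, at a given point of the stress-strain curve, the integrated area gives U = 1.0 MJ/m3 while the axial stress and elastic strain are σ = 80 MPa and ε_e = 0.005, then U_e = (1/2) × 80 × 0.005 = 0.20 MJ/m3 and U_d = U − U_e = 0.80 MJ/m3, i.e., most of the imported energy has already been dissipated by microcracking at that point.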
Relationship between energy characteristic index and cycling number in different stages

The above analysis illustrates that the energy evolution process of rock under dynamic loading can be divided into four stages: compression, linear deformation, yield, and failure. Moreover, the energy characteristic indexes (i.e., U, U_e, and U_d) differ among the various dynamic compression deformation and failure stages of rock. The energy distribution characteristics and damage degree vary correspondingly across the four stages [26,27]. According to equation (2), the energy characteristic indexes for the L-T, M-T, and H-T groups are obtained and shown in Figure 5. Figure 5 illustrates that: (1) the three energy characteristic indexes (i.e., U, U_e, and U_d) are affected by cycling number, deformation and failure stage, and treatment temperature; additionally, similar effects of treatment temperature on the energy characteristic indexes are observed for the three cycle groups. (2) With increasing cycle number, the energy characteristic indexes (i.e., U, U_e, and U_d) gradually decrease. For example, U_d at the yield stage in the M-T group decreases from 1.264 to 0.695 MJ/m3 with the cycle number increasing from 0 to 40. (3) Furthermore, the decrease degree is quite different at the different deformation and failure stages: the decrease degree for U and U_d at the failure stage exhibited the maximum value compared with the other three stages, whereas the decrease degree for U_e at the failure stage shows the minimum value. (4) Additionally, the treatment temperature also has a significant effect on the energy characteristic indexes (i.e., U, U_e, and U_d) at the same deformation and failure stage.

The energy components constantly change and transform into each other at the different deformation and failure stages. In the compaction stage, the test results indicate that U > U_e > U_d, and the total import strain energy in this stage is mainly converted into elastic energy. The elastic energy and dissipation energy of each specimen exist simultaneously in the linear elastic deformation stage, where the total import strain energy is converted into elastic energy and dissipated energy. In the yield stage, the total import strain energy of sandstone begins to dissipate, and only a small percentage is converted into elastic energy. In the failure stage, the dissipation energy is larger than the total import strain energy of this stage, while the elastic energy is approximately zero, indicating that the elastic energy stored in the pre-peak stage is released.

Hydrothermal coupling damage caused by H-C cycles

With the increase of the H-C cycle number, the damage degree of sandstone gradually increases, so that the rock fails more easily under the same conditions. From the above analysis it can also be noticed that the total import strain energy gradually reduces with increasing cycling number. To further describe the hydrothermal coupling damage of rock caused by H-C cycles, the hydrothermal coupled damage (D) is defined and calculated based on the variation of the total import strain energy required for the compression deformation process after different H-C cycles [27]; the function can be expressed as

D = 1 − U_n/U_0, (3)

where D is the hydrothermal coupled damage value of sandstone after H-C cycles, and U_n and U_0 are the total import strain energies of sandstone samples subjected to n and 0 H-C cycles, respectively. According to the above calculation method, fitting curves between the damage of rock and the cycling number are shown in Figure 6.
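As a small worked example of equation (3) (the reference value U_0 used here is hypothetical, chosen only to illustrate the calculation): taking U_0 = 2.0 MJ/m3 for the untreated specimen and the measured U_12 = 0.60 MJ/m3 for the H-T group after 12 cycles would give D = 1 − 0.60/2.0 = 0.70, which is of the same order as the damage values reported below for the H-T group.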
The variation of H-C cycle damage with the cycling number of sandstone can be found in formulas (4)-(6). Figure 6 and formulas (4)-(6) reveal that the damage of rock specimens caused by H-C cycles gradually increases with the increase of cycling number. Under the same cycle number, the rock damage for the H-T group became more serious compared with that for the M-T and L-T groups. After 10 H-C cycles, the damage value for the H-T group is 0.69, which is much higher than the values of 0.16 and 0.07 for the M-T and L-T groups.

Effect of H-C cycle on microstructure changes of rock

The microstructural changes have a significant effect on the variations of the physical and mechanical properties of rock materials after different H-C cycles, and studying them is beneficial to better understand the H-C damage mechanism of rock materials. The H-C cycle is a complex three-field coupling damage process involving the interaction of water (ice and steam), heat (temperature), and force (thermal stress) in rock media. To further investigate the mechanical damage of H-C cycles on the dynamic strength and energy dissipation of rock, SEM and EDS tests are conducted in this section to observe the microstructural damage of rock.

Water-rock interaction mechanism

Physical mechanism. Sandstone is composed of different kinds of minerals (such as albite, quartz, and calcite) [28][29][30]. Figure 7 shows the expansion process of clay minerals. In this test, the sandstone belongs to sedimentary rock, and its overall structure is mainly a clastic structure. Clay minerals such as kaolinite, which have small particle size and high cohesion, can bond with each other and to the surfaces of other minerals, such as quartz, albite, and calcite. Hence, clay minerals act as binding media to stabilize the rock structure. Additionally, kaolinite accounts for the main part of the clay minerals in the sandstone specimens; it shows good hydrophilic ability and exhibits hydration and expansion characteristics because of the effects of capillary force, surface hydration, and osmotic hydration [31]. It can be seen from Figure 7 that the clay mineral polymer absorbs water, giving rise to an expansion of volume, which produces stress on the mineral particles and affects the orientation of the clay minerals, so that the original equilibrium state is broken. Water molecules and hydrated ions fill the clay mineral aggregates, which weakens their cohesion and allows them to slip against each other, leading to the weakening of the cementation between sandstone mineral particles [32] and resulting in the macroscopic damage of rock after water-rock reactions.

Chemistry mechanism. According to the mineral composition of sandstone, the chemical changes that occur between sandstone and water are mainly albite hydrolysis, calcite carbonation, and colloidal dispersion of the insoluble quartz minerals [9]. The chemical reaction formulas are given in (7)-(10). These reaction formulas illustrate that material exchange between the aqueous solution and the rock minerals occurs continuously. Specifically, the ion concentration close to the mineral particles is much larger than that in free water. Hence, a large thermodynamic potential energy is generated between them, which promotes the chemical reaction. Formulas (7)-(10) also illustrate that the chemical reaction products are easier to dissolve or decompose in water than the reactants. Hence, with the increase of H-C cycles, more and more reaction products are generated, which makes the rock minerals looser and more fragile.
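For orientation, standard textbook forms of the reactions named above are as follows (these are generic forms given only as an illustration; they are not necessarily the exact formulas (7)-(10) of the original study):
2NaAlSi3O8 + 2CO2 + 11H2O → Al2Si2O5(OH)4 + 2Na+ + 2HCO3− + 4H4SiO4 (albite hydrolysis to kaolinite),
CaCO3 + CO2 + H2O ⇌ Ca2+ + 2HCO3− (calcite dissolution by carbonation),
SiO2 + 2H2O → H4SiO4 (dissolution of quartz as silicic acid).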
Thermal damage mechanism

Rock is a typical multi-mineral complex material; the thermal damage mechanism in the H-C cycle process is shown in Figure 8. From the figure it can be seen that in the saturation process, all the pores and cracks inside the rock are filled with water, and there is no thermal damage in this process. In the cooling process, part of the water freezes and expands by about 9% of its original volume, and this expansion induces tensile stress concentration and damages the micropores. In the heating process, the mineral particles undergo uncoordinated deformation caused by the difference in thermal-expansion coefficients [33,34], leading to thermal stress, which damages the rock structure. Recurrent H-C cycles gradually lead to the accumulation of damage inside the rock and further weaken its mechanical properties.

Microstructure characteristics of rock after H-C cycles

During SEM observation, the most crucial issue is how to distinguish the cracks and pores between mineral particles. Hence, EDS technology is adopted to further study the damage characteristics (i.e., cracks and pores). Typical microstructure characteristics of the sandstone specimens after different H-C cycles are displayed in Figure 9. Figure 9 illustrates that at 20°C (room temperature), only very few initial pores and microcracks can be observed in the rock (Figure 9(a) and (b)). In the L-T group, when the sandstone specimen experiences 10 H-C cycles, some propagation of the initial microcracks and pores is induced by the freezing expansion and thermal expansion of the mineral particles, which alters the microstructure of the sandstone, as shown in Figure 9(e) and (f); after 40 H-C cycles, due to the disharmony of thermal deformation between mineral particles, the thermal stress concentrates at the edges of the mineral particles (Figure 9(g) and (h)); after repeated H-C cycles, the thermal stress at the mineral boundary is constantly produced, leading to fatigue rupture along the boundary of the mineral particle, which indicates that the microstructure inside the rock is weakened by the thermal stress. The variation of the microstructure of rock after H-C cycles in the M-T group is similar to that in the L-T group: the cracks propagate and the pores expand with increasing H-C cycles. Additionally, new pores and cracks between the mineral particles begin to be concentrated and interlinked (Figure 9(i)-(l)). In contrast, by observation and comparison of the EDS results at room temperature for the L-T, M-T, and H-T groups, a large number of cracks and pores appear inside the rock after 12 H-C cycles in the H-T group (Figure 9(c) and (d)); study results show that clay minerals such as kaolinite may lose hydroxyl water and change the structure of the rock after 400°C treatment [35][36][37][38]. With increasing cycling number, the thermal expansion of the sand particles inside the crystal increases the volume of the sandstone specimen. The internal free water and organic matter in the sandstone gradually evaporate, which leads to a decrease in the P-wave velocity, density (Table 1), and total import strain energy (Table 2) and an increase in the porosity (Table 1) and damage (Figure 6).

Conclusions

(1) The whole energy evolution process of sandstone after H-C cycle treatment can be divided into four stages. With the increase of cycle times, the total import strain energy decreased. The decreasing rate of the total import strain energy in the H-T group was significantly larger compared with that in the L-T and M-T groups.
Conclusions

(1) The whole energy evolution process of sandstone after H-C cycle treatment can be divided into four stages. With an increasing number of cycles, the total input strain energy decreased. The rate of decrease of the total input strain energy in the H-T group was significantly larger than that in the L-T and M-T groups.

(2) In the dynamic compression deformation and failure process of sandstone, the total input strain energy, the dissipated energy, and the releasable elastic energy behave differently at the different stages. The decreases in U and U_d at the failure stage were the largest of the four stages, whereas the decrease in U_e at the failure stage was the smallest.

(3) For the same number of cycles, the damage in the H-T group was more severe than in the M-T and L-T groups. With repeated H-C cycles, thermal stress is continually generated at the mineral boundaries, leading to fatigue rupture along the boundaries of the mineral particles.
2020-08-24T13:14:30.860Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "124177e10fb656697b2e4acb11c19814f01c726f", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/htmp-2020-0070/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "124177e10fb656697b2e4acb11c19814f01c726f", "s2fieldsofstudy": [ "Geology", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
233139203
pes2o/s2orc
v3-fos-license
Family Medicine Physicians’ Perspectives Regarding Rural Behavioral Health Care: Informing Ideas for Increasing Access to High-Quality Services Primary care settings often function as the front lines for behavioral health services in rural areas. The lack of formal behavioral health care in rural areas is also well documented. Rural family practice physicians were interviewed regarding the state of behavioral health care in their communities and their ideas for increasing access to quality care. Thirteen family practice physicians in rural locations participated in in-depth semi-structured interviews. Interviews were transcribed, coded, and analyzed following a phenomenological design. Physicians described a lack of quality behavioral health services and challenges for integrating and collaborating with those that do exist. Participants also described the changing role of stigma, service delivery strategies that are currently working, and the unique role primary care plays in rural behavioral health care. Several ideas for increasing access to and efficacy of services are discussed; these ideas are informative for future research and interventions. Introduction Existing evidence indicates that there is a shortage of mental health professionals in rural areas. 1,2 As of 2017, 62% of designated mental health shortage areas were situated in rural contexts. 3 This is noteworthy because significant concerns relating to mental health rates, severity, and outcomes persist for rural communities. 4,5 Compared to urban counterparts, rural respondents are more likely to describe their mental health status as poor. 5 They are also more likely to report higher levels of depression, suicide, substance abuse, domestic violence, and child abuse. 6,7 A growing body of research has been investigating the overlap of behavioral health care in primary care settings and has found promising indications regarding the efficacy of care and increased accessibility. 1,[8][9][10] While rates of psychiatrists per capita rapidly decrease as levels of rurality increase, the rate of family medicine physicians providing mental health care significantly increases in rural settings. 11 Similarly, Miller and associates 12 indicated that as rurality increased, the prevalence of mental health care services being co-located within primary care settings also increased. Data also show that primary care physician availability is associated with better mental health ratings for rural communities. 13 In a landmark study of hospitals that integrated mental health services in 22 US states, Bird and associates 14 proposed a model of four ways in which behavioral health care was integrated within primary care: (a) diversification (mental health care provided directly onsite), (b) linkage (an independent mental health practitioner or agency operates onsite), (c) referral (formal or informal arrangements made for patients to see offsite mental health professionals), and (d) enhancement (training primary care physicians to recognize, diagnose, and treat behavioral health concerns independently). Examples of each of these models were present in the primary care settings represented in this investigation. Much research on integrated care and updated models has been proposed. 15, 16 Bird et al.'s 14 model, however, was developed specifically for rural contexts. There is a growing body of research regarding the presence and function of barriers to behavioral health care. 
2,4,17 Penchansky and Thomas 18 proposed a model for conceptualizing barriers that outlines dimensions of access: (a) affordability, (b) accessibility (location), (c) availability (number of providers), (d) acceptability (attitudes), and (e) accommodation (the relationships between the way services are organized and patients' abilities to integrate said services into their schedules and lives). This model focuses more specifically on fit between patients and the health care system. 18 These concepts have been revised and adapted in rural behavioral health research, and the factors have appeared as a consolidated list of three service-related foci: availability, accessibility, and acceptability. 4,19,20 Further understandings about the presence and function of these barriers were explored in the investigation presented here. Research Questions The present study was positioned to address a knowledge gap in understanding about how barriers to behavioral health care function for rural populations. The two grand tour questions that guided this investigation were: (a) What experiences and observations of barriers to behavioral health care among patients and their families do rural physicians witness in their practices? and (b) What ideas do rural physicians have for overcoming barriers to behavioral health care? By interviewing Family Medicine physicians, this investigation also elucidated existing strategies that are working and/or not working and contributed to ideas about what steps are next in terms of inquiry and intervention aimed at reducing barriers to rural behavioral health care. Method Research Design This investigation was informed by phenomenological and hermeneutic phenomenological research designs, which follow the study of experiential meanings as they are lived out and present themselves in human consciousness. 21 These approaches contribute to the understanding of observed phenomenon via rich descriptions of the personal and professional experiences of those immersed in the said phenomenon as it is occurring (in this case, physicians' experiences in rural communities with barriers to behavioral health care). Hermeneutic phenomenology has been applied to studies of the experiences of rural physicians before 22,23 and was chosen for this investigation because of its interpretive strengths and the way it centers the meaning-making and expertise of the study participants themselves. The research design distilled meaning about the essence of rural physicians' experiences from a vast number of words used by the participants as they told stories, shared opinions, and engaged in critical thought about their communities and patients' experiences. Hermeneutic analysis originated in philosophical literature, 24 and when it is applied to research, data are converted into written documents (in this case transcribed interviews), which then become the subject of analysis. Research design diverged from pure hermeneutic phenomenology in that the bracketing of researcher assumptions and biases was intentionally prioritized. In hermeneutic phenomenology, researchers and study participants are viewed as co-creators of meaning, and researchers' own experiences, ideas, and assumptions are intentionally imbedded into the analysis. 24 Both traditions value reflexivity, the process of self-reflection, and identification of biases and assumptions, but in phenomenology, reflexivity is followed by bracketing, the intentional separation of the self from the interpretation of findings. 
24 In this study, the following strategies were used to enhance the process of reflexivity and the subsequent bracketing of the self. The first strategy was memoing. Notes about opinions, emerging ideas, hypotheses, and personal responses were kept throughout the data collection and analysis phases. These notes were discussed throughout the research process during weekly research meetings. Ongoing analytic memos were kept in the coding documents to mark specific instances where perceived subjectivity or lack of certainty could be influencing data analysis. These instances were also discussed in research meetings. The researchers also kept an ongoing paper trail to log research activity, decisions, and milestones throughout the entire research process. The interviews were guided by semi-structured questions that were created through consulting existing research, human ecology theory, 25,26 and the Andersen model of health service use. 27,28 The human ecology assumption that "environments do not determine human behavior, but pose limitations and constraints as well as possibilities and opportunities" 29 (p. 426) informed interview questions related to perspectives on barriers to care and ideas physicians had for increasing access to behavioral health care in their practices. An initial interview schedule was revised after a practice interview was conducted with a rural family physician who practiced outside of the sampling region for this study. Interviews were conducted via telephone or online video-conferencing (depending on participants' preferences) and lasted approximately 30 min. All were audio-recorded and transcribed verbatim. Transcripts were then reviewed and edited for accuracy. Sampling and Recruitment Potential participants were identified by creating a list of every zip code contained in a ninecounty area of the Upper Midwest. The Minnesota Department of Human Services (MDHS) Provider Directory 30 and the Wisconsin Medical Society (WMS) Physician Directory 31 were used to search for family medicine physicians in each respective zip code. The directories included physician names, public addresses, and phone numbers for general hospital systems and clinics. The search revealed a total of 172 family medicine physicians that practiced in the range of zip codes represented by the nine-county area. As the scope of this investigation was on the experiences of rural communities, 71 potential participants were initially removed from the list because they practiced in the zip codes that covered two large cities (as identified by Rural Urban Commuting Area (RUCA) Codes 32 ). RUCA Codes are a zip-code level measurement system that calculates level of rurality based on population density, urbanization, and daily commuting patterns in and out of each zip code. The most current codes are based on 2010 US Census data. 32 After removing the urban providers, 101 potential rural physicians were left to be recruited. The list of 101 potential physicians was initially reviewed by the director of the University of Minnesota Medical School's Rural Physician Apprenticeship Program (RPAP) for potentially willing participants based on his familiarity and past work experience with physicians on the list. The director's perusal resulted in a list of seven physicians who were then contacted in the first wave of recruitment. A recruitment script was created, and potential participants were cold-called. 
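The zip-code screening step described here and continued below can be expressed as a small filtering routine. The sketch is illustrative only: the file names, field names, and the convention of treating primary RUCA codes 1-3 as urban are assumptions made for this example, not details taken from the study.

```python
import csv

# Assumed inputs (not the study's actual MDHS/WMS directories):
# physicians.csv has columns "name" and "zip"; ruca.csv has "zip" and "ruca_primary".
URBAN_RUCA = {1, 2, 3}  # assumption: treat metropolitan primary codes as urban


def load_ruca(path):
    """Map each zip code to its primary RUCA code."""
    with open(path, newline="") as f:
        return {row["zip"]: int(row["ruca_primary"]) for row in csv.DictReader(f)}


def rural_physicians(physician_path, ruca_path):
    """Drop physicians in urban zip codes and sort the rest most-rural first."""
    ruca = load_ruca(ruca_path)
    with open(physician_path, newline="") as f:
        doctors = list(csv.DictReader(f))
    # Unknown zip codes default to 99 so they are kept rather than silently dropped.
    rural = [d for d in doctors if ruca.get(d["zip"], 99) not in URBAN_RUCA]
    rural.sort(key=lambda d: ruca.get(d["zip"], 0), reverse=True)
    return rural


if __name__ == "__main__":
    for doc in rural_physicians("physicians.csv", "ruca.csv")[:10]:
        print(doc["name"], doc["zip"])
```

Sorting by descending RUCA code mirrors the wave-based recruitment described next, in which the most rural practices were prioritized first.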
After the first wave of recruits had been contacted, a second wave of potential participants was identified based on the level of rurality in which they practiced. Rurality was measured using RUCA Codes, 32 and the intention was to prioritize recruiting to those who were experiencing the greatest degree of rural practice. Three waves of phone-based recruitment were identified using this method, yielding 11 participants. Another participant was recruited through a personal connection with the first author and one more by snowball sampling. Sample Demographics Thirteen physicians participated in the study. They ranged in years of post-residency experience from less than 2 years to 48 years in practice (mean = 20.27 years). Most had been at their present locations for the majority of their careers, and all but one had been in rural settings for their entire careers. See Table 1 for demographic details about the participants. Analysis There is variation to the analytical methods used in hermeneutic phenomenology when applied to research, 33 but three key strategies make up the hermeneutic cycle: reading, reflective writing, and interpretation. 33 Inspired by these strategies, in this study, three waves of coding analysis-holistic, detailed, and interpretive-were employed. 21,34 This sequence served to distill and extract meaning from, and identify themes within and across, the interviews. During the first wave, text was read through in its entirety and coded using selective highlighting to distinguish meaning-rich passages. At the completion of this wave, holistic summaries of each interview were created. During the second wave, interviews were read again, this time to synthesize and describe passages of meaningful text at an immediate level. During this wave, codes followed as closely to the language used by the participants as possible. The third wave of coding was interpretive in nature, whereby efforts to ascribe meaning to the distilled ideas and tie them to the emerging themes from the rest of the interviews were advanced. The interpretive wave of coding is where the protocol diverged from a pure hermeneutic phenomenology procedure and instead followed a more objective approach akin to phenomenology. Bracketing strategies were employed throughout the analysis process, but in this interpretive stage, they were used with greatest emphasis. Ongoing analytic memos were recorded to reflect emerging ideas about potential themes and to note personal biases that could influence objectivity. After the first four interviews were analyzed, preliminary codes and recurring ideas were noted. After each subsequent interview was coded, revisions and additions were made to the list. The first author coded all 13 interviews, the second author coded six interviews, and the third author coded three-all following this exact coding protocol. Reconciliation meetings among the researchers were conducted throughout the process and were used to identify differences in interpretations, bracket assumptions and biases, refine the coding protocol, and discuss emerging themes. At the end of the coding process, the list of emerging themes, categories, and subcategories was reorganized, revised, and run through peer-checks (with fellow researchers) and member-checks (with interviewees) as another measure to ensure that data interpretations were trustworthy and representative and that individual assumptions had been bracketed and set aside. 
Results Findings were synthesized into seven overarching themes, each with categories and subcategories. What follows here are reported data and descriptions of each theme. Family Physicians Are Intentional in Their Choices to Practice in Rural Communities Participants were asked to share their reasons for choosing rural practice; all reflected a sense of intention. Most had been in their current practice locations for the majority of their careers. For those that had changed locations, all but one had been in rural areas for the entirety of their career. Reasons for this ranged from personal (e.g., enjoyed proximity to nature, grew up in rural areas, had family in rural communities, liked small towns) to professional (e.g., family medicine is suited to rural practice, family medicine allows an extensive range of medical practice, loan repayment programs). These findings were consistent with existing research about characteristics of rural physicians 35 but were also unique in the way the personal reasons were emphasized by the physicians in this study. Existing research has found that compared to urban counterparts, rural physicians are similarly compensated but tend to work longer hours and see more patients. 36 For the physicians in this study, more workload seemed to be offset by a strong sense of liking the work and the Family Medicine communities, especially among those who had lived there for many years. Speaking to the culture of their small community, one physician stated, "So, the thing that some people find creepy is that you go to the grocery store and everyone knows who you are. I find it fascinating. Comforting." These reflections about rural communities often segued into ideas about the culture of the communities themselves. Rural Culture Presents Challenges and Opportunities for Behavioral Health Respondents described several unique features about their communities. One that came up frequently had to do with the tight-knit, interconnected nature of rurality, e.g., "I always joked that you had to be nice to everyone because something will break in your house and your friend's exbrother-in-law will be the only one who could fix it." Comments about everyone knowing everyone else were commonplace. One shared this example of lack of anonymity impeding care with a behavioral health provider: It's even come up where the counselors felt uncomfortable with some things. There was one episode where she was counseling a parent at a child visit, well it turns out that the counselor's kid had bit her kid the day prior at daycare. So, there is just a lot of that. Physicians also discussed the implications of lack of anonymity on health care and shared that they often treat entire families within their practices Several physicians mentioned job loss and economic insecurity as causes for family instability and behavioral health concerns. Poverty was frequently described both as a cause for behavioral health and health concerns and as a barrier for seeking and receiving appropriate treatment. Another unique feature of rural communities that was described as a barrier to receiving care was a general fear or mistrust of cities and of outsiders. One participant described this as "tribalism" and went on to say: We have a lot of people that are fearful of going through to the big city. We have a lot of people who have never traveled on an airplane or . . . met a lot of people outside of their own racial group or their own socioeconomic group. 
These and other cultural factors (including how the presence of American Indian reservations and historic immigrant communities shape culture, the importance of industry to rural towns, and the effects of remote location of these communities) were described by participants in the beginning of interviews and referenced back frequently as they discussed implications for behavioral health. A Range of Behavioral Health Concerns Persist for Rural Communities Physicians discussed several behavioral health concerns that persist for their patients and several shared ideas about factors that cause and/or exacerbate presenting symptoms. All described anxiety and depression among their patients, and many discussed substance abuse (alcohol, opioids, etc.) as a co-occurring and/or separate concern. One physician shared, "The opioid epidemic is alive and well. A lot of heroin abuse. A lot of prescription drug abuse. A lot of deaths related to that." Deaths by suicide and emergency behavioral health situations were described by several physicians. One explained that behavioral health concerns and suicide rates may be inflated not by residents of the community but by individuals who travel to the area as a destination for escaping perceived problems and carrying out suicide plans: "The other thing that I was not aware of, and has been an unfortunate reality of being here is, I forget the exact term they've dubbed it, but 'suicide tourism', where people will come here and commit suicide." Other behavioral health concerns were mentioned with lower frequency but often reflected concerns with specific age groups. Age-related concerns for elderly patients, such as dementia, were mentioned, as were concerns over the dearth of care available for children's behavioral health services. Physicians often thought holistically about patients' circumstances and offered hypotheses about the causes for behavioral health concerns in their communities. Ideas included aging, family instability and stress, historic trauma and oppression, social isolation, unemployment, and poverty. With those who treat younger patients, stressors related to social media were mentioned and were also described. Existing Services Are Often Insufficient or Disconnected from Health Care Physicians described a range of existing behavioral health resources, including medication management (handled by nurse practitioners, family practice doctors, and-in rare cases-psychiatrists), emergency room services, and some private and community behavioral health services. In some cases, integrated social work and counseling, crisis response teams, and telehealth services for emergency assessment (and in a minority of cases, ongoing telehealth therapy) were also present. Several categories of concerns about existing services emerged across the interviews, exposing shared challenges with existing behavioral health services. These included limited availability of services, inconsistent availability of services, lack of access to quality services, and lack of collaboration among providers advancing said services. In sum, most participants were concerned about the availability of behavioral health services. In rare cases, some expressed that behavioral health resources had improved over time in their communities. More frequently, however, they expressed concern about services having left their communities, and that there were several limitations with the services that did exist. 
Several Persisting Barriers Prevent Patients from Accessing Care Although all physicians identified resources that are available in their communities, they also all named barriers that prevent patients from accessing said resources. Several barriers are related to availability of providers (as described above), alongside issues with cost and lack of insurance, distance, patients not wanting to take the time to go to appointments, and symptoms from behavioral health concerns inhibiting patients' abilities to attend appointments. Other, more frequently mentioned, barriers had to do with patients' attitudes towards and perceptions of behavioral health concerns and care. These included a lack of trust for providers, lack of anonymity around help-seeking, and behavioral health stigma. One unique finding was that many physicians, especially those who had been in practice for many years, described observing a decrease in the effects and presence of behavioral health stigma for their patients. Some explained that it was still present for older patients but that (likely due to media exposure) it has been reducing with younger generations and in general. Family Physicians Have Unique Roles in Rural Practice As described by one participant, family physicians are the "front lines" for all health care in rural communities (including behavioral health care). This position means that family physicians often take on unique roles and duties. Some participants described doing assessment and triage services for behavioral health concerns. These physicians moved from the "front lines" role to more of a bridging role, connecting patients to more specified services if and as they were available. Some physicians also described doing basic behavioral therapy interventions and attempting to destigmatize mental illness and associated services. Concerns around medication management were mentioned several times, as well. Some physicians explained that they have to practice at the very edge of their competence when it comes to psychiatric medication. Many maintained that increased training, providers, and resources would ease some of the difficulties and pressures they experience. Several explained that their unique roles, too, result in awareness of family dynamics and stressors can impact patients' behavioral health and behavioral health care. Rural Family Physicians' Unique Ideas for Increasing Access to Behavioral Health Care All participants offered ideas regarding how to increase behavioral health care in rural communities. Some of the ideas already being employed involved strategies for increasing the reach of existing behavioral health professionals. Some communities approach this by sharing social workers and counselors between clinics. Because small clinics cannot afford fulltime professionals on their own, they collaborate with other small clinics-and are able to meet needs that way. To increase the reach of psychiatrists, one clinic has monthly video consultations wherein all family physicians bring cases to discuss medications, diagnoses, and treatment plans. Two other physicians mentioned that they have regular phone access with a regional psychiatrist for similar assistance. Another creative strategy was employed by a small hospital when they added a behavioral health professional to their team: When we integrated a behavioral health worker in our clinic, there was a concern that you can't just walk in the waiting room and say, you know, "the therapist will see you now." 
So we had to create an anonymous way to call people back like they were just regular clinic patients so they could be comfortable. We are just trying to be cautious about that because it is an issue in a small town. Like the strategies that increase the reach of professionals, this does not cost extra money or take more time. These strategies are aimed at increasing the effectiveness of existing resources. Ideas shared by participants for future strategies included a range of methods for building on existing resources and introducing new ones. Several spoke to the efficacy of telehealth technology. Others described a need for increased financial resources. One physician explained that "money talks" and that if reimbursement rates for psychiatry and behavioral health increased, access to quality care would too. Other ideas included better provider training for behavioral health concerns, rural training programs for behavioral health professionals, increases in collaboration with existing resources, culturally appropriate treatment, increasing the number of professionals who are able to prescribe medications, programs to aid with transportation to appointments, and prevention-oriented school programs. One commonly mentioned idea was increasing community awareness of behavioral health concerns and services through education and outreach. Implications for Behavioral Health Services The themes described above represent the essence of these rural physicians' experiences. In following the hermeneutic phenomenological approach, the interpretations have been guided by the language of the physicians themselves-centering their lived experiences and personal meaningmaking. Several of the findings that emerged from this process are noteworthy. One key finding that largely confirmed existing research is that behavioral health care is integrated within primary care in these rural communities (to varied degrees and through varied means). All respondents described behavioral health services and resources that are available to patients, and, alongside these descriptions, several strengths and weaknesses of available services emerged. The four methods of behavioral health integration described by Bird et al. 14 -including diversification, linkage, referral, and enhancement-were all put forth as models that these communities are using or have used in the past. For each, specific associated barriers to the method of integration were also identified. For diversification (integrating behavioral health professionals directly onsite), lack of funding, lack of providers, inconsistency, inability to meet demand, and lack of trust for providers were all listed as barriers. For linkage (integrating independent, outside professionals and agencies onsite), barriers related to funding, lack of providers, and inconsistency were mentioned. As an opportunity, some clinics integrated outside agencies via telehealth technology. For the model of referral (formal and informal arrangements for care with outside professionals), lack of availability of providers, lack of quality providers, stigma, distance, cost, lack of communication and collaboration with health care, and lack of awareness of existing community resources were all mentioned. For the model of enhancement (training primary care professionals to identify, diagnose, and treat behavioral health concerns), the only barrier mentioned was a lack of training. Several identified this integration model as a future idea for increasing access to care. 
Referral to outside behavioral health sources may be the most apparent integration method, but it was also the method associated with the most barriers based on the physicians' reports. Lack of provider availability in rural areas and challenges for existing rural behavioral health providers have been well documented. 3,17,37,38 These findings, combined with evidence from existing literature, suggest that more effective strategies for increasing access to behavioral health care in rural communities may involve integrating and maximizing existing resources instead of creating space for and recruiting new services to these areas. Another key finding is that the presence and impact of behavioral health stigma has been decreasing according to physicians in the present investigation. This finding is in line with some research, 39 but challenges more established ideas regarding stigma as a barrier to behavioral health care. 7,[40][41][42] Though it was still mentioned as a barrier by physicians, the main takeaway in this study is that its presence and impact have diminished over time. Physicians hypothesized that media has played a large role in normalizing behavioral health concerns and services; if this trend continues, stigma may be a smaller barrier to overcome as time goes on. Despite this, however, the presence of behavioral health stigma was still mentioned as a barrier by nearly every physician. If physical health care carries less stigma than behavioral health care (and it does), then increased integration of behavioral health care into primary care settings continues to be indicated. 43,44 As primary care physicians, the respondents in this study may have experienced less behavioral health stigma in their practices because they themselves were actors in the efforts to reduce stigma. Primary care may not carry the same stigma that psychiatry or behavioral health treatment does, and for this reason, it may be a more welcoming entry point for those for whom stigma is a barrier to needed care. Strengths and Limitations Rural family physicians have participated in surveys and a mixed methods investigation about perspectives on behavioral health before, 9 but this qualitative study is the first that used in-depth interviews about physicians' experiences with and perspectives regarding behavioral health care in rural communities. The range of opinions brought forth represents an important contribution to our understanding about behavioral health barriers. Practices of trustworthiness and reflexivity, too, represent strengths of this study in its informing of accurate findings. Attention to the geographic loci (using RUCA code measurements) also ensured that participants represented rural communities-versus small towns or colloquially "rural" areas. Limitations of this study are important to consider, as well. First, recruitment methods relied on convenience and snowball sampling strategies in order to reach a meaningful sample of rural family physicians. Though high levels of saturation did occur across the 13 interviews, a larger sample could provide more insights. Second, self-selection may have shaped participants' responses. It is possible that physicians who already had an interest in behavioral health concerns were the most likely to agree to participate in the study. They thereby could have presented stronger narratives regarding health care services as amenable to behavioral health needs. 
Finally, while generalizability is not always primary goal of qualitative research, 45 it should be noted that all responses came from a seven-county region in a North Central US State. As a location-based variable, rurality is difficult to operationalize and generalize across studies. These responses reflect rural culture for their specific region, but it should be noted that rural cultures (i.e., situational and contextual factors for each region) vary. Findings presented here should be interpreted with this understanding. Next Steps for Research and Intervention Future research should continue to explore the efficacy of strategies for integration of behavioral health services within primary care locations. It would be beneficial to interview and survey residents of rural communities who do not have professional or medical backgrounds as well. While it can be difficult to access patients as research participants, their voices and experiences will be invaluable in shaping appropriate and effective interventions. The respondents in this study have specific views as physicians, and it makes sense that integration with primary care services would be a key focus of their ideas for improving care. Future investigations should also consider the possibility of integrating behavioral health services with other community institutions (e.g., first responders, churches, schools). There was a high level of focus in the present study on the dearth of emergency services, but not much attention to preventative behavioral health care. It is likely that as the front lines for behavioral health emergencies, exacerbated conditions, assessment, triage, and treatment were more salient in their responses. However, preventative approaches may lead to lower costs and better outcomes. 46 It is likely that rural residents are interfacing with other community institutions more regularly and doing this long before they access health care systems. More research is needed to understand ways to access preventative care within existing social structures (e.g., school programs that advance stress-reduction and/or mindfulness-based interventions) and to evaluate its reach and effectiveness. Conclusion This investigation represents a unique contribution to existing literature about barriers to rural behavioral health care and regarding the integration of behavioral health care into primary care settings. Experiences from a range of family practice physicians revealed several challenges and opportunities. Key findings suggest that barriers to behavioral health care services are experienced differently depending on the type of services and level of integration with primary care that is available. More research is needed to advance strategies for integrating already-extant behavioral health resources into both rural health care systems and other community institutions.
2021-04-07T14:04:39.267Z
2021-04-06T00:00:00.000
{ "year": 2021, "sha1": "2634bee640876b691859e6c17592b8b215358025", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11414-021-09752-6.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "2634bee640876b691859e6c17592b8b215358025", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
251002249
pes2o/s2orc
v3-fos-license
FLOT2 Promotes the Proliferation and Epithelial-mesenchymal Transition of Cervical Cancer by Activating the MEK/ERK1/2 Pathway

Background: Cervical cancer is a prevalent cancer in women, with approximately 569,847 new cases occurring every year. Aims: This study aimed to explore the role of FLOT2 and its related mechanism in the development of cervical cancer. Study Design: Cell culture study and animal experimentation. Methods: Quantitative reverse-transcription polymerase chain reaction (RT-qPCR) and Western blot analysis were performed to evaluate the expression of FLOT2. Flow cytometry was applied for the evaluation of cell apoptosis. Cell Counting Kit-8 and colony formation assays were utilized for proliferation measurement. A cervical cancer mouse model was employed to assess the role of FLOT2 in vivo. Results: FLOT2 mRNA and protein levels were dramatically elevated (P < 0.001) in cervical cancer cell lines compared with normal HcerEpic cells. The viability and proliferation of cervical cancer cells were enhanced (P < 0.01) by overexpression of FLOT2 and reduced (P < 0.01) by FLOT2 downregulation. In addition, FLOT2 overexpression elevated (P < 0.01) the migration ability of cervical cancer cells, whereas its depletion inhibited (P < 0.01) migration. Moreover, the protein expression of epithelial-mesenchymal transition markers, including Vimentin, N-cadherin, and E-cadherin, was assessed; Vimentin and N-cadherin levels were enhanced (P < 0.05) by FLOT2 upregulation and declined (P < 0.01) with FLOT2 downregulation. FLOT2 upregulation reduced (P < 0.05) the level of E-cadherin protein, whereas FLOT2 suppression attenuated this effect (P < 0.05). Furthermore, FLOT2 increased (P < 0.05) p-MEK/MEK, p-ERK1/2/ERK1/2, and p-AKT/AKT levels to activate the MEK/ERK1/2 and AKT pathways in cervical cancer. Finally, our results indicated that FLOT2 knockdown inhibited (P < 0.001) cervical cancer growth in vivo. Conclusion: FLOT2 aggravates the proliferation and epithelial-mesenchymal transition of cervical cancer by activating the MEK/ERK1/2 and AKT pathways.

INTRODUCTION Cervical cancer (CC) is a prevalent cancer in women, with approximately 569,847 new cases occurring every year. 1 Moreover, 311,365 patients die from the disease annually, 85% of them in developing countries. 2,3 Human papillomavirus (HPV) infection has been identified as the main cause of CC, and early treatment such as surgery combined with chemotherapy has been widely implemented. 4,5 Although patients with CC receive standardized treatments, the risk of recurrence and morbidity in these patients is still high 6 . In previous studies, numerous biomarkers have been identified to be associated with the occurrence or treatment of CC 7 , yet the prognosis of patients with CC is still poor. Exploring more effective and relevant biomarkers for CC therefore remains of great importance. As specialized domains in cell membranes, lipid rafts are involved in different cell signal transductions. 8 As important markers of lipid rafts, the flotillin family proteins, including the isoforms flotillin-1 (FLOT1) and flotillin-2 (FLOT2), are involved in multiple cellular processes. 9 In a previous study, flotillin proteins were found to participate in vesicular invaginations of the plasma membrane and to regulate signal transduction 10 .
Evidence shows that flotillin proteins are ubiquitously expressed and are implicated in various biological processes, such as actin reorganization, signal transduction, endocytosis, cell adhesion, actin cytoskeleton dynamics, and phagocytosis. 9 FLOT2 is one of the flotillin proteins and has been confirmed to interact directly with signaling molecules including kinases, G proteins, and adhesion molecule receptors 11 . More importantly, FLOT2 has been validated to be involved in cancer development. For instance, FLOT2 forms a positive feedback loop with TBL1X to participate in nasopharyngeal carcinoma cell metastasis. 12 Increased FLOT2 is implicated in the development and prognosis of colorectal cancer. 13 Despite these significant findings on FLOT2 in cancer development, its role in CC remains largely unknown. In the present study, we aimed to explore the role of FLOT2 in the growth of CC. We found that FLOT2 aggravates the proliferation and epithelial-mesenchymal transition (EMT) of CC via activation of the MEK/ERK1/2 and AKT pathways. The findings of this study might offer evidence on the role of FLOT2 in the prevention and treatment of CC in the future.

RNA Extraction and Real-time Polymerase Chain Reaction (RT-qPCR)

TRIzol reagent (Thermo Fisher Scientific) was employed for the extraction of RNA from CC cells and mouse tissues. RNA concentration was measured with a NanoDrop ND-1000 (Thermo Fisher Scientific).
qRT-PCR with Maxima SYBR Green on an ABI 7500 instrument (Applied Biosystems, MA, USA) was used to measure the FLOT2 level, with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the endogenous control. The expression of FLOT2 was calculated via the 2^-ΔΔCt method. The primers for FLOT2 and GAPDH were as follows: FLOT2:

Flow Cytometry

The cells were harvested 48 h posttransfection and subjected to trypsin digestion. After rinsing with phosphate-buffered saline, Annexin V-APC and propidium iodide were used to label the CC cells and measure their apoptosis via a cell apoptosis assay kit (BD Biosciences, USA). CC cell apoptosis was evaluated using a flow cytometer (Beckman Coulter, CA, USA) and analyzed with FlowJo v5.7.3 software (LA, USA). Cell apoptosis rate = (Q2 + Q3)%.

Transwell Assay

The migration of CC cells was examined in a 24-well Transwell plate (Costar). Briefly, the CC cells (1 × 10^3) were seeded in the upper chamber with serum-free medium and incubated for 4 h at 37 °C, and 500 µL of DMEM containing 20% FBS was added to the lower chamber. After culturing for 1 day, the cells that migrated to the lower chamber were stained with 0.1% crystal violet, and those that remained in the upper chamber were removed. A microscope was used to visualize the migrated cells, and images were obtained in five random fields.

Immunohistochemistry Analysis

Mouse tissues were sliced into sections and dewaxed with xylene. The sections were then treated with graded alcohol and antigen retrieval solution in turn. After boiling under microwave irradiation for 10 min at 95 °C, the samples were treated with hydrogen peroxide (3%) for 0.5 h, followed by blocking with goat serum (20%) for 40 min. The sections were then incubated with primary antibodies against Ki-67 (0.1 µg/ml, ab15580, Abcam) and FLOT2 (1/1000, ab96507, Abcam) at 4 °C overnight. Then, the HRP-conjugated secondary antibody (Abcam) was incubated with the sections for 1 h, followed by staining with diaminobenzidine (DAB) and hematoxylin.

Mouse Model of CC

The animal assay was authorized by the Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University. The Vital River Laboratory (Beijing, China) provided 6-week-old female Balb/c mice (total n = 10; each group, n = 5). HeLa cells transfected with siFLOT2 were injected subcutaneously into the right side of the back of the mice. On day 35, all mice were euthanized, and tumor tissues were extracted. The size, volume, and weight of the tumors were measured. Volume = (D × d²)/2, where D was the longest diameter and d was the shortest diameter. The tissues were subjected to immunohistochemistry (IHC) staining to measure the expression of Ki-67 and FLOT2. The levels of MEK/ERK1/2 and AKT pathway-related proteins were examined in the tissues via Western blot analysis.

Cell Counting Kit-8 (CCK-8) Assay

To investigate the proliferation of CC cells, the transfected CC cells were grown in 96-well plates at 37 °C with 5% CO2. After culturing for 24, 48, and 72 h, 10 µL of CCK-8 (Beyotime, Shanghai, China) was added to each well and incubated for 2 h. Subsequently, a microplate reader (Thermo Fisher Scientific) was used to measure the absorbance at 450 nm. Growth curves were plotted from the absorbance measured every 24 h.

Colony-forming Assay

The harvested posttransfection CC cells were resuspended in DMEM with FBS (10%) and plated into six-well plates. After 2 weeks of culture, colonies were fixed with methanol, followed by crystal violet staining. Images were captured in five random fields, and colonies with a diameter of > 2 mm were counted.
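For reference, the relative-expression and tumor-volume calculations used in the methods above follow standard forms; the worked numbers below are illustrative assumptions rather than data from this study.

$$ \Delta C_t = C_t^{\mathrm{FLOT2}} - C_t^{\mathrm{GAPDH}}, \qquad \Delta\Delta C_t = \Delta C_t^{\mathrm{sample}} - \Delta C_t^{\mathrm{control}}, \qquad \text{relative expression} = 2^{-\Delta\Delta C_t}, $$

so, for example, a ΔΔCt of -2 corresponds to an approximately 4-fold increase in FLOT2 mRNA. Similarly, the xenograft volume formula V = (D × d²)/2 gives V = (10 × 6²)/2 = 180 mm³ for an assumed tumor with D = 10 mm and d = 6 mm.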
Statistical Analysis

Data were displayed as the mean ± SD. Differences between two groups were compared via Student's t-test, and one-way analysis of variance followed by Tukey's post hoc test was used to compare differences among more than two groups. Statistical significance was set at P < 0.05. All tests were repeated at least three times. Calculations were made with IBM SPSS Statistics for Windows, version 20.0 (IBM Corp., Armonk, NY, USA).

FLOT2 Expression was Upregulated in CC Cells

To assess the role of FLOT2 in CC, RT-qPCR was performed to measure its mRNA in CC cells (HeLa and C33A). The results showed that the FLOT2 mRNA level was dramatically elevated in CC cells compared with that in HcerEpic cells (Figure 1a). Similarly, the protein level of FLOT2 was also enhanced in CC cells (Figure 1b). Overall, FLOT2 expression was upregulated in CC cells.

FLOT2 Aggravated CC Cell Proliferation

Subsequently, the function of FLOT2 in CC was explored. FLOT2 was overexpressed or knocked down in CC cells by transfecting pc-FLOT2 or siFLOT2 plasmids, respectively. The data showed that FLOT2 mRNA and protein expression was evidently increased by pc-FLOT2 transfection and decreased by siFLOT2 transfection in CC cells (Figure 2a-c). These results indicated that pc-FLOT2 and siFLOT2 could be employed for the following functional assays. The CCK-8 assay revealed that the viability of CC cells was enhanced by FLOT2 overexpression and reduced by FLOT2 downregulation (Figure 2d). In addition, FLOT2 upregulation increased the colony formation of CC cells, and its downregulation decreased colony formation (Figure 2e). Moreover, suppressed apoptosis of CC cells was observed in the FLOT2 overexpression group, and aggravated apoptosis was found in the FLOT2 knockdown group (Figure 2f). These results revealed that FLOT2 aggravated CC cell proliferation.

FLOT2 Increased CC Cell Metastasis and EMT

Moreover, the effects of FLOT2 on CC cell metastasis and EMT were explored. As exhibited in Figure 3a, FLOT2 overexpression elevated the migration ability of CC cells, whereas its downregulation inhibited migration. Meanwhile, the protein expression of EMT markers, including Vimentin, N-cadherin, and E-cadherin, was assessed; Vimentin and N-cadherin levels were enhanced by FLOT2 overexpression and declined with FLOT2 knockdown. FLOT2 overexpression reduced the level of E-cadherin, whereas FLOT2 knockdown attenuated this effect (Figure 3b). Taken together, FLOT2 increased CC cell metastasis and EMT progression.

FLOT2 Knockdown Attenuated CC Growth In Vivo

Finally, the role of FLOT2 in CC was investigated in vivo. After construction of the mouse model of CC, we evaluated the size, volume, and weight of the tumors. The tumor size, volume, and weight were decreased by FLOT2 downregulation (Figure 5a). [Figure 2 caption: FLOT2 aggravated CC cell proliferation. (a-c) The level of FLOT2 was tested via RT-qPCR and Western blot analysis; the overexpression and knockdown efficiencies were confirmed. (d-e) CCK-8 was used to examine the viability and proliferation of CC cells; proliferation was enhanced after overexpressing FLOT2 and reduced after silencing FLOT2. (f) Flow cytometry showed that apoptosis decreased after overexpressing FLOT2 and increased after silencing FLOT2. *P < 0.05, **P < 0.01, ***P < 0.001 compared with the NC group; ^P < 0.05, ^^P < 0.01, ^^^P < 0.001 compared with the siNC group.]
Additionally, IHC staining showed that the protein levels of FLOT2 and Ki-67 (a cell proliferation index) in the tumor tissues were restrained by FLOT2 knockdown (Figure 5b). Besides, the levels of p-MEK/MEK, p-ERK1/2/ERK1/2, and p-AKT/AKT were all decreased in the tumor tissues of the siFLOT2 group (Figure 5c-d). Altogether, FLOT2 knockdown inhibited CC growth in vivo.

DISCUSSION

As one of the most prevalent cancers in women, CC seriously affects women's health every year. 14 Despite the wide application of the HPV vaccine, CC is still one of the most lethal cancers in women 15 . The prognosis and survival of patients with advanced-stage CC are particularly poor. Thus, it is of great value to explore novel biomarkers for preventing CC occurrence or improving the outcomes of patients with CC. In previous studies, messenger RNAs (mRNAs) were frequently reported to be associated with CC development. For instance, sponged by miR-1236-3p, tripartite motif-containing 37 mediates cell proliferation and the cell cycle in CC development. 16 MDM2 expression is enhanced in CC tissues and cells to aggravate the viability and inhibit the apoptosis of CC cells. 17 KIF20A is boosted by the long non-coding RNA UCA1 to modulate the growth and metastasis of CC cells. 18 Although FLOT2 has been implicated in nasopharyngeal carcinoma 12 , colorectal cancer 13 , and glioma 19 , whether FLOT2 participates in CC growth remained to be elucidated. In the present study, FLOT2 mRNA and protein levels were increased in CC cells, and FLOT2 overexpression elevated the migration of CC cells. [Figure caption fragment: the levels of AKT pathway-related proteins (AKT and p-AKT) were examined by Western blot analysis; the AKT pathway was activated after overexpressing FLOT2 and retarded after silencing FLOT2. *P < 0.05, **P < 0.01, ***P < 0.001 compared with the NC group; ^P < 0.05, ^^P < 0.01, ^^^P < 0.001 compared with the siNC group.] EMT progression was also aggravated by FLOT2 overexpression. More importantly, we also identified that FLOT2 downregulation suppressed tumor growth in vivo. In summary, FLOT2 was involved in the tumorigenesis of CC in vitro and in vivo. The MEK/ERK signaling pathway is widely accepted to play a vital part in the progression of various cancers. For instance, the MEK/ERK1/2 pathway is modulated by DGCR5/miR-3619-5p and involved in the development of gallbladder cancer. 20 The MEK/ERK1/2 signaling pathway participates in CCL21/CCR7 interaction-mediated urinary bladder cancer cell migration, invasion, and lymphatic metastatic spread. 21 The MEK/ERK1/2 pathway regulates the migration and invasion of glioblastoma by enhancing mesenchymal phenotypes. 22 The MEK/ERK signaling pathway is also involved in insulin-like growth factor-1 receptor-induced immunosuppression in lung cancer. 23 FLOT2 has been shown to modulate MEK/ERK pathway activation in cancers. 24,25 FLOT2 has also been found to modulate the AKT pathway in glioma. 19 Nonetheless, the regulatory effects of FLOT2 on the MEK/ERK and AKT pathways in CC were unclear. In conclusion, we evaluated the role of FLOT2 in CC and found that FLOT2 promoted the proliferation and EMT of CC by activating the MEK/ERK1/2 pathway. The findings of this study might highlight the role of FLOT2 in the prevention and treatment of CC.
Ethics Committee Approval: Ethical approval was obtained from the Ethics Committee of the Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University. Data Sharing Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request. Conflict of Interest: No conflict of interest was declared by the authors. Funding: The authors declared that this study received no financial support.
2022-07-24T15:19:09.077Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "3de48add917f8f9c081b8560f61b95fa0bcf0115", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "MergedPDFExtraction", "pdf_hash": "500f62fe8a0c0c0ddbd0c02dac5ab9d1a11d8b44", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
219762182
pes2o/s2orc
v3-fos-license
Instance Segmentation Method Based on Improved Mask R-CNN for the Stacked Electronic Components : Object-detection methods based on deep learning play an important role in achieving machine automation. In order to achieve fast and accurate autonomous detection of stacked electronic components, an instance segmentation method based on an improved Mask R-CNN algorithm was proposed. By optimizing the feature extraction network, the performance of Mask R-CNN was improved. A dataset of electronic components containing 1200 images (992 × 744 pixels) was developed, and four types of components were included. Experiments on the dataset showed the model was superior in speed while being more lightweight and more accurate. The speed of our model showed promising results, with twice that of Mask R-CNN. In addition, our model was 0.35 times the size of Mask R-CNN, and the average precision (AP) of our model was improved by about two points compared to Mask R-CNN. Introduction In the industrial assembly field, plug-in electronic components are often manually inserted due to their complicated shape and fragile nature. Because of the massive workload, low efficiency, and high cost, it is difficult to ensure the assembly quality. With the rapid development of the electronics industry, the higher demand for assembly speed and accuracy of plug-in electronic components is prioritized. Traditional manual assembly does not satisfy the development needs of the electronic industry anymore. Therefore, the automatic assembly of electronic components has become an inevitable trend. Detecting the category and location of some objects is a necessary condition for automatic assembly. For human beings, recognizing and grabbing some specific objects in a stacked scenario is an intuitive behavior. However, for robots, finishing a series of motions smoothly, including identifying, locating, and grasping an object, is not an easy task. With the development of convolutional neural networks, object-detection methods based on deep learning have been greatly improved in speed and accuracy compared with traditional detection methods [1][2][3][4][5]. The self-adjusting ability of deep neural networks can effectively enhance the robots' autonomy in terms of object detection. Among all detection methods, instance segmentation can identify object contours at the pixel level and achieve higher location accuracy [6]. Differing from semantic segmentation, instance segmentation mostly focuses on the differences among the instances. In recent years, instance segmentation, as a critical technology of artificial intelligence, has been widely used in the medical field, the engineering field, and so on. He et al. [6] put forward the Mask R-CNN algorithm and used it for human pose estimation. The average precision of segmentation on the COCO dataset can reach 64.7%. Hang et al. [7] proposed a mammography quality detection and segmentation system based on the Mask R-CNN algorithm, which can effectively detect the quality of mammography without human intervention. Dai et al. [8] implemented the segmentation of prostate and intraprostatic lesions based on Mask R-CNN, which is of considerable significance to help radiologists in clinical practice. Chiao et al. [9] utilized Mask R-CNN to segment the ultrasound breast images for lesion detection and diagnosis of benign and malignant, which provides a non-invasive method for breast-lesion detection. Furthermore, to analyze the environment inventory, Xu et al. 
[10] used the method of instance segmentation to segment trees from the urban scenes, while the accuracy of semantic labeling of trees reaches around 0.9. Bert De et al. [11] proposed a loss function with two terms to determine the entity to which the embedded pixel belongs by the intra-cluster pull and intra-cluster push forces. This method uses the pixel embedding to solve the problem of semantic instance segmentation at the pixel level and promotes the development of autonomous driving technology. These research works indicate that instance segmentation possesses the ability to generate a high-quality segmentation mask for each object. In general, instance segmentation includes two kinds of methods: detection-based methods and segmentation-based methods. The detection-based methods focus on generating region proposals and predicted bounding boxes, then masking the objects in the predicted bounding boxes [12]. Hariharan et al. [13] proposed a method of simultaneous detection and segmentation (SDS). By using the R-CNN algorithm to extract features of each region, this method can generate a rough estimate of the mask based on the bounding boxes, combined with the region proposals, and eventually obtain a fine mask. To improve the accuracy of detection and segmentation, they further proposed a pixel descriptor called hypercolumn, which can calculate the vector of activations of all the convolutional layer pixels above a specified pixel. By embedding the pixel descriptor into the classifier, the mAP (mean average precision) raises from 50.3% to 56.5% on the PASCAL VOC 2012 verification set [14]. Dai et al. [15] replaced the pixel-category classifier in fully convolutional networks (FCN) [16] with the relative-position classifier of the pixel object instance, and the local correlation of the image was used to estimate the instance. On this basis, they designed a convolutional feature masking (CFM) method that extracts segmented features directly from feature maps instead of from original images [17]. The mAP increases from 56.5% to 61.5% on the PASCAL VOC 2012 validation set. To further improve accuracy, they divided the instance segmentation task into three sub-tasks: distinguishing instances, estimating masks, and classifying objects. After that, the multi-task network cascades (MNCs) [18] method was used to enhance the information flow among sub-tasks to accomplish fast and accurate instance-aware semantic segmentation. The mAP can reach 63.5% by using this method. Li et al. [19] put forward a fully convolutional instance-aware semantic segmentation (FCIS) instance segmentation method, which could make the mAP reach up to 65.7%. This method accomplished the detection and segmentation of the two sub-tasks by executing the inside score maps and outside score maps in parallel. The fully connected layers were replaced with a softmax classifier, thus reducing the possibility of overfitting. Based on FCIS, feature maps of different scales were used by Pham V Q et al. to generate score maps, which were fused with skip structure to produce segmentation results. Bayesian inference was put in to optimize the segmentation results, further increasing the mAP to 67.3% [20]. Another instance segmentation method is segmentation-based. Compared with the detection-based methods, the segmentation-based methods, on the contrary, first get a pixel-level segmentation map from an input image, then identify the object instances based on this segmentation map obtained [12]. 
Pinheiro [21] estimated the probability that an object is wholly contained in an image patch by using the object-proposal method, while the segmentation masks and correlation scores are generated simultaneously by giving the image input patch. Based on this approach, they further proposed a method called SharpMask [22]. Firstly, a coarse mask encoding was output in the feedforward process, and then a module which fuses the features extracted from the lower layer in the way of top-down was used to refine segmentation. To obtain the feature map with higher resolution, path enhancement was used in these references [12,23,24] to strengthen the information flow between network layers. The verification of these improved methods was performed on public datasets. These research works indicate that instance segmentation is an effective object-detection method, which can immensely improve the accuracy. Compared with the bounding boxes of object detection, instance segmentation can get a more definite edge of an object. In the meantime, instance segmentation possesses better performance than semantic segmentation in labeling different instances among the same kind of objects. Generally speaking, the classification task is only to identify an image containing some objects, but when segmenting some instances, it will become a more complex procedure. Especially in a stacked scenario with multiple overlapping objects, it is necessary to not only classify diverse objects but also determine the boundaries, divergence, and relationships among all objects. Nicholas et al. [25] used action primitives to touch multiple objects simultaneously. Object detection and grasp planning can be performed in a cluttered environment with multiple objects. Guo et al. [26] proposed a shared convolutional neural network that can detect target objects from stacked objects in real-time. Zhang et al. [27] put forward a multi-task convolutional neural network for automatic robotic grasping, focusing on object detection problems in the case of different object-stacking situations, which is suitable for grasping tasks in multi-object stacking scenarios. However, due to the small size of the electronic components, these multi-object detection methods are not available. In the case of stacked electronic components, accurately detecting their categories and locating their positions for the subsequent grasping operation has come to be an inevitable problem in need of urgent solutions. Instance segmentation can effectively detect all the objects from an input image, and at the same time, generate a high-quality segmentation mask for each instance to get a delicate position of the detected object [6]. Until now, the relative research on instance segmentation has been rarely published, especially in the assembly field of electronic components. The diversity of electronic components poses challenges in the investigation process. The key to the realization of autonomous detection of electronic components is to enhance the generalization ability of instance segmentation methods, such as the robustness of detection and recognition in complex scenarios, and the balance between samples. In this paper, an instance segmentation method based on an improved Mask R-CNN algorithm was proposed to detect stacked and occluded electronic components. The experiments were performed on a dataset of electronic components. The results were analyzed and discussed. 
Image Collection for the Dataset Three kinds of datasets, which were training set, validation set, and testing set, were included in this study. Three hundred images (744 × 992 pixels) were collected for building the dataset. Four types of plug-in electronic components were included: tantalum capacitor, resistor, electrolytic capacitor, and potentiometer. Their shape features were representative in the electronic assembly field, as shown in Figure 1. Data Augmentation The number of images in the dataset was augmented to prevent overfitting, enhance the generalization ability, and improve the robustness of the model. In this work, 1200 images were obtained by using four data-augmentation methods, which were flipping, rotating, random cropping, and color jittering (see Figure 2). The number of images in the training set, validation set and testing set was 600, 240, and 360, respectively. The composition of the training set, validation set, and testing set is shown in Table 1. Image Annotation Images were annotated with the VGG Image Annotator (VIA), which is an open-source annotation tool developed by the Visual Geometry Group. In the labelling process, a total of 2783 targets in 1200 images were labelled (examples of labelling can be seen in Figure 3). We used polygons from the VIA to label the region shapes. The region attribute was set to "electronic." The identities of the four types of electronic components were 1, 2, 3, and 4, respectively, and the corresponding descriptions were "Capa," "Resis," "Tcapa," and "Poten." We show the corresponding labels for the four types of electronic components in Table 2. Structure Mask R-CNN [6] is an extension of Faster R-CNN [1]. By adding a mask branch, masks can be generated to cover the objects based on the location and classes of the detected objects. As shown in Figure 4, Mask R-CNN is a two-stage architecture. In the first stage, the region proposal network (RPN) is used to generate object region proposals and determine the foreground and background of input images. In the second stage, convolutional neural network extracts features from candidate proposals, classifies the proposals, and generates bounding boxes and masks for possible objects. With the development of instance segmentation methods, the accuracy of detection is constantly improved. However, the improvement of accuracy does not necessarily make the network more advantageous in terms of speed and model size. As accuracy increases, the complexity and computational burden also increase. In many real-world applications such as automatic drive and robotics, detection tasks need to be performed in a timely manner on a computationally limited platform. Our goal was to optimize the network, reduce the calculation parameters, and speed up the detection while ensuring accuracy. Figure 4. The schematic architecture of Mask R-CNN. "Cls layer" denotes classification layer, "Reg layer" denotes regression layer, "Conv" denotes convolution operation, and "Deconv" denotes deconvolution operation, "NMS" denotes non-maximum suppression, "BBox regression" denotes bounding box regression, "RoIAlign" denotes region of interest align. RPN: region proposal network. Backbone Two backbones are proposed as feature extractors in Mask R-CNN: deep residual networks (ResNets) [28] and feature pyramid networks (FPNs) [29]; each backbone corresponds to a mask head architecture. To make the network lightweight, we optimized the feature extraction network of Mask R-CNN. 
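Returning briefly to the dataset preparation described earlier in this section: the four augmentation operations (flipping, rotating, random cropping, and color jittering) are straightforward to reproduce. The sketch below is a minimal NumPy illustration, not the authors' pipeline; the crop size, jitter strength, and rotation granularity are assumptions, and for instance segmentation the same geometric transforms must also be applied to the annotation masks or polygons.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip(img):
    """Horizontal flip of an (H, W, C) image."""
    return img[:, ::-1, :]

def rotate90(img, k=1):
    """Rotate by k * 90 degrees (arbitrary angles would need interpolation)."""
    return np.rot90(img, k=k, axes=(0, 1))

def random_crop(img, crop_h=600, crop_w=800):
    """Crop a random window; resizing back to the input size is omitted."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    return img[top:top + crop_h, left:left + crop_w, :]

def color_jitter(img, strength=0.2):
    """Randomly rescale brightness and contrast (assumed jitter model)."""
    img = img.astype(np.float32)
    brightness = 1.0 + rng.uniform(-strength, strength)
    contrast = 1.0 + rng.uniform(-strength, strength)
    mean = img.mean(axis=(0, 1), keepdims=True)
    out = (img * brightness - mean) * contrast + mean
    return np.clip(out, 0, 255).astype(np.uint8)

# Expand one 744 x 992 image into four augmented variants.
image = rng.integers(0, 256, size=(744, 992, 3), dtype=np.uint8)
variants = [flip(image), rotate90(image), random_crop(image), color_jitter(image)]
```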
MobileNets [30], as one of the representatives of the lightweight neural network, can narrow a model, decrease the number of parameters, and improve the detection speed of a model while ensuring accuracy. Mask R-CNN is known for its high segmentation accuracy, and MobileNets can simplify the model and enhance the speed of detection while ensuring detection accuracy. In order to achieve a balance between accuracy and speed, we used MobileNets as part of the feature extractor of Mask R-CNN for the instance segmentation of electronic components. The architecture of MobileNets (see Figure 5a) is based on depthwise separable convolution, which factorizes a standard convolution into a depthwise convolution and a pointwise convolution (see Figure 5b). The depthwise convolution uses a single convolution kernel to each input channel. The pointwise convolution uses a 1 × 1 convolution kernel to linearly combine the outputs of the depthwise convolution. Each depthwise convolution and pointwise convolution is followed by a batch normalization layer and the rectified linear unit (ReLU) [31] activation function. In addition, two hyperparameters are introduced by MobileNets: width multiplier and resolution multiplier. The width multiplier is used to control the number of channels for input and output, and the resolution multiplier is used to control the resolution of the input. The use of these two hyperparameters greatly reduces the computation load and expedites the speed of calculation. Multi-feature fusion aims to aggregate features of different resolutions. In FPNs, different levels of feature maps are efficiently fused through three ways of bottom-up, top-down, and lateral connection. It is worth noting that FPNs use not only deep but also shallow feature maps to extract features, which are very helpful for the detection of small objects like electronic components. Combining MobileNets and the FPN, we developed an improved Mask R-CNN, which consistently achieves better accuracy with much fewer parameters and faster speed than Mask R-CNN. Firstly, the last average pooling layer, fully connected layer, and softmax layer of MobileNets are deleted, then the structure of MobileNets is divided into five stages (see Table 3). Stage 1 contains a standard convolution and a depthwise separable convolution. Both stage 2 and stage 3 include two depthwise separable convolutions. Six depthwise separable convolutions are included in stage 4, and two depthwise separable convolutions are included in stage 5. S1 to S5 represent the output of each stage of MobileNets, respectively. The feature fusion of MobileNets and the FPN is shown in Figure 6. The bottom feature layer obtains the same number of channels as the previous feature layer through 1 × 1 convolution. The upper feature layer gets the same length and width as the next feature layer through upsampling. To obtain a new fusion layer, add the length, width, and the number of channels; this fusion operation is shown in Figure 7. As a concrete example, the S4 layer gets the same number of channels as the FPN-P5 layer, and after upsampling, the length and width of the FPN-P5 layer are the same as that of the S4 layer. Finally, the two are added to get the fusion layer FPN-P4. Note that the FPN-P2 to FPN-P5 layers are used to predict the bounding boxes, position regression, and masks of objects, while the FPN-P2 to FPN-P6 layers are used to train the RPN, that is, the FPN-P6 layer is only used in the RPN. 
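The two building blocks just described — the depthwise separable convolution of MobileNets and the add-style lateral fusion of the FPN — can be sketched in a few lines of tf.keras. This is an illustrative reconstruction rather than the authors' code; the channel widths and the placement of batch normalization and ReLU follow the standard MobileNets recipe and are assumptions here.

```python
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, out_channels, stride=1):
    """3x3 depthwise conv + 1x1 pointwise conv, each followed by BN and ReLU."""
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def fpn_fuse(top, lateral, fpn_channels=256):
    """FPN fusion: 1x1 conv on the lateral map to match channels, 2x upsampling
    of the coarser map, then element-wise addition (e.g. S4 + up(FPN-P5) -> FPN-P4)."""
    lateral = layers.Conv2D(fpn_channels, 1, padding="same")(lateral)
    top = layers.UpSampling2D(size=2)(top)
    return layers.Add()([top, lateral])

# Example: fuse a coarse 32x32x256 map with a 64x64x512 lateral map.
s4 = tf.keras.Input(shape=(64, 64, 512))
p5 = tf.keras.Input(shape=(32, 32, 256))
p4 = fpn_fuse(p5, s4)   # shape (64, 64, 256)
```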
RPN The RPN takes an image as input and outputs a set of rectangular object proposals, each with an objectness score [1]. It determines whether the anchor is the foreground or the background and performs the first coordinate correction for the anchors belonging to the foreground. The structure of the RPN is shown in Figure 8. The RPN uses sliding windows on shared convolutional feature maps to generate k object boxes (k = 15 in this paper) with a preset aspect ratio and a scale for each pixel, which are called anchor boxes. An anchor is centered at the sliding window in question and is associated with a scale and aspect ratio [1]. In Mask R-CNN, the number of region proposals fed to Region of Interest Align (RoIAlign) is very big, generally ranging from 100 to 300. In this case, the number of segmentation maps to be learned is large, which makes it difficult to extract features in the mask branch. To solve this problem, the threshold of non-maximum suppression (NMS) in the RPN is increased from 0.5 to 0.7, and the intersection over union (IoU) threshold for NMS is fixed at 0.7. The setting of anchors in Faster R-CNN contains three scales of anchor boxes, and each scale corresponds to three aspect ratios. In order to adapt to the size requirements of electronic components and obtain more precise region proposals, we used five scales with box areas of 32 2 , 64 2 , 128 2 , 256 2 , and 512 2 pixels, and three aspect ratios of 1:1, 1:2, and 2:1. RoIAlign Region of interest pooling (RoI-Pooling) is used to extract features from shared convolutional layers, and the features are input into fully connected layers for classification in Faster R-CNN [1]. Nearest-neighbor interpolation, which is a quantization operation, is used by RoI-Pooling when features are extracted from shared convolutional layers. Due to this quantization operation, the features corresponding to each RoI are converted into a fixed dimension, and the RoI of output feature maps after RoI-Pooling does not match the RoI of the input image. Different from RoI-Pooling, RoIAlign uses bilinear interpolation instead of nearest-neighbor interpolation to calculate the pixel value of each position and eliminate quantization operation. It firstly traverses region proposals and divides each region proposal into k × k units, leaving the boundaries of each unit unquantified. Then, the values of coordinates are calculated in each unit and the pixel values of positions are calculated by bilinear interpolation, and finally, the max-pooling operation is performed. The detection accuracy for small objects is more obvious owning to the elimination of the quantization operation. Head Architecture Two-head architectures are proposed in Mask R-CNN. We used one of them, as shown in Figure 9. In the mask branch, deconvolution operation [32] is used to increase the spatial dimension of the feature map. Finally, a mask of 28 × 28 × 80 is output. Figure 9. The head architecture we used in the improved Mask R-CNN [6]. RoI: region of interest. Loss Function Since a mask branch is added, the multi-task loss function of Mask R-CNN can be expressed as: L final = L RPN-cls + L RPN-bbox + L cls + L bbox + L mask where L RPN-cls is the classification loss function in the RPN, L RPN-bbox is the position regression loss function in the RPN, L cls represents the classification loss function, L bbox is the position regression loss function, and L mask is defined as the average binary cross-entropy. 
The new mask branch is k × k × m for each RoI output dimension, where m × m is the size of the mask, and k represents the number of classes, thus a total of k masks generated. After the predicted masks are obtained, a per-pixel sigmoid is used to classify the masks, and the obtained results are taken as one of the inputs of the L mask . Note that only pixels that are considered foreground are used to calculate L mask . The overall structure of the improved Mask R-CNN is shown in Figure 10. Implementation Details We used the open-source Mask R-CNN library to complete the experimental research. All experiments were performed on computers equipped with Intel Xeon(R) E5-1680 v4@3.40G Hz CPU and the Quadro M5000 graphics processing unit through Pycharm, CUDA 9.0 and CUDNN 9.0 realized. We trained a total of 44 epochs with 200 steps each. We used a mini-batch size of 1 image per GPU and trained the model for 11k iterations, starting from a learning rate of 0.001. We used a weight decay of 0.0001 and a momentum of 0.9. It took four hours of training on a single 1-GPU machine under this setting. The average precision (AP) is usually used to evaluate the performance of the object detector, and the precision/recall curve is summarized by calculating the area under the curve. For a given category, precision is used to account for the proportion of positive samples that are judged to be true, and recall is used to indicate the proportion of positive samples that are judged to be true in the classifier. The mAP is a performance metric for the algorithms that predict locations and categories of objects, and it refers to the average of multiple classes of APs. In this paper, we used the standard COCO [33] metrics including AP, AP 50 , and AP 75 . Training, Validation, and Test Results We trained the improved network by using the weight file of COCO [33] in Mask R-CNN and evaluated its accuracy by using the testing set. The training time in GPU mode was four hours. The time required to evaluate an image of 744 × 992 pixels in GPU mode was 1.8 s. We recorded the APs of four types of electronic components. It can be seen from Figure 11 that the AP of tantalum was the highest, at 97.32%, and the AP of electrolytic capacitor, resistor, and potentiometer were 86.55%, 92.23%, and 96.36%, respectively. According to the method of sample-by-sample as the threshold dividing point, it can be seen from Figure 11a that the precision value of the electrolytic capacitor appears to oscillate. This is because as the threshold points are shifted to the left, the number of positive samples that are determined to be positive increases, and the number of negative samples that are determined to be positive also increases. Testing New Images We used 14 new images for testing to understand the performance of the improved Mask R-CNN in the instance segmentation of electronic components. These images were taken in a different environment than the images of previous training and testing. The distance from the camera was 5 cm, and the lighting conditions were different. The input image and output image are shown in Figure 12. Figure 12a,b are the images of electronic components collected under intense light. Some electronic components have the phenomenon of light reflection due to the problem of the surface material of electronic components, which is a factor that affects the detection accuracy. From Figure 12a,b we can see that the detection had a high success rate under intense light. 
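The training setup reported above (1 image per GPU, 200 steps per epoch, learning rate 0.001, momentum 0.9, weight decay 0.0001, five anchor scales with three aspect ratios, NMS threshold 0.7) can be collected into a single configuration object. The sketch below assumes the widely used Matterport implementation of Mask R-CNN — the paper only says "the open-source Mask R-CNN library", so the class and attribute names are assumptions tied to that implementation.

```python
from mrcnn.config import Config  # Matterport Mask R-CNN (assumed implementation)

class ElectronicComponentsConfig(Config):
    """Hyperparameters reported in the paper, expressed as a training config."""
    NAME = "electronic_components"
    IMAGES_PER_GPU = 1                 # mini-batch of 1 image per GPU
    STEPS_PER_EPOCH = 200              # 44 epochs x 200 steps ~ 11k iterations
    NUM_CLASSES = 1 + 4                # background + Capa, Resis, Tcapa, Poten
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9
    WEIGHT_DECAY = 0.0001
    RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)
    RPN_ANCHOR_RATIOS = [0.5, 1, 2]    # 1:2, 1:1, 2:1 -> k = 15 anchors per position
    RPN_NMS_THRESHOLD = 0.7            # NMS threshold discussed in the RPN section
```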
Figure 12c,d shows the images of electronic components collected under weak-light or shadow conditions. The resistance in Figure 12d is not detected. Therefore, light intensity can affect the accuracy of detection, and the conditions for detection still require an appropriate light intensity. In addition, during the testing process, it appeared the conditions that the electronic components were not detected in a stacked scenario, and multiple similar electronic components were considered as one object. In Figure 13a, instance segmentation does not perform well on pins of electronic components. The thin characteristic of electronic components and cross placement are the reasons for this test result. Improving the performance of instance segmentation on the pins of electronic components is a meaningful direction for future research. In Figure 13b, the resistance in the middle of the image and the electrolytic capacitor are detected as one object. However, in other images, these electronic components can be successfully tested. The condition of false detection and missed detection is rare compared with the overall detection success rate. Despite some segmentation errors, the results demonstrate the superior performance of the improved Mask R-CNN on segmenting stacked objects. These errors may be caused by lighting problems, small amounts of training data, or overly complicated stacked scenes. Therefore, in future research, we will enrich our database, which has more types of electronic components, more lighting environments, and more images to improve the robustness and generalization of the model. Comparative Study In order to evaluate the performance of this method, it was compared with other methods under the same dataset. For the Mask R-CNN method, either the FPN or ResNet was used as the backbone network. In the feature extraction stage, the FPN and ResNet consumed a lot of time and slowed down the segmentation due to their deeper network layers and more calculation parameters. We reported our improved Mask R-CNN on the testing set for comparison. As shown in Table 4, our improved Mask R-CNN with MobileNets-FPN trained on the dataset of electronic components already outperformed Mask R-CNN. Trained and tested with images of 992 × 744 pixels, our method outperformed the single model of Mask R-CNN with nearly two points under the same initial models. At the same time, the improved Mask R-CNN method reduces the model size from 255.9 MB to 91.1 MB, and the detection speed is twice that of Mask R-CNN, as shown in Table 5. The detection accuracy of Cascade Mask R-CNN reaches 64.74%, which is about two points higher than our model. But the model size of Cascade Mask R-CNN is 615.7 MB, which is 6.7 times the size of our model. Moreover, in terms of detection speed, the time to test per image is about 1.5 times that of our model. We show the detection accuracy of the four types of electronic components in Table 6. The detection performance is better than Cascade Mask R-CNN and Mask R-CNN. The method proposed in this paper can be effectively used for the detection and segmentation of the four types of electronic components and can ensure the best detection accuracy and faster detection speed. Conclusions We proposed an improved Mask R-CNN model. We investigated some of the important factors leading to an efficient network. We then demonstrated how to optimize the feature extractor of Mask R-CNN to build a smaller and faster model. 
Finally, we compared the improved Mask R-CNN to popular models, demonstrating superior model size and speed. The instance segmentation accuracy surpassed that of Mask R-CNN by about two points. In this paper, the method was applied to the detection and segmentation of four types of electronic components. In future research, we will increase the number of component types and the number of images in the dataset to improve the robustness of the network. Conflicts of Interest: The authors declare no conflict of interest.
2020-06-04T09:05:45.565Z
2020-05-27T00:00:00.000
{ "year": 2020, "sha1": "5ed0f436537e86a5bf64fe96528e401fa00fe968", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/9/6/886/pdf?version=1591180124", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7d3a8b5c236090a5e31edfc2bd428cefa5a6bfb9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
248119748
pes2o/s2orc
v3-fos-license
The influence of oiled fiber, freeze-thawing cycle, and sulfate attack on strain hardening cement-based composites : The interfacial transition zone (ITZ) between the fiber and the matrix significantly influences the strengthening and toughening effect of the fiber for the matrix. The ITZ between the fiber and mortar is a weak link in strain hardening cementitious composites (SHCC), the loose structure is easily damaged, and it is the main channel of ion attack. The oil reduces the hydrophilicity of fiber and the bonding strength of fiber and mortar, which decreases flexural and compressive strengths, but increases the tensile ductility. The compressive strength increased with the increase in the microhardness of ITZ. Both freeze-thawing cycle and sulfate attack reduce the flexural and compressive strengths, and ductility of SHCC. The failure of ITZ is responsible for the performance decrease. The strength of the samples containing oiled fiber after the freeze-thawing cycle and sulfate attack decreases more significantly. Introduction At present, concrete is still a well-used construction material. However, cement hydration products' heterogeneous structure and characteristics result in making the concrete fragile and prone to crack [1], thereby making the reinforced concrete structure often work with cracks, jeopardizing the durability of concrete structures [2][3][4]. A water film is generated between the fibers and mortar during SHCC hardening, while Ca(OH) 2 crystals produced by cement hydration accumulate in the water film layer, creating the interfacial transition zone (ITZ) between the fiber and mortar [24]. The ITZ is often regarded as the "weak link" in a material. The relatively high porosity in the ITZ will provide a more accessible pathway for the aggressive ions to penetrate [25]. Many scholars have conducted theory and engineering application research on the ITZ. Feng et al. [26] investigated the effect of silica fume on the ITZ between steel fibers and matrix, and the microstructure of the ITZ was evaluated using scanning electron microscopy (SEM). The results showed that incorporating silica fume reduced the ITZ breadth and improved the micromechanical properties. Rocha Ferreira et al. [27] used SEM and X-ray diffraction to evaluate the effect of carboxyl styrene-butadiene rubber coating on the performance of the vegetable fiber. The results showed that the interaction between the polymer and natural fibers depends on the cellulose amount of fibers and their crystallinity. Hong et al. [28] studied the microstructure and bonding properties of the ITZ in fiber-reinforced concrete. The results showed that many micro-cracks presented in the fiber-mortar ITZ and the content of hydration products were much lower than that in the matrix. Actual engineering structures are often subjected to harsh environments, such as sulfate attack and freeze-thawing cycle, but few studies have reported the effects of freeze-thawing cycle and sulfate attack on the structure of ITZ and mechanism of SHCC. Therefore, this article focuses on observing the effects of service environment and oiled fiber on the SHCC's ITZ, combined with fiber-interface indicators to determine the connection between the microstructure and macroscopic properties. Table 1. (2) Fine aggregate: the fine aggregate is river sand produced in Pingdu, Qingdao with fineness modulus of 2.5. 
(3) PVA fiber: the REC15 type PVA fiber is from Japan Kuraray company, the performance index is shown in Table 2. (4) Fly ash and slag were used as admixtures, the chemical compositions are presented in Table 3. Mix proportion The mass proportion of cement, slag, fly ash, fine aggregate, water, and PVA fiber in SHCC is 2:1:1:2:1.61:0.098, the water-binder ratio is 0.4, the specific mixing ratio is shown in Table 4. The specific treatment and experiment schemes are shown in Table 5. The sample preparation and performance test are shown in Figure 1. Methods of testing 2.3.1 Oil treatment scheme The oil bath was prepared with the oil and deionized water by the liquor ratio of 1:15 in a beaker. The mixing was carried out with a magnetic stirrer for 5 min at 60°C. After that, PVA fibers were added to the beaker and soaked for 30 min. This allows the oiling agent to deposit on the fiber surface as thin films. Later, the samples were padded at a pressure of 0.2 MPa on a padding machine (one dip, one nip). Finally, the oiled fibers were dried in an oven at 65°C for 30 min. Mechanical test GB/T17671-1999 "method of testing cements-Determination of strength" was used to determine the flexural and compressive strengths of SHCC after 28 days curing. The sample size is 40 mm × 40 mm × 160 mm. Measurement of microhardness The specimens for the ITZ microhardness test were cut into slices of 40 mm × 40 mm × 10 mm. The slices containing the ITZ between fiber and the matrix were polished with 600# sandpaper and then with 1500# sandpaper to obtain an adequately smooth surface. The HX-1000T microhardness tester measured the microhardness of ITZ. Due to the low microhardness of fiber, the indentation area is too large. The test was performed at 20 μm from the fiber, and the average microhardness is calculated from 10 tests. The Microhardness test is shown in Figure 2. Tensile test An electronic universal mechanical testing machine is used to load the sample. The sample is in the shape of dog-bone as shown in Figure 3(a), the loading device is shown in Figure 3(b), and the loading rate is 0.1 mm·min −1 . Freeze-thawing cycle and aggressive ions attack experiments The number of freeze-thawing cycle was 150 times. The test blocks were immersed in 5% Na 2 SO 4 solution for 7 days or 60 days, and the solution was replaced every 15 days to ensure that the PH value of the solution remained constant. 3 Results and discussion 3.1 Effects of oiled fiber, freeze-thawing cycle, and sulfate attack on mechanical properties The change in fiber hydrophilicity and service environment will change the bonding properties between the fiber and mortar, which will affect the interfacial transition zone of SHCC and impact the mechanical properties. Therefore, before studying the ITZ, the effects of oiled fiber, freeze-thawing cycle, and sulfate attack on the mechanical properties of SHCC were analyzed, as shown in Table 6. Oiled fiber significantly reduced the flexural and compressive strengths of SHCC. The reason is that the oiled fiber reduces the hydrophilicity of the fiber, making the hydration products challenging to adhere to the fiber surface, reducing the friction and mechanical bite force between the fiber and the mortar, so, the strength is reduced [7]. The flexural and compressive strengths of SHCC first increase and then decrease with the increase in the sulfate attack age. This may be because of the formation of massive expansive products in the environment of sulfate attack. 
The products fill the porosity to make the ITZ denser, increasing the early strength [29]. However, when the stresses generated by the expansive products exceed the tensile strength of SHCC, the cracks expand, which destroys the internal structure of SHCC and leads to a decrease in strength [30]. The flexural and compressive strengths of O 4 specimens is lower than O 1 and B 4 . This indicates that SHCC with the oiled fiber is more susceptible to damage and has more significant strength loss under sulfate attack. After 150 freeze-thawing cycles, the flexural strength of the SHCC decreased, and the flexural strength of O 2 was lower than B 2 . The main reason is a strong affinity between the unoiled fiber and mortar, which reduces the surface peeling when damaged [31]. However, the bonding strength between the oiled fiber and mortar becomes smaller, so the breaks are more severe when subjected to freeze-thawing cycle damage. 3.2 The influence of oiled fiber, freezethawing cycle, and sulfate attack on tensile properties The uniaxial tensile stress-strain curve of SHCC is shown in Figure 4. Comparing the specimens with untreated fiber, finding that the SHCC under standard curing presents the strain hardening characteristics, the stress jitter is gentle and shows multiple cracking phenomena. After sulfate attack and freeze-thawing cycle, the tensile deformation capacity and jitter of the stress-strain curve decreases. However, the specimens after freeze-thawing cycle still exhibit strain hardening characteristics [32,33]. Different from the flexural and compressive strengths, oiled fiber enhances the tensile ductility of SHCC. The ultimate strain is up to 5.4% for original specimens which is significantly higher compared to that of sulfate attack and freeze-thawing cycling specimens, but the ultimate tensile strength decreases. Influence of oiled fiber and sulfate attack on microhardness The influence of oiled fiber and sulfate attack on the microhardness of ITZ is shown in Figure 5(a). Because the samples in the freeze-thawing cycle are severely damaged, the freeze-thawing specimens are not analyzed for microhardness. Under sulfate attack, the microhardness first increases then decreases with the attack age. The reason is that the sulfate first attacks SHCC from ITZ and diffuses into ITZ, making the structure denser, leading to an increase in microhardness. However, when the swelling stress exceeds the tensile strength, causing cracking, spalling, and strength loss in ITZ [34,35], the microhardness decreases. Subsequent Section 3.4 will further analyze the microstructure changes in ITZ and the attack channel of sulfate into SHCC. Oiled fiber reduces the microhardness of SHCC, indicating that the de-hydrophilic treatment for fiber reduces the bonding force between the fiber and the mortar, resulting in a decrease in ITZ hardness. The best-fitting line of compressive strength and average microhardness is shown in Figure 5(b). The microhardness and compressive strength of SHCC are linearly correlated. The compressive strength increases with the microhardness, indicating that it is necessary to study the interfacial properties of SHCC and ITZ microhardness which are the main factors affecting the mechanical properties of SHCC. Micro-morphology of ITZ The ITZ of SHCC is not only different from mortar in microstructure but also in chemical composition. 
To more accurately analyze the influence of oiled fiber, freeze-thawing cycle, and sulfate attack on ITZ, the SEM technology was used to observe the interface morphology. The energy dispersive spectroscopy (EDS) technology was used to analyze the element changes in SHCC to explore the difference between the hydration products. The effect of oiled fiber on the ITZ As exhibited in Figure 6, the mortar completely covered the fibers without exposing the fiber surface, indicating that the untreated fibers have a solid hydrophilic and considerable bonding strength with mortar. Figure 6(b) shows that the ITZ is broken when the fiber is pulled out, indicating that ITZ is weaker than mortar and has a loose structure, often as the cracks' origin. As shown in Figure 7, there is an oil film on the fiber surface, and the friction is slight, which is not enough to bring out the loose structure. The ITZ is relatively intact. There is no apparent rupture or collapse mark, consistent with its ductility. Using EDS technology to analyze the element composition of the ITZ and mortar of B 1 specimen and O 1 specimen, the location in the B 1 is shown in Figure 8, and the element content is displayed in Figure 9 and Table 7. Both B 1 and O 1 specimens in ITZ have a higher Ca element and O element content than the mortar, while the Si content is lower than the mortar, indicating that the ITZ is an enriched zone of Ca(OH) 2 , but it is a poor zone of C-H-S. Ca(OH) 2 has a layered structure and weak bonding strength. Once subjected to stress, it is often the origin of cracks, so the ITZ is more easily destroyed. The difference in element content between B 1 and O 1 specimens is slight, so it can be inferred that the oiled fiber only changes the hydrophilicity of the fiber and does not reduce the hydration products. The main reason for the decrease in the compressive and flexural strengths is that the oil decreases the adhesion between mortar and fiber. The influence of oiled fiber, freeze-thawing cycle, and sulfate attack on SHCC  213 3.4.2 The effect of freeze-thawing cycle on ITZ Figures 10 and 11 show the effects of freeze-thawing cycle on the ITZ. As shown in Figure 10, after the freezethawing cycle, the ITZ is looser, more pores, and cracks extend to the mortar. The main reason is that the freezethawing cycle makes the toughness of mortar worse, the volume of water film attached to the fiber surface and the free water in the mortar constantly change, which increase the pores between the fiber and the mortar, making the ITZ structure become loose [36]. As exhibited in Figure 11, comparing the B 2 and O 2 specimens, the fiber surface is smooth in O 2 specimen, and the ITZ structure is loose and porous, which reveals that the freeze-thawing cycle destroys the ITZ structure and reduces the bonding strength between the fiber and mortar. Oiled fiber has less bonding strength after freezethawing cycle. Using EDS technology to analyze the elemental composition of the ITZ and mortar in B 2 specimen and O 2 specimen, the element content is shown in Figure 12 and Table 8. The ITZ in B 2 and O 2 specimens with high Ca and low Si content further proves that ITZ is the weak link in SHCC. The element types for B 1 , B 2 , and O 2 specimens are similar, indicating that the source of freeze-thawing damage is the volume change when water turns into ice, and the hydration product types of ITZ do not change. 
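As a side note on the linear strength-microhardness relation reported earlier (Figure 5(b)): the best-fitting line is an ordinary least-squares fit, which can be reproduced in a couple of lines once the measured (microhardness, compressive strength) pairs are available. The data below are synthetic placeholders for illustration only, not the measured values from this study.

```python
import numpy as np

# Synthetic demonstration data only (the real pairs come from the microhardness
# tests at 20 um from the fiber and the 28-day compressive strength tests).
rng = np.random.default_rng(0)
microhardness = np.linspace(40.0, 80.0, 8)                      # HV, placeholder
strength = 0.6 * microhardness + 12.0 + rng.normal(0, 1.5, 8)   # MPa, placeholder

slope, intercept = np.polyfit(microhardness, strength, deg=1)
predicted = slope * microhardness + intercept
r = np.corrcoef(strength, predicted)[0, 1]

print(f"fc = {slope:.3f} * HV + {intercept:.3f},  R^2 = {r**2:.3f}")
```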
The influence of sulfate attack on the ITZ The microstructure of the ITZ by sulfate attack for 7 days and 60 days is shown in Figures 13 and 14. Figures 13 and 14 show that there are many hydration products on the surface of the fibers after sulfate attack for 7 days. Compared with the specimens without sulfate attack, the structure is denser, and products produced by sulfate attack fill the loose system and improve the denseness of ITZ. While the specimens with sulfate attack for 60 days had less adhesion on fiber surface, and the system is more flexible. This shows that the long-term sulfate attack reduces the drawing property of fiber, resulting in decreased flexural strength, compressive strength, and ductility for SHCC. Oiled fiber reduces the bonding strength between the fiber and mortar, making the fiber surface smoother. EDS technology was used to analyze the element composition of the ITZ and mortar in B 4 and O 4 samples. The element content is shown in Table 9. Energy spectrum analysis shows that the S content in ITZ is significantly higher than that in the mortar, indicating that the ITZ is the primary way for sulfate attack to invade SHCC, and the sulfate ions first invade the ITZ in SHCC. The influence of oiled fiber, freeze-thawing cycle, and sulfate attack on SHCC  215 Figure 12: Energy spectra from element analysis. To further prove the intrusion path of sulfate attack in SHCC, the ITZ in B 4 specimen was analyzed by EDS mapping-scanning, and the results are shown in Figure 15. The groove in SEM is the ITZ in SHCC. The content of the S element in ITZ is significantly higher than mortar, and the element distribution is denser. It shows that the sulfate attack is more likely to first invade the ITZ in SHCC. The ITZ is the primary channel for sulfate attack. In summary, ITZ between fiber and mortar is a weak link in SHCC, an enriched zone containing Ca(OH) 2 , which results in main channel of ion attack. The oil reduces the hydrophilicity of fiber, reduces the interaction of fiber-matrix, which decreases flexural and compressive strengths, but lowers the threshold for crack initiation and increases the tensile ductility. Freeze-thawing cycle and sulfate attack for 60 days destroy the ITZ structure, reduce the fiber-mortar bonding strength, which is the main reason for the decrease in the flexural and compressive strengths. Conclusion The influence of oiled fiber, freeze-thawing cycle, and sulfate attack on the flexural and compressive strengths and flexibility of ITZ between the fiber and mortar of SHCC were studied, and the main conclusions are below: (1) The flexural and compressive strengths of SHCC is related to the bonding strength between the fiber and mortar. The greater the friction bonding strength between the fiber and mortar, the higher the mechanical properties for SHCC. The oiled fiber reduces the fiber hydrophilicity, decreasing flexural and compressive strengths, but increases the tensile ductility. Oiled fiber mitigates the microhardness of ITZ. The microhardness increases first and then decreases with an increase in the sulfate attack age. (2) The ITZ between fiber and mortar is the weak link in SHCC, the structure is loose and prone to damage, usually the origin of cracks. The EDS results show that the Ca(OH) 2 content of ITZ is higher than that of mortar, but C-H-S content is lower than mortar. Moreover, ITZ is the primary channel for sulfate attacks, so ITZ is more likely to cause performance damage. 
(3) Sulfate attack and the freeze-thawing cycle reduce the flexural strength, compressive strength, and ductility of SHCC. Microscopic analysis shows that ITZ damage between the fiber and mortar is the main reason for the declining performance. Oiled fiber increases the ductility of samples subjected to the freeze-thawing cycle and sulfate attack, but at the cost of strength. The strength of samples containing oiled fiber decreases more markedly after the freeze-thawing cycle and sulfate attack.
2022-04-13T16:40:10.035Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "8b72439a1b12d53e50b8b6579434f29f48d5c11c", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/rams-2022-0023/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aba0bfd52fd3cfd35e9e11985947ba3d78439b4d", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
119169349
pes2o/s2orc
v3-fos-license
The cocenter of graded affine Hecke algebra and the density theorem We determine a basis of the (twisted) cocenter of graded affine Hecke algebras with arbitrary parameters. In this setting, we prove that the kernel of the (twisted) trace map is the commutator subspace (Density theorem) and that the image is the space of good forms (trace Paley-Wiener theorem). Introduction The affine Hecke algebras arise naturally in the theory of smooth representations of reductive p-adic groups. Motivated by the relation with abstract harmonic analysis for p-adic groups (such as the trace Paley-Wiener theorem and the Density theorem [BDK, Ka, Fl]), as well as the study of affine Deligne-Lusztig varieties (such as the "dimension=degree" theorem [He2,Theorem 6.1]), it is important to describe the cocenter of affine Hecke algebras, i.e., the quotient of the Hecke algebra by the vector subspace spanned by all commutators. In this paper, we solve the related problem for the graded affine Hecke algebras introduced by Lusztig [Lu1]. To describe the results, let H be the graded Hecke algebra attached to a simple root system Φ and complex parameter function k, Definition 2.2.1. As a C-vector space, H is isomorphic to C[W ] ⊗ S(V ), where W is the Weyl group of Φ, and S(V ) is the symmetric algebra of V , the underlying (complex) space of the root system. Let δ be an automorphism of order d of the Dynkin diagram of Φ which preserves the parameters k, and form the extended algebra H ′ = H ⋊ δ . (The automorphism δ could of course be trivial.) The cocenterH ′ = H ′ /[H ′ , H ′ ] of H ′ and the δ-twisted cocenterH δ = H/ [H, H] δ of H are related in section 3.1. In section 6.1, we construct a set of elements {w C f JC ,i } of H, where C runs over the δ-twisted conjugacy classes in W . To each class C, we attach a δ-stable subset J C of the Dynkin diagram, and pick w C ∈ C ∩ W JC , where W JC is the parabolic reflection subgroup of W defined by J C ; the elements f JC,i are chosen in S(V ), see 6.1 for the precise definitions. Our first result gives a basis forH δ (and hence a basis forH ′ ), which is independent of the parameter function. Theorem A. The set {w C f JC ,i } is a basis for the vector spaceH δ . The proof that the set {w C f JC ,i } spansH δ relies of certain results about δtwisted conjugacy classes in the Weyl group, section 5, as well as the use of a filtration in H and its associated graded object, which allows us to reduce the proof to the case when the parameter function is identically 0. The case k ≡ 0 is proved directly in Proposition 6.1.1. For the linear independence we use the representation theory of H to produce modules whose traces "separate" the elements w C f JC ,i . This is done in conjunction with a proof of the Density theorem and (twisted) trace Paley-Wiener theorem for graded Hecke algebras. More precisely, let R δ (H) be the Z-span of the δ-stable irreducible H-modules Irr δ H, and let R * δ (H) = Hom C (R δ (H), C) be the (complex) dual space. The twisted trace map is a linear map tr δ :H δ → R * δ (H), see section 4.1. If R * δ (H) good is the subspace of good forms (Definition 4.1.1), the image of the trace map is automatically in R * δ (H) good . Theorem B. The map tr δ :H δ → R * δ (H) good is a linear isomorphism. This is a graded affine Hecke algebra analogue of results from p-adic groups, [BDK], [Ka], and [Fl]. However, our proof of injectivity (which uses the explicit spanning set ofH δ ) is essentially different. 
Our approach also leads to the following result on the dimension of the space of δ-elliptic representations R δ 0 (H) (4.2.2). Theorem C. The dimension of the δ-twisted elliptic representation space R δ 0 (H) is equal to the number of δ-twisted elliptic conjugacy classes in W . When δ = 1 and the parameter function k is positive, this result was previously known from [OS], where it was obtained by different methods. Using the explicit description of the cocenterH δ , we can argue that the dimension is at most the number of δ-elliptic conjugacy classes. To show equality, we construct explicitly in section 8, via a case-by-case analysis, a set of linearly independent elements of R δ 0 (H) with the desired cardinality and other interesting properties, see Theorem 8.1.1. Finally, using Clifford theory for H ′ and the relation betweenH ′ , R(H ′ ) andH δ i , R δ i (H) (i = 1, d) respectively, we obtain: Corollary D. The trace map tr :H ′ → R(H ′ ) * good is a linear isomorphism. For Hecke algebras with real parameters, similar results were announced recently by Solleveld [So3], as part of his calculations of Hochschild homology. The proofs are based on deep results from [So2], where a version of the Aubert-Baum-Plymen conjecture, involving the extended quotient of the "first kind", is proved. In particular, the paper uses a Q-basis of R(H ′ ) Q which depends analytically on the real parameter function. Our method is different from loc. cit. and appears to be more elementary. For example, based on knowledge of minimal length elements, (twisted) elliptic elements of finite Weyl groups (section 5), and explicit basis of the elliptic space R δ 0 (H), constructed in section 8, we are able to handle arbitrary complex parameters. We also obtain explicit Q-basis of R(H ′ ) Q depending linearly on the complex parameter function. It is not clear to us if there is a connection between the basis in [So3] and the one constructed in the present paper. Finally, our approach seems to be related naturally to the Aubert-Baum-Plymen conjecture with the extended quotient of the "second kind" [ABP]. Definition 2.2.1 ( [Lu1]). The graded affine Hecke algebra H = H(Φ, k) attached to the root system Φ and parameter function k is the unique associative complex algebra with identity generated by w ∈ W and S(V ) such that: In the sequel, we write f for 1 ⊗ f , f ∈ S(V ), and w for w ⊗ 1, w ∈ W. From Definition 2.2.1(ii), it is easy to deduce that is the difference operator. Moreover, by induction on w ∈ W , one can then verify that for some f w ′ ∈ S(V ), where < denotes the Bruhat order in W . This relation will be used implicitly in the proofs below. Lu1,Proposition 4.5]). Since H is finite over Z(H), every simple H-module is finite dimensional, and the center Z(H) acts by scalars (central character) in every irreducible module. The central characters are thus parameterized by W -orbits in V ∨ . Denote Θ(H) = W \V ∨ and cc : IrrH → Θ(H), the central character map, a finite-to-one map. We say that an irreducible H-module π has real central character if cc(π) ∈ W \V ∨ 0 . Let δ be an automorphism of I as in section 2.1 and suppose the parameter function k satisfies k α = k δ(α) for all α ∈ R + . In this case, δ defines an automorphism of H, and we may define the extended graded affine Hecke algebra 2.3. The elements ω. The algebra H ′ has a natural conjugate-linear anti-involution * defined on generators ([BM1, section 5]) by where w 0 is the long Weyl group element. 
This definition is motivated by the relation with Iwahori-Hecke algebras and p-adic groups (see [BM1]). A direct computation shows that In particular, ω * = − ω. Notice also that δ( ω) = δ(ω). 2.4. A filtration of H ′ . Define a notion of degree in H ′ as follows. From Definition 2.2.1, one sees that every h ∈ H ′ can be uniquely written as h = w∈W ′ wa w , where a w ∈ S(V ). Define the degree of h to be the maximum of degrees in S(V ) of all a w . Set F j H ′ to be the set of elements of H ′ of degree less than or equal to j. This defines a filtration and letH ′ be the associated graded object. It is apparent from the commutation relation in Definition 2.2.1 thatH ′ may be naturally identified with the (extended) graded affine Hecke algebra H ′ 0 with parameter function k = 0. In particular, if σ is an H ss J -module, and χ ν : S(V WJ ) → C is a character parameterized by ν ∈ (V ∨ ) WJ , one can form the induced H-module 3. The cocenter and Clifford theory 3.1. δ-commutators. We retain the notation from the previous section. In particular, δ is an automorphism of the Dynkin diagram of order d and H ′ = H ⋊ δ is the extended graded affine Hecke algebra. We prove the following result. Notice that Moreover, as before, we have that Thus (a) is proved. Then It is easy to see that ( X, δ i ∈ Γ X (each one of these isomorphisms is unique up to scalar). In general, this defines factor set (2-cocycle) However, in our particular case, Γ X is a cyclic subgroup, generated by say δ iX and we can normalize the isomorphisms φ δ i such that φ δ ki X = φ k δ i X . This has the consequence that the factor set β can be chosen to be trivial. If U is an irreducible Γ X -module, there is an action of H ⋊ Γ X on X ⊗ U : (1) If X is an irreducible H-module and U an irreducible Γ X -module, the induced H ′ -module X ⋊ U is irreducible. (2) Every irreducible H ′ -module is isomorphic to an X ⋊ U. Lemma 3.2.1. Let X ⋊ U be an irreducible H ′ -module as in Theorem 3.2.1. For h ∈ H, δ ′ ∈ Γ, where δ ′ (U ) is the root of unity by which δ ′ acts in U . [i] from Proposition 3.1.1, in a sense to be made precise in the next section. Let O be a Γ-orbit on Irr(H). Set Γ O = Γ X for any X ∈ O. This is welldefined since Γ is cyclic. Then for any irreducible Γ O -module U and X ∈ O, (Twisted) Trace Paley-Wiener theorem In this section, we prove that trace Paley-Wiener theorem in the setting of graded affine Hecke algebra. The proof follows the general outline for the similar theorems for p-adic groups, [BDK] and [Fl], but for certain steps, e.g., Lemma 4.6.1, we give different arguments. Trace forms. Define the trace linear map (4.1.1) It clearly descends to a linear map This is compatible to the decompositions from Proposition 3.1.1 and (3.2.3) as follows. To simplify notation, we write the details in the case of δ, the same results hold for every δ i . Let R * δ (H) = Hom C (R δ (H) C , C) be the space of C-valued linear forms on the vector space spanned by Irr δ (H). The twisted trace map descends to a linear map The content of the trace Paley-Wiener theorem is that in fact the two spaces are equal: , the subgroup of twisted parabolically induced modules, and R δ |I| = 0. These subgroups form a decreasing filtration Let Θ δ (H) 0 denote the set of elliptic central characters, i.e., the subset of Θ(H) of all central characters of elliptic π ∈ Irr δ H. Langlands classification. The parabolic induction part of the Langlands classification for H is proved in [Ev], see also [KR,Theorem 2.4]. Proof. 
This is the Hecke algebra analogue of [Fl,Lemma 1.2]. By Theorem 4.3.1 and the remarks following it, Thus, by induction on the length of the ν-parameter, it follows that π is a linear Z-combination of standard modules i are all tempered δ-elliptic H-modules (and have the same central character as π). 4.4. Induction and restriction in R(H). If K ⊂ J(⊂ I) are given, denote by i J K : R(H K ) → R(H J ) the functor of induction, and by r J K : R(H J ) → R(H K ) the functor of restriction. We also have the corresponding functors, denoted again by i J K and r J K between R δ (H K ) and R δ (H J ). The following lemma is the analogue of [BDK,Lemma 5.4] and [Fl,Lemma 2.1]. elements. Then Proof. Claim (i) is obvious in our setting. We prove claim (ii). We need to prove that where τ ws is the isomorphism σ → w s • σ. We need to check that B s is well-defined. Since H Kw s is generated by W Kw s and S(V ), it is sufficient to check on these generators. Secondly, let a ∈ S(V ) be given. Then for some u ∈ W , a u ∈ S(V ). This means that aw s ≡ w s · w −1 s (a) in E s , i.e., modulo E s−1 . In the same way as for w ∈ W Kw s , it is then easy to see that Claim (iii) follows from (ii) and the parabolic induction part of Langlands classification (Theorem 4.3.1) identically with the proof of Lemma 5.4 of [BDK]. For (iv), one can adapt the proof of (ii) exactly as in [Fl,Lemma 2.1(iv)]. For every J = δ(J) ⊂ I, define the operator (4.4.7) Formal manipulations with the properties in Lemma 4.4.1 yield the following formulas (see [BDK,Corollary 5.4]). As a consequence, one sees that the operators T K respect the filtration {R δ ℓ (H) from section 4.2. Moreover, if ℓ = |K|, then T K acts on the quotient R δ ℓ (H) by: As in [BDK,section 5.5], define Since every A ℓ preserves the filtration and kills R On the other hand, from Lemma 4.4.2, it is apparent that A is of the form A = a + 4.5. Inclusion and restriction for H. For every J ⊂ I, let i J : H J → H denote the inclusion. Define r J : H → H J as follows. Given h ∈ H, let ψ h : H → H be the linear map given by left multiplication by h. This can be viewed as a right H J -module morphism. Since H is free of finite rank right H J -module, with basis W J , one can consider tr ψ h ∈ H J . Set r J (h) = tr ψ h . Set T J = i J • r J : H → H. As in section 4.3, for every K ⊂ J, we may also define i J K and r J K . Lemma 4.5.1. The maps i J and r J are Tr( , )-adjoint to r J and i J , respectively,i.e.: Thus T J is Tr( , )-adjoint to T J as well. Proof. Claim ( The analogous discussion with section 4.4 holds here and also the δ-twisted version. In particular, define the filtration of H: As before, A ℓ preserves the filtration and kills E ℓ+1 δ H. Thus Proof. Suppose {π 1 , π 2 , . . . , π k } is a set in ⊂ R δ (H) such that its image in R δ 0 (H) is linearly independent. Applying the operator A from section 4.3, one obtains a linearly independent set {A(π 1 ), A(π 2 ), . . . , A(π k )} in R δ (H). This is because A(π) ≡ aπ in R δ 0 (H), for a nonzero integer a. Since the characters of simple modules are linear independent, so are the characters of any linear independent set in R δ (H). Thus there exist elements h 1 , h 2 , . . . , h k of H, such that the matrix (Tr(h i δ, A(π j )) i,j is invertible. By Lemma 4.5.1, the matrix (Tr( A(h i )δ, π j ) i,j is invertible. Since A vanishes on J,δ(J)=J H J by (4.5.1), it follows that k ≤ dim H/ [H, H] δ + E 1 δ H . 
By Proposition 6.2.1 proved in section 6, the right hand side is bounded above by the number of δ-elliptic conjugacy classes in W . This proves the first claim. For the second claim, for every central character λ ∈ Θ δ (H) 0 , let R δ (H) λ be the span of cc −1 (λ) ⊂ Irr δ H, and let R This is because irreducible H-modules with different central characters are necessarily independent, and the central character is the same for all constituents of a parabolically induced from a module with central character. Since R δ 0 (H) is finite dimensional, then Θ δ (H) 0 must be finite. Since cc is a finite to one map, Irr δ (H) ell is also finite. Remark 4.6.1. The proof of Lemma 4.6.1 we presented is different than the argument from [BDK]. The classical proof (adapted to this setting under the assumption that k is real valued) shows that the set of δ-elliptic central characters Θ δ (H) 0 is finite, as follows. Firstly, the set Θ δ (H) 0 is a finite union of locally closed (in the Zarisky topology) subsets of Θ, see [Fl,Proposition 1.1]. Secondly, let * : Θ(H) → Θ(H) be the anti-algebraic involution given by the hermitian dual. More precisely, if ν ∈ Θ(H) is the central character of an irreducible module π, let ν * be the central character of the hermitian dual of π with respect to the operation * from section 2.3. Since k is real, it follows from [Op,Proposition 2.35] that every tempered H-module is * -unitary. In particular, using Lemma 4.3.1, ν = ν * for every ν ∈ Θ δ (H) 0 . It follows that Θ δ (H) 0 is finite. Let f ∈ R * δ (H) good be given. Since Irr δ (H) ell is a finite set and the (twisted) characters of irreducible H-modules are linearly independent, we can choose f 1 ∈ R * δ (H) tr such that f (π) = f 1 (π) for all π ∈ Irr δ (H) ell . By replacing f with f − f 1 , we may therefore assume, without loss of generality, that f (R δ (H) ell ) = 0. Apply to f the operator A defined in the previous section. Then It is also easy to see that (4.6.1) in the case of p-adic groups, the second inclusion requires an argument, see [BDK,section 5.3], but since for H, r J is just restriction, it is immediate. and so is f , concluding the proof. Now we define the good forms for H ′ = H ⋊ Γ. Definition 4.6.1. For any J ⊂ I and σ ∈ Irr(H ss J ), we set is called good if for every J ⊂ I, σ ∈ Irr(H ss J ) and irreducible representation U of Γ J,σ , the function ν → f (X(J, σ, ν) ⋊ U ) is a regular function on the variety (V ∨ ) WJ ⋊ΓJ,σ . Denote the subspace of good forms by R * (H ′ ) good . As a consequence of Theorem 4.1.1 and Clifford theory (section 3.2), we obtain the trace Paley-Wiener Theorem for H ′ . Proof. It is obvious that R * (H ′ ) tr ⊂ R * (H ′ ) good . By (3.2.3), By Theorem 4.1.1, the image of the map tr δ i : Twisted elliptic conjugacy classes in the finite Weyl group In this section, we discuss the (twisted) conjugacy classes of finite Coxeter groups. These results will be used in the rest of this paper. In this section, we fix a finite irreducible Coxeter group (W, I) and a group automorphism δ : W → W with δ(I) = I. Let d be the minimal positive integer such that δ d (i) = i for all i ∈ I. We define the δ-twisted conjugation action of W on itself by w· δ w ′ = ww ′ δ(w) −1 . Any orbit is called a δ-twisted conjugacy class of W . A δ-conjugacy class O of W is called elliptic if O ∩ W J = ∅ for all proper δ-stable subset J of I, i.e., supp δ (w) = I for all w ∈ O. An element w ∈ W is called δ-elliptic if it is contained in an elliptic δ-conjugacy class of W . 
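A minimal illustration of this definition (an example of ours, not one of the paper's displays, for the untwisted case δ = id, and anticipating the characteristic polynomial p_{w,δ}(q) = det(q·id_V − wδ) recalled just below): let W = S_3 be of type A_2, acting on the two-dimensional span V of the simple roots. Representatives of the three conjugacy classes are 1, a simple reflection s, and a Coxeter element c (a 3-cycle). Since 1 and s both lie in proper parabolic subgroups W_J, only the class of c can be elliptic, and indeed

\[
p_1(q) = (q-1)^2, \qquad p_s(q) = (q-1)(q+1), \qquad p_c(q) = q^2 + q + 1,
\]

so p_1(1) = p_s(1) = 0 while p_c(1) = 3 ≠ 0; equivalently, c is the only representative with no nonzero fixed vector in V. This is consistent with the fact, used later in the paper, that the n-cycles form the unique elliptic class of S_n.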
Recall that V is the vector space spanned by α i (for i ∈ I). As before, we regard W as a subgroup of GL(V ) and δ as an element in GL(V ) in the natural way. For w ∈ W , set p w,δ (q) = det(q · id V − wδ). Then it is easy to see that p w,δ (q) = p w ′ ,δ (q) if w is δ-conjugate to w ′ . We have the following well-known result for elliptic conjugacy classes. We include the proof here for completeness. Proposition 5.1.1. Let O be a δ-twisted conjugacy class of W . The following are equivalent: (1) O is elliptic; (2) p w,δ (1) = 0 for some (or equivalently, any) w ∈ O; (3) For some (or equivalently, any) w ∈ O, there is no nonzero point in V that is fixed by wδ. Minimal length elements. We follow the notation in [GP, section 3.2]. Given w, w ′ ∈ W and i ∈ I, we write w si − → δ w ′ if w ′ = s i wδ(s i ) and ℓ(w ′ ) ≤ ℓ(w). If w = w 0 , w 1 , · · · , w n = w ′ is a sequence of elements in W such that for all k, we have w k−1 sj − → δ w k for some j ∈ I, then we write w → δ w ′ . If w → δ w ′ and w ′ → δ w, then we say that w and w ′ are in the same δ-cyclic shift class and write w ≈ δ w ′ . For w ∈ W and i ∈ I, define the length function ℓ i (w) as the number of generators in I conjugate to s i occurring in a reduced expression of w. By [GP,Exercise 1.15], it is independent of the choice of reduced expression of w. Set Then it is easy to see that if w ≈ δ w ′ , then l i,δ (w) = l i,δ (w ′ ) for all i ∈ I. We have the following main result on elliptic conjugacy classes of W . Remark 5.2.1. It was first prove via a case-by-case analysis for untwisted case by Geck and Pfeiffer in [GP,Theorem 3.2.7] and for twisted case by the secondnamed author in [He1,Theorem 7.5]. A case-free proof for part (1) and (2) was found recently in [HN]. It would be interesting to find a case-free proof for part (3) and/or Theorem 5.2.2 below. The following result can be checked easily from the list of Dynkin diagrams. Lemma 5.2.1. Let J ⊂ I with δ(J) = J. Then we may write J as J = J 1 ⊔ J 2 with δ(J i ) = J i for i = 1, 2 and (1) J 1 is a union of connected components of type A; (2) For any connected component K of J 1 , either δ | K is identity or there exists another connected component (3) Either (i) J 2 = ∅ or (ii) J 2 is a connected component of J not of type A or (iii) J 2 is a connected component of J of type A and δ | J2 is nontrivial. We have the following consequence that elliptic classes never fuse. Theorem 5.2.2. Let J ⊂ I with δ(J) = J. Let C be a δ-twisted conjugacy class of W such that C ∩ W J contains a δ-elliptic element of W J . Then C ∩ W J is a single δ-twisted conjugacy class of W J . Remark 5.2.2. The untwisted case was due to Geck and Pfeiffer in [GP,Theorem 3.2.11]. The general case can be proved in a similar way by using Theorem 5.2.1 (3) and Lemma 5.2.1. We omit the details. In the rest of this section, we discuss some further properties on elliptic conjugacy classes of a parabolic subgroup of W . Remark 5.2.3. It is easy to see that J is a minimal δ-stable subset of I with Proof. Let C be the δ-twisted conjugacy class of W that contains w and C J be the δ-twisted conjugacy class of W J that contains w. Let x ∈ N W,δ (W J ). By Theorem 5.2.2, xwδ(x) −1 ∈ C ∩ W J = C J . Therefore xwδ(x) −1 = ywδ(y) −1 for some y ∈ W J . Hence y −1 x ∈ Z W,δ (w) and x ∈ W J Z W,δ (w). On the other hand, if x ∈ W J Z W,δ (w), then xwδ(x) −1 ∈ C J . By Proposition 5.2.1, x ∈ W J ZW J ⊂ N W,δ (W J ). 6. Spanning set of (twisted) cocenter For each ∼ δ -equivalence class in I δ , we choose a representative. 
Such representatives form a subset of I δ , which we denote by I δ 0 . Then there is a natural bijection between I δ 0 and I δ / ∼ δ . For any J ⊂ I δ 0 , we choose a basis {f J,i } of S(V WJ ⋊ δ ) N W,δ (WJ ) . For each δ-twisted conjugacy class C of W , we fix a minimal element J C ∈ I δ 0 such that C ∩ W JC = ∅. Such J C is uniquely determined by C. We fix an element w C in C ∩ W JC . By definition, w C is a δ-elliptic element in W JC . Proof. Notice that for any x, y ∈ W and f ∈ S(V ), ThusH 0 is spanned by w C S(V ), where C runs over δ-twisted conjugacy classes of W . Now we fix a δ-twisted conjugacy class C. Hence Here the last inclusion follows from the fact that δ(Z W,δ (w C )) = Z W,δ (δ(w C )). Since w C is δ-elliptic in W J , δ(w C ) is also δ-elliptic in W J . By Proposition 5.2.2, W J Z W,δ (δ(w C )) = N W,δ (W J ) and 6.2. Spanning set ofH δ . LetH be the graded algebra associated to the filtration of H given by the degree of S(V ) defined in section 2.4. Recall thatH ∼ = H 0 . By induction on degree one shows that if L is a spanning set ofH/[H,H] δ i , then L is also a spanning set of H/[H, H] δ i . To see this, first notice that (using the relations in H) the δ i -commutators preserve the filtration, i.e., More precisely, if a 1 and a 2 are elements of S(V ) homogeneous of degrees l and j, respectively, then [w 1 · a 1 , w 2 · a 2 ] δ i ∈ (w 1 w 2 · w −1 2 (a 1 )a 2 − w 2 δ i (w 1 ) · δ i (a 1 )δ i (w −1 1 )(a 2 )) + F l+j−1 H. (6.2.1) Leth ∈ H/[H, H] δ i be given and assume that h ∈ F l H. Since h can be written as h = w wa w for some a w ∈ S(V ), set h 0 = w wa w ∈ H 0 . Then h 0 − x∈L c x x 0 ∈ [H 0 , H 0 ] δ i , where only finitely many c x are nonzero, x 0 is a representative ofx in H 0 . Moreover, we may choose the x 0 's that contribute to the sum to have (maximal) degree less than or equal to i. Thus h 0 = x c x x 0 + j [y 0,j , y ′ 0,j ], for some y 0,j , y ′ 0,j in H 0 homogeneous and [y 0,j , y ′ 0,j ] ∈ F l H 0 . Let y j , y ′ j ∈ H be the corresponding elements for y 0,j , y ′ 0,j , respectively. By (6.2.1), [y j , y ′ j ] δ i differs from [y 0,j , y ′ 0,j ] δ i by an element in F l−1 H. Then h − x c x x − j [y j , y ′ j ] ∈ F l−1 H, and the claim follows by induction. (Here x denotes a representative ofx in H corresponding to x 0 .) Now we have the following result. Proposition 6.2.1. We keep the notations as in §6.1. Then {w C f JC,i } spansH δ as a vector space. Now we give the trace formula of the element from the spanning set on the induced representations. Proposition 6.2.2. Let J, J ′ be δ-stable subsets of I. Let w be an δ-elliptic element in W J and C be the δ-twisted conjugacy class of W that contains w. Let M be an wδ(x 1 )yf y m. It is easy to see that wδ( We have that x −1 wf δx = x −1 wxf δ. By Theorem 5.2.2, there exists x ′ ∈ W J such that If H ′ has parameter function of geometric type in the sense of [Lu2], then res W ′ T rcc is in fact a Z-basis of R(W ′ ) and the change of bases matrix between res W ′ T rcc and IrrW ′ is upper uni-triangular in an appropriate ordering. Part (a) of Theorem 7.1.1 is proved by homological algebra in [So1, Theorem 6.5]. Part (b) follows from [Lu2] together with Clifford theory, see [BC,]. In fact, (b) is now known to hold for all real parameters k and all root systems except some cases in type F 4 (where it is again expected to be true). 7.2. Basis Theorem. We retain the notation from section 6.1. 
In particular, let {w C f JC ,i } be the spanning set ofH δ from Proposition 6.2.1, indexed by conjugacy classes C in W and for every C, a basis {f JC,i } of S(V WJ C ⋊ δ ) N W,δ (WJ C ) . Theorem 7.2.1 (Basis Theorem). The set {w C f JC ,i } forms a basis ofH δ . Proof. In light of Proposition 6.2.1 we need to prove that the set is linearly independent. To see this, we proceed by induction and use the formula for the trace of induced modules in Proposition 6.2.2 to separate the subsets for various J ′ . Let C,i a C,i w C f JC ,i = 0 (7.2.1) be a linear combination. The base case is J ′ = ∅. For every character χ ν : H ∅ = S(V ) ⋊ δ → C, parameterized by ν ∈ (V ∨ ) δ , consider the induced module X(ν) := Ind H H ∅ (χ ν ). By Proposition 6.2.2, Tr(w C f C,i δ, X(ν)) = 0 for all C = {1} and all i. Thus applying Tr(−, X(ν)) to (7.2.1), we get |W | i a 1,i (f ∅,i , ν) = 0 for every ν ∈ W \(V ∨ ) δ . This means that the polynomial function i a 1,i f ∅,i vanishes on its natural domain, thus By induction, suppose we are left with a (smaller) linear combination as in (7.2.1). Let J ′ be a minimal element in I δ 0 that appears in this combination. Suppose C is a conjugacy class that occurs and C ∩ W J ′ = ∅. By the construction of the spanning set (see section 6.1) and the minimality of J ′ , we must have J ′ = J C . Specialize M to M = σ ⊗ χ ν , where σ is an irreducible tempered module with real central character of (H J ′ ) ss , and χ ν : S(V WJ ⋊ δ ) → C is a character indexed by ν ∈ N W,δ (W J )\(V ∨ ) WJ ⋊ δ . By Theorem 7.1.1, when the parameter function k takes real values, the set of such σ separates the w C 's. For arbitrary parameters k, we can specialize σ to the representations explicitly constructed in Theorem 8.1.1(1) below. By the same discussion as for J ′ = ∅, the unramified characters χ ν separate the f J ′ ,i 's. In conclusion, a C,i = 0 for all C such that J C = J ′ . Remark 7.2.1. Theorem 7.2.1 implies a description of the cocenter for the extended graded Hecke algebra H ′ = H ⋊ δ , δ d = 1. From Proposition 3.1.1, we haveH Now Theorem 7.2.1 gives an explicit basis for eachH δ i . The following corollary is stated here for convenience. It was already proved in Lemma 4.6.1 and it is a consequence of the explicit description of the basis of H δ . The case δ = 1 was proved in the setting of affine Hecke algebras in [OS,Proposition 3.9] via different methods. Proof. Suppose x = C,i a C,i w C f JC,i ∈H δ is in ker tr δ . This means that for every π ∈ R δ (H) Q , Tr(xδ, π) = 0. The inductive argument in the proof of Theorem 7.2.1 then implies that a C,i = 0 for all C, i, and so x = 0. In the setting of H ′ = H ⋊ δ , we defined the trace map tr :H ′ → R(H ′ ) * in section 4.1. As a consequence of Theorem 7.3.1 and Clifford theory (section 3.2), we obtain a density theorem for H ′ . By claim (a) in the proof of Proposition 3.1.1, this is the same as hδ ′ ∈ Hδ ′ ∩ [H ′ , H ′ ], which is what we wanted to prove. Bases of R(H) In this section, we exhibit linearly independent sets in R δ 0 (H) of cardinality equal to the number of δ-elliptic conjugacy classes in W . This implies in particular that equality holds in Corollary 7.2.1, and that our sets are bases. Moreover, our construction is such that the W -structure of these bases elements does not change for various values of the parameter function k of H, and in addition the action of the Hecke algebra elements ω (equivalently ω) depends linearly in k. The main result of the section follows. Theorem 8.1.1. 
Let H k = H(Ψ, k) be a graded Hecke algebra associated to a simple root system Ψ and parameter function k : R + → C as in Definition 2.2.1. Let δ be an automorphism of the Dynkin diagram of Ψ of order less than or equal to 3. (1) There exists a Z-basis of R δ 0 (H k ) represented by a set of genuine representations {π 1 , . . . , π m } ⊂ R δ (H k ), where m is the number of δ-elliptic conjugacy classes in W such that (P1) the restriction res W π j is independent of k for every j = 1, m; (P2) the actions of π j (ω) j = 1, m, ω ∈ V C , depend linearly in the parameter function k. The proof of (1) will occupy the rest of the section; we exhibit an explicit basis {π 1 , . . . , π m } for every pair (Ψ, δ). We remark that except for simply-laced systems and parameter k ≡ 0, or certain special values of k in type F 4 , the bases we give consist of elements of Irr δ (H) ell (for any parameter function). In type F 4 , for almost all values of the parameter k, the same is true; however for a few special values of k some of the π j may become reducible. Proof of (2). Let R δ ℓ (H) be as in (4.2.2) and R δ (H) = ⊕ ℓ R δ ℓ (H) be the associated graded object. As Q-vector spaces R δ (H) Q ∼ = R δ (H) Q , so it is sufficient to construct a basis of R δ (H) Q with the desired properties. Recall the set I δ 0 from section 6.1. For every J ∈ I δ 0 , the parabolic subalgebra H J decomposes as H J = H ss J ⊗ S(V WJ ). Then we have the following decomposition: where the subgroup of unramified characters (S(V WJ ⋊ δ ) N W,δ (WJ ) ) * can be canonically identified with N W,δ (W J )\(V ∨ ) WJ ⋊ δ . Let B J,0 := B(R δ 0 (H ss J )) be the basis of R δ 0 (H ss J ) given by (1). Then the desired basis for R δ (H) Q (and so of R δ (H) Q ) is Let H A n,k be the graded Hecke algebra of GL(n) with (constant) parameter k (for simplicity of formulas we consider GL(n) rather than SL(n)) generated by w ∈ S n and {ǫ 1 , . . . , ǫ n }. Let s i,j denote the reflection in the root ǫ i − ǫ j . As it is well-known, there is a surjective algebra morphism (8.2.1) Using the adjoint map φ * k : R(S n ) → R(H A n,k ), we can lift every irreducible S nrepresentation to an irreducible (in general non-hermitian) H A n,k -module. If σ is a partition of n parameterizing an irreducible S n -module, denote by π A n,k (σ) the resulting simple H A n,k -module. There is a single elliptic conjugacy class in S n , the class of n-cycles. The space R 0 (H A n,k ) is one dimensional spanned by the class of the trivial H A n,k -module π n,k ((n)). 8.3. 2 A n−1 . Let δ be the automorphism of order 2 of the Dynkin diagram of type A n−1 . The elliptic δ-twisted conjugacy classes in S n are in one-to-one correspondence with partitions of n where every part is odd, see [He1,§7.14]. Every irreducible S n -representation is δ-stable, i.e., Irr δ S n = IrrS n . The representations π n,k (σ) constructed in section 8.2 may seem therefore like good candidates for constructing a basis in R δ 0 (H A n,k ), but the problem is that they are not typically δ-invariant. This is because δ maps an irreducible H-module to its contragredient, and the modules π n,k (σ) are not self-contragredient in general. A basis of R where J(σ) is the subset of I corresponding to the partition σ; more precisely, if σ = (n 1 , n 2 , . . . , n ℓ ), then 8.4. B n /C n . The set of elliptic conjugacy classes in W (B n ) is in one-to-one correspondence with partitions of n, [Ca2]. For every partition λ = (λ 1 , . . . 
, λ_k) of n, let w_λ be a representative of the corresponding elliptic conjugacy class; explicitly, w_λ is a Coxeter element for W(B_{λ_1}) × · · · × W(B_{λ_k}). From Definition 2.2.1, one sees that there is an isomorphism between the graded Hecke algebra of type B_n with parameters k_1 on the long roots and k_2 on the short roots, and the graded Hecke algebra of type C_n with parameters k_1 on the short roots and 2k_2 on the long roots. Because of this isomorphism, we consider only the graded Hecke algebra of type B_n (with arbitrary parameters). In particular, R_0(H^B_{n,k,0}) ≅ R_0(H^D_{n,k}) ⊕ R^δ_0(H^D_{n,k}), and therefore this case follows from the general type B_n case using that the number of elliptic conjugacy classes in W(D_n) plus the number of δ-twisted elliptic conjugacy classes in W(D_n) equals the number of elliptic conjugacy classes in W(B_n) (see [He1, §7.20]).
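As a small consistency check of the parameterization of elliptic classes of W(B_n) by partitions of n recalled above (our own verification, not part of the text): for n = 2, the group W(B_2) is dihedral of order 8, acting on V = span(ε_1, ε_2), and has five conjugacy classes, represented by 1, the reflection s_{ε_1−ε_2}, the sign change s_{ε_1}, a Coxeter element c (rotation by π/2), and −id = s_{ε_1} s_{ε_2}. Only the last two act without nonzero fixed vectors:

\[
\det(q\,\mathrm{id}_V - c) = q^2 + 1, \qquad \det(q\,\mathrm{id}_V - (-\mathrm{id}_V)) = (q+1)^2,
\]

and neither vanishes at q = 1, while the remaining three classes fix a nonzero vector. Thus W(B_2) has exactly two elliptic classes, matching the two partitions (2) and (1,1) of n = 2, with representatives w_{(2)} = c (a Coxeter element of B_2) and w_{(1,1)} = −id (a Coxeter element of W(B_1) × W(B_1)).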
2012-08-13T01:28:12.000Z
2012-08-04T00:00:00.000
{ "year": 2012, "sha1": "1ee32dcc58875eb7fee7b600b3f6dc5a9ab5afd3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1208.0914", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1ee32dcc58875eb7fee7b600b3f6dc5a9ab5afd3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
19001679
pes2o/s2orc
v3-fos-license
Kinetic theory in curved space: a first quantised approach We study the real time formalism of non-equilibrium many-body theory, in a first quantised language. We argue that on quantising the relativistic scalar particle in spacetime with Minkowski signature, we should study both propagations $e^{i(p^2-m^2)\tilde \lambda}$ and $e^{-i(p^2-m^2)\tilde \lambda}$ on the particle world line. The path integral needs regulation at the mass shell $p^2=m^2$. If we regulate the two propagations independently we get the Feynman propagator in the vacuum, and its complex conjugate. But if the regulation mixes the two propagations then we get the matrix propagator appropriate to perturbation theory in a particle flux. This formalism unifies the special cases of thermal fluxes in flat space and the fluxes `created' by Cosmological expansion, and also gives covariance under change of particle definition in curved space. We comment briefly on the proposed application to closed strings, where we argue that coherent fields and `exponential of quadratic' particle fluxes must {\it both} be used to define the background for perturbation theory. Introduction. When we quantise a field in curved space, the notion of a particle becomes very curious. For example in an expanding Universe we have particle creation, which means that a spacetime which looks empty in terms of particle modes natural in the past, may look full of particles on using co-ordinates natural to the future. The Minkowski vacuum appears to have a particle flux for an accelerated observer. For black holes, imposing vacuum conditions at past null infinity gives Hawking radiation in the future, due to the time-dependent gravitational field of a collapsing object. ( [1] and references therein.) How 'real' are such particles, and more specifically, how do they affect the gravitational field? Beyond semi-classical approximations, such questions have traditionally been deferred to a time when a consistent theory is available with both matter and gravity quantised. Strings provide such a consistent theory, so we wonder if the above questions have been implicitly answered in computing string amplitudes. One deals with closed strings (which include the graviton in their spectrum) in a first quantised language. The particle analogue of this approach is summing over all particle trajectories between initial and final points (with branching trajectories giving possible interactions). If we just take a free scalar particle and compute the two-point function by such a first quantised path integral, which notion of particle are we using? The answer to this question is known [2]. If |0 > in is the vacuum based on positive frequency modes at t = −∞ and |0 > out for the modes based at t = ∞ then (The notion of 'in' and 'out' vacuua here does not necessarily require flat spacetime at t → ±∞.) But we may not want such a hybrid 'in-out' expectation value for our two-point function. We may wish to use the notion of 'in' particles, which requires computing Or we may have an ensemble of 'in' particles to start with, which implies a density matrix based on 'in' states (e.g. ρ ∼ e −βn |n > in in < n|) with propagator Tr{ρT [φ(x ′ )φ(x)]}. How do we modify the first quantised path integral to obtain the requisite propagators, and carry out perturbation theory? To handle non-equilibrium situations with general density matrices one uses the 'real time' formalism to develop a perturbation theory. 
If ρ is specified in terms of the states at t = −∞ then we evolve the fields from t = −∞ to t = ∞ and back to −∞ where we insert ρ and take a trace. The propagator becomes a 2x2 matrix propagator, with a = 1, 2 labelling operators on the first and second parts of the above time path. With correspondingly generalised interaction vertices, one computes Feynman diagrams in the usual way to obtain the correlators in the many-body situation ( [3] and references therein). The goal of this paper is to study this real-time formalism from a first quantised viewpoint, using the simple example of a scalar field. We require a covariant language for the density matrix (and time path) rather than a Hamiltonian language based on spacelike slices, because it is such a covariant language that we can extend to strings. In brief, our approach and results are as follows. For flat space, we know that a density matrix of the form 'exponential of linear in the field' gives a coherent state. Shifting the field by its classical value removes such a part of ρ. The class of ρ of the form 'exponential of quadratic in the field' is also special; for correlators with such ρ the Wick decomposition holds [4]. The matrix propagator of the real-time formalism encodes such ρ and the choice of time path for the perturbation theory. Departures from this 'exponential of quadratic' form of the density matrix gives 'correlation kernels', which are handled perturbatively as vertices analogous to the interaction vertices in the Lagrangian. For the first quantised language, we argue that a careful quantisation of the relativistic scalar particle requires considering both the propagation e i(p 2 −m 2 ) and the propagation e −i(p 2 −m 2 ) on the world line. We may collect these two possibilities into a 2x2 matrix form, getting a world line Hamiltonian diag[−(p 2 − m 2 ), (p 2 − m 2 )]. Near the mass shell p 2 − m 2 = 0 the path integral needs regulation. If we add −iǫI to the Hamiltonian (I is the 2x2 identity matrix) then we get a diagonal matrix propagator, with (1.1) and its complex conjugate as the first and second diagonal entries. But if we regulate instead by adding −iǫM , where M is a non-diagonal matrix, then we get the matrix propagator corresponding to an 'exponential of quadratic' particle flux. We compute M for a thermal distribution in Minkowski space, for the Niemi-Semenoff [5] choice of time path, and for the 'closed time path' developed for non-equilibrium theory by Keldysh [6], Schwinger [7], and others. These two M matrices are not the same, which reflects the fact that the matrix propagator depends on both ρ and the choice of time path. For a curved space example, we consider a 1 + 1 spacetime with 'sudden' expansion. We compute M to obtain the matrix propagator appropriate to perturbation theory with the 'in' vacuum; i.e. for ρ = in |0 >< 0| in and time path beginning and ending at past infinity. Thus the 'exponential of quadratic' form of the density matrix, basic to perturbation theory, is naturally obtained in the first quantised formalism. Thermal fluxes in flat space, and the flux given by the Bogoliubov transformation due to spacetime expansion, are special cases of this form. Further, this class of density matrices is closed under change of the basis of functions used to define particles, so our formalism is covariant under such transformations. We conclude with a discussion of the significance of our results for strings, which were the motivation for this study of the first quantised formalism. 
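The elementary worldline integrals behind the diagonal case just described are worth recording (a standard computation in our conventions, not one of the paper's own displays; the identification of the two diagonal entries with (1.1) and its conjugate is as stated in the text). With the regulator −iεI added to the worldline Hamiltonian diag[−(p^2−m^2), (p^2−m^2)], integrating each propagation over the worldline parameter gives

\[
\int_0^{\infty} d\tilde\lambda\; e^{\,i(p^2-m^2+i\epsilon)\tilde\lambda} = \frac{i}{p^2-m^2+i\epsilon},
\qquad
\int_0^{\infty} d\tilde\lambda\; e^{-i(p^2-m^2-i\epsilon)\tilde\lambda} = \frac{-i}{p^2-m^2-i\epsilon},
\]

i.e. the Feynman propagator and its complex conjugate on the diagonal; a non-diagonal regulator matrix M instead mixes the two propagations and, as discussed later, contributes only on the mass shell.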
The usual first quantised path integrals for strings would give the analogue of (1.1), which corresponds to specific boundary conditions at spacetime infinity. To handle phenomena involving particle fluxes, we propose extending the world sheet Hamiltonian to a 2x2 Hamiltonian as in the above particle case. Thus we would allow not only classical deformations of the background field (analogous to the 'exponential of linear' ρ in the particle case) but also particle flux backgrounds (corresponding to 'exponential of quadratic' density matrices). Studying βfunction equations [8] for this extended theory should give not equations between classical fields but equations relating fields and fluxes. The latter kind of equation, we believe, would be natural to a description of quantised matter plus gravity. fThe plan of this paper is as follows. Section 2 reviews finite temperature perturbation theory, and discusses the significance of 'exponential of quadratic' density matrices for the curved space theory. Section 3 translates the finite temperature results of flat space to first quantised language. Section 4 gives a curved space example. Section 5 is a summary and a discussion relating to strings. 2. Propagators in the presence of a particle flux. Review of the real time formalism. Consider a scalar field in Minkowski spacetime (metric signature+ − . . . −) Suppose this field is at a temperature T = (β) −1 . To take into account this temperature we evolve the field theory in imaginary time from t = 0 to t = −iβ, and identify these two time slices. In the 'real time' approach of Niemi and Semenoff [5] we use instead the following path C 1 in complex t space to connect these two slices The concept of time ordered correlation functions is now replaced by 'path ordered' correlation functions, with t running along the above path C 1 . One argues that the parts II, is the number density of particles for the free scalar field at temperature (β) −1 . We can write Feynman diagrams with these rules gives the correlation function at temperature β −1 . The power of the real time approach is that the density matrix need not be thermal, and the system need not be in equilibrium. Suppose the density matrix is specified at t = −∞ in terms of the free field operators, so that the perturbation Hamiltonian will modify the distribution as time progresses. One uses the closed time path C 2 [6], [7]: for time evolution, and takes a trace after inserting ρ. Perturbation theory may be developed with the same rules as above, but using the matrix propagator appropriate to the contour C 2 . In particular, if ρ at t = −∞ is of thermal form with temperature (β) −1 , then the matrix propagator is Perturbation theory based on the contours C 1 and C 2 are not equivalent, even for zero temperature [9] . Contour C 2 corresponds to computing correlators with ρ = |0 >< 0| inserted at t = −∞. Consider the scalar field in 0 + 1 spacetime dimensions, for simplicity. Let the only perturbation be the time dependent term µ dtδ(t−t 0 ) : φ 2 (t) :, and compute to first order the two point correlator for field insertions at t 1 < t 0 < t 2 : With the contour C 1 , for β → ∞, we just get the first term on the RHS. The second term on the RHS comes from correcting the state at t = ∞ away from the vacuum |0 >, to be such that it evolves from |0 > after suffering the perturbation at t 0 . 
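For the reader's convenience we record the standard textbook forms of the quantities just invoked (our conventions; they may differ from the paper's (2.3) by overall factors of i and normalisation). The occupancy of a mode of energy ω_p = (p^2 + m^2)^{1/2} in the thermal bath is

\[
n(\omega_p) = \frac{1}{e^{\beta\omega_p} - 1},
\]

and the 11-component of the real-time matrix propagator, i.e. the Fourier transform of the free-field thermal expectation value ⟨T[φ(x_2)φ(x_1)]⟩_β, is

\[
D_{11}(p) = \frac{i}{p^2 - m^2 + i\epsilon} + 2\pi\, n(|p^0|)\,\delta(p^2 - m^2).
\]

Equivalently, for a single oscillator of frequency ω with occupancy n and t_2 > t_1,

\[
\langle T[\phi(t_2)\phi(t_1)]\rangle_\beta = \frac{1}{2\omega}\Big[(1+n)\,e^{-i\omega(t_2 - t_1)} + n\,e^{\,i\omega(t_2 - t_1)}\Big],
\]

whose two terms are the ones interpreted in the next paragraph.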
In short, the time path is significant because perturbation theory works in the interaction picture, while the physical situation is described in the Heisenberg picture. The second leg of the time path makes the state at t = ∞ the same as the state at t = −∞ in the Heisenberg picture, which implies that perturbation corrections must be made along this path segment in the interaction picture. This is a manifestation of the fact that these corrections arise from a flux of real particles. To understand the origin of the on shell terms, let us consider the element D 11 in (2.3). We can decompose the free scalar field in Minkowski space into Fourier modes. Each quantised mode is a harmonic oscillator with frequency ω(p) = (p 2 + m 2 ) 1/2 . The two each harmonic oscillator. Thus we focus on a single oscillator (we suppress its momentum label). Let t 2 > t 1 . Then The first term on the RHS corresponds to the stimulated emission of a quantum at t 1 with absorption at t 2 . The second term corresponds to the annihilation at time t 1 of one of the existing quanta in the thermal bath, and the subsequent transport of a hole from t 1 to t 2 , where another particle is emitted to replace the one absorbed from the bath. The Fourier which gives the matrix element D 11 in (2.3). Thus the correction to < T [φ(x 2 )φ(x 1 )] >= D 11 (x 2 , x 1 ) (and other elements of D) due to the particle flux is not an effect of interactions with the particles of the bath. This correction arises from the possible exchange of the propagating particle with identical real particles in the ambient flux. Using such a corrected propagator with the interaction vertices gives the interaction of the bath particles with the propagating particle. 'Exponential of quadratic' density matrices. Perturbation theory in the vacuum involves separating a free part which is described by propagators, and an interaction part which is described by vertices. In studying kinetic theory in Minkowski space, we also need to identify a free part and an interaction term. The free part involves specifying a density matrix ρ of a special form, and a choice of time path [4]. These special ρ are of the form 'exponential of an expression with quadratic and linear terms in the field' [4]. The linear term deccribes a coherent state, and gives a change in the classical value of the field. Shifting the field by this classical value gets rid of this linear term, and we will always assume that this has been done. The quadratic part of the exponential implies a density matrix of the form where the product is over different frequency modes, and a i , a † i are the annihilation and creation operators for these modes. We will call the form (2.13) an 'exponential of quadratic' density matrix. The special role of density matrices (2.13) is due to the fact that Wick's theorem extends to correlators computed with such ρ [4]. Thus for operators A i linear in the field, We sketch a proof of (2.14) in the appendix. The time path specifies where ρ directly gives the particle flux. We take the point of view that it is incorrect to construct a theory of quantised matter and gravity without including 'exponential of quadratic' density matrices in the possible backgrounds about which the perturbation will be developed. Suppose we choose a time co-ordinate t and start with a distribution e −βH , thermal for the Hamiltonian giving evolution in t. As the Universe expands, the distribution will not remain thermal, in general. 
Redshifting of wavelengths gives an obvious departure from thermal form if the field has a mass or is not conformally coupled. But even a massless conformally coupled field departs from thermal form if the time co-ordinate t is not appropriately chosen [10]. However, the density matrix remains within the class (2.13), if it starts in this class, even for a massive field. Indeed, we can describe such ρ in a covariant fashion by choosing a set of global solutions to the wave-equation, and attaching operators a i , a † i to pairs of functions f i , f * i . ρ gives the linear map (on this space of solutions) that must be made before identifying the two ends of the time path of the perturbation theory. The class (2.13) of ρ is closed under change of the basis of functions used to define creation and annihilation operators. For example an expanding Universe suffers particle creation, so that an initial vacuum state would be seen as filled with particles by an observer using 'out' frequency modes [1]. The density matrix in terms of 'in' modes is The fact that in each case we are describing the state at the past time boundary is encoded in the time path, which starts at this boundary, and returns to it. To summarise, the class of 'exponential of quadratic' ρ is natural for defining the propagator. Considering this class unifies the special cases of thermal fluxes in flat space and the fluxes 'created' in spacetime expansion, and also gives covariance under change of the basis functions in spacetime used to define particles. The regulator matrix. The Feynman propagator for a scalar field in Minkowski space can be written as can be used to express G F (p) in a first quantised language, with p 2 = −⊔. (See for example [11], [12]. Quantising the relativistic particle. What is the origin of the two components a = 1, 2 of the state on the world line? We would like to offer the following heuristic 'derivation' as a more physical description of the matrix structure in (3.2). The geometric action for a scalar particle is where τ is an arbitrary parametrisation of the world line. The canonical momenta satisfy the constraints We choose the range of the parameter τ as [0, 1]. Following the approach in [12], we impose the constraint at each τ through a δ-function: The path integral amplitude to propagate from X i to X f becomes where N is a normalisation constant, P µ X µ , τ = m(X µ , τ X µ , τ ) 1/2 is the original action There are two ways to consider the symmetry of the action (3.9). The action is invariant under and The difference between S 1 and S 2 is best seen by considering the finite transformations on λ: Using S 1 we can gauge fix any function λ(τ ) to any other function λ 1 (τ ), provided λ, λ 1 have the same value of 1 0 λ(τ )dτ ≡ Λ (3.14) With S 2 , λ transforms as an einbein under the diffeomorphism τ → τ ′ (τ ). Note that for regular ǫ(τ ), λ either changes sign for no τ or for all τ . We take Diff as the group of regular diffeomorphisms connected to the identity; then we have only the former case. These diffeomorphisms cannot gauge-fix λ(τ ) to any preassigned function λ 1 (τ ). We again have the restriction (3.14),where Λ may now be interpreted as the length of the world line. This restriction is usually assumed to mean that the length of the world line is the only remaining parameter after gauge-fixing. What we find instead is that there is a discrete infinity of classes, each with one or more continuous parameters. 
One member of this class comes from configurations λ(τ ) which are everywhere positive; this class can be gauge-fixed to haveλ (τ ) = 0, with 0 < Λ 1 < ∞, −∞ < Λ 2 < 0. We would like to identify this sector as the contribution to the amplitude to start with a state of type 1 and end with a state of type 2 (the off-diagonal element D 12 of the matrix propagator). Similarily, all sectors beginning and ending with Λ > 0 (thus having an even number of changes of the sign of Λ) contribute to D 11 . We can add together these sectors for D 11 once we choose the factor to be attached to each change in the sign of Λ. Choosing this factor is equivalent to choosing the regulator matrix M , and an explicit summation of sectors reproduces the matrix propagator. 1 A restriction to the range (0, ∞) for Λ can be naturally obtained using a Newton-Wigner formalism [13]. Here the particle travels only forwards in the time co-ordinate X 0 , thus it is not a co-variant approach. But we can define another nilpotent BRST charge which generates the symmetry The equation of motion givesη 2′ = η 1 ′ λ ′ , which suggests that we should identify the above symmetry with S 2 . The symmetries Q and Q ′ are related through the identifications all other primed variables equalling the unprimed ones. From (3.21) we find that −∞ < λ < ∞ corresponds to 0 < λ ′ < ∞. If we perform a path integral with the primed variables and sum over both positive and negative λ ′ then we are summing over more than is being summed in the unprimed variable path integral. We thus see sources of ambiguity on the quantisation of the relativistic particle working with a Fadeev-Popov approach in sec 3.2 and a BRST approach in sec. 3.3. In fact the action we start with, (3.5), is itself ambiguous because of the two possible signs of the square root. The particle trajectory would keep switching in general between timelike and spacelike, and at each switch we have to choose afresh the sign of the real or imaginary quantity obtained in these two cases respectively. This suggests that the world line configuration should be described by the pair {X µ (τ ), σ(τ )} with σ = ±1 giving the choice of root. Evaluating the quadratic form of the action (given in (3.9)) classically we find the sign of λ to be related to the sign of the square root chosen for (3.5). The above discussion suggests a close connection between the ambiguities found in three different approaches to the quantum relativistic particle, and it would be good to determine if they indeed are the same. For the rest of this paper we simply adopt as basic the picture of two complex conjugate propagations on the world line, with switching between them possible through the regulator matrix. A curved space example: spacetime with expansion. Consider the free scalar field ((2.1) with λ = 0) propagating in 1 + 1 spacetime with metric The conformal factor C(η) tends to A ± B at η → ±∞. The limit κ → ∞ gives a step function for C(η); the Universe jumps from scale factor A − B to A + B at η = 0. We will work in this limit to ensure simpler expressions. with We define the 'in' vacuum by a n |0 > in = 0, for all n (4.5) Similarily, for η → ∞ we write with The 'out' vacuum is defined through a n |0 > out = 0, for all n (4.8) The 'out' vacuum does not equal the 'in' vacuum, even in the free theory: The Bogoliubov transformation (4.9) says that an 'out' observer will find particles in his frame as η → ∞, if the 'in' observer sees a vacuum. 
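The Bogoliubov coefficients for such a sudden change can be obtained by a standard matching computation, sketched here in our conventions (the precise dependence of the frequencies on n, m, A and B is fixed by the mode equation and is not rederived here). Write ω_− and ω_+ for the frequency of a fixed Fourier mode as η → −∞ and η → +∞ respectively. Matching the positive-frequency 'in' mode function and its η-derivative across η = 0 in the κ → ∞ limit gives

\[
\alpha_n = \frac{\omega_+ + \omega_-}{2\sqrt{\omega_+\omega_-}}, \qquad
\beta_n = \frac{\omega_+ - \omega_-}{2\sqrt{\omega_+\omega_-}}, \qquad
|\alpha_n|^2 - |\beta_n|^2 = 1,
\]

so that the 'in' vacuum contains, per mode, an average number

\[
|\beta_n|^2 = \frac{(\omega_+ - \omega_-)^2}{4\,\omega_+\omega_-}
\]

of 'out' quanta; this is the particle content seen by the 'out' observer referred to above.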
Since the Bogoliubov transformation is given by the exponential of a quadratic in the field, we guess that our formalism developed in the preceeding section should apply. In other words, the effect of the flux created by spacetime expansion can be incorporated by a change in the regulator matrix of the first quantised path integral. We demonstrate this explicitly in our example. The first quantised path integral gives [2] (we denote the pair (η, x) by z) In studying kinetic theory we typically wish to specify the density matrix in terms of the 'in' states, e.g. ρ = e −βn |n > in in < n|/( e −βn ) where |n > in gives the occupation number n state for some positive frequency mode at past infinity. For β → ∞, ρ ≡ ρ 0 = |0 > in in < 0|. We use ρ 0 for our illustration; it should be straightforward to consider both expansion of spacetime and an initial exponential distribution of particles, by putting together the results of this section and the last section. To develop a perturbation theory using ρ 0 we need a real time contour running from η = −∞ to η = ∞, and then back to η = −∞ where we insert ρ 0 and take a trace to close the path. This perturbation theory requires a matrix propagator D: Using the 'matrix action' i(−⊔ − m 2 )σ 3 − ǫI in the path integral (4.11) gives the matrix to change only the operator multiplying ǫ, and we wish to get (4.12). Let us set up the calculation of Green's functions in the first quantised formalism. We need eigenfunctions of the world line Hamiltonian: The following is a complete set: (−∞ < n < ∞) −∞ < s < m 2 + n 2 A + B : These functions are normalised as Let us first recover the propagator (4.11) in this formalism. Let η ′ , η > 0. The range Similarily the range −∞ < s < m 2 + n 2 A+B provides the contribution There is a branch cut in the complex ν + plane joining ν + = ±i This results in a contour passing over the cut. Evaluating the resulting contour integrals one obtains the result which may be readily verified in the operator language using (4.6) and (4.9). In propagates as e is . Within each type (+ or −) we have two linearly independent functions for any given s near the mass shell s = 0. 2 Since we wish to compute the propagator with the 'in' vacuum density matrix, we choose in this space the basis which at the mass shell becomes (ω − n > 0) Discussion. We have taken the view that in a theory of quantised matter and gravity the background for perturbation theory should not only be a specification of classical values of fields, but also a specification of 'exponential of quadratic' particle fluxes. We know that these two different aspects of the background arise naturally in flat space kinetic theory. With curved space, it becomes natural to consider the kinetic theory and to not construct a 'vacuum' theory at all. The reason is that starting with physically acceptable conditions in the past, say, particle fluxes can be created in the co-ordinates natural in the future. The usual first quantised approach of string theory, applied to the scalar particle, gives a 'vacuum' theory, where certain 'in-out' vacuum boundary conditions are chosen at temporal infinity. These boundary conditions appear unnatural for physical purposes, so we would like to be able to move to a more general class of states at the boundary. In particular we would like to be allowed a radiation flux at the past time boundary, as in the radiation dominated Cosmologies. We find that in the first quantised language there is a natural way to obtain this more general theory. 
Quantisation of the relativistic particle indicates that we should consider both propagations e −iHλ and e iHλ on the world line. The regulator matrix needed to regulate this path integral need not be diagonal in these two modes of propagation. Off-diagonal terms encode an 'exponential of quadratic' density matrix and a choice of time path for perturbation theory. One special case of particle flux, the flux for constant temperature β −1 , may be studied without the real time formalism. One studies the theory on spacetime with time rotated to Euclidean signature and identifies t with t−iβ [15]. But constant temperature is unnatural in a theory with gravity, as the particle density gives a gravitational field, which gives an We should distinguish two different limits in which the physics of fluxes may be studied. One limit is where the collisions are so rapid that approximate thermal equilibrium is maintained at all times, and we need only let β be a function of time. The other limit is that of kinetic theory, where we assume that collisions are rare; particle wavefunctions evolve on the time-dependent background, and collisions between these particles are taken into account by perturbation theory. Our approach assumes the latter limit. The limit ǫ → 0 implies that the effect of the regulator matrix M is felt only on-shell (i.e. for p 2 − m 2 = 0). Equivalently, we may say that only world lines of infinite length (λ = ∞) see the regulator matrix. To see this, let M and M ′ be two different regulator matrices. We write The first square bracket on the RHS is D M , the last vanishes with the indicated limits, while the second has support only on world lines of infinite length. The analogue of the above statement for strings is that the effect of ρ is to give a contribution to the boundary of the moduli space of Riemann surfaces, where a homologically trivial or non-trivial cycle is pinched. (It is important to have Minkowski signature target space, and correspondingly a Minkowski signature world sheet, to allow the on-shell condition for the particle flux.) A β-function calculation for the string world sheet theory would have to take into account such pinches while considering the small handle contribution studied by Fishler and Susskind [18]. This calculation should yield a relation between the classical fields and the particle fluxes, rather than just among the classical fields giving the background. Appendix A. Wick theorem for 'exponential of quadratic' density matrices. We wish to establish Wick's theorem for density matrices of the form ρ = e αa † a † e −βa † a e γaa (A.1) A string of creation and annihilation operators can be brought to normal ordered form in the same way as for the usual Wick theorem in the vacuum. What we need to show in addition is that where the RHS has a summation over all possible pairings of the a † , a operators on the LHS. We sketch below some of the steps involved in the derivation. where in computing C, ∂ β is a partial derivative with α, γ held fixed. Here We find Trρ = e β [C 2 − AB] 1/2 = K −1/2 (A.9) (A.10) Using the above formulae, we can establish (A.2) by induction. Suppose (A.2) holds with 2p operators 'a' and 2q operators 'a † '. A typical term on the RHS would have the form F A n 1 B n 2 C n 3 , where F is a constant and n 1 , n 2 , n 3 ≥ 0. To establish the result for 2p operators 'a' and 2q + 2 operators 'a † ' we get for the LHS of (A.2): (A.11) The first term on the RHS of (A.11) gives the pairing of the two new operators a † with each other. 
The second term gives the n 1 ways to choose an existing pair (a † a † ) and to contract the new a † operators with members of this pair instead. The third term corresponds to choosing an (aa) pair in the original expression and contracting the 'a' operators with the new 'a † ' operators instead. The last term corresponds to exchanging the a † in an existing a † a pair with one of the new a † operators. It is easily seen that this generates all the new terms required on the RHS of (A.2) for the induction to hold. To work with the case of an odd number of a and a † operators we start with the expression CTrρ + 2γ∂ γ Trρ = Tr{e −βa † a e α ′ a † a † a † ae γaa } in place of Trρ, and proceed as above to introduce extra a † a † and aa pairs in the induction.
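As a minimal illustration of (A.2) (our own example, not part of the appendix): for four operators A_1, A_2, A_3, A_4 linear in a and a†, and writing ⟨X⟩ = Tr(ρX)/Trρ with ρ of the form (A.1),

\[
\langle A_1 A_2 A_3 A_4\rangle = \langle A_1 A_2\rangle\langle A_3 A_4\rangle + \langle A_1 A_3\rangle\langle A_2 A_4\rangle + \langle A_1 A_4\rangle\langle A_2 A_3\rangle,
\]

where each pairwise expectation value keeps the two operators in their original order. For instance, in the purely thermal case α = γ = 0 of (A.1) one has ⟨a†a⟩ = (e^β − 1)^{-1} ≡ n and ⟨aa⟩ = ⟨a†a†⟩ = 0, and the formula gives ⟨a a a†a†⟩ = 2⟨a a†⟩^2 = 2(1+n)^2, in agreement with the direct computation using a a a†a† = (N+1)(N+2) and the geometric-distribution identity ⟨N^2⟩ = 2n^2 + n.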
2014-10-01T00:00:00.000Z
1993-01-12T00:00:00.000
{ "year": 1993, "sha1": "4df5c643e852b868e4c62195712fde92fdb9e7c4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4df5c643e852b868e4c62195712fde92fdb9e7c4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244710334
pes2o/s2orc
v3-fos-license
From Guild Artisans to Entrepreneurs: The Long Path of Italian Marble Mosaic and Terrazzo Craftsmen (16th c. Venice – 20th c. New York City)

Abstract

Marble mosaic and terrazzo were a very common type of stone paving in Venice, Italy, especially between the sixteenth and eighteenth centuries. Throughout the period, migrant craftsmen from the nearby Alpine foothills area of Friuli (in northeastern Italy) virtually monopolized the Venetian marble mosaic and terrazzo trade. Thus, on February 9, 1583, the Venetian Council of Ten granted maestro (master) Sgualdo Sabadin from Friuli and his fellow Friulian workers of the arte dei terazzeri (art of terrazzo) the capacity to establish a school guild dedicated to St. Florian. The first chapters of the Mariegola de' Terazzeri (Statutes of the Terrazzo Workers Guild), which set the rules for the guild of terrazzo workers, were completed three years later, in September 1586. From the 1830s onward, Friulian craftsmen began to export their skills and trade from Venice across Europe and later, at the turn of the twentieth century, overseas to several American cities. Prior to reaching America, mosaic and terrazzo workers left from their workplaces outside Italy, initially from Paris. Friulian mosaic and terrazzo workers were regarded as the "aristocracy" of the Italian American building workforce due to their highly specialized jobs: this contrasted with the bulk of Italians in the United States, who were largely employed as unskilled. The New York marble mosaic- and terrazzo-paving trade was completely in the hands of the Italian craftsmen, who demonstrated a strong tendency to become entrepreneurs. They made use of their comparative advantages in craftsmanship to build a successful network of firms that dominated the domestic market, in a similar fashion to what had already been occurring in France, Germany, the United Kingdom, and other European countries. This paper argues that immigrants can be powerful conduits for the transfer of skills and knowledge, and emphasizes the importance of studying skilled migrant artisan experiences. A closer look at ethnic migration flows reveals a variety of entrepreneurial experiences, even in groups largely considered unskilled. The Italian marble mosaic and terrazzo workers' experience sheds new light on ethnic entrepreneurship catering for the community as a whole; it reveals a remarkable, long-lasting craftsmanship experience, thus demonstrating the successful continuity in business ownership and the passing down of craftsmanship knowledge across family generations. Creative skills and innovative production methods adopted by firms appear to be a key factor that allowed these artisans to control the trade for such a long time.

Introduction

This article refers to marble mosaic and terrazzo workers who came from a handful of villages in the Alpine foothill area of Friuli in northeastern Italy. Mosaics have traditionally been formed by hand-setting small pieces (known as tesserae) of stone, marble, ceramic, or glass in a decorative pattern applied to a surface that has been prepared with an adhesive. The word "terrazzo" is of Venetian origin, and the name is used broadly to designate almost any kind of interior flooring surface made from bits of marble or stone.
Also known as pavimento alla veneziana (Venetian pavement) and seminato, terrazzo is a flooring in which chips of marble, stone or glass are scattered at random (literally, seminato is Italian for "sown") or arranged to form simple linear patterns or more elaborated figures on a lime and (later) cement matrix. On setting, the surface is ground smooth to show a cross section of the chips through the mixture. At first, the marble mosaic and terrazzo artisans fulfilled the demands of the job market in Venice and the rest of northern Italy, then in France and the rest of Europe, and finally venturing overseas to countries like the United States, Canada, and Australia. The villages of origin of these craftsmen (initially Sequals and Solimbergo, later on, e.g., Fanna, Cavasso Nuovo, Arba, Spilimbergo), have never been the places where they carried on their trade. Prior to reaching the United States at the turn of the twentieth century, the majority of these artisans had experienced a period of work in mosaic and terrazzo trade in other European countries. In that respect, the marble mosaic and terrazzo workers from Friuli might be considered a group of artisans trained for labor markets situated outside of their places of origin. For many families this professional path, accomplished outside their villages of origin, lasted for centuries. Before a specialized school institutionalizing the local mosaic and terrazzo techniques was created in the early 1920s, it is worth noting that marble mosaic and terrazzo training was passed on from one generation to the next by means of family members and trusted apprentices in closely knit family-run businesses. 1 This paper outlines the five-century-long migration trajectory and accumulated experience of a creative, skilled workforce as a continuum, regardless of the national (mainly Venice) or foreign destinations of these emigrant craftsmen. It focuses on a business network set up by a highly specialized group of emigrant artisans. The experience of marble mosaic and terrazzo workers from Friuli represents a striking experience in the history of Italian immigration to America: it is a paradigmatic example of the strong relationship between migration and trades, between particular districts and townships and particular occupations and migrant destinations. This article is divided into six sections. The first introduces and discusses the early Venetian experience of mosaic and terrazzo workers from Friuli from the sixteenth century onward. It describes the height and the breaking up of the terrazzo guild system, which prompted a rapid de facto entrepreneurial process, materialized initially in Venice's mainland dominions, in northern Italy, From Guild Artisans to Entrepreneurs and later in southern France, where many "Venetian" craftsmen settled in the early nineteenth century. The second section explores the arrival of mosaic and terrazzo workers from Friuli in the United States in the last decades of the nineteenth century: the presence of highly-skilled Italian mosaic and terrazzo workers (who were regarded as the aristocracy of the work force), contrasted with the more than two-thirds of the Italians arriving at the port of New York who were registered by the American authorities as either farm laborers or laborers. The third section focuses on the role of the renowned New York-based Herter Brothers decoration firm in the diffusion of the marble mosaic flooring on a large scale: All the mosaicists that were employed by the company were Italians. 
However, as the trade expanded in the last two decades of the nineteenth century, so did the network of the marble mosaic firms, the majority of which were owned by Italian craftsmen who had worked in the Herter Brothers' mosaic department. The fourth section traces the pathway that drove the New York-Italian marble mosaic masters to unionize successfully in 1888: They were the first Italian building-trade workers to establish a trade union. Despite the initial socialist orientation of these skilled artisans, their success as business owners eclipsed their enthusiasm for unionism. The widespread popularity and acclaim for terrazzo from early on in the 1920s, which soon overtook marble mosaic in popularity and came close to becoming a ubiquitous flooring material for public buildings and apartment buildings, is discussed in section five. The following section delves into the creation of the mosaic and terrazzo contractor association in 1924. By then, Italian and Italian American mosaic and terrazzo entrepreneurs had built a powerful network of firms that dominated the market across North America. The concluding section stresses the marble mosaic and terrazzo migration as a longlasting craftsmanship experience, buttressed by self-employment, organizational, and creative attitudes of the artisans who came from a handful of villages in northeastern Italy. Whereas attention has been given to the more numerous, less-skilled emigrants, this article purposely focuses its attention on the importance of studying skilled migrant artisan experiences, which are often overlooked by scholars. Yet recent research on craftworkers, such as Welsh tinplate workers, Belgian glass workers, and British shipbuilders, 2 offers good examples of the considerable role played by skilled immigrants in the development of American industry. Mosaic and terrazzo workers: from late sixteenth century Venice to early nineteenth century France In the first few decades of the sixteenth century, Rome and Venice both gave a new lease on life to their local mosaic schools. 3 In the Lagoon city, skillful master mosaic workers (for example, the Zuccato and Bianchini families, Bartolomeo Bozza, Lorenzo Ceccato) developed unique techniques of mosaic production, which allowed them to translate into mosaic the fine details of paintings made 62 ILWCH, 100, Fall 2021 by masters such as Titian, Raphael, Salviati, Tintoretto, and Sansovino. 4 At the time, some mosaic workers from Friuli, such as the Bianchini family, were chiefly engaged in the restoration and execution of the mosaics in St. Mark's Cathedral. Between 1517 and the early 1580s, the brothers Domenico and Vincenzo Bianchini and his son, Gian Antonio (presumably from the small village of Solimbergo), executed many of the mosaics in St. Mark's Cathedral, namely the Judgment of Solomon and the marvelous Tree of Jesse. 5 The first Venetian terrazzo floorings also dated back to the sixteenth century: they consisted of very simple archetypes and were executed by workers from Friuli. 6 On February 9, 1583, the Venetian Council of Ten granted maestro (master) Sgualdo Sabadin (or Sabadini) from Provesano (Friuli) and his fellow Friulian workers of the arte dei terazzeri (art of terrazzo) the capacity to establish a school dedicated to St. Florian. 
Other master members of the terrazzo guild were Zuanne Roiter from Barbeano, Piero Pangon and Battista Crovat [Crovatto] from Sequals, Nicolò Sabadin from Provesano, Bortolomio de Mazzuoli [Mazziol/i] from Solimbergo, and Bernardo de Ceser from Fanna. 8 It is not clear if the school was simply a devotional confraternity or whether it was conceived as an institution created for an autonomous art. Regardless, the first chapters of the Mariegola de' Terazzeri (Statutes of the Terrazzo Workers Guild), which set the rules for the guild of terrazzo workers, were completed only three years later, in September 1586. 9 The creation of a guild entailed the replacement of isolated artisans with teams of organized workers that had to conform strictly to societal and professional rules. The art and professional guild statutes went to great lengths to preserve an equilibrium within the individual branches of the labor force. 10 The terazzeri, as terrazzo workers were then called, were regarded as true artists who had prudently handed the secrets of their craft down from father to son. 11 For outsiders, the training path to become a maestro terazzer (terrazzo master) was firmly fixed and demanding: candidates were allowed to register for the master exam only after a seven-year training period as an apprentice (garzone/apprendista) and a three-year period as an assistant (lavorante).

Beginning between 1669 and 1688, Venice experienced a building boom that, although sluggish at times, continued until the 1760s. Over this period an estimated forty buildings and churches, a dozen theatres, at least a half dozen hospitals, and a dozen schools were built in Venice. 12 In the first few decades of the eighteenth century, the number of terrazzo masters and apprentices rose significantly (also due to the seasonal influx of workers into the assistant ranks from the mainland), despite the fluctuating Venetian building market, which could not guarantee job opportunities to all terrazzo artisans. Many of these workers were immigrants. In fact, the art of terrazzo was dominated by craftsmen from the Alpine foothill areas of Friuli. Pellarin, Crovatto, Carnera, Mazzioli, Cristofoli, Odorico, Del Turco, Foscato, Mora, Mander, Patrizio, and Pasquali from Sequals and Solimbergo were the most widespread surnames among the terrazzo (and marble mosaic flooring) masters in late seventeenth and early eighteenth century Venice. 13

Many of these "Venetian" terrazzo masters (and apprentices), after having executed terrazzo in cities of the Venetian mainland, such as Padua, Vicenza, Treviso, Este, and Bassano, and in other northern Italian cities like Genoa, decided to establish their businesses in these centers. 14 The years that preceded and followed the downfall of the Venetian Republic (1797) were marked by political instability, social crisis, and economic recession, which caused widespread unemployment and led to the overwhelming impoverishment of the inhabitants, a sharp drop in the number of city dwellers in the old town center (from 160,000 to 100,000 in just a few years), and a significant fall in the value of real estate. The building market came to a near standstill, and even the renovation of homes slowed down significantly. The city of Venice offered insufficient opportunities to satisfy the demand for work from assistants, apprentices, and terrazzo masters.
The breaking up of the guild system, set forth officially in 1807, and the relaxation of the terrazzo workers' group solidarity prompted a rapid de facto entrepreneurial process. While terrazzo masters, who possessed the talent, the skills, the tools, and the necessary assets, transformed themselves into entrepreneurs, apprentices and assistants were only too willing to offer their labor to the former. 15

In the early nineteenth century, the discovery of antique mosaics in southern France drew some Friulian terrazzo masters, who moved from Venice to the other side of the Alps to restore the Roman and medieval masterpieces that had been discovered. 16 The terrazzo and mosaic artisan Angelo Giovanni Battista Mora, from Sequals, settled in Lyon as early as 1829/1830 and established a marble mosaic company, initially devoted to the restoration of the ancient mosaics in the city. The Entreprise Mora, later run by Angelo's sons Edoardo and Pietro, is considered one of the first firms founded by an Italian mosaic craftsman in France. 17 Angelo Giovanni Battista Mora had been living in Venice before moving to France. In the years that followed Mora's move, many of his fellow countrymen living in Venice relocated to other French cities, such as Nîmes, Montpellier, Narbonne, Béziers, Orange, and Avignon, to bring to light and restore marble mosaic floors.

Gian Domenico Facchina, who was born in Sequals in 1826, is the master craftsman and entrepreneur who deserves credit for relaunching marble mosaic in France and then in the rest of Europe and overseas. At an early age Facchina joined a relative in Trieste, where he attended a school of design. In this period he worked as an assistant in restoring the mosaics at the local cathedral of Saint Giusto. Captivated by mosaics, Facchina decided to move to Venice, where his uncle Giuseppe, a cleric at St. Mark's Cathedral, introduced him to some local mosaic masters for whom Facchina worked as an apprentice. Around 1847, he moved to southwestern France, where he and many of his fellow countrymen were involved in the restoration of the recently discovered antique mosaics. Three years later, in 1850, Facchina established his own mosaic company, and in 1858 he patented a "method of detaching ancient mosaics and relaying them without altering their design" (système d'extraction et pose sans alteration des mosaïques antiques). This technique led to the so-called "indirect method," which made it possible to produce mosaics on what was virtually an industrial scale. 18 The work, which was executed in the studio, entailed setting the tesserae upside down on a temporary paper base. The mosaic was then shipped to its destination and installed in situ (on both walls and floors). This technique of prefabrication, probably used as early as the Greco-Roman age, was a cheaper method of producing mosaic and was devised in order to meet the demands of the new scale of production. In addition, the "indirect method" does not require the designer, the maker, the layer, and the polisher of the mosaic to be the same person, increasing the division of labor within the trade: Italian craftsmen were responsible only for the making and, above all, the laying and the polishing of the mosaics. By using this technique, Facchina, together with the firms of his fellow countrymen Cristofoli, Mazzioli, and Del Turco (all from Sequals), successfully installed mosaics in Charles Garnier's Paris Opera House in 1866.
For the first time in France, this Friulian craftsman introduced decorative mosaic into a public building, as architect Garnier himself declared and as an inscription in Greek characters inside the Opera stated. 19

They "have brought a special training, a traditional aptitude": Italian mosaic and terrazzo workers in America at the turn of the century

From the end of the nineteenth century, Italian emigration exhibited marked regional characteristics. Relatively small numbers departed from the central regions of Italy, while from the north, particularly from Veneto, Friuli, Lombardy, Liguria, and Piedmont, emigration took on vast proportions, continuing in part along its seasonal trans-Alpine emigrant paths, but also, significantly, propelling hundreds of thousands of emigrants to Argentina and Brazil, where most of them permanently settled. At the turn of the twentieth century, the southern regions, foremost Campania, Calabria, Abruzzi and Molise, Basilicata, and Sicily, began to register high rates of departure, primarily to the United States. 20

From the 1880s to the outbreak of World War I, Friuli represented "the greatest single source of Italian emigrants, where the élite of the population came increasingly to think in terms of temporary migration," commented Frank Thistlethwaite. 21 "The skilled workmen of Udine," observed Robert F. Foerster in his classic and, to some extent, unexcelled early work on Italian emigration, "have been able to labor [temporary] abroad upon terms acceptable to their employers and themselves and have found it cheapest to spend the winter in their natal country; they have done better than their agricultural brethren. That is why, counting upon future employment in neighboring countries, the children of Friuli have been trained to the more skilled non-agricultural occupations." 22 The mystifying fact, added Foerster, is that "in the temporary emigration from [the] Friuli [between the last decades of the nineteenth century and the outbreak of World War I] these more or less skilled workmen, far from being occasional, are a very great part of the emigrants and often indeed the élite of the population." 23 According to Foerster, "What is remarkable in this story is hardly that boys are trained with reference to work in other lands, but rather that as a result of such training a population should grow up which absolutely depended for its livelihood upon employment abroad." 24 Thus, the study of a group of emigrants who traditionally and continuously earned a living by drawing on work in a foreign labor market (as in the case of workers from Friuli and some other Italian Alpine villages) needs to be approached differently vis-à-vis other emigrants whose departure was a response to the call of a foreign country at a certain point in time. It is likely that, over time, the first group developed an array of trades (which became their stable occupations) required by the foreign labor market, while the second group tried to adjust rapidly to the labor needs of their host nation.

The massive arrival of Italians in the United States in the last decades of the nineteenth century coincided with a major transformation in the traditional American sources of immigration. Prior to the 1890s, the great majority of Europeans reaching the United States were British, Irish, German, and Scandinavian. From the 1890s on, the sources of these flows moved eastward and southward.
This "new immigration" was composed predominantly of Slavs, Jews, and Italians. 25 Growing steadily from the 1880s onward (in the latter part of the nineteenth century, the United states welcomed about eight hundred thousand Italians), the influx of immigrants from Italy reached mass proportions after 1900. In the first fifteen years of the century, over three million Italians entered the United States. They constituted the largest nationality of the "new immigration" and over 20 percent of the total immigration of this period. The main characteristics of Italian emigration to America were apparent. These migrants were largely from southern Italy, were increasingly inclined to return to Europe after working as laborers on railway construction, in mines, and in construction sites, had taken on a variety of mostly unskilled occupations in the United States, and had developed the institution of the padrone system. 26 However, some villages in northern Italy also distinguished themselves in particular as areas of departure for America. In 1902, the US Industrial Commission included Udine (the capital of Friuli) and the nearby area as "collecting points" and "contributing districts" of European emigration to the United States. 27 The map prepared by the US Industrial Commission depicted Italy as a country where much of the emigrants to America came from the Southern regions (Puglia and some areas in Sicily excluded). In the northern part of the country, a high number of migrants left from Friuli as well. The route followed by emigrants took them from Udine to New York City via the railroad routes of Milan, Turin, Modane, Basel, and Paris, and the French ports of Havre and Cherbourg. At the turn of the twentieth century, over two-thirds of the Italians arriving at the port of New York were registered by the American authorities as either farm laborers or laborers. A minority of the immigrants, less than 15 to 20 percent, were artisans. 28 By 1893, the Italians constituted three quarters of the building laborers in the city of New York: "Thousands more entered the 66 ILWCH, 100, Fall 2021 country in the next twenty years." 29 According to data from the Immigration Commission, between 1899 and 1910, about three quarters of the Italian immigrants reporting occupations (296,662 Northern Italians, 1,471,659 Southern Italians) were farmers; in the same period skilled immigrants represented 20.4 percent (around 60,500 workers) of those from Northern Italy reporting occupations, and 14.6 percent (214,800) of those from Southern Italy. 30 In 1900, about half of the Italian men in the United States were employed as unskilled laborers, and this percentage did not change much before World War I. Italians were excluded from higher-paying and better jobs not only because of the language barrier and lack of skills but because of the racial prejudice against them. In fact, even educated and skilled immigrants were often compelled to take up the pick and shovel, further reinforcing the stereotype of the Italians as nothing but unskilled workmen. 31 That was not the case with Italian mosaic and terrazzo workers from Friuli. Because their work was so highly specialized and well paid, they were regarded as the aristocracy of the Italian work force. In 1893, Frederick L. 
In 1893, Frederick L. Matthes, a construction engineer interviewed by the Real Estate Record and Builders Guide magazine, declared that "as a rule the Italian laborer does not advance in building; some indeed, learn the various trades, but are at best only rough workmen, not at all fitted for fine work. There are exceptions though in some branches. Take mosaic work: where can you find any class who will surpass the Italian? They monopolize the business too." 32 The presence of this group within the huge building sector came to the attention of both American authorities and attentive scholars of Italian emigration in the United States. In 1902, the Industrial Commission stated that, among the Italian population, whereas "skilled workmen from the north of Italy in large numbers go directly to the interior [of the state of New York] as marble-cutters, miners, mill hands, etc . . . some 2,000 workers in marble and mosaic, and many mechanics, masons, stonecutters, bricklayers, carpenters, and cabinet-makers" had remained in New York City. 33 Robert F. Foerster wrote that the expertise and competence of skilled Italian building-trades workmen, who appear to have an outstanding importance in the statistics of disembarking aliens, were: easily adapted to the circumstances of the country, and sometimes it is of a superior order, and is prized. From Venetia and Tuscany, for example, workers in mosaics and stucco [respectively] have brought a special training, a traditional aptitude of which Americans have been glad to avail themselves […] It is common to find them at work on the most exacting tasks, ensuring the neatness of appearance, or the beauty, of the most ambitious public and private structures. 34

The first Friulian mosaicists, Domenico Pasquali and a fellow countryman (whose name has been lost), both from Sequals, came to the United States in 1870. Some years later, according to William Henry Burke, the owner of one of the first London-based mosaic firms, 35 they were engaged by the Herter Brothers company to lay mosaic floors in the New York City homes of Jay Gould and Darius Ogden Mills. Domenico Pasquali and his fellow countryman subsequently laid mosaic flooring in the hallways of several small residences in New York and Boston, but after a short stay, and due to their lack of success, they left the United States for an unknown destination in South America. 36 Interestingly, Domenico Pasquali came to the United States from Liverpool, and not from the more common emigration ports of Havre or Cherbourg. This, and the fact that Burke explicitly mentioned Pasquali in his story of marble mosaic pavements, suggests that Pasquali may have worked for the Burke company in London prior to his arrival in New York. 37 In fact, from the 1880s to the beginning of World War I, almost all the Italian marble mosaic and terrazzo artisans who reached the United States had started their careers in France, Germany, Austria, Switzerland, and Great Britain, where they had worked for companies ordinarily owned by their fellow countrymen, from whom they also learned the trade. This was the case not only for the aforementioned Domenico Pasquali and the mosaic workers hired by the Herter Brothers firm, but also for many of the artisans whose names were recorded in The Art of Mosaic and Terrazzo magazine. The patterns that led emigrants from Europe to the United States appear to be clear.
"The immigrant in the United States in a large measure assists as well as advises his friends in the Old World to emigrate," stated the Immigration Commission in 1911: 38 Emigration from Europe proceeded "according to welldefined individual plans rather than in a haphazard way" added the Immigration Commission. 39 This emigration path applied also to marble mosaic and terrazzo workers. In view of the fact that these artisans continued to be involved in the same trade on either side of the Atlantic, marble mosaic and terrazzo workers preferred the American market over the French, German, Swiss, Austrian, or British ones because they considered the former to offer more immediate economic earnings and better perspectives for future upgrades than the latter. It is clear, as pointed out by Thistlethwaite referring to skilled emigrants, that: "The connection between migration and a trade was often close, and it was, moreover, already well established in Europe before the attraction of America began to be felt." 40 "Any competent mechanic with a little money and experience could set up a shop as a contractor:" The Herter Brothers and the network of the New York marble mosaic firms The renowned New York-based Herter Brothers decoration firm, which was established by the German-born brothers Gustave and Christian, was the first in the United States to execute marble mosaic floors on a large scale, which were carried out by Friulian craftsmen who had previously worked in Paris. In 1879, Christian Herter began the most elaborate commission of his career, the William H. Vanderbilt residence in New York City on Fifth Avenue at Fifty-First Street. Six to seven hundred craftsmen, some imported from Europe, worked to complete the elaborate decoration of the building by January 1882. 41 The mosaic work had been assigned to the company Maison 42 Soon after, in 1881, the Herter Brothers firm organized a department solely for mosaics, and thereafter carried out a considerable amount of work in New York and in various parts of the United States. While German workers represented the majority of the artisans in the Herter Brothers' cabinet-maker department, all the mosaicists that were engaged by the company were Italians. 43 The working hierarchy, at least for the last few years of operation of the company's mosaic department, was clear: while mosaic masters (mechanics) were almost entirely from Friuli (Luigi Zampolino was in charge of the mosaic department from 1880 to the company's closure in 1907), assistants (helpers, polishers, and marble cutters) mostly came from other Italian regions. 44 Eventually, most of the former (the mechanics) established their own company. According to Grace Palladino, in those days: "Any competent mechanic with a little money and experience could set up a shop as a contractor." 45 The building industry was organized according to a (sub)contracting system, and most of the marble mosaic and terrazzo firms and co-operatives followed this scheme: "The mason builder, or general contractor, secures the contract from the owner, or 'client', and generally puts up the brickwork; but he submits by competing bidding all the other work to as many contractors as there are kinds of work." 46 Sub-contractors supplied both equipment and skilled men in their specialty. 
The padrone system never characterized work relationships within the mosaic and terrazzo trade, because the main feature on which it was based (the padrone was a middleman who stood between the contractor and the worker) was absent. Herter Brothers mosaic workers were responsible for the gem-like mosaic decoration of many of New York's most famous buildings, such as the palatial homes of the Goulds and the Villards, the ceilings of the dome of the Metropolitan Life Insurance Company Building, the Metropolitan Club, the Morgan Library, and the New York Historical Society. Italian mosaicists who worked for Herter Brothers executed mosaics in many other buildings all over the United States, embellishing the Boston Public Library and the opulent residences of Chicago barons such as George Mortimer Pullman, Philip Danforth Armour, and Potter Palmer. 47

Simultaneously, many other mosaic workers established their own companies. In 1890, The Art Amateur journal of New York stated that while mosaics were practically unknown here when Mr. W.H. Vanderbilt brought over from Paris two workmen to assist in decorating his new house on Fifth Avenue, today there are eight firms in New York City alone which make mosaics the whole or a part of their business, giving employment to fifty mosaic workers and double that number to helpers and masons. 48 The journal added that one of the most famous house-decorating firms in this city [surely a reference to the Herter Brothers] has enough orders on hand to keep its mosaic workers busy for a year to come, and another firm has over fifty specimens of its work in residences, churches, banks, theatres, and other public buildings in this city, besides having a generous patronage in the East, West, North, and South, so rapidly has the industry developed.

During the same period, other mosaic firms owned by Italians operated in the New York building industry. Vittorio (Victor) Foscato from Sequals, who, prior to coming to New York, had worked in Manchester, England, was the proprietor of "V. Foscato Inc.," while Luigi De Paoli from Istrago (Spilimbergo) established the "De Paoli Company, Inc." with his brothers Vincenzo and Alessandro, first in New York, then in Boston. Likewise, many Italian mosaicists worked cooperatively, as was common in Europe among Friulian mosaic and terrazzo workers. In 1889, Luigi (Louis) Pasquali, from Sequals, was instrumental in organizing a group of fellow artisans into a company that became known as the "Marble & Enamel Mosaic Co-operative Co." 52

Marble mosaic firms in America employed the "indirect method" developed by Facchina, which allowed companies to complete a mosaic in the shop and then deliver it to the job ready for installation: "Few people seem to know that the designs are first laid out on full-sized drawings, the little chips being glued to the paper, and then the entire pattern laid upon the wall, floor or ceiling in sections," according to an article published in the Worcester Daily Spy on March 31, 1889. 53 The article described the momentum gained by mosaic: "Mosaics are decidedly the architectural fad of the day: mosaic floors and ceilings, mosaic walls and mantels, mosaic pictures, mosaic everything […] The growth and multiplication of the art seem wonderfully rapid. Every architect gets it into every new plan he makes." 54
At the turn of the twentieth century, American observers became aware of "the progress this [mosaic] art had made in the United States and the wide diversity of its application in the decoration of the modern buildings of today," as stated in The New York Times on March 13, 1897. The article predicted the wide spread of mosaic in the coming years, stating that it "is bound to enter more and more into the plans of architects, for beyond its possibilities for artistic decoration, its durability commends it." The growth in the use of marble mosaic in American buildings from the 1880s onward resulted largely from the rise of Beaux-Arts architecture throughout the country. The works of the American Beaux-Arts architects, and even of some decorators who became acquainted with the most fashionable European styles and who collaborated with artists and craftsmen, played a leading role in the consolidation of the movement. 55 The Herter Brothers experience epitomized the role of a decorator's firm that drew the best elements of design from French Second Empire architecture. Christian Herter sojourned extensively in Paris, where he "must have followed the progress of Charles Garnier's Opera, the most significant edifice built during the Second Empire." 56 Thus, the importance of the Herters in the spread of marble mosaic is apparent.

An exclusively Venetian organization: Marble mosaic and terrazzo mechanics and helpers unionized

In the United States, the process that led mosaic and terrazzo mechanics (masters) and helpers to unionize was different for each group, not only because the tasks performed by each were different but also because of the different bargaining power of mechanics and helpers. According to the US Department of Labor, mechanics in marble mosaic and terrazzo work (who unionized first) were in charge of "marble mosaic, venetian enamel, and terrazzo, the cutting and assembling of art ceramic, glass mosaic, and the casting of all terrazzo in shops and mills." 57 Mechanics were also responsible for "all bedding above concrete floors or walls, that preparation, laying, or setting of the metal or wooden strips and grounds, where mosaic and terrazzo is to be applied." Terrazzo helpers, for their part, managed "all the handling of sand, cement, lime, terrazzo, and all other materials that may be used by the marble, mosaic, and terrazzo workers after being delivered at the building, or at the shop; rubbing and cleaning all marble, mosaic, and terrazzo floors, bare wainscoting when run on the building by hand or machine." 58

The capacity and desire of the Italian mosaic and terrazzo workers to organize themselves were apparent soon after their arrival in the United States. In 1888, Italian marble mosaicists in New York created the Italian Mosaic Marble Workers union, later renamed the Mosaic and Terrazzo Workers Association of New York & Vicinity. They were the first Italian building trade workers to unionize successfully in the United States. 59 Union membership consisted of the most expert marble mosaic and then terrazzo workers, the so-called mechanics. The fact that mosaic and terrazzo mechanics "had a skill which their employers could not replace helps to explain the early success of the union movement among them." 60
Their expertise gave them a monopoly, so that they completely controlled the mosaic and terrazzo industry in New York without the support of a national union, "although they were forced to maintain friendly relation with other building trades' workers who could have refused to work on a job where they were employed." 61 Since the marble mosaic and terrazzo execution process was done largely by hand, technological changes in the building trades (e.g., the shift from masonry to structural steel construction between the 1880s and 1890s) 62 did not affect the mosaic and terrazzo mechanics' union bargaining power. The adoption of machinery (the electric grinding and polishing machine) did not hinder the powerful position of the mechanics' union either, since the polishing was performed by the helpers. One year after the establishment of the mosaic union, it affiliated with the Central Labor Federation.

The baptism by fire for the marble mosaic union was probably the strike in the New York building industries in April 1890. The mechanics of the marble mosaic union demanded nine hours and $3.50 a day, with eight hours on Saturdays. Herter Brothers' marble mosaic and terrazzo workers received remarkable support from their fellow German cabinet-makers, who embarked on a sympathetic strike: "All the cabinetmakers, varnishers and painters employed in the Herter Brothers factory, at Twenty-eight St. and First Ave., went on strike yesterday in sympathy with the Italian marble mosaic workers, who have been on strike for two weeks," stated the New York Tribune on May 3, 1890. In October 1890, the Workmen's Advocate, the official journal of The Socialist Labor Party, reported that the mechanics of the Italian Mosaic Marble Workers union had unanimously resolved "to send delegates to the Socialist Labor Party Convention for political action." 64 In September 1891, the Italian Mosaic Marble Workers union participated in New York City's Labor Day Parade, as part of the nearly 9,000 men in a line that ran from Union Square to Washington Square. 65 The mosaic and terrazzo mechanics' and helpers' unions, however, showed little involvement in the Central Labor Federation. They twice lost their seats for non-attendance at meetings. The seats of delegate Egidio Marchesini from the mechanics' union and of B. Binaghi, Pompeo Spagnuolo, and Endredi Cuneo from the helpers' union were declared vacant in April and November 1892. 66 "Despite its socialist orientation, the [mechanics of the] Italian Marble and Mosaic Workers' Union began to exhibit some of the characteristics of conservative craft unionism. It closed its ranks to many men who wished to become members and once became involved in open violence with members of the Italian Marble and Mosaic Workers' Helpers, who demanded the right to join." 71

The common ethnicity shared by marble mosaic and terrazzo workers and entrepreneurs did not always secure workers' loyalty, and vice versa. From October 1, 1903, to December 31, 1909, the grievances filed by the mosaic and terrazzo mechanics' union amounted to twenty-one (sixteen of which were referred to arbitration, fifteen were decided favorably to the union, and one was decided adversely), while those filed by the mosaic workers' helpers totaled only two. 72
The grievances and complaints surely contributed to improving the conditions and wages of mosaic and terrazzo mechanics in New York City. The minimum daily rate (eight hours constituted a day's work, plus four hours on Saturday, for a forty-four-hour week) rose from $4.00 in 1906 to $4.25 in 1907 and $4.50 in 1914; the rate for mosaic workers' helpers rose from $2.50 in 1906 to $2.75 in 1907 and $3.00 in 1914. In 1914, the daily wage rate amounted to $6.00 for bricklayers and marble carvers, $5.50 for marble cutters and setters, $3.25 for marble cutters and setters' helpers, $4.00 for marble polishers, $4.25 for marble sawyers, $5.50 for tile layers, $3.38 for tile layers' helpers, and $3.00 for hod carriers. 73 In November 1919, the Bricklayers, Masons, and Plasterers' International Union of America (BMPIU) granted a charter to the mechanics of the mosaic and terrazzo union, which became Union No. 3 of New York City. A mosaic and terrazzo mechanic was paid 75 cents per hour. 74 The union's business agent, Federico G. Patrizio (Frederick J. Patrizio), who was born in Sequals in 1889 and came to the United States at the age of fourteen, was instrumental in successfully affiliating the union with the BMPIU. In the early 1920s, the Italian Mosaic Marble Workers Helpers joined, instead, the International Association of Marble, Stone and Slate Polishers, Rubbers, and Sawyers, Tile and Marble Setters' Helpers. In the 1920s and 1930s, the mechanics of the Mosaic and Terrazzo Workers' union numbered 180 members. All of them were Italian and nearly all of them were from Friuli: the minutes of the meetings were written both in Italian and in English. 75

In New York City, during the 1880s and 1890s, and even in the 1920s and 1930s, many of these Italian marble mosaic and terrazzo workers lived in the Eighteenth and Twenty-first Wards, in the area between First and Third Avenues bounded by East Twenty-fourth Street to the south and East Thirty-sixth Street to the north. 76 They lived in the same area where the Herter Brothers factory was located, at 479-85 First Avenue near East Twenty-eighth Street. Thus, marble mosaic and terrazzo workers' choice of residence soon after their arrival in New York City was shaped by craft rather than ethnicity. In fact, when the first groups of Italian marble mosaic and terrazzo workers arrived in the city in the 1880s and 1890s, they did not cluster with the majority of Italians in lower Manhattan (where Italian immigrants had moved into the old Irish section and fashioned a Little Italy west of the Bowery in the Sixth and Fourteenth Wards), in the Fourth, Eighth, and Ninth Wards, and in Harlem. In the Eighteenth and Twenty-first Wards, Italian mosaic and terrazzo workers initially mingled randomly with Americans and other Europeans. In the area of East Twenty-fifth and East Twenty-sixth Streets bounded by First and Third Avenues, for example, Italian mosaic and terrazzo workers gathered with Irish, German, Swedish, English, Swiss, and Bohemian people, and with Russian and Polish Jews. Italian marble mosaic and terrazzo workers who arrived in the early 1900s and during the interwar period joined the craftsmen among their fellow countrymen who resided in the Eighteenth and Twenty-first Wards. By this time, however, the area also drew emigrants from Friuli not involved in the mosaic and terrazzo trade. For Friulians, the choice about where to dwell in New York City now depended on ethnicity.
In the 1920s and 1930s, the area progressively became less multi-ethnic than it had been at the turn of the century and adopted a rather homogeneous Italian aspect, becoming a sort of Friulian enclave. These families experienced a powerful sense of community, which came from a shared provenance and from the practice of a common craft by many of the male workers. The majority of these craftsmen lived with their families (wives and children), which they generally brought to the United States some years after their own arrival. Indeed, the mosaic and terrazzo workers' migration to New York overwhelmingly became a definitive journey, and further movements (mostly in search of new labor markets and opportunities) involved internal displacements (to the midwestern, southern, and western states) rather than external ones (directed abroad).

The decline of marble mosaic and the spread of terrazzo in the 1920s and 1930s

The first terrazzo floors were laid in the United States in the late 1890s by the same Italian mosaic workers who had been practicing the marble mosaic trade for a number of years. Terrazzo, however, would not gain acceptance for several decades. Most stone and concrete floors in the United States were marble mosaic until the mid-1920s, when American architects became aware of terrazzo's design potential. Terrazzo was well suited to the smooth, curvilinear designs of the Art Deco and Modern styles prevalent from the late 1920s to the 1940s. Terrazzo soon overtook marble mosaic in popularity and came close to becoming a ubiquitous flooring material for public buildings and apartment buildings. 77

The terrazzo manufacturing process adopted in the United States in the 1890s was different from the traditional terrazzo techniques used in Venice. In the past, terrazzo workers placed or sprinkled the irregular bits of marble over a lime mastic and pounded them into it, rather than mixing the marble granules with the cement, as was practiced in the United States and earlier, from the 1870s onward, in nearly all European countries. In fact, it was with the advent of Portland cement in the middle of the nineteenth century that terrazzo was developed as it is known today. Terrazzo floor surfaces are manufactured from a mixture of 70 percent or more marble chips and 30 percent or less Portland cement matrix over a concrete base. Decorative chips, typically marble, are chosen for their color and strength. The chips are graded by size, varying from number 1 (between ⅛ and ¼ inch) to number 8 (1 to 1⅛ inches). A proportional mixture of sizes is most commonly used.

A factor that further influenced the spread of terrazzo all over the United States was the invention of the electric grinding and polishing machine. Before the early 1910s, mosaic and terrazzo floor surfaces had to be ground down manually by workers using a galera, a piece of stone attached to a long handle of iron pipe that was pushed and pulled back and forth by a workman, gradually wearing down the terrazzo to a smooth, level surface. With the introduction of electric polishing for stone and terrazzo finishing, the terrazzo trade gained speed and accuracy, and overall costs were reduced. 78 Likewise, technological advances positively affected terrazzo's popularity in the United States. Before 1919, terrazzo floors were laid in large monolithic slabs. This method presented problems in twentieth-century office tower applications, where terrazzo toppings were prone to cracking, particularly over structural elements.
In 1919, the L. Del Turco and Brothers Company of Harrison, New Jersey, introduced a method to subdivide terrazzo surfaces with brass divider strips. "The innovation was adopted enthusiastically by the industry," wrote the developer of the method, Luigi (Louis) Del Turco, who came to New York from Sequals in 1907. 79 Strips were used not only for the purpose of controlling cracks but also for decorative effects. The introduction of abrasive aggregates such as Alundum into terrazzo, to render it non-slippery, was called by Del Turco "another improvement worthy of mention." 80

The American market thus began to demand terrazzo workers more than mosaicists. The majority of them arrived in New York after World War I. In 1932, the architect Eugene Clute wrote that "Most of the terrazzo workers in this country seem to have come originally from the Friuli province of Udine, a few hours from Venice." 81 Prior to coming to the United States (namely before the Great War), almost all of them had been working in Europe, mostly in Germany, for terrazzo companies owned by their fellow countrymen; after the war, however, job conditions in Germany were highly unfavorable, and mosaic and terrazzo artisans were forced to migrate overseas and to other European countries. In the 1920s and 1930s, newly arrived terrazzo workers were often employed by Italians who had established their own companies in the early twentieth century. Other, non-Italian, American terrazzo companies also employed many Italians. These companies executed almost all the terrazzo floors in America's buildings during the interwar period. One of the most impressive of these terrazzo floors is situated in New York's Empire State Building. In the late 1920s, the De Paoli, Del Turco, and Foscato companies created a corporation to install 250,000 square feet of terrazzo in the corridors. This terrazzo flooring required about 1,250 yards of sand, 12,500 bags of cement, and 15,000 bags of marble chips. 82 The Italian terrazzo companies, however, did not turn their backs on their long-standing mosaic expertise. Bruno De Paoli and his firm of mosaic craftsmen in Long Island City installed the wonderful mosaics of Christ Church at 520 Park Avenue in New York City. And in 1935, Victor Foscato executed the much-admired Aztec Sun Stone mosaic in the Judy and Josh Weston Pavilion of the New York American Museum of Natural History at Columbus Avenue and West Seventy-ninth Street.

A powerful network of firms: terrazzo and mosaic contractors organized

For immigrant entrepreneurs, the initial market typically arises within the immigrant community. 83 A large number of Italian entrepreneurial initiatives grew alongside the increasing number of Little Italies popping up in the United States, continually expanding and drawing on a clientele of fellow countrymen. In most cases the entrepreneurs came from within the community and catered to this community in particular. The commerce of ethnic products like pasta, wine, and olive oil, and catering to fellow countrymen, represented the most common entrepreneurial initiatives among Italian businessmen. Conversely, terrazzo and mosaic workers had to deal with the tastes and needs of the American market and population, and their success is therefore unique in the history of Italian emigration to the United States.
As mentioned previously, the first Italian mosaic workers who came to the United States in the early 1880s came to be engaged by the New York decoration firm Herter Brothers. Several other mosaic workers arrived in New York during the same period and shortly after, thanks to the flow of information (letters) between Europe and the United States regarding employment possibilities in the city. In many cases, the mosaic workers who were already in New York sponsored tradesmen with whom they had worked in Europe to join them. This mechanism also secured a trustworthy skilled labor force for marble mosaic and terrazzo firms. In turn, the most adventurous of these mosaic and terrazzo workers started businesses of their own, first in New York and other northeastern American cities like Philadelphia, Boston, and Washington, DC, then later in nearby states such as Kentucky, Tennessee, and West Virginia, and finally moving either further south or heading west. In fact, New York contractors sent artisans to other urban areas to affix pre-cut and set mosaics. If these employees saw market opportunities, they often remained in the city to open a local branch of the business or to begin their own business. 84 In a sense, they became paradigms of self-made businessmen and built a powerful network of firms that dominated the market across America. Thus, Italian marble mosaic and later terrazzo workers extended the trade throughout the country. To paraphrase Frank Thistlethwaite, who examined the migration of pottery artisans from Staffordshire to America (and wrote "[…] if we can trace the potters we can trace the industry"), pinpointing marble mosaic and terrazzo flooring in American building amounts to identifying a settlement of Italian marble mosaic and terrazzo workers. 85

As the terrazzo industry began to expand, so did the need for an organization to support the growing number of installers. Costante (Gus) Cassini from Cavasso Nuovo in Friuli sent an invitation to twenty-seven terrazzo and mosaic contractors from all over the United States to meet in Chicago in 1924, with the intent of creating the National Terrazzo and Mosaic Contractors Association, nowadays known as The National Terrazzo and Mosaic Association, Inc. (NTMA). 86 Debate arose in the association about the suitability of using the word "mosaic" in the title. In fact, in the mid-1920s, companies did considerably more terrazzo and less marble mosaic work. Twenty years earlier, the situation had been exactly the opposite. Indeed, at the end of the 1800s or the beginning of the 1900s, contractors had joined forces to create the New York-based Mosaic Employers' Association, which became a member of the Building Trades' Employers' Association of Greater New York. 87 Surviving information regarding the association is scarce, but it appears to be the first mosaic employers' organization in the United States. 88 The membership of the National Terrazzo and Mosaic Contractors Association indicates not only the geographic changes in the spread of the trade, but also the intergenerational continuity within companies. In 1924, the old mosaic companies established in the 1880s and 1890s were, with only a few exceptions, no longer active. New companies established in the 1910s and early 1920s, from Minneapolis to Oklahoma, from Kansas City to Indianapolis, and from St. Louis to Chicago, represented the greater part of the new contractors' association.
By 1926, two years after its establishment, the membership of the National Terrazzo and Mosaic Contractors Association numbered almost sixty firms, forty of which were owned and run by Italians. More than ten years later, in 1938, of the ninety-three companies which made up the association, approximately sixty belonged to Italians or Italian Americans. At that point, the second generation of the company founders began to take control of their family enterprises. Nowadays, according to the registers of the NTMA, Italian Americans control more than half of the mosaic and terrazzo companies throughout the United States, although the labor force now comprises only a minimal number of people of Italian origin. 89

Creative responses of a group of craftsmen in a diachronic perspective

The successful intergenerational continuity in business ownership, and the control of the trade over time by skilled marble mosaic and terrazzo workers from Friuli, were secured by innovations adopted by artisans and companies. Creative skills appear to be a key factor that allowed these artisans to control the trade for such a long time, as well as to keep alive for centuries the popularity of marble mosaic and terrazzo (as compared with, e.g., ceramic tile or marble) among customers. The adoption of the "indirect method" for restoration and then for the industrial manufacturing of marble mosaic, 90 the invention of a new method of laying mosaic, granolithic, and similar floors, 91 the development of the electric polishing machine, 92 the invention of brass divider strips, 93 the manufacturing of ornamental terrazzo, 94 the development of epoxy terrazzo (a resin adopted in the late 1960s that opened up a previously unthinkable, boundless matrix for multicolored patterns and designs), and the employment of computers to produce terrazzo designs 95 represent some of the steps of this innovative process. These examples of the "new thing" as a result of innovative action (the "new thing" that "need not be spectacular or of historic importance," in the words of Schumpeter) aligned the creative response of the marble mosaic and terrazzo entrepreneurs with the Schumpeterian entrepreneur whose "defining characteristic is simply the doing of new things or the doing of things that are already being done in a new way (innovation)." 96 Only by renewing the way of doing marble mosaic and terrazzo flooring did artisans secure its long-term persistence.

Moreover, what this craftsmanship experience shows is that their success was not merely a match between job market requirements and the skills offered by immigrants; instead, the marble mosaic and terrazzo workers brought to the fore knowledge and a trade that were wholly or partly unknown in many of the new labor markets. In such a context, the limited financial resources brought by these artisans appeared not to be an issue. It is important to point out that in the marble mosaic and terrazzo industry the need for capital to establish a company was not crucial, given the low investment required to purchase the tools and instruments necessary to execute a marble mosaic or a terrazzo floor. The technical and artisanal training that craftsmen brought with them was much more valuable than their financial resources.

Creative and innovative outcomes criss-crossed the shores of the Atlantic. In the late 1910s, the advantages of the electric polishing machine, first developed in Germany, spread to the United States.
A decade later, in the early 1930s, innovative techniques and new aesthetic trends in America, which was at that time the most important global market for terrazzo, reached the British terrazzo market. 97 Peter Mazzioli, the son of the works manager at the London-based firm Diespeker & Co., one of the most important terrazzo companies in the United Kingdom, remembers that: "My father was in touch with Del Turco, Pellarin, and other fellow countrymen from Sequals who were in the terrazzo business in the United States, so he was up-to-date on the innovations in the American terrazzo sector. In the Twenties, in England we used Cassani polishing machines, which were made in Italy and had only one plate for polishing. In the following decade instead, Diespeker started to import some machines made in the United States that, compared to the Italian ones, had more plates for polishing and were more efficient." 98

Friulian marble mosaic and terrazzo entrepreneurs took advantage of the network of family members and townspeople that extended from the European to the American shores. In fact, relationships between marble mosaic and terrazzo workers and entrepreneurs who had settled around the world were as strong as the bonds with their homeland. As for the intense relationship with the homeland, even during a period as troubled as the Great Depression, the NTMA decided to support the Friuli Mosaic School in Spilimbergo with a yearly contribution of $500.00 (from 1929 to 1933).

Within the Italian American marble mosaic and terrazzo industry, a kind of labor hierarchy reflected the distinct skill sets and work traditions of the craftsmen: while the high-level workforce (the mechanics) was mainly from Friuli, helpers and polishers (both manual and non-manual) came chiefly from other Italian regions. 99 Skilled and unskilled Italian laborers coexisted within the same industrial field. This is evidence of the fact that a highly segmented economic sector like the building industry, to which marble mosaic and terrazzo belong, accommodates not only unskilled emigrant laborers but also skilled emigrant workers. Even though the two groups originated from the same country, skilled and unskilled workers did not come from the same Italian region. A national approach clearly falls short of defining the characteristics of the two groups, as well as their identities. As historian Marcus Lee Hansen well recognized, "at any given moment the phenomenon of emigration is characterized, not by the nation as a whole, but by a comparatively restricted part of it, and when again it makes its appearance, though the participants are still listed as Germans or Italians, their origin was distinct." 100 What the Italian case shows is that the heterogeneous emigrant experiences of the different local groups cannot be explained through the lens of a superficially homogeneous national background. Thus, both marble mosaic and terrazzo mechanics showed a clannish loyalty to their craft, and a strong labor identity rather than an ethnic identity. In the United States, they first created unions, not national, regional, provincial, or village ethnic associations. However, it was the village, the province, or the regional area, rather than the national entity, with which the group identified. In New York, for example, a Friulian club was established only in 1929, and most of its founders worked in the marble mosaic and terrazzo industry.
However, this regional association, which might be considered a regional loyalty reaction to the fascist attempts to "Italianize" the immigrant community, arose much later than the establishment of the mosaic mechanics' union in the 1880s, and even after the creation of the employers' association in 1924. Skills rather than ethnicity seem a better key for interpreting marble mosaic and terrazzo workers' experience and assessing their success or failure.

Conclusions

The transplantation and diffusion in the United States of a typically Italian artisan heritage is to be credited to the immigrants themselves, proving that immigrants can be powerful conduits for the transfer of skills and knowledge. The marble mosaic and terrazzo workers' experience also demonstrates that a study of the diverse work experiences of Italians and other immigrant groups who practiced artisan crafts in the United States, such as stonemasons, stonecutters, carpenters, masons, musicians, shoe-makers, cabinet-makers, glassmakers, pottery artisans, and brewers, crafts that, despite some scholarly works, have been confined to the shadows of historiography, can surely open new research horizons. 101 Job specializations or, in other words, the "comparative advantages" of migrants (the various sorts of "capital" that migrants embody) often play a crucial role in the whole migratory experience, from the choice of migratory destinations to the length of the migrant experience. An in-depth examination and comparison of many artisanal and skilled migration experiences could lead to an interesting discussion of the commonly held view that migrants occupy the lower echelons of the job market. Another assumption that can be challenged is that immigrants become entrepreneurs only in economic sectors that do not require high levels of specialization (e.g., neighborhood retailers, grocery stores), or in sectors fully interwoven into the fabric of the immigrant community, where the immigrant entrepreneur exclusively meets the needs of co-ethnics.

Many aspects support the story of the marble mosaic and terrazzo migration experience as a continuum. The story of the Pellarin, Pasquali, and Foscato families, who were deeply involved in the art of terrazzo in seventeenth and eighteenth century Venice and in nineteenth and twentieth century North America, demonstrates the unusual perpetuation of a craft across several generations, as well as the extraordinary control of the trade by the same (migrant) families from the seventeenth and eighteenth centuries to the twentieth century. There is little comparable evidence of such an extended duration of family craftsmanship across such wide geographical areas and economic contexts: a diachronic perspective thus appears crucial in order to understand the complex structure of this skilled migration experience. A marked, long-lasting organizational approach also seems to distinguish the marble mosaic and terrazzo workers' experience: attempts to secure standards and to protect the interests of members, guaranteed by the craft guild in sixteenth century Venice, developed into a union arrangement in nineteenth century America and into a trade association later on. The marble mosaic and terrazzo workers' strong tendency to become entrepreneurs over time might be explained by a consolidated tradition that goes back to seventeenth century Venice.
This outcome is related not only to the guild system but specifically to its breaking up and the ensuing entrepreneurial process, which encouraged the more talented and skilled terrazzo masters to embrace entrepreneurship. The immigrants' tendency toward self-employment induced by blocked upward mobility in the host country, as theorized by some scholars, 102 does not really explain the marble mosaic and terrazzo entrepreneurial experiences. The widely assumed predisposition toward business does not seem to be a natural, general, and overwhelming feature of immigrants themselves, unless we consider the socio-economic and labor background characteristics of each migrant group. In many artisan migration experiences, only a long-term perspective, that is, a historical reconstruction of individual, familial, and village migration patterns, might highlight the predisposition to become (or not to become) an entrepreneur. The importance of the study of these entrepreneurial processes supports Joseph A. Schumpeter's assertion that "Cumulation of carefully analyzed historical cases is the best means of shedding light […], of supplying the theorist with strategic assumptions, and banishing slogans." 103

The research agenda prompts scholars to study the rich variety of immaterial skills and knowledge heritage transferred to the United States by migrants. The New York sample discussed in this paper is probably weighted in favor of those who succeeded, but this paper is far from being a celebratory narrative. By further piecing together information and evidence from miscellaneous sources, it will be possible to trace the experiences and works of artisans who crossed the Atlantic to practice their trade and transfer a knowledge heritage to North America. Many aspects of the contribution offered by historical (and contemporary) skilled craft workers in the United States are still unknown. Further investigation and research may shed light on other unexplored experiences in the history of immigration to North America.
Tribology and Dowson It is with great sadness that we note the passing of Professor Duncan Dowson on 6th January 2020. Duncan was an esteemed member of the Editorial Board of this journal. He will be remembered as one of the founding fathers of tribology and as a true gentleman. He was the last living member of the Jost Committee, set up by the UK Government (1964–1966) to investigate the state of lubrication education and research, and to establish the requirements of industry in this regard [1]. This committee coined the term "tribology". Duncan contributed to many areas of tribological research and established many of them, including elastohydrodynamic theory and biotribology.[...] Introduction It is with great sadness that we note the passing of Professor Duncan Dowson on 6th January 2020. Duncan was an esteemed member of the Editorial Board of this journal. He will be remembered as one of the founding fathers of tribology and as a true gentleman. He was the last living member of the Jost Committee, set up by the UK Government (1964–1966) to investigate the state of lubrication education and research, and to establish the requirements of industry in this regard [1]. This committee coined the term "tribology". Duncan contributed to many areas of tribological research and established many of them, including elastohydrodynamic theory and biotribology. His research interests have provided both academic and practising industrial tribologists with many analytical tools and methods, including the Dowson and Higginson extrapolated oil film thickness formula for the prediction of minimum film thickness in lubricated line contacts, and the Hamrock and Dowson oil film thickness formula for lubricated point contacts. He also provided a formula for film thickness for hip joint prostheses, which emanated from his innovative research on the design of pioneering total hip arthroplasty in the 1960s. The publication of more than 600 papers and five books is testament to Duncan's vast contribution to science and engineering. In addition, he has also been the Editor of several engineering journals, including Wear and the Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science and Part H: Journal of Engineering in Medicine. He played a key role in creating a global tribological research community, organising many events, including the long-running (since 1974) Leeds-Lyon Symposium on Tribology, which he co-founded with the late Professor Maurice Godet of Institut National des Sciences Appliquées (INSA) Lyon. Duncan was one of the most decorated scientists of our times, over a career that spanned seven decades. His distinctions include Fellow of the Royal Society (FRS), Commander of the Most Excellent Order of the British Empire (CBE), Honorary Fellow of the Royal Society of Edinburgh (FRSE) and Fellow of the Royal Academy of Engineering (FREng). His accolades include seven honorary doctorates from both national and international universities, and numerous scientific and technical awards, including the Thomas Hawksley Gold Medal, the British Society of Rheology Gold Medal, the Tribology Gold Medal, the James Alfred Ewing Medal, the Kelvin Medal and the James Watt International Gold Medal. This journal recognised his immense long-standing achievements in 2018, by establishing the annual Duncan Dowson Travel Grant, which is subject to competition by young PhD and Post-doctoral researchers intending to present at a tribology-related conference.
Lubricants is honoured to publish this special issue dedicated to the memory of Duncan Dowson, edited by guest editors Nicholas Morris and Patricia Johns-Rahnejat, who enjoyed his advice and guidance. We are very grateful for all the contributions submitted in so many aspects of tribology in commemoration of the achievements of Duncan Dowson. Contributions of Duncan Dowson As already noted, Duncan Dowson contributed immensely to the field of tribology. In particular, he made innovative contributions to hydrodynamics and elastohydrodynamics and the application of these to bearings, engine and powertrain tribology as well as to biotribology. He is regarded as the father of biotribology and a pioneer in the establishment of elastohydrodynamic theory. He also contributed significantly to other areas, such as nanotribology and contact mechanics. His research remained at the leading edge of developments right up to his unfortunate death in January 2020. Duncan enjoyed a remarkable research career spanning nearly 70 years, with hundreds of published papers and important books. Therefore, it is well beyond the scope of this editorial to recount all his many contributions. Instead, we will confine ourselves to highlighting the significance of his work by placing some of his seminal contributions within the context of research developments in tribology at the time. Elastohydrodynamic Lubrication (EHL) The ground-breaking contribution to the field of lubrication was made by Osborne Reynolds [2] who developed the theory for hydrodynamics in narrow conjunctions of lubricated contacts. He demonstrated that a converging wedge-shaped film of fluid generated high contact pressures. Therefore, the presence of a rising and falling pressure distribution along a hydrodynamic wedge endows it with a load carrying capacity. Reynolds concluded that the load carrying capacity is the raison d'être of all hydrodynamic bearings. If the applied load were to exceed the hydrodynamic load capacity of a bearing, there should be some evidence of wear. Therefore, the absence of wear under such prescribed conditions (where the applied load exceeded the predicted hydrodynamic load capacity) was rather puzzling at that time. For instance, Martin [3] used Reynolds' equation to predict the hydrodynamic lubricant film thickness between meshing spur gear teeth pairs when they were represented by a pair of rigid cylinders, and remarked on the absence of wear, which should have been present with his very thin predicted films. The problem with the early applications of Reynolds' hydrodynamic theory to gears and rolling element bearings at medium to relatively high loads was twofold. Firstly, the hydrodynamic theory assumed iso-viscous fluid behaviour (no changes in lubricant viscosity with pressure). Secondly, contacting bodies are not rigid and, due to the concentrated nature of their contact, can undergo localised deformation, thus increasing the conjunctional gap between them when subjected to sufficient load. The problem of localised deflection of contacting solids of revolution under small strain was tackled by Heinrich Hertz [4] around the same time as the advancements of Osborne Reynolds. However, there is no evidence that the two scientists ever communicated. Consequently, the clear connection between Reynolds' hydrodynamics and Hertzian contact mechanics took a further half-century to emerge. 
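The load-carrying wedge described above is normally expressed through the Reynolds equation. For reference, a commonly quoted incompressible, iso-viscous form is reproduced below in generic notation; this is a standard textbook statement rather than a transcription from the cited papers.

\[
\frac{\partial}{\partial x}\left(\frac{h^{3}}{12\eta}\,\frac{\partial p}{\partial x}\right)
+\frac{\partial}{\partial y}\left(\frac{h^{3}}{12\eta}\,\frac{\partial p}{\partial y}\right)
=\frac{u_{1}+u_{2}}{2}\,\frac{\partial h}{\partial x}
+\frac{\partial h}{\partial t}
\]

Here p is the film pressure, h the local film thickness, \eta the lubricant viscosity, and u_1 and u_2 the surface velocities; the two terms on the right-hand side are the wedge (entrainment) and squeeze contributions, and it is the converging-film wedge term that generates the load-carrying capacity Reynolds identified.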
In the period 1936-1941, various researchers investigated the effect of elastic deformation of loaded rolling contacting members on the potential formation of a lubricant film in their contact [5,6]. These analyses were based on Hertzian contact mechanics under dry contact conditions. Such studies gave credence to the supposition that led to Ertel and Grubin's [7] piezo-viscous-elastic hypothesis and the definition of elastohydrodynamic lubrication. Petrusevitch [8] confirmed the findings of Ertel and Grubin [7], proposing an initial solution that satisfied both hydrodynamic and elasticity equations, but not in an integrated manner. A numerical solution of elastohydrodynamic lubrication for the assumed case of infinite line contact of rollers was presented by Dowson and Higginson [9]. This solution is regarded as the first detailed and accurate representation of EHL. Dowson and Higginson [10] also provided early supplementary contributions, undertaking parametric sensitivity analysis for EHL line contact. This led to their seminal and long-lasting book "Elasto-Hydrodynamic Lubrication" [11], as well as the Dowson and Higginson lubricant film thickness formula for EHL line contact [12]. By 1962, Dowson [13] had proposed the generalised form of Reynolds' equation for the solution of hydrodynamic problems, including for bearings and seals. This approach was extended to the case of EHL analysis. The finite difference solution for the case of the circular point contact of a ball on a flat race was presented by Cameron and Gohar [14] in 1966. In fact, Duncan Dowson and his contemporary Ramsey Gohar, whilst rarely co-authors, consulted regularly throughout their working lives, and were two of the most influential researchers in the early developments of EHL theory and experimentation, including interferometry [15,16]. They both developed theory for EHL of point contacts, with Dowson providing a series of papers dealing with finite difference solutions of circular and elliptical point contacts under different conditions, including for fully flooded and starved inlet boundaries [17][18][19][20], in addition to providing some new oil film thickness equations for different regimes of lubrication. Later, it was noted that the lubricant film thickness would alter in various applications, owing to changes in contact kinematics or transient effects, such as squeeze film motion. Variations in the direction of lubricant entrainment were analysed by Dowson, as well as by a number of his contemporaries, with some providing new and more comprehensive equations [21][22][23][24][25][26]. Other solutions have included the effect of squeeze film motion or surface waviness under transient conditions where, owing to squeeze film effect, the load carrying capacity of the contact is enhanced [27][28][29][30][31][32][33]. Dowson contributed to many of the other developments of EHL theory, applications and/or methods of solution. One important issue has been thermal effects; others are non-Newtonian shear and micro-elastohydrodynamics (EHL of rough surfaces) [34][35][36][37]. As early as 1979, the 6th Leeds-Lyon Symposium [38,39] was dedicated to the generated heat in hydrodynamic/elastohydrodynamic conjunctions. The mechanics of contact is critical to the durability of load bearing surfaces, with the limiting factor often being the generated sub-surface stresses, which can lead to fatigue spalling and the exfoliation of coatings and overlays. 
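For orientation, the extrapolated film-thickness formulae mentioned above are usually written in terms of dimensionless speed, materials and load parameters. The sketch below shows how such a calculation is typically organised; the exponents and coefficients follow the widely quoted regression forms attributed to Dowson-Higginson (line contacts) and Hamrock-Dowson (point contacts), the numerical inputs are hypothetical, and the original references should be consulted before the values are relied upon.

import math

# Illustrative sketch: dimensionless EHL film-thickness regressions of the
# Dowson-Higginson (line contact) and Hamrock-Dowson (point contact) type.
# Coefficients and exponents follow the widely quoted textbook forms and
# should be checked against the original references before any real use.

def dimensionless_groups(eta0, u_mean, E_prime, R, alpha, w_prime):
    """Speed U, materials G and load W parameters for a line contact.

    eta0    : lubricant viscosity at ambient pressure [Pa s]
    u_mean  : mean entrainment velocity [m/s]
    E_prime : reduced elastic modulus [Pa]
    R       : reduced radius of curvature [m]
    alpha   : pressure-viscosity coefficient [1/Pa]
    w_prime : load per unit length [N/m]
    """
    U = eta0 * u_mean / (E_prime * R)
    G = alpha * E_prime
    W = w_prime / (E_prime * R)
    return U, G, W

def h_min_line_contact(U, G, W, R):
    """Minimum film thickness, Dowson-Higginson-type regression (line contact)."""
    H_min = 2.65 * U**0.70 * G**0.54 * W**(-0.13)   # dimensionless film thickness
    return H_min * R                                 # back to metres

def h_min_point_contact(U, G, W, R, k):
    """Minimum film thickness, Hamrock-Dowson-type regression (elliptical contact).
    k is the ellipticity ratio; here W is the point-contact load parameter w/(E' R^2)."""
    H_min = 3.63 * U**0.68 * G**0.49 * W**(-0.073) * (1.0 - math.exp(-0.68 * k))
    return H_min * R

# Example with plausible (hypothetical) steel/mineral-oil values:
U, G, W = dimensionless_groups(eta0=0.01, u_mean=2.0, E_prime=2.2e11,
                               R=0.02, alpha=2.0e-8, w_prime=2.0e5)
print(f"h_min (line contact) ~ {h_min_line_contact(U, G, W, R=0.02)*1e9:.0f} nm")

With the illustrative inputs above the predicted minimum film thickness comes out on the order of a few hundred nanometres, which is precisely the thin-film regime that made the absence of wear in gears and rolling bearings so puzzling before EHL theory was established.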
Therefore, the determination of such stresses is an important consideration, as noted by Dowson and Higginson [10]. The sub-surface stresses are induced by applied contact pressures and surface traction in load transmitting conjunctions [40]. This area of research was closely followed by another of Duncan Dowson's contemporaries, Kenneth Johnson, who studied rolling contact fatigue under various conditions [41,42], as well as surface adhesion [43]. The determination of sub-surface stresses is critical in the assessment of fatigue life of contacts [44], often requiring detailed numerical analysis [45][46][47]. In many cases, these stresses depend on surface coatings that are often used for a multitude of reasons such as wear-resistance, reduced friction, etc. Dowson was an early contributor to this area of research [10,48], advancing predictive methods for soft overlays in hip and knee joint prostheses [49,50] and for coatings in bearings and gears [51]. Engine and Powertrain Throughout his career, Duncan Dowson developed many applications of hydrodynamic and elastohydrodynamic theories. His main interest and most of his contributions were in the tribology of biological systems, particularly in the endo-articular joints; the hip and knee (Section 2.3). However, he also contributed significantly to applications of EHL and hydrodynamic theories for the prediction of tribological conditions in internal combustion (IC) engines and other powertrain subsystems (such as gearing systems [11,52,53]), with the aim of improving their energy efficiency [54,55]. With regard to IC engines, Dowson conducted research in all the major tribological conjunctions, including cam-follower pairs, which are subject to EHL [56,57]. His experimental monitoring of lubricant film thickness using its electrical resistivity [57] had only previously been measured by Hamilton [58], using a deposited capacitive micro-transducer. Hamilton first reported the use of these transducers for the piston ring conjunction [59]. Capacitive, pressure and temperature sensitive micro-transducers have been used to monitor contact conditions in a variety of applications, including some under EHL conditions [60][61][62][63][64][65]. Dowson also paid considerable attention to piston-cylinder conjunctions as piston rings and piston skirt conjunctions account for nearly 50% of all the frictional losses of an IC engine, which, in turn, accounts for 15-25% of all the engine losses. Other sources of loss are thermal and pumping losses. Dowson's initial work on piston rings [66] set the scene for analytical EHL predictive research in this area, which includes multi-physics integrated lubrication, dynamics and gas flow analyses. He also worked on the cavitation boundary and contact exit boundary condition [67], as well as on the effect of cylinder liner temperature on the lubrication and friction of piston compression ring conjunctions [68]. His research on piston-cylinder systems includes the effect of surface coatings of liners, and the topical issue of surface modification/texturing of liners and piston skirts with engine testing, to ascertain their impact on friction [69]. Improved lubricant film thickness was found both experimentally and through numerical predictions in the case of surface texturing, owing to its micro-hydrodynamic effect [70]. This was an area of special interest to Dowson, as micro-elastohydrodynamics is inherent to the behaviour of rough cartilage in the lubrication of natural joints (Section 2.3). 
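Because fatigue assessments of the kind discussed here start from the dry Hertzian pressure distribution, a minimal sketch of the line-contact quantities involved is given below. The 0.30 p_max and 0.78 b figures for the magnitude and depth of the maximum sub-surface shear stress assume a Poisson's ratio of about 0.3 and are textbook approximations, not values taken from the works cited.

import math

# Illustrative sketch of dry Hertzian line-contact quantities that feed
# sub-surface stress and fatigue assessments. Inputs are hypothetical.

def hertz_line_contact(w_prime, R, E_prime):
    """w_prime: load per unit length [N/m]; R: reduced radius [m];
    E_prime: reduced modulus [Pa]. Returns (half-width b, peak pressure p_max)."""
    b = math.sqrt(4.0 * w_prime * R / (math.pi * E_prime))
    p_max = 2.0 * w_prime / (math.pi * b)
    return b, p_max

b, p_max = hertz_line_contact(w_prime=2.0e5, R=0.02, E_prime=2.2e11)
tau_max = 0.30 * p_max        # approximate maximum shear stress ...
z_tau   = 0.78 * b            # ... reached below the surface at roughly this depth
print(f"b = {b*1e6:.0f} um, p_max = {p_max/1e9:.2f} GPa, "
      f"tau_max ~ {tau_max/1e9:.2f} GPa at z ~ {z_tau*1e6:.0f} um")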
Dowson was also keen to investigate the effect of new engine technologies upon frictional performance. In particular, the effect of cylinder deactivation (CDA) technology was investigated, showing that tribology should be taken into account in the design of modern engines using new technologies, such as CDA [71,72]. Biotribology The unification of the study of friction, wear and lubrication into the integrated discipline of tribology was a step-change for engineering [1]. It is notable that the Jost report [1], which surveyed practitioners of the newly defined discipline, did not mention those concerned with biological systems. By the middle of the 20th century, there were several explanations for the mechanism of mammalian synovial joint lubrication. Charnley [73], at the newly inaugurated Hip Centre at Wrightington Hospital, suggested that synovial joints relied on boundary phenomena. MacConaill [74] proposed a hydrodynamic mechanism, whilst McCutchen [75] suggested a weeping mechanism of lubrication. Clarity was provided at a landmark symposium held in 1967, organised by Dowson and Neale on behalf of the Institution of Mechanical Engineers (IMechE), with Charnley and Scales representing the British Orthopaedic Association (BOA). At this event, Dowson used a mechanical bearing analogy to demonstrate that the primary modes of lubrication were elastohydrodynamic and squeeze film effect in origin [76]. He also noted that complementary mechanisms related to boundary lubrication would be of significant importance [76]. Shortly afterwards, Dowson, who was at this point the Chairman of the IMechE Tribology group, coined the term biotribology [77] and provided evidence of the complementary mechanisms of entrapment and enrichment [77][78][79]. Prior to the mechanistic understanding of tribology of synovial joints, remarkable progress was already being made in the United Kingdom on the development of experimental total joint arthroplasty. Operations were conducted by surgeons in an attempt to alleviate severe and chronic joint conditions. The first recorded of these operations was conducted by Wiles in 1938 using stainless steel components [80]. The Mckee-Farrar joint was developed in the 1950s, and widely adopted for patients in 1961. It used a more inert chromium-cobalt alloy steel with much greater success [81,82]. The McKee-Farrar joint was designed to prioritise minimal wear, whilst Charnley placed a greater emphasis on the minimisation of friction, pioneering a metallic femoral head paired with a polymeric acetabular cup. At first, polytetrafluoroethylene (PTFE) polymer was used for the acetabular socket. This caused a number of unwanted issues for the recipients. The search for an alternative polymer led to the meeting of Dowson and Charnley when the latter took an interest in Dowson's development of novel bearing materials for use in the presence of water and environments of high humidity for the Ministry of Defence, leading to the use of ultra-high molecular weight polyethylene (UHMWPE) [83]. This began a lasting relationship between the two men, and many visits to Leeds and Wrightington ensued [84]. The Charnley low-friction arthroplasty [82] (generally referred to as the "Charnley joint") with 22.225 mm diameter metal femoral head and thick UHMWPE socket was the first total hip replacement to be adopted worldwide. 
When Dowson invited his colleague Longfield to conduct an analysis of the Charnley joint, it was found that the manufactured dimension was actually very close to optimal (25-27 mm) when considering wear [84][85][86]. To date, the Charnley joint with significant contributions by Dowson remains the gold standard for total hip replacement. Innovative numerical solutions for elastohydrodynamic lubrication problems pioneered by Dowson provided further insights into the mechanism of ankle, knee and hip synovial joint lubrication. He showed the action of articular cartilage rugosity using numerical micro-elastrohydrodynamic perturbations [87] and, from similar numerical analysis, derived empirical formulae for film thickness and load carrying capacity of the major synovial joints [88]. Dowson investigated total joint replacement with an UHMWPE acetabular component, cushion bearing behaviour for knee and hip arthroplasty [89,90] and lubrication of total hip replacement joints created with materials of high elastic modulus [91]. Alongside his numerical research, he had a successful programme of experimental research, focusing on the wear of total hip joint replacement [92][93][94][95] and knee joint replacement [96][97][98][99][100]. Dowson presented his definitive review, entitled: "New joints for the Millennium" at the IMechE in 2000 [101]. He continued to contribute to the advancement of understanding in total joint replacement in many areas, such as joint simulator performance of metal-on-metal joints [102], tribo-corrosion and tribo-film formation on medical implants [103][104][105], non-Newtonian effects in metal-on-metal joints [106], wear modelling of metal-on-metal joints [102,107], poro-elastic effects of endo-articular cartilage [50] and hydrogels [108]. Closure The foregoing is a short commemoration of Duncan Dowson's achievements and contributions to all aspects of tribology. It is but a very brief recounting of his work, covering nearly seven decades of his most pertinent, pioneering and original contributions. In addition, he worked widely with many other researchers, who benefited from his patient and considered guidance. Duncan was a forerunner in the development of elastohydrodynamic theory and the leading light in biotribology. The community of tribologists and all those practising any aspect of the broad subject will benefit directly for many years to come from his sustained and long-standing contributions. In particular, total hip replacement, now one of the most common elective surgeries, is thought of as one of the landmark surgeries of the 20th century. Indeed, the majority reading this article will have benefited, either personally or through a loved one, from such an operation. For more than sixty years, Duncan Dowson sustained invaluable contributions towards the advancement of total joint replacement prostheses. For this, and so much more, he is owed an immense debt of gratitude.
Prehemorrhage antiplatelet use in aneurysmal subarachnoid hemorrhage and impact on clinical outcome Background Literature is inconclusive regarding the association between antiplatelet agent use and outcome after aneurysmal subarachnoid hemorrhage. Aims To investigate the association between clinical outcome and prehemorrhage antiplatelet use in aneurysmal subarachnoid hemorrhage patients, as well as the impact of thrombocyte transfusion on rebleed and clinical outcome. Methods Data were collected from prospective databases of two European tertiary reference centers for aneurysmal subarachnoid hemorrhage patients. Patients were divided into "antiplatelet-user" and "non-user" according to the use of acetylsalicylic acid prior to the hemorrhage. Primary outcome was poor clinical outcome at six months (Glasgow Outcome Scale score 1–3). Secondary outcomes were in-hospital mortality and impact of thrombocyte transfusion. Results Of the 1033 patients, 161 (15.6%) were antiplatelet users. The antiplatelet users were older, with a higher incidence of cardiovascular risk factors. Antiplatelet use was associated with poor outcome and in-hospital mortality. After correction for age, sex, World Federation of Neurosurgical Societies score, infarction and heart disorder, pre-hemorrhage acetylsalicylic acid use was only associated with poor clinical outcome at six months (adjusted OR 1.80, 95% CI 1.08–3.02). Thrombocyte transfusion was not associated with a reduction in rebleed or poor clinical outcome. Conclusion In this multicenter study, prehemorrhage acetylsalicylic acid use in aneurysmal subarachnoid hemorrhage patients was independently associated with poor clinical outcome at six months. Thrombocyte transfusion was not associated with the rebleed rate or poor clinical outcome at six months. Introduction Aneurysmal subarachnoid hemorrhage (aSAH) is a potentially fatal disease, carrying a six months' case fatality rate of 55-60% 1-3 and more than one third of survivors have severe disability. 4 Many complications such as rebleed, delayed cerebral ischemia (DCI) and hydrocephalus are multifactorial and negatively affect clinical outcome. 3 Antiplatelet agents are used for secondary prevention of various cardiovascular and cerebrovascular events in a wide range of high-risk patients, [5][6][7] and their use has been associated with a lower incidence of aSAH. It is hypothesized that this might be mediated by a protective effect against chronic inflammation and subsequent aneurysm wall degeneration. [8][9][10] However, antiplatelet use has also been related to early rebleeds, treatment-related complications and worse outcome after aSAH. [11][12][13] Decision-making regarding the management of patients with prehemorrhage antiplatelet agent use is mainly based on inconsistent results and nonsignificant findings. [11][12][13][14][15] As no evidence-based recommendations regarding the management of prehemorrhage antiplatelet use in aSAH patients exist, the decision to stop antiplatelet medication is often associated with the presence or absence of local guidelines. 16 One recent study 17 found an association between thrombocyte transfusion and poor clinical outcome after six months in patients with aSAH, and this needs further clarification. Aims The main purpose of this study was to investigate the influence of prehemorrhage antiplatelet use on the clinical outcome after aSAH, considering confounding factors.
Additionally, we studied potential effects of thrombocyte transfusion on the clinical outcome and rebleed after aSAH. Data collection Patients were retrieved from prospectively collected databases, including patients at the Department of Neurosurgery of the University Hospital Zurich treated between January 2005 and December 2016 and the Academic University Medical Center Amsterdam treated between December 2011 and December 2015. Both hospitals are high-volume tertiary reference centers for the treatment of aSAH. The research ethics board of the Canton of Zurich, Switzerland, approved this study. Patient characteristics We included patients older than 18 years of age who had confirmed aSAH on admission computed tomography (CT) imaging or positive lumbar puncture and a proven aneurysm on either computed tomography angiography or digital subtraction angiography. Subjects meeting any of the below-mentioned criteria were excluded from this study: patients with non-aneurysmal SAH, perimesencephalic hemorrhage (according to the previously published definition 18 ) as well as patients with traumatic SAH. Patients were divided into the groups "antiplatelet-user" and "non-user" according to the use of acetylsalicylic acid (ASA) prior to the hemorrhage. Because ASA in combination with other antiplatelet agents was assumed to be associated with a worse outcome than ASA alone, the outcome of this patient group was explored prior to including them in the "antiplatelet-user" cohort, and they were excluded if they showed a significant difference in outcome. Patients on anticoagulation therapy were excluded beforehand. ASA was stopped in all patients immediately after the radiological diagnosis of aSAH. The patients' characteristics, clinical and radiological data as well as clinical outcome data were collected by trained staff and verified by an attending vascular neurosurgeon. Furthermore, we collected the treatment modalities, in-hospital complications as well as cardiovascular risk factors (smoking, hypertension, hypercholesterolemia, heart disorder, diabetes). Heart disorders were characterized according to the World Heart Federation. 19 The initial clinical severity and radiological grade were assessed using the World Federation of Neurosurgical Societies (WFNS) grade and the Fisher score, respectively. We dichotomized the WFNS into WFNS 1-3 and 4-5 and the Fisher score into 3 vs. 1, 2 and 4. 20,21 In-hospital complications including hydrocephalus and its treatment modality (external ventricular drainage (EVD) or ventriculoperitoneal (VP) shunt placement), rebleed, occurrence of DCI and infarction were registered for outcome comparison between groups. Only patients with radiologically confirmed rebleed were included in the rebleed group. DCI was defined according to Vergouwen et al. 22 Only confirmed new ischemic lesions on follow-up imaging (CT and/or magnetic resonance imaging), which were not seen immediately after the aneurysm-excluding procedure, were included in the analysis. The clinical outcome was evaluated using the Glasgow Outcome Scale (GOS), with GOS 1 (death) at initial hospital admission and six months' follow-up, and poor (GOS 1-3) and favorable (GOS 4-5) outcome at six months' follow-up. Data analysis Prior to data analysis, we explored the outcome of patients with ASA in combination with other antiplatelet agents in relation to patients with ASA alone.
As all patients with double antiplatelet medications had a poor outcome, which was significantly different from the group with ASA alone, we decided to exclude these patients from further analysis. Baseline characteristics, disease-associated complications, treatment and outcome factors were compared between "antiplatelet users" and "non-users." Continuous variables are presented as mean with standard deviation (SD) if normally distributed and as median with interquartile range if not. Categorical variables are presented as counts and percentages and are dichotomized to relevant clinical cutoff points. Group differences are calculated using the Chi-square test and Student's t-test. A two-sided p-value < 0.05 is considered significant. Crude and adjusted odds ratios (OR) were calculated for prehemorrhage antiplatelet use in relation to poor outcome, in-hospital mortality and mortality at six months with logistic regression analysis. If the change between crude and adjusted OR was >10%, the corresponding parameter for which the stratification was performed was considered a confounder. A multivariable logistic regression analysis was performed, adjusting for confounders. In the antiplatelet user group, the impact of thrombocyte transfusion on rebleed and poor outcome was calculated using a Chi-square test. Analysis was performed with STATA version 16.0 (StataCorp, Stata Statistical Software: Release 16, College Station, TX) software. Baseline characteristics and treatment modalities A total of 1123 patients were eligible for the study. Ninety patients (8.0%) with missing outcome data at six months were excluded from the analysis. The study flowchart is given in Figure 1. The remaining 1033 patients consisted of 692 (67%) women, and 161 (15.6%) used ASA prior to the onset of aSAH (Table 1). Antiplatelet users were older compared to non-users, with a higher prevalence of hypertension, diabetes, heart disorder and hypercholesterolemia (Table 1). No difference was seen in clinical status on admission, nor in the Fisher score, between the two groups (Table 1). Outcome The proportion of poor outcome at six months' follow-up was higher in the antiplatelet group compared to the non-user group (Tables 2 and 3). We did not find a difference in rebleed rate, DCI, infarction, rate of hydrocephalus, need for urgent EVD or VP-shunt dependency (Table 2). In the antiplatelet user group, five patients were diagnosed with NSTEMI, most likely due to the aSAH itself. Risk factors and poor outcome at six months The following parameters were assessed as confounders and are included in the multivariable analysis (Supplementary files 1, 2 and 3): age, sex, WFNS score, infarction and heart disorder for poor outcome, and age, sex, infarction, heart disorder, hypercholesterolemia and smoking for in-hospital and six months' mortality. Figure 1 (study flowchart): Cumulatively, 1123 patients with aSAH and data on prehemorrhage acetylsalicylic acid use were available for inclusion in this prospective cohort study. Ninety patients with missing six-month clinical outcome data were excluded from the study. In the final analysis, 1033 patients were included. aSAH: aneurysmal subarachnoid hemorrhage.
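As an illustration of the analysis pattern described in the data analysis section (crude versus confounder-adjusted odds ratios from logistic regression, with a greater than 10% change in the estimate used as the confounding criterion), a minimal sketch on synthetic data is given below. It is not the study's code or data; the variable names and simulated effect sizes are hypothetical, and the change-in-estimate is computed relative to the adjusted OR, which is one common convention.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cohort: ASA use becomes more likely with age, and both age and
# ASA raise the probability of a poor outcome, so age acts as a confounder.
rng = np.random.default_rng(0)
n = 1000
age = rng.normal(60, 12, n)
asa = rng.binomial(1, 1.0 / (1.0 + np.exp(-(age - 65) / 8)))
poor = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-4 + 0.05 * age + 0.4 * asa))))
df = pd.DataFrame({"poor": poor, "asa": asa, "age": age})

crude = smf.logit("poor ~ asa", data=df).fit(disp=0)
adjusted = smf.logit("poor ~ asa + age", data=df).fit(disp=0)

or_crude = np.exp(crude.params["asa"])
or_adj = np.exp(adjusted.params["asa"])
change = abs(or_crude - or_adj) / or_adj * 100
print(f"crude OR = {or_crude:.2f}, adjusted OR = {or_adj:.2f}, change = {change:.0f}%")
print("age treated as a confounder" if change > 10 else "age not flagged as a confounder")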
Antiplatelet use was independently associated with poor outcome at six months (adjusted OR 1.80, 95% CI 1.08-3.02), whereas it was neither associated with in-hospital mortality (adjusted OR 0.97, 95% CI 0.56-1.66) nor with mortality at six months (adjusted OR 0.89, 95% CI 0.51-1.53). Thrombocyte transfusion Of the 161 patients with a history of prehemorrhage ASA use, 67 (41.6%) received a thrombocyte transfusion. Patients who received a thrombocyte transfusion after the rebleeding event (n = 2) were not included in the transfusion group. The rate of rebleed was higher in patients who did not receive a thrombocyte transfusion; however, this difference was not significant (transfusion vs. no transfusion: 5 (7.7%) vs. 17 (17.7%), p = 0.07; Table 3). There was no difference in poor outcome between patients receiving thrombocyte transfusion and those who did not (transfusion vs. no transfusion: 37 (55.2%) vs. 53 (56.4%), p = 0.83; Table 3). Discussion In this multicenter study, prehemorrhage ASA use in aSAH patients was identified as an independent risk factor for poor outcome (defined as GOS 1-3) at six months after aSAH. Thrombocyte transfusion in the subgroup of antiplatelet users had no significant impact on the rate of rebleed or patient outcome at six months. Impact of prehemorrhage ASA use on clinical outcome after aSAH Previous studies have reported conflicting results regarding the impact of prehemorrhage antiplatelet use on patients' outcome, and some did not find an association between ASA use (other antiplatelet or anticoagulant agents were excluded) and poor outcome. 12 Our data show a significantly higher proportion of poor outcome and in-hospital mortality in the ASA users. This cannot be explained by a higher rebleed rate, nor by a more severe hemorrhage pattern. Our finding that prehemorrhage antiplatelet use is an independent risk factor for poor outcome has previously been suggested. 11 Kato et al. reported the influence of antiplatelet agents before the onset of hemorrhage and concluded that antiplatelet use was significantly associated with worse outcome in patients aged 70-79 years. 11 Cerebrovascular and cardiovascular effects of antiplatelet agents Antiplatelet agents, especially ASA, have well-known positive effects on reducing the risk of cardiovascular 23,24 and cerebrovascular 25,26 diseases. Moreover, several recent studies suggested that ASA may decrease the risk of growth and rupture of cerebral aneurysms. 8,10,27 After rupture, however, ASA is possibly associated with an increased risk of recurrent bleeding before treatment 28 and ASA given after aneurysm treatment in aSAH does not improve clinical outcome. 29 (Note to Table 3: because we wanted to address aneurysm rebleeding cases, the two patients who received a thrombocyte transfusion after the rebleeding event were included in the non-transfusion group for this crosstab calculation.) One study showed that long-term ASA and anticoagulant use among patients with aSAH and endovascular aneurysm treatment was not associated with increased mortality or complication rates. 14 Another study suggested a potential beneficial effect of ASA in the setting of intracranial aneurysms by weighing the risk of rupture against its potential adverse effects on hemorrhage severity.
Those findings contradict with our findings, however, both studies drew their conclusions on a lower number of patients. A possible reason for the increased risk of poor outcome in antiplatelet users could be the inhibition of platelet activation. This can theoretically exacerbate the initial hemorrhage and the use of antiplatelet drugs can complicate surgical procedures. 11 However, in our cohort we could not confirm this hypothesis as we did not find a significant difference in rebleed rate. Furthermore, higher age and higher proportion of heart disorder in antiplatelet users could have influenced the outcome. However, when we corrected for these confounders, antiplatelet use remained associated with poor outcome. This is opposing to findings by Bruder et al. 30 who in the matched-pair analysis did not find a different outcome as evaluated by modified Rankin scale between patients with continuous ASA and patients without ASA. Due to the aging population and rising number of patients with cardiovascular or neurovascular disease, there is an increasing number of patients taking antiplatelet and anticoagulative drugs. 31 A previous study showed a significant increase in the rate of patients with continuous ASA use at the time of aneurysm rupture over the observed period of 15 years. 30 This emphasizes the importance of guidelines and treatment recommendations in this patient group. A recently conducted survey shows that there is significant variability in the management of patients with aSAH and antiplatelet use before admission. 16 Departmental guidelines are only present in 32% and have an impact on decisionmaking to stop the antiplatelet agent and/or transfuse thrombocytes. 16 Impact of thrombocyte transfusion on outcome after aSAH Thrombocyte transfusion in patients with spontaneous intracerebral hemorrhage has been investigated in several studies but patients with aSAH were excluded. 27,32,33 Results of the randomized controlled PATCH trial 34 showed a higher case fatality rate when thrombocytes were acutely transfused in non-surgically treated patients taking antiplatelet therapy prior to intracerebral hemorrhage. Although patients with aSAH were not included in the study, it opened the discussion about whether thrombocyte transfusion in aSAH could be harmful to patients instead of beneficial. In a recent consecutive series of 364 patients with aSAH, 38 patients used antiplatelet therapy prior to admission and underwent thrombocyte transfusion during hospital admission; those patients showed poor clinical outcome at six months after correcting for confounders. 17 Based on the available data, however, no firm recommendation regarding thrombocyte transfusion in patients with aSAH can be given so future research in larger cohorts are needed. No difference in poor outcome at six months between the patients with and without transfusion was seen. So, based on the findings of our study, thrombocyte transfusion does not seem to be harmful in aSAH patients. On the contrary, a recent study by Post et al. found an association between thrombocyte transfusion and poor clinical outcome at six months. 17 Based on current very limited findings, we still do not know the balance between risk and benefit of thrombocyte transfusion in patients with aSAH and prehemorrhage antiplatelet use. A pooled analysis of data from both cohort and larger international (randomized) studies could clarify this issue. 
Limitations We performed a retrospective analysis of prospectively collected data, and some data were collected retrospectively. This way of data collection could have induced information bias. Moreover, information bias could also have been introduced by the next of kin failing to recall whether the patient used ASA. To minimize information bias, the validated outcome measurements were collected by physicians and research nurses who were all trained in performing outcome assessment. Since studies have found that 25% of all patients with aSAH die before reaching hospital, our data are not generalizable to the complete aSAH population including non-hospitalized patients. 35 We did not collect data on the duration of ASA use before the hemorrhage, nor did we assess thrombocyte aggregation status. Finally, the timing of thrombocyte transfusion was not assessed in the current cohort. Conclusion In this multicenter study, the use of ASA before onset of aSAH was found to be independently associated with poor outcome (defined as GOS 1-3) at six months. Prehemorrhage ASA use is not associated with a higher in-hospital or six months' mortality rate. Thrombocyte transfusion had no impact on the rebleed rate or poor outcome at six months. Future studies are necessary to assess the optimal management of patients who use ASA with or without other antiplatelet agents before the onset of aSAH. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Supplemental material Supplemental material for this article is available online.
GNA13 regulates BCL2 expression and the sensitivity of GCB-DLBCL cells to BCL2 inhibitors in a palmitoylation-dependent manner GNA13, encoding one of the G protein alpha subunits of heterotrimeric G proteins that transduce signals of G protein-coupled receptors (GPCR), is frequently mutated in germinal center B-cell-like diffuse large B-cell lymphoma (GCB-DLBCL) with poor prognostic outcomes. Due to the “undruggable” nature of GNA13, targeted therapy for these patients is not available. In this study, we found that palmitoylation of GNA13 not only regulates its plasma membrane localization, but also regulates GNA13’s stability. It is essential for the tumor suppressor function of GNA13 in GCB-DLBCL cells. Interestingly, GNA13 negatively regulates BCL2 expression in GCB-DLBCL cells in a palmitoylation-dependent manner. Consistently, BCL2 inhibitors were found to be effective in killing GNA13-deficient GCB-DLBCL cells in a cell-based chemical screen. Furthermore, we demonstrate that inactivating GNA13 by targeting its palmitoylation enhanced the sensitivity of GCB-DLBCL to the BCL2 inhibitor. These studies indicate that the loss-of-function mutation of GNA13 is a biomarker for BCL2 inhibitor therapy of GCB-DLBCL and that GNA13 palmitoylation is a potential target for combination therapy with BCL2 inhibitors to treat GCB-DLBCL with wild-type GNA13. Introduction GNA13 encodes one of the alpha subunits (GNA13/ Gα13) of the heterotrimeric G proteins that transduce signals of G protein-coupled receptors (GPCR). It is expressed in various tissues, including lymphoid, vascular, and bone tissues in embryos and adults. Although GNA13 is classified into the Gα12/13 subfamily and highly homologous to GNA12 1 , GNA13 has unique functions. It has been shown to play critical roles in localization of germinal center (GC) B cells 2 , angiogenesis 3 , female fertility 4,5 , bone homeostasis 6 , and platelet activation 7,8 . Recurrent mutations in the GNA13 gene have been identified in multiple tumor types. As GNA13 activation can promote migration, invasion, and metastasis in pancreas, prostate, and ovarian cancer, it was originally classified as an oncogene [9][10][11] . However, loss-of-function mutations in GNA13 have recently been identified in diffuse large B-cell lymphoma (DLBCL) [12][13][14] , indicating that GNA13 may also function as a tumor suppressor. Consistent with this observation, GNA13-deficient mice develop GC B-cell-derived lymphoma 2 . DLBCL is the most commonly diagnosed lymphoma and accounts for 25-35% of all B-cell non-Hodgkin lymphomas 15 . Based on the gene expression pattern and cell-of-origin, DLBCL is usually classified into two main subtypes, namely, GC B-cell-like (GCB) and activated Bcell-like (ABC) DLBCL 16,17 . Although nearly 60% of DLBCL patients can be cured by Rituximab plus chemotherapy-based standard treatment (R-CHOP), the rest may die due to therapy nonresponsiveness or disease relapse resulting from the complexity and heterogeneity of the disease 13 . Identifying valuable therapeutic targets for treating DLBCL remains an urgent need. In the GC, B cells are strictly confined within follicles by the GPCR signaling, such as sphingosine-1phosphate receptor S1PR2 and purinergic receptor P2RY8 signaling [18][19][20] . GNA13 was found to activate ARHGEF1-RHOA and subsequently inhibits the phosphoinositide 3-kinase (PI3K)/AKT pathway 21 . 
A recent CRISPR/Cas9-based screen in primary GC B cells showed that GNA13 depletion strikingly enhances cell survival and proliferation, indicating its major suppressive role in constraining GC B cells 22 . Consistent with this, over 18% of germinal center B-celllike diffuse large B-cell lymphoma (GCB-DLBCL) patients harbor loss-of-function mutations or homozygous deletions in the GNA13 gene locus [12][13][14] . Additionally, some partners of GNA13, like S1PR2, P2RY8, ARHGEF1, and RHOA, are also frequently mutated or dysregulated in GCB-DLBCL, implying the critical role of this GPCR signaling in lymphomagenesis 12,23,24 . Although GCB-DLBCL prognosis is generally more favorable than that of ABC-DLBCL, a recent comprehensive analysis of 1001 DLBCL patients revealed that GCB-DLBCL patients who harbor GNA13 mutations and also express high level of BCL2 have an extraordinarily high risk of poor outcomes 25 . However, no effective therapeutic strategy is available for this DLBCL subtype. Post-translational protein modifications regulate protein function and can be used as therapeutic targets. Spalmitoylation involves palmitoyl acyltransferase (PAT)mediated covalent lipid modification of cysteine side chains with the 16-carbon fatty acid, palmitate 26,27 . Palmitoylation regulates the membrane association, subcellular trafficking, stability, and function of proteins 26 . We previously showed that palmitoylation of NRAS is essential for its plasma membrane (PM) translocation, signal transduction, and leukemogenesis, both in vivo and in vitro 28 . Palmitoylation is required for GNA13 to associate with the PM and the activation of Rho-dependent signaling 29 . Here, we show that palmitoylation of GNA13 also regulates its stability and is required for its tumor suppressor function in GCB-DLBCL cells. Interestingly, GNA13 negatively regulated BCL2 expression in GCB-DLBCL cells in a palmitoylation-dependent manner. Inactivating GNA13 by targeting its palmitoylation enhanced the sensitivity of GCB-DLBCL cells to the BCL2 inhibitors. Our studies suggested that GNA13 loss-of-function mutations may serve as a biomarker for BCL2 inhibitormediated precision therapy of DLBCL and that GNA13 palmitoylation may be a potential target for combination therapy with BCL2 inhibitors to treat DLBCL with wildtype (WT) GNA13. Palmitoylation regulates GNA13 protein stability To elucidate the role of GNA13 palmitoylation in GCB-DLBCL, we first confirmed the palmitoylation sites in GNA13 employing isobaric iodoTMT switch labeling in HeLa cells stably expressing HA-tagged GNA13. The proteomics data showed that both cysteine 14 (C14) and 18 (C18) contained iodoTMT 6 -127, indicative of palmitoyl modifications (Fig. 1A). All other cysteines could be excluded as palmitoylation sites except for C236, because the tryptic peptide containing this residue could not be resolved by mass spectrometry owing to its small size. Similarly, a click chemistry-based, single-cell in situ proximity ligation assay (Supplementary Fig. S1A-C) showed that GNA13 was palmitoylated (red fluorescence) and that palmitoylation was almost abolished by the C14/ 18S double mutation. We further confirmed the above results using bioinformatic algorithms (CSS-PALM 4.0 30 , MDD-PALM 31 ) and an Acyl-RAC assay ( Supplementary Fig. S1D, E). These results were consistent with previous findings 29 . Next, we characterized the PM localization of GNA13 by co-staining with the PM marker, Na-K ATPase (Supplementary Fig. S3A). 
A large fraction of WT GNA13 (GNA13 WT ) localized to the PM, whereas GNA13 harboring the single or double mutations in palmitoylation sites showed diffused cytoplasmic staining. Furthermore, 2-bromopalmitate, a pan-palmitoylation inhibitor, markedly inhibited the membrane localization of GFP-GNA13 WT after a short treatment (Supplementary Fig. S3B). We further employed a biochemical method to isolate the PM and cytosolic fractions of the cell. Consistent with a previous report 29 , the data showed that the PM fraction of palmitoylation-deficient GNA13 mutants was considerably less than that of GNA13 WT (Fig. 1B). Notably, the expression level of the GNA13 C14/18S mutant was lower than that of GNA13 WT , as well as that of the single mutants (Fig. 1B), indicating that palmitoylation affected protein stability. A cycloheximide-based pulse-chase experiment revealed that the loss of palmitoylation accelerated GNA13 degradation compared with GNA13 WT (Fig. 1C). Palmitoylation can regulate protein stability by affecting critical proteolytic processes, such as those associated with the ubiquitin-proteasome 32,33 , autophagosome-lysosome 34 , and caspase systems 35,36 . To examine if palmitoylation regulates GNA13 protein stability through these systems, we tested whether the downregulation of the GNA13 C14/18S mutant could be rescued by treatment with the proteasomal inhibitor MG132, the autophagy inhibitors HCQ and ULK-101, and several caspase inhibitors. Interestingly, the downregulation of GNA13 C14/18S was markedly reversed by exposure to the pan-caspase inhibitors Z-VAD(OMe)-FMK, Q-VD-Oph, and Emricasan, as well as by the group II caspase-specific inhibitor Ac-DEVD-CHO (Fig. 1D), indicating that the palmitoylation of GNA13 regulates its stability through the evasion of caspase-associated degradation. Interestingly, the level of GNA13 WT can be upregulated by MG-132 (Supplementary Fig. S2C), indicating that GNA13 is also subject to proteasome-mediated degradation. Meanwhile, protein levels of both GNA13 WT and GNA13 C14/18S can be moderately upregulated by the autophagic inhibitors HCQ and ULK-101 (Supplementary Fig. S2D), suggesting that GNA13 can also be regulated by the autophagosome-lysosome system. Palmitoylation does not appear to affect these processes. These data demonstrated that GNA13 palmitoylation regulates both its PM localization and its stability. Fig. 1 Palmitoylation of GNA13 regulates its protein stability. A The scheme of the isobaric iodoTMT switch labeling-based mass spectrometry assay and the MS/MS spectrum of the palmitoylated peptide of GNA13. IodoTMT 6 -127 labeling on Cys14 and Cys18 (the two C in lowercase) of the GNA13 peptide sequence is shown in the peaks graph (upper panel). B Total, membrane (Mem) and cytosolic (Cyto) fractions of HeLa cells expressing HA-tagged WT GNA13, C14S, C18S, or C14/18S mutant of GNA13 were immunoblotted with an anti-HA antibody. α-Tubulin was used as a loading control for total cellular proteins, while Na-K-ATPase and GAPDH were used as markers of the membrane and cytosol, respectively. C HeLa cells overexpressing WT GNA13 or the C14/18S mutant were incubated with cycloheximide (CHX) and analyzed by western blot at the indicated time points. D Protein levels of WT GNA13 and the C14/18S mutant in HeLa cells treated with or without the indicated caspase inhibitors for 24 h were detected by immunoblotting with an anti-HA antibody. α-Tubulin was used as a loading control.
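As an aside, a cycloheximide chase of the kind shown in Fig. 1C is usually reduced to a protein half-life by assuming first-order decay of the band intensities, so that a straight-line fit of log intensity against time yields the decay constant. A minimal sketch with hypothetical intensities (not values measured in this study) is given below.

import numpy as np

# Illustrative half-life estimation from a CHX chase; the band intensities
# below are hypothetical, normalized to the t = 0 signal.
time_h = np.array([0, 2, 4, 8])                    # hours after CHX addition
wt_signal  = np.array([1.00, 0.90, 0.80, 0.65])    # hypothetical GNA13-WT intensities
mut_signal = np.array([1.00, 0.60, 0.38, 0.15])    # hypothetical C14/18S intensities

def half_life(t, signal):
    k = -np.polyfit(t, np.log(signal), 1)[0]       # decay constant from ln(signal) vs. t
    return np.log(2) / k

print(f"t1/2 WT      ~ {half_life(time_h, wt_signal):.1f} h")
print(f"t1/2 C14/18S ~ {half_life(time_h, mut_signal):.1f} h")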
Palmitoylation of GNA13 is required for its tumor suppressor function GNA13 is frequently mutated in GCB-DLBCL. Various sites are mutated throughout the GNA13 gene, consistent with their loss-of-function nature [11][12][13][14] . Through the whole genome sequencing data obtained from St. Jude Cloud 37 , we found that there were at least two patients harboring the GNA13 C14S mutation (Supplementary Fig. S3) 38,39 , suggesting that palmitoylation of GNA13 regulates its tumor suppressor function. To examine the role of palmitoylation in GNA13's tumor suppressor function, we compared the tumor suppressor activity of palmitoylation-deficient mutants of GNA13 to that of the WT counterpart in two GCB-DLBCL cell lines. First, we transduced GNA13 WT , GNA13 C14S , GNA13 C18S , and GNA13 C14/18S into OCI-LY1, a GCB-DLBCL cell line harboring loss-of-function GNA13 mutations. Consistent with the tumor suppressor function of GNA13, we found that ectopic expression of GNA13 WT markedly suppressed proliferation of OCI-LY1 cells (Fig. 2A). The three GNA13 palmitoylation mutants, on the other hand, did not inhibit proliferation of OCI-LY1 cells (Fig. 2A), indicating that palmitoylation of GNA13 is required for its tumor suppressor function. Interestingly, the palmitoylation mutants of GNA13, particularly GNA13 C14/18S , actually promoted proliferation of OCI-LY1 cells (Fig. 2A). This result suggests that OCI-LY1 cells retain partial GNA13 tumor suppressor function, either because the GNA13 mutant in these cells retains partial activity or because of the presence of a wild-type GNA13 allele, and that the palmitoylation mutants of GNA13 have a dominant negative effect. To test this possibility, we knocked down the endogenous GNA13 by introducing GNA13 shRNAs (Fig. 2B). We found that two independent GNA13 shRNAs, shGNA13 600 and shGNA13 UTR , both promoted proliferation of OCI-LY1 cells (Fig. 2B), supporting the assumption that OCI-LY1 cells retain partial GNA13 tumor suppressor function. Next, we further confirmed the role of palmitoylation in GNA13's tumor suppressor function in SU-DHL4, a GCB-DLBCL cell line with WT GNA13. In this experiment, we first transfected SU-DHL4 cells with either scrambled or GNA13-specific shRNAs. The endogenous GNA13 expression was significantly reduced in SU-DHL4 cells transfected with specific shRNAs (Fig. 2C). More importantly, proliferation of SU-DHL4 cells transfected with GNA13 shRNAs was significantly increased as compared to SU-DHL4 cells transfected with control shRNA (Fig. 2C). We chose the SU-DHL4 cell line stably expressing the GNA13-UTR shRNA (SU-DHL4-shGNA13 UTR ) for further experiments. We then transduced GNA13 WT , GNA13 C14S , GNA13 C18S , and GNA13 C14/18S into SU-DHL4-shGNA13 UTR cells. As expected, ectopic expression of GNA13 WT markedly suppressed proliferation of SU-DHL4-shGNA13 UTR cells (Fig. 2D). The three GNA13 palmitoylation mutants, on the contrary, did not inhibit proliferation of SU-DHL4-shGNA13 UTR (Fig. 2D), further demonstrating that palmitoylation of GNA13 is required for its tumor suppressor function. The increased proliferation of SU-DHL4-shGNA13 UTR expressing palmitoylation mutants of GNA13 compared to that transfected with the vector control may be a result of the dominant negative effect of palmitoylation mutants of GNA13 over the residual GNA13, as discussed above (Fig. 2A, D).
In addition, we found that SU-DHL4-shGNA13 UTR expressing palmitoylation mutants of GNA13 exhibited a significant decrease in annexin V/PI positive apoptotic cell population compared to SU-DHL4-shGNA13 UTR expressing the WT GNA13 ( Fig. 2E and Supplementary Fig. S4A), indicating that palmitoylation of GNA13 is required for its pro-apoptotic function. We also found that the cells overexpressing GNA13 WT showed a higher level of the cleaved Caspase3, which means the WT GNA13 could induce an active state of caspase3 (Supplementary Fig. S5). Consistently, a similar trend in cell proliferation could also been observed among these cells when we assessed the cell-cycle progression using a BrdU/7-AAD labeling assay ( Fig. 2F and Supplementary Fig. S4B). GNA13 negatively regulates BCL2 expression in GCB-DLBCL in a palmitoylation-dependent manner Intriguingly, the clinical data from PPISURV 40 revealed that GNA13 and BCL2 expressions exhibited opposite prognostic effects on the survival of DLBCL patients (Fig. 3A, B). To gain insights into the mechanism by which GNA13 functions as a tumor suppressor in GCB-DLBCL, we analyzed RNA sequencing data of 102 GCB-DLBCL cases from public datasets (R2: Genomics Analysis and Visualization Platform, https://hgserver1.amc.nl/ cgi-bin/r2/main.cgi). We found that BCL2 expression and GNA13 expression are inversely correlated (Fig. 3C). To check if the BCL2 expression is affected by the GNA13 activity, we examined BCL2 expression levels in GCB-DLBCL cell lines either with the WT GNA13 (SU-DHL4) or mutant GNA13 (OCI-LY1 and SU-DHL6). We found that the expression of BCL2 is significantly higher in cells with GNA13 mutant than the cells with WT GNA13 (Fig. 3D). These data suggest that GNA13 may exert its tumor suppressor function partially by regulating the BCL2 expression. To test this hypothesis, we examined the expression level of BCL2 in SU-DHL4 cells vs. SU-DHL4-shGNA13 UTR cells. We found that BCL2 expression was drastically elevated when GNA13 was knocked down (Fig. 3E), indicating that GNA13 negatively regulates the expression of BCL2. We then moved on to do a rescue experiment and found that ectopic expression WT GNA13 in SU-DHL4-shGNA13 UTR inhibited the expression of BCL2 (Fig. 3E), confirming that GNA13 negatively regulates BCL2 expression level. Consistent with the previous finding that palmitoylation of GNA13 is required for its tumor suppressor function, ectopic expression of palmitoylation mutants of GNA13 were found to be incapable of suppressing the expression of BCL2 in SU-DHL4-shGNA13 UTR cells (Fig. 3E). Similar results were obtained in OCI-LY1 cells bearing a loss-offunction mutant of GNA13 (Fig. 3F). These data demonstrate that GNA13 is a negative regulator of BCL2 and that palmitoylation is required for this function of GNA13. GNA13-deficient GCB-DLBCL cells are hypersensitive to the treatment with BCL2 inhibitors To find potential therapies for GNA13-deficient GCB-DLBCL, we carried out a cell-based drug screening using a chemical library comprising FDA-approved drugs and bioactive compounds with known targets. The SU-DHL4-shGNA13 UTR described above was used as a model for GNA13-deficient GCB-DLBCL and the parental SU-DHL4 was used for the counter screening. We found that two BCL2 inhibitors, ABT-737 and ABT-263 (the firstand second-generation BCL2 inhibitors, respectively), exhibited the most significant efficacy in killing SU-DHL4-shGNA13 UTR as compared to the SU-DHL4 control cells (Fig. 4A). It has been shown by us (Fig. 
It has been shown by us (Fig. 4E) and others 2 that the PI3K-AKT signaling pathway is a well-known downstream target of GNA13. We found that inhibitors of the PI3K-AKT signaling pathway, such as MK-2206, can also effectively kill GNA13-deficient SU-DHL4 cells (Fig. 4A, B), demonstrating the validity of this screening. Having found that GNA13-deficient SU-DHL4 cells are hypersensitive to the treatment with BCL2 inhibitors, we analyzed the sensitivity of DLBCL cell lines to BCL2 inhibitors using published data 41 . Figure 4C shows that DLBCL cell lines with GNA13 mutations are more sensitive to the treatment with BCL2 inhibitors than those with WT GNA13. To further confirm that GCB-DLBCL cells with GNA13 mutations are hypersensitive to the treatment with BCL2 inhibitors, we compared the sensitivity to ABT-199 (also known as venetoclax, the third-generation BCL2 inhibitor 41 ) of GCB-DLBCL cell lines with either WT GNA13 (SU-DHL4) or mutant GNA13 (OCI-LY1). As shown in Fig. 4D, OCI-LY1 cells are much more susceptible to the treatment with ABT-199 than SU-DHL4 cells. Consistent with the function of GNA13 in suppressing the phosphoinositide 3-kinase (PI3K)/AKT pathway 21 , the AKT Serine 473 phosphorylation (P-AKT S473 ) level was elevated in SU-DHL4 cells transfected with GNA13 shRNAs (Fig. 4E). To test whether the high expression of BCL2 is correlated with the activation of the PI3K-AKT pathway in DLBCL, we used the pan-PI3K inhibitor GDC-0941 and the PI3Kα/δ inhibitor Copanlisib 42 to treat WT SU-DHL4 cells. Both inhibitors suppressed the phosphorylation of AKT S473 and, accordingly, BCL2 protein expression (Fig. 4F). Likewise, in SU-DHL4-shGNA13 UTR cells, the protein expression level of BCL2 was also drastically repressed upon PI3K-AKT pathway inhibition by the above two inhibitors (Fig. 4G), implying that a strong link between PI3K-AKT signaling and the BCL2 anti-apoptotic pathway may exist in the GNA13-deficient background. These data, together with the above finding that GNA13 negatively regulates the expression of BCL2, suggest that the loss-of-function mutation of GNA13 is a biomarker for precision BCL2 inhibitor therapy for GCB-DLBCL. Inactivation of GNA13 by targeting its palmitoylation sensitizes the GCB-DLBCL cells to BCL2 inhibitors Although GCB-DLBCL with loss-of-function mutations of GNA13 could be treated effectively with BCL2 inhibitors, the majority of GCB-DLBCL patients harbor WT GNA13. As GNA13 negatively regulates BCL2 expression and thereby the sensitivity of GCB-DLBCL to the treatment with BCL2 inhibitors, effective therapy of GCB-DLBCL with WT GNA13 may be achieved by targeting both GNA13 and BCL2. Since our data show that palmitoylation of GNA13 is required for its function in regulating BCL2 expression, inhibiting GNA13 palmitoylation may sensitize GCB-DLBCL cells with WT GNA13 to BCL2 inhibitors. As a proof-of-concept experiment, we tested the sensitivity of GCB-DLBCL cells bearing a palmitoylation mutant of GNA13 to the treatment with a BCL2 inhibitor. We first confirmed that SU-DHL4-shGNA13 UTR cells, in which the WT GNA13 was knocked down with the specific GNA13 shRNA as described earlier, were more susceptible to the treatment with ABT-199 compared to the parental SU-DHL4 cells (Fig. 5A). To test the ABT-199 efficacy in vivo, we generated two xenograft models by serially transplanting SU-DHL4-shGNA13 UTR -OE WT or SU-DHL4-shGNA13 UTR -OE C14/18S cells into recipient NOD/SCID mice.
Tertiary recipients transplanted with the two different SU-DHL4 cell lines were randomly assigned to two cohorts that were orally administered 100 mg/kg ABT-199 or vehicle, respectively. As shown in Fig. 5B, SU-DHL4-shGNA13 UTR -OE C14/18S tumors in mice treated with vehicle control (n = 13) grew faster than SU-DHL4-shGNA13 UTR -OE WT tumors (n = 13), which is consistent with the previous in vitro result in Fig. 2D. The ABT-199 treatment slowed down the growth of shGNA13 UTR -OE WT tumors (n = 10), while the effect of the ABT-199 treatment was much more dramatic on SU-DHL4-shGNA13 UTR -OE C14/18S tumors (n = 8) (Fig. 5B). At the end of this experiment (56 days), 8 out of 8 SU-DHL4-shGNA13 UTR -OE C14/18S mice (100%) treated with ABT-199 remained tumor-free. These data suggest that inhibition of palmitoylation of GNA13 is an effective therapeutic strategy for GCB-DLBCL in combination with the BCL2 inhibitor.

Fig. 4 GNA13-deficient GCB-DLBCL cells are hypersensitive to BCL2 inhibitors. A Volcano plot of FDA-approved drugs and bioactive compounds with known targets on SU-DHL4-shGNA13 Scr and SU-DHL4-shGNA13 UTR cells. Drug effect size ratio between SU-DHL4-shGNA13 Scr and SU-DHL4-shGNA13 UTR vs. statistical significance (P value) were plotted. Red and blue points indicate drugs identified as differentially inhibited between the two types of cells. B Dose-response curves for SU-DHL4-shGNA13 Scr or SU-DHL4-shGNA13 UTR treated with the AKT inhibitor MK-2206. C IC 50 of the BCL2 inhibitors ABT-199 and ABT-263 in DLBCL cell lines with either wild-type (blue dots) or mutant (red dots) GNA13. D Dose-response curves for SU-DHL4 and OCI-LY1 cells treated with ABT-199. E SU-DHL4 or OCI-LY1 cells were transfected with constructs containing shGNA13-600, shGNA13-UTR, or a scrambled shRNA (Scr). Levels of total (T)-AKT and phosphorylated (P)-AKT S473 in these cells were examined by western blot analysis. GAPDH was used as a loading control. F Western blot analysis of BCL2, total (T)-AKT, and phosphorylated (P)-AKT S473 levels in SU-DHL4 cells treated with the PI3K inhibitors Copanlisib or GDC-0941 at the indicated concentrations for 6 h, respectively. G Western blot analysis of the BCL2 level in SU-DHL4-shGNA13 Scr or SU-DHL4-shGNA13 UTR cells treated with the PI3K inhibitors Copanlisib or GDC-0941 at the indicated concentrations for 6 h, respectively. α-Tubulin was used as a loading control.

Discussion In this study, we demonstrate that palmitoylation of GNA13 not only regulates its plasma membrane localization, but also regulates GNA13's stability. It is essential for the tumor suppressor function of GNA13 in GCB-DLBCL cells. Interestingly, we found that GNA13 negatively regulates BCL2 expression in GCB-DLBCL in a palmitoylation-dependent manner. Consistently, we found that GCB-DLBCL cells with loss-of-function mutations of GNA13 are sensitive to the treatment with BCL2 inhibitors and that inactivating the WT GNA13 by targeting its palmitoylation dramatically enhanced the sensitivity of GCB-DLBCL to the BCL2 inhibitor. These results indicate that the loss-of-function mutation of GNA13 is a biomarker for precision BCL2 inhibitor therapy for GCB-DLBCL and that GNA13 palmitoylation is a potential target for combination therapy with the BCL2 inhibitor to treat GCB-DLBCL with WT GNA13. It has been shown that palmitoylation regulates protein stability 26 . We show here that palmitoylation of GNA13 does affect its stability (Fig. 1B). Our data show
that downregulation of the palmitoylation-deficient mutant GNA13 C14/18S was rescued significantly by caspase inhibitors such as Z-VAD(OMe)-FMK (Fig. 1D), indicating that palmitoylation of GNA13 regulates its stability by inhibiting caspase-mediated protein degradation. It is possible that the loss of palmitoylation alters the conformation of GNA13, exposing the caspase cleavage sites. Alternatively, the changed cellular localization due to the lack of palmitoylation may subject GNA13 to caspase-mediated protein degradation. These possibilities will be tested in the future. Mutations of GNA13 have been found in about 18% of GCB-DLBCL, 13% of Burkitt lymphoma (BL), and 15.6% of follicular lymphoma (FL) cases, respectively [43][44][45] , and include multiple point mutations and truncated variants. In BL, several prevailing GNA13 point mutations, such as L184R, L197Q, and F245S, were shown to be loss-of-function mutations in terms of Gα13/RHOA activity in GPCR signaling 46 . Here, we identified two palmitoylation sites, C14 and C18, in the N-terminus of the GNA13 protein by using both genetic and biochemical approaches. Consistent with our finding that palmitoylation of GNA13 is required for its tumor suppressor function, at least two DLBCL patients were found to bear the GNA13 C14S mutation 38,39 . Our data further support the idea that GNA13 plays a tumor suppressor role in germinal center B cells and that loss-of-function mutations of GNA13 are involved in lymphomagenesis. Venetoclax, the FDA-approved BCL2 inhibitor 47,48 , is being tested to extend its application to DLBCL 49 . Our study indicates that inactivation of GNA13 alone can lead to higher expression of BCL2 and thereby confer higher sensitivity to BCL2 inhibitors, implying that DLBCL patients with inactivating mutations of GNA13 are more sensitive to BCL2 inhibitor treatment, which provides a patient-stratification biomarker for venetoclax therapy. Inactivating a tumor suppressor may sound counterintuitive in fighting cancer, but it works as a strategy of synthetic lethality. A recent systematic drug sensitivity screen showed that loss of SETD2 sensitizes tumor cells to a CDK7 inhibitor and that BAP1 depletion confers vulnerability to a DNMT1 inhibitor 50 . Our finding that inactivating GNA13 increases the sensitivity of GCB-DLBCL cells to the BCL2 inhibitor may serve as the basis of a new therapeutic strategy for GCB-DLBCL. S-palmitoylation is catalyzed by PATs. To date, at least 23 mammalian PATs have been identified 51 . The limited substrate range of each PAT gives hope that targeting a specific PAT may be safe and effective in cancer therapies. It is important to identify which PAT is responsible for the palmitoylation of GNA13 and to develop specific PAT inhibitors. Such agents would be effective for treating GCB-DLBCL with WT GNA13 in combination with the BCL2 inhibitor. As mentioned in the Introduction, the role of GNA13 in tumorigenesis is cell-context dependent. It is not clear whether GNA13 also regulates the expression of BCL2 in tumors other than GCB-DLBCL and whether the therapeutic scenario of targeting palmitoylation of GNA13 in combination with the BCL2 inhibitor holds in other types of cancer. We will test the role of palmitoylation of GNA13 in other types of cancer in the future. Cell culture The GCB-DLBCL cell lines SU-DHL6 and OCI-LY1 were kindly provided by Dr. Chenghua Yang, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences (CAS). The GCB-DLBCL cell line SU-DHL4 was obtained from Stem Cell Bank, CAS.
All the GCB-DLBCL cell lines were cultured in RPMI 1640 (Basal Media, Shanghai, China) with 10% (v/v) FBS (Thermo, Waltham, MA, USA) in a humidified incubator at 37°C under 5% CO 2 . The HeLa cells were bought from ATCC and cultured as previously described 52 . All cell lines were authenticated via STR profiling and periodically treated with Plasmocin (Invivogen, San Diego, CA, USA) to exclude mycoplasma contamination. Cell proliferation/viability assay Cell proliferation/viability were assessed using CellTiter-Glo Luminescent Cell Viability Assay (Promega, Madison, WI, USA) as previously described 52,53 . Briefly, cells were seeded into 96-well cell plates (5000 cells/well) and supplemented with drugs at various concentrations. After 48 h of incubation, cells were lysed by CellTiter-Glo reagent and the resulting luminescence was measured using an Envision plate reader (PerkinElmer, Akron, OH, USA) after a 30-min incubation at room temperature. Membrane and cytosolic protein isolation Protein extractions from membrane and cytosolic part of cells were isolated as previously described 54,55 . Western blot analysis Western blot analysis was performed as previously described 53 . Briefly, cells were lysed in 1× sodium dodecyl sulfate (SDS) sample loading buffer, and then equal amounts of protein samples were loaded to polyacrylamide gel, transferred to nitrocellulose membrane, and then blotted with specific primary and secondary antibodies. Luminescence signals on membrane were detected with Immobilon Western HRP Substrate (Millipore, Darmstadt, Germany) and blots were imaged by the FluorChem Multiplex imaging system (ProteinSimple, San Jose, CA, USA). Drug screening Both SU-DHL4-Scr and SU-DHL4-UTR cells were seeded into 96-well plates at a density of 5000 cells/well. Individual chemicals from FDA-approved Drug Library were added into each well in duplicates by an Explorer automation workstation (Perkin Elmer) at the concentration of 2 μM. After 48 h of incubation, cell viability was measured using CellTiter-Glo (Promega) following the manufacturer's instructions. Statistical analysis GraphPad Prism 7 and the Student's t-test were used for data analysis. Statistical significance threshold was set at P = 0.05, and different levels were denoted as *, P < 0.05, **, P < 0.01, and ***, P < 0.001, respectively.
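Since the dose-response comparisons reported above (e.g., ABT-199 in SU-DHL4 vs. OCI-LY1) come down to estimating and comparing IC 50 values from viability data acquired as described in this section, a minimal curve-fitting sketch is given below. The concentration and viability numbers are placeholders, and the four-parameter logistic model is a generic choice rather than the exact fitting procedure used in the study.

```python
# Sketch only: estimating an IC50 from normalized viability measured at a
# range of drug concentrations, using a four-parameter logistic (Hill) fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0])        # µM (placeholder values)
viability = np.array([0.98, 0.90, 0.55, 0.20, 0.08])  # fraction of vehicle control

params, _ = curve_fit(
    four_pl, conc, viability,
    p0=[0.0, 1.0, 0.1, 1.0],  # initial guesses: bottom, top, IC50, Hill slope
    bounds=([0, 0.5, 1e-4, 0.1], [0.5, 1.5, 100, 5]),
    maxfev=10000,
)
print(f"Estimated IC50 ≈ {params[2]:.3f} µM")
# Fitting both cell lines in this way and comparing the resulting IC50s gives
# the kind of sensitivity contrast shown for SU-DHL4 vs. OCI-LY1.
```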
2021-01-10T14:18:06.961Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "b6fb37995fe95dc34719f0a478652deaffda5f52", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41419-020-03311-1.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cf59244fb01adb3b565cb99ea522c6a4226d6169", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211524702
pes2o/s2orc
v3-fos-license
Beneficial Impact of an Extract from the Berries of Aronia melanocarpa L. on the Oxidative-Reductive Status of the Submandibular Gland of Rats Exposed to Cadmium Oxidative stress underlies the pathomechanisms of toxic action of cadmium (Cd), including its damaging impact on the oral cavity. This study investigated whether the administration of an extract from Aronia melanocarpa L. berries (AME), characterized by their strong antioxidative potential, may have a beneficial impact on the oxidative-reductive status of the submandibular gland in an experimental model of low-level and moderate human environmental exposure to cadmium. The main markers of the antioxidative status (glutathione reductase, superoxide dismutase, catalase, reduced glutathione, total antioxidative status (TAS)), total oxidative status (TOS), oxidative stress index (OSI = TOS/TAS), and lipid peroxides, as well as cadmium concentration, were evaluated in the submandibular gland tissue of female Wistar rats who received a 0.1% aqueous AME and/or a diet containing 0, 1, and 5 mg Cd/kg for 3 and 10 months. The treatment with cadmium decreased the activities of antioxidative enzymes (29%–74%), reduced glutathione concentration (45%–52%), and TAS and increased TOS, resulting in the development of oxidative stress and enhanced concentration of lipid peroxides in the submandibular gland. The administration of AME at both levels of exposure to cadmium offered significant protection against these actions of this xenobiotic. After the 10 month exposure to the 1 and 5 mg Cd/kg diet, TAS was decreased by 77% and 83%, respectively, TOS, OSI, and lipid peroxides concentration were increased by 50% and 52%, respectively, 11.8-fold and 14.4-fold, respectively, and 2.3-fold and 4.3-fold, respectively, whereas, in the case of the extract co-administration, the values of these parameters did not differ compared to the control group. The results indicate that the consumption of aronia products under exposure to cadmium may have a beneficial impact on the oxidative-reductive status of the submandibular gland and prevent oxidative stress development and enhanced lipid peroxidation in this salivary gland. Introduction It is well known that oxidative stress underlies the pathomechanisms of the development of various diseases, including diseases of the oral cavity [1,2], as well as of the toxic action of numerous xenobiotics [3][4][5][6]. Cadmium (Cd) is one of them [3,6]. The growing industrial use of this element in the last decades has resulted in the contamination of the natural environment with this xenobiotic, and unavoidable human exposure to it during a lifetime [7][8][9]. Nowadays, this heavy metal belongs to the main environmental contaminants in economically developed countries [3,7,9]. Cadmium is a pollutant of air, water, and soil, as thus it is present in food, which is the main source of the general population's exposure to this heavy metal [9][10][11]. Furthermore, an additional source of intoxication with this xenobiotic is tobacco smoking [8,12]. Cadmium concentrations in the blood and urine of habitual smokers are markedly higher than in non-smokers [8,12,13]. The harmfulness of cadmium to the human's and animals' organism is well known and widely reported [3,9,11]. This heavy metal is characterized by strong cumulative properties, and thus its content in the body increases with the duration of exposure and may be a cause of damage to various organs and systems [3,14]. 
Recent epidemiological studies provide more and more evidence that environmental exposure to this xenobiotic, nowadays occurring in economically developed countries, creates a threat to the health of the general population, mainly including a risk of damage to the kidney, liver, cardiovascular system, and skeleton, as well as the development of cancer and the deterioration of cognitive functions such as hearing and vision [3,9,13,[15][16][17][18]. It has been revealed that numerous effects of toxic action of cadmium, including the damaging impact on the organs of the oral cavity, result from its pro-oxidative properties [5,14,[19][20][21][22][23]. This xenobiotic indirectly mediates the generation of free radicals (FR) and reactive oxygen species (ROS) by weakening the antioxidative barrier (enzymatic and non-enzymatic), induction of the activity of oxidases, and damage to the mitochondria [3,14,21,22]. Because current environmental exposure to cadmium creates a threat to the health of the general population [3,8,10,11,13,[15][16][17][18] and the lifetime human exposure to this xenobiotic will increase [9][10][11], the attention of researchers in recent years has been focused on finding effective ways to prevent the unfavourable health effects of exposure to this heavy metal. Taking into account the strong pro-oxidative properties of cadmium and the involvement of oxidative stress in the mechanisms of its damaging impact on various organs and systems [3,5,14,[19][20][21][22][23], the greatest interest among the possible protective agents has been paid to natural products characterized by strong antioxidative properties, including polyphenol-rich ones [14,[24][25][26]. One natural product possessing high antioxidative potential are the berries of Aronia melanocarpa L. (A. melanocarpa, (Michx.) Elliott, Rosaceae; black chokeberry), which are one of the richest sources of polyphenols (719-6902 mg/100 g) [24,27]. The antioxidative abilities of chokeberries result from properties of their ingredients such as proanthocyanidins, anthocyanins, flavonols, phenolic acids, and tannins, as well as vitamins and minerals [24,27]. The antioxidative potential of polyphenols is determined by their structure, the number and distribution of hydroxyl groups (-OH groups) in the aromatic ring, and the presence or absence of double bonds [27,28]. These compounds' antioxidative action consists of a direct reaction with FR and their binding through the stabilization or delocalization of unpaired electrons, reductive properties (releasing electrons or hydrogen atoms), as well as increasing the dismutation of FR into compounds of significantly lower reactivity, and catalysing the transformation of FR into neutral products. Polyphenolic compounds inhibit the activity of a number of enzymes responsible for the production of ROS, including xanthine oxidase or myeloperoxidase and increase the activity of antioxidants (e.g., fat-soluble vitamins) and improve the total antioxidative potential. Moreover, due to the presence of an -OH group on the C-ring, they chelate metallic ions, e.g., iron and copper, which serve as active inductors of ROS [22,27,29]. Owing to the rich chemical composition, chokeberry fruits and their preparations show a wide spectrum of pro-health effects and there is a lot of evidence of their effective use in the prevention of civilization diseases, including atherosclerosis, diabetes, osteoporosis, and cancer [24,27,28]. 
Taking into account the strong antioxidative properties of chokeberries [24,27,28] and pro-oxidative action of cadmium [3,14,19,20], our research team has undertaken a comprehensive study, in the experimental model of low-level and moderate environmental human exposure to this xenobiotic (1 and 5 mg Cd/kg diet, respectively), to investigate whether the administration of an extract from the berries of A. melanocarpa (AME) may protect against this heavy metal toxicity. We have previously reported that the co-administration of 0.1% AME during the treatment with cadmium decreased the accumulation of this element in the body [30] and protected from its damaging impact on the skeleton [21,31] and liver [22,23], as well as improving the body status of zinc and copper [32]. In the case of the general population's exposure to cadmium, via both diet and tobacco smoke, the oral cavity is the first place of the possible unfavourable action of this element. That is why, taking into account the findings that cadmium, due to the induction of oxidative stress, may have injurious action on the organs of the oral cavity, including the salivary glands [19,20,33], we have recognized it as necessary to estimate whether the administration of AME may also protect from this impact. With regard to the results of our research to date on the protective impact of AME towards cadmium-induced oxidative stress and its consequences [21][22][23], we have hypothesized that this extract may also protect from the pro-oxidative action of cadmium in the organs of the oral cavity, including the salivary glands. The aim of the present study was to investigate this hypothesis regarding the submandibular gland. For this purpose, the impact of the AME on the oxidative-reductive status of this salivary gland was estimated in the experimental model created by us of low-level and moderate human environmental exposure to cadmium. The main markers of the enzymatic (glutathione peroxidase (GPx), superoxide dismutase (SOD), and catalase (CAT)) and non-enzymatic (reduced glutathione (GSH)) antioxidative barrier and total antioxidative status (TAS), as well as total oxidative status (TOS), oxidative stress index (OSI), and lipid peroxides (LPO; a marker of lipid peroxidation), were evaluated in the submandibular gland tissue. Moreover, to assess the relationship between cadmium accumulation in this salivary gland and its pro-oxidative action, the concentration of this xenobiotic in the submandibular gland tissue was determined as well. Such a planned study allowed to investigate not only whether AME may protect from the pro-oxidative action of cadmium on the submandibular gland, but also to explain whether this impact is related to the extract's antioxidative properties and its influence on cadmium accumulation in this salivary gland. The influence of the polyphenols and products abundant in these compounds on the salivary glands under exposure to cadmium has not been investigated until now. Our studies are the first regarding the effect of the extract from chokeberries on the oxidative-reductive status of the salivary glands at exposure to cadmium, and the present article is the first report presenting the impact on the submandibular gland. Chemicals Cadmium chloride 2.5-hydrate (CdCl 2 × 2.5 H 2 O), sodium chloride, potassium dihydrogen phosphate, and dipotassium hydrogen phosphate were purchased from POCh (Gliwice, Poland). 
Morbital and butyl-hydroxytoluene were received from Biowet (Pulawy, Poland) and Sigma-Aldrich Gmbh (Steinheim, Germany), respectively. Acetonitrile and trace-pure concentrated (65%) nitric acid were provided by Merck (Darmstadt, Germany). The kits for the determination of SOD (Superoxide Dismutase Assay Kit) and GSH (Glutathione Assay Kit) were received from Cayman (Ann Arbor, USA), whereas GPx and LPO were assayed with the use of BIOXYTECH GPx-340 and BIOXYTECH LPO-586 kits by Percipio Biosciences (Burlingame, CA, USA). The diagnostic ImAnOx (TAS) Kit and PerOx (TOS) Kit by Immundiagostik AG (Bensheim, Germany) were used for the determination of TAS and TOS, respectively. Hydrogen peroxide (30%), used for CAT determination, was purchased from CHEMPUR (PiekaryŚląskie, Poland). Protein concentration was assayed using the BioMaxima kit (Lublin, Poland). A stock of standard solution of cadmium (1000 mg/L; Sigma-Aldrich, Buchs, Switzerland) and palladium matrix modifier (10 g/L; Merck) assigned for atomic absorption spectrometry (AAS method) were used. To check the analytical quality of cadmium determination, the Certified Reference Bovine Muscle (ERM-BB184, Geel, Belgium) was used. All the chemical reagents were characterized by a degree of purity for analysis, except for the reagents used under cadmium assay, which were assigned for trace analysis. Ultra-pure water, received from two-way water purification MAXIMA system (ELGA; Bucks, Great Britain), was used in all the measurements. The AME was administered to animals in the form of 0.1% aqueous solution prepared daily by dissolving the appropriate amount of the powdered extract in an appropriate volume of redistilled water (1 g of the extract per 1 L of redistilled water). The diets containing cadmium at concentrations of 1 and 5 mg/kg were produced by Label Food 'Morawski' (Kcynia, Poland) by the addition of an appropriate amount of CdCl 2 × 2.5 H 2 O into the components of the Labofeed H and Labofeed B diets. During the first 3 months of the experiment, the Labofeed H diet (breeding diet) was administered, and thereafter the Labofeed B diet (maintenance diet) was used. Cadmium concentration in the diets was confirmed by the manufacturer certificate and it was also quantified by us (1.09 ± 0.13 and 4.92 ± 0.53 mg Cd/kg; mean ± standard deviation -SD [30]). The concentration of this toxic element, determined by us in the standard Labofeed diets (H and B diets), reached 0.0584 ± 0.0049 mg/kg [30]. Animals The experiment was performed on 96 young (3-4 weeks old and weighing about 50 g at baseline) female Wistar rats [Crl: WI (Han)] provided by the Laboratory Animal House in Brwinów (Poland; certified breeding). Throughout the experiment, the rats were kept in stainless steel cages (four animals in each) in standard conditions (temperature 22 ± 2 • C, relative humidity 50 ±10%, 12/12 h light-dark cycle). The animals had unlimited access to drinking water and food, the consumption of which was monitored throughout the study. Experimental Design The submandibular salivary glands used in the current study were collected and secured during an experiment conducted at the Department of Toxicology at the Medical University of Bialystok (Poland), which was approved by the Local Ethics Committee for Animal Experiments in Bialystok (approval number 60/2009). The experimental model was described in detail previously [21][22][23][30][31][32]34]. 
The animals were randomized into the following six experimental groups of 16 rats each: • Control group-the rats throughout the experiment (3 or 10 months) received redistilled water (containing < 0.05 µg Cd/L [30]) and the standard Labofeed fodder; • AME group-the rats received as the only drinking fluid a 0.1% aqueous AME and the standard Labofeed fodder; • Cd 1 group-the rats throughout the experiment (3 or 10 months) received the Labofeed fodder containing 1 mg Cd/kg and redistilled water for drinking; • Cd 1 + AME group-during the whole period of maintaining on the diet containing 1 mg Cd/kg (3 or 10 months), the rats received the 0.1% aqueous AME as the only drinking fluid; • Cd 5 group-the rats received fodder containing 5 mg Cd/kg for 3 or 10 months and redistilled water for drinking; • Cd 5 + AME group-during the feeding with the diet containing 5 mg Cd/kg (3 or 10 months), the rats received the 0.1% aqueous AME as the only drinking fluid. Throughout the experiment, no differences in the consumption of drinking water and food, as well as body weight gain, were noted among the experimental groups [30]. Moreover, there were no symptoms of abnormalities in the health status of the rats [30] and differences in the mean intake of AME and cadmium at particular timepoints (expressed in calculation per kilogram of body weight (kg b.w.)) throughout the study, regardless of whether they were administered separately or together (Table 1) [30,32]. The fact that cadmium concentrations in the blood and urine (markers of exposure to this heavy metal) of the animals maintained on the diets containing 1 and 5 mg Cd/kg alone and in combination with AME (0.103-0.306 µg/L and 0.0852-0.2558 µg/g creatinine, respectively, and 0.735-1.122 µg/L and 0.2839-0.6949 µg/g creatinine, respectively) [30] were within the ranges of values nowadays detected in the general population in economically developed countries [13,17,18] confirms that the experimental model reflects current environmental exposure to this xenobiotic. chloride (physiological saline) and gently dried on filter paper. Next, they were weighed with an automatic balance (OHAUS ® , Nanikon, Switzerland; accuracy to 0.0001 g). All submandibular glands showed proper macroscopic picture and there were no differences in their weight (each gland weighted about 0.2 g) among the experimental groups. Determination of Markers of the Oxidative/Antioxidative Status In order to perform the planned assays of markers of the oxidative/antioxidative status, 20% homogenates of the submandibular gland tissue were prepared. The right submandibular gland and half of the left one (of known weight) were homogenized in cold 50 mM potassium phosphate buffer (pH = 7.4) with the use of a high-performance homogenizer (Ultra-Turrax T25; IKA, Staufen, Germany). In order to prevent autoxidation of the salivary gland tissue, 0.01 cm 3 of 0.5 M butyl-hydroxytoluene in acetonitrile was added for each cm 3 of the homogenate. The prepared homogenates were divided into two portions. One portion was centrifuged (MPW-350R centrifugator, Medical Instruments, Warsaw, Poland) at 700 × g for 20 minutes in 4 • C and the separated aliquot was used to the assay of CAT, GSH, TAS, TOS, and LPO. The second one was centrifuged at 20,000 × g for 30 minutes in 4 • C and the received aliquot was subjected to GPx and SOD assay [35]. The supernatants were stored in deep freezing (−80 • C) until the planned assays were performed. 
The activities of GPx and SOD and the concentrations of GSH and LPO were assayed with the use of commercially available kits (BIOXYTECH GPx-340 Assay, Superoxide Dismutase Assay Kit, Glutathione Assay Kit, and BIOXYTECH LPO-586 Assay). The precision of these measurements, expressed as a coefficient of variation (CV), was < 4.5%, 5%, 3.5%, and 6%, respectively. The activity of CAT was determined by the spectrophotometric method according to Aebi [36] and the CV was < 5.3%. TAS and TOS were measured using the ImAnOx (TAS) Kit and PerOx (TOS) Kit. The values of TAS determined in the control samples included in the kit reached 195.3 ± 12.3 and 306.4 ± 17.5 µmol/L (mean ± SD, n = 2) and were within the manufacturer's range of values (170-230 and 258-350 µmol/L, respectively). Similarly, the values of TOS assayed in the control samples included in the kit (145.9 ± 2.9 and 423.1 ± 6.8 µmol/L; mean ± SD, n = 2) were within the ranges of values stated by the producer (108-200 and 305-509 µmol/L, respectively). The precision of the TAS and TOS assays (CV) was < 6% and < 2%, respectively. The value of OSI was calculated mathematically as the ratio of TOS to TAS (OSI = TOS/TAS). The measured parameters of the oxidative-reductive status were adjusted for protein concentration. The assay of total protein was performed with the BioMaxima Kit (Lublin, Poland) according to the manufacturer's instructions. All the above-mentioned assays performed with the use of commercial kits were conducted according to the producers' instructions (which also describe the principles of the particular assays). The measurements were done using a UV VIS SPECORD 50 PLUS spectrophotometer (Analytik Jena, Jena, Germany) or a MULTISCAN GO microplate reader (Thermo Scientific, Vantaa, Finland). Moreover, an automatic Wellwash 4 washer for microplates (Thermo Labsystems, Helsinki, Finland) was used. Determination of Cadmium Concentration Known-weight halves of the left submandibular glands were wet-digested with trace-pure concentrated nitric acid using the UniClever II microwave system (Plazmatronika, Wroclaw, Poland) and then the wet-digests were diluted with ultra-pure water. The concentration of cadmium in these preparations was determined by the graphite furnace AAS method (GF AAS) using a HITACHI Z-5000 atomic absorption spectrophotometer (Tokyo, Japan) equipped with a graphite cuvette (Pyro cuvette A, Hitachi) and a hollow cathode lamp for this element (Photron, Narre Warren, Australia). The concentration of cadmium measured in the simultaneously analysed reference material (0.0020 ± 0.0001 µg/g; bovine muscle, ERM-BB184) was consistent with the value provided by the producer (0.0022 µg/g; uncertainty 0.0004 µg/g). The CV was < 6%. Statistical Analysis The obtained results were analysed statistically using Statistica 10 software (StatSoft; Tulsa, OK, USA). The data are presented in the figures as mean ± SE for eight rats in each experimental group. In order to assess the statistical significance of differences between the experimental groups, the nonparametric Kruskal-Wallis test was performed. In cases when statistically significant differences occurred among the six experimental groups (level of statistical significance p < 0.05), the Kruskal-Wallis post hoc test was performed to compare individual groups and determine which groups differed statistically significantly.
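As a compact illustration of the two calculations described above, the sketch below derives the oxidative stress index (OSI = TOS/TAS) per animal and applies the Kruskal-Wallis test across the six experimental groups, with simple Bonferroni-corrected pairwise comparisons standing in for the post hoc procedure. The input file, its column layout, and the group labels are hypothetical placeholders, not the workflow implemented in Statistica.

```python
# Sketch only: OSI derivation and a nonparametric group comparison.
# Assumed (hypothetical) input: submandibular_markers.csv with columns
# group, TAS, TOS (protein-adjusted values, as described in the text).
from itertools import combinations
import pandas as pd
from scipy import stats

data = pd.read_csv("submandibular_markers.csv")
data["OSI"] = data["TOS"] / data["TAS"]  # oxidative stress index

groups = ["Control", "AME", "Cd1", "Cd1+AME", "Cd5", "Cd5+AME"]
samples = [data.loc[data["group"] == g, "OSI"] for g in groups]

h, p = stats.kruskal(*samples)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

if p < 0.05:
    # Bonferroni-corrected pairwise Mann-Whitney tests as a simple stand-in
    # for the Kruskal-Wallis post hoc comparisons used in the study.
    pairs = list(combinations(range(len(groups)), 2))
    for i, j in pairs:
        _, p_pair = stats.mannwhitneyu(samples[i], samples[j])
        p_adj = min(p_pair * len(pairs), 1.0)
        if p_adj < 0.05:
            print(f"{groups[i]} vs {groups[j]}: adjusted p = {p_adj:.4f}")
```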
The possible impact of AME administration under exposure to cadmium on the values of estimated parameters was evaluated based on the statistical analysis of differences between the Cd 1 + AME group or Cd 5 + AME group and the respective group treated with cadmium alone (Cd 1 group and Cd 5 group, respectively), as well as differences between the groups co-administered with cadmium and AME (Cd 1 + AME or Cd 5 + AME groups) and the respective control group. When the Kruskal-Wallis test revealed any influence of the co-administration of cadmium and AME on the investigated parameter, a two-way analysis of variance (ANOVA/MANOVA, test F) was conducted with the aim to discern the possible independent and/or interactive impact of these agents on this parameter. F values having p < 0.05 were recognized as statistically significant. Moreover, in the case when the ANOVA/MANOVA analysis revealed an interactive effect of cadmium and AME, the possible character of the interaction was described based on the comparison of the effect of the co-administration of cadmium and AME to the sum of effects noted as a result of their separate administration. The effect of cadmium or/and AME was expressed as a percentage change or a factor of change in a measured parameter compared to the control group. Based on the obtained results it was estimated whether the interaction had an antagonistic (Cd + AME effect < Cd effect + AME effect), additive effect (Cd + AME effect = Cd effect + AME effect) or another character [23]. Spearman rank correlation analysis was carried out to estimate mutual dependences between the measured markers of the oxidative-reductive status of the submandibular gland, as well as between these parameters and cadmium concentration in this salivary gland. A correlation is considered statistically significant at a correlation coefficient (r) having p < 0.05. In the case of r having a negative value and p < 0.05, the correlation is recognized as negative, whereas, in the case of positive r value (and p < 0.05), the correlation is positive in character. Effect of AME on the Antioxidative Status of the Submandibular Gland of Rats Treated with Cadmium The administration of AME alone for up to 10 months had no impact on the measured indices of the antioxidative status of the submandibular gland (GPx, SOD, CAT, GSH, and TAS) except for an increase in the activity of SOD and the value of TAS after 10 months (Figure 1). In the rats maintained on the diets containing 1 and 5 mg Cd/kg for 3 and 10 months, the activities of antioxidative enzymes (GPx, SOD, and CAT) in the submandibular gland and TAS were decreased at each time point (by 29%-74% and 65%-89%, respectively; Figure 1). The concentration of GSH was also decreased (by 45%-52%); however, at the lower level of exposure this effect was noted only after 10 months (Figure 1). The administration of AME under exposure to the 1 and 5 mg Cd/kg diet completely prevented these xenobiotic-induced changes in the determined markers of the antioxidative status of the submandibular gland, except for the decrease in the activities of GPx and SOD due to the 3 month feeding with the diet containing 5 mg Cd/kg. Apart from these two exceptions, the values of all determined indices of the antioxidative status in the animals receiving AME during the treatment with cadmium did not differ compared to the respective control group (Figure 1). Figure 1. The effect of the extract from Aronia melanocarpa L. 
berries (AME) on the markers of the antioxidative status of the submandibular gland of female rats exposed to cadmium (Cd). (a) glutathione peroxidase (GPx) activity; (b) superoxide dismutase (SOD) activity; (c) catalase (CAT) activity; (d) reduced glutathione (GSH) concentration; (e) total antioxidative status (TAS). The animals received cadmium in the diet at concentrations of 0, 1, and 5 mg Cd/kg and/or 0.1% aqueous AME (+) or not (−). Data are presented as mean ± SE for eight rats. Statistically significant differences (Kruskal-Wallis post hoc test): a compared to the control group, b compared to the AME group, c compared to the Cd 1 group, d compared to the Cd 1 + AME group, e compared to the Cd 5 group, where * p < 0.05, † p < 0.01, and ‡ p < 0.001. Numerical values in bars or above the bars reveal the percentage changes or factors of changes in comparison to the respective control group (↓, decrease; ↑, increase;) or the appropriate group that received cadmium alone ( , increase). The ANOVA/MANOVA analysis revealed that the improvement in the antioxidative status of the submandibular gland due to the administration of AME to the rats treated with cadmium (1 and 5 mg Cd/kg diet) was the result of the independent action of the extract and/or its ingredients interaction with this heavy metal, which seemed to be antagonistic in character (Table 2). However, this analysis revealed the lack of an effect of the extract (both independent and interactive) on TAS in the Cd 1 + AME group after 3 months, in spite of the fact that the value of this parameter in this group did not differ compared to the control group, whereas in the Cd 1 group it was decreased by 65% (Figure 1). Effect of AME on the Oxidative Status of the Submandibular Gland of Rats Treated with Cadmium The administration of AME alone for up to 10 months had no influence on the estimated markers of the oxidative status of the submandibular gland (TOS, OSI, and LPO), except for a decrease in TOS after 10 months (Figure 2). Figure 2. The effect of the extract from Aronia melanocarpa L. berries (AME) on the markers of oxidative status of the submandibular gland of female rats exposed to cadmium (Cd). (a) total oxidative status (TOS); (b) oxidative stress index (OSI); (c) lipid peroxides (LPO) concentration. The animals received cadmium in the diet at concentrations of 0, 1, and 5 mg Cd/kg and/or 0.1% aqueous AME (+) or not (−). Data are presented as mean ± SE for eight rats. Statistically significant differences (Kruskal-Wallis post hoc test): a compared to the control group, b compared to the AME group, c compared to the Cd 1 group, d compared to the Cd 1 + AME group, e compared to the Cd 5 group, where * p < 0.05, † p < 0.01, and ‡ p < 0.001. Numerical values in bars or above the bars reveal the percentage changes or factors of changes in comparison to the respective control group (↓, decrease; ↑, increase;) or the appropriate group that received cadmium alone ( , decrease). The 3 month treatment with the 1 and 5 mg Cd/kg diet had no influence on TOS of the submandibular gland; however, after 10 months of the experiment, the value of this parameter was higher (by 50% and 52%, respectively) compared to the control group ( Figure 2). In all groups of the animals receiving AME under the exposure to cadmium, TOS was lower (by 31%-53%) than in the appropriate groups treated with this metal alone and did not differ compared to the respective control groups (Figure 2). 
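The interaction-character rule from the Statistical Analysis section, which compares the combined Cd + AME effect with the sum of the separate Cd and AME effects (each expressed relative to the control group), can be written out as a small worked example. The numeric group means below are placeholders, and treating the effects by their magnitudes is one plausible reading of the rule rather than the exact computation used in the study.

```python
# Sketch only: classifying a Cd x AME interaction as antagonistic, additive,
# or other, following the rule stated in the Statistical Analysis section.
def effect(treated_mean: float, control_mean: float) -> float:
    """Effect expressed as a fractional change versus the control group."""
    return (treated_mean - control_mean) / control_mean

def interaction_character(cd_plus_ame: float, cd: float, ame: float) -> str:
    combined = abs(cd_plus_ame)
    summed = abs(cd) + abs(ame)
    if combined < summed:
        return "antagonistic (Cd + AME effect < Cd effect + AME effect)"
    if abs(combined - summed) < 1e-9:
        return "additive (Cd + AME effect = Cd effect + AME effect)"
    return "other (e.g., synergistic)"

control, cd_only, ame_only, cd_ame = 100.0, 150.0, 95.0, 110.0  # placeholder means
print(interaction_character(effect(cd_ame, control),
                            effect(cd_only, control),
                            effect(ame_only, control)))
# -> antagonistic: the combined change (10%) is smaller than the sum of the
#    separate changes (50% + 5%).
```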
The exposure to the 1 and 5 mg Cd/kg diet resulted in an increase in OSI (3.4-to 14.4-fold) and the concentration of LPO (2.1-to 4.3-fold) in the submandibular gland, whereas the co-administration of AME entirely prevented these changes. The value of OSI and the concentration of LPO in all groups of rats co-administered with AME were within the ranges of values noted in the respective groups that did not receive the extract under the treatment with cadmium ( Figure 2). The two-way analysis of variance showed that the beneficial impact of the administration of AME under exposure to cadmium on the estimated markers of the oxidative status of the submandibular gland was the result of the independent impact of the extract and its interactive action with cadmium, which was antagonistic in character (Table 3). Table 3. Estimation of the main and interactive effects of cadmium (Cd) and the extract from Aronia melanocarpa L. berries (AME) on the indices of the oxidative status of the submandibular gland of female rats 1,2 . Effect of AME on Cadmium Concentration in the Submandibular Gland of Rats Treated with this Heavy Metal The administration of AME alone throughout the experiment had no effect on the concentration of cadmium in the submandibular gland of rats (Figure 3). Figure 3. The effect of the extract from Aronia melanocarpa L. berries (AME) on cadmium (Cd) concentration in the submandibular gland of female rats exposed to this xenobiotic. The animals received cadmium in the diet at the concentration of 0, 1, and 5 mg Cd/kg and/or 0.1% aqueous AME (+) or not (−). Data are presented as mean ± SE for eight rats. Statistically significant differences (Kruskal-Wallis post hoc test): a compared to the control group, b compared to the AME group, e compared to the Cd 5 group, where * p < 0.05, † p < 0.01, and ‡ p < 0.001. Numerical values in bars or above the bars reveal the percentage changes or factors of changes in comparison to the respective control group (↑, increase) or the appropriate group that received cadmium alone ( , decrease). Cadmium concentration in the submandibular gland of the rats feed with the 1 and 5 mg/kg diet in each time point was higher (by 35% and 41% after 3 months and 2.4-and 3.4-fold after 10 months, respectively) than in the control animals ( Figure 3). In the rats administered with AME under the 3 month exposure to the 1 mg Cd/kg diet, the concentration of cadmium in the submandibular gland did not differ compared to the control group. However, after 10 months of the experiment this heavy metal concentration in the Cd 1 + AME group was higher than in the control group (2.2-fold) and maintained within the range of the Cd 1 group (Figure 3). In the case of the extract administration to the rats fed with the 5 mg Cd/kg diet, cadmium concentration in this salivary gland after 3 months did not differ compared to the Cd 5 group and was 34% higher than in the control animals. After 10 months, the concentration of this toxic element in the Cd 5 + AME group was lower (by 32%) than in the Cd 5 group; however, it was higher (2.3-fold) than in the control animals ( Figure 3). The ANOVA/MANOVA analysis revealed that the impact of the co-administration of AME under the 3-month exposure to the 1 mg Cd/kg diet on cadmium concentration in the submandibular gland resulted from its independent (F = 17.64, p < 0.001) and interactive action with this xenobiotic (F = 8.393, p < 0.01). 
The clear protection against cadmium accumulation in this salivary gland due to the 10 month administration of AME at the treatment with the 5 mg Cd/kg diet was an effect of an independent action of this extract (F = 9.615, p < 0.01). Mutual Relationships Between Investigated Markers of the Oxidative/Antioxidative Status of the Submandibular Gland and Cadmium Concentration in this Salivary Gland Numerous mutual dependences were noted between the markers of the oxidative/antioxidative status of the submandibular gland (Table 4). Mutual positive correlations occurred between the particular indices of the antioxidative status (GPx, SOD, CAT, GSH, and TAS; Table 4). Similarly, mutual positive correlations were noted between the markers of oxidative status (TOS, OSI, and LPO; Table 4). Negative correlations occurred between particular indices of the antioxidative and oxidative status (Table 4). Table 4. Mutual relationships between the investigated markers of the oxidative/antioxidative status of the submandibular gland and cadmium (Cd) concentration in this salivary gland of female rats. Cadmium concentration in the submandibular gland negatively correlated with all markers of the antioxidative status of this salivary gland (GPx, CAT, GSH, and TAS), except for the activity of SOD, and positively with TOS, OSI, and the concentration of LPO (Table 4). Discussion The present study is a part of a wide research project designed to investigate the possibility of using AME in protection from the unfavourable health outcomes of low-level and moderate chronic exposure to cadmium, and it provided new relevant and promising data in this regard. The study not only revealed that the extract improved the oxidative/antioxidative status when it was administered both alone and under exposure to cadmium, but it also allowed for better understanding of the effect of this toxic metal on the salivary glands. In the available literature, some data showing that cadmium has a damaging impact on the salivary glands can be found [19,20,33,34,37,38]. Previously, we have reported the destruction of the oxidative/antioxidative balance and pathological changes in the morphological structure of the sublingual and/or submandibular salivary glands; however, these effects were investigated at higher levels of exposure to cadmium (5 and 50 mg Cd/L in drinking water) than in the present study [19,20,37,38]. The knowledge that this xenobiotic may lead to the development of oxidative stress and enhance lipid peroxidation in the submandibular gland after 3 month low-level exposure and its low concentration in this salivary gland (0.108 ± 0.002 µg/g) is an important result of this investigation. This finding suggests that submandibular gland seems to be very sensitive to the destruction of the oxidative/antioxidative balance due to exposure to cadmium. Regardless of cause, oxidative stress has numerous negative consequences at the cellular level. By inducing conformative changes in all cell components, ROS triggers a number of disruptions in the morphological structure and damage to the physiological functions of cells, tissues, and organs. These frequently include irreversible oxidative damage to proteins, peroxidation of lipids and cell membranes, and modifications of bases of the deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) [6,22,23]. 
Ultrastructural changes in the cells of the submandibular gland, such as a blurring of the structure of mitochondrial cristae, damage to the mitochondrial membranes, radiolucent lesions in the mitochondrial matrix, changes in the outline of cell nuclei, and chromatin lumping, have been reported by us in rats due to relatively high (50 mg Cd/L) chronic exposure to cadmium, but after the treatment with 5 mg Cd/L the ultrastructural picture was proper [37]. The factors of increase in the values of TOS in the submandibular gland tissue of the animals exposed to cadmium, compared to the respective controls, show that the extent of the intensity of this xenobiotic-induced oxidative stress depended on the level of exposure, and that at the low-level treatment it markedly intensified with its duration. The intoxication with cadmium first weakened the enzymatic antioxidative barrier (GPx, SOD, and CAT), but also decreased the concentration of the main non-enzymatic antioxidants, such as GSH, leading, as a result, to the decrease in TAS. Detailed analysis of changes in particular markers of the antioxidative status, together with the fact that the value of TOS in the submandibular gland at both levels of exposure to cadmium for 3 months did not differ compared to the proper value, allow for the conclusion that the primary mechanism of the pro-oxidative action of this xenobiotic on this salivary gland consists of weakening the antioxidative protection. A key role in preventing oxidative stress in living organisms is played by antioxidative enzymes such as GPx, glutathione reductase (GR), SOD, and CAT [6]. The cadmium-induced insufficiency of the antioxidative mechanisms resulted in the decrease in TOS leading to the destruction of the oxidative/antioxidative balance and the development of oxidative stress with its negative consequences, such as enhanced lipid peroxidation. Because the mechanisms of the pro-oxidative action of cadmium at the cellular level are widely reported [3,5,14,[21][22][23], they were not described in detail in this article. It is important to underline that Gonzalez et al. [39] have suggested a potential use of the determination of cadmium concentration in the saliva as a marker of exposure to this xenobiotic. These authors have reported that, in the case of inhabitants of areas polluted with cadmium, the concentration of this element in the saliva (0.25 µg/100 cm 3 ) may be higher than in the blood and urine. Abdollahi et al. [33] have noted an inhibitory effect of cadmium on the excretory function of the salivary gland in rats. Moreover, 2 hours after the intraperitoneal administration to rats of 10 mg Cd/kg b.w., the authors observed almost a 2-fold decrease in TAS with a simultaneous decrease in the concentration of total thiol groups (-SH groups) and a 3-fold increase in the concentration of substances reacting with thiobarbituric acid (a marker of lipid peroxidation) in the submandibular saliva. These changes in the oxidative-reductive status of the saliva may reflect the destruction of the oxidative/antioxidative balance in the glands secreting the saliva, whereas the inhibition of the proper excretory function of these glands may facilitate cadmium accumulation in them. The results of the current study regarding the submandibular gland, together with our more recent findings on the parotid gland [34], show that the former salivary gland, in spite of its lower ability to accumulate cadmium, is more susceptible to the pro-oxidative action of this xenobiotic. 
Cadmium concentration in the parotid gland after the 3 month treatment with the diet containing 1 mg Cd/kg reached 0.405 ± 0.029 µg/g and was not accompanied by the occurrence of oxidative stress (evaluated based on the value of OSI). Only after 10 months of low-level exposure oxidative stress in the parotid gland was noted, and it took place at markedly higher cadmium concentrations in this salivary gland tissue (0.733 ± 0.042 µg/g). Based on these findings, it can be concluded that the ability of salivary glands to accumulate cadmium and their susceptibility to damage by these xenobiotics differs depending on the kind of the salivary gland. Our results on the impact of cadmium on the submandibular gland confirm that cadmium may be dangerous for health even at a low exposure and, at the same time, it enhances the significance of the present study focused on the protective impact of AME regarding the outcomes of cadmium action and the importance of the findings of the study. Moreover, the study allowed us to evaluate the influence of aronia extract on the oxidative-reductive status, not only under the low-level and moderate repeated exposure, but also at very low exposure resulting from the trace, but unavoidable, presence of this xenobiotic in the standard diet (0.0584 ± 0.0049 mg/kg diet in our study [30]). Detailed analysis of the results of the determination of the biomarkers of the antioxidative and oxidative status has revealed that prolonged administration of the extract at a daily dose of 51.7-104.7 mg/kg b.w. had beneficial impact both when it was used alone and in the case of intoxication with cadmium. The increase in the activity of SOD and the value of TAS, with a simultaneous decrease in TOS, noted after the 10-month administration of AME alone confirm the antioxidative properties of the extract. The most important finding of the present study is revealing that the administration of AME under exposure to cadmium almost completely prevented the development of oxidative stress and enhanced lipid peroxidation in the submandibular gland. In the available literature there are few data on the impact of some polyphenolic compounds (curcumin, epigallocatechin gallate, and resveratrol) or polyphenol-rich products (green and black tea) on the oxidative-reductive status of the salivary glands and saliva [40][41][42]. Narotzki et al. [41] have suggested that drinking green tea may protect the epithelium of the oral cavity from damage due to oxidative stress. Walvekar et al. [40] have reported that curcumin administered at a daily dose of 30 mg/kg b.w. per 30 days provided effective protection against 5% D-galactose-induced oxidative stress in the submandibular gland in mice; however, its longer (45 days) use led to a decrease in the activity of antioxidative enzymes due to the accumulation of curcumin. Moreover, it has been noted that resveratrol, administered at relatively high doses, can protect from irradiation-caused damage to the salivary gland in rats [42]. Detailed analysis of the results of the present study show that the administration of AME offered protection from cadmium-induced oxidative stress irrespective of the intensity of intoxication with this xenobiotic, and that the beneficial impact of the extract was accompanied by the improvement of the enzymatic and non-enzymatic antioxidative barrier. 
Taking into account the results of the ANOVA/MANOVA analysis, the beneficial influence of AME on the submandibular gland during cadmium intoxication may be explained by the direct impact of the chokeberry extract, as well as an indirect action related to interactions between the extract ingredients and this toxic heavy metal. The direct effect may be explained by the high antioxidative properties of the ingredients of aronia berries [24,27], reflected in the report in our previous paper [23], and the high ability of the 0.1% AME to scavenge 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·) and to prevent cadmium-induced oxidative stress and its consequences in the liver and bone tissue [21][22][23]. It is important to underline that the beneficial influence of AME might be due to the presence in the extract of not only polyphenols, but also other ingredients with a known ability to counteract cadmium toxicity, such as essential microelements (iron, zinc, and selenium), vitamin C and vitamin E, fiber, pectin, β-carotene, and triterpens [14,[24][25][26]. However, it seems that both the direct and indirect effects might be due to the high abundance of polyphenolic compounds. The mechanism of the interactive protective action of AME might result mainly from the extract ability of not only polyphenols, but also pectin and fiber, to chelate divalent cadmium ions (Cd 2+ ) which in this way influences the body turnover of this element [26,30], including, as was revealed in the present study, its accumulation in the submandibular gland. We have already reported that the administration of AME to the animals maintained on the 1 and 5 mg Cd/kg diet decreased cadmium absorption from the gastrointestinal tract and increased its excretion, with urine leading, in this way, to lower the body burden of this xenobiotic, including its lower accumulation in the kidney, liver and skeleton [30]. The negative correlations between cadmium and the values of markers of antioxidative stress, and positive relationships with TOS, OSI, and LPO, show that the pro-oxidative action of cadmium on the submandibular gland intensifies with an increase in this heavy metal accumulation. Thus, the decreased concentration of this toxic metal in the submandibular gland tissue due to the administration of AME might be one of the causes of the protective impact of the extract, apart from its antioxidative properties. The fact that the administration of the chokeberry extract under the exposure to the 1 mg Cd/kg diet almost completely prevented oxidative stress and provided complete protection from lipid peroxidation in spite of only slightly decreased cadmium concentration in this salivary gland allowed us to recognize that the favourable effect of the extract at a low exposure to this heavy metal might be determined by its strong antioxidative potential. Detailed analysis of the results of the ANOVA/MANOVA analysis also shows that the impact of AME on the oxidative-reductive status of this salivary gland might be more determined by an independent action of the extract than its interaction with cadmium. Because the beneficial effect of aronia extract on the oxidative/antioxidative status of the submandibular gland has been reported by us for the first time, wider discussion of the results is impossible. However, it is important to note that, in these animals, the protective impact of the administration of AME has also been revealed regarding the parotid gland [34]. 
Although this study has important practical implications, we are also aware of its limitations. The main limitation is the fact that all our data on the protective effect of AME come from a female rat model, and thus our results refer first of all to the female salivary glands. Further research is therefore needed to explain whether similar effects also occur in males. Moreover, the strong potential of the extract to counteract the pro-oxidative action of cadmium may suggest that the consumption of aronia products will also protect the organism under conditions of exposure to other pro-oxidants. In sum, the present investigation has revealed for the first time that even low-level repeated exposure to cadmium may weaken the enzymatic and non-enzymatic antioxidative barrier of the submandibular gland, leading to the destruction of the oxidative/antioxidative balance and the development of oxidative stress and enhanced lipid peroxidation. The fact that cadmium may induce oxidative stress and lipid peroxidation in an experimental model of human exposure shows that even low intoxication with this xenobiotic may create a risk of damage to the salivary gland. However, the most important and practically useful finding of this study is that administration of AME, both alone and under exposure to cadmium, improves the oxidative/antioxidative status of this salivary gland. The extract administration under low-level and moderate exposure to cadmium almost completely prevented xenobiotic-induced oxidative stress and lipid peroxidation. The beneficial effect of AME may result from an independent antioxidative impact of its ingredients, as well as from their interactive action with cadmium, resulting in a decrease in its accumulation in this salivary gland. The findings of this study, together with our more recent results on the beneficial influence of AME on the parotid gland, show that the extract may offer protection against the unfavourable impact of cadmium on the organs of the oral cavity. Moreover, these results provide further evidence that products from the berries of A. melanocarpa may be effective in the protection of the organism under conditions of exposure to cadmium.
SD: standard deviation
SE: standard error
SOD: superoxide dismutase
–SH group: thiol group
TAS: total antioxidative status
TOS: total oxidative status
Programmable Organic Chipless RFID Tags Inkjet Printed on Paper Substrates : In this paper, an organic, fully recyclable and eco-friendly 20-bit inkjet-printed chipless RFID tag is presented. The tag operates in the near field and is implemented by means of chains of resonant elements. The characterization and manufacturing process of the tag, printed with a few layers of a commercial organic ink on conventional paper substrate (DIN A4), are presented, and tag functionality is demonstrated by reading it by means of a custom-designed reader. The tags are read by proximity (through the near field), by displacing them over a resonator-loaded transmission line, and each resonant element (bit) of the tag is interrogated by a harmonic signal tuned to the resonance frequency. The coupling between the reader line and the resonant elements of the tag produce and amplitude modulated (AM) signal containing the identification (ID) code of the tag. Introduction Paper electronics is a broad area of flexible electronics with the focus on paper and cellulose-based materials not only as a substrate, but also as functional layers for electronic devices [1][2][3][4][5][6]. Despite the high porosity and roughness of paper, its use reduces production costs and finds applications in many electronic devices, such as thin film transistors (TFT), passive electronic components, energy-storage devices, and MEMS [1,5,7,8]. TFTs are one of the most basic, and yet, most important elements of modern-day electronics [9][10][11][12]. Inkjet printing is a known deposition technology for manufacturing devices in the field of flexible and printed electronics [13][14][15][16][17]. The technology offers various advantages, such as additive printing process ability, accuracy in the micrometer range, and flexibility in terms of material processing [17][18][19][20][21]. Due to its advantages, the technology has begun to replace traditional metal subtractive etching/lithographic as well as sputtering/e-beam evaporation technologies in large-area and flexible substrates [22,23]. These technologies are expensive, complicated, need specific equipment, and impose strict process requirements. In contrast, depending on the field of application, paper-based electronics can be manufactured using photolithography, screen printing, gravure printing, flexography, or direct-writing/-printing technologies [21,[24][25][26][27][28][29]. In these mentioned printing techniques, inks generally have higher viscosity, and this property restricts them from unwanted ink diffusion into the paper fibbers [24,30]. In this study, we promote the use of inkjet printing for fabricating chipless RFID tags on paper substrates as a development of inkjet technology already used in industrial manufacturing. In the last decade, several research groups have actively contributed to the development of the active and passive devices on flexible polymeric substrates, e.g., polyethylene terephthalate, polyethylene naphthalate (PEN), and polyimide films, among others. Some of the examples are inkjet-printed antennas, TFT arrays, capacitors and logical circuits among others [31][32][33][34][35][36][37][38]. However, as regards printed chipless RFID tags manufactured on paper substrate, only a few examples have been reported [39,40]. Chipless Radio Frequency Identification (RFID) is a wireless technology used for identification (ID), tracking, sensing, and authentication/security applications [41,42]. 
In the field of authentication and security, one promising scenario for chipless RFID is secure paper. Within this particular application, equipping documents (e.g., banknotes, certificates, exams, ballots, official documents, etc.) with a planar ID code is envisaged as a means to fight against counterfeiting. The main general advantage of chipless RFID over chipped RFID systems is the absence of silicon integrated circuits, or chips, which in the case of chipless RFID tags are replaced with printed encoders. Such encoders can be fabricated by means of printing techniques, such as screen printing, rotogravure, flexography, or inkjet, and represent a low-cost solution as compared with conventional chip-based RFID tags [43][44][45][46][47][48][49][50][51][52][53][54]. However, chipless RFID tags present three main limitations: (1) data capacity, (2) tag size, and (3) shorter read ranges. These negative aspects and the fact that the materials (inks) and manufacturing processes (such as substrate functionalization and printing) necessary for tag fabrication are still not significantly cheaper than passive chipped RFID tags have limited the market penetration of chipless RFID technology. However, if only tag manufacture is considered, tag cost can be dramatically reduced by replacing ICs with encoders and the cost of mass-produced chipless RFID tags is predicated to fall below USD0.01 [43]. In our chipless RFID system described in a previous study [55] the tags are formed by chains of identical resonant elements printed or etched in one side of the substrate. These tags are read by proximity (through the near field), by displacing them over the sensitive part of the reader. The tag encoding is achieved by the presence or absence of certain resonant elements at predefined positions and is carried out by means of an interrogation harmonic signal tuned to the resonance frequency of the resonant elements. The presence or absence (resonator detuned or inoperative) of a resonant element will produce a variation of the amplitude of the interrogation signal at the output of the reader line. As a result, an amplitude modulated (AM) signal that contains the tag ID code is obtained. The tag ID code is inferred bit to bit, sequentially, by displacing the tag over the reader-a method of reading based on time division multiplexing [56]. The Tags and the Reader As previously mentioned, near-field chipless RFID systems comprise the reader and the tag. The sensitive part of the reader consists of a microstrip transmission line loaded with a square-shaped split ring resonator (SRR) in bandpass configuration [57,58], as can be seen in Figure 1. When the tag, consisting of a set of identical resonators to that of the reader but oppositely oriented, is located in close proximity to the sensitive part of the reader, coupling between the resonators arises, and the overall response is shifted down. A large excursion of the transmission coefficient at the design frequency generates a high dynamic range at that frequency, i.e., a high difference between two transmission coefficient values. In Figure 2, the unloaded reader allows the signal to be transmitted (equal to the '1' logic state); meanwhile, the reader loaded with the tag resonator blocks and reflects back the signal (equals to the '0' logic state). 
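To make the reading principle concrete, the following minimal sketch (not the authors' reader implementation) models the output envelope as a baseline voltage with a dip each time a functional resonator of the tag passes over the reader resonator, following the convention used in the experimental validation below, where a dip is read as logic '1'. The ID code is then recovered by thresholding each time slot; all signal levels and the threshold are illustrative values only.

```python
# Hedged sketch of time-division envelope decoding; values are illustrative.
import numpy as np

def decode_envelope(envelope, n_bits, threshold):
    """Split the envelope into n_bits equal time slots and call a bit '1'
    when the slot minimum falls below the threshold (a dip), '0' otherwise."""
    slots = np.array_split(np.asarray(envelope), n_bits)
    return [1 if slot.min() < threshold else 0 for slot in slots]

# Toy envelope: ~1.5 V baseline with ~0.4 V dips where functional resonators pass by.
rng = np.random.default_rng(0)
code = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
samples_per_bit = 200
env = np.concatenate([
    1.5 - (0.4 if b else 0.0) * np.hanning(samples_per_bit) for b in code
]) + 0.01 * rng.standard_normal(len(code) * samples_per_bit)

print(decode_envelope(env, n_bits=len(code), threshold=1.3))  # recovers `code`
```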
Materials and Methods

Several works report the development of chipless RFID tags by using metallic inks because of their inherently good electrical conductivity [60][61][62]. More recently, conducting polymers such as poly(3,4-ethylenedioxythiophene) polystyrene sulfonate, hereafter referred to as PEDOT:PSS, have been widely employed in capacitors, transistors and solar cells [63][64][65]. Indeed, PEDOT:PSS possesses interesting features due to its ease of processing, mixed electronic/ionic conductivity and biocompatibility. Using PEDOT:PSS is nevertheless a challenge, since organic conductors usually exhibit conductivities from 2 to 3 times lower than those of metallic inks. In order to fabricate organic chipless RFID tags, PEDOT:PSS doped with multi-walled carbon nanotubes (named Poly-Ink HC) was purchased from Poly-Ink. In this study, the rheological properties of the conductive ink are suitable for drop-on-demand (DoD) inkjet printing, given that its viscosity is between 3 and 8 mPa·s. The ink was ultrasonically stirred for 1 min and then filtered using a 0.2 µm PVDF filter, after which the tags were printed onto standard DIN A4 photocopy paper using an inkjet printer (Ceradrop CeraPrinter X-Series). The printhead (cartridge) had 16 nozzles, each capable of ejecting an ink droplet of 10 pL. As the tag dimensions require a considerable resolution, a single nozzle was selected for droplet ejection to prevent an excess of ink, which could produce a short-circuit in S1 and S2 owing to the capillarity of the cellulose. During the inkjet printing process, both the substrate and the cartridge were kept at room temperature. The geometry of the printed resonators of the tags was identical to that of the resonator of the sensitive part of the reader (see Figure 1). Note that despite the fact that the resonators of the tag were printed on DIN A4 paper (a different substrate than that of the resonator of the reader), system functionality is guaranteed. The reason is that the shift in the frequency response (when a resonant element of the tag is on top of the resonator of the reader) is mainly determined by the geometry of such elements, and this is optimum when the geometries of both elements are identical. Nevertheless, this does not mean that the frequency shift does not also depend on the substrate. It does, but a small shift, as in Figure 2, suffices to differentiate the presence or absence of functional resonant elements in the tag, which is the principle for tag reading. Prior to tag fabrication, it was necessary to ensure a proper overlap of the droplets and sufficient conductivity. Depending on the selected space between adjacent droplets, a printer can eject isolated drops or produce scalloped, uniform, or bulging lines [66]. To form a continuous and conducting film, the drops need to be properly spaced with partial overlap.
By carefully optimizing the drop spacing, it is possible to achieve higher resolutions with even edges. Figure 3a illustrates the printed line topologies across a broad range of drop spacings. As the drop spacing is increased above a certain length, the resultant line changes from a scalloped one (for drop spacings of 5-10 µm) to a continuous line (drop spacing of 20 µm) and then gradually separates into isolated drops (drop spacings over 50 µm). Therefore, the best drop spacing to achieve a uniform pattern is 20 µm, since the drop dimension is 42 µm ± 2 µm (Figure 3a). Once the drop spacing is fixed, an assessment of the conductivity of the printed patterns as a function of the number of layers should be performed. Since the substrate used in this work is based on cellulose, the ink deposited onto it tends to be absorbed through it due to capillarity. This fact may produce a decrease in the resolution compared with polymeric substrates, because the pattern not only stands on the surface, but also penetrates the cellulose bulk. Hence, this configuration leads to a reduction in the conductivity, as the printed pattern comprises both the non-conductive cellulose and the PEDOT:PSS. Therefore, we demonstrated the capability to obtain a conductive pattern on cellulose without the need to seal and passivate the porosity with a primer. Figure 3b,c show optical images of inkjet-printed squares printed with one, two, and three layers using a drop spacing of 20 µm. As can be seen, the patterns present sharp edges even with 3 layers, meaning that there is no loss of resolution. Finally, Figure 3d shows a twenty-bit tag and a zoom of one resonator printed with one, two, and three layers of PEDOT:PSS. Van der Pauw measurements were carried out (Figure 4) for one, two, and three layers on paper in order to extract the sheet resistance. The study was conducted both with and without annealing the samples, since paper is a temperature-sensitive substrate, although this step is normally used to remove the solvent. The thermal annealing treatment was carried out in the annealed samples by applying 110 °C for 45 min in a conventional oven. A noticeable effect in all the cases was the higher sheet resistance, in the range of 10³ Ω/square, compared with samples printed on polymeric substrates (15-400 Ω/square), because the penetration of the ink into the cellulose hampers the conductivity. In addition, increasing the number of layers contributed to the reduction of the resistance for both annealed and non-annealed samples. The annealing process led to a sheet resistance of 1.2-1.3 × 10³ Ω/square, whereas the sheet resistance of the non-annealed samples increased to 4.7-4.9 × 10³ Ω/square, where the absorption of the solvent was attributed to the cellulose fibers. Despite the higher sheet resistance, the removal of the annealing process brings new opportunities for using cellulose-based substrates while simultaneously producing a substantial reduction in the total cost of the tags. Thus, in our study, the RFID tags were fabricated without annealing.
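As a worked illustration of how sheet resistance values such as those quoted above can be obtained, the sketch below numerically solves the standard van der Pauw relation exp(−πR_A/R_s) + exp(−πR_B/R_s) = 1. It is a generic sketch rather than the measurement code used for Figure 4, and the resistance values in the example are invented.

```python
# Hedged sketch: extracting sheet resistance from van der Pauw resistances.
import numpy as np
from scipy.optimize import brentq

def sheet_resistance(r_a, r_b):
    """Solve exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 for R_s (ohm/square)."""
    f = lambda rs: np.exp(-np.pi * r_a / rs) + np.exp(-np.pi * r_b / rs) - 1.0
    hi = 10 * np.pi * max(r_a, r_b) / np.log(2)   # generous bracket above the root
    return brentq(f, 1e-6, hi)

# For a symmetric sample (R_A = R_B = R) this reduces to R_s = pi*R/ln(2) ~ 4.53*R.
print(sheet_resistance(1000.0, 1000.0))   # ~4532 ohm/square, i.e. in the 10^3 range
```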
Experimental Tag Validation and Results

In this section, three different 20-bit tags with all bits set to '1' were fabricated with one, two, and three printed ink layers in order to compare the effect of the resonator ink conductivity on the measured envelope signal. The tags consisted of a linear chain of resonant elements, identical to the one of the reader line. In addition, two trigger bits set to '1' were added at the beginning and at the end of the tag chain, so that the reader system knows where the data signal to be acquired starts and ends. The tags were fabricated by inkjet printing using PEDOT:PSS on ordinary paper substrates (DIN A4). The measured dielectric constant and loss tangent of the paper were εr = 3.11 and tanδ = 0.036, and the measurements were carried out by means of the Agilent 85072A resonant cavity. The measured envelope functions of the fabricated tags are depicted in Figure 5. As can be appreciated, the functional resonators (providing the logic state '1') are detected by means of dips in the envelope functions, and tag reading provides the corresponding ID code. However, as the number of printed layers decreases (representing a reduction in the conductivity), the modulation index in the envelope function decreases, and this jeopardizes the detection of the ID code by means of post-processing stages. For example, in the tag printed with one ink layer, it is difficult to obtain the ID code, especially when the tag is read face down. On the contrary, by printing two or three layers, a good modulation index is obtained for both sides of the tag, and it suffices to detect the tag code on both faces. Note that different signal offset levels were obtained as a function of the tag-reading orientation. When the tags are read face up, the envelope signal offsets are at the same level (near 1.5 V). However, when the tags are read face down, they are at a lower level and differ between the three tags. This difference is, in part, due to the ink conductivity and the air gap between the tag and the reader [57]. Another example of a 20-bit tag envelope signal is depicted in Figure 6. In this case, only two layers were printed, and it was enough to detect the ID code clearly. Note that when a bit is set to '0', only a very small variation of the envelope signal is observed, because of the absence of a certain resonant element at its predefined position (the chain resonator is detuned). With these results, the functionality of the proposed tags, implemented by means of organic inks and printing two layers on ordinary paper substrates, was demonstrated. The small difference between the face-up and face-down measured envelope functions allows the addition of a cellulose layer on top of the printed resonators in order to hide them. This approach improves the security of the printed tags because, after printing, the resonant elements are buried (sandwiched) in the paper substrate.

Tag Re-programmability and Industrial Scaling-Up

As mentioned before, the tags are linear chains of identical resonators, where each resonator provides a bit of information when it is interrogated by the harmonic signal tuned to the resonance frequency. Such tags can be programmed either by cutting the resonant elements associated with the logic state '0' (making them inoperative, as shown in Figure 7a), or erased, by short-circuiting the cut resonator through inkjet (thus adding conductive ink in order to set the corresponding bit to '1', as shown in Figure 7b). The main advantage of tag programming/erasing is the possibility of mass-fabrication of all-identical tags, thus reducing fabrication costs. Once fabricated, the tags can be programmed at a later stage, and erased and re-programmed as many times as needed. However, multiple tag programming/erasing cycles may give rise to tag degradation.
In typical applications, tags should be programmed only once in order to provide a unique ID code after being fabricated (multiple erasing and reprogramming is not expected in most applications). A late-stage codification process can be implemented by additive inkjet printing (selective short-circuits) or laser ablation (physical destruction) procedures to be incorporated into the chipless RFID tag design and into the digital workflow standards already settled in the printing industry. For this purpose, Intense Pulsed Light (IPL) post-processing of the printed structures on the millisecond time scale enables single line-scan functionalization of the required area, i.e., selective sintering of the light-absorbing structures on the full print layout width, in an R2R-compatible manufacturing process, thus bringing a whole different response to the interrogation signal. Watermarks, serial numbers, holograms, threads, encoding, specific molding, etc., are used to prevent copying and counterfeiting of numerous valuable documents, such as ID cards, banknotes, medical prescriptions, certificates, diplomas, etc., making the list of coding applications for chipless RFID paper enormous. For security paper applications, the proposed near-field chipless RFID system provides secure and low-cost encoders with unprecedented high data capacity (comparable to that of chipped tags), while maintaining recyclability. Data storage resides on the printed tags, not in any further programming step. The tag design and the packaging graphics are printed simultaneously, resulting in only a slight increase in the cost of the package or tag. From the manufacturer's side, the substrate shows no visible change and no further adaptation of the production process needs to be implemented. In addition, chipless RFID solutions are printed on the inside, or are hidden below different layers, inaccessible to intentional manipulation.

Conclusions

In this paper, we have discussed fundamental aspects of the development of fully recyclable and organic chipless RFID technology, mainly focused on the implementation and measurement of tags. These tags can be manufactured using printing processes, such as inkjet printing with organic conductive ink on conventional paper substrates. In addition, a time-domain chipless RFID approach, where the tags are read through near-field coupling (with sequential bit reading) by means of a harmonic interrogation signal, is reported.
Validation examples have been discussed and reported, and we have shown that reasonably good results can also be obtained by printing only two ink layers, reducing the manufacturing and sintering time. It has been pointed out that this novel and unconventional system is of special interest in applications involving secure paper, where tag reading by proximity may represent an added value in terms of confidence. In addition, this is a real step towards an eco-friendly world, because printing processes such as flexography or inkjet printing using organic inks can achieve a lower environmental impact, and the tag unit cost is comparable to that of an optical barcode. This demonstrates the potential of organic chipless RFID technology, which should represent a very good choice for identification applications in the years to come. The adoption of chipless RFID technology opens a new path to low-cost, fully printed chipless RFID solutions where the complexity migrates from the tag to the reader, making it possible to break the eurocent cost barrier.
The Value of Estimation of Distal Ureteral Dilatation in Primary Vesicoureteral Reflux Purpose Recently, several studies have suggested that distal ureteral dilatation is an important factor influencing the spontaneous resolution of primary vesicoureteral reflux (VUR). We evaluated the relationship between distal ureteral dilatation and the spontaneous resolution of primary VUR. Materials and Methods The medical records of 114 patients with primary VUR maintained on prophylactic antibiotics from April 1999 to August 2008 were retrospectively reviewed. The patients' mean age was 24.2 months (range, 6-108 months). There were 66 male patients and 48 female patients. The mean follow-up was 37.6 months (range, 12-102 months). We analyzed various factors including age, gender, grade of reflux, laterality, and ureteral diameter ratio (UDR; the largest ureteral diameter was divided by the distance from the L1-4 vertebral body to minimize the differences in diameter by age) to determine whether these factors influenced the spontaneous resolution of primary VUR. Results Unilateral, low-grade reflux and low UDR were significantly associated with the spontaneous resolution of reflux (p=0.048, p<0.001, and p<0.001, respectively). The multivariate analysis revealed that the spontaneous resolution rate of primary reflux was significantly higher in patients with low UDR than in patients with high UDR (p<0.001). Conclusions The degree of distal ureteral dilatation is expected to be another important factor in determining therapeutic course and predicting the spontaneous resolution of VUR. INTRODUCTION Vesicoureteral reflux (VUR) is a relatively common disease in children; the onset rate in children ranges from 0.4% to 1.8% [1]. VUR is diagnosed by voiding cystourethrography, and the severity of reflux is graded in accordance with the standards of the International Reflux Study in Children (IRSC) [2]. The reflux grade is an important diagnostic method for determining the treatment and prognosis of VUR. In lower grade VUR, relatively more patients experience spontaneous resolution. Therefore, patients who are determined to have lower grade VUR tend to be treated conservatively until the VUR resolves. However, there has been much controversy concerning the treatment guidelines for grade III and IV reflux. To this end, Smellie et al reported that pa-tients with high-grade VUR often have spontaneous resolution of VUR [3]. Furthermore, many arguments have been made as to whether the existing grades are suitable for predicting the spontaneous resolution of VUR. In addition to reflux grade, it has also been reported that age, laterality (unilateral or bilateral), and gender exert an influence on the spontaneous resolution of reflux [4]. In addition to the existing prognostic factors and reflux grades, other factors have recently been reported [5,6]. In particular, we have focused on the recent literature that VUR is influenced by the severity of distal ureteral dilatation and analyzed the degree with which distal ureteral dilatation influenced the spontaneous resolution of VUR to ascertain whether ureteral dilatation is as valuable a prognostic factor as the ones currently in existence. MATERIALS AND METHODS This study retrospectively analyzed how the degree of distal ureteral dilatation influenced the spontaneous resolution of VUR. Between April 1999 and August 2008, a total of 114 patients who were diagnosed with primary VUR and were managed with prophylactic antibiotics were included in this study. 
Patients with congenital anomalies and functional disorders of the urinary tract, such as congenitally malformed urethra, neurogenic bladder, double ureters and urethral valve, hutch diverticulum, and refluxing megaureter, were excluded from this study. The patients' mean age was 24.2 months (range, 6-108 months). Male and female patients numbered 66 and 48, respectively. The mean follow-up duration was 37.6 months (range, 12-102 months), and there were 75 and 39 cases of unilateral and bilateral VUR, respectively. There were 10, 32, 39, 20, and 13 cases of grade I, II, III, IV, and V reflux, respectively (Table 1). The degree of distal ureteral dilatation was analyzed as the longest diameter of the distal ureter between the sacroiliac joint and the ureterovesical junction on voiding cystourethrography. To correct the ureteral diameter, which may change as a child grows, the ureteral diameter ratio was calculated by dividing the maximum diameter of the distal ureter by the distance from the first lumbar vertebra to the fourth lumbar vertebra. We selected lumbar spine length to correct the ureteral diameter because Currarino et al reported that there is linear growth of the spine from birth to 16 years, and because the ribs, scapula, and other bones screen the thoracic spine on voiding cystourethrography [7]. The correlation between the ureteral diameter ratio and spontaneous resolution was analyzed. For conservative treatment, prophylactic antibiotics were used. Because it is believed that VUR is a predisposing factor for urinary tract infection, which in turn may cause permanent renal injury, voiding cystourethrography was performed on all subjects at intervals of 6 months or 1 year. Cases in which VUR was not observed on voiding cystourethrography were defined as spontaneous resolution. The correlation between each factor and spontaneous resolution was analyzed through both univariate and multivariate analysis. For statistical analysis, SAS software (version 9.1.2; SAS Institute, Cary, NC, USA) was used; the chi-squared test, p for trend, Spearman rank correlation, and a logistic regression model were performed. Cases for which the p-values were less than 0.05 were considered statistically significant.

RESULTS

In each grade (I to V), 7 cases (70%), 23 cases (71%), 21 cases (53%), 5 cases (25%), and 1 case (8%) had spontaneous resolution, respectively, demonstrating that spontaneous resolution rates were higher in the lower grades (p<0.001) (Table 2). The mean ureteral diameter ratio was 0.073 (range, 0.004-0.216), and the univariate and multivariate analyses showed that the spontaneous resolution rate significantly increased as the ureteral diameter ratio decreased (Fig. 1). Furthermore, there was a statistical correlation between the ureteral diameter ratio and VUR grade (p<0.001, rs=0.643).

DISCUSSION

It is well known that age, gender, laterality, and reflux grade have a large influence on the occurrence of spontaneous resolution in patients with primary VUR. With respect to age, the spontaneous resolution rate is higher in younger patients. Yeung et al demonstrated that, in 155 patients with primary VUR, the spontaneous resolution rate was higher in high-grade patients aged less than 1 year than in those older than 1 year [8]. Park et al, who performed conservative treatment of 48 cases of ureteral reflux for 35 months (on average), reported that reflux spontaneously resolved in 27 cases and that the resolution rate was higher in younger patients, despite higher grades of reflux [9].
It is thought that the immature muscles of the vesical trigone develop with time and that the submucosal ureter becomes longer. In this study, the spontaneous resolution rate reached 53% and 44% in patients aged less than 1 year and in those aged 1 year and above, respectively. Although the rate was higher in younger patients, the difference was not statistically significant. Many studies have suggested that the onset rate of VUR is higher in girls than in boys. Chand et al performed a study with 15,504 children and reported that the onset rate was approximately two times higher in girls less than 2 years of age than in boys of the same age [10]. According to the report by Yeom et al, in general, the onset rate of VUR was higher in girls than in boys; however, in the case of children aged less than 1 year, the rate was higher in boys [11]. In our study, the overall onset rate was higher in boys (57%) than in girls (43%). This may have resulted from the fact that children less than 1 year of age accounted for 44 boys and 21 girls in our study, which means that the onset rate may have been inflated owing to the larger number of boys. With respect to spontaneous resolution, Schwab et al reported that VUR was rapidly resolved in boys compared with girls [12], and Sjöström et al reported that the spontaneous resolution rate of high-grade VUR (at least grade IV) was higher in boys aged less than 1 year than in girls of the same age grouping [13]. In our study, however, it was higher in girls (60%) than in boys (42%), although the difference was not statistically significant. This difference might have resulted from there being more high-grade VUR in boys than in girls. With regard to the correlation between laterality and spontaneous resolution, Estrada et al reported that unilateral VUR resolved earlier than bilateral VUR [14]. Smellie et al performed a study among children with grade III or IV reflux and reported that the spontaneous resolution rate was significantly higher in unilateral VUR [3]. Tamminen-Möbius et al reported that the spontaneous resolution rate reached 12% in bilateral VUR but was 54% in unilateral VUR [15]. Similarly, in our study, the rate of spontaneous resolution was significantly higher in unilateral VUR than in bilateral VUR (56% vs. 38%, p=0.048). Zerati Filho et al researched spontaneous resolution rates in 417 children for 2.7 years (on average), and found resolution rates of 87.5%, 77.6%, 52.8%, 12.2%, and 4.3% for grade I, II, III, IV, and V reflux, respectively [16]. Arant reported that the spontaneous resolution rate reached 80% in grade I and II primary VUR without operative management [17]. Schwab et al performed a study with 214 children and showed that the spontaneous resolution rate was 13% in low-grade VUR (grade III and less) and 5% in high-grade VUR (grade IV and over) [12]. Barroso et al observed 178 children with unilateral VUR for 55 months on average and reported that there was a strong possibility that high-grade VUR could develop into bilateral VUR [18]. In our study, 69% of low-grade VUR (grade II or less) spontaneously resolved; however, for high-grade VUR (grade IV and over), the spontaneous resolution rate was just 18%, showing the inverse relationship between the spontaneous resolution rate and the reflux grade. Concerning primary VUR, there have been two dominant views. One is that the antireflux function is weakened due to the congenitally immature muscles that go from the vesical trigone to the terminal ureter. 
The other is that the ureterovesical junction, formed from the outside in during fetal life, makes the submucosal ureter shorter and causes VUR [4]. Specifically, the length and diameter of the submucosal ureter may be a large part of primary VUR. However, the reflux grade is used to evaluate the severity of primary VUR, a grading system that reflects the structure of the upper urinary system, including the renal pelvis and the renal calyx, and the subjective judgement of the person making the evaluation is reflected in grading. On the authority of such hypotheses, we calculated the ureteral diameter ratio by dividing the maximum diameter of the distal ureter by the distance from the first lumbar vertebra to the fourth one so as to correct the ureteral diameter for change with child growth. We then analyzed the relationship between the ureteral diameter ratio and spontaneous resolution. Few studies have been carried out about distal ureteral dilatation as a factor influencing the spontaneous resolution of primary VUR. Méndez et al classified patients into three groups according to how much the lower ureter was dilated on ultrasonography and analyzed the prognoses after ureteral submucosal injection [5]. They reported that distal ureteral dilatation was an important prognostic factor in treating primary reflux. Lee et al divided the maximum length of the lower ureter, as measured by preoperative voiding cystourethrography using the length of the fourth lumbar vertebra, and performed ureteral submucosal injection according to the calculated ratio and analyzed its relationship to the recurrence of VUR [6]. The results showed that the dilatation of the lower ureter was a significant prognostic factor. We analyzed the influence of distal ureteral dilatation on the spontaneous resolution of VUR, through univariate analysis based on the logistic regression model and multivariate analysis from which age, gender, laterality, and reflux grade were excluded. As a result, we determined that the ureteral diameter ratio was significantly related to the spontaneous resolution of reflux. Thus, it appears that the ureteral diameter ratio may objectively measure vesicoureteral structure and thus may be an appropriate index to predict not only the present state of VUR but also the possibility of spontaneous resolution. CONCLUSIONS We have presented the degree of distal ureteral dilatation, which has the benefit of predicting spontaneous resolution more objectively. Distal ureteral dilatation may be another prognostic factor in VUR, determining therapeutic course and predicting spontaneous resolution of VUR.
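For readers who want to reproduce the core calculation, the sketch below shows one way to compute the ureteral diameter ratio and relate it to spontaneous resolution. It is only an illustration (the study itself used SAS 9.1.2), and the file and column names are hypothetical.

```python
# Illustrative re-implementation of the two core steps: computing the ureteral
# diameter ratio (UDR) and modelling spontaneous resolution with logistic regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vur_cohort.csv")  # hypothetical file, one row per patient

# UDR = largest distal ureteral diameter / L1-L4 vertebral length (both in mm),
# which normalises the diameter for the child's size as described in the Methods.
df["UDR"] = df["max_distal_ureter_diameter_mm"] / df["L1_L4_length_mm"]

# Multivariate model of spontaneous resolution (1 = no VUR on follow-up cystourethrography),
# adjusting for the other prognostic factors considered in the study.
model = smf.logit(
    "resolved ~ UDR + age_months + C(sex) + C(bilateral) + C(reflux_grade)",
    data=df,
).fit()
print(model.summary())
```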
Decomposing Spatial Inequality in Sri Lanka: A Quantile Regression Approach This paper uses the Blinder and Oaxaca decomposition method and its recent expansion (Machado and Mata ) to examine whether well-being gaps between urban (richer regions) and rural (poorer regions) areas are the result of (i) regional/spatial differences in household characteristics or (ii) differences in location-specific returns to these characteristics. The data used in this study are from the Household Income and Expenditure Surveys for 2006/2007 and 2009/2010. The analysis suggests that the existence of barriers, such as remoteness and poor access to markets, that prevents lagging regions from being absorbed into the modern sector or growing region plays a larger role in perpetuating spatial inequality, especially for the poor, than disparities in household characteristics (endowments) between regions and sectors. inequality was also evident in South Asia which had long been characterized by relatively low and stable levels of inequality. Recent evidence shows that there was a sizeable increase in inequality in Bangladesh, India, Nepal, and Sri Lanka in the late 1980s and 1990s (World Bank 2005a. For example, inequality in consumption expenditure in Sri Lanka rose from a Gini coefficient of 0.32 in 1990 to 0.40 in 2009/2010, possibly the sharpest increase in inequality in its recent history, making its distribution more unequal than many of its East Asian neighbors, and on par with Bangladesh and Nepal. 1 Rising inequality has two components. The first is within the fast-growing modern industrial sector and region. This is to a large extent vertical inequality, driven by asset and skill differences. 2 The second is between the fast-growing modern industrial sectors and regions, on the one hand, and the traditional agricultural sectors and regions, on the other. What is striking about the latter component of inequality is that two individuals with identical productive characteristics (schooling, skills, training, experience) could face differential returns to their endowments, depending on where they live. These differentials encourage individuals in the low-returns sector or region to move or migrate into the high-returns sector, eventually equalizing returns in both sectors. Cheaper labor in the lowreturns regions may also attract capital (firms and entrepreneurs) to move to these areas. But if these differentials persist over time, it suggests that there are barriers (failures) that prevent the traditional sector or lagging region from being absorbed into the modern sector. This argument is well presented in the World Development Report 2009: Reshaping Economic Geography (World Bank 2008) which argues that higher densities, shorter distances, and lower divisions are essential for the development of an economy. Economic geography suggests that this growth will be initially unbalanced and lead to widening disparities. However, spatial transformations that reduce distance and divisions, and calibrate densities of economic growth with densities of poverty, may lead to inclusive growth and lower inequality. Hence, policies to reduce inequality need to identify the sources of spatial disparities-if there are gaps in characteristics, it is important to improve endowments of the households; if barriers exist between lagging and leading regions, these barriers need to be removed. 
Although Sri Lanka recorded moderate economic growth during the last decade, the regions that are far away from its economic capital, Colombo, tend to be significantly poorer than the areas closer to Colombo. Though a number of regional development programs have been implemented by successive governments, living standards of the people in the remote areas continue to fall significantly behind their urban counterparts. This suggests that a better understanding of the causes of inequality is essential for effective policy formulation. Programmes and policies in Sri Lanka's current development strategy may be divided into three categories. The first includes the improvement of facilities in schools, establishment of new universities in poor areas, and the spread of information technology facilities (provision of an IT center to each district secretariat), all of which pertain to improving household and community human capital endowments. The second category includes the improvement of roads (primarily rural roads), irrigation systems (small-scale tanks), and promotion of rural industries (primarily via small-, large-, and medium-scale enterprises). These could be considered as efforts to improve location-specific returns, but keep the focus at the community or regional level. The last set of programs which is expected to have an impact on reducing disparities in location-specific returns focuses primarily on transport infrastructure, such as the Colombo-Kandy expressway and the Southern Highway, which improves access to and mobility between regions, and is likely to reduce regional gaps in location-specific returns. This paper uses a standard methodology and its recent expansion to examine whether well-being gaps between urban (richer regions) and rural (poorer regions) areas are the result of (i) regional/spatial differences in household characteristics or (ii) differences in location-specific returns to these characteristics. The purpose of the paper is to examine the role of location-specific returns to households' productive characteristics, and the extent to which they contribute to the gaps in consumption across the distribution of consumption. Analytical Framework This study employs a conventional decomposition method of measuring discrimination and its quantile regression extension to analyze spatial inequality in Sri Lanka. In this section, we (1) explain the decomposition as it applies to average gaps between locations, (2) describe quantile regression, and (3) present the extension of the decomposition to quantile regression estimates. Wan (2007) discusses that inequality between urban and rural areas cannot be totally explained by the geographical division between the urban and rural areas of a particular economy as assumed by the conventional inequality decomposition techniques. Many other factors such as differences in human capital and differences in demographic factors also affect the determination of urban and rural inequalities. But the traditional inequality decomposition techniques do not include control variables for these factors. The decomposition of inequality using regression provides a neat solution for this (Wan 2007). The regression-based decomposition allows for the inclusion of control variables as well as other socioeconomic determinants of inequality rather than the geographical location (Gunatilaka and Chotikapanich 2009). 
Furthermore, regression-based decomposition analysis enables the identification and quantification of the determinants of inequality (Wan 2002), which are important and of interest to economists and policy makers. The seminal works of Blinder (1973) and Oaxaca (1973) provide the basic roots for regression-based decomposition techniques. The Blinder-Oaxaca decomposition is extensively applied in decomposition analysis, and several extensions have been developed recently (Fortin et al. 2011). The intuition behind the conventional method of measuring discrimination, developed independently by Blinder (1973) and Oaxaca (1973), is that in the absence of discrimination, the estimated effects of individuals' observed characteristics on their wages are identical for groups of individuals. Similarly, in the absence of location-specific returns, the estimated effects of a household's observed characteristics on some measure of household well-being (such as income or consumption) are identical for each location. The estimated income gap can be decomposed as follows:

ln y*_urban − ln y*_rural = (β_urban − β_rural) X*_rural + β_urban (X*_urban − X*_rural)    (9.1)

where ln y is a measure of household income or consumption, X is a vector of income-generating characteristics for the ith household, and β is a vector of coefficients. The asterisks denote means (averages). The first term on the right-hand side is the portion due to differences in coefficients (β_urban − β_rural), evaluated at the same set of average income-generating characteristics X*_rural, in this case those of the rural (poorer) region. The second term is the portion of the gap attributed to differences in average earnings-generating characteristics (X*_urban − X*_rural), weighted by the urban (richer region) returns structure. 3 If there were no location-specific effects, β_urban = β_rural, i.e., endowments in both locations yield similar returns, the first term would be zero, and any regional disparities would be completely explained by differences in the characteristics of households in the two locations, X*_urban − X*_rural. With no disparity in returns, migration would be low or zero. The Blinder-Oaxaca decomposition is based on ordinary least squares (OLS) regression, which assumes that the effect of the regressors does not vary along the conditional distribution of the dependent variable. For example, the effect of schooling on household welfare is assumed to be the same at the bottom of the welfare distribution as it is at the top. If, however, these effects do vary along the distribution of household welfare, quantile regressions, which yield models for different percentiles of the distribution, provide a parsimonious way of describing the whole distribution (Martins and Pereira 2004). The θth quantile of y_i conditional on X_i is given by

Q_θ(y_i | X_i) = X_i β_θ

where the coefficient β_θ is the slope of the quantile line, giving the effect of changes in X on the θth conditional quantile of y. As shown by Koenker and Bassett (1978), the quantile regression estimator of β_θ solves the following minimization problem:

min_β { Σ_{i: y_i ≥ X_i β} θ |y_i − X_i β| + Σ_{i: y_i < X_i β} (1 − θ) |y_i − X_i β| }

It is easily seen that for the median (θ = 0.5), the quantile regression minimizes the sum of absolute deviations. Machado and Mata (2005) combine quantile regression with a bootstrap approach and derive the following decomposition, which is analogous to the Blinder-Oaxaca decomposition in Eq. (9.1):
Q_θ(ln y_urban) − Q_θ(ln y_rural) = [Q_θ(β_urban; X_rural) − Q_θ(β_rural; X_rural)] + [Q_θ(β_urban; X_urban) − Q_θ(β_urban; X_rural)] + residual_θ

where Q_θ(β_g; X_h) denotes the θth quantile of the (counterfactual) distribution of log consumption generated by combining the estimated coefficients of group g with the covariate distribution of group h. The first term on the right-hand side is the contribution of the coefficients (returns effect), and the second term is the contribution of the covariates (covariate effect) to the difference between the θth quantile of the urban (rich region) distribution of consumption and the θth quantile of the rural (poor region) consumption distribution. The residual term comprises the simulation errors, which disappear with more simulations; the sampling errors, which disappear with more observations; and the specification error induced by estimating a linear quantile regression (Melly 2005). It is assumed that the linear quantile model is correctly specified. The Machado-Mata (2005) decomposition is interpreted similarly to the Blinder-Oaxaca decomposition. Since the decomposition can be conducted at any percentile of the consumption distribution, it reveals whether the relative importance of covariates and coefficients varies along the distribution.

Data and Variables

The data used in this study are from the Household Income and Expenditure Surveys (HIES) for 2006/2007 and 2009/2010. These national surveys, however, do not cover the Northern and Eastern Provinces of Sri Lanka. 4 The primary sampling unit is the household, and the sample size ranges from 17,037 households in 2006/2007 to 19,958 households in 2009/2010. 5 Both surveys comprise 12 monthly rounds that capture seasonal variations. The sample design allows for subgroup analysis at the province and district levels.

4 These two provinces are the two most severely affected by the armed conflict with the separatist LTTE movement. However, they were excluded from the study due to the nonavailability of comparable data.
5 The sample size of the HIES (DCS) was around 20,100 households in both survey years. The sample size of this study was reduced to the above numbers due to data cleaning.

In the urban-rural analysis, the estate sector was subsumed under the rural sector for the purpose of decomposition. These data sets have been used in poverty analysis and exhibit no major problems in terms of inconsistency and inaccuracy (World Bank 2005b, 2007a). The primary measure of well-being in this paper is real household consumption expenditure per capita. Consumption data are used in preference to income data for several reasons. Consumption is a direct measure of achieving or fulfilling basic needs and a better measure of current welfare, incorporating consumption smoothing by households within a given period of time and over the life cycle (Duclos and Araar 2006; Deaton 1997). Consumption data are more easily observable than income data. The latter are vulnerable to underreporting due to the innate features of income reporting, that is, fewer formal income receivers, seasonal and unrecorded income sources, and the diversified nature of earnings (Heltberg 2003). For these reasons, consumption is typically used in the analysis of poverty and inequality. The measure of consumption expenditure includes over 400 items of household consumption. Food consumption is reported in calendar style, for a week, while non-food consumption is reported for the past month, 6 months, or 12 months. Consumption on all items is converted to monthly consumption. Reported values are of the amount consumed, which includes goods and services purchased as well as home-produced goods and services.
Although the latter comprises a substantial proportion of household consumption, and problems of using imputed values are well known (Deaton 1997), the values used are consistent over time, unlike the problems raised by the use of different (and possibly inconsistent) values in transition countries (Benjamin et al. 2005;Ravallion 2005). The rental value of owned housing is also imputed in the data set. The household is defined as 'one or more persons living together and having common arrangements for food and other essentials of living' (Department of Census and Statistics 1987). Spatial and temporal price indices are computed using district-level nominal poverty lines published by the DCS 6 and are constructed at the district level. These are constructed for each data set, allowing spatial prices to vary (as they do) over time, and are later adjusted with regard to temporal variations. Two categorizations of spatial location are used in the decompositions derived in this paper. The first is the conventional urban and rural (the estate sector is subsumed into the rural) distinction. The second categorization classifies the economically better-off Western Province (WP), which has all the characteristics of a leading region, as the richer region, and includes all other regions in the poorer region category. While this classification ignores the variation among other regions, it is adopted because the methodology requires a binary classification. The gap in consumption between the WP and the other regions is sufficiently large to justify such a classification. Regressors used in the model (Table 9.1) include (1) factors that influence the household's earning ability, such as the number of employed members in the household, the sector of employment (whether any household members were engaged in agricultural work), the household head's age, and household human capital (the highest level of education attained by any member of the household) 7 ; (2) demographic features of the household that influence the level of consumption per capita such as the number of dependents (number of household members aged below 15 years and above 65 years); (3) other demographic features such as the household head's gender and ethnicity; and (4) location variables to control for regional variation within urban and rural sectors (only in the urban and rural specification). 7 Ministry of Education of Sri Lanka categorizes the education system in Sri Lanka as Primary: Year 1-5, Junior Secondary: Year 6-9, Senior Secondary: Year 10-11 (GCE O/L), College: Year 12-13(GCE A/L) and Tertiary: University and Vocational (http://www.moe.gov.lk/modules.php?n ame=Contentandpa=showpageandpid=7). A clear difference in consumption expenditure between urban and rural sectors as well as between the WP and the rest of Sri Lanka (OP) is indicated by the mean calculations of consumption expenditure per capita as shown in Tables 9.2 (Nguyen et al. 2006). Descriptive statistics (Tables 9.2 and 9.3) give an indication of the differences in endowments between urban and rural areas and the WP and other provinces. Urban areas and the WP have more educated households, greater ethnic diversity, as is to be expected, and a much smaller proportion of households employed in agricultural work. Table 9.3 indicates that educational endowments beyond junior secondary level have increased in the population as a whole between 2006/2007 and 2009/2010. 
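The construction of the welfare measure described above (harmonizing recall periods, adding imputed rent, deflating by district-level price indices, and converting to per-capita terms) can be sketched as follows. The column names and the exact deflation convention are illustrative assumptions, not the HIES variable definitions.

```python
import pandas as pd

def real_monthly_consumption_per_capita(df: pd.DataFrame) -> pd.Series:
    """Build real monthly consumption expenditure per capita from recall-period data.

    Assumed (hypothetical) columns:
      food_week     - food consumption reported for the past week
      nonfood_month - non-food items reported for the past month
      nonfood_6mo   - non-food items reported for the past 6 months
      nonfood_year  - non-food items reported for the past 12 months
      imputed_rent  - monthly rental value of owner-occupied housing
      hh_size       - number of household members
      price_index   - district-level spatial/temporal price deflator (base = 1)
    """
    monthly_nominal = (
        df["food_week"] * (365.25 / 7.0) / 12.0   # weekly recall -> average month
        + df["nonfood_month"]                      # already monthly
        + df["nonfood_6mo"] / 6.0                  # 6-month recall -> month
        + df["nonfood_year"] / 12.0                # annual recall -> month
        + df["imputed_rent"]                       # imputed housing services
    )
    monthly_real = monthly_nominal / df["price_index"]   # deflate to real terms
    return monthly_real / df["hh_size"]                  # per capita

# Toy example
toy = pd.DataFrame({
    "food_week": [2500.0], "nonfood_month": [6000.0], "nonfood_6mo": [3000.0],
    "nonfood_year": [12000.0], "imputed_rent": [1500.0], "hh_size": [4],
    "price_index": [1.05],
})
print(real_monthly_consumption_per_capita(toy))
```

The log of this series is presumably the LREPC (log real expenditure per capita) variable referred to in the decomposition discussion below.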
The percentage of those employed in the agricultural sector in rural and urban areas continued to decline over the period, while changes (decline) in the number of dependents and household size were marginal. The age and gender of household heads did not change significantly over the period. Discussion of Results and Conclusions In this section, we first present the OLS and quantile regression results on which the decompositions are based. We then present the results of decomposing the consumption gap between urban and rural Sri Lanka, and between the WP and other provinces in two segments: the consumption gap (difference) due to the differences in household endowments (characteristics) and the difference due to location (area of residence). 8 These decompositions are derived by applying the methodology described in Sect. 9.2. Regression Results A detailed discussion of OLS and quantile regression estimates on which the decompositions were based is presented in Kumara (2009Kumara ( , 2012. 9 A brief summary is given here (and in table form in appendix Tables 9.5, 9.6, 9.7, and 9.8). OLS and QR estimates suggest that household consumption increases monotonically with the level of education. An additional finding of the QR analysis is that the impact of education on consumption is significantly higher in the upper consumption quantiles for all education levels but the junior secondary level. Impact of education on consumption expenditure is relatively high for the lower consumption groups. An additional employed member in the family increases the household consumption level to a greater extent in the urban sector (WP) than in the rural (OP) sector, while engagement in agricultural work reduces household consumption irrespective of the area of living. This is consistent with previous findings (World Bank 2004Bank , 2005b. 10 A male-headed household consumes more compared to a femaleheaded household, and the result is stronger in the urban sector and in the WP. The negative relationship between the number of household members engaged in agricultural work and household consumption persists in the QR analysis, and the impact is higher in the upper expenditure quantiles. QR analysis indicates that additional employment in the family generates more positive impacts on consumption in the upper expenditure quantiles, whereas the effect is weaker in the lower quantiles. Agricultural employment reduces consumption of upper quantile households more than the lower quantiles irrespective of the area of living. This finding holds for both the survey years. The consumption advantage of male-headed households becomes weaker in the upper expenditure quantiles and in the urban sector in both the periods. 11 This suggests that the gender of the household head is irrelevant for the upper consumption groups and for the urban sector. Most formal jobs that pay equal wages for both males and females are concentrated in urban areas, whereas most informal jobs that discriminate against females are concentrated in rural areas. This may explain the equal benefits in urban areas and the differences in rural areas. An additional member in the household generates negative results for all the quantiles, and the impact is significantly higher in upper expenditure groups and urban areas. As suggested by the OLS estimation, Muslim and other ethnic groups have a higher consumption expenditure compared to their Sinhala counterparts in all the consumption quantiles. 
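The OLS and quantile regressions summarized in this subsection can be fit along the following lines with statsmodels. The formula is a stand-in for the full specification reported in appendix Tables 9.5-9.8, with hypothetical column names.

```python
import statsmodels.formula.api as smf

def fit_welfare_regressions(df, quantiles=(0.05, 0.25, 0.50, 0.75, 0.95)):
    """Fit OLS and a set of conditional-quantile models for log consumption per capita.

    `df` is assumed to hold (hypothetical) columns: lnrepc, educ_level, n_employed,
    agri_work, n_dependents, hh_size, head_age, head_male, ethnicity.
    """
    formula = ("lnrepc ~ C(educ_level) + n_employed + agri_work + n_dependents"
               " + hh_size + head_age + head_male + C(ethnicity)")
    ols_fit = smf.ols(formula, data=df).fit()
    qr_model = smf.quantreg(formula, data=df)
    qr_fits = {q: qr_model.fit(q=q) for q in quantiles}
    return ols_fit, qr_fits
```

Comparing, say, the coefficient on senior secondary education across qr_fits[0.05] through qr_fits[0.95] is what reveals whether returns to an endowment vary along the welfare distribution.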
Furthermore, the percentage increase is higher in the upper consumption groups. Analogous to the OLS results, QR estimates also show that there is a negative correlation between the household consumption expenditure and households headed by persons of Tamil ethnic origin, as compared to households headed by Sinhala counterparts. Inequality Decomposition In this section, mean regression (Oaxaca 1973;Blinder 1973) and quantile regression (Machado and Mata 2005) methods are applied in decomposing consumption inequalities in Sri Lanka into two components: (1) a component that is due to the differences in the distribution of household endowments (covariate effect) between urban and rural sectors in Sri Lanka and (2) another component that is due to the location-specific returns (returns effect) to these covariates. The mean regression decomposition method considers the means of two distributions. Adding value to the analysis, quantile regression decomposes the gap according to the differences at each quantile. In decomposing urban-rural (Western-other province) inequality, it compares rural household LREPC with a simulated (counterfactual) LREPC derived from rural characteristics (endowments) and urban (coefficients) returns. This estimates the difference in consumption between an urban household and a rural household that are identically endowed, where the only difference between them is the location. In other words, if the level of average urban (WP) household endowments was suddenly replaced by the level of endowments of average rural (other provinces) households, how large would the spatial consumption gap (i.e., the returns effect) be? If rural (OP) households were to move instantaneously (and without cost) to urban (WP) areas, what would the consumption gap between them and identical households in rural 9 Decomposing Spatial Inequality in Sri Lanka … (OP) areas be? This component captures location-specific effects on consumption. With markets working perfectly, and no barriers to mobility (and a well-specified model), this gap would in the long term be zero. The entirety of the consumption gap would then be due to differences in characteristics between urban and rural (Western and other) households, captured by the covariates effect. Both mean regression and quantile regression decomposition results are presented in Table 9.4 and Fig. 9.2. The second column of the table represents the decomposition results based on the mean regression analysis, and the 3rd to 7th columns represent the 5th, 25th, 50th, 75th, and 95th quantiles of the quantile regression analysis. The first panel illustrates the total consumption gap between the urban and rural sectors in Sri Lanka, and the second panel shows the decomposition results. Mean regression analysis suggests that consumption inequality in Sri Lanka slightly decreased during the study period (second column in Table 9.4). This may be due to the recent regional development projects carried in Sri Lanka, discussed in Sect. 9.1. 12 The urban-rural and WP-OP gaps that remain after controlling for household characteristics (evaluating urban consumption using rural household endowments) are positive. This signifies that even after adjusting urban consumption for rural endowment (characteristics), urban consumption is higher than rural consumption. This component is defined as the location-specific returns to household endowments. 
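Before turning to the detailed results in Table 9.4, it may help to restate the decompositions being applied; the displays of Sect. 9.2 do not survive in this extraction, but from the surrounding description and the cited sources (Blinder 1973; Oaxaca 1973; Koenker and Basset 1978; Machado and Mata 2005) they presumably read:

$$\ln y^{*}_{\mathrm{urban}}-\ln y^{*}_{\mathrm{rural}} \;=\; X^{*}_{\mathrm{rural}}\big(\beta_{\mathrm{urban}}-\beta_{\mathrm{rural}}\big) \;+\; \beta_{\mathrm{urban}}\big(X^{*}_{\mathrm{urban}}-X^{*}_{\mathrm{rural}}\big) \qquad (9.1)$$

$$Q_{\theta}(\ln y_i \mid X_i)=X_i\beta_{\theta},\qquad \hat\beta_{\theta}=\arg\min_{\beta}\Big[\sum_{i:\,\ln y_i\ge X_i\beta}\theta\,\big|\ln y_i-X_i\beta\big|+\sum_{i:\,\ln y_i< X_i\beta}(1-\theta)\,\big|\ln y_i-X_i\beta\big|\Big],$$

and, writing $Q^{*}_{\theta}$ for the $\theta$th quantile of the counterfactual distribution built from rural characteristics and urban returns,

$$Q_{\theta}(\ln y_{\mathrm{urban}})-Q_{\theta}(\ln y_{\mathrm{rural}})=\big[Q^{*}_{\theta}-Q_{\theta}(\ln y_{\mathrm{rural}})\big]+\big[Q_{\theta}(\ln y_{\mathrm{urban}})-Q^{*}_{\theta}\big]+\text{residual},$$

where the first bracket is the returns (coefficients) effect and the second the covariate effect, matching the interpretation given in Sect. 9.2.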
The major portion of the mean urban-rural consumption gap in Sri Lanka is explained by the returns effect, and its dominance is the same in both the surveys. 12 A new sea port, an airport, a film village, and a cricket ground in Hambantota; a new expressway between Galle and Colombo; Maga Neguma (Road improvement) and Divi Neguma (life improvement) in rural provinces; and special development projects for the war-affected North and Eastern Provinces are some of the regional development programs carried out recently. The dominance of the returns effect is observed in the quantile regression decomposition for all quantiles. For any quantile, for both survey years, the urbanrural (WO) consumption gap was recorded as positive, implying that the urbanrural (WO) gap favors the urban sector (WP) even after evaluating urban (WP) consumption with rural (OP) characteristics. This suggests that even though the consumption inequalities declined between the two survey years in Sri Lanka, the returns effect is still dominant. Three major policy conclusions can be drawn from the quantile regression decomposition analysis. First, the urban-rural (total or unadjusted) consumption gap is smaller in the lower consumption quantiles and significantly higher in the upper quantiles. The difference in consumption expenditure gaps between the 95th percentile and 5th percentile is around 30 % in both the survey periods. This implies that the urban (WP) rich are much better off than the rural (other province) rich in terms of consumption expenditure. Second, the returns effect dominates throughout the distribution of consumption expenditure. The unexplained percentage of consumption gap is always more than 70 % of the total gap for all quantiles. Third, adding to the second conclusion, the returns effect dominates throughout the expenditure distribution, but tends to decline as it moves toward the upper consumption quantiles. This means that location-specific effects account for more than 80 % of the urban-rural total gap at the 5th quantile compared to less than 70 % of the urban-rural total gap in the 95th quantile. On the other hand, the importance of the covariate effect in explaining the urban-rural gap increases at the upper end of the expenditure quantiles. These findings are different from those of Nguyen et al. (2006) using the same analytical method for Viet Nam. They found that the covariate effect dominates in the lower quantiles, whereas the returns effect dominates in the upper quantiles. A temporal analysis of the urban-rural inequality decomposition based on quantile regression analysis also finds that urban-rural inequality decreased over the study period, and inequality between the lower end and the upper end of the consumption distribution reduced relatively in 2009/2010. Furthermore, the dominance of the returns effect also declined in urban-rural analysis, while there was not much change in the WP and other process analysis. The dominance of the returns effect in the lower quantiles of the consumption expenditure distribution suggests that returns to household endowments matter more than household characteristics to poor people in Sri Lanka. Sri Lanka is recognized as a country that has achieved extraordinary success in health and education indicators despite a lower level of income per capita (World Bank 2005b). Sri Lanka also records a relatively high rank in the UNDP 'Human Development Index.' 
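The counterfactual construction and the quantile-by-quantile split underlying these results can be sketched as follows. It follows the logic described in Sect. 9.2 (rural endowments combined with urban returns), with a deliberately small number of draws and no bootstrap layer, so it is an illustration rather than a reproduction of the published estimates.

```python
import numpy as np
import statsmodels.api as sm

def machado_mata_counterfactual(y_urban, X_urban, X_rural, n_draws=500, seed=0):
    """Simulate consumption for households with rural characteristics but urban returns.

    X matrices are assumed to include a constant column; n_draws is kept small here
    because each draw fits one quantile regression on the urban sample.
    """
    rng = np.random.default_rng(seed)
    out = np.empty(n_draws)
    for k in range(n_draws):
        theta = rng.uniform(0.02, 0.98)                          # random quantile
        beta_theta = sm.QuantReg(y_urban, X_urban).fit(q=theta).params
        out[k] = X_rural[rng.integers(len(X_rural))] @ beta_theta
    return out

def decompose_gap(y_urban, y_rural, y_counterfactual,
                  thetas=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """Split the urban-rural gap at each quantile into returns and covariate effects."""
    rows = []
    for t in thetas:
        q_u, q_r, q_c = (np.quantile(v, t) for v in (y_urban, y_rural, y_counterfactual))
        total = q_u - q_r
        returns_effect = q_c - q_r        # coefficients, evaluated at rural endowments
        covariate_effect = q_u - q_c      # endowments, evaluated at urban returns
        rows.append((t, total, returns_effect, covariate_effect, returns_effect / total))
    return rows
```

In practice the draws are combined with bootstrapping over households to obtain standard errors, as in Machado and Mata (2005).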
Gender seems to matter less in determining household characteristics (e.g., education and health) in Sri Lanka compared to many other developing countries. This suggests that the distribution of household characteristics is relatively better in Sri Lanka than in Viet Nam. The lower rewards to rural households can be linked to poor rural markets. Most formal employment is concentrated in urban areas, and rural areas are separated from the urban centers by a poor road network, leading to poorer market access for the rural population. The communication barriers between urban (WP) and rural (OP) areas may also contribute to lower rewards to characteristics in the rural sector. In addition, if characteristics for which controls were not included due to lack of data, such as the quality of education, are correlated with location (for instance, richer areas have better-quality education), then these would add to the location-specific effects. The foregoing analysis suggests that the existence of barriers such as remoteness and poor access to markets prevents lagging regions from being absorbed into the modern sector or growing region. These barriers play a larger role in perpetuating spatial inequality, especially for the poor, than do disparities in household endowments between regions and sectors. Policies indicated for a further reduction of the urban-rural gap and of the higher poverty levels in rural areas include (1) connecting lagging regions to markets in the growing regions, i.e., improving roads and transportation, electricity and communication infrastructure, and the investment climate in rural areas; (2) improving the quality of schooling via better training and resources, especially in remote areas; and (3) removing barriers to labor mobility such as regulations in labor markets and land markets. Appendix: see Tables 9.5, 9.6, 9.7, and 9.8 for the underlying OLS and quantile regression estimates.
2019-05-16T13:05:43.135Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "067b21260f56a0d1e2efdd7cd9383b54edf239a0", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/978-981-287-420-7_9.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "0ea15ff3b672883317929e5f16f8dc33062456b9", "s2fieldsofstudy": [ "Economics", "Geography", "Sociology" ], "extfieldsofstudy": [ "Economics" ] }
243986358
pes2o/s2orc
v3-fos-license
Weight-bearing status may influence rates of radiographic healing following reamed, intramedullary fixation of diaphyseal femur fractures Abstract Objective: To investigate the effect of weight-bearing status on radiographic healing of diaphyseal femur fractures. Design: Retrospective 1:1 matched cohort study. Setting: Single-level 1 trauma center. Participants: One-hundred forty-four (N = 154) patients matched 1:1 in non-weight bearing (NWB) and weight-bearing as tolerated (WBAT) groups. Intervention: Non-weight bearing following reamed, statically locked intramedullary fixation of diaphyseal femur fracture, generally due to concurrent lower extremity fracture. Main Outcome Measurement: Postoperative radiographic healing using modified Radiographic Union Scale for Tibia fractures (mRUST) scores. Results: Groups were well matched on age, sex, race, prevalence of tobacco and alcohol use, diabetes mellitus status, Injury Severity Score, fracture pattern and shaft location, vascular injury, open fracture prevalence, and operative characteristics. Radiographic follow-up was similar between groups (231 vs 228 days, P = .914). At 6 to 8 weeks status post intramedullary fixation, the median mRUST score in the NWB group (9) was lower than that of the WBAT group (10) (mean: 8.4 vs 9.7, P = .004). At 12 to 16 weeks, the median mRUST in the NWB group (10) was again lower than the WBAT group (12) (mean: 9.9 vs 11.7, P = .003). The median number of days to 3 cortices of bridging callous was 85 in the WBAT group, compared with 122 in the NWB group (P = .029). Median time to mRUST scores of 12 (111 vs 162 days, P = .008), 13 (218 vs 278 days, P = .023), and 14 (255 vs 320 days, P = .028) were all longer in the NWB group compared with the WBAT group. Conclusions: Non-weight bearing after intramedullary fixation of diaphyseal femur fractures delays radiographic healing, with median time to 3 cortices of bridging callous increased from 85 days in WBAT groups to 122 days in NWB groups. These results provide clinicians with an understanding of the expected postoperative course, as well as further support the need to expeditiously advance weight-bearing status. Level of Evidence: IV Introduction Femoral diaphyseal fractures managed with intramedullary fixation heal reliably; union rates approach 85% to 100%. [1][2][3][4][5][6][7] While uncommon, non-unions do occur, with prior work suggesting that open injury, increased preoperative morbidity, and tobacco use may increase this risk. [8][9][10][11][12][13] Interestingly, delayed weight bearing has also been cited as a potential risk factor for non-union. [14] While biomechanical and clinical studies have shown that early weight bearing is safe following appropriately sized, reamed, statically locked intramedullary nailing, early postoperative mobilization may be delayed in certain cases secondary to concomitant injury or surgeon preference. [15,16] Therefore, identifying differences in radiographic healing rates based on weight-bearing status would provide clinicians with an improved understanding of the expected postoperative course, as well as further support the need to expeditiously advance weight-bearing status. Healing of diaphyseal femur fractures treated with intramedullary fixation has previously been analyzed as a dichoto-mous variable (e.g., union/non-union), with limited consideration for graded approaches, and with qualitative measures used to define non-union. 
[12] Traditionally, this has been related to the lack of consensus regarding the assessment of union amongst orthopaedic surgeons. [17,18] Recently, the Radiographic Union Scale for Tibia, and it's modified version (mRUST), have offered clinicians a reliable, validated tool to quantitatively assess radiographic healing of long bone fractures. [19][20][21] While initially intended for the tibia, this scale has also been used to assess healing in femur fractures. This allows for the evaluation of large numbers of femoral shaft fractures without limiting analysis to the uncommon event of non-union. [22,23] Therefore, the goal of this study was to use mRUST scores to investigate the effect of weight-bearing status on radiographic healing of diaphyseal femur fractures managed with intramedullary fixation. Design and setting This retrospective case-control series was performed at a singlelevel 1 trauma center in the Midwest region of the United States. An institutional database was established to identify femoral shaft fractures (OTA/AO type 32 injuries) managed from January 1, 2010 to December 31, 2018. [24] Institutional review board approval was obtained for the study. Patient selection Six-hundred ninety-five (N = 695) skeletally mature patients with OTA/AO type 32 injuries were identified. Demographic data, baseline health metrics, injury characteristics, and operative specifics were recorded (Table 1). Postoperatively, the variables of interest included weight-bearing status and radiographic follow-up. The number of days from surgery until a patient was advanced to weight-bearing as tolerated (WBAT) was also noted, as were the number of days from surgery to each postoperative femur radiograph. For each radiograph, the mRUST score was calculated. Briefly, the mRUST score measures the radiographic healing of a femur fracture on a scale of 4 to 16. Each cortex is graded: 1 = no callus, 2 = callus present, 3 = bridging callus present, 4 = remodeled; the sum of these values gives the mRUST score. [20,21,22,25,26] An mRUST score of 11 generally corresponds to 3 cortices of bridging callous. Three reviewers (CDF, JC, NMJ) performed mRUST ratings; ICC values have previously been reported (ICC = .74) and were not recalculated for this investigation. [27] Patients were eligible for study inclusion if: the injury was managed with intramedullary fixation (retrograde or antegrade); postoperative weight-bearing status was non-weight bearing (NWB) or WBAT; patients in the NWB group were assigned this status for at least 6 weeks; a postoperative radiograph from 6 to 8 weeks after injury was available for review. To isolate the effect of weight-bearing, patients were excluded for the following indications: severe traumatic brain injury; high-spinal cord injury; initial external fixator temporization; dual construct fixation (e.g., plate and nail fixation); initial presentation was of peri-implant fractures; bone loss requiring advanced reconstruction (e.g., bone transport). For implant selection, all surgeons performed reamed, statically locked fixation utilizing the Synthes Retrograde/Antegrade Femoral Nail (Depuy Synthes Companies, Warsaw, Indiana). Eligible patients in NWB and WBAT groups were then matched 1:1 to control for baseline demographic, health, injury, and operative differences. 
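The mRUST scoring rule described above (four cortices, each graded 1 to 4 and summed to a score of 4 to 16, with 11 roughly corresponding to 3 bridging cortices) can be captured in a small helper. This is an illustration of the rule as stated, not the study's actual data pipeline.

```python
def mrust_score(anterior: int, posterior: int, medial: int, lateral: int) -> int:
    """Sum per-cortex grades (1 = no callus, 2 = callus, 3 = bridging callus, 4 = remodeled)."""
    grades = (anterior, posterior, medial, lateral)
    if any(g not in (1, 2, 3, 4) for g in grades):
        raise ValueError("each cortex grade must be 1, 2, 3 or 4")
    return sum(grades)  # ranges from 4 to 16

def bridging_cortices(*grades: int) -> int:
    """Number of cortices with at least bridging callus (grade >= 3)."""
    return sum(g >= 3 for g in grades)

# Example: three bridging cortices and one with non-bridging callus
print(mrust_score(3, 3, 3, 2), bridging_cortices(3, 3, 3, 2))  # -> 11 3
```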
Patients were matched on the following variables: age, sex, race, tobacco use prevalence, alcohol use prevalence, body mass index, diabetes mellitus status, ASA score, mechanism of injury, AO fracture classification, vascular injury presence, Injury Severity Score (ISS) score, location of fracture in the femoral shaft (proximal, middle, distal), presence of open fracture, time to OR, and retrograde or antegrade start point. The number of patients in each group was N = 77. Analysis Statistical analysis was completed with Prism 7.0a software (GraphPad Software Inc, La Jolla, California) and MatLab R2016b software (Mathworks, Natick, Massachusetts). Power analysis While the utility of a power analysis in a retrospective study is open to debate, we had an interest in powering the study to ensure the effect of weight bearing was captured. An a priori power analysis was performed to calculate sample size. For mRUST scores at 6 to 8 week follow-up, setting b=0.80, a=0.05, and assuming a 1-point difference in mRUST scores between groups with a standard-deviation of 2 units, sample size was determined to be 63 patients per group. As this was a novel investigation, a 25% increase to this number was applied to account for possible errors in the power analysis estimation. Cohort matching and outcomes of interest Outcomes of interest included: differences in mRUST scores between NWB and WBAT groups at 6 to 8 weeks and 12 to 16 weeks following injury; time to mRUST scores of 11, 12, 13, and 14. To evaluate the quality of the matching process, as well as to analyze these outcomes, Student t test with Welch correction, [20,21,22,25,26] An mRUST score of 11 generally corresponds to 3 cortices of bridging callous. Time-to-event analysis was performed using the Kaplan-Meier estimator. Log-rank (Mantel-Cox) test was used to identify differences between time-to-event curves. Patient characteristics and case-control matching The number of patients in each group was N = 77. Overall, patients in this series were majority male (68.2%) and Caucasian (62.3%) with mean age 32.7 years. Tobacco and alcohol use were prevalent (51.3%, 42.9%, respectively). Mean body mass index was in the overweight category (28.0), with a low rate of diabetes mellitus (4.5%). [28] Patients most commonly sustained injury in a motor vehicle or motor cycle collision (70.8%), resulting in ISS of 16.4. Right and left-sided injuries were equally represented. A majority of fractures were middle 1/3 diaphyseal injuries (51.3%), with the majority being simple patterns (50.0%). Open fractures, including ballistic injuries, occurred in 24.0% of cases; vascular injury was rare (2.6%). Most patients underwent operative fixation within 24 hours of injury (94.8%), with a mean ASA score of 2.2. Intramedullary fixation most commonly was performed through an antegrade start point (83.8%), with an 11 mm diameter nail and 3 or 4 total interlocking screws (Tables 1-3). In the NWB group, the most common reason for this weightbearing status was an ipsilateral lower extremity or acetabular/ pelvic ring injury (77.9%); however, in 22.1% of cases, the indication for NWB was not attributable to a concomitant injury. Patients in the NWB group had this weight-bearing status for mean 75 days (standard deviation 25 days). Patients in the NWB and WBAT groups were well matched. 
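The a priori sample-size calculation reported above (alpha = 0.05, power = 0.80, a 1-point mRUST difference with a standard deviation of 2 units, i.e., a standardized effect size of 0.5) can be checked with statsmodels; small rounding differences from the authors' software are to be expected.

```python
import math
from statsmodels.stats.power import TTestIndPower

# 1-point difference in mRUST with SD of 2 units -> standardized effect size 0.5
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
)
print(math.ceil(n_per_group))          # ~64, in line with the reported 63 per group
print(math.ceil(1.25 * n_per_group))   # ~80 after the stated 25% inflation; the study matched 77 per group
```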
Specifically, there were no differences in patient age, prevalence of tobacco or alcohol use, prevalence of diabetes mellitus, fracture pattern or shaft location, vascular injury, open fracture, or delay to surgery. There was also no difference in overall injury severity as measured by ISS or ASA score. Operative fixation characteristics were also similar between groups (Tables 1-3). Radiographic healing by weight-bearing group Mean radiographic follow-up was approximately 7.7 months (231 vs 228 days, P = .914). Radiographs 6 to 8 weeks status postintramedullary fixation were available to review for all patients. The mean number of days from surgery to radiograph was similar between groups (52.7 vs 50.9, P = .27). The median mRUST score in the NWB group (9) was lower than that of the WBAT group (10) (mean: 8.4 vs 9.7, P = .004). While most patients had multiple additional radiographs after the 6 to 8 week window, the time points for these radiographs were less standardized. For example, a 12 to 16 week postoperative radiograph was available for review in 51.3% of patients (NWB: 50.6%, WBAT: 51.9%). As before, the mean number of days from surgery to radiograph was similar between groups (93.5 vs 88.0 days, P = .202). The median mRUST in the NWB group (10) was again lower than that of the WBAT group (12) (mean: 9.9 vs 11.7, P = .003). Two patients in the WBAT group and 3 patients in the NWB group went on to have revision operations for fracture nonunion. Mean time to revision in the WBAT group was 385 days, compared with 372 days in the NWB group (Fig. 1). Time to mRUST scores 11 through 14 The number of patients who achieved radiographic follow-up to an mRUST score of 11 was N = 60 in the NWB group (77.9%) and N = 63 in the WBAT group (81.8%). In the WBAT group, median time to mRUST of 11 was 85 days, compared with 122 days in the NWB group (P = .029) (Fig. 2A). The number of patients who achieved radiographic follow-up to an mRUST score of 12 was N = 50 in the NWB group (64.9%) and N = 53 in the WBAT group (68.9%). In the WBAT group, median time to mRUST of 12 was 111 days, compared with 162 days in the NWB group (P = .008) (Fig. 2B). The number of patients who achieved radiographic follow-up to an mRUST score of 13 was N = 30 in the NWB group (39.0%) and N = 40 in the WBAT group (51.9%). In the WBAT group, median time to mRUST of 13 was 218 days, compared with 278 days in the NWB group (P = .023) (Fig. 2C). The number of patients who achieved radiographic follow-up to an mRUST score of 14 was N = 29 in the NWB group (37.7%) and N = 38 in the WBAT group (49.4%). In the WBAT group, median time to mRUST of 14 was 255 days, compared with 320 days in the NWB group (P = .028) (Fig. 2D). Discussion Femoral shaft fractures managed with reamed, statically locked fixation heal reliably; however, the rates of healing depend on patient, injury, and operative factors. [29,30] Recently, the mRUST tool has allowed for the incremental assessment of radiographic healing. Utilizing this tool, the results of this study suggest that non-weight-bearing status following intramedullary fixation of diaphyseal femur fractures slows the rate of radiographic healing compared with weight-bearing as tolerated counterparts. Appropriate cohort matching was imperative to investigate this research question. Patients in the NWB and WBAT groups were well matched on demographic, health, injury, and operative characteristics. However, several potential confounders require discussion. 
First, no objective assessment of adherence to weightbearing restrictions occurred in this study. However, prior research suggests a patient compliance rate to NWB restrictions of approximately 72.5%. [31] Therefore, while not specifically assessed, an assumption that a majority of patients adhered to their WB status appears valid. Additionally, the question of fracture energy, and the corresponding degree of periosteal stripping, requires evaluation, as this variable can influence healing rates. While NWB patients generally sustained concomitant fractures, this alone does not indicate a higher degree of energy for the femur fracture itself. Rather, the mechanism of injury, the degree of comminution, the presence of an open injury, and a segmental pattern may represent better indications of fracture energy. The degree of comminution was higher in the WBAT compared with the NWB group, whereas open fractures were more common in the NWB group, though neither difference was significant. Rates of segmental fractures were equal, and mechanisms of injury were similar. Therefore, an assumption that the NWB groups represent higher-energy fractures, and therefore slower healing rates are expected, is not well founded. Radiographic healing in the WBAT group was improved by a mean clinical corollary of 1 cortex of novel callous formation or completed callous bridging at the 6 to 8 week and 12 to 16 week postoperative time points. The differences in mRUST scores were 1.3 and 1.8, respectively. While these numbers appear small, given the 4 to 16 point non-normal mRUST scale, these represent an approximately 10% to 15% difference between groups. Additionally, the clinical difference of 1 to 2 mRUST scores is the difference between callous formation versus bridging callous at 1 to 2 cortices. An additional cortex of healing often is the key for a provider to define a fracture as having achieved union. As such, these differences denote both significant statistical and clinical distinctions. The reason for this improved radiographic healing rate may be related to the mechanism of healing in intramedullary fixation. Intramedullary fixation of femoral diaphyseal fractures typically produces a construct of relative stability. Fractures fixed with relative stability heal by secondary bone healing; to achieve this, strain rates should be between 2% and 10%. [32] Therefore, it is possible that NWB does not produce these levels of strain as effectively as WBAT, which results in slower rates of healing. More biomechanical and clinical studies are necessary to explore this hypothesis. The value of this research is at least 3-fold. First, this research adds to the body of literature supporting the benefits of early weight-bearing following intramedullary fixation of femoral diaphyseal fractures. Non-weight bearing has been shown to be detrimental to return to work, patient income, and return to activities of daily living; this may be especially relevant in geriatric populations. [33][34][35][36][37][38] In the absence of discrete indications, patients should be advanced to WBAT as quickly as possible to minimize negative social, functional, and radiographic outcomes. Second, this data provides clinicians with an expected time course for radiographic healing based on weight-bearing status. Third, this provides surgeons with an additional variable to optimize to achieve union in slow-to-heal fractures, along with such factors as nutrition optimization, smoking cessation, and endocrine normalization. 
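The time-to-event comparisons reported above (median days to each mRUST threshold, compared between groups with the log-rank test) follow the pattern sketched below. The lifelines package and the column names are assumptions for illustration; the original analysis used Prism and MATLAB.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_time_to_mrust(df: pd.DataFrame):
    """Kaplan-Meier medians and log-rank test for time to a given mRUST threshold.

    Assumed columns: group ('NWB'/'WBAT'), days_to_threshold, and reached
    (1 if the radiographic threshold was observed, 0 if censored at last follow-up).
    """
    medians = {}
    for name, sub in df.groupby("group"):
        kmf = KaplanMeierFitter(label=name)
        kmf.fit(sub["days_to_threshold"], event_observed=sub["reached"])
        medians[name] = kmf.median_survival_time_
    nwb = df[df["group"] == "NWB"]
    wbat = df[df["group"] == "WBAT"]
    test = logrank_test(
        nwb["days_to_threshold"], wbat["days_to_threshold"],
        event_observed_A=nwb["reached"], event_observed_B=wbat["reached"],
    )
    return medians, test.p_value
```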
This study has several limitations not previously addressed. First, as a retrospective study, these results are subject to selection and information bias; however, the WBAT and NWB groups were well matched on many salient variables that have been shown to affect fracture healing. However, it remains possible that potential confounding variables, such as socioeconomic and insurance status, could influence our results. Second, it is important to note the study demographics; with mean age of 32.7 the generalizability of these findings must be considered. Finally, unlike other endpoints with discrete event markers (e.g., death), radiographs likely never capture the exact point of fracture progression from 1 mRUST score to another. In the absence of daily radiographs, this limitation must be accepted. However, the congruent follow-up between WBAT and NWB groups limits the concern of this limitation. In conclusion, this study presents the novel finding that weightbearing status following intramedullary fixation of femoral diaphyseal fractures may contribute to the radiographic rate of fracture healing. This adds support to the larger body of literature calling for the expeditious advancement of weight-bearing status. Prospective series are necessary to confirm the results of this retrospective evaluation.
2021-10-18T17:25:01.308Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "2f667ea5e48ac902bc920b4fccd47a9291409bcc", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/oi9.0000000000000154", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "81bdfb2c7f5b32143ce234a8554437fb94bfcf93", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234335820
pes2o/s2orc
v3-fos-license
ASYMPTOTIC STABILITY IN A CHEMOTAXIS-COMPETITION SYSTEM WITH INDIRECT SIGNAL PRODUCTION. This paper deals with a fully parabolic inter-species chemotaxis-competition system with indirect signal production, under zero Neumann boundary conditions in a smooth bounded domain Ω ⊂ R N (N ≥ 1), where d u > 0, d v > 0 and d w > 0 are the diffusion coefficients, χ ∈ R is the chemotactic coefficient, µ 1 > 0 and µ 2 > 0 are the population growth rates, a 1 > 0, a 2 > 0 denote the strength coefficients of competition, and λ and α describe the rates of signal degradation and production, respectively. Global boundedness of solutions to the above system with χ > 0 was established by Tello and Wrzosek in [J. Math. Anal. Appl. 459 (2018) 1233-1250]. The main purpose of the paper is further to give the long-time asymptotic behavior of global bounded solutions, which could not be derived in the previous work. 1. Introduction. In this paper, we consider the following fully parabolic interspecies chemotaxis-competition system (see [28]; the system, referred to as (1), is written out below), where R N (N ≥ 1) is a smooth bounded domain, d u > 0, d v > 0 and d w > 0 are the diffusion coefficients, χ ∈ R is the chemotactic coefficient, µ 1 > 0 and µ 2 > 0 are the population growth rates, a 1 > 0, a 2 > 0 denote the strength coefficients of competition, and λ and α describe the rates of signal degradation and production, respectively. Moreover, we assume that the initial data (u 0 , v 0 , w 0 ) ∈ (W 1,p (Ω)) 3 , p > N, are nonnegative functions. System (1) is an extension of the classical Patlak-Keller-Segel chemotaxis model [15,16,22] and of the interspecies Lotka-Volterra competition system [20,29,21], in which individuals belonging to both competing populations are assumed to disperse randomly in the region which they jointly occupy. The biased movement is referred to as chemoattraction (i.e. χ < 0) if the cells move toward the increasing signal concentration, while it is called chemorepulsion (i.e. χ > 0) whenever the cells move away from the increasing signal concentration. Moreover, when χ > 0 in [28], individuals of the first species try to avoid encounters with competitors by means of chemorepulsion, a chemosensory reaction to the scent of rivals. In this context, the first species, with density denoted by u(x, t), moves towards the decreasing concentration w(x, t) of the signaling chemical secreted by the individuals of the second species, mathematically represented through its density v(x, t). By this indirect mechanism, the first species tries to avoid encounters with rivals of the second species. Recently, there have been some interesting results on chemotaxis models with indirect signal production or consumption.
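The display defining system (1) does not survive in this extraction. In the model of Tello and Wrzosek [28] described here, with the parameters listed above and the convention that χ > 0 produces chemorepulsion of the first species away from the signal w secreted (indirectly, through the third equation) by the second species, the system reads, up to notational conventions:

$$\begin{cases} u_t = d_u\,\Delta u + \chi\,\nabla\!\cdot\!\big(u\,\nabla w\big) + \mu_1 u\,(1-u-a_1 v), & x\in\Omega,\ t>0,\\ v_t = d_v\,\Delta v + \mu_2 v\,(1-v-a_2 u), & x\in\Omega,\ t>0,\\ w_t = d_w\,\Delta w - \lambda w + \alpha v, & x\in\Omega,\ t>0, \end{cases}$$

together with homogeneous Neumann boundary conditions ∂u/∂ν = ∂v/∂ν = ∂w/∂ν = 0 on ∂Ω and the nonnegative initial data (u 0 , v 0 , w 0 ). This is offered as a reading aid consistent with the surrounding description, not as a verbatim copy of the original display.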
Tao and Winkler [26] proposed the following parabolic-elliptic-ODE chemotaxis model with indirect signal production in the unit disk Ω := B 1 (0) ⊂ R 2 , where δ ≥ 0 and τ > 0 are given parameters and µ(t) := 1 |Ω| Ω w(x, t)dx. They showed the global existence of classical solution and found a critical mass m = 8πδ. Later, Hu and Tao [13] studied the boundedness and large time behavior for a parabolic-parabolic ODE chemotaxis-growth system with indirect signal production. Moreover, Ding and Wang [7] investigated global boundedness in the quasilinear fully parabolic chemotaxis model with indirect signal production. Wang [30] studied the boundedness for the quasilinear fully parabolic chemotaxis-growth system with indirect signal production, and applied the results into the quasilinear attraction-repulsion chemotaxis model with logistic source. Fuest [8] analyzed a parabolic-parabolic-ODE chemotaxis system with indirect signal consumption and derived that the solution (u, v, w) is globally bounded and converges to a spatially constant equilibrium when either n ≤ 2 or ||v 0 || L ∞ (Ω) ≤ 1 3n . Furthermore, Xing and Zheng [33] studied the boundedness and long-time behavior for the quasilinear parabolic-parabolic-ODE chemotaxis system with indirect signal consumption. In order to understand the development of two-competing species system (1), let us mention some previous contributions in this direction. In recent years, the following two-species chemotaxis models with an incompressible fluid in Ω × (0, ∞), have been studied by some authors, where κ ∈ {0, 1}, the parameters χ i , µ i , a i , α i , β i , (i = 1, 2) are positive and Ω ⊂ R N (N = 2, 3) is a bounded domain with smooth boundary ∂Ω. When κ = 1, system (3) is called as two-species chemotaxis-Navier-Stokes model. In the two-dimensional setting, Hirata et al. [10] derived global existence, boundedness and stabilization of classical solutions for (3). Moreover, global existence of weak solutions, eventual smoothness and stabilization are studied under the three-dimensional case in [11]. When κ = 0, system (3) is called as two-species chemotaxis-Stokes model. In the three-dimensional setting, Cao et al. [5] studied the global existence and asymptotic behavior of classical solutions for (3) provided that µi χi (i = 1, 2) is sufficiently large. Jin and Xiang [14] further gave the explicit rates of convergence for any supposedly given global bounded classical solution. Zheng et al. [35] investigated the boundedness and convergence rates for the attraction-repulsion chemotaxis-fluid system. Recently, when −(α 1 n 1 + α 2 n 2 )c is replaced with −c+α 1 n 1 +α 2 n 2 in (3), Cao et al. [6] studied the global boundedness and stabilization of classical solutions to (3) in three-dimensional case provided that µ 1 and µ 2 are sufficiently large. Moreover, Zheng et al. [38] considered the global asymptotic stability for two-species chemotaxis-competition-fluid system with two signals. On the other hand, when u = 0 and −(α 1 n 1 + α 2 n 2 )c is replaced with −c + α 1 n 1 + α 2 n 2 in (3), the two-species chemotaxis-competition systems without fluid have also been studied by some authors. The existence of nonconstant positive steady states of (3) with u = 0 was derived in [31] under one dimensional case. Lin et al. [19] proved that the solution of system (3) with u = 0 is global and bounded for any n ≥ 2 provided that Ω is convex. Bai and Winkler [1] studied the large time behavior of global solutions to (3) with u = 0. 
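The display for the two-species chemotaxis-fluid system (3) is likewise not reproduced here. In the cited works (e.g. Hirata et al. [10], Cao et al. [5]) the model takes roughly the following form; in particular the buoyancy coupling through a gravitational potential φ is inferred from those references rather than from the present text:

$$\begin{cases} (n_1)_t + u\cdot\nabla n_1 = \Delta n_1 - \chi_1\nabla\!\cdot(n_1\nabla c) + \mu_1 n_1(1-n_1-a_1 n_2),\\ (n_2)_t + u\cdot\nabla n_2 = \Delta n_2 - \chi_2\nabla\!\cdot(n_2\nabla c) + \mu_2 n_2(1-n_2-a_2 n_1),\\ c_t + u\cdot\nabla c = \Delta c - (\alpha_1 n_1 + \alpha_2 n_2)\,c,\\ u_t + \kappa\,(u\cdot\nabla)u + \nabla P = \Delta u + (\beta_1 n_1 + \beta_2 n_2)\nabla\phi,\qquad \nabla\cdot u = 0, \end{cases}$$

so that κ = 1 gives the Navier-Stokes variant and κ = 0 the Stokes variant discussed in the text.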
Moreover, the global existence, boundedness and large time behavior of solutions for parabolic-parabolicelliptic two-species chemotaxis model can be found in [27,24,3]. Furthermore, the boundedness and large time behavior of solutions to two-species chemotaxiscompetition system with two signals are studied by some authors in [2,34,36]. As mentioned above works, the two-competing-species chemoattractant systems have been studied by many authors. Recently, Tello and Wrzosek in [28] studied the interspecies competition chemorepulsion system (1) with χ > 0. They derived the global existence, boundedness and linear stability analysis of the constant steady state for system (1) with a 1 , a 2 ∈ (0, 1) when the strength of chemorepulsion is not too high. However, to the best of our knowledge, the convergence rate of global solutions to (1) remains open in the previous works. Therefore, our main purpose in this paper is to further investigate the asymptotic stability of global bounded solutions to system (1) according to different values of a 1 and a 2 . For the simplicity, we assume that the diffusion coefficients d u = d v = d w = 1 throughout this paper. Our main results in this paper are stated as follows. (i) Suppose that a 1 , a 2 ∈ (0, 1) and µ 1 > , then the global bounded solution (u, v, w) of (1) exponentially converges to the coexistence state (u * , v * , w * ), i.e. there exist positive constants C and γ such that (ii) Assume that a 1 ∈ [1, ∞) and a 2 ∈ (0, 1), then the global bounded solution (u, v, w) of (1) algebraically converges to the constant steady state (0, 1, α λ ), i.e. there exist positive constants C and ς such that , then the global bounded solution (u, v, w) of (1) algebraically converges to the constant steady state (1, 0, 0), i.e. there exist positive constants C and κ such that Remark 1. Compared with the previous results in [28], we further show the exact convergence rate of global solutions for system (1) under the case a 1 , a 2 ∈ (0, 1). Furthermore, we also derive the exact convergence rates of global solutions for system (1) under the cases 0 < a 2 < 1 ≤ a 1 and 0 < a 1 < 1 ≤ a 2 , respectively. However, it remains open for the convergence property of global solutions in the case a 1 > 1 and a 2 > 1. Remark 2. When µ 1 is enough large, it is not difficult to derive that the global bounded solutions of (1) converge to the constant steady state (u * , v * , w * ) and (1, 0, 0) under the cases a 1 , a 2 ∈ (0, 1) and 0 < a 1 < 1 ≤ a 2 , respectively. However, when 0 < a 2 < 1 ≤ a 1 , the global bounded solutions of (1) converge to the constant steady state (0, 1, α λ ) for any µ 1 > 0. Moreover, we do not need any large restriction of µ 2 in the study of long-time asymptotic stability. The methods used in this paper could be applied to the parabolic-parabolic-elliptic chemotaxis systems. The rest of this paper is organized as follows. In Section 2, we show the existence of global bounded classical solution to system (1) and give some preliminary regularity estimates which are important for our main proofs. In Section 3, we shall study the asymptotic stabilization of global bounded solutions for system (1) by constructing different energy functionals according to different parameters a 1 and a 2 and prove Theorem 1.1. Preliminaries. In this section, we provide more stronger regularity properties for any such bounded solution than those shown in [28], which are needed to achieve our desired rates of convergence in L ∞ -norm. 
To do this, we shall collect the L ∞boundedness of solutions for system (1) as follows. The result of Lemma 2.3 has been proved in [1]. Here we delete the details of the proof. 3. Asymptotic stability. In this section, inspired in [12,1], we shall study the asymptotic stabilization of global bounded solutions for system (1) by constructing different energy functionals according to different parameters a 1 and a 2 . Step 2. We shall show that E 1 (t) ≤ − 1 F 1 (t) with some 1 > 0 for all t > 0. By a series of computations, we deduce from Young's inequality that and as well as for all t > 0. With the help of Lemma 3.1, we shall give the following large time behavior of global solutions for system (1). Lemma 3.2. Let the assumptions of Lemma 3.1 hold. Then the global solution of (1) converges to the coexistence state (u * , v * , w * ), i.e. Proof. It follows from 3.1 and integration over (1, ∞) that According to Lemma 2.2, we derive that (u, v, w) is Hölder continuous uniformly with respect to t ≥ 1 , in Ω × [t, t + 1], so we infer that F(t) is uniformly continuous in [1, ∞). By Lemma 2.3, we obtain i.e. Proof. We again use the function G 1 (ξ) = ξ − u * ln ξ for all ξ > 0, which is given in the proof of Lemma 3.1. According to L'Hopital's rule, we have By Lemma 3.2, we have ||u(·, t) − u * || L ∞ (Ω) → 0 as t → ∞, which implies that there exists t 1 > 0 such that for all t > t 1 . By means of the definitions of E 1 (t) and F 1 (t) in (9) and (10), respectively, it follows from the right inequalities of (29) and (30) that there exists C 1 > 0 such that By Lemma 3.2, we have which implies that there exist C 2 > 0 and L > 0 such that According to the left inequalities of (29) and (30), there exists a positive constant C 3 such that By the Gagliardo-Nirenberg inequality and Lemma 2.2, there exist positive constants C 4 , C 5 , C 6 and C 7 such that for all t > t 1 . The proof of Lemma 3.3 is complete. Lemma 3.4. Let a 1 ∈ [1, ∞) and a 2 ∈ (0, 1) and the assumptions of Lemma 2.1 hold, then there exist l i > 0, i = 1, 2, 3 and 2 > 0 such that the functions E 2 and F 2 satisfy E 2 (t) ≥ 0 (38) and Proof. Define and as well as for all t > 0, then Now, we divide the proof into the following two steps. Step 1. Similar to Step 1 in the proof of Lemma 3.1, it is easy to prove the nonnegativity of E 2 (t) for all t > 0. Step 2. We shall show that E 2 (t) ≤ − 2 F 2 (t) with some 2 > 0 for all t > 0. By a series of computations, we deduce from Young's inequality that and as well as for all t > 0. Combining (44)-(46), we obtain we have The proof of Lemma 3.4 is complete. Proof. By means of Lemma 3.4, it follows from the similar arguments as the proof of Lemma 3.2 that Lemma 3.5 holds. In order to prove Theorem 1.1 (ii), it remains to give the convergence rates for global solutions in (1). Lemma 3.6. Let a 1 ∈ [1, ∞), a 2 ∈ (0, 1) and the assumptions of Lemma 2.1 hold. Then there exist positive constants C and ς such that where t 2 is some fixed time. Proof. Let G 2 (ξ) := ξ − ln ξ for all ξ > 0, according to L'Hopital's rule, we obtain By Lemma 3.5, we have ||v(·, t) − 1|| L ∞ (Ω) → 0 as t → ∞, which implies that there exists t 2 > 0 such that and for all t > t 2 . By means of the definitions of E 2 and F 2 in Lemma 3.4, it follows from (50) and Hölder's inequality that there exist C 1 > 0 and C 2 > 0 such that where we have used the boundedness properties of solution (u, v, w) asserted by Lemma 2.1. Thus we have where . 
Therefore, it follows from Lemma 3.4 that which implies with some positive constant C 4 . It follows from (51) and (55) that for all t > t 2 there exist C 5 > 0 and C 6 > 0 such that By using the Gagliardo-Nirenberg inequality, there exist positive constants C 7 , C 8 and C 9 such that and as well as Henceforth, it follows from Lemma 2.2 and (56)-(59) that for all t > t 2 , where C 10 > 0, C := C 10 C 1 N +1 6 and ς = 1 N +1 . The proof of Lemma 3.6 is complete. Step 1. Similar to Step 1 in the proof of Lemma 3.1, we can prove the nonnegativity of E 3 (t) for all t > 0. Step 2. We shall show that E 3 (t) ≤ − 3 F 3 (t) with some 3 > 0 for all t > 0. By a series of computations, we deduce from Young's inequality that and as well as for all t > 0. Proof. By means of Lemma 3.7, it follows from the similar arguments as the proof of Lemma 3.2 that Lemma 3.8 holds.
2020-09-03T09:08:29.901Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "c0f23c1bf34a3c1501d29178302c8bc4486fc640", "oa_license": null, "oa_url": "https://www.aimsciences.org/article/exportPdf?id=04200ffe-41f3-43d9-8c5e-9428516a96c1", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "874ea9fe91153f6cd5b9fd3d7725066484446e08", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
2076028
pes2o/s2orc
v3-fos-license
On the monodromy of the Hitchin connection For any genus g>1 we give an example of a family of smooth complex projective curves of genus g such that the image of the monodromy representation of the Hitchin connection on the sheaf of generalized SL(2)-theta functions of level l different from 1,2,4 and 8 contains an element of infinite order. Introduction Let π : C → B be a family of smooth connected complex projective curves of genus g ≥ 2 parameterized by a smooth complex manifold B. For any integers l ≥ 1, called the level, and r ≥ 2 we denote Z l the complex vector bundle over B having fibers H 0 (M C b (SL(r)), L ⊗l ), where M C b (SL(r)) is the moduli space of semistable rank-r vector bundles with trivial determinant over the curve C b = π −1 (b) for b ∈ B and L is the ample generator of its Picard group. Following Hitchin [H], the bundle Z l is equipped with a projectively flat connection called the Hitchin connection. The main result of this paper is the following Theorem. Assume that the level l = 1, 2, 4 and 8 and that the rank r = 2. For any genus g ≥ 2 there exists a family π : C → B of smooth complex connected projective curves of genus g such that the monodromy representation of the Hitchin connection ρ l : π 1 (B, b) −→ PGL (Z l,b ) has an element of infinite order in its image. For any genus g ≥ 2 we give an example of a family π : C → B of smooth hyperelliptic curves of genus g and an explicit element ξ ∈ π 1 (B, b) with image of infinite order (see Remark 6.10). In the context of Witten-Reshetikhin-Turaev Topological Quantum Field Theory as defined by Blanchet-Habegger-Masbaum-Vogel [BHMV], the analogue of the above theorem is wellknown due to work of Masbaum [Ma], who exhibited an explicit element of the mapping class group with image of infinite order. Previously, Funar [F] had shown by a different argument the somewhat weaker result that the image of the mapping class group is an infinite group. It is enough to show the above theorem in the context of Conformal Field Theory as defined by Tsuchiya-Ueno-Yamada [TUY]: following a result of the first author [La], the monodromy representation associated to Hitchin's connection coincides with the monodromy representation of the WZW connection. In a series of papers by Andersen and Ueno ([AU1], [AU2], [AU3] and [AU4]) it has been shown recently that the above Conformal Field Theory and the above The mapping class group Γ g is defined to be Γ 0 g = Γ g,0 . 2.1.2. Dehn twists. Given an (unparametrized) oriented, embedded circle γ in R ⊂ S we can associate to it a diffeomorphism T γ up to isotopy, i.e., an element T γ in the mapping class groups Γ n g and Γ g,n , the so-called Dehn twist along the curve γ. It is known that the mapping class groups Γ n g and Γ g,n are generated by a finite number of Dehn twists. We recall the following exact sequence 1 −→ Z n −→ Γ g,n −→ Γ n g −→ 1. The n generators of the abelian kernel Z n are given by the Dehn twists T γ i , where γ i is a loop going around the boundary circle associated to x i for each i. 2.1.3. The mapping class groups Γ 4 0 and Γ 0,4 . Because of their importance in this paper we recall the presentation of the mapping class groups Γ 4 0 and Γ 0,4 by generators and relations. Keeping the notation of the previous section, we denote by R the 4-holed sphere and by γ 1 , γ 2 , γ 3 , γ 4 the circles in R around the four boundary circles. We denote by γ ij the circle dividing R into two parts containing two holes each and such that the two circles γ i and γ j are in the same part. 
It is known (see e.g. [I] section 4) that Γ 0,4 is generated by the Dehn twists T γ i for 1 ≤ i ≤ 4 and T γ ij for 1 ≤ i, j ≤ 3 and that, given a suitable orientation of the circles γ i and γ ij , there is a relation (the lantern relation) Note that the images of the Dehn twists T γ i under the natural homomorphism Γ 0,4 −→ Γ 4 0 , T γ → T γ , are trivial. Thus the group Γ 4 0 is generated by the three Dehn twists T ij for 1 ≤ i, j ≤ 3 with the relation T γ 12 T γ 13 T γ 23 = 1. For each 4-holed sphere being contained in a closed genus g surface without boundary one can consider the Dehn twists T ij as elements in the mapping class group Γ g . 2.2. Moduli spaces of curves. Let M g,n denote the moduli space parameterizing n-pointed smooth projective curves of genus g. The moduli space M g,n is a (possibly singular) algebraic variety. It can also be thought of as an orbifold (or Deligne-Mumford stack) and one has an isomorphism (1) j : π 1 (M g,n , x) ∼ −→ Γ n g , where π 1 (M g,n , x) stands for the orbifold fundamental group of M g,n . In case the space M g,n is a smooth algebraic variety, the orbifold fundamental group coincides with the usual fundamental group. 2.3. The isomorphism between π 1 (M 0,4 , x) and Γ 4 0 . The moduli space M 0,4 parameterizes ordered sets of 4 points on the complex projective line P 1 C up to the diagonal action of PGL(2, C). The cross-ratio induces an isomorphism with the projective line P 1 C with 3 punctures at 0, 1 and ∞ M 0,4 ∼ −→ P 1 C \ {0, 1, ∞}. We deduce that the fundamental group of M 0,4 is the group with three generators where σ 1 , σ 2 and σ 3 are the loops starting at x ∈ P 1 C \ {0, 1, ∞} and going once around the points 0, 1 and ∞ with the same orientation. We choose the orientation such that the generators σ i satisfy the relation σ 3 σ 2 σ 1 = 1. Clearly π 1 (M 0,4 , x) coincides with the fundamental group π 1 (Q, x) of the 3-holed sphere Q. In this particular case the isomorphism j : π 1 (M 0,4 , x) ∼ −→ Γ 4 0 can be explicitly described as follows (see e.g. [I] Theorem 2.8.C): we may view the 3-holed sphere Q as the union of the 4-holed sphere R with a disc D glued on the boundary corresponding to the point x 4 . Given a loop σ ∈ π 1 (Q, x) we may find an isotopy {f t : Q → Q} 0≤t≤1 such that the map t → f t (x) coincides with the loop σ, f 0 = id Q and f 1 (D) = D. Then the isotopy class of f 1 resticted to R ⊂ Q determines an element j(σ) = [f 1 ] ∈ Γ 4 0 . Moreover, with the previous notation, we have the equalities (see e.g. [I] Lemma 4.1.I) Remark 2.1. At this stage we observe that under the isomorphism j the two elements σ −1 1 σ 2 ∈ π 1 (M 0,4 , x) and T −1 γ 23 T γ 13 ∈ Γ 4 0 coincide. It was shown by G. Masbaum in [Ma] that the latter element has infinite order in the TQFT-representation of the mapping class group Γ g -note that T −1 γ 23 T γ 13 also makes sense in Γ g . We will show in Proposition 5.1 that the loop σ −1 1 σ 2 has infinite order in the monodromy representation of the WZW connection. 2.4. Braid groups and configuration spaces. We recall some basic results about braid groups and configuration spaces. We refer the reader e.g. to [KT] Chapter 1. 2.4.1. Definitions. The braid group B n is the group generated by n − 1 generators g 1 , . . . , g n−1 and the relations The pure braid group is the kernel P n = ker(B n → Σ n ) of the group homomorphism which associates to the generator g i the transposition (i, i + 1) in the symmetric group Σ n . 
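Two relations referred to above are standard and worth spelling out, since the corresponding displays do not survive in this extraction. With the orientation conventions of [I], the lantern relation in Γ 0,4 and the defining relations of the braid group B n read:

$$T_{\gamma_1}\,T_{\gamma_2}\,T_{\gamma_3}\,T_{\gamma_4}\;=\;T_{\gamma_{12}}\,T_{\gamma_{13}}\,T_{\gamma_{23}},$$

$$g_i\,g_{i+1}\,g_i=g_{i+1}\,g_i\,g_{i+1}\ \ (1\le i\le n-2),\qquad g_i\,g_j=g_j\,g_i\ \ (|i-j|\ge 2).$$

The first relation is consistent with the statement above that the images of the Dehn twists T γ i are trivial in Γ 4 0 , so that T γ 12 T γ 13 T γ 23 = 1 holds there.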
The braid groups B n and P n can be identified with the fundamental groups P n = π 1 (X n , p), where X n and X n are the complex manifolds parameterizing ordered respectively unordered n-tuples of distinct points in the complex plane The points p = (z 1 , . . . , z n ) and p = p mod Σ n are base points in X n and X n . There are natural inclusions B n ֒→ B n+1 , which induce inclusions on the pure braid groups ι : P n ֒→ P n+1 . Over the variety X n there is an universal family (2) F n+1 = (π : C = X n × P 1 → X n ; s 1 , . . . , s n , s ∞ ), parameterizing n + 1 distinct points on the projective line P 1 . The section s i is given by the natural projection X n → C on the i-th component followed by the inclusion C ⊂ P 1 C = C∪{∞} and s ∞ is the constant section corresponding to ∞ ∈ P 1 C . Conformal blocks and the projective WZW connection 3.1. General set-up. We consider the simple Lie algebra sl(2). The set of irreducible sl(2)modules, i.e. the set of dominant weights of sl(2) equals where ̟ is the fundamental weight of sl(2), which corresponds to the standard 2-dimensional representation of sl(2). We fix an integer l ≥ 1, called the level, and introduce the finite set P l = {λ ∈ P + | m ≤ l}. Given any λ ∈ P l we denote by λ † ∈ P l the dominant weight of the dual V † λ of the sl(2)-module V λ with dominant weight λ. Note that λ † = λ. Given an integer n ≥ 1, a collection λ = (λ 1 , . . . , λ n ) ∈ (P l ) n of dominants weights of sl(2) and a family F = (π : C → B; s 1 , . . . , s n ; ξ 1 , . . . , ξ n ) of n-pointed stable curves of arithmetic genus g parameterized by a base variety B with sections s i : B → C and formal coordinates ξ i at the divisor s i (B) ⊂ C, one constructs (see [TUY] section 4.1) a locally free sheaf V † l, λ (F ) over the base variety B, called the sheaf of conformal blocks or the sheaf of vacua. We recall where H † λ denotes the dual of the tensor product H λ = H λ 1 ⊗ · · · ⊗ H λn of the integrable highest weight representations H λ i of level l and weight λ i of the affine Lie algebra sl(2). The formation of the sheaf of conformal blocks commutes with base change. In particular, we have for any point . . , s n (b); ξ 1|C b , . . . , ξ n|C b ) consisting of a stable curve C b with n marked points s 1 (b), . . . , s n (b) and formal coordinates ξ i|C b at the points s i (b). We recall that the sheaf of conformal blocks V † l, λ (F ) does not depend (up to a canonical isomorphism) on the formal coordinates ξ i (see e.g. [U] Theorem 4.1.7). We therefore omit the formal coordinates in the notation. 3.2. The projective WZW connection. We now outline the definition of the projective WZW connection on the sheaf V † l, λ (F ) over the smooth locus B s ⊂ B parameterizing smooth curves and refer to [TUY] or [U] for a detailed account. Let D ⊂ B be the discriminant locus and let S = n i=1 s i (B) be the union of the images of the n sections. We recall the exact sequence where Θ C/B ( * S) denotes the sheaf of vertical rational vector fields on C with poles only along the divisor S, and Θ ′ C ( * S) π the sheaf of rational vector fields on C with poles only along the divisor S and with constant horizontal components along the fibers of π. There is an O B -linear map which associates to a vector field ℓ in Θ ′ C ( * S) π the n Laurent expansions ℓ i d dξ i around the divisor s i (B). Abusing notation we also write ℓ for its image under p We then define for any vector field ℓ in Θ for f a local section of O B and u ∈ H † λ . 
Here T [ℓ i ] denotes the action of the energy-momentum tensor on the i-th component H † λ i . It is shown in [TUY] that D( ℓ) preserves V † l, λ (F ) and that D( ℓ) only depends on the image θ( ℓ) up to homothety. One therefore obtains a projective connection ∇ on the sheaf V † l, λ (F ) over B s given by Since this connection is projectively flat, it induces a monodromy representation Remark 3.1. For a family of smooth n-pointed curves of genus 0 the projective WZW connection is actually a connection (see e.g. [U] section 5.4). Monodromy of the WZW connection for a family of 4-pointed rational curves In this section we review the results by Tsuchiya-Kanie [TK] on the monodromy of the WZW connection for a family of rational curves with 4 marked points. We consider the universal family F 4 over X 3 introduced in (2) with the collection The rank of the sheaf of conformal blocks V † l, λ T K (F 4 ) equals 2 for any l ≥ 1, see e.g. [TK] Theorem 3.3. Moreover, as outlined in section 3.2, the bundle V † l, λ T K (F 4 ) is equipped with a flat connection ∇ (not only projective). Remark 4.1. It is known [TK] that the differential equations satisfied by the flat sections of (V † l, λ T K (F 4 ), ∇) coincide with the Knizhnik-Zamolodchikov equations (see e.g. [EFK]). Moreover, we will show in a forthcoming paper that the local system (V † l, λ T K (F 4 ), ∇) also coincides with a certain Gauss-Manin local system. We observe that the symmetric group Σ 3 acts naturally on the base variety X 3 . The local is invariant under this Σ 3 -action and admits a natural Σ 3 -linearization. Thus by descent we obtain a local system (V † l, λ T K (F 4 ), ∇) over X 3 . Therefore, we obtain a monodromy representation with t = q(1 + q + q 2 ). Note that both matrices have eigenvalues q 1 4 and −q − 3 4 . Remark 4.3. These matrices have already been used in the paper [AMU]. Infinite monodromy over M 0,4 We denote by ρ l the restriction of the monodromy representation ρ l to the subgroup π 1 (M 0,4 , x) Proposition 5.1. Let σ ∈ π 1 (M 0,4 , x) be the element introduced in (3). If l = 1, 2, 4 and 8, then the element ρ l (σ) has infinite order in both PGL(2, C) and GL(2, C) Proof. Using the explicit form of the monodromy representation ρ l given in Proposition 4.2 we compute the matrix associated to Ψ(σ) = Ψ(σ −1 This matrix has determinant 1 and trace 2 − q − q −1 + q 2 + q −2 . Hence the matrix has finite order if and only if there exists a primitive root of unity λ such that In [Ma] it is shown that this can only happen if l = 1, 2, 4 or 8: using the transitive action of Gal(Q/Q) on primitive roots of unity, one gets that, if such a λ exists for q = exp( 2iπ l+2 ), then for any primitive (l + 2)-th rootq there exists a primitive rootλ such that In particular, we have the inequality |1 − Re(q) + Re(q 2 )| ≤ 1 for any primitive (l + 2)-th rootq. But for l = 1, 2, 4 and 8, one can always find a primitive (l + 2)-th rootq such that Re(q 2 ) > Re(q) -for the explicit rootq see [Ma]. Finally, since ρ l (σ) has trivial determinant, its class in PGL(2, C) will also have infinite order. Remark 5.2. The same computation shows that the element ρ l (σ 1 σ −1 2 ) ∈ GL(2, C) also has infinite order if l = 1, 2, 4 and 8. This implies that the orientation chosen for both loops σ 1 and σ 2 around 0 and 1 is irrelevant. On the other hand, it is immediately seen that the elements ρ l (σ 1 ), ρ l (σ 2 ) and ρ l (σ 1 σ 2 ) have finite order for any level l. 
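The finite-order criterion used in the proof of Proposition 5.1 (the trace 2 − q − q^{-1} + q^2 + q^{-2} must remain the trace of a finite-order element after applying any Galois conjugation q ↦ q̃ among primitive (l+2)-th roots of unity, as quoted from [Ma]) can be illustrated numerically. The sketch below, with helper names of our own, searches for a primitive root q̃ with Re(q̃^2) > Re(q̃); finding one certifies infinite order, and no such root should exist precisely for the exceptional levels l = 1, 2, 4, 8.

    import cmath
    from math import gcd

    def infinite_order_witness(l):
        """Look for a primitive (l+2)-th root of unity q~ with
        1 - Re(q~) + Re(q~^2) > 1, i.e. Re(q~^2) > Re(q~).
        By the Galois argument quoted from [Ma], such a root certifies
        that rho_l(sigma) has infinite order."""
        n = l + 2
        for k in range(1, n):
            if gcd(k, n) != 1:
                continue  # not a primitive n-th root of unity
            q = cmath.exp(2j * cmath.pi * k / n)
            if (1 - q.real + (q * q).real) > 1 + 1e-12:
                return k  # witness exponent
        return None

    for l in range(1, 13):
        w = infinite_order_witness(l)
        tag = "no witness found" if w is None else f"witness k={w}: infinite order"
        print(f"level l={l:2d}: {tag}")
    # Expected: no witness exactly for l = 1, 2, 4, 8, matching Proposition 5.1.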
In the case l = 4 we recall that the alternating group A 4 has the following presentation by generators and relations A 4 = a, b | a 3 = b 2 = (ab) 3 = 1 . Using the formulae of Proposition 4.2 and 5.1 we check that ord(m 1 ) = ord(m 2 ) = 3 and ord(m −1 1 m 2 ) = 2, so that a = m 1 and b = m −1 1 m 2 generate the group A 4 . In the case l = 8 we recall that the alternating group A 5 has the following presentation by generators and relations A 5 = a, b | a 2 = b 3 = (ab) 5 = 1 . Using the formulae of Proposition 4.2 and 5.1 we check that ord(m 1 ) = ord(m 2 ) = 5 and ord(m −1 1 m 2 ) = 3. Moreover a straightforward computation shows that the element m −1 1 m 2 m −1 1 is (up to a scalar) conjugate to the matrix which has trace zero. Note that t 2 = q + q 2 + q 3 and q −4 = −q. Hence ord(m −1 1 m 2 m −1 1 ) = ord(m 1 m −1 2 m 1 ) = 2. Therefore if we put a = m 1 m −1 2 m 1 and b = m −1 1 m 2 , we have ab = m 1 and ab 2 = m 2 , so that ord(a) = 2, ord(b) = 3, and ord(ab) = 5, i.e. a, b generate the group A 5 . Proof. First, we observe that the image ρ l (π 1 (M 0,4 , x)) in GL(2, C) is finite. In fact, by Proposition 5.3 its image in PGL(2, C) is finite and its intersection ρ l (π 1 (M 0,4 , x)) ∩ C * Id with the center of GL(2, C) is also finite. The latter follows from the fact that the determinant det Mat B ( ρ l (g i )) = −q − 1 2 has finite order in C * . Secondly, we recall that P 3 is generated by the normal subgroup π 1 (M 0,4 , x) and by the element g 2 1 . Since ρ l (g 2 1 ) has finite order and since B 3 /P 3 = Σ 3 is finite, we obtain that ρ l (B 3 ) is a finite subgroup. In the proof of the main theorem we will need the following corollary of Proposition 5.1. We consider the following compact subset of C where D ⊂ C is the closed disc centered at 0 with radius 2 and ∆ z ⊂ C denotes the open disc centered at z with very small radius. We choose as base point b = i ∈ B. Let ξ ∈ π 1 (B, b) be the loop going once around the two points −1 and 0. Let be the family of 5-pointed rational curves, where the 5 sections s 0 , s 1 , s −1 , s u , s −u map u ∈ B to 0, 1, −1, u and −u respectively. Corollary 5.5. For l = 1, 2, 4 and 8, the image of the loop ξ ∈ π 1 (B, b) under the monodromy representation Proof. Since propagation of vacua is a flat isomorphism (see e.g. [Lo] Proposition 22) we can drop the point 0 which is marked with the zero weight. Thus it suffices to show the statement for the same family with the 4 sections s 1 , s −1 , s u , s −u . The cross-ratio of the 4 points 1, −1, u, −u equals t = (u + 1) 2 4u = 1 2 + 1 4 (u + u −1 ). We also introduce the 4-pointed family where the section s t maps u to the cross-ratio t and we observe that there exists an automorphism α : P 1 × B → P 1 × B over B (which can be made explicit) mapping the 4 sections s 1 , s −1 , s u , s −u to the 4 sections s 0 , s 1 , s ∞ , s t . Moreover the automorphism α induces an isomorphism between the two local systems (V † l, λ T K (F ), ∇) and (V † l, λ T K (F ′ ), ∇) over B. We now consider the map induced by the cross-ratio One easily checks that the extension Ψ of Ψ to P 1 gives a double cover of P 1 ramified over 0 = Ψ(−1) and 1 = Ψ(1). Note that Ψ(0) = Ψ(∞) = ∞. Hence Ψ is anétale double cover over its image. The map Ψ induces a map, denoted by Φ, between fundamental groups ). 6.1.1. A family of rational curves. 
We consider the family of rational curves p : C → A 1 parameterized by the affine line A 1 and given by the equation where (x : y : z) are homogeneous coordinates on the projective plane and τ is a coordinate on A 1 . We denote by C τ the fiber over τ ∈ A 1 . For τ = 0 the curve C τ is a smooth conic and C 0 = L 0 ∪ L 1 is the union of two projective lines given by the equations L 0 = Zeros(y) and L 1 = Zeros(x). For τ = 0 we can parameterize the smooth conic C τ in the following way Note that for τ = 0 this morphism also gives a parametrization of the line L 0 . 6.1.2. A family of hyperelliptic curves. Let g ≥ 2 be an integer and let α be a complex number satisfying |α| > 1. We consider the family p : C → A 1 × A 1 of curves parameterized by two complex numbers (u, τ ) ∈ A 1 × A 1 and such that the fiber C (u,τ ) is the double cover of P 1 ramified over the 2g + 2 points: We assume that these points are distinct. We denote the projection pr : C (u,τ ) → P 1 . The family of curves C can be constructed by taking the closure in P 2 × A 1 × A 1 of the affine curve in A 2 × A 1 × A 1 over A 1 × A 1 defined by the equation and by blowing up g times the singular point at ∞. 6.2. The sewing procedure. We will briefly sketch the construction of the sewing map and give some of its properties (for the details see [TUY] or [U]). We consider a flat family F = (π : C → B × Ω; s 1 , . . . , s n ) of n-pointed connected projective curves parameterized by B×Ω, where B is a complex manifold and Ω ⊂ A 1 is an open subset of the complex affine line A 1 containing the origin 0. We assume that the family F satisfies the following conditions: (1) the curve C (b,τ ) is smooth if τ = 0. We also introduce the family F of n + 2-pointed curves associated to F F = ( π : C → B; s 1 , . . . , s n+2 ) which desingularizes the family of nodal curves F |B×{0} . Here s n+1 (b) and s n+2 (b) are the two points of C b lying over the node of C (b,0) . Remark 6.1. An example of a family F satisfying the above conditions is given in section 6.1.1. For any dominant weight µ the Virasoro operator L 0 induces a decomposition of the representation space H µ into a direct sum of eigenspaces H µ (d) for the eigenvalue d + ∆ µ of L 0 , where ∆ µ ∈ Q is the trace anomaly and d ∈ N. We recall that there exists a unique (up to a scalar) bilinear pairing We It is shown in [TUY] that ]. Therefore we obtain for any µ ∈ P l and any λ ∈ (P l ) n an O B×Ω -linear map called the sewing map. We denote Ω 0 = Ω \ {0}. We recall that the sheaf V † l, λ,µ,µ † ( F ) over B, as well as its pull-back to the product B × Ω 0 under the first projection, is equipped with the WZW-connection (see section 3.2). On the other hand, the restriction of the sheaf V † l, λ (F ) to B × Ω 0 , which is the open subset of B × Ω parameterizing smooth curves, is also equipped with the WZW-connection. The main result of this section (Theorem 6.5) says that the sewing map s µ is projectively flat for both connections. We first need to recall the following Theorem 6.2 ([TUY] Theorem 6.2.2). For any section ψ ∈ V † l, λ,µ,µ † ( F ) the multi-valued formal power series ψ = τ ∆µ ψ has the following properties : (1) it satisfies the relation (2) for any b ∈ B, the power series ψ b converges. (3) if B is compact, there exists a non-zero positive real number r such that the power series ψ defines a holomorphic section of V † l, λ (F ) over B × D r , where D r ⊂ Ω is the open disc centered at 0 with radius r. Proof. Only part (3) is not proved in [TUY] Theorem 6.2.2. 
Consider a point a ∈ B. We choose holomorphic coordinates u 1 , . . . , u m centered at the point a ∈ B. Locally around the point a ∈ B the section ψ can be expanded as a H † λ -valued power series in the m + 1 variables u 1 , u 2 , . . . , u m , τ . Given a second point b = a with coordinates b = (b 1 , . . . , b m ) with b i = 0, we know by part (2) that ψ b converges if |τ | ≤ ρ for some real ρ. By the general theory of functions in several complex variables (see e.g. [O] Proposition 1.2) we deduce that ψ c converges for |τ | < ρ and for any c = (c 1 , . . . , c m ) such that |c i | < |b i |. Therefore, there exists for any a ∈ B a polydisc ∆ a around a and a real number r a such that the radius of convergence of the series ψ c for any c ∈ ∆ a is at least r a . By considering the covering of B by the polydiscs ∆ a and by the fact that B is compact, we then obtain the desired non-zero real number r. Remark 6.3. We note that the statement given in [TUY] Theorem 6.2.2 says that there exists a vector field ℓ over the family of curves C such that which is equivalent to the above statement using the property θ( ℓ) = −τ d dτ . This last equality is actually proved in [TUY] Corollary 6.1.4, but there is a sign error. The correct formula of [TUY] Corollary 6.1.4 is θ( ℓ) = −τ d dτ , which is obtained by writing the 1-cocycle θ 12 (u, τ ) = ℓ ′ u,τ |U 2 −l u,τ |U 1 . Remark 6.4. By making the base change ν j = τ , where j is the denominator of the trace anomaly ∆ µ , we obtain a section ψ ∈ V † l, λ (F ) The next result says that the sewing map is projectively flat. We now prove part (2). We start with a lemma, which is an analogue of [U] Lemma 5.3.1. Lemma 6.6. Let b ∈ B and let ∂ be a vector field in some neighbourhood U of b. If we choose U ⊂ B sufficiently small, then there exist local coordinates (u 1 , . . . , u m , z) (resp. (u 1 , . . . , u m , w)) of a neighbourhood X (resp. Y ) of s n+1 (U) ⊂ C |U (resp. s n+2 (U) ⊂ C |U ) and a vector field ℓ over C |U , which is constant along the fibers and which satisfy the following conditions : (1) the sections s n+1 and s n+2 are given by the mappings (2) ℓ |X = z d dz + ∂, ℓ |Y = −w d dw + ∂. In particular, θ( ℓ) = ∂, i.e. ℓ projects onto the vector field ∂. Here θ denotes the projection on the horizontal component, see (4). Proof. The proof follows the lines of the proof of [U] Lemma 5.3.1. For a small neighbourhood U of b, we choose (u 1 , . . . , u m , x) and (u 1 , . . . , u m , y) local coordinates in π −1 (U) satisfying condition (1) of the Lemma. We denote by π ′ : C ′ −→ B the family of nodal curves parameterized by B obtained from the family C by identifying the two divisors s n+1 (B) and s n+2 (B). Note that we have a sequence of maps over B By choosing U small enough, we can lift the vector field ∂ over U ⊂ B to a vector field ℓ over C ′ |U which is constant along the fibers of C ′ |U → U and has poles only at S |U , i.e. lies in Θ ′ C ′ ( * S) π ′ . The inclusion Θ ′ C ′ ( * S) π ′ ֒→ ν * Θ ′ C ( * S − s n+1 (B) − s n+2 (B)) π allows us to see ℓ as a vector field over C |U having the property where the functions a(u, x) and b(u, y) are defined by the expressions of the restriction of ℓ to the neighboorhoods X and Y ℓ |X = a(u, x) d dx and ℓ |Y = b(u, y) d dy . Note that a(u, 0) = 0 and b(u, 0) = 0 for any u ∈ U. The rest of the proof then goes as in [U] Lemma 5.3.1 or [TUY] Lemma 6.1.2. Let b ∈ B and let ∂ be a vector field in some neighbourhood U of b. 
Taking U sufficiently small, we can lift the projective connection ∇ on the sheaf V † l, λ,µ,µ † ( F) over U to a connection. We consider the vector field ℓ constructed in Lemma 6.6. Then for a local section ψ over U the equation is equivalent to the equation for some local section a of O U . We will take as local coordinates ξ n+1 = z and ξ n+2 = w around the divisors s n+1 (U) and s n+2 (U), as introduced in Lemma 6.6. Then the image of ℓ under p can be written Since T [ξ i d dξ i ] = L 0 acting on the i-th component of the tensor product, we obtain the following decomposition where the exponent (i) of the Virasoro operator L 0 denotes an action on the i-th component. For any non-negative integer d we then project equation (8) via the map π d defined in (7) into H † λ , which leads to We have the equalities π d (L (n+1) 0 Hence both terms cancel, since ∆ µ = ∆ µ † . This leads to the equations for any d Multiplying (9) with τ d and summing over d, we obtain the equation Note that ∂(ψ d τ d ) = (∂ψ d )τ d , since the vector field ∂ comes from B. The vector field ℓ over C |U determines a vector field m over the family of smooth curves C |U ×Ω 0 as follows. We fix a point b ∈ B and a non-zero complex number τ with |τ | < 1. The smooth curve C (b,τ ) is obtained from the curve C b by removing the two closed discs D n+1 and D n+2 centered at s n+1 (b) and s n+2 (b) with radius |τ |, and by identifying in the open curve according to the relation zw = τ. Under this identification, we see that the two restrictions of vector fields ℓ |{b}×A n+1 and ℓ |{b}×A n+2 correspond (since z d dz = −w d dw ) and thus define a vector field m over C (b,τ ) , which has poles only at the n points s 1 (b), . . . , s n (b). Moreover the Laurent expansion of m at s 1 (b), . . . , s n (b) coincide with the Laurent expansion of ℓ. For the construction in a family, see [U] section 5.3. Hence θ( m) = ∂ and p( m) = (ℓ 1 d dξ 1 , · · · , ℓ n d dξn ). So equation (10) can be written as The last equation means that ψ is a projectively flat section for the WZW connection. From now on we assume that B is compact. Since by Theorem 6.2 (3) the formal power series ψ determines a holomorphic section over B × D r we can choose a complex number τ 0 = 0 with |τ 0 | < r and evaluate ψ at τ 0 . This gives a section ψ(τ 0 ) of the conformal block Moreover, using the factorization rules (see e.g. [TUY] Theorem 6.2.6 or [U] Theorem 4.4.9) we obtain by summing over all dominant weights µ ∈ P l an O B -linear isomorphism which is projectively flat for the WZW connections on both sheaves over B by Theorem 6.5. We fix a base point b ∈ B, which gives a direct sum decomposition We denote by D the subgroup of PGL(V † l, λ (F τ 0 ) b ) consisting of projective linear maps preserving the direct sum decomposition (11) and by p µ : D −→ PGL(V † l, λ,µ,µ † ( F) b ) the projection onto the summand corresponding to µ ∈ P l . The next proposition is an immediate consequence of the fact that the maps s µ (τ 0 ) are projectively flat. Proposition 6.7. With the above notation we have for any µ ∈ P l and any λ ∈ (P l ) n (1) the monodromy representation of the sheaf of conformal blocks V † l, λ (F τ 0 ) over B × {τ 0 } takes values in the subgroup D, i.e., In the proof of the main theorem we will use the above proposition for a slightly more general family F of n-pointed connected projective curves. We shall assume that F satisfies the two conditions: (1) the curve C (b,τ ) is smooth if τ = 0. 
The desingularizing family F will thus be a n + 2m-pointed family. Remark 6.8. An example of a family F satisfying the above conditions is given in section 6.1.2. Proposition 6.9. With the above notation we have for any µ ∈ (P l ) m and any λ ∈ (P l ) n (1) the monodromy representation of the sheaf of conformal blocks V † l, λ (F τ 0 ) over B × {τ 0 } takes values in the subgroup D, i.e., Proof of the Theorem. We will now prove the theorem stated in the introduction. We know by [La] assuming 1 g ≥ 2 that there is a projectively flat isomorphism between the two projectivized vector bundles PZ l ∼ −→ PV † l,∅ equipped with the Hitchin connection and the WZW connection respectively. Here V † l,∅ stands for the sheaf of conformal blocks V † l,0 (F ) associated to the family F = (π : C → B; s 1 ) of curves with one point labeled with the trivial representation λ 1 = 0 (propagation of vacua). We then deduce from Proposition 6.9 (2) applied to the family F = F hyp g with n = 1, m = g and the choice of weights λ 1 = 0 and µ 1 = · · · = µ g = ̟, where we associate the weight 0 to ∞ and the weight ̟ to the remaining 2g points (note that ̟ = ̟ † ), that it suffices to show that the monodromy representation π 1 (B, b) −→ PGL(V † l,0,̟,...,̟ ( F) b ) has an element of infinite order in its image. In order to show the last statement we consider the family of rational curves F rat 2g+1 defined in section 6.1.1. The family F which desingularizes the nodal curves F |B×{0} is a family of (2g + 3)-pointed rational curves consisting of the disjoint union of two projectve lines with 5 points 0, 1, −1, u, −u on one projective line and 2g − 2 points 0, ∞, 3, −3, . . . , g, −g on the second projective line. Next we observe that the conformal block for the projective line with 2g − 2 marked points 0, ∞, 3, −3, . . . , g, −g with the zero weight at the points 0 and ∞ and the weight ̟ at the other 2g − 4 points is non-zero. This follows from an iterated use of the propagation of vacua, the factorization rules and from the fact that dim V l,̟,̟ (P 1 ) = 1. Remark 6.10. For the convenience of the reader we recall that we have taken the family of smooth hyperelliptic curves given by the affine equation (6) for two complex numbers α and τ with α −1 and τ sufficiently small -note that α −1 and τ measure the size of the domain where the sewing elementsψ for the two families F rat 2g+1 and F hyp g converge. The parameter u varies in B as defined in (5). Then the loop ξ ∈ π 1 (B, i) which starts at i and goes once around the points −1 and 0 has monodromy of infinite order. Remark 6.11. The previous argument, which proves the theorem for the Lie algebra sl(2), fails when considering the Lie algebras sl(r) with r > 2. The main reason is the fact ̟ † = ̟ for r > 2, where ̟ is the first fundamental weight. Finiteness of the monodromy representation in genus one In this section we collect for the reader's convenience some existing results on the monodromy representation on the conformal blocks associated to the Lie algebra sl(2) for a family of onemarked elliptic curves labeled with the trivial representation. We consider the upper half plane H = {ω ∈ C | Im ω > 0} with the standard action of the modular group PSL(2, Z), which is generated by the two elements S = 0 −1 1 0 , T = 1 1 0 1 satisfying S 2 = (ST ) 3 = e. Let F denote the universal family of elliptic curves parameterized by H. We denote by V † l,0 (F ) the sheaf of conformal blocks of level l with trivial representation at the origin. 
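The two matrices S and T just recalled generate the modular group; a short check in plain integer arithmetic (our own helper, not from the paper) confirms the relations S^2 = (ST)^3 = e in PSL(2, Z), i.e. that both products equal minus the identity in SL(2, Z).

    def matmul(a, b):
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

    S = [[0, -1], [1, 0]]
    T = [[1, 1], [0, 1]]
    MINUS_ID = [[-1, 0], [0, -1]]

    ST = matmul(S, T)
    assert matmul(S, S) == MINUS_ID                # S^2 = -Id in SL(2, Z)
    assert matmul(matmul(ST, ST), ST) == MINUS_ID  # (ST)^3 = -Id in SL(2, Z)
    print("S^2 and (ST)^3 are both trivial in PSL(2, Z)")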
The sheaf V † l,0 (F ) has rank l + 1 and for each λ ∈ P l we obtain by the sewing procedure a section ψ λ over H given by the formal series where ψ is the unique (up to a multiplicative scalar) section of V † l,λ,λ,0 (P 1 ) and φ is any element in H λ . Because of Theorem 6.2 the l + 1 sections ψ λ are projectively flat for the projective WZW connection on V † l,0 (F ) and are linearly independent by the factorization rules (11). Note that this decomposition of the sheaf V † l,0 (F ) into a sum of rank-1 subsheaves corresponds to a degeneration to the nodal elliptic curve given by Im ω → ∞, or equivalently τ = exp(2iπω) → 0. Moreover by evaluating the sections ψ λ at the highest weight vector φ = v λ ∈ H λ we obtain analytic functions χ λ (ω) = ψ λ (ω)|v λ , which correspond to the character of the representations H λ : see e.g. [U] equation (4.3.1). This shows that in the genus one case the local system given by the conformal blocks with trivial marking equipped with the WZW connection coincides with the local system given by the characters χ λ (ω). Moreover the monodromy action of the modular group PSL(2, Z) on the vector space spanned by the characters {χ λ (ω)} λ∈P l has been determined. With this notation the main statement of this section is the following Theorem 7.2. The image of the representation ρ l is finite. Proof. Using the explicit expression of the matrix ρ l (U) for any element U ∈ PSL(2, Z) computed in [J] section 2, it is shown in [G] section 2 that the matrix ρ l (U) has all its entries in the set 1 2(l+2) Z[exp( iπ 4(l+2) )]. Since moreover the representation ρ l is unitary, we may deduce finiteness along the same lines as in [G] proof of corollary.
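The explicit matrices of [J] are not reproduced in this extraction. As a stand-in illustration of the unitarity that drives the finiteness argument above, one can compute the standard level-l modular S-matrix for sl(2) (the Kac-Peterson formula, imported here from standard conformal field theory references rather than from this paper) and verify that it is unitary with S^2 = Id.

    import numpy as np

    def sl2_modular_S(l):
        """Standard Kac-Peterson S-matrix for sl(2) at level l; an assumption
        taken from standard CFT references, not transcribed from [J]."""
        n = l + 2
        a = np.arange(l + 1)
        return np.sqrt(2.0 / n) * np.sin(np.pi * np.outer(a + 1, a + 1) / n)

    for l in (1, 2, 4, 8):
        S = sl2_modular_S(l)
        assert np.allclose(S @ S.conj().T, np.eye(l + 1))  # unitary
        assert np.allclose(S @ S, np.eye(l + 1))           # S^2 = Id (self-dual sl(2) weights)
    print("modular S-matrices are unitary for the tested levels")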
2012-02-13T16:07:29.000Z
2010-03-23T00:00:00.000
{ "year": 2010, "sha1": "aebe79b24111ea375037d97e89a1c584b0f8d875", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.geomphys.2012.11.003", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "d9c5c391429d753bb3bf4c7e45eeda4951c2bfdd", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
225915707
pes2o/s2orc
v3-fos-license
FACTORS ASSOCIATED WITH DELAYED AMBULANCE RESPONSE TIME IN HOSPITAL Ambulance response time is the key performance for ambulances services. The objective of this study is to determine the factors associated with delayed ambulance response time in Hospital Universiti Sains Malaysia. This was a cross sectional study conducted in Emergency Department Hospital Universiti Sains Malaysia between January 2016 to January 2017. A total of 300 samples had been collected by ambulance paramedic using validated ambulance form ‘Borang Sela Masa Tindak Balas Ambulans’. All ambulance forms with missing data were excluded in this study. Of 300 cases of emergency ambulance call cases, there were 254 cases (84.7%) of delayed ambulance response time. Current ambulance response time is 14 minutes with interquartile range of 5 minutes. Factors which showed significant association delayed ambulance response time include distance, location type and ambulance mechanism. The odd of delayed ambulance response time by every increase in distance unit was 1.59 (95% CI, 1.37 to 1.85). For location type, the odd of delayed ambulance response time for public location as compared to road was 0.13 (95% CI, 0.04 to 0.45). For ambulance mechanism, the odd of delayed ambulance response time for beacon type as compared to siren type was 0.22 (95% CI, 0.01 to 0.69). Distance, location type and ambulance mechanism showed significant association with delayed ambulance response time. Further intervention should be derived to improve current ambulance response time. INTRODUCTION Ambulance service is one of the components of prehospital service. Response time is crucial in managing medical and trauma emergencies such as cardiac arrest, airway obstruction, severe haemorrhage, severe chest or head injury 1 . This was proven particularly for out-of-hospitalcardiac arrest 2 and trauma victims in urban settings 3 . The effectiveness of ambulance service is characterised by the following two measures of performance: response time and service time 4 . The shorter the time intervals, the more effective the system 4 . Ambulance response time (ART) is defined as the period between emergency call received and ambulance arrival at scene 5 . Current recommendation of ambulance response time in response to medical emergencies is within 8 minutes for at least 90% ambulance calls 6 . This response time had evolved into a guideline that had been incorporated into operating agreements for many emergency medical service providers 7 . At present, ambulance services in Malaysia are provided countrywide by governmental and nongovernment bodies 8 Ambulance, Red Crescent and some at private hospitals 8 . Most government ambulances are based in hospital facilities 9 . In EDHUSM, ambulance services are run under hospital based system whereby all ambulances are located in hospital compound (near Department of Emergency Medicine) 10 . Ambulances will be despatched to scene site as soon as possible once emergency calls were received. Since January 2005, an Emergency Medical Dispatcher (EMD) squad was launched in EDHUSM 10 . This is a dedicated unit which consist of ambulance crews who were trained under EMD course that was modified from Emergency Medical Services Authority of California to become EMD to manage all ambulance calls in EDHUSM 10 . These EMD personnel are hospital attendants who had successfully passed their EMD training and managed all ambulance calls which includes call taking and responding to ambulance calls 10 . 
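The response-time definition above (the interval between the moment the emergency call is received and arrival at the scene) and the 8-minute / 90% benchmark translate directly into a computation. Below is a minimal sketch; the timestamp fields and the two example records are hypothetical and are not the layout of the study form.

    from datetime import datetime

    # Hypothetical records; field names are ours, not from the study form.
    calls = [
        {"call_received": "2016-03-01 08:02", "scene_arrival": "2016-03-01 08:14"},
        {"call_received": "2016-03-01 13:40", "scene_arrival": "2016-03-01 13:47"},
    ]

    def response_minutes(call):
        fmt = "%Y-%m-%d %H:%M"
        delta = (datetime.strptime(call["scene_arrival"], fmt)
                 - datetime.strptime(call["call_received"], fmt))
        return delta.total_seconds() / 60.0

    times = [response_minutes(c) for c in calls]
    prop_within_8 = sum(t <= 8 for t in times) / len(times)
    print(f"response times (min): {times}; proportion within 8 min: {prop_within_8:.0%}")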
In addition to this EMD team, a team called Rescue 991 under Angkatan Pertahanan Awam Malaysia (APM) (specialised government body established to assist disaster and emergency event) were located in EDHUSM since 2000. Their main purpose was to extend social work services to public including ambulance services 11 . This is a unique team which only exists in EDHUSM. A study regarding ART in Kelantan was conducted in 2004 by Shaharudin et al 10 concluded that ART in EDHUSM Kelantan was 15.2 minutes. Only 40% of the total ambulance call in EDHUSM responded within 8 minutes. This is far from international standard of criteria of 8 minutes or less in at least 90% of ambulance calls. Factors that needs to be considered to achieve the international standard includes type of ambulance service, sociodemographic patterns, geographical differences and public behavioural influences toward good behavioural practices 9,10 . The objective of this study was to determine factors that contribute to delay ambulance response time. METHODOLOGY Setting, study design, sample size determination This was a cross sectional study conducted in HUSM between January 2016 to January 2017. The average ambulance call received by EDHUSM was 600 to 700 per year 10 . In order to determine the sample size for this study, data on three factors that were determined to be significant in affecting ambulance response time based on previous study 12 were used in calculation. Alpha level of 0.05 and statistical power of 0.8 were used. A sample size of 300 including 30% drop-out rate was determined. Data collection and processing All data were collected by ambulance paramedics of HUSM using a standardized form 'Borang Sela Masa Tindak Balas Ambulans'. These ambulance paramedics were enrolled in the study based on voluntary basis. They were briefed regarding the purpose of the study using 'Borang Maklumat Kajian' and a written consent form 'Borang Keizinan' were given to them. Only those consented ambulance paramedics were involved in the study. Ambulance form with missing data were excluded in the study. All ambulance forms were completed by paramedics after attending each ambulance calls during the study periods. The ambulance forms that were used in this study were validated earlier 23 . The ambulance forms consist of 6 sections: (1) Call receiver and biography of ambulance team (2) Call time (3) Patients information (4) Route to location (5) Ambulance specification (6) Geographical factor. Call receiver were EMD, medical assistant, staff nurse, doctor or others. Biography data of ambulance team consist of their working experiences, highest academic achievement, age and gender. Call time were recorded as international time and include call receiving time, team activation time, ambulance despatch time and scene arrival time. Ambulance response time was measured as the time between scene arrival time and call received time. Patients information encompasses zone that they were triaged to upon arrival to hospital (critical-red, semi criticalyellow or noncritical-green) and the location of incident (road, housing area, public area, working place or others). Route to location include congested or smooth (non-congested) route and these parameters were determined subjectively by ambulance driver. Ambulance specification were ambulance brands (Toyota, Mercedes, Aveco, Ford), ambulance type (A, B, C) and ambulance warning system (siren, beacon, public announcement system). 
Geographical factors consisted of precision of location, distance from the hospital to the location, and other factors (such as flood, landslide, or heavy rain). Precise location was defined as the ambulance arriving at the correct location matching the address given by the caller.

Statistical Analysis
Statistical analysis was performed using IBM SPSS version 22. A p value < 0.05 was considered statistically significant. Categorical variables were summarised using percentages and compared using the Chi-square test. Mean values of numerical variables between two groups were compared by Student's t-test. Logistic regression was used to identify factors associated with delayed ambulance response time and to estimate odds ratios (OR) and 95% confidence intervals (CI) for the associations between variables. Variables with p values < 0.25 were introduced into the multivariate logistic regression model. A manual backward stepwise approach was used to remove non-significant variables; only variables with p < 0.05 were retained in the final model.

Ethical Issues
Only consented ambulance paramedics were involved in this study. This study obtained ethical approval from the Human Research Ethics Committee of USM [USM/JEPeM/15110497].

Simple Logistic Regression on Factors Associated with Delayed Ambulance Response Time
The factors which showed a significant association with delayed ambulance response time were distance (p < 0.01), location type (p < 0.01) and ambulance mechanism (p < 0.01). The odds of delayed ambulance response time for every unit increase in distance were 1.59 (95% CI, 1.37 to 1.85). For location type, the odds of delayed ambulance response time for a public location compared with the road were 0.13 (95% CI, 0.04 to 0.45). For ambulance mechanism, the odds of delayed ambulance response time for the beacon type compared with the siren type were 0.22 (95% CI, 0.01 to 0.69). Analyses of the associated factors for delayed ambulance response time by simple logistic regression are summarized in Table 2.

Multiple Logistic Regression on Factors Associated with Delayed Ambulance Response Time
Analyses of the significant associated factors for delayed ambulance response time by multiple logistic regression are shown in Table 3 (p < 0.05 was considered significant). The goodness of fit of the model was checked using the Hosmer-Lemeshow test, which gave no evidence of lack of fit. In this analysis, only increasing distance showed a significant association with delayed ambulance response. Hence, the three associated factors are more likely to be independently associated with delayed ambulance response time than to act as a group.

In our study, we investigated both geographical and mechanical factors that contributed to delayed ART. Among the geographical factors, distance and location were associated with delayed ambulance response time. The percentage of ART within 8 minutes was higher (7%) when distances were within 8 km, compared with calls beyond the 8 km radius (0.7%). This is in agreement with Breen et al (2000), who reported that ambulance calls responding to emergencies more than five miles away from the nearest ambulance station had less than a 5% likelihood of a response within 8 minutes 1 . The median distance of response by HUSM's ambulances was 7.8 km. This exceeded the requirement of the department standard operating policy, which is a 6 km radius 24 from HUSM.
The reason for this is a lack of manned vehicles from the nearby ambulance base station, which was also one of the factors identified as influencing response time performance by Breen et al 1 . Paramedics who were involved in ambulance calls from this ambulance base station were in fact staff involved in other daily work in their hospitals. Therefore, an EMD program should be implemented in order to create a dedicated emergency ambulance team to manage ambulance calls. Prioritised dispatching (giving priority to cases needing urgent paramedic care or urgent transport to hospital) has been shown to be an effective strategy for ambulance services 14 . Ambulance responses within 8 minutes were higher, and the odds of dying were 1.4% lower, when a priority dispatch system was used for ambulance deployment 15 .

Another geographical factor that contributed to delayed ambulance response time was the type of location. Among the studied locations, ART to public places (schools, markets, commercial places) was longer compared with other locations. This might be due to bystander interference and physical barriers (stairs, elevators) making access to the scene difficult. These two factors are among those reported to affect the interval between ambulance arrival at the scene and ambulance personnel reaching the patient 16 . Incidents that occur in high-rise buildings also contribute to delayed response time, because of the vertical response time (the need to climb stairs or board elevators) required for the paramedic team to reach the patient's side 17 . Lateef et al (2000) reported that the problems encountered in high-rise buildings were multiple stops of the elevator for use by the public, preset elevator stops, all elevators being in use, lack of directions, and inadequate space in the stairwell or the elevator prohibiting use of a stretcher 18 . Therefore, building designs that take emergency access into consideration should be enforceable in all high-rise buildings. Not only can ambulance response time be improved, but the safety of paramedics and patients can also be ensured.

In this study, we observed that use of the beacon light alone was associated with delayed ambulance response time; the odds of delayed ambulance response time when using the beacon light alone were 0.22 (i.e., 22% of the odds with siren use). This contrasts with Brown et al (2000), who reported that lights and siren (L&S) reduce ambulance response time by 1 minute 46 seconds, a statistically significant time saving. However, this time saving is clinically relevant in only very few cases, and a larger multi-centre L&S trial is required to address this issue 19 . A study conducted by Brien et al (1998) also showed that L&S shortens transport time, but the time saved was not associated with immediately apparent clinical significance. In addition, L&S has been reported as a direct cause of emergency vehicle crashes 21 , hence the time saving with L&S should be balanced against the risks associated with its use.

Limitation
Sample data were completed by the ambulance paramedics who responded to ambulance calls; therefore bias and incomplete documentation were unavoidable.

The current ambulance response time in Hospital Universiti Sains Malaysia was 14.1 minutes, showing that we have still not reached the international standard for ambulance response time. Distance, location type and ambulance mechanism showed significant associations with delayed ambulance response time. Among these factors, distance was found to have the largest effect.
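The modelling strategy described in the statistical analysis section (univariable screening at p < 0.25, then manual backward elimination keeping only terms with p < 0.05) was carried out in SPSS; a rough Python sketch of the same workflow is given below for orientation. The data frame, column names and file name are hypothetical, the backward step is simplified to dropping one least-significant term at a time, and the Hosmer-Lemeshow check would need to be coded separately since statsmodels does not provide it.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame: one row per ambulance call, with a binary outcome
    # `delayed` (ART > 8 min) and candidate predictors similar to those in the paper.
    df = pd.read_csv("ambulance_calls.csv")  # placeholder file name

    candidates = ["distance_km", "C(location)", "C(mechanism)", "C(route)"]

    def term_pvalue(fit, term):
        """Smallest Wald p-value among the coefficients belonging to `term`
        (a crude stand-in for a likelihood-ratio test on the whole term)."""
        key = term.replace("C(", "").replace(")", "")
        return min(p for name, p in fit.pvalues.items() if key in name)

    # Univariable screening at p < 0.25
    screened = []
    for term in candidates:
        fit = smf.logit(f"delayed ~ {term}", data=df).fit(disp=False)
        if term_pvalue(fit, term) < 0.25:
            screened.append(term)

    # Simplified manual backward elimination at p < 0.05
    terms = list(screened)
    fit = smf.logit("delayed ~ " + (" + ".join(terms) or "1"), data=df).fit(disp=False)
    while terms:
        worst = max(terms, key=lambda t: term_pvalue(fit, t))
        if term_pvalue(fit, worst) < 0.05:
            break
        terms.remove(worst)
        fit = smf.logit("delayed ~ " + (" + ".join(terms) or "1"), data=df).fit(disp=False)

    print(np.exp(fit.params))  # adjusted odds ratios for the retained terms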
2020-06-11T09:09:27.260Z
2020-05-01T00:00:00.000
{ "year": 2020, "sha1": "0bf79eeced7649ffd4eb1a16c3ed2b0108696431", "oa_license": "CCBYNC", "oa_url": "http://www.mjphm.org/index.php/mjphm/article/download/551/87", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "020edb5870bcd17b243cc96bbe268d7c9a764ace", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
218617784
pes2o/s2orc
v3-fos-license
Demographic and socioeconomic characteristics of Canadian medical students: a cross-sectional study. BACKGROUND While the importance of medical students' demographic characteristics in influencing the scope and location of their future practice is recognized, these data are not systematically collected in Canada. This study aimed to characterize and compare the demographics of Canadian medical students with the Canadian population. METHODS Through an online survey, delivered in 2018, medical students at 14 English-speaking Canadian medical schools provided their age, sex, gender identity, ethnicity, educational background, and rurality of the area they grew up in. Respondents also provided information on parental income, occupation, and education as markers of socioeconomic status. Data were compared to the 2016 Canadian Census. RESULTS A total of 1388 students responded to the survey, representing a response rate of 16.6%. Most respondents identified as women (63.1%) and were born after 1989 (82.1%). Respondents were less likely, compared to the Canadian Census population, to identify as black (1.7% vs 6.4%) (P < 0.001) or Aboriginal (3.5% vs. 7.4%) (P < 0.001), and have grown up in a rural area (6.4% vs. 18.7%) (P < 0.001). Respondents had higher socioeconomic status, indicated by parental education (29.0% of respondents' parents had a master's or doctoral degree, compared to 6.6% of Canadians aged 45-64), occupation (59.7% of respondents' parents were high-level managers or professionals, compared to 19.2% of Canadians aged 45-64), and income (62.9% of respondents grew up in households with income >$100,000/year, compared to 32.4% of Canadians). Assessment of non-response bias showed that our sample was representative of all students at English-speaking Canadian medical schools with respect to age, though a higher proportion of respondents were female. Additionally, there were no differences between early and late respondents with respect to ethnicity, rurality, and parental income, occupation, and education. CONCLUSIONS Canadian medical students have different socioeconomic characteristics compared to the Canadian population. Collecting and analyzing these characteristics can inform evidence-based admissions policies. Background Medical students differ from the general population with respect to socioeconomic status (SES), ethnicity, and rural background [1][2][3]. These differences may contribute to inequities in access to care, as many medical trainees go on to care for populations with whom they have shared life experiences and are comfortable serving [4][5][6][7]. The Association of Faculties of Medicine of Canada (AFMC) has called for medical schools to diversify their student population to more closely represent the Canadian population [8]. While several initiatives to respond to this call are underway, there is a lack of data on student demographics to inform future initiatives, support policy changes, and track progress [9]. In Canada, entry into medical school generally encompasses eligibility criteria, academic performance, application components such as essays and reference letters, and interviews [10]. Eligibility criteria can include a required number of years of undergraduate education and completion of the Medical College Admissions Test (MCAT). There are several distinctions between schools in Québec and those outside. 
Québec schools require a diploma or degree equivalent to a College studies diploma given by the Québec Ministry of Education, rather than an undergraduate degree [10]. Québec schools also do not require the MCAT. Outside of applications, Québec schools have seen a relatively modest rise in tuition compared to schools elsewhere. For example, tuition at the University of Toronto has risen from $3222/ year in 1994 to $29,030/year in 2019. Conversely, tuition fees at the University of Montréal have risen from $2286 to $3601 for Quebec residents, and $11,193 for all other Canadians [1,10]. Within this Canadian application system, equitable access to medical school may impact applicant pools, medical class composition, and future patient care. Physicians who are part of visible minority populations backgrounds tend to treat traditionally underserved patients and serve in areas of physician shortage [11][12][13][14][15][16][17][18]. Students with low socioeconomic backgrounds and those who grew up in rural communities are more likely to serve communities with similar backgrounds and/or demographic characteristics [19][20][21][22][23][24][25]. The potential benefit to underserved communities has brought medical school admissions into the realm of social accountability. Indeed, Health Canada and the AFMC have highlighted the role of enhancing admissions processes to achieve the desired diversity in the physician workforce [26,27]. Several Canadian medical schools have increased efforts to recruit underrepresented students, such as the Northern Ontario School of Medicine's recruitment of students with aboriginal backgrounds [28] and the University of Calgary's Pathways to Medicine Program, which aims to support the enrollment and success of future medical students from traditionally under-represented groups throughout Alberta [29]. Additionally, the University of Toronto recently developed a Black Student Application Program, with the goal of increasing and supporting Black medical student representation [30]. While some schools track applicant demographic characteristics, there has been no national characterization of the demographics of Canadian medical students since 2007 [1]. In the 2007 analysis, investigators found substantial disparities between medical students and the Canadian population with respect to socioeconomic status. Given the implications of these demographic disparities on access to care and the current shortage of physicians in Canada, it is important to systematically track such demographic data. In this study, we aimed to characterize the demographics of students at Englishspeaking Canadian medical schools through a nationallyadministered survey, and to compare them to the Canadian population. Methods We conducted a cross-sectional study on the demographics of students at English-speaking Canadian medical schools in 2018 through an online survey. We adapted the study methodology from previous studies on this topic [1,2]. We coordinated the project with student leaders from the Canadian Federation of Medical Students (CFMS), which represents 14 of 17 Canadian medical schools. We excluded students from the other three Canadian medical schools, based in Quebec, for two reasons. First, previous studies have postulated that these students have distinctive demographic characteristics compared to medical students from Englishspeaking schools [1,3], given their younger average age at matriculation and substantially lower tuition fees [31,32]. 
Additionally, we did not have a reliable method of reaching these students as they are not represented by the CFMS. Survey design Through our survey, we aimed to capture information on the following demographic characteristics: ethnicity, gender identity, sex assigned at birth, socioeconomic status, and rurality of the area respondents grew up in. Additional elements of the survey included questions regarding characteristics and behavior after entering medical school including: debt burden, preference of future specialty and practice location, and the perceived impact of demographic and financial factors on future practice. The results of those post-admission questions are not reported in this study. Instead, we focus on the demographics of students admitted to Canadian medical schools, with plans to publish further analysis of all study data. We hosted the survey on an online survey platform Simple Survey (OutSideSoft Solutions Inc., Quebec, Canada). The complete survey and explanations of questions is available as Additional file 1. We used two previous surveys as starting points to improve content validity and allow for direct comparisons to other populations: The 2016 Canadian Census [33], and a previously validated survey addressing this research topic [2]. Most individual survey items were taken verbatim from the Canadian Census. For rurality, and parental occupation, we used classifications different from the census, which is detailed below in the section Survey Content. We piloted the survey with 16 medical students from across Canada and subsequently altered wording for certain questions to improve clarity and applicability to the medical student population. Survey content We collected data on respondents' ages, year of medical school, and level of education prior to medical school. We also asked about ethnicity using the same terms used in the Canadian census: Aboriginal, Arab, Black, Chinese, Filipino, Korean, Japanese, Latin American, South Asian (e.g. Indian, Bangladeshi, Sri Lankan), Southeast Asian (e.g. Cambodian, Indonesian, Thai), West Asian (e.g. Iranian), and "other". Participants could choose more than one ethnicity. We also asked students about the size of the community they grew up in, using the 2016 Statistics Canada Population Centre and Rural Area classification [34]. A rural area was defined as having a population of < 1000 people, small and medium population centres as having populations of 1000-99, 999, and large urban population centres as having populations of ≥100,000 [34]. To compare participants' socioeconomic status to the Canadian population, we asked about three commonly used and well-validated markers of socioeconomic status: parental income, occupation, and education level [35][36][37][38]. For parental income and education level, we used similar income brackets and diploma or degree classifications respectively as the 2016 census [33]. For parental education level, we used a modified version of the Pineo-Porter Occupational Scale as has been used by Dhalla et al. [2,39]. Survey delivery We contacted medical students through class email lists. The emails included information on the purpose of the study, contact information for the research team, the nature of voluntary participation, and a link to the survey. No individual emails were collected, used, or stored at any point during the study. After the initial email, participants received three biweekly reminder emails. 
The CFMS also promoted the survey through their social media accounts (Facebook and Twitter), and student leaders at individual schools delivered class announcements. The survey was open for a total of 10 weeks in spring 2018 to ensure coverage of different examination and vacation schedules. Analysis We imported questionnaire data directly from Simple-Survey software into SPSS Version 24 (IBM, Armonk, NY). We removed participants who declined to complete the survey at the informed consent step, and surveys which were started but not answered. When two or more consecutive surveys had identical answers and the former survey(s) had fewer questions completed, we assumed that this was the same participant who accessed the survey more than once. In these cases, we only considered the final response. We used descriptive statistics to summarize responses to all questions and chi-squared tests to detect differences in characteristics of survey respondents and the general Canadian population via the 2016 Comprehensive Census. Assessment of nonresponse bias We performed post-hoc analyses to assess for nonresponse bias using two approaches [40]. First, we compared our data on age and sex to the 2017 Canadian Medical Education Statistics (CMES) report published by Association of Faculties of Medicine of Canada, a dataset which represents the entire Canadian medical student population [31]. For age, we compared our fourth-year respondents to graduating medical students in CMES, the only group for whom age was available. For sex, we compared results from our question "sex assigned at birth" to the listed sex of the CMES 2017 population from all years of medical school. We restricted the above analyses to students from English-speaking Canadian medical schools. Second, we compared the first 100 respondents to the last 100 respondents, with the assumption that late respondents are more similar to nonrespondents [41]. We compared ethnicity, rurality, and parental income, education, and occupation between these groups. Finally, to assess non-response bias based on a respondent's year of medical school, we compared answers between first and fourth years. We undertook this analysis after observing substantial variability in response rates between respondents of different medical school years. For these two groups, we compared ethnicity, rurality, and parental income, education, and occupation. Results A total of 1388 students from 14 Canadian medical schools responded to our survey. Based on the total population of students at English-speaking medical schools stated in the 2017 CMES report [31], we had a response rate of approximately 16.6%. The characteristics of Canadian medical students in the study group are described below in comparison with the Canadian Census. The findings are summarized in Table 1. There were 451 (32.5%) respondents from first year of medical school, 421 (30.3%) from second year, 295 (21.3%) from third year, and 221 (15.9%) from fourth year. With respect to respondents' education prior to medical school, 33 (2.4%) attained doctorate degrees, 10 (0.7%) attained other professional degrees, 320 (23.1%) attained master's degrees, 899 (64.7%) obtained bachelor's degrees, and 126 (9.1%) had diplomas or degrees below a bachelor's degree. Among all respondents, 155 (11.1%) spent more than 6 years in post-secondary education prior to medical school, 469 (33.8%) spent 5-6 years, 583 (42.0%) spent 4 years, and 181 (13.0%) spent fewer than 4 years. 
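The duplicate-handling rule described in the Analysis subsection (when consecutive surveys carry identical answers and the earlier attempt is less complete, keep only the final response) can be made concrete. The sketch below uses a simplified record structure of our own, not the SimpleSurvey export format.

    def deduplicate(responses):
        """Keep only the final attempt when consecutive responses look like the same
        participant re-entering the survey: the earlier attempt answers fewer (or the
        same number of) questions and agrees with the later one wherever both answered.
        `responses` is an ordered list of dicts mapping question -> answer (None = blank)."""
        kept = []
        for current in responses:
            if kept:
                prev = kept[-1]
                prev_answered = {q for q, a in prev.items() if a is not None}
                cur_answered = {q for q, a in current.items() if a is not None}
                agrees = all(prev[q] == current[q] for q in prev_answered & cur_answered)
                if agrees and len(prev_answered) <= len(cur_answered):
                    kept[-1] = current  # same respondent: keep the later, fuller attempt
                    continue
            kept.append(current)
        return kept

    # Example: an abandoned first attempt followed by a completed one collapses to one record.
    print(len(deduplicate([
        {"age": "24", "sex": "F", "income": None},
        {"age": "24", "sex": "F", "income": ">$100,000"},
    ])))  # -> 1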
The descriptive statistics that follow use the Canadian Census as a comparator. Ethnicity Ethnicities differed significantly between respondents and the general population (P < 0.001, χ 2 = 169, dF = 5) ( Table 2). Respondents from our survey were more likely to identify as South Asian (P < 0.001) and Chinese (P < 0.001), and less likely to identify as black (P < 0.001), Aboriginal (P < 0.001), and white (P < 0.001) when compared to the census population. Rurality A total of 1351 (97.3%) of our respondents answered a question about the size of the area they primarily grew up in. There were 864 (62.2%) who grew up in large urban centres, defined as a population of 100,000 or more, 398 (28.7%) who grew up in a small or mediumsized centre, defined as a population of 1000-99,999, and 89 (6.4%) who grew up in a rural area, defined as a population of less than 1000. In comparison, 59.6% of 2016 census respondents lived in a large urban centre, 21.7% in a small or medium-sized centre, and 18.7% in a rural area. The proportions differed significantly between survey respondents and the Canadian population, with Canadian medical students more likely to have grown up in urban centres (P < 0.001) and small or mediumsized centres (P < 0.001), and less likely to have grown up in a rural area (P < 0.001). Household income Respondents from our survey had significantly different household incomes compared to the Canadian population (P < 0.001, χ 2 = 618, dF = 4) ( Table 5). Respondents were more likely to come from high-income households, with 62.9% of respondents indicating household income of greater than $100,000 CAD compared to 32.4% of the census population (P < 0.001). Assessing non-response Bias We compared respondents in our survey to the entire population of students at English-speaking Canadian medical schools based of CMES 2017. We found no differences in age among graduating students. We did, however, find that students in our survey were more likely to have selected "Female" as the sex assigned at birth, compared to the CMES population (Additional file 2). When comparing early to late respondents, defined as the first and last 100 respondents respectively, we found no differences with respect to ethnicity, rurality, and parental income, occupation, and education (Additional file 2). When comparing first year respondents and fourth year respondents, we found no differences with respect to ethnicity, rurality, and parental income, occupation, and education. Discussion We found several important differences between students from English-speaking medical schools in Canada and the general Canadian population. Medical students, compared to the census population, are more likely to have grown up in high-income households and have parents who are professionals with high levels of formal education. Medical students are less likely to be black, Aboriginal, and to have grown up in a rural setting. Our data add to numerous previous reports, dating back to the 1960s, of such disparities [1,2,42]. Accurately comparing our findings to earlier surveys conducted in 2001 and 2007 remains challenging due to our low response rate, changes in the broader Canadian population, and the capture of data from medical students from Quebec in previous studies [1,2]. In our study, 62.9% of our respondents came from households earning more than $100,000 per year, compared to 46.7% in a 2007 survey and 36.5% in a 2001 survey [1]. 
Conversely, 7.5% of students in our survey came from households earning less than $40,000 per year, compared to 12.8% in 2007 and 17.6% in 2001 [1]. These income data, however, should be interpreted cautiously due to inflation and rising average incomes in Canada. In light of the low response rates and historical changes in income, further comparisons with the 2001 and 2007 data must remain qualitative; they suggest that there may be increasing matriculation of students who are the children of highly educated professionals, including physicians.

There may be several underlying reasons for this socioeconomic disparity. First, increasing tuition fees may affect enrollment patterns, as average first-year tuition fees at English-speaking medical schools in Canada have risen from $12,512 in 2007 to $18,594 in 2017 [31,32]. A 2008 analysis of tuition deregulation in Ontario found that increasing tuition fees are associated with increased enrollment of students whose parents hold a graduate or professional degree [43]. Additionally, an increase in medical school tuition is associated with matriculation of fewer students from low-income families [3] and increasing socioeconomic status of enrolled students [44]. Conversely, schools with lower tuition fees are more likely to have students from low-income neighborhoods [1].

[Table notes: parental comparisons were based on the 2016 Canadian Census and a modified Pineo-Porter Scale using the Census National Occupation Classification; 38-41 students did not provide their father's or mother's occupation.]

In addition to the potential impact of increasing tuition fees, increasing competition for a limited number of seats at medical schools may favor applicants with higher socioeconomic status [45]. Factors such as grade-point average and the MCAT are often weighted heavily for their perceived validity [45,46]. While these measures have been shown to predict performance in medical school [47,48], the advent of expensive test-preparation courses has commercialized the admissions process [49]. Furthermore, the emphasis on personal factors such as leadership, commitment to service, and volunteerism can create additional bias [45,50]. Applicants facing socioeconomic barriers may be unable to access experiences that emphasize these qualities, or may be compelled to eschew such opportunities in favor of paid employment.

Encouragingly, many schools are attempting to make progress in this area. While only 3.5% of respondents in our survey were Aboriginal, this figure may improve: all 17 medical schools across Canada recently made a commitment to ensure matriculation of a minimum number of students from Aboriginal communities [51]. Additionally, many of our respondents grew up in small, medium-sized, or rural communities, which may reflect recent efforts to recruit individuals from Aboriginal [26,28] and rural communities [4].

Limitations

Our study has several important limitations. First, we had a low response rate compared to previous studies of this kind. This biased our results towards more responses from female participants, as shown in our assessment of nonresponse bias. Our survey population, however, was representative in age, and there were no differences between early and late respondents with respect to ethnicity and markers of socioeconomic status.
Thus, a low response rate alone should not be considered a marker of poor validity [40,52,53]. Second, our survey was voluntary and relied on self-reported data with no secondary verification, creating the opportunity for convenience, recall, and misclassification biases. We did, however, pledge anonymity and confidentiality to respondents and are not aware of any reason for them to systematically provide dishonest answers. Third, it remains possible that some participants accidentally responded more than once. However, once we removed suspected duplicate entries, as detailed in our Methods section, no two individual surveys had identical answers. Fourth, the generalizability of our results is limited because our survey was not sent to students at French-speaking medical schools, who are known to have differing demographics compared to their colleagues at English-speaking schools [1,3]. Finally, we did not collect data on which individual school participants were from, which creates the possibility that certain schools are over- or under-represented.

Implications

We emphasize caution in the interpretation and generalization of our results, given the above limitations. Additionally, the relatively small samples of certain populations, such as the 23 respondents who identify as Black and the 10 respondents who attained a professional degree prior to medical school, make these particular subgroup comparisons challenging to interpret. Within these limitations, our data have several implications for medical education and health policy in Canada. A widening socioeconomic disparity between physicians-in-training and their future patients may exacerbate inequities in access to care. A large body of evidence suggests that medical students from traditionally disadvantaged backgrounds, such as those who are part of visible minority populations [11-18] or who have rural or low socioeconomic backgrounds [19-25], are more likely to practice in areas with physician shortages.

Inequities in medical school admission pose a 'wicked' political problem [54]. Addressing such inequities in the admissions process will take a large, coordinated effort. The first step in this effort is the collection and dissemination of data on medical school applicants and matriculants. While student-initiated research in this domain, such as our survey, is a meaningful step, such efforts are sporadic and limited in scope; indeed, our findings were substantially limited by the low response rate. Improving the quality of these data will require partnership between students, faculty, and funding bodies to systematically and continuously track the educational outcomes and future practice locations of medical students from differing backgrounds [55]. Incorporating data on medical school applicants, in addition to matriculants, may further strengthen this evidence base. Admittedly, simply collecting more data will not solve the problem of the socioeconomic gap between physicians and their patients. The availability of these data, however, can allow researchers, faculties of medicine, and governmental funding organizations from across the political spectrum to define the nature of the problem and adopt a more evidence-based approach to admissions policies.

Conclusions

Through a cross-sectional survey conducted in 2018, we found that students at English-speaking Canadian medical schools have, on average, substantially higher socioeconomic status compared to the Canadian population.
Compared to previous studies on this topic, the socioeconomic gap between medical students and the broader Canadian population appears to be widening. Addressing this complex issue will require a coordinated effort between students, medical schools and faculty, and funding bodies.

Authors' contributions

approved the submitted version and agreed to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. TA designed the study, acquired the data, and substantially revised the manuscript. TA approved the submitted version and agreed to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. JHK analyzed and interpreted the data, and substantially revised the manuscript. JHK approved the submitted version and agreed to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. JG designed the study, acquired the data, and substantially revised the manuscript. JG approved the submitted version and agreed to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. SS interpreted the data, and drafted and substantially revised the manuscript. SS approved the submitted version and agreed to be personally accountable for their own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature.

Funding

This project was supported financially by the Canadian Federation of Medical Students. The funding body assisted with data collection, but had no role in study design, data analysis or interpretation, or writing or editing of the manuscript.

Availability of data and materials

The following datasets used in the study are publicly available at the following links:
a. Canadian Medical Education Statistics 2007: https://afmc.ca/sites/default/files/documents/en/Publications/CMES/Archives/CMES2007Vol29.pdf
b. Canadian Medical Education Statistics 2017: https://afmc.ca/sites/default/files/CMES2017-Complete.pdf
c. 2016 Canadian Census: https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/index-eng.cfm
The dataset of Canadian medical students from the 2018 survey is not available due to concerns regarding compromise of individual privacy.

Ethics approval and consent to participate

The Western University Research Ethics Board (REB: 109258) provided ethics review and approval for this study. Written informed consent was obtained from all participants.

Consent for publication

Not applicable.