257477710
pes2o/s2orc
v3-fos-license
Novel Approach for Glycemic Management Incorporating Vibration Stimulation of Skeletal Muscle in Obesity Because obesity is associated with impaired glucose tolerance and type 2 diabetes (T2D), it is important to manage the blood glucose level at an early stage. Nevertheless, people with obesity have significantly lower resistance to muscle fatigue after exercise and exercise adherence. Therefore, we developed a novel “Relaxing-Vibration Training (RVT)” consisting of 25 postures using vibration stimulation of skeletal muscle and determined the feasibility of RVT for glycemic management. Thirty-one participants with obesity were enrolled in a controlled trial (CT) and experimental trial (ET) based on a 75 g oral glucose tolerance test (OGTT). During the CT, participants were required to rest in a quiet room. During the ET, the RVT program (50 Hz, 4 mm), consisting of 25 postures of relaxation and stretching on the vibratory platform, was performed for 40 min. Subsequently, the participants rested as in the CT. Subjective fatigue and muscle stiffness measurements and blood collection were conducted before and after RVT. In both the CT and ET, interstitial fluid (ISF) glucose concentrations were measured every 15 min for 2 h. The incremental area under the curve value of real-time ISF glucose during an OGTT was significantly lower in the ET than in the CT (ET: 7476.5 ± 2974.9, CT: 8078.5 ± 3077.7, effect size r = 0.4). Additionally, the levels of metabolic glucose regulators associated with myokines, muscle stiffness, and subjective fatigue significantly improved after RVT. This novel RVT suggests that it is effective in glycemic management with great potential to improve impaired glucose tolerance and T2D with obesity in the future. Introduction The increasing prevalence of obesity directly and indirectly contributes to increased morbidity and mortality, including type 2 diabetes (T2D) [1]. Considering that impaired glucose tolerance (IGT) and T2D are paradigms of obesity-related disorders, it is necessary to manage the blood glucose metabolism in people with obesity at an early stage. Exercise and diet restriction have been recommended as crucial preventive treatment measures. Exercise is recognized to be more effective in the prevention and improvement of insulin resistance and T2D in the long term because it increases mitochondrial biogenesis and improves glucose tolerance and insulin action [2]. However, people with obesity have significantly lower muscle fatigue resistance after exercise than those with non-obesity [3]. Additionally, patients with T2D have greater muscle fatigability in both lower and upper body muscles [4]. Even though exercise is effective in regulating glucose metabolism, most people with obesity maintain a sedentary lifestyle, and exercise adherence is low because of physical limitations, musculoskeletal discomfort, and physical and psychological fatigue [5]. Therefore, there is a need for an alternative to conventional exercise that is relaxing, safe, easy to continue, and effective. Recently, vibration training (VT), which contracts and relaxes skeletal muscles by vibration stimulation without a mass load or dynamic exercise, has received considerable attention. This mechanical stimulation uses proprioceptive spinal reflexes to induce an amyotrophic stretch reflex mediated by the muscle spindle and a type-Ia sensory fiber, thereby facilitating activity of homonymous α-motor neurons [6]. 
It has been proved that this technique has a similar benefit to resistance exercise based on voluntary muscle contraction [7]. VT is effective in plasma glucose regulation because it increases glucose utilization by causing voluntary muscle contraction through vibration stimulation [8]. In addition, it has been reported to be effective in improving glycemic indicators and lipidrelated cardiovascular risk factors [9]. However, most VT was designed as an alternative to resistance exercise movements such as squats and lunges, which are accompanied by muscle pain, and fatigue, as we have previously experienced [10,11]. Therefore, the former approach may also be unsuitable for those who do not favor exercise and those with limitations in mobility and posture among people with obesity. On the basis of these considerations, in this pilot study, we aimed to advance a previously general VT that requires high-intensity resistance posture and developed a novel "Relaxing Vibration Training ( R VT)" program that can be performed more comfortably and safely. As a first stage, we conducted an acute trial to verify the feasibility of a novel R VT for glycemic management. Using interstitial fluid (ISF) glucose concentrations during an oral glucose tolerance test (OGTT), blood markers, and muscle stiffness and fatigue were determined in middle-aged and older adults with obesity. We hypothesized that our practice of R VT has a more positive effect on subjective and biochemical indicators and has potential viability as a glycemic management program. The primary outcome was a change in the ISF glucose concentration, and the secondary outcomes were a change in blood markers, muscle stiffness, and fatigue. Ethical Approval and Study Design This study of a single-arm, acute intervention design was conducted for 7 days (March 2021) at the University of Tsukuba. The study objectives, design, criteria of inclusion and exclusion, assessments, practice of R VT, OGTT, insurance compensation for injury, withdrawal of consent, and privacy protection were explained face-to-face to eligible participants. Written informed consent was obtained from each participant, the study was conducted in accordance with the Declaration of Helsinki and was approved by the ethical committee of the University of Tsukuba (reference no. Tai 020-95). The study protocol was registered at the University Hospital Medical Information Network center (UMIN no. 000042787). We applied the devices, the Free-Style Libre Flash continuous glucose monitoring (FSL-CGM) system (Abbott Diabetes Care, Witney, UK) and the Polar A370 fitness tracker (Polar Electro Oy, Kempele, Finland), to study the participants' bodies on the first day. Thereafter, the controlled trial (CT) was conducted on the fourth day and the experimental trial (ET) on the seventh day. Participants were required to fast for more than 10 h before each trial and were forbidden to consume alcohol or perform excessive exercise on the day before the trials. In both trials, participants ingested 225 mL soda-flavored solution (TRELAN ® G75; Ajinomoto Pharmaceuticals Co., Ltd., Tokyo, Japan) containing 75 g glucose and performed an OGTT lasting for 2 h, the ISF glucose concentration being recorded every 15 min. The details of the CTs and ETs are as follows ( CT Participants were asked to fill out a questionnaire consisting of questions about age, sex, smoking, alcohol intake, and medical history, and to measure blood pressure and heart rate (OMRON HEM-7111, Kyoto, Japan). 
Additionally, during the 2 h OGTT, participants were required to rest while measuring ISF glucose concentrations every 15 min on a chair in a quiet room. They were allowed to perform tasks such as reading a book, watching a movie, or operating a computer. ET Participants performed R VT for 40 min, starting 15 min after the intake of a 75 g glucose solution. Subjective fatigue surveys, muscle stiffness measurements, and blood collection were conducted before and after R VT. Subsequently, the participants moved to a quiet room and rested as in the CT. In this study, we used a vibration machine (Pro5 AIRdaptive; Power Plate, Badhoevendorp, The Netherlands) that can deliver three-dimensional harmonic vibration to the body. An expert with a power plate instruction certificate developed a novel R VT consisting of 25 postures of relaxation and stretching. The R VT was performed for 1 min per posture with a frequency of 50 Hz and an amplitude of 4 mm for 40 min on the vibratory platform, including preparation time for the next posture and rest ( Figure S1). Participants On estimating the sample size using the statistical software G* Power 3.1, 34 subjects were required as the total sample size (a priori effect size = 0.5, α = 0.05, power [1 − β] = 0.8). Forty participants with obesity residing in Tsukuba City, Japan were recruited through snowball sampling and a regional information magazine (Joyo Living Co., Ltd., Tsukuba, Japan). A screening survey via telephone or face-to-face interviews was conducted using a self-reported questionnaire. The inclusion criteria were: (1) age ranging over 40-74 years, (2) body mass index (BMI) ≥ 25 kg/m 2 , (3) having one or more risk factors for metabolic syndrome, (4) active participation in the study. The participants were excluded if they (1) took neuropsychiatric drugs, (2) were prohibited from exercising by doctors due to serious diseases, including brain dysfunction, renal disease, liver dysfunction, heart disease, and peripheral angiopathy, (3) had an excessive alcohol intake (>60 g/day) [12], (4) had participated in other clinical studies within the past three months, (5) were pregnant or possibility pregnant, (6) were judged inappropriate by the lead principal investigator, for example, conduct that interferes with the progress of the research by not cooperating with the research, making a fuss, or fighting with other participants. In total, 40 people with obesity applied for this study. However, four applicants were excluded according to the criteria and two applicants declined to participate because of conflicting schedules. Additionally, the participants experienced R VT for approximately 5 min to check whether there were any problems on the body and whether it was feasible. We finally analyzed data from 31, excluding 3 participants who did not complete R VT for 40 min due to personal reasons (a posteriori effect size = 0.5, α = 0.05, power [1 − β] = 0.8) (Figure 2). ISF Glucose Concentrations The glucose concentration was determined using the FSL-CGM system, which has the advantage of being able to measure ISF glucose concentrations in real-time without blood collection through a sensor attached to the subcutaneous tissue. The FSL-CGM system consists of a glucose reader and sensor and the activation time for the sensor is 14 days, with the sensor's high accuracy and convenience having been shown in previous studies [13,14]. 
Before the study, a health professional attached a sensor of the FSL-CGM to the rear upper arm of the participants under an aseptic technique and monitored it for 48 h to confirm the adaptability and stability of the ISF glucose concentration measurement. Participants were asked to set the alarm for the 2 h OGTT in both trials and measure the ISF glucose concentrations by touching the reader to the sensor every 15 min and recording the value in the datasheet. Characteristics of Participants BMI (kg/m 2 ) was calculated as body weight in kilograms divided by height in meters squared. Fat and muscle mass were determined using a bioelectrical impedance analyzer (MC-980A, TANITA, Tokyo, Japan). We measured the waist circumference (WC), blood pressure, fasting blood glucose, high-density lipoprotein cholesterol (HDL-C), and triglyceride (TG) to confirm whether the participants displayed the corresponding risk factors for metabolic syndrome. Risk factors for metabolic syndrome were evaluated according to the Japanese standards as follows: abdominal obesity (WC: male ≥ 85 cm, female ≥ 90 cm), high blood pressure (systolic blood pressure ≥ 130 mmHg, diastolic blood pressure ≥ 85 mmHg, or a history of hypertension), high fasting glucose (≥110 mg/dL or a history of diabetes), low HDL-C (<40 mg/dL), and hypertriglyceridemia (≥150 mg/dL or a history of hyperlipidemia) [15]. To measure the amount of daily physical activity (PA) and sleep time, participants were required to wear a Polar A370 fitness tracker based on a wrist-worn three-axis accelerometer model for the entire study period of 7 days [16]. The number of walking steps, total sleep time, and PA time by intensity (light, moderate, vigorous) were divided into baseline and during the experiment, and the mean values of each were calculated. Blood Markers Blood samples were collected from an antecubital vein and separated fractions were stored at −80 • C until further analysis. The free fatty acids (FFAs) and HDL-C levels were determined by enzyme method, lactate dehydrogenase (LDH), aspartate transaminase (AST), and creatine kinase (CK) levels by the Japan Society of Clinical Chemistry transferable method, fasting serum glucose level by the hexokinase-G-6-phosphate dehydrogenase method, high-sensitivity C-reactive protein (hs-CRP) level by fixed time assay method, and cortisol level by radioimmunoassay. We evaluated serum levels of fibroblast growth factor 21 (FGF21; R&D Systems; Minneapolis, MN, USA), interleukin 6 (IL6; R&D Systems), and myostatin (Cusabio biotech, Wuhan, China) using commercial enzyme-linked immunosorbent assay kits. Muscle Stiffness and Fatigue Muscle stiffness of the trapezius, deltoid, biceps brachii, rectus femoris, biceps femoris, tibialis anterior, and medial gastrocnemius on the dominant side was determined before and after R VT in the ET using the Myoton ® PRO (Myoton AS, Tallin, Estonia), an accurate and reliable device for non-invasive digital palpation of superficial skeletal muscles. The location of the muscle to be measured was marked, and the probe of the Myoton ® PRO device was vertically mounted on the surface of the measuring mark point as suggested by the manufacturer [17]. Three consecutive measurements were performed on each muscle area, and the mean value for each was used for statistical analysis. 
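As a brief aside, the Japanese metabolic syndrome criteria used for screening above (waist circumference, blood pressure, fasting glucose, HDL-C, and triglyceride thresholds [15]) map directly onto a simple rule set, together with the BMI definition. The sketch below is illustrative only; the function names and example values are hypothetical and are not part of the study protocol.

def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by height in meters squared
    return weight_kg / height_m ** 2

def metabolic_syndrome_risks(sex, wc_cm, sbp, dbp, fasting_glucose, hdl_c, tg,
                             history_htn=False, history_dm=False, history_lipid=False):
    # Thresholds follow the Japanese criteria cited above [15]:
    # WC >= 85 cm (male) / >= 90 cm (female), BP >= 130/85 mmHg (or history of hypertension),
    # fasting glucose >= 110 mg/dL (or history of diabetes), HDL-C < 40 mg/dL,
    # TG >= 150 mg/dL (or history of hyperlipidemia)
    return {
        "abdominal_obesity": wc_cm >= (85 if sex == "male" else 90),
        "high_blood_pressure": sbp >= 130 or dbp >= 85 or history_htn,
        "high_fasting_glucose": fasting_glucose >= 110 or history_dm,
        "low_hdl_c": hdl_c < 40,
        "hypertriglyceridemia": tg >= 150 or history_lipid,
    }

# Hypothetical participant
print(bmi(82, 1.68))                      # ~29.1 kg/m2, i.e. BMI >= 25
print(metabolic_syndrome_risks("male", wc_cm=92, sbp=124, dbp=80,
                               fasting_glucose=98, hdl_c=52, tg=180))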
Additionally, subjective fatigue was assessed before and after RVT using the questionnaires "subjective symptoms of fatigue (Jikaku-sho shirabe)" and "body parts of fatigue (Hirou-bui shirabe)", developed by the Japan Occupational Health and Occupational Fatigue Research Committee. The subjective symptoms of fatigue questionnaire comprised 25 items divided into five categories (I: drowsiness, II: instability, III: uneasiness, IV: local pain or dullness, V: eyestrain). The body parts of fatigue questionnaire investigated subjective fatigue levels for each body part, including the neck, shoulder, middle back, upper arm, forearm, lower back, hand, hip and thigh, lower leg and knee, and foot [18]. Statistical Analyses Linear mixed model analysis was applied to evaluate the differences in the intervention effect of the two trials on the change of ISF glucose concentrations. On the basis of significant interactions (trials × times), this study performed post hoc analysis with Bonferroni correction. The Wilcoxon signed-rank test was used only in the ET to compare changes in the incremental area under the curve (IAUC) of the ISF glucose concentration response, blood markers, muscle stiffness, and fatigue before and after RVT practice. The IAUC value is the sum of the areas of triangles and rectangles geometrically calculated using the elapsed time from baseline to 120 min and the response values of the ISF glucose concentration. Taking the fasting value (0 min) as baseline, the sum of the increases at 15 min (A), 30 min (B), and 45 min (C) plus half the increase at 60 min (D/2) was multiplied by 15, and the sum of half the increase at 60 min (D/2), the increase at 90 min (E), and half the increase at 120 min (F/2) was multiplied by 30: IAUC = (A + B + C + (D/2)) × 15 + ((D/2) + E + (F/2)) × 30 [19]. To verify the effect sizes (0.1: small, 0.3: medium, 0.7: large) of all variables from baseline to after the RVT, the effect size r was calculated as the Z statistic divided by the square root of the sample size (r = Z/√N). All statistical analyses were performed using SPSS version 26.0 (IBM Corp., Armonk, NY, USA), with significance levels set to p < 0.05. The rate of change (Δ (%)) was calculated by subtracting the baseline value from the value after RVT practice and was expressed as a percentage of the baseline value. Characteristics of Participants As shown in Table 1, the age of the 31 participants (48.4% female, 51.6% male) was 55.6 ± 8.4 yr. All participants had abdominal obesity, and the participants tended to have relatively high muscle mass (49.5 ± 11.2 kg) but also considerable fat mass (male: 25.3 ± 7.6 kg, female: 30.3 ± 9.5 kg). Additionally, there were no significant changes in sleep time, walking, or PA during the experiment period (Table S1). Figure 3A shows a comparison of the changes in ISF glucose concentrations over time in the two trials, focusing on the first hour of the OGTT, i.e., before and after RVT practice. A significant interaction (p = 0.014) was observed between ISF glucose concentrations and the two trials, and according to the post hoc analysis, the ISF glucose concentrations in the ET were significantly reduced at 45 min (p = 0.019) and 60 min (p = 0.001). Figure 3B shows the comparison of the IAUC of ISF glucose concentrations of the two trials during the 2 h OGTT, with the ET being significantly lower than the CT (p = 0.047). Changes in Blood Markers after RVT FGF21 and myostatin as markers of metabolic glucose regulation were significantly decreased (Figure 4).
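Returning to the Statistical Analyses above: the IAUC expression is simply the trapezoidal rule applied to the above-fasting (incremental) glucose curve sampled at 0, 15, 30, 45, 60, 90, and 120 min, and the effect size r is the Wilcoxon Z statistic divided by √N. A minimal Python sketch follows; the glucose readings and Z value are invented for illustration, and the final statsmodels call only approximates the G*Power Wilcoxon-based sample-size estimate with a paired t-test.

import math
from statsmodels.stats.power import TTestPower

def iauc(glucose):
    # glucose: readings at 0, 15, 30, 45, 60, 90 and 120 min; increments above
    # the fasting value are A..F, and
    # IAUC = (A + B + C + D/2) * 15 + (D/2 + E + F/2) * 30  [19]
    # which equals the trapezoidal rule applied to the incremental curve.
    base = float(glucose[0])
    a, b, c, d, e, f = (float(x) - base for x in glucose[1:])
    return (a + b + c + d / 2) * 15 + (d / 2 + e + f / 2) * 30

print(iauc([95, 130, 160, 175, 165, 140, 120]))   # invented readings, mg/dL

def effect_size_r(z_statistic, n):
    # Wilcoxon effect size: r = Z / sqrt(N)
    return abs(z_statistic) / math.sqrt(n)

print(effect_size_r(2.0, 31))   # invented Z statistic; ~0.36, a medium effect

# The a priori sample size of 34 (effect size 0.5, alpha 0.05, power 0.8) is close to
# what a paired t-test approximation gives (G*Power used the Wilcoxon signed-rank test):
print(TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8))   # ~33.4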
FFAs as relative marker of lipid utility was significantly decreased (Figure 4). Additionally, there were no changes in the muscle-damage markers CK, AST, LDH, or hs-CRP after R VT ( Table 2). Discussion Vibration stimulation of skeletal muscles and tendons activates muscle spindles, which is termed "tonic vibration reaction". It activates more motor neurons, which in turn activates motor units and the movement of actin and myosin, increasing muscle contraction [20]. Such muscle contraction increases the content of glucose transporter 4 in muscle cells and its translocation to the sarcolemmal membrane, which considerably improves glucose transport capacity [21,22]. As such, muscle activation aids in glucose and insulin control, thus it has been actively suggested for T2D patients. Therefore, for those people with obesity who have a sedentary lifestyle or the elderly with weak joints and strength, muscle contraction by the tonic vibration reaction is considered as a more effective treatment. A meta-analysis study reported that VT interventions could reduce fasting blood glucose concentrations by 25.7 mL/dL in older adults with T2D [23]. It has been reported that 12 weeks of VT and strength training decreased the IAUC values of the OGTT [24], and that glycosylated hemoglobin values improved after VT for 6 [25] and 12 weeks [24]. By contrast, a previous study that performed acute VT reported reduced blood glucose concentrations in both diabetic patients and healthy elderly women [26]. In our pilot study, designed as an acute intervention, ISF glucose concentrations were significantly lower with respect to the IAUC values of the 2 h OGTT as well as during the practice of R VT (45 min) and immediately after the end of R VT (60 min). Overall, our study supports the positive result of previous studies that the glucose concentration can be controlled regardless of whether the period of skeletal muscle vibratory stimulation is acute or chronic. When muscle tissue is stimulated, it secretes myokines, which affect inflammation, glucose processing, and adipose tissue. FGF21 and myostatin, myokine factors, are known to be associated with obesity and insulin resistance [27]. It was reported that FGF21 is involved in glucose metabolism regulation and promotes blood glucose absorption by adipocytes [28]. However, "paradoxical" plasma FGF21 elevation in obesity and diabetes suggests a potential FGF21-resistant state [29]. Additionally, myostatin increases with obesity and with a lack of exercise, which is involved in the acquisition of insulin resistance [30]. Therefore, a balanced FGF21 level and a reduced myostatin level are also attracting attention as potential therapeutic targets for insulin resistance in T2D. In older men, the intervention effect of resistance exercise decreased FGF21 and myostatin and increased muscle strength in both T2D and non-T2D subjects [31]. After performing R VT in the present study, there were great decreases in the ISF glucose concentration, FGF21, and myostatin. Contrary to the results of this study, it was reported that acute VT in both overweight and normal weight subjects displayed a time-dependent response in IL6, glucose, and insulin, but no change in myostatin [32]. By contrast, myostatin mRNA expression started to decrease 1 h after resistance exercise and was most suppressed after 8 h, while IL6 expression was highest after 4 h [33]. 
In the previous studies above, it was shown that glucose metabolism and myokines react in response to muscle contraction, whether voluntary or involuntary; however, opinions are still divided on the time course of these responses and their physiological interaction. Also, FFAs as a relative marker of lipid utility were significantly decreased. The RVT may stimulate the hydrolysis of TG, which results in the release of FFAs into the circulation and their oxidation in skeletal muscles. The elevated blood lipid level observed in obesity and metabolic syndrome is an adverse condition that leads to lipotoxicity and ectopic fat deposition in other organs, and consequently to insulin resistance and impaired glucose metabolism [34]. Thus, efficient uptake and oxidation of FFAs in working muscles driven by intense vibration may be effective in lowering high blood glucose levels. Moreover, because vibration stimulation of skeletal muscle has different physiological effects depending on the parameter settings (e.g., frequency, amplitude, posture), the protocol should be carefully set by a certified professional [35]. In a previous study, VT was set at 12-16 Hz frequency, 4 mm amplitude, and resistance exercise [9], and in another study, it was set at 30 Hz frequency, 2 mm amplitude, and a squat posture [36]. Both studies showed a significant decrease in fasting blood glucose in the VT group compared with the control group. Compared with the protocols of these previous studies (12-30 Hz, 2-4 mm), that of the present study (50 Hz, 4 mm) was set higher, but was consistent with the fact that vibration stimulation of skeletal muscle can modulate glucose concentration. Hazell et al. reported that the greater the amplitude (4 mm) and frequency (35, 40, 45 Hz) of VT, the greater the measured electromyography activity of the muscle in both static and dynamic contractions [37]. It was also suggested that the frequency should not be <20 Hz to avoid the resonance frequency range [35]. Despite the high frequency and amplitude, no adverse events were reported by the participants during RVT; rather, the improvements in muscle stiffness and fatigue suggest that this RVT was well tolerated. More importantly, we need to focus on the specific postures that were performed on the vibrating platform. High-intensity postures such as resistance exercises are difficult protocols for the elderly or diabetic patients at high-frequency and high-amplitude settings. Because people with obesity and T2D have lower exercise adherence than healthy adults, strenuous resistance exercise accompanied by muscle fatigue is more likely to reduce exercise continuity [3,4]. The novel RVT developed in the present study significantly improved medial gastrocnemius stiffness, drowsiness, instability, local pain or dullness, eye strain, and whole-body fatigue. Moreover, after RVT practice in this study, the muscle-damage markers CK, AST, LDH, and hs-CRP did not change. In participants with fibromyalgia, VT exercise (30 Hz, 2 mm) and conventional exercise significantly reduced pain and fatigue compared with the control group; however, stiffness and depression did not change. Additionally, it was insufficient to prove the effect of VT alone [38].
Another previous study reported that an acute VT performed for 60 sec in a half-squat posture on a vibrating platform set at 35 Hz and 5 mm was effective in reducing delayed-onset muscle soreness, the pressure pain threshold, and CK in young adults [39]. Delayed-onset muscle soreness after strenuous exercise such as resistance exercise and sprint were reduced by 22-61% after muscle massage with five stretching movements of lower extremities on a vibrating machine (30-50 Hz, 2 mm) [40]. Because vibration stimulation increases blood flow, it is effective in the excretion of fatigue substances. In addition, because the stiffness of the muscle is softer, the muscle is activated, such that muscle pain and fatigue are quickly recovered. Therefore, compared with the conventional exercise method, this novel R VT has advantages in the ability of glucose regulation and improvement of muscle fatigue while its practice is comfortable and safe. This pilot study has several limitations. First, it did not include diabetic patients exclusively. Because it is at an early stage to verify the feasibility of a novel R VT, we were concerned about unexpected side effects in diabetic patients, including hypoglycemia and data instability. As mentioned in the introduction, we focused on people with obesity, because glucose regulation is important at an early stage from a prevention standpoint before the onset of IGT and T2D in obesity. Second, this pilot study was designed as an acute trial. The glucose concentration responds temporarily and sensitively to acute external stimuli such as diet and exercise. Therefore, we decided that it was necessary to first verify the novel R VT through an acute intervention trial. Because positive results were obtained in this situation, it is expected to show positive results in long-term interventions. Third, there was sampling bias and a relatively small sample size. Through the statistical software G* Power 3.1, 34 and 31 participants were estimated as a sample size by setting a power (1-β) of 0.8; however, it did not satisfy the sample size required for 0.95, which is the best power. However, a 1-β of statistical power indicates the possibility of Type II error (β) occurrence, with 0.8 meaning that even with 80% significant power, there is a 20% chance of not discriminating a significant difference. Additionally, it was reported that a power of ≥0.8 is more suitable for having a statistically significant difference [41]. Fourth, this study was designed as a single-arm, acute intervention, with the same person performing the CT first, followed by the ET at timed intervals. Therefore, in the next intervention study, it will be necessary to divide the participants into two groups and complete the CT and ET in a counterbalanced order. Based on results of this study, we would perform mass clinical studies exclusively on diabetic patients. Conclusions In this pilot study, the acute practice of the novel R VT improved the ISF glucose concentration, FGF21 and myostatin levels, and muscle stiffness and fatigue in middleaged and older adults with obesity. In the future, this novel R VT is expected to have a positive impact as a glycemic management program that can improve glucose regulation and fatigue in people with obesity, and further improve IGT and T2D. 
However, because this study was the result of an acute clinical trial, it is necessary to conduct additional studies on whether the same results can be obtained when long-term intervention with this novel RVT is conducted in diabetic patients in the future. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijerph20064708/s1, Figure S1: The target part of the body in the novel RVT program (50 Hz, 4 mm, 25 postures with relaxation and stretching), R: Right, L: Left; Table S1: Comparison of sleep time, walking, and physical activity at baseline and during the experiment period; Table S2: Comparison of ISF glucose concentration in CT and ET within 2 h of OGTT. Author Contributions: M.K. designed and conducted this study, performed the statistical analysis, and wrote the first draft of the manuscript. H.Z. and T.K. contributed to the design of the study and performed experiments. Y.M. performed experiments. T.O., K.T., T.I. and T.S. contributed to the conception and design of the study. S.O. contributed to the funding acquisition and the conception and design, conducted this study including executing the blood data analyses, and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This research was funded by the Grant-in-Aid for Scientific Research, Japan (18K17918 and 21K11718). Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and was approved by the ethical committee of the University of Tsukuba (reference no. Tai 020-95; 3 December 2020). The study protocol was registered at the University Hospital Medical Information Network center (UMIN no. 000042787; 18 December 2020). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: All data generated or analyzed during this study were included in this published article. In addition, upon reasonable request, the raw data supporting the findings of the study can be provided by the corresponding author.
2023-03-12T15:49:12.528Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "eeeb8d0a0dafca6d49dd03393baf9a4b6c01fbc4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/20/6/4708/pdf?version=1678345107", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9316097dfa67978776a534e3c90a1b27782ad4d4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
6789225
pes2o/s2orc
v3-fos-license
Microsomal epoxide hydrolase gene polymorphism and susceptibility to colon cancer We examined polymorphisms in exons 3 and 4 of microsomal epoxide hydrolase in 101 patients with colon cancer and compared the results with 203 control samples. The frequency of the exon 3 T to C mutation was higher in cancer patients than in controls (odds ratio 3.8; 95% confidence intervals 1.8–8.0). This sequence alteration changes tyrosine residue 113 to histidine and is associated with lower enzyme activity when expressed in vitro. This suggests that putative slow epoxide hydrolase activity may be a risk factor for colon cancer. This appears to be true for both right- and left-sided tumours, but was more apparent for tumours arising distally (odds ratio 4.1; 95% confidence limits 1.9–9.2). By contrast, there was no difference in prevalence of exon 4 A to G transition mutation in cancer vs controls. This mutation changes histidine residue 139 to arginine and produces increased enzyme activity. There was no association between epoxide hydrolase genotype and abnormalities of p53 or Ki- Ras. © 1999 Cancer Research Campaign background that determines individual susceptibility to disease. There is currently much interest in the roles of oncogenes, tumoursuppressor genes and mismatch repair enzymes in colon cancer (Tomlinson et al, 1997). In addition, interindividual variation in the ability to dispose of reactive xenobiotics catalysed by glutathione S-transferases GST-M1 and GST-T1 has been investigated (Lang et al, 1986;Strange et al, 1991;Zhong et al, 1993;Chenevix-Trench et al, 1995). However, results from a number of studies show only weak and inconsistent associations with disease susceptibility. N-acetyltransferase 2 (NAT-2) polymorphism may be implicated in susceptibility to colon cancer (Lang et al, 1986;Wohlleb et al, 1990;Illett et al, 1994;Probst-Hensch et al, 1995), but there is evidence that its relationship may be by linkage with other genes rather than causally (Hubbard et al, 1997). We have used a polymerase chain reaction (PCR) strategy to investigate whether polymorphisms in the microsomal epoxide hydrolase gene (mEPHX) (Hassett et al, 1994) have any relationship to colon cancer. The enzyme is expressed in many tissues, including colon and liver. Polymorphisms of mEPHX may have functional significance. There is variation in exon 3, where a T to C alteration changes tyrosine residue 113 to histidine and is associated with lower enzyme activity when expressed in vitro. By contrast, A to G transition in exon 4 changes histidine residue 139 to arginine and produces increased enzyme activity. The effect of combining the alleles has not been established. The activity of mEPHX varies more than 50-fold in Caucasians (Omiecinski et al, 1993). This variation of activity is, thus, due to a combination of genetic polymorphism, transcriptional and post-transcriptional control of gene expression. Controls and cancer cases Control blood samples (n = 203) were obtained anonymously from the Scottish National Blood Transfusion Service. These were Caucasian individuals aged between 18 and 65 years with equal sex distribution. This group has been previously described (Cantlay et al, 1994) and was drawn from the same geographical area as the cancer study group. The presence of colorectal neoplasia was not specifically excluded, but all patients were healthy. 
Peripheral blood from patients with colorectal cancer was collected from a consecutive series of operable colorectal cancer cases after surgery in four local hospitals between 1988 and 1993 (n = 101). Cancer patient data Cancer diagnosis was confirmed histopathologically and cases were classified according to Dukes' stages (A, B, C), and according to position of cancer in the colon as either right (caecum, transverse or ascending) or left (sigmoid, descending or rectum) sides. All samples were Caucasian in origin. DNA was extracted as previously described (Cantlay et al, 1995). In addition, the frequency of immunodetectable stabilization of p53 was recorded using antibody DO7 (Dako) on formalin-fixed, paraffinembedded tissue (n = 92). Loss of heterozygosity at the p53 locus on chromosome 17 (n = 93) was determined as previously reported (Cripps, 1994). The presence of codon 12 mutations in Ki-ras oncogene was determined using allele-specific PCR (n = 81) as described previously (Kotsinas et al, 1993). mEPHX PCR analysis Two separate PCR assays were used to detect the two mutations in mEPHX. The assay for the exon 3 T to C variant, changing tyrosine 113 to histidine, uses the primer pair E1 5′-GATCGATAAGTTC-CGTTTCACC [starting at bp 321 in mEPHX cDNA] and E2 5′-ATCCTTAGTCTTGAAGTGAGGaT (starting at bp 461). The downstream primer abuts directly onto the mutation site and an engineered base change (shown in lower case: an A for a G) produces an EcoRV restriction enzyme site (GATATC) in the wild type only. The exon 4 A to G transition produces an RsaI restriction fragment length polymorphism (ATAC to GTAC). The primer pair E3 5′-ACATCCACTTCATCCACGT (bp 494) and E4 5′-ATGC-CTCTGAGAAGCCAT (bp 685) is used to assay this polymorphism. Figure 1 shows a typical result of the genotyping assays. Polymorphisms were detected using restriction enzymes EcoRV (exon 3) and RsaI (exon 4). The polymerase chain reaction was performed on a Hybaid Omnigene thermal cycler using 200 ng of genomic DNA, 200 ng of primers E1/E2 or E3/E4, 200 mM dNTPs (Pharmacia, UK), ×1 polymerase buffer (Promega, UK), 1.5 mM magnesium chloride, 4% DMSO and 2 u of Taq Polymerase (Promega, UK) in a total volume of 50 µl. Twenty microlitres of each PCR reaction was digested with 5 u of the appropriate restriction enzyme (Gibco BRL, UK). Digested PCR products were separated by size on 1.8% Metaphor (Hoeffer Scientific) agarose gel. Bands were visualized by ethidium bromide staining and ultraviolet illumination. Main cycling parameters were: 38 cycles of 94°C for 30 s, 55°C for 25 s and 72°C for 20 s. Statistical analysis Associations between disease groups and specific genotypes and phenotypes were analysed for significance by the two-tailed chisquared test and P-values were Bonferroni corrected. Odds ratios and 95% confidence intervals were calculated to assess relative risk of disease conferred by a particular allele or genotype. mEPHX polymorphisms in disease groups The distribution of mEPHX genotypes are shown in Table 1, and the comparison of exon 3 alleles with p53 and Ki-ras mutations shown in Table 2. In the group of colon cancer patients (Table 1), there was a significant increase in the proportion of individuals homozygous for the histidine 113 (exon 3) [P = 0.007; odds ratio (OR) = 3.84]. Twenty-one out of 101 patients (21%) were homozygous for the histidine 113 (exon 3, putative 'slow' activity) versus only 6% of controls. 
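The odds ratio and confidence interval reported above can be reproduced from the 2 × 2 genotype-by-disease table. A brief sketch follows, with the caveat that the exact number of homozygous controls is not stated in the text; 13 of 203 (~6%) is assumed here for illustration because it yields figures close to the reported OR of 3.84 and 95% CI of 1.8-8.0.

import math
from scipy.stats import chi2_contingency

# Cases: 21 of 101 homozygous for His113; controls: assumed 13 of 203 (~6%)
a, b = 21, 101 - 21        # cases: homozygous, not homozygous
c, d = 13, 203 - 13        # controls: homozygous, not homozygous (assumed count)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)        # Woolf's logit method
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
chi2_stat, p_value, dof, expected = chi2_contingency([[a, b], [c, d]])

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.1f}-{ci_high:.1f}, chi-squared p = {p_value:.4f}")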
There was no significant difference in the distribution of arginine 139 (exon 4, putative 'fast' activity) between control and cancer groups. Cancers were divided into right-and left-sided groups and the association with histidine 113 polymorphism was recalculated. For right-sided tumours, there was a trend for the putative 'slow' allele to be more common in the cancer group, but this was not statistically significant when the Bonferroni correction was applied. By contrast, left-sided colon cancers showed a highly significant increase in histidine 113 'slow' allele compared with controls (P = 0.002; OR = 4.1). No association with sex, age or Dukes' stage was found for either allele (data not shown). No association between arginine 139 polymorphism and right-or left-sided tumours was identified. When compared with the frequency of immunodetectable stabilization of p53, loss of heterozygosity at the p53 locus, or codon 12 mutation of Ki-Ras, no association was noted with polymorphisms of mEPHX at either exon 3 or exon 4 ( Table 2). Comparisons of the observed distributions of mEPHX genotypes and those predicted by allele frequencies by chi-squared analysis showed that the populations studied were in Hardy-Weinberg equilibrium, indicating that the control and study groups were sufficiently random and representative (data not shown). DISCUSSION The demonstration that genetically defined polymorphisms in mEPHX, predicted to affect enzyme activity at least in part, are associated with an increased incidence of colorectal cancer suggests that reactive epoxide intermediate metabolites may play a role in the development of colon cancer. Individuals with histidine 113 instead of tyrosine 113 (exon 3) had more than a threefold relative risk of having colorectal cancer. This is particularly true for cancers arising in the left side of the colorectum, i.e. descending and sigmoid colon and rectum, in which the relative risk increased to more than 4. The absence of a correlation with p53 or Ki-ras mutations is unsurprising. mEPHX is a protective enzyme involved in general oxidative defence, rather than in specific protection of individual genes. Previously studies describing weak association between glutathione-dependent enzymes and colorectal cancer have found that the risk is more consistently with tumours originating in the left side (Zhong et al, 1993). This site may be more at risk from oxidative stress, or may be more likely to have high exposure to oxidants because of the higher transit time for faecal material at this site. Epoxides may be present in diet, or generated from a number of sources, including benzpyrene (which is present in cigarette smoke), dietary polycyclic aromatic hydrocarbons and nitrosamines (Craft et al, 1988;Yang et al, 1988). Some further evidence suggests that mEPHX may be involved in steroidogenesis reactions which may explain preliminary observations that 'slow' genotype may be protective for ovarian cancer (Lancaster et al, 1996) and is not involved in lung or bladder (Brockmoller et al, 1996) cancer risk. The present study has examined genotype and not phenotype. There is clear evidence that the presence of a 'slow' exon 3 allele does confer lower enzyme activity, but genotype alone is insufficient to explain the variation of microsomal epoxide hydrolase enzyme activity seen in population studies (Hassett et al, 1997). 
In particular, the effect of carrying both exon 3 and exon 4 polymorphisms is undetermined and, thus, assumptions concerning enzyme activity from our study should be necessarily guarded. It is still possible that the described mutations are of themselves not causally related to colon cancer, but rather are in linkage with other, as yet unidentified, factors. However, the clear association does indicate that this enzyme is an important candidate to relate diet with susceptibility of the colorectal mucosa to injury. Evidence that dietary supplements, particularly fish oils which are thought to be chemopreventative for colon cancer, can induce microsomal epoxide hydrolase (Yang et al, 1993) and, thus, increase enzyme activity further strengthens this association and indicates the need for further investigation.
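As a closing aside, the Hardy-Weinberg equilibrium check mentioned in the Results (comparing observed genotype counts with those predicted from allele frequencies by chi-squared analysis) can be reproduced in a few lines. The genotype counts below are hypothetical, since the paper reports only that the populations were in equilibrium.

from scipy.stats import chi2

def hardy_weinberg_chi2(n_aa, n_ab, n_bb):
    # Chi-squared goodness of fit of observed genotype counts to the
    # Hardy-Weinberg expectations n*p^2, 2*n*p*q, n*q^2 (1 degree of freedom).
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)     # frequency of the first allele
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

# Hypothetical Tyr113/Tyr113, Tyr113/His113, His113/His113 counts for a control group
print(hardy_weinberg_chi2(110, 80, 13))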
2014-10-01T00:00:00.000Z
1999-01-01T00:00:00.000
{ "year": 1999, "sha1": "b0b2ac4056c1882dd87298840cf1bffab9192bf3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/6690028.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b0b2ac4056c1882dd87298840cf1bffab9192bf3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
54090830
pes2o/s2orc
v3-fos-license
Hydroxyapatite : Preparation , Properties and Its Biomedical Applications Hydroxyapatite, a naturally occurring form of calcium phosphate, is the main mineral component of bones and teeth. Natural hydroxyapatite and bone have similar physical and chemical characteristics make it biocompatible. Its porous structure resembles native bone. The biocompatibility, biodegradability and bioactivity make it extensively useful in interdisciplinary fields of sciences like chemistry, biology, and medicine. Calcium phosphate-based ceramics are of great interest as substitutes of synthetic bone graft due to their similarities in composition to bone mineral and bioactivity as well as osteoconductivity. This article gives an overview of hydroxyapatite from its preparation and properties to biomedical applications of its composites. Introduction Hydroxyapatites (HAP) is a naturally occurring mineral form of calcium apatite comprising of about 50% of the weight of the bone, which accounts for its excellent osteoconductive and osteointegrative properties [1] [2] [3].It is a main component of bone mineral but in some cases carbonate-apatite is a main hard tissue component, as in dental enamel [4].One of the most common apatites used as bioceramic in medicine and dentistry is hydroxyapatite (HAP) due to its bioactivity and osteoconductive properties in vivo [5] [6] [7] [8].The advantage of using HAP as a bioceramic or biomaterial compared to other bioceramics, such as Bioglass or A-W glass-ceramic, is its chemical similarity to the inorganic component of bone and tooth.Chemically hydroxyapatite is Ca 5 (PO 4 ) 3 OH but often written as Ca 10 (PO 4 ) 6 (OH) 2 .Naturally, hydroxyapatite is an inorganic S. Pokhrel DOI: 10.4236/aces.2018.84016 226 Advances in Chemical Engineering and Science component found in human hard tissues such as tooth and bone.These materials are generally used as human body implant materials.Natural hydroxyapatite can be prepared from eggshells, coral, fish bone, chicken bone, etc. [9].Recently, hydroxyapatite has attracted interests because of its hemostatic properties, and bone healing function [10] [11] [12].This article gives an overview on different ways of hydroxyapatite preparation, its properties and biomedical applications of its composites. Preparation of Hydroxyapatite Hydroxyapatite can be prepared by different methods such as sol-gel process [13], chemical precipitation [14], etc. Chaudhari et al. prepared the HAP by applying the following reaction [15]. ( ) ( ) HAP can be produced from coral [17], seashell [18], eggshell [19] [20] [21] and also from body fluids [22].There are numerous methods have been reported for the preparation of hydroxyapatite from eggshell.One of them is the hydrothermal method.It is extensively reported method of HAP production from eggshell [23].This method of preparing HAP from eggshells in a phosphate solution at a high temperature is a novel approach for synthesizing valuable biomedical materials [19].In this method, fine hydroxyapatite single crystals are prepared by a hydrothermal method with Ca(OH) 2 and CaHPO 4 ⋅2H 2 O as starting materials.HAP prepared from hydrothermal methods has more crystallinity and good homogeneity, the major advantage of hydrothermal method.This method is direct and straight forward which gives all the characteristics band of HAP but it is laborious and time consuming [19]. Next is the microwave irradiation method, it requires a chelating agent i.e. 
ethylenediamine tetra acetic acid (EDTA) (Figure 1) [24].This is an indirect way where synthesis of HAP is generally led by formation of calcium precursor from eggshells as the first step.Thus, prepared HAP shows higher sinterability and stability at high temperatures with better stoichiometry, morphology, and osteoblast cell adhesion [23].Türk et al. reported that microwave assisted biomimetic synthesis can be a promising technique of preparing HAP powders in shorter time [25]. High energy mechanochemical activation method is also applied to produce HAP.It involves two processes: attrition milling and ball milling [26].The mechanochemical reaction supplies enough amount of hydroxyl group to the starting powders to form a single phase of hydroxyapatite.This is relatively simple and recommended for the mass production of high crystalline hydroxyapatite [27].A simple sol-gel precipitation technique can be used to prepare nanohydroxyapatite from egg shell.The powder particles are polycrystalline in nature with an average size of 5 -90 nm.The produced nano-HAP was found in pure form [28] with higher bioactivity than HAP coarser crystals [29].ticles prepared by the microemulsion route led to a smaller particle size and the improve degree of particle agglomeration as compared to conventional precipitation method [31]. Basically, biomimetic processing is based on biologic systems store and process information at molecular level [32] [33] [34] [35].The extension of this concept has upgraded in processing of synthetic bone in last few decades [36]. Properties of Hydroxyapatite Sobczak-Kupiec et al. reported that the physicochemical properties and morphology of HAP depended on the origin/preparation method [38].Synthetic hydroxyapatite exhibited low crystallinity, with high porosity and more surface area.On the otherhand, HAP obtained from animal bone via calcination at 800˚C possesses highest crystallinity [38]. Hydroxyapatite has the capability to form chemical bonds with surrounding hard tissues [39] [40] with the formation of a HAP interfacial layer [41].The similar physical and chemical characteristics of natural hydroxyapatite with bone make it biocompatible [8]. Bowen co-workers studied the relationship between the composition and di-electric and piezoelectric composites for polarized bone substitutes.It was observed that the addition of BaTiO 3 increases permittivity and ac conductivity of the material [42].It is summarized that HAP-BaTiO 3 composites can be used as polarized bone substitutes [42]. Gao et al. prepared three porous scaffolds by sintering of bovine bone and three-dimensional gel-lamination method.The results demonstrated that three types of HAP scaffolds showed good attachment, proliferation and differentiation of osteoblasts [43].Hydroxyapatite ceramic, derived from bovine bone by sintering, has a porosity and pore structure which resembles that of native bone.The porosity and the good wettability with water and organic solvents permit ceramic loading with drugs such as antibiotics, or substances that improve healing of bone [44]. According to Zhang and Darvell, the morphology and structural characteristics of hydroxyapatite whiskers depend on the initial Ca/P ratio (iCa/P) and pH (ipH), as well as the initial calcium concentration (i[Ca]) [45].Deviation in these values did not affect on constitution, which was crystallographically indistinguishable from HAP. 
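For reference against the initial Ca/P (iCa/P) values discussed here, stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2, has a fixed molar Ca/P of 10/6 ≈ 1.67. A minimal sketch using standard atomic masses (values assumed for illustration, not taken from the article):

# Molar and mass Ca/P ratios of stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2
CA_ATOMIC_MASS = 40.078   # g/mol
P_ATOMIC_MASS = 30.974    # g/mol

molar_ca_p = 10 / 6                                         # ~1.67
mass_ca_p = (10 * CA_ATOMIC_MASS) / (6 * P_ATOMIC_MASS)     # ~2.16

print(f"molar Ca/P = {molar_ca_p:.2f}, mass Ca/P = {mass_ca_p:.2f}")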
The Ca/P ratio gradually improved with increases in both ipH and iCa/P, but was independent of i[Ca]. Uniform whiskers were obtained at high iCa/P and low ipH, or at high ipH and low iCa/P. At low iCa/P and low ipH, branch-like whiskers and irregular plate-like particles were produced, while a high ipH supported the formation of lath-like HAP at high iCa/P. Preferred growth along the c-axis was greater at higher iCa/P and ipH as well as at low i[Ca] [45]. Werner and coworkers manufactured osteo-implants with graded porosity by multilayer casting of HAP tapes with controlled pore structure [46]. The results showed that sintering temperature is a critical factor influencing the density, microstructure, and stability of the HAP phase. The optimum sintering temperature for maximum flexural strength of three-layered structures was found to be 1250˚C. Pore-graded three-layer structures revealed approximately 40% higher flexural strength than a homogeneous three-layer structure with a single pore size. The macroporous HAP network gives access to osteoblast-like cells, which can attach, spread, and propagate throughout the macropores and their interconnections [46]. Several studies have reported scanning electron micrographs of hydroxyapatite. A representative SEM image of a sample calcined at 900˚C is presented in Figure 2. In this image, the hydroxyapatite morphology appeared porous and nonhomogeneous, with an average pore size of less than 1 μm [8]. The surface morphology of hydroxyapatite (HAP) ceramic particles prepared via calcination of natural bones and a synthetic sol-gel method showed aggregation of particles with rough, granular to dense surfaces; the size of the HAP particles was estimated to range between 50 and 500 nm [48]. Applications of Hydroxyapatite (HAP) Historically, the first broadly tested artificial bioceramic was plaster of Paris. Fred Houdlette Albee, who invented bone grafting [56], made the first attempt to implant a laboratory-produced CaPO4 as an artificial material to repair surgically created defects in rabbit bones in 1920 [57]. He also introduced other advances in orthopedic surgery [50]. Presently, hydroxyapatite has received much interest as an implant material with applications in dentistry and orthopedics [58] [59] [60]. Synthetic HAP has been used widely as an implant material for bone substitution because of its excellent osteoinductive properties [61]. It has also been demonstrated that the properties of HAP nanocrystals can be modulated to produce HAP/biomolecule conjugates that are tailored for specific therapeutic applications [73]. Transparent, slightly yellow chitosan (CS)/HAP nanocomposite rods have been reported to perform well, with potential application in the internal fixation of bone fractures; the method resolves the problem of nano-sized particle aggregation in the polymer matrix [74]. Hoffmann et al. fabricated a HAP/starch/chitosan composite hemostatic material and proposed it as a substitute for bone wax or even as a bone-filling material for orthopedic surgery applications [75]. Madhumathi et al. deposited HAP on the surface of chitosan hydrogel membranes, evaluated the biocompatibility of these membranes using MG-63 osteosarcoma cells, and suggested that chitosan hydrogel-HAP composite membranes are applicable for tissue engineering [76]. 
Electrospinning is cost effective and appropriate technique for the production of nanofibers for fabricating scaffolds with biomolecules and has been used across a wide range of biocomposite polymer systems and bone tissue engineer-Advances in Chemical Engineering and Science ing actions [62].Calcium phosphate ceramics has great importance in the field of tissue engineering for the biological applications [77].Ngiam 150 nm to deliver a hepatitis B surface antigen (HBsAg) [90]. The antibacterial properties of nano-hydroxyapatite can be increased by adding silver ions in the HAP structure [91] [92].Dubnika et al. developed a method to prepare a novel carrier system based on the silver-doped hydroxyapatite and loaded with lidocaine hydrochloride in the presence of chitosan or sodium alginate (HAP/Ag/polymer/drug composite) [92]. Conclusions Hydroxyapatite is shown to be a significant material for biomedical applications due to its biodegradability, biocompatibility and bioactivity.HAP is a beneficial biomaterial for dental and medical applications.The HAP nanoparticles are more useful than conventional sized HAP bulk ceramics based on large surface-to-volume ratio, reactivity, and biomimetic morphology of the HAP nanoparticles for applications such as fillers for composites, reparative materials for damaged enamel and carriers for drugs.This review gives an overview about the synthesis, properties and applications of HAP in biomedical domain. It can be concluded from the above presented investigations that despite numerous methods elaborated the synthesis of HAP which are used as bone scaffolds and in dentistry, there is still a huge demand for developing a simple efficient and green method for the production of HAP. Fig- ure 3 (a) and Figure3(b) presented representative SEM pictures of received bovine bone (raw material) and bones annealed at 900˚C, respectively.The microstructure of received bovine seemed dense due to the presence of organic substances in the bovine bone matrix.A typical bone-like matrix was obtained for samples annealed at 900˚C as shown in Figure3(a).Surface morphology showed the interconnected porous structure[47].Rahavi et al. studied the S. 
Pokhrel DOI: 10.4236/aces.2018.84016229 Advances in Chemical Engineering and Science ( calcium sulfate) but they have ex vivo applications.By the end of 19th century, surgeons already used plaster of Paris as a bone-filling substitute [49] [50].References [51] [52] [53] [54] [55] give details on recent history of CaPO 4 , bioceramics and biomaterials.Fred Houdlette Albee (1876-1945), who invented bone Oonishi explained the use of HAP composites in clinical orthopaedics for spacing or filling bone defects because of its important biological properties such as lack of immuno-reaction and absence of postoperative morphological change or volume decrease.HAP implants fixed with cement avoids problems of high density polyethylene wear particles[62].Other applications of HAP include femoral plugs in total hip replacement and HAP coating on metal components for cementless fixation.For rapid and strong cementless fixation porous metal surfaces are used; HAP coating of porous metal gives improved results.Bioactive interfacial bone cementation technique was also developed by introducing fine HAP granules between the bone and polymethyl methacrylate (PMMA) cement[62].Blends of polycarpolactone (PCL)/HAP, PCL/collagen (Col)/HAP, PCL/gelatin (Gel)/HAP, poly-L-lactic acid (PLLA)/Col/HAP and poly3-hydroxy-butyrateco-3-hydroxyvalerate (PHBV)/HAP were studied by various research groups as a substitute for bone tissue engineering[63]-[68]. Scaffolds with HAP polymeric composites improved the new bone tissue development with increased osteointegration, osteoblast adhesion and calcium mineral deposition on its surface[68].HAP-enhanced surface properties can be used to increase cell response and proliferation to induce mineralization in bone tissue engineering.Hydroxyapatite has been used in diversity biomedical fields such as matrices for bone cements, controlled drug release, tooth paste additive, dental implants, etc. [65].Prabhakaran et al. fabricated poly-L-lactic acid (PLLA)/HAP and PLLA/Collagen (Col)/HAP nanofibres by electrospinning and found that PLLA/Col/HAP nanofibres biocomposite are better than PLLA/HAP nanofibres for effective bone regeneration and mineralization [68].Polycaprolactone (PCL)/HAP/Col nanofibres has interconnected porous structure which provided mechanical support and facilitated extracellular matrix (ECM) production for bone tissue formation [65].Marra et al. examined the blends of biodegradable polymers, poly (caprolactone) and poly (D,L-lactic-co-glycolic acid), as scaffolds for applications in bone tissue engineering.HAP granules were introduced into the blends and porous discs were prepared.Mechanical properties and degradation rates in vi-S.Pokhrel DOI: 10.4236/aces.2018.84016231 Advances in Chemical Engineering and Science tro of the composites were determined.The discs were seeded with rabbit bone marrow or cultured bone marrow stromal cells and incubated under physiological conditions.This study suggested the feasible use of novel polymer/ceramic composites as scaffold in bone tissue engineering applications [69].Calcium phosphate-based ceramics, such as HAP, are of great interest as synthetic bone graft substitutes due to their similarity in composition to bone mineral and bioactivity as well as osteoconductivity [70].Wang et al. 
blended hydroxyapatite (HAP) into poly(3-hydroxybutyrate) (PHB) and poly(3-hydroxybutyrate-co-3-hydroxyhexanoate) (PHBHHx) to build films and scaffolds [71]. HAP blending showed improvement in the mechanical properties of PHB, including compressive elastic modulus and maximum stress, as well as enhancement in osteoblast responses, including cell growth and alkaline phosphatase activity. On the other hand, the blending of HAP particles into PHBHHx scaffolds fabricated by salt leaching was unable to either strengthen their mechanical properties or enhance osteoblast responses. Although HAP is bioactive and osteoconductive, its blending with PHBHHx cannot generate a better performance in bone reconstruction [71]. Petricca et al. reported composites of HAP and PLGA (poly(D,L-lactic-co-glycolic acid)) and found that the improved mechanical properties and increased osteogenic response of the HAP/PLGA composites make them appropriate as bone-substitution scaffolds [72]. Palazzo et al. investigated the adsorption and desorption of the anticancer drugs cis-diamminedichloroplatinum(II) (CDDP, cisplatin) and the new platinum(II) complex di(ethylenediamineplatinum)medronate (DPM), as well as the clinically relevant bisphosphonate alendronate, towards two biomimetic synthetic HAP nanocrystalline materials with either needle-shaped or plate-shaped morphologies and different physico-chemical properties. Ngiam et al. fabricated nanofibrous composites mimicking the bone components and observed that deposition of HAP on PLLA/collagen nanofibers results in better early osteoblast attachment to the mineralized nanofibers [78]. Goyal et al. used cellobiose-coated, spherical nHAP ranging from 50 to 150 nm to deliver a hepatitis B surface antigen (HBsAg) [90]. Encapsulation of fluorescein sodium salt and Cy3 amidite in nHAP was found to increase the fluorescence quantum efficiency about 4-fold, from 0.045 for the free dye to 0.202 for the encapsulated dye [89]. Nano HAP can be used as an antigen carrier.
2019-06-13T13:18:55.264Z
2018-09-13T00:00:00.000
{ "year": 2018, "sha1": "303c5775837d6af77d7e8a2687c9f539a3587f8c", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=87542", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "303c5775837d6af77d7e8a2687c9f539a3587f8c", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
266998343
pes2o/s2orc
v3-fos-license
The long and the short of it: Salivary telomere length as a candidate biomarker for hypertension and age‐related changes in blood pressure Abstract Hypertension becomes more prevalent with increasing age. Telomere length (TL) has been proposed as a candidate biomarker and can be accessibly extracted from saliva. However, clarity is needed to evaluate the suitability of using TL as a predictor in such instances. This study investigated salivary TL in a cohort of older adults from the 2008 Health and Retirement Study (n = 3329; F: 58%, mean age: 69.4, SD: 10.3 years) to examine any associations with blood pressure (BP). A Bayesian robust regression model was fit using weakly informative priors to predict the effects of TL with age, sex, systolic BP (SBP), diastolic BP (DBP), and treatment status. There were small effects of treatment (β: −0.07, 95% CrI [−0.33, 0.19], pd: 71.91%) and sex (β: −0.10, 95% CrI [−0.27, 0.07], pd: >86.78%). Population effects showed a reduction of 0.01 log 2 units in TL with each year of advancing age (95% CrI [−0.01, −0.00]). Conditional posterior predictions suggest that females, and treated individuals, experience greater change in TL with increasing age. Bayes R 2 was ~2%. TL declines with increasing age, differs between sexes, and appears to be influenced by antihypertensive drugs. Overall, all effects were weak. The data do not currently support the suitability of salivary TL as a biomarker to predict or understand any age‐related changes in BP. | INTRODUCTION Vascular aging is associated with various physiological changes at cellular and molecular levels, including oxidative damage and telomere attrition (Emami et al., 2022;Kirkwood, 2005).For this reason, it comes as no surprise that investigations into age-related conditions often consider changes in these variables to predict (or establish risk parameters for) disease onset and severity.The prevalence of hypertension increases with advancing age and is a strong predictor of future adverse cardiovascular events and cardiovascular disease (CVD) (Roseleur et al., 2022), increasing the risk for ischemic stroke by 38% in females (Gorgui et al., 2014).While in an older population CVD is the leading cause of death, the preceding risk factors are often triggered in the younger years of life but may be delayed in clinical appearance (Emami et al., 2022).Assessment of telomere length (TL) has the capacity to provide potentially useful information relating to the lifespan, and the overall "health status" of a cell which is expected to be poorer in the presence of disease (Shammas, 2011).Physiologically speaking, the length of telomeres should shorten as part of natural human aging to preserve genomic information that could be lost with the gradual reduction in DNA strand length during cellular division (Allsopp et al., 1995;Shammas, 2011).However, inherent telomere shortening can be differentiated from accelerated telomere shortening, where the latter represents an implicit deterioration in function that can be directly related to the aging process and age-related diseases, including those of the vasculature (Huang et al., 2020;Serra et al., 2003;Spyridopoulos et al., 2009).Telomere shortening has been observed in vascular endothelial cells, smooth muscle cells, and cardiomyocytes during aging (Edo & Andrés, 2005).Current associations between increasing age and cardiovascular mortality have identified population effects of longitudinal increases in systolic blood pressure (SBP) by ~7 mmHg per decade among adults 
over the age of 40 years (Gurven et al., 2012).The increases in SBP particular to this age demographic may be due to the dysfunction of the vascular endothelium, accompanied with an increase in oxidative distress-either due to intrinsic antioxidant deficiency, or the reduced clearance capacity of oxidants, with advancing age (Kozakiewicz et al., 2019).It has been observed that oxidants induce premature senescence in vascular smooth muscle cells, coupled with accelerated telomere shortening and altered telomerase activity (Cao et al., 2002;Matthews et al., 2006).Age-associated endothelial dysfunction and disturbances in cellular redox homeostasis often precede the development and onset of hypertension (Minhas et al., 2022).Within the last decade, meta-analyses (D'Mello et al., 2015;Haycock et al., 2014) and population studies (Yang et al., 2009) have demonstrated inverse associations between leukocyte TL and risk of developing hypertension, independent of conventional vascular risk factors.Meanwhile, another study suggests that there is no difference in mean aortic TL between hypertensive and normotensive individuals (Morgan et al., 2014).Therefore, understanding the factors influencing age-related changes in BP and the onset of hypertension development is of pertinent clinical relevance due to the stark increase in CVD-related mortality in later life.Associations between TL more broadly, and the vascular adaptations that occur during and as a result of hypertension onset, have been investigated in previous experimental and human studies (Bhupatiraju et al., 2012;Tellechea & Pirola, 2017).However, the functional relationship between the two, and the suitability of TL as an independent biomarker in general, remains under question.Currently within the literature, TL is more commonly assessed from leukocytes and whole blood, but can also be assessed from nonblood, disease relevant tissue types (i.e., lung, arterial, and skeletal muscle) (Demanelis et al., 2020).In 2020, Demanelis et al. 
(Demanelis et al., 2020) measured TL across 24 tissue types and showed that while TL is not constant, it is correlated, across tissues.Due to the accessibility and non-invasiveness of TL extraction from saliva, further investigation is required to assess if the previous associations between leukocyte TL and hypertension can translate to salivary TL, and therefore evaluate its use as a candidate biomarker for hypertension.Age-dependent associations with telomere shortening have been observed in humans in vivo and in vitro (Allsopp et al., 1995), as well as explored in terms of predicted maximal life expectancy (Steenstrup et al., 2017).Some epidemiological studies (Benetos et al., 2018;Huang et al., 2020;Masi et al., 2014) have examined the association between TL from a multitude of tissue types and hypertension, all with varying methodological approaches and conclusions.Using similar methodological principles to previous studies in the broader field (Elliott-Sale et al., 2020;Jones et al., 2020), we used a Bayesian approach to conduct analyses of basic demographic information (age and biological sex), BP (systolic and diastolic measurements), hypertensive treatment status (treated or untreated), and salivary TL data extracted from the 2008 Health and Retirement Study (HRS).Understanding the role of biomarkers in circulatory conditions or diseases is pivotal for targeted and early intervention strategies aimed at improving favorable lifestyle outcomes.Therefore, the aim of this analysis was to investigate salivary TL in a cohort of older adults and examine the effects of increasing age and any associated influences on BP measurements, with attempts to critically assess the suitability and utility of using salivary TL as a biomarker for hypertension onset or severity. | Data and population Data were extracted from the HRS 2008 wave.The HRS is sponsored by the National Institute on Aging (grant number NIA U01AG009740) and is conducted by the University of Michigan.The use of this database for epidemiological analyses is well established (Frank and Denis, 2013;Pleiss et al., 2023;Yu et al., 2023).The 2008 Cross-Wave and Physical Measurement public use datasets were used to obtain demographic and measured information, and the authors were granted access to sensitive telomere data collected from participants in the same year.All data were collated, and participants were matched between datasets using a combination of their de-identified Household Identification and Person Numbers using Microsoft Excel (Microsoft, v16.0) and imported into R (v4.0.3) (R Core Team, 2020) for cleaning and statistical analysis.The HRS uses a stratified, multistage cluster sample weight designed to represent the US population with respects to age, sex, and race.Informed and written consent were obtained from all participants. A total of 5808 participants provided buccal samples for TL assessment.Of these participants, 2479 had incomplete data for BP and treatment status and analysis proceeded with list wise deletion (n = 3329 included), assuming the data were missing at random (Bhaskaran & Smeeth, 2014). | Blood pressure Measurement of BP followed standardized protocols and can be found in the 2008 data description files (https:// hrs.isr.umich.edu/ ). 
| DNA collection and extraction for telomere length assessment

The 2008 telomere data include average TL data from 5808 HRS respondents who consented and provided a saliva sample during the 2008 interview wave. Average TL was measured using quantitative PCR (qPCR) by comparing the telomere sequence copy number in each participant's sample (T) to a single-copy gene copy number (S). The resulting T/S ratio is proportional to a reference mean TL. The T/S ratio was logarithmically transformed for analyses.

| Data analyses and processing

Data were analyzed using the "brms" package (Bürkner, 2017) in R (v4.0.3) (Kruschke & Liddell, 2018; R Core Team, 2020) to model relationships between T/S ratio (expressed as TL on the log2 scale), age, diastolic blood pressure (DBP), SBP, and sex, inclusive of an individual's treatment status. Bayesian methods for a linear-style regression model were utilized, with a Student's-t response distribution (Kruschke, 2013). Inference was by an estimation approach and did not consider hypothesis testing (Kruschke & Liddell, 2018). Model comparisons (Table S1) to evaluate the predictive performance of a generalized additive model using a smooth function of age, and a linear-style model without a smooth function, were performed using leave-one-out cross-validation (LOO) (Vehtari et al., 2021). There were no statistically influential differences between the models (looic estimates = 2977.8 [SE: 124.5] and 2978.8 [SE: 124.4], respectively), meaning that the models were equivocal, and therefore the modeling proceeded with the simpler linear-style approach.

The model can be characterized using the general equation TL ~ Student-t(ν, β₀ + β₁X₁ + … + βₙXₙ, σ), where: TL, telomere length; β₀, intercept; βₙ, coefficient for the nth predictor Xₙ; σ, scale of the residual error distribution; ν, degrees-of-freedom (DoF) of the residual error distribution. This allowed for direct estimations of the probability distribution of the treatment effect, which can be interpreted as the probability of different effect sizes given the observed data (Etz & Vandekerckhove, 2018; Kruschke & Liddell, 2018). Prior distributions were intended to be weakly informative (t-distributions with a location of 0 and ν of 3) and were used for all fixed effects, with a scaled parameter set for all continuous variables to correspond to the observed scale of the response variable, so that inference was driven predominantly by the data (Gelman et al., 2008; Lemoine, 2019). Results are presented as posterior means and 95% credible intervals (CrI). Model convergence was assessed with the Rhat statistic, which should be below 1.01 (Vehtari et al., 2021), and effective sample sizes (ESS) above 1000 were considered reliable (Bürkner, 2017), consistent with diagnostic Markov chain Monte Carlo (MCMC) methods used in Bayesian statistics. Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT) framework, indices of effect existence along with the probability of direction (pd) were quantified using bayestestR (Makowski et al., 2019). The pd represents the probability (from 50% to >99%) with which an effect goes in a particular direction (i.e., positive [>0] or negative [<0]).
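To make the modelling description above concrete, the following is a minimal sketch of how such a model could be specified with brms in R. It is illustrative only: the data frame and column names (hrs2008, tl_log2, age, sex, sbp, dbp, treated) are assumed placeholders rather than the authors' actual code, and the prior shown is simply one way of encoding the weakly informative t-distribution priors described in the text.

```r
library(brms)
library(bayestestR)

# Student's-t regression of the log2-transformed T/S ratio on the predictors
# named in the text; 'hrs2008' is a hypothetical cleaned HRS 2008 data frame.
fit <- brm(
  tl_log2 ~ age + sex + sbp + dbp + treated,
  data   = hrs2008,
  family = student(),                                        # t response distribution
  prior  = set_prior("student_t(3, 0, 2.5)", class = "b"),   # weakly informative
  chains = 4, iter = 4000, seed = 2008
)

summary(fit)        # convergence: Rhat should be < 1.01, ESS > 1000
loo(fit)            # leave-one-out cross-validation for model comparison
p_direction(fit)    # probability of direction (pd) for each coefficient
```

The generalized additive alternative compared in Table S1 would differ only in replacing the linear age term with a smooth, e.g. s(age), in the formula.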
| Visualization of conditional marginal effects

Estimated age effects conditional on treatment status (treated: 50% of sample) and sex (females: 50% of sample), and their contrasts, were obtained from the fitted estimates of the posterior samples and used to compute expected mean response values of TL, with uncertainty intervals set at 95%. Posterior predictions were posed to visualize the expected mean response value of TL given either systolic (Figure 1) or diastolic (Figure 2) pressure. To represent the expected mean response values across pressures, user-defined values were set for SBP and DBP, and age was specified in decades from 50 to 100 years (Lüdecke, 2018; Williams, 2012). Graphics were built with ggplot2 (Wickham, 2016).

| RESULTS

Posterior predictive checks showed no evidence of systematic lack-of-fit. All estimations successfully converged (Rhat = 1.001) and indices were >1000. Model summaries are shown in Table 2. Point estimates (posterior mean) and 95% CrI of the parameters are reported for coefficients (Table 3). Of note, there were weak independent effects of treatment status (β: −0.07, 95% CrI [−0.24, 0.14]) and sex (β: −0.10, 95% CrI [−0.18, 0.05]), and extraction of population-level ("fixed") effects for age showed an associated decline of 0.01 log2 units in TL with each year of advancing age (95% CrI [−0.01, −0.00]; Table S2). All other effects were negligible. Statistical summaries for the base model, full model, and general additive model (for model comparisons) can be found in Tables S3-S5, respectively. Probability of direction statements generated for the full model are provided in Table S6.

| Estimated effects and approximate posterior probability

As shown in Figures 1 and 2, estimated effects at representative values revealed small effects of sex on TL changes with age. The slope of change in TL with age was different between sexes (posterior mean for males: −0.003, 95% CrI [−0.01, −0.00]; posterior mean for females: 0.001, 95% CrI [−0.00, 0.00]), with an approximate 75% posterior probability of females experiencing a greater change in TL with age, considering estimated age effects conditional on sex and their contrasts (posterior mean: −0.00, 95% CrI [−0.00, 0.00]).
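As a rough illustration of the visualization step described above (again a sketch rather than the authors' code; the fit object and variable names carry over from the previous example, and the grid values are arbitrary), expected mean responses over a user-defined grid of ages and pressures can be drawn from the posterior and plotted with ggplot2. brms also offers conditional_effects(fit) as a one-line alternative.

```r
library(brms)
library(ggplot2)

# Hypothetical prediction grid: ages in decades from 50 to 100, a few SBP
# values, DBP fixed at 2/3 of SBP (as in the figures), by sex and treatment.
grid <- expand.grid(
  age     = seq(50, 100, by = 10),
  sbp     = c(110, 125, 140),
  sex     = c("male", "female"),
  treated = c(0, 1)
)
grid$dbp <- grid$sbp * 2 / 3

# Expected mean TL (posterior_epred) with 95% uncertainty intervals
epred        <- posterior_epred(fit, newdata = grid)
grid$tl_mean <- colMeans(epred)
grid$tl_lo   <- apply(epred, 2, quantile, 0.025)
grid$tl_hi   <- apply(epred, 2, quantile, 0.975)

ggplot(grid, aes(sbp, tl_mean, colour = factor(treated), fill = factor(treated))) +
  geom_ribbon(aes(ymin = tl_lo, ymax = tl_hi), alpha = 0.2, colour = NA) +
  geom_line() +
  facet_grid(sex ~ age) +
  labs(x = "SBP (mmHg)", y = "Expected TL (log2 T/S ratio)",
       colour = "Treated", fill = "Treated")
```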
| DISCUSSION The current study focused on evaluating salivary TL as a predictor for age-related changes in BP, considering sex and hypertensive treatment status (i.e., treated, or untreated).This was driven by the notion that natural age-related increases in BP, as well as sex differences in hypertension risk, may influence/be influenced by natural telomere shortening i.e., accelerate telomere attrition (Benetos et al., 2018;Brouilette et al., 2003).Understandably, the sheer biological variability in the parameters evaluated, and the small effects, highlight that in this context, salivary TL lacks utility as a candidate biomarker for hypertension (reflected in the observed model variance: Bayes R 2 estimate = 0.02, 95% CrI [0.012, 0.024]).Irrespectively, posterior predictive checks demonstrated that the model appeared to be an unbiased description of the data.Our findings support the existing proposition that TL declines with increasing age, and independently suggest that on average, TL is shorter in males than females, and in individuals receiving antihypertensive treatment (in both sexes).Although, these effects were very weak, and without considering any influencing factors are unlikely to be a true representation of real-life circumstances.At a cellular level, aging can be observed as the restriction in cellular division capacity prior to entering replicative senescence, which can be driven by oxidative stress, telomere shortening, and many other complex processes (Fyhrquist et al., 2013).Our independent finding of shorter TL (quantified as −0.01 log 2 units) with each year of increasing age is somewhat consistent with existing cross-sectional studies (Rehkopf et al., 2016;Yu et al., 2023).Interestingly though, the rate of cellular aging (as indicated by shortened TL) particularly in late midlife, has been shown to relate more to vascular damage, independent from other CV risk factors (Masi et al., 2014).A 2016 meta analyses (Tellechea & Pirola, 2017) of 3097 participants (n = 1415 hypertensive, n = 1682 control) indicated that leukocyte TL may be shorter in hypertensive individuals compared with normotensive individuals, but suggested that studies controlling for confounding effects would be needed to confirm these findings and further explore sources of heterogeneity.A confounding factor we explored in the current study is the effect of hypertensive treatment, and small, independent effects suggested a shorter average TL in those who were treated, compared to untreated (β: −0.07, 95% CrI [−0.33, 0.19]).We could speculate that even though an individual may be treated, there could be some irreversible degree of vascular damage that may have already occurred-but a claim like this need to consider additional factors.To visually explore these independent findings further, we posed reasonable posterior predictions to generate graphics based on a representative form of the full model (fitted estimates), using age, in decades, from 50 to 100 years across a set range of biologically feasible measures of diastolic and systolic pressures, conditional on sex and treatment status (comprised of estimates from females: 50%, treated: 50%).To the contrary, the expected average value of TL appeared to be longer in the treated groups of males and females with increases in SBP beyond ~125 mmHg (Figure 1) and increases in DBP beyond ~90 mmHg (Figure 2).While the figures do not reflect individuals within a clinically "controlled" hypertension range who are treated, and individuals with "masked" 
hypertension who are untreated, this is an interesting observation considering the clinical significance of these values represent a need for hypertensive intervention.In 2002, Cao et al. (Cao et al., 2002) found telomerase activity to be selectively enhanced in the aortic tissues of genetically hypertensive rats before the onset of hypertension, and that this corresponded with increased proliferation of vascular smooth muscle cells.Consistent with the increase in telomerase activity, TL was increased within the genetically hypertensive rats, and stabilized without progressive shortening (Cao et al., 2002).The authors suggest this reflects that TL maintenance can occur before significant vascular wall remodeling and the onset of hypertension.Although we cannot speculate the same motivation from our findings, this may call for a closer inspection of salivary telomerase activity in conjunction with salivary TL assessment.More recently, Deng et al. (Deng et al., 2022) similarly report that longer TL is related to an increased risk of hypertension (using data from a genome-wide association study of participants in the UK BioBank).Indeed, these findings conflict with previous evidence, and the lack of consensus among the literature warrants further investigation in order for these claims to be endorsed.Our observations did indicate that there may be a different effect of pressures, dependent on treatment status (effect of being treated on SBP had a 51.62% probability of being negative [pd <0], effect of being treated on DBP had a 66.11% probability of being positive (pd >0)).Existing literature suggests that antihypertensive drugs may influence cell senescence and intracellular oxidative stress (Münzel & Keaney Jr, 2001;Sorriento et al., 2018) potentially altering TL. Oxidative stress (both oxidative eustress and distress) plays an important role in the molecular processes of vascular aging by way of modulating pro-inflammatory responses, contributing to vessel and endothelial (dys) function, altering calcium homeostasis in vascular cells, as well as autophagy activation in endothelial and vascular smooth muscle cells (Dudinskaya et al., 2020).It has been observed that specific classes of antihypertensive drugs can affect TL through an endothelial nitric oxide synthase (eNOS)-dependent anti-senescence effect in human endothelial cells (Hayashi et al., 2014;Zhang et al., 2020).An important consideration to make here is that broader data, including greater diversity of BP as a continuous measurement (not dichotomized), would be helpful in assessing causality and associated mechanistic effects of antihypertensive drugs. 
Coefficients for parameter estimates showed sexspecific differences in TL (β: −0.10, 95% CrI [−0.27, 0.07]), but is highly variable.We identified that the slope of change in TL with age for males (posterior mean: −0.00, 95% CrI [−0.01, −0.00]) was different to the slope for females (posterior mean: 0.00, 95% CrI [−0.00, 0.00]), with an approximate 75% posterior probability of females experiencing a greater change in TL with age.Generally speaking, the life expectancy of females is considered to be longer than that of males on average (Hoogendijk et al., 2019), and we could speculate that perhaps a greater longevity of females is associated with a delay in TL shortening, granted our findings cannot strongly explain this.A longitudinal study published in 2019 demonstrated that women had higher total life expectancies but spent more time in poor health compared to men (Hoogendijk et al., 2019).With this considered, perhaps the use of salivary TL as a general biomarker for longevity could be warranted, but there is no reasonable way of translating this measure into relevant information regarding the quality of health of an individual or a population.It has been postulated that genetic factors, as well as estrogen, can mediate (or slow) TL decline (Fyhrquist et al., 2013;Lee et al., 2005).Considering the mean age of the female population in the present study is generalized to be post-menopausal, estrogen activity (or lack thereof) may not be a driving factor at play, but given the known alterations in vascular function and redox environments by estrogens (Murphy & Kelly, 2011;Xiang et al., 2021), cannot be ruled out.This could conceive the notion that the varying contribution of sex-specific hormones may be factors in either determining longer average TL in the female sex at birth, or in the maintenance of TL over the lifetime, independent of other factors.With these observations in mind, it could be warranted to further consider the physiological influence of sex differences, age and hormonal status in the clinical management and control of hypertension (Ahmed et al., 2019). 
| STRENGTHS AND LIMITATIONS

The HRS obtains TL from salivary DNA samples which, while accessible, could be highly variable when self-collected. Additionally, although we accounted for treatment status overall, a limitation is that the data collected did not discern the name or class of the antihypertensive drugs, or the prescribed dosage or duration of treatment, and we did not account for additional medications being taken as potential confounders/interaction effects in the analysis. A criticism of previous population-based, epidemiological, or cross-sectional studies assessing TL is that there is currently no known range or value to provide a substantive understanding of what classifies telomeres as "short" or "long", and this, in part, may be the reason some values are perceived as extreme outliers, which can often be identified and removed from analysis, or arbitrarily classified as shorter or longer based on the mean of a sample. A strength of the current study is that the estimated DoF from the conditional distribution (ν: 3.25, 95% CrI [2.96, 3.54]) provides empirical evidence that kurtosis is important for this variable in the observed data, which would otherwise be missed in a model using a normal distribution. Therefore, we propose that exploring some of the more "extreme" outliers to understand more about telomere maintenance and biology, why they appeared and whether it is likely similar values will continue to appear, may be necessary to establish a "homeostatic range" of typical TL distribution.

| FUTURE DIRECTIONS

Where data are available, future longitudinal cohort studies are warranted to assess the link between the rate of telomere attrition, BP, and antihypertensive treatment, including medication type, duration of treatment, and dosage. In doing so, there is capacity to explore whether telomere attrition is inherently maintained over time, or whether it occurs at an increased rate during disease development. It would be worth exploring the changes in response to treatment, considering age and sex, and whether alternative interventions such as lifestyle modification can support longer TL more effectively at different ages and between sexes. For acute, and more rapid, assessments of potential pre-manifesting vascular mediators, future avenues should investigate more clinically relevant biomarkers or measures. Perhaps greater efforts should be directed into observing more sensitive changes at cellular and molecular levels in vivo, to detect the manifestation of vessel dysfunction that leads to age-related changes in BP.
| CONCLUSIONS

While measuring TL can provide valuable insight into the possible role of TL in the pathophysiology of vascular aging and disease, telomere dynamics are highly variable and are not a static property, meaning much more information is needed to understand telomere function and any associated effects on disease onset or severity. Taken together, the overall findings of this study support an age-dependent association of salivary TL with advancing age, and that there are sex-specific TL dynamics that may contribute to the development and onset of hypertension. Additionally, we identify that the use of antihypertensive drugs warrants further exploration in relation to vascular telomere activity. However, given the very weak relationships identified in the current analysis, salivary TL cannot provide the required sensitivity to predict or understand any age-related changes in BP, and the small effects allude to any age-related changes in BP more likely being explained by other factors. As a candidate biomarker for hypertension, a once-off assessment of salivary TL is a biological measurement with limited utility.

FIGURE 1 Posterior predictions representing the expected average value of TL with changes in SBP, given all other conditions are true. Females, regardless of treatment, have longer TL on average. In both sexes, TL appears to be longer in treated individuals when pressure exceeds 125 mmHg (shading either side of the line shows 95% uncertainty intervals). Importantly, DBP within the graphic is informed at values set at a pressure of 2/3 multiplied by that of SBP, to reflect a biologically feasible measurement with any given change in SBP. DBP, diastolic blood pressure; SBP, systolic blood pressure; TL, telomere length.

FIGURE 2 Posterior predictions representing the expected value of TL with changes in DBP, given all other conditions are true. As with Figure 1, females, regardless of treatment, appear to have longer TL on average. In both sexes, TL appears to be longer in treated individuals when DBP increases above 90 mmHg (shading either side of the line shows 95% uncertainty intervals). SBP is informed within the graphic as a constant function of DBP (expressed as DBP divided by 2/3) to reflect biologically feasible pressure combinations with any given change in DBP. DBP, diastolic blood pressure; SBP, systolic blood pressure; TL, telomere length.

TABLE 1 Descriptive statistics of participants (n = 3329) included in the model, expressed as medians and inter-quartile ranges (IQRs) or counts (%).

TABLE 3 Parameter estimates from the full model for TL using the parameters of interest (SBP, DBP, treatment status, sex, and age), including interaction terms for age and sex, and for treatment status and blood pressure, for participants (n = 3329).
2024-01-17T06:16:52.581Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "7d5408a6a4cfaf1884a2f3168036c02a4b35b65c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "bcb2978d837f2d08ddcc8f3e4c93f9c134edbbf2", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
268160564
pes2o/s2orc
v3-fos-license
Carbon Tax Research Trend: This research focuses on mapping articles that discuss carbon tax, published in Sinta 1 and 2 accredited journals and Scopus quartile 1 and 2 journals. The purpose of this research is to explore carbon tax research more deeply, with a focus on business, management and accounting in the 2015-2023 period. The method used in this research is a quantitative method with a bibliometric approach. This bibliometric approach is used to determine the development of research topics related to carbon tax research trends. Research samples, journal names, publication years, research methods, types of research variables and research data sources are the basis for the mapping in this study. This research also visualizes carbon tax keywords using VOSviewer software. The search for articles that discuss carbon tax research found 8 articles published in accredited journals Sinta 1 and Sinta 2 and 50 articles from journals indexed by Scopus quartile 1 (Q1) and quartile 2 (Q2). This study contributes to knowing the trend of scientific publications on carbon tax, and provides an opening for researchers to conduct future research with deeper calculations of the amount of carbon emissions and carbon tax, involving subjectivity in disclosure assessment.

INTRODUCTION

Global problems can threaten various things, both in terms of the environment and human life (Lindenmayer et al., 2023). This threat is the impact of climate change arising from global problems. One of the triggers of climate change is the increase in exposure to greenhouse gases (GHG) (Mar et al., 2022). The United Nations (UN) suggests that the increase in GHG exposure will occur in line with economic growth, population and the increase in the level of human life (Belmonte-Ureña et al., 2021). Reducing greenhouse gas (GHG) emissions is possible if a carbon tax is implemented (Brand et al., 2013; Ihsan & Hutama, 2023; Qiu et al., 2020). Several countries in the world have implemented carbon taxes, such as Finland (Roh et al., 2020; Zhou et al., 2021). Apart from Finland, a carbon tax has also long been implemented in Africa (Alton et al., 2014; Choi, 2013; Shu et al., 2017). A carbon tax is an excise tax levied based on the amount of energy used, including coal, coke, gas, and crude oil (Dong et al., 2023; Dwyer et al., 2013; J. Zhang et al., 2024). A carbon tax policy encourages entities to do business at the smallest possible cost while increasing the finances that raise entity profits. This strategy has the potential to increase state revenue. The revenue can be allocated to sustain economic growth and improve national development for a country (Zhao et al., 2022).

Carbon tax is one of the most interesting topics for further research. A lot of research has been done by previous researchers who focus on one subject area, such as business, management and accounting, with the keyword carbon tax (Chang et al., 2023; Goh et al., 2023; Mashhadi Rajabi, 2023; O'Ryan et al., 2023; Ramadhani & Koo, 2022; Sun & Chen, 2022; Q. Zhang et al., 2022; S.
Zhang et al., 2023). This study was motivated by research from Wimala & Yeremy (2022) on bibliometric analysis of tax research. The researchers tried to re-examine taxation in the same way, focusing on carbon tax, using 8 articles on carbon tax in 7 accredited Sinta 1 and Sinta 2 journals during 2015-2023. In addition, the researchers also used Scopus Quartile 1 and Quartile 2 indexed journals, with a total of 50 articles on carbon tax found during 2015-2023.

This research aims to explore carbon tax research in more depth, with a focus on the field of business, management and accounting, where gaps will be sought that can be researched by future researchers. Previous research examined the potential application of carbon tax policies in industry using the bibliometric analysis method. The difference from research conducted by previous researchers is that the present researchers focus more on the subject area of business, management and accounting with the keyword carbon tax. Previous research conducted by Wimala & Yeremy (2022) looked at the effect of carbon tax implementation, especially in the construction industry in countries that have implemented carbon taxes before. In addition, Wimala & Yeremy's research (2022) looked at the potential for its application in Indonesia and also used SWOT analysis to determine internal and external factors in the implementation of carbon tax policies. Based on the above statements, the researchers focused on selecting Sinta 1 and 2 indexed journals and Scopus indexed journals with quartile 1 (Q1) and quartile 2 (Q2). Thus, 8 Sinta 1 and 2 articles and 50 quartile 1 (Q1) and quartile 2 (Q2) articles were collected for the period 2015-2023. The period 2015-2023 was chosen by the researchers because in 2015 there was the Paris Agreement. This agreement commits countries to reduce carbon dioxide and greenhouse gas emissions to limit global warming to below 2.0 degrees Celsius (Delbeke et al., 2019; Kuo et al., 2016; Ran & Xu, 2023; Stefano & Richard, 2009).

The contribution of this research is to determine the trend of scientific publications on carbon tax, the core journals of scientific publications on carbon tax, journals of scientific publications based on research variables, scientific publications based on year of publication, research methods, scientific publications based on carbon tax research data sources, and the visualization of carbon tax keywords using VOSviewer. Second, this article provides an overview for future research to conduct deeper calculations related to the amount of carbon emissions and carbon tax, involving subjectivity in assessing disclosure. In addition, research on carbon tax with bibliometric analysis is expected to contribute to the development of science as a form of progress in human civilization.

The literature review aims to report trends, relationships, consistencies, and gaps so that work can be done in an organized manner and can be evaluated. According to Donthu et al.
(2021), bibliometric analysis is a popular method that can be used to explore and analyze large amounts of scientific data. This makes it possible to reveal the evolution of a particular field, as well as to highlight developing areas in the field being studied (Donthu et al., 2021). According to Haryani & Sudin (2020), the implementation of bibliometric analysis involves several steps that must be taken, including the data search process. This bibliometric data search was carried out manually by accessing Sinta and Scopus web journals with the keyword carbon tax and focusing on the business, management and accounting sub-area. In the bibliometric analysis, the researchers chose journals accredited by Sinta 1 and 2 and Scopus quartile 1 and 2. The bibliometric analysis was carried out on two aspects: (1) development trends in journals with the carbon tax keyword, and (2) visualization of the results of the bibliometric analysis with the help of VOSviewer.

Bibliometrics is conducted to further explore the research on carbon tax, with a focus on business, management and accounting, looking for gaps that can be researched by future researchers. This article is presented in 5 sections. The first is the introduction, the second is the literature review, the third is the research method, the fourth is the results and discussion, which presents the mapping by scientific journal name and year of publication of the research, the mapping of the year of publication of carbon tax articles, the mapping of carbon tax research methods, and the mapping of research data sources, and the fifth is the closing (conclusion).

METHODS

The research used a descriptive quantitative method with a bibliometric approach. The unit of analysis in this research is scientific articles on carbon tax. The research data source is scientific publications on carbon tax in accredited scientific journals Sinta 1 and 2 and Scopus quartile 1 and 2. The reason for choosing these accredited scientific journals is that Sinta 1 and 2 are national journals that have been verified or recognized and have been reviewed by several reliable reviewers before publication. The researchers use the Scopus database because of its internationally recognized reputation for quality. In addition, Scopus also has a big impact on a journal or institution in the world of scientific publications. The population is scientific publications on carbon tax that have been published and indexed by Sinta 1 and 2 and Scopus quartile 1 and quartile 2. The samples used in this study are scientific publications on carbon tax indexed by Sinta 1 and 2 and Scopus quartile 1 and quartile 2 for the last 8 years, from 2015 to 2023; 8 Sinta 1 and 2 articles and 50 Scopus quartile 1 and 2 articles were found.
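As a hedged illustration of the keyword-based mapping and VOSviewer-style co-occurrence counting described above, the sketch below builds a keyword co-occurrence matrix from an exported article list in R. The file name (articles.csv), its columns (keywords, separated by semicolons), and the presence of a "carbon tax" keyword are assumptions made for the example, not the authors' actual workflow.

```r
# Read a hypothetical export of the selected articles
articles <- read.csv("articles.csv", stringsAsFactors = FALSE)

# Split each article's semicolon-separated author keywords
kw_list <- strsplit(tolower(articles$keywords), ";\\s*")
all_kw  <- sort(unique(unlist(kw_list)))

# Article x keyword incidence matrix
inc <- t(sapply(kw_list, function(kw) as.integer(all_kw %in% kw)))
colnames(inc) <- all_kw

# Keyword co-occurrence matrix; the diagonal holds keyword frequencies,
# off-diagonal entries are the link strengths that VOSviewer clusters
cooc <- crossprod(inc)

# Keywords most often co-occurring with "carbon tax"
head(sort(cooc["carbon tax", ], decreasing = TRUE), 10)
```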
The data collection technique used secondary data. The researchers conducted searches by opening the Scopus database at www.scopus.com with a subscription (paid) account so that they could access all its features. The researchers then used the keyword carbon tax, with the search restricted to the article title. The data obtained are publication data in the field of taxation with the scope of carbon tax for the last 8 years, namely 2015-2023. After obtaining the search results, the researchers began to explore the Sinta and Scopus databases to see the core journals of scientific publications on carbon tax, journals of scientific publications based on research variables, scientific publications based on the year of publication and research methods, and scientific publications based on carbon tax research data sources. Furthermore, the researchers visualized the development of research on carbon tax using VOSviewer software and presented the result items for the top 4 clusters.

Research sample selection result

A total of 8 carbon tax articles were collected from national scientific journal sites indexed by Sinta 1 and 2, as can be seen in Table 1. In addition, there are also 50 articles on carbon tax from international scientific journal sites indexed by Scopus quartile 1 (Q1) and quartile 2 (Q2). Both types of journals have a submission deadline of December 31, 2023. Articles that did not fit the sample criteria were excluded, leaving 58 eligible articles, of which 50 are Scopus Q1 and Q2 articles and 8 are Sinta 1 and 2 articles. The selection of the article sample is presented in Table 1. Articles that did not meet the requirements, such as not being published in Sinta 1 and 2 journals or Q1 and Q2 Scopus journals, were excluded from the sample selection results. The samples selected based on these filters are discussed in more detail below.

For Figure 1, the focus is on explaining the development of articles published in accredited journals Sinta 1 and 2. The distribution of publications across the 8 articles from 2015 to 2023 can be observed. In 2016 there was only 1 Sinta article that discussed carbon tax. In 2017 and 2018, there were no articles published in Sinta 1 and 2 that discussed carbon tax. In 2019 there was 1 article published in a Sinta accredited journal. However, in 2020 and 2021 there were no articles published in Sinta 1 and 2 journals. 2022 is the year in which the number of Sinta 1 and 2 accredited articles discussing carbon tax increased to 4. In 2023 there were publications in Sinta 1 and 2 journals, but only 2 articles were published. Thus, it can be said that 2022 is the year with the highest number of published articles, namely 4 articles, over the last 8 years (2015 to 2023). 2023 is the year with the second most publications, with a total of 2 articles published in Sinta 1 and 2, while in 2016 and 2019 only 1 article each could be published. Apart from these years there were no articles published in nationally accredited journals Sinta 1 and 2. Thus, research on carbon tax is still minimal, with only 8 studies published in 7 journals.
Figure 2 is the distribution of Q1 and Q2 Scopus journals in 2015-2023. The researchers added the Q1 and Q2 Scopus journal mapping because the coverage of Sinta 1 and 2 journals alone is small and still minimal. The researchers mapped from 2015 to 2023, the same coverage as the Sinta 1 and Sinta 2 mapping. From Figure 2, it can be observed that in 2015 and 2016 there were only 2 studies that discussed carbon tax; 2015 and 2016 were the years with the least carbon tax research. It can also be observed that 2019 and 2023 are the years with the highest number of studies on carbon tax, each with 11 articles. From 2017 to 2019, there was an increase in research on carbon tax. 2020 saw a decrease in the number of studies, with only 4 articles. From 2020 until 2023 there was again an increase in the number of studies on carbon tax. Thus, it can be concluded that a fair amount of carbon tax research was conducted from 2015 to 2023, namely 50 articles. These 50 scientific articles on carbon tax research are indexed in Scopus Q1 and Q2.

Table 3 shows that the Journal of Cleaner Production is the international scientific journal that publishes the most articles on carbon tax. The Journal of Cleaner Production published 27 articles (54%). The journal with the second most articles is the International Journal of Production Economics, which published 6 articles (12%) on carbon tax. Apart from the journals mentioned above, there is only 1 article (2%) in each journal. Table 3 also shows that the Journal of Cleaner Production has been consistent over the last 8 years, 2015-2023, as the journal with the most publications. Thus, journals other than the Journal of Cleaner Production represent an opportunity for further researchers to publish articles on carbon tax. From Table 3, it can be seen that many journals contain only 1 article about carbon tax.

Mapping by Year of Publication

Table 4 shows the mapping of the year of publication of Sinta 1 and 2 carbon tax articles. Based on the year of publication, 2022 is the year with the most published articles, namely 4 articles (50%). The year 2022 also has the most research topics on carbon tax over the last 8 years. 2023 is the year with the second most publications in Sinta 1 and Sinta 2 journals; the number of carbon tax studies in 2023 was 2 scientific articles (25%). 2016 and 2019 are years with only 1 published scientific article each (12.5%). Apart from the previously mentioned years, there were no studies on carbon tax indexed by Sinta 1 and Sinta 2. Thus, carbon tax is still published only minimally in nationally accredited journals such as Sinta 1 and Sinta 2. The years with no scientific publications on carbon tax are an interesting finding. This is because carbon tax is a new policy implemented in Indonesia and is a very interesting trending topic that, if studied in more depth, can add to the treasure of knowledge in the field of taxation.

Mapping by Research Method

Table 6 presents the mapping based on the research method of Sinta 1 and 2 carbon tax articles.
In 2022 and 2023, many carbon tax studies used quantitative methods; in 2022 and 2023 there were 2 quantitative articles each. Quantitative research also occurred in 2019, when there was only 1 carbon tax study using quantitative methods. Over the last 8 years, quantitative research amounted to 5 studies (62.5%). Meanwhile, qualitative research was highest in 2022, when carbon tax research using qualitative methods amounted to only 2 articles. Other qualitative research occurred in 2016, but consisted of only 1 article. The number of qualitative studies over the last 8 years is 3 articles (37.5%). Interestingly, there are no mixed-methods studies in the Sinta-indexed carbon tax research. Thus, it can be concluded that an opportunity for future researchers is to increase the number of mixed-methods studies. In addition, more research with qualitative methods can be developed, because qualitative research is less common than quantitative research. Furthermore, Table 7 presents the mapping based on the research methods of Q1 and Q2 Scopus carbon tax articles. As seen in Table 7, quantitative and qualitative research methods have the same proportion, namely 24 studies each (48%). For quantitative methods, 2019 was the year with the most studies, namely 7, and 2018 also dominated with 6 quantitative studies; 2015 was the year with the least quantitative research. For qualitative research, 2023 is the year with the most studies, namely 7; the least qualitative research occurred in 2015, 2016 and 2022, with only 1 study each. There were only 2 mixed-methods studies (4%), which occurred in 2021 and 2023. Thus, a gap for further researchers is to use mixed methods, because very few researchers have used them so far.

Mapping by Research Variable Type

Table 8 shows the mapping of carbon tax research variables based on journals indexed by Sinta 1 and Sinta 2. The carbon tax variable is the variable with the highest total, namely 4 (25%). Furthermore, the constraints and green economics variables are the second most frequent variables, namely 2 (12.5%). Over the past 8 years, the total number of variables found in the 8 articles is 16 (100%). Thus, it can be concluded that variables other than carbon tax, constraints and green economics can be used in research on carbon tax because their use is still very minimal. Table 9 also presents a mapping of the types of research variables on carbon tax, mapped from Q1 and Q2 Scopus indexed journals. During the last 8 years, 2015-2023, there were 33 research variables (100%) from the 50 Q1 and Q2 Scopus articles. The variables carbon tax, reducing energy consumption, economy and the tourism industry are the most frequent variables, namely 4 (8.33%) each. The environment variable is the second most frequent variable over the last 8 years, namely 3 (6.25%). Variables that appear in only 1 article (2.08%) in Table 9 can be used for development in future research.
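To show how the year-by-method mapping summarized above could be reproduced, here is a small sketch that tabulates articles by publication year and research method. It assumes the same hypothetical articles.csv with year and method columns and is not the authors' actual procedure.

```r
articles <- read.csv("articles.csv", stringsAsFactors = FALSE)

# Cross-tabulation of publication year against research method
# (quantitative, qualitative, mixed), mirroring the mapping tables
method_by_year <- with(articles, table(year, method))
print(method_by_year)

# Share of each method over the whole 2015-2023 period
round(100 * prop.table(table(articles$method)), 1)
```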
Mapping by research data source

The mapping of research data sources is done by classifying articles on carbon tax based on data sources, namely primary data, secondary data, and mixed data (primary and secondary). Tables 11 and 12 show the mapping based on the research data sources. From the results of data processing in Figure 3 using keywords, the carbon tax research development map indexed by Scopus from 2015-2023 formed 32 clusters with a total of 162 items. The researchers display the top 4 clusters, with a total of 41 items. The presentation of the 4 clusters is displayed in Table 13 below.

CONCLUSIONS

This study aims to explore carbon tax research with a focus on business, management and accounting in scientific journals indexed by Sinta 1 and 2 and Scopus Q1 and Q2 during the period 2015-2023. The number of studies used is 58 articles, consisting of 8 Sinta articles and 50 Scopus articles selected according to predetermined procedures. This study found that the dominant publication year in Sinta 1 and 2 journals is 2022, while for Q1 and Q2 Scopus journals it is 2019 and 2023. Indonesian Treasury Review: Journal of Treasury, State Finance, and Public Policy and the Journal of Accounting and Finance are the journals that publish the most articles among Sinta 1 and 2 journals, namely 2 articles each (25%). For Q1 and Q2 Scopus journals, the Journal of Cleaner Production is the journal that published the most articles from 2015-2023, namely 27 articles (54%). The quantitative method is the most used research method in carbon tax research in Sinta 1 and Sinta 2 as well as Scopus Q1 and Q2. For data sources, most researchers use secondary data sources. Visualization of carbon tax keywords found 32 clusters with a total of 162 word items; only the top 4 clusters, with a total of 41 items, are presented. In this study, only the keyword carbon tax was used to search for journals. Thus, further research can add several keywords related to the topic of carbon tax so that more research articles on tax will be collected. Limitations of this research also arise from the search process, where some journal sites produced errors when accessed and relevant articles were not available as full papers. Future research can also expand the search for articles on other websites/portals to increase the development of research on carbon tax.

Figure 3. Visualization of Carbon tax keywords from Scopus journals Q1 and Q2. Source: Personal Data Processing Results (2023)

Table 1. Carbon tax research sample selection

Table 2 shows that the Indonesian Treasury Review: Journal of Treasury, State Finance, and Public Policy and the Journal of Accounting and Finance are the national scientific journals that publish the most articles on carbon tax, namely 2 articles each (25%). The other journals publish only 1 article (12.5%) each on carbon tax. From Table 2, observed over the last 8 years, the Journal of Treasury, State Finance, and Public Policy and the Journal of Accounting and Finance have consistently published articles on carbon tax, with 2 articles each (25%).

Table 2. Mapping of Sinta 1 and 2 Carbon tax publications by journal name

Table 3. Mapping of Q1 and Q2 Scopus Carbon tax publications by journal name
Table 5 describes the mapping of the year of publication of carbon tax articles from Scopus Q1 and Q2. The year 2019 is the largest publication year for articles on carbon tax from Scopus Q1 and Q2, namely 10 articles (20%). Furthermore, the second most research on carbon tax is shown in 2018 and 2023; these years each published 9 scientific research articles on carbon tax (18%). 2021 is the third year, with 6 articles (8%). 2016 was the year with the fewest published articles in Scopus Q1 and Q2, namely 1 article (2%); in 2016, research on carbon tax was still minimal. Of course, this is an opportunity for researchers to conduct further research for that year. Over the last 8 years, from 2015 to 2023, research on carbon tax has fluctuated but increased. Thus, research on carbon tax can continue to be developed, considering that over the last 8 years only 50 articles were indexed by Scopus Q1 and Q2.

Table 4. Mapping of the year of publication of Sinta 1 and 2 Carbon tax articles

Table 5. Mapping of the year of publication of Q1 and Q2 Scopus Carbon tax articles

Table 7. Mapping by research method of Q1 and Q2 Scopus Carbon tax articles

Table 8. Mapping by research variable type of Carbon tax, Sinta 1 and 2

Table 9. Mapping by type of research variable of Carbon tax, Scopus Q1 and Q2

Table 11 maps carbon tax research data sources from Sinta 1 and 2 indexed journals, while Table 12 maps carbon tax research from Scopus Q1 and Q2 journals. Table 11 shows that secondary data is the most widely used research data in carbon tax research during 2015-2023. Secondary data sources in research on this topic are obtained from entity report data, while primary data sources are obtained from questionnaire surveys, observations, interviews, documentation, and experiments. In Table 12, for the last 8 years, secondary data is also the most common, at 26 (52%). As in Table 11, this secondary data is also obtained from entity report data, while the primary data come from questionnaire surveys and other sources. In both Tables 11 and 12, mixed data are minimally used; of course, this makes the use of mixed data interesting for future research.

Table 10: Mapping by data source of Carbon tax research, Sinta 1 and 2

Table 11: Mapping by data source of Q1 and Q2 Scopus Carbon tax studies

Table 12: Exposure of items from each cluster in Q1 and Q2 Scopus journals on Carbon tax.
2024-03-03T19:46:23.095Z
2023-12-29T00:00:00.000
{ "year": 2023, "sha1": "b320824e47df53e77fde308d21493a3dcfd5d0bf", "oa_license": "CCBY", "oa_url": "https://www.ilomata.org/index.php/ijtc/article/download/1002/524", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e3d7df6e12eb9278e0610d65d5615b746ee824f4", "s2fieldsofstudy": [ "Business", "Environmental Science", "Economics" ], "extfieldsofstudy": [] }
195685486
pes2o/s2orc
v3-fos-license
The carboxypeptidase angiotensin converting enzyme (ACE) shapes the MHC class I peptide repertoire The surface presentation of peptides by major histocompatibility complex (MHC) class I molecules is critical to CD8+ T cell mediated adaptive immune responses. Aminopeptidases are implicated in the editing of peptides for MHC class I loading, but C-terminal editing is thought due to proteasome cleavage. By comparing genetically deficient, wild-type and over-expressing mice, we now identify the dipeptidase angiotensin-converting enzyme (ACE) as playing a physiologic role in peptide processing for MHC class I. ACE edits the C-termini of proteasome-produced class I peptides. The lack of ACE exposes novel antigens but also abrogates some self-antigens. ACE has major effects on surface MHC class I expression in a haplotype-dependent manner. We propose a revised model of MHC class I peptide processing by introducing carboxypeptidase activity. ACE affects surface MHC class I expression Individual MHC class I molecules select peptides with a preferred consensus motif and thus bind distinctive peptide repertoires 5 . If ACE affects the peptide repertoire, then the enzyme may affect surface expression of class I proteins. Surface K b and D b amounts were assessed using ACE-deficient cells from Ace −/− mice 21 , ACE wild-type cells (Ace +/+ ) and ACE overexpressing cells. For the latter, we used cells from 2 unique mouse strains: ACE 10 (ref. 22) and Pd (see Methods), which overexpress ACE in macrophages and DCs, respectively ( Supplementary Fig. 3). Among the peritoneal F4/80 hi macrophages and the splenic CD11c hi DCs, there was a 17% increase of mean fluorescence intensity of K b and a 31% increase of D b in the Ace −/− cells compared to their Ace +/+ counterparts (Fig. 2a). In contrast, both molecules were decreased in ACE over-expressing cells (16% for K b and 22% for D b ). A similar K b and D b increase was observed in Ace −/− total splenocytes ( Supplementary Fig. 4a). Furthermore, IFN-γ-treated Ace −/− skin-derived fibroblasts expressed significantly more surface K b and D b than IFN-γ-treated Ace +/+ cells (Fig. 2b). Thus, analysis of mice having high, normal or no ACE expression indicates an inverse relationship between ACE abundance and quantity of surface class I. To confirm the relationship between ACE and class I expression, we first transfected L.K b cells with an ACE expression construct or with an ACE cDNA, termed mACE, in which the two ACE catalytic domains were rendered inactive by point mutations. The catalytically active ACE decreased K b surface expression by 20% when compared to the cells expressing mACE (Fig. 2c). Moreover, when wild-type C57BL/6 mice were administered the pharmacologic ACE inhibitor ramipril for 6 d, splenocyte K b and D b expression increased by 34% and 43%, respectively (Fig. 2d). To investigate whether ACE can also affect the presentation of other MHC class I molecules, we overexpressed ACE in K haplotype L929 cells and D haplotype A20 cells. This resulted in a 19% increase of K k , a 15% decrease of D k and a 12% decrease of K d , while D d barely changed (Fig. 2e). After BALB/c mice were treated with ramipril, splenic DC K d and D d increased by 12% and 9%, while L d was unaffected ( Supplementary Fig. 4b). Thus, ACE has major effects on surface MHC class I expression in a haplotype-dependent manner. 
Peptide supply and MHC class I stability

To understand the mechanism for surface MHC class I modulation, we first measured class I-peptide stability by testing surface MHC class I dissociation after treating cells with brefeldin A. This study showed that the rate of loss of surface K b and D b was similar in total peritoneal cells from either Ace −/− or Ace +/+ mice (Fig. 3a). Next, we measured the de novo appearance of cell surface class I. For this test, existing cell surface class I-peptide complexes were stripped by short-term acid treatment. Cells were then moved back to normal media and the time-dependent recovery of cell surface class I molecules was measured (Fig. 3b). The rate of recovery in Ace −/− cells was 10-20% faster than in Ace +/+ cells. These data suggest that part of the reason for increased surface class I expression in Ace −/− cells may be faster assembly of new peptide-MHC complexes but not increased peptide-dependent class I stability in these cells.

ACE edits the MHC class I peptide repertoire

Because ACE abundance directly affects the surface expression of peptide-MHC class I complexes, we next assessed the nature of the peptides presented by MHC class I when ACE expression changes. One means of detecting differences in the MHC class I peptide repertoire is by cross immunization 23 . Specifically, we immunized Ace +/+ female mice with peritoneal macrophages from male Ace +/+ or male ACE 10 mice. The advantage of the gender disparity is that it allows measurement of the female CD8 + T cell responses to the HY Smcy and Uty peptides, the Y-chromosome-specific D b epitopes 24 . At 10 d, the females were sacrificed and splenocytes were expanded in culture with cells equivalent to those used to immunize the animals (booster). After a 7-day culture, a further restimulation consisted of APCs (female Ace +/+ macrophages) loaded with either the Smcy or Uty peptides. Five hours later we measured the peptide-specific IFN-γ-secreting CD8 + T cells by intracellular staining and flow cytometry (Fig. 4a). There was a 2-fold increase for Smcy and a 3-fold increase for Uty in the CD8 + T cell response from the female mice immunized with ACE 10 cells, as compared to those immunized with Ace +/+ cells. These data would be consistent with a difference in the numbers of D b -presented Uty or Smcy peptides on male Ace +/+ and ACE 10 immunizing macrophages. Indeed, when we examined surface Uty presentation using the Uty-specific CTL clone CTL-10 (25), there was a significant increase of D b -Uty presentation on ACE 10 macrophages compared to Ace +/+ macrophages (Supplementary Fig. 5).

When the splenocytes from the immunized Ace +/+ female mice were restimulated, not with peptides, but with male macrophages equivalent to those used in the original immunization, there was again a higher response in the females immunized, expanded and restimulated with male cells from ACE 10 mice than from Ace +/+ mice. However, when both the boost and the restimulation were with female macrophages from either Ace +/+ or ACE 10 mice, there was no CD8 + T cell response. These data suggest that ACE overexpression in the ACE 10 cells changes (and in this case increases) the presentation of the Uty and Smcy epitopes, but that it does not create any novel epitopes, as indicated by the lack of response of cells from the immunized female Ace +/+ mice to restimulation with female ACE 10 cells. Using a similar approach, we next investigated self-antigen presentation by Ace −/− cells.
Now, the female Ace +/+ mice immunized with male Ace −/− cells showed a CD8 + T cell response to the Uty and Smcy peptides that was only 50% that of female Ace +/+ mice immunized with male Ace +/+ cells (Fig. 4b). However, when restimulation was performed with male macrophages, there was a much stronger response in the female Ace +/+ mice immunized with Ace −/− cells (29% IFN-γ + ) as compared to those immunized with Ace +/+ cells (4% IFN-γ + ). Further, when female Ace +/+ mice were immunized with male Ace −/− cells, but expanded and restimulated with female Ace −/− cells, there was also a very robust CD8 + T cell response (23% IFN-γ + ), which was not seen in female Ace +/+ mice immunized with male Ace +/+ cells and then expanded and restimulated with female Ace +/+ cells (1% IFN-γ + ). A similar result was obtained when, instead of macrophages, the immunizing and stimulating cells were total splenocytes (Supplementary Fig. 6a). That female Ace −/− cells elicit such a strong response from female Ace +/+ mice implies that the Ace −/− cells present a surface repertoire containing some novel peptides. Supporting this argument, the final yield of CD8 + T cells from Ace +/+ mice immunized and boosted with Ace −/− cells was substantially more than from equivalent mice treated with Ace +/+ cells (Supplementary Fig. 6a,b).

A potential concern with the above data is that, while the Ace −/− mice are highly inbred to C57BL/6, it is conceivable that the cross immunization response may be directed at different minor histocompatibility loci. To test this, we asked whether, in vivo, ACE inhibitor treatment would change the class I peptide repertoire of Ace +/+ cells. Specifically, Ace +/+ mice were immunized and their splenocytes were boosted with Ace −/− macrophages from mice of the same gender as the recipients. However, for the final restimulation step, we used either Ace +/+ cells from syngeneic mice or Ace +/+ cells from syngeneic mice treated with ramipril for 6 d (Supplementary Fig. 7). Although less robust than the response to Ace −/− cells, the cells from Ace +/+ mice treated with ramipril activated a very significant population of CD8 + T cells, which was not observed when using cells from untreated Ace +/+ mice (9.1% vs. 1.4%). Thus, pharmacologic inhibition of ACE activity caused a significant shift in the presented class I peptide repertoire of syngeneic cells, mimicking the Ace −/− phenotype.

To further investigate whether peptides presented by MHC class I proteins are responsible for the cross immunization results, we examined whether the CD8 + T cell response of Ace +/+ mice to the Ace −/− cells could be inhibited when the restimulating Ace −/− cells were pre-blocked with an antibody specific to H-2D b . As shown in Supplementary Fig. 7, the antibody (clone B22.249) inhibited this response by almost 60%. We assume a significant part of the remaining activity was due to K b presentation, which was not blocked in this experiment. Thus, the magnitude of the effect seen with anti-D b complements the pharmacologic approach in showing that the immune response is not directed towards minor histocompatibility loci but towards antigens presented by MHC class I molecules.

To further investigate whether Ace −/− mice express all peptide epitopes present in wild-type mice or whether some wild-type epitopes are lost, Ace −/− mice were immunized, boosted and restimulated with macrophages from either Ace +/+ or Ace −/− mice (Fig. 4c). In all instances, immunizing cells were the same gender as recipients.
While Ace −/− mice immunized with Ace −/− cells showed essentially no response (about 1% IFN-γ + CD8 + T cells), equivalent mice immunized with Ace +/+ cells showed a very strong response (22% IFN-γ + ). Also, the final yield of CD8 + T cells from Ace −/− mice was substantially more when immunized and boosted with Ace +/+ macrophages, as compared to those with Ace −/− cells (Supplementary Fig. 6c). As all mice are on a C57BL/6 background, these data support the conclusion that the surface peptide repertoires of Ace +/+ and Ace −/− cells are different from each other.

Another way to assess presentation of peptides is to examine T cell receptor (TCR) diversity. If the presence or the absence of ACE is associated with different processing of peptide antigens, this may be reflected by differences in TCR V β usage. To examine this, we measured the CD8 + T cell V β repertoire using splenocytes isolated from Ace +/+ and Ace −/− mice. We also examined V β usage after acute mouse polyomavirus (PyV) infection (Supplementary Fig. 8), since we previously found that ACE has different effects on the expression of PyV Large T antigen versus Middle T antigen 14 . Differences were found in V β chain usage both before and 8 d after PyV infection between Ace +/+ and Ace −/− mice (Fig. 4d). For example, before infection, V β chain 3 is more frequent in CD8 + T cells from Ace +/+ mice than Ace −/− mice. However, after infection, this reverses, with Ace −/− mice expressing a higher percentage of V β chain 3. Differences in usage were noticed with V β chains 3, 7, 8.1/8.2, 9 and 10b.

ACE edits self-antigens

The general changes of MHC class I peptide repertoires between Ace +/+ and Ace −/− mice led us to examine their difference in the presentation of individual peptides. We thus measured a panel of minor histocompatibility (mH) antigens specifically expressed by C57BL/6 mice. The quantity of individual mH antigens was assessed by antigen-specific CTL clones. When this was performed using IFN-γ-primed fibroblasts from Ace +/+ and Ace −/− mice, we observed that the Ace −/− cells were profoundly deficient in presented H7 a and H3a a antigens, but expressed increased amounts of H47 a and H13 a (Fig. 5a). Consistent with this, when macrophages from ACE 10 mice were compared to equivalent Ace +/+ cells, ACE 10 cells presented more H7 a and H3a a , but less H47 a (Fig. 5b). Expression of H13 a was equivalent between ACE 10 and Ace +/+ . These results indicate that ACE can directly affect self-antigen presentation with positive or negative effects.

ACE affects the presentation of viral antigens

We also measured the impact of ACE on the presentation of polyomavirus-derived antigens. Eight days after the infection of Ace +/+ and Ace −/− mice, the splenic CD8 + T cell responses to three dominant or subdominant PyV antigens were measured (LT359-368 (D b ), MT246-253 (K b ) and LT638-646 (D b ); Fig. 6a) 26 . Compared to Ace +/+ littermates, Ace −/− mice showed reduced responses to the dominant LT359-368 and the subdominant MT246-253, but an increased response to the subdominant epitope LT638-646. To directly measure the presentation of viral antigens by APCs, we infected mice with PyV.OVA-I, which expresses the ovalbumin K b -dominant SIINFEKL within the sequence of the PyV Middle T antigen 27 . Three days post-infection, we fixed the splenocytes and measured the presentation of D b -LT359-368 by incubating with the available hybridoma HLT359 (Fig. 6b).
Splenocytes from PyV.OVA-I-infected Ace −/− mice elicited decreased responses, which is consistent with the readout of the LT359-368-specific T cell response. We also measured the surface K b -SIINFEKL expression on the splenocytes using the antibody 25-D1.16. Cells were co-stained with anti-CD11b and anti-CD11c to distinguish APCs (Fig. 6c). Not surprisingly, CD11c + DCs were more efficient in presenting SIINFEKL than CD11b + CD11c − myeloid cells. However, both populations of Ace −/− cells were less efficient in presenting SIINFEKL than their Ace +/+ counterparts. Thus, we conclude that ACE can also affect class I presentation of virally expressed antigens.

ACE works as a carboxyl dipeptidase on proteasome products

To understand the biochemical activity of ACE in the setting of antigen processing, we made use of SIINFEKL (SKL) as the model epitope (a scheme of the peptides used in these experiments is listed in Supplementary Table 1). When L.K b cells were transfected with an SKL minigene, ACE co-transfection did not affect the presentation of SKL (Fig. 7a). However, when the penultimate lysine of SKL was replaced with histidine, a residue more easily cleaved by ACE 28 , ACE co-transfection decreased SHL presentation, as tested by B3Z hybridoma cells 29 . ACE expression also significantly enhanced the presentation of SKL-TE, a C-terminal 2-mer-extended SKL precursor. Not surprisingly, the ACE-overexpressing ACE 10 macrophages were more efficient in processing the SKL-TE peptide than Ace +/+ macrophages (Fig. 7b and Supplementary Fig. 9a). C-terminal amide modification can block monopeptidase activity. However, ACE 10 macrophages processed SKL-TE-amide almost as efficiently as SKL-TE, while the ability to present the 1-mer-extended SKL-T was comparable between ACE 10 and Ace +/+ macrophages. These experiments verified that ACE works as a carboxyl dipeptidase in MHC class I antigen processing. ACE did not work as an aminopeptidase, because Ace −/− , Ace +/+ and ACE 10 macrophages were equivalent in the processing of the N-terminal-extended E-SKL and LE-SKL when these were nucleofected as minigenes (Supplementary Fig. 9b). Further, the use of N-terminal-extended SKL precursors showed that ACE expression does not appear to change the basic machinery of antigen presentation.

To directly assess the catalytic activity of ACE in processing SKL, SHL and C-terminal-extended precursors, we incubated the individual peptides with recombinant human ACE for 1 h and subjected the digestion products to liquid chromatography-mass spectrometry (LC-MS) analysis. ACE did not produce appreciable N-terminal-truncated fragments from any of these peptides. While the C-terminal 2 amino acids (2-mer) of SKL were only weakly removed by ACE in vitro, ACE degraded SHL to SIINFE much more efficiently (Kcat SHL /Kcat SKL = 3.98; Supplementary Fig. 10a). ACE could not digest SKL-T or SKL-TEWT, but it could remove the C-terminal 2-mer of SKL-TE and SKL-TEW (Supplementary Fig. 10b,c). For all peptides tested, ACE did not remove the C-terminal 1-mer, 3-mer or 4-mer. These data are consistent with what we observed in the in vivo peptide processing assays presented above and indicate that ACE is a carboxyl dipeptidase. To verify that ACE edits MHC class I peptides through its catalytic domains and can function in the ER, we generated mutated ACE cDNA constructs that were either functionally inactive or could not enter the ER.
The mACE protein, discussed previously, was made by mutating the two zinc-binding motifs of the ACE catalytic domains from HEMGH to KEMGK 30 . By flow cytometry analysis, the mACE protein appears to be expressed in transfected cells at levels equivalent to wild-type ACE (Supplementary Fig. 11a). ACE, but not mACE, enhanced the presentation of SKL from SKL-TE in L.K b cells (Fig. 7c). Secondly, we made a truncated ACE (ΔACE) by removing the first 29 amino acids, which constitute the hydrophobic signal peptide 13 . ΔACE is catalytically active because, once transfected, the protein helps L.K b cells to process SKL-TE (Supplementary Fig. 11b). However, ACE, but not ΔACE, can enhance the presentation of ER-directed SKL-TE in TAP-deficient RMA-S cells (Fig. 7d).

It is the proteasome that makes the bulk of the raw peptide pool for MHC class I presentation. However, some proteinases have been shown to expose antigenic peptides in a proteasome-independent manner 11 . We thus studied the relative contribution of the proteasome and of ACE to the processing of either ovalbumin protein or SKL-TE. When both ovalbumin and ACE were expressed in L.K b cells, ACE increased the presentation of SKL (Fig. 7e). However, when proteasome activity was blocked with the inhibitor epoxomicin, SKL presentation from ovalbumin was totally abrogated, whether or not ACE was present. In contrast, when SKL-TE was expressed in L.K b cells, ACE co-expression enhanced SKL presentation, and this was reduced, but not eliminated, with epoxomicin, consistent with published data showing that the proteasome can cleave the C-termini of extended peptide precursors 9 . These data suggest that ACE trims peptides produced first by proteasome activity.

DISCUSSION

Here, we show that the surface MHC class I peptides presented by ACE-deficient and wild-type cells are different. This conclusion is supported by cross immunization studies, analysis of TCR V β usage, expression patterns of minor histocompatibility antigens and the examination of viral antigen presentation. A role for ACE in processing MHC class I peptides was shown in macrophages, DCs and fibroblasts. These data suggest that, under physiologic conditions, ACE actively participates in the intracellular peptide processing for MHC class I presentation. Further, ACE serves a unique antigen processing function in mice, since other enzymes apparently cannot compensate for the absence of ACE.

Previous studies have examined C-terminal cleavage of MHC class I peptide precursors when proteasome activity was inhibited 9,31,32,33 . These studies led to the belief that only the proteasome affects the C-termini of MHC class I peptides and thus argued against the involvement of carboxypeptidases. However, in light of a recent publication 11 and now the analysis presented here, this idea seems untenable. While we are not certain why previous studies did not detect carboxypeptidase activity, one possibility is that, unlike ERAP, ACE expression may be extremely low in the cell lines originally used to study this question. For example, L cells (a common cell line used in such studies) have 2 logs less ACE mRNA than macrophages and DCs (data not shown). These experiments also neglected the possibility of an inducible carboxypeptidase in professional APCs, especially in the setting of inflammation.

The predominant action of ACE is to cleave C-terminal dipeptides from oligopeptide substrates. ACE has wide substrate specificity.
Some studies have shown that ACE can even remove C-terminal tripeptides (from substance P) and cleave N-terminal tripeptides (from luteinizing hormone-releasing hormone) 34 . However, the in vivo significance of these unusual ACE catalytic activities is not known. What is clear is that ACE can cleave peptides ranging from three amino acids (HHL) 13 to over 40 amino acids (amyloid β peptide) 35 . Because of this property, ACE may not work as a "molecular ruler" in editing peptides for MHC class I loading, as is postulated for ERAP 36 .

We tested the correlation between ACE expression and surface MHC class I expression, and found that out of 7 MHC class I molecules, 4 (K b , D b , K d and D k ) showed a significant inverse correlation with ACE abundance. Expression of MHC class I on the cell surface depends on the assembly of peptide-MHC class I complexes in the ER and their stability. In ACE-deficient cells, the augmented K b and D b expression was probably due to a more abundant peptide supply rather than to a slower dissociation rate of peptide-MHC class I complexes. This inverse correlation between ACE and peptide supply is not seen when studying ERAP or TAP. The unusual effect of ACE may result from the mechanism of TAP peptide selection. Sequencing of MHC class I peptide motifs revealed that the C-terminus is usually the conserved anchor residue 5 . TAP is selective in the substrates it transports from the cytosol to the ER 37,38 . Mouse TAP, in particular, prefers peptides with hydrophobic residues at their C-termini 39,40 , which in most cases matches class I specificity. Thus, when an ER carboxypeptidase exists, it may be more destructive than constructive in shaping the quantity of peptides suitable for MHC class I presentation. Nonetheless, for any individual epitope, ACE action may increase, decrease or presumably not affect expression.

In our studies, ACE-deficient mice immunized with wild-type cells produced a robust CD8 + T cell response to antigens found on wild-type cells. This strongly indicates that a set of wild-type epitopes relies on ACE for exposure. Indeed, the induction of ACE in APCs by either IFN-γ or L. monocytogenes implies a physiologic advantage during immunologic challenge. Given the exquisite sensitivity of T cells in detecting presented epitopes, the advantage of increasing peptide diversity may outweigh the disadvantage of a somewhat reduced number of particular epitopes. Finally, we note that either ACE depletion or ACE over-expression changes immunogenicity. Millions of patients take ACE inhibitors. Whether the concentrations of these drugs are sufficient to have any effects on surface class I levels or class I peptide presentation will be important to study.

Mice

Ace −/− mice and ACE 10 mice were described previously 21,22 . Both lines were backcrossed to C57BL/6 for 10 or more generations. To make Pd mice, ACE cDNA was cloned into pBS, an expression vector that was previously modified to contain the 5.3 kb CD11c promoter 41 (generous gift of C. Ried, University of Munich). C57BL/6 transgenic Pd mice were generated by the core facility of Emory University. C57BL/6 and BALB/c mice were purchased from Jackson Laboratory. Some mice were treated with ramipril (8 mg/kg/day; Sigma) in water. Procedures and animal experiments were approved by the Institutional Animal Care and Use Committee at Cedars-Sinai Medical Center.

Cells

TPMs were prepared as described 14 .
Fresh bone marrow was cultured with either M-CSF or GM-CSF and IL-4 for 7 d to expand bone marrow-derived Mφ or DCs, respectively. Splenic DCs were purified using CD11c + magnetic beads (Miltenyi Biotec). Monocytes were sorted from bone marrow. Skin-derived fibroblasts were prepared by incubating minced tissue twice in a buffer consisting of collagenase (0.3 mg/ml), trypsin (0.05%) and EDTA (0.53 mM) at 37°C for 45 min. Single cells were prepared and resuspended in MEM medium supplemented with 10% FBS and cultured under standard conditions.

Antibodies

The following antibody clones were used: AF6-88. (anti-IFN-γ). All the antibodies above were purchased from either eBioscience, BioLegend or Pharmingen. The H-2D b -specific antibody clone B22-249 is from Cedarlane. The polyclonal rabbit anti-ACE antibody was described previously 42 . The 25-D1.16 antibody was made by R. Germain (U.S. National Institutes of Health). The FITC-conjugated mouse TCR Vβ Screening Panel kit was from BD Pharmingen. It was co-stained with PE-conjugated anti-CD8a. The stained samples were analyzed on a Beckman Coulter CyAn ADP and data were analyzed with FlowJo software.

Real-time quantitative PCR

ACE expression was quantified by RT-PCR using QuantiTect SYBR Green dye (Qiagen). DNA amplification was carried out using an iCycler (Bio-Rad). We used the ACE primer set 5′-TAGGCTGCCTCCTTTATGTG-3′ and 5′-GTGGTCACGGATTAATGCTC-3′. ACE and GAPDH primer sets were synthesized by Invitrogen. The relative quantities of ACE as compared to the internal control, GAPDH, were calculated and an amplification plot with fluorescence signal vs. cycle number was drawn.

MHC class I supply and surface stability

The rate of MHC class I surface expression was measured by monitoring MHC class I expression after incubating cells in ice-cold acid stripping buffer (0.131 M sodium citrate, 0.066 M sodium phosphate and 1% BSA, pH 3) for 2 min. The cell suspension was neutralized by adding a 10-fold volume of cold IMEM medium and washed two times. Cells were then cultured in IMEM supplemented with 10 mM HEPES pH 7.4, 0.5 mM sodium pyruvate and β-mercaptoethanol for recovery. For measurements of the dissociation of surface peptide-MHC class I complexes, cells were cultured in the same medium used for acid stripping recovery but containing 7 μg/ml brefeldin A (eBioscience). We also established parallel cell cultures without any treatment as the control maximum (100%) for each time point. At different time points, cells were stained with FITC-conjugated anti-H-2K b and PE-conjugated anti-H-2D b antibodies, as well as 7-AAD, and subjected to flow cytometry analysis.

Immunization and CD8 + T cell response

Female mice of one strain were immunized intraperitoneally with 1 × 10 7 male TPMs from another strain or, as a control, from mice of the same strain. Ten days after immunization, 2.4 × 10 7 splenocytes from the recipients were expanded and boosted in one well of a 6-well plate with 2 × 10 6 Mφ from either male or female mice of the strain used as the immunizer. The cultures included 20 U/ml of IL-2 (eBioscience). After 7 d, dead cells were removed and lymphocytes were purified by Ficoll fractionation (GE Healthcare). The harvested lymphocytes were counted and anti-CD8 staining was used with flow cytometry to estimate the yield of CD8 + T cells. In the presence of brefeldin A, the CD8 + T cell responses of the immunized mice were measured by flow cytometry for inducible IFN-γ production after a 5 h exposure to the indicated peptides or cells.
For testing the CD8 + T cell responses to HY peptides, 5 μM of the relevant HY peptide and Mφ from female Ace +/+ mice (as APCs) were coincubated. When using splenocytes as the immunizer, 2 × 10 7 splenocytes were injected intraperitoneally. Ten days later, the splenocytes from the immunized mice were expanded as above, but with 1 × 10 7 irradiated splenocytes from mice of the same genotype used as the immunizer and 50 U/ml of IL-2. After 5 d, the cultures were reboosted with freshly irradiated splenocytes and 20 U/ml of IL-2 for another 7 d. Then, LPS-stimulated, CD8 + T cell-depleted splenocytes were used as the restimulators for testing inducible CD8 + T cell IFN-γ production. For inhibition of presentation by H-2D b , the Ace −/− cells used for restimulation were pre-incubated for 20 min at 4 °C with antibody B22.249 (23) before being cultured together with Ficoll-purified CD8 + T cells.

Cell transfection

ACE, ovalbumin and GFP expression constructs and peptide minigenes were made as described 15 . L929 and L.K b cells were transiently transfected with FuGENE HD (Roche); A20 cells were transfected with DEAE-dextran (Millipore). For testing surface MHC class I changes after ACE expression, cells were co-transfected with either the ACE-expressing construct or the empty vector and a GFP-expressing plasmid to allow identification of transfected cells. Mφ were transfected with a nucleofector (Lonza). Cells were harvested 24-48 h later for further assays.

Antigen presentation assays

To evaluate the presentation of mH antigens, Mφ or IFN-γ-primed fibroblasts were coincubated with mH antigen-specific CTL clones (kind gifts from D. Roopenian, the Jackson Laboratory). Cells were co-cultured for 1 h, after which 7 μg/ml brefeldin A was added and cells were incubated for an additional 5 h. Cells were then surface stained with anti-CD8 antibody, followed by IFN-γ intracellular staining (eBioscience) and flow cytometry analysis. For testing ACE in processing ovalbumin-related antigens, DNA constructs were transfected into L.K b cells. In some of the experiments, 1 μM epoxomicin (Cayman Chemical) was added 8 h after transfection. After 24 h, cells were washed, fixed with paraformaldehyde and neutralized with glycine. After further washing, B3Z hybridoma cells were seeded. Supernatants were collected 20 h later for IL-2 ELISA (eBioscience).

Listeria and polyoma virus infection

Listeria monocytogenes strain EGD was a gift from R. Ahmed (Emory University). 3 × 10 4 CFU were injected intravenously and, after 48 h, mice were sacrificed and a splenic single-cell suspension was prepared for ACE and cell subpopulation staining. Polyomavirus (PyV) strain A2 and PyV.OVA-I were described previously (38). PyV.OVA-I was generated by inserting the SIINFEKL coding sequence into the gene of Middle T antigen. For testing TCR Vβ usage, 2 × 10 6 PFU of PyV were injected intraperitoneally (i.p.) and, after 8 d, splenocytes were prepared. For testing CD8 + T cell responses to PyV antigens, 1 × 10 5 PFU of PyV were injected i.p. Eight days later, splenocytes from the infected mice were restimulated with individual PyV peptides and IFN-γ + CD8 + T cells were quantitated. For examining the presentation of viral antigens on APCs, 1 × 10 6 PFU of PyV.OVA-I were injected i.p. and 3 days later, splenocytes were harvested and fixed. 5 × 10 6 splenocytes were coincubated with 4 × 10 5 hybridoma HLT359 cells for 20 h and supernatant IL-2 was evaluated by ELISA.
Some of the splenocytes were directly stained with antibody 25-D1.16, anti-CD11b and anti-CD11c, and SIINFEKL presentation was examined by flow cytometry.

Peptides

Smcy (KCSRNRQYL) and Uty (WMHHNMDLI) peptides were purchased from AnaSpec. All other peptides were synthesized and HPLC-purified by LifeTein or Peptide 2.0.

Peptide digestion and mass spectrometry

SKL-related peptides were incubated with 0.01 unit of recombinant human ACE (EMD Chemicals) for 0.5 h or 1 h in 50 μl reaction buffer (75 mM NaCl, 1 μM ZnCl 2 and 12.5 mM Tris-HCl, pH 7.4) at 37 °C. One unit of ACE is defined as the amount of enzyme that will cleave 1 nmol of MCA-RPPGFSAFK(DNP)-OH per min. The reaction was stopped by adding 10 μM EDTA and moving the samples onto ice. Liquid chromatography-mass spectrometry (LC-MS) experiments were performed by the UCLA Molecular Instrumentation Center. Briefly, LC-MS was performed with a Waters Acquity UPLC connected to a Waters LCT-Premier XE Time of Flight instrument controlled by MassLynx 4.1 software. The mass spectrometer was equipped with a Multi-Mode Source operated in the electrospray mode. Peptide samples were separated using an Acquity BEH C18 1.7 μm column (2.1 × 50 mm) and were eluted with a gradient of 2-50% solvent B over 10 min (solvent A: water, solvent B: acetonitrile, both with 0.2% formic acid (vol/vol)). Mass spectra were recorded over a mass range of 300-2,000 daltons.

Statistics

P values were generated using a two-tailed Student's t-test.

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.

Figure 7. ACE works as a carboxyl dipeptidase on proteasome products. (a) L.K b cells were transfected with minigenes expressing the indicated peptides. Cells were also co-transfected with an ACE-expressing construct or an empty vector. The efficiency of antigen presentation was tested by measuring the IL-2 expression of co-incubated B3Z hybridoma cells. Data are pooled from 4 independent experiments. (b) Mφ from Ace +/+ and ACE 10 mice were pulsed with the indicated peptides. SKL presentation was evaluated using the antibody 25-D1.16. Data (ΔMFI) were calculated as the raw MFI minus the background MFI, which was determined by staining the naïve cells. Data are pooled from 3 independent experiments. (c) L.K b cells were transfected with the SKL-TE minigene. Cells were co-transfected with an empty vector or a construct expressing wild-type ACE or catalytic-domain-mutated ACE (mACE). SKL presentation was evaluated 24 h later. (d) TAP-deficient RMA-S cells were nucleofected with a minigene expressing the ER-directed SKL-TE 14 . Cells were co-transfected with an empty vector or the construct expressing wild-type ACE or N-terminal signal-truncated ACE (ΔACE). SKL presentation was evaluated. The histograms are representative of 3 independent experiments for c,d. (e) L.K b cells were co-transfected with constructs expressing ovalbumin or SKL-TE, and an empty vector or an ACE-expressing construct. Some cells were treated with the proteasome inhibitor epoxomicin (Epox.) after transfection. SKL presentation was examined by B3Z cell IL-2 expression. ND indicates not detectable. Data are pooled from at least three independent experiments; *P<0.02; **P<0.005.
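As an illustration of the simple summary statistics used above, the sketch below shows one way to compute background-subtracted ΔMFI values and to compare two groups with a two-tailed Student's t-test. This is not the authors' analysis code, and all numerical values are invented placeholders rather than data from the study.

```python
# Minimal sketch, assuming hypothetical MFI readings: delta-MFI is the raw MFI
# minus the background MFI (as in the figure legend above), and groups are
# compared with a two-tailed Student's t-test (as in the Statistics section).
from statistics import mean
from scipy import stats

def delta_mfi(raw_mfi, background_mfi):
    """Background-subtracted mean fluorescence intensity for one sample."""
    return raw_mfi - background_mfi

# Hypothetical per-experiment (raw, background) MFI pairs for two genotypes.
ace_wt = [delta_mfi(raw, bg) for raw, bg in [(520, 110), (540, 120), (500, 105)]]
ace_ko = [delta_mfi(raw, bg) for raw, bg in [(640, 115), (610, 118), (660, 112)]]

# scipy's independent-samples t-test is two-sided by default.
t_stat, p_value = stats.ttest_ind(ace_wt, ace_ko)
print(f"mean dMFI: WT={mean(ace_wt):.0f}, KO={mean(ace_ko):.0f}, P={p_value:.3f}")
```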
Farm succession and retirement: Some international comparisons

The increasing age of farmers and the reluctance to transfer management from the owning generation to the successor generation have been well documented by several studies. In this article we review the literature relating to the succession of farm businesses. Drawing on data from the international FARMTRANSFERS project, we explore attitudes toward retirement and also rates and patterns of succession in several contrasting countries and states in the United States. Lastly, we discuss the implications of the research and provide recommendations for public policies that would enhance the opportunities for successors to succeed in the continuation of the farm family business.

Introduction and Literature Review

As with many family businesses, often one of the prime objectives of farm families is to pass on control of a sound and improved business to the next generation (Gasson and Errington, 1993). Despite declining numbers of farms in many parts of the western world, coupled with the expansion of corporate farming, family farming remains of totemic importance. Intergenerational succession represents the renewal of the family farm and can potentially act as a helpful corrective to the increasingly aged population of principal farmers. In contrast to many other professions in contemporary society, farming remains a largely inherited occupation and one in which the transfer of business control and ownership to the next generation is arguably one of the most critical stages in the development of the business (Uchiyama, Lobley, Errington, and Yanagimura, 2008). Moreover, evidence suggests that rates of intergenerational succession are much higher in farming than in other self-employed occupations (Laband and Lentz, 1983). And, in the case of the family farm, intergenerational succession tends to also be intrafamilial succession.
For instance, in the United Kingdom families are responsible for most farms and much of the farmed land.A survey of 255 farmers in six areas of England found that 84 percent operated "established family farms" (that is, farms run by operators who are at least the second generation of their family to be farming the same farm or nearby farm), and were responsible for managing 86 percent of the area covered by the survey (Lobley, Errington, McGeorge, Millard, and Potter, 2002).Sometimes, family occupancy of the farm or local farmland was extremely lengthy: 31 percent of established family farmers could trace their family's occupancy of the farm to 1900 or earlier.The main entry route into farming in England remains intergenerational transfer within a family (ADAS, 2004;Lobley, et al., 2002).Similarly, in Australia, despite falling rates of succession, some 94 percent of farms are family-owned and -operated.Many farmers can trace their family's occupation of the farm back three generations or more, and there is evidence of a strong "rural ideology" that prioritizes passing on the farm within the family (Barclay, Foskey, and Reeve, 2005). Patterns of ownership in the United States are similar to those found in the UK and Australia.In the U.S. over 98 percent of all farms are family farms, and those farm families own 93.5 percent of all farmland (Hoppe and Banker, 2006).In Iowa, the average length of ownership of family farms was 83 years (Korsching, Lasley, and Gruber, 2007).Farmers in the eastern United States may, in some cases, trace their ownership to the early 17th century; an example is the Shirley Plantation in Virginia, which was established in 1613 (Clay, 2006).In the western United States, farm family ownership of land may be more recent in origin.Indeed, the last homestead land patent was granted in Alaska in 1988(National Park Service, 2007). The intergenerational and intrafamilial transfer of farms can be a source of great strength.In most cases the successor is a child of the manager, and in addition to physical assets, intangible assets (e.g., tacit knowledge) are transferred to the new business principal (Uchiyama, et al., 2008).The highly detailed and locally specific knowledge associated with successful intergenerational transfers can prove vital for effective agricultural and environmental management and can engender a sense of intergenerational accountability (Burton, Mansfield, Schwarz, Brown, and Convery, 2005).The source of such strength can also be a source of problems, however, not least of which is the potential for conflict between the generations, avoidance of discussing the issues (Barclay, et al., 2005;Symes, 1990) and sometimes the treatment of a successor as a "farmer's boy" (Gasson and Errington,1993).In the latter case, a successor is essentially treated as a hired worker, given little opportunity to develop the managerial skills needed to operate the family business, and kept in place by the promise that the eventual reward will be ownership of the family farm (Lobley, 2010). 
Succession is not a single event but is (or should be) a process that takes place over an extended period of time. Succession is the process of transferring the management of business assets. This may involve the transfer of the management of the "home farm" to a successor (or multiple successors), or it may involve the transfer of the necessary capital to establish a new farm business. Accordingly, it is possible to distinguish between succession to the farm and succession to the occupation of farming. In addition to succeeding to the farm and/or the occupation, the successor also benefits from the transfer of skills and, frequently, less tangible assets such as a detailed knowledge of the home farm, its microclimate and its idiosyncrasies (Errington and Lobley, 2002).

The mirror image of succession is retirement. Just as succession is a process rather than a single event, retirement from farming is not a single act or event but a series of transitions (Rosenblatt and Anderson, 1981). The self-employed generally face a greater range of opportunities in terms of the balance between their time devoted to work and time devoted to other activities, and in the case of farming, in particular, the term "retirement" can cover a wide range of situations. At one extreme, it can refer to the process of selling and leaving farming altogether. Frequently, however, it may involve withdrawal from some of the more arduous tasks alongside a continuing day-to-day involvement in the business. For some, full retirement is achieved by selling, moving away from the farm, and no longer relying on a farm to produce retirement income. For others, a pathway of semiretirement with retirement income that is to some extent dependent on farm income may, after a series of transitions, eventually lead to full retirement and a move out of the farmhouse or even off the farm entirely. Finally, inheritance denotes the legal transfer of ownership of business assets.1 Whilst conceptually separate, these processes are obviously linked, and the timing and degree of ease of the process can have considerable implications for the farm business as well as the individuals involved in that business.

The twin processes of succession and retirement can be a source of considerable financial and emotional stress for farm households (Burton and Walford, 2005). In addition, evidence from the U.S. and Europe suggests that farm business performance and farm development can be influenced by succession issues (e.g., Calus, Huylenbroeck, and Lierde, 2008; Mishra and El-Osta, 2008; Potter and Lobley, 1992; Boehlje and Eidman, 1984; Harl, 1972). Such influences can operate in a number of ways. For instance, the "succession effect" (Potter and Lobley, 1996) refers to the impact of the expectation of succession on the farm business. Evidence suggests that farms may be developed over a long period, in order to provide a business capable of supporting two generations or to yield sufficient capital to establish successors on separate holdings. For instance, Calus, et al. (2008) found that the value of total farm assets was significantly higher on Belgian farms where a successor was present. Similarly, using data from the 2001 Agricultural Resource Management Survey, Mishra and El-Osta (2008) identified a positive association between farm capital stock and succession decisions on U.S. farms.
2 The succession effect can be reinforced by the "successor effect" (Potter and Lobley, 1996), that is, the impact of the successors themselves, as they gradually (or sometimes rapidly) assume managerial control.Successors often return from a period of agricultural training with new ideas and an innovative approach to the business.The extent of their impact will be influenced by how rapidly they ascend the "succession ladder" (Errington and Lobley, 2002). Finally, the "retirement effect" (Potter and Lobley 1996) can be identified toward the end of a farmer's career and is most pronounced where succession has been ruled out.In these cases farm operators frequently disengage or even withdraw from agriculture, by downsizing to reduce workload, letting or selling land, and frequently farming their remaining land less intensively.In some instances, these farmers can be regarded as "capital consumers" (Lobley and Potter, 2004), progressively liquidating farm assets to provide an income as part of a gradual process of leaving farming.For example, evidence from Belgium indicates that older farmers without successors begin to disinvest and that total asset values can decline toward liquidation levels (Calus, et al., 2008).In Ireland, Symes found that farms lacking a successor were less likely to be managed intensively, and that "the production cycle declines closer to a subsistence mode in old age than at any other point in the life cycle" (Symes, 1973, p. 101). Given that farm succession and farm business development influence each other, the process of succession has implications for the social and economic sustainability of the family farm and the economy and community in which it operates.Clearly succession is, or should be, of importance to policymakers, given the evidence that the process has a considerable influence on farmer behavior.In addition, since facilitating the timely transfer of farm businesses is an explicit objective of many policy initiatives, it is important that policymakers understand the processes of intergenerational transfer.For farm advisers, a fuller understanding of the process of succession is important because at the very time when members of the new generation are seeking to improve productivity or business viability through investment, members of the older generation may be engaged in disinvestment to provide for their retirement.This is particularly likely where no separate pension provision has been made and the farm business itself is expected to provide retirement funds.Thus, advisers need to consider how to maintain a viable business for the next generation, whilst minimizing the financial and emotional stress increasingly associated with the pursuit of this goal.Against this background, this paper compares rates and patterns of succession in the U.S. (in the states of Iowa, Virginia, North Carolina, Pennsylvania and New Jersey), Canada, England and Australia.It identifies and compares plans for retirement and the financing of retirement in these countries.It also explores similarities and differences in routes to succession before going on to consider some implications for policy. Applied Research Methods This paper draws on both published and unpublished data from the FARMTRANSFERS project, a series of international comparative studies replicating an original survey by Errington and Tranter (1991).This international collaboration was initiated by the late Professor Andrew Errington of The University of Plymouth and John R. 
Baker of the Beginning Farmer Center, Iowa State University. The project is based on a survey questionnaire originally developed by Professor Errington and subsequently replicated in a number of different countries (see table 1) to provide a standard set of data to be added to the FARMTRANSFERS database. FARMTRANSFERS is currently directed by John Baker, Matt Lobley (University of Exeter, UK) and Ian Whitehead (University of Plymouth, UK). To date, over 15,600 farmers have completed the copyrighted FARMTRANSFERS questionnaire. The details of the survey in several countries have been noted in other papers (such as Uchiyama, et al., 2008; Barclay, et al., 2005; Errington, 1998; Errington and Lobley, 2002; Baker, Duffy, and Lamberti, 2001). Data are collected through a postal questionnaire covering basic background information about the farm (e.g., size, tenure, and enterprise structure) and farm family demographics (e.g., age and household composition). Detailed information is also recorded regarding retirement and succession plans, sources of advice and information, and the delegation of decisionmaking responsibility between the principal farmer and his or her successor(s). Given the wide range of social, cultural, and economic differences in the different countries and U.S. states participating in FARMTRANSFERS, modifications are made to the questionnaire to reflect such differences, with the agreement of the project directors. The questionnaires administered by country are referred to as "replications." It should be noted that the year of the survey and sample size for each country reported here are as follows: Iowa, 2006 (972); Pennsylvania and New Jersey, 2005 (1,271); North Carolina, 2005 (2,095); Australia, 2004 (790); England, 1997 (491); Ontario and Quebec, 1997 (1,277). (See table 1 for a list of all FARMTRANSFERS surveys between 1991 and 2010.) The individual replications of the survey reported here span close to a decade and the sample sizes vary considerably. However, these specific replications have been selected for analysis in order to illustrate the diverse range of socioeconomic and cultural contexts in which the survey has been conducted. Clearly, the FARMTRANSFERS methodology is not without its limitations, including the variation in survey year and the limitations of the standardized postal questionnaire format. Nevertheless, this approach yields a range of quantitative data relating to the pattern, process, and speed of succession and retirement, which provide a firm base for future in-depth inquiries. Moreover, it allows for an international comparison of the results, which is not possible using other data sets. As such, the data are invaluable in order to identify common elements of succession plans, determine educational needs of farm business owners, compare succession patterns internationally, and create a resource useful to farm business operators for future succession activities.

Rates of Succession

In terms of the rate of succession (i.e., the proportion of farmers with an identified successor), figure 1 provides some international comparisons and illustrates some notable differences. For instance, England has a higher rate of succession selection compared with Canada, Australia, and several U.S. states.
Indeed, Iowa, Virginia, Pennsylvania, New Jersey, and North Carolina all have much lower rates of succession. In addition, figure 1 shows that the number of daughter or daughter-in-law successors internationally is low. The identification of a successor depends, at least in part, on the age of the principal farmer. On average, respondents to the survey in England were older than their Canadian counterparts, which might explain some of the difference in rates of succession. However, farmers in the U.S. replications are noticeably older on average and yet have much lower rates of succession selection (see figure 1). Figure 2 explores in greater detail the association between the age of the principal farmer and the likelihood of having secured a successor. Generally, the younger the farmer, the lower the rate of expected succession, with Australia being an exception to this pattern. Data from England and Canada show that the expectation of succession increases noticeably with age. On average, succession rates in Iowa, Virginia, and North Carolina remain fairly low.

Delegation of Managerial Responsibilities

As previously discussed, a major objective of the international FARMTRANSFERS project is to examine the process of succession, or the process of transferring managerial control and other intangible assets, such as site- or farm-specific knowledge. In order to do this, respondents are asked to indicate if a number of specific decisions are made by the principal farmer alone, shared with the successor, or made by the successor alone. The tasks presented to the respondents represent technical, tactical, strategic planning, managerial, and financial aspects of the farm operation. Table 2 compares the international data on task delegation, where each decision was assigned a score ranging from 1 (farmers themselves are solely responsible) to 5 (successors are solely responsible). A score ranging from 2 to 4 represents shared responsibility between the farmer and successor. The results show that financial decisions are most likely to be made by the principal farmer without any help from the successor. The data also show that if successors are going to be solely responsible for a decision, that decision would most likely involve livestock management, and the selection, recruitment, and supervision of employees. With one or two exceptions, the types of decisions most frequently delegated to the successors and those not delegated to the successor are similar across international lines.

Notes to table 2: The numbers represent the rank order of decisionmaking authority retained by the older generation; 1 represents the activity most identified as retained solely by the older generation. One number may appear more than once for the same state or country, because some activities and decisions had the same percentages attributed to them. *The Pennsylvania, New Jersey, and North Carolina surveys differed from those represented by the data in table 2 above; therefore, not all activities and decisions have a rank score for Pennsylvania and New Jersey.

The Succession Ladder

The delegation of decisions and tasks can be referred to as the succession ladder, or a ladder of responsibility the successor will climb (Errington, 1998). The concept of the succession ladder is well established and was first identified empirically by Commins and Kelleher (1973) in Ireland. Subsequent work, for example in New Zealand (Keating and Little, 1991) and in the UK (Hastings, 1984; Errington and Tranter, 1991), provides further empirical support for "the existence of a ladder of responsibility which successors climb en route to the acquisition of full managerial control" (Gasson and Errington, 1993, p. 213). Hastings (1984) made a major contribution to understanding the different decision domains (e.g., technical, strategic) and the order in which a successor passes through each domain. One of the contributions of FARMTRANSFERS has been to demonstrate the existence of the succession ladder and the broadly similar order of individual "rungs" on the ladder in many different international contexts (e.g., Uchiyama, et al., 2008).

In this model, the first type of decisions delegated to the successor are technical decisions, those involving the type and level of production inputs, such as feed or fertilizers, along with the tactical decisions concerned with the day-to-day planning of the farm operation. The next decisions delegated are the strategic planning decisions, such as the mix and type of enterprises. Successors will then make decisions such as when to hire more employees, and the recruitment, selection, and supervision of employees. Further up the ladder of responsibility, successors are then responsible for financial decisions, such as negotiating sales of crops or livestock, and identifying sources of and negotiating loans and financing. Finally, successors are responsible for deciding when to pay bills. This is most likely to be one of the last areas of responsibility delegated to the successor (Errington, 1998). Such decisions, technical, tactical, strategic, and financial, are representative of rungs on the succession ladder.

Data from the international FARMTRANSFERS project show that France experiences a faster succession process than England, with Canada falling in the middle of the spectrum. Iowa has been found to have the slowest succession rate (Barclay, et al., 2005). Uchiyama, et al. (2008) found a relationship between the age of the successor and the amount of delegation. Specifically, as successors grow older, more tasks and decisions are delegated. However, while delegation of managerial authority increases evenly in Canada and Iowa, in England and Virginia the increase in delegation drops off after the age of 40 (Uchiyama, et al., 2008). An Australian study found that Australian farmers are more likely to delegate a greater amount of managerial responsibility than farmers in Iowa and England, and a lesser amount than farmers in Canada and France (Barclay, 2005). See Uchiyama, et al. (2008) for further analysis of the association between delegation and age of successor and principal farmer.
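The delegation scores discussed above lend themselves to a simple summary. The sketch below illustrates one plausible way to average responses scored from 1 (farmer solely responsible) to 5 (successor solely responsible) and to rank decisions by the authority retained by the older generation; the decisions and scores shown are invented for illustration and are not FARMTRANSFERS results.

```python
# Illustrative sketch, not the project's analysis: mean delegation score per
# decision, then a rank order of authority retained by the older generation.
from statistics import mean

# Hypothetical coded responses, one value per farm:
# 1 = principal farmer solely responsible ... 5 = successor solely responsible.
responses = {
    "deciding when to pay bills": [1, 1, 2, 1, 2],
    "negotiating sales of crops or livestock": [2, 1, 3, 2, 2],
    "livestock management": [4, 3, 5, 4, 3],
    "day-to-day planning of farm work": [3, 3, 4, 2, 3],
}

# Lower mean scores indicate authority retained by the older generation.
scores = {decision: mean(values) for decision, values in responses.items()}

# Rank decisions from most to least retained (1 = most retained).
ranked = sorted(scores.items(), key=lambda item: item[1])
for rank, (decision, score) in enumerate(ranked, start=1):
    print(f"{rank}. {decision}: mean score {score:.1f}")
```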
The Succession Process Previous studies have discussed the different routes that successors may take before taking over the farm operation (Uchiyama, et al., 2008).The two principal routes identified are: (1) the direct route, where successors go directly into farming after they leave school, and (2) the diversion route, where successors are employed in an off-farm job after leaving school and then return to the home farm operation at a later date.This is sometimes referred to as a professional detour (Gasson and Errington, 1993;Uchiyama, et al., 2008). The succession route followed is likely to be influenced by a number of factors, including the availability of alternative employment and cultural norms regarding the value of nonfarm work.Uchiyama and colleagues found that farm size is a predictor of succession route.Generally, farms that are larger provide more opportunity for the older and younger generations to work side by side. Those successors who are on the direct route to succession are more likely than successors on the diversion route to develop intangible assets such as managerial skills (Uchiyama et al., 2008).In addition, successors who are from smaller farming operations are more likely to be employed off the farm, except for those successors in England and Virginia. FARMTRANSFERS project results can be used to explore patterns of succession based on the successor's current farm activity and the degree of decisionmaking authority that he or she has (Errington and Lobley, 2002).Errington and Lobley identified two distinctions in the pattern of succession: the responsibility exercised by the successor in making decisions on the farm, and the extent to which he or she is able to run an autonomous enterprise (Errington and Lobley, 2002).They used this to empirically identify different types of successors previously conceived of as conceptual "ideal types" by Gasson and Errington (1993). The first category of successor is the Farmer's Boy, in which the successor has little or no responsibility for decisionmaking and provides mainly manual labor on the farm.This category is common in England, as demonstrated by the FARMTRANSFERS Surveys (e.g., Uchiyama, et al., 2008;Errington and Lobley, 2002).The second category is the Separate Enterprise, where the home farm operation is large enough to support a separate enterprise run by the successor.This category allows the successor to develop managerial skills and also allows for some financial autonomy (Gasson and Errington, 1993).The third category of successor is the Stand-By Holding, in which the successor is set up on a separate farm in order to develop his or her farming skills.Although the successor might share machinery or labor at some point, he or she still remains independent of the farmer.The last category of successor is Partnership.In a partnership, the farmer works with the successor and shares responsibility for decisionmaking.A formal partnership agreement may even be executed (Gasson and Errington, 1993). Successors in Canada and the U.S. are more likely to take a professional detour route -a nonfarm job right out of school before returning to the farm operation.Few U.S. successors run a stand by farm.English successors are more likely to be in the farmer's boy category for a longer period of time compared with their counterparts in the U.S. and Canada.English and Canadian successors are more likely to run a separate enterprise to develop farming skills necessary for farm operation (Lobley and Errington, 1998). 
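The successor typology described above rests on two dimensions: the decisionmaking responsibility exercised by the successor and the degree of enterprise autonomy. The sketch below shows one hypothetical way such survey responses could be mapped onto the four ideal types; the field names and the 0.5 cut-off are assumptions made here for illustration, not the operationalization used by Gasson and Errington (1993) or the FARMTRANSFERS project.

```python
# Illustrative classification sketch under assumed rules (not the authors' method).
from dataclasses import dataclass

@dataclass
class Successor:
    decision_share: float      # share of farm decisions made or shared by the successor (0-1)
    runs_own_enterprise: bool  # successor runs an autonomous enterprise on the home farm
    separate_holding: bool     # successor farms a separate (stand-by) holding

def ideal_type(s: Successor) -> str:
    if s.separate_holding:
        return "Stand-By Holding"
    if s.runs_own_enterprise:
        return "Separate Enterprise"
    if s.decision_share >= 0.5:  # assumed cut-off for shared management
        return "Partnership"
    return "Farmer's Boy"

# Example: a successor providing mainly manual labour with little say in decisions.
print(ideal_type(Successor(decision_share=0.1,
                           runs_own_enterprise=False,
                           separate_holding=False)))  # -> Farmer's Boy
```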
Retirement

Succession and retirement are intimately interlinked. The incorporation of a successor into the business can offer the principal farmer the opportunity to semiretire, while in equal measure, the unwillingness of a senior farmer to step back can hinder the succession process. Evidence from FARMTRANSFERS surveys indicates that farmers in Iowa, Virginia, and North Carolina are more likely to remain employed on the farm operation, are less likely to semiretire from farming, and indicate that they will never retire. Farmers in Australia, England, Ontario, and Quebec are more likely to experience semiretirement or full retirement from farming (see figure 3). The identification of a successor is associated with a path of semiretirement from farming, in that those farmers who have identified a successor are more likely to experience some form of semiretirement. This trend occurs regardless of nationality. The presence of a successor might make semiretirement a realistic option for farmers who may otherwise face a choice of continuing to work full-time or completely retiring. Interestingly, farmers are less likely to choose a form of semiretirement if their successors are employed off the farm (Uchiyama, et al., 2008).

Not only do retirement plans vary significantly across the FARMTRANSFERS replications being considered here, but so does the average age of planned retirement. As figure 4 indicates, farmers in the United States tend to plan to retire at an older age than their counterparts in Canada, France, and England. Australian farmers, however, indicated in a 2004 survey that their average age of retirement is 65, similar to U.S. farmers (Barclay, et al., 2005).

The ability to finance retirement is likely to be one of a number of factors influencing retirement plans. Figure 5 presents comparative data on anticipated sources of retirement income and illustrates some significant differences between FARMTRANSFERS replications. The two Canadian replications (Ontario and Quebec) are notable for the significance of the sale of farmland or other farm assets in order to fund retirement. Farmers in France, on the other hand, gain the largest proportion (48 percent) of their retirement income from social security payments, while farmers in England tend to gain a significant proportion of their retirement income from private pension provision.

The decision to retire and step back from a career that is often characterized as a "way of life," and one in which much of an individual's and family's social, cultural, and economic history and identity is conjoined, is not always an easy decision to reach. Advice on retirement planning can therefore be very important. Table 3 compares farmer respondents ages 50-59 across countries with respect to their discussions of retirement. Canadian and Iowan farmers are more likely to discuss their retirement plans with family members; however, farmers in England are less likely to do so. Previous studies have shown that retirement discussions with family members often increase after the identification of a successor (Uchiyama, et al., 2008), although this varies by location.

Policy Implications

This section begins with a brief review of contemporary challenges for agriculture. This provides the context in which to reflect on the place of family farms in addressing these challenges and the importance of timely and effective transfers of property and/or businesses in the farming industry.
Challenges for Agriculture at Global, Regional, and Local Levels

Arguably, the last two decades have presented the most significant challenges for agriculture in the post-war period. The focus of attention centers on the capacity of resources and practices in global agriculture to meet increasing demands for food, from rising populations and changing diets, along with a raft of other goods (e.g., bioenergy and industrial crops) and services (e.g., conservation and recreation), in the context of volatile commodity prices, diminishing nonrenewable resources, and climate change. Concurrent with such challenges, there is increasing evidence of continued degradation of the soil arising from continued unsustainable, intensive agricultural practices in areas of the world including Australia, the U.S., and the UK. A decade ago, the Policy Commission on the Future of Food and Farming in the UK (Policy Commission, 2002) warned of the unsustainability of commercial farming practices. More recently, the UK Department of Environment, Food and Rural Affairs (DEFRA) launched the country's first Food Security Assessment (DEFRA, 2009a), followed in close succession by the publication of DEFRA's vision for 2030, Safeguarding Our Soils: A Strategy for England (DEFRA, 2009b).

Similarly, interest in beginning farmers and farm succession planning has increased in the United States. The 2008 farm bill, part of the Food, Conservation, and Energy Act of 2008, established the Beginning Farmer/Rancher Development Program. The goal of the program is to enhance the food security of the United States by providing beginning farmers and ranchers and their families with the necessary knowledge and skills to make decisions concerning the future sustainable farming of their properties. The challenges to farming will vary geographically in nature and degree: in Eastern Europe, they will differ from those of Eastern Australia or the uplands of England. In a recent review of the challenges to rural land management, Hodge (2009, p. 652) states that "farm businesses need to develop their resilience in the face of greater exposure to the volatilities of world markets and reduced level of support under agricultural policy," as well as the uncertainties of climate change. Such "resilience" is the preserve of many family farms, and the arguments for this are familiar. Jones (1996, p. 197) refers to the importance of the "intimate coaxing style of management" of family farms and their advantages as "a long term institution protecting not only its economic base, but also its own place and surrounding." Continuity of management through close relationships between family members, the "sharing" of capital assets, and detailed knowledge of the farm resource all contribute to the strength of family farms. The successors of the future will have to be highly motivated, skilled in technical and business matters, and capable of pre-empting change and planning appropriate responses. Without this, the risk is that the cornerstone of agricultural business in these countries will fail to meet national and global expectations.

Impacts of Effective and Less Effective Succession

As an entry route to agriculture, succession can have a significant impact on the contribution of farming in terms of economic, environmental, and social benefits. It has been argued that "succession and the failure of succession can have a powerful influence on the development trajectory of a farm" (Lobley, 2010, p. 1).
Effectiveness can perhaps be measured first in terms of the presence of a successor to the business, and, second, in the timeliness and "smoothness" of the transfer of the business to that successor. As previously mentioned, the business and the industry as a whole can derive benefit from the so-called "succession effect," which arises from the early identification of a successor and leads to determined development of the business to a state where two generations can be supported. Similarly, previous discussion has also centered on the "successor effect," a renewed enthusiasm for the business as the parties begin to share managerial responsibilities. In challenging times, these two "effects" are clearly in the interests of efficient farming for the business and the country, providing perhaps the best model for succession. Clearly, there are policy implications in terms of providing favorable circumstances for the achievement of such effects and the benefits to be gained from them.

Where a successor has been identified, the sequential transfer of the "reins of the business" may be slower than optimal. This has been identified as the case in the latest survey for England (1997), as well as in Germany (2003), Austria (2003), and North Carolina (2005), where the "farmer's boy" category of successor is dominant. In policy terms, a high proportion of "farmer's boy" successors suggests a potential lack of wider farming knowledge, business and managerial skills, and the motivation required to drive the business forward in such uncertain times. Multiplied up, this may lead to farm businesses less well placed to adapt to and succeed in responding to the challenges of the future. Closely related to this is the barrier of low retirement rates in farming, identified in research conducted for DEFRA on Entry to and Exit from Farming in the UK (ADAS, 2004) and confirmed as an international feature of farm businesses earlier in this paper. For many, such a strong reluctance to retire is due to the decision to farm as a long-term lifestyle choice. However, other barriers may also exist, including inadequacy of pension provision and the lack of affordable housing for the retiree or the successor.

Of course, there may be other causes for the lack of a successor, and implications if that occurs. In some cases, farmers may simply not have had children. In others, the farmer's children may lose interest in the family business to the extent of losing any intention to succeed. This may be a product of "late" recognition of the need for, and discussion with, potential successors. FARMTRANSFERS survey findings indicate that successor age ranges between 40 and 60 years, with a wider range of ages at which the principal farmer identifies the successor. Without a clear successor, the business, the land, and the building complement stand to be transferred to an operator new to the farmland, whether retained as a whole unit or in separate lots. A time lag thus begins between takeover of this farm resource and its effective management, during which time obstacles may arise, financial and otherwise, to its continuing use as farmland. Where environmental objectives are important, such as nature conservation to protect particular habitats, this lag time could be particularly important and may result in unnoticed decline.

Finally, in terms of implications for wider society, commentators have expressed concern over the apparent aging of the farming community.
Although not commonly the focus of succession research, investigations are required into the impact of earlier succession on the relationships between farm and community and the potential for younger farmers and their families to contribute to rural development.

Conclusions

There is much to consider here for researchers, policymakers, farm business advisers, and farm business principals and prospective successors. In terms of research, there is a continuing need to develop a clearer understanding of the process of intergenerational transfer in countries across the globe. Obvious research gaps exist in space (geographical coverage) as well as in time (up-to-date evidence). Such deficiencies preclude the spread of good practice. On the question of retirement, qualitative research is needed to investigate the key influences over decisions in this regard. What scope is there to encourage planned retirement more broadly in the farming industry?

In terms of policy, consideration focuses on three areas: first, measures to assist with increasing the likelihood of succession, that is, the presence of a successor motivated to take over the oft-mentioned "reins of the business"; second, measures to encourage early identification of, and discussions with, the successor(s), to include the development of plans for "handing over the reins of the business"; and, third, measures designed to reduce the apparent barriers to retirement. As previously mentioned, replications within the FARMTRANSFERS project across a range of countries and states have provided evidence highlighting, perhaps not surprisingly, variations in some aspects of retirement and succession issues. The relevance of the three types of measures mentioned above will therefore also vary.

The attraction of agriculture as a career is crucial to the continued motivation of potential successors to take on the family farm. Student applications to agricultural colleges and universities have decreased dramatically in the last three decades in the UK, resulting in the reduction of post-school educational provision in agriculture as departments close across the country. To reverse this situation, a redoubling of effort is required to convey the message that sustainable agriculture has a key role to play in a future of global population growth (food security), pressures to reduce carbon emissions (waste management and renewable energy opportunities), and climate change. Rewarding career opportunities will continue to develop in these areas. Such messages need to be conveyed convincingly by government, educational institutions, and farming organizations. Resources should also be made available to deal with future increases in demand for training and education in what must be seen as a renaissance in the farming industry. The main objective here is to increase the potential for a heightened "successor effect" in farm businesses: the return of enthusiastic and well-trained young farmers to their family businesses.
As for the second focus of policy action, the FARMTRANSFERS project has uncovered variation in the age at which the principal farmer identifies a successor. In some countries, such as Australia, this is achieved earlier than in others. Late commitment to a successor can result in unprepared semiretirees or full retirees, unprepared successors, and unprepared businesses. Mere identification of a successor is not enough; this project has also seen variation in the rate of, and approach to, handing over the reins. Retirement offers opportunities not only for successors but also for retirees wishing to reduce their involvement physically, managerially, and financially over a period of time. A mutually agreed-upon retirement program can benefit all parties and the industry generally. In many other businesses, full retirement is the norm. In family farms, the knowledge and skills of the retiree are retained as a valuable asset to the business. A planned retirement program is therefore beneficial. Where appropriate, consideration should be given to funding for, or direct provision of, advice and training for farm business succession planning, through seminars, workshops, consultations, and publications, either directly with farming principals and prospective successors or via farm advisers. The main objectives here would be to increase the "succession effect" by encouraging early identification and discussion between parties and to reduce the likelihood of the "farmer's boy" model of successors, identified as typical in England.

Finally, this paper has confirmed the international significance of barriers to retirement in the industry. Again, these vary geographically and may include a combination of internally imposed issues and/or externally imposed constraints. Regarding the former, lack of motivation to retire is the product of a range of actual or perceived issues, which might include the importance of farming as "a way of life," including home and stock, the perception of a shortage of appropriate skills for other opportunities in retirement, and the reluctance to consider training to acquire new skills. In addition, lack of early planning may lead to inadequate pension provisions, causing the need for continued dependence on the farm business. Policy directions involving support for advice and "training for retirement," mentioned above, would be appropriate here.

In terms of externally imposed constraints, a lack of affordable housing in the locality may be a major problem. Retirees may prefer to remain in the vicinity of the family farm, and more flexible approaches to planning decisions may need to be considered. Financial constraints for the successor who is expected to take on some or all of the business assets could also delay decisionmaking. Improvement in the availability of loans on manageable terms, along with the review of grant provision to encourage successors to take over and develop their family businesses, could be appropriate, depending on prevailing "local" (state or national) circumstances. The international prominence of succession as the means of farm transfer should, alone, suggest the need for greater understanding and effort, to ensure that farm businesses have the best chance to remain (or become) strong and competitive, with the complement of assets to face the challenges of the future.

Figure 5. Anticipated sources of retirement income: some international comparisons
Table 2. International Comparison of Task Delegation Score
Table 3.
International retirement discussions
2018-12-29T16:05:33.616Z
2010-08-12T00:00:00.000
{ "year": 2010, "sha1": "e2b84a7b4e9eb2256c13b5b1f43cc4384c6f3fc5", "oa_license": null, "oa_url": "https://foodsystemsjournal.org/index.php/fsj/article/download/10/3", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e2b84a7b4e9eb2256c13b5b1f43cc4384c6f3fc5", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Economics", "Business" ], "extfieldsofstudy": [ "Economics" ] }
119322012
pes2o/s2orc
v3-fos-license
Families of Legendrian Submanifolds via Generating Families

We investigate families of Legendrian submanifolds of 1-jet spaces by developing and applying a theory of families of generating family homologies. This theory allows us to detect an infinite family of loops of Legendrian n-spheres embedded in the standard contact $(2n+1)$-space (for $n>1$) that are contractible in the smooth, but not Legendrian, categories.

Date: November 5, 2013. JS was partially supported by NSF grant DMS-0909273. MS was partially supported by NSF grant DMS-1007260.

Introduction

A central motivating question in contact topology is the search for the boundary between flexibility (when contact objects behave like smooth objects) and rigidity (when behavior is more restrictive). This search tends to take the form of distinguishing or classifying contact objects up to isotopy. Phrased in terms of the space of all contact structures on a given manifold, or the space of all Legendrians in a given contact manifold, investigating isotopy classes can be thought of as trying to understand the set of path components. Flexibility results tend to give information about higher homotopy groups as well as $\pi_0$: Eliashberg proved, for example, that there is a homotopy equivalence between the space of overtwisted contact structures and the set of smooth 2-plane distributions on a 3-manifold [8], and Gromov proved that there is a homotopy equivalence between the space of Lagrangian immersions $L \to (W, \omega)$ and a space of bundle maps $TL \to TW$ [11]. Rigidity results for higher homotopy groups are less common. Bourgeois uses the cylindrical contact homology invariant to construct non-trivial examples of elements in $\pi_m$ of the space of contact structures on unit cotangent bundles of negatively curved manifolds [1]. Kálmán uses the Chekanov-Eliashberg DGA invariant to construct a non-trivial example in $\pi_1$ of the space of Legendrian knots in standard contact $\mathbb{R}^3$ [15]. Kálmán's example is especially interesting because his loop of Legendrian knots is contractible as a loop of smooth knots.

In this article, we study the space of Legendrian submanifolds in the 1-jet space $J^1M$ with its canonical contact structure. The template for finding nontrivial elements in higher homotopy groups is the same as that used in the rigidity results above: first, to an object $X$ in the space $\mathcal{X}$, associate some (graded) group $H(X)$ which is an invariant of the path component of $X \in \mathcal{X}$. Next, to an element $\gamma \in \pi_m(\mathcal{X}; X)$, associate an element $\Phi(\gamma) \in \mathrm{End}^{1-m}(H_*(X))$, and attempt to prove that this endomorphism is nontrivial. In contrast to the results above, which use flavors of holomorphic-curve-based contact homology, we use the generating family homology as our invariant; see [9, 22]. Because generating family homology is a Morse-theory-based homology, the advantage of this choice is two-fold: first, our proofs do not have to deal with the technical analysis of a holomorphic curve theory or the complicated combinatorics of the Chekanov-Eliashberg algebra; and second, families of Morse-theory-based homologies have been elegantly packaged in Hutchings' language of spectral sequences [14].

Suppose the Legendrian $\Lambda \subset J^1M$ has a generating family $f$ with generating family homology $GH_*(f)$. Let $\mathcal{L}$ denote the space of Legendrian embeddings in $J^1M$. The main technical application of the families framework developed in this article is the following:

Theorem 1.1.
There exists a morphism from $\pi_m(\mathcal{L}(J^1M), \Lambda)$ to $\mathrm{End}^{1-m}(GH_*(f))$ if $m > 1$, or from a subgroup of $\pi_1(\mathcal{L}(J^1M), \Lambda)$ to $\mathrm{Aut}(GH_*(f))$ if $m = 1$.

For the space of Legendrian submanifolds of $\mathbb{R}^{2n+1}$, with $n > 1$, we find that the morphism is nontrivial.

Theorem 1.2. There exists an infinite family of Legendrian n-spheres in $\mathbb{R}^{2n+1}$ such that for each sphere $\Lambda$, there exists an element $\alpha \in \pi_1(\mathcal{L}; \Lambda)$ which is contractible as a smooth loop of spheres but is not contractible in the space of Legendrian submanifolds.

We remark that recently a similar map has been announced by Bourgeois and Brönnle. Their map counts certain holomorphic curves, and it is unclear if the two maps are related.

In Section 2, we review generating families and generating family homology. In Section 3, we review Hutchings' families framework for families of Morse functions, and adapt it to our set-up of generating families. In Section 4, we prove the main results, finishing by rephrasing Theorem 1.1 in slightly more general terms. In Section 5, we apply the families framework in several ways; for example, to computing generating family homology of higher-dimensional Legendrians via a bootstrap argument, as well as to showing how the morphism in Theorem 1.1 factors through front-spinning.

Background Notions

In this section, we briefly review the notion of a generating family for a Legendrian submanifold and the (Morse-theoretic) generating family homology.

2.1. Spaces of Legendrian Submanifolds. Let $J^1M$ denote the $(2n+1)$-dimensional 1-jet space of an n-dimensional smooth manifold $M$. We assume that $M$ is closed, or else diffeomorphic to $\mathbb{R}^n$ outside of a compact set. The 1-jet space is equipped with the standard contact structure. Let $\Lambda \subset J^1M$ be an n-dimensional Legendrian submanifold. We are interested in the topology of the space of Legendrian submanifolds, which is formed by taking the quotient of the function space of Legendrian embeddings by orientation-preserving self-diffeomorphisms of the domain. The space of submanifolds inherits the quotient topology from the weak $C^\infty$ topology on the function space, as in [13]. Let $\mathcal{L}(J^1M)$ denote this space of submanifolds, and simply denote by $\mathcal{L}_n$ the space of local Legendrian submanifolds, i.e., $\mathcal{L}(\mathbb{R}^{2n+1})$.

2.2. Generating Families for Legendrian Submanifolds. Generating families generalize the fact that the 1-jet of a function $f: M \to \mathbb{R}$ is a Legendrian submanifold of $J^1M$. To see how, begin by considering the trivial fiber bundle $M \times \mathbb{R}^N$ with coordinates $(x, \eta)$. A function $f: M \times \mathbb{R}^N \to \mathbb{R}$ is a generating family if $0$ is a regular value of the function $\partial_\eta f: M \times \mathbb{R}^N \to \mathbb{R}^N$. Denote by $\mathcal{F}$ the set of all generating families. A generating family yields a Legendrian submanifold as follows: consider the fiber critical set $\Sigma_f = \{(x, \eta) : \partial_\eta f(x, \eta) = 0\}$. The Legendrian submanifold $\Lambda_f$ defined by $f$ is then the 1-jet of $f$ along $\Sigma_f$:
\[
\Lambda_f = \{ (x, \partial_x f(x,\eta), f(x,\eta)) : (x,\eta) \in \Sigma_f \}.
\]
Said another way, the Cerf diagram for the family of functions $f_x$ parametrized by $x \in M$ is the front diagram for $\Lambda_f$. A given Legendrian submanifold $\Lambda$ may have many different generating families; call that set $\mathcal{F}_\Lambda$. Let $p: \mathcal{F} \to \mathcal{L}(J^1M)$ denote the map that sends a generating family $f$ to the Legendrian submanifold $\Lambda_f$ that it generates. A key fact for this paper is:

Theorem 2.1 ([21]). The map $p: \mathcal{F} \to \mathcal{L}(J^1M)$ is a Serre fibration.

2.3. Generating Family Homology. Generating families may be used to define a Morse-Floer-type theory for Legendrian submanifolds; see [9, 22] as well as [18].
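In outline, the theory is built from two formulas, sketched here in the standard conventions of [9, 22] (this is a sketch: $\delta^c$ denotes the sublevel set $\{\delta \le c\}$, coefficients are $\mathbb{Z}/2$, and the precise hypotheses are spelled out in the next paragraphs):
\[
\delta \colon M \times \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}, \qquad \delta(x, \eta, \tilde{\eta}) = f(x, \tilde{\eta}) - f(x, \eta),
\]
\[
GH_k(f) := H_{k+N+1}\bigl(\delta^{\omega}, \delta^{\epsilon}\bigr),
\]
where $\omega$ is larger than every critical value of $\delta$ and $\epsilon > 0$ is smaller than every positive critical value. The degree shift by $N+1$ is the one that reappears in the Mayer-Vietoris computation of Section 5.2.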
The definition requires the use of Morse theory on noncompact domains, so we restrict our attention to generating families that are either linear at infinity or quadratic at infinity. The former (resp. latter) condition requires the generating family f to agree with a nonzero linear function Apηq (resp. a non-degenerate quadratic function) outside a compact set in MˆR N . If f is linear at infinity, then it may be represented as f " f 0`A , where f 0 has compact support and A is linear; the support of f is the support of f 0 . From here on, we assume that our functions are linear at infinity. The first step in the definition of generating family homology is to introduce the difference function on the fiber product of the domain of f with itself: The critical points of δ with positive critical values correspond to the Reeb chords of Λ f , and we capture this geometric information with the following definition of generating family homology: where ω is a number larger than any critical value of δ and where there are no critical values of δ in p0, q. It is not hard to prove that the groups GH k pf q are independent of the choices of ω and ; see [19, §3]. It is worth noting that 0 is a critical value for δ whose critical points form a Morse-Bott submanifold diffeomorphic to the Legendrian itself. Further, if a generating family f is linear-at-infinity, then, after a fiberwise change of coordinates, so is its difference function δ [9]. We then define the support of δ to be the support of δ 0 where δ " δ 0`A with A linear. The basic invariance property of generating family homology is: Theorem 2.2 (Traynor [22]). If f s : r0, 1sˆMˆR N is a 1-parameter family of generating families that generate a Legendrian isotopy Λ s , then there exists an isomorphism Φ fs : GH k pf 0 q » GH k pf 1 q. Combining this theorem with Theorem 2.1, we see that the set of all generating family homologies for a Legendrian submanifold Λ is invariant under Legendrian isotopy. Hutchings' Spectral Sequence We review Hutchings' construction in [14] of a spectral sequence for smooth families of Morse functions and submanifolds in the context of generating families. Up to some small modifications, his constructions and results apply to difference functions of generating families. We slightly extend the theory developed in [14] to include parameter spaces that have non-empty boundary. Our first task is to set notation for the family of difference functions we plan to analyze using Hutchings' scheme. Fix 0 ă ! 1. Let B be a finite-dimensional compact manifold, thought of as a parameter space. Unlike in [14], we allow B to have nonempty boundary. Let π : Z Ñ B be a fiber bundle whose fiber over b P B is Z b " MˆR NˆRN . Let δ " tδ b : Z b Ñ Ru bPB be a family of smooth functions depending smoothly on b that satisfies: Genericity: In the complement of a codimension one subvariety of B, all critical points of δ b with critical value at least are nondegenerate, and Linear-at-Infinity: Outside a compact set K in MˆR NˆRN , δ b agrees with a fixed nonzero linear function on R NˆRN . Let ∇ : Z Ñ B be a connection. To work with Morse homology in this setting, we need to introduce metrics and gradient flows. We begin by introducing a Morse-Smale pair pF B , g B q on the base space B, requiring the additional property that δ b is Morse for all b P CritpF B q. If BB ‰ H, we assume that the component of the negative gradient flow of F B with respect to g B , orthogonal to BB, is non-zero and points inward. 
Let W be the horizontal lift to Z of this negative gradient flow lifted using ∇. Let g Z denote a fiberwise metric on Z and let ξ be the negative fiberwise gradient flow of δ b with respect to g Z . Finally, we define the vector field which we will use to define differentials in a spectral sequence. We label this geometric data by the tuple The zeroes of V are pairs p " pb, xq, where b P B is a critical point of F B and x P Z b is a critical point of δ b . We will consider two complementary gradings: the base grading ipb; F B q and the fiber grading ipx; δ b q. The total grading of a zero p of V is ippq " ipb; F B q`ipx; δ b q. Hutchings proves in [14, Proposition 3.4 and p. 461] that, generically, the stable and unstable manifolds of the zeroes of V intersect transversally under a slightly different set-up: his fiber Z b is compact, his base B cannot have boundary, and 0 is not a degenerate critical value. Even so, since Hutchings' proof works by examining one pair of non-degenerate critical points at a time, his proof still applies to pairs of critical points with positive critical value in our set-up, with the linear at infinity condition taking the place of compactness. We say that Z is admissible (over B) if the choices above are sufficiently generic so that the stable and unstable manifolds of zeroes of V are transverse. To make the intersections of the stable and unstable manifolds easier to work with, we set some additional notation. Fix zeroes p and q of V . Define Ă Mpp, qq to be the space of negative flowlines u P C 8 pR, Zq of V , i.e. smooth maps u : R Ñ Z that satisfy d dt uptq "´V puptqq, with the property that lim tÑ´8 uptq " p and lim tÑ8 uptq " q. We use this set to define the moduli space of flowlines where u " u 1 if uptq " u 1 pt`τ q for some τ P R. Proposition 3.1. For a generic choice of V , Mpp, qq is a pre-compact manifold of dimension ippq´ipqq. The boundary of the compactification is given by: Proof. This is a rephrasing of the standard argument in Morse homology. Note that even though the space Z need not be compact, the linear-atinfinity condition on δ means that V satisfies the Palais-Smale condition as set down in [20, §2.4.2]. If BB ‰ H, we augment the standard argument as follows. Extend the family to be over a slightly larger open base manifold B 1 where the fiber Z b for b P B 1 zB is constant in the direction orthogonal to BB. Extend the function F B to F B 1 such that for a generic metric g B 1 which extends g B , the negative gradient flow projected orthogonally to BB points towards BB Ă B 1 in any component of B 1 zBB. Even though B 1 is not compact, there are no flow lines starting or ending at any critical point that flow into B 1 zB; thus, the usual arguments that show that the moduli spaces are manifolds with corners from Morse theory, applied to B, hold. Following Hutchings, the data Z yield a bigraded chain complex where the generators are the critical points pb, xq of V with δ b pxq ą . The generator pb, xq has bigrading pipb; F B q, ipx; δ b qq. The differential d n : C l,m Ñ C l´n,m`n´1 counts flow lines of V with coefficients in Z{2. Specifically, we define: #M 0 ppb, xq, pc, yqqpc, yq. That the map d is a genuine differential follows from Proposition 3.1. We filter the complex C ν :" À l`m"ν C l,m by the first grading, F l C ν :" À l 1 ďl C l 1 ,ν´l 1 , and let E˚,˚" E˚,˚pZ, q be its associated spectral sequence. 
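Assembled in one place, the geometric data and the resulting filtered complex described above can be sketched as follows (a reconstruction from the surrounding text; flow lines are counted modulo 2, and $\mathcal{M}_0$ denotes the zero-dimensional moduli spaces of unparametrized flow lines):
\[
V = W + \xi, \qquad \mathcal{Z} = \bigl(Z \to B,\ \delta = \{\delta_b\}_{b \in B},\ F_B,\ V\bigr), \qquad i(p) = i(b; F_B) + i(x; \delta_b),
\]
\[
d(b,x) = \sum_{i(c,y) = i(b,x) - 1} \#\,\mathcal{M}_0\bigl((b,x),(c,y)\bigr)\,(c,y), \qquad C_\nu = \bigoplus_{l+m=\nu} C_{l,m}, \qquad F_l C_\nu = \bigoplus_{l' \le l} C_{l', \nu - l'}.
\]
Here $W$ is the horizontal lift of the negative gradient flow of $F_B$, $\xi$ is the fiberwise negative gradient of $\delta_b$, and the bigraded piece $C_{l,m}$ is generated by zeroes $(b,x)$ of $V$ with $\delta_b(x) > \epsilon$, $i(b; F_B) = l$, and $i(x; \delta_b) = m$.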
The proof of Theorem 2.2 applies to the current situation, and implies that the fiberwise generating family homologies GH˚pf b q can be assembled into a locally constant sheaf, which we denote by F˚pZq. E 2 term: The E 2 term of the spectral sequence is Homotopy invariance: If Z is admissible over Bˆr0, 1s with the restrictions Z 0 :" Z| t0uˆB and Z 1 :" Z| t1uˆB also admissible, then there is an isomorphism of spectral sequences E˚,˚pZ 0 q " E˚,˚pZ 1 q. On the E 2 term, this is the isomorphism induced by the isomorphism of local coefficient systems Proof. When BB " H, the properties stated in the theorem follow with little or no modifications from Hutchings' arguments. In outline, Hutchings first establishes the theorem for spectral sequences defined using singular chains in the base (for any base); see Propositions 4.1, 4.3, 4.6 and Remark 1.5 in [14]. Hutchings then extends the isomorphism from singular homology to Morse homology in [14, Section 2.3] to an isomorphism of singular spectral sequences and Morse spectral sequences over closed manifold base spaces in [14, Proposition 6.1]. When BB ‰ H, we need to supplement the arguments connecting singular and Morse homology. The key idea in the argument is that the descending manifold of a critical point is a manifold with corners [14, Equations (2.6) and (2.7)]. That these equations extend to the case of a base manifold with boundary comes from repeating the argument given in the proof of Proposition 3.1. Remark 3.3. There are several other properties of Hutchings' spectral sequence that we have not included in the theorem above. The most interesting is a Poincaré duality statement, which holds in our set-up for some cases. Algebra of Homotopies In this section, we use the ideas of Section 3 to investigate the homotopy groups of the space of Legendrian submanifolds. In Section 4.1, we discuss how to interpret a family of n-dimensional Legendrians Λ b Ă J 1 M, parameterized by the m-manifold B, as a single pm`nq-dimensional Legendrian Λ. We also discuss relationships to the generating family homology. In Section 4.2, where B " S m is a (based) m-sphere, we interpret Theorem 3.2 as a morphism from the based homotopy groups of the space of Legendrian embeddings LpJ 1 M q to the space of endomorphisms of generating family homology. In Section 4.3, we study this morphism further to find examples of loops of Legendrian embeddings which are non-contractible as Legendrians submanifolds, but contractible as smooth submanifolds. In Section 4.4, we construct a more general morphism from the free homotopy classes of LpJ 1 M q. Let Λ Ă J 1 pBˆM q be the pn`dimpBqq-dimensional Legendrian trace; that is, the front of Λ over the point b is the front of Λ b . As in Section 3, let F B : B Ñ R be a generic function on the base, let V be the vector field from equation (3.1), and let Z " pZ Ñ B, δ " tδ b u b , F B , V q. Lemma 4.1. The function f is a generating family for Λ. If F B is a sufficiently C 2 -small Morse function and Z is admissible, then Proof. This result is straightforward after making two observations. First, in local coordinates, the differential of the fiber derivative of f at pb, m, ηq contains the differential of the derivative of f b as a full-rank submatrix. Thus, f also satisfies the transversality condition for generating families. 
Second, the quasi-isomorphism type (which determines its homology) of CM˚ppδ`F B q ω , pδ`F B q q is independent of the choice of generic F B which makes δ Morse, assuming F B is C 2 -small, and hence perturbing by F B does not change the topology of the level sublevel set. We next consider two examples. The first will be used in Sections 4.2 and 4.3, while the second appears in Section 4.4. Example 4.2 (Based m-sphere). Let Λ Ă J 1 M be an n-dimensional Legendrian submanifold. Let ρ : S m Ñ LpJ 1 M q be a smooth S m -family of Legendrian submanifolds with the property that for a small contractible neighborhood U of b P S m , we have ρpU q " Λ. Construct a Morse function F S m : S m Ñ R that has two critical points, a maximum at a P U and a minimum at b. Assume that }F S m } C 2 ă as in Lemma 4.1. Let Λ be the trace of this m-isotopy and define the generating family f for Λ as in Equation (4.1). If m " 1, we assume that the application of Theorem 2.1 yields a loop of generating families, not just a path. Perturb V if necessary so that is an admissible family. Assume thatρ| Br0,1s m´1ˆb m is independent of b m . Define the Morse function on the base to be: where 0 ă σ ! ! 1. Note that for any metric, the negative gradient of F I m projects to the outward normal direction on BI m . Let Λ be the trace of this m-isotopy and define the generating family f and its difference function δ as in equation (4.1). Perturb V if necessary such that Z " pZ Ñ I m , δ, F I m , V q is an admissible family. 4.2. From Homotopy Groups of the Space of Legendrians to Generating Family Homology. We revisit the map ρ : S m Ñ LpJ 1 M q from Example 4.2, using it to relate the homotopy groups of LpJ 1 M q to morphisms of generating family homology. Specifically, if f is a generating family for Λ and m ą 1, then we will construct a morphism Ψ : π m pLpJ 1 M q; Λq Ñ End 1´m pGH˚pf qq If m " 1, then we restrict the domain of Ψ to the set of homotopy classes of loops in LpJ 1 M q that lift to loops (not just paths) of generating families; denote by π gf 1 pLpJ 1 M q, Λ 0 q the subgroup associated to those loops. Note that if a loop in LpJ 1 M q does not lift to a loop of generating families, then we already know that the loop is non-contractible. To define the map Ψ, we begin by setting notation. Fix a generating family f for Λ and a small neighborhood U Ă S m that contains both the the maximum a and the minimum b of a C 2 -small function F S m that has no other critical points. Suppose that ρ : S m Ñ LpJ 1 M q is a smooth map with the property that ρpU q " Λ. Construct the generating family f ρ as in Example 4.2, recalling that if m " 1, then we assume that we have a loop of generating families. Lemma 4.1 implies that the differential of the generating family chain complex GC˚pf ρ q in degree l can be written as d " ř l`1 k"0 d k pZq, as in equations (3.2) and (3.3). For an element c P CritpF S m q, and a generator pe, pq P GC˚pf ρ q, define xpe, pq, cy to be p P GC˚pf ρ c q if e " c and 0 otherwise. Extend this pairing bilinearly. We can now restate (and prove) Theorem 1.1 is more detail. Proposition 4.4. The map ψ ρ defined above has the following properties: (1) The map induces a homomorphism Ψ ρ : GH˚pf q Ñ GH˚`1´mpf q. Proof. The general principle of this proof is outlined in [14]. For the convenience of the reader, we present some of the details here when considering generating families. To prove the first property, note that d 2 pc, xq " 0 if and only if xd 2 pc, xq, ey " 0 for all e P CritpF S m q. 
Since the base function F S m has critical points of index 0 and m only, we see that d k " 0 unless k " 0, m. In particular, for all x P Critpδ a q, we have: 0 " xd 2 pa, xq, by " xpd 0 d m`dm d 0 qpa, xq, by. Thus, ψ ρ is a chain map and induces a map Ψ ρ : GH˚pf ρ a q Ñ GH˚`1´mpf ρ b q. Next, we take two homotopic maps ρ, ρ 1 : S m Ñ LpJ 1 M q with admissible data Z and Z 1 , respectively. Combining Examples 4.2 and 4.3, we construct an admissible Zr´1, 1s over IˆS m " r´1, 1sˆS m such that Z|´1 " Z " Z| 0 and Z| 1 " Z 1 . We then apply Lemma 4.1 to define d " dpZr´1, 1sq. There are six critical points of F IˆS m , which we denote by pn, cq where n P t´1, 0, 1u and c P ta, bu. Since the base indices lie in the set t0, 1, m, m`1u, the equation d 2 " 0 now implies: Since we are working with a based homotopy between ρ and ρ 1 , the map d 1 corresponds to the identity map; in particular, we have: d 1 ppc, 0q, xq " ppc, 1q, xq`ppc,´1q, xq for c P ta, bu and x P Critpδ pc,0q q " Critpδ pc,˘1q q. Thus, Equation (4.4) indicates that the map H : GC˚pf ρ pa,0q q Ñ GC˚´m`2pf ρ pb,1q q defined by Hpxq " xd m`1 ppa, 0q, xq, pb, 1qy , is a chain homotopy between ψ ρ and ψ ρ 1 . The proof of the third statement for m ě 2 essentially appears in [14, Example 1.9], as Hutchings' proof relies on a based homotopy similar to the one we just explicitly constructed. For m " 1, we are unaware how to apply Theorem 3.2 to prove that Ψ rρsrρ 1 s " Ψ rρs Ψ rρ 1 s . Instead, this follows from the traditional "broken-curves" argument of the more well-studied continuation methods in Morse/Floer theory. 4.3. A constructive proof of Theorem 1.2. In this section, we prove Theorem 1.2, namely that for every n ą 1, there is an infinite family of Legendrian submanifolds, Λ n,r Ă R 2n`1 parametrized by r P N so that π 1 pL n , Λ n,r q is non-trivial. Further, the non-trivial homotopy classes we produce in π 1 pL n , Λ n,r q are trivial in the smooth category. We begin by constructing Λ n,r . Consider the Legendrian link in R 3 whose front projection appears in Figure 1. This link, which is isotopic to the Hopf link, has a generating family f : RˆR N Ñ R with the the top strand of the top component generated by critical points of index r`N and the bottom strand of the bottom component generated by critical points of index N´1. Spin the front about its central axis into R n`1 as in [10] to get two Legendrian spheres. Then perform a 0-surgery along the horizontal dotted 1-disk in Figure 1 to get a connected Legendrian sphereΛ n,r . That the spinning and surgery constructions yield Legendrian surfaces with generating families is a simple generalization of facts proven in [2]. To construct Λ n,r itself, we take two copies ofΛ n,r , positioned sufficiently far apart along the x 1 axis so that the pair can be generated by a single generating family that is equal to a linear function in η in a neighborhood of the hyperplane x 1 " 0; see [18, §3.3]. Finally, perform another 0-surgery to connect the two copies; once again, the result has a generating family which we will call f n,r . It is important that the three 0-surgeries performed thus far line up as in Figure 2. 
For r ě n`2, it is straightforward to use the cobordism long exact sequence of [18] (see also [2]) to compute that the generating family homology with respect to the generating family f n,r is: It is easy to see from the computation that the group GH r pf n,r q is generated by two chains β L and β R , each of which is arises from a sum of critical points that lie in exactly one of the copies ofΛ n,r . With the Legendrian spheres Λ n,r in hand, we proceed to construct a non-contractible loop in L n based at Λ n,r . The idea is to effect a rotation by π in the first two coordinates of the base manifold R n , which yields a loop in L n because of the symmetry of Λ n,r . To be more precise, fix τ ! 1 and choose a smooth function σ : r0, 2πs Ñ r0, πs with the properties that σ is non-decreasing, σ´1t0u " r0, τ s, and σ´1tπu " rπ´τ, 2πs. Define a path ρ : r0, 2πs Ñ SOpnq of rotations of the base R n to be the identity except for the following elements of SOp2q in the upper left corner: " cos σpsq sin σpsq sin σpsq cos σpsq  . Finally, let f s " f n,r˝ρ psq, where we have implicitly extended ρ to be the identity on the fiber component. The symmetry of the function f n,r implies that this is actually a smooth family of generating families over the base S 1 even though ρ does not descend to a smooth function on S 1 . In particular, we obtain a smooth loopρ of Legendrian spheres in L n . To place the construction above in the families context, note that the construction above yields a (trivial) bundle Z " S 1ˆRnˆR2N over S 1 , a fiber-wise difference function δ s , and a base function F B as constructed in Section 4.2 with maximum at 0 and minimum at π. It remains to specify a vector field V . Choose any metric on the base circle and let W be the lift of ∇F B to Z via the trivial connection. Let ξ 0 be the fiber-wise gradient of δ 0 , and define (4.5) ξ s pxq " W psqρ 1 psq`ρpsqξ 0 pxq. Finally, as in Section 3, we define the vector field V to be V px, sq " ξ s pxqẀ psq. Thus, we have all of the data necessary to form a tuple Z for use in the families construction. Proof. It suffices to show that Ψρ is not the identity. The vector field V constructed above is designed so that a flow line γptq " pγ M ptq, γ S ptqq has the following properties: (1) The component γ S ptq satisfies the decoupled one-dimensional equation γ 1 S ptq " W pγ S ptqq. (2) The component γ M ptq is of the form γ M ptq " ρpγ S ptqqζptq for some flow line ζptq of the vector field ξ 0 . This fact is a straightforward consequence of Equation (4.5). It is then clear that the rigid flow lines that compute the map Ψρ on GH˚pf n,r q send a class of GH˚pf n,r q represented by critical points with x 1 ă 0 to the symmetric class represented by critical points with x 1 ą 0. By construction, this map is not the identity in degree r, and hence the loopρ is not contractible. While the loopρ is non-trivial in π 1 pL n , Λ n,r q, it is smoothly trivial. More precisely, we have: Proof. For n " 2, we exhibit a null-homotopy; by spinning this homotopy, we get a proof for the n ą 2 case. The null-homotopy is constructed in two stages. First, note that the space of long 2-knots in R 5 is connected [3]. Further, as noted in [3, Definition 1], the space of long 2-knots in R 5 is homotopy equivalent to the space of embeddings of D 2 into D 5 that agree with a fixed linear function on the boundary. 
Thus, there is a smooth isotopy of the left lobe of Λ 2,r that satisfies the following: (1) It fixes the attaching region of the 0-surgery joining the left to the right lobes; (2) It is supported in the left half-space of R 5 ; and (3) It takes the left lobe to a flying saucer. Performing this isotopy on the left lobe and its rotation on the right, we obtain a smooth isotopy H that takes Λ 2,r down to a flying saucer; note that this isotopy is symmetric about the z axis. We are now ready for the first stage of the homotopy Θ : r0, 2s Ñ L 2 that connects ρ to the identity. We work entirely with the front diagram. At time t " 0, we simply take Θ to be ρ. As t increases to 1, for each fixed t, we perform Hpx, 3sq to gradually transform Λ 2,r into the flying saucer over s P r0, t 3 s, then rotate the result by π, and then perform the reverse homotopy Hpx, 3p1´sqq for s P r1´t 3 , 1s. See Figure 3 for a schematic picture of this construction. At t " 1, the loop ρ has been transformed into a loop that starts by doing H over r0, 1 3 s, then fixes the flying saucer over r 1 3 , 2 3 s, and then undoes H over r 2 3 , 1s. This loop is clearly null-homotopic, and we append this null homotopy to the homotopy constructed above. Remark 4.7. The proof above shows that the elementρ P π 1 pL n , Λ n,r q has order at least 2. We can modify the construction to produce elementsρ m P π 1 pL n , Λ n,r q that have order at least m for any m ą 1. Instead of connecting two copies ofΛ 2,r with a 0-surgery, we begin with a central flying saucer centered on the z axis. We then take m copies ofΛ n,r , arrayed as in Figure 4, and let ρ m,r be a rotation about the z axis by 2π m . The computations of the generating family homology have the same form as those for Λ n,r , and a slight generalization of the proof of Proposition 4.5 shows that all powers ρ m,r , pρ m,r q 2 , . . . , pρ m,r q m´1 are nontrivial maps. In fact, the argument above shows that for any subgroup G ă SOpnq that acts transitively and without fixed points on a set S Ă S n´1 , there exists an n-dimensional Legendrian submanifold Λ G Ă R 2n`1 and an injection G ãÑ π 1 pL n , Λ G q. 4.4. Free homotopies. One can also consider relative versions of the discussion of the map Ψ: instead of m-spheres of Legendrians up to basepointpreserving homotopy, consider m-cubes of Legendrians up to homotopy relative to their boundary. One way to algebraically package this, before passing to homology, is as a fundamental 8-groupoid, which we sketch below. This groupoid is an example of a so-called p8, 0q-category. Essentially, an p8, 0q-category is a category with objects, 1-morphisms between objects, 2-morphisms between 1-morphisms, etc. The "p¨, 0q"-label indicates that all k-morphisms for k ą 0 have homotopy inverses. The "p8,¨q"-label indicates that operations and relations, such as the composition of two composable 1-morphisms and associativity of composition, only hold up to "homotopy." For a rigorous definition of an p8, 0q-category in terms of Kan complexes and simplicial sets, see [17, Remark 1.1.2.3 and Example 1.1.2.5] Example 4.8. As mentioned, an example of an p8, 0q-category is π ď8 pXq, the fundamental 8-groupoid of a topological space X. The objects of π ď8 pXq are the points in X. The 1-morphisms M or 1 px, yq are the (possibly empty set of) paths from x to y. Composition of composable 1-morphisms is concatenation of paths. Note that we are unconcerned with how to parameterize the composite path since all choices are homotopic. 
This leads to the 2-morphisms M or 2 pα, βq between paths α, β which start and end at x, y P X : they are the based homotopies connecting α, β. Note that all pě 1q-morphisms have homotopy inverses. Example 4.9. We define another p8, 0q-category, GHpL n pJ 1 M qq, based on the generating family chain complexes of points in L n pJ 1 M q. The objects are GC˚pZq :" GC˚pf q with differentials d " dpZq. Note if GC˚pZq " GC˚pZ 1 q, but the Legendrians f and f 1 generate are not the same, the chain complexes are considered the same object in this category. Given a Legendrian isotopy Λ b ,´1 ď b ď 1 which is constant for´1 ď b ď 0, let Z be the admissible family associated to the trace Λ. (See Section 4.2.) Define a 1-morphisms α " αpZq P M or 1 pGC˚pf´1q, GC˚pf 1 qq, αpxq :" xd 1 p0, xq, 1y . (using the notation of the proof of Proposition 4.4). Note that when defining M or 1 pGC˚pZq, GC˚pZ 1 qq, we are considering all families Zr´1, 1s between all pairs Z and Z 1 (as in the proof of Proposition 4.4) such that GC˚pZq " GC˚and GC˚pZ 1 q " GC 1 . We continue in this manner, defining the 2morphisms with the d 2 -map, et cetera. Proof. The proposition follows from almost identical arguments to the proof of Proposition 4.4. Further Applications In this section, we examine several explicit constructions of families of Legendrian submanifolds with generating families, teasing out the implications of the families machinery of Section 3 for each construction. 5.1. Product Families. Suppose that Λ Ă J 1 M is a Legendrian submanifold with generating family f . Given a closed manifold B, we may form the product family ΛˆB Ă J 1 pMˆBq simply by taking the generating family f B with fiber f B b " f . This construction, together with a choice of a C 2 -small Morse function F B on B and a metric g on MˆR N , induces a family pZ Ñ B, δ, F B , V q. We may then use Theorem 3.2 to compute the generating family homology of the constant family f B on the total space ΛˆB using a Künneth-type formula. Proposition 5.1. The generating family homology of the total space of a product family may be computed by: Proof. The E 2 property of Theorem 3.2 implies that E 2 i,j " H i pB; GH j pf qq. The triviality property of Theorem 3.2 implies that the spectral sequence E˚,˚collapses at the E 2 page, and we recover the generating family homology of the family f B as in the statement of the theorem. While the result of this corollary has been obtained when M " R n and B is the k-torus [5], this is a new result for all other cases. To see an application of the corollary, one may take any pair of twist knots in J 1 R that Chekanov distinguished using linearized Legendrian contact homology [4]. In this case, since the twist knots have only one possible linearized contact homology group, it is easy to use Fuchs and Rutherford's results in [9] to show that Chekanov's twist knots have different generating family homology. Remark 5.3. The product families construction is a special case of Lambert-Cole's Legendrian product construction [16]. The 1-jet of F B in J 1 B is a Legendrian Λ B isotopic to the zero section, and the product above is then Lambert-Cole's Legendrian product ΛˆΛ B . Front Spinning. In the next few subsections, we bring the front spinning constructions of [6,10], their adaptation to generating families [2], and their generalization to twist spinning [2] into the families context. 
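Before doing so, it is worth recording the Künneth-type statement of Proposition 5.1 explicitly; the formula below is a hedged reconstruction inferred from the $E^2$-collapse argument in its proof (coefficients are $\mathbb{Z}/2$, and $f^B$ denotes the constant product family):
\[
GH_k\bigl(f^B\bigr) \;\cong\; \bigoplus_{i+j=k} H_i\bigl(B;\, GH_j(f)\bigr).
\]
With this formula in hand, we return to the spinning constructions.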
For the simplest version of this construction, suppose that a Legendrian submanifold Λ Ă R 2n`1 is contained in the half-space H defined by x n ą 1. This can always be achieved via a translation in the x n direction, which is a Legendrian isotopy. Suppose further that Λ has a linear-at-infinity generating family f whose support (Section 2.3) also lies in the half-space H. As alluded to in Section 2.3, we may also assume that δ is linear-atinfinity and has support in the half-space H -in fact, we assume that the support lies in the set defined by x n ą 1; see [19]. It is straightforward to check, as noted in [2], that f Σ is still a generating family. We call the new Legendrian the m-spinning of Λ and denote it by Σ m Λ; it clearly has the diffeomorphism type of ΛˆS m . A small generalization of the proof of Proposition 5.1 yields: Proposition 5.4. The generating family homology of the m-spun generating family f Σ,m may be computed as: Proof. The proof is structured around a relative Mayer-Vietoris argument in the domain of δ Σ,m , where we take the set A h to consist of points px, ρ, θ, ηq P R n`mˆR2N with ρ ă 1 and δ ă h and the set B h to consist of points with ρ ą 1 2 and δ ă h. Since δ is a linear function for ρ ă 1, we see that the pairs pA ω , A q and pA ω X B ω , A X B q are both acyclic. Thus, a Mayer-Vietoris argument shows that GH˚pf Σ,m q is isomorphic to H˚`N`1pB ω , B q, which, by examination of Equation 5.1, is precisely the generating family homology of the product family ΛˆS m constructed in the previous section. We conclude, as in the previous section, that if two Legendrians may be distinguished by their generating family homology, then their m-spins are so distinguished as well; see [5,Section 5] for a comparable computation for Legendrian Contact Homology when m " 1. Twist Spinning. To generalize the spinning construction of Section 5.2, consider a representative α of an element in π m pL n ; Λq. Suppose that Λ has a generating family f , and let f θ denote the lift of α to the set of generating families for Λ θ starting at f . If m " 1, we must explicitly assume that the lifting procedure yields a loop, not just a path, of generating families. As a common generalization of [2] and [10], and in parallel to [7] for m " 1, we define a generating family for the twist-spun Legendrian pn`mq-submanifold Λ α by: f α px 1 , . . . , x n´1 , ρ, θ, ηq " f θ px 1 , . . . , x n´1 , ρ, ηq. Front spinning is obviously a special case of twist spinning: simply twist-spin the constant isotopy. To compute GH˚pf α q, we return to the setup in Example 4.2, where the base function F : S m Ñ R has a maximum at a P S m , a minimum at b P S m , and no other critical points. Theorem 3.2 implies that the E 2 term of the families spectral sequence for the family f θ is GH˚pf q'GH˚pf qr1´ms with the differential defined as follows. If x is a generator of GH˚pf q, then in the notation of Sections 3 and 4, the generators of the E 2 term are of the form pa, xq and pb, xq. The definition of the map Ψ then implies that the differential is: pb, Ψ rαs pxq`xq m " 1 pb, Ψ rαs pxqq m ą 1 dpb, xq " 0. Proposition 5.5. The generating family homology GH˚pf α q is independent of the choice of representative of α and may be computed from the chain complex pGH˚pf q ' GH˚pf qr1´ms, dq described above. Proof. The proof is parallel to that of Proposition 5.4, above, with the construction of Ψ in Equation (4.3) and Proposition 4.4 taking the place of Proposition 5.1. 
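In explicit terms, the constructions of Sections 5.2 and 5.3 can be summarized as follows; these are sketches inferred from the surrounding arguments rather than verbatim statements. For the m-spun family, the Mayer-Vietoris argument above identifies
\[
GH_k\bigl(f^{\Sigma,m}\bigr) \;\cong\; GH_k(f) \oplus GH_{k-m}(f),
\]
while the twist-spun generating family and the differential on the complex of Proposition 5.5 take the form
\[
f^{\alpha}(x_1, \dots, x_{n-1}, \rho, \theta, \eta) = f_{\theta}(x_1, \dots, x_{n-1}, \rho, \eta),
\]
\[
d(a,x) =
\begin{cases}
\bigl(b,\ \Psi_{[\alpha]}(x) + x\bigr) & m = 1,\\
\bigl(b,\ \Psi_{[\alpha]}(x)\bigr) & m > 1,
\end{cases}
\qquad d(b,x) = 0,
\]
where $a$ and $b$ are the maximum and minimum of the base function on $S^m$ and $\Psi_{[\alpha]}$ is the monodromy map of Section 4.2.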
The theorem above can give us information in two ways: first, it allows us to use distinct elements of π m pΛ n ; Λ 0 q to produce pairs of distinct pn`mq-dimensional Legendrian submanifolds. For example, twist-spinning the Legendrian Λ constructed in Section 4.3 by the non-trivial element in π 1 pΛ n , Λq yields a Legendrian pn`1q-submanifold distinct from the ordinary spin of Λ. The theorem above also provides a potential mechanism to distinguish elements of π m pL n q: if the twist-spins of two loops of Legendrian with a common base point have different generating family homology, then the difference must have arisen from the Ψ maps. Thus, if one can compute the generating family homology by some other means -surgery [19] or a generating family version of the Mayer-Vietoris sequence of [12], for example -then one has a chance of finding new examples of non-trivial elements of π m pL n q without directly computing the Ψ maps directly. Unfortunately, as of this writing, we know of no implementations of this technique. 5.4. Factoring Ψ Through Spinning. In this section, we study the relationship between the morphism Ψ from homotopy groups of spaces of Legendrians and the 1-spinning construction. Unlike in Section 5.2, we need the analyze the chain complex more closely, but along the way, we reprove Proposition 5.4 in the 1-spun case. First we adapt a technique useful for gradient flow trees and holomorphic disks in Legendrian Contact Homology [6,12] to generating family homology. We state the lemma more generally than is needed in this article for possible future applications. Let g be a metric on MˆR NˆRN , S Ă M be a submanifold, and N pSq Ă M be the -neighborhood of S. Let δ be the difference function of a generating family f : MˆR N Ñ R. Let V be a (negative) gradient-like vector field for δ used to define the differential in GCpf q. Assume the support of V agrees with the support of δ. Lemma 5.6. For all sufficiently small ą 0, and for all px, η,ηq such that x P BN pSq and δpx, η,ηq ą 0, assume one of the following holds: either the component of V normal to BN pSq is non-vanishing and points inwards; or, px, η,ηq is not in the support of δ. Fix points p, q P MˆR NˆRN with δppq ą δpqq ą 0 and negative gradient-like flow line γ of δ connecting them. (2) If both p and q lie in SˆR NˆRN , then γ sits entirely in SˆR NˆRN . (3) If f S is the restriction of f to SˆR N , then GCpf S q is naturally a subcomplex of GCpf q. If we replace "inwards" with "outwards" in the first assumption, then the first and second statements above still hold. Proof. Note that if γ exits the support of V, it then stays within a single fiber txuˆR NˆRN . Thus, for the first statement, it suffices to observe that the hypotheses imply that V is everywhere tangent to SˆR NˆRN . For the second statement, since the normal component of V always points into T pSˆR NˆRN q at p, or vanishes, even if p is a critical point of δ, the flow line cannot leave any neighborhood of SˆR NˆRN . Thus, the first observation implies that γ lies entirely in SˆR NˆRN . A similar proof, based at q, holds if we replace the "inwards" assumption by "outwards". For the third statement, note that the vanishing normal component of V along SˆR NˆRN implies that there is a one-to-one correspondence between the critical points of δ and those of δ S . The equality of differentials then follows from the argument for the second statement which prevents a flow line from leaving SˆR NˆRN . 
All critical points of the gradient-like vector field $V$ have coordinates $\rho > 1$ and $\theta = -\pi/2$ or $\pi/2$, which we distinguish by labeling as $c[-]$ and $c[+]$, respectively, where $c$ is a critical point of the difference function of $f$. This induces a decomposition of the differential $d_{\Sigma,1}$ of $GC(f_{\Sigma,1}) = GC[-] \oplus GC[+]$:

$$d_{\Sigma,1} = \begin{pmatrix} d_{--} & d_{-+} \\ d_{+-} & d_{++} \end{pmatrix}.$$

We first prove a lemma which implies Proposition 5.4 for the 1-spin case.

Lemma 5.7. For all critical points $b, c$ of the difference function of $f$, we have: $d_{-+} c[-] = 0$, $d_{+-} c[+] = 0$, and $\langle d_{--} c[-], b[-]\rangle = \langle d c, b\rangle = \langle d_{++} c[+], b[+]\rangle$, where $d$ is the differential of $GC(f)$.

Proof. By the symmetry of $V$ under the reflection through the $x_1 \cdots x_{n-1} z$ plane, any elements in any rigid moduli space $M_0(c[+], b[-])$ appear in pairs; thus, $d_{+-} = 0$. Let $S \subset \mathbb{R}^{n-1} \times \mathbb{R}^2$ be the open hypersurface satisfying $\theta = -\pi/2$ and $\rho > 1/2$. We see that the hypotheses (with the "inward" specification) of Lemma 5.6 hold; therefore, the third statement of the lemma implies: $d_{-+} = 0$ and $\langle d_{--} c[-], b[-]\rangle = \langle d c, b\rangle$. Finally, let $S' \subset \mathbb{R}^{n-1} \times \mathbb{R}^2$ be the hypersurface defined by $\theta = \pi/2$ and $\rho > 1/2$. The identity $\langle d_{++} c[+], b[+]\rangle = \langle d c, b\rangle$ now follows from the second statement of Lemma 5.6 (with the "outward" hypothesis).

Proposition 5.8. Let $\Psi$ be the map from Proposition 4.4. Let $P[\pm]$ be the projection map defined on generators as $GH(f_{\Sigma,1}) \to GH(f)$, $c[\pm] \mapsto c$, $c[\mp] \mapsto 0$.

Proof. First note that $i$ is well-defined, since the 1-spin of a homotopy of two Legendrian $S^m$-families is a homotopy of two 1-spun Legendrian $S^m$-families. Let $d_m$ be the chain map which induces the upper arrow $\Psi$ in the proposition, and let $d^{\Sigma,1}_m$ be the chain map which induces the lower $\Psi$, both as in Equation (4.3). Using the notation of Lemma 5.7, it suffices to show that:

(5.3) $\langle d^{\Sigma,1}_m c[-], b[-]\rangle = \langle d_m c, b\rangle = \langle d^{\Sigma,1}_m c[+], b[+]\rangle.$

We prove the first equality, as the second one follows from identical reasoning. Let $\Lambda(t)$, $t \in S^m$, represent an arbitrary element in $\pi_m(\mathcal{L}^n; \Lambda)$ and let $\Sigma_1\Lambda(t)$ be its front-spun counterpart. Recall the $S^m$-family is described in Example 4.2. For $t \in S^m$, choose (smoothly in $t$) the half-hyperplane $S(t)$ from the proof of Lemma 5.7 (rotated according to $t$) which "cuts out" a copy of $\Lambda(t)$ from $\Sigma_1\Lambda(t)$. This defines a hypersurface $S$ in $S^m \times \mathbb{R}^{n+1}$. As in the proof of Lemma 5.7, we see that the hypotheses of Lemma 5.6 are satisfied. Equation (5.3) follows from the second statement of Lemma 5.6.
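To make the twist-spinning complex of Section 5.3 slightly more concrete for the reader, we note that a complex of the form described there — two copies of $GH_*(f)$, with the differential carrying the $(a, x)$ generators into the $(b, \cdot)$ generators and vanishing on the latter — is, up to the grading shift already built into the notation, a mapping cone. The following display is only our paraphrase of the differential quoted above, not an additional claim of the source:

```latex
% Paraphrase of the twist-spinning chain complex, assuming the notation above.
% The (a,x) and (b,x) generators span the two summands of GH_*(f) \oplus GH_*(f)[1-m];
% the differential vanishes on the second summand.
\[
  GH_*(f_\alpha) \;\cong\;
  H_*\Bigl(\mathrm{Cone}\bigl(\Phi : GH_*(f) \longrightarrow GH_*(f)\bigr)\Bigr),
  \qquad
  \Phi =
  \begin{cases}
    \Psi_{[\alpha]} + \mathrm{id} & m = 1,\\[2pt]
    \Psi_{[\alpha]}               & m > 1.
  \end{cases}
\]
```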
2013-11-03T21:04:53.000Z
2013-11-03T00:00:00.000
{ "year": 2013, "sha1": "1574203c21e2a2e64079b10b87d3a79038d18afa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1311.0528", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1574203c21e2a2e64079b10b87d3a79038d18afa", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
231733618
pes2o/s2orc
v3-fos-license
Low Soluble Receptor for Advanced Glycation End Products Precedes and Predicts Cardiometabolic Events in Women With Rheumatoid Arthritis Background: Cardiovascular disease (CVD) causes premature mortality in rheumatoid arthritis (RA). Levels of soluble (s)RAGE change with aging, hypertension and hypercholesterolemia. We assessed whether sRAGE was associated with increased risk of CVD in RA patients. Methods: Serum sRAGE was measured in 184 female RA patients and analyzed with respect to CVD risk estimated by the Framingham algorithm (eCVR), metabolic profile and inflammation. Levels of sRAGE in 13 patients with known cardio-metabolic morbidity defined the cut-off for low sRAGE. Prospective 5-year follow-up of new CV and metabolic events was completed. Results: Low sRAGE was significantly associated with previous history and with new imminent cardiometabolic events in the prospective follow-up of RA patients. In both cases, low sRAGE reflected higher estimation of CVR in those patients. Low sRAGE was attributed to adverse metabolic parameters including high fasting plasma glucose and body fat content rather than inflammation. The association of sRAGE and poor metabolic profile was prominent in patients younger than 50 years. Conclusions: This study points to low sRAGE as a marker of metabolic failure developed during chronic inflammation. It highlights the importance of monitoring metabolic health in female RA patients for timely prevention of CVD. Trial registration: ClinicalTrials.gov with ID NCT03449589. Registered 28 February 2018.

INTRODUCTION Glycation is the process of non-enzymatic binding of the sugar molecules glucose and fructose to proteins, lipids, and nucleic acids. Glycation directly depends on glucose concentration and occurs at random sites of a molecule. It leads to the loss of the molecule's function and degradation into the advanced glycation end products (AGEs) (1). Excessive glycation may occur in response to oxidative stress, hypoxia, and inflammation. In turn, circulating AGEs in the extracellular compartment activate the proinflammatory receptor for advanced glycation end products (RAGE) and participate in perpetuation of inflammation (2). Under inflammatory conditions, other non-glycated RAGE ligands such as S100 proteins and HMGB1 are accumulated. RAGE ligands induce proinflammatory signaling through the membrane-bound RAGE, causing nuclear translocation of NF-kappa B followed by cytokine production (2). A broad range of harmful consequences of long-lasting hyperglycemia for health is well documented (3), while cellular malfunction in response to high circulating glucose requires better understanding. Exposure of proteins to glucose enhances the process of unselective glycation (4)(5)(6). Measurement of glycated hemoglobin is clinically used to monitor DM (4). Ingestion of highly glycated milk protein results in a rise of plasma glucose (7). However, there is a controversial view on levels of sRAGE in T2D. Several studies indicated decreased levels of sRAGE in T2D without complications (8,9) and others reported high levels of sRAGE in T2D with cardiovascular or renal complications due to increased production of AGEs (10)(11)(12). AGEs have a key role in chronic inflammation, and their accumulation has been reported in CVD, atherosclerosis, and RA. Other factors, such as male gender, smoking, and hyperglycemia, have been reported to raise the generation of AGEs independently of RA. Interestingly, disease activity or erosivity of RA had no association with AGEs (13).
RAGE is a multiligand receptor, which belongs to the immunoglobulin superfamily of cell surface molecules and is physiologically expressed by cells involved in innate immune responses, including macrophages and granulocytes, and also on endothelial cells, vascular smooth muscle cells, and adipocytes (14). A soluble form of RAGE (sRAGE) is either generated via the proteolytic cleavage of extracellular domain of the membrane-bound RAGE or formed by endogenous splicing of RAGE mRNA transcripts. It acts as a decoy receptor by catching RAGE ligands and preventing them from binding to the membrane-bound RAGE and thereby modulating the pro-inflammatory effects of RAGE signaling (2,15). Soluble RAGE is considered to protect against adverse effects of proinflammatory RAGE ligands. Low levels of sRAGE were suggested to be a very early marker of endothelial dysfunction (16), and were reported in coronary artery disease (17,18), atherosclerosis (19), essential hypertension (20,21), hypercholesterolemia (22), and in RA (23), where CVD remained to be the major cause of premature death. We have previously reported that chronic inflammation in RA is associated with significantly lower serum sRAGE compared to healthy controls and patients with non-inflammatory joint diseases (23). Furthermore, the presence of anti-RAGE antibodies locally in the joints of RA patients was related to a less destructive joint disease (24). In the present prospective study, we assess an association between serum sRAGE and cardiometabolic health in female RA patients. We search for the CVD risk factors attributed to the low serum levels of sRAGE. Patients One hundred eighty-four female patients with established RA were recruited into the study. All the patients fulfilled the American Rheumatism Association 1987 revised criteria for RA (25). Patients were randomly chosen from the methotrexate (MTX)-treated patient cohorts at two rheumatology units in Sweden, Sahlgrenska University Hospital in Gothenburg and the Northern Älvsborg Country Hospital in Uddevalla during the period from November 2011 until September 2013. Patients under the age of 18, patients with other rheumatologic diseases, and juvenile idiopathic arthritis were excluded. At inclusion, 93% (n = 172) of patients received MTX treatment. Fifty-one patients (28%) had treatment with biologics including infliximab (n = 23), etanercept (n = 12), golimumab (n = 5), adalimumab (n = 3), rituximab (n = 3), tocilizumab (n = 4), abatacept (n = 1). Twenty-five MTX-treated patients (16%) received concomitantly other disease modifying drugs (14 sulfasalazine, 6 hydroxychloroquine, 4 combination of sulfasalazine and hydroxychloroquine, and 1 cyclosporine A). Oral corticosteroids (median dose 5.0 mg/day) were regularly used by 20 patients (11%). All patients completed the questionnaire about their current medication, concomitant diseases and smoking habits. At inclusion, all patients were examined by experienced rheumatologists and the clinical (tenderness and swelling of 28 joints) and laboratory (erythrocyte sedimentation rate, C-reactive protein) disease activity variables were recorded. Disease activity score in 28 joints (DAS28) was calculated (http://www.4s-dawn. com/DAS28/). The clinical information with regard to patients' age, sex, body mass index (BMI), body fat content (26), and disease duration were collected. Ethical Consideration The study was approved by the Swedish Ethical Review Authority (Dnr. 659-2011) and performed in accordance with the Declaration of Helsinki. 
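The disease activity score referred to above is calculated from the tender and swollen counts of 28 joints, the erythrocyte sedimentation rate, and a global health score. The minimal sketch below uses the commonly published 4-variable DAS28-ESR coefficients; the paper does not state whether the 3- or 4-variable variant of the online calculator was used, so the formula, function name, and example values are illustrative assumptions rather than the authors' exact procedure.

```python
from math import sqrt, log

def das28_esr(tender28: int, swollen28: int, esr_mm_h: float, global_health: float) -> float:
    """Standard 4-variable DAS28-ESR formula (global_health on a 0-100 mm VAS).

    This is only the commonly published form; the cohort-specific variant is not
    specified in the source text.
    """
    return (0.56 * sqrt(tender28)
            + 0.28 * sqrt(swollen28)
            + 0.70 * log(esr_mm_h)
            + 0.014 * global_health)

# Example: 6 tender joints, 4 swollen joints, ESR 30 mm/h, patient global 40/100
print(round(das28_esr(6, 4, 30.0, 40.0), 2))
```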
The informed written consent was obtained from all subjects prior to enrolment in the study. The trial is registered at ClinicalTrials.gov with ID NCT03449589. Calculation of Estimated Cardiovascular Risk A 10-year risk for development of CVD was estimated (eCVR) using a digital version of the Framingham algorithm (27) and included sex, age, systolic blood pressure, treatment for hypertension, current smoking, diabetes, HDL, and total cholesterol. CVD Follow-Up at 5 Years Five years after enrollment, the patients were contacted for a structured telephone interview and a questionnaire was sent to their home address. The questions were asked for any CV event, and about current medication with antihypertensive drugs, anticoagulants, anti-diabetic drugs, and use of statins. The reported CV events and changes in medications were then controlled in patients' medical records and the Swedish National Health Registry. We were able to reach all patients except 3 patients-two of them were diseased and one patient had moved out of Sweden. Collection and Preparation of Blood Samples The blood samples were obtained after overnight fast. Blood was collected from the peripheral cubital vein directly into the vacuum tubes containing serum clot activator (Vacuette, Greiner Bio-One, Kremsmunster, Austria), mixed thoroughly and left to coagulate for 3-4 h at room temperature. The tubes were then centrifuged at 2,000 × g for 10 min, the serum carefully collected, aliquoted, and stored at −80 • C until use. Measurement of sRAGE The levels of sRAGE in serum were determined using a specific sandwich ELISA kit (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. Serum was diluted 1/3 in assay buffer and introduced into the ELISA plates coated with mouse monoclonal antibody against RAGE. After 2 h of incubation with serum, polyclonal capture antibody against the extracellular portion of RAGE was used. The reaction was visualized by tetramethylbenzidine substrate. The minimum detectable concentration of sRAGE was 4 pg/ml. According to the manufacturer, no significant cross-reactivity to EN-RAGE, HMGB1, S100A10, or S100Baa was observed. Other Serological Measures The measurement of adipokines and cytokines were determined using specific sandwich ELISA kits according to the instructions from the manufacturers (R&D Systems, Minneapolis, MN, USA) as previously described (28). The inflammatory parameters, blood lipids and RF/ACPA antibodies were measured at the accredited Laboratory of Clinical Chemistry at the Sahlgrenska University Hospital according to clinical routines. Plasma glucose levels were measured using FreeStyle Lite kit (Abbott Diabetes Care Ltd., Oxon, UK) and insulin levels by sandwich ELISA kit (DY8056, R&D Systems, Minneapolis, MN, USA). Statistical Analysis Descriptive statistics for continuous variables are presented as the median with interquartile range, and for categorical variables as the number and the percentage. Univariate correlation between variables was examined by the Spearman's correlation test. Any two factors with a correlation coefficient >0.3 were investigated for co-linearity. For continuous variables, the difference between groups was assessed by using the Mann-Whitney U-test. The difference in frequency, sensitivity and specificity of calculations were performed using Chi Square and Fisher's exact test. Analyses were performed using Graph Pad Prism 8 for Microsoft Windows. All tests were two tailed and p < 0.05 was considered statistically significant. 
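The statistical workflow described above (Spearman correlation for continuous associations, Mann-Whitney U-test for group differences, chi-square/Fisher's exact test for frequencies, two-tailed p < 0.05) maps directly onto standard SciPy calls. The sketch below is purely illustrative: the arrays are randomly generated placeholders, not study data, and the 2x2 table is invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
srage = rng.normal(1200, 400, 50)      # hypothetical sRAGE values, pg/ml
glucose = rng.normal(5.5, 0.8, 50)     # hypothetical fasting plasma glucose, mmol/l

# Univariate association between two continuous variables
rho, p_rho = stats.spearmanr(srage, glucose)

# Group difference for a continuous variable (e.g., sRAGE-lo vs sRAGE-hi)
group_a, group_b = srage[:25], srage[25:]
u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Difference in frequency of a categorical risk factor (2x2 contingency table)
table = [[12, 13], [5, 20]]
odds_ratio, p_fisher = stats.fisher_exact(table)

print(f"Spearman rho={rho:.2f} (p={p_rho:.3f}); "
      f"Mann-Whitney p={p_u:.3f}; Fisher p={p_fisher:.3f}")
```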
Soluble RAGE and Clinical, Metabolic, and Inflammatory Features in RA Out of 184 patients included at the baseline, we identified 7 patients with type 2 diabetes (T2D) and 6 patients with previous CV events. As T2D and CVD could affect sRAGE levels, these 13 patients with cardiometabolic diseases were extracted from the cohort, analyzed separately and comprised a cardiometabolic reference (CMR) group. The baseline characteristics of CMR group (n = 13) and the remaining study cohort (n = 171) are shown in Table 1. Expectedly, CMR group had significantly higher eCVR compared to the remaining 171 patients ( Table 1). This high CVR was largely attributed to high fasting plasma glucose levels and adverse composition of blood lipids including TG and TG/HDL ratio, leptin/adiponectin ratio, and BMI ( Table 1). CMR group had significantly higher disease activity estimated by DAS28 compared to the remaining 171 RA patients. Interestingly, sRAGE levels had significant strong positive correlation with insulin (r 0.643, p = 0.028), HOMA index (r 0.626, p = 0.032), and age (r 0.675, p = 0.013) within the CMR group. To investigate whether sRAGE concentrations were associated with high CV risk, sRAGE values within the lower 75% of the CMRG were considered low and were used to dichotomize the CV event free RA patients into high sRAGE (sRAGE hi ; n = 73) and low sRAGE (sRAGE lo ; n = 98) groups (Figure 1). The median eCVR was comparable between the groups with high and low sRAGE levels (Figure 1). We found neither differences in cardiometabolic nor in RA-related disease activity parameters (Figure 1). Next, we performed univariate correlation analysis between sRAGE and CV risk parameters in un-dichotomized RA cohort and observed bi-directional correlation profile between sRAGE and eCVR (Supplementary Figure 1). Thus, we analyzed correlation between sRAGE levels and cardio-metabolic and inflammatory parameters within respective group. The correlation pattern of sRAGE was remarkably different between the sRAGE hi and sRAGE lo patients (Figure 2A). This difference in correlation between RAGE hi and sRAGE lo groups was confirmed by the Fisher r-to-z test and was significant for eCVR-BMI, body fat index, age, IL-6, and IGF-1 (Figure 2A). Additionally, in patients within sRAGE hi group, sRAGE showed significant positive correlation with plasma glucose, eCVR and age. In relation to RA-related risk factors, sRAGE correlated positively with DAS28, tender and swollen joints, IL6, and resistin, whereas a negative correlation was seen between sRAGE and serum levels of IGF1 (Figure 2A). In contrast, in patients within sRAGE lo group, a positive correlation was seen between sRAGE level and IGF1, whereas eCVR-BMI and body fat content correlated negatively to sRAGE. The analysis of traditional CVR factors such as hypertension, dyslipidemia, overweight, smoking, and age showed no significant difference between sRAGE lo and sRAGE hi groups ( Figure 2B). We thereafter studied RA-related CVR factors, which included higher ESR, the presence of RA-specific antibodies, long disease duration and active RA disease defined by DAS28. The comparison showed that none of the RA-related risk factors had significant difference between sRAGE lo and sRAGE hi groups ( Figure 2C). Since eCVR is age dependent and a decrease of sRAGE levels with increasing age has been reported in several studies (29-31), we performed the analysis separately for the patients of different age groups. 
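The dichotomization rule described above — the upper quartile of sRAGE in the cardiometabolic reference group serving as the cut-off, with values at or below it labelled "low" in the remaining cohort — can be written in a few lines of NumPy. All numbers below are placeholders, not study data, and the handling of values exactly at the boundary is our assumption.

```python
import numpy as np

# Placeholder sRAGE values (pg/ml) for the 13-patient reference group and part of the cohort
cmr_srage = np.array([620.0, 850.0, 910.0, 1020.0, 1100.0, 1180.0, 1250.0,
                      1330.0, 1400.0, 1460.0, 1510.0, 1580.0, 1650.0])
cohort_srage = np.array([700.0, 1450.0, 1900.0, 980.0, 1600.0])

cutoff = np.percentile(cmr_srage, 75)   # upper quartile of the reference group
is_low = cohort_srage <= cutoff         # sRAGE-lo vs sRAGE-hi assignment (boundary handling assumed)
print(f"cut-off = {cutoff:.0f} pg/ml", is_low)
```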
We compared the traditional and RA-related CVR factors as well as serum levels of adipokines and cytokines for ages <50 years (n = 66) and ≥50 years (n = 105) in the sRAGE lo and sRAGE hi groups. We observed no significant differences in sRAGE levels between the patients <50 years compared to those ≥50 years within the respective sRAGE lo and sRAGE hi groups.

Prospective Follow-Up for Development of New Cardiometabolic Events Within 5 years, 11 of 171 patients (6.4%) developed new cardiometabolic events (CME). In the sRAGE lo group, seven events were observed, including 1 patient with a new T2D diagnosis, 2 chronic atrial fibrillations (AF), 2 transitory ischemic attacks, 1 patient who developed deep venous thrombosis, and one patient who died of aortic dissection. In the sRAGE hi group, CME occurred in 4 patients: 1 patient with new T2D combined with AF, and 1 AF, 1 stroke, and 1 incidental aortic aneurysm. The prevalence of new CME was not different between the sRAGE lo and sRAGE hi groups (6.9 vs. 6.1%, respectively). Next, we wanted to study whether the patients with new CME were different at inclusion with respect to inflammation and metabolic characteristics compared to patients in the sRAGE hi and sRAGE lo groups that had no new CME. The new CME group had significantly lower sRAGE levels compared with the sRAGE hi group (Figure 3A). Importantly, the new CME group had significantly higher eCVR compared to both the sRAGE lo and sRAGE hi groups. Patients in the new CME group were significantly older and had adverse metabolic parameters such as higher plasma glucose levels and increased body fat compared with patients in the sRAGE hi and sRAGE lo groups. Inflammation, measured by ESR, IL6, IL1β, and DAS28, was not different between the new CME group and the remaining RA patients. Further, we compared the new CME group with the CMR group, which accumulated CVR factors and had the highest eCVR (Figure 1). The baseline parameters of the patients with new CME were similar to the CMR group with respect to eCVR and also sRAGE (Figure 3B). We observed no differences in other cardiometabolic and inflammatory parameters between those groups.

DISCUSSION In the present study we show that low serum levels of sRAGE are significantly associated with previous history and with new imminent cardiometabolic events in female RA patients. In both cases, this corresponded to a higher estimation of CVR in the patients with low sRAGE. This low sRAGE was largely attributed to adverse metabolic parameters rather than signs of inflammation. We observed high fasting plasma glucose and overweight to be the major contributors to CVD risk in the younger sRAGE lo group. Also, the 5-year follow-up showed that the patients with new CME had remarkably low sRAGE levels and reached the level of the CMR group in eCVR. The patients with new CME displayed a significant accumulation of unfavorable metabolic factors combining high plasma glucose with overweight.

[Figure 1 | Metabolic and inflammation-related characteristics of female RA patients: sRAGE hi (n = 73), sRAGE lo (n = 98), and cardio-metabolic reference (CMRG, n = 13) groups; sRAGE in the upper quartile of the CMR group (above 1,504 pg/ml) defined the cut-point for dichotomization; Mann-Whitney U-test; box plots show medians and interquartile ranges, whiskers min to max; *P < 0.05 to ****P < 0.0001. TG/HDL, ratio between serum triglycerides and high-density lipoproteins; DAS28, disease activity score; IL, interleukin.]

Recently, Dozio et al. suggested circulating sRAGE as an early marker of cardiometabolic disease. They showed that healthy obese women presented lower sRAGE levels than normal-weight women, and found an inverse association of sRAGE levels with BMI, total fat mass, and visceral fat in the epicardial region (32). In another study, investigating healthy subjects from the general population with no T2D, CVD, hypertension, or treatment for hyperlipidemia, the authors found that BMI and waist circumference were inversely associated with sRAGE in women (33). Consistent with the above findings in healthy women, we observed a significant association between low sRAGE levels and total body fat and eCVR in the patients below 50 years. Similarly, patients with new CME had low sRAGE and displayed significantly increased plasma glucose levels and body fat content compared to both the sRAGE lo and sRAGE hi groups. These findings suggest that low sRAGE reflects a metabolic misbalance prior to imminent clinical CME. In our cohort, we have analyzed the total levels of circulating sRAGE, which exists in two main isoforms — a soluble RAGE cleaved from the membrane-bound full-length RAGE by proteases (34)(35)(36), and an endogenously secreted RAGE produced by alternative splicing (15). While cleaved RAGE has a strong association with inflammation markers, endogenously secreted RAGE remains constant among age groups in the healthy population and reflects metabolic disturbances related to obesity and insulin resistance (29). In our sRAGE hi group, sRAGE levels were primarily associated with inflammation, including disease activity and IL6. On the contrary, in the sRAGE lo group, sRAGE had a negative correlation with BMI, eCVR, and age, reflecting metabolic disturbance rather than inflammation. What could explain these seemingly controversial associations? Under physiological conditions, the cell-surface RAGE has relatively low expression in non-inflamed tissues (37), whereas during inflammation it is up-regulated in response to ligand exposure (1). In active RA, a plethora of inflammatory ligands for RAGE are present, both in the synovium (38-40) and in the circulation (41,42), thereby modulating the expression of cell-bound RAGE. In our sRAGE hi patient group, but not in the sRAGE lo group, we observed a positive correlation between sRAGE and markers of inflammation.

[Figure 2 | (A) Correlation between sRAGE levels and cardio-metabolic and inflammatory parameters within the respective groups (Spearman's R-values shown as a color-coded matrix); (B,C) differences in frequency of the traditional and RA-related cardiovascular risk factors between the sRAGE hi (n = 73) and sRAGE lo (n = 98) groups, shown as odds ratios with 95% CI (chi-square statistics); (D,E) comparison of the traditional, inflammation- and RA-related CVR factors for ages <50 years (n = 66) and ≥50 years (n = 105) in the sRAGE lo and sRAGE hi groups (pairwise Mann-Whitney U-test; box plots show medians and interquartile ranges).]

In fact, the more inflammatory ligands for RAGE in the surrounding milieu, the higher the expected density of the cell-bound receptor, which predisposes to increased production of sRAGE by cleavage, and sRAGE levels are probably a simple reflection of RAGE production in tissues. Besides inflammation, another molecular explanation for decreased sRAGE is conceivable. Hyperglycemia leads to increased non-enzymatic glycation of proteins, i.e., production of AGEs. Soluble RAGE binds AGEs without activating cellular pathways and functions as a decoy to AGEs, increasing its consumption and decreasing the detectable circulating level. On the other hand, binding up AGEs leads to blocking of AGE-RAGE signaling, reduces the positive feedback loop for RAGE upregulation, and thereby potentially limits enzymatic cleavage of RAGE. This explanation is well applicable to the patients of the CMRG and new CME groups, both recognized by low sRAGE and high plasma glucose levels.

[Figure 3 (caption fragment) | This group was compared to the patients with high (sRAGE hi, n = 69) and low (sRAGE lo, n = 91) serum levels of soluble RAGE; (B) comparison of the new CME group (n = 13) with the CMR (n = 11) group with respect to eCVR and sRAGE; pairwise Mann-Whitney U-test; box plots show medians and interquartile ranges; *P < 0.05 to ****P < 0.0001. eCVR, estimated cardiovascular risk; RAGE, receptor for advanced glycation end products.]

Of importance, the levels of circulating sRAGE could be affected by several drugs. The effect of treatment for hypertension and hyperlipidemia (22,43) as well as DMARD treatment with methotrexate (23) has been shown to modulate sRAGE levels in several studies. However, in our cohort, we did not find any differences in the level of sRAGE between groups either treated or not with statins, DMARDs, or antihypertensive drugs (Supplementary Table 1). Our study has certain limitations, which need to be taken into account. Firstly, the study had a cross-sectional design, although the patients were clinically followed up for 5 years with respect to cardiometabolic events. A structured consecutive blood sampling during the follow-up period would have probably rendered more clear-cut results. However, data from the community-based atherosclerosis risk study ARIC, which measured sRAGE levels 3 years apart, suggested that sRAGE concentrations within individual subjects are relatively stable. Thus, a single measure could be valuable to evaluate the long-term CV risk (44). Secondly, as discussed above, we have analyzed the total amount of sRAGE and therefore the study does not permit any conclusions with regard to sRAGE isoforms and their relations to CVR in RA women. This question needs to be addressed in future studies.
Taken together, this study shows that low sRAGE reflected higher CV risk in female RA patients. It was associated with previous history and with new forthcoming cardiometabolic events. The study also emphasizes metabolic misbalance behind low sRAGE in female RA patients and puts it forward as a useful biomarker to monitor cardiometabolic health in RA patients. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethical Review Board of Gothenburg with permission code 659-2011. All methods used in this study were carried out in accordance with relevant Swedish guidelines and regulations and following the Good Clinical Practice. The informed written consent was obtained from all subjects prior to enrolment in the study.
2021-02-02T17:59:04.973Z
2021-01-28T00:00:00.000
{ "year": 2020, "sha1": "3df6fd5aaf56722ed32622e2fedd2c3b7349c186", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2020.594622/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3df6fd5aaf56722ed32622e2fedd2c3b7349c186", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
203995985
pes2o/s2orc
v3-fos-license
Combined Model for IAQ Assessment: Part 1—Morphology of the Model and Selection of Substantial Air Quality Impact Sub-Models: Indoor air quality (IAQ) is one of the most important elements affecting a building user's comfort and satisfaction. Currently, many methods of assessing the quality of indoor air have been described in the literature. In the authors' opinion, the methods presented have not been collected, systematized, and organized into one multi-component model. The application purpose of the assessment is extremely important when choosing an IAQ model. This article provides a state-of-the-art overview of IAQ methodology and attempts to systematize the approach. Sub-models of the processes that impact indoor air quality, which can be distinguished as components of the IAQ model, are selected and presented based on sensory satisfaction functions. Subcomponents of three potential IAQ models were classified according to their application potential: IAQ quality index, IAQ comfort index, and an overall health and comfort index. The authors provide a method for using the combined IAQ index to determine the indoor environmental quality index, IEQ. In addition, the article presents a method for adjusting the weights of particular subcomponents and a practical case study which provides IAQ and IEQ model implementation for a large office building assessment (with a BREEAM rating of excellent).

[Figure: determination of the indoor air quality (IAQ index) and indoor environmental quality (IEQ) indexes for the BREEAM-certified case study building from thermal comfort (TC index), indoor air quality (IAQ index), acoustic comfort (ACc index), and lighting quality (L index).]

Standardized CEN and ISO analytical methods were used to determine the VOC, CO2, and formaldehyde concentrations in the indoor air of the building. Selection of the sampling points was made with the BREEAM assessor in two representative office zones per tested floor and a minimum of two floors. The building was tested three days after formal final finishing works at the pre-occupancy stage with no users inside. For this office building, the tests were conducted on the 55th and 47th floors. Air samples were collected using an active sampling procedure with an electronic mass flow controller, which controlled the air flow (10 dm3/h for VOC tests and up to 30 dm3 for formaldehyde tests). Indoor samples were set up in selected representative office locations, approximately 1.5 m above the floor, away from windows, doors, potential emission sources, and direct sunlight. Air samples were tested in accordance with the ISO 16000-6:2011 and ISO 16000-3:2011 standards. The VOCs were assessed using tubes filled with Tenax.

State-of-the-Art Indoor Air Quality Measurement Systems Approximately 30 years ago, people began to realize that buildings not only provide them with a sense of security, but can also significantly affect their health and well-being. This is particularly important due to the fact that people spend an increasing amount of time in closed indoor environments. Air quality and ventilation approaches were initially based on the users' dissatisfaction with the scent of the human body and, as such, the understanding of indoor air quality (IAQ) had serious limitations. Large quantities of pollutants and their sources clearly influence the indoor comfort of building inhabitants, as well as their health.
In 1998, Fanger [1] presented an approach to the quantitative determination of perceived IAQ based on the level of dissatisfaction of residents caused by bad odors and irritants, smoke, and other sources of pollution. This approach provided two new measures of IAQ: the olf, which quantifies the pollution generated from a strong source of human bio-pollutant in the range of the impact of emitted odors on perceived air quality, and the decipol, measuring the perceived air quality in an indoor space with a source of pollution of one olf at a ventilation rate of 10 l/s. The number of emitted olfs per floor unit in different types of buildings and the amounts of pollutants from tobacco smoking (in olfs) can then be determined. Consideration of only the odour of the human body, without taking into account the influence of pollutants from various other sources, is a serious limitation. Smoking alone emits more than 7000 different compounds, many of which are harmful [2] for humans and animals and may transport biological pollutants that can act as allergens. People and household animals emit gases which are unpleasant, transfer pathogens, and cause diseases. These examples show that there are many paths for penetration of and exposure to the sources of pollution in indoor environments. In connection with the growing need to determine levels of indoor air pollution, new centers performing tests and new methods have been created considering the ability to analyze an increasing number of harmful substances.

Measurement of pollutant concentrations in the air is generally a task performed by experts, mainly in accredited laboratories, and the results are published in scientific journals, technical reports, and, eventually, in guidelines, e.g., those of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) [3]. The presence and concentrations of pollutants are often detected and measured without careful consideration of the significance of these measurements, and the pollutants measured may not be the most widespread or the most harmful. Some emissions are incorrectly grouped together; for example, more than one million volatile organic compounds (VOCs) are known and their toxicities are generally unknown, but they are often reported as a single value and referred to as the total VOCs (TVOCs) component. Frequently, carbon dioxide is used as an indicator of IAQ, although it does not have such a negative effect on the health of residents in the concentrations in which it is usually found in buildings. In our opinion, CO2 is rather a marker of human bioeffluents. Examples of different understandings of the set of typical pollutants in an indoor environment are shown in Table 1 [4]. This table provides recommended values from the results of the European project HealthVent [5], which aimed to develop health-based ventilation guidelines. Table 1 also includes recommendations provided by the World Health Organization (WHO) on the acceptable levels of pollutant concentrations [6,7], as well as recommendations from other organizations, such as China's IAQ standard values [8]. Different approaches to the IAQ issue mean that the exposure limits assumed in the various source materials differ. [Table 1 footnotes: * PAH—polycyclic aromatic hydrocarbons; ** TVOC—total volatile organic compounds.] Reference [13] provides additional TVOC certification tests for new office buildings by determining (at the ppbv level) the content of carbon tetrachloride, chloroform, 1,2- and 1,4-dichlorobenzene, ethylbenzene, toluene, and o-, m-, and p-xylene.

Theoretical work on a combined IAQ model allowing aggregation of the results of the assessment of components affecting humans [14] is not yet well recognized in the literature. However, studies on IAQ indicators, which aim to provide a quantitative description of indoor air pollution, have been conducted since the nineties. In 2003, a significant study by Sekhar et al. [15] was published related to the standard indoor pollutant index (IPSI), the disease symptom index in the building symptom index (BSI), and to the often-cited works by Moschandres and Sofuoglu [16,17] on the indoor environmental index (IEI), indoor air pollution index (IAPI), and the indoor pollutant standard index (IPSI). The IAPI characterizes air pollution in an office with a single number: the index. The index value ranges between zero (lowest pollution level, i.e., best indoor air quality) and 10 (highest pollution level, i.e., worst indoor air quality). The IAPI is a composite index; sub-indices are aggregated using the arithmetic mean in conjunction with a tree-structured calculation scheme. This scheme gives rise to some reservations, because at the top of the tree-structured calculation scheme is the IEI (calculated as the arithmetic mean of the IAPI and the IDI (indoor air discomfort index)), and the combination of IAQ sensation and thermal conditions does not appear until later.
While considering the indicators for the quantitative description of pollution, the proposal of the IEA Working Group named "Defining the Metrics of IAQ" should also be mentioned. This group prepared, in 2017, the document entitled "In the Search of Indices to Evaluate the Indoor Air Quality of Low-Energy Residential Buildings" [18]. The group made the following assumption for the categorization of various indicators: there should be one index per individual pollutant and a dimensionless coefficient should be specified to evaluate the IAQ, provided that the current (observed) concentrations of a given pollutant c j are related to the ELVs (exposure limit values) concentration c j,ELV . The index is calculated for each individual pollutant [18], which is specific only for this exact pollutant. The report showed that aggregation can be performed by addition, by taking the maximum value or by other methods, in an attempt to define metrics that can be used to evaluate IAQ. The assumption was that the reference value usually refers to health risks (accounting for chronic or acute effects), but other metrics can also be used, (e.g., odor or irritation threshold). There are two important properties to be considered when aggregating sub-indices: ambiguity and eclipsing. As a result of the analysis, the authors concluded "that there are problems with model aggregation methods. In the aggregation model I agg = I 1 + I 2 , ambiguity creates a false alarm and in the aggregation model I agg = 1/2(I 1 + I 2 ), eclipsing underestimates the effect" [18]. Therefore, the discussion remains open [18]. The report also showed how there are large spreads of concentrations of individual pollutants (up to seven rows), even in the group of pollutants for which sub-indices were built. It determined the difficulties of building a weighted scheme based on the simplest percentage adjustment of the concentration shares and, thus, the share of the mass of pollutants to be removed by ventilation. The current state of knowledge does not provide information authorizing the omission of certain pollutants. Hence, taking into account the lack of data on the characteristics of each chemical compound and consideration of the "removal efficiency" [19] requires us to abandon thinking about the adjustment of many individual pollutants, and to focus only on the creation of a model based on the representative and target components. In this state of knowledge, there are hopeful studies and proposals with a grey combined IAQ index model and the grey clustering model for IAQ indicators proposed by Zhu and Li in 2017 [20] is particularly interesting, especially when the relationships between system factors and the system's IAQ behavior and the interrelationships among the factors are uncertain. At first, all specific indoor air pollutants and related parameters should be measured. However, this is a very complex and time-consuming process. On the basis of the characteristics and correlations of the pollutants, the indoor air quality can be characterized by representative indicators. Studies [20] have pointed out that respirable particulates, CO 2 and TVOCs, were the three most representative and independent environmental parameters which can be used as an evaluation index of indoor air Appl. Sci. 2019, 9,3918 5 of 35 quality in office buildings. 
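The per-pollutant index described above — the observed concentration c_j divided by its exposure limit value c_j,ELV — and the aggregation rules whose ambiguity and eclipsing problems are quoted from [18] can be sketched in a few lines. The pollutant names, concentrations, and limit values below are placeholders chosen only for illustration, not recommendations from the source.

```python
def sub_index(concentration: float, elv: float) -> float:
    """I_j = c_j / c_j,ELV: values above 1 mean the exposure limit value is exceeded."""
    return concentration / elv

# Placeholder concentrations and exposure limit values (ug/m3)
measurements = {"formaldehyde": (55.0, 100.0), "no2": (30.0, 40.0), "pm2.5": (12.0, 25.0)}
indices = {name: sub_index(c, elv) for name, (c, elv) in measurements.items()}

i_sum = sum(indices.values())                  # additive aggregation: can raise a false alarm (ambiguity)
i_mean = sum(indices.values()) / len(indices)  # averaging: can hide a single exceedance (eclipsing)
i_max = max(indices.values())                  # maximum: keeps the worst pollutant visible
print(indices, i_sum, i_mean, i_max)
```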
Since each indicator represents a class of pollutants with similar sources and dissemination characteristics, this index group avoids unreliability due to the fact that these indicators are "too small" because of critical concentration depression. A data pretreatment method must be used in the calculation procedure, reflecting the differences in concentration levels among different pollutants, but also expressing their influence on the comfort and health of the indoor occupants. Moreover, the measured pollutant concentrations can be used to predict the probable levels of other parameters, and good agreement was found between the predictions and measured values. The Research Questions The main research question contained in the paper concerned whether it was possible with the current state of knowledge to create and use in practice an IAQ model that was based on a unified and coherent approach for input indoor air parameters (such as pollutant concentrations, odor levels, and moisture content) and provided one output parameter (we proposed occupant satisfaction, IAQ index (in %)). The authors looked for physical equations for the IAQ index 's subcomponents and dependencies for their predicted occupant satisfaction functions with a pollutant concentration c j (PD = f (c j ) in %) which could be used as a model for subcomponents. This paper's intention was to provide an IAQ model with a step-by-step process which can be used to determine the value of the overall indoor environmental quality index (in %) including another three components: thermal comfort, acoustic comfort, and lighting quality. The innovative approach and added value of this article is in the use of the proposed IAQ model in practice and the relatively simple calculation of the overall IEQ value (with an uncertainty estimation) using the actual results of measurements in the Building Research Establishment Environmental Assessment Method BREEAM certified case study office building. The authors also provided occupant satisfaction functions for CO 2 , TVOCs, and formaldehyde HCHO in two variants: with experimental %PD values taken from the literature and for these pollutants' %PD values converted from an Air Quality Index system (see Section 2.2.). Research Content and Strategy The proposed IAQ model is presented in Sections 2.2-2.7. The model is later used to analyze the case study of an office building described in Section 2.9. Figure 2 presents the subsequent research steps from theory to practical application. Section 2.8 shows the method for determining indoor environmental quality IEQ index where IAQ index is a subcomponent/part of the IEQ index model. In order to determine the IAQ and IEQ, physical measurements of the indoor environment in the building were conducted using the experimental approach provided in Section 2.10. Based on these physical indoor measurements the IAQ and IEQ indexes (number of occupants satisfied with the indoor air and overall indoor quality, respectively) were assessed (see Section 3) and discussed (see Section 4). The IAQ Model Proposal-Basic Assumptions In the IAQ model construction process (our proposal), the commonly accepted approach is to transform individual concentrations of pollutants into subcomponents before they are aggregated into a single index (occupant satisfaction in %). However, summation of sub-indices can lead to situations in which all are under individual health thresholds, but the final indicator shows when the threshold has been exceeded. 
Conversely, the averaging of partial sub-indices can lead to an overall indicator showing an acceptable IAQ, even though one or more partial indicators are larger than their individual thresholds. One solution is to use the maximum value of all sub-indices to create the final form of the IAQ index. Taking these issues into consideration, the authors created the IAQ model with three complication levels adapted to the purposes of potential applications of the model, as presented in Figure 3: i. Certification of a building, e.g., via the BREEAM system (three sub-indices), called "quality"; ii. Design, including perceptible contaminants affecting comfort and using the IAQ index when calculating the IEQ (five sub-indices), called "comfort"; iii. Complex design, with the IAQ index representing both comfort and health (seven sub-indices), called "comfort/health". The simplest one, "quality", is an inner part of the IAQ comfort model and can be used separately for simple applications with the main purpose of supporting a green building certification, e.g., via the BREEAM system using three components, i.e., CO2, HCHO, and TVOC. This model is used later on the case study of a BREEAM building.

Figure 1 shows the processes influencing the morphology of the IAQ model and Figure 3 provides a list of the pollutants for which three IAQ submodels were built, containing human-perceived contaminants (IAQ quality and IAQ comfort models), but also the IAQ comfort/health model for both perceptible and imperceptible pollutants, i.e., those that are not perceptible by humans but affect health and require additional energy for intensive ventilation for health reasons. There are potential sub-indices, such as IAQ(VOC non-odorous) or IAQ(CO) [5,7]. Dust pollutants may have their sub-index either in the comfort model (if reliable curves of human sensory perception of PM concentrations are known) or in the IAQ comfort/health model if their health impact is considered to be the dominant feature. Considering the types of pollutants harmful to health assigned to the sub-indices of IAQ, we only consider the most important air pollutants (i.e., target emissions) that were given in the WHO guide in 2010 [5,7]. Submodels of processes that impact on air quality in indoor environments, which can be distinguished as components of the IAQ model, were based on sensory satisfaction functions (index of occupant dissatisfaction (PD) with the level of air pollution). Subcomponents of the three potential IAQ models were classified according to their future potential applications: in the assessment of the environmental quality index IEQ (models IAQ quality and IAQ comfort) or in the design of ventilation taking into account all possible harmful-to-health pollutants (model IAQ comfort/health). In our opinion, such systematization creates order and has a practical dimension, as presented later on in the case study. The following are the target pollutant groups: i. In the air quality model (IAQ) quality, the IAQ index subcomponents were assigned to the selected three pollutants. The submodels for the IAQ were CO2, TVOC, and formaldehyde HCHO, as recommended by References [5,10,21,22]; ii. In the (IAQ) comfort model, the previously provided simplified IAQ quality subcomponents for the three main pollutants were extended with a set of selected compounds VOC odorous, related to the collection of IAQ sub-indices (VOC odorous) with an unknown cardinality, increased appropriately for the number of dominant pollutants.
In addition, we provided a conditional deluge of two more components: (1) calculated using the enthalpy of hot and humid air (high enthalpy h > 55 kJ/kg [23]), the percentage of persons dissatisfied with respiratory cooling with humid air at relatively high temperatures and (2) the percentage of persons dissatisfied with indoor pollution with respect to dust pollution (PM 10 and PM 2.5 ), measured via panel tests. The introduction of a dust-pollution subcomponent to the IAQ model may be debatable, because some experimenters [24,25] underline the unique results of sensory tests of discomfort from dust, and the influence of "emissions" of respiratory dust particles on satisfaction is still under-researched. Considering the above, we expected two variants of the comfort model: with PD (PM 10 , PM 2.5 ) or without this factor; iii. In the overall IAQ model, comfort (IAQ) comfort and health risk (IAQ) health indicators were used, and, hence, this model was called (IAQ) comfort/health . Models for subcomponents of IAQ not perceived by humans but influencing health, can be borrowed from the index set in the AQI (air quality index) system [26 -28], which was adapted to assess the quality of indoor air based on, and in accordance with, the concepts of the air quality assessment system used globally by the American EPA. Values of AQI indices published on active EPA websites using the air quality index system were introduced for application in US federal regulations in 1999 [28]. Currently, the AQI system for outdoor air includes the following pollutants: ozone, particulate pollutants (PM 10 and PM 2.5 ), carbon monoxide CO, sulfur dioxide SO 2 , and nitrogen dioxide NO 2 . To convert a specific air pollutant concentration to an AQI, the EPA developed a tool called the AQI Calculator, which is an open resource [29]. This system (referring to the index from 2004 [16] and the indoor pollutant standard index (IPSI)) was further developed, and the proposed IAQI for indoor air presented by Wang et al. [30] in 2008 and a newer proposal [27] from 2017 for a similar but narrower set of indices, also for indoor air, were both modeled on it. The AQI and IAQI indicators showed an increase in the level of impact on human health with increasing concentrations of air pollution. There are some detected difficulties here, since "AQI is a piecewise linear function of the pollutant concentration" [27]. The calculated values of the AQI [31] or IAQI [30] indices, over the entire 0-500 scale calculated from the measured concentrations of selected contaminants or in the part of the scale corresponding to the IAQ rating, ranged from "good" to "unhealthy", and can be converted to PD% for use in the model equation, (IAQ) comfort/health . Concentrations will be significant when the uncertainties of scale conversions are estimated. Authors believe that their way of converting the AQI scale to PD% (which is similar to the method of conversion of the IEQ components' ordinal scales from the OFFICAIR EC project [32] to PD% scale; for example, the occupant percentage dissatisfied with noise [33,34] should be accepted in light of the expected results of a metrological analysis of the reliability of the combined IAQ model. Subcomponent models (physical functions, PD%) of the IAQ model for all individual air pollutants are presented later in this section. 
IAQ Model Weighting Scheme Considering Air Pollution Ventilating To obtain a comprehensive picture of IAQ in a building, it is necessary to measure the number of pollutants with different individual concentrations. There are methods that weigh sub-indices [21] but the problem is finding an effective weighting scheme and understanding how to adjust them in the overall model of all the pollutants in IAQ. For this reason, we proposed an adjustment method for the weights. In our opinion, provided in detail in References [33,34], and also according to Reference [22], the best weighting scheme, which would lead to a credibly aggregated model of IAQ composed of many extractable components (sub-indices), would be a system based on concentration values (the "excess masses" of pollutants to be regarded as loadings for the ventilation system). Therefore, the we aimed to determine the individual pollutants assigned to the IAQ model, their concentrations, c j , as the inputs of the IAQ submodels, and their "excess concentrations" originating from emissions or determined within indoor environments. Thus, it was possible to determine directly the energy requirements for ventilation purposes and the required minimum global ventilation rate. Determining the input concentration value, c j , for each IAQ sub-index enables the determination of the total mass of pollutants in the air, which is the basis for determining the air change rate N 1, . . . 7 (overall air change rate), assuming that the model includes all significant IAQ pollutants. Currently, according to References [35][36][37], the most common assumption made is that pollution from VOC j compounds arises only from emissions due to the presence of construction or finishing materials (for j = 1, 2, . . . n) (it can be assumed that the source i of an emission is the entire indoor environment and then i = 1) from the zero state. The physical model for determining the ventilation rate in indoor environments polluted with VOC-type pollutants from building materials is given by EN 16798-1:2019 [10], assuming that design parameters for indoor air quality are derived using limit values for substance concentrations. In accordance with ECA Report Number 11 [36], the design ventilation rate required to dilute an individual substance emitted from building materials is calculated as: where Q h is the ventilation rate required for dilution in m 3 per second, G h is the emission rate of the substance in micrograms per second, C h,j is the guideline value of the substance in micrograms per m 3 , C h,o is the concentration of the substance in the supply air in micrograms per m 3 , and ε v is the ventilation effectiveness. In fact, in a building with active "indoor chemistry" (see Figure 1), the use of this formula seems to be increasing. Taking into account the dynamic nature of the processes of generating various pollutants, the approach to IAQ and its components should be changed and subcomponents should be treated as pollution load processes, increasing in number not only due to the emission processes but also due to the generation of bio-pollution, water evaporation, and even dust infiltration from outside. It is also possible to set steady-state (initial) concentrations of pollutants and to determine the expected time courses of removal of these pollutants by means of ventilation (curves c j = f(τ)), at constant values of air change rate per hour ACH (h −1 ). 
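The design-ventilation relationship quoted above, Q_h = G_h / (ε_v (C_h,j − C_h,o)), and the exponential approach of a well-mixed room to its steady-state concentration can be put into a short numerical sketch. The emission rate, limit value, supply-air concentration, ventilation effectiveness, room volume, and air change rate below are illustrative assumptions only, not values from the case study.

```python
from math import exp

def dilution_airflow(emission_ug_s: float, c_limit: float, c_supply: float, eff: float) -> float:
    """Required airflow (m3/s) to dilute one substance: Q_h = G_h / (eps_v * (C_h,j - C_h,o))."""
    return emission_ug_s / (eff * (c_limit - c_supply))

def concentration(t_h: float, q: float, n_ach: float, volume: float) -> float:
    """Well-mixed room with zero initial and supply concentration: c(t) = q/(N*V) * (1 - e^(-N*t))."""
    return q / (n_ach * volume) * (1.0 - exp(-n_ach * t_h))

# Illustrative: 50 ug/s emission, 100 ug/m3 limit, 5 ug/m3 in supply air, effectiveness 0.8
print(f"Q_h = {dilution_airflow(50.0, 100.0, 5.0, 0.8):.3f} m3/s")
# Concentration ratio after 1 h and 8 h in a 100 m3 room at 1.0 air change per hour, q = 1 m3/h
print(concentration(1.0, 1.0, 1.0, 100.0), concentration(8.0, 1.0, 1.0, 100.0))
```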
Such ventilation rate calculations for CO2 were developed in 1997 by Persily [38], and similar ones were provided in 2017 by Gyot [39] at the Berkeley National Laboratory. These calculations are not very accurate, as shown in the general demonstration graph in Figure 4. Less accurate time-dependent curves of the total minimum ventilation rate ACH needed for "contaminant exhaustion" can be determined using programs [40] based on generic engineering equations for the sum of pollutants Σc_j. The generic equation for the pollution concentration (the ratio of the amount of polluting product to the amount of fluid in the space, such as air in a room) is:

c(t) = q / (N·V) · (1 − e^(−N·t))

where c is the pollution concentration in the space (or in the room) with perfect mixing (m³/m³) or (kg/kg), q is the amount of pollution added to the space (m³/h) or (kg/h), N is the air change rate per hour (h⁻¹), V is the volume or mass of the space (m³) or (kg), e is the number 2.72, and t is time (h). If the initial concentration (at t = 0) in the space and the concentration in the supply fluid are zero, after some time the concentration in the room will stabilize.
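A minimal sketch of this single-zone balance is given below, assuming perfect mixing and the variable names used above; the sampled air change rates and times are illustrative only.

```python
import math

def concentration(t, q, N, V, c0=0.0, c_supply=0.0):
    """Pollutant concentration in a perfectly mixed space after time t (h).

    q        -- pollution added to the space (m3/h or kg/h)
    N        -- air change rate (1/h)
    V        -- volume (m3) or mass (kg) of the space
    c0       -- initial concentration at t = 0
    c_supply -- concentration in the supply fluid
    """
    c_steady = c_supply + q / (N * V)          # level at which c(t) stabilizes
    return c_steady + (c0 - c_steady) * math.exp(-N * t)

# With q = 1, V = 1 and zero initial/supply concentration, c(t) approaches 1/N:
for N in (0.5, 1.0, 2.0, 4.0):
    print(N, [round(concentration(t, q=1.0, N=N, V=1.0), 3) for t in (0.5, 1, 2, 4)])
```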
The ventilation rate graph for an amount of pollution q = 1 and a volume of space V = 1 shows the values of Σc_j, similar to Figure 4. In order to obtain more exact values of the VOC concentrations remaining in the room as a function of the air change rate, it is possible to use published dependencies for the assumed volumes V of ventilated spaces with determinate concentrations, as these can be determined experimentally. A simplified method for determining the ventilation rate N from a simple formula for the time course t of a trial ventilation was provided by the Japanese researchers Noguchi et al. [41] in 2016. A description of this method is worth reading. Based on the temporal changes of the TVOC concentration measured using a PID TVOC meter [42], the air change rate N or the ventilation rate F was estimated using the following method. Assuming perfect mixing of the air in the room and a constant TVOC emission rate E, the concentration change of TVOC in the room can be expressed by a single-zone mass balance whose solution, Equation (5), contains one unknown parameter, the air exchange rate N:

C(t) = C_st + (C_0 − C_st) · e^(−N·t)   (5)

The initial concentration C_0 can be determined from the experimental results. After a long time, when the exponential term in Equation (5) can be assumed to be zero, the concentration C(t) becomes constant. The steady-state concentration C_st can be determined from the temporal change in the experimental results where the concentration levels off. The rule that the IAQ model should include a weighting scheme referring to the variation in the share of pollutants in the IAQ has been noted in References [35,37]. According to the first proposal, the weighting system is based on the differentiation of coefficients R_j, which are the ratios of the real concentration values (or masses of pollutants) to the values of reference concentrations, representing the so-called relative masses of non-eliminated pollutants. This can also be represented by the desirable reduction in the level of pollution by means of ventilation and, thus, also by the energy requirements. For one emission source, the proposed system with a coefficient R_j for a given pollutant C_j has the formula:

R_j = y_j / I_j   (6)

where R_j is the ratio of the gas-phase concentration to the reference concentration value, for example, the LCI (Lowest Concentration of Interest) value for the jth compound emitted from the building material. The factor R_j is dimensionless, since y_j is the gas-phase concentration for the jth compound in µg·m⁻³, I_j is the lowest concentration of interest (LCI) [41] for the jth compound in µg·m⁻³, and m is the number of all selected compounds. The weight coefficients W_j are used as the weights of the equations in the IAQ index, according to:

W_j = R_j / Σ_{j=1..m} R_j   (7)

The authors of Reference [35] justified adjusting the coefficients where R_j ≤ 1, but they did not explain the physical meaning of this condition. We believe that further discussion should include the issue of whether the "relative mass" of contamination expressed by Equation (6) has a proper place in the weighting scheme for the equations. The dimensionless quantity (7) does not have a sound physical meaning [37]. In our opinion, this type of calculation method is debatable. One should strive to cover all the sub-indices of the combined IAQ_ij model with a weighting scheme that would give VOCs a share in the total energy requirements for ventilation.
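Returning to the Noguchi-type estimation described above, the one-parameter form of Equation (5) can be fitted to a measured TVOC time series with a simple log-linear least-squares regression. The sketch below is an illustration under those assumptions; the helper name, the masking threshold, and the synthetic data are not part of the original method.

```python
import numpy as np

def estimate_air_change_rate(t, C, C0=None, C_st=None):
    """Estimate the air change rate N (1/h) from a TVOC time series, assuming
    C(t) = C_st + (C0 - C_st) * exp(-N t)   (Noguchi-type single-zone model).

    C0 and C_st default to the first sample and the final (levelled-off) sample.
    """
    t = np.asarray(t, dtype=float)
    C = np.asarray(C, dtype=float)
    C0 = C[0] if C0 is None else C0
    C_st = C[-1] if C_st is None else C_st
    # Linearize: ln((C - C_st)/(C0 - C_st)) = -N * t; keep points well away from C_st.
    mask = np.abs(C - C_st) > 0.02 * abs(C0 - C_st)
    y = np.log((C[mask] - C_st) / (C0 - C_st))
    # Least-squares slope through the origin gives -N.
    return -np.sum(t[mask] * y) / np.sum(t[mask] ** 2)

# Synthetic check: decay from 900 to a 150 ug/m3 steady state with N = 1.2 1/h.
t = np.linspace(0.0, 4.0, 17)
C = 150.0 + (900.0 - 150.0) * np.exp(-1.2 * t)
print(round(estimate_air_change_rate(t, C, C_st=150.0), 2))  # 1.2
```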
The term "relative mass" should correspond to weights rationally proportional to the energy expenditure for ventilation of individual pollutants (IAQ sub-indices). Therefore, our future work will focus on introducing weights for all expressions in the overall IAQ model equation. However, since this is currently not possible without an adjustment method adapted to weight determination for very small concentrations, it was decided to present our model as an interim solution. We proposed the use of weights based on the "excess concentration" values within the pollutant categories only with similar and comparable orders of concentration values, for example the VOC odorous and VOC non-odorous categories. The rules of adjustment with boundary conditions will be provided and justified in the follow-up article to this report. IAQ Model Scheme Morphology According to the new proposal, for models with air quality sub-indices IAQ(P j ) (developed with the standard EN 16798-1:2019 as a reference for IEQ model creation [22,33] and with assumptions described in Reference [34]), in the case where indoor air has many pollutants, P 1 , . . . j, the combined ΣIAQ index equation is: (8) where the W P1 , . . . Pj weighting system for IAQ components is created on the basis of the arithmetic mean and the concept of "excess concentration" is introduced only for groups of pollutants with similar concentration values. There is a difference in concentration ∆c j between the observed concentration of pollutant c j and the reference concentration c ref (c ELV or c LCI ), which is below the current concentration in contaminated rooms. Thus, the excess concentration is: The weights W 1, . . . j for all three IAQ models are determined on the basis of arithmetic means or by adjusting all the values of ∆c j in a given model using Equation (10): where the sum of the adjusted weights W j of all ventilated pollutants described with sub-indices should be unity. The weight values for a given IAQ model, (e.g., IAQ comfort ) may be different, but the sum of the sub-index weights must be ≤1.0. The values of the reference concentrations are the concentration levels that are acceptable or recommended as limit values for various pollutants P j . In the case of the IAQ quality submodel as part of the IEQ index model, weights should be used for the VOC odorous (HCHO and TVOC) reference threshold concentrations of odors. The weights in the weighting system should be adjusted to unity according to the Equation (11): There are, however, non-typical cases in which the scales have different values. This is the case for formaldehyde, the concentration of which is many times lower in the building than the threshold level c th . According to the WHO [7], the admissible value c ref is also higher than the concentration in the building. In this case, the authors recommend taking the reference value as zero. Then, the weight W HCHO described by Equation (11) (for two pollutants), would not be negative (c HCHO − c ref ). The ASHRAE Guideline 10 (2011) [3] recommends that the IEQ model (and appropriate weights, W i ) should contain synergy effects of environmental parameters included in the subcomponents and their sensory perceptions. 
Figure 5 shows the extended IAQ index model with its sub-indices treated as components of the IEQ index, but also with sub-indices of the IAQ_comfort/health type, i.e., pollutants that do not belong to the IEQ model but are important to health and to the energy balance of a building with a mechanical ventilation system. The experimental dependencies of the percentage of persons dissatisfied, %PD, on the concentrations of the pollutants c_j sensed in indoor air in the appropriate ranges are of fundamental significance for the sub-indices relevant to the IEQ model [43]. From these dependencies, expressed as the curves for PD(CO2) or PD(VOC_odorous), the equations of the models are derived, Equations (12)-(14):

ΣIAQ_quality = W_1·IAQ(CO2) + W_2·IAQ(TVOC) + W_3·IAQ(HCHO)   (12)

ΣIAQ_comfort = Σ_{j=1..5} W_j·IAQ(P_j)   (13)

ΣIAQ_comfort/health = Σ_{j=1..7} W_j·IAQ(P_j)   (14)

The scheme of the ΣIAQ_comfort/health model consists of seven (or more) components or IAQ submodels, and these are models for the various types of pollutants: IAQ(CO2), IAQ(TVOC), IAQ(HCHO), IAQ(VOC_odorous), IAQ(h), IAQ(PM2.5, PM10), and the selected IAQ(VOC_non-odorous). The IAQ(VOC_odorous) and IAQ(VOC_non-odorous) models should be multiplied, depending on the number of dominant VOC pollutants, and, hence, the ΣIAQ_comfort/health model will, in practice, have more than seven components. The inputs of each IAQ submodel are the unit concentrations in air of a given pollutant, c_j. In the case of IAQ(h), this is the moisture content x in the well-known units "g of water vapor (g_w) per kg of dry air (kg_a)", converted to a concentration c_j in µg_water/m³ (or H, the absolute humidity in g_w/m³, which is a measure of water-vapor density). In some cases, it is necessary to convert a pollution-derived parameter to a VOC concentration (the conversion of the odor intensity OI to a VOC concentration is described later in this section).
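For the IAQ(h) input conversion mentioned above, the following sketch recalculates the moisture content x into the water-vapor density H (using the gas constant for water vapor) and into the specific enthalpy of humid air h. The 55 kJ/kg criterion and the 12 g_w/kg_a limit come from the text; the psychrometric relations and the example numbers are standard assumptions of this sketch.

```python
R_W = 461.5  # specific gas constant of water vapour, J/(kg*K)

def absolute_humidity(x_g_per_kg, t_celsius, p_total=101325.0):
    """Water-vapour density H (g_w/m3) from moisture content x (g_w/kg_dry_air)."""
    x = x_g_per_kg / 1000.0                         # kg_w per kg_dry_air
    p_w = p_total * x / (0.622 + x)                 # partial pressure of water vapour (Pa)
    return p_w / (R_W * (t_celsius + 273.15)) * 1000.0   # g_w/m3

def enthalpy_humid_air(x_g_per_kg, t_celsius):
    """Specific enthalpy of humid air, h = 1.006 t + x (2501 + 1.86 t), kJ/kg_dry_air."""
    x = x_g_per_kg / 1000.0
    return 1.006 * t_celsius + x * (2501.0 + 1.86 * t_celsius)

x, t = 12.0, 25.0   # dehumidification limit (12 g_w/kg_a) at a warm indoor temperature
print(round(absolute_humidity(x, t), 1), "g_w/m3")                     # ~13.9 g_w/m3
print(round(enthalpy_humid_air(x, t), 1), "kJ/kg, above 55 kJ/kg:", enthalpy_humid_air(x, t) > 55.0)
```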
From the concentration values, the total air pollution can be calculated and, subsequently, also the energy needed to ventilate the indoor air pollution. When the concentration levels of pollutants are variable and increasing due to the presence of emissions, the formulas given in Reference [36] and the amended standard EN 16798-1:2019 are used to calculate the required ACH ventilation rate. When the level of contamination is set (or quasi-fixed) and the volume and other parameters of the ventilated room are known, it is possible to calculate ventilation-time curves, i.e., maximum ventilation curves for the ACH ventilation rate needed to reach the concentration levels ELV, LCI or the olfactory threshold level, according to Reference [40] or another adequate equation. There are two outputs of each IAQ submodel, as described below:

1. The weights of the weighting system for the model ΣIAQ_quality, and hypothetically W_1,...,5 for the model ΣIAQ_comfort or W_1,...,7 for the model ΣIAQ_comfort/health (in a hypothetical model with a set adjustment method). These should reflect the energy load of the IAQ, expressed by the theoretically assumed increase of the current concentration of pollutant c_j relative to the reference concentration c_j,ref, which determines the concentration level intended to be obtained by ventilation.

2. The PD% of the IAQ as a function of the air pollution concentration. These values, determined in panel tests, reflect the impact of the interaction of air with a given pollutant at the actual level of concentration, estimated via the panelists' sensations/perceptions (PD = f(c_j) in %).

Examples of measurable physical parameters for the purpose of the IAQ and IEQ calculations (see the case study) are given in the following section. For the construction of the combined ΣIAQ_index model with a weighting scheme useful for aggregating sub-indices, we proposed the model presented in Figure 5. In this scheme, the combined IAQ model is shown with the basic assumptions for the aggregation of all sub-indices. First, the model is cut by a cross-connected vertical connection regarding the inputs of the submodels: the calculation of the sum of the masses of all pollutants c_j in the ventilated space, using the concentration values of the contaminants as the inputs of all IAQ submodels. The sum of the concentrations of all air pollutants, expressed as mass units of pollution per m³ of volume (which can be read, after multiplication by V (m³), as the mass to be displaced by ventilation), is the basis for calculating the "air change rate per hour" (ACH), the minimum air exchange rate needed to reduce the observed mass level of air pollution in a ventilated room (see Reference [40]). The second connection concerns the submodel outputs: the conversion of the excess concentration to a dimensionless value, which allows the weighting scheme of the combined IAQ index model and the weights of the individual IAQ submodels to be determined. Additional assumptions were as follows:

i. The sum Δc_1,...,7, which admittedly constitutes an excess mass increase of the sum of pollutants described by the submodels, was treated only as a "virtual energy load of the building" for ventilation conducted for the elimination of pollutants; therefore, when adjusting the weights, the possibility of dividing the excess concentration by the sum of concentrations should be considered.

ii.
The percentage of persons dissatisfied PD(IAQ_component), determined experimentally in sensory studies using panelists' sense of air quality during their exposure to an internal environment deteriorated by a given contamination component (PD = f(c_j)), was derived from the literature or from direct experiments. Values of the weights W_1,...,j in sets of three, five or more components of the three IAQ index models (see Equations (12)-(14)) were adjusted to a value of unity by dividing Δc_1,...,j of each component by the sum of the excess concentrations ΣΔc_1,...,j in µg/m³. The proposed c_ref values, in addition to the values of c_LCI [18], c_ELV, and the threshold values c_th for odorous compounds, were as follows:

i. For IAQ(TVOC) and IAQ(VOC_odorous), the threshold concentrations c_th for identified odorous compounds or mixtures are from Reference [44];
ii. For IAQ(h), the water-vapor concentration H (g_w/m³), recalculated from the moisture content x (g_w/kg_dry_air) using the gas constant for water vapor and the actual temperature; the value of h up to the critical value for the "high enthalpy of humid air" was evaluated using the formula:

h = 1.006·t + x·(2501 + 1.86·t)

where h is the specific enthalpy of humid air (kJ/kg), which must be >55 kJ/kg. The EN 16798-1:2019 standard [10] recommends a limit for the dehumidification of air of 12 g_w/kg_dry_air (this value must be converted to a c value in g_w/m³);
iii. For IAQ(VOC_non-odorous), the c_ELV values, in cases where no established LCI values exist, were derived from the EN 16798-1:2019 standard;
iv. For IAQ(PM2.5, PM10), the c_ELV values were derived from the WHO [7] or other organizations (Tables 1 and 2).

The proposed reference values of the pollutants forming the sub-indices of the IAQ model are given in Table 2. With reference to the concentration values of c_LCI, it should be noted that, according to Reference [18], this value is typically acquired by dividing occupational exposure limits by a safety factor (100 or 1000). The concentration c_LCI is taken from the Lowest Concentration of Interest (EU-LCI) lists of the European Commission. However, the model values for the exposure limit values (ELVs) of indoor air pollution, in accordance with the recommendations of the health-based ventilation guidelines [5], should be adopted in accordance with the current WHO guidelines given in the periodically issued WHO Air Quality Guidelines [7].

Selection of Submodels for Pollutant Components

Our overall selection of physical subcomponent equations and dependences for %PD = f(c_j) is presented in Table 3. The models presented were used to determine the IEQ for the sample building. The highlighted pollutants were taken into account in the case-study building assessment. Table 3 also gives the formulae for the conversion of the odorant concentration c_j to the odor intensity OI (see Footnote 4) [34,51,53-55] and the recommended criteria for the dimensioning of humidification and dehumidification (it is recommended to limit the absolute humidity to x = 12 g_w/kg_a, or the corresponding value of H in g_w/m³, using the enthalpy formula given above). 1 TVOC, according to Reference [45], represents a narrow chromatographic picture that excludes, for example, the lower aldehydes, e.g., formaldehyde.
2 The measurement of the intensity of the odors in a building in which emissions from construction materials occur can be performed by a panel of participants, where the room is treated as a "test room for background odor" according to Section 6.8.1 of ISO 16000-28:2013, "Indoor Air-Part 28: Determination of odour emissions from building products using test chambers" (2013) [51]. The assessment at the 90% confidence level is possible through the use of a 15-pi odor intensity scale with a reading uncertainty of ±2 pi. 3 OI is the perceived odor intensity on a six-level scale from 0 to 5 (no odor = 0, slight odor = 1, moderate odor = 2, strong odor = 3, very strong odor = 4, and overpowering odor = 5). 4 Based on the study by Kim, which provides conversion equations for groups of odorants, from (1) reduced sulfur compounds to (5) volatile fatty acids. For example, for the group of reduced sulfur compounds, for compound number 1, the conversion equation for H2S is Y = 0.950·logX + 4.14; for the carbonyl compounds group, for compound number 10, ammonia NH3, it is Y = 0.670·logX + 2.38; and for the VOC group, for styrene, it is Y = 1.420·logX + 3.10. 5 If one considers a cooling system that removes heat from a space but does not remove moisture unless condensation occurs, such as radiant cooling without dehumidification in a ventilation system, the importance of humidity is very clear. Consider the sensible cooling of air in a room (no change in absolute humidity) from 25 °C and 60% RH to 20 °C (process a-b for x = 0.012 kg_w/kg_a). This moisture content, coupled with a high temperature t_a = 25 °C, is accepted as the critical limit value for dehumidification by EN 16798-1:2019 [10] and can be converted into an absolute humidity H (g_w/m³). These processes are expected to significantly increase thermal comfort and air quality. Nevertheless, the same change in the enthalpy h of the air can be achieved by simply reducing the humidity by 10% RH and keeping the temperature constant (process a-c). Since IAQ is here a function of enthalpy, these are expected to be perceived as the same. A change in humidity of 10% RH at constant temperature is thus equivalent to a change in temperature of 5 or 6 °C at constant moisture content x (kg_w/kg_dry_air). 6 The IAQI system [30] converts the concentration value c using the interpolation method for each air pollutant. In the IAQI system, the index range 0-50 is "good" with a significance level of "little or no risk"; 51-100 is "moderate", where "sensitive persons or those with respiratory symptoms are concerned"; 101-150 is "unhealthy for sensitive groups"; 151-200 is "unhealthy for all individuals"; 201-300 is "very unhealthy-more serious health effects for everyone for short-term exposure"; and 301-500 is "hazardous", with a "health warning of emergency conditions for everyone". Therefore, the comfort scale for the IAQI is adequate for index values from zero to 200, and our proposal is to use a "hypothetical" PD* scale of 0-100% in this range, converted from the IAQI. 7 The method of conversion of the IAQI scale to the PD* scale is based on experience gained during research [27]. The IAQI values on a health-risk scale can be given in % for persons giving a verbal answer of "no risk", "moderate", or "unhealthy" as their health-risk evaluation. The air quality indexes (i.e., AQI [31] and IAQI [30]) are piecewise linear functions of the pollutant concentrations. At the boundary between AQI categories, there is a discontinuous jump of one AQI unit. To convert a concentration c_j to an index I_j (on the converted scale, the index I_j will be PD_j), Equation (16) is used:

I_j = (I_high − I_low) / (c_high − c_low) · (c_p − c_low) + I_low   (16)
where I_j is the air quality index I on the PD*% scale, c_low is the pollutant concentration break point that is ≤ c_j, c_high is the pollutant concentration break point that is ≥ c_j, I_low is the index break point corresponding to c_low, I_high is the index break point corresponding to c_high, and c_p is the truncated (to an integer) actual concentration of the pollutant. Little data exist on the metrological reliability of the AQI and IAQI. Only the EPA Air Program undertaken at Cornell University [26] provides a previous review of the quality assurance requirements for the AQI.

The Representative VOCs for the Indoor Environment

The time when IAQ studies focused on a class of contaminants referred to as volatile organic compounds (VOCs) is bygone. The analytical methodology available was the primary basis for this focus, but the recent broadening of analytical methods has led to a growing realization that other compounds (i.e., SVOCs) beyond traditional VOCs are implicated in IAQ problems. The choice of VOCs remains a challenge in IAQ assessment. Moreover, VOC is a somewhat vague term, the definition of which is not universally agreed upon. It has been defined in terms of vapor pressures and boiling points, as well as molecular chain lengths detectable by chromatographic techniques. Due to the complexity of VOC emission profiles, it is tempting to simplify the analysis and reporting of emissions by grouping all detected compounds together. The first problem with this approach is that individual compounds have highly variable health and/or comfort effects, with the result that concentration alone is not predictive of IAQ effects. Levels of concern vary by orders of magnitude, so a collective concentration will not correlate with IAQ. Second, VOC detection and quantification are highly method dependent. A given sampling and analysis system cannot capture or respond to all the VOCs present in any indoor environment or in the test chamber for a given emitting material. Thus, the term "total" is misleading. An important aspect of IAQ submodel selection is the strategy defined by the US EPA as "VOCs-Total versus Target: Irritancy, Odor and Health Impact". A set of 90 representative target VOCs was presented by Canada's National Research Council Institute for Research in Construction (NRC-IRC) in collaboration with several academic and governmental partners, including Health Canada. The compounds were selected based on health impact, occurrence in indoor air, known emission from building materials, as well as suitability for detection and quantification by gas chromatography-mass spectrometry (GC-MS) or high-performance liquid chromatography (HPLC). Our list of target VOCs is representative of the indoor environment, is recommended by the HealthVent project, and is provided in Table 1.

Steps of the ΣIAQ index Calculation

After selection of the IAQ model type (ΣIAQ_quality or ΣIAQ_comfort), the IAQ index evaluation is carried out using the combined ΣIAQ model from Figure 5, which should contain the following stages. (a) Calculation of the total concentration of pollutants in the ventilated space, or the total mass of air pollutants per m³, the level of which is to be reduced by the ventilation process (taking into account the ventilated volume of the room and the emissions present).
(b) Selection of the IAQ index model shape from the models defined by Equations (12)-(14), with the provision that, due to the multiplication of the submodels for IAQ(VOC_odorous), the number of subcomponents of the ΣIAQ_comfort model will be more than five. (c) Processing of the input data of the submodels to obtain the concentration value c_j, e.g., converting a measured OI value into a concentration value for a given pollutant c_j in µg/m³. (d) Calculation of the excess concentration values for each identified contaminant (Table 2), Δc_j = c_j − c_ref. (e) Calculation of the sum of the excess concentrations (see Table 2), ΣΔc_j. (f) Calculation of the adjusted weights W_j for the selected model equations; for ΣIAQ_quality and ΣIAQ_comfort, these are determined on the basis of arithmetic means or by adjusting all the values of Δc_j in a given model using Equation (10), only for groups of pollutants with similar concentration values. (g) Calculation of the value of the ventilating air flow for the environment described in the IAQ index model, in accordance with the requirements of the standard EN 16798-1:2019 (a method using the criteria for the ventilation required for each individual substance emitted) [10]. (h) Calculation of the given IAQ environmental input parameters, including the concentrations of pollutants c_j assigned to the submodels; the PD values from their sensory equations (Table 3) are presented as the dependence of the percentage of persons dissatisfied, PD = f(c_j, ...), using one of the formulas from Table 3 to determine this function. (i) Selection of the ΣIAQ_quality model equation (with weights W_1, W_2, and W_3) or the ΣIAQ_comfort model equation (with weights W_1,...,W_5 or more) and calculation of its value with adjusted weights from Equation (13), followed by aggregation of the IAQ submodels (Equation (8)) and insertion as a term of the IEQ index in Equation (18) [34].

When selecting the ΣIAQ_comfort/health model type, an IAQ index evaluation is carried out using the combined ΣIAQ model from Figure 5, which should contain the following steps. (a) Calculation of the total concentration of pollutants c_j in the ventilated space, or the total mass of air pollutants per m³, the level of which is to be reduced by the ventilation process (taking into account the ventilated volume of the room and the emissions present). (b) Choosing the IAQ index model from among the models defined by Equations (12)-(14), with the provision that, by multiplying the IAQ(VOC_non-odorous) submodels, the number of subcomponents of the ΣIAQ_comfort/health model will be more than seven. The PD values are taken from the sensory equations (Table 3), depending on the percentage of persons dissatisfied, PD = f(c_j, ...), or are selected from the formulas for determining this function given in Table 3. (i) Development of the IAQ_PM2.5, IAQ_PM10, and IAQ_non-odorous submodels. When it is planned to use the indoor air quality index scale IAQI or a similar scale, it is necessary to convert these to PD* values in %, in two steps: (1) by reading, from the standard curves of IAQI = f(c_j), the IAQI values for the determined (measured) VOC_non-odorous concentration values; and (2) by using the converted scale calibration curve PD = f(IAQI), i.e., the recalibration curve (Equation (16)), to read the PD = f(c_j) values on the dissatisfaction rating scale from zero to 100 in %, according to Footnote 7 in Table 3. A simplified numerical sketch of these aggregation steps is given below.
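The sketch below strings the main aggregation steps together for a two-pollutant ΣIAQ_quality example: excess concentrations, adjusted weights (Equation (10)), sensory submodels PD = f(c_j), and the weighted sum of sub-indices. It assumes that each sub-index is expressed as the percentage satisfied (100 − PD); the PD curves are crude placeholders, not the published Table 3 models, and the concentrations are invented for illustration.

```python
def iaq_quality_index(concentrations, references, pd_models):
    """Combined IAQ_quality sub-index (steps (a)-(i) above, simplified).

    concentrations -- pollutant -> measured c_j (ug/m3, comparable magnitudes)
    references     -- pollutant -> c_ref used for the 'excess concentration'
    pd_models      -- pollutant -> callable PD(c_j) in % (from Table 3 or panel tests)
    Returns (IAQ in % satisfied, adjusted weights).
    """
    excess = {p: max(concentrations[p] - references[p], 0.0) for p in concentrations}
    total = sum(excess.values()) or 1.0
    weights = {p: dc / total for p, dc in excess.items()}              # Eq. (10)
    iaq = sum(weights[p] * (100.0 - pd_models[p](concentrations[p]))   # weighted sub-indices
              for p in concentrations)
    return iaq, weights

# Placeholder PD curves -- illustrative shapes only, NOT the published Table 3 models.
pd_stub = {
    "TVOC": lambda c: min(100.0, 0.05 * c),
    "HCHO": lambda c: min(100.0, 0.5 * c),
}
iaq, w = iaq_quality_index({"TVOC": 800.0, "HCHO": 20.0},
                           {"TVOC": 300.0, "HCHO": 0.0}, pd_stub)
print(round(iaq, 1), {k: round(v, 3) for k, v in w.items()})  # ~61.2 % satisfied
```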
The IEQ Assessment Equation with IAQ as a Subcomponent

The proposed ΣIAQ model can be a substantial component of the IEQ model, for example, in the case study shown later in the article. The indoor environmental quality index refers to the quality of a building's environment with respect to the occupants' satisfaction in %. The morphology of the IEQ index model used to assess buildings, to determine the IAQ index as an IEQ component, and to determine the other subcomponents (TC_index, thermal comfort; ACc_index, acoustic comfort; and L_index, lighting quality) based on measurements of physical properties in each of the submodels, in accordance with the scheme of the Piasecki-Kostyrko model, is presented in Figure 6 [22]. Figure 6 shows the research steps necessary to determine the IEQ index for buildings, including the physical and design parameters of buildings and the subcomponent models. The EN 16798-1:2019 standard is the reference for IEQ model creation [22,33]. The standard allows complex indoor information to be presented as one overall indicator of the indoor environmental quality of the building, the IEQ index. The model reliability, including the uncertainties of the measurements and data for this model, was discussed in Reference [34], where the authors also presented the internal incongruity in the IEQ model structure and the justification for using the crude weights method for each subcomponent. Originally, the IEQ model was expressed by Wong [43] as a polynomial equation consisting of four terms. The IEQ index is composed of the following subcomponents (SI_i): thermal comfort (TC_index), indoor air quality (IAQ_index), acoustics (ACc_index), and lighting quality (L_index). Weighting these subcomponents with their weights W_i leads to Equation (17):

IEQ_index = Σ_i W_i · SI_i   (17)

The authors adopted the crude weighting system, where all elements are weighted in the same way (0.25 for W_1-W_4), as shown in Equation (18):

IEQ_index = 0.25·TC_index + 0.25·IAQ_index + 0.25·ACc_index + 0.25·L_index   (18)

As a consequence of the equation, the subcomponents SI_i (the predicted percentage of those satisfied) can be calculated using Equation (19):

SI_i = 100% − PD(SI_i)   (19)

where PD is the predicted percentage dissatisfied (PPD) and PD(SI_i) is the percentage of persons dissatisfied with the IEQ subcomponent (SI_i) level. The authors' simulations for the IEQ index sub-indices and a preliminary metrological analysis of the overall IEQ model fitting were performed with Monte Carlo tests. It is easy to show that the standard deviations of these values are equal:

SD(SI_i) = SD(PD(SI_i))   (20)
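As a quick illustration of Equations (17)-(19) with the crude 0.25 weights, the following sketch computes the IEQ index from four subcomponent dissatisfaction percentages; the input PD values are arbitrary examples, not measured results.

```python
def ieq_index(pd_tc, pd_iaq, pd_acc, pd_l, weights=(0.25, 0.25, 0.25, 0.25)):
    """IEQ_index (% satisfied) from subcomponent dissatisfaction percentages.

    SI_i = 100 - PD(SI_i)            (Eq. (19))
    IEQ  = sum_i W_i * SI_i          (Eqs. (17)-(18), crude weights W_i = 0.25)
    """
    sub_indices = [100.0 - pd for pd in (pd_tc, pd_iaq, pd_acc, pd_l)]
    return sum(w * si for w, si in zip(weights, sub_indices))

# Example dissatisfaction values (%): thermal, air quality, acoustic, lighting.
print(ieq_index(12.0, 25.0, 15.0, 10.0))  # 84.5 % satisfied
```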
A Case Study of a Building

The experimental part of this study was performed simultaneously with the BREEAM certification process and included the determination of the three primary IAQ pollutants: formaldehyde, CO2, and VOCs in the indoor air [22]. The building is a high tower with a convex concrete-steel structure and a glass facade. The basic information on the assessed building is presented in Table 4. At the time of the test, the building was a standard empty office without furniture (the so-called pre-occupancy stage). The walls were plastered and painted, the suspended ceilings were in place, and the floors were finished with synthetic carpets. All building installations were active, including the mechanical ventilation controlled by the Building Management System (BMS) with zonal CO2 concentration sensors. The building was tested a few days after the formal end of the finishing works. The tests were made on the 55th and 47th floors. Measurement points in the building were determined based on an analysis of the frequencies of the designed occupancies of the rooms and the interior finish standards (open spaces). The sampling plan was prepared with the BREEAM assessors conducting the certification process of the facility. The main focus was on the IAQ index of open spaces in which the largest number of people may reside, as these represent the largest occupied usable floor space. According to the detailed design project documents, the building emphasizes the use of materials with known and low emission levels (BREEAM certified).
The Equipment, Measurements, and Experimental Approach

Standardized CEN and ISO analytical methods were used to determine the VOC, CO2, and formaldehyde concentrations in the indoor air of the building. The selection of the sampling points was made with the BREEAM assessor in two representative office zones per tested floor, on a minimum of two floors. The building was tested three days after the formal end of the finishing works, at the pre-occupancy stage with no users inside. For this office building, the tests were conducted on the 55th and 47th floors. Air samples were collected using an active sampling procedure with an electronic mass flow controller, which controlled the air flow (10 dm³/h for the VOC tests and up to 30 dm³/h for the formaldehyde tests). Indoor sampling points were set up in selected representative office locations, approximately 1.5 m above the floor, away from windows, doors, potential emission sources, and direct sunlight. Air samples were tested in accordance with the ISO 16000-6:2011 and ISO 16000-3:2011 standards.
The VOCs were assessed using tubes filled with Tenax adsorbent. They were then thermally desorbed using a thermal desorption apparatus (TD-20, Shimadzu, Tokyo, Japan). The separation and analysis of the volatile compounds was achieved using a gas chromatograph equipped with a mass spectrometer (GC/MS) (model GCMS-QP2010, Shimadzu, Tokyo, Japan). The following GC oven temperature program was applied: initial temperature of 40 °C for five min, 10 °C per min up to 260 °C, and a final temperature of 260 °C for 1 min. The 1:10 split-ratio injection mode was applied. The method used has a limit of quantification of 2 µg/m³. The volatile compounds were identified by comparing the retention times of the chromatographic peaks with the retention times of reference compounds and by searching the NIST (National Institute of Standards and Technology, Gaithersburg, MD, USA) mass spectral database. Identified compounds were quantified using a relative identification factor obtained from standard-solution calibration curves. TVOC was calculated by summing the identified and unidentified compounds eluting between n-hexane and n-hexadecane. In order to determine volatile aldehydes, air samples were taken via cassettes using a solid absorbent, silica gel coated with 2,4-dinitrophenylhydrazine (2,4-DNPH), and then subjected to laboratory analysis using high-performance liquid chromatography (HPLC) with UV-Vis detection (Dionex 170S, Dionex, Sunnyvale, CA, USA) and an isocratic pump (Dionex P580A, Dionex, Sunnyvale, CA, USA). The described method has a limit of quantification of 2 µg/m³. The other IEQ index components were tested as follows. The acoustic tests confirming the designed values were carried out by measuring the equivalent sound levels, LAeq, in the selected locations. The measurements were carried out during the daytime (starting at 11:00).
The following equipment was used for the measurements: a Brüel&Kjaer 4231 acoustic calibrator (Brüel&Kjaer, Naerum, Denmark), a Nor-121 analyzer (Norsonic, Tranby, Norway), Brüel&Kjaer 4165 measuring microphones (Brüel&Kjaer, Naerum, Denmark), and a Norsonic-140 analyzer with microphone (Norsonic, Tranby, Norway). Before the tests were carried out, the calibration of the measuring path was conducted in accordance with the instructions to "check the acoustic measurement channel". The test results were evaluated in relation to the requirements concerning permissible A-weighted sound levels in rooms intended for human occupancy. The thermal environmental measurements were made using the HD32.1 multifunctional microclimate instrument, and the tests were in accordance with ISO 7726 and ISO 7730. VOCs were tested simultaneously at all points. Visual comfort (Hea 01) was confirmed by using a MAVOLUX 5032C instrument (USB version) with a 3C15683 detector (Gossen, Nürnberg, Germany), in accordance with EN 12464 provisions.

Additional Explanations

The adaptation of the IAQ model to a practical case study was mainly for illustrative purposes in the context of the presented IAQ calculation/aggregation method. We did not focus deeply on discussing the technical or environmental issues of the presented building. The other IEQ subcomponents, such as thermal, acoustic, and visual satisfaction (in %), used to determine the IEQ index, were experimentally determined and partly presented in References [22,33]. The authors do not focus on these results in this article, as they have already been discussed in other papers [22].

Results for the ΣIAQ index and IEQ index Prediction

A previous publication of ours [22] reported on IEQ and IAQ building assessments for a larger number of BREEAM buildings, where the IEQ was assessed without calculating the combined ΣIAQ index. The combined model of the ΣIAQ index presented in this paper had not yet been developed at that time, and we were limited in determining the IEQ index; thus, we only took into account two of the most well-known pollutants (i.e., CO2 and TVOC) separately. The assessment of the IEQ index was made by adopting the measured parameters (complying with the draft EN 16798-1:2019 standard for indoor environments) as the input values for the submodels of the IEQ index. The input values for the case study are presented in Table 5, which provides the input data for determining the IEQ index sub-indices of thermal comfort (TC_index), indoor air quality (IAQ_index), acoustics (ACc_index), and lighting quality (L_index) for an office building (47th floor) three days after completion of the finishing work, before users were allowed in the building (i.e., the pre-occupancy stage). Table 5. Physical parameters 1 and IEQ index results calculated using Equation (18), separately for an IAQ index with internal CO2 air pollution and an IAQ index with internal TVOC air pollution, assuming a realistic uncertainty of parameter measurement, for the case-study building (47th floor; open space) three days after the completion of the finishing works.
Second variant, with c_TVOC as the IAQ index parameter: IEQ_TVOC = 80.1% ± 10.7%. 1 The IEQ and its measurement uncertainty (with subcomponent standard deviation values) were calculated for the IEQ physical parameter values, where t_a is the air temperature (°C), t_r is the mean radiant temperature (°C), v_a is the relative air velocity (m/s), p_a is the water-vapor partial pressure (Pa), M is the metabolic rate (met), and I_cl is the clothing insulation (clo). In addition, c_CO2 is the CO2 concentration in ppm, c_TVOC is the highest observed TVOC concentration in µg/m³, the actual noise is in dB(A), and E_min is the minimum daylight illuminance (lux).

Results for the ΣIAQ index and IEQ index Assessment Including the Identified Pollutants (CO2, TVOC, and HCHO)

An example of the modified calculation of the collective submodel ΣIAQ_quality for the three basic pollutants, as a component of the IEQ model used to determine one design value for this indicator, is provided in two variants. The first variant uses the IAQ sub-indices for two pollutants, CO2 and TVOC, which are described in Table 5, as well as the sub-index for the third pollutant, HCHO (according to Reference [47]); the differences in these approaches mean that they must be combined into one ΣIAQ submodel in order to be used in the IEQ calculation. The second variant uses the IAQ submodels for the TVOC and HCHO pollutants based on the IAQI system [30] and then converts them into percentages of persons dissatisfied, PD*, in %. According to the diagram of the ΣIAQ model from Figure 5 and using Equation (12) for the ΣIAQ_quality model, the submodel weights are calculated as follows. W_CO2 for the submodel IAQ(CO2) = 0.5 is a component of the polynomial:

ΣIAQ_quality = W_CO2·IAQ(CO2) + W_VOC·IAQ(VOC)   (21)

W_VOC for the submodel IAQ(VOC) = 0.5 is the weight for the combined submodel of the polynomial:

IAQ(VOC) = W_TVOC·IAQ(TVOC) + W_HCHO·IAQ(HCHO)   (22)

with the terms W_TVOC and W_HCHO calculated from Equation (14) using the measured values c_j (the actual concentrations of TVOC and HCHO) and the reference values c_ref (Table 6). In our case study, the value of the IAQ(HCHO) submodel weight ought also to be calculated from the measured value and the reference value c_ref. However, formaldehyde is an unusual pollutant because, although it belongs to the VOC odorous compounds, the concentrations found in buildings are many times lower than the HCHO threshold c_th = 300 µg/m³ according to the WHO [7], and lower than the threshold concentrations of HCHO from 60 µg/m³ to 70 µg/m³ issued in 2013 by the American Industrial Hygiene Association [44]. The permissible value c_ref = 100 µg/m³ is also higher than the formaldehyde concentration found in buildings, according to Reference [5] and the standard EN 16798-1:2019 [10]. Therefore, the authors propose that in such a case (to avoid a negative value of Δc_j), the reference value used in the model should be taken as zero; the adjusted W_HCHO weight then takes the form of Equation (11) extended to air with three pollutants, with c_ref,HCHO = 0 (i.e., Δc_HCHO = c_HCHO). The results of the weights assessment for the two variants of the ΣIAQ_quality model are presented in Table 6. According to the diagram of the ΣIAQ model from Figure 5 and Equation (8), we proposed sensory equations for the percentage of persons dissatisfied, %PD*, in two variants. The first variant of the ΣIAQ submodel includes the following: 1. The IAQ submodels used so far in References [22,33] for the CO2 and TVOC pollutants, as shown in Table 5; 2.
The IAQ submodel for formaldehyde, using two types of equations depending on the range of HCHO concentrations measured in the building. Formaldehyde concentrations in the air with values above the threshold concentration c_th for its odor, i.e., above 60 or even 300 µg/m³, can be used to create IAQ submodels for rooms with volatile and aromatic VOC compounds, as well as for the HCHO equation [50]:

PD_HCHO = exp(2.14·OI − 3.81) / [exp(2.14·OI − 3.81) + 1]   (24)

However, in the case-study building, the maximum concentration of HCHO was 18 µg/m³ and, therefore, its concentration in the air was several times lower than the odor threshold concentration c_th [44]. The intensity of the formaldehyde odor was undetectable under these conditions, and the sensory equation PD = f(OI), which is appropriate for the sensory detection of IEQ, is not applicable for odors below the threshold. Therefore, for small concentrations, we proposed the use of the equation taken from the work of Zhu and Li [47], based on the analysis of "health effects on the human body" derived from "indoor air quality comfort evaluation experiments and the literature":

PMV_HCHO = 2·log(c_HCHO / 0.01)   (25)

This equation links the value of the new unit, "the effect of formaldehyde on human comfort", called PMV_HCHO, with its concentration c_HCHO (µg/m³) in the air. It covers the range from 10 µg/m³ to 320 µg/m³ and, as declared by the authors, this value has the same nature as the PMV of thermal comfort, which can be converted into a PD% unit according to the formula in Reference [48], experimentally confirmed for nearly zero-energy buildings (NZEBs) by Reference [58]. The second variant of the ΣIAQ submodel includes the following. i. The IAQ(CO2) submodel used so far for CO2 pollution, as shown in Table 5. ii. The IAQ submodels for the TVOC and HCHO types of pollution, used as indoor air quality index ratio values borrowed from the IAQI system [30], which are then converted into percentages of persons dissatisfied (PD* in %) in the following way: (a) The reference curves of IAQI = f(c_j) [30] for the two dependencies of the IAQI index on the TVOC and HCHO contamination values must be reconstructed. On the y-axis are the IAQI index values from zero to 200 in the range from "no risk" to "unhealthy", and on the x-axis the c_j values are presented. (b) In accordance with the measured values of c_TVOC and c_HCHO, the values IAQI_TVOC and IAQI_HCHO are determined from the functions IAQI_TVOC = f(c) and IAQI_HCHO = f(c). (c) Based on the IAQI system parameters [30] given in Footnote 6 of Table 3, which provide the data for the index functions, the IAQI_TVOC and IAQI_HCHO values appropriate for the break points in the perceived pollution concentration values are calculated in a range from zero (good) to 200 (unhealthy) using the ordinal scale IAQI = f(c) [27]. This function was converted to the IAQI_TVOC and IAQI_HCHO scales using the concentration function scales PD*(TVOC) and PD*(HCHO) in the PD* range from 0 to 100% (Figure 8). (d) The data used for the calculation and conversion of the IAQI and PD* scales are presented in Table 7. (e) Based on the data determined for the new converted scales (Table 7), the PD*(TVOC) and PD*(HCHO) values are read off for the measured concentrations (a numerical sketch of this conversion is given below).
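Steps (a)-(e) above amount to the piecewise-linear conversion of Equation (16) applied on the converted PD* scale. A minimal sketch follows; the break points are placeholders standing in for the Table 7 values, and the example concentration is arbitrary.

```python
def concentration_to_index(c, breakpoints):
    """Piecewise-linear AQI-type conversion (Equation (16)).

    breakpoints -- list of (c_low, c_high, I_low, I_high) tuples covering
                   consecutive concentration ranges; I may be on the IAQI
                   scale or on the converted PD* (0-100 %) scale.
    """
    for c_low, c_high, i_low, i_high in breakpoints:
        if c_low <= c <= c_high:
            return (i_high - i_low) / (c_high - c_low) * (c - c_low) + i_low
    raise ValueError("Concentration outside the tabulated break points.")

# Placeholder break points on the PD* scale (NOT the Table 7 values):
tvoc_pd_breakpoints = [
    (0.0,    300.0,   0.0,  25.0),   # "no risk"
    (300.0,  600.0,  25.0,  50.0),   # "moderate"
    (600.0, 1000.0,  50.0,  75.0),   # "unhealthy for sensitive groups"
    (1000.0, 3000.0, 75.0, 100.0),   # "unhealthy"
]
print(round(concentration_to_index(450.0, tvoc_pd_breakpoints), 1), "% dissatisfied (PD*)")  # 37.5
```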
In the context of the results for the overall IEQ model index when treating both of the main pollutants, CO2 and TVOC, separately, we present in Table 8a the transformed IEQ calculations with the ΣIAQ sub-indices. The results for the individual case-study building IAQ subcomponents are taken from Table 5 (for c_TVOC = 787 µg/m³ and c_HCHO = 18 µg/m³ [22]). The IEQ index with the ΣIAQ_quality values was calculated using two variants: the first conventional, and the second borrowed from the IAQI scale [30] (Table 8a). The concentrations c and SD(c) are in µg/m³; PD*_H, PD*_L, and SD(PD*) are in %. A standard deviation of the HCHO concentration of 12% was adopted on the basis of reports from the IAQ research conducted as part of BREEAM in 2016 [22]. The assumptions were that c_H and PD*_H are the coordinates of the upper break point (i.e., the high break point) of the converted scale PD* = f(c), and that c_L and PD*_L are the coordinates of the lower break point (i.e., the low break point) of the converted scale PD* = f(c). Standard deviations were assumed for c_meas, c_TVOC, and c_HCHO, as well as for PD* = ±12%, as this is half of the transformed segment of the scale, which according to Table 7 covers a range of 25% of PD* for one perceived category of air quality, e.g., "no risk" or "moderate". Therefore, the maximum standard deviation was 12.5% and, according to the literature on AQI and IAQI values, it should be rounded to a whole number. 2) SD_vote(PD(SI_i)) from the ±u_overall(IEQ) equation was the standard deviation of the probability distribution of each
(SI_i) vote and was calculated primarily using the PD(SI_i) equation calibration curve [34].

Discussion of the IAQ_index Theoretical Model

For years, the authors, as accredited laboratory personnel, have conducted IAQ pollution tests in indoor environments for various applications. Based on our experience, it was concluded that the general approach to assessing combined IAQ has not yet been systematized and that there is a global tendency to assess individual IAQ parameters separately or to group them without a justified aggregation method. This is not a good situation from the point of view of building users' needs and, in our opinion, may lead to incorrect IAQ interpretations in specific building situations. In the context of analyzing this problem, the authors presented a summary of the state-of-the-art methods and also provided a new approach for solving some of these problems. As presented, it is possible to create an IAQ_index aggregating the results of indoor air analyses, taking into account various representative pollutants. Three levels of comprehensive air quality assessment (with three, five, or seven subcomponents), depending on the application of the assessment, were proposed, together with step-by-step procedures. This may be practical, as shown in the evaluation of the case study building. We originally selected the main IAQ subcomponent equations and user satisfaction dependences, %PD = f(c_j), and provided them all in one place (Table 3). We then proposed and justified the weighting schemes for the IAQ_total equation. In most of the studies in the literature, the weighting schemes used for IEQ or IAQ assessments are not physically justified or explained. There are known methods of weighting sub-indices, but the problem solved in this paper was an effective system for weight adjustment. For the construction of the combined model IAQ_index, with a weighting scheme useful for aggregating sub-indices, we proposed the model scheme presented in Figure 5. According to the results, the advantage of the complex model IAQ_index, in which the input quantities always constitute concentrations of given pollutants, is the ability to use these concentrations to calculate excess pollution concentrations from Equation (10) and to generate weighting schemes W_1,...,n for all three models by adjusting the weights, based on the concentration values of excess air pollutants, to a value ≤1.0 for each IAQ_index model. The Δc_j values determine the masses of pollutants that must be removed by ventilation to eliminate the target pollutant effect. They can be determined as differences between the current concentrations of pollutants and the concentrations of pollutants at the reference or standard level (e.g., c_ELV or c_LCI) and, in the case of odorous VOCs, the odor threshold c_th. The presented approach may allow planning of air quality for the building. As discussed, it is important to identify those VOCs with comfort and health impacts and to focus on the choice of the IAQ sub-model, briefly defined as the strategy "VOCs-Total vs. Target: Comfort, Irritancy, Odor, and Health Impact". The model from Figure 5 has uniform inputs, i.e., the concentration levels c_j, and two outputs: (1) the weighted (adjusted) weighting values and (2) the sensory equations, PD* = f(c_TVOC), constituting the IAQ submodel equations. These second outputs of the submodels (the PD* values) are coefficients of satisfaction with the comfort sensation or of the lack of "health risk".
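To make the excess-concentration weighting idea above more concrete, the following minimal sketch computes Δc_j values from measured and reference concentrations and turns them into weights. The normalization used here (weights proportional to the excesses and summing to 1, hence each ≤ 1.0) is an assumption for illustration; the paper's Equation (10) and weight-adjustment rule may differ, and the reference levels in the example are placeholders, not values from the standards cited in this work.

```python
from typing import Dict

def excess_concentrations(c_meas: Dict[str, float], c_ref: Dict[str, float]) -> Dict[str, float]:
    """Delta_c_j: excess of each measured concentration over its reference level
    (e.g., c_ELV, c_LCI, or the odor threshold c_th); negative excesses are clipped to zero."""
    return {k: max(c_meas[k] - c_ref[k], 0.0) for k in c_meas}

def weights_from_excess(delta_c: Dict[str, float]) -> Dict[str, float]:
    """Illustrative weighting: weights proportional to excess concentrations, normalized so each
    weight is <= 1.0 and the weights sum to 1.0; falls back to equal weights if nothing is exceeded."""
    total = sum(delta_c.values())
    if total == 0.0:
        n = len(delta_c)
        return {k: 1.0 / n for k in delta_c}
    return {k: v / total for k, v in delta_c.items()}

# Placeholder inputs; units must be consistent per pollutant (ppm for CO2, ug/m3 for TVOC and HCHO).
c_meas = {"CO2": 1000.0, "TVOC": 787.0, "HCHO": 18.0}
c_ref = {"CO2": 800.0, "TVOC": 300.0, "HCHO": 100.0}
print(weights_from_excess(excess_concentrations(c_meas, c_ref)))
```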
These PD* values are the terms of the equation describing "combined IAQ", which meets the requirements of the abovementioned strategy of selecting IAQ sub-models related to the IAQ components that have the most impact on the resulting IEQ perception. Models for subcomponents of IAQ not perceived by humans but influencing health are recommended to be taken from the index set in the AQI system [26-28], which was adapted by the authors to assess the quality of indoor air based on, and in accordance with, the concepts of the air quality assessment system used globally by the American EPA. In the context of the TVOC concentration subcomponent in Figure 7, the authors provided the relationships PD* = f(c_TVOC) based on Jokl's research [49] and resulting from the IAQI scale [30] as converted by the authors. The relationship between PD* and TVOC concentration in both approaches is strongly correlated, as shown in Figure 9.

Figure 9. PD_TVOC based on conversion from the IAQI to the PD* scale and a study of the Weber-Fechner theory.

The curves obtained from the conversion confirm Jokl's predictions provided in [49] and the previously accepted idea. However, the final confirmation of these curves will be done experimentally in panel tests, as planned by the authors for the near future.

Discussion of Results for the Case Study on a Building

The experimental study was performed in a BREEAM-certified building and included the determination of formaldehyde, CO2, and VOC concentrations in the indoor air. The example calculation of the combined ∑IAQ model for the three basic pollutants, as components of the IEQ_index model, is presented in two variants, but the calculated PD*_TVOC values obtained with both calculation methods were very similar. The first variant of the ∑IAQ calculation used %PD = f(c_j) curves as % sub-indices of IAQ for the three pollutants, and the differences in approach between Tables 5 and 8 meant combining them into one IEQ submodel: the ∑IAQ model intended for the IEQ calculation. The second variant used submodels of IAQ for the TVOC and HCHO pollutants based on the IAQI system [30], which were then converted into percentages of persons dissatisfied (PD* in %). The first conclusion is that the CO2 concentration cannot be used separately for the IAQ_index assessment, especially at the pre-occupancy stage (Table 5). The building was polluted with VOC and HCHO emissions from the construction products directly after the finishing works were completed. The authors confirmed that all three pollutants should be a simultaneously integrated part of the IAQ model, because the importance of TVOC is much greater, representing the main source of pollution: the construction and finishing materials. According to the results, we recognized two variants of the combined ∑IAQ_index calculation. For the first variant [22], the combined ∑IAQ index of satisfied users was 69.1% and, for the second variant (the new approach with the converted AQI index), the ∑IAQ index was 70.0% satisfied. The results of the IEQ_index(1) (for Variant 1) were within the interval of combined overall uncertainty, 16.24%.
The TVOC tests carried out immediately after the finishing works gave results that significantly exceeded the BREEAM limit for TVOC of 300 µg/m³ (about twice as high). It should be expected that an acceptable level will be reached a minimum of one month after the completion of the work. To check the correctness of the obtained calculations, the authors are conducting a model credibility analysis that will be provided in the next article, Indoor Air Quality Model Part II: The Combined Model IAQ_index Reliability Analysis. The model uncertainty estimate may be compromised because the model reproduces the discomfort level associated with the dominant component.
Combined Metabolomics and Genome-Wide Transcriptomics Analyses Show Multiple HIF1α-Induced Changes in Lipid Metabolism in Early Stage Clear Cell Renal Cell Carcinoma The accumulation of lipids is a hallmark of human clear cell renal cell carcinoma (ccRCC). Advanced ccRCC tumors frequently show increased lipid biosynthesis, but the regulation of lipid metabolism in early stage ccRCC tumors has not been studied. Here, we performed combined transcriptomics and metabolomics on a previously characterized transgenic mouse model (TRAnsgenic Cancer of the Kidney, TRACK) of early stage ccRCC. We found that in TRACK kidneys, HIF1α activation increases transcripts of lipid receptors (Cd36, ACVRL1), lipid storage genes (Hilpda and Fabp7), and intracellular levels of essential fatty acids, including linoleic acid and linolenic acid. Feeding the TRACK mice a high-fat diet enhances lipid accumulation in the kidneys. These results show that HIF1α increases the uptake and storage of dietary lipids in this early stage ccRCC model. By then analyzing early stage human ccRCC specimens, we found similar increases in CD36 transcripts and increases in linoleic and linolenic acid relative to normal kidney samples. CD36 mRNA levels decreased, while FASN transcript levels increased with increasing ccRCC tumor stage. These results suggest that an increase in the lipid biosynthesis pathway in advanced ccRCC tumors may compensate for a decreased capacity of these advanced ccRCCs to scavenge extracellular lipids. Introduction Clear cell renal cell carcinoma (ccRCC) is the most common type of kidney cancer, accounting for over 70% of all primary renal tumors [1]. Clear cell morphology results from the presence of intracellular lipid droplets [2,3]. These lipid droplets are produced in the endoplasmic reticulum (ER), serve as a bioenergetic fuel and for the generation of cell membranes, and may play a role in protecting against oxidative and ER stress [2,3]. The mechanisms involved in lipid droplet accumulation in ccRCC are reported to be increased de novo lipogenesis through reductive glutamine carboxylation, in combination with inhibition of lipid degradation [4-6]. Molecular profiling studies of human primary tumor specimens have revealed the loss of von Hippel Lindau (VHL) as the only consistent clonal event during ccRCC initiation [7,8]. In early stage ccRCC, the expression levels of HIF1α are markedly enhanced by loss or inactivation of the VHL tumor suppressor gene [9,10]. Additional genomic events, such as loss of PBRM1, promote the malignant transformation of kidney lesions through activation of HIF1α transcriptional activity [11,12]. To model early stage ccRCC in mice, we previously generated the TRACK (TRAnsgenic Cancer of the Kidney) transgenic mouse model with expression of a mutant, constitutively active HIF1α specifically in the proximal tubules of the kidneys [13]. These mutations in the oxygen-dependent degradation domain of the HIF1α found in TRACK kidneys preclude recognition by VHL, interfere with the proteasomal degradation of this HIF1α, and promote its transcriptional activity in the kidneys. Our subsequent histologic, transcriptomics, and metabolomics analyses showed that this TRACK model shows major similarities to early human ccRCC, including the formation of lipid-filled "clear cells" and a metabolic switch to aerobic glycolysis [13-15].
Here, we analyzed the role of HIF1a in the regulation of lipid uptake and metabolism in early stage ccRCC in TRACK mice, and we compared these results in TRACK mice with human ccRCC patient data. Our results suggest that activation of HIF1a signaling promotes dietary lipid uptake in early stage ccRCC to a greater degree than in late stage ccRCC. Transgenic Mouse Experiments Wild type (WT) C57BL/6 male mice and transgenic lines in the C57BL/6 background carrying constitutively active mutants of HIF1a (P402A, P564A, N803A, g-HIF1aM3) driven by a truncated g-glutamyl transpeptidase promoter, as previously characterized, were used for this study [13]. The mice were housed at the Research Animal Resource Center of Weill Cornell Medical College (WCMC). The care and use of these animals was approved by the Institutional Animal Care and Use Committee of WCMC. All mice were fed a regular chow diet after weaning for 1e2 weeks, after which they were randomly assigned to receive a regular chow (#5053, Lab Diet) or high-fat diet (HFD) (#58v8, Test Diet). HFD contains 23.6% fat by weight (versus 5% in the regular diet), is rich in saturated fatty acids (FAs) (9.05% versus 0.78%) and monounsaturated FAs (9.32% versus 0.96%), and has increased amounts of polyunsaturated FAs, including linolenic and linoleic acid. A total of 43 male mice were treated with HFD (g-HIF1aM3 [TRACK] n ¼ 23, WT n ¼ 20), and 24 mice were used as reference (TRACK n ¼ 16, WT n ¼ 8). We sacrificed mice after 2, 4, 6, 10, 12, and 15 months to study the phenotype. In total, 5 groups of mice, 12e18 months old, were fed a regular diet and used for metabolomics (n ¼ 14 WT, n ¼ 12 TRACK) or transcriptomics (n ¼ 6 WT, n ¼ 3 TRACK). Immunohistochemistry was conducted on the sectioned paraffin-and OCT-embedded kidneys. We used an antibody to CA9 (CA-IX) to show that the TRACK mice were positive for this HIF1a target and ccRCC marker [16] (Suppl. Figure 1). We stained sections with Oil Red O (ORO) (Rowley 1320-06-5). Representative images of each group were selected by a pathologist blinded to the group labels. Metabolomics Analysis We harvested the kidneys from six TRACK and five WT mice. Metabolite profiling was performed as previously described [14,17]. Briefly, tissue samples were washed in cold PBS, followed by three cycles of bead beating in 80%À70 C methanol:water using a tissue lyser cell disrupter. The metabolites in the extraction mixture were separated from proteins by centrifugation. The supernatants were pooled, dried in a speed vac, and stored at À80 C. The metabolites were solubilized in 0.2 M NaOH and subsequently measured by LC/MS and LC-MS/MS. Untargeted metabolite profiling was performed using aqueous normal phase (ANP) and reverse phase (RP) chromatographic separations, followed by dual spray electrospray ionization and high-resolution accurate mass determination using a time-of-flight (TOF) mass spectrometer (Agilent model 6230). The LC system comprised a Cogent Diamond Hydride™ (ANP) column (MicroSolv Technology Corporation, Eatontown, NJ), a Zorbax SB-AQ (RP) column (Agilent Technologies, Santa Clara, CA), and a Model 1200 Rapid Resolution LC system. An Agilent 6538 UHD Accurate-Mass Q-TOF with the same ANP and RP platform was used to conduct fragmentation analysis for confident molecular identification. Metabolites were normalized to protein as measured with the Bio-Rad DC protein assays. Metabolomics Data Processing was performed using MassHunter Qualitative Analysis software. 
Statistical analysis was done in Mass Profiler Professional (Agilent Technology, MPP, version B2.02). Aligned molecular features detected in all biological replicates of at least one group were directly applied for statistical analysis across treatment groups by MPP. Whole Transcriptome RNA Sequencing We extracted total RNA from one thin, outer slice of the two kidney cortices of each of three TRACK (13 months) and three age-matched, WT mice. RNeasy spin columns (Qiagen) were used to purify RNA. The complete transcriptomes were sequenced using an Illumina HiSeq2000 Sequencer with 51bp single-end reads and 4 samples per lane as previously described [15]. Ingenuity pathway analysis (IPA) was performed with the DESeq2 processed data using the default settings of the software. All genes that displayed a significant change (q < 0.05) were included in the analysis. Analysis of Publicly Available Human RCC Data Publicly available transcriptomics data from 66 chromophobe, 533 clear cell, and 290 papillary RCC (pRCC) tumor specimens, along with the results from 129 flanking normal kidney specimens, were used as deposited by The Cancer Genome Atlas (TCGA) on January 28, 2016 with the digital object identifier 10.7908. The preprocessed RNA-Seq by Expectation Maximization (RSEM) normalized data were used. Metabolomics data from 138 patients with ccRCC were used as included in the supplementary data in Hakimi et al. [18]. The levels of individual metabolites from patients with different stage tumors were separated from their levels in corresponding flanking normal kidney tissue, as depicted in the sample output file. Statistical Analysis One-way analysis of variance was applied with Tukey's multiple comparison posttest to test statistical significance of the gene expression differences in kidney cancer. All p-values obtained with RNA-Seq were adjusted for the false discovery rate to yield q-values. MPP was used to provide a multivariate statistical platform for comparative metabolite profiling. The two-sided Student's t-test was applied to determine statistical significance of metabolite differences. The association between different gene expression levels and disease stage was assessed by calculation of Spearman's correlation coefficient. A two-sided p < 0.05 or q < 0.05 was considered to indicate significance. Inhibition of Fatty Acid Biosynthesis and b-Oxidation Pathways in the Kidneys of TRACK Mice We previously showed that constitutive activation of HIF1a in the kidneys of TRACK mice promotes a metabolic switch to aerobic glycolysis [14]. To investigate further changes in metabolism in this model, we performed IPA. We used differentially expressed genes (q < 0.05) identified in a whole-genome transcriptomics analysis of the kidney cortices from 13-month-old TRACK mice compared with age-matched WT mice ( Figure 1). We focused on the canonical pathways with lipid or glucose metabolism annotations. Consistent with our previous analyses, we found the glycolysis I, HIF1a signaling, and TCA cycle II pathways among the most significantly changed pathways in TRACK versus WT kidneys. In addition, the fatty acid b-oxidation I, fatty acid b-oxidation III (unsaturated), stearate biosynthesis I, and palmitate biosynthesis I pathways were perturbed. The fatty acid b-oxidation I and stearate biosynthesis I were among the pathways most significantly altered. 
The majority of the genes in these two pathways showed lower mRNA levels in TRACK (22/32 genes; 27/44 genes) as compared with WT kidneys, and by IPA, both pathways were predicted to be inhibited.

Figure 1. Analysis of lipid metabolism pathways in the kidney cortices of transgenic TRACK mice compared with wild-type mice. Ingenuity pathway analysis (IPA) of differentially expressed genes (q < 0.05) in kidney cortices of three TRACK as compared with three WT mice. Stacked bar charts illustrate the number of decreased (blue) and increased (red) transcripts in each individual canonical pathway. The total number of transcripts in each pathway is shown in the far-right column, while the numbers of decreased and increased transcripts are presented in the bars. The right-tailed Fisher's exact test was used to determine statistical significance. The glycolysis I, fatty acid β-oxidation I, stearate biosynthesis I, TCA cycle II, fatty acid β-oxidation III (unsaturated), and palmitate biosynthesis I pathways are significantly (p < 0.05) perturbed in TRACK as compared with WT kidneys. The glycolysis I and HIF1α signaling pathways predominantly contain increased transcripts, indicating activation, while the fatty acid β-oxidation I, stearate biosynthesis I, and TCA cycle II pathways are predicted to be inhibited. These results suggest that constitutive activation of HIF1α in TRACK kidneys increases nonoxidative glucose metabolism, while inhibiting lipid degradation and biosynthesis.

Figure 2 (partial caption). (...) comparing WT and TRACK kidney cortices. Transcripts related to lipid biosynthesis, uptake, and storage are increased, while the mRNA levels of lipid degradation genes are reduced in TRACK compared with WT. C illustrates the relative ion abundance of metabolites. Note a significant (p < 0.05) increase in the levels of (essential) fatty acids (palmitoleic acid, oleic acid, linoleic acid, alpha-linolenic acid), as well as a large (p < 0.05) decrease in fatty acid carnitine products (oleoylcarnitine), in TRACK as compared with WT kidneys, showing that HIF1α promotes the uptake of dietary lipids (such as the essential fatty acids linoleic and alpha-linolenic acid), presumably through increased expression of lipid receptors in the kidneys of transgenic mice.

Dietary Lipid Uptake in the Kidneys of TRACK Mice To analyze the regulation of lipid metabolism in the TRACK kidneys, we then measured the levels of critical mRNAs, small-molecule intermediates, and products of lipid metabolism. The rate-limiting enzymes in the lipid biosynthesis and β-oxidation pathways have been described (Figure 2A) [19]. Although we detected increased transcript levels of several genes involved in lipid biosynthesis (Acly, Fasn, Scd1; Figure 2B), no significant changes were noted in the expected metabolites (glutamine, palmitic acid, stearic acid; Figure 2C). We detected decreased levels of Lpl and Cpt1a transcripts, as well as decreases in the levels of metabolites in the β-oxidation pathway (oleoylcarnitine; Figure 2B, C), in TRACK compared with WT kidneys. Uptake of extracellular lipids can act as an alternative way to increase the cellular lipid content during hypoxia [20,21]. We thus investigated whether scavenging of extracellular lipids contributes to the lipid accumulation in the kidneys of TRACK mice. We detected increased transcript levels of genes involved in FA transport (Cd36, Fabp5) and cholesterol uptake (Vldlr, ACVRL1) in TRACK compared with WT mice (Figure 2B). We also measured increased levels of unsaturated long-chain FAs (oleic acid, palmitoleic acid) in the kidney cortices of TRACK versus WT mice (Figure 2C). Linoleic acid and linolenic acid are essential, polyunsaturated FAs that cannot be synthesized de novo in humans or mice and therefore, by definition, are obtained from the diet [22]. We detected increased levels of linolenic and linoleic acid in TRACK versus WT kidneys (Figure 2C).
We also detected elevated levels of lipid storage genes (Hilpda, Plin2, Figure 2B) and esterified FAs (PC 36:4, lysoPC 16:1, Figure 2C), indicating that lipids are likely stored in lipid droplets. Collectively, these results indicate that constitutively active HIF1a promotes the uptake and storage of dietary lipids in the kidneys of TRACK mice. A High-fat Diet Enhances Lipid Accumulation in the Kidneys of TRACK Mice To explore further the role of extracellular lipid uptake, 1-month-old TRACK and WT mice were fed a HFD or a regular chow diet. Weight monitoring showed increases in total body weight in both WT and TRACK mice after 12 weeks on a HFD as compared with a chow diet (38.1 g versus 26.9 g, p < 0.001, Figure 3A). Twenty-three TRACK mice were fed a HFD and sacrificed after 2, 4, 6, 10, 12, and 15 months. Consistent with our previous observations, TRACK mice showed ccRCC precursor lesions, characterized by high carbonic anhydrase (CA-IX) expression, highly disorganized tubular structures, and an abundance of clear cells (Supplementary Figure 1). We did not observe any invasive growth or differences in nuclear morphology in the kidneys of the TRACK mice on a HFD versus a chow diet. The HFD-fed TRACK mice exhibited increased numbers of clear cells in the kidneys, compared with TRACK mice fed a chow diet ( Figure 3B). These changes were visible in TRACK kidneys starting after 2 months on a HFD and thereafter, while minimal changes were observed in WT kidneys. Consistent with increases in FA and lipid import resulting from the HFD, the increase in clear cell abundance was accompanied by increased levels of neutral triglycerides (TGs) and lipids, as assessed by ORO staining, in TRACK mice ( Figure 3C). These results suggest that the constitutively active HIF1a promotes the accumulation of dietary lipids in the kidneys of TRACK mice. Longer latency periods or alternative dietary formulations may be required to determine the effects of a HFD on tumor development. Increased Expression of Lipid Uptake and Storage Genes in Human ccRCC We next compared our TRACK kidney results with data from human ccRCC specimens. The presence of clear cells is a fundamental morphologic feature of human ccRCC that is not often seen in other subtypes of RCC, such as chromophobe RCC (chRCC) and pRCC [23]. Previously, TCGA consortium conducted RNAseq analysis on samples from patients with chRCC (n ¼ 66), pRCC (n ¼ 290), and ccRCC (n ¼ 533) along with 129 normal flanking kidney tissue samples [7,24,25]. To investigate whether dietary lipid uptake contributes to the formation of clear cells in ccRCC, we analyzed these RNAseq datasets. We first focused on the dominant lipid receptors and compared their expression levels in the three types of RCC to normal human kidney tissue ( Figure 4A). We found an increase in CD36 (5.1-fold), a modest increase in VLDLR (1.5-fold), and a decrease in LDLR (2.6-fold) transcript levels in ccRCC as compared with normal kidney ( Figure 4B). Activin receptorelike type 1 (ACVRL1), LDL receptorerelated protein 1 (LRP1), and caveolin 1 (CAV1) are alternative receptors for LDL particles and FAs [26,27]. We found 1.9-fold, 2.0-fold, and 5.1-fold increases, respectively, in ccRCC as compared with the normal kidney tissue ( Figure 4B). 
In contrast, the CD36, ACVRL1, and LRP1 transcripts were either decreased or not significantly changed in chRCC and pRCC compared with the normal kidney tissue. VLDLR, LDLR, and CAV1 also showed significant increases in chRCC (Figure 4B). We conclude that CD36 and VLDLR transcripts are increased relative to those in the normal kidneys specifically in human ccRCC, similar to our results in TRACK versus WT kidneys (Figure 2). We next investigated markers of lipid storage in the different RCC subtypes. We found a marked and unique increase in the FA transporter FABP7 transcript (113-fold) and the FA elongation enzyme ELOVL2 (11-fold) in human ccRCC specimens compared with normal kidneys (Figure 4C). At the same time, the transcripts of genes involved in lipid droplet stability (PLIN2, 7.8-fold), fusion (CIDEB, 1.9-fold), and growth (HILPDA, 21.3-fold) were increased in ccRCC compared with the normal kidney tissue (Figure 4C). The mRNA levels of these genes were either not significantly changed (FABP7, HILPDA, PLIN2, ELOVL2) or decreased (CIDEB) in the other types of RCC. These TCGA data analyses indicate that elevated transcripts of lipid receptors and lipid storage genes are dominant features of human ccRCC. HIF1α Promotes Dietary Lipid Uptake in Early Stage Human ccRCC To investigate whether FA uptake occurs in human ccRCC, we probed metabolomics data obtained from a cohort of 138 patients with different stages of ccRCC [18]. Similar to our observations in TRACK mice, we noted increased amounts of the unsaturated essential FAs, linolenic acid (1.9-fold, p < 0.0001) and linoleic acid (1.5-fold, p < 0.0001), particularly in stage 1 ccRCC tumors, in comparison with the normal kidney tissue (Figure 5A). In these same stage tumors, we found increased levels of glutamine (1.4-fold, p < 0.0001) and palmitic acid (1.2-fold, p < 0.0001). Previous research by the TCGA showed a metabolic shift toward lipid biosynthesis in a subgroup of ccRCC patients with a poor prognosis [7]. To gain insight into the relative importance of lipid import during disease progression, we analyzed FA and glutamine levels in the same cohort of 138 patients at different stages of disease. Although the distribution of tumor staging was skewed toward stages 1 and 3, we found a trend toward a decrease in linoleic and linolenic acids with increasing disease stage (Figure 5A, Pearson R −0.1495 and −0.1222, p = 0.0401 and p = 0.0767). In contrast, we detected no clear correlation between glutamine or palmitic acid and tumor stage (R −0.0175, −0.1207, p = 0.4193, p = 0.0793). These results are consistent with the possibility that ccRCC tumors recruit fewer dietary lipids as disease progresses. We also compared the CD36 and FASN mRNA levels in patients with different stages of ccRCC. Similar to the decreases in linolenic and linoleic acid, we found a decrease in CD36 mRNA levels with increasing ccRCC tumor stage, while the FASN mRNA levels showed an increasing trend with tumor stage (Figure 5B, Pearson correlation coefficients −0.1594 and 0.0735, p = 0.0001 and p = 0.0451).
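The stage-trend analyses above pair a per-patient expression or metabolite value with an ordinal tumor stage and report a correlation coefficient and p-value. A minimal sketch of that kind of test is shown below; the Methods of this study list Spearman's correlation for gene expression versus stage, while Figure 5B quotes Pearson coefficients, so both are shown. All values here are synthetic and purely illustrative, not the TCGA or cohort data used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient data: ordinal tumor stage (1-4) and a normalized
# expression value (e.g., RSEM) for a lipid-uptake gene such as CD36.
stage = rng.integers(1, 5, size=120)
expression = 10.0 - 1.2 * stage + rng.normal(0.0, 2.5, size=120)  # weak negative trend by construction

rho, p_spearman = stats.spearmanr(stage, expression)   # rank-based, suited to ordinal stage
r, p_pearson = stats.pearsonr(stage, expression)       # linear coefficient, as quoted for Figure 5B

print(f"Spearman rho = {rho:.3f} (p = {p_spearman:.3g})")
print(f"Pearson r    = {r:.3f} (p = {p_pearson:.3g})")
```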
To determine whether the increase in lipid biosynthesis correlated with a decrease in dietary lipid uptake, we investigated the association of CD36 and FASN within each patient. Indeed, we found an inverse correlation between FASN and CD36 mRNA levels (Figure 5C, Pearson R −0.1594, p = 0.0044). We also compared the CD36 and FASN transcript levels in ccRCC patients with WT HIF1α versus a loss of the HIF1α locus. We detected decreased CD36 mRNA levels, but no change in FASN transcripts, in patients that showed loss of at least one allele of the HIF1α gene (Figure 5D). Our results suggest that dietary lipid uptake in early stage ccRCC may be driven by HIF1α signaling in human ccRCC in addition to TRACK mice.

Figure 4. mRNA levels of genes involved in lipid uptake and storage in human kidney cancer. Transcriptomics data obtained by the TCGA from normal human kidney tissue (n = 129), chromophobe (chRCC, n = 66), papillary (pRCC, n = 290), and clear cell renal cell carcinoma (ccRCC, n = 533) specimens were analyzed for lipid uptake and storage genes. A illustrates some of the lipid uptake (orange/red) and storage (yellow) pathways. B illustrates the quantitative mRNA levels (RNA-Seq by Expectation Maximization [RSEM]) of lipid uptake genes in normal tissue and different subtypes of kidney cancer, respectively. Note statistically significant increases in transcripts of the lipid uptake genes CD36, VLDLR, ACVRL1, LRP1, and CAV1 in human ccRCC compared with the normal kidney. As shown in C, mRNA levels of the lipid storage genes FABP7, HILPDA, PLIN2, ELOVL2, and CIDEB are also increased in ccRCC. Together, these data indicate that increased transcripts of genes involved in both dietary lipid uptake and lipid storage are common in ccRCC compared with both the normal kidney and other histologic subtypes of RCC. Statistical significance of differences in cancer specimens as compared with the normal kidney tissue was assessed using ANOVA (p < 0.05).

Figure 5 (partial caption). (...), and somatic copy number data (n = 418) (Panel D) of stage I, II, III, and IV ccRCC were analyzed. Statistical significance was assessed by the two-sided Student's t-test and Pearson's correlation. A illustrates the levels (log2 normalized abundance) of two essential fatty acids (linoleic and linolenic acid) and intermediates of lipid biosynthesis (glutamine, palmitic acid) as measured by LC-MS/MS. A statistically significant (p < 0.05) increase in essential fatty acids and lipid biosynthesis intermediates was detected in stage I ccRCC tumors as compared with the normal kidney tissue. As stage increased, the level of linoleic acid decreased (p < 0.05), while no significant changes in the glutamine or palmitic acid levels were observed. B illustrates CD36 and FASN mRNA levels (RNA-Seq by Expectation Maximization [RSEM]) by tumor stage. CD36 mRNA levels decreased with increasing tumor stage, while the FASN transcript levels increased with tumor stage. C shows the association between CD36 and FASN mRNA levels within each individual patient with ccRCC. An inverse association was found between the CD36 and FASN expression levels. D shows the CD36 and FASN mRNA levels in patients with or without a HIF1A somatic copy number reduction. We found higher CD36 mRNA levels in patients with wild-type (WT) HIF1A. Collectively, these results suggest that HIF1α increases dietary lipid uptake in patients with early stage ccRCC, while tumors in patients with advanced ccRCC may rely on lipid biosynthesis.

Discussion The accumulation and storage of lipids are critical for the management of oxidative and ER stress, and lipids promote homeostasis in ccRCC tumors [2,3,28]. Here, we provide evidence from whole-genome transcriptomics analyses that HIF1α signaling enhances the accumulation of lipids in early stage ccRCC through the uptake of extracellular lipids. Our results indicate that HIF1α signaling increases the transcript levels of lipid receptors, such as CD36 and ACVRL1, as well as transcripts of genes involved in lipid transport (FABP7) and storage (PLIN2 and HILPDA). We here show that the activation of the lipid biosynthesis pathway may compensate for the decreased ability of more advanced stage ccRCC tumors to scavenge extracellular lipids. These more advanced ccRCC tumors may acquire a relative dependency on the HIF2α or the MTORC1 pathway, which were previously shown to drive the de novo lipogenesis gene network, including FASN [5,29]. Du et al. [30] showed that the rate-limiting enzyme in mitochondrial FA import, carnitine palmitoyltransferase 1a (CPT1a), is directly repressed by HIF1α and HIF2α in human ccRCC cultures and that this repression results in a decrease in FA catabolism. In TRACK kidney cortices, we have shown that CPT1a transcripts are greatly reduced relative to levels in WT kidney cortices (Figure 2B), so a reduction in FA catabolism plus increased lipid uptake resulting from HIF1α activation should result in much greater internal levels of FAs in early stage ccRCC.
Previous lipidomic profiling of human ccRCC specimens showed increased levels of mature TGs, as well as cholesterol esters, in cancerous tissues as compared with the normal tissue [31]. Human tumors were also found to be enriched for polyunsaturated, long-chain FAs. These results are in line with preclinical data showing that hypoxic cells preferentially scavenge unsaturated FAs from phospholipids [21,28]. We found increased transcript levels of CD36, SCD1, and ELOVL2 in TRACK kidneys, as well as in early stage human ccRCC tumors. These proteins are known to import, desaturate, and elongate FAs, ultimately yielding the most abundant substrates in the tumors. We also measured increased transcript levels of several cholesterol receptors (VLDLR, ACVRL1, LRP1) in TRACK kidneys and human ccRCC, while we detected one nonsignificantly increased cholesterol ester (cholesterol sulfate) in our metabolomics analysis of the TRACK kidneys. These results suggest that HIF1α activation may not only lead to increased dietary FA uptake but that HIF1α may also enhance the enzymatic processing of FAs and cholesterol uptake. Future research will be needed to determine if (un)saturated FAs, dietary lipids, and cholesterol have distinct roles in ccRCC. Previous research showed that unsaturated FAs protect against reactive oxygen species in glioblastoma and breast cancer cells [20]. An increase in antioxidant defense mechanisms, with increased glutathione metabolism and somatic alterations of the redox regulators KEAP1 and NRF2, was previously shown in patients with aggressive ccRCC [18]. Although unsaturated FAs may reduce oxidative stress, the reverse has been shown for saturated FAs [32,33]. The results presented here suggest that the composition of dietary lipids may influence the disease course in patients with ccRCC.
Incidence of recreational snowboarding-related spinal injuries over an 11-year period at a ski resort in Niigata, Japan Background There is limited knowledge regarding the incidence of recreational snowboarding-related spinal injuries. Objective This study investigated the incidence and characteristics of recent recreational snowboarding-related spinal injuries and discussed possible preventive measures to reduce the risk of spinal injuries. Methods This descriptive epidemiological study was conducted to investigate the incidence and characteristics of snowboarding-related spinal injuries at the Myoko ski resort in Niigata Prefecture, Japan, between 2006 and 2017. The incidence of spinal injuries was calculated as the total number of spinal injuries divided by the number of snowboarding visitors, which was estimated based on the ticket sales and estimates regarding the ratio of the number of skiers to the number of snowboarders reported by seven skiing facilities. Results In total, 124 (72.5%) males and 47 (27.5%) females suffered spinal injuries. The incidence of spinal injuries was 5.1 (95% CI 4.4 to 5.9) per 100 000 snowboarder visitors. Jumps at terrain parks were the most common factor in 113 (66.1%) spinal injuries, regardless of skill level (29/49 beginners, 78/112 intermediates, 6/10 experts). Overall, 11 (including 9 Frankel A) of 14 (78.6%) cases with residual neurologic deficits were involved with jumps. Conclusions In recreational snowboarding, jumping is one of the main causes of serious spinal injuries, regardless of skill level. The incidence of spinal injuries has not decreased over time. Individual efforts and educational interventions thus far have proven insufficient to reduce the incidence of spinal injury. Ski resorts and the ski industry should focus on designing fail-safe jump features to minimise the risk of serious spinal injury. INTRODUCTION The danger of spinal injuries has been highlighted as a risk associated with the spread of recreational snowboarding. [1][2][3][4][5][6][7][8] Increasing media coverage of snowboarding events and competitions, such as the World Cup, Olympics and Winter X Games, may have affected the way in which recreational snowboarders perform, prompting them to attempt to emulate professionals. 9 The spinal region is one of the most commonly injured body parts for critical injury among snowboarders, [1][2][3][4][5][6][7][8] and traumatic paraplegia may result in permanent disability. 10 11 It has been reported that spinal injuries to the thoracolumbar region are most likely to be associated with jumping. [1][2][3][4][5] Although many terrain park (TP) features have been designed for jumps and aerial manoeuvres, snowboarders are significantly more likely to sustain spine injuries in TPs than on regular slopes. [6][7][8] In Japan, the frequency of snowboarding-related spinal injuries due to jumping has increased since the latter half of the 1990s. 2
Yamakawa et al 2 reported that the total number of patients with snowboard-related spinal injuries has exceeded the total number of patients with ski-related spinal injuries since the 1995-1996 season, and that the total number of snowboarder visitors has exceeded the total number of skier visitors since the 1997-1998 season. However, there are few reports on spinal injuries in Japan. It is unknown whether the incidence of spinal injuries has decreased in the last decade. Consequently, there is a need for research that elucidates the occurrence of spinal injuries in snowboarders, which may have been affected by recent changes in their behaviour or slope design. The purpose of this study was to investigate the incidence and characteristics of snowboarding-related spinal injuries at the Myoko ski resort in Niigata Prefecture, Japan, between 2006 and 2017 and to discuss possible preventive measures to reduce the risk of spinal injury. PATIENTS AND METHODS The Myoko ski resort is a famous ski resort with seven ski facilities in Niigata Prefecture, Japan, and approximately 0.6 million visits per year. The closest primary emergency care hospital to the Myoko ski resort is Niigata Prefectural Myoko Hospital, and Niigata Prefectural Central Hospital is the only local referral centre for serious spinal injuries. Therefore, we supposed that the vast majority of patients with spinal injuries that occurred while snowboarding at the Myoko ski resort were treated in these two hospitals. This study included all patients with snowboarding-related spinal injuries who were treated in one of the two hospitals between December 2006 and April 2017. The factors investigated in this study included sex, age, skill level, cause of accident, location of injury, pattern of spinal injury, and severity of neurologic injury. Self-reported skill levels were classified as beginner, intermediate or expert. Causes of injuries were categorised as simple fall (on a regular ski slope, not in a TP), collision on slopes with objects or other snowboarders or skiers, jump in TPs, or 'other'. At this resort, the arbitrary creation of jump features is prohibited on regular slopes and there are no half pipes. Jumping is therefore performed only in TPs. Most TP jump features were built by former professional snowboarders, skilled groomers and resort staff who were entrusted by the ski resort administrator. Injuries that occurred on non-aerial TP features, such as boxes and rails, were classified as 'other'. Cases of spinal injury in off-piste areas were excluded. To determine the incidence of snowboarding-related spinal injuries, the total number of visitors to the Myoko ski resort was estimated based on the number of ticket sales announced by each of the seven skiing facilities that comprise the resort. Each facility also reported estimates regarding the ratio of the number of skiers compared with the number of snowboarders based on ski patrol observations. Estimation of the number of snowboarders was based on this ratio. The incidence of spinal injuries was calculated as the total number of spinal injuries divided by the number of snowboarding visitors. The regions of spinal injury were divided into the cervical vertebrae (C1-C7), thoracic vertebrae (Th1-Th12), lumbar vertebrae (L1-L5) and sacral-coccyx. Furthermore, to clarify the characteristics of the location of injuries, the thoracolumbar junction level (Th10-L2) was divided and its characteristics were investigated.
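The incidence calculation described above (injuries divided by estimated snowboarder visits) and the kind of confidence interval quoted with it can be reproduced with a short script. The sketch below treats the injury count as a Poisson variable, which is one conventional choice; the authors' exact interval method is not stated, and the figures here simply reuse the totals reported in this study.

```python
from scipy.stats import chi2

def incidence_per_100k(cases: int, visits: float, alpha: float = 0.05):
    """Crude incidence per 100 000 visits with an exact (Garwood) Poisson confidence interval."""
    rate = cases / visits * 1e5
    lower = chi2.ppf(alpha / 2, 2 * cases) / 2 / visits * 1e5
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 / visits * 1e5
    return rate, lower, upper

# Totals reported in this study: 171 spinal injuries over approximately 3 334 000 snowboarder visits.
rate, lo, hi = incidence_per_100k(171, 3_334_000)
print(f"{rate:.1f} per 100 000 (95% CI {lo:.1f} to {hi:.1f})")  # ~5.1 (~4.4 to ~5.9)
```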
Spinal injuries were classified into several types: compression fracture, burst fracture, facet subluxation, dislocation fracture, spinous process fracture, lumbar transverse process fracture and sacral-coccyx fracture. Patients with sprains or contusions of the spine were not included in this study. The location of injury, presence of spinal cord injury (SCI) and fracture pattern were investigated from medical records and images between December 2006 and April 2017. In cases with multiple vertebral body fractures, we defined the location of vertebral fracture based on the largest vertebral body collapse observed with a lateral view using X-ray imaging. Neurologic severity was evaluated according to the Frankel grade 12 at the time of first visit and at discharge. Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research. Incidence During 11 winter seasons, there were approximately 3 334 000 visits by snowboarders and 171 spinal injuries. Therefore, the incidence of spinal injury was 5.1 (95% CI 4.4 to 5.9) per 100 000 snowboarder visitors. The total numbers of visitors and snowboarding-related spinal injuries per year are shown in figure 1. Severity of injuries At the time of the first visit, 27 snowboarders had neurologic deficits associated with their spinal injuries. Among these patients with neurologic deficits, jumps were involved in 18 cases, falls in 7 cases, collision (with a tree) in 1 case and other (rail fall) in 1 case. The patterns of associated injuries included 1 case of C2 hangman fracture, 2 cases of C4/5 facet subluxation, 3 cases of cervical central cord injury (1 fall, 1 jump, 1 rail fall), 9 cases of burst fracture (1 case of C5, 5 cases of L1, 3 cases of L2) and 12 cases of fracture-dislocation. At final follow-up, 12 males and 2 females with a mean age of 27±6.1 (20-42) had a neurologic deficit. There were nine cases with Frankel A (three beginners, three intermediates, three experts) and five cases with Frankel D (three beginners, two intermediates). Overall, 11 (9 Frankel A, 2 Frankel D) of 14 (78.6%) cases with residual neurologic deficit involved jumps in TPs. DISCUSSION Little is known about the incidence of snowboarding-related spinal injuries. According to Sacco et al, 13 between January 1990 and December 1995 in Vermont, USA, the incidence of spinal injury was 1.3 per 100 000 visits for snowboarders, and approximately 15% of ski resort users were snowboarders at the time of the study. Tarazi et al 1 reported that the incidence of spinal injury was 4.0 per 100 000 visits for snowboarders over two seasons (December 1994-April 1996) in Vancouver, Canada, and approximately 15% of ski resort users were snowboarders. On the other hand, Yamakawa et al 2 reported that the incidence of spinal injury was 5.7 per 100 000 visits for snowboarders over 12 seasons (December 1988-March 2000 in Gifu Prefecture, Japan) and more than 50% of ski resort users were snowboarders since the 1997-1998 season. In the present study, the incidence of snowboarding-related spinal injury was 5.1 (95% CI 4.4 to 5.9) per 100 000 snowboarder visitors over 11 seasons (December 2006-April 2017), and snowboarders accounted for approximately 40% to 60% of the Myoko ski resort visitors. Over the past three decades, the ratio of snowboarders has increased to around 50%, and the incidence of spinal injuries has increased up to approximately 5 per 100 000 visits.
We suggest that the increased incidence of spinal injuries is partly related to the popularisation of TPs without appropriate safety measures. Snowboarders like to perform tricks and aerial manoeuvres. 3-6 9 To attract more snowboarders, ski resorts construct TPs with man-made features, allowing more acrobatic jump manoeuvres. Since around 2000, various jump features have been introduced to the TPs at the Myoko ski resort in response to global snowboarding trends. These features include tabletops, step-downs, spines, hips and gaps and non-aerial items, such as boxes and rails. It has been reported that snowboarders in TPs are significantly more likely to sustain spine injuries in TPs than on regular slopes. [6][7][8] Although many aerial features in TPs have been designed for jumping and aerial manoeuvres, the rate of injuries associated with jumping is four times higher than in alpine skiing, 14 and it has been reported that 52% to 77% of snowboarding spinal injuries involved jumping. 1 2 In the present study, jumps in TPs were the most common factor in 113 (66.1%) spinal injuries, and 11 (78.6%) of 14 SCIs with residual neurologic deficit involved jumps in TPs. In total, 10 (59.1%) cases of spinal injury affected the thoracolumbar junction (Th10-L2). Our results are consistent with those of previous reports [1][2][3][4][5][6][7][8] and show that jumps in TPs remain one of the greatest risk factors for serious spinal injuries at the thoracolumbar junction, and that the incidence of spinal injuries has not decreased. Conventional preventive measures with regard to recreational snowboard injuries can be divided into two types: those undertaken by individuals and those undertaken by ski resort administrators. Regarding individual preventive measures, Ishimaru et al 15 reported that hip pads reduce the overall risk of injury in recreational snowboarders, but hip pads are not considered being an effective protection against life-threatening injuries (including SCIs). There is also no evidence demonstrating the efficacy of body trunk protectors for protection against thoracolumbar spinal injuries. 16 Some reports have emphasised the importance of educational intervention. 17 18 According to Cusimano et al, 18 educational intervention in the form of brochures and videos aimed at young skiers and snowboarders appeared to be effective in improving safety-related knowledge, attitudes and behaviours, although there was no significant difference in injury rates between the control and intervention groups. Few strategies for reducing the incidence of spinal injuries have been evaluated for efficacy. 5 The present study shows that serious spinal injuries occur in TPs regardless of skill level. In the case of recreational snowboarders, it seems impossible to reduce the incidence of spinal injuries through only individual efforts and risk perception. It is also important to note the inherent risk in TP jump features. Snowboarders tend to fall backward from jumps. 3 This phenomenon may result from the concave curved takeoffs, a design feature that can induce backward rotation. 19 Falling backward is more likely to result in direct impact at the back of the trunk because of difficulty of cushioning backward falls using the upper limbs. 20 Thus, backward falls from jumps may lead to vertebral fractures of the flexion-distraction type (dislocation fracture with or without burst fracture). 21 The likelihood and severity of injury have been reported to be related directly to the impact on landing. 
22 23 Historically, the design of skiing equipment such as skis, snowboards, bindings and helmets has been carried out by professional engineers at experienced companies. However, jump features in modern TPs are designed by skilled groomers and resort staff with little scientific basis. 22 24 Currently, most recreational TP jump features are built without the involvement of professional engineering design. [22][23][24][25] 'Fail-safe' and 'fool-proof' are important concepts for the prevention of accidents caused by human error. 'Fool-proof' refers to the ability to mitigate injury when users make errors. In this regard, all that can be done to prevent snowboarders from performing jumps is to remove all jump features from ski resorts. Goulet et al 26 reported that removing man-made jumps from TPs prevented severe injury. However, removal of all jump features is unacceptable for the resorts and resort users. 'Fail-safe' refers to the ability to maintain safety even when a failure mode occurs. For instance, a fail-safe jump would be one in which the snowboarder does not suffer from catastrophic injuries (including SCIs) even if he/she fails to jump. In this regard, McNeil et al 24 evaluated the safety of jump features quantitatively and created TP jump features using an engineering design approach to minimise the risk of serious spinal injury. Hubbard et al 22 suggested that the probability of severe injuries on landing is correlated with jumper velocity perpendicular to the landing surface, and proposed that landing impact severity can be reduced by constructing landing slopes that are nearly parallel to the trajectory of the jumper. McNeil et al 23 also introduced the concept of shaping of the landing to minimise impact, using the equivalent fall height to parametrise impacts. McNeil et al 24 proposed that engineered jump designs limit the energy dissipated at impact by designing the shape of the landing surface and reduce the inversion risk by limiting the curvature to the 'late' section near the end of the take-off ramp (figure 3). Based on this theoretical foundation, [22][23][24] Petrone et al 25 constructed TP jump features to test the feasibility of controlling landing impact. Audet et al 9 recommended that an engineering approach considering TP design and management might help prevent injuries and that future research should focus on how to design and maintain a safer environment. Further studies are needed to verify whether an engineering approach to TP jump feature design can contribute to reducing the incidence of catastrophic spinal injury. This study has several limitations. First, it was conducted at one ski resort in Japan. However, the features at the Myoko ski resort do not differ from those at other ski resorts (personal communication with the Myoko ski resort administrators). In addition, one of the ski grounds that comprise the Myoko ski resort has been making TPs under the guidance of an internationally renowned company since 2015. Therefore, we consider that our findings can be generalised to the latest trends in recreational snowboarding-related spinal injuries at ski resorts, where TPs are made without an engineering approach. Second, our study patients did not rate their own skill in the manner advocated by Sulheim et al, 27 who classified snowboarding skill into four categories based on the type of turns routinely performed.
Third, the incidence of spinal injuries we calculated in this study was a crude incidence and was not adjusted for age, sex or other factors, unlike typical epidemiological studies. Fourth, the ratio of the number of skiers to the number of snowboarders was estimated based on ski patrol observations and may not have been strictly accurate. However, the Japan Association for Skiing Safety 28 reported that the mean ratio of the number of skiers to the number of snowboarders was 53% to 47%, respectively, during the period 2013-2017, which is close to our estimated ratio. Therefore, we regard our calculated total number of snowboarders as a reasonable denominator. Finally, we may have missed patients with minor trauma who did not seek medical attention. However, it is unlikely that a patient with a spinal injury requiring medical care would not visit a medical institution, as almost all people in Japan are insured. CONCLUSION Snowboarding-related spinal injuries are a frequent occurrence at ski resorts. In this study, the incidence of snowboarding-related spinal injury was 5.1 (95% CI 4.4 to 5.9) per 100 000 snowboarder visitors over 11 seasons. Preventive measures should focus on reducing the likelihood and consequence of spinal injuries involving jumps in TPs. While individual effort and educational interventions may be valuable, ski resorts and the ski industry should also focus on constructing fail-safe TP jump features to minimise the risk of serious spinal injury.
Location of Sinabung volcano magma chamber in 2013 using a Levenberg-Marquardt inversion scheme. Sinabung Volcano has been monitored using GPS since its eruption in August 2010. We applied a Levenberg-Marquardt inversion scheme to the GPS data from 2013 because the deformation of Sinabung Volcano in this year showed both inflation and deflation: first we applied the Levenberg-Marquardt scheme to the velocity data of 23 January 2013, and then to the data of 31 December 2013. From our analysis, the depths of the pressure-source models indicate that Sinabung may have a deep magma chamber at about 15 km and also a shallow magma chamber at about 1 km below the surface. Introduction Before the eruption in 2010, no eruption of Sinabung volcano had been recorded in history. The first period of eruptive activity of Mount Sinabung began on August 27, 2010, after which Sinabung was reclassified as a type A volcano. Sinabung is a solitary volcano with a single peak, and its activity before the 2010 eruption consisted only of solfataric gas emissions and fumaroles [1]. Based on data and information from PVMBG, the next eruptive period started on 15 September 2013 with phreatic eruptions, which turned into magmatic eruptions from 23 November 2013 and were followed by the emergence of a lava dome from December 16, 2013. From this case, we want to determine the depth and radius of the Sinabung magma source. Point pressure source Surface deformation of a volcano is controlled by the shape and size of the source, the increment of pressure, and the elastic properties of the medium. The deformation is proportional to the ratio of the cavity pressure change to the half-space elastic modulus, ΔP/G, and to Poisson's ratio [2]. The Mogi model assumes that the crust is an elastic half-space and that the surface deformation is caused by a pressure source in the form of a spherical magma chamber located at a certain depth. If a change in hydrostatic pressure takes place in the chamber, the deformation will occur symmetrically. The model inputs required for the Mogi point pressure source are the depth (d) and the change in volume (ΔV). The magnitude of the deformation varies inversely with the squared distance from the center of the cavity, so the source depth can be estimated by inverting the observed surface displacements. Levenberg-Marquardt inversion scheme Menke (1984) defines inverse theory as the whole set of mathematical and statistical techniques used to obtain useful information about a physical system based on observations of that system [8]. In a non-linear inversion scheme, the relation between the forward operator G, the model parameters m, and the data d can be written as d = G(m). For the Mogi equation, the model parameters are the source depth d and the volume change ΔV that best fit the data. By assuming that the measurement errors are normally distributed, we minimize the sum of squared errors normalized by their standard deviations σ_i, following the maximum likelihood principle: E(m) = Σ_i [(d_i − G_i(m)) / σ_i]². If we let J denote the Jacobian of G with respect to the model parameters, then in the Levenberg-Marquardt (LM) scheme the update of the model parameters at iteration n is found by solving (JᵀJ + λI) Δm = Jᵀ (d − G(m_n)), where λ is a damping parameter which should be adjusted during the inversion until convergence. Results and Discussion PVMBG has placed several GPS stations at Sinabung volcano since 2010. The locations of these stations are shown in the figure below.
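For readers who want to experiment with this kind of inversion, the following Python sketch pairs a simple Mogi forward model for vertical displacement with SciPy's Levenberg-Marquardt least-squares solver. The station coordinates, "observations", measurement error, and starting model are invented purely for illustration and do not reproduce the Sinabung data; Poisson's ratio is fixed at 0.25, and the displacement expression used is one common form of the Mogi vertical-displacement formula.

```python
import numpy as np
from scipy.optimize import least_squares

NU = 0.25  # Poisson's ratio (assumed)

def mogi_uz(params, x, y):
    """Vertical surface displacement of a Mogi point source.
    params = (x0, y0, depth, dV); common form: uz = (1 - nu) * dV / pi * depth / R^3."""
    x0, y0, depth, dV = params
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    R3 = (r2 + depth ** 2) ** 1.5
    return (1.0 - NU) * dV / np.pi * depth / R3

def residuals(params, x, y, uz_obs, sigma):
    # Misfit normalized by the measurement standard deviation, as in the text above.
    return (mogi_uz(params, x, y) - uz_obs) / sigma

# Invented example: 7 station positions (m) and synthetic observations from a known source plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(-4000, 4000, 7)
y = rng.uniform(-4000, 4000, 7)
true = (500.0, -300.0, 1000.0, 3e6)   # x0, y0, depth (m), dV (m^3): illustrative values only
sigma = 0.002                         # 2 mm assumed measurement error
uz_obs = mogi_uz(true, x, y) + rng.normal(0.0, sigma, 7)

start = (0.0, 0.0, 2000.0, 1e6)       # initial model
fit = least_squares(residuals, start, args=(x, y, uz_obs, sigma), method="lm")
print("estimated (x0, y0, depth, dV):", fit.x)
```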
We use the deformation data recorded at these stations in 2013 as published by PVMBG, because during this period Sinabung shows both inflation and deflation, so that both the shallow and the deep magma chamber can be observed by applying the Levenberg-Marquardt (LM) inversion scheme. The LM inversion scheme was applied to the 2013 deformation data recorded at 7 GPS stations. We first applied it to the GPS velocity data of 23 January 2013; we chose this epoch because the condition of Sinabung at that time tended to be stable. From this inversion we obtain a deep magma chamber at about 1 km with a magma chamber volume of about 3×10⁹ m³. We then applied the LM inversion to the velocity data of 31 December 2013; for this epoch we obtain a shallow magma chamber at about 1 km with a magma chamber volume of about 0.8×10⁹ m³. From these calculations we infer that the magma source in 2013 migrated from a deeper to a shallower level beneath the summit of Sinabung before the appearance of the lava dome in December 2013. Conclusion From our analysis using the Levenberg-Marquardt (LM) inversion scheme, we infer that the magma source in 2013 migrated from a deeper to a shallower level beneath the summit of Sinabung before the appearance of the lava dome in December 2013. From our inversions we obtain a deep magma chamber at about 14 km and a shallow magma chamber at about 1 km below the surface.
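As an illustration of the inversion procedure described above, the following is a minimal sketch — not the code used in this study — of a Mogi point-pressure-source forward model fitted with a Levenberg-Marquardt scheme through SciPy. The station coordinates, synthetic displacements, noise level, and starting model are hypothetical and serve only to show how a depth and volume change can be recovered from surface displacements.

```python
# Minimal sketch: Mogi point-pressure-source forward model fitted with a
# Levenberg-Marquardt scheme (SciPy).  Station positions and displacements
# below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import least_squares

NU = 0.25  # Poisson's ratio of the elastic half-space

def mogi_forward(params, x, y):
    """Surface displacements (east, north, up) for a Mogi source.

    params = (xs, ys, depth_m, dV_m3): source position, depth, volume change.
    Uses the standard volume-change form u ~ (1 - nu) * dV / pi / R^3.
    """
    xs, ys, d, dV = params
    dx, dy = x - xs, y - ys
    R3 = (dx**2 + dy**2 + d**2) ** 1.5
    c = (1.0 - NU) / np.pi * dV
    return np.concatenate([c * dx / R3, c * dy / R3, c * d / R3])

def residuals(params, x, y, obs):
    return mogi_forward(params, x, y) - obs

# Hypothetical GPS station positions (m) and noisy synthetic displacements (m)
x = np.array([-3000.0, -1500.0, 0.0, 1500.0, 3000.0, 2000.0, -2000.0])
y = np.array([1000.0, -2500.0, 3000.0, 2500.0, -1000.0, 0.0, 0.0])
true = (200.0, -100.0, 1000.0, 0.8e9)                 # source used to fake the data
obs = mogi_forward(true, x, y) + np.random.normal(0.0, 2e-3, 3 * x.size)

fit = least_squares(residuals, x0=(0.0, 0.0, 5000.0, 1e8),
                    args=(x, y, obs), method="lm",
                    x_scale=[1e3, 1e3, 1e3, 1e9])     # bring parameters to similar scales
xs, ys, depth, dV = fit.x
print(f"depth = {depth / 1e3:.2f} km, dV = {dV:.3e} m^3")
```

In practice the damping behaviour of the LM iteration is handled internally by the solver; only the forward model and the residual vector need to be supplied.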
2019-04-27T13:09:48.640Z
2018-05-01T00:00:00.000
{ "year": 2018, "sha1": "bb32b92380b54bf537918e5f6ccbadb6bae8d818", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1013/1/012182", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d798ae8bc5cd39c91ec9718c25e1b84391babd7b", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
221397208
pes2o/s2orc
v3-fos-license
Cosmic Amorphous Dust Model as the Origin of Anomalous Microwave Emission We have shown that the thermal emission of the amorphous dust composed of amorphous silicate dust (a-Si) and amorphous carbon dust (a-C) provides excellent fit both to the observed intensity and the polarization spectra of molecular clouds. The anomalous microwave emission (AME) originates from the resonance transition of the two-level systems (TLS) attributed to the a-C with an almost spherical shape. On the other hand, the observed polarized emission in submillimeter wavebands is coming from a-Si. By taking into account a-C, the model prediction of the polarization fraction of the AME is reduced dramatically. Our model prediction of the 3$\sigma$ lower limits of the polarization fraction of the Perseus and W 43 molecular clouds at 17 GHz are $8.129 \times 10^{-5}$ and $8.012 \times 10^{-6}$, respectively. The temperature dependence of the heat capacity of a-C shows the peculiar behavior compared with that of a-Si. So far, the properties of a-C are unique to interstellar dust grains. Therefore, we coin our dust model as the cosmic amorphous dust model (CAD). INTRODUCTION Plenty of observations indicate that the majority of interstellar dust is composed of amorphous material ( Li & Draine 2001a). Amorphous materials show unique physical properties compared with crystalline materials. Zeller & Pohl (1971) found from laboratory measurements that the temperature dependence of the heat capacity and the thermal conductivity of amorphous materials at low temperatures shows deviation from those of crystalline materials and is linearly proportional to temperature and proportional to the square of the temperature, respectively. These behaviors were found universally among glasses, such as cristobalite, vitreous silica, and so on (Nittke et al. 1998), and do not depend on the microscopic nature of the materials. Based on these facts, Anderson et al. (1972) and Phillips (1972) independently proposed that thermal characteristics of amorphous materials at low temperature are governed by the transition between the two-level systems (TLS) caused by the deformation of the crystal structure. The mechanical potential of some of the atoms composing an amorphous material becomes a double-well potential. Quantum mechanically, the ground state of the energy eigenstates of the atoms split into two states. One is described by the sum of the states trapped in each potential minimum. The other is described by the difference between these states. Small but finite energy splitting occurs between these two states. In the TLS model, heat absorption and heat transport are governed by the transition between these states. Since the TLS model successfully explained the low-temperature thermal behaviors of the amorphous materials, it has been accepted as the standard model to describe the amorphous materials. Paradis et al. (2011) showed that the fact that the observed spectrum index of thermal emission from the Galactic dust from submillimeter through millimeter wavebands is smaller than 2, can be explained by taking into account the interaction between the TLS and the electromagnetic waves. Anomalous microwave emission (AME), which shows up as an emission bump at around 10-30 GHz, is observed ubiquitously in the various Galactic environments (see Dickinson et al. 2018 and references therein). Because of the spatial correlation of the AME and the thermal dust emission, it is widely believed that the AME originates from a kind of dust (Davies et al. 
2006). However, the physical process of its emission mechanism is still unresolved. Thermal emission from amorphous dust has been proposed as one of the candidates of the AME mechanism (Jones 2009;Nashimoto et al. 2020b). Since the typical energy difference between the TLS is of the order of ∼ 1 K × k B (where k B is the Boltzmann constant) (Phillips 1987) that corresponds to ∼ 10 GHz × h (where h is the Planck constant), the emission caused by the resonance transition between the TLS is potentially able to explain the AME. Nashimoto et al. (2020b) showed that the thermal emission from amorphous silicate dust (a-Si) based on the TLS model could reproduce observational features of intensity and polarization spectra from far infrared to microwave wavebands. One of the problems of their model is that the model prediction of the polarized intensity slightly exceeds the observational upper limit of the polarized flux density obtained by QUIJOTE (Génova-Santos et al. 2015;. To date, polarized emission from the AME has not been detected. On the other hand, it is known that silicate dust grain contributes only half of the Galactic interstellar dust, and the remaining half is composed of the carbonaceous dust grain (Weingartner & Draine 2001;Mishra & Li 2015). It is worth studying whether the polarized intensity predicted by thermal emission from the amorphous dust is able to be reduced by taking into account the carbonaceous component. In this letter, we studied whether the thermal emission from amorphous dust proposed by Nashimoto et al. (2020b) is able to provide the model consistent with the current upper limit of the polarized intensity of the AME by taking into account both a-Si and amorphous carbonaceous dust (a-C) simultaneously. Our model is tested by comparing observed spectral energy distributions (SEDs) from microwave through far infrared for Perseus molecular cloud and W 43 molecular cloud. Structure of this letter is as follows. In Section 2, we present the amorphous dust emission model. In Section 3, we compare our model to the observation. In Section 4, we discuss the physical properties of amorphous dust predicted by the results. MODEL Thermal emission intensity and polarization spectra of amorphous dust, I d ν and P d ν , are expressed as, where i specifies the dust species (a-Si or a-C) throughout in this paper, N i is the dust column density, C abs i is the absorption cross section, C pol i is the polarization cross section, T i is the dust temperature for each species, and B ν is the Planck function. We assume that the shape of a dust particle is ellipsoid with a x,i ≥ a y,i ≥ a z,i and is characterized by the geometrical factor L j,i (see Bohren & Huffman 1983): where j = x, y, and z, and V i is the volume of the dust grain of species i. It is assumed that ellipsoidal dust grains of the same volume with different axial ratios are uniformly present, which is called the continuous distribution of ellipsoids (CDE), where the lower cutoff parameter L min i for L x,i is introduced to remove ellipsoidal dust with an extremely large axial ratio and, therefore, L x,i takes a value in the range of 1/3 from L min i . We consider the case that the minor axis of the ellipsoidal dust is perfectly aligned in a direction parallel to the interstellar magnetic field, which is assumed to be perpendicular to the line of sight. This assumption will be discussed in detail in Section 4. 
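As a numerical illustration of the geometrical factors L_{j,i} introduced above, the following minimal sketch evaluates the standard ellipsoid integral of Bohren & Huffman (1983); the semi-axes used are arbitrary illustrative values, not fitted dust parameters.

```python
# Minimal sketch: geometrical factors (L_x, L_y, L_z) of an ellipsoidal grain
# from the Bohren & Huffman (1983) integral.  A sphere gives L = 1/3 along
# every axis, and the three factors always sum to unity.
import numpy as np
from scipy.integrate import quad

def geometrical_factors(ax, ay, az):
    """Return (L_x, L_y, L_z) for semi-axes ax >= ay >= az."""
    def L(aj):
        integrand = lambda q: 1.0 / ((aj**2 + q) *
                                     np.sqrt((ax**2 + q) * (ay**2 + q) * (az**2 + q)))
        val, _ = quad(integrand, 0.0, np.inf)
        return 0.5 * ax * ay * az * val
    return L(ax), L(ay), L(az)

print(geometrical_factors(1.0, 1.0, 1.0))   # sphere: ~(0.333, 0.333, 0.333)
print(geometrical_factors(1.4, 1.0, 1.0))   # prolate grain elongated along x: L_x < 1/3
```

Elongating the grain pushes L_x below 1/3, which is why the lower cutoff L^min controls the maximum allowed asphericity in the CDE averaging described above.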
The ensemble average of absorption and polarization cross sections are given as (Draine & Hensley 2017): where λ is the wavelength of the incident electric field, χ x 0,i , χ y 0,i , and χ z 0,i are the electric susceptibilities of the ellipsoidal dust for the incident electric field polarized along with each axis. These electric susceptibilities are expressed by a dielectric constant ε i for the spherical dust grains and L min i (see Equations (A15)-(A17) in Draine & Hensley 2017;Nashimoto et al. 2020b). The dielectric constant ε i is given by the electric susceptibility of the spherical dust grain, that is χ 0,i , as The electric susceptibilities are given by following equations, χ 0,a-C = χ res 0,a-C + χ tun 0,a-C + χ hop 0,a-C + χ lat 0,a-C + χ free 0,a-C . The first three terms in the right hand side of each equation are the TLS contributions of the electric susceptibilities where χ res 0,i is attributed to the resonance transition between the two levels, χ tun 0,i and χ hop 0,i describes the quantum tunneling and thermal hopping relaxation processes to catch up with a shift of the energy level caused by the incident of the electromagnetic waves (Phillips 1987;Meny et al. 2007;Nashimoto et al. 2020b). The contribution from the lattice vibration χ lat 0,i is given by the superposition of the Lorentz models. The carbonaceous dust grain is considered to contain free electrons because a-C might be an intermediate material between conductive graphite and non-conductive diamond. Therefore, the free electron contribution calculated by the Drude model (see, e.g., Bohren & Huffman 1983) χ free 0,i is taken into account in a-C. In this study, the contribution from free electrons in a-C is adopted as the graphite model provided by Draine & Lee (1984). A basal plane could not be defined for a-C. Therefore, the electric susceptibilities of the perpendicular and parallel to the basal plane provided by Draine & Lee (1984) are averaged with a weight of 1 : 2. The electric susceptibilities originated from the TLS are expressed as follows (Nashimoto et al. 2020b), where ∆ max 0,i and ∆ min 0,i are the maximum and the minimum of the tunneling splitting energy ∆ 0 , d 0,i is the expectation value of the electric dipole moment at the potential minimum for an atom, τ +,i is the dephasing time, τ tun,i and τ hop,i are the relaxation time for the tunneling and the hopping, respectively (see Meny et al. 2007;Nashimoto et al. 2020b), n i is the atomic number density of a dust grain, f TLS i is a fraction of atoms showing the TLS for each dust species, E = ω 0 is the energy splitting of the TLS, u is the ratio of ∆ 0 to E, V 0 is the height of the potential barrier, f (V 0 ) is the distribution function of V 0 modeled by the Gaussian with the mean of 550 K×k B and the deviation of 410 K×k B , and ω is the angular frequency of the incident electric field. We assume that the dephasing time of a-Si, τ +,a-Si , is much longer than ω −1 0 . This is equivalent to assume that the resonance transition probability corresponding to the energy scale between the two levels is extremely small. Under these assumptions, χ res 0,a-Si is reduced to: Equation (14) coincides with the formula presented by Meny et al. (2007). Since, in our model, the main contributor in the frequency range beyond the infrared are big grains, we neglect the size distribution of the dust grains and the dust size is fixed to a i = 0.1 µm where a i ≡ (a x,i a y,i a z,i ) 1/3 . 
We can safely assume that a big grain stays at the temperature defined by the equilibrium between the heating by the interstellar radiation field (ISRF) and the radiative cooling. To calculate the equilibrium temperature of each species, the following relations provided by Tielens (2005) are adopted: where G 0 is the scale factor of the ISRF, and power law dependence of wavelength for the long-wavelength dust opacity with power law index of 2 for a-Si and 1.8 for a-C are assumed (Tielens 2005). Although the dust opacity in our model shows complex wavelength dependence far from the power law model and the above relations are not self-consistent with our dust opacity model, the adopted relations can be used as good estimator of dust temperature as discussed in Section 4. Since the wavelength dependence of dust opacity in our model depends on the dust temperature, adopting the above relations dramatically reduces fitting cost. In this study, G 0 is treated as one of the fitting parameters where G 0 = 1.7 represents the average value of the ISRF in the Galactic interstellar space. A relative abundance of a-C to a-Si in number is fixed to reproduce the accumulative dust mass ratio M a-Si /M a-C of about 1.2 given by Hirashita & Yan (2009). The total column density of the dust defined by N d = N a-Si + N a-C , is treated as a fitting free parameter. For a-Si, f TLS a-Si and L min a-Si are fitting free parameters. The ∆ max 0,a-Si , ∆ min 0,a-Si and τ + a-Si of a-Si are set to reproduce the model proposed by Meny et al. (2007). In the case of a-C, ∆ max 0,a-C , ∆ min 0,a-C and τ + a-C are treated as fitting free parameters additional to f TLS a-C and L min a-C . The adopted values of the physical parameters for a-Si and a-C are summarized in Table 1. The tunneling relaxation time τ tun,i is evaluated from the mass density of a dust grain ρ i , the sound velocity for transverse waves c t,i , and the elastic dipole for transverse waves γ t,i by using the formula described in Phillips (1987); Meny et al. (2007); Nashimoto et al. (2020b). We assume that the pre-exponential factor for hopping relaxation time τ 0 hop,i have the same value for each dust species. COMPARISON WITH OBSERVATIONS The predicted spectra of thermal emission from the amorphous dust composed of a-Si and a-C are compared with the observed intensity and polarization spectra of Perseus and W 43 molecular clouds (Nashimoto et al. 2020b). The contribution of the foreground and background interstellar matter of each molecular cloud has already been removed from these data. In addition to the dust emission, the free-free emission attributed to each molecular cloud can be seen in the intensity spectra. The frequency dependence of the free-free contributions was modeled by the formula given by Planck Collaboration et al. (2011). The emission measures (EMs), which are equivalent to the amplitude of the free-free emission, were treated as fitting free parameters. The cosmic microwave background (CMB) temperature anisotropy was already removed from the spectra of the W 43. On the other hand, the CMB temperature anisotropy has not been removed from the spectra of the Perseus. Therefore, the CMB temperature anisotropy was taken into account when the Perseus intensity spectra were fitted. The absorption of the CMB monopole due to interstellar dust, which is named the CMB shadow by Nashimoto et al. (2020a), is taken into account in the fitting self consistently. 
To perform the fitting, we use emcee Markov Chain Monte Carlo software (Foreman-Mackey et al. 2013). The means of the probability density distributions for each parameter estimated from the MCMC method are adopted as the values of the best-fit model. - † * We define δL a-C ≡ 1/3 − L min a-C . † Since the contribution of the CMB temperature anisotropy is removed from the data of W 43, δT CMB is not a fitting parameter. The intensity and polarization spectra of the best-fit model are overlaid on the observed spectra for each molecular cloud in Figure 1. The best-fit parameters are summarized in Table 2. Our model provides an excellent fit to the observed intensity and polarization spectra simultaneously from microwave through far infrared. In this model, the AME originates from the resonance emission of the TLS in a-C, and the dominant contributor to the intensity spectra from far infrared to AME is a-C. The observed polarized emission in the submillimeter waveband is attributed to a-Si. Our model predicts that almost all polarized emission is originated from a-Si. On the other hand, the shape of a-C is very close to spherical and the polarized radiation emitted from a-C is negligibly small. According to Draine & Fraisse (2009), the dust model composed of spherical a-C and ellipsoidal a-Si is compatible with all the observables from optical through far infrared. Figure 1 shows the frequency dependence of the expected polarization fraction of dust. The polarization fraction at 17 GHz predicted by our model is 5.763 × 10 −3 for the Perseus cloud and 2.983 × 10 −4 for the W 43 cloud. From the 3σ lower limit of L min a-C , the 3σ lower limit of the polarization fraction is provided as 8.129 × 10 −5 and 8.012×10 −6 , and the ratio of polarized emission of a-C to that of a-Si is 0.1554 and 0.08180 for the Perseus and the W 43, respectively. These results are consistent with the observed upper limit (Génova-Santos et al. 2015;. Since the dominant contributor of the polarized emission of the AME is a-C in our model, eventually a detection of polarized AME emission would allow to constrain the degree of the asphericity of a-C that is L min a-C . Although Nashimoto et al. (2020b) predicted a 90-degree flip in the polarization direction in AME frequency range for W 43, it did not happen in the current model. This originates from the difference of the adopted models to describe the contribution due to lattice vibration. Nashimoto et al. (2020b) applied the disordered charge distribution (DCD) model (Schlömann 1964) as a contribution due to lattice vibration, while the model proposed by Draine & Lee (1984) is applied in this study. The flip of the polarization direction occurs when the sign of the real part of the electric susceptibility of dust are reversed. The real part of the electric susceptibility due to the resonance transition becomes negative around the resonance frequency. When the absolute value of the real part of the electric susceptibility due to the resonance transition is larger than that of other contributions, the real part of the electric susceptibility of the whole dust becomes negative. The real part of the electric susceptibility due to the lattice vibration proposed by Draine & Lee (1984) is about one order of magnitude larger than that of the DCD model, and its absolute value is larger than the contribution from the resonance transition in all frequency range. 
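To illustrate the fitting procedure, the following is a minimal emcee sketch using a toy single-component modified-blackbody spectrum in place of the full amorphous-dust opacity model; the frequency grid, mock data, priors, and parameter values are hypothetical and only demonstrate how posterior means are extracted as best-fit values.

```python
# Minimal sketch of an emcee-based SED fit with a toy intensity model
# (amplitude, spectral index, temperature); not the dust model of this work.
import numpy as np
import emcee

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8

def planck(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def model(theta, nu):
    log_amp, beta, T = theta
    return 10.0**log_amp * (nu / 353e9)**beta * planck(nu, T)

def log_prob(theta, nu, y, yerr):
    log_amp, beta, T = theta
    if not (-30 < log_amp < 0 and 0 < beta < 4 and 5 < T < 50):   # flat priors
        return -np.inf
    return -0.5 * np.sum(((y - model(theta, nu)) / yerr)**2)

nu = np.geomspace(30e9, 3e12, 20)                                  # toy frequency grid
truth = (-18.0, 1.8, 18.0)
y = model(truth, nu) * (1.0 + 0.05 * np.random.randn(nu.size))
yerr = 0.05 * model(truth, nu)

ndim, nwalkers = 3, 32
p0 = np.array(truth) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(nu, y, yerr))
sampler.run_mcmc(p0, 3000, progress=False)
best_fit = sampler.get_chain(discard=1000, flat=True).mean(axis=0)  # posterior means
print(best_fit)
```

The posterior mean of each parameter, as in the last line, plays the role of the best-fit value adopted in the analysis above.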
Under ideal conditions (uniform magnetic field component perpendicular to the line of sight, perfectly aligned dust grains, no turbulent component of the magnetic fields) our best fitting models show that the shape of a-Si and a-C of Perseus and W 43 are close to perfect sphere. Such results are roughly consistent with optical polarimetry on Perseus (Goodman et al. 1990) and high resolution polarization map of a dense core in W 43 at 350 µm (Dotson et al. 2010). The derived dust column density for Perseus is consistent with the visual extinction extracted from 2MASS (Schnee et al. 2008). The visual extinction inferred from the derived dust column density for W 43 is A V = 40 because A V = 1.086πa 2 Q ext V N d where we assume the extinction efficiency at the visual band Q ext V of 1.5. This is consistent with A V extracted from the Planck map (Planck Collaboration et al. 2016). More realistic comparisons with dust grain models taking into account environment effects are discussed in the next section. DISCUSSION We have shown that the thermal emission of the amorphous dust composed of a-Si and a-C grains provides excellent fit to both the observed intensity and the polarization spectra of molecular clouds simultaneously. By taking into account a-C, the model prediction of the polarization fraction of the AME is reduced dramatically compared with the prediction made by Nashimoto et al. (2020b). The AME originates from the resonance transition of the TLS attributed to the a-C with almost spherical shape. On the other hand, the observed polarized emission in submillimeter wavebands is coming from a-Si. The systematic errors brought by adopting relations (15) and (16) to estimate dust temperatures are discussed. Since the emission from a-C is the dominant contributor to the intensity SEDs in our model, the temperature of a-C is defined robustly by the far infrared peak position of the intensity SEDs. In Perseus, by adopting the opacity model with the best fit parameters for a-C shown in Table 2, G 0 = 2.363 is obtained to reproduce the best temperature of a-C under the energy balance condition between the radiative heating and cooling. This means a reduction of 20% from the best fit value, G 0 = 3.048, obtained by using Equations (15) and (16). This lower value of G 0 translates by an equilibrium temperature of a-Si of 14.24 K when the opacity model with the best fit parameters for a-Si shown in Table 2 are adopted, except the temperature. The deduced temperature coincides with the best fit temperature shown in Table 2 very well. Therefore, we conclude that relations (15) and (16) can be used as good proxies of dust temperature. For W 43, caution must be paid to use these relations since they apply to optically thin clouds. The SED of the ISRF has prominent peak in the near infrared regime (Mathis et al. 1983). The near infrared extinction inferred from A V = 40 for W 43 is from a few to 10 magnitudes (Gao et al. 2013). To model the thermal state of dust grains in W 43, one would have to solve for the radiative transfer of the ISRF through the cloud which is out of the framework of the present work. However, our treatment is enough to show the potential ability of amorphous dust model to fit intensity and polarization SEDs simultaneously from AME through the far infrared peak. 
Since we neglected a reduction of the polarization fraction due to astronomical effects (Hildebrand & Dragovan 1995), the actual interstellar dust may have a lower value of the shape parameters, L min i , than those reported in this paper and may have larger ellipticity. Our shape distribution model described in Section 2 includes oblate and prolate spheroids for each value of L x,a-Si . The axial ratio between major and minor axes of oblate (a x,a-Si = a y,a-Si ≥ a z,a-Si ) and prolate (a x,a-Si ≥ a y,a-Si = a z,a-Si ) spheroids limit when L x,a-Si = L min a-Si are shown in Table 3. The maximum allowed axial ratios of the oblate (prolate) spheroids in the best-fit models are about 1.4 (1.2) for a-Si and about 1.007 (1.004) for a-C in the Perseus molecular cloud, and about 1.05 (1.03) for a-Si and 1.0005 (1.0003) for a-C in the W 43 molecular cloud, respectively. These results indicate that, in order to make our model compatible with the observed data, the shapes of the interstellar dust are close to spherical. However, it is natural to assume that the observed polarization fraction of each molecular cloud suffers a significant reduction due to astronomical attenuation. Although the galactic magnetic field is assumed to be perfectly perpendicular to the line of sight in this study, it is certain that there are finite inclination and variation of the magnetic field direction along the line of sight. Local turbulence in the magnetic fields may also reduce the polarization fraction. Depolarization also occurs due to mixing different components along the line of sight with different polarization directions. In addition, beam depolarization leads to lower observed polarization fraction, especially in the data provided by QUIJOTE, whose beam widths are 1 • . Then, we take the maximum value of the observed polarization fraction of the interstellar dust at 353 GHz of 0.2 (Planck Collaboration et al. 2018) as the reference value of the intrinsic polarization fraction of the interstellar dust emission. The required shape parameters of a-Si, L min a-Si , and the maximum allowed axial ratios of the oblate and prolate spheroids to realize the polarization fractions of 0.2 at 353 GHz for each molecular cloud are also summarized in Table 3. It shows that the large variety of the shapes are allowed for a-Si if the intrinsic polarization fraction is 0.2. For comparison, the maximum allowed axial ratios of spheroids when the intrinsic polarization fractions at 353 GHz are 0.05 and 0.1, are shown in Table 3. How the shape constraints on a-C coming from our best-fit models are relaxed by taking into account the astronomical attenuation is estimated as follows. The model prediction of the polarization fraction of a-C at 353 GHz for the Perseus molecular cloud is increased a factor of 0.2/0.046 = 4.35, where the possible intrinsic polarization fraction of 0.2 is divided by the observed polarization fraction at 353 GHz of 0.046. The prediction of the polarization fraction of a-C from our best model at 353 GHz is 6 × 10 −3 . The possible intrinsic polarization fraction of a-C is estimated as 0.026. This results in δL a-C ≡ 1/3 − L min a-C = 6 × 10 −3 . The maximum allowed axial ratios of a-C are relaxed to 1.04 for oblates and to 1.02 for prolates, respectively. Similarly, in the W 43 molecular cloud, the maximum allowed axial ratios of a-C are relaxed to 1.03 for oblates and to 1.01 for prolates, respectively. The shape of a-C has to be still close to spherical. 
Imperfectness of the dust grains alignment relative to the magnetic field could further relax the constraint on the dust grain shape. It is known that the dust grains are not perfectly aligned relative to the magnetic field (Hildebrand & Dragovan 1995;Guillet et al. 2018). Guillet et al. (2018) pointed out although the large grain with a = 0.1 µm may be aligned almost perfectly, the degree of the alignment decreases as the size of dust decreases. Although further studies on the astronomical attenuation of the polarization fractions of the dust emission are required to extract the information of dust shape, it is certain that our model predicts that the shape of a-C is close to spherical and a-Si has variety of ellipticity. Figure 2 shows the temperature dependence of the heat capacity of a-C and a-Si with the best-fit parameters for the Perseus cloud summarized in Table 2. Below a few kelvins, the contribution from the TLS becomes dominant. The heat capacity of a-Si shows a linear dependence on temperature as observed in the laboratory experiments. The heat capacity of a-C shows a bump at around sub kelvin. The heat capacity of a single TLS is described by a function called a Schottky heat capacity, which has a peak at k B T i ≃ 0.42E. The temperature dependence of the heat capacity of the amorphous material is defined by the superposition of each TLS in the material, which has different E. The distribution of tunnel splitting energy ∆ 0 of a-C is limited to a narrow range in order to reproduce the intensity of AME. As a result, the distribution of E is also limited in a narrow range. This is the reason why the temperature dependence of a-C shows such peculiar characteristics. Our model prediction of the fraction of the atoms trapped in the TLS, f TLS a-Si , of a-Si is the order of comparable to the laboratory measurements for the amorphous material (Phillips 1987). On the other hand, our model prediction of f TLS a-C is about two orders of magnitude larger than that of a-Si. Because of this, the heat capacity of a-C predicted by the TLS model is much larger than the Debye heat capacity around 0.1 K. Since little has been done for the laboratory measurement of the heat capacity of the carbonaceous amorphous material, measuring the low-temperature behavior of the heat capacity of a-C in the laboratory is important to test our prediction. Another possibility is that the characteristics of the amorphous dust predicted by our model are specific to the interstellar dust grains. It does not affect the far infrared emission and AME since such a low temperature behavior of the heat capacity has little influence on the thermal history of big grains. Supposing the latter possibility, we name our model as the cosmic amorphous dust model, abbreviated to CAD. Testing CAD possibility of the origin of AME by laboratory experiments and astronomical observations is worth to do.
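The Schottky behaviour quoted above can be checked with a few lines of code; the following minimal sketch evaluates the heat capacity of a single two-level system and confirms numerically that it peaks near k_B T ≃ 0.42 E. The splitting scale used is an arbitrary illustrative value.

```python
# Minimal sketch: Schottky heat capacity of one TLS with splitting E,
# in units of k_B, with E / k_B set to 1 K for illustration.
import numpy as np

def schottky_heat_capacity(T, E_over_kB=1.0):
    """Heat capacity per TLS in units of k_B."""
    x = E_over_kB / T
    return x**2 * np.exp(x) / (1.0 + np.exp(x))**2

T = np.linspace(0.05, 3.0, 10000)            # temperature in kelvin
C = schottky_heat_capacity(T)
T_peak = T[np.argmax(C)]
print(f"peak at k_B*T = {T_peak:.3f} E")     # ~0.417, i.e. k_B*T ~ 0.42 E
```

The bump in the a-C heat capacity discussed above arises because the narrow distribution of tunnel splittings confines many such Schottky peaks to a small temperature range.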
2020-09-02T01:01:27.703Z
2020-08-31T00:00:00.000
{ "year": 2020, "sha1": "ada9c35ab5f3d5a57bfeeb0dc3f249e8d4675fac", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.3847/2041-8213/abb29d/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "ada9c35ab5f3d5a57bfeeb0dc3f249e8d4675fac", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
235421060
pes2o/s2orc
v3-fos-license
Improved Transformer Net for Hyperspectral Image Classification : In recent years, deep learning has been successfully applied to hyperspectral image classification (HSI) problems, with several convolutional neural network (CNN) based models achieving an appealing classification performance. However, due to the multi-band nature and the data redundancy of the hyperspectral data, the CNN model underperforms in such a continuous data domain. Thus, in this article, we propose an end-to-end transformer model entitled SAT Net that is appropriate for HSI classification and relies on the self-attention mechanism. The proposed model uses the spectral attention mechanism and the self-attention mechanism to extract the spectral– spatial features of the HSI image, respectively. Initially, the original HSI data are remapped into multiple vectors containing a series of planar 2D patches after passing through the spectral attention module. On each vector, we perform linear transformation compression to obtain the sequence vector length. During this process, we add the position–coding vector and the learnable–embedding vector to manage capturing the continuous spectrum relationship in the HSI at a long distance. Then, we employ several multiple multi-head self-attention modules to extract the image features and complete the proposed network with a residual network structure to solve the gradient dispersion and over-fitting problems. Finally, we employ a multilayer perceptron for the HSI classification. We evaluate SAT Net on three publicly available hyperspectral datasets and challenge our classification performance against five current classification methods employing several metrics, i.e., overall and average classification accuracy and Kappa coefficient. Our trials demonstrate that SAT Net attains a competitive classification highlighting that a Self-Attention Transformer network and is appealing for HSI classification. Introduction Hyperspectral image (HSI) conceives high-dimensional data containing massive information in both the spatial and spectral dimensions.Given that ground objects have diverse characteristics in different dimensions, hyperspectral images are appealing for ground object analysis, ranging from agricultural production, geology, and mineral exploration to urban planning and ecological science [1][2][3][4][5][6][7][8][9][10].Early attempts exploiting HSI mostly employed support vector machines (SVM) [11][12][13], K-means clustering (KNN) [14], and polynomial logistic regression (MLR) [15] schemes.Traditional feature extraction mostly relies on feature extractors designed by human experts [16,17] exploiting the domain knowledge and engineering experience.However, these feature extractors are not appealing in the HSI classification domain as they ignore the spatial correlation and local consistency and neglect exploiting the spatial feature information of HSI.Additionally, the redundancy of HSI data makes the classification problem a challenging research problem. In recent years, deep learning (DL) has been widely used in the field of remote sensing [18].Given that deep learning can extract more abstract image features, the literature suggests several DL-based HSI classification methods.Typical examples include Stacked Autoencoder (SAE) [19][20][21], Deep Belief Network (DBN) [22], Recurrent Neural Network (RNN) [23,24], and Convolutional Neural Network (CNN) [25][26][27].For example, Dend et al. 
[19] use a layered and stacked sparse autoencoder to extract HSI features, while Wan et al. [20] propose a joint bilateral filter and a stacked sparse autoencoder, which can effectively train the network using only a limited number of labeled samples.Zhou et al. [21] employ a semi-supervised stacked autoencoder with co-training.When the training set expands, confidential predictions of unlabeled samples are generated to improve the generalization ability of the model.Chen et al. [22] suggest a deep architecture combined with the finite element of the spectral space using an improved DBN to process threedimensional HSI data.These methods [19][20][21][22][23][24] achieved the best results in the three datasets of IN, UP, and SA, as follows: 98.39% [21], 99.54% [19], and 98.53% [21], respectively.Zhou et al. [23] extend the long-term short-term memory (LSTM) network to exploit the spectral space and suggest an HSI classification scheme that treats HSI pixels as a data sequence to model the correlation of information in the spectrum.Hang et al. [24] use a cascaded RNN model with control loop units to explore the HSI redundant and complementary information, i.e., reduce redundant information and learn complementary information, and fuse different properly weighted feature layers.Zhong et al. [25] designed an end-to-end spectral-spatial residual network (SSRN), which uses a continuous spectrum and spatially staggered blocks to reduce accuracy loss and achieve better classification performance in the case of uneven training samples.In [26], the authors propose a deep feature fusion network (DFFN), which introduces residual learning to optimize multiple convolutional layers as identity mapping that can extract deeper feature information.Additionally, the work of [27] suggests a five-layered CNN framework that integrates the spatial context and the spectral information of HSI and integrates into the framework both spectral features and spatial context.Although current literature manages an overall appealing classification performance, the classification accuracy, network parameters, and model training should still be improved. Deep neural network models increase the accuracy of classification problems; however, as the depth of the network increases, they also cause network degradation and increase the difficulty of training.Prompted by He et al. [28], the residual network (ResNet) is introduced into the HSI classification [29][30][31] problem.Additionally, Paoletti et al. [30] design a novel CNN framework based on the feature residual pyramid structure, while Lee et al. [31] propose a residual CNN network that utilizes the context depth of the adjacent pixel vectors using residuals.These network models with residual structure afford a deep network that learns easier, enhances gradient propagation, and effectively solves deep learning-related problems such as gradient dispersion. Due to the three-dimensional nature of HSI data, current methods have a certain degree of spatial or spectral information loss.To this end, 3D-CNNs are widely used for HSI classification [32][33][34][35], with Chen et al. [32] proposing a 3D-CNN finite element model combined with regularization that uses regularization and virtual sample enhancement methods to solve the problem of over-fitting and improve the model's classification performance.Seydgar et al. 
[33] suggest an integrated model that combines a CNN with a convolutional LSTM (CLSTM) module that treats adjacent pixels as a sequence of recursive processes, and makes full use of vector-based and sequence-based learning methods to generate deep semantic spectral-spatial characteristics, while Rao et al. [34] develop a 3D adaptive spatial, spectral pyramid layer CNN model (ASSP-SCNN), where the ASSP-SCNN can fully mine spatial and spectral information, while additionally, training the network with variable sized samples increases scale invariance and reduces overfitting.In [35] the authors suggest a deep CNN (DCNN) scheme that during network training combines an improved cost function and a Support vector machine (SVM) and adds category separation information to the cross-entropy cost function promoting the between-classes compactness and separability during the process of feature learning.These methods [32][33][34][35] achieved the best results in the three datasets of IN, UP, and SA, respectively, of 99.19%, 99.87%, and 98.88% [33].However, despite the appealing accuracy of CNNbased solutions, these impose a high computational burden and increase the network parameters.The models proposed in [33] and [35] converge at 50 and 100 epochs, respectively.To solve this problem, quite a few algorithms extract the spatial and spectral features separately and introduce the attention mechanism for HSI classification [36][37][38][39][40][41].For example, Zhu et al. [36] propose an end-to-end residual spectral-spatial attention network (RSSAN), which can adaptively realize the selection of spatial information and spectrum information.Through the function of weighted learning, this module enhances the information features that are useful for classification, and Haut et al. [37] introduce the attention mechanism into the residual network (ResNet), suggesting a new vision attentiondriven technology that considers bottom-up and top-down visual factors to improve the feature extraction ability of the network.Wu et al. [38] develop a 3D-CNN-based residual group channel and space attention network (RGCSA) appropriate for HSI classification combining bottom-up and top-down attention structures with residual connections, making full use of context information to optimize the features in the spatial dimension and focus on the area with the most information.This method achieved 99.87% and 100% overall classification accuracy on the IN and UP datasets, respectively.Li et al. [39] design a space spectrum attention network (JSSAN) to simultaneously capture the remote interdependence of spatial and spectral data through similarity assessment, and adaptively emphasize the characteristics of informational land cover and spectral bands, and Mou et al. [40] improve the network by involving a network unit for the spectral attention module using the global spectrum space context and the learnable spectrum attention module to generate a series of spectrum gates reflecting the importance of the spectrum band.Qing et al. [41] propose a multi-scale residual network model with an attention mechanism (MSRN).The model uses an improved residual network and spatial-spectral attention module to extract hyperspectral image information from different scales multiple times, fully integrates and extracts the spatial spectral features of the image.A good classification effect has been achieved on the HSI classification problem.These methods [36][37][38][39][40][41] achieved the best result in the SA dataset of 99.85% [37]. 
Although CNN models manage good results on the HSI classification problem, these models still have several problems.The first one being that the HSI classification task is at the pixel level, and thus due to the irregular shape of the ground objects, the typical convolution kernel is unable to capture all the features [42].Another deficiency of CNNs is the small-size convolution kernel limiting the CNN's receptive field to match the hyperspectral features over their entire bandwidth.Thus, in-depth utilization of CNN is limited, and the requirements for convolution kernels of different classification tasks vary greatly.Due to the large HSI spectral dimensionality, it is not trivial to use long-range sequential dependence between distant positions of the spectral band information because it is difficult to use for CNN-based HSI classification specific context-based convolutional kernels to capture all the spectral features. Spurred by the above problems, this paper proposes a self-attention-based transformer (SAT) model for HSI classification.Indeed, a transformer model was initially used for natural language processing (NLP) [43][44][45][46][47], achieving great success and attracting significant attention.To date, transformer models have been successfully applied to computer vision fields such as image recognition [48], target detection [49], image super-resolution [50], and video understanding [51].Hence, in this work, the proposed SAT Net model first processes the original HSI data into multiple flat 2D patch sequences through the spectral attention module and then uses their linear embedding sequence as the input of the transformer model.The image feature information is extracted via a multi-head selfattention scheme that incorporates a residual structure.Due to its core components, our model effectively solves the gradient explosion problem.Verification of the proposed SAT Net on three HSI public data sets against current methods reveals its appealing classification performance. The main contributions of this work are as follows: 1. Our network employs a spectral attention module and uses both the spectral attention module and the self-attention module to extract feature information avoiding feature information loss.2. The core process of our network involves an encoder block with multi-head self-attention, which successfully handles the long-distance dependence of the spectral band information of the hyperspectral image data.3.In our SAT Net model, multiple encoder blocks are directly connected using a multilevel residual structure and effectively avoid information loss caused by stacking multiple sub-modules.4. Our proposed SAT Net is interpretable, enhancing its HSI feature extraction capability and increasing its generalization ability.5. Experimental evaluation on HSI classification against five current methods highlights the effectiveness of the proposed SAT Net model. The remainder of this article is organized as follows.Section 2 introduces in detail the multi-head self-attention, the encoder block, the spectral attention, and the overall architecture of the proposed SAT Net.Section 3 analyzes the interplay of each hyper-parameter of SAT Net against five current methods.Finally, Section 4 summarizes this work. Methodology In this section, we first introduce the Spectral attention module, then we derive a detailed formula for the multi-head self-attention module and the encoder module.Finally, we give the detailed HSI classification process of the proposed model. 
Spectral Attention Block The attention mechanism [52] imitates the internal process of biological observation behavior. It is a mechanism that aligns internal experience and external sensation to increase observation precision and can quickly extract the important features of sparse data. The attention mechanism is currently an important concept in neural networks and is widely used in several computer vision tasks [53]. In this paper, we introduce the spectral attention module to enhance the feature extraction ability of the proposed deep learning network. Given a feature map F ∈ R^(H×W×C) as input, we define a 1-D spectral attention map M_s(F) ∈ R^(1×1×C). The purpose of using spectral attention is to extract the features useful for HSI classification by re-weighting the spectral information. As presented in Equation (1), the attention map is obtained by feeding the globally average-pooled descriptor y_avg and the globally max-pooled descriptor y_max through a shared MLP and combining the two outputs, and the output of the block is y_out = M_s(F) ⊗ F, (1) where F(m, n) denotes the feature vector of F at spatial position (m, n), ⊗ represents element-wise multiplication, y_out is the output of spectral attention, and max(·) denotes the maximum over the spatial area; y_avg and y_max represent the results of global average and global maximum pooling, respectively. The first FC layer of the shared MLP is a dimensionality-reduction layer parameterized by W0, while the second FC layer is a dimensionality-increasing layer parameterized by W1; δ refers to the ReLU activation function, and W0 and W1 are weights shared between the two pooling branches. Finally, we multiply M_s(F) with the input to obtain y_out. 
The spectral attention module is presented in Figure 1, where we use global average and global maximum pooling to extract the spectral information of the image. The two different pooling schemes extract more abstract spectral features, which are then passed through the two FC layers and activation functions to establish the channel information of the two pooling branches. Then, we combine the weights of the two spectral feature channels. Finally, the newly assigned feature weights are multiplied with the input feature map to correct its weights and allow higher-level feature information to be extracted. Figure 1. Spectral attention mechanism. The module uses operations such as maximum pooling, average pooling, and shared weights to re-output feature maps with different weights. Multi-Head Self-Attention A CNN is strictly limited by its kernel size and number of layers, which weakens its ability to capture the long-range dependence of the input data [52] and ultimately forces it to ignore some of the sequence information of the HSI input. The self-attention mechanism improves on the attention mechanism: it reduces the dependence on external information and better captures the internal correlations and characteristic information of the data. In this work, we utilize a self-attention variant to extract image features, namely the multi-head self-attention module. We initially remap X_i to q_i, k_i, and v_i by utilizing the three initialization transformation matrices W_q, W_k, and W_v: q_i = W_q X_i, (2) k_i = W_k X_i, (3) v_i = W_v X_i, (4) where X_i is one of the flattened 2D blocks of equal size obtained after the original HSI data have been processed by the spectral attention module. W_q, W_k, and W_v are three different weight matrices that linearly transform the input vector; performing three different linear transformations on each input yields the intermediate vectors q_i, k_i, and v_i and ultimately increases the diversity of the model's feature sampling. 
Then, we calculate the weight vector a_i according to the q and k parameters obtained from Equations (2) and (3), respectively, which is expressed as a_{i,j} = exp((q_i · k_j)/√d) / Σ_m exp((q_i · k_m)/√d), (5) where i, j, m ∈ {1, …, N + 1}, with N the number of flattened 2D blocks (Section 2.3 presents a detailed calculation of N). That is, we apply the dot-product operation to q_i and k_j and divide by √d, where d is the dimension of q and k, to normalize the data; finally, the weight vector a_i is output through a softmax function. The a_i vector depends on the q_i vector and on all k vectors, and thus Equation (5) ultimately produces in total N + 1 vectors with a length of N + 1 per vector. Next, we combine Equations (4) and (5) to obtain the v and a vectors and perform a weighted-average operation to calculate the vector c_i: c_i = Σ_j a_{i,j} v_j. (6) The output vector of Equation (6) is the weighted average of all v vectors, with the weights provided by the a_i vector. Our deep learning pipeline builds a multi-head self-attention block by concatenating multiple self-attention heads, with the detailed process presented in Figure 2. The multi-head self-attention uses the vectors produced by Equation (6), employing different W_q, W_k, and W_v parameters during the matrix operations in Equations (2)-(4) to obtain different output vectors. Ultimately, all outputs are stacked, forming the multi-head self-attention output. Finally, the latter output passes through a fully connected layer to create N + 1 u vectors, where each u vector has a one-to-one correspondence with X_i. Figure 2. Multi-Head Self-Attention structure: after mapping, linear transformation, and matrix operations, the output sequence has the same length as the input sequence, and each output vector depends on all input vectors. Encoder Block According to the transformer concept employed in NLP and the suggestion of Dosovitskiy et al. [54], an image x ∈ R^(H×W×C) can be remapped into a sequence of flattened 2D patches x_p ∈ R^(N×(P²·C)). We extend [54] and add a processing step in which the patch image obtained from the original data is passed through the spectral attention block to extract the relevant features. Thus, we ultimately obtain N flattened 2D blocks of the same size, the dimension of each block being (P²·C), where P is the chosen patch size, N = H·W/P², and H, W, and C are the width, height, and number of channels of the image, respectively. Then, for each vector, we perform a linear transformation (fully connected layer) and compress the dimension (P²·C) into dimension D. As a reference, we use the encoder model of the transformer, and since the decoder model is not used, we add a learnable embedding vector x_class and introduce a positional encoding E_pos. This process is represented by z_0 = [x_class; x_p^1 E; x_p^2 E; x_p^3 E; … ; x_p^N E] + E_pos, (7) where E ∈ R^((P²·C)×D) represents the linear transformation layer, (P²·C) is the input dimension, and D is the output dimension. The trainable variable E_pos is used to represent the position information of the sequence. When the positions are close, they often have similar codes, and the patches in the same line/column also have similar position codes. 
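As an illustration of Equations (2)-(7), the following is a minimal TensorFlow sketch — not the released SAT Net implementation — of the patch embedding with a learnable class token and positional encoding, and of scaled dot-product multi-head self-attention. The batch size, number of patches, patch length, embedding dimension, and number of heads are illustrative assumptions.

```python
# Minimal sketch: patch embedding (Eq. 7) and multi-head self-attention (Eqs. 2-6).
import tensorflow as tf

class PatchEmbedding(tf.keras.layers.Layer):
    def __init__(self, num_patches, dim):
        super().__init__()
        self.proj = tf.keras.layers.Dense(dim)  # linear map (P*P*C) -> D
        self.cls = self.add_weight(name="cls", shape=(1, 1, dim), initializer="zeros")
        self.pos = self.add_weight(name="pos", shape=(1, num_patches + 1, dim),
                                   initializer="zeros")

    def call(self, patches):                          # patches: (B, N, P*P*C)
        x = self.proj(patches)                        # (B, N, D)
        cls = tf.repeat(self.cls, tf.shape(x)[0], axis=0)
        return tf.concat([cls, x], axis=1) + self.pos # z_0 of Eq. (7)

class MultiHeadSelfAttention(tf.keras.layers.Layer):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.h, self.dk = heads, dim // heads
        self.wq, self.wk, self.wv = [tf.keras.layers.Dense(dim) for _ in range(3)]
        self.out = tf.keras.layers.Dense(dim)         # final fully connected layer

    def split(self, t):                               # (B, N+1, D) -> (B, h, N+1, dk)
        b, n = tf.shape(t)[0], tf.shape(t)[1]
        return tf.transpose(tf.reshape(t, [b, n, self.h, self.dk]), (0, 2, 1, 3))

    def call(self, z):
        q, k, v = self.split(self.wq(z)), self.split(self.wk(z)), self.split(self.wv(z))
        a = tf.nn.softmax(tf.matmul(q, k, transpose_b=True) /
                          tf.math.sqrt(tf.cast(self.dk, tf.float32)), axis=-1)  # Eq. (5)
        c = tf.matmul(a, v)                           # weighted average, Eq. (6)
        b, n = tf.shape(z)[0], tf.shape(z)[1]
        return self.out(tf.reshape(tf.transpose(c, (0, 2, 1, 3)), [b, n, self.h * self.dk]))

# Example: 64 flattened 2D patches of length 16*16*4, embedded into D = 64.
z0 = PatchEmbedding(num_patches=64, dim=64)(tf.random.normal((2, 64, 16 * 16 * 4)))
u = MultiHeadSelfAttention(dim=64, heads=4)(z0)
print(z0.shape, u.shape)   # (2, 65, 64) (2, 65, 64)
```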
We design the encoder block by utilizing several operations, including layer normalization, multi-head self-attention, and dense layers, as expressed in Equation (8) and illustrated in Figure 3: z'_l = MHSA(LN(z_{l−1})) + z_{l−1}, z_l = MLP(LN(z'_l)) + z'_l, (8) where LN represents Layer Normalization and MHSA multi-head self-attention. It is worth noting that in the latter figure, the Gaussian Error Linear Unit (GELU) [55] activation function introduces the idea of random regularization, allowing the network to converge faster and increasing the model's generalization ability. Additionally, we employ multiple residual blocks to eliminate problems such as gradient dispersion. The Multilayer Perceptron (MLP) contains two layers with a GELU non-linearity. Finally, depending on the scenario, the encoder block presented in Figure 3 can be stacked multiple times as required to achieve a high HSI classification accuracy; this is discussed in Section 3.3. Overview of the Proposed Model Finally, the vectors obtained through the stacked encoder modules are input to two fully connected layers employing GELU activation functions. Then, we exploit the first of the output vectors, i.e., the one corresponding to the learnable classification embedding, to obtain the final classification result, which is expressed as y = MLP_head(z_L^0), (9) where x_class is the additional embeddable vector used for classification, z_L^0 refers to its corresponding output from the last encoder block, and MLP_head consists of the dense, GELU, and dense blocks presented in Figure 4. The execution process of the entire SAT network is shown in the latter figure. After the original HSI data are processed, they are input into the spectral attention module and the encoder modules with multi-head self-attention to extract HSI features. Second, the encoder modules are connected through a multilayer residual structure, thereby effectively reducing information loss; finally, the classification information is output through the fully connected layers. First, around each pixel, we extract patches of block size IS × IS × B, with the third dimension B being the number of spectral bands of the respective HSI, while for the edge pixels that cannot be directly extracted, we employ a padding operation. Ultimately, we obtain the final sample data with shape (m, IS, IS, B), where m is the number of samples and IS is the width and height of each sample. A detailed analysis of the sample size is presented in Section 3.3. The processed sample data are passed through the spectral attention module to redistribute the weights of the spectral information. Since the spectral attention mechanism does not change the shape of the input feature map, the shape of the output sample data is still (m, IS, IS, B). 
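Similarly, the encoder block of Equation (8) and the residual inter-connection of consecutive blocks can be sketched as follows; the MLP hidden size and the exact wiring of the inter-block residuals are assumptions for illustration, and the MultiHeadSelfAttention layer is the one defined in the previous sketch.

```python
# Minimal sketch of the encoder block in Eq. (8): LN -> MHSA -> residual,
# then LN -> MLP(GELU) -> residual, plus a residual stacking of several blocks.
import tensorflow as tf

class EncoderBlock(tf.keras.layers.Layer):
    def __init__(self, dim=64, heads=4, mlp_dim=128):
        super().__init__()
        self.ln1 = tf.keras.layers.LayerNormalization()
        self.mhsa = MultiHeadSelfAttention(dim, heads)        # defined in the previous sketch
        self.ln2 = tf.keras.layers.LayerNormalization()
        self.mlp = tf.keras.Sequential([
            tf.keras.layers.Dense(mlp_dim, activation=tf.nn.gelu),  # GELU non-linearity
            tf.keras.layers.Dense(dim),
        ])

    def call(self, z):
        z = z + self.mhsa(self.ln1(z))     # z'_l = MHSA(LN(z_{l-1})) + z_{l-1}
        return z + self.mlp(self.ln2(z))   # z_l  = MLP(LN(z'_l)) + z'_l

# Residual connection between consecutive encoder blocks (one possible realization).
z = z0                                     # z0 from the patch-embedding sketch
for block in [EncoderBlock() for _ in range(3)]:
    z = z + block(z)
print(z.shape)                             # (2, 65, 64)
```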
Once the raw HSI data are remapped into a set of ( × × ) image patches, we process each sample into an × × sequence of flattened 2D patches with shape (P, P, o).However, the transformer-model expects a two-dimensional NxD matrix as an input (Remove the Batch_size dimension), where = × × is the sequence length and D the dimension of each vector of the sequence (Set to 64 in this article).Therefore, we reshape the × × 2D patches into a two-dimensional matrix of shape ( × × , o × P × P), and apply a linear transformation layer on the latter two-dimensional matrices to ultimately create a two-dimensional Matrix (N, D).Then, we introduce the embedding vector and the position code (as described in Section 2.2) and create a matrix of size (Batch_size, N + 1, D) (Add Batch_size dimension) used as the input to the encoder block.Here, we use multiple encoder modules (the specific number of modules is discussed in Section 3.3.3)to continue extracting image features.In contrast to Dosovitskiy et al. [54], we change the direct connection of a single encoder module and employ the residual structure to inter-connect each encoder module, with the detailed process shown in Figure 4.This strategy affords to reduce the information loss caused by stacking multiple encoder modules, and the model convergence speed is accelerated.The classification results are finally output through two fully connected layers. Experiments, Results, and Discussion In this section, we first introduce three publicly available HSI data sets and then analyze the five factors that influence the classification accuracy of the proposed model.Finally, we challenge the proposed model against current state-of-the-art methods and discuss the experimental results. Data Set Description For our experiments, we consider three publicly available HSI data sets, namely the Salinas (SA), the Indian Pines (IN), and the University of Pavia (UP).Detailed information on all datasets is presented in Table 1.This dataset includes HSI collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor in Salinas, California, USA.It has 224 spectral bands and a spectral resolution of 400~2500 nm.Each HSI has a size of 512 × 217 pixels and a spatial resolution of 3.7 m/pixel.This dataset has in total 54,129 marked pixels presenting 16 object classes.The pseudo-color image and the corresponding ground truth map are illustrated in Figure 5, with the sample division ratio of the training and the test set shown in Table 2.This dataset was collected by the AVIRIS sensor in Northwestern Indiana, USA involving 200 spectral bands and a spectral resolution of 400~2500 nm.It includes an HSI of 145 × 145 pixels and a spatial resolution of 20 m/pixel, with 10,249 marked pixels involving 16 object classes.The pseudo-color image and ground truth map are presented in Figure 6.The sample ratio between the training and the test set is shown in Table 3.The Reflective Optics Spectrographic Imaging System (ROSIS) sensors collected this HSI in Pavia, Italy, involving imagery of 610 × 340 pixels and a spatial resolution of 1.3 m/pixel.The spectral bands are 103 with a resolution of 430~860 nm.In total, there are 42,776 marked pixels of nine object classes.The pseudo-color image and ground truth map are shown in Figure 7, with the training and test sets presented in Table 4.We randomly selected 20% of the dataset for training for our experiments, and the remaining 80% was for testing.A detailed experimental analysis is presented in Section 3.2. 
Experimental Setup We evaluate the performance of the proposed SAT Net model on an Intel® Xeon® Gold 5218 with 512 GB RAM and an NVIDIA (Santa Clara, CA, USA) Ampere A100 GPU with 40 GB RAM. Our platform runs Windows 10 and uses the TensorFlow 2.2 deep learning framework with Python 3.7. We optimize the model by exploiting the Adam optimizer [56] with a batch size of 64 and employ the cross-entropy loss function for reverse gradient propagation. We also employ a five-fold cross-validation [57] scheme to train and test the model in the experiments of Sections 3.3.1 and 3.3.2. Specifically, we divide each data set into five parts, each accounting for 20% of the total data set. During each training round, four parts are used as the training set and one part is used as the test set. In total, we consider five rounds of training, each time exploiting a different subset of the data as the training and testing sets. Finally, the average performance over the five test runs is taken as the model's accuracy. In the experiments that follow, we quantitatively evaluate the performance of all competitor methods relying on the overall classification accuracy (OA), the average accuracy (AA), and the Kappa coefficient (K). Image Preprocessing The first batch of trials investigates the interplay between the hyperparameter setup and the overall classification performance of the proposed SAT Net. These hyperparameters are the extracted cube size, i.e., the size of the 3D extracted patch, the size of the 2D patches, the number of stacked encoder blocks, the learning rate, and the proportion of training to testing samples. Image Size (IS) In this trial, we investigate cube sizes of 16, 32, and 64, extracted around each pixel of the raw HSI data, with the corresponding results presented in Table 5. From the latter table, we observe that for IS = 16, the SA, IN, and UP datasets reach an OA of 97.18%, 93.42%, and 96.45%, respectively. Although these OA values are relatively high, they are still lower than the optimum performance attained for IS = 64. This is because a smaller extraction cube interferes with the spatial continuity; as IS increases, the performance also increases, and ultimately IS = 64 achieves the highest classification results. It should be noted that, due to our hardware, our trials are limited to a maximum of IS = 64. Patch Size (PS) In this experiment, we vary the size of the flattened 2D patch sequence. The evaluated PS values are inversely proportional to the number of linear embedding sequences that are input to the encoder block. Thus, we set PS to 4, 8, and 16, with the corresponding results presented in Table 5. The latter table is in line with the findings of Dosovitskiy et al. [54] regarding the patch size; hence, we employ a trial-and-reject strategy and conclude that PS = 16 yields an appealing performance, which we adopt for the trials that follow. 
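For reference, the evaluation protocol described above (five-fold cross-validation with OA, AA, and Kappa) can be sketched as follows; `train_and_predict` is a placeholder for any trained classifier and is not part of SAT Net, and the metric definitions are the usual ones (OA as overall accuracy, AA as the mean per-class accuracy, K as Cohen's kappa).

```python
# Minimal sketch of the five-fold cross-validation and OA/AA/Kappa metrics.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

def oa_aa_kappa(y_true, y_pred):
    cm = confusion_matrix(y_true, y_pred)
    oa = accuracy_score(y_true, y_pred)             # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))      # mean per-class accuracy
    k = cohen_kappa_score(y_true, y_pred)           # Kappa coefficient
    return oa, aa, k

def cross_validate(samples, labels, train_and_predict, n_splits=5, seed=0):
    """Average OA/AA/K over five folds; each fold trains on 4/5 and tests on 1/5."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(samples, labels):
        y_pred = train_and_predict(samples[train_idx], labels[train_idx],
                                   samples[test_idx])
        scores.append(oa_aa_kappa(labels[test_idx], y_pred))
    return np.mean(scores, axis=0)
```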
Depth Size Here, we vary the number of stacked encoder blocks within the proposed SAT Net, with the stack cardinality set to 2, 3, 4, 5, and 6.The corresponding experimental results are shown in Figure 8, highlighting that as the number of encoder blocks increases, the classification accuracy increases, but also the total network parameters affecting the difficulty during network training increase as well.However, increasing the model parameters too much will cause the model to overfit and ultimately reduce its classification accuracy.For our trials, an encoder block cardinality of three manages a classification performance of 99.91%, 99.03%, and 99.47%, for the SA, IN, and UP datasets, respectively. Training Sample Ratio The proportion of training vs. testing data affects the fitting process of the model during its training.Hence, we evaluate the training proportions of 3%, 5%, 10%, 20%, 30%, and 40% of the entire dataset, with the corresponding results presented in Figure 9. From the latter figure, we observe that when the proportion of the training set is 3% and 5%, the classification result of IN is poor, and this is because the total number of samples in the IN dataset is relatively small.However, when the proportion of the training set exceeds 20%, all three datasets achieve quite appealing classification results.For the subsequent trials, and to compare our technique against current methods, e.g., Zhong et al. [25], we set the training set ratio to 20% of the total samples. Learning Rate The learning rate affects the gradient descent rate of the model, and thus choosing an appropriate learning rate can control the convergence performance and speed of the model.For our experimental analysis, we set the learning rate to 0.0001, 0.0005, 0.001, and 0.005, respectively, with the corresponding results shown in Figure 10.We optimize SAT Net's performance by setting the learning rate for SA to 0.001 and UP and IN to 0.0005. Evaluation We challenge the proposed SAT Net against convolutional neural network (CNN) [58] (CNN architecture with five layers of weights), spectral attention module-based convolutional network (SA-MCN) [40] (Recalibrate spatial information and spectral information), three-dimensional convolutional neural network (3D-CNN) [32], and the spectral-spatial residual network (SSRN) [25], and the multi-scale residual network model with an attention mechanism (MSRN) [41].For fairness, we set the ratio of training set and test set to 2:8.We also optimize the model by exploiting the Adam optimizer [56] with a batch size of 64 and employ the cross-entropy loss function for reverse gradient propagation. 
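Returning to the depth and learning-rate trials above, the snippet below sketches how such a grid search can be organized; the `train_and_score` helper is hypothetical and stands in for one cross-validated training run of the SAT Net with the given settings.

```python
from itertools import product

# Candidate values taken from the trials described above
DEPTHS = [2, 3, 4, 5, 6]                        # number of stacked encoder blocks
LEARNING_RATES = [0.0001, 0.0005, 0.001, 0.005]

def sweep(train_and_score):
    """Grid search over encoder depth and learning rate.

    `train_and_score(depth, lr)` is assumed to return the cross-validated
    overall accuracy of the model trained with the given hyperparameters.
    """
    results = {(d, lr): train_and_score(depth=d, lr=lr)
               for d, lr in product(DEPTHS, LEARNING_RATES)}
    best = max(results, key=results.get)
    return best, results[best]
```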
Quantitative Evaluation
Tables 6–8 present the classification accuracy of each object class and each evaluated method, using the OA, AA, and K metrics. From the results, we observe that the classification results of the CNN are still lacking, owing to the spectral-information loss of the 2D-CNN, which ignores the 3D nature of the HSI data. SA-MCN extracts spectral feature information based on spectral attention. The 3D-CNN directly extracts the feature information of the spatial and spectral dimensions, which significantly improves the accuracy of HSI classification; nevertheless, the 3D-CNN still does not fully utilize the space- and spectrum-related information. On the contrary, SSRN exploits the spatial-spectral attention module to redistribute the spatial and spectral information weights, achieving good classification results. The proposed SAT Net attains highly competitive results over all three data sets and the most appealing results on the SA dataset, where it manages an overall classification accuracy of 99.91%. The MSRN network uses an improved residual network and a space-spectral attention module to extract hyperspectral image information at different scales and multiple times, and fully integrates and extracts the spatial-spectral features of the image. It attains the best results on the IN dataset, with an overall accuracy, average accuracy, and Kappa of 0.9937, 0.9945, and 0.9961, respectively. Regarding the proposed SAT Net, it obtains the most attractive results on the SA data set, as its overall classification accuracy, average classification accuracy, and Kappa reach 0.9991, 0.9963, and 0.9978, respectively. Finally, on the UP data set, the proposed method has comparable performance to MSRN: the overall accuracy and Kappa coefficient are slightly inferior to the MSRN model, while the average accuracy is slightly superior to it. Compared to the competitor methods, we extract the image features via a multi-head self-attention scheme that avoids the partial information loss incurred by regular convolution kernels during feature extraction and addresses the long-distance dependence of HSI data.
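For reference, the OA, AA, and kappa values reported in Tables 6–8 follow the standard definitions and can be computed from a confusion matrix as in the sketch below; this is a generic illustration and is not tied to the authors' evaluation code.

```python
import numpy as np

def oa_aa_kappa(conf):
    """Compute overall accuracy, average accuracy and Cohen's kappa
    from a square confusion matrix conf[true, predicted]."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                        # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)       # recall per object class
    aa = per_class.mean()                              # average accuracy
    expected = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2
    kappa = (oa - expected) / (1.0 - expected)         # chance-corrected agreement
    return oa, aa, kappa

# Toy example with two classes
print(oa_aa_kappa([[48, 2], [5, 45]]))
```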
Qualitative Evaluation
Figures 11–13 show the overall accuracy curves of the proposed model against the competitor models. The results indicate that, as the number of training steps increases, the accuracy of all models continuously improves. Among the models, CNN has the lowest initial OA, SA-MCN has the slowest convergence speed, MSRN has the fastest convergence speed, and SAT Net has the second-best convergence speed. The proposed model converges well within 20 epochs on the SA dataset and within 30 epochs on the IN and UP datasets. Figures 14–16 show the visualization results (pseudo-color classification maps) of the different models on the three public datasets used in this work. The classification maps obtained by CNN and SA-MCN show an inferior performance, with significant noise and poor continuity between different object classes. The results obtained by the 3D-CNN and SSRN methods are better, containing less point noise. MSRN also achieves good classification results. In contrast, the classification maps generated by the proposed SAT Net model and by MSRN have smoother boundaries, less noise, and overall a higher classification accuracy. Figure 17 is a partially enlarged view of the classification results of MSRN and SAT Net on the three datasets SA, IN, and UP. It can be observed from the enlarged images that, in the SA dataset, the classification result of the SAT Net model has less continuous noise, with noise remaining only at the boundary between Grapes_untrained and Vinyard_untrained. In the IN dataset, MSRN and SAT Net show some pixel misclassifications at the border of Soybeans-clean till and Soybeans-min till. In the UP dataset, both MSRN and SAT Net mix some Meadow pixels into the Bare Soil class.

Conclusions
This article proposes a deep learning model for HSI classification entitled SAT Net. Our technique successfully employs a transformer scheme for HSI processing and proposes a new strategy for HSI classification. Indeed, we first process the HSI data into a linear embedding sequence and then use the spectral attention module and the multi-head self-attention module to extract image features. The latter module addresses the long-distance dependence along the HSI spectral bands and simultaneously discards the convolution operation, avoiding the information loss caused by regular convolution kernels during object classification. Overall, SAT Net combines multi-head self-attention with linear mapping, regularization, activation functions, and other operations to form an encoder block with a residual structure. To improve performance, we stack multiple encoder blocks to form the main structure of our model. We verified the effectiveness of the proposed model by conducting two experiments on three publicly available datasets. The first experiment analyzes the interplay between our model's hyperparameters, such as image size, training set ratio, and learning rate, and the overall classification performance. The second experiment challenges the proposed model against current classification methods. In comparison with models such as CNN, SA-MCN, 3D-CNN, and SSRN on the three public datasets, SAT Net achieves better OA, AA, and Kappa results. In comparison with MSRN, SAT Net achieves better results on the SA dataset, comparable performance on the UP dataset, and slightly inferior performance on the IN dataset; however, it uses less convolution (only the spectral attention module) to achieve this performance and thus provides a novel approach to HSI classification. Moreover, SAT Net better handles the long-distance dependence of the HSI spectral information. On the three public data sets, i.e., SA, IN, and UP, the proposed method achieves an overall accuracy of 99.91%, 99.22%, and 99.64% and an average accuracy of 99.63%, 99.08%, and 99.67%, respectively. Due to the small number of samples in the IN data set and the uneven data distribution, the classification
performance of the SAT Net still needs to be improved. In the future, we will study methods such as data expansion, weighted loss functions, and model optimization to improve the classification of small-sample hyperspectral data.

Figure 3. Transformer Encoder Block. This module is composed of the norm, multi-head self-attention, dense and other structures connected in the form of residuals.
Figure 4. The proposed SAT Net architecture. After the original HSI data are processed, they are input into the spectral attention and encoder modules with multi-head self-attention to extract HSI features. The encoder modules use a multilayer residual structure for connection, thereby effectively reducing information loss, and the classification information is finally output through the fully connected layers.
Figure 8. Overall classification accuracy per dataset under various encoder block sizes.
Figure 9. Overall accuracy per dataset under different training set proportions.
Figure 10. The overall classification accuracy of the three data sets at different learning rates.
Figure 11. Overall accuracy curve of different models on the SA dataset.
Figure 12. Overall accuracy curve of different models on the IN dataset.
Table 2. Training and Testing Samples for the SA Dataset.
Table 3. Training and Testing Samples for the IN Dataset.
Table 4. Training and Testing Samples for the UP Dataset.
Table 5. Evaluation of several hyperparameters under five-fold cross-validation (Highest Performance is in Boldface).
Table 6. Classification Results of Various Methods for the SA Dataset (Highest Performance is in Boldface).
Table 7.
Classification Results of Various Methods for the IN Dataset (Highest Performance is in Boldface).
Table 8. Classification Results of Various Methods for the UP Dataset (Highest Performance is in Boldface).
2021-06-14T12:20:24.758Z
2021-06-05T00:00:00.000
{ "year": 2021, "sha1": "d7de9124087e3687abf05055b8c3d2045c6e94ce", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/13/11/2216/pdf?version=1623058046", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4dbd0104668d024056af063dbb1cf1ff53da311e", "s2fieldsofstudy": [ "Environmental Science", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
120400938
pes2o/s2orc
v3-fos-license
Metal Matrix Composite Material by Direct Metal Deposition Direct Metal Deposition (DMD) is a laser cladding process for producing a protective coating on the surface of a metallic part or manufacturing layer-by-layer parts in a single-step process. The objective of this work is to demonstrate the possibility to create carbide-reinforced metal matrix composite objects. Powders of steel 16NCD13 with different volume contents of titanium carbide are tested. On the base of statistical analysis, a laser cladding processing map is constructed. Relationships between the different content of titanium carbide in a powder mixture and the material microstructure are found. Mechanism of formation of various precipitated titanium carbides is investigated. Introduction Direct Metal Deposition (DMD) is a laser cladding method that allows fabricating near-net-shape metal parts from a CAD solid model in one step [1,2]. Laser cladding is a method of depositing material by which a powdered material is melted and consolidated by use of a high power laser [3]. The DMD technology can reduce the overall part production time and cost [4,5]. Metal Matrix Composite (MMC) denotes a class of composites with at least two constituent materials one of which is a metal [6]. MMCs are applied in automotive and aerospace industry owing to enhanced high temperature strength, fatigue resistance, wear resistance and lightweight design [7]. MMCs are produced by laser cladding from a wide range of alloys and particulate reinforcement phases. Performance characteristics of a MMC object are influenced by the properties of the particulate reinforcement phase such as chemical composition, shape and size, properties as ingredient material, volume fraction and spatial distribution in the matrix [8]. Various studies have confirmed the technological advantages of the laser powder cladding over the conventional deposition welding in the field of composite materials [9]. The objective of this research is to demonstrate the possibility to produce by DMD carbide-reinforced MMC objects from a mixture of high-strength steel 16NCD13 and titanium carbide powders. Regression analysis of the influence of the laser cladding process parameters on the track geometry is carried out. A laser cladding process map Experimental setup The experiments were performed on Trumpf DMD 505 commercial industrial-scale laser cladding installation equipped by a 5 kW continuous wave CO 2 laser system. A laser spot of 5 mm in diameter with a TEM 01* energy distribution is formed on the substrate at 20 mm distance from a nozzle tip. A powder cladding set-up consists of: computer-controlled powder feeding system, coaxial cladding nozzle mounted on CNC five-axis gantry. Coaxial powder injection is realized by nozzle, carrier, shaping gas mixture of Ar and He. Experiments were performed at gas flow rates G carrier (Ar/He) = 18/2 l/min, G shaping (Ar) = 10 l/min, and G nozzle (Ar/He) = 15/1 l/min. Experimental plan To analyze the influence of the main laser cladding parameter on the geometry of an individual track, an experiment plan was established (Table 1). In total, n = 64 tracks of 25 mm length were cladded. The main track geometrical characteristics such as track height H, track width W and substrate melting depth h were measured in the beginning, the middle and the end of each laser track, and then their average values were calculated. 
The technological characteristics, such as the dilution D = h/(H + h) and the powder deposition efficiency E_p = (2/3)·ρ·H·W·S/F (where ρ is the material density), were estimated. Layers were produced by overlapping individual laser tracks with a step of 3 mm. Multilayer objects of 35 × 35 × 12 mm³ size were fabricated by a criss-cross manufacturing strategy, i.e., the cladding directions of two consecutive layers were perpendicular to each other. After etching with an HCl/FeCl3 solution, cross sections of the multi-layered object samples were subjected to microstructure and chemical composition analysis on a TESCAN Vega 3 SB scanning electron microscope with EDS. Vickers microhardness testing was performed on BUEHLER Omnimet MHT 5104 equipment.

Laser cladding process map
In statistics, regression analysis includes any technique for modeling and analyzing several variables when the focus is on the relationship between dependent (output) parameters (y) and independent (input) parameters (x). A regression model relates y to a function of x: y = f(x) [10]. In this study, the laser cladding parameters, such as the laser power P, the scanning speed S, and the powder feeding rate F, were the input parameters, and the geometrical characteristics, such as the track height H, the track width W, and the substrate melting depth h, were the output parameters. To evaluate the level of influence of the laser cladding parameters on the track geometry, the values of the input parameters were normalized. In our case, the mathematical description of the model is a polynomial of the second degree (Table 2). Given a data set {y_i, x_i1, …, x_ip}, i = 1, …, n, of n statistical units, the regression model takes the form
y_i = b_0 + Σ_j b_j·x_ij + Σ_{j≤k} b_jk·x_ij·x_ik + ε_i.
These n equations are combined together and written in the vector form as
y = X·b + ε,
where y is the vector of observations, X the design matrix of the polynomial terms, b the vector of regression coefficients, and ε the error vector. The least-squares solution is then written as
b = (X^T·X)^(−1)·X^T·y.
After the computation, the regression model for each output parameter has been constructed (Table 2). In order to assess the goodness-of-fit of the model, the R-squared value, the analysis of the pattern of residuals, and hypothesis testing were used. Statistical significance was checked by an F-test of the overall fit followed by t-tests of the individual input parameters [11]. Table 2 summarizes all the regression coefficients found for the geometrical characteristics of the tracks laser cladded from the steel/TiC powder mixture. It should be noted that the laser power P, the scanning speed S, and the powder feeding rate F have almost the same effect on the track height H. For the track width W, the major parameter is the laser power P. Under the conditions applied in this study, a variation of the powder feeding rate F does not lead to a significant change of the diameter of the powder flow in the working area. However, a two-fold rise of the laser power P from 2.5 up to 5 kW leads to an increase of the spot diameter d_0.86 from 3.8 to 5 mm and, thus, of the size of the melt pool. On the basis of the present statistical analysis, a laser cladding process map was constructed (Figure 1). The dashed curves present three different levels of dilution D of 15, 25, and 35%, respectively. The hyperbolic solid curves correspond to different track heights H, changing from 0.2 up to 0.7 mm. Different powder deposition efficiencies E_p are given by the dotted curves. The zone of acceptable dilution and high powder deposition efficiency is cross-hatched. Further experiments were realized with these optimal parameters. The laser cladding process map also allows estimating the layer thickness, which usually exceeds the track height by 10-30% and depends on the step between the overlapping individual tracks.
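A minimal sketch of the regression step described above is given below: it builds second-degree polynomial features from the normalized process parameters (P, S, F) and solves the least-squares problem for one output characteristic (e.g., the track height H). The variable names and the synthetic data are illustrative only and do not reproduce the reported coefficients.

```python
import numpy as np

def fit_second_degree(X, y):
    """Fit y = b0 + sum_j b_j x_j + sum_{j<=k} b_jk x_j x_k by least squares.

    X : (n, p) matrix of normalized input parameters (e.g., P, S, F)
    y : (n,) vector of one measured output (e.g., track height H)
    Returns the coefficient vector b and the R-squared of the fit.
    """
    n, p = X.shape
    cols = [np.ones(n)]                        # intercept b0
    cols += [X[:, j] for j in range(p)]        # linear terms
    cols += [X[:, j] * X[:, k]                 # quadratic and interaction terms
             for j in range(p) for k in range(j, p)]
    A = np.column_stack(cols)
    b, *_ = np.linalg.lstsq(A, y, rcond=None)  # b = (A^T A)^-1 A^T y
    resid = y - A @ b
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return b, r2

# Toy usage: 64 tracks, inputs normalized to [-1, 1]
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(64, 3))           # normalized P, S, F
H = 0.4 + 0.2 * X[:, 0] - 0.15 * X[:, 1] + 0.1 * X[:, 2] + 0.02 * rng.normal(size=64)
coeffs, r2 = fit_second_degree(X, H)
print(round(r2, 3))
```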
Phase diagram of system Fe-TiC 16NCD13 powder is a low-carbon low-alloy steel. To estimate at a first approximation the effect of titanium carbides TiC on the microstructure and the properties of the steel, an iron alloy with TiC will be considered ( Figure 2) [12]. The quasi-binary section Fe-TiC has a eutectic-type phase diagram with 3.8 wt% of TiC content in the eutectic. The maximum solubility of TiC in Fe is 0.6 wt%. Concentration of the elements dissolved in a solid solution increases with the cooling rate relatively to the equilibrium crystallization. These solid solutions are called metastable or supersaturated. During laser cladding, high cooling rates (> 1000 K/s) contributes to the formation of a supersaturated solid solution of TiC in γ-phase [13]. In this case, crystallization takes place according to the metastable phase diagram (Figure 2). TiC prim nucleated in the form of compact polyhedrons (small-scale dendrites) which become centers of the crystallization of the eutectic. The crystallization of the eutectic E (γ + TiC) is not equilibrium at high cooling rate which is characteristic for laser cladding. After the crystallization of TiC prim , the liquid phase is depleted by titanium carbide near TiC prim crystals. TiC in γ-phase has not time to completely diffuse because of a high cooling rate. Therefore, in the beginning the eutectic crystallizes in the form of a rim from γ-phase around TiC prim crystals. Then, the eutectic composition aligns, and the eutectic assumes common lamellar and rod-like structure (Figure 3). Secondary titanium carbides TiC second precipitate in the eutectic phase due to a change in the solubility of TiC in the γ-phase from 0.6 up to 0.011 wt% in the solid state. However, these precipitations are difficult to identify even at a large variation of the solubility because their shape is very similar to one of the eutectic. MEB analysis of the chemical elements distribution confirmed the hypothesis of the structure formation. The concentration of the titanium is increased in the primary titanium carbides and the eutectic colonies. The other elements are uniformly distributed in the alloy. The solubility limit of TiC in γ-Fe increases because of a high value of the supercooling, and the alloy turns to be γ-monophase. A supersaturated solid solution of TiC in γ-Fe forms. The excessive TiC precipitates at the grain boundaries because of the repeated heating of the metal by the upper layers ( Figure 4). The analysis of the distribution of the chemical elements showed a high concentration of titanium on the grain boundary in the region of the excessive TiC precipitation. The other elements in the alloy are distributed uniformly. The supersaturated solid solution of TiC in γ-Fe forms in the same way as in the alloy with 2.5 vol% TiC content. But the alloy is outside the γ monophasic region because of a higher TiC content. As a result, eutectic colonies in the lamellar and rod-like form precipitate at the dendrite boundaries ( Figure 5). The precipitation of TiC second does not occur due to the formation of supersaturated solid solution of TiC in γ-Fe. MEB analysis revealed a high concentration of titanium in the region of eutectic colonies. The other elements of the alloy are homogeneously distributed. Hardness The hardness tests of the MMC material yielded highly diversified results which indicated a considerable influence of the type of structure on the properties of the samples (Figure 6). 
At the 2.5 and 5 vol% TiC content, the hypoeutectic alloy has a considerable hardness that is higher by 40% in the middle and by 90% in the upper layers than that of the pure steel. The formation of the supersaturated solid solution of TiC in γ-Fe with a strong distortion of the crystal lattice increases sharply the hardness. The hardness of the hypoeutectic alloy reaches up to 550HV 0.1 . But the hardness decreases up to 400HV 0.1 in the middle because of the repeated heating of the alloy by the upper layers. The precipitation of excessive TiC from the supersaturated solid solution of TiC in γ-Fe reduces the distortion of the crystal lattice. The hardness of the hypereutectic alloy (10 vol% of TiC) insignificantly exceeds that of the pure laser-cladded steel. Nonequilibrium eutectic formed in the alloy is composed of the γ-phase in the form of a rim around TiC prim crystals, and the eutectic E (γ + TiC). The microhardness of the γ-phase depleted by titanium carbide is lower than that of the eutectic. As the volume fraction of the γ-phase considerably exceeds the amount of the other phases, the alloy hardness is determined by a more plastic γ-phase. Conclusions A carbide-reinforced metal matrix composite material from a mixture of low-alloy steel 16NCD13 and titanium carbide TiC powders was produced by laser cladding. Equations of relationship between the main laser cladding parameters and the geometrical characteristics of the cladded tracks were derived by regression analysis. On the base of statistical study, laser cladding process map for the deposition of individual tracks was established. Optimal process parameters with acceptable dilution and high efficiency for laser deposition of MMC material were established. MMC material with 2.5, 5, and 10 vol% of TiC content fabricated by laser cladding presents three different structures, respectively:
2019-04-18T13:09:05.593Z
2011-01-01T00:00:00.000
{ "year": 2011, "sha1": "8e2d5bc38c3d1de55e1ace68a4592de9d15deca7", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.phpro.2011.03.038", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "925476d4f4cc09ee97ed0d85a92378855f520b3a", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
264083460
pes2o/s2orc
v3-fos-license
Theoretical Backbone of Library and Information Science: A Quest This study primarily aims to identify unique theories and specific uses of theories in the library and information science (LIS) domain. It provides a comprehensive list of the theories used in LIS journal articles indexed by Scopus (an abstract and citation database) from 1970–2021. It expands on the most common theories and highlights the areas and purposes for which used theories in the LIS domain. Our goal is to demonstrate the usages and applications of various borrowed theories from complementary disciplines in the LIS domain. A systematical methodology is applied, following a few open-source AI-based software packages (such as ASReview, and OpenRefine), to analyse the theories against different parameters, keeping in mind the drawbacks of the previous studies. The study's findings show that the LIS domain's theoretical foundations are understudied. Researchers mainly borrowed theories from social sciences such as sociology, psychology, and management studies to solidify their domain. The paper provides a clear road map for the theoretical development of LIS research. And the resulting outputs may help policymakers, academicians, and researchers, irrespective of disciplines in general and information science in particular, understand the foundations and theoretical and methodological trends of theories that may lead to a better understanding of the theories before their selection and applications. Introduction The presence of theory is an indication of research eminence and respectability (Van Maanen, 1998), as well as a feature of the discipline's maturity (Brookes, 1980;Hauser, 1988). The development of theory is the central goal; the 'jewel in the crown' of research (Eisenhardt, 1989). Theory plays a vital role in research (Ngulube, 2020;Thomas, 1997). A good theory continually advances knowledge, directs researchers to essential questions, and provides knowledge and understanding about a research topic and the discipline (Sonnenwald, 2016;van de Ven, 1989). Moreover, in any research, the effective and practical application of theory has always been critical to developing new knowledge or interpretations of existing knowledge. Research without the use of theories is poor and lacks a sound foundation and limited usefulness to the particular domain (Sarter, 2006). Theories are essential in research, and the importance of theories in research cannot be denied (Doherty, 2012;Gregor, 2006;Hall, 2003;Hider & Pymm, 2008;Jeong & Kim, 2005;Lee et al., 2004;Neuman, 2000;Ngulube, 2018;Van de Ven, 1989). Research in LIS during the 1980s and 1990s produced several foundational theories in this domain. (Chatman, 1999;Cole, 2011;Dervin, 1998;Kuhlthau, 1991). For example, Dervin's (1983) sense-making theory, Mellon's (1986) library anxiety theory, Ellis's (1987) model of information-seeking behaviour (ISB), Bates' (1989) berry-picking theory, Kuhlthau's (1991) information search process (ISP), Chatman's (1996) theory of information poverty, and others. As a result, the use and application of theory in LIS research increased during the 1990s and early 2000s (Kim & Jeong, 2006;McKechnie et al., 2001;. In the LIS domain, theories are used to guide the analysis, explanation, and prediction of phenomena and to provide design and action guidelines (Gregor, 2006). It tells us why they are correlated (Sutton & Staw, 1995). It is necessary, in research writing, to explain relationships among concepts. 
Hence, it is an essential and crucial task for LIS professionals to form a clear picture of the application and use of the theories in LIS research. Nevertheless, there is a lack of theoretical research in LIS (Kim & Jeong, 2006), and there is little discussion in different platforms and forums of what theory means in LIS. Indeed, the need to develop, teach, and apply theory in LIS research remains acute (Buckland, 2003;Hjørland, 2000;Thompson, 2009). Although a few LIS researchers and practitioners have created many useful conceptual frameworks, models, and theories (Fisher et al., 2005), they are restricted in their use in LIS. For example, Hartel (2009) and Bates (2006) reported on the development of meta-theories in this domain of LIS. In a more in-depth analysis of theory use in LIS, Kumasi et al. (2013) qualitatively analysed the extent to which theory is meaningfully used in the scholarly literature by developing a theory talk coding scheme, that included six analytical categories, describing how theory is discussed in a study. The intensity of theory talk in the articles was described across a continuum from minimal (e.g., a theory is discussed in the literature review and not mentioned later) through moderate (e.g., multiple theories are introduced but without discussing their relevance to the study) to major (e.g., a theory is employed throughout the study). Finally, they concluded that "LIS discipline has been focused on the application of specific theoretical frameworks rather than the generation of new theories". In the same vein, Pettigrew and McKechnie (2001) reported the limited application of theory and the failure of LIS research to address the practical problems of the profession. The same observation was made by Ukwoma and Ngulube (2021), who reported that much research works, such as theses and dissertations in LIS, needed more theory. Some studies used concepts like theory, theoretical framework, and conceptual framework interchangeably. This may be due to the fact that LIS researchers are not aware of the role of theory, including different components of a theory in the research process (Dankasa, 2015;Kim & Jeong, 2006;, or they have a lack of knowledge about the utility of theory in LIS research or have misconceptions about the theory and theoretical contribution in the LIS field (Ngulube, 2018;Ocholla & Roux, 2011). LIS theories are usually vague and conceptually unclear, and basic concepts must be defined in the literature (Jarvelin & Vakkari, 1990;Poole, 1985;Schrader, 1986) and research in LIS has been dominated by a paradigm that "has made little use of such traditional scientific approaches as foundations and conceptual analysis, or of scientific explanation and theory formulation" (Jarvelin & Vakkari, 1990). Our field, LIS, depends on other disciplines for theoretical work (Ferraro et al., 2009;Kumasi et al., 2013;Weber, 2003), and LIS researchers borrow theories mainly from several branches of social science such as psychology, sociology, or management (Hjørland, 1998;Morgan & Wildemuth, 2009;Poole, 1985). This is due to lack of qualitative scholarly literature on theory use in LIS (Kumasi et al., 2013). On the contrary, Ferraro et al. (2009) reported that theory borrowing like other disciplines is the tradition of information science. And, importing theories from other disciplines is not an indication of weakness in the discipline (Truex et al., 2006). 
Actually, researchers borrow a concept or a theory from other disciplines out of its original context to explain the same or a different phenomenon (Murray & Evers, 1989) for the purpose of investigating the questions of the form, structure, and organization of information and the social impacts of information technologies (Bates, 1999). Many researchers (Hall, 2003;Oswick et al., 2011) argued that theory borrowing is a common practice prevalent in academic research and is an inevitable and integral part of theory development in all disciplines (Kenworthy & Verbeke, 2015;Truex et al., 2006;Whetten et al., 2009), especially those with a broad disciplinary base, including LIS (Mutula & Majinge, 2017;Ukwoma & Ngulube, 2021) as a result of a shortage of applicable home-based or native theories and core theories. Scholars borrow, adapt, extend and at times generate new theories when conducting research (Doherty, 2012). Theories are adopted and adapted or some variables are excluded to suit the LIS context. LIS has an interdisciplinary foundation that results in its borrowing from other disciplines. LIS domesticate theories mainly from sociology, management, psychology and information systems (Ukwoma & Ngulube, 2021). Incorporating theories from another discipline strengthens and enhances the linkages or connections between the two disciplines (Stock, 1997). So, there is a continuous call for developing "good theories" (Watson, 2001), especially home-based (here LIS) theories, e.g., theories of our "own" (Lee et al., 2004;Neuman, 2000;Weber, 2003). The paper reports the extent of theory use in LIS research and how and where researchers use theories in their papers, whether in their original format or otherwise modified. The paper is structured as follows: Section 2 discusses the meaning and role of theories in research. Section 3 describes similar works related to theories, including the importance of theorising in LIS research. Section 4 elaborately discusses the scope and limitations of the study. Section 5 focuses on the research questions formulated for this study. Section 6 deals with the methodology used to fulfill the research questions. Section 7 shows the results from different perspectives. Section 8 provides an overall assessment of the study and proposes several critical directions, and Section 9 concludes the paper with avenues for future exploration. What Actually is a Theory? The term "theory" derives from the Greek "theoria" and, in modern usage, from the Latin "teoria," which means "a looking at, viewing, contemplation, or speculation". It may have derived from the philosophy of science, particularly from Kuhn's account of theory change and scientific revolutions and has a complicated origin story with roots in several philosophical and psychological doctrines (https://iep.utm.edu/theory-theory-of-concepts/). Still, there is no agreed-upon definition of "theory". Different experts have defined theories from different perspectives (Babbie, 1992;Merton, 1957;Odi, 1982;Schwandt, 1997;Vogt, 1993), and there is a considerable level of disagreement among experts about what constitutes a theory (Sutton & Staw, 1995). Sutton and Staw (1995) further stated that a consensus on exactly what theory is and why it is so challenging to develop a robust theory had not been reached. Not everything is a theory, and it is quite difficult to judge what is actually a theory irrespective of disciplines as theory work may take a variety of forms. 
The term "theory" is almost a common phenomenon and is applied to all types of research both quantitative and qualitative, and to mixed methods irrespective of disciplines or domains. Experts have already admitted the essence of using theory in research and learning. It is used for giving meaning to abstractions and/or concepts to explain and aid understanding of phenomena by making generalisations about proven facts (Chijioke et al., 2021). Bhattacherjee (2012) clarifies that theory explains and predicts through building correlations and causations-cause-effect) relationships, respectively. It helps us understand things, events, activities, behaviours, and/or situations (Scott et al., 2008). In general, theory answers a human need to make sense of the world and to accumulate a body of knowledge that will aid in Liber Quarterly Volume 33 2023 understanding, explaining, and predicting the things we see around us, as well as providing a basis for action in the real world (Gregor, 2002). Babbie (2013) defined it as an organised explanation for the purpose of making observations regarding a specific aspect of life. Corley and Gioia (2011) concluded that theory shows relationships among different concepts that show how and why a phenomenon occurs. So, it is difficult for the authors to give a solid definition of theory in the context of LIS research and the role of theory in the domain of LIS has been a subject for debate for several decades (Pinfield et al., 2021). Here, the authors have tried to focus on theory by citing experts' views. For example, Boyce and Kraft (1985) regarded theory as 'a body of principles: fundamental laws or empirical regularities. Sugimoto (2016) defined it as "a set of statements, systems, or principles, used to describe or explain phenomena". Gregor (2002) opined that theory should be considered from many different dimensions and it could be classified in such a way that would fulfill its purposes. In the same line, Neuman (2000) proposed five factors of a theory, such as (i) the direction (deductive or inductive), (ii) the level of the theory, (iii) whether it is formal or substantive, (iv) the forms of explanations it employs, and (v) the overall framework of assumptions and concepts in which it is embedded, by which it could be classified. On the other hand, Wacker (2004) identified four key properties that constitute a good theory: "formal conceptual definitions, theory domain, explained relationships, and predictions". He also considered theory as a link that creates relationships between concepts. Whereas Garver (2008) suggested that theories vary in their specifications. Another researcher (Buckland, 1991) said that a "good" theory is one that matches well our perception of whatever the theory is about. The closer the match, the better the theory is. Van Maanen (1998) suggested that theory must be convenient and should help and support to organise and communicate unwieldy data and simplify the terrible complexities of the social world, matters that may well be more important to the field than whether or not a given theory is true of false. Buckland (1991) defined theory "in the broad sense of a description or explanation of the nature of things, not in the more restricted sense, used in some sciences, of denoting fundamental laws formally stated and falsifiable." 
Some other researchers have given the basic definitions of theory precisely such as Smit (1995) 'a set of principles that is used to explain a certain phenomenon or phenomena; Silverman (2006) defines 'a set of explanatory concepts'; Vogt (1993) defines ' a statement or group of statements about how some part of the world works-frequently explaining relations among phenomena ';Odi (1982) defines 'an internally connected and logically consistent proposition about relationships among phenomena'; Welman and Kruger (1999) define 'a group of logical, related statements, which is presented as an explanation of a phenomenon'; Babbie (1992) defines 'a systematic explanation for the observed facts and laws that relate to a particular aspect of life'; Kaplan (1964) defines 'a way of making sense of distributing situation'; Schwandt (1997) defines 'a unified, systematic explanation of a diverse range of social phenomena'; Grover and Glazier (1986) define 'generalisations which seek to explain relationships among phenomena'; and 'a set of statements about the relationship(s) between two or more concepts or constructs' (De Vos et al., 2005;Jaccard & Jacoby, 2010;Swanson, 2013). Sutton and Staw (1995) rightly said that the lack of a unified definition among scholars of what a theory is has often made it difficult to develop a strong theory for any discipline. But they all agreed that theory develops as an explanation to advance knowledge in the particular field (Thomas, 1997). Most of these definitions of 'theory' mention the relationships between or amongst several variables. According to Babbie (1995), social science theory is 'a systematic explanation for the observed facts and laws that relate to a specific aspect of life'. The role of theory in the social sciences is, among other things, to explain and predict behaviour, be usable in practical applications, and guide research (Glaser & Strauss, 1999). Some other experts have tried to give a formal definition of theory in the context of information science. Lee et al. (1997) discussed theory in this context in terms of underlying causal relationships, but primarily from a statistical viewpoint. Gregor (2002) identified and distinguished five different types of theory important for the discipline of LIS: (i) theory for analysing and describing, (ii) theory for understanding, (iii) theory for predicting, (iv) theory for explaining and predicting, and (v) theory for design and action. In the same study, she reported that many LIS researchers had failed to give any explicit definition of their own view of theory. Walster (1995) examined five instructional design theories valuable to LIS education and also described the basic components of the theories and their application in this domain. Based on the discussion, determining the scope of the theory and suggesting a comprehensive definition of the theory is quite difficult. These vary depending on the type of research, academic field, and researcher. Hjørland (2013) correctly opined that the situation is somewhat chaotic and that it is difficult to get a clear overview of the theoretical landscape of the field as a whole. So, a theory, built from concepts, variables, or phenomena, is a mental activity and an interrelated set of constructs that seeks to explain an object or things, explains observed regularities or relationships between two or more variables, and shows how and why events occur. 
Literature Review The use and application of theory are common in any academic discipline, and LIS is no exception. Apart from its own theory, many theories from other disciplines have also been used in the LIS field for the development of research productivity in this domain. Many LIS researchers, particularly in the field of information science, have developed theories about informationseeking behaviour. Initially, a theory was developed for a particular discipline and has since been modified and utilised in other disciplines or for any set of phenomena. For example, management theory is now being taught in different library schools (Trosow, 2000). The LIS literature, according to Feehan et al. (1987), has not evolved sufficiently to support a rigid body of its own theoretical basis. Chatman (1996) is indeed correct when she claims that using and developing a theory is hard work in LIS. Vakkari and Kuokkanen (1997) attempted to analyse theory development in LIS using a case study from information seeking studies. Hjørland (2000) reported that LIS lacks good theories because there are no explicit theories in LIS. Many of the theories used in LIS are from other fields such as psychology, sociology, or management (Dillon, 2007). In the same vein, Ocholla and Roux (2011) opined that LIS largely relies on theories from other disciplines. They also presented a theoretical framework model used in LIS research and clustered it by research themes. Pierce (1992), citing the work conducted by Pettigrew and McKechnie (2001), also reported that LIS researchers tend to borrow theories from other disciplines. Furthermore, most LIS researchers borrowed theories from the social sciences (Oswick et al., 2011). Onyancha and Kwanya (2019) were in support. Again, Pettigrew and McKechnie (2001) found that 45.4% of theories used in LIS came from the social sciences, followed by LIS (29.9%), sciences (19.3%), and humanities (5.4%). Besides, more than 70% of the theories applied in Chinese LIS journals (Wang et al., 2016) and 57.5% of the theories used in Korean LIS journals (Kim & Jeong, 2006) were borrowed from other disciplines. There have been many efforts by researchers to analyse the state of theoretical research in LIS. Here, authors have tried to give an overview of theories used in the LIS domain. Many authors have advocated for the development and application of theory in LIS (Buckland, 2003;Hjørland, 2000;Thompson, 2009). Many authors have developed many conceptual frameworks, models, and theories (Fisher et al., 2005;. dos Santos Maculan and de Oliveira Lima (2017) reported two theories, viz., Dahlberg's analytical concept theory and Ranganathan's faceted classification theory on concepts that are commonly taken for granted and discussed in the LIS literature. Some other experts reported the use of different theories in several branches of LIS research. For example, Mellon (1986), Mansourian (2006) and Ellis (1993) used grounded theory; Aluri (1981) Rogers (1995) used diffusion of innovation theory; Fishbein and Ajzen (1975) used the theory of reasonable action; Ajzen (1991) used the theory of planned behaviour in different areas of the LIS domain. Oliveira Machado et al. (2019) discussed concept theory in LIS from an epistemological perspective. Benoit (2007), on the other hand, provided an overview of major critical theories from a variety of disciplines, including the humanities, social sciences, and education. 
Michell and Dewdney (1998) provided a brief explanation of another social science theory i.e., the mental models theory, and the use of it in LIS research. There are many meta-theories operating in the field currently. Vickery (1997) discussed the meta-theory of information science research. Grover and Glazier (1986) proposed a model for theory building in LIS called "circuits of theory". The model includes a taxonomy of theories, developed earlier by the authors. The purpose of the taxonomy was to demonstrate the relationships among the concepts of research, theory, paradigms, and phenomena. Rioux (2010) explored the use of meta-theory as an integrative conceptual tool that can help analyse, direct, and enhance theory building, professional practice, and professional preparation in LIS. Kaijun et al. (2019) discussed fractal theory in information science (Parsa et al., 2016) including other domains such as mechanical science (Rinaldo et al., 1993), astronomical meteorology (Fossum et al., 2013), and life sciences (Puetz & Borchardt, 2015). In reality, little research has been conducted to investigate the use of theory in LIS, and thus has been often criticised as being fragmentary, narrowly focused, and oriented to practical problems (Grover & Glazier, 1986). Many authors have noticed limited use of theory in published research and have advocated greater use of theory as a conceptual basis in LIS research (Boyce & Kraft, 1985;Feehan et al., 1987;Grover & Glazier, 1986;Hjørland, 1998;Spink, 1997). Some quantitative studies have been conducted on the theory use in LIS. A number of studies (Feehan et al., 1987;Jarvelin & Vakkari, 1990;Julien, 1996;Julien & Duggan, 2000;Nour, 1985;Peritz, 1980) concluded that most LIS research is atheoretical, with the rate of theory use in LIS ranging from 10 to 21 percent. For example, Peritz (1980) reported that only 14% of sample articles from 1950 to 1975 could be considered theoretical research. Whereas Nour (1985) reported 21.2%, Feehan et al. (1987) reported 13%, and Järvelin and Vakkari (1990) reported 10% of the literature published in 1980 using theory. Julien (1996) found that theoretical studies occupied 32% of the 241 randomly selected papers on information needs and use from 1990 to 1994. Gonzalez-Teruel and Abad-Garcia (2007) found that theories were used in only 14% of the papers on information needs published in Spanish journals from 1990 to 2004. Even so, variations in the use of theory may also be regional. Wang et al. (2015) found that at least one theory was mentioned in the full-texts of 30.2% of papers from the Journal of the China Society for Scientific and Technical Information (JCSSTI) from 2000 to 2013, which is taken as the top one journal of information science by Chinese universities. Then, Wang et al. (2018) further analysed the LIS papers published in all 52 Chinese LIS journals from 2008 to 2017 and found that 18.97% of them mentioned at least one theory in their abstracts. Wu et al. (2017) found that 49.9% of articles published in Taiwanese LIS journals between 2010 and 2015 use theory. In Korean LIS research journals, Jeong and Kim (2005) found only 10% of studies applied a theory. The authors observe that these variations in the use of theory among different LIS journals are due to the number of journals as well as the journal selection process used by the researchers. According to another study (Mckechnie & Pettigrew, 2002), a theory was discussed in 34.2 percent of the articles. 
The study covered almost 1,160 articles published in six prominent LIS journals from 1993 to 1998. The same type of work has been conducted by Järvelin and Vakkari (1990). Authors reported 10% use of theories in LIS, whereas it was 18.3% in another work (Julien & Duggan, 2000). Kim and Jeong (2006) reported that the use of theory was only 17.57%. Nour (1985) reported that while 21.2% of articles used theory in an analytical sense, less than 3% of the articles were about information science theory. Feehan et al. (1987) reported that only 13% of the 123 research articles sampled from 91 LIS journals either discussed or applied theory in the study design or attempted to formulate theories or principles that could provide a theoretical basis for LIS. Scope and Limitations As previously stated, (see Section 2), the term "theory" refers to ideas, works, opinions, models, hypotheses, and so on (see also Section 6). The sample the authors have analysed does not claim to be exhaustive, as the search operators or the search syntax (part of our search strategy) were limited to those articles that contained the term 'theory' or its equivalent terms, as stated above, in the title and in the abstract, along with the other two terms, viz. 'library' and 'librarian'. The authors face specific difficulties where researchers have used such parallel terms in place of the 'theory' or a different name but not the word 'theory'. The authors have considered articles where these terms were not present/used or were used in other ways, but the creator of the theory was mentioned, or researchers did mention the name of the theory. The keywords provided by the researchers were not used as such words were manually added and might not capture all the topics being discussed in the papers. The authors have identified many retrieved papers that do not directly relate to the development of the LIS theory. Moreover, these papers were rejected from our analysis for not covering any potential theoretical work promoting LIS research. Another problem relates to suggesting the definition of the 'theory' and determining the scope of the 'theory'. The authors do not claim that the journals covered by Scopus or the articles the authors have studied exclusive representatives of LIS research. Our classification of LIS sub-fields (Section 7.4) and grouping of all theories based on originating domain (Section 7.7) needs to be more foolproof. In many cases, extended abstracts were not provided by the researchers, and authors could not identify the areas where theories were used in the article, i.e., the introduction, method, results/discussion, or at what level or extent researchers had used the theory with fidelity. The authors could not identify the type of research for which theories were tested or employed. Moreover, determining the quality of the theories used differs from our study's objective. Research Questions Summarising the above, we have set the following research questions to fulfill our objectives for this study: 1. What were the theories applied in LIS research? Or which were the most important theories discussed in the corpus data from 1970-2021? 2. Which topics were most discussed? Or what were the research topics studied in the LIS area during the period? 3. How, and to what extent a data carpentry tool (viz. OpenRefine) can be applied to quantify the structured data set and for deep faceting the text corpora to identify categories of the theories used in LIS research? 4. Does LIS have any systematic theoretical base? 
Or, to what extent does LIS research rely on other disciplines for its theoretical foundations? Or, from which disciplines did LIS researchers borrow or draw theories? Methodology Traditional quantitative content analysis, with natural language processing (NLP) and text mining technology, is taken as the research method. Data were extracted only from LIS journals covered by Scopus (https://www. scopus. com/) against carefully crafted search queries: library, librarian, theory, and synonymous/equivalent terms related to 'theory'. For example, sometimes, keywords such as 'framework', 'model', 'pattern', 'paradigm' 'method' were used interchangeably by scholars in place of 'theory' and thus were used and considered to search for information. The authors have selected the Scopus database as the primary data source and limited the search to only LIS research articles published from 1971 to December 2021. A total of 14,294 raw articles (from 1971 to 2021) were downloaded or extracted using various search terms like 'library', 'librarian', 'theory', and parallel/synonymous terms of 'theory', all of which appeared in the title and the abstract of the paper. The authors have excluded 368 titles or papers for not having an abstract. A total of 61 articles were also removed from three unrelated journals. Only full-length articles written in English were collected where the specific application of theories was made. Editorials, book reviews, letters, interviews, commentaries, non-research articles, and news items were also excluded from the analysis. A total of 13,865 articles were considered for this paper, and all the relevant bibliographic data, such as titles, abstracts, authors, names of the journals, etc., were recorded in a single Excel file. The sample data were then ranked and curated using ASReview (https://asreview.nl/) to examine whether the articles were relevant to the development of LIS theory. It is used to rank a set of selected abstracts based on their context and relevance. Finally, 13,225 (95.4%) articles were removed for not having a direct link with the LIS theory. The sample size for this study was 640 (4.6%) research articles directly related to the development of LIS theory. In the next step, OpenRefine (https://openrefine.org/), an open-source data rackling tool, is used to obtain the results as reflected in Table 1. The CSV file of sample data is transformed into 'OpenRefine' for further processing, and Table 1 reflects the dataset generated by 'OpenRefine' after analysing the corpus data. 
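As an illustration of the screening steps described above (performed before the ASReview ranking), the following pandas sketch drops records without abstracts and non-research document types; the file name, column names, and values are hypothetical, since Scopus exports can be configured in different ways.

```python
import pandas as pd

# Hypothetical Scopus export; column names are assumptions for illustration
records = pd.read_csv("scopus_export.csv")

EXCLUDED_TYPES = {"Editorial", "Book review", "Letter", "Interview",
                  "Commentary", "News item"}

screened = (records
            .dropna(subset=["abstract"])                       # drop records without abstracts
            .query("language == 'English'")                    # full-length English articles only
            .loc[lambda d: ~d["document_type"].isin(EXCLUDED_TYPES)]
            .drop_duplicates(subset=["title"]))

screened.to_csv("candidates_for_asreview.csv", index=False)    # input for ASReview ranking
print(len(records), "->", len(screened), "records after screening")
```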
This tool (OpenRefine) helps quantify the papers and deep-facet the text corpora to identify categories of the theories used in LIS research. Table 1 reports the following counts for the 640 sampled articles:

Number of articles in which the term 'theory' was mentioned in the 'title' of the paper: 191 (29.84%)
Number of articles in which the term 'theory' was not mentioned in the 'abstract' of the paper: 39 (6.09%)
Number of articles that mentioned the term 'theory' both in the 'title' and in the 'abstract' of the paper: 566 (88.43%)
Number of articles that used other equivalent/synonymous words in place of 'theory': 67 (10.46%)
Number of articles in which the originators' names were not mentioned: 531 (82.9%)
Number of theories that appeared more than once in the papers (see Appendix A; calculated on the basis of 411 unique theories): 31 (7.54%)
Average number of theories per article (among articles that used at least one theory): 1.56
Occurrences of multiple theories, by number of articles (break-up: two theories = 9 papers, three theories = 8 papers, four theories = 5 papers, five theories = 2 papers, six theories = 1 paper, seven theories = 1 paper, eight theories = 1 paper): 27 articles (4.21%)
Number of unique theories used 10 times or more in any paper (see Annexure I): 5 theories

The authors have taken only those papers where theories of LIS or theories of other disciplines were applied. The 'title', the 'abstract', and the 'index terms' of the papers were used as the chief sources of information. All the articles were checked and validated by domain experts other than the authors to improve the sample data's validity and reliability. The authors have classified all papers using a descriptive framework that considers the level of theories used and the stages at which theories are used, as this reflects the intensiveness of the use of theories within the studies. Again, the authors have critically reviewed and analysed each theory against selected criteria and consulted different documentary sources to determine its originating discipline. Furthermore, for this purpose, the paper's focus and the background information of the creator of the particular theory were considered.

Results
This section presents our results, organised by the research questions stated in Section 5. Keeping in mind the study's objectives, the following aspects of importation are examined against the stated parameters (Table 1).

General Statistics
This section overviews the theories used in LIS research from different perspectives (Table 1). Altogether, there were 531 (82.9%) papers (out of 640) in which researchers did not mention the name of the originator of the theory (the originator was mentioned neither in the abstract nor in the title). Furthermore, 449 (70.15%) articles did not mention 'theory' in the 'title' of the article (although the term 'theory' was present in most of the abstracts). The authors identified one reason for this: researchers use other parallel and synonymous terms (such as principles, frameworks, schemes, concepts, models, works, ideas, paradigms, and so on) in place of 'theory', and these were also counted. Kumasi et al. (2013) also reported that multiple terms, such as "framework" and "model", were used interchangeably by scholars to describe a theory in their articles. Booth and Carroll (2015) likewise reported that the various connotations of theory were used synonymously in the social sciences and humanities.
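The kind of counts reported in Table 1 can be reproduced with a few lines of pandas once the curated records are exported from OpenRefine; the sketch below is illustrative only, and the file name and column names (title, abstract) are assumptions rather than the authors' actual export.

```python
import pandas as pd

# Hypothetical export of the 640 curated records from OpenRefine
df = pd.read_csv("lis_theory_records.csv")   # columns assumed: title, abstract

def mentions(series, term="theory"):
    """Case-insensitive check for a term in a text column."""
    return series.fillna("").str.contains(term, case=False)

n = len(df)
in_title = mentions(df["title"]).sum()
in_abstract = mentions(df["abstract"]).sum()
in_both = (mentions(df["title"]) & mentions(df["abstract"])).sum()

summary = pd.Series({
    "'theory' in title": f"{in_title} ({in_title / n:.2%})",
    "'theory' in abstract": f"{in_abstract} ({in_abstract / n:.2%})",
    "'theory' in both": f"{in_both} ({in_both / n:.2%})",
})
print(summary.to_string())
```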
In many cases, researchers mentioned only the name of the person who was probably the theory's creator. Most studies (95%) used only one theory; however, in many cases it was unclear how the multiple theories (Table 1) were mentioned and discussed in the paper. The maximum number of theories used by researchers in any study was eight (Table 1). In addition, five theories were mentioned ten times or more across the papers (see Appendix A and Table 1). Many authors have reported the percentage or proportion of theory use in LIS journals, ranging from 13% to 34.1% (Feehan et al., 1987; Julien, 1996). For example, it was 14% (Peritz, 1980); 13% (Feehan et al., 1987); 21.2% (Nour, 1985); 10% (Järvelin & Vakkari, 1990); and 18.3% (Julien & Duggan, 2000). All these studies were restricted to selected LIS journals, and the sample sizes were limited. In our study, the percentage or proportion of theories used was 64.21% (411 unique theories across N = 640 papers). A study by Kim and Jeong (2006) also reported the use of theory in LIS research at 41.4%. Thus, the overall percentage of theory use in our study is higher than that reported in previous studies conducted over time. Again, the average number of theories per article (among papers that used at least one theory), calculated as a weighted mean, is 1.56. In their study, McKechnie and Pettigrew (2002) reported that 34.2% of articles incorporated theory in either the title, abstract, or text, for a total of 1,083 theory incidents, or an average of 0.93 incidents per article. This study has identified 411 unique theories, and a comprehensive list of theories is provided in the appendices. There were 31 (7.54%) theories (see Appendix A) that appeared more than once, spread across 238 (37.18%) articles. Appendix B provides a list of 335 theories that appeared only once. Apart from that, there were 45 (10.94%) theories originating within LIS itself (Appendix C). It was also found that 366 (89.05%) of the theories used in LIS research came from other disciplines (Appendix A & B). The remaining theories (31) were from other subjects (Appendix B). A few studies have also reported on theories used in LIS research. For example, Pettigrew and McKechnie (2001) reported more than 100 theories used in LIS research, while Lim et al. (2009) identified 154 theories used in LIS research. In their study, too, most theories came from the social sciences, as in ours. In the same vein, we calculated in how many papers theories from the social sciences were mentioned: there were 355 (55.46%) papers in which theories from the social sciences were applied. The rest of the papers drew on other domains, such as 62 (9.68%) papers from the sciences, 68 (10.62%) papers from management studies, and so on. We also calculated the percentage of articles in which the originators' names were mentioned in the title and in the abstract. There were only 21 (3.28%) articles in which the originators' names were mentioned in the title, and 134 (20.93%) articles in which they were mentioned in the abstract; many articles mentioned the originators' names in both places. Table 2 shows the most frequently used key theories from the 640 articles. It displays the top 14 most dominant theories (in terms of their use across articles) in LIS research during the period under study.
This arrangement is a subjective ranking of theories based on our assessment of the theories presented in the papers. The most used theories were grounded theory, learning theory, activity theory, and the unified theory of acceptance and use of technology (UTAUT) model, which were prominent over the whole period under study (Appendix A). For example, grounded theory was used in 83 (12.96%) papers, learning theory in 20 (3.12%) papers, activity theory in 19 (2.96%) papers, and UTAUT in 17 (2.65%) papers. The first three theories were from the social sciences, whereas UTAUT was from management studies. Together, these 14 theories (Table 2) accounted for about 30.55% of the theories used during 1971-2021 in the Scopus database. Almost all of these theories were from branches of the social sciences; a few were from 'management studies' (ranking positions 4 and 7), 'sciences' (ranking position 9), 'communication studies' (ranking position 8), and 'information studies' (ranking position 10, with two theories, viz. Kuhlthau's theory of the information search process and Shannon's theory of communication). Westin et al. (1994) rightly said that LIS research draws from various reference and complementary disciplines. One of our study's objectives (RQ 1 & RQ 4) was to show dominant theories and how LIS researchers borrowed theories from other domains or disciplines. Within the social sciences, most theories came from psychology (88 theories), sociology (52 theories), philosophy (19 theories), economics (9 theories), and education (9 theories) (see Appendix B). Figure 1 also gives an overview of the proportion of theories used under selected broad disciplines, with different colours and bold type indicating their contributions. Mostly Dominating Theories Most papers used less popular and less well-known middle-range theories (as proposed by Gregor, 2006), such as media richness theory, TAM, information processing theory, sense-making theory, etc. Psychological theories under the social sciences dominated LIS research, while theories from sociology and philosophy came next, holding the second and third positions respectively. (Footnote to Table 2: these five theories each appeared four times (Appendix A) and thus jointly occupy ranking position 10.) However, there is a significant difference in the number of theories that came from these two disciplines (sociology, 52 theories; philosophy, 19 theories). Economics and education came close behind and jointly ranked fourth. Psychology and sociology together accounted for about 34.06% (140 theories) of the theories used during the study period. Here, the authors eliminated some theories from the LIS category that were closely associated with, and presumed to originate from, the sciences and social sciences, and these were therefore not kept in Appendix C under the LIS theory group; examples are the theory of communication and the LibQUAL+™ model. A study conducted by Lim et al. (2009) also reported almost the same results. Lor (2014) rightly stated that LIS had produced very little theory of any significance, and thus LIS researchers have used theory from other fields such as psychology, sociology, or management studies. Location (Sections) Where Theories were Mentioned There were differences among articles in where they used the term 'theory' or the name of a theory. The authors sought to identify the trend, pattern, and depth of theory use in the papers by locating the sections in which theories were used.
Theories were mentioned in almost every section, including the methodology section, the hypothesis section, the analysis section, and the research process itself. In the introduction section, theories were even used to review the background of the articles; only a few papers mentioned theory in the conclusion. It was found that, out of 640 articles, there were only 39 (6.09%) papers in which 'theory' was not mentioned in the abstract, indicating that almost all papers, 601 (93.91%), used the term 'theory' in the abstract. The more disappointing finding was that the term 'theory' was missing from most of the titles, 449 (70.15%) articles, because researchers had used other parallel or equivalent terms in place of 'theory'. Pettigrew and McKechnie (2001) reported that the terms 'theory' or 'theories' were mentioned in 99% of abstracts, whereas only 1% of titles mentioned them; however, they studied only two areas, and their conclusions were vastly different from ours. Apart from that, this study also identified other vital areas where researchers used the term 'theory' (or the name of a particular theory or of its creator). The term 'theory' was most frequently used in the methodology section (43%) to develop the method, model, or framework for the article; in data collection (28%) as a tool; in the introduction, including the purpose or objectives (21%); in data analysis and interpretation (almost 6%); and in the discussion (below 3%) to justify the results or their relevance. In a few cases, it was mentioned only in the conclusion of the article. Sub-Domains of LIS Now, we examine the distribution of articles across different sub-fields and streams of LIS research (RQ 2). The authors cannot give an exact figure for the number of articles in each sub-area of information studies. As shown in the scope notes (below Table 3), the focus of the articles was not always limited to a single topic because of their interdisciplinary nature. There is no standard tool by which the actual divisions of LIS literature can be presented. The authors examined the existing literature (Kim & Jeong, 2006; Sidorova et al., 2008; Wang et al., 2021) and found significant differences among researchers in how they categorise LIS subjects into sub-fields, owing to their interdisciplinary or multidisciplinary nature. For our purposes, the authors identified six major areas/sub-domains of LIS research based on the scope, coverage, and focus of the articles under study. Lim et al. (2009) reported that most articles were about 'information seeking and use', 'information retrieval', and 'library administration and management'. Our findings are quite similar to those of another study by Kim and Jeong (2006), although it is unclear how they arrived at their classifications, what their sample data were, or how and from what sources the data were gathered. Research Stages Concerning the Use of Theories This section examines the purposes for which theories were used in LIS research (Table 4). Hannay et al. (2007) opined that theories are used to serve different purposes within a research article. Tsang and Ellsaesser (2011) said that theories were mainly employed in LIS research to extract findings. A more recent study (Park et al., 2022) specifically mentioned that theories of the information world were mainly used to guide the collection and analysis of empirical data.
In addition, theory can be used in three types of research: qualitative, quantitative, or mixed (Creswell, 2009). The authors identified all the research activities and divided the sample data into three major parts or levels: input stages, processing stages, and output stages. These three levels were created for a better understanding of the theories used at different stages of LIS research, together with the details, clarifications, and activities involved in each stage. Overall, the most common roles of theories were to design the study, e.g., research design (39%), data collection (35%), data analysis or explanation (13%), and research outcomes or results (5%). In a few cases, the authors could not trace the intentions and motives behind the researchers' use of a theory, and these cases were not counted. Van der Waldt (2021) expressed a similar view, reporting that theories were mainly used to design research protocols and task materials; to formulate hypotheses, research questions, and frameworks or models; and to develop questionnaires and other instruments. McKechnie and Pettigrew (2002) also reported that theories were used to frame, design, and interpret findings, reporting 19% (results section) and 15% (discussion section); however, these studies revealed less than the present one does. As identified in this study, theories were used to serve different purposes, such as the design of the work, explanation, application, motivation, hypothesis testing, modification, and as a basis for the study. Theories were found to be used primarily as a research method or tool (for data collection, surveying the user community, and designing and constructing a theoretical framework or model), for data analysis or discussion, and for results. In many cases, theories were used as research inputs to identify and formulate research problems or to test hypotheses in research designs. Theories were also used as an output to justify the results or findings. Types of Theories Used Another critical area that previous studies have not covered is identifying the types of theories used (Table 5). None of the researchers in the LIS domain have mentioned in their studies the theory types proposed by Gregor and other experts, as discussed. Various levels of theories, with implications for research in LIS, have been described (Togia & Malliari, 2017). Gregor (2006) proposed five types of theories based on philosophical and disciplinary orientations (Table 5). Alternatively, a theory can be viewed as 'theory as input', 'theory as process', or 'theory as output' (Van der Waldt, 2021). Other experts (Doty & Glick, 1994; Schneberger et al., 2014) categorised theories as 'theory type 1', 'theory type 2', and so on up to 'theory type 5'. Reynolds (1971) identified four further forms of theory, namely a set of laws; an inter-related set of definitions, axioms and propositions; descriptions of causal processes; and vague concepts, untested hypotheses, and prescriptions for good behaviour. In addition, a theory can be a 'grand theory', a 'middle-range theory', or a 'general theory or process theory' (Table 5). The final level, grand theory, is "a set of theories or generalisations that transcend the borders of disciplines to explain relationships among phenomena" (Glazier & Grover, 2002). The first theory level, called substantive theory, is defined as "a set of propositions that furnish an explanation for an applied area of inquiry" (Grover & Glazier, 1986).
In fact, it may not be viewed as a theory but rather as a research hypothesis that has been tested, or even a research finding (Kim & Jeong, 2006). The next level of theory, called formal theory, is defined as "a set of propositions that furnish an explanation for a formal or conceptual area of inquiry, that is, a discipline" (Grover & Glazier, 1986). Their difference lies in their ability to structure generalisations and their potential for explanation and prediction. Substantive and formal theories together are usually considered "middle-range" theories in the social sciences (Togia & Malliari, 2017). Here, the authors have attempted to categorise the articles based on the types of theories used. In our study, it was found that most of the theories were "middle-range theories" (e.g., TAM, self-efficacy theory, Maslow's hierarchy of needs theory, media richness theory) (see Appendix B), numbering 567 (88.59%). Morgan and Wildemuth (2009), citing Poole (1985), stated that middle-range theories were perfect for a professional field like information and library science. Pinfield et al. (2021) shared the same view, stating that "mid-range" or "middle-range" theories were mostly used in the LIS domain. The rest of the theories were "general theories" or "process theories" (e.g., expectancy theory), although a few articles drew on "grand theories" such as communication theory, cognitive theory, and critical theory. If we classify the theories according to the types proposed by Gregor, most of the articles used "type 5" theory (for research design and action), numbering 428 (66.87%) articles, followed by "type 1" theory (179 articles, 28%), meant for the analysis of data including description, and "type 2" theory (33 articles, 5.1%), used for interpretation and explanation. Sonnenwald (2016) reported that all five of Gregor's types of theory exist in the LIS literature, although types 1 and 2 predominate; our study is quite different and does not match previous studies. Another study (Bélanger & Crossler, 2011) discovered that 'type 4' theory predominated, followed by types 1 and 2, and Alter (2017) also found 'type 4' theory dominant. The authors observe that these differences are likely due to the sample size and type, sample selection, search operators used, the method applied, and the nature and type of sources used (here, Scopus) for collecting samples. Categorisation of Theories Used Apart from LIS theories (Appendix C), this study also reports the use and application of theories from other domains, applied in different ways to support LIS research. It was challenging and laborious for the authors to identify and classify theories, because theory classification depends on many factors, including the type of use, stage of use (Davies et al., 2010), and purpose of use (Gregor, 2006). Theories could also be classified based on their scope and structure, where they are used in an article, the way they are used in the paper, or what they are meant for. Markus and Robey (1988) also distinguished theory in terms of causal structure. In categorising theories, Hjørland and Pedersen (2005) emphasised knowledge of the broader meaning-producing contexts rather than focusing on trivial or naive descriptions of the documents. The authors classified all the theories according to their domain of origin (Appendix A & B).
When determining the originating discipline of a theory, the focus of the paper and the originator's background were taken into account. This classification is not final and is somewhat subjective, as theories could not always be unambiguously assigned, and readers may disagree with our classification. Furthermore, our goal is not to show how different theories were applied in LIS research or how LIS theories were used outside the field. Appendix B gives a comprehensive picture of theories from the sciences, social sciences, management studies, communication studies, information studies, etc., while Appendix C gives a comprehensive overview of LIS theories to pinpoint the focus of this paper (this group could equally have been placed under 'information studies'). In the same way, theories of 'organisational studies' or 'strategy' are kept under 'management studies'. Even the theories under 'communication studies' (see Appendix B) could have been kept under 'information studies' or 'sciences'; in fact, most 'communication theories' or 'information theories' originated in the sciences. Discussions and Implications The need to use theory from other domains when conducting LIS research has been felt since the beginning of the 21st century (Hall, 2003; Kenworthy & Verbeke, 2015; Oswick et al., 2011; Ukwoma & Ngulube, 2021). Moreover, establishing a link between theory and research has become a hot topic among LIS researchers (Mueller & Urbach, 2013). The development of theory, and the range of theories used in LIS research, has expanded over the past forty or fifty years (Sonnenwald, 2016), and the use and application of socio-cultural theories to uncover or underpin LIS research continue to increase. This is reflected in our study (Section 7.1), where the rate of theory use in LIS research was 64.21%. As discussed in Sections 4 and 6, there were misconceptions, a lack of awareness regarding theoretical discussions, and misuse of the term 'theory' among researchers (Ngulube, 2018). Only 640 (4.83%) core articles were finally selected for this study, which indicates that researchers have often used theories unintentionally, without a clear idea of their application and utilisation in research. This conceptual misunderstanding of 'theory' among scholars may misdirect the focus of the research and affect the results. Bibliometric laws, classification rules, and cataloguing rules were considered theories in many cases. The theories included:
• Wilson's general model of information behaviour,
• Ranganathan's five laws of library science,
• Information Theory/Theory of Information,
• Information Theory of Communication, and
• Kuhlthau's model of the Information Search Process.
Researchers disagree and hold opposing views on whether these should be considered theories (Ngulube, 2020; Ocholla & Roux, 2011). In a few cases, two or more theories were used or applied in the same paper; however, the theories in question were often not actually applied, and it was difficult to measure at what level or for which purpose a theory was used in the study. As a result, the relevance of the theory to the study could not be identified and remained unclear, as the theory was not used consistently throughout the study. In many cases, the purpose of using a theory in the paper was unclear or misleading, so researchers should have stated the theory's role in the article and the extent to which they employed it with fidelity.
Even so, some articles used multiple theories, or a single theory, in multiple ways, and it is often unclear how a particular theory was used in the work. In most cases, the researchers did not mention the origin of the theory (e.g., its discipline of origin). Additionally, researchers should have mentioned the name of the particular theory with further explanation, including the theory's originator (Section 7, Table 1). These issues must be reworked and clarified adequately in the text. Another problem was identifying the original subject or discipline from which a theory had come (Section 7.7). The authors found that the same theory was sometimes considered simultaneously a 'science' and a 'social science' theory by the experts and was kept in both places. The same theory may be applied in the sciences, the social sciences, or management studies, depending on the context of the research work, with any necessary modifications or extensions to the original theory. It is important to remember that the authors did not find any best practices or formal guidelines for identifying the originating discipline of theories that appeared to belong to more than one discipline. The authors noticed that, in many cases, theories originating from multiple disciplines had to be fitted into a particular discipline based on the focus of the paper and the background information of the originator. This situation arose mostly in the social sciences. For example, it can be argued that 'resource-based view' (RBV) theory may be placed in 'management studies' in a broader context or in 'economics' under the 'social sciences', and both cases have compelling logic. Even the discipline of 'management studies' could be treated as a 'social science' subject rather than a separate discipline. This aspect may be one of the drawbacks of this study. Assessments of past studies by different scholars have criticised LIS for its lack of theoretical research, which may even be trending downward. Regarding theory, Orlikowski and Iacono (2001) argued that LIS research is undertheorised, and Rayward (2004) also supports this view. It is a hard fact that the LIS domain has yet to produce grand theories, and LIS researchers have used theories in their studies mainly to frame, design, and interpret findings. Many commentators have also suggested that the LIS domain needs to make more use of theory (Pinfield et al., 2021), and the field has thus been criticised for relying on theories imported from other disciplines rather than applying or developing theories from within (Park et al., 2022). The same view was also expressed by McKechnie and Pettigrew (2002). Kim (2004), however, disagreed with previous studies reporting that theoretical research was insufficient in LIS, finding that 41.4% of the studies dealt with theoretical development and utilisation. Still, research in LIS is largely confined to theories developed and used in complementary disciplines. In this study, it was found that, out of 411 theories, only 45 were from the LIS domain, and the majority (366) were from adjacent disciplines. This is due to the lack of sound, home-based theories in LIS; even the home-based theories are pragmatic and descriptive.
This lack of theoretical contributions may be associated with the fact that LIS emanates from professional practice and is therefore closely linked to practical problems such as the processing and organisation of library materials, documentation, and information retrieval (Järvelin & Vakkari, 1990; Kim & Jeong, 2006). As previously stated, LIS has borrowed many theories from other disciplines owing to a scarcity of theorists and theory-illuminated practitioners (Schrader, 1986). The authors think there is considerable scope for further research in this area of theorising LIS research and identified two main areas: the development, use, and application of home-based theory; and general conceptual and methodological awareness among LIS researchers regarding theories, particularly the specific use of the meta-theoretical assumptions inherent in LIS research. The advancement of information technology, and the use of such technologies in the library environment, have also significantly contributed to the domain of LIS research. Feather (2008) urged LIS researchers to prioritise various aspects of LIS research, correctly stating that now is the time for LIS researchers to engage with several issues that are more important to the LIS domain than today's management theories or tomorrow's technological miracle. Nevertheless, many articles are published and much practical work is done without explicating any theoretical or meta-theoretical assumptions. These works were not examined for how the term 'theory' is operationalised in the study, at what level it is used or generalised, or how the theory is proposed to operate within the study; in most cases, theories were discussed only marginally or minimally. Feehan et al. (1987) correctly observed that LIS research had not matured sufficiently to support a cohesive body of theoretical foundations and was instead built on theories from other disciplines (Gregor, 2006). However, the authors disagree with previous researchers who have treated our field as under-theorised, because forty-five LIS theories have been identified (see Appendix C). Moreover, we fully support the view of Gregor (2006), who rightly recognised the potential benefits of using theories in the LIS domain, observing that LIS professionals use both home-based and borrowed theories in new ways to make sense of their data. Conclusion The authors claim that LIS is facing a 'theory crisis' and is still in a grey area. The findings of our study offer essential insights for the LIS research community and raise several questions about LIS research. Does this mean that LIS research has no systematic theoretical base? Do we have our own or native theories? To what extent does LIS depend on other neighbouring disciplines for its theoretical base? How will LIS respond to this unresolved situation? What will be at the forefront of LIS research in the near future? The weakness of the theory-building process in LIS is evident in the tremendous borrowing of theoretical concepts and the use of theoretical frameworks from other disciplines to address various LIS issues. Rawson and Hughes-Hassell (2015) rightly opined that the treatment of theory in LIS research covers a spectrum of intensity, from marginal mentions to theory revising, expanding, or building. They further stated that the field of LIS has not been very successful in contributing to existing theory or producing new theory.
In spite of this, one may still assert that LIS research employs theory and that, in fact, many theories have been used or generated by LIS scholars. However, "calls for additional and novel theory development work in LIS continue, particularly for theories that might help to address the research practice gap". As stated, using theory in research is essential, as it helps produce transformative knowledge (Ngulube, 2020). As is evident in this study, LIS researchers have largely not developed home-based theories but have made good use of social science theories, owing to the interdisciplinary, even transdisciplinary, nature of LIS. In the absence of its own discipline-based theories, researchers have borrowed from other disciplines (see Appendix B). Still, while it would be improper to say that LIS research is conducted in collaboration with other disciplines, it is widely connected with them. Even so, the authors have noted a decreasing tendency among LIS researchers to develop new theories; they use existing ones instead. As there is no "best theory" (Weick, 1985), we should develop and improve theory-building skills (e.g., the theorising of theories) to develop more home-based theories (Weick, 1989). Doherty (2012) rightly said that theory-building research is important to ensure the development of LIS research. LIS professionals should invest more in theory building and in conducting studies on the trends of theoretical and conceptual frameworks in LIS research. Researchers should focus more on the theoretical ties between LIS research and research in other neighbouring disciplines, and a meta-analysis of LIS theories could be a solution. Boyce and Kraft (1985) and Buckland (1991) suggested that a strictly defined standard would not allow for the finding of many theories, even those considered theories within the bounds of LIS, because theories in this field may have had the status of "quasi-theories" (Boyce & Kraft, 1985). Moreover, the lack of a clear road map for theory development in LIS makes the process 'one of the most frustrating and arduous tasks in which a scholar engages' (Cunningham, 2013). Software and Data Attribution Active Learning for Systematic Reviews (ASReview) is a free (libre) open-source machine learning tool for screening and systematically labelling a large collection of textual data. It is designed to accelerate the screening of abstracts and titles, minimising the number of papers that must be read by a human while producing no or very few false negatives. It employs a machine learning technique known as active learning to support efficient and transparent systematic reviews for academia and beyond. The goal of ASReview is to help scholars and practitioners get an overview of the most relevant papers for their work as efficiently as possible, while being transparent in the process. OpenRefine is a free, open-source, standalone Java application which visualises and manipulates large quantities of data all at once. It is used for exploring, cleaning, linking and transforming data (an activity commonly known as data wrangling) on a large scale. Its functions include data normalisation, column reorganisation, faceting/clustering, tracking operations, exporting data, and so on. It is similar to spreadsheet applications and can handle spreadsheet file formats such as CSV, but it behaves more like a database.
It is more powerful than a spreadsheet, more interactive and visual than scripting, and more provisional/exploratory/experimental/playful than a database. Scopus is an abstracting, indexing, and citation database with enriched data and linked scholarly literature across a wide variety of disciplines. It indexes content that is rigorously vetted and selected by an independent review board of experts in their fields. With comprehensive content coverage, high-quality data, and precise search and analytical tools, Scopus gives researchers, librarians, research managers, and R&D professionals the insights to drive better decisions, actions, and outcomes. It empowers users to discover critical information, monitor trends, and identify subject-matter experts. It also helps users visualise, compare, and export data to evaluate research output and trends.
2023-10-14T16:14:39.397Z
2023-10-09T00:00:00.000
{ "year": 2023, "sha1": "d35186eea2cae85bfd42aa899f04f8641ea2ac4b", "oa_license": "CCBY", "oa_url": "https://liberquarterly.eu/article/download/13269/19636", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "043167a615e7b42a06bb2e8272bad40bcb65b26f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
18913407
pes2o/s2orc
v3-fos-license
Relationship between knee and ankle degeneration in a population of organ donors Background Osteoarthritis (OA) is a progressive degenerative condition of synovial joints in response to both internal and external factors. The relationship of OA in one joint of an extremity to another joint within the same extremity, or between extremities, has been a topic of interest in reference to the etiology and/or progression of the disease. Methods The prevalence of articular cartilage lesions and osteophytes, characteristic of OA, was evaluated through visual inspection and grading in 1060 adult knee/tali pairs from 545 cadaveric joint donors. Results Joint degeneration increased more rapidly with age for the knee joint, and significantly more knee joints displayed more severe degeneration than ankle joints from as early as the third decade. Women displayed more severe knee degeneration than did men. Severe ankle degeneration did not exist in the absence of severe knee degeneration. The effect of weight on joint degeneration was joint-specific whereby weight had a significantly greater effect on the knee. Ankle grades increasingly did not match within a donor as the grade of degeneration in either the left or the right knee increased. Conclusions Gender and body type have a greater effect on knee joint integrity as compared to the ankle, suggesting that knees are more prone to internal causative effects of degeneration. We hypothesize that the greater variability in joint health between joints within an individual as disease progresses from normal to early signs of degeneration may be a result of mismatched limb kinetics, which in turn might lead to joint disease progression. Background Osteoarthritis (OA) is a generally progressive condition that involves both anabolic and catabolic mechanisms within the articular cartilage and bone of synovial joints in response to both internal and external factors. Among these factors are age [1,2] genetics [3], joint/ limb alignment [4][5][6][7][8], joint injury [9], female gender [10] and obesity [11,12]. On the other hand, exercise and local muscle strengthening can inhibit its progression [13,14] by strengthening the local environment and thus reducing instability at the joint. But because there is no cure for OA and because its etiology is not fully understood, investigation continues to further elucidate the mechanisms which may contribute to its initiation and progression. Limb alignment and the relationship between the biomechanics as well as the incidence/prevalence of OA in one joint of an extremity in relationship to another joint within the same extremity has been a topic of reemerging interest. Recently, a cross-sectional study [15] found multiple kinematic and kinetic differences at the hip, knee and ankle joints in those individuals with severe knee OA. Furthermore, using mechanical axis radiographs, Tallroth et al. [16] found that the greater the tilt (relative angle of the talus to the distal tibia and distal fibula) in the ankle, the more degenerative were the changes. Previously, in a sample of 50 knee and ankle donors, it was shown that donors with degeneration in the ankle also showed degenerative changes within the knee at an equal or higher level [17]. The data suggest that factors such as altered mechanics as a result of limb alignment might contribute to degeneration within an entire limb. 
Furthermore, although genetic factors might be involved in global joint degeneration within an individual, mechanical factors within one limb likely influence the joint health of the contralateral limb. The goal of the present study was to expand our previous database [17] and correlate knee and ankle cartilage OA scores in an effort to further elucidate the relationship between degenerative joint disease within a limb and between limbs of an individual. We hypothesize that OA in one joint is associated with increased prevalence of OA in another kinematically related joint, with this relationship increasing with the severity of the OA. Methods One thousand sixty adult knee/tali pairs were obtained from 545 joint donors through collaboration with the Gift of Hope Organ and Tissue Donor Network. This included two knees and two ankles from each cadaveric donor, with the exception of 30 donors for whom only one lower extremity was available because the other was present but not available to our laboratory. The joints were collected between June 1995 and April 2009 according to the policies of the Gift of Hope and with Rush University Institutional Review Board approval. Exclusion criteria included previous amputation of an extremity, joint replacement in either lower extremity, a history of hepatitis or a postmortem positive blood test for hepatitis, HIV, or any other communicable disease. The distal portion of the tibia (proximal component of the ankle) was not available; therefore, the talus represents the ankle joint in this study. Although donor medical histories were not available, age, gender and cause of death were provided. The donors were categorized as light, normal or obese on the basis of the subjective visual assessment of the joint harvester. The joints were opened within 24 hours of death of the donor and examined for disruption of the articular cartilage on a modified Collins scale [18], where grade 0 is normal smooth cartilage, grade 1 is superficial fibrillation, grade 2 is fissuring or superficial ulceration with possible osteophytes, grade 3 is 30% or less of the cartilage surface eroded down to subchondral bone with accompanying osteophytes, and grade 4 is more than 30% of cartilage eroded down to subchondral bone with gross geometric changes including osteophytes (Figure 1). For knee cartilage scores, the highest (i.e., worst) score observed on the femur, patella, or tibia was taken as the score for the joint. The following statistical analyses were performed for nonparametric data. Spearman's rank correlation was carried out to determine the correlation between age and joint degeneration for each joint individually. The Wilcoxon signed rank test was used to determine the relationship between the grade of left and right sides for ankle and knee joints separately. To determine the effects of tissue type (ankle vs. knee) and sex on age-dependent degeneration, survival analysis was performed. The analysis determined the incidence (as survival rate) of the selected grades (set at 1 through 4) with increasing age (i.e., "survival curve"), stratified by tissue type. The mean survival age, the age at which half of the samples became degenerate (i.e., reached the selected grade), was also determined. The survival curves were compared using Kaplan-Meier analysis and the Mantel statistic [19]. Additionally, the effect of sex on survival curves was determined separately for each tissue type. Statistical significance was taken at P < 0.05.
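For readers who want to reproduce this style of analysis, a minimal sketch in Python is given below. It is illustrative only: the original analysis was not necessarily performed in Python, the file and column names are hypothetical, and only the left-sided comparisons are shown. The Mantel (Mantel-Cox) statistic corresponds to the log-rank test.

```python
# Minimal sketch of the nonparametric and survival analyses described above.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr, wilcoxon
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

donors = pd.read_csv("donor_joint_grades.csv")

# Age vs. degeneration grade (Spearman's rank correlation), per joint
rho, p_age = spearmanr(donors["age"], donors["knee_grade_left"])

# Left vs. right grades within a donor (Wilcoxon signed-rank test)
_, p_side = wilcoxon(donors["knee_grade_left"], donors["knee_grade_right"])

# "Survival" to a chosen grade: the time axis is age at death, the event is
# whether the joint had reached grade >= 3 by that age.
knee_event = donors["knee_grade_left"] >= 3
ankle_event = donors["ankle_grade_left"] >= 3

kmf = KaplanMeierFitter()
kmf.fit(donors["age"], event_observed=knee_event, label="knee")
median_age_knee = kmf.median_survival_time_  # age by which half the knees reached grade 3

# Compare knee vs. ankle curves (log-rank / Mantel test)
result = logrank_test(donors["age"], donors["age"],
                      event_observed_A=knee_event, event_observed_B=ankle_event)
print(rho, p_age, p_side, median_age_knee, result.p_value)
```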
Results Donors ranged in age from 19 to 98 with a mean age of 60 years. The age distribution per decade is shown in Figure 2. There were 287 (53%) men and 258 (47%) women. This profile reflects the donor population of the Gift of Hope Organ and Tissue Donor Network. Within the current study population, joint degeneration was first observed during the third decade of life, starting with a 26-year-old male with both knees displaying fibrillated cartilage (grade 1). The earliest signs of degeneration in the ankles were in a 28-year-old male whose knees displayed grade 2 and ankles displayed grade 1 degeneration. The oldest donor, a 96-year-old female had knees with grade 4 degeneration and ankles with grade 2 degeneration. The percentage of donors in each decade displaying each grade of degeneration for knees and ankles is shown in Figures 3a and 3b, respectively. We found that 17% of individuals had both normal (grade 0) knee and ankle cartilage. Degeneration increased for ankles through the tenth decade. Spearman's rank correlation for the association between age and the level of degeneration in both left and right ankle and knee joints was statistically significant (P = 0.0001 for both joints). Degeneration increased for knees through the ninth decade, with a slight decrease in the tenth decade, but it should be noted that there were only six donors in the tenth decade. Degeneration increased more rapidly with age for the knee joint, and significantly more knee joints displayed more severe degeneration than ankle joints from as early as the third decade. A minority (4%) of knee joints displayed completely normal cartilage from the sixth decade upward, and no single knee joint displayed normal cartilage by the eighth decade. For the ankle, however, approximately 50% of joints had completely normal-looking cartilage through the sixth decade. Erosion of 30% or less of the cartilage surface (grade 3) began as early as the fourth decade for knees and the fifth decade for ankles. Diffuse cartilage erosion (grade 4) began as early as the fourth decade for knees, but only surfaced in three donor ankles and not until the eighth decade. Through the eighth decade, the majority (63%) of ankles still displayed either normal or fibrillation cartilage, whereas at this point, the majority of knees displayed moderate to severe degeneration. Knee and ankle cartilage scores, separately, for left and right sides are shown in Figure 4. There were no significant differences between sides (P > 0.05). The majority of individuals had more severe knee OA than ankle OA (60.8% on the left side and 60.5% on the right side). Fewer individuals had equal severity of OA on the knee and the ankle (30.1% on the left side and 36.9% on the right side). Rarely did the ankle display more severe OA than the knee (1.1% on the left side and 2.6% on the right side). Results of the Wilcoxon signed rank test revealed that the only joint relationships which showed no significant differences were the left vs. right ankles and left vs. right knees (P = 0.1705 and 0.0845, respectively). Figure 5 shows cartilage scores by gender, where it can be seen that slightly fewer female knees and ankles were normal (grade 0) than male knees and ankles (P = 0.05). Females also showed more severe knee degeneration (grades 3 and 4) than did males (P = 0.03). The effect of weight on joint degeneration was significant ( Figure 6). 
The knees of obese individuals showed more severe (grades 3 and 4) degeneration than did knees from normal-weight or lightweight donors (P < 0.05). For ankles, however, although there were more grade 0 joints in the lightweight category of donors, the differences at the more severe levels of degeneration that existed between obese and normal-weight donors for the knee did not exist for the ankle (P > 0.05 for grades 3 and 4). The relationship between knee and ankle degeneration within individual donors can be resolved in several ways. An examination of which joint displays more severe degeneration than its counterpart within an extremity of an individual shows that the majority (61%) of donors had more severely degenerated knees than ankles on both left and right sides. Fewer (38.1% [left side], 36.9% [right side]) had knees and ankles of the same grade of degeneration, and only 1.1% (left side) and 2.6% (right side) had ankles which were more degenerated than knees. This was true for both left and right extremities. Ninety-nine percent of donors with normal (grade 0) knees also had normal ankles, whereas 38% of donors with normal ankles also had normal knees. For an in-depth look at the most severe degeneration, Table 1 shows the percentage of ankle joints at each cartilage grade when the ipsilateral/contralateral knee joint displayed grade 4 degeneration, and vice versa for the opposite joint. From this table it is apparent that individuals with severely degenerated knees can have normal ankle cartilage; this was the case as often as their having fibrillated or fissured ankle cartilage. However, far fewer ankles displayed the same severity of degeneration as the ipsilateral or contralateral knee, although this condition did indeed exist. Although there were only three ankles displaying grade 4 degeneration, they were associated with degenerated knees. Looking at the relationship between joint degeneration as the lower/higher joint of the limb changed in health status, Table 2 shows the distribution of matched and unmatched ankle grades within a donor with different levels of knee degeneration. Ankles within a donor increasingly did not match (i.e., did not have the same grade) as the grade of degeneration in the left knee increased through grade 3, with somewhat of a decline at grade 4 (P < 0.05 between all grades). For the right knee, there was an increase in the number of unmatched ankle grades as the knee grade increased through grade 2, at which point the percentage of unmatched grades was basically maintained (P < 0.05 between grades through grade 2; no significant difference between grades 2, 3 and 4). These data show the greater variability in joint health within individuals as disease progressed from normal to early signs of joint degeneration. If both knees displayed grades of 3, 63.8% (30 of 47) of these ankles had the same grade. If both knees displayed grades of 4, 81.8% (18 of 22) of these ankles had the same grade. There were 11 donors (23.4% of grade 3 donors) in whom both knees were grade 3 and both ankles were grade 0. There were five donors (22.7% of grade 4 donors) in whom both knees were grade 4 and both ankles were grade 0. Survival analysis to compare ankles and knees (Figure 7) suggested an increasing incidence of degeneration with aging for both knees and ankles, yet markedly different survival curves for the ankle as compared to the knee (P < 0.001, red vs. blue lines, respectively) at all grades (1 to 4; Figures 7a-d).
Mild degeneration (i.e., grade 1 or greater; Figure 7a) occurred at an earlier age for the knee; the mean survival ages for the knees and the ankles were ~65 and ~75 years, respectively. The gap in the mean survival age increases for moderate degeneration (grade 2 or greater; Figure 7b) as the mean survival ages for the knees and the ankles become ~70 and ~85 years, respectively. Severe degeneration (grade 3 or greater; Figure 7c) is found in ~80% of the knees by the age of 90 years, the age at which only <20% of the ankles were found to be equally degenerate. Grade 4 degeneration (Figure 7d) was found in ~50% of the knees by the age of 90 years, while only three of the ankles had grade 4 degeneration in the late decades. Survival analysis to evaluate the sex differences (Figure 8) suggests significant differences between the survival curves of the male and female knees when grade 3 (Figure 8a) or 4 was defined as being degenerate (for each P < 0.05). This was due to a relatively delayed degeneration of the knee in males between 70 to 85 years of age. In the ankle, sex had no significant effect on survival curves, regardless of the grade (each P > 0.1). However, there was a trend (P = 0.09) of relatively delayed degeneration (to grade 2) in ankles of females between 80 to 95 years of age (Figure 8b).
Figure 6. Distribution of knee and ankle grades separated according to relative body type (as assessed visually). Knees from obese donors displayed more severe degeneration (grades 3 and 4) than did joints from lightweight or normal-weight donors.
Discussion OA is a condition based on both degenerative cartilage and bone changes within a joint resulting in the clinical manifestation of these changes as joint pain. In the present study, because the pain history of the individuals within our donor population was not available, we do not use the term osteoarthritis, but rather joint degeneration [20]. This is of significance because it is well known that some individuals with joint pain show no radiographic or magnetic resonance imaging evidence of joint disease, whereas other individuals with no joint pain show imaging evidence of the pathological joint changes normally associated with OA [20,21]. A strength of the present study, however, is that we had the advantage of actual visualization of articular cartilage surfaces and osteophytes from cross-sectional cadaveric donors, thus rendering data on normal and early stages of the disease which cannot be discerned through any current imaging technologies. In a very early analysis of our donor population, when only 50 knee/ankle donor pairs had been harvested, we found that ankle joint degeneration was more frequent in men than in women, increased with age, and occurred most often in both limbs with the same severity [17]. In donors with degeneration in the ankle, the knee also showed degenerative changes with an equal or higher grade. At that time, we suggested that factors such as altered mechanics might be responsible for degeneration in one limb and result in changes in the contralateral limb. The present study on 545 knee/ankle donors with a mean age of 60 years reaffirms our previous results. For the knee joint, females showed greater degeneration than did males, with fewer normal joints and more joints displaying partial and extreme erosion of the articular surface. This concurs with the known greater prevalence of OA in women as compared with men [22,23].
This difference may be due to one or more of several known gender differences involving knee joint anatomy, kinematics and/or physiology [24][25][26][27]. One difference that we found in comparison to a previous study [16] was that here female donors had slightly less normal ankle cartilage and slightly more fibrillation than did males. However, this did not extrapolate to higher grades of degeneration, where male ankles displayed slightly earlier fissuring (grade 2) than did female ankles. The effect of weight on joint degeneration was joint-specific whereby weight had a significantly greater effect on the knee than on the ankle. The majority of knees from obese donors displayed degeneration of at least grade 2 (fissuring) or greater, and nearly 50% displayed cartilage erosion down to subchondral bone. In the ankle, although lightweight donors displayed little fissuring and no erosion, the levels of fissuring and erosion were not different between normal-weight and obese individuals. We found that approximately 20% of donors in whom both knees displayed advanced degeneration (grades of 3 or 4) had ankles that appeared perfectly normal; the reverse never occurred. This reinforces the idea that knee degeneration likely has a greater influence on ankle health than the reverse situation. The fact that knees may be bilaterally severely pathological in structure in the absence of visible ankle pathology attests to the structural stability of the ankle as a hinge joint with less mechanical freedom in comparison to the knee. It appears that, at least in some individuals, aberrant knee structure and function do not inevitably lead to changes in extremity function so severe that they affect the ankle. On a purely speculative level, however, it is likely that this protection would not be observed at the hip as the hip is much more highly prone to OA than the ankle, and the coexistence of hip and knee OA is well documented [28,29]. Survival analyses suggested that even mild degeneration (grade 1) occurs more slowly in the ankle than in the knee, and severe (grades 3 and 4) degeneration rarely occurs in the ankle. In addition, the effect of sex on joint degeneration was joint-specific and dependent on the severity of degeneration. In the knee, mild-to-moderate degeneration (grades 1 and 2) occurred similarly in both sexes; however, severe degeneration (grades 3 and 4) occurred at an earlier age for women. The trend reversed in the ankle.
Figure 8. Survival curves of the male (red) and female (blue) samples in (a) the knees reaching grade 3 and (b) the ankles reaching grade 2. There were significant differences between the curves of the male and female knees when grade 3 (Figure 8a) or grade 4 was defined as being degenerate (each P < 0.05).
Additionally, we explored the data a bit differently to further elucidate the relationship between the two joints within an extremity. One interesting relationship occurred when looking at how degeneration at the knee related to the matching of ankle grades within an individual. Ankle grades increasingly did not match within a donor as the grade of joint degeneration within the left knee increased through grade 3 (partial erosion of the articular surface), with somewhat of a decline at grade 4 (severe erosion). There was an increase in the number of unmatched ankle grades as right knee degeneration increased through grade 2 (fissuring), at which point the percentage of unmatched grades was basically maintained.
This points to the greater variability in joint health within the extremities and results in an imbalance in joint health between sides as disease progresses. Combined with the finding that as degeneration in the knee increased so did degeneration in the ankle, an interesting consideration appears. Ninety-nine percent of donors with normal (grade 0) knees also had normal ankles, whereas 38% of donors with normal ankles also had normal knees. However, once signs of knee degeneration occur, even at the earliest stages (i.e., fibrillation), the ankles of a pair begin to become discordant in their appearance with respect to each other. We interpret this as suggesting that whatever mechanism is occurring in the knee to cause early degeneration, the same mechanism is likely occurring in the ankle, but at a lower level. This may be either as a consequence of mechanical alterations in the knee or through an independent process. It is likely, however, that the two are related as has been suggested in studies that have attempted to elucidate the relationship between knee and ankle OA. Studies in patients have shown that hip-knee-ankle alignment contributes to the distribution of load across a joint surface. In fact, both the varus and valgus malalignment of the knee increase the risk of progression of medial and lateral OA, respectively [30,31]. The varus knee increases the force across the medial knee compartment, whereas the lateral compartment has increased force in the valgus knee [32]. In both these conditions, the mechanical alignment of the extremity is changed from the neutral axis, thus setting up alignment issues throughout the extremity and perhaps the entire body. In a retrospective study of mechanical axis radiographs of subjects just prior to total knee arthroplasty, it was found that ankle OA and tilt in the ankle were not uncommon [16]. Furthermore, the greater the tilt in the ankle, the more degenerative were the changes in the joint [16]. When the mechanical axis at the knee was corrected at the time of surgery, the ankle tilt was also significantly changed. This work relates well to one of our previous studies in which we found that the trabecular angle within the talar dome is associated with the level of joint degeneration [33]. The talar dome of the human talus receives compressive forces that have traversed the leg. Thus, in keeping with Wolff's Law, the body of the talus has predominantly vertically aligned trabeculae running superior to inferior. Through fast Fourier transform analysis, it was found that as the trabecular angle deviated from a perpendicular alignment, the greater were the cartilage changes on the articular surface, particularly at medial and lateral borders. We hypothesized that these results may be a reflection of the alignment and/or biomechanics at the joint [33]. Thus, taking the ideas of these latter two studies together, it is possible that a malaligned knee affects the alignment of the entire kinetic chain, setting the stage for potential pathology anywhere along that chain. Another relationship that would have been interesting to examine is how medial vs. lateral knee OA is related to medial vs. lateral ankle OA. Unfortunately, because we did not have information on the topographical location of cartilage changes, we cannot make any statements in this regard. However, in a previous cadaveric study, we found that more knee and ankle joints displayed greater degeneration on the medial than on the lateral aspect [17]. 
In another study, on the difference between foot center of pressure patterns between subjects with and without OA, we found that the subjects with medial compartment OA demonstrated a more laterally placed foot pressure pattern with normal walking as compared with non-OA control subjects [34]. This is accomplished by changing the axis of the ankle joint in relation to the leg and placing greater pressure on the medial aspect of the ankle. Therefore, at least from these results, it might be expected that medial ankle OA could be found in relation to knee OA. However, further studies must be carried out to make this determination. The limitations of the present study include the lack of information on the history of joint injury and the lack of information on the level of mobility or the use of walking aids. Each of these issues has the potential to introduce variability in the data that might not be accounted for. For instance, if a subject sustained an undocumented traumatic injury to the knee joint, it would not be known if the presence of OA in this joint was due to trauma or to the relationship of this joint to the contralateral knee or the ankles. Another limitation is that we did not have access to the distal tibia. If the joint degeneration on this component is greater than that on the talus of the same joint, this may lead to the underestimation of the true severity of ankle pathology. This would surely be the case in at least some specimens, as we previously showed in a sample of 100 specimens from 50 cadavers that 30% of ankle joints displayed greater degeneration on the tibia than on the talus, 21% showed equal levels of degeneration on both sides and 49% showed greater degeneration on the talus [18]. Another parameter of consideration is the manner in which body type (light, normal, obese) was determined. Because we obtained the joints through the Gift of Hope Organ and Tissue Donor Network, we were dependent upon subjective determination after physical examination of the body. We considered the amount of overall subcutaneous body fat in making this determination, and although not entirely scientific, we think this method provides good relative information within the study sample. Conclusions To our knowledge, this cadaveric donor joint study is the largest study of its kind for knee and ankle pathology. The knee joint displayed significantly greater signs of degeneration than the ankle joint and showed a gender preference whereby females had more severe knee degeneration. Obesity increased the severity of joint lesions in the knee but had a much less profound effect on the ankle. A major new finding of the study was that ankle grades increasingly did not match within a donor as the grade of joint degeneration in either the left or the right knee increased. This is, in essence, an imbalance in joint integrity between sides and points to a greater variability in joint health within the extremities as disease progresses. The possibility of this leading to limb malalignment, particularly once cartilage erosion in at least one knee compartment has occurred, is realistic. In turn, limb malalignment is highly associated with joint disease progression as shown in other studies.
Concept Mastery of Physics Education Students in Multiple Representation (MR) Based Three-Dimensional Solid Object Motion (3DSOM) Mechanics

Physics is a branch of science that explains nature and its phenomena, from the concrete to the abstract. One topic in the mechanics lecture that is considered difficult and abstract is three-dimensional solid object motion (3DSOM). This research aims to determine the mastery of 3DSOM concepts in mechanics learning by physics education students. The subjects of this research were 21 third-semester students at one LPTK. The instruments used were a multiple representation (MR) based 3DSOM mechanics concept module to measure mastery of the 3DSOM concepts and an essay test of MR-based 3DSOM concept mastery aligned with indicators of concept understanding based on the revised Bloom's Taxonomy. This research uses a quantitative and qualitative descriptive method to determine the achievement of concept mastery of MR-based 3DSOM. The results show that the average percentages on the MR-based concept module for the 3DSOM material were 50.9% for the verbal, 63.6% for the pictorial, 43.0% for the graphical and 72.2% for the mathematical representation; all four MR categories showed a fairly good percentage. The mean pretest score was 37.4% and the mean posttest score was 76.1%, with an N-Gain of 61.8%, showing an increase in mastery of the 3DSOM concepts. A limitation of this research is that it involves only one class with a small number of students from one university, so the results are not strong enough to represent the whole situation; this opens the way for further research with larger samples and different mechanics content. This research is the first to analyze concept understanding of MR-based 3DSOM mechanics lecture material by analyzing verbal, pictorial, graphical and mathematical skills.

Introduction
Mastery of a physics concept in learning is the ability of a person to relate one fact to another. Physics is a branch of science that studies concrete phenomena that can be described mathematically using equations and that is supported by research continually developed by physicists [1]. To truly master a physics concept, a student must be able to explain it in his or her own words, consistent with the knowledge possessed. Mastery of concepts is very important for students because it is an indicator that they have fully understood what they have learned, so that this mastery can later help them solve problems not only in lectures but also in teaching practice in schools. Using multiple representations is a fundamental process in mastering a concept [2]. The mechanics material on three-dimensional solid object motion (3DSOM) includes the following sub-concepts: moment of inertia, angular momentum and kinetic energy; the central axes; the Euler equations for rigid-body motion; free rotation of solid objects: geometric description of the motion; free rotation of solid objects with a symmetry axis; description of the rotation of a solid object relative to the coordinate system; Eulerian angles; motion under torque; equations and energy increase; the gyrocompass; and why a lance does not fall over (mostly) [3]. Formulating concepts or principles of three-dimensional solid object motion is part of this material.
Multiple representation is a teaching practice that depicts, symbolizes, or represents a concept or process through different forms of representation [4]. Learning style is the manner in which students understand information; for example, some students absorb material more easily through verbal learning, while others learn more easily through pictorial or mathematical learning [5]. Accommodating these learning styles requires a learning approach that can deliver material in a multiple-representation way [6]. Learning styles can be categorized into three types: visual, auditory and kinesthetic [7]. Among these three learning styles, some individuals tend toward one style, while others tend toward all of them [8]. If the teacher's teaching strategy matches the students' learning styles, then no lesson is difficult [9]. Multiple representations and multimedia can support learning in many different ways [10]. Learning with multiple representations is more effective in building students' mental models and concept understanding than conventional learning [11].

Methods
This research was conducted at a private university in Jakarta from December 2018 to January 2019. It used a quantitative and qualitative descriptive method to determine the achievement of mastery of MR-based 3DSOM mechanics concepts. The instruments were an MR-based 3DSOM mechanics concept module to measure understanding of the 3DSOM concepts and an essay test of MR-based 3DSOM concept mastery aligned with indicators of concept understanding based on the revised Bloom's Taxonomy [12]. The subjects were third-semester students of the 2018/2019 academic year; in total, 21 students were involved. The data were analyzed with N-Gain, normality, homogeneity, and t tests [13].

Results and Discussion
The most important process in continued physics learning is understanding the basic concepts of physics [14]. The data collected in this study are measures of understanding of the mechanics concepts in the 3DSOM material, obtained from the MR-based content module and from the students' pretest and posttest results after the application of multiple-representation learning of the three-dimensional solid object motion (3DSOM) material. Across the four multiple representations, the percentages obtained ranged from 43% to 72%, a fairly good category; the lower values reflect the students' limited ability to analyze verbally and graphically the three-dimensional material on the central axis of solid objects. Overall, and according to the indicators for each sub-chapter, there was a change and improvement in understanding of the 3DSOM concepts. A pretest and posttest were given to determine the success of the treatment in the experimental class, that is, whether multiple-representation-based learning can improve students' concept understanding of the 3DSOM material; Figure 2 shows the means of the pretest and posttest and the N-Gain for MR-based 3DSOM. The analysis showed that the mean pretest score was 37.4% and the mean posttest score was 76.1%, with an N-Gain of 61.8%, indicating an increase in mastery of the 3DSOM concepts. Five students (23.8%) were in the medium category, 13 students (61.9%) in the high category and 3 students (14.3%) in the very high category.
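The normalized gain reported above is consistent with Hake's N-Gain formula computed from the class-mean scores; the short sketch below is illustrative only and assumes this standard definition, since the text cites [13] for the analysis technique without spelling out the formula.

```python
def normalized_gain(pretest: float, posttest: float) -> float:
    """Hake's normalized gain for scores expressed in percent (0-100)."""
    return (posttest - pretest) / (100.0 - pretest)

# Class means reported above: pretest 37.4%, posttest 76.1%
g = normalized_gain(37.4, 76.1)
print(f"N-Gain = {g:.3f} ({g * 100:.1f}%)")  # -> N-Gain = 0.618 (61.8%)
# The category labels used in the paper (medium / high / very high) are applied
# to individual students' gains; their cut-off values are not stated in the text.
```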
From the posttest results it was concluded that the mastery of concepts by physics education students in learning the mechanics of three-dimensional solid object motion based on multiple representations increased.

Conclusion
The analysis and discussion of third-semester students' understanding of basic mechanics concepts showed that learning mechanics with a multiple representation (MR) based 3DSOM material module increased the concept mastery of physics education students by a fairly good percentage. Multiple-representation-based mechanics learning can therefore be implemented for the 3DSOM material. Based on these conclusions, the authors suggest that multiple-representation-based mechanics learning can be used as an alternative, concept-building approach for other material, and it is hoped that other researchers will provide a variety of mechanics learning approaches with more varied representations. The pretest-posttest findings for 3DSOM reached the high category. For future work, this research can be developed more broadly by reviewing student learning styles in a wider group, so that data collection is not as limited and the accuracy and strength of the data can be higher.
Quest for the Co-Pyrolysis Behavior of Rice Husk and Cresol Distillation Residue: Interaction, Gas Evolution and Kinetics : With the tremendous prosperity of industry, more and more hazardous waste is discharged from industrial production processes. Cresol distillation residue is a typical industrial hazardous waste that causes severe pollution without proper treatment. Herein, the co-pyrolysis of rice husk and cresol distillation residue was studied using thermogravimetry–mass spectrometry and kinetic studies. The Coats and Redfern method was employed to calculate the activation energy. The results indicated that the pyrolysis process of cresol distillation residue and RH/CDR (Rice Husk and Cresol Distillation Residue) blends can be divided into four stages and three stages for RH. The introduction of RH not only improved the thermo-stability of cresol distillation residue at a low temperature but also reduced the activation energy of the blends. The activation energy was the lowest when the proportion of rice husk in the blend was 60%. The main gaseous pyrolysis products included CH 4 , H 2 O, C 2 H 2 , CO 2 , C 3 H 6 and H 2 . There existed an unusual combination of synergistic and inhibitive interactions between RH and cresol distillation residue, respectively, within different temperature ranges. The synergistic interaction decreased the reaction’s activation energy, whereas the inhibitive interaction reduced the emission of main gaseous products, such as CH 4 and CO 2 . It was concluded that the addition of RH was conducive to improving the pyrolytic performance of cresol distillation residue and the resource utilization of cresol distillation residue. Introduction Distillation processes dominate 60% of separation in the chemical industry [1], but there are ca. 2.5 million tons of distillation residues produced in China every year. Distillation residues have been included in the national hazardous waste lists of different countries [2]. In general, distillation residue is mainly treated by landfill or incineration approaches [3][4][5]. The landfill of distillation residues generates large amounts of leachate that severely contaminate the soil and underground water [6][7][8][9]. The incineration of distillation residues is a high-energy consumption process that also produces severe secondary pollution [10,11]. The resource utilization of distillation residues for the production of value-added products seems to be an environment-benign approach [12], but only a handful of papers exist regarding the conversion of distillation residues to diesel and lubricating oil [13]. At present, varieties of biomass with clean and renewable characteristics have been used for the resource utilization of solid waste, including sewage sludge, food waste and municipal solid waste, through co-pyrolysis technology [14][15][16][17]. There is an inhibitive or synergistic interaction between biomass and solid waste during thermal treatment. We note that only one paper reported an inhibitive interaction between solid waste and biomass, in which the undecomposed lignite particles prevented the release of volatile matters in solid waste derived from refining and chemical wastewater at a lower temperature [18]. In contrast, synergistic interactions have been extensively studied. Several researchers have reported a synergistic interaction between sewage sludge and biomass that reduced the release of gaseous sulfur substances and NOx [19][20][21]. 
The structure of pyrolysis products can be optimized through synergistic interactions, as exemplified by the improved surface area of combustion ashes from textile dyeing sludge [22]. In addition, the synergistic interactions of blends have also resulted in higher reactivity and better combustion performance. For example, the synergistic interactions between textile dyeing sludge and microalgae improve the combustion performance of textile dyeing sludge because the density of its blends is larger than single microalgae [23]. We note that the co-pyrolysis of industrial distillation residues with biomass remains relatively underexplored. There are fewer than five studies focusing on co-pyrolysis of biomass with the distillation residue from lab-scale bio-oil production [24][25][26][27]. The interaction between industrial distillation residues and biomass remains relatively unexplored. In this contribution, we reported an unexpected synergistic effect that combined the high-temperature inhibitive and low-temperature synergistic processes in a sequential manner during the co-pyrolysis of cresol distillation residue and rice husk. Cresol distillation residue is a typical industrial waste from the production of p-cresol, which has been widely used for the synthesis of pharmaceuticals, herbicides, antioxidants and dyes [28]. It is estimated that 10-12 tons of cresol residue can be produced for every 100 tons of pcresol [29]. This work aims to investigate the interactions and product characteristics during co-pyrolysis of cresol distillation residue (CDR) and rice husk (RH) at various mixing ratios through thermogravimetric analysis. The interactions between cresol distillation residue and rice husk were investigated using the deviation of weight loss TG (∆W) between the calculated and experimental values in detail. TG coupled with mass spectrometry (TG-MS) enables the tracing of thermal reactions and the characterization of evolved gases. The kinetic parameters and apparent activation energy during thermal decomposition were calculated using the Coats and Redfern model. The elucidation of interactions between RH and CDR during the co-pyrolysis process is likely to provide scientific support for the effective utilization of CDR and to reduce related environmental hazards. Materials The cresol distillation residue was supplied by a local company in Nantong, and rice husks were purchased from an agricultural product store. RH was mixed with CDR in a ratio of 20-80% to examine the effect of biomass additives on thermal decomposition during the co-pyrolysis process. Residue-biomass mixtures were prepared with a biomass content of 20%, 40%, 60% and 80%. All samples were oven-dried at 105 ± 5 • C for 24 h to remove water before pyrolysis. The elemental content and chemical features of raw materials are exhibited in Table 1. The ash content, volatile matter and moisture were measured following the Chinese coal industry method (Chinese standard methods, GB/T 212-2008). The elemental analysis was performed using an elemental analyzer (Euro Vector EA3000, NETZSCH, Italy). The element ratio of H/C was adopted to characterize the amount of CO 2 released during pyrolysis [30]: the higher the H/C, the lower the level of CO 2 per unit of energy produced [31]. In this work, it was demonstrated that in comparison to cresol distillation residue, rice husk has potentially lower CO 2 emissions. The elemental content of Ca, Fe, Al, K, P, Cl and Si were determined using X-ray fluorescence ( Table 2). 
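As a side note on the H/C indicator mentioned above, the molar ratio is obtained from the ultimate-analysis weight percentages as sketched below; the numbers are placeholders and are not the values of Table 1, which is not reproduced here.

```python
def molar_h_to_c(h_wt_percent: float, c_wt_percent: float) -> float:
    """Molar H/C ratio from ultimate-analysis weight percentages of H and C."""
    M_H, M_C = 1.008, 12.011  # atomic masses, g/mol
    return (h_wt_percent / M_H) / (c_wt_percent / M_C)

# Hypothetical weight percentages, for illustration only:
print(f"H/C = {molar_h_to_c(5.0, 40.0):.2f}")  # higher H/C implies less CO2 per unit of energy
```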
CDR was mixed with RH ground to a particle size of less than 100 mesh via vigorous stirring, with an RH weight percentage of 20-80 wt.%. Pyrolysis of all samples was performed on an STA 449-QMS 403 TG-MS analyzer. About 10 ± 0.5 mg of sample with a particle size of less than 100 mesh was heated from room temperature to 1000 °C at a heating rate of 20 °C/min under a N2 flow rate of 20 mL/min. The generated gaseous products were monitored, including H2 (m/z = 2), CH4 (m/z = 16), H2O (m/z = 18), C2H2 (m/z = 26), C3H6 (m/z = 42) and CO2 (m/z = 44), according to the database of the National Institute of Standards and Technology (NIST). All pyrolysis tests were repeated two or three times, and the average data were taken to ensure repeatability.

Kinetic Analysis
We selected the Coats and Redfern model to calculate kinetic parameters based on the TG analysis [32,33]. The conversion fraction is the mass fraction of a decomposed sample [34,35], which is described by

$$x = \frac{W_0 - W_t}{W_0 - W_\infty} \quad (1)$$

where x is the conversion extent of the sample, $W_0$ is the initial mass, $W_t$ is the instantaneous mass and $W_\infty$ is the final mass. The isothermal reaction rate is

$$\frac{dx}{dt} = A_0 \exp\!\left(-\frac{E}{RT}\right) f(x) \quad (2)$$

and a constant heating rate is defined as

$$\beta = \frac{dT}{dt} \quad (3)$$

Thus, Equation (2) is also equal to the following equation:

$$\frac{dx}{dT} = \frac{A_0}{\beta} \exp\!\left(-\frac{E}{RT}\right) f(x) \quad (4)$$

The integral of Equation (4), with $f(x) = (1-x)^n$, is

$$g(x) = \int_0^x \frac{dx}{(1-x)^n} = \frac{A_0}{\beta} \int_0^T \exp\!\left(-\frac{E}{RT}\right) dT \quad (5)$$

First, perform partial integration on the right side of Equation (5); then, ignore the higher-order terms to obtain the following expressions:

$$\frac{1-(1-x)^{1-n}}{1-n} = \frac{A_0 R T^2}{\beta E}\left(1 - \frac{2RT}{E}\right)\exp\!\left(-\frac{E}{RT}\right), \quad n \neq 1 \quad (6)$$

$$-\ln(1-x) = \frac{A_0 R T^2}{\beta E}\left(1 - \frac{2RT}{E}\right)\exp\!\left(-\frac{E}{RT}\right), \quad n = 1 \quad (7)$$

Equations (6) and (7) can be written in logarithmic form:

$$\ln\!\left[\frac{1-(1-x)^{1-n}}{T^2(1-n)}\right] = \ln\!\left[\frac{A_0 R}{\beta E}\left(1 - \frac{2RT}{E}\right)\right] - \frac{E}{RT}, \quad n \neq 1 \quad (8)$$

$$\ln\!\left[\frac{-\ln(1-x)}{T^2}\right] = \ln\!\left[\frac{A_0 R}{\beta E}\left(1 - \frac{2RT}{E}\right)\right] - \frac{E}{RT}, \quad n = 1 \quad (9)$$

Because $2RT/E \ll 1$, we simplify Equations (8) and (9) to

$$\ln\!\left[\frac{1-(1-x)^{1-n}}{T^2(1-n)}\right] = \ln\!\left(\frac{A_0 R}{\beta E}\right) - \frac{E}{RT}, \quad n \neq 1 \quad (10)$$

$$\ln\!\left[\frac{-\ln(1-x)}{T^2}\right] = \ln\!\left(\frac{A_0 R}{\beta E}\right) - \frac{E}{RT}, \quad n = 1 \quad (11)$$

It is assumed that the pyrolysis reactions follow first-order reaction kinetics (n = 1), which gives

$$\ln\!\left[\frac{-\ln(1-x)}{T^2}\right] = \ln\!\left(\frac{A_0 R}{\beta E}\right) - \frac{E}{RT} \quad (12)$$

For easy calculation, Equation (12) can be changed to the straight-line formula

$$Y = a + bX, \qquad Y = \ln\!\left[\frac{-\ln(1-x)}{T^2}\right], \quad X = \frac{1}{T}, \quad a = \ln\!\left(\frac{A_0 R}{\beta E}\right), \quad b = -\frac{E}{R} \quad (13)$$

Here, x is the instantaneous conversion ratio, T is the absolute temperature (K), β is the heating rate (°C/min), R is the gas constant (J·mol−1·K−1), A0 is the pre-exponential factor (min−1) and E is the activation energy (kJ·mol−1).

Individual Samples
Thermogravimetric (TG) and differential thermogravimetry (DTG) curves showed a difference in thermal behavior between RH and CDR (Figure 1). RH consists of (hemi)cellulose, lignin and other minor organic components [36,37], so the weight loss of RH can be divided into three main stages (Figure 1a). The release of adsorbed gases and water vapor occurred from 50 to 218 °C, followed by the thermal decomposition of volatiles from (hemi)cellulose in the range of 218 to 409 °C (weight loss = 57.3%). The final stage is the slow decomposition of lignin and carbonaceous residue with a weight loss of 14.9%. In comparison, the pyrolysis of CDR was divided into four individual stages (Figure 1b). Water evaporation occurred before 129 °C, after which the pyrolysis of (hemi)celluloses, phenols and ethers occurred until 510 °C. The weight loss from 510 °C to 750 °C corresponded to the charring reaction of lignin and the remaining hydrocarbon, followed by the thermal decomposition of inorganic substances at a very low rate [38]. As the DTG curves show, the maximum weight loss rate of RH (19.16 wt.%/min) is much higher than that of CDR (9.4 wt.%/min) (Table 3), indicating the higher reactivity of RH during the co-pyrolysis process. Compared with RH, the peak temperature of maximum weight loss of CDR was lower, indicating that CDR was more unstable during the co-pyrolysis process. Therefore, RH exhibited better pyrolysis performance than CDR, and it is estimated that blending RH with CDR enhances the pyrolysis process of CDR.
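A minimal sketch of how the straight-line form of Equation (13) above is applied in practice is given below: it fits ln[-ln(1-x)/T^2] against 1/T and recovers E and A0 from the slope and intercept. The TG data are synthetic, generated from an assumed first-order model with placeholder parameters, not the measured RH or CDR curves.

```python
import numpy as np

R = 8.314      # gas constant, J mol^-1 K^-1
beta = 20.0    # heating rate, K/min (20 °C/min, as in the experiments)

# Synthetic first-order "TG" data from assumed placeholder parameters
E_true, A0_true = 22_000.0, 50.0              # J/mol and min^-1, illustrative only
T = np.linspace(190.0, 380.0, 60) + 273.15    # CDR analysis window, in kelvin
g = (A0_true * R * T**2) / (beta * E_true) * np.exp(-E_true / (R * T))
x = 1.0 - np.exp(-g)                          # conversion, since g(x) = -ln(1 - x) for n = 1

# Linear fit of Equation (13): Y = ln[-ln(1-x)/T^2] versus X = 1/T
Y = np.log(-np.log(1.0 - x) / T**2)
X = 1.0 / T
slope, intercept = np.polyfit(X, Y, 1)

E_fit = -slope * R                             # activation energy, J/mol
A0_fit = beta * E_fit / R * np.exp(intercept)  # pre-exponential factor, min^-1
print(f"E = {E_fit / 1000:.2f} kJ/mol, A0 = {A0_fit:.2f} min^-1")
```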
In Table 3, Ti is the initial decomposition temperature; Tf is the final decomposition temperature; Tmax is the first peak temperature; DTGmax is the maximum weight loss rate; Mf is the residue mass. The uncertainty values u1, u2, u3, u4, u5 and u6 are introduced by the measurement errors of RH, 80RH, 60RH, 40RH, 20RH and CDR [39]. u is written as

$$u = \frac{\delta}{k}, \qquad k = \sqrt{3}$$

where δ is the measurement error. We calculate the error with Mf as the reference. The larger the uncertainty value, the more unstable the sample. The most unstable sample is CDR, which is consistent with the above analysis.

Blend Samples
The TG/DTG curves of the blends of RH and CDR can be roughly divided into four stages, similar to CDR, but the beginning temperature (Ti), the temperature of maximum decomposition rate (Tmax) and the ending temperature (Tf) of the blends are totally different (Figure 2a,b, Table 3). The addition of RH to CDR reduced the heat transfer from the surface to the core of the blend samples, giving rise to a shift of the blends' Ti and Tf to higher temperatures [40]. The addition of RH significantly reduced the maximum rate of weight loss (DTGmax) and the residual mass (Mf) of the blend samples from 20RH to 60RH. We noted that the DTGmax peak gradually moved toward the high-temperature zone with increasing RH content, indicating that the addition of RH reduced the reactivity of the mixtures. However, when the proportion of RH is 80%, the DTG curve of 80RH/CDR is similar to that of pure RH. In Figure 2c, the conversion of all samples occurred from 100 to 900 °C. The conversion curve of the mixture almost coincides with the RH curve from 372 to 472 °C. The pyrolysis of all samples was delayed when the temperature rose above 472 °C, corresponding to the strongly reduced value of the second DTG peak in Figure 2b.
Interactive Effects Analysis
For the purpose of investigating the interactions between RH and CDR during co-pyrolysis, the weight-loss deviations between the experimental and calculated values over the whole temperature range were calculated according to the following equation [41,42]:

$$\Delta W = W_{\mathrm{experimental}} - W_{\mathrm{calculated}}, \qquad W_{\mathrm{calculated}} = x_1 W_{\mathrm{RH}} + x_2 W_{\mathrm{CDR}}$$

where W_RH and W_CDR represent the weight loss (TG) of each individual sample, and x1 and x2 are the proportions of RH and CDR in the blend, respectively. W_calculated is the sum of the component weight losses based upon their fractions at a given temperature [43][44][45]. ΔW refers to the deviation of the weight loss of the blend according to the TG curves, which can be used as an indicator of the interaction degree. There is no interaction if the value of ΔW is 0 [46]. Three stages of co-pyrolysis could be identified from Figure 3. The negative ΔW values reflect the synergistic effect between RH and CDR at temperatures below 374 °C due to the catalytic effect of the alkali and alkaline earth metals in the rice husk (Table 2) [47,48]. In comparison, the calculated TG curves were below the experimental TG curves above 455 °C, demonstrating an inhibitive effect between RH and CDR during pyrolysis. The inhibitive effect is ascribed to the adherence of CDR pyrolysis products to the blend's surface, which, in turn, prevented further volatilization and attenuated the heat/mass transfers [49]. Moreover, it is the first time that a perfect match between the calculated and experimental TG curves between 374 and 455 °C has been reported, indicating the complete degradation of all volatiles generated from the initial pyrolysis of the blends in this temperature range.
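The interaction index defined above is simple to evaluate once the component and blend TG curves are interpolated onto a common temperature grid; the sketch below shows the bookkeeping with made-up arrays rather than the measured curves.

```python
import numpy as np

def delta_w(tg_blend: np.ndarray, tg_rh: np.ndarray, tg_cdr: np.ndarray,
            x_rh: float) -> np.ndarray:
    """Deviation between the experimental blend TG curve and the additive prediction.

    Negative values indicate synergy (more decomposition than predicted),
    positive values indicate inhibition, and values near zero indicate no interaction.
    """
    x_cdr = 1.0 - x_rh
    tg_calculated = x_rh * tg_rh + x_cdr * tg_cdr
    return tg_blend - tg_calculated

# Toy TG values (wt.% remaining) on a shared temperature grid, for illustration only:
tg_rh = np.array([99.0, 80.0, 45.0, 30.0])
tg_cdr = np.array([98.0, 85.0, 60.0, 35.0])
tg_60rh = np.array([98.0, 80.0, 48.0, 34.0])
print(delta_w(tg_60rh, tg_rh, tg_cdr, x_rh=0.6))
```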
TG-MS Analysis
The ion fragments of the main gaseous products generated during pyrolysis were monitored by TG-MS analysis (Figure 4). CH4, H2O and CO2 were major products during the pyrolysis process. CH4 has three ionic strength peaks in the ranges of 100-309 °C, 309-478 °C and 478-900 °C, respectively. The synergistic stage from 100 °C to 309 °C is associated with the cracking of side chains of aliphatic hydrocarbons. The conversion of long-chained aromatic groups and alkyl groups to methane then occurred from 309 °C to 478 °C [50]. In the inhibitive stage, CH4 is mainly derived from the conversion of methoxyl groups in lignin after 478 °C [51,52]. The CH4 peak intensity in the synergistic zone was lower than that in the inhibitory zone, which is mainly attributed to the speed of weight loss. However, the amount of CH4 released in both zones decreased after adding rice husk.

H2O evolution can also be divided into three stages. H2O peaks in the synergistic region (<317 °C) are attributed to the loss of cellular water and the hydration of the phenolic hydroxyl groups from chemically bonded water and distillation residue [53]. Then, the thermal decomposition of light-volatile components occurred from 317 to 470 °C. The release of H2O in the inhibitive region (>470 °C) resulted from the binding of some free radical groups, such as hydroxide radicals, and oxygen ions, as well as the decomposition of O-containing functional groups (especially hydroxyl groups) [54]. During the whole pyrolysis process, the amount of H2O released from blends 60RH and 80RH was always higher, as opposed to 20RH and 40RH.

The concentration of CO2 initially increased in the synergistic region (300-450 °C) because of the breaking of aromatic moieties and carboxyl groups. The synergistic effect led to the rapid release of CO2. Then, the second peak of CO2 was observed between 450 °C and 700 °C, which resulted from the degradation of carbonyl compounds and oxygenated compounds with high thermal stability [55]. When the temperature reached the inhibitive region (>700 °C), the decrease in CO2 indicated the decomposition of a small amount of CaCO3 [56]. However, the CO2 emission of the blends decreased compared with CDR. The CO2 emission was the lowest at 60RH due to the inhibitive interaction.

The H2 emission occurred continuously over the temperature range of 300-900 °C (single peak) for each sample. The degradation of hydrogen-rich compounds occurred at about 300 °C, and the condensation of (hydro)aromatic compounds or the decomposition of heterocyclic compounds were detected above 600 °C [53,57]. The hydrogen release of CDR is higher than that of RH owing to the higher hydrogen content in CDR (Table 1). For the blends, the peak values of H2 occurred in the order of 40RH, 20RH, 0RH, 60RH, 80RH and 100RH, which differed from the regular sequence of the RH-addition ratio.
Additionally, 20RH and 40RH contributed to the promotion of H2 production, while the other mixtures showed the opposite. The H2 yield was highest when the RH-addition ratio was 40 wt.% (40RH).

Aliphatic hydrocarbons CnHm (n ≥ 2) can be generated by two pathways. One is the decomposition of macromolecular components, including long-chain and branched-chain paraffins in CDR, into small molecules. The other is the degradation of hydroaromatic groups, polymethylene and aliphatic bridges (e.g., n-fatty acids). Alkynes (C2H2) and alkenes (C3H6) have strong ionic peaks within the temperature range of 200-700 °C.

Kinetic Analysis
All kinetic parameters were calculated via the Coats and Redfern method within the temperature ranges from 190 to 380 °C and from 260 to 400 °C for CDR and RH, respectively.
According to Equation (12), a series of linear-fitting curves were obtained and plotted in Figure 5. The activation energy Ea and the pre-exponential factor A0 were thereby generated for the various blend ratios (Table 4). The activation energies for RH, CDR and their blends lie in the range of 15-25 kJ/mol. The linear correlation coefficient R² of 0.95 indicates a reasonable fit of the first-order reaction model. As exhibited in Table 4, the Ea values of RH and CDR were 21.85 kJ/mol and 24.00 kJ/mol, respectively. The Ea values of the blends with different proportions were lower than those of RH or CDR, demonstrating that the presence of RH could reduce the energy required for CDR pyrolysis; a higher activation energy indicates a slower reaction. As the activation energy represents the critical energy required to initiate a reaction [58], the 60RH blend, with the lowest activation energy among the blended samples, is recommended. This is consistent with the positive synergistic effect induced by the addition of RH, so that the optimal mixing ratio of RH to CDR is 3:2.

Discussions
This work is discussed from the following three points: (1) interaction, (2) pyrolysis products and (3) kinetics data.
(a) Interaction: The deviation of weight loss (ΔW) demonstrated that there were a synergistic interaction, no interaction and an inhibitive interaction between RH and CDR at 76-374 °C, 374-455 °C and 455-1000 °C, respectively. Low temperatures favor the synergistic interaction, which is consistent with some previous studies [26,59]. It is reported that the synergistic mechanism is mainly attributed to the catalytic effects of alkali and alkaline earth metals and the transfer of hydrogen and hydroxy radicals [60]. The phenomenon of no interaction generally occurs at the initial stage of pyrolysis because, at low temperature, the sample has not yet started to degrade [61]. It is a new discovery that there is no interaction in the middle temperature range; this may reflect a temporary pause in the decomposition of the blends as the volatiles decrease. The inhibition mechanism is mainly attributed to the carbonization of biomass at high temperatures [62]. Further decomposition of CDR was hindered by a large number of carbonaceous deposits that covered and blocked the pores of the CDR residues.
(b) Pyrolysis products: All co-pyrolysis products, including CH4, H2O, CO2, H2 and light hydrocarbons, were detected via MS. The addition of rice husk reduced the main gaseous products CH4 and CO2. For CH4, RH consistently produced more methane than CDR. This result is mainly attributed to the removal of methoxyl substituents of the lignin, cellulose and hemicellulose and the conversion of the alkyl chains of the lignin [63]. CDR was dominant during the pyrolysis of the blends, which reduced methane production. CDR produces a large amount of CO2 between 400 °C and 600 °C, indicating that a large number of aliphatic groups in CDR underwent decarboxylation/decarbonylation reactions [64].
(c) Kinetics data: The activation energy of RH in non-catalytic pyrolysis was 21.85 kJ/mol, which is far lower than the results of other studies. Balasundram et al. [65] revealed that the activation energy of RH under non-catalytic conditions was 49.78 kJ/mol, lower than the 53.10 kJ/mol under catalytic conditions. The kinetic study of CDR has not been reported before this work.
López-González et al. [66] reported the activation energies of some biomass samples, such as Nannochloropsis gaditana, Scenedesmus almeriensis and Chlorella vulgaris, during pyrolysis in the range of 135-178 kJ/mol. Zhu et al. [26] reported an activation energy of 71 kJ/mol for bio-oil distillation residue. Sanchez et al. [67] reported that the activation energies of animal manure, sewage sludge and municipal solid waste were 140, 143 and 173 kJ/mol, respectively. All samples studied in this paper have low activation energies, mainly due to the synergistic interaction at low temperatures. The synergistic interaction promoted the reaction process and resulted in a significant decrease in activation energy in the corresponding conversion stages [62]. Therefore, future research on distillation residues or other industrial hazardous wastes can start from the perspective of interactions and pyrolysis products. In short, compared with traditional treatment methods such as landfilling or incineration, the co-pyrolysis of industrial hazardous waste and biomass would be a better solution.

Conclusions
In summary, an unexpected interaction existed in the co-pyrolysis of CDR and RH. There is a synergistic interaction between RH and CDR from 76 to 374 °C, which disappears in the medium temperature range. The inhibitive interaction occurs from 500 to 1000 °C. All co-pyrolysis products, including CH4, H2O, CO2, H2 and light hydrocarbons, were detected via MS. Inhibitive interactions reduced the main gaseous products (CH4 and CO2), and synergistic interactions simultaneously decreased the activation energy. The optimum blending ratio between RH and CDR, based on the lowest activation energy of 15.01 kJ/mol, is 3:2. The interaction, gas evolution and kinetic parameters will be helpful for the large-scale co-pyrolysis of cresol distillation residue and rice husk, and they also provide a promising solution for other distillation residues.
The Activity of Nodules of the Supernodulating Mutant Mtsunn Is not Limited by Photosynthesis under Optimal Growth Conditions Legumes match the nodule number to the N demand of the plant. When a mutation in the regulatory mechanism deprives the plant of that ability, an excessive number of nodules are formed. These mutants show low productivity in the fields, mainly due to the high carbon burden caused through the necessity to supply numerous nodules. The objective of this study was to clarify whether through optimal conditions for growth and CO2 assimilation a higher nodule activity of a supernodulating mutant of Medicago truncatula (M. truncatula) can be induced. Several experimental approaches reveal that under the conditions of our experiments, the nitrogen fixation of the supernodulating mutant, designated as sunn (super numeric nodules), was not limited by photosynthesis. Higher specific nitrogen fixation activity could not be induced through short- or long-term increases in CO2 assimilation around shoots. Furthermore, a whole plant P depletion induced a decline in nitrogen fixation, however this decline did not occur significantly earlier in sunn plants, nor was it more intense compared to the wild-type. However, a distinctly different pattern of nitrogen fixation during the day/night cycles of the experiment indicates that the control of N2 fixing activity of the large number of nodules is an additional problem for the productivity of supernodulating mutants. Introduction Legumes cover up to 90% of their N demand through a root/Rhizobia symbiosis [1]. However, N 2 fixation in the nodules of the roots is a costly process at a whole plant level. Estimates based on greenhouse pot experiments reveal that the carbon costs for driving nitrogenase corresponds to about ¼ of shoot dry matter at the end of a growing season [2]. It is thus understandable that legumes-by various measures-keep the nitrogen fixation at the lowest necessary level and use any alternative nitrogen form preferentially [3]. Among these mechanisms, the first and most important is to match the total nodule number to plant growth and N demand. For that purpose, legumes have evolved a molecular mechanism that involves root-shoot signaling [4]. This autoregulation of nodulation (AON) consists in short of the following steps. The Nod-factors produced by the bacteria not only induce nodulation but also the formation of xylem-mobile Clavata3/Embryo Surrounding Region-Related (CLE) peptides. These peptides travel to the shoot and bind to and activate a Leucine-rich repeat (LRR) receptor-like kinase. The receptor kinase resembles the Arabidopsis CLAVATA1 gene [5]. The activated receptor kinase induces a not fully understood cascade of events that result in an only partially characterized compound, the so called shoot-derived inhibitor (SDI) [6]. SDI itself is transported in the phloem to the roots, inhibits nodule meristem growth and thus further nodule formation. The LRR receptor-like kinase represents a crucial step in the regulatory cascade and the root-shoot-root signaling. Legumes that carry a mutation in that gene show a so called super-or hypernodulating phenotype, forming the manifold number of nodules when compared to the wild-type. The mutation is described in various legumes, including Medicago truncatula Mt sunn [7], Glycine max Gm NARK [8], Lotus japonicus Lj HAR1 [9], Pisum sativum Ps SYM29 [10]. Under field conditions, the mutants show a comparatively poor performance [11]. 
This is attributed to the fact that the formation of the excessive nodules puts a metabolic burden on the plant and in addition the plant is not able to support the numerous nodules with sufficient assimilates. Nodules of supernodulating mutants are often of small size and low specific activity [12]. While various data convincingly show that photosynthesis is a crucial factor for the poor performance of supernodulating mutants under field conditions [11], it could thus far not be shown that optimal growth conditions and elevated CO 2 assimilation can improve the activity of the excessive number of nodules of the mutant lines. The objective of this study was to clarify through diverse experimental approaches whether under optimal growth conditions the nodule activity is limited by assimilate supply. For that purpose, nodule activity was followed through nodule H 2 evolution measurements while photosynthetic activity of the leaves was altered. In addition, the diversion of nodule activity during whole plant P-depletion from a fully nourished control was compared on sunn plants with plants of the wild-type (Jemalong A17). P deficiency is known to have a strong impact on leave CO 2 assimilation [13]. The Effect of Increased Photosynthesis on the Specific Activity of the Nodules The growth conditions for M. truncatula plants appear to be optimal in our system. Unlimited access to intensely aerated water (nutrient solution) is combined with a constant nutrient supply at optimal levels [14]. Additionally, the plants receive intensive light during a period of 16 h at optimal temperature (25 °C). The plants can reach up to 12 g dry matter (DM) during 8 weeks of growth in that system, depending solely on nitrogen fixation for N-nutrition (Ricardo A. Cabeza, data not shown). Although such data can only be compared reluctantly, this growth rate exceeds those found in the literature for nutrient solution or aeroponic growth of M. truncatula (e.g., [15]). Increasing photosynthesis through elevated CO 2 concentrations [16] or higher temperature around shoots [17] remained without a short-term reaction of nodule activity (Tables 1 and 2). Neither the per plant nitrogenase activity nor the relative efficiency of nitrogen fixation (EAC) could be affected through a 6 h elevated temperature of 10 degree nor through a three-fold increase in the concentration of CO 2 around shoots for three weeks. At least the significant increased assimilate availability through elevated CO 2 around the shoots should have shown an effect in case the nodules were assimilate limited. Assimilated carbon can reach the nodules within minutes [18][19][20]. Table 1. H 2 evolution of M. truncatula nodules after short-term increase of the temperature around shoots. Data are given as means of 10 replicates. The electron allocation coefficient (EAC, relative efficiency of nitrogenase) was measured at the beginning and the end of the experiment Data were compared statistically (within a column) between the points in time of the analysis but showed no significant differences (Tukey's test, p < 0.05, n = 10; comparison of the EAC by the t-test, p < 0.05, n = 10). Legumes are known to answer to long-term elevated CO 2 with a concerted reaction. Per plant nitrogen fixation is adapted to increased demand largely by the formation of new nodules rather than increased specific activity of the existing nodules [21][22][23]. 
Consistent with these observations were the effect of long-term elevated carbon dioxide around shoots of the wild type (Table 3). In fact, the specific activity of the nodules in the wild-type plant was lower under elevated CO 2 and remained unchanged in sunn. The formation of the excessive number of nodules in the mutant occurs in competition with root formation. According to our data, the carbon burden affects in particular root growth [24]. Our data furthermore suggest that elevated CO 2 around shoots and thus increased CO 2 assimilation partially rescue that detrimental effect on root growth. The fact that roots of sunn develop comparatively poorly might be part of the reason for the limited agronomical success of supernodulating mutants. The amount of CO 2 that is necessary to be pumped into the shoot compartment to keep the chosen threshold of CO 2 concentration indicates the photosynthetic activity of the plants. Since the long-term experiment was performed in a greenhouse under natural light with shifting intensity, we cannot quantify the effect of the elevated CO 2 concentration on CO 2 -assimilation precisely. Nevertheless, besides the fact that we observed increases in root and shoot dry matter, the higher influx into the shoot compartment with elevated CO 2 was obvious, in particular under sunny conditions. Both the short-and long-term treatments that increased photosynthesis did not increase the specific activity of the sunn nodules. For legumes with a functioning AON and a regulated number of nodules, most experimental data indicate that assimilate supply to nodules is finely adapted rather than limiting for their activity. This opinion remained largely unchallenged since the comprehensive review of Vance and Heichel [22]. In addition, various studies on the carbon expenditure for driving nitrogenase activity (respired carbon by the nodules per unit reduced N) strengthened that view. For instance, most legume nodules under optimal and undisturbed conditions respire more carbon than needed when a most efficient respiration in terms of the avoidance of alternative respiration, activity of external NAD(P)H-ubiquinone oxidoreductases and uncoupling proteins is assumed and the relative efficiency of nitrogenase (electron allocation) is high [25]. The carbon efficiency of nitrogen fixation can be significantly increased (lower amount of oxidized C per reduced N) at the same plants, when the inner plant competition for assimilates increases due to pod formation (vegetative vs. reproductive growth) [26]. Consequently, under conditions of undisturbed growth legumes appear to use assimilates for driving nitrogenase in excess of what is necessary [25]. Table 3. Growth and nitrogen fixation of wild-type and sunn M. truncatula plants. Data are given per plant as means of 6 replicates. Specific N 2 -fixation was calculated from the N increment in plants (N-free nutrient solution) and the number of nodules at the end of the experimental period. * indicates a statistically significant difference when compared to plants grown at 400 ppm CO 2 around shoots (t-test, p < 0.05, n = 6). Plants were grown for 8 weeks at ambient CO 2 . Subsequently the treatment and control conditions were maintained for three weeks. The experiment was done under greenhouse conditions with natural light and the plants enclosed in a plexiglass chamber with regulated atmosphere. 
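The specific N2-fixation figure described in the Table 3 caption follows from a simple division of the whole-plant N increment (accumulated on N-free nutrient solution over the treatment period) by the final nodule number; the sketch below uses placeholder values, not the data of Table 3.

```python
def specific_n2_fixation(n_start_mg: float, n_end_mg: float, nodule_count: int) -> float:
    """Per-nodule N2 fixation: whole-plant N increment divided by the nodule number."""
    return (n_end_mg - n_start_mg) / nodule_count

# Placeholder values (mg N per plant and nodules per plant), for illustration only:
print(f"{specific_n2_fixation(40.0, 120.0, 400):.2f} mg N per nodule over the period")
```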
The Effect of Phosphorus Depletion on Nitrogen Fixation and Photosynthesis In a second experiment, a set of nitrogen fixing plants was exposed to P-free nutrient solution starting after 6 or 8 weeks of growth of wild-type and sunn plants, respectively. During the following 20 days, per plant H 2 evolution of the nodules was continuously followed. Since P is of pivotal importance for leave CO 2 assimilation and carbohydrate turnover, we hypothesized that an effect on nitrogen fixation should occur earlier in the supernodulating phenotype of the plant. The reasoning for the experiment was that it would form a supplemental treatment to the thus far performed experiments. In these experiments the effect of treatments that increased photosynthesis were studied. A P depletion would show the effect of impaired photosynthesis on nodule performance of the wild-type vs. sunn plants. The experiment was performed under optimal growth conditions other than limiting P in the treatment. In a similar approach, Hernandez et al. [27,28] showed that photosynthesis was increasingly impaired through P depletion. Figures 1 and 2 show the total amount of H 2 evolution per plant and day as an integral of the continuously measured data. While a significant difference between the treatments was measured at 13 days of P depletion in the wild-type plants, this event occurred only one day earlier in the experiment with sunn plants. Root/nodule respiration measured daily at 11:00 am confirms the diversion of the treatment in the wild-type (Figure 3). Here the differences between the treatments reached already a significant value at day 9 after removal of P from the nutrient solution. Root/nodule CO 2 release is closely related to nitrogen fixation activity [29]. Overall, the effect of whole plant P depletion on nitrogen fixation is not significantly more rapid in the sunn plants when compared to the wild type. In addition, while per plant nitrogen fixation of the sunn plants at the beginning of the experimental period was significantly lower when compared to the wild-type plants, the increase in the control plants was steeper than in sunn and the nodules of the P depletion treatment maintained constant activity per plant for almost as long as three weeks. By contrast, nodule activity of the wild-type plants in the P-depletion treatment decreased steadily during the second half of the experimental period. Dry matter formation at the end of the experiment confirms the strong effect of P-depletion (Figure 4). The reason for the higher shoot dry matter of the sunn plants compared the wild-type plants is in part a longer growth period before the beginning of the P-depletion treatment. The significantly higher shoot/root ratio of the mutant plants is a further point indicating that the excessive nodule growth and functioning is a burden for root development [24]. Taken together, the nitrogen fixation patterns during P depletion of sunn vs. wild-type plants render no indications that the nodule activity in sunn plants is assimilate-limited under the optimal growth conditions. However, transcriptomic studies revealed complex effect of P deficiency on nodule formation, development and functioning [28]. For instance, genes involved in nodule formation, symbiosome development and maintenance of nodule C-and N-fluxes are differentially expressed. Accordingly, the fact that wild-type and sunn did not differ in the response to P depletion might be the result of some other more direct impact on the nodules than assimilate supply. 
Figure 4. Plant dry matter and shoot/root ratio of (A) sunn and (B) wild-type plants at the end of the P-depletion experiment. +P stands for sufficient P supply and −P for a three-week P-depletion treatment. Data are means of six replicates ± SE. Lower case letters indicate a significant difference between treatments (t-test, p < 0.05, n = 6). n.s., not significant.

Figure 5. (A) Photosynthesis of sunn and (B) wild-type plants grown at sufficient P supply (+P) and after five to seven days of P depletion (−P). Data are given as means of six replicates ± SE. Lower case letters indicate a significant difference between treatments (t-test, p < 0.05, n = 6).

Specific CO2 assimilation of the leaves was measured at days 5 through 7 of the whole-plant P depletion process (Figure 5). Specific CO2 assimilation was higher in the wild-type plants when compared to the sunn plants. This confirms the data of Voisin et al. [24]. In the wild-type plants, the P depletion did not yet show a significant effect on specific CO2 assimilation, while in the sunn plants it was reduced by about 15%. The lower photosynthesis of the sunn control plants might be a consequence of the lower per plant nitrogen fixation in the mutant. It is a known fact that legumes, within limits, can adapt photosynthesis to the demand of the nodules [22]. The fact that the leaves of the sunn plants fix less CO2 is a further indication that a factor other than assimilate supply limited the nodule activity of sunn plants under our conditions. However, it cannot be ruled out that the lower specific photosynthesis is a pleiotropic effect of the mutation in the sunn gene [30].

Daily Patterns of Nodule H2 Evolution
Daily H2 evolution showed a clear pattern, as shown in Figure 6. Figure 6c shows the pattern in greater detail, in particular with an indication of the dark periods. The light was switched off at 10 pm. At that point in time, a steep decline in activity occurred, which is largely temperature related. Nodule activity was subsequently maintained or even increased during the 8 h dark period. This was also the case in wild-type and sunn plants during the P-depletion experiment. When the light was switched on at 6 am, a steep, again temperature-related increase occurred, followed by slightly increasing activity in the morning and a first decline in H2 evolution around noon. The subsequent decrease continued until about 5 pm. During the late afternoon, until 10 pm, the activity recovered. This pattern is typical for older plants. In young plants, a decline in nodule activity in the afternoon is almost undetectable, but it increases with the age of the plants. The overall daily rhythm of H2 evolution was not influenced by the P-depletion treatment. However, there are clear differences between wild-type and sunn plants. While the overall patterns resemble each other (Figure 6a), in the sunn plants the decline during the light period begins much earlier, about two to three hours after the light was switched on. A second clear difference is depicted in Figure 6b. Here, nodule activity at 3 am is set to 100% and the following time course is shown relative to that value. The figure shows that the decline in the wild-type plants was about one-third of the peak activity around noon, while in the sunn plants the decline was much stronger, accounting for about two-thirds of the peak activity.
Bergersen [31] pointed out that biological processes, rather than being driven or down-regulated continuously, often oscillate around depleting pools. One may assume that nodule activity during the night and early morning satisfies the N demand of the plant, and that the subsequent down-regulation of activity is the result of a shoot factor (N-feedback [32,33]) that repeatedly slows down activity until a new demand emerges or the regulatory compound is used up. Such a mechanism would probably be confronted with higher amplitudes, and the fine-tuning to N demand would be much more difficult, when the machinery that produces available nitrogen (i.e., the nodules) is much larger and its size and activity are not matched to N demand. Consequently, the poor performance of supernodulating mutants might also be a result of the difficulty the plants have in fine-tuning the activity of a potentially strongly excessive machinery (the nodules) to their demand, and perhaps also of a fluctuating availability of assimilates. The aberrant daily pattern of H2 evolution in sunn plants is an indication of this.

Design of the Experiments

To determine whether nitrogen fixation of supernodulating M. truncatula plants is assimilate-limited under optimal growth conditions, we performed three sets of experiments. In a first experiment, short-term reactions of nodule activity to increased leaf CO2 assimilation were monitored. In a second experiment, plants were exposed to elevated CO2 concentrations around the shoots for three weeks under greenhouse conditions with natural light (long-term experiment). Finally, in a third experiment, nitrogen fixation was followed during a whole-plant P-depletion process. In this experiment, the specific CO2 assimilation of the leaves was measured 5 to 7 days after the beginning of the treatment.

Plant Growth

Seeds of Medicago truncatula (Gaertn.) cv. Jemalong A17 or sunn were submerged in H2SO4 (96%) for 5 min for chemical scarification, sterilized with 5% (v/v) sodium hypochlorite for 5 min and rinsed several times with deionized water. The seeds were subsequently kept at 4 °C for 12 h in darkness, submerged in tap water. The submerged seeds were then shaken gently for 2 to 4 days at 25 °C under continuous light. When the seedlings had developed a primary root of about 20 mm, 20 plantlets each were transferred to small growth boxes (170 mm × 125 mm × 50 mm) filled with aerated nutrient solution. The seedlings were fixed through small x-shaped cuts in tape on the upper side of the growth boxes. The plants were grown for two weeks in these boxes in a growth chamber with a 16/8 h light/dark cycle at 25/20 °C, respectively. Light intensity at plant height was approximately 500 µmol·m−2·s−1. Immediately after transfer to the growth boxes, the seedlings were inoculated with 1 mL/box of a stationary Sinorhizobium meliloti (Sm) (102F51) YEM culture with an approximate cell density of 10^9 cells·mL−1. The Sm strain induced good nodulation, with the first visible nodules appearing after about 7 to 10 days. Wild-type plants developed only 2 to 5 visible nodules during the two-week growth in the growth boxes, while the sunn plants developed many. Sm 102F51 does not contain an uptake hydrogenase [34]. After two weeks, the plants were transferred to glass tubes, which allowed the separate measurement of root/nodule H2 and CO2 evolution. The system is described in Fischinger and Schulze [35].
Phosphorus (P) was added daily as KH2PO4 to a concentration of 5 µM P. At the beginning of the P-depletion treatment, the daily P application was stopped. During the first week after transfer to the nutrient solution, the solution was once adjusted to a 0.5 mM NH4+ concentration through the addition of (NH4)2SO4. Low concentrations of ammonium support nodule formation in M. truncatula [36]. The nutrient solution was changed every week. During this procedure, the pump in the container was switched off and the backflow from the glass tubes to the container was blocked. In this way, the ongoing measurements in the root/nodule compartment were not affected. After the first week of growth in the glass tubes, the plants depended solely on N2 fixation for their N nutrition.

Root/Nodule Gas Exchange Measurement

The system for measuring nodule H2 and CO2 evolution, including the determination of apparent nitrogenase activity (ANA), total nitrogenase activity (TNA), the calculation of the electron allocation coefficient (EAC) and of N2 fixation, is described in Fischinger and Schulze [37]. For continuous, long-term measurement of H2 evolution, we extended the set-up with an efficient three-step air-drying system for the airstream flowing out of the root/nodule compartment. Root/nodule CO2 evolution was measured daily at 11 am in the gas stream that was continuously analyzed for H2 concentration. The CO2 measurement was performed with an S151 CO2 analyzer (Qubit, Kingston, ON, Canada).

Elevated CO2 and Temperature around Shoots

For managing the atmosphere around the shoots, sets of 12 plant shoots were enclosed in plexiglass containers. In these containers, temperature (10-35 °C), humidity (30%-60%) and CO2 concentration (0.005% to 100%) could be continuously regulated and maintained over weeks. The procedure is described in Schulze and Merbach [38]. In short, the air in the container was turned over by four fans pressing it through coolers that were supplied with water of adjustable temperature. In addition, the temperature was measured at plant height. A pump took a 200 mL airstream from around the shoots and pumped it through a CO2 analyzer. An automatic system switched on two heaters inside the container, or a low flow of CO2 into the container, whenever a threshold was undershot. The CO2 was supplied behind the fans so that it mixed quickly. Both systems were switched off when the threshold was reached again. In this way, the temperature could be increased from 20 to 30 °C within 5 min and the CO2 concentration from 400 to 1200 ppm within 12 min. The thresholds could be kept within 1% and 4% over- and undershooting for temperature and CO2 concentration, respectively. The system was located in a climate chamber for the short-term experiments and in a greenhouse for the long-term experiment with elevated CO2.

Measurement of Specific Leaf CO2 Assimilation

For the measurement of specific CO2 assimilation, two leaflets were enclosed in a small airtight compartment of the LI-6400XT portable photosynthesis measurement system (LI-COR, Lincoln, NE, USA). The measurements were performed on fully expanded leaves from 11 am through 3 pm at days 5 through 7 of the whole-plant P-depletion experiments. The surface area of the leaflets was determined by scanning.

Conclusions

Several results show that nitrogen fixation in the Mt sunn genotype is not limited by assimilate supply under optimal growth conditions.
This is supported by the fact that neither short- nor long-term increases in assimilate availability affected nodule-specific activity, either in the wild-type plants or in Mt sunn. In addition, whole-plant P depletion did not have earlier or more intense effects on nitrogen fixation in the Mt sunn genotype than in the wild type. Moreover, specific photosynthesis was higher in the wild-type plants, which is not consistent with an assimilate shortage limiting Mt sunn nitrogen fixation. A clear difference in the daily pattern of nitrogen fixation in Mt sunn, when compared to the wild type, illustrates that fine-tuning the excessive nitrogen fixation capacity of the mutant is difficult for the plant. In conclusion, the poor performance of the mutant under field conditions might, in addition to the high assimilate burden of the supernodulating phenotype, be explained by difficulties in regulating the activity of the large number of nodules at the whole-plant level.
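For reference to the gas-exchange quantities named in the Methods (ANA, TNA, EAC and N2 fixation), a minimal sketch of the relations conventionally used for H2-based measurements follows. The calculation actually used is the one in Fischinger and Schulze [37]; the formulas below are common conventions stated here only as assumptions.

```python
# Hedged sketch of the conventional H2-based nitrogenase quantities: the
# exact procedure is in [37]; EAC = 1 - ANA/TNA and the (TNA - ANA)/3
# estimate of N2 fixation are assumptions, not taken from the text.

def electron_allocation_coefficient(ana, tna):
    """ANA: H2 evolution under N2:O2; TNA: H2 evolution under Ar:O2."""
    return 1.0 - ana / tna

def n2_fixation_estimate(ana, tna):
    """Assumes three electron pairs per N2 reduced by nitrogenase."""
    return (tna - ana) / 3.0

ana, tna = 4.0, 10.0     # hypothetical fluxes, umol H2 per plant per hour
print(electron_allocation_coefficient(ana, tna))   # 0.6
print(n2_fixation_estimate(ana, tna))              # 2.0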
2016-03-14T22:51:50.573Z
2014-04-01T00:00:00.000
{ "year": 2014, "sha1": "5e045255aa23e156a31b39bff63533c6ede9d8d6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/15/4/6031/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e38d92bfc0a798df2b300ec6f24da0c8ab170b95", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
146120587
pes2o/s2orc
v3-fos-license
Generalized Intransitive Dice II: Partition Constructions A generalized $N$-sided die is a random variable $D$ on a sample space of $N$ equally likely outcomes taking values in the set of positive integers. We say of independent $N$ sided dice $D_i, D_j$ that $D_i$ beats $D_j$, written $D_i \to D_j$, if $Prob(D_i>D_j)>\frac{1}{2} $. A collection of dice $\{ D_i : i = 1, \dots, n \}$ models a tournament on the set $[n] = \{ 1, 2, \dots, n \}$, i.e. a complete digraph with $n$ vertices, when $D_i \to D_j$ if and only if $i \to j$ in the tournament. By using $n$-fold partitions of the set $[Nn] $ with each set of size $N$ we can model an arbitrary tournament on $[n]$. A bound on the required size of $N$ is obtained by examples with $N = 3^{n-2}$. Of two dice D 1 and D 2 we say that D 1 beats D 2 (written D 1 → D 2 ) if the probability that D 1 > D 2 is greater than 1 2 where D 1 and D 2 are the independent outcomes of the rolls of the dice. Of course, with standard dice this does not happen. If D 1 and D 2 are standard, then P (D 1 > D 2 ) = 15/36. If we use the labels A 1 = {3}, ( 1.2) and repeat each label twice to get 6-sided dice, {D 1 , D 2 , D 3 }, then P (D i > D i+1 ) = 5 9 for i = 1, 2, 3 (counting mod 3). A digraph R on a set I of size (= cardinality) |I| is a set of ordered pairs (i, j) of distinct elements i, j ∈ I such that at most one of the pairs (i, j), (j, i) lies in R. The digraph is called a tournament when exactly one of the pairs (i, j), (j, i) lies in R. The name arises because R models the outcomes of a round-robin tournament where every pair of players competes once with i beating j, written i → j, if (i, j) ∈ R. Alternatively, we can think of I as a list of strategies or actions so that i → j when i wins against j. The output set R(i) = {j : i → j} consists of the elements of I which are beaten by i. The 3-cycle models the game Rock-Paper-Scissors. In general, we will call a tournament R on I a game when its size |I| is odd and for each i, |R(i)| = 1 2 (|I| − 1). That is, each strategy beats exactly half of its competing strategies and is beaten by the other half. Clearly, the 3-cycle is (up to isomorphism) the only game of size 3. With size 5 there is also a game which is unique up to isomorphism. On the television show The Big Bang Theory this game was described as Rock-Paper-Scissors-Lizard-Spock. It can also be modeled using We would like to similarly mimic an arbitrary tournament. However, as the size of the tournament grows we will require larger dice, dice with more than 6 "faces". On a sample space of N equally likely outcomes, which we will call the faces, an N sided die is a random variable taking positive integer values. Again a pair of competing N-sided dice are assumed independent. See, for example, [3]. An N sided die is called proper when the values are all in [N] = {1, . . . , N} and the sum of the values is (N +1)N 2 , or, equivalently, the expected value is N +1 2 which is the same as that of the standard N sided die which takes on each value of [N] once. In [2] it is shown in various ways that an arbitrary tournament can be modeled by proper N sided dice. A convenient way of constructing examples is by the use of partitions. A partition A of a set I is a collection of disjoint subsets with union I. We call it a regular partition when the cardinalities of the elements of A are all the same. From now on we will assume that our partitions are regular so that A = {A 1 , . . . 
A n } is an n partition of Equivalently, for each i, the expected value of a random element of A i is N n+1 2 . For an n partition A on [Nn] we define the digraph it is more likely that a randomly chosen element of A i is greater than a randomly chosen element of A j rather than the reverse. If N is odd, then R[A] is necessarily a tournament on [n]. That is, We can use the partition to label the faces of n different N sided dice, with distinct values selected from [Nn]. If D i is the random variable associated with the die labeled with values from A i , then A i → A j exactly when D i → D j in the previous sense. If A is a proper n partition of [Nn] then repeating each value n times, we obtain n proper Example ( However the proof of this theorem and the related results in [2] are all rather non-constructive. If we let N n be the smallest positive integer N such that every tournament on [n] can be modeled by an n partition on [Nn], then the results of [2] do not provide a bound on the size N n . In Section 3 we provide an explicit construction which will yield such a bound. In Theorem 3.14 below we will show the following The bound is probably very crude. Furthermore, the examples constructed are not necessarily proper. On the other hand, we will show that for arbitrary positive n, there is a game of size 2n + 1 which can be modeled by a proper 2n + 1 partition of [3(2n + 1)]. Notice that N n does tend to infinity with n. To see this, recall that the number of n partitions of [Nn] is P n = (Nn)!/(N!) n . Since On the other hand, the number of tournaments of size n is T n = 2 n(n−1)/2 and so ln(T n ) = ln(2) 2 n(n − 1). Because n 2 grows faster than n ln(n) it follows that N n cannot remain bounded as n tends to infinity. Tournaments and Games All the sets we consider are assumed to be finite. A digraph on a nonempty set I is a subset R ⊂ I × I such that We use |I| to denote the cardinality of a set I. Notice that if I is the singleton {u}, then R = ∅ is the only digraph on I. We call this the trivial digraph and denote it ∅[u] Given a map ρ : I −→ J we letρ denote the product map ρ × ρ : Definition 2.1. Let R and S be digraphs on I and J, respectively. A morphism ρ : R −→ S is a map ρ : Clearly, if ρ is a bijective morphism then ρ −1 is a morphism and so ρ is an isomorphism. Two digraphs are isomorphic when each can be obtained from the other by relabeling the vertices. An automorphism of R is an isomorphism with R = S. If R is a digraph on I and π is a permutation of I, then we let πR be the digraph on I given by Clearly, if R and S are digraphs on I, then ρ : R −→ S is an isomorphism if and only if the map ρ on I is bijective, i.e. a permutation, and S = ρR. An R path [i 0 , . . . , i n ] from i 0 to i n (or simply a path when R is understood) is a sequence of elements of I with n ≥ 1 such that (i k , i k+1 ) ∈ R for k = 0, . . . , n−1. The length of the path is n. It is a closed path when i n = i 0 . An n cycle, denoted i 1 , . . . , i n , is a closed path [i n , i 1 , . . . , i n ] such that the vertices i 1 , . . . , i n are distinct. A path spans I when every i ∈ I occurs on the path. A spanning cycle is called a Hamiltonian cycle for R. A digraph R is called strongly connected, or just strong, if for every pair i, j of distinct elements of I there is a path from i to j. It follows that if |I| > 1, then for any i ∈ I there is a path beginning and ending at i. We may eliminate any repeated vertices j k = j ℓ with 0 < k < ℓ by removing the portion of the path j k , j k+1 , . . . 
, j ℓ−1 and renumbering. This shows that if R is strong and nontrivial, there is a cycle through each vertex. The trivial digraph on a singleton is strong vacuously. A subset J ⊂ I is invariant if i ∈ J implies that the output set R(i) is contained in J, or, equivalently, if any path which begins in J remains in J. It is clear that R is strong if and only if it does not contain any proper invariant subset. A digraph R is called a tournament when R ∪ R −1 = (I × I) \ ∆. Thus, R is a tournament on I when for each pair of distinct elements i, j ∈ I either (i, j) or (j, i) lies in R but not both. Clearly, if R is a tournament on I and J ⊂ I, then the restriction R|J is a tournament on J. Harary and Moser provide a nice exposition of tournaments in [5]. Proposition 2.2. If R is a strong tournament on I with |I| = p > 1 and i ∈ I, then for every ℓ with 3 ≤ ℓ ≤ p there exists a ℓ-cycle in R passing through i. In particular, R is a strong, nontrivial tournament if and only if it admits a Hamiltonian cycle. Proof. See Moon, For S ′ and S tournaments on J ′ , J, respectively, with J ′ and J disjoint, the domination product is the tournament S ′ ✄S on J ′ ∪J defined by: Conversely, if J is a proper invariant subset for a tournament R on I and J ′ = I \ J, then Let R be a nontrivial tournament on I, v ∈ I and J = I \ {v}. The vertex v is called a maximum when it satisfies the following equivalent conditions • The tournament R is not strong if and only if it is a domination product, i.e. R = S ′ ✄ S for some tournaments S and S ′ . (b) If v ∈ I with J = I \ {v} and R|J is strong, then exactly one of the following is true. ( Proof. (a) This follows from (2.3) and the remarks before it. For a positive integer k, a digraph R is called k regular when both the input set and the output set of of every vertex have cardinality k. That is, |R(i)| = |R −1 (i)| = k for all i ∈ I. A digraph which is k regular for some k is called regular. If a tournament on I is k regular, then |I| = 2k + 1. We will call a regular tournament a game because such a tournament generalizes the Rock-Paper-Scissors game. Such games are described in [1]. In particular, it is demonstrated there that up to isomorphism there is a unique game of size 5. Of special interest are the group games described in Section 3 of [1]. Let Z 2n+1 denote the additive group of integers mod 2n + 1 with congruence classes labeled by 0, 1, In particular, |A| = n. The set Z 2n+1 \ {0} is decomposed by the n pairs {{a, −a} : a ∈ Z 2n+1 \ {0}} and a game subset is obtained by choosing one element from each pair. In particular, there are 2 n game subsets. For example [n] = {1, 2, . . . , n} is a game subset. For any game subset A define the associated game R[A] on Z 2n+1 by It follows that for every positive integer n there is a game of size 2n + 1. Another way of seeing this is by induction using the following construction. Let R be a tournament on I. With J ⊂ I, let J ′ = I \ J. For u, v distinct vertices not in I, define the tournament R + , called the extension of R via J and u → v, by If R is a game with |I| = 2n − 1 and |J| = n, then the extension R + is a game of size 2n + 1. We conclude the section with the definition of the lexicographic product following the definition in [7] and [8] for graphs and in [4] for tournaments, see also [1] Section 6. For R, S digraphs on I, J, respectively, the lexicographic product R⋉ S is a digraph on It is easy to check that R ⋉ S is a tournament (or a game) if both R and S are tournaments (resp. both are games). 
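The displayed definition of the group game R[A] is lost in this extraction. A minimal sketch follows, assuming the standard convention p → q if and only if (q − p) mod (2n + 1) lies in the game subset A, which is consistent with the cyclic rule A_p → A_q ⇔ q − p ∈ [n] quoted later for the partition version; the check confirms that the construction yields an n-regular tournament, i.e. a game.

```python
# Hedged sketch of the group game R[A] on Z_{2n+1}, under the assumed
# convention p -> q iff (q - p) mod (2n+1) is in the game subset A.
from itertools import combinations

def group_game(n, game_subset):
    m = 2 * n + 1
    assert len(game_subset) == n
    # exactly one of a, -a must lie in the game subset for every a != 0
    assert all((a in game_subset) != ((m - a) % m in game_subset)
               for a in range(1, m))
    return {(p, q) for p in range(m) for q in range(m)
            if p != q and (q - p) % m in game_subset}

def is_game(R, m):
    """A game is a tournament in which every vertex beats exactly (m-1)/2 others."""
    out = {v: 0 for v in range(m)}
    for (p, q) in R:
        out[p] += 1
    tournament = all(((p, q) in R) != ((q, p) in R)
                     for p, q in combinations(range(m), 2))
    return tournament and all(c == (m - 1) // 2 for c in out.values())

n = 3
R = group_game(n, game_subset=set(range(1, n + 1)))   # game subset [n] = {1,2,3}
print(is_game(R, 2 * n + 1))                          # True: a 3-regular game on Z_7
```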
Partition Constructions Recall that we defined for an n partition A of [Nn] the digraph is a tournament, e.g. if N is odd, then by permuting [n] or, equivalently, by relabelling the elements of A, we can obtain every tournament isomorphic to R[A] as the tournament of an n partition of [Nn]. For two disjoint sets A, B ⊂ N we define Clearly, where a and b are chosen randomly from A and B, respectively. In If |A| and |B| are odd, then by (3.3) Q(A, B) is odd and so cannot equal zero. On the other hand, if We call Case (i) a pair inclusion which we will write as B ֒→ A and Case (ii) a pair overlap with A higher which we will write as A ։ B. We write B < A when b < a for all (a, b) ∈ A × B. In that case, It is easy to check that, using (3.10) for the latter, that for i, j ∈ [n] So we see that It follows that if A is proper, then A * and A (M ) are proper. Proof. This is the partition version of the tournament construction given in (2.2). To be precise: The name is adopted because if a 1 , a 2 ∈ [M] then |a 1 − a 2 | < M and so for b 1 , b 2 ∈ N it follows that (3.17) In particular, we see that |B ⋉ A| = |B| · |A|. It is easy to check that Notice that the definition requires that we specify M. Proof. From this computation we immediately obtain the following theorem. Note that the lexicographic product of proper partitions is proper by (3.18). With M = Nn define Proof. Because the sets B 1 and B 2 are combined with different sets in A, it is clear that C 1 , . . . , C n+1 are pairwise disjoint. Equation ( (3.33) Since the ordering of the elements is preserved by the renumbering it follows that We apply the construction of (3.27) to the partition A yielding the disjoint sets {C 1 , . . . , C n+1 }. From Lemma 3.12 we see that i → j in R if and only if Q(C i , C j ) > 0. From this follows our main result and, in particular, Theorem 1.2. Theorem 3.14. For n ≥ 2, if R is a tournament on [n] and N is any odd integer with N ≥ 3 n−2 or an even integer with N ≥ 2 · 3 n−2 , then there exists an n partition A of [Nn] with R[A] = R. Proof. We use induction on n to prove that R can be modeled by A an n partition of [3 n−2 · n]. For the inductive step, we apply Theorem 3.13. By using A (2) we obtain an n partition on [2 · 3 n−2 n] which models R. If N = 3 n−2 + 2m or N = 2 · 3 n−2 + 2m, then the result follows by induction on m, using Theorem 3.4 for the inductive step. On the other hand, the exponential growth in Corollary 3.14 provides what is probably only a crude upper bound. For example, it shows that tournaments on [5] can be modeled using 5 partitions on [27 · 5] = [135]. The example (1.3) models the Rock-Paper-Scissors-Lizard-Spock tournament as a 5 partition of [30] and we will see in the next section that we can do even better. We can mimic for partitions the extension construction of (2.5) by using the following list. Proof. Use C = {C 1 , . . . , C n ,Ū,V } from Lemma 3.15 and then pack to obtain D. Examples We saw at the end of Section 1 that for any N there exist, for n sufficiently large, tournaments R on [n] which cannot be modeled using an n partition of [Nn]. Proof. Assume that every tournament of size k < n can be modeled using a k partition of [Nk]. If R is a tournament on [n] which is not strong, then by Proposition 2.3, it can be written as the domination product of two tournaments of smaller size. So by Theorem 3.6 it can be modeled as the domination production of some k 1 partition of [Nk 1 ] and a k 2 partition of [Nk 2 ] with k 1 , k 2 positive integers such that k 1 + k 2 = n. 
Hence, R can be modeled by an n partition of [Nn]. In this section we will consider examples of tournaments on [n] which can be modeled by using n partitions of [3n], i.e. with N = 3. For π a permutation of [n] and A = {A 1 , . . . , A n } an n partition of [Nn], define A π = {A π 1 , . . . , A π n } by (4.1) It follows that if R[A] is isomorphic to a tournament R on [n], then R[A π ] = R for some permutation π of [n]. On the other hand, if π is a permutation of [Nn] we can define π(A) = {π(A 1 ), . . . , π(A n )} with π(A p ) = {π(j) : j ∈ A p }, the image of A p by the map π. If we start with A an arbitrary n partition of [Nn], then by varying the permutation π we can obtain any n partition of [Nn]. For k = 1, . . . M − 1 call the transposition (k, k + 1) on [M] a simple transposition. For j ∈ [Nn] with j = k, k + 1 it is clear that k > j if and only if k + 1 > j. So it follows that Thus, either R[π(A)] = R[A] or the only possible changes are (i) the reversal of the edge from A p 2 to A p 1 , which occurs if Q(A p 2 , A p 1 ) = 1, (ii) the elimination of the edge from A p 2 to A p 1 , which occurs if Proof. As before label We first prove that by using a sequence of simple switches we can obtain {b i 3 : i ∈ [n]} = {2n + j : j ∈ [n]}. If we let m 3 = min{b i 3 : i ∈ [n]} then this is equivalent to m 3 = 2n + 1. Observe that always m 3 ≤ 2n + 1 since {b i 3 : i ∈ [n]} consists of n distinct integers with maximum 3n. We use induction assuming that m 3 = 2n + 1 − r. If r = 0 then there is nothing to prove. Assume that r ≥ 1. Among the n + r numbers in the interval [2n + 1 − r, 3n] some are level two elements and perhaps some are even from level one. Let k + 1 be the smallest such so that each of the numbers in the interval [2n + 1 − r, k] is from level three. In particular, k = b j with k + 1 = b i s for s = 2 or 1. If we do a simple switch of k with k + 1, to obtainB i ,B j then k =b i s and k + 1 =b j 3 . Furthermore, (4.4) implies that Q(B i ,B j ) = Q(B i , B j ) − 2 which is still positive by (4. Assume that r ≥ 1. Among the n + r numbers in the interval [n + 1−r, 2n] some are from level one. Let k +1 be the smallest such so that each of the numbers in [n + 1 − r, k] is from level two. In particular, k = b j 2 with k + 1 = b i 1 . As before we do a sequence of switches to get B withb i 1 = n + 1 − r and with m 2 = n + 1 − r + 1 = n + 1 − (r − 1). Applying the induction hypothesis we arrive at A. The main result of this section is the observation that for arbitrary n there is a game of size 2n + 1 which can be modeled using a 2n + 1 partition of [3(2n + 1)], that is, with N = 3. A p → A q ⇐⇒ q − p ∈ [n] mod 2n + 1. Proof. Notice that for convenience of the algebraic description we are labelling the elements of the partition by Z 2n+1 = {0, 1 Note first that σ(A p ) = 9n+6 for all p and so the partition is proper. In particular, with n = 2 we obtain the unique game of size 5 via Using these and suitable simple switches one can show that every tournament on [n] with n ≤ 5 can be modeled using an n partition of [3n]. By Proposition 4.1 one need only consider strong tournaments with 2 < n ≤ 5. There are three isomorphism classes of games of size seven, labeled Type I, II and III in Section 10 of [1]. All three can be obtained using 7 partitions of [21]. The Type I game is given by Theorem 4.4 with n = 3: Each with a Q value of 1. This is group game on Z 7 with game subset {1, 2, 4}. By doing a 1, 2 simple switch we reverse the arrow A 5 → A 6 . 
By doing a 10, 11 simple switch we reverse the arrow A 3 → A 5 , and by doing an 18, 19 simple switch we reverse the arrow A 6 → A 3 . Together, these three simple switches reverse the 3-cycle A 3 , A 5 , A 6 . This yields a Type III game, which is not a group game. The result is still a stratified, proper partition.
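Since the displayed definitions of Q(A, B) and of the switching relation are garbled in this extraction, the following sketch assumes Q(A, B) = #{(a, b) : a > b} − #{(a, b) : a < b}, which matches the parity remark (Q is odd when |A| and |B| are odd) and the "Q decreases by 2" step used above. The example partition is an arbitrary proper 3-partition of [9], not one of the partitions from the text.

```python
# Hedged sketch of the partition machinery: Q(A, B) as a pair-count
# difference (assumption), the tournament R[A] it induces, and the effect
# of a simple transposition switch (k, k+1) on the partition.

def Q(A, B):
    return sum(1 if a > b else -1 for a in A for b in B)

def tournament_of_partition(parts):
    """R[A]: i -> j iff Q(A_i, A_j) > 0 (edges with Q = 0 are omitted)."""
    n = len(parts)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and Q(parts[i], parts[j]) > 0}

def simple_switch(parts, k):
    """Apply the transposition (k, k+1) of [Nn] to every set of the partition."""
    swap = {k: k + 1, k + 1: k}
    return [frozenset(swap.get(x, x) for x in A) for A in parts]

parts = [frozenset({1, 5, 9}), frozenset({3, 4, 8}), frozenset({2, 6, 7})]
print(tournament_of_partition(parts))            # a 3-cycle: {(0,1), (1,2), (2,0)}
switched = simple_switch(parts, 4)               # swap the values 4 and 5
print(tournament_of_partition(switched))         # only the affected edge changes
```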
2019-05-05T21:35:20.000Z
2019-05-05T00:00:00.000
{ "year": 2019, "sha1": "58b6cadc7868ebf348faf10ba2f25f83b75e8eea", "oa_license": null, "oa_url": "https://www.aimsciences.org/article/exportPdf?id=dcaf76f6-3b28-4a24-99c7-5013ab742cf4", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "58b6cadc7868ebf348faf10ba2f25f83b75e8eea", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
254807004
pes2o/s2orc
v3-fos-license
Downlink Beamforming for Dynamic Metasurface Antennas Dynamic metasurface antennas (DMAs) have great potential to be used in the radiator/receptor elements of future wireless transmitters and receivers, replacing conventional metallic antennas. This can be attributed to their unique properties, such as the ability to be reconfigured in real-time and to reduce the radio frequency chains, resulting in low implementation cost. However, the Lorentzian constraint associated with the DMA elements poses a challenge to real-time configuration and limits the application of the DMA. In this study, we propose a DMA-based wireless network, wherein a DMA-equipped base station (BS) communicates with single and multiple users. For the single-user scenario, we develop an optimal algorithm to maximize the signal-to-noise ratio of the user, which provides the weight of each DMA element in closed form. Furthermore, for multiple users, we formulate the weighted sum rate (WSR) problem and employ techniques from the single-user case to develop an efficient alternating optimization algorithm, which optimizes both the transmit precoders and DMA weights, to enhance the WSR of the system under the transmit power constraint of the BS. The numerical results demonstrate the effectiveness of the proposed algorithms in achieving better performance than that of the benchmark schemes. I. INTRODUCTION T HE world has been witnessing fast-growing throughput demands. A significant amount of the data traffic is channeled through wireless networks owing to their flexibility and low cost compared with wired networks. To meet these demands, wireless networks have constantly been evolving, with each change improving upon the preceding technology. For example, massive multiple-input multiple-output (MIMO), which employs many antennas, is presently the best technology for enabling the delivery of high data traffic and increased spectral efficiency (SE) through the wireless channel [1], [2], [3], [4]. Furthermore, the combination of massive MIMO and high frequencies such as millimeter wave (mmWave) and terahertz (THz) is considered a breakthrough technology for traffic-intensive wireless networks. However, each constituent piece of this combination has its limitations. The use of large antenna arrays in massive MIMO can inevitably increase the complexity, power consumption, and hardware cost. Additionally, because of large antenna apertures, massive MIMO can potentially extend the near-field region [5], [6], [7], [8], [9], necessitating entirely different signal processing techniques. This is the case because of the fundamental change in the electromagnetic wave properties from those of the approximated planar wavefronts in the far-field to those of the actual spherical wavefronts in the nearfield. For instance, the near-field spherical waves are able to focus the signal energy in terms of both angle and distance via the envisioned approach called beam focusing, whereas the far-field techniques can only direct a signal toward a given angle via a technique commonly known as beam steering. Moreover, channel estimation [10] and beam split problem in wideband systems [7] still remain pressing challenges in near-field wireless communication. Similarly, there are several issues associated with the use of high frequencies that hinder large-bandwidth communication. 
Among them, the high path loss is important because it significantly reduces the communication range and presents difficulties for heuristic signal processing techniques [11], [12], [13]. Several efforts have been made to address some of these challenges. The power consumption issue in massive MIMO, for example, has been intensively addressed using hybrid architectures, which reduce the number of radio frequency (RF) chains by adopting relatively less expensive and low-power phase shifters [14], [15], [16]. Thanks to ultra-dense networks [17] and cell-free massive MIMO [18], [19], [20], [21] technologies, which share the same philosophy of creating a user-centric architecture that significantly reduces signal propagation distance, high-frequency communication is still a possible vision. In this study, we extend the prior efforts of realizing large antenna arrays with low power consumption based on an emerging technology, known as dynamic metasurface antennas (DMAs). The DMA comprises a multitude of artificial metasurface elements capable of realtime reconfiguration. The reconfiguration, which enables the programmable control of the transmitted and received signal, is achieved by incorporating solid-state switchable components into each element. Notably, the adjustable properties of metasurface materials are being exploited not only by DMA but by other technologies as well. In the literature, it has also been suggested that other technologies, such as intelligent reflecting surfaces (IRS) [22], [23], [24], [25] and intelligent transmitting surfaces (ITS) [26], also make use of customizable metamaterials in somewhat different ways. For instance, in wireless communication, IRS is commonly placed near the receiver or transmitter for controlled reflection of signals to the desired destination. By contrast, DMA, which is fundamentally an array of antennas, must be attached to either the transmitter or receiver to facilitate the necessary signal interactions for transmission and reception. In addition, as the IRS only reflects signals rather than retransmitting them, it is made up of semi-passive elements, which have extremely low power consumption. The same objective is achieved in DMA via natural reduction of radio frequency (RF) chains, which reduces the hardware complexity and helps realize large-scale antenna arrays as in massive MIMO. Furthermore, for the same array aperture, DMA can pack a larger number of elements than the conventional massive MIMO, owing to the small size of the elements. Moreover, in DMA, signal processing techniques such as analog combining, beamforming, and antenna selection are implemented automatically without additional hardware as in hybrid architectures. Since its inception, the DMA technology has mostly been explored for use in satellite communications, radar systems, and microwave imaging [27]. The performance of DMAs in such applications has been reported to be good in terms of simplicity, power consumption, and flexibility. However, its use in cellular communication, particularly with massive MIMO, remains limited. A few studies have examined the use of DMAs in cellular communication and have served as a reference [6], [27], [28], [29], [30]. For example, Shlezinger et al. [27] studied the multi-user MIMO system, in which a base station (BS) uses DMA to receive signal streams sent from different user terminals. They proposed two alternating optimization (AO) algorithms for frequency-flat and frequency-selective channels of DMA. 
Their results revealed the potential advantages of using DMAs over standard antennas for realizing large-scale and low-power massive MIMO. This work was further extended in [28], wherein a similar DMA-equipped BS was bit-constrained and used to operate multi-user MIMO orthogonal frequency division multiplexing (OFDM) communications. The authors formulated a model of coarsely quantized DMA outputs and developed an algorithm for OFDM symbol detection. These two studies focused on uplink communication. However, the researchers in [6] and [30] employed the opposite approach, that is, downlink communication. As stated earlier, large-scale antenna arrays have the potential to extend the near-field regions; these studies specifically investigated the use of DMA for beam focusing in near-field communication regions. They formulated a mathematical model for the near-field wireless channel, which was later used in the considered antenna architectures, which are as follows: fully digital antennas, hybrid antennas, and DMA. They finally adopted the AO algorithm assisted by other techniques to solve the resulting optimization problems for different architectures. Most of the previous studies relaxed the Lorentzian constraint on the DMA elements to make the optimization problems tractable. Consequently, the analysis of the problem was simplified, which led to the development of feasible algo-rithms; however, this simplification resulted in performance degradation. We learned that such problems can possibly be solved with manageable complexity without employing these performance-degrading relaxations. Thus, the focus of this study is to investigate the use of DMA in downlink multipleinput single-output (MISO) systems for both single-user and multiple-user environments. Unlike prior studies that focused on the near-field communication region, our study focuses on the general far-field communication scenario. The main contributions of this study can be summarized as follows: • We propose a novel technique of splitting the problematic Lorentzian constraint into two parts, which significantly simplifies the analysis of various system setups. In addition, this technique also avoids the adoption of relaxation procedures when formulating the corresponding optimization algorithms. • The effectiveness of the splitting technique is evaluated by analyzing the single-user MISO systems. Using this method, we develop a simple algorithm that provides the optimal solution for DMA configurable weights in a closed form. • We then extend the analysis to the multi-user system, wherein we introduce the weighted sum rate (WSR) maximization problem for the DMA. We use the aforementioned splitting technique to simplify the development of the proposed tractable AO algorithm, which alternately optimizes the precoders and DMA configurable weights. Specifically, the precoders are optimized via the minimum mean square error (MMSE) scheme when the DMA configurations are fixed, and when the precoders are set, DMA configurations are solved via manifold optimization (MO). • Finally, after analyzing the average computational complexities of the proposed algorithms and benchmark schemes, extensive numerical results are provided to validate the accuracy of the analysis and demonstrate the effectiveness of the proposed algorithms. The remainder of this paper is organized as follows. Section II provides a brief introduction to DMA architecture and its behavior when interacting with electromagnetic waves. 
Furthermore, the signal model and problem formulation for both single-user and multi-user systems are presented. Section III presents the optimal solution for singleuser systems, whereas Section IV provides a sub-optimal yet highly competitive solution for multi-user systems. Section V presents the simulation results. Finally, Section VI concludes the paper. Hereinafter, R and C denote the real and complex domain, respectively. Scalars are denoted by lower-case italic letters. The bold-face lower-case (a) and upper-case (A) letters denote a vector and a matrix, respectively. For a matrix G, G i,j denotes the element in the i-th row and j-th column. The transpose, conjugate transpose, and complex conjugate are denoted by (·) T , (·) H and (·) * , respectively. We use I n and 0 n,m to denote an n×n identity matrix and an n×m zero matrix, respectively. For any vector x, [x] i is the i-th element of x; |x| and x denote its absolute value and Euclidean norm, respectively. The remainder of the division of a by b is denoted by a modulus operator, mod (a, b). E (·) denotes the expectation operator. arg(·) represents the phase extraction operator, which returns the phase of its argument, whereas · represents a rounding-up operator, which rounds its argument to the nearest larger integer. A circularly symmetric complex Gaussian random variable with mean υ and variance σ 2 is denoted by CN υ, σ 2 . II. SYSTEM MODEL AND PROBLEM FORMULATION In this section, we present the mathematical model of the proposed DMA-based communication setups. First, we provide a summary of the DMA architecture and signal behavior in Section II-A. Next, in Section II-B, we present the signal model and problem formulation for a downlink single-user MISO system. We then extend this model in Section II-C to a multi-user MISO model and formulate its corresponding sum rate maximization problem. A. DMA Architecture and Signal Model DMAs belong to a class of artificial metamaterials that allow engineering of their physical properties, such as permittivity and permeability, to attain a certain desired behavior toward electromagnetic waves. Recently, these materials have received considerable research attention in the wireless communication field owing to their simple and flexible structure, as well as the introduction of new signal processing abilities, which facilitate several new use cases. Architecturally, DMA is made up of microstrips/ waveguides, containing multiple metamaterial elements arranged vertically. One end of the microstrip is connected to the RF chain that is responsible for the baseband processing of the signal. The elements inside the microstrip, which are generally sub-wavelength spaced, are arranged horizontally, hence, making a planar surface of radiating metamaterial elements. As shown in Fig. 1, multiple elements in a single microstrip are connected to the same RF chain through a waveguide. This arrangement resembles the partially connected hybrid architecture in massive MIMO [31], [32], [33]. However, the former is more efficient as it naturally possesses this capability, whereas the latter incurs additional hardware costs of using unit-norm constrained phase shifters. This gives the DMA a major advantage in terms of hardware size and signal processing flexibility. Owing to its distinctive architecture, the DMA exhibits unique signal interactions. For example, during transmission, each RF chain feeds its corresponding microstrip with the same baseband-processed signal. 
As the signal traverses through the waveguide, it is radiated to the wireless channel by each element. Depending on the frequency response (state) of the element and the characteristics of the signal reaching the element, each radiation takes on a distinct form. The signal characteristics inside the microstrip are normally dictated by the nature and size of the waveguide material [34], whereas the state of the element is configurable. These two properties primarily govern the signal behavior in the DMA. Thus, we explain them further in the following sections. 1) Frequency Response of a Radiating Element: Each radiating element is considered as a resonant electrical circuit, whose frequency response q, for frequency f , is described by the Lorentzian form [35] and [36] where Ω ∈ R is the oscillator strength, f 0 is the resonance frequency, and Γ is the damping factor. These parameters can be configured for each element to attain the desired outcome. Nonetheless, for narrowband systems, previous studies have reported that it is safe to assume that the elements exhibit flat frequency responses [28]. With this assumption, the state configuration of each DMA element takes the following form where θ is the configurable phase shift on each element. 2) Signal Propagation Inside the Microstrip: Similar to a wireless channel, when the signal propagates through the microstrip, it undergoes two main effects: attenuation and phase shift. The attenuation level is mainly influenced by the material characteristics and size, whereas the phase shift typically depends on the wavenumber and signal location. Therefore, if α i and β i denote the attenuation coefficient and wavenumber of microstrip i, respectively, the signal observed by the l-th element, located at ρ i,l , from the input port of the microstrip is given by: Consequently, the relationship between the signal x ∈ C N d ×1 , which is input to N d DMA microstrips, each containing N e elements, and the corresponding transmitted signal t ∈ C N ×1 from all N = N d × N e elements is given by: is a diagonal matrix of dimension N × N , whose diagonal elements represent the signal propagation effects h i,l , ∀ i,l , and is the block-diagonal matrix of dimension N ×N d that collects the configurable weights of each element, with q i,l denoting the weight of l-th element in the i-th microstrip. B. Single-User Model We consider a system with a BS comprising N radiating metasurface elements as its transmit antennas, serving one single-antenna user. When the BS transmits a precoded unit-power symbol d by a precoder f ∈ C N d ×1 with a power P , the received signal y by the user can be expressed as where g ∈ C N ×1 is the channel between the BS and the user; z ∼ CN (0, σ 2 ) is the additive white Gaussian noise at the receiver of the user. The elements of H ∈ C N ×N and Q ∈ C N ×N d are defined in (4) and (5), respectively. For single-user systems, we aim to maximize the signal-tonoise ratio (SNR), which is given by This problem is formulated as follows: where f 1 (f , Q) = |g T HQf| 2 is the objective function obtained by dropping the constant terms, P and σ 2 , from the SNR. For a single user MISO system, the optimal precoder is given by maximum ratio transmission (MRT), which in this Substituting this into f 1 reduces this two-variables expression into a single-variable one, f 2 (Q) = g T HQ 2 , which eventually simplifies problem (P1) to C. 
Multi-User Model For the multi-user scenario, we consider the same setup as in the single-user system, except that the number of single-antenna users is increased to K. Similar to the singleuser scenario, the BS uses power P to transmit a precoded signal is the precoder for a unit-power information symbol d i , intended for user i. The signal received by user k from all radiating elements of the BS is given by where g k ∈ C N ×1 is the channel between the BS and the user k. To obtain a more manageable expression that simplifies the subsequent analysis, we exploit the unique structures of matrices H and Q to exchange the shapes of Q and g k . For this, the DMA weight matrix Q is vectorized to a column vector q ∈ C N ×1 , whose i-th element is given by N]; similarly, the channel vector g k is recast to a block-diagonal matrix G k ∈ C N d ×N , whose element in the m-th row and n-th column is given by This exchange transforms (11) into One of the major challenges in multi-user communication is the interference from other users, which can complicate the system analysis. A popular scheme for dealing with multi-user MIMO systems is based on weighted MMSE (WMMSE) sumutility maximization, proposed in [37]. In this scheme, the WSR for our system is formulated as where and 0 ≤ ω k ≤ 1 denote the signal-to-interference-plus-noise ratio (SINR) and the priority of user k, respectively, and J k ∈ C N ×N d = H H G H k is used to simplify the notation. Let F = [f 1 , f 2 , · · · , f K ] ∈ C N d ×K be a collection of all precoders for all users in the network. In this study, we aim to find the precoder F and DMA configurable weights Q that maximize the sum rate R 1 of the system, without exceeding the total transmit power of the BS while ensuring that all reflecting elements operate within the Lorentzian region. This problem is represented by III. OPTIMAL SOLUTION FOR SINGLE-USER MISO SYSTEMS In DMA-based systems, the physical constraint on the radiating element, which allows operation only within the Lorentzian region, is a major challenge. The Lorentzian constraint restricts the phase and amplitude of its resonator, in this case q, to the range 0 ≤ arg (q) ≤ π and 0 ≤ |q| ≤ 1 , respectively. In general, these kinds of limitations complicate signal processing. To resolve these complications, we can consider a three-step process, that is, i) relax the constraints into more manageable ones, ii) solve the problem with relaxed conditions, iii) recast the solution to a close approximation of a real solution. Most of the early works in DMA relied heavily on this technique. For instance, in a study [6], the Lorentzian constraint was relaxed to a phase-only constraint with a constant amplitude. Essentially, this approach focused only on the phase, whereas the amplitude had a constant value of 1. To make the problem more tractable, the optimizable range of the phase was expanded to 2π [6]. The solution was recast to the original Lorentzian restriction in the final step to make it practical. As stated previously, this approach has an undesired effect on the system performance, which raised the need for novel techniques to achieve better performance with reasonable complexity. In this section, we present one such technique for the single-user downlink MISO system. The key idea is to split the Lorentzian constraint in (2) into two terms, that is, the complex constant term j 2 and the exponential term e jθ 2 , containing the optimizable phase. 
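Before turning to the solution, a minimal numerical sketch of the single-user model and of the Lorentzian split just introduced may help. The element weight is taken as q(θ) = (j + e^{jθ})/2 and the in-microstrip propagation as e^{−ρ(α+jβ)}; both displayed equations are missing from this extraction, so these forms, the MRT precoder, and all parameter values are assumptions consistent with the surrounding description.

```python
# Hedged sketch of the single-user DMA model: Lorentzian-constrained weights,
# diagonal propagation matrix H, block-diagonal weight matrix Q, MRT
# precoder, and the resulting SNR. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
Nd, Ne = 4, 8                     # microstrips and elements per microstrip
N = Nd * Ne
alpha, beta = 0.6, 827.67         # attenuation (1/m) and wavenumber (rad/m)
spacing = 0.005                   # assumed element spacing along a strip (m)
P, sigma2 = 1.0, 1e-9

def lorentzian_weight(theta):
    return (1j + np.exp(1j * theta)) / 2          # |q| <= 1, arg(q) in [0, pi]

rho = np.tile(spacing * (np.arange(Ne) + 1), Nd)  # element positions per strip
H = np.diag(np.exp(-rho * (alpha + 1j * beta)))   # assumed propagation effect
theta = rng.uniform(0, 2 * np.pi, N)
Q = np.zeros((N, Nd), dtype=complex)
for i in range(Nd):
    Q[i * Ne:(i + 1) * Ne, i] = lorentzian_weight(theta[i * Ne:(i + 1) * Ne])

g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

a = g @ H @ Q                      # effective channel g^T H Q at the RF chains
f = a.conj() / np.linalg.norm(a)   # maximum ratio transmission (assumed form)
snr = P * abs(a @ f) ** 2 / sigma2
print(10 * np.log10(snr), "dB")
```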
The unique structures of matrices H and Q allow (P2) to be decomposed into N d independent subproblems, which can be solved individually, as each optimizable weight appears in only one term and one subproblem. Essentially, each subproblem optimizes the weights of a single microstrip. To get more insight, the objective function of (P2) is rewritten as Taking a closer look at the first subproblem, we have the following: where θ n,m is the configurable phase shift of the n-th element in the m-th microstrip. Let c = j 1 . By dropping the constant term 1 4 , the right-hand side term of (20) can be further simplified as: By dropping the constant term |c| 2 , (23) can be written as with {·} denoting the real part of a complex number. In the first term of (24), the real part is maximum when the imaginary part is zero, which is achieved by setting arg (b i ) = arg (c) , ∀ i . By aligning all of the phases, this assignment also complies with the second term. As the second sum contains conjugated values, the phases cancel each other and maximize the amplitude. Following this observation, the phase shift of Q i,1 is given by This process is repeated for all other subproblems in (19). We summarize the procedure in Algorithm 1. As this technique provides a closed-form solution for each subproblem without relaxing the Lorentzian constraint, we regard it as an optimal solution for the DMA-based single-user downlink MISO system. A. Algorithm Description This section provides the joint optimization algorithm of the precoders and DMA weights for the multi-user scenario. We specifically develop a solution for (P3), which is more challenging than (P2). The difficulty arises from the fact that apart from solving for DMA weights, we also have to solve for precoders of all transmitted symbols, which unlike the single-user scenario, the solution cannot be obtained in a closed-form. Previous studies [6], [30] have also arrived at formulations similar to (P3). Despite proposing well-tractable Algorithm 1 Optimal Algorithm for Single-User MISO Systems Input : g, H, N, N Output: Q algorithms for solving their problems, the aforementioned relaxation approach severely affects the performance. Thus, we propose an AO algorithm, which optimizes the precoder F and DMA weights q in two steps alternatively. In the first step, we optimize the precoders while fixing the DMA weights at their last updated values, and in the second step, we do the opposite. This process is repeated until convergence. In the following, we describe these steps in more detail. When q is fixed, the resulting subproblem to solve for F is reduced to a WSR maximization problem for conventional MISO systems [37]. A popular method for solving such problems is the WMMSE algorithm [38], which iteratively updates the following values until they converge: where μ ≥ 0 is the Lagrangian variable for the transmit power constraint, obtained by line search techniques such as the bisection method. Once F is obtained, we fix it and focus on optimizing q. However, this optimization is not straightforward because of the performance degrading Lorentzian constraint on the optimizable variable. Therefore, in this step, we adopt the same technique used in the single-user scenario of splitting the Lorentzian constraint into two parts. 
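For the single-user closed form summarized in Algorithm 1 above, the derivation around (20)-(25) is partly garbled in this extraction. Under one consistent reading, the per-microstrip rule is θ_n = arg(c) − arg(b_n) with c = j·Σ_n b_n, which aligns every term with the constant part of the split. The sketch below implements that reading and checks it numerically against random phases; it is our interpretation, not a verbatim reproduction of Algorithm 1.

```python
# Hedged sketch of the per-microstrip closed form: with q_n = (j+e^{j t})/2
# and b_n the channel-times-propagation factor of element n, |sum b_n q_n|
# is maximized (under this reading) by theta_n = arg(j*sum(b)) - arg(b_n).
import numpy as np

rng = np.random.default_rng(1)

def closed_form_phases(b):
    c = 1j * b.sum()
    return np.mod(np.angle(c) - np.angle(b), 2 * np.pi)

def objective(b, theta):
    q = (1j + np.exp(1j * theta)) / 2
    return abs(b @ q) ** 2

b = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # one microstrip
theta_star = closed_form_phases(b)
best_random = max(objective(b, rng.uniform(0, 2 * np.pi, 8))
                  for _ in range(20000))                    # sanity check only
print(objective(b, theta_star) >= best_random)              # expected: True
```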
For ease of representation, we employ the following notation: s ∈ C^{N×1} = [e^{jθ_{1,1}}, · · · , e^{jθ_{N_e,N_d}}]^T, which helps to compactly transform the SINR expression of the k-th user into (32). The new SINR expression in (32) presents a different optimizable variable θ_{n,m} with a manageable constraint. Consequently, the inputs of the objective function (when F is fixed) in (14) must be modified accordingly, which also transforms the problem statement. The new problem takes the form stated in (P4). The problem (P4) has a less restrictive constraint on θ_{n,m}. In fact, the new constraint forms a complex circle manifold with a continuous and differentiable objective function R_2(s), which allows the easy adoption of the MO algorithm [39], [40], [41]. MO can conceptually be broken into three main steps.

1) Computation of the Riemannian Gradient: The Riemannian gradient of a function R_2 at point s_k, denoted by grad_{s_k} R_2, is the orthogonal projection of the Euclidean gradient ∇_{s_k} R_2 onto the tangent space; for our case, this is given by (36). The Euclidean gradient expression is derived using the natural logarithm version of (33) because of its computational simplicity and the independence of the solution from the base of the logarithm [37].

2) Finding the Search Direction: The conjugate-gradient search direction at point s_k is computed from the Riemannian gradient and the previous search direction d_k transported by the vector transport function T(·), with the conjugate-gradient update parameter at s_k chosen as the Polak-Ribiere parameter [39].

3) Retraction: This is the process of finding the next point s_{k+1} on the manifold by mapping a step of size τ along the search direction from the tangent space at s_k back onto the manifold, where the step size τ is normally given by the Armijo rule [42].

The steps above, which are summarized in Algorithm 2, are repeated until convergence, which is achieved when the Riemannian gradient in (36) approaches zero [41], [43].

Algorithm 2 (fragment) — Input: g, H, N, N_d, N_e. 1: Find the initial s_0 and let k = 0. 2: Compute grad_{s_k} R_2 using (36). ...

Algorithm 3 (fragment) — ... end. Optimization of q when F is fixed. 8: Update q by using Algorithm 2. 9: until the objective function R_1(f_k, q) converges. Output: F and q.

The complete proposed algorithm for the multi-user DMA-based downlink MISO system is summarized in Algorithm 3. As this algorithm employs AO and MO to optimize the precoders and the DMA weights, we abbreviate it as AO-MO. In the first step of this algorithm, we start by randomly initializing the DMA weights q. The initial values of q must be within the Lorentzian region to ensure the convergence of both the MMSE and MO schemes. Steps 3-7 focus on optimizing the precoders of all users by using MMSE when the DMA weights are fixed, whereas, in step 8, we use the MO algorithm to update the values of q, keeping the precoders fixed. This process of optimizing F and q is repeated until the objective function converges.

B. Complexity Analysis

To conclude the section, we provide a complexity analysis of our proposed scheme in terms of floating-point operations (FLOPs) [44], [45]. Additionally, for performance comparison, we also analyze the complexities of the algorithms proposed in [6] and [30], which are used in Section V as the benchmark schemes. Zhang et al. [30] proposed an algorithm that uses AO and MO for its optimization process but with the relaxed Lorentzian constraint on Q.
Therefore, we call it "Relaxed AO-MO". Similarly, as in [6] the AO algorithm is used twice for optimizations, we call it "double AO". It begins by optimizing the precoders in the outer AO loop, then relaxes the constraints on Q, before deploying another alternate procedure to individually optimize the weight of each DMA element by using line search techniques. Considering the proposed algorithm, complexity in the first step, which computes the precoders for the user equipment, can be attributed to matrix inversion in (28), which is given by where I W and I μ respectively denote the number of iterations for the WMMSE iterative process and for searching μ. In the second step, where MO is used to update the DMA weights, the dominant complexity arises from the Euclidean gradient computation, and is given by where I MO is number of iterations accumulated in Algorithm 2. If I o denotes the number of outer iterations for AO-based algorithms, the total complexity of AO-MO can be given by O (I o (C W + C MO )). As the relaxed AO-MO also uses AO and MO for its optimization, its complexity is calculated similarly. Despite sharing the same complexity expression, these two algorithms are expected to have different numbers of iterations for their convergence, resulting in different complexities. Next, we analyze the complexity of the double AO scheme. During the optimization of precoders, it uses the same WMMSE technique as the AO-MO; hence its complexity is also given by C W . For the DMA weights optimization, the complexity is largely dominated by the matrix Kronecker product and the iterative line search, which updates the weight of each DMA element. The combined complexity for these two procedures is given by C KL = O(K 2 2N 2 + N N d −N +2I ls N 2 (K + 1)), where I ls is the average number of iterations needed by the line search process. Therefore, O (I o (C W + C KL )) is the total complexity of the double AO scheme. V. SIMULATION RESULTS In this section, we present the numerical results to demonstrate the effectiveness of our proposed schemes. The algorithms developed in Sections III and IV are used to configure a BS using DMA to communicate with user terminals. We use relaxed AO-MO and double AO algorithms as the benchmark schemes to compare their performance with our proposed scheme. However, these algorithms have been designed specifically for multi-user scenarios. For the singleuser scenario, we use the single-user counterpart of double AO, which iteratively optimizes the relaxed DMA weights, as the benchmark scheme. We abbreviate it as "Relaxed DMA-SU." Additionally, the performance of a "Random weight" scheme, which randomly chooses the weight of each DMA element, is compared with other single-user schemes. A. Simulation Setup In this study, we consider a BS with a planar array of metasurface elements that communicates with K users using a carrier frequency of 28 GHz. The spacing between the microstrips and the elements therein is set to λ/2, where λ is the wavelength of the carrier wave. We set the DMA attenuation coefficient and wavenumber to α = 0.6 m −1 and β = 827.67 m −1 , respectively [30]. The transmitter power is set to 23 dBm, and the receiver noise is set to −80 dBm. Moreover, we adopt the practical narrowband Saleh-Valenzuela mmWave channel model [16], [46] throughout our simulations. 
The channel is further multiplied by the square root of the distance-dependant pathloss, which is given as where ∂ is the distance in meters between the BS and the user, and η = 3.5 is a pathloss exponent. B. Single User Scenario, K = 1 In this scenario, a BS centered at the origin of the xy plane is used to serve a single-antenna user located at (D, 0). 1) SNR Versus Number of Microstrips: We start by positioning a user at a distance D = 100 m from the BS and set the number of elements in each microstrip to N e = 10. Fig. 2 demonstrates the expected trend that the performance of all algorithms increases with the number of microstrips. However, it is observed from the performance of the random weight scheme that if the weights of the DMA elements are not optimized, the system performance is poor. Furthermore, it is clearly seen from this figure that our proposed optimal algorithm attains the highest SNR for all considered sizes of DMA. The suggested scheme provides approximately a 2-dB SNR gain over the relaxed scheme. 2) SNR Versus BS-User Distance (D): Next, we evaluated the impact of the separation between the BS and user. Fig. 3 illustrates the performance of various algorithms as the user is moved in a straight line from D = 50 m to D = 300 m. It can be seen from Fig. 3 that as the distance between the BS and the user increases, the performance of all the compared schemes decreases. This can be attributed to the increased pathloss that inevitably reduces the signal strength. Nevertheless, our proposed algorithm still demonstrates the best performance compared to the other benchmark schemes for the entire range of the distance. It provides an SNR gain of 2 dB compared to the relaxed scheme. 3) SNR Versus BS' Transmit Power: Fig. 4 presents the variation of the SNR as the BS transmit power is altered over the range P = [5 − 40] dBm. The number of microstrips and their radiating elements are set to N d = 20 and N e = 10, respectively. As expected, the performance of each scheme improves as the transmit power increases. Furthermore, following the prior trend, the proposed algorithm attains the best performance compared to the benchmark schemes. This further solidifies the superiority of our proposed algorithm. C. Multi-User Scenario For the multi-user system, we adopted a different BS and users setup, where the BS is positioned at the center of a 300 m-radius cell, radiating signals omnidirectionally to serve K = 5 single-antenna users, that are randomly distributed Authorized licensed use limited to the terms of the applicable license agreement with IEEE. Restrictions apply. within the cell, with an exception of a circular area of radius 35 m from the center. Each user is then assigned a priority proportional to its pathloss Λ i from the BS, which is, ω k = Λ k È K i Λi . Moreover, we adopt a fully digital (FD) antenna architecture to act as the performance upper bound. For this architecture, we assume the BS is ideally equipped with the same number of antenna elements as in the DMA case, and each antenna is connected to a dedicated RF chain. As conventional antennas do not have optimizable weights, only the precoder is optimized, using the same WMMSE algorithm [38]. 1) Convergence Behavior: Fig. 5 presents the convergence behavior of our proposed algorithm compared to the benchmark schemes. For this, we assume N d = 15, N e = 10, K = 5, and P = 23 dBm. From Fig. 
5, it can be observed that the proposed scheme and the relaxed AO-MO converge faster than the double AO, and approximately at the same number of iterations. According to our analysis, double AO has the slowest convergence because of its large number of iterative processes as it optimizes the weight of each DMA element by using line search techniques. Additionally, we also observe that the FD converges at a higher value than any other schemes-this is expected because of its larger number of RF chains. Furthermore, our proposed algorithm converges at the highest objective value compared to the other benchmark schemes using the same number of RF chains. 2) WSR Versus the Number of Microstrips and Elements: We continue to assess the performance of our proposed algorithm for various sizes of the DMA array. We begin by varying the number of microstrips from 5 to 20 and setting other parameters to their default values. In Fig. 6(a), where we plot the performance of various schemes for these settings, it is seen that our proposed algorithm attains higher performance than both the relaxed AO-MO and double AO. Quantitatively, for a small number of microstrips, e.g., N d = 5, the proposed algorithm provides a 12.6% gain of WSR over the benchmark algorithms. This gain is expected because unlike the benchmark schemes the proposed algorithm optimizes the DMA weights without relaxing its constraints. Despite the decrease in performance gain as the number of microstrips increase, the proposed algorithm still provides a significant gain of approximately 10.8% over the benchmark DMA-based schemes for N d = 20. Moreover, when we vary the size of the DMA by changing the number of elements in the same range, that is, N e = {5, 10, 15, 20}, while keeping the number of microstrips at N d = 10, the performance of our proposed algorithm as shown in Fig. 6(b), is significantly higher than the benchmark DMA schemes. However, the FD achieves the highest performance among all other schemes because of the aforementioned reason. 3) WSR Versus the Number of Users: Next, we vary the number of users in the network and observe its impact on the performance of our proposed algorithm. Fig. 7, which plots the WSR of various schemes for different numbers of users in the network, is obtained by setting N d = 16 and N e = 10, and the rest of parameters take their default values. Fig. 7 shows that the performance of each algorithm decreases as the number of users increases. This can be credited to increased user interference as well as the natural phenomenon of decrease in quality of service when demand increases but the supply, such as the BS power and number of antenna elements, remains constant. However, the proposed AO-MO algorithm still provides remarkable performance gains over the benchmark DMA-based schemes, which increases with the number of users. For example, the performance gain increases from 10.7%, for K = 5, to 27.1% for K = 16. 4) Comparison of Complexity: Finally, we compare the complexities of our proposed algorithm and all the benchmark schemes, except FD. The comparison presented in Fig. 8 was attained by configuring the network with the same parameters as those used for Fig. 6(a). As shown in Fig. 8 the complexity of each algorithm increases with the size of the DMA. This behavior is predictable, as the number of elements increases, more computational resources are needed to optimize their weights. 
Despite using the computationally expensive Kronecker product and a separate line search-based iterative procedure for optimization of each DMA element's weight, the double AO scheme is less complex than relaxed AO-MO. However, this comes at the cost of poorer performance than all other schemes. Nevertheless, our proposed algorithm is the least complex among the compared schemes. VI. CONCLUSION This study examined a DMA-based downlink MISO system, in which a DMA-equipped BS serves single-antenna user(s). We split the analysis into single-user and multiple-user sce-narios and formulated the corresponding problem for each. First, we analyzed the single-user case, wherein we aimed to maximize the SNR of the user, considering the physical constraints of the DMA elements. We developed an optimal algorithm for this problem, which provides a closed-form solution. Next, we analyzed the multi-user scenario, wherein the rates of the users are first weighted proportionally to create a WSR problem. We then developed an AO algorithm to solve this WSR problem; this algorithm alternately optimizes the precoders of the system based on the MMSE technique and the DMA weights based on the MO algorithm. Although the proposed algorithms solve problems in different scenarios, they both share the same novel technique of splitting the Lorentzian constraint into two parts, which greatly facilitates the algorithm development. The proposed optimal algorithm in the single-user case offered a significant performance gain over those of the benchmark schemes. For example, when the size of the DMA was varied by changing the number of microstrips, our proposed optimal algorithm maintained an SNR gain of approximately 2 dB throughout the range. Apart from the performance superiority of our proposed optimal algorithm in single-user systems, we observe similar behaviour in the multiuser scenario. Specifically, a gain of more than 27% in the system's WSR is attained by our proposed AO-MO algorithm over the benchmark schemes when the BS' precoders and DMA weights are configured to serve sixteen users in the network. Most future wireless communication standards, such as 6G, are expected to employ wide bandwidths in order to deliver high data rates. In such cases, the operational bandwidth may significantly exceed the coherence bandwidth of the channel. In this study, we assumed a narrowband system; extending the proposed formulations to the wideband system constitutes an interesting research direction for future work.
2022-12-18T16:07:25.735Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "ca8323f832fa1b2204efbbd4f80dcf30b8468359", "oa_license": "CCBY", "oa_url": "https://figshare.com/articles/preprint/Downlink_Beamforming_for_Dynamic_Metasurface_Antennas/22207933/1/files/39539380.pdf", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "1a62d5f89d327ce3d7210c2d18cd0c27d95684f6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
235297066
pes2o/s2orc
v3-fos-license
Tetrodotoxins (TTXs) and Vibrio alginolyticus in Mussels from Central Adriatic Sea (Italy): Are They Closely Related? Tetrodotoxins (TTXs), potent neurotoxins, have become an increasing concern in Europe in recent decades, especially because of their presence in mollusks. The European Food Safety Authority published a Scientific Opinion setting a recommended threshold for TTX in mollusks of 44 µg equivalent kg−1 and calling all member states to contribute to an effort to gather data in order to produce a more exhaustive risk assessment. The objective of this work was to assess TTX levels in wild and farmed mussels (Mytilus galloprovincialis) harvested in 2018–2019 along the coastal area of the Marche region in the Central Adriatic Sea (Italy). The presence of Vibrio spp. carrying the non-ribosomal peptide synthetase (NRPS) and polyketide synthase (PKS) genes, which are suspected to be involved in TTX biosynthesis, was also investigated. Out of 158 mussel samples analyzed by hydrophilic interaction liquid chromatography coupled with tandem mass spectrometry (HILIC-MS/MS), 11 (7%) contained the toxins at detectable levels (8–26 µg kg−1) and 3 (2%) contained levels above the EFSA safety threshold (61–76 µg kg−1). Contaminated mussels were all harvested from natural beds in spring or summer. Of the 2019 samples, 70% of them contained V. alginolyticus strains with the NRPS and/or PKS genes. None of the strains containing NRPS and/or PKS genes showed detectable levels of TTXs. TTXs in mussels are not yet a threat in the Marche region nor in Europe, but further investigations are surely needed. Introduction Tetrodotoxins (TTXs) are potent neurotoxins, which have been known for centuries and have been implicated worldwide in pufferfish poisoning. Structurally, TTXs are alkaloids with a guanidinium moiety connected to a highly oxygenated carbon skeleton [1][2][3][4]. The guanidinium group is responsible for their toxicity; it binds to the voltage-gated Na + channel pores on neuronal and muscle cell membranes and thus blocks nervous signal transmission [5,6]. After eating contaminated seafood, symptoms of tetrodotoxin poisoning progress from tongue and lip numbness to progressive paralysis and, in the worst cases, death as a result of respiratory failure. Tetrodotoxin (TTX) is the best known member of the group, but it co-exists with some other natural occurring congeners. There have been 30 structural analogues reported to date, with different degrees of toxicity depending on their chemical structure [7,8]. TTX is over a thousand time more toxic to humans than cyanide, with a lethal dose of 2-3 mg (corresponding to 40-60 µg kg −1 b.w.) [9,10]. Besides pufferfish, other fish of the The results of this work fulfill the call for data from the EFSA to EU member states, provide useful information for a better understanding of TTX origin, and describe a study of TTX accumulation in marine organisms. Method Performance Assessment The developed HILIC-MS/MS method showed good analytical performance. Matrixmatched calibration curves exhibited good linearity with correlation coefficients greater than 0.99 and response factor drift < 10%. The limit of detection (LOD) was 8 µg kg −1 and the limit of quantification (LOQ) was 26 µg kg −1 for all the matrices included in the present study. The sensitivity was not excellent, due to the significant matrix effect (signal suppression of 70-80%) and the instrument features; the system can be classified as a low-sensitivity system, as described by Turner et al. [40]. 
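The 70-80% signal suppression quoted above is conventionally expressed through the ratio of the matrix-matched to the solvent calibration slopes; the one-liner below uses that standard definition as an assumption, since the authors do not spell out their own calculation, and the slope values are purely hypothetical.

```python
def matrix_effect_percent(slope_matrix, slope_solvent):
    """Signal suppression (positive) or enhancement (negative), in percent."""
    return (1.0 - slope_matrix / slope_solvent) * 100.0

# Hypothetical slopes giving a suppression in the 70-80% range reported in the text.
print(matrix_effect_percent(slope_matrix=300.0, slope_solvent=1200.0))  # -> 75.0
```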
However, the method was suitable to identify TTX contamination at the EFSA guidance level of 44 µg equivalent kg−1 for TTX. Internal quality controls carried out within the different analytical batches were in agreement with the validation parameters. A rough calculation of the contamination levels by single-point comparison with a standard at LOQ level resulted in a value of ~10 µg kg−1. All the contaminated samples contained only TTX itself. None of the eight other analogues included in the method were detected (<LOD). The contaminated samples were harvested during July from natural Pesaro beds (VA180705W, VA180731W, and MS180731W). None of the farmed samples showed detectable TTX (>LOD) (Table S1). As regards the Pesaro area, only two samples were contaminated, and both showed a concentration just above the LOD (>8 µg kg−1). All the contaminated samples were harvested from mid-June to the beginning of August and, as in 2018, only the parent toxin TTX was detected. Six of the seven sampling points gave at least one sample contaminated at a detectable level. Distribution of TTX in Mussel Tissues All the samples containing detectable amounts of TTX were dissected and the digestive glands were analyzed using HILIC-MS/MS. Assuming a mussel composition of 20% by weight for the digestive gland (DG) and 80% for the remaining flesh (RF), a homogenous distribution of toxin between tissues would result in 20% of toxin in the DG and 80% in RF. The six samples with TTX levels between the LOD and LOQ showed similar contamination levels in the DG and the whole flesh (WF), indicating a uniform distribution of the toxin. In sample AS190605W, characterized by significant TTX levels, a 3-fold higher concentration of toxin was reported in the WF (67 µg kg−1) compared to the DG (~22 µg kg−1), indicating that roughly 93% of the total TTX was in the RF and only 7% in the DG (Table S2).
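The tissue shares quoted here, and for the two samples discussed next, follow directly from the assumed 20/80 weight split between digestive gland and remaining flesh. A small worked sketch (sample IDs and concentrations taken from the text) reproduces those percentages:

```python
def tissue_shares(dg_conc, wf_conc, dg_fraction=0.20):
    """Split the whole-flesh TTX burden between digestive gland and remaining flesh.

    dg_conc, wf_conc: concentrations (ug/kg) measured in the digestive gland and in
    the whole flesh; dg_fraction: assumed weight fraction of the digestive gland.
    """
    rf_fraction = 1.0 - dg_fraction
    rf_conc = (wf_conc - dg_fraction * dg_conc) / rf_fraction  # back-calculated RF level
    dg_share = dg_fraction * dg_conc / wf_conc                 # share of total burden in DG
    return dg_share, 1.0 - dg_share, rf_conc

print(tissue_shares(22, 67))    # AS190605W: ~7% in DG, ~93% in RF
print(tissue_shares(196, 76))   # SN190702W: ~52% of the burden in the DG
print(tissue_shares(107, 61))   # SS190702W: ~35% of the burden in the DG
```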
Among the samples with the highest TTX levels, SN190702W and SS190702W showed 2.6-fold and 1.7-fold higher concentrations of TTX in the DG (196 and 107 µg kg−1, respectively) than the WF (76 and 61 µg kg−1, respectively), suggesting a preferential accumulation of the toxin in the DG (52% and 35% of the total contamination, respectively). Uptake and Depuration Rate Estimation In the Ancona sampling area, the toxin concentrations recorded and, in some sites, the sampling frequency, allowed the estimation of TTX uptake and depletion rates. In the An sud site, two weeks before the positive TTX sample identification (67 µg kg−1), no traces of the toxin were detected (Figure 2). At the Sir nord and Sir sud sites, no TTX was detectable one month before the maximum levels were reached (76 µg kg−1 and 61 µg kg−1, respectively). Two weeks before, values of ~18 and ~23 µg kg−1 (just below the LOQ) were detected at the two sites, respectively. At all the sampling points, the TTX reached at least 85% clearance two weeks after having reached the highest contamination (Figure 2). At the An nord and An sud, TTX showed more than one maximum in the contamination profile during the 2019 sampling campaign, which lasted four months. Year 2018 Eighty-one of the 99 samples of mussels collected during 2018 were also analyzed using a culturing method to assess the presence of Vibrio spp. V. alginolyticus was isolated and identified in 50 (62%) of the 81 samples analyzed using PCR targeting a species-specific marker (Table S1 and Figure 3), while V. parahaemolyticus was never detected. All V. alginolyticus isolates were analyzed by PCR to detect the presence of NRPS and PKS genes. V. alginolyticus colonies harboring NRPS or PKS genes were isolated from three (4%) samples: the NRPS gene was detected in two isolates (3% of the samples), while PKS was detected in only 1 (1%). These three mussel samples were harvested from farming plants during spring or summer, and TTXs were not detected in any of them. All isolates bearing the NRPS or PKS genes were cultured and analyzed for TTXs using HILIC-MS/MS. None of them showed detectable levels of the toxins. Year 2019 In 2019, Vibrio spp. isolation and identification were carried out on 35 mussel samples (Table S1), and in all of them V. alginolyticus strains were isolated and identified by PCR targeting a species-specific marker. NRPS and/or PKS genes were detected by PCR in V. alginolyticus isolates from 14 (40%) of the samples.
Isolates from eight (23%) samples harbored only the NRPS gene and one (3%) sample only the PKS, while isolates from five samples (14%) carried both genes. All isolates harboring NRPS and/or PKS genes were from samples (14) harvested between May and August, but TTXs were detected in only seven of them. All the sampling points returned at least one isolate of V. alginolyticus containing NRPS and/or PKS gene. TTX and NRPS/PKS-Positive Vibrio alginolyticus in Mussels Sampled in 2019 Aiming to study the possible coexistence of NRPS/PKS-positive Vibrio alginolyticus and TTX contamination, further observations focused on the mussels collected during 2019 from natural beds, during the warmer seasons (spring and summer). The latter factors seemed to promote both TTX mussel contamination and NRPS/PKS-positive Vibrio alginolyticus isolation. In 2019, 11 mussel samples contained TTX > LOD. Ten of them were tested for Vibrio spp. (one contaminated sample was not tested) and seven (70%) were found to be positive for V. alginolyticus carrying the NRPS and/or PKS genes. Conversely, 14 mussel samples containing V. alginolyticus with the NRPS and/or PKS genes were analyzed for TTX and seven (50%) showed detectable levels. NPRS and/or PKS genes were always present in the samples with high TTX levels (≥15 µg kg −1 ) (Table S1, Figures 2 and 3). Discussion Across the whole study, out of 158 mussel samples analyzed for TTXs, in 144 (91%) the toxin levels were lower than the LOD, in 11 (7%) they were between the LOD and LOQ, and only 3 (2%) of them showed levels between 61 and 76 µg kg −1 . Looking at Europe, the EFSA has reported that of the more than 1600 results provided from Great Britain, Greece, and the Netherlands on TTXs in bivalve mollusks collected between 2006 and 2016, 92% showed no contamination or levels below 25 µg kg −1 , and in only 32 samples TTXs were above the recommended level (between 47 and 253 µg kg −1 ) [10]. In Italy, TTXs were measured at 0.8-6.4 µg kg −1 in 14 out of 25 shellfish collected in spring and summer from 2015 to 2017 in Syracuse Bay (Sicily) [38]. Exceptionally high contamination levels were reported in the Marano Lagoon (North Adriatic Sea), where TTX levels were measured in mussels of 541 µg kg −1 in 2017 and 216 µg kg −1 in 2018 [39]. Therefore, the results reported herein for mussels from the Marche coast are comparable with the European ones. The literature data show that TTXs in bivalve mollusks are not yet a serious threat to consumer health, even if contamination levels above the EFSA recommendation of 44 µg kg −1 were detected in a small percentage of samples. A more thorough analysis of the 2018 Marche coastal data showed undetectable TTX levels (<LOD) in all the farmed samples and only slight contamination in three mussels from Pesaro natural beds. In long-line mussel farms, mollusks live in pelagic waters (1-2 nautical miles from the coast) with high hydrodynamics and depths often above 5 m. Wild mussels live clinging to rocks, near the coast, and generally in shallow water. The contaminated mussels in the present study were harvested in July, a month characterized by high solar radiation and water temperatures around 25 • C. These environmental conditions are in agreement with the results of Turner et al. [42], who reported a clear relationship between the incidence of TTX-contaminated bivalve mollusks in Great Britain and the environmental characteristics of sampling sites. 
They pointed out that most of the contaminated samples came from sites characterized by shallow water (<5 m), relatively low salinity, and high temperature (>15 • C). Leao et al. [28] reported that among bivalve mollusk samples taken from Galicia between January and September 2017, those contaminated by low levels of TTXs came from intertidal areas, with water temperatures close to or above 15 • C and medium-low salinity. All the above results confirmed the hypothesis that Marche mussels from natural beds may be more prone to TTX contamination than farmed ones. Since high temperature also seems to be a causative factor, the sampling campaign in 2019 was focused in spring and summer, and only on natural beds, to increase the likelihood of finding TTX-contaminated mollusks. It turned out that in 2019, a significantly higher percentage (14% vs. 3% in 2018) of samples showed detectable TTX levels. Despite the small number of contaminated samples and the low levels measured, an attempt to evaluate TTX uptake and depuration rates was made for specific sampling sites. In general, TTX accumulation in mussels seems to be not as fast as in the case of other marine biotoxins [43], while it seems that total toxin clearance always occurs in a timespan of two weeks or less ( Figure 2). Moreover, at some sampling sites, more than one maximum and subsequent drop in contamination were recorded during the biweekly sampling intervals. All these findings confirm the previous reports by Turner et al. [42] for bivalves from the United Kingdom. Even though TTX levels above/around the LOQ were measured in the DG and WF of only three mussel samples out of nine, some consideration of tissue distribution may be attempted. The compartmentalization study highlighted three possible patterns: (i) despite the low TTX levels, the less contaminated samples showed a substantially uniform distribution of toxin between DG and RF; of the three more highly contaminated samples, (ii) one (AS190605W) showed preferential TTX accumulation in RF (93% of total TTX content), while (iii) the other two (SN190702W and SS190702W) showed preferential accumulation in DG (52% and 35%, respectively). Preferential accumulation in DG has been previously reported for other marine biotoxins in mollusks [44,45]. The TTX distribution pattern in the mollusk tissues has previously suggested a hypothesis of different possible routes of exposure [46]. Accumulation in the DG could indicate dietary TTX uptake, while a more homogeneous distribution in the mollusk tissues could be the result of contamination from symbiotic microorganisms. However, it has also been demonstrated that after ingestion and accumulation in the DG, TTXs can migrate from one tissue compartment to another in bivalve mollusks [47]. High toxin accumulation in DG can thus provide evidence for recent or ongoing TTX contamination. Interestingly, the two samples with the highest contamination levels in DG (SN190702W and SS190702W) were harvested on the same day from two neighboring sampling sites with very similar environmental conditions. We can also hypothesize for them the same TTX exposure time. Few data are available in the literature about TTX distribution in mollusk tissues. Vlamis et al. [34] found similar levels in the DG and WF (202.9 and 179.1 µg TTX kg −1 , respectively) of Greek mussels, while Biessy et al. [46][47][48] reported preferential accumulation of TTXs in siphons and DG in clam species from New Zealand (Paphies australis). Rapkova et al. 
[49] found the highest TTX levels in the DG of Pacific oysters (Crassostrea gigas) harvested from a production area in southern England. It also seems that different bivalve species differ in tissue accumulation patterns; therefore, the reported compartmentalization in mussels from the central Adriatic Sea may contribute to better understanding of the TTX compartmentalization behavior. Vibrio spp. isolation and identification in mussels from the Marche coast showed a high incidence of V. alginolyticus, with 50 contaminated samples out of 81 analyzed (62%) in 2018 and 35 out of 35 (100%) in 2019. These results were partially expected because vibrios are among the most abundant bacteria in the marine environment [50] and V. alginolyticus is the predominant species along the Italian Adriatic coast, followed by V. parahaemolyticus, V. cholerae, and V. vulnificus [51]. The increased incidence observed in the 2019 samples compared to those from 2018 is the result of sampling site (natural beds) and sampling period (spring-summer) selection in the second year of the monitoring campaign. It is known that the occurrence of Vibrio spp. is positively correlated with temperature, especially in temperate regions [52]. Additionally, the 2019 improvement of the Vibrio isolation method could have increased the analysis sensitivity. Moreover, the incidence of V. alginolyticus with NRPS and/or PKS genes was significantly higher in 2019 (40%) than in 2018 (4%). Again, the explanation for this evidence may be found in the positive correlation between the presence of these genes in vibrio strains and the warm sampling season. Furthermore, the three V. alginolyticus strains with NRPS and/or PKS genes found during 2018 were isolated from mussels harvested between May and July. The correlation between the presence in vibrios of genes responsible for other toxins, such as thermostable direct hemolysin (tdh) and thermostable direct hemolysinrelated hemolysin (trh), and the warm season has been previously reported [53]. In the future, more evidence should be collected on the correlation between the presence of V. alginolyticus carrying NRPS and PKS and the environmental temperatures. None of the NRPS-and/or PKS-positive strains showed detectable production of TTXs. It has previously been reported that bacterial TTX production, although proven, is very variable in terms of rate (from less than 1 ng mL −1 of extract to a few hundred) [23] and not always easily detectable. Furthermore, several studies have reported that the NRPS and PKS pathway genes in bacteria are not always related to TTX production, which probably occurs only in specific conditions or as a result of certain unknown stimuli [21]. It has also been hypothesized that some TTX-producing bacteria tend to lose their ability to synthesize the toxin if cultured in an artificial medium [23,54]. The focused mussel survey conducted during 2019 may enable some consideration of the possible coexistence of TTX contamination and V. alginolyticus carrying NRPS and/or PKS. A total 70% (7 out of 10) of the mussels that showed TTX > LOD (and analyzed for V. alginolyticus) and 100% of the samples with estimated concentrations > 15 µg/kg harbored V. alginolyticus carrying the NRPS and/or PKS genes. Conversely, in the 50% of mussels containing V. alginolyticus with the NRPS and/or PKS genes, TTX was measured at a detectable level (>LOD). These findings highlight the concomitance between TTX contamination and NRPS/PKS-positive V. 
alginolyticus in mussels from the Marche region. However, to confirm the possible correlation between the two parameters, longer-term studies are needed (Figure 2). Sampling A total of 158 mussel samples (Mytilus galloprovincialis) were collected between 2018 and 2019 from Marche region harvesting areas included in the biotoxin regional monitoring plan (Central Adriatic Sea, Italy). Nine were breeding areas and seven were wild sites ( Figure 4). Natural mussel beds in the Marche region show the unique features of a high, jagged and rocky coast, an exception in the Adriatic Sea, surrounded by straight sandy coasts. In 2018, samples were collected once a month from January until the end of August; in summer, sampling frequency was increased to twice a month. In 2019, the sampling was limited to the seven wild sites from May until early September, generally at a biweekly frequency. Each sample consisted of about 40-50 commercially sized mussels (5-7 cm) with a weight of approximately 4 kg. Samples were submitted both to chemical and microbiological analyses. A stock solution in water was prepared from the CRM. This solution was used to obtain calibration standards and to spike the blank samples used in quality control and in method validation. TTX Extraction from Mussels and Bacterial Pellets The extractions were performed following the EU-SOP "Determination of Tetrodotoxin by HILIC-MS/MS" [55], without the clean-up step. Moreover, the extraction protocol for bacterial pellets was implemented as described by Turner et al. [27] with minor changes. The details are described below. • Mussels (WF, DG) Bivalves were opened immediately once they arrived in the laboratory. Sand and solid residues were removed under running water, and the mussels were taken out of the shells and drained on a net. For each sample, about 150 g of WF was pooled and finely homogenized. DGs were dissected, pooled (10-15 specimens), finely homogenized, and analyzed separately in order to investigate the distribution of TTXs in the mussel tissues. WF or DG homogenate (5.0 ± 0.1 g) was extracted with 5 mL of acetic acid (1% v/v), vortex-mixed for 3 min, and placed in a boiling water bath (100 • C) for 5 min. The extract was cooled to room temperature, vortex-mixed for 3 min, and centrifuged at 3000× g for 10 min. Supernatant (1 mL) was transferred to a microcentrifuge tube, to which 5 µL ammonium hydroxide was added and the sample was vortex-mixed for 3 min and centrifuged at 10,000× g per 1 min. The final extract was diluted (1:2) with a solution of acetonitrile (80% v/v) containing acetic acid (0.25% v/v), filtered through a 0.2 µm syringe filter and analyzed by HILIC-MS/MS. • Bacterial pellets Bacterial cultures (400 mL) were centrifuged at 3000× g for 30 min and the pellets (about 1 g) were collected in 50 mL PP centrifuge tubes. Each pellet was extracted with 1 mL of acetic acid (1% v/v) in a boiling water bath (100 • C) for 5 min, and then cooled to room temperature and centrifuged at 3000× g for 15 min. The supernatant was diluted (1:2) with a solution of acetonitrile (80% v/v) containing acetic acid (0.25% v/v), filtered through a 0.2 µm syringe filter, and analyzed by HILIC/MS-MS. HILIC-MS/MS Analysis The chromatographic separation was achieved according to the EU-SOP "Determination of Tetrodotoxin by HILIC-MS/MS" [55]; details are given in Table S3. HILIC chromatography requires additional chromatographic methods for equilibration, column cleaning, and shutdown (Table S3). 
Mass spectral experiments were performed using a hybrid triple-quadrupole/linear ion trap 3200 Q TRAP mass spectrometer (AB Sciex, Darmstadt, Germany) equipped with a Turbo V source and an electrospray ionization (ESI) probe. The mass spectrometer was coupled to a 1200-HPLC (Agilent, Palo Alto, CA, USA), which included an in-line degasser (G1379B), a quaternary pump (G1311A), a refrigerated autosampler (G1329A), and a column oven (G1316A). Infusion and flow injection experiments were performed on TTX CRM to optimize compound-dependent and ion source parameters. Nine analogues (Figure 5) were monitored via multiple reaction monitoring (MRM), with two transitions selected for each toxin to allow correct quantification and identification. The MS acquisition method is described in Table S3. The unequivocal identification of the TTX chromatographic peak was accomplished by retention time comparison and ion ratio verification for the two characteristic mass transitions in the samples and in a matrix-matched standard. All the other analytes, for which no reference materials were available, were identified by selecting specific transitions from literature. The calibration curves were matrix-matched in order to deal with the relevant matrix effect. All analogues were then quantified with TTX, assuming an equimolar response.
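As a rough illustration of how the matrix-matched calibration described above turns peak areas into tissue concentrations, the sketch below fits the least-squares line through the standards and converts the extract result to a whole-flesh basis. The extract-to-tissue factor of 2 (5 g extracted in 5 mL, then a 1:2 dilution) is an assumption inferred from the extraction protocol, not a figure stated by the authors, and the peak areas are hypothetical stand-ins.

```python
import numpy as np

def fit_calibration(concs_ng_ml, peak_areas):
    """Least-squares line y = b*x + a through matrix-matched standards."""
    b, a = np.polyfit(concs_ng_ml, peak_areas, 1)
    return b, a

def quantify(peak_area, b, a, extract_to_tissue=2.0):
    """Concentration in the final extract (ng/mL) and in tissue (ug/kg, assumed factor)."""
    extract_ng_ml = (peak_area - a) / b
    return extract_ng_ml, extract_ng_ml * extract_to_tissue

levels = np.array([6.5, 13, 19, 26, 65, 130])   # the six calibration levels used in the study
areas = 1.2e3 * levels + 50                     # stand-in response; real areas come from HILIC-MS/MS runs
b, a = fit_calibration(levels, areas)
print(quantify(1.6e4, b, a))                    # an example unknown sample
```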
Linearity was evaluated from the correlation coefficients and response factor variation. LOD and LOQ were estimated as the concentrations giving S/N ratios of 3 and 10, respectively, for the least intense (qualifier) transition monitored. The LOQ was calculated from the lowest calibration level and the LOD used was derived from the LOQ by dividing by 3.3. Subsequently, the estimated LOQ was experimentally confirmed by spiking blank mussel samples with the TTX CRM. The calculated LOD was extended to all the other TTX analogues. Throughout the study, concentrations >LOD and <LOQ were reported, but we were aware of dealing with numbers affected by a larger uncertainty than if they were >LOQ since their variability was above the Horvitz-Thomson theoretical equation prescriptions. These numbers may be an estimation of possible contamination levels, enabling consideration of uptake and detoxification. Accuracy in terms of R% and precision in terms of intraday repeatability (RSDr%) were assessed by replicated analyses (N = 6) on blank mussel samples spiked at 75 µg kg −1 and 251 µg kg −1 (TTX levels often found in European shellfish). The spiked samples were quantified against the matrix-matched calibration curves. Quality Control Internal quality controls were included in sample analysis: a blank mussel sample was spiked with TTX at the LOQ in each analytical batch. Accuracy in terms of R% was calculated to check validation performances. All the samples in which TTX was quantified were subjected to co-chromatography, in which they were spiked with a comparable amount of TTX to indubitably confirm the analyte identification. Isolation of Vibrio spp. from Mussel Samples Bivalve mollusk samples were transported to the laboratory in refrigerated conditions and processed immediately upon receipt. For microbiological analysis of Vibrio, samples were externally cleaned with potable water and prepared for analysis in accordance with ISO 6887-3 [56]. In aseptic working conditions, about 10 individuals were opened and the flesh meat and intervalvular fluids were pooled together. Briefly, 25 g of bivalve sample was weighed, 225 mL of alkaline saline peptone water (ASPW) was added, and the sample was homogenized in a blender and incubated at 37 • C for 24 ± 3 h. After incubation, enrichment broths were subcultured onto selective media, Thiosulfate citrate bile sucrose agar (TCBS) and CHROM™ agar Vibrio (CHROMagar, France). These subcultures were further incubated at 37 • C for 24 ± 3 h. After incubation, colonies were selected on the basis of distinctive morphology and color. At least five yellow and/or green colonies from each TCBS plate, and mauve, blue, and white colonies from each CHROM™ agar Vibrio plate were isolated; colonies were subcultured on Tryptic soy agar (TSA) with 3% NaCl and identified via molecular assay. Mass Culture of Bacterial Isolates Bacterial isolates that were positive for NRPS and PKS genes according to PCR were cultured in 3% NaCl sterile nutrient broth with a final volume of 400 mL. Cultures were incubated at 25 • C with constant shaking (250 rpm) for 3 days, centrifuged at 3220× g for 15 min, and the pellets were finally collected for chemical analysis as described above [21,59]. DNA Extraction-Operative Method Bacterial colonies were suspended in 500 µL sterile distilled water, heated to 99 • C for 10 min, and centrifuged at 13,200 rpm for 1 min [60]. The supernatant was either tested by PCR immediately or stored at −20 • C. Possible V. 
alginolyticus colonies were submitted to PCR analysis for the species-specific gyrB gene [61] while those of V. parahaemolyticus were analyzed for the species-specific toxR gene [62]. PCR Analysis All the PCR amplifications were accomplished with a Mastercycler pro Thermal Cycler (Eppendorf). • gyrB and toxR species-specific genes Suspected V. alginolyticus and V. parahaemolyticus colonies were submitted to PCR analysis for detection of the species-specific gyrB [61] and toxR [62] genes, respectively. PCR amplification protocols are described in Table S4. The primers Alg F1 and AlgR1 (Invitrogen-Themo Fisher) were employed for the amplification of the gyrB gene fragment (568 bp) [61], while ToxR-F and ToxR-R primers were used for the detection of the toxR gene fragment (368 bp) [62]. V. alginolyticus ATCC 33787 and V. parahaemolyticus ATCC 17802 strains (American Type Culture Collection, Manassas, VA, USA) were used as positive controls of amplification (CTRL + ) for gyrB and toxR, respectively, in all analytical batches. Ultrapure distilled nuclease-free water was used as a negative amplification control (CTRL − ). The bacterial isolates were identified as V. alginolyticus or V. parahaemolyticus when PCR amplification generated products of the expected size by comparison to a 100 bp DNA ladder molecular weight marker and the positive control strain, after visualization by electrophoresis in 1.5% agarose gel under UV light. • NRPS and PKS biosynthesis genes The bacterial isolates identified as V. alginolyticus or V. parahaemolyticus were subjected to PCR analysis for the presence of PKS and NRPS genes. PCR amplification protocols are described in Table S4. The primers A2gamF and A3gamR were employed for the amplification of the NRPS gene fragment (300bp) [63]. DKF and DKR degenerate primers were used for the amplification of PKS gene fragment (300 bp) [28,64]. V. parahaemolyticus ATCC 17802 strain was used as a positive control of amplification (CTRL + ) for both target genes in all analytical batches. Ultrapure distilled nuclease-free water was used as a negative amplification control (CTRL − ). The V. alginolyticus and V. parahaemolyticus isolates were considered to be NRPS and/or PKS positive when PCR amplification generated products of the expected size by comparison to the molecular weight marker and the positive control strain, after visualization by electrophoresis in 1.5% agarose gel under UV light. Conclusions TTX in mussels from the Marche region coasts, Central Adriatic Sea, seems to not yet be a threat, with very few samples showing levels above the EFSA recommended threshold. The conducted survey showed that natural beds during the warmer seasons are the sites most prone to contamination. This evidence is in agreement with previous findings across various European countries, including Italy. V. alginolyticus containing NRPS and/or PKS genes seems to play a role in TTX accumulation in mussels, but further investigations are still needed. The present paper adds information which may help to better understand causative factors, uptake/detoxification rates, and TTX tissue distribution in mussels. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/md19060304/s1, Table S1: TTX and Vibrio spp. 
contamination in mussel samples from coastal area of the Marche region in 2018-2019, Table S2: TTX distribution in digestive gland (DG) and remaining flesh (RF) of contaminated mussels., Table S3: HILIC-MS/MS method for TTX analysis: chromatographic conditions, MS parameters, and MRM transitions, Table S4: Reagents and protocols for Vibrio spp. PCR analysis.
2021-06-03T06:17:21.250Z
2021-05-25T00:00:00.000
{ "year": 2021, "sha1": "1b28172ad887e9e2e21e71a177b3715ac73e012b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-3397/19/6/304/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8677b2b3b7383a7f0317f99b2996caac7ee22d7c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
238724447
pes2o/s2orc
v3-fos-license
EU Electricity Policy (Im)balance: A Quantitative Analysis of Policy Priorities Since 1986 The European Union has produced hundreds of laws in the field of electricity policy in the last three decades, on issues ranging from nuclear disposal to renewable energy generation support. Is the EU electricity policy of the last 30 years balanced, according to the classical energy trilemma framework? An all-inclusive, quantitative, multi-decade examination of the EU energy policy is still lacking. Besides the traditional policy perspectives, policy density and intensity, this paper proposes a novel method to measure policy outcomes: policy importance. The results show that EU energy legislation is indeed imbalanced. Environmental concerns rank first among EU electricity policy priorities; however, since 2003, the creation of an internal market has started to challenge environment as the top priority. Furthermore, internal market policies tend to have a higher trend of adoption than environment. Security of supply is at the bottom of EU policymakers’ attention. The EU energy policy is becoming more intricate, but not more revolutionary. Meaningful policy changes occur at a stagnating yearly rate, despite the increasing power of the EU institutions. INTRODUCTION In any given work day, the Official Journal of the European Union publishes at least one piece of legislation related to energy. Only enumerating the title of binding rules covers more than 30 pages in the nuclear field alone. Using the World Energy Council (World Energy Council, 2020) framework of a classical energy trilemma between the competing energy priorities of affordability, security of supply and environmental sustainability, this article aims to shed a light over the existence or not of such balance in the European energy policy. There are several attempts to analyse this equilibrium between policy priorities, but comprehensive, decades-long, quantitative studies are missing. In a strategy paper for the French government, the offset between electricity prices and environment measures is studied, arguing that the electricity sector is in crisis, aggravated by an electricity generation oversupply (Auverlo et al., 2014). A long-term analysis of the legislative output in the EU energy sector, probing for policy patterns, concludes that neither incremental progress nor punctuated equilibrium satisfactorily explains the patterns of EU policy-making, stopping short of giving a verdict on policy balance (Benson and Russel, 2015). In another article, the balance between climate change and the internal energy market policies is investigated, and the conclusion is that both will fail, unless refocused (Helm, 2014). This paper intends to solve this puzzle of assessing the balance between European energy priorities in two steps. The first step is quantifying all legally-binding legislation (a policy density perspective), then all policy instruments such as targets and objectives (a policy intensity perspective) and, in a novel approach, valuating those targets and objectives according to a self-developed taxonomy (a policy importance perspective). The quantification is done at two levels: pillars (energy priorities defined according to the classical energy trilemma) and categories (a more refined classification of priorities). The second step is assessing the balance of energy priorities through all three perspectives (density, intensity and importance), but also recognizing patterns in EU policymaking and identifying gaps in EU policy. 
The empirical database created, including about 8,000 data points, allows many more applications, this article focusing on assessing the EU energy policy (im)balance and on recognizing policy patterns. The question addressed by this article is quantitatively determining if the European electricity policy is in the balance suggested by the classical energy trilemma framework. Assessing the policy (im) balance is useful, as it allows to identify policy gaps and to explain the roots of tensions with major stakeholders, such as members states. This paper also aims to give quantitative arguments in the centralization versus liberalisation debate, noting the inclusion of "internal energy market" as the fourth energy priority. The article is divided into seven parts: an introduction and a background, followed by a presentation of the analytical framework employed, including the methodology. The empirical results are separated into the three developed policy perspectives: policy density, policy intensity and policy importance, each displaying their own findings. Finally, the discussion and the conclusions respond to the questions addressed by the study: proposing a ranking of EU ambitions, assessing their balance, or lack thereof, and discussing the evolution of those priorities. A STOCK-TAKING EXERCISE ON THE CURRENT DEBATE ON ENERGY PRIORITIES IN EUROPE While energy and politics are generally intertwined at global level, in the EU case liberal market thinking was for decades the main guide (Talus, 2017). Liberalisation and EU energy market integration came hand in hand, in consecutive energy reforms (KU Leuven Energy Institute, 2015). Hence, different strands of literature are trying to reconcile major policy priorities, such as security of supply, environment or affordability, with the EU energy market liberalisation, in multiple, fragmented debates. However, a comprehensive analysis of how the EU priorities have evolved over time is missing. The security of supply -liberalisation debate is impeded by the vague notion of security of supply (Ang et al., 2015;Chester, 2010). Nevertheless, some authors note that EU energy security was often used as justification for further market integration (Huhta, 2020;Judge and Maltby, 2017). A lively debate resulted from the introduction of capacity mechanisms (Eurelectric, 2016) and their compatibility with the internal market (Brunekreeft and Meyer, 2019;Hawker et al., 2017;Özdemir et al., 2020). The affordability -liberalisation debate became more prevalent since the establishment of the Energy Poverty Taskforce and the European Energy Poverty Observatory in 2016. The debate suffered as well from unclear definitions of conceps (Deller, 2018;Thomson et al., 2016) and an early study found that liberalisation did not equate to affordability, at least for the most vulnerable consumers (Poggi and Florio, 2010). The environment -liberalisation debate is well-known and goes at the heart of the liberalisation argument. The main critique is that too high environmental externalities would occur in the energy generation and distribution chain (Hammond and Jones, 2011). In the EU energy sector, it is argued that not enough climate policy integration is employed to reach long-term climate policy objectives (Dupont and Oberthür, 2012) There is a decades-long discussion over the merits of liberalisation in the energy sector. 
On one hand, some authors note that lack of competition due to inevitable natural monopolies in generation and distribution and the widespread lack of information for actors on this particular market, would unavoidably create energy market failures (Aalto, 2014;Foley and Lönnroth, 1981;Goldthau, 2012;Greening and Jefferson, 2013). The 2001 California shortage of electricity supply is portrayed as another example of market failures (Wen and David, 2001). On the other hand, European energy liberalisation is praised, mainly owing to providing cost reductions and price finding. Looking at the changes to electricity markets due to liberalization, Joskow concludes that liberalization brought significant costs reduction without compromising quality of service. The primary problem is if the regulators can resist to group pressures (Joskow, 2008). Pollitt discusses the energy policy liberalization since the 1980s, looking at several aspects of the market, including electricity, climate policies, coal subsidies and their effects, concluding that it had positive, but limited effects (Pollitt, 2012). Using an innovative measure of electricity price, the EU annual average real price, an analysis focusing on the legal developments for power utilities finds that the early effects of liberalization are reduced electricity prices (Jamasb and Pollitt, 2005). Similarly, investigating the relationship between investment and regulatory regimes, from the perspective of electric and gas utilities in several EU member states, over 1997-2007 decade, a study finds that private ownership provides higher investment rates (Cambini and Rondi, 2010). For example, in the UK, the energy market privatization in the 1990s provided increased net efficiency gains, doubled labour productivity, increased government revenues (sales and taxes) and offered better prices for consumers (Domah and Pollitt, 2001). Finally, the liberalized energy market policy is shown to achieve some success particularly for the new EU member states, on costs reduction and competition (McGowan, 2008). In terms of energy policies mapping, Kanellakis, Martinopoulos, and Zachariadis record diligently the existing regulatory landscape, creating categories for various electricity market parameters (Kanellakis et al., 2013). Their article is a benchmark against which this article's own empirical analysis may be compared. However, while their stock-taking exercise is extensive, the research is not aiming specifically at quantifiable targets, as this article intends. Another comprehensive analysis of EU electricity policies is done by Ignacio Pérez-Arrriaga (Pérez-Arriaga, 2014), the editor of the Regulation of the Power Sector book. The book methodically describes the evolution of the electricity market design, explaining the motivation for each design adjustment. However, the book is intended as a manual and it does not provide a legislative analysis, but rather a historical outlook and a regulator's perspective. The political science scholarly literature discussing the merits of liberalisation is developed and rich, but fragmented. While there is ample research on various approaches to normative policy design and policy priorities, there is relatively little on their mapping, evolution, balance or patterns, presented in a detailed, comprehensive and quantifiable analysis. 
This article aims to fill those gaps by offering an all-inclusive and quantifiable measurement of the degree of attention given by European policymakers to the competing energy policy priorities, using a novel methodological analysis (policy importance). Such measurement is then applied to find policy imbalances and explore what would those imbalances mean for the current policy debate and to the crisis that some authors mention (Helm, 2014), if current policies continue without balancing. DEVELOPING AN ANALYTICAL FRAMEWORK FOR MEASURING THE EU POLICY OUTCOMES Filling the existing gap is achieved by analysing the objectives and targets of the EU electricity policy since 1986, when the single european act was adopted (Council of the European Communities, 1986). This document expanded significantly the powers of European institutions and gave a timetable for the creation of the internal market, one of the energy pillars analyzed. The objectives and targets are then classified at two levels: pillars, according to the classical energy trilemma, and categories, a more refined classification of priorities. The first level of the analysis, the classical energy trilemma, was proposed by the World Energy Council and means: energy security, e.g. no power cuts; environmental impact mitigation, e.g. decarbonisation and air quality; and social equity or affordability, i.e. accessibility and affordability of electricity across the population (World Energy Council, 2020). The advantage of this classification is that it acknowledges that achieving the three goals simultaneously is often a delicate balancing act, sometimes a zero-sum game. Those priorities were encoded as pillars: affordability, security of supply and environment; to which internal market was added due to the significant European importance. The balancing act is given by the fact that pursuing an energy pillar often, but not always, means trade-offs with the other pillars (World Energy Council and OLIVER WYMAN, 2015); for example, environmental sustainability may be at odds with affordability, or affordability with security of energy supply. The second level of the analysis is a detailed cataloguing of priorities, based on Kanellakis, Martinopoulos and Zachariadis's proposal (Kanellakis et al., 2013). While the energy trilemma implies competing priorities, this cataloguing recommends a cooperative arrangement, where different energy priorities are defined by their field, not purpose. Hence, a new catalogue was created, with eight categories: renewable energy; energy efficiency and savings; internal energy market; security of energy supply; environmental protection; nuclear energy; nuclear research; and research and development. Besides categorizing the policy priorities, different perspectives for policy analysis required examination. One theoretical strand looks at policy outcome, searching if the policy adopted solved the problem that was supposed to solve (Bondarouk and Mastenbroek, 2018;Tosun, 2012). This analytical framework comes in contrast with policy output, which looks at policies taken in response to a societal problem at the point of adoption. The critique of a policy outcome approach is that the policy effect is hard to isolate; for example, there could be implementation or adoption problems in member states, as some authors suggest (Knill and Duncan, 2007). 
In the vein of the policy output perspective, two methods are proposed: policy density, which is the number of policies put in place to reach a policy goal, and policy intensity, which focuses on the content of the policy instruments (Bauer and Knill, 2014; Bondarouk and Mastenbroek, 2018; Knill et al., 2012; Schaffrin et al., 2015). For our comprehensive research purposes, the policy density and policy intensity analyses fit best, as they unearth a large volume of pieces of legislation and targets, which allows measurement of the most impactful legislation and years, evolution in time, trends and ranking of policy priorities. However, the policy density and policy intensity perspectives have the drawback that major, binding targets are on the same scale as, for example, an obligation to send a report. The toolbox provided by policy density and policy intensity analysis does not differentiate between those targets. To eliminate this limitation, a novel, third perspective, policy importance, was created by grading each target and objective according to our own criteria. This way, the indiscriminate measurement of targets is eliminated and groundbreaking targets are differentiated from lesser ones, allowing a finer view of policy targets. To test the precision of our three perspectives, each chronological display of pillars and categories was juxtaposed, for each perspective, against the adoption year of the energy packages. This trial measures how well the new perspective fares compared with the traditional policy density and policy intensity. Energy packages are legislative cycles starting when the European institutions adopt major reforms. As an energy package has a cycle of about 6-7 years and new major proposals from the Commission for the energy market design were adopted in November 2016, that year was considered, for testing, as a new energy package.

Methodology

In order to measure the policy density, policy intensity and policy importance of the European Union's electricity policy, a database was created, quantifying each individual target and objective of EU binding legislation in the electricity sector. The electricity sector refers to electricity-related pieces of legislation only, eliminating for example the legislation referring to vehicle or maritime fuel. Binding refers to the EU documents with legal effects: Regulations, Directives and Decisions. Regulations are binding legal acts, with detailed provisions. Directives set objectives which member states have to achieve by devising their own laws. Decisions are also binding legal acts, with a deadline to comply with, but applicable only to those to whom they are addressed (European Union, 2017). Delegated acts or regulatory technical standards are not included: while they are binding, they do not provide targets or objectives and would only clutter the study. The empirical data collection starts from 1986, taken as a starting point for European markets by much of the literature (Black, 2013; KU Leuven Energy Institute, 2015), and continues until 2018. 
The identified target/objective was coded along 11 dimensions: (1) the binding obligations/targets in a short résumé; (2) quantifiable/not quantifiable; (3) the pillar; (4) the category; (5) the exact provisions, quoted from the legislation; (6) the importance, added in order to differentiate the importance of regulations, given a grade from 1 to 4, where 4 is the highest; (7) the full title of the legislation; (8) the link to that legislation; (9) the stage of the legislation, meaning the energy package including that legislation; (10) the year when the legislation was published; (11) whether it is still in force or by which legislation it was repealed. The empirical research led to about 300 pieces of binding EU legislation in the electricity sector, bringing together around 700 obligations/targets in about 30 years of data, and over 8000 tags. Our cataloguing system assigned an importance number, from one to four, to each piece of legislation, target and objective, according to a predefined rulebook, as below:

• 1 = small: project with budget under 20 million EUR/year; minor development (such as updating the list of projects of common interest or establishing an experts' group); foreign affairs (such as treaties on collaboration with other countries);
• 2 = increasing: project with budget under 50 million EUR/year; member states to inform Commission; guidelines (Commission empowered to draft delegated acts); Commission reporting (to the Parliament and to the Council); medium development (such as obligation of member states to form independent gas/electricity authorities);
• 3 = significant: project with budget under 100 million EUR/year; targets given/diluted (legislation setting up, increasing or reducing quantifiable targets for member states to achieve, for example GHGs reduction); expansion of (Commission's) duties; new EU programme established; important development (such as member states obliged to set up GHGs national inventory systems or establishing a European programme on environment);
• 4 = large: project with budget over 100 million EUR/year; major expansion of (Commission's) duties; major development (such as unbundling of electricity and gas companies or common rules for the electricity market); new EU body (or scheme) established.

The EU energy policy balance is investigated in gradual steps, through the pillars of the classical energy trilemma (affordability, environment, security of supply, internal market) and through separate categories (renewable energy; energy efficiency and savings; internal energy market; security of energy supply; environmental protection; nuclear energy; nuclear research; and research and development), through the lenses of three perspectives (policy density, policy intensity and policy importance). This matrix with six cells (pillars/categories on one axis; policy density, intensity and importance on the other axis) is investigated for each result in the sections below.

POLICY DENSITY - CONSTANT ATTENTION TO THE "ENVIRONMENT" POLICY PRIORITY

There are 291 binding pieces of legislation in the electricity domain from 1986 to 2018 published in the Official Journal of the European Union. Displayed chronologically, they show ebbs and flows, but clearly exhibit an increasing trend. The 2001-2010 decade seemed particularly fruitful in terms of adopted legislation. In general, more pieces of legislation are adopted each year by EU policymakers. However, policy density seems to miss the appearance of energy packages. 
Those two observations condense the advantages and drawbacks of the density analysis: showing trends, but missing qualitative developments. In terms of number of pieces of legislation, the investigation shows a strong dominance of the "environmental" pillar. Almost half of the EU electricity legislation has environment as its main objective (e.g. Council Regulation 1210/90 on the establishment of the European Environment Agency; Directive 2003/87/EC establishing a scheme for greenhouse gas emission allowance trading). The "affordability" and "internal market" pillars follow with about equal shares, roughly a quarter each (e.g. 94/799/Euratom: Council Decision adopting a specific programme of research and training in the field of controlled thermonuclear fusion; Directive 96/92/EC concerning common rules for the internal market in electricity). Finally, only a few pieces of legislation are dedicated to "security of supply" (e.g. 97/7/EC: Council Decision repealing Directive 75/339/EEC obliging the Member States to maintain minimum stocks of fossil fuel at thermal power stations; Regulation 1407/2002 on State aid to the coal industry). If each policy priority is followed on its individual progression (Figure 1), the results show no obviously dominant policy priority. With the exception of "security of supply," all other policy priorities have years when they are on top. In 2001 and 2013, "environment" reaches unprecedented highs, which hints at important pieces of legislation published in those years (e.g. Directive 2001/81/EC on national emission ceilings for certain atmospheric pollutants; Regulation (EU) No 525/2013 on a mechanism for monitoring and reporting greenhouse gas emissions). However, regarding trends, "environment" is the only one showing an increasing tendency, while "internal market" and "affordability" are rather flat. Notably, there is a distinct declining trend for "security of supply."

Categories

Looking at the data from the categories' perspective, there is a constant presence of the "environmental protection" and "nuclear research" categories in almost all years. "Nuclear energy" receives constant attention from 2002 onwards, while "energy efficiency and savings" picks up pace from 2004. "Research and development" flares up only every couple of years, the same as "renewable energy." If categories are plotted in a chronological graph (Figure 2), a large spike is observed in 2001 (e.g. Directive 2001/80/EC on the limitation of emissions of certain pollutants into the air from large combustion plants; Directive 2001/81/EC on national emission ceilings), followed by a clear dominance of "environmental protection" legislation after 2013 (largely due to the development of the EU emissions trading system legislation). In terms of percentage of total adopted legislation, out of the eight categories, "environmental protection" makes up a third, followed by "nuclear research" with about a quarter of all legislation. The two categories together represent more than half of all European electricity legislation. "Nuclear research" and "nuclear energy" add up to 36%, meaning that more than a third of the legislation is dedicated to the nuclear sector.

Policy Density Perspective - Conclusions

Putting all the observations above together, firstly, more legislation is adopted on an annual basis. Nevertheless, rarely are more than 4-5 pieces of legislation of the same classification adopted in a year. Secondly, we find a clear ranking of energy priorities, identified by both our classification methods. 
Topping the rank of EU policymakers' attention is "environmental protection," followed by "internal energy market," with "security of supply" receiving the least attention. "Nuclear energy" and "nuclear research" together account for more than a third of all pieces of legislation, dwarfing "renewable energy" as the other named energy source. In terms of consistency, with the noticeable exception of 2001, when environmental legislation skyrockets (due to several pieces of legislation tackling air pollution, such as Directive 2001/81/EC on national emission ceilings for certain atmospheric pollutants), there is a remarkable steadiness of legislation adopted by the European institutions, with rarely more than 4-5 pieces of legislation of the same kind in a year. On individual policy priorities, "environment" has a dedicated piece of legislation almost every year, for more than three decades. "Internal energy market" also receives consistent attention from policymakers, particularly after 2003. Other policy priorities come as a group, with 2-3 years of intense effort on a particular policy, such as "nuclear research," followed by a break. This leads to the conclusion that it is not the number of pieces of legislation that makes an energy package, but the importance of the provisions in it. However, while policy density offers some important glimpses into the EU policymakers' attention towards various energy priorities, classification of an entire piece of legislation as one policy priority hides provisions with a different intent. Policy density is a rather coarse way to analyse policy priorities. Consequently, a more in-depth examination is needed for definite results.

POLICY INTENSITY - GATHERING PACE FOR "INTERNAL MARKET" POLICY PRIORITY

Building on the previous data, the investigation turns towards policy intensity analysis, which looks at the content of legislation. This perspective is more complex and more challenging, as each target and objective had to be labelled. Whereas the previous section's analysis had 291 pieces of legislation to quantify and display, this section classifies 685 targets and objectives. Taking a step back and looking at trends for all policy targets and objectives, there is an unmistakable increasing trend. There are several cyclical yearly spikes, an indication of legislation adoption in waves. Additionally, the precision of the policy method is verified by its power to identify energy packages. This test is performed by juxtaposing the adoption year of an energy package over the chronological evolution of the policy targets and objectives. While some energy packages are correctly identified, there is not enough precision to make correct measurements. Nevertheless, the method reveals some useful insights. From a pillars' perspective, "environment" and "internal market" dominate the policy priorities, but while "environment" is adopted in almost every year, "internal market" is significantly more present since 2003. "Affordability" is also a constant presence, but less so than "environment," and it almost disappears after 2014. The "security of supply" pillar has an irregular presence, with no clear pattern. In terms of the percentage of targets and objectives, "environment" and "internal market" make up more than two thirds of all EU electricity-binding legislation. "Affordability" has half the numbers of "environment," while "security of supply" is in last place, with only 6% of all legislation. 
If each pillar's progression is examined (Figure 3), "environment" and "internal market" pick up policymakers' interest significantly after 2001 and, excepting a few years, alternate at the top of energy priorities. "Security of supply" is clearly at the bottom of policymakers' attention, with the least number of targets and objectives. Looking at trends, both "environment" and "internal market" have increasing trends, with the latter actually overtaking "environment" in recent years. The "affordability" pillar is slowly increasing in targets and objectives (e.g. Decision No 647/2000/EC for the promotion of energy efficiency - SAVE II, offering larger funding than SAVE I), while "security of supply" is rather stable, with a very low base (e.g. Council Regulation 1407/2002 on State aid to the coal industry has provisions under which state aid to the coal industry may be considered compatible with the proper functioning of the common market, under certain conditions; Regulation 994/2010 states that gas transmission system operators need to find bi-directional cross-border solutions).

Categories

The categories classification of energy targets and objectives shows constant attention to "environment," with targets and objectives adopted almost every year. "Internal energy market" progresses in ebbs and flows, but receives significant attention after 2003. Other categories have a cyclical development, with 2-3 years of intense effort, followed by a break of several years. Looking at the percentage of EU electricity-binding legislation, there is a distinct ranking of energy priorities. The top spot is taken by "internal market" with almost a third of all targets and objectives. This is closely followed by "environmental protection" with about a quarter, while third place goes to "security of supply," with half the targets of "environment." However, if the two nuclear categories, "nuclear energy" and "nuclear research," are taken together, they would jointly take third place. At the bottom of policymakers' attention are "renewable energy" and "research and development." Analysing the chronological evolution of categories, "internal market" and "environment" rank at the top of policymakers' attention. "Environment" seems to receive more consideration since 2015. "Security of supply" clearly shows a cyclicity in energy policy attention, with many targets adopted in 1996, 2003, 2010, 2013 and 2017. Trends are difficult to analyse as the data is too sparse, making it impossible to determine what direction policy priorities are taking from a categories' perspective. Finally, by juxtaposing the adoption year of an energy package over the chronological display of the categories' evolution, the policy intensity analytical method can be tested to see whether it is a precise enough toolbox to identify what the literature recognizes as energy packages. The findings show that while some adoption years of energy packages can be seen, there is no consistent identification. Nevertheless, it is worth noting that the more complex the analysis, the closer the match in identifying energy packages. For example, the most complex toolbox so far, policy intensity with categories, correctly notices a bump in "internal market" targets and objectives in four out of six energy package adoption years.

Policy Intensity Perspective - Conclusions

In conclusion, the empirical results from the policy intensity perspective are ambiguous over the ranking of policy priorities. 
While from the perspective of the classical energy trilemma "environment" tops the ranking of priorities, from the perspective of categories the "internal market" is the dominant priority. It could be argued that the "renewable energy" category belongs to the environmental field, which would change the standing of priorities; however, "renewable energy" could also support energy independence. Therefore, "internal market" is crowned as the most pursued policy of this analysis perspective. The results also show an increasing trend in the number of targets and objectives added each year. On average, a piece of legislation from 2018 has more targets and objectives than one from 1990, for example. Looking at individual policy priorities from the energy trilemma perspective, it is worth noting that there is a trend for the "internal market" to overtake "environment" as the main energy policy priority in the European Union. Particularly from 2003, there is a concerted effort from policymakers towards building the internal market. Furthermore, policy priorities appear in cycles, with 2-3 years of intense effort, followed by a break of several years. This is valid for most policy priorities, except "environment," which receives persistent attention: EU policymakers adopt new or updated targets and objectives in the field of environment every year. Finally, policy intensity analysis is insufficiently precise to detect energy packages. However, a pattern is found, indicating that the more precise the classification and analysis adopted, the clearer energy packages become to detect. The analysis points to the fact that further precision, with more accurate instruments, would be able to offer better insight in determining the ranking of EU energy policies. Therefore, the next section follows up with the policy importance analytical framework.

POLICY IMPORTANCE - "ENVIRONMENT" TOPS THE POLICY RANKING, BUT "INTERNAL MARKET" CLOSELY FOLLOWS

Finally, a third layer of analysis is added, an original policy perspective: policy importance. While various pieces of legislation have targets and objectives, not all are equal in importance. Some targets are impactful, such as setting new pollutant limits, creating new European agencies or splitting monopolies, while others represent only, for example, the obligation of the European Commission to report the implementation of a policy to the European Parliament and to the European Council. Employing only the two perspectives displayed above, results would be skewed in favour of volume rather than impact. Therefore, a new taxonomy of EU energy policy targets and objectives was created, according to a self-developed system, detailed in the methodology section. This third viewpoint builds on the policy intensity perspective, adding a grade according to importance to each target and objective of every piece of legislation within our defined scope. Regarding tendencies (Figure 4), there is an increasing trend in the importance of legislation on an annual basis, but a flat trend for the importance of objectives and targets. Importance of legislation means the sum of the importance points of all objectives and targets in a year, divided by the number of pieces of legislation in that year. Importance of objectives means the average importance of objectives and targets in a year. This outcome shows that EU energy policymaking is producing pioneering provisions at a very stable rate, an almost flat curve. 
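As a minimal numerical illustration (with hypothetical figures, not drawn from the database): suppose that in a given year three pieces of legislation are adopted, together containing twelve targets and objectives whose grades sum to 30 points. The importance of legislation for that year is 30 / 3 = 10 points per piece of legislation, while the importance of objectives is 30 / 12 = 2.5 points per target or objective. The first measure can rise simply because each act packs in more provisions; only the second shows whether the average provision is becoming more ambitious.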
While each piece of legislation is becoming more intricate, with more objectives and targets per piece of legislation, this is not reflected in the average importance of those objectives and targets. Most of them are of low importance, suggesting that the legislation is unnecessarily complicated. The outcome of the empirical research from an energy trilemma perspective largely follows the previous analyses: skyrocketing policy ambition in 2009; bumps in 1996, 2003 and 2013; and ebbs and flows in energy policy adoption, but with an increasing general trend. These results condense what may be the most accurate answer to the question of the degree of ambition of the energy policy of the European Union. Examining the points percentage for each pillar, "environment" ranks first, followed closely by "internal market," then "affordability" and "security of supply." This ranking is consistent with earlier findings. In a chronological display of pillars (Figure 5), "environment" and "internal market" alternate, both topping policymakers' attention in most years. The "affordability" and then "security of supply" policy priorities follow far behind. Looking at trends, "environment" and "internal market" have almost identical rates of increase, a clear competition between the two for the top spot of EU energy policy attention. "Affordability" is ranked third, with a moderate rate of increase. Finally, the "security of supply" trend seems flat, with no increase. Furthermore, this is visibly an increasingly accurate identification of the start date of energy packages, as the figure shows, even without having the points stacked. Finally, making a comparison between pillars from the perspective of the highest graded targets and objectives (policy objectives and targets graded three and four), the "environment" policy priority has the most ground-breaking, major targets and objectives. However, for second place, "affordability" is not far from "internal market," showing that while "internal market" has numerous targets and objectives, they are not as important as their number would imply. "Affordability" punches above the weight suggested by the number of targets and objectives tagged as such.

Categories

If the categories' classification is employed, there is constant, yearly attention to "environmental protection." "Internal energy market," particularly after 2003, receives persistent attention as well, with some years even booming, such as 2009 and 2013. Other categories are less popular and their presence is not on a yearly basis, but rather in cycles of 2-3 years followed by an interruption of a couple of years, such as "security of energy supply" or "nuclear research." Looking at percentages, "internal energy market" has about a third of all points, followed by "environmental protection" and, third, "security of supply." Additionally, as in the pillars' section, a comparison is made between categories from the perspective of the highest graded targets and objectives. "Environmental protection" tops the rank by far, followed by "internal energy market," with "security of supply" in third place. On a chronological basis, the prominent categories are "environmental protection" and "internal energy market," alternating at the top of the energy policy ranking. After 2009, "environmental protection" seems to lead the ranking, with policymakers giving the most attention to this policy priority. As a notable exception, the "security of supply" category leads in 2010 and 2017. 
To test the precision of the policy importance perspective, the adoption years of energy packages are juxtaposed with the chronological display of categories produced by the policy importance analysis framework. The results show an accurate tracking of energy package adoption, which proves the value of policy importance as a toolbox to identify ground-breaking energy developments in the EU energy policy field.

Policy Importance Perspective - Conclusions

In conclusion, the empirical research displayed an increasing annual trend in the importance of legislation, but a flat trend for the importance of objectives and targets. Many of the new objectives and targets have low importance and could very well be eliminated without affecting the policy steering. For example, Regulation 714/2009 on conditions for access to the network for cross-border exchanges in electricity has no fewer than 33 targets and objectives. Regulation 715/2009 on conditions for access to the natural gas transmission networks has 24 targets and objectives. The policy importance analysis shows "environment" and "internal market" as the main energy policy priorities of EU policymakers, followed, far behind, by "affordability" and "security of supply." Both of these leading policy priorities are tied in trends and receive continuous, annual attention from policymakers through newly adopted targets and objectives. While "internal market" tends to dominate in volume, meaning number of points, "environment" received higher attention in recent years, after 2013. Therefore, delving into the trailblazing targets and objectives, those graded highest in our methodology, "environment" appears as the most pursued policy. The most groundbreaking provisions are in the field of environment (for example, creating an auctioning of allowances system for the reduction of GHGs; introducing guarantees of origin for renewable energy supply; the decision to sign the Paris Agreement), adding the most changes to the EU energy landscape. A clear comparison between "affordability" and "security of supply" cannot be made, as they do not have an equivalent in both pillars and categories. From a pillars' perspective, "affordability" dominates and "security of supply" takes last place. Finally, the policy importance toolbox proved very accurate in detecting energy packages, all adoption years falling in areas of high target and objective importance. From both the pillars' and the categories' standpoint, the highs correspond with an increase in "internal market" energy policy importance, meaning that energy packages are, in effect, major expansions of the "internal market" ambitions.

DISCUSSION

Market-liberal thinking dominated EU policymaking for decades (Talus, 2017); nevertheless, many scholars argue that the environmental energy ambitions of the European Union are incompatible with this school of thought (Aalto, 2014; Hammond and Jones, 2011; Helm, 2014). We find that EU policymakers are in a situation with little room for maneuver, with environment already at the top of the agenda. The outcome of the research showed that, from a policy importance perspective, environment and the internal energy market are the main policy priorities for EU policymakers, supporting Helm's (Helm, 2014) claims that the current EU energy design is based on the, presumably incompatible, internal energy market and the climate change package. Helm considers that this design is not tenable, and that the internal market must prevail. 
The findings seen so far (until 2018) show that internal market policies tend to have a higher rate of adoption than environment policies. In other words, EU policymakers were choosing the internal market over stronger environment measures, at least until 2018, heeding Helm's advice. This finding responds to several authors wondering about the direction of EU policies (Dupont and Oberthür, 2012; Szulecki and Westphal, 2014). This research did not find evidence supporting market failures due to the intrinsic characteristics of the energy sector, as theorized by some authors (Foley and Lönnroth, 1981; Goldthau, 2012; Greening and Jefferson, 2013). The decades-long accelerating development of the internal market did not create additional market problems such as market failures or an increasing market share of the largest generator in the electricity market (Eurostat, 2021). However, this research does offer support for authors arguing that the energy sector has high externalities and that the internal market might be unable to solve them (Hammond and Jones, 2011). The argument for this conclusion is that, despite numerous and major targets and objectives in the internal market domain, the environment priority needed hefty attention from policymakers to respond to the problems in that domain. The results of this research also offer substantial support for the supporters of liberalisation (Cambini and Rondi, 2010; Domah and Pollitt, 2001; Joskow, 2008; McGowan, 2008; Pollitt, 2012). Despite rather little attention towards affordability measures, the development of the internal market allowed major funding programs (e.g. the support for renewable energy sources and nuclear research) and higher prices for pollution (the EU Emissions Trading System, the National Emissions Ceiling Directive, the Industrial Emissions Directive), without an explosion in electricity prices.

CONCLUSIONS

The research question addressed is whether there is an imbalance in EU electricity policies, what its effects are and how it reflects on the general discussion on liberalisation. The results of this investigation suggest that an imbalance indeed exists. The ranking of policy priorities, displaying a dominance of "environment" and "internal market" and only a few "security of supply" policies, shows an imbalance of the energy trilemma for the European Union. We speculate that the solution for this conundrum would be more attention to EU security of supply, thereby defusing potential tensions with member states. European treaties constantly reinforce the European Commission's mandate in the environment area, but ringfence the energy independence of Member States. To be clear, this does not mean that the European institutions were banned from proposing European "security of supply" legislation. This grey area could be a reason for the imbalance in the classical energy trilemma for the European Union. Going further into the investigation, the results show that EU energy policymaking is producing pioneering legislation (importance per target/objective) at a very stable rate, an almost flat line over the three decades studied. The average importance per piece of legislation increases over time, but each piece of legislation also has more objectives and targets. This means that pieces of legislation are more complex (with more targets and objectives), but not necessarily more radical (they provide almost the same number of pioneering provisions every year). 
This shows that the European institutions in fact keep to a certain corridor of pioneering provisions: meaningful change comes at a stagnating rate, despite increasing power for the EU institutions. Looking at patterns through the pillars and categories classification, there are energy policies, such as "environment," given constant attention by policymakers, with pieces of legislation or targets/objectives adopted almost every year. However, a change of pattern occurs with "internal market," which appears only occasionally in EU energy legislation until 2003. From then on, the pattern changes and policymakers adopt targets and objectives on this energy priority every year, and in great numbers. For example, the most important EU electricity-relevant binding pieces of legislation, totalling the most importance points per piece of legislation, are Regulation 714/2009 on conditions for access to the network for cross-border exchanges in electricity and Regulation 715/2009 on conditions for access to the natural gas transmission networks. Both are in the "internal market" domain. In the "environment" domain, the most important piece of legislation according to this article's methodology is Directive 88/609/EEC on the limitation of emissions of certain pollutants into the air from large combustion plants. The charts resulting from mapping the energy policy field offer visual cues for identifying energy packages. The precision of the perspectives deployed in this article (policy density, intensity and importance) was thereby tested, and the policy importance perspective proved the most precise in recognising the adoption years of energy packages. Furthermore, correct identification of energy packages means that the policy importance analysis can be used to detect any future legislative package, whether or not it is published or recognized by policymakers as a "package." 2016 was hypothesized as the adoption year for a new energy package, but this assumption was proved wrong. This leaves the question of why there is no energy package from 2009 to 2018. This is a clear change of pattern, as previous packages appeared every 5-6 years. There is a jump in targets and objectives' importance in 2013, which could be interpreted as an unidentified energy package. The imbalance in the energy trilemma is clear, but why is this happening? What drives the adoption of energy policy priorities in different years, with different degrees of importance and different priorities? The scholarly literature offers a plethora of responses, considering numerous factors as critical: from external factors, like the price of raw energy materials (Schröder et al., 2013), technology (Alizadeh et al., 2016; Shilei and Yong, 2009; Zhu et al., 2015) and international relations (Taggart and Szczerbiak, 2013), to internal factors, such as policy implementation and adoption or even cultural factors specific to each member state (Falkner et al., 2007; Falkner and Treib, 2008). The empirical mapping that this article created allows such theories to be quantifiably checked, as there is a sufficient body of data to act as a control group and offer new insights into EU policy ambition and policymaking.
2021-08-31T23:26:39.471Z
2021-08-20T00:00:00.000
{ "year": 2021, "sha1": "d28d7ad9cb27b94239fe217c3e12080dcb6915e0", "oa_license": "CCBY", "oa_url": "https://econjournals.com/index.php/ijeep/article/download/11461/6026", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d28d7ad9cb27b94239fe217c3e12080dcb6915e0", "s2fieldsofstudy": [ "Political Science", "Environmental Science", "Economics" ], "extfieldsofstudy": [ "Economics" ] }
199586323
pes2o/s2orc
v3-fos-license
Analyzing Particle Systems for Machine Learning and Data Visualization with freud

The freud Python library analyzes particle data output from molecular dynamics simulations. The library's design and its variety of high-performance methods make it a powerful tool for many modern applications. In particular, freud can be used as part of the data generation pipeline for machine learning (ML) algorithms for analyzing particle simulations, and it can be easily integrated with various simulation visualization tools for simultaneous visualization and real-time analysis. Here, we present numerous examples both of using freud to analyze nano-scale particle systems by coupling traditional simulational analyses to machine learning libraries and of visualizing per-particle quantities calculated by freud analysis methods. We include code and examples of this visualization, showing that in general the introduction of freud into existing ML and visualization workflows is smooth and unintrusive. We demonstrate that among Python packages used in the computational molecular sciences, freud offers a unique set of analysis methods with efficient computations and seamless coupling into powerful data analysis pipelines.

Introduction

The availability of "off-the-shelf" molecular dynamics engines (e.g. HOOMD-blue [ALT08], [GNA+15], LAMMPS [Pli95], GROMACS [BvdSvD95]) has made simulating complex systems possible across many scientific fields. Simulations of systems ranging from large biomolecules to colloids are now common, allowing researchers to ask new questions about reconfigurable materials [CDA+18] and develop coarse-graining approaches to access increasing timescales [SZR+19]. Various tools have arisen to facilitate the analysis of these simulations, many of which are immediately interoperable with the most popular simulation tools. The freud library is one such analysis package that differentiates itself from others through its focus on colloidal and nano-scale systems. Due to their diversity and adaptability, colloidal materials are a powerful model system for exploring soft matter physics [GS07]. Such materials are also a viable platform for harnessing photonic [CDA+18], plasmonic [TCLC11], and other useful structurally-derived properties. In colloidal systems, features like particle anisotropy play an important role in creating complex crystal structures, some of which have no atomic analogues [DEG12]. Design spaces encompassing wide ranges of particle morphology [DEG12] and interparticle interactions [AADG18] have been shown to yield phase diagrams filled with complex behavior. 
The freud Python package offers a unique feature set that targets the analysis of colloidal systems. The library avoids trajectory management and the analysis of chemically bonded structures, which are the province of most other analysis platforms like MDAnalysis and MDTraj (see also Fig. 1) [MADWB11], [MBH+15]. In particular, freud excels at performing analyses based on characterizing local particle environments, which makes it a powerful tool for tasks such as calculating order parameters to track crystallization or finding prenucleation clusters. Among the unique methods present in freud are the potential of mean force and torque, which allows users to understand the effects of particle anisotropy on entropic self-assembly [vAAS+14], [vAKA+14], [KGG16], [HMA+15], [AAM+17], and various tools for identifying and clustering particles by their local crystal environments [TvAG19]. All such tasks are accelerated by freud's extremely fast neighbor finding routines and are automatically parallelized, making it an ideal tool for researchers performing peta- or exascale simulations of particle systems. The freud library's scalability is exemplified by its use in computing correlation functions on systems of over a million particles, calculations that were used to illuminate the elusive hexatic phase transition in two-dimensional systems of hard polygons [AAM+17]. More details on the use of freud can be found in [RDH+19]. In this paper, we will demonstrate that freud is uniquely well-suited to usage in the context of data pipelines for visualization and machine learning applications.

Data Pipelines

The freud package is especially useful because it can be organically integrated into a data pipeline. Many research tasks in computational molecular sciences can be expressed in terms of data pipelines; in molecular simulations, such a pipeline typically involves:

1) Generating an input file that defines a simulation.
2) Simulating the system of interest, saving its trajectory to a file.
3) Analyzing the resulting data by computing and storing various quantities.
4) Visualizing the trajectory, using colors or styles determined from previous analyses.

Fig. 1: Common Python tools for simulation analysis at varying length scales. The freud library is designed for nanoscale systems, such as colloidal crystals and nanoparticle assemblies. In such systems, interactions are described by coarse-grained models where particles' atomic constituents are often irrelevant and particle anisotropy (non-spherical shape) is common, thus requiring a generalized concept of particle "types" and orientation-sensitive analyses. These features contrast with the assumptions of most analysis tools designed for biomolecular simulations and materials science. 
However, in modern workflows the lines between these stages are typically blurred, particularly with respect to analysis. While direct visualization of simulation trajectories can provide insights into the behavior of a system, integrating higher-order analyses is often necessary to provide real-time interpretable visualizations that allow researchers to identify meaningful features like defects and ordered domains of self-assembled structures. Studies of complex systems are also often aided or accelerated by a real-time coupling of simulations with on-the-fly analysis. This simultaneous usage of simulation and analysis is especially relevant because modern machine learning techniques frequently involve wrapping this pipeline entirely within a higher-level optimization problem, since analysis methods can be used to construct objective functions targeting a specific materials design problem, for instance. In the following, we provide demonstrations of how freud can be integrated with popular tools in the scientific Python ecosystem like TensorFlow, Scikit-learn, SciPy, or Matplotlib. In the context of machine learning algorithms, we will discuss how the analyses in freud can reduce the 6N-dimensional space of particle positions and orientations into a tractable set of features that can be fed into machine learning algorithms. We will further show that freud can be used for visualizations even outside of scripting contexts, enabling a wide range of forward-thinking applications including Jupyter notebook integrations, versatile 3D renderings, and integration with various standard tools for visualizing simulation trajectories. These topics are aimed at computational molecular scientists and data scientists alike, with discussions of real-world usage as well as theoretical motivation and conceptual exploration. The full source code of all examples in this paper can be found online (https://github.com/glotzerlab/freud-examples).

Performance and Integrability

Using freud to compute features for machine learning algorithms and visualization is straightforward because it adheres to a UNIX-like philosophy of providing modular, composable features. This design is evidenced by the library's reliance on NumPy 
arrays [Oli06] for all inputs and outputs, a format that is naturally integrated with most other tools in the scientific Python ecosystem. In general, the analyses in freud are designed around analyses of raw particle trajectories, meaning that the inputs are typically (N, 3) arrays of particle positions and (N, 4) arrays of particle orientations, and analyses that involve many frames over time use accumulate methods that are called once for each frame. This general approach enables freud to be used for a range of input data, including molecular dynamics and Monte Carlo simulations as well as experimental data (e.g. positions extracted via particle tracking) in both 3D and 2D. The direct usage of numerical arrays indicates a different usage pattern than that of tools such as MDAnalysis [MADWB11] and MDTraj [MBH+15], for which trajectory parsing is a core feature. Due to the existence of many such tools which are capable of reading simulation engines' output files, as well as certain formats like gsd that provide their own parsers, freud eschews any form of trajectory management and instead relies on other tools to provide input arrays. If input data is to be read from a file, binary data formats such as gsd or NumPy's npy or npz are strongly preferred for efficient I/O. Though it is possible to use a library like Pandas to load data stored in a comma-separated value (CSV) or other text-based data format, such files are often much slower to read and write for large numerical arrays. Decoupling freud from file parsing and specific trajectory representations allows it to be efficiently integrated into simulations, machine learning applications, and visualization toolkits with no I/O overhead and limited additional code complexity, while the universal usage of NumPy arrays makes such integrations very natural. In keeping with this focus on composable features, freud also abstracts and directly exposes the task of finding particle neighbors, the task most central to all other analyses in freud. Since neighbor finding is a common need, the neighbor finding routines in freud are highly optimized and natively support periodic systems, a crucial feature for any analysis of particle simulations (which often employ periodic boundary conditions). In figure 2, a comparison is shown between the neighbor finding algorithms in freud and SciPy [JOPo01]. For each system size, N particles are uniformly distributed in a 3D periodic cube such that each particle has an average of 12 neighbors within a distance of r_cut = 1.0. Neighbors are found for each particle by searching within the cutoff distance r_cut. The methods compared are scipy.spatial.cKDTree's query_ball_tree, freud.locality.AABBQuery's queryBall, and freud.locality.LinkCell's compute. The benchmarks were performed with 5 replicates on a 3.6 GHz Intel Core i3-8100B processor with 16 GB 2667 MHz DDR4 RAM.

Fig. 2: Comparison of runtime for neighbor finding algorithms in freud and SciPy for varied system sizes. See text for details. 
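The benchmark script itself is not reproduced in this text. A minimal sketch of a comparable setup is given below; it is an assumption-laden illustration rather than the authors' script: it uses the freud 2.x query interface (instead of the 1.x queryBall call named above), SciPy's cKDTree, and placeholder values for the box size, particle count, and cutoff.

import numpy as np
import freud
from scipy.spatial import cKDTree

# Illustrative parameters, not the values used for the published benchmark.
L, N, r_cut = 20.0, 4000, 1.0
rng = np.random.default_rng(42)
points = rng.uniform(0.0, L, size=(N, 3)).astype(np.float32)

# SciPy: periodic neighbor search, limited to orthorhombic (here cubic) boxes.
tree = cKDTree(points, boxsize=L)
scipy_pairs = tree.query_pairs(r=r_cut)  # set of (i, j) pairs with i < j

# freud: periodic triclinic boxes are supported; points are wrapped into the box first.
box = freud.box.Box.cube(L)
wrapped = box.wrap(points)
aq = freud.locality.AABBQuery(box, wrapped)
nlist = aq.query(wrapped, dict(r_max=r_cut, exclude_ii=True)).toNeighborList()
# Each neighbor pair appears twice in nlist (i -> j and j -> i), unlike query_pairs.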
Evidently, freud performs very well on this core task and scales well to larger systems. The parallel C++ backend, implemented with Cython and Intel Threading Building Blocks, makes freud perform quickly even for large systems [BBC+11], [Int18]. Furthermore, freud supports periodicity in arbitrary triclinic volumes, a common feature found in many simulations. This support distinguishes it from other tools like scipy.spatial.cKDTree, which only supports cubic boxes. The fast neighbor finding in freud and the ease of integrating its outputs into other analyses not only make it easy to add fast new analysis methods into freud, they are also central to why freud can be easily integrated into workflows for machine learning and visualization.

Machine Learning

A wide range of problems in soft matter and nano-scale simulations have been addressed using machine learning techniques, such as crystal structure identification [SG18]. In machine learning workflows, freud is used to generate features, which are then used in classification or regression models, clusterings, or dimensionality reduction methods. For example, Harper et al. used freud to compute the cubatic order parameter and generate high-dimensional descriptors of structural motifs, which were visualized with t-SNE dimensionality reduction [HWG19], [vdMH08]. The library has also been used in the optimization and inverse design of pair potentials [AADG18], to compute fitness functions based on the radial distribution function. The open-source pythia library offers a number of descriptor sets useful for crystal structure identification, leveraging freud for fast computations. Included among the descriptors in pythia are quantities based on bond angles and distances, spherical harmonics, and Voronoi diagrams. Computing a set of descriptors tuned for a particular system of interest (e.g. using values of Q_l, the higher-order Steinhardt W_l parameters, or other order parameters provided by freud) is possible with just a few lines of code. Descriptors like these (exemplified in the pythia library) have been used with TensorFlow for supervised and unsupervised learning of crystal structures in complex phase diagrams [SG18], [AAB+15]. Another useful module for machine learning with freud is freud.cluster, which uses a distance-based cutoff to locate clusters of particles while accounting for 2D or 3D periodicity. Locating clusters in this way can identify crystalline grains, helpful for building a training set for machine learning models. 
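As a hedged illustration of such a clustering step (assuming the freud 2.x API; the box, points, and cutoff below are placeholders rather than values from the paper):

import numpy as np
import freud

# Placeholder system: random points in a periodic cubic box, standing in for
# real simulation or particle-tracking coordinates.
box = freud.box.Box.cube(10.0)
points = box.wrap(np.random.default_rng(0).uniform(-5.0, 5.0, size=(500, 3)).astype(np.float32))

cl = freud.cluster.Cluster()
cl.compute((box, points), neighbors=dict(r_max=1.0))  # distance-based cutoff

labels = cl.cluster_idx                   # per-particle cluster IDs, usable as ML labels
largest = max(cl.cluster_keys, key=len)   # particle indices of the largest cluster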
To demonstrate a concrete example, we focus on a common challenge in molecular sciences: identifying crystal structures. Recently, several approaches have been developed that use machine learning for detecting ordered phases [SCKL15], [SG18], [FSM19], [SNR83], [LD08]. The Steinhardt order parameters are often used as a structural fingerprint, and are derived from rotationally invariant combinations of spherical harmonics. In the example below, we create face-centered cubic (fcc), body-centered cubic (bcc), and simple cubic (sc) crystals with added Gaussian noise, and use Steinhardt order parameters with a support vector machine to train a simple crystal structure identifier. Steinhardt order parameters characterize the spherical arrangement of neighbors around a central particle, and combining values of Q_l for a range of l often gives a unique signature for simple crystal structures. This example demonstrates a simple case of how freud can be used to help solve the problem of structural identification, which often requires a sophisticated approach for complex crystals. In figure 3, we show the distribution of Q_6 values for sample structures with 4000 particles. Here, we demonstrate how to compute the Steinhardt Q_6, using neighbors found via a periodic Voronoi diagram. Neighbors with small facets in the Voronoi polytope are filtered out to reduce noise.

Fig. 3: Histogram of the Steinhardt Q_6 order parameter for 4000 particles in simple cubic, body-centered cubic, and face-centered cubic structures with added Gaussian noise.

Fig. 4: UMAP of particle descriptors computed for simple cubic, body-centered cubic, and face-centered cubic structures of 4000 particles with added Gaussian noise. The particle descriptors include Q_l for l ∈ {4, 6, 8, 10, 12}. Some noisy configurations of bcc can be confused as fcc and vice versa, which accounts for the small number of errors in the support vector machine's test classification. 
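The original code listing for this example is not reproduced in this text. The sketch below outlines the same kind of workflow under stated assumptions: it uses the freud 2.x API (freud.data for generating noisy reference lattices, a Voronoi neighbor list filtered by facet weight, and freud.order.Steinhardt) together with scikit-learn for the support vector machine; the noise level, facet-weight threshold, and feature choices are illustrative rather than the authors' exact settings.

import numpy as np
import freud
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def steinhardt_features(box, points, ls=(4, 6, 8, 10, 12), rel_weight_cut=0.1):
    # Per-particle Q_l features from Voronoi neighbors, dropping small facets.
    voro = freud.locality.Voronoi()
    voro.compute((box, points))
    weights = voro.nlist.weights
    nlist = voro.nlist.filter(weights > rel_weight_cut * np.mean(weights))
    feats = []
    for l in ls:
        st = freud.order.Steinhardt(l=l)
        st.compute((box, points), neighbors=nlist)
        feats.append(st.particle_order)
    return np.column_stack(feats)

# Noisy reference structures generated from ideal unit cells (labels 0, 1, 2).
structures = {0: freud.data.UnitCell.sc(), 1: freud.data.UnitCell.bcc(), 2: freud.data.UnitCell.fcc()}
X, y = [], []
for label, cell in structures.items():
    box, points = cell.generate_system(num_replicas=10, sigma_noise=0.05)
    features = steinhardt_features(box, points)
    X.append(features)
    y.append(np.full(len(features), label))
X, y = np.concatenate(X), np.concatenate(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC().fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))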
Visualization

Many analyses performed by the freud library provide a plot(ax=None) method (new in v1.2.0) that allows their computed quantities to be visualized with Matplotlib. Additionally, these plottable analyses offer IPython representations, allowing Jupyter notebooks to render a graph such as a radial distribution function g(r) just by returning the compute object at the end of a cell. Analyses like the radial distribution function or correlation functions return data that is binned as a one-dimensional histogram -- these are visualized with a line graph via matplotlib.pyplot.plot, with the bin locations and bin counts given by properties of the compute object. Other classes provide multi-dimensional histograms, like the Gaussian density or Potential of Mean Force and Torque, which are plotted with matplotlib.pyplot.imshow. The most complex case for visualization is that of per-particle properties, which also comprise some of the most useful features in freud. Quantities that are computed on a per-particle level can be continuous (e.g. Steinhardt order parameters) or discrete (e.g. clustering, where the integer value corresponds to a unique cluster ID). Continuous quantities can be plotted as a histogram over particles, but typically the most helpful visualizations use these quantities with a color map assigned to particles in a two- or three-dimensional view of the system itself. For such particle visualizations, several open-source tools exist that interoperate well with freud. Below are examples of how one can integrate freud with plato, fresnel, and OVITO [Stu10].

plato is an open-source graphics package that expresses a common interface for defining two- or three-dimensional scenes, which can be rendered as an interactive Jupyter widget or saved to a high-resolution image using one of several backends (PyThreejs, Matplotlib, fresnel, POVray (https://www.povray.org/), and Blender (https://www.blender.org/), among others). Below is an example of how to render particles from a HOOMD-blue snapshot, colored by the density of their local environment [ALT08].

Fig. 5: Interactive visualization of a Lennard-Jones particle system, rendered in a Jupyter notebook using plato with the pythreejs backend.

fresnel is a GPU-accelerated ray tracer designed for particle simulations, with customizable material types and scene lighting, as well as support for a set of common anisotropic shapes. Its feature set is especially well suited for publication-quality graphics. Its use of ray tracing also means that an image's rendering time scales most strongly with the image size, instead of the number of particles -- a desirable feature for extremely large simulations. An example of how to integrate fresnel is shown below and rendered in figure 6.

OVITO is a GUI application with features for particle selection, making movies, and support for many trajectory formats [Stu10]. OVITO has several built-in analysis functions (e.g. Polyhedral Template Matching), which complement the methods in freud. The Python scripting functionality built into OVITO enables the use of freud modules, demonstrated in the code below and shown in figure 7.

Fig. 7: A crystalline grain identified using freud's LocalDensity module and cut out for display using OVITO. The image shows a tP30-CrFe structure formed from an isotropic pair potential optimized to generate this structure [AADG18]. 
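The plato, fresnel, and OVITO listings referenced above are not reproduced in this text. As one illustration, a hedged fresnel sketch is given below; it assumes fresnel's Sphere geometry and preview renderer, and uses placeholder positions and an arbitrary per-particle scalar (standing in for a freud-computed quantity such as a local density or order parameter) mapped through a Matplotlib colormap, rather than the authors' original data.

import numpy as np
import fresnel
import matplotlib.cm

# Placeholder inputs: an (N, 3) position array and a per-particle scalar to color by.
rng = np.random.default_rng(1)
positions = rng.uniform(-5.0, 5.0, size=(200, 3))
values = rng.uniform(0.0, 1.0, size=200)  # e.g. the output of a freud compute

scene = fresnel.Scene()
geometry = fresnel.geometry.Sphere(scene, N=len(positions), radius=0.5)
geometry.position[:] = positions
geometry.material = fresnel.material.Material(roughness=0.5, primitive_color_mix=1.0)
geometry.color[:] = fresnel.color.linear(matplotlib.cm.viridis(values)[:, :3])

image = fresnel.preview(scene)  # RGBA image array suitable for saving or display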
Conclusions

The freud library offers a unique set of high-performance algorithms designed to accelerate the study of nanoscale and colloidal systems. These algorithms are enabled by a fast, easy-to-use set of tools for identifying particle neighbors, a common first step in nearly all such analyses. The efficiency of both the core neighbor finding algorithms and the higher-level analyses makes them suitable for incorporation into real-time visualization environments, and, in conjunction with the transparent NumPy-based interface, allows integration into machine learning workflows using iterative optimization routines that require frequent recomputation of these analyses. The use of freud for real-time visualization has the potential to simplify and accelerate existing simulation visualization pipelines, which typically involve slower and less easily integrable solutions for performing real-time analysis during visualization. The application of freud to machine learning, on the other hand, opens up entirely new avenues of research based on treating well-known analyses of particle simulations as descriptors or optimization targets. In these ways, freud can facilitate research in the field of computational molecular science, and we hope these examples will spark new ideas for scientific exploration in this field.

Getting freud

The freud library is tested for Python 2.7 and 3.5+ and is compatible with Linux, macOS, and Windows. To install freud, execute

conda install -c conda-forge freud

or

pip install freud-analysis

Its source code is available on GitHub and its documentation is available via ReadTheDocs.
2019-08-15T07:08:27.086Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "e58012e2bc986713968a3106830338d1798aefd3", "oa_license": "CCBY", "oa_url": "http://conference.scipy.org/proceedings/scipy2019/pdfs/bradley_dice.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "e58012e2bc986713968a3106830338d1798aefd3", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
59361341
pes2o/s2orc
v3-fos-license
Pesticides and Their Movement in Surface Water and Ground Water

Feza Geyikçi

Introduction

Pesticides are poisons designed to kill pests such as rodents, insects, weeds and fungi. Pesticides are, by their nature, toxic chemicals; since many pesticides may potentially leave residues on foods available for human consumption, there is much concern regarding the potential health risks of pesticides in the human diet. Pesticides used in agriculture to control pests, such as insects, weeds, and plant diseases, have been subject to considerable legislative, regulatory, and consumer scrutiny over the past few decades. Pesticides, with their high degree of toxicity, constitute a very important group of target compounds in environmental samples. Those present in waters may have an agricultural, domestic or industrial origin, the most harmful effect being their inclusion in the so-called "nutrition chain" (Vinas et al., 2002). Many common pesticides contain potent neurotoxic chemicals that attack and disable portions of the nervous system and brain. The use of pesticides in commercial agriculture has led to an increase in farm productivity (Guler et al., 2010). Pesticides also present environmental concerns including water and soil contamination, air pollution, destruction of natural vegetation, reductions in natural pest populations, effects upon non-target organisms including fish, wildlife, and livestock, creation of secondary pest problems, and the evolution of pesticide resistance (Winter, 2004). Many pesticides were used on a global scale from the 1950s to the mid-80s, most of which are stable and persistent in the environment (Barra et al., 2001). The use of pesticides in agriculture is necessary to combat a variety of pests that could destroy crops and to improve the quality of the food produced. The advantages and disadvantages of pesticide pollution controlling techniques are determined by many factors, which require a comprehensive evaluation method adopted in the evaluation of pesticide pollution controlling techniques. Exposure to high levels of pesticides can cause a range of acute, flu- and malaria-like symptoms including headaches, weakness, nausea, respiratory distress, convulsions, coma, and death, accounting for an estimated 20,000 fatalities per year (Jiang and Wan, 2009; Guler et al., 2010). A recent USEPA summary report defined vulnerability, as applied to risk assessment, as a four-component system: (1) susceptibility or sensitivity of the human or ecological receptors; (2) differential exposures of the receptors; (3) differential preparedness of the receptor to withstand the insult from exposure; (4) differential ability to recover from these effects. All of these components are pertinent to systems undergoing development from the fetus through childhood. For example, differences in the chemical biotransformation capacity of the human fetus and developing child can be both protective and potentially detrimental to normal development. Regarding this point, there is little direct information regarding the specific metabolism of xenobiotics, much less pesticides, in children or the fetus. Overriding differences in biotransformation in the fetus is the probable role of maternal metabolism of xenobiotics affecting the level of fetal toxicant exposure. Polymorphisms of maternal phase 1 and phase 2 enzymes may play a key role in these exposure events (Garry, 2004). Deterioration of surface and ground water quality represents the most significant 
Introduction

Pesticides are poisons designed to kill pests such as rodents, insects, weeds and fungi. Pesticides are, by their nature, toxic chemicals; since many pesticides may potentially leave residues on foods available for human consumption, there is much concern regarding the potential health risks of pesticides in the human diet. Pesticides used in agriculture to control pests, such as insects, weeds, and plant diseases, have been subject to considerable legislative, regulatory, and consumer scrutiny over the past few decades. Pesticides, with their high degree of toxicity, constitute a very important group of target compounds in environmental samples. Those present in waters may have an agricultural, domestic or industrial origin, the most harmful effect being their inclusion in the so-called "nutrition chain" (Vinas et al., 2002). Many common pesticides contain potent neurotoxic chemicals that attack and disable portions of the nervous system and brain. The use of pesticides in commercial agriculture has led to an increase in farm productivity (Guler et al., 2010). Pesticides also present environmental concerns including water and soil contamination, air pollution, destruction of natural vegetation, reductions in natural pest populations, effects upon non-target organisms including fish, wildlife, and livestock, creation of secondary pest problems, and the evolution of pesticide resistance (Winter, 2004). Many pesticides were used on a global scale from the 1950s to the mid-80s, most of which are stable and persistent in the environment (Barra et al., 2001). The use of pesticides in agriculture is necessary to combat a variety of pests that could destroy crops and to improve the quality of the food produced. The advantages and disadvantages of pesticide pollution controlling techniques are determined by many factors, which require a comprehensive evaluation method adopted in the evaluation of pesticide pollution controlling techniques. Exposure to high levels of pesticides can cause a range of acute, flu- and malaria-like symptoms including headaches, weakness, nausea, respiratory distress, convulsions, coma, and death, accounting for an estimated 20,000 fatalities per year (Jiang and Wan, 2009; Guler et al., 2010). A recent USEPA summary report defined vulnerability applied to risk assessment as a four-component system: (1) susceptibility or sensitivity of the human or ecological receptors; (2) differential exposures of the receptors; (3) differential preparedness of the receptor to withstand the insult from exposure; (4) differential ability to recover from these effects. All of these components are pertinent to systems undergoing development from the fetus through childhood. For example, differences in the chemical biotransformation capacity of the human fetus and developing child can be both protective and potentially detrimental to normal development. Regarding this point, there is little direct information regarding the specific metabolism of xenobiotics, much less pesticides, in children or the fetus. Overriding differences in biotransformation in the fetus is the probable role of maternal metabolism of xenobiotics affecting the level of fetal toxicant exposure. Polymorphisms of maternal phase 1 and phase 2 enzymes may play a key role in these exposure events (Garry, 2004). Deterioration of surface and ground water quality represents the most significant
adverse environmental impact associated with agricultural production. Degradation of surface and ground water quality has been identified as the primary concern with respect to the impact of agriculture on the environment. The degradation may occur as a result of the leaching of agricultural chemicals, soil, or biological organisms to surface waters. In this study, surface and ground water contamination by pesticides is evaluated.

Pesticide properties

The physical and chemical properties that make pesticides effective for pest control also create a potential for surface and ground-water contamination. The fate of a pesticide applied to soil depends largely on two of its properties: persistence and adsorption (adsorption is inversely related to solubility). Persistence is the "lasting power" of a pesticide. Most pesticides in the soil break down or "degrade" over time as a result of several chemical and microbiological reactions. Generally, chemical reactions result in only partial deactivation of pesticides, whereas soil microorganisms can completely break down many pesticides to carbon dioxide, water and other inorganic constituents. Some pesticides produce intermediate substances called metabolites as they degrade. The biological activity of these substances may or may not have environmental significance. Microbes decrease rapidly below the root zone, so pesticides leached below this depth are less likely to be microbially degraded. However, some pesticides will continue to degrade by chemical reactions after they have left the root zone. Degradation time is measured in half-life. Half-life refers to the amount of time it takes for a pesticide in soil to reach half the activity level it had at the time of application (i.e., for a pesticide with a half-life of 30 days, 50 percent of the pesticide will have degraded after 30 days). Pesticides having short half-lives often do not persist in the soil long enough to leach into groundwater. Chemicals with long half-lives are highly persistent and have a greater chance of leaching into groundwater. To describe potential persistence, scientists classify pesticides as follows (Mahler et al., 1997):
1. Non-persistent chemicals: half-life less than 30 days
2. Moderately persistent chemicals: half-life of 30 to 100 days
3. Persistent chemicals: half-life greater than 100 days
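The half-life description above lends itself to a simple calculation. The sketch below, using made-up example values rather than data from any cited study, estimates the fraction of an application remaining after a given time under the idealized assumption of first-order (exponential) decay, and applies the persistence classes listed above.

    def fraction_remaining(days_elapsed: float, half_life_days: float) -> float:
        """Fraction of the applied pesticide remaining, assuming first-order decay."""
        return 0.5 ** (days_elapsed / half_life_days)

    def persistence_class(half_life_days: float) -> str:
        """Classify persistence using the thresholds given in the text."""
        if half_life_days < 30:
            return "non-persistent"
        elif half_life_days <= 100:
            return "moderately persistent"
        return "persistent"

    # Hypothetical pesticide with a 30-day half-life, checked 90 days after application.
    half_life = 30.0
    print(persistence_class(half_life))         # moderately persistent
    print(fraction_remaining(90.0, half_life))  # 0.125, i.e. about 12.5 % remains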
Pesticides are divided into many classes. The pesticide classes are shown in Table 1 (Squibb, 2002). The adsorption process binds pesticides to soil particles, like iron filings or paper clips stick to a magnet. Adsorption occurs because of the attraction between chemicals and soil particles. Pesticide molecules that are positively charged, for example, are attracted to and can bind to negatively charged clay particles. Strongly adsorbed pesticides are less subject to movement through soil than weakly adsorbed pesticides. On the other hand, strongly adsorbed pesticides are more subject to loss via surface runoff.

Table 1. The main classes of pesticides

Factors affecting adsorption include pesticide charge; soil pH, temperature and water content; the presence of previously adsorbed chemicals that have a stronger bond to soil particles; and the amount and type of organic matter present. In general, pesticide adsorption relates inversely to pesticide solubility in water. Highly soluble pesticides are weakly adsorbed and pose a greater threat of groundwater contamination. Four chemical properties that affect pesticide movement are solubility, adsorption, volatility and degradation.
Solubility: The tendency of a pesticide to dissolve in water affects its leaching potential. As water seeps downward through soil, it carries with it water-soluble chemicals. This process is called leaching. Water solubility greater than 30 mg/L has been identified as the flag for a potential leacher. Highly soluble pesticides have a tendency to be carried in surface runoff and to be leached from the soil to groundwater. Poorly soluble pesticides applied to soil but not incorporated have a high potential for loss through runoff or erosion.
Adsorption: Adsorption refers to the attraction between a chemical and soil particles. Many pesticides do not leach because they are adsorbed, or tightly held, by soil particles. Pesticides which are weakly adsorbed will leach in varying degrees depending on their solubility. Adsorption depends not only on the chemical properties of the pesticide but also on the soil type and amount of soil organic matter present. Even strongly adsorbed pesticides can be carried with eroded soil particles in surface runoff. The potential for a pesticide to be adsorbed is called the adsorption partition coefficient (Kd). The lower the partition coefficient, the greater the pesticide leaching potential.
Volatility: The tendency of a pesticide to become a gas, similar to the evaporation of water, will affect its loss to the atmosphere by volatilization. If a pesticide is highly volatile (has a high vapour pressure) and is not very water soluble, it is likely to be lost to the atmosphere and less will be available for leaching to groundwater. Highly volatile compounds may become groundwater contaminants, however, if they are highly soluble in water. For most pesticides, loss through volatilization is insignificant compared with leaching or surface losses. Volatile pesticides may cause water contamination or other problems from aerial drift. Environmental conditions such as temperature, humidity and wind speed affect volatilization losses. Special surfactants or carriers can be used to reduce volatilization losses.
Degradation: A pesticide's rate of degradation (persistence) in soil also affects leaching potential. Pesticides are degraded or broken down into other chemical forms by sunlight (photodecomposition), by microorganisms in the soil, and by a variety of chemical and physical reactions. The
longer the compound lasts before it is broken down that is the longer it persist the longer it is subject to the forces of leaching and runoff (Hairston, 1995). Pesticide movement in surface water and ground water For an agricultural system to be sustainable, adverse environmental effects of agricultural production must be minimized while competitiveness and profitability are maintained or enhanced.Degradation of surface and ground water quality has been identified as the primary concern with respect to the impact of agriculture on the environment.The degradation may occur as a result of the leaching of agricultural chemicals soil or biological organisms to surface waters.Contamination of surface water is less serious than is the case for groundwater.Properly applied pesticides may reach surface water and groundwater in three basic ways: runoff, run-in, and leaching.Runoff is the physical transport of pollutants over the soil surface by rainwater that does not soak into the soil.Pesticides move from fields while dissolved or suspended in runoff water or adsorbed (chemically attached) to eroded sediment.Run-in is the physical transport of pollutants directly to groundwater.For example, this can occur in areas of limestone (Karst-carbonate) aquifers, which contain sinkholes and porous or fractured bedrock.Rain or irrigation water can carry pesticides through sinkholes or fractured bedrock directly into groundwater.Leaching is the movement of pollutants through the soil by rain or irrigation water as the water moves downward through the soil.Soil organic matter content, clay content and permeability all affect the potential for pesticides to leach in soils.In general, soils with moderate to high organic matter and clay content and moderate or slow permeability are less likely to leach pesticides into groundwater.In fine textured soils, macropores, which are principally root channels and wormholes, may contribute to the leaching of pesticides.The advantages and disadvantages of pesticide pollution controlling technique are determined by many factors, which require a comprehensive evaluation method adopted in the evaluation of pesticide pollution controlling techniques.But in the average comparison experiment of pesticide pollution controlling techniques, an intuitive analysis and simple nature description of the ecologic, economic factors under the technique effects are made and the analysis results are independent to each other, a systematic and comprehensive evaluation of advantages and disadvantages of candidate techniques compared is difficult to be made.The change and development of these factors themselves is a grey change process.The Grey System Theory put forward by Deng Julong in 1980s is a new method of solving problems of few data, poor information and uncertainty, which takes the systems of "small sample", "poor information" and "uncertainty" with part information known and part unknown as the subject, mainly by finding valuable information through creation and development from the "part" information known, so as to achieve correct description and effective monitoring of rules of the system operation and evolution.At present, the Grey System Theory is widely applied in many scientific fields, but no literatures of pesticide pollution controlling evaluation can be found.During the pesticide pollution controlling evaluation, the information provided by limit system investigation and spatial-temporal detection data is not complete and certain and the vegetable field pesticide 
controlling system is a grey system. Based on this point, this paper has made a comprehensive comparison of the pesticide pollution controlling techniques in vegetable production by adopting a relational analysis method of the Grey System Theory. The chemical pesticide provides a necessary guarantee for the output increase, but pesticide abuse has led to daily worsening of the ecosystem of agricultural fields (Jiang et al., 2009).

Pesticides in groundwater

Pesticides in groundwater are an extremely serious problem. The turnover rate for groundwater may be as short as a few months, but more commonly years and decades are needed to replace the water, and microorganisms in this oxygen-free environment are much less effective in breaking down pesticide chemicals. Extremely slow dilution and breakdown mean that the contaminant will be present for a long time. The most critical hazard of contaminated groundwater is the potential for toxic effects in man and domestic animals that drink the water. Contamination of an underground aquifer cannot be easily corrected. Doing so requires drilling purge wells and pumping the water to the surface. Pumping may have to be continued for a long time to remove all the contaminated water. The process is extremely expensive. Preventing groundwater contamination is the best solution to what could be a hazardous situation. Numerous instances of groundwater contaminated with pesticides have been identified. In some cases, small communities have had to use bottled water until other sources of drinking water were developed. At this time, the full extent of groundwater contamination is not known. Pesticides have been found in groundwater in numerous instances, however, and it seems apparent that more instances will be discovered as more and more underground aquifers are sampled and tested for the presence of pesticides. The time it takes for pesticides to travel to groundwater decreases as the depth to groundwater decreases. Generally, the depth to groundwater is least in spring and greatest in late summer. If spring rains come shortly after pesticide application and the water table is close to the surface, a greater potential for groundwater contamination exists.
Pesticides in surface water The presence of pesticides in surface water, even in very small amounts, compromises the life cycle of aquatic organisms, such as algae and fish (tumors, interference with hormonal systems, respiration, growth, reproduction, etc.).Pesticides are harmful to the environment and a threat to the health of those who use these substances, notably those working in the agriculture (headaches, fertility loss, carcinogenic effect, etc.).But most importantly, the prolonged consumption of drinking water, fruits, and vegetables containing pesticides, even at very low doses, presents long-term risks to health.The question of pesticides brings up particular concerns in the area of drinking water production and wastewater treatment because these are among the principal pollutants that impact water resources.Surface water also can be contaminated directly by pesticide spray drift the travel and deposition of fine pesticide spray droplets away from their intended target when the spray is applied too close to water.Drift incidents can result in greater surface water contamination than either runoff or leaching.Obvious, acute effects such as fish kills can occur.Most surface waters (except deep lakes) have a rapid turnover rate, which means that fresh water dilutes the concentration of the contaminant quickly.In addition, most surface waters contain free oxygen, which enhances the rate at which pesticides are broken down by microorganisms.Contamination of surface waters should not be treated casually.An extremely toxic pesticide can cause the death of fish and other aquatic organisms even at low concentrations.Rivers and streams are receptors of toxic wastes generated on land.Pesticides impair beneficial uses of these waters and their biological resources.Pesticides are a group of organic compounds which have been found in aquatic systems worldwide (Rovedatti et al., 2001). 
Pesticide transport in air, water and soil In structured soils, macropore flow often causes rapid nonuniform leaching via preferential flow paths, where a fraction of the contaminant percolates into ground water before it can degrade or be adsorbed by the soil.As a result of agricultural practices, pesticides have been detected in many aquifers and surface waters.With regard to pesticides, moderately sorbed compounds with relatively short half-lives are particularly affected.Travel times for pesticides preferentially leached below the root zone are comparable to those for conservative solutes, with losses of typically less than 1% of the applied dose, but reaching up to 5% of the applied mass.These apparently small numbers can be put into perspective by considering the EU drinking water standard, which states that concentrations of a single pesticide may not exceed 0.1 μg l -1 .For a dose of 0.2 kg ha -1 and an annual recharge of 200 mm, this implies a maximum allowed leaching loss of only 0.1% of the applied amount.Hence, macropore flow should be considered in risk assessment of ground water contamination with pesticides.Pesticide leaching through the vadose zone to ground water is a complex process controlled by a range of soil and environmental conditions.Accordingly, pesticide fate models account for a variety of processes including soil water flow, solute transport, heat transport, pesticide sorption, transformation and degradation, volatilization, crop uptake, and surface runoff.A particular modeling challenge is to predict pesticide transport at very low leaching levels important for pesticide registration.On the other hand, it has been argued that for very low concentrations, approaching the level of quantification, the criteria for accuracy need not be as rigorous, particularly when the analysis takes into account the uncertainty of data and model outcome.The principal processes governing pesticide transport and fate in agricultural structured soil systems are illustrated in Fig. 1.Soil matrix and macropore characteristics invoking different transport patterns are highlighted in Fig. 2. Descriptions of models for simulating transport of pesticides (and other chemicals) can be found in several reviews and model comparison studies (Colume et al., 2001;Köhne et al., 2009).Once applied to cropland, a pesticide may be taken up by plants, adsorbed to plant surfaces, broken down by sunlight (photodegradation), or ingested by animals, insect, worms or microorganisms in the soil.It may be downward in the soil and either adhere to soil particles or dissolve in soil water.The pesticide may be vaporize and enter the atmosphere (volatization) or breakdown via microbial and chemical pathways into less toxic compounds.Pesticides may be leached out of the root zone by rain or irrigation water or wash off the surface of the land.Pesticides applied to the soil and immediately incorporated are protected from photodegradation, volatization and dew, which can cause hydrolysis (decomposition by reaction with water).Properly applied pesticides can reach surface and under-ground waters in two ways: in runoff and by leaching.Runoff is the physical transport of pollutants (chemical or soil) over the ground surface by rainwater, snowmelt or irrigation water that does not penetrate the soil.In the leaching process, pollutants are carried through the soil by rain or irrigation water as it moves downward. 
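The 0.1% leaching-loss figure quoted above follows from straightforward unit arithmetic. The short sketch below reproduces it under the stated assumptions (a dose of 0.2 kg/ha, an annual recharge of 200 mm, and the 0.1 μg/L drinking water limit), treating the recharge water as the only dilution.

    # Reproduce the maximum allowed leaching loss quoted in the text.
    dose_kg_per_ha = 0.2          # applied dose
    recharge_mm_per_year = 200.0  # annual recharge
    limit_ug_per_L = 0.1          # EU drinking water standard for a single pesticide

    # Work per square metre of field.
    dose_ug_per_m2 = dose_kg_per_ha * 1e9 / 10_000           # 0.2 kg/ha -> 20,000 ug per m2
    recharge_L_per_m2 = recharge_mm_per_year                 # 1 mm of water over 1 m2 = 1 L

    # Mass that may leach without exceeding the limit, and the allowed fraction of the dose.
    allowed_ug_per_m2 = limit_ug_per_L * recharge_L_per_m2   # 20 ug per m2
    allowed_fraction = allowed_ug_per_m2 / dose_ug_per_m2
    print(f"{allowed_fraction:.1%}")                          # 0.1%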
Factors affecting pesticide movement Pesticides are primarily moved from agricultural fields to surface waters in surface run-off.The amount lost from fields and transported to surface waters depends on several factors, including soil characteristics, topography, weather, agricultural practices, and chemical and environmental properties of individual pesticides (Colume et al., 2001).Pesticides that are susceptible to leaching do not move through all soils and into ground water at the same rate.Leaching and runoff are nonpoint pollution processes that depend on five sets of factors, some of which are controllable and some not.1. Application factors: These include the application site (crop or weed plants or soil surface or subsurface), the formulation (e. g., granules or suspended powder or liquid), and the application amount and frequency).The management practices that affect movement of pesticides are application methods, application rates and timing, and handling practices.The way in which a pesticide is applied determines leaching potential.Injection or incorporation into the soil, as in the case of nematocides, makes the pesticide most readily available for leaching.Most of the pesticides which have been detected in groundwater are those which are incorporated into the soil rather than sprayed onto growing crops.Pesticides sprayed onto crops, however are more susceptible to volatilization and surface runoff losses.Application rates and timing of a pesticides application also are critical in determining whether it will leach to groundwater.The larger the amount used and the closer the time of application to a heavy rainfall or irrigation, the more likely that some pesticide will leach to groundwater.Particular care should be taken when practicining chemigation because of the risks of back-siphoning and leaching.Properly storing and mixing pesticides and properly disposing of the containers are other factors that can contribute significantly to the contamination of surface water or groundwater.Quick and proper cleanup of spills is also important.2. Pesticide persistence and mobility: Some pesticide-soil combinations result in such strong binding of the pesticide to soil particles that the pesticide is moved only if the soil is moved, i. e., if erosion occurs.Many pesticides now in use are degraded so quickly on soil and crop surfaces that rainfall must occur within a few days after application for significant transport to occur.Pesticides must be relatively persistent and mobile to leach to ground water because the travel time for water to percolate to deep aquifers can range from months to years.However, once a pesticide has leached into subsurface soils, the biological activity and binding capacity there are often less than in soils near the surface.Thus, the pesticide becomes more persistent and mobile.Persistent and mobile pesticides also are more a threat for runoff.However, that part of pesticide residues which is most available for runoff -the part at the topmost surface of soils is the part most rapidly dissipated by evaporation and photodegradation.Moreover, runoff transport can be complete in hours, and erosion can transport immobile pesticides attached to soil.Thus, pesticide runoff is less dependent on the pesticide properties than pesticide leaching, and much more dependent on how soon runoff occurs after application.3. 
Soil and field topography: Soils differ greatly in their capacity to absorb water. The slope and drainage pattern of a field or a watershed greatly affect its potential to generate runoff water. Fast-draining soils such as sands and sandy loams have the greatest leaching potential; slow-draining clays and silty clays have the greatest runoff potential. Watershed size has an important effect on runoff pesticide concentration patterns; small streams adjacent to treated fields can have very high peak concentrations of hundreds of ppb, but concentrations decrease quickly to low values. In large rivers, peak concentrations are much lower but concentrations may be elevated longer. The properties of soils that affect pesticide movement are texture, permeability and organic matter content. Soil texture is determined by the relative proportions of sand, silt and clay. Texture affects movement of water through soil (infiltration) and, therefore, movement of dissolved chemicals such as pesticides. The sandier the soil, the greater the chance of a pesticide reaching groundwater. Coarse-textured sands and gravels have high infiltration capacities, and water tends to percolate through the soil rather than to run off over the soil surface or be adsorbed to soil particles. Therefore, coarse-textured soils generally have high potential for leaching of pesticides to groundwater but low potential for surface loss to streams and lakes. On the other hand, fine-textured soils such as clays and clay loams generally have low infiltration capacities, and water tends to run off rather than to percolate. Soils with more clay and organic matter also have more surface area for adsorption of pesticides and higher populations of microorganisms to break down pesticides. Therefore, fine-textured soils have low potential for leaching of pesticides to groundwater and high potential for pesticide surface loss. Highly permeable soils are susceptible to leaching. Soil permeability is a measure of how fast water can move downward through a particular soil and can typically be inferred from soil texture. Since water moves quickly through highly permeable soils, these soils may lose dissolved chemicals with the percolating water. In highly permeable soil, the timing and the method of pesticide application need to be carefully designed to minimize leaching losses. Soils high in organic matter have a low leaching potential. Soil organic matter influences how much water a soil can hold and how well it will be able to adsorb pesticides and prevent their movement. In addition, high organic matter may reduce the potential for surface loss by increasing the soil's ability to hold both water and dissolved pesticides in the root zone where they will be available to plants. High organic matter also supports much of the microbial activity that decomposes pesticides.
4.
Weather and climate: Climate affects the type of crops grown, the intensity of pest problems, and the persistence of the pesticides used. The intensity of rainfall and its timing with respect to pesticide application determine how much pesticide transport occurs. While these factors are not controllable, probabilities of pesticide runoff and leaching can be estimated, and avoiding pesticide application when rain is imminent is often possible. Areas with high rates of rainfall or irrigation may have large amounts of water percolating through the soil and, therefore, are highly susceptible to leaching of pesticides, especially if the soils are highly permeable. Intensity, duration and frequency of occurrence of rainfall also affect storm water runoff and losses of surface-applied pesticides.
5. Farm management: Pesticide manufacturers are making an effort to provide farmers with the information needed for pollution prevention. The farmer has considerable control over the pollution probabilities: knowledge of erosion control and of best application techniques, and an eye on the weather, are the first lines of defence against pollution (Wauchope et al., 1994; Vinas et al., 2002).

Methods of prevention

Farm pesticides are regulated by state and federal laws. You can be held liable for any damage to people, animals, fish, or wildlife resulting from your pesticide use and handling practices. Protect yourself and the environment by using pesticides on labelled crops at label rates. Safely store and transport pesticides and all potential pollutants to reduce the chance of an accident or spill. This can be accomplished by following two basic steps.
1. Select the proper chemical for the pest to be controlled. Identify the pest by pictures and descriptions in publications available from agricultural agencies, public libraries or local garden centres. Select only a pesticide that is recommended both for the pest and the plant or location affected.
2.
After deciding on the pesticide formulation and appropriate application method, thoroughly read, understand and follow label directions.Pesticide users should be aware of several specific situations when analyzing their pesticide use practices in the context of potential groundwater contamination.All need to be carefully considered.Correcting one bad practice will not help when another bad practice may represent a bigger problem.Storage: Checking storage facilities should be first step in the chain of events involved with pesticide use.Containers are frequently opened in the storage area and the possibility of a spill cannot be ignored.Spilling a concentrated formulation is a more serious matter than applying diluted material on a field.Storage facilities should have a concrete floor so that spilled concentrate can be cleaned up and disposed of properly thereby avoiding soil and water contamination.Mixing and loading: Mixing and loading sites are areas where a lot of pesticide can be inadvertently spilled on the ground.Repeated spills increase the concentration of the pesticide in the soil and increase the possibility of materials leaching through the soil to groundwater.Growers who apply a lot of pesticide should construct a pit lined with clay or preferably concrete and filled with rock and soil.Mixing and loading can be carried out over the pit so that any spill is contained and the active ingredient is broken down without the possibility of leaching to groundwater.The pit must be large enough to accommodate the maximum pesticide use anticipated for an operation.Many commercial pesticide applicators now have such a pit, which is often covered with a concrete slab sloped toward a drain in the centre that provides access to the pit.Application: Multiple applications to the same area have been responsible for groundwater contamination in several locations in the growers who depend on multiple pesticide applications for a crop should be analyze their pest management practices carefully with a view toward reducing the number of applications, the total amount applied, using a different pesticide less likely to leach through the soil profile, or using nonchemical methods to manage the pests.The problem is particularly acute when applications are made to sandy soil in areas with a high water table.Growers in such a situation will want to enlist help of pest management specialists in order to design a program that minimizes the possibility of groundwater contamination without sacrificing effective pest control.Rinsing tanks: Rinsing spray tanks can be a source of possible contamination.The best solution is to drain rinse water into a pit, as described in the section on mixing and loading.If no pit is available, users should not dump or spray the rinse water in the same place repeatedly.Always remember that the more pesticide applied to the same area, the greater the possibility of the active ingredient leaching to groundwater.It is also important to calculate accurately the amount of spray solution needed to avoid the need for disposing of excess spray solution.Rinsing and disposing of containers: Rinsing and disposing of containers is the last in the sequence of operations involved in the use of pesticides.A container is never completely empty and the concentrated formulation remaining represents a troublesome source of future contamination.Containers should be rinsed three times with the rinsate being added to the spray solution and then punctured so they can't be used for another purpose and 
disposed of in a sanitary landfill. Some landfill operators will not accept the containers unless they are crushed so they will take up less space in the landfill. Paper or cardboard containers should be emptied as completely as possible, then punctured and disposed of in a landfill. The pesticides are never completely combusted and represent a potentially hazardous source of exposure for the person doing the burning (Noyes et al., 1991; Wauchope et al., 1994).

Fig. 1. Principal processes governing pesticide transport and fate in agricultural structured soil systems. The central frame is explained in Fig. 2.
Fig. 2. Fractures and microtopography are triggers for preferential infiltration (top). Diverse structure/matrix interfaces stained by dye tracer visualize different preferential transport paths; these interfaces may affect lateral diffusion, sorption and degradation (middle). Soil matrix and macropore characteristics and resulting transport patterns; actual patterns also depend on the characteristics of rainfall and of overlaying soil horizons.
Formulation and in vitro evaluation of compression coated mebeverine HCl tablets for colon targeting The aim of the study was to develop colon targeted compression coated Mebeverine HCl (MEB) tablets using pH dependent, swellable and rupturable polymers for effective treatment of irritable bowel syndrome (IBS). The MEB loaded core tablets were prepared by direct compression method using CCM and CP as super disintegrating agents at different concentrations and evaluated for pre compression and post compression parameters. The optimized core tablets were further used to fabricate compression coated tablets using different ratios of Eudragit L100, Eudragit S100 as pH-dependent polymers, Keltone and Ethocel as swellable and rupturable polymers by modified compression method. Drug compatibility with excipients was checked by FTIR studies and results indicate no interaction. The precompression and post compression studies were within acceptable limits. In vitro dissolution studies were done for compression coated tablets to find out the lag phase, burst drug release and retard drug release The findings of this study concludes that the lag time of compression coated tablet can be modulated by combining with EL 100, ES 100, Keltone and Ethocel in different weight ratio. These designed tablet system was found to be satisfactory in terms of release of the drug after the predetermined lag time, thus the system can be target to release in the colon proximity. The compression coating technique can be successfully applied for MEB for colon targeting to treat IBS. Introduction Colon targeted drug delivery systems offers to treat various colonic diseases like ulcerative colitis, amoebiasis, colonic cancer, inflammatory bowel disease (IBD) by delivering drugs directly to the colon region due to its longer transit time1. Colonic route of drug administration is not only used for the local treatment of colonic diseases but also be used for the systemic delivery of protein and peptide drugs2 and treat diseases which are sensitive to the circadian rhythms such as angina, asthma, arthritis etc. Various approaches are developed for colon targeting by considering the physiological characteristics such as gastric emptying, pH gradient and peristaltic moment of the GIT. The present study developed pH dependent colon targeting compression coated tablets loaded with Mebeverine HCl to treat IBD. Mebeverine is an antispasmodic drug that is claimed to act directly on the colonic muscle and is virtually free of systemic adverse reactions and has been used to treat IBD, it has direct effect on colonic muscle activity. Methods Preparation of the Core Tablet: MEB core tablets were prepared by direct compression method using two super disintegrants viz., CCM, CP at 4, 8 and 12% w/w and GQ720 as directly compressible carrier. All the ingredients as per the table 1 were weighed and blend with GQ720 and shaken in a polybag for prescribed time to get uniform mixing, further this blend was subjected for precompression studies viz., bulk density, compressibility index, angle of repose by standard procedure3-5. The powder blend was further compressed into tablets using 8 mm punch in Cadmach 10 station rotary tablet punching machine. Preparation of compression coated tablets The B-1 to B-8 batches of compression coated tablets were designed as per table 2 by one step dry coating technique 6 using optimized core tablets viz., F-3 and F-6 as shown in figure 1. 
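The precompression parameters mentioned in the methods above (bulk density, compressibility index, angle of repose) are derived from simple measurements. The sketch below shows the standard calculations with made-up values; the actual measurements and acceptance limits used in the study are reported in its tables rather than reproduced here.

    import math

    # Hypothetical powder-blend measurements (not taken from the study).
    mass_g = 25.0
    bulk_volume_mL = 50.0
    tapped_volume_mL = 42.0
    heap_height_cm = 2.1
    heap_radius_cm = 5.0

    bulk_density = mass_g / bulk_volume_mL        # g/mL
    tapped_density = mass_g / tapped_volume_mL    # g/mL

    # Carr's compressibility index and Hausner ratio.
    carr_index = (tapped_density - bulk_density) / tapped_density * 100
    hausner_ratio = tapped_density / bulk_density

    # Angle of repose from the fixed-funnel method: tan(theta) = height / radius.
    angle_of_repose = math.degrees(math.atan(heap_height_cm / heap_radius_cm))

    print(f"Carr's index: {carr_index:.1f} %")
    print(f"Hausner ratio: {hausner_ratio:.2f}")
    print(f"Angle of repose: {angle_of_repose:.1f} degrees")

By convention, a Carr's index below about 15% and an angle of repose below about 30 degrees are read as good flow, which is the type of criterion such precompression studies apply.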
In each case impermeable ethocel was applied under the bottom of the die cavity and core tablet was placed carefully at the center of die. Core tablet was slightly pressed to fix, above it the mixture of pH dependent (ES100, EL100), rupturable and swelling polymers (Keltone, Ethocel) were filled and manually lowered the lower punch slowly and compressed by using 13 mm flat faced punch. Total weight (mg) 700 700 700 700 700 700 700 700 FTIR studies The FTIR spectra for MEB, and optimized core and compression coated tablets were recorded using BRUKER-FTIR spectrophotometer in the wave number region from 4000 cm -1 to 500 cm -1 . The KBr press was used to prepare potassium bromide pellets loaded with samples under the study. Samples and KBr were mixed in a ratio of 1:100 and pellets were prepared by finely grinding the mixture in a mortar. Finely grinded mixture was introduced into a stainless steel die and pellets were prepared by pressing the die between polished steel anvils at a pressure of 10t/in 2 . Postcompression evaluation The developed core and compression coated tablets were studied for post compression parameters viz., thickness, diameter, friability, hardness, drug content, weight variation as per standard procedures and conditions. Disintegration Disintegration test is carried out by using USP apparatus, introduce one tablet into each tube and, add a disc to each tube. Suspend the assembly in the beaker containing phosphate buffer pH 7.4 and operate the apparatus for the specified time. Note down the time taken for tablet to disintegrate, triplicate readings were taken and data was computed. For core tablets In vitro dissolution study core tablets were conducted by using USP Type II Paddle apparatus. Place the stated volume about 900 ml of the dissolution medium viz., phosphate buffer pH 7.4, free from dissolved air, into the vessel of the apparatus. Assemble the apparatus and warm the dissolution medium to 37°C. Place one core tablet in the apparatus, allow the tablet to sink to the bottom of the vessel prior to the rotation of the paddle. Operate the apparatus immediately at the 50 rpm. At specified time interval withdraw the 5 ml sample and add a volume of fresh dissolution medium equal to the volume of the samples withdrawn to maintain sink condition. Filter the sample solution through Whatman filter 44, and measure at 263 nm for MEB content using a double beam UV spectrophotometer. The study was conducted in triplicate and data were computed by using dissolution software PCP Disso V3.0. For compression coated tablet In vitro drug dissolution studies were carried out for compression coated tablets using USP Type II Paddle apparatus. The drug release was studied in three different medium to simulate GIT proximity. Initially the dissolution was carried out in 0.1N HCl for first 2 h to mimic the simulation of gastric fluid. After replace the 0.1N HCl with phosphate buffer pH 7.4 and continue the dissolution for 6 h. Replace the phosphate buffer pH 7.4 with phosphate buffer pH 6.8 and continue the dissolution for 12 h to mimic small intestine and colon pH. In each case at different intervals of time specified volume was withdrawn and same was replaced with fresh dissolution medium to maintain the sink conditions. Filter the sample solution through Whatman filter 44, and measure at 263 nm for MEB content using a double beam UV spectrophotometer. The study was conducted in triplicate and data were computed by using dissolution software PCP Disso V3.0. 
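Because a fixed sample volume is withdrawn and replaced with fresh medium at each time point, cumulative release is usually corrected for the drug removed in earlier samples. The sketch below applies that standard correction to hypothetical concentration readings; the study itself performed these computations with the PCP Disso software, so the values and dose shown here are purely illustrative.

    import numpy as np

    # Hypothetical measured drug concentrations (mg/mL) at each sampling time.
    conc = np.array([0.002, 0.010, 0.035, 0.060, 0.078, 0.088])
    V_medium = 900.0   # mL of dissolution medium
    V_sample = 5.0     # mL withdrawn and replaced at each time point
    dose_mg = 135.0    # assumed label claim of the tablet (hypothetical)

    # Correct each reading for drug removed in all previous samples,
    # then convert to cumulative percent released.
    cumulative_removed = np.concatenate(([0.0], np.cumsum(conc[:-1] * V_sample)))
    released_mg = conc * V_medium + cumulative_removed
    percent_released = released_mg / dose_mg * 100
    print(np.round(percent_released, 1))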
FTIR studies The comparative FTIR data and specters were shown in table 3 and figure 2. The FTIR spectra of MEB shows characteristic absorption bands appeared at 2959.06 cm -1 for Ar-CH=CH-, 2837.27cm -1 for -CH2-,1714.33 cm -1 for C=O, 1510.11 cm -1 for Ar-CH=CH-and 1339.31 cm -1 for -C-N-. MEB loaded core tablets and compression coated tablets shows all the characteristic bands of MEB which clearly indicate that there is no interaction between the MEB and polymers used in the preparation of core and compression coated tablets. Post compression studies The post compression data for core and compression coated tablets were given tables 4 and 5. The core tablet results were within the limits and are in accordance with pharmacopoeial standards. The hardness and friability data indicates that tablets have sufficient mechanical integrity and strength. The weight variation results revealed good uniformity of the tablets and were found to be within acceptable limits as per the pharmacopoeial specifications, the disintegration time was below 3 min in core tablets and is the determinant factor in the designing of compression coated tablets where burst release after lag time is in question. The results suggest the disintegration time decreases with increase in concentration of super disintegrants, among the super disintegrants the tablets prepared with CCM shows better disintegration time. In both the case the mechanism of disintegration was due to swelling and hydrophilic wicking property. In case of compression coated tablets the hardness of the tablets was found to be in the range of 6.08±0.39 to 6.67±0.13kg/cm 2 friability in the range of 0.39±0.019 to 0.564±0.08% which were below 1% indicating the sufficient mechanical integrity and strength of the prepared compression coated tablets. All other parameters were found to be within specified limits and are complying with pharmacopoeial standards. The % drug content of MEB loaded core tablets and plain tablets were found to be 98.12±1.87 to 99.12±1.27, low SD values indicate uniformity in drug distribution and method adapted was reproducible. n = 10*/20**/6*** Disintegration studies The disintegration test for compression coated tablets was carried out for 12 hr in order to check the influence of pH dependent, swellable and rupturable polymers on tablet integrity and the results suggest that the disintegration was time dependent and type of polymers. The disintegration time directly related to the lag period of the study is depends on swelling and bursting nature of the swellable and rupturable polymers and impermeable behavior of ethocel used in outer shell of the tablet and superdisintegrants in the inner shell core tablet. Swellable and rupturable polymer, ethocel in outer shell delayed the disintegration rate to great extent and super disintegrants in the inner core increases the faster disintegration and facilitate faster drug release. The sequential changes during disintegration of compression coated tablets were shown in the figure 3. Figure 3 Sequential changes observed during disintegration of optimized compression coated tablet In vitro dissolution studies The in vitro dissolution studies were carried out for both core and compression coated tablets using USP Type II apparatus and the results were computed and analyzed by using dissolution software PCP Disso V3. The results were given tables 6, 7 and comparative dissolution profiles were shown in figure 4 and 5. 
Core tablets Total seven formulations F-1 to F-3 and F-4 to F-6 core tablets were prepared with CCM and CP at 4%, 8%, 12% w/w concentrations respectively, and Plain core tablets were prepared without any superdisintegrants. .58 for F-1 to F-6 and plain core tablets respectively. CCM decreases the disintegration time and increases drug release because it accelerates disintegration of tablets by virtue of its ability to absorb a large amount of water when exposed to an aqueous environment. The absorption of water results in breaking of tablets and therefore faster disintegration occurs. This disintegration is reported to have an effect on dissolution characteristics as well. It was found that the F-3 core tablet containing CCM 12% w/w as a superdisintegrant have lower disintegration time and higher drug release than that of other CCM formulations, F-6 core tablets prepared with 12% w/w CP as superdisintegrant have lower disintegration time and higher drug release than other CP formulations, which may be due to that CP is a cross linked polymer of povidone and this cross linking makes it an soluble, hydrophilic, highly absorbent material, resulting in excellent swelling properties and its unique fibrous nature gives it excellent water wicking capabilities, so CP provides superior drug dissolution and disintegration characteristics. Based on the in vitro results F-3 and F-6 core tablets were chosen for compression coating. Compression coated tablets The developed compression coated tablets consist of three components, the central core tablet made up pure drug MEB and different concentrations CCM, CP super disintegrants and GQ720 as carrier; the impermeable layer Ethocel; and barrier layer consist of mixture of pH dependent swellable and rupturable polymers viz., Keltone, ES100, EL100. B-1, B-2 compression coated tablets prepared using ES100 at different concentrations of Keltone and Ethocel using F-3 as core tablet; B-5 and B-6 compression coated tablets prepared using ES100 at different concentrations of Keltone and ethocel using F-6 core tablet respectively. Similarly B-3, B-4 compression coated tablets prepared using EL100 at different concentrations of Keltone and Ethocel using F-3 as core tablet; B-7 and B-9 compression coated tablets prepared using EL100 at different concentrations of Keltone and Ethocel using F-6 core tablet respectively. Lag time: Lag time is the time before the drug release started or the time in which less than 10 % of the drug released. Incorporation of core tablet into compression coated tablet produce a lag time prior to drug release. The lag time (t10) was in the range of 2.5 hr to 4 hr and t50 was in the range of 4.6 hr to 6 , which clearly indicates the drug release was restricted in acidic environment, a small amount drug release within lag time is may be due to solution of adhered drug particles. The t50 results show similar results and can be due the rupturing and swelling property of polymer and influence of polymers on environmental pH at intestine and colon proximity with EL100 and ES100. Mechanism of drug release from compression coated tablets In compression coated tablets, drug containing core compressed with the outer barrier layer, it prevents the rapid drug release from core tablets. The drug will not be released unless the coat is broken. When the dissolution medium reaches the core after eroding or rupturing the outer barrier layer rapid drug release was observed. 
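The lag time (t10) and t50 defined above can be read off a cumulative release profile by interpolation. A minimal sketch with hypothetical data points follows; the values reported in the study were obtained with the PCP Disso software, so this is only an illustration of the calculation.

    import numpy as np

    # Hypothetical cumulative release profile of a compression coated tablet.
    time_hr = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10, 12])
    release_pct = np.array([0, 1, 3, 7, 15, 35, 58, 80, 92, 97])

    # Times at which 10 % and 50 % of the dose have been released,
    # by linear interpolation on the (release, time) pairs.
    t10 = np.interp(10, release_pct, time_hr)
    t50 = np.interp(50, release_pct, time_hr)
    print(f"t10 = {t10:.1f} h, t50 = {t50:.1f} h")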
The release profile of compression coated tablet exhibited lag time followed by burst release, in which the outer shell swells and ruptured followed by exposure of core tablets to the medium. When the dissolution medium come in contact with compression coated tablet, the barrier consisting of pH dependent (ES 100 and EL 100), swelling and rupturing polymers (Keltone and Ethocel) starts absorbing dissolution medium as a consequence the polymer swells and expanded and bursting of the layer. As the time passes the swelling process acts as disintegrating force which facilitates the destabilization of the barrier layer itself and additionally rupturing property of the polymer prone to disintegration of the tablet. Finally depending on the nature of the swelling and rupturing polymers the top layer is completely removed i.e., lag time, as a result the dissolution of drug increases sharply due to increased access of dissolution medium into the core of the tablet. Barrier layer intended to regulate the function of the system and modify the release of drug and the polymers present in the core tablet regulate drug release in burst release followed by controlled manner. This type of tablet could be described as a compression coated systems in which the top cover layer consists of swellable and rupturable polymer layer and the inner part of a conventional tablet acting as a drug reservoir. The possible sequential changes observed during dissolution studies for the compression coated tablet was depicted in the figure 6. Figure 6 Sequential changes observed for compression coated tablets during dissolution studies A similar behavior was noticed in all the formulations. The release profiles had a typical pulsatile shape. It is clear that in all cases minimum drug release occurs during the lag time followed by rapid drug release phase. At the stage of rapid release, the release of model drugs is faster and terminates from the systems. These results suggest that apart from the drug solubility the top cover layer also plays a significant role in modifying the lag time and the drug release. The polymer properties and the quantity of the polymer material contained in this layer control the performance and the function of the system. The best fit model was found to be korsemeyer peppas for all compression coated tablets and the exponential 'n' value greater than 1 suggest the drug release follows erosion followed super case II transport mechanism. Conclusion The compression coating technique can be successfully applied for colon targeting by using pH modulating, swellable and rupturable polymers. The findings of study concludes that the lag time of compression coated tablet can be modulated by combining with EL 100, ES 100, Keltone and Ethocel in different weight ratio. These designed tablet system was found to be satisfactory in terms of release of the drug after the predetermined lag time, thus the system can be target to release in the colon proximity.
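As a footnote to the model fitting mentioned in the results, the Korsmeyer-Peppas analysis (release exponent n greater than 1 indicating super case II transport) amounts to fitting log(Mt/M∞) = log k + n log t over the early portion of the release curve. The sketch below performs such a fit on hypothetical post-lag data and is an illustration rather than a reproduction of the study's own model fitting.

    import numpy as np

    # Hypothetical release data (fraction released vs. time) after the lag phase,
    # restricted to roughly the first 60 % of release as is conventional for this model.
    time_hr = np.array([3.5, 4.0, 4.5, 5.0, 5.5])
    fraction_released = np.array([0.08, 0.16, 0.28, 0.43, 0.60])

    # Korsmeyer-Peppas: Mt/Minf = k * t**n  =>  log(Mt/Minf) = log(k) + n * log(t).
    slope, intercept = np.polyfit(np.log10(time_hr), np.log10(fraction_released), 1)
    n = slope
    k = 10 ** intercept
    # An exponent greater than 1, as reported in the text, indicates super case II transport.
    print(f"n = {n:.2f}, k = {k:.4f}")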
Bladder wall thickness and ultrasound estimated bladder weight in healthy adults with portative ultrasound device

Background: The aim of this study was to investigate bladder wall thickness (BWT) and ultrasound estimated bladder weight (UEBW) values in healthy population with a portative ultrasound device and their relationship with demographic parameters. Materials and Methods: The study was carried out in Neurorehabilitation Clinic of Ege University Hospital. Ninety-five subjects (48 women and 47 men) aged between 18 and 56 were included in the study. BWT and UEBW were determined non-invasively with a portative ultrasound device, Bladder Scan BVM 6500 (Verathon Inc., WA, USA), at a frequency of 3.7 MHz at functional bladder capacity. These values were compared by gender, and their relation was assessed with age, body mass index (BMI) and parity. Results: Mean BWT was 2.0 ± 0.4 mm and UEBW was 44.6 ± 8.3 g at a mean volume of 338.0 ± 82.1 ml. Although higher results were obtained in men at higher bladder volumes, the results did not differ significantly by gender. Correlation analyses revealed a statistically significant correlation between UEBW and age (r = 0.32). BWT was negatively correlated with volume (r = –0.50) and bladder surface area (r = –0.57). Also, statistically significant correlations were observed between UEBW and volume (r = 0.36), bladder surface area (r = 0.48) and BWT (r = 0.25). Conclusion: Determined values of BWT and UEBW in healthy population are estimated with portative ultrasound devices, which are future promising, for their convenient, easy, non-invasive, time-efficient hand-held use for screening.

INTRODUCTION

Bladder outlet obstruction (BOO) is a clinical condition in association with a number of disorders in the lower urinary tract such as external sphincter dyssynergia, urethral valves, neurogenic bladder dysfunction and benign prostatic enlargement. In the experimental studies, it has been shown that BOO is followed by compensatory increases in bladder wall thickness (BWT) and bladder weight as a result of smooth muscle hypertrophy and deposition of connective tissue. [1][2][3] These findings in response to BOO have been confirmed in humans. [4][5][6] In the clinical setting, the detection of these histological changes is an important issue in the early stages of BOO in order to avoid complications including renal failure, recurrent urinary tract infection, urinary incontinence, urinary retention, and bladder and renal calculi. Although some methods such as cystoscopy and cystography can be used to show bladder wall trabeculations suggesting detrusor hypertrophy, they do not quantitatively evaluate the degree of detrusor hypertrophy. [7,8] On the other hand, ultrasonography (US) is a non-invasive, simple, fast and widely accepted method for evaluating detrusor hypertrophy. [9][10][11] By using US, the BWT as an indicator of detrusor hypertrophy has been noted for many years. Ultrasound estimated bladder weight (UEBW) was reported as a useful method for the objective and quantitative measurement of bladder hypertrophy in the studies. [6,10,12,13] Recently, a portable handheld US device was introduced to obtain easier and quicker results for BWT and UEBW, as clinicians have come to understand the significance of diagnosing and evaluating BOO. Although this device is commonly used to assess these values, there is only one study that used this device in healthy adults to our knowledge. [14] However, it is well known that providing BWT and UEBW values in healthy adults is necessary before measurement of these parameters in patients, to be able to provide normal-pathologic boundaries. For this reason, we aimed to investigate BWT and UEBW values in a healthy population with a portative ultrasound device and their relationship with demographic parameters.

MATERIALS AND METHODS

The study was carried out in the Neurorehabilitation Clinic of Ege University Hospital. According to a power of 90% and a two-sided alpha value of P = 0.05, 95 healthy volunteers (48 women and 47 men) between 18 and 56 years of age were included in the study. They were recruited among hospital employees and inpatients' and outpatients' relatives or caregivers. The subjects were excluded if they had a history of lower urinary tract injury or surgery, if they had benign prostatic enlargement or prostatic neoplasm, if they had neurologic disease or diabetes mellitus that would affect functions of the lower urinary tract, if they had renal disease, and if they had an open wound in or around the suprapubic area. The cases with renal stasis or other signs of bladder dysfunction affecting the kidneys were also excluded. The women were excluded if they were pregnant, or if they had overactive bladder or pelvic organ prolapse. After the subjects were briefed about the study, and written consent was obtained from all subjects, their demographic characteristics (weight, height, body mass index [BMI], age, gender, and parity) were recorded. The study was approved by the local ethical committee. Ultrasonographic measurements were performed by using the BladderScan BVM 6500 (Verathon Inc., 20001 North Creek Parkway, Bothell, WA 98011 USA) with its patented "V" mode technology. The measurements were made according to the manufacturer's instructions. The subjects were scanned in the supine position with a 130° angle rotating ultrasound probe positioned in the midline above the pubic symphysis by 1 of 2 physicians. The scanner automatically detects misalignment and directs the user to the optimal position. Subjects were asked to drink as much water as possible prior to their exam. If the bladder was not of sufficient capacity at the time of measurement, subjects were rescanned after taking free fluids until a capacity of at least 200 ml was reached. Data from the scans were uploaded via the Internet using the proprietary ScanPoint (Verathon Inc., WA, USA) software program for verification of scan accuracy and automatic calculation of bladder weight according to the algorithm developed by the manufacturer. Then, bladder volume, wall thickness, bladder surface area (BSA) and UEBW were determined automatically by the machine at a frequency of 3.7 MHz at functional bladder capacity [Figure 1]. An individual scanning procedure was completed in 5-10 min.

Statistical analysis

Data were statistically analyzed by using version 13.0 of the Statistical Package for the Social Sciences. The subgroups regarding age and gender were compared using the independent samples t-test. Correlations between age, gender, BMI, and parity and the ultrasound measurement values were computed by Spearman's correlation analysis. All the results were expressed as mean ± standard deviation. A P value below 0.05 was considered to indicate statistical significance.

RESULTS

Table 1 summarizes subjects' characteristics. In a total of 95 subjects, 48 (50.5%) were women and 47 (49.5%) were men. There was no significant difference for age between women and men.
RESULTS
Table 1 summarizes the subjects' characteristics. Of the 95 subjects, 48 (50.5%) were women and 47 (49.5%) were men. There was no significant difference in age between women and men. BMI was significantly higher in men than in women (P < 0.05). The results of the ultrasonographic measurements are presented in Table 2. Although higher values for both BWT and UEBW were obtained in men, at higher bladder volumes, than in women, no statistically significant difference was found between men and women. BWT and UEBW did not differ by parity in women.
DISCUSSION
The results of the present study showed that the normal BWT value was 2.0 ± 0.4 mm and the normal UEBW value was 44.6 ± 8.3 g using the BladderScan BVM 6500, a portable ultrasound device. In addition, an association between age and UEBW was found in this study, as expected, although it was not strong. This finding supports the view that increased UEBW values result from increased collagen deposition in older women and from the age-associated detrusor hypertrophy expected in men with increasing bladder outflow tract obstruction, in accordance with previous data. [15,16] It is well known that the bladder wall, as well as its different layers, can be imaged with ultrasound technology. BWT measured with a US device has received increasing interest as a non-invasive test to diagnose BOO. On the other hand, measurement of mean BWT is important in order to identify women with detrusor instability. Previous studies reported that these women had thicker bladder walls than those with genuine stress incontinence, suggesting that this change may be due to hypertrophy of the detrusor muscle secondary to repeated detrusor contractions against a closed urethral sphincter. [17-19] Khullar et al. [20] also reported that detrusor hypertrophy may be the result of an increased workload, such as detrusor instability. Thus, the assessment of BWT allows an indirect measurement of detrusor muscle thickness, and this provides a potential index of detrusor activity. Previous studies showed that it is a reliable method [21] and that BWT correlates well with other measures of BOO such as uroflowmetry and post-void residual. [4,7] However, some authors showed that measurement of the bladder wall cannot be used to compare the grade of wall hypertrophy, either between patients or during follow-up of the same patient, because BWT depends on the degree of bladder filling. [22,23] UEBW, which is independent of volume, has the promise to become an important indicator for the diagnosis of BOO. It can be estimated by measuring the anterior BWT and calculating the bladder surface area. Studies showed that UEBW can be used as a reliable tool in the management of BOO and neurogenic bladder dysfunction. Several researchers have proposed the measurement of UEBW. However, in these methods, because the thickness was measured manually, the bladder wall measurements suffered from high inter- and intra-observer variability. In addition, such measurements required filling the patient's bladder to a known fixed volume using a catheter, an expensive high-resolution B-mode ultrasound machine, and an ultrasound technician. [6,10,11,13] Accordingly, Chalana et al. [12] developed an automatic and convenient method to estimate UEBW with the BladderScan BVM 6500, which is non-invasive, accurate, reliable, and easy to use. We used this method to measure UEBW in our study. The results showed an association between age and UEBW, supporting the findings of studies that used high-resolution B-mode ultrasound machines.
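To give a sense of scale for these numbers, the following back-of-envelope sketch reproduces the reported mean UEBW from the reported mean volume and BWT. Two assumptions in it are ours and are not stated in the paper: the bladder is treated as a sphere of the measured volume, and the bladder wall is assigned a specific gravity of about 0.957 g/cm3, a value commonly used in the ultrasound-estimated bladder weight literature.

```python
# Back-of-envelope UEBW check; the spherical-bladder and tissue-density assumptions are ours.
import math

volume_ml = 338.0      # mean bladder volume reported in this study (1 ml = 1 cm^3)
bwt_cm = 0.20          # mean bladder wall thickness reported in this study (2.0 mm)
density_g_cm3 = 0.957  # assumed specific gravity of bladder wall tissue (not from this paper)

radius_cm = (3.0 * volume_ml / (4.0 * math.pi)) ** (1.0 / 3.0)  # sphere radius from volume
surface_cm2 = 4.0 * math.pi * radius_cm ** 2                    # bladder surface area
uebw_g = surface_cm2 * bwt_cm * density_g_cm3                   # wall volume times density

print(f"surface area ~ {surface_cm2:.0f} cm^2, estimated UEBW ~ {uebw_g:.1f} g")
# Prints roughly 235 cm^2 and 45 g, close to the reported mean UEBW of 44.6 g.
```

The closeness of this estimate to the measured mean suggests that UEBW behaves essentially as wall volume multiplied by tissue density, although the device's proprietary algorithm is not disclosed in the paper.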
Although the portable handheld US device was introduced to obtain easier and quicker results for BWT and UEBW in the clinical setting, to our knowledge there is only one previous study reporting normal BWT and UEBW values in the literature. [12] That study reported the surprising finding that there was no correlation between UEBW and age. Although we found a significant relation, we cannot explain why it was not strong. On the other hand, we observed an association between UEBW and BSA, supporting the results of that study. Our study has several limitations. Firstly, the number of subjects studied was relatively small. Although the sample size was determined according to the power analysis, the statistically advised sample size for each sex (10 per decade of life for each sex) might better reveal the true magnitude of the findings. Secondly, because no subject was over the age of 56, these results cannot be extended to older ages; it should be noted, however, that it is difficult to find elderly subjects without any disease that would affect lower urinary tract function. A further limitation is that BWT varies with the volume of urine contained in the bladder, which may influence the results; because all subjects were scanned under the same instructions, including a bladder capacity of at least 200 ml, we consider the influence of this factor on the accuracy of the results to be limited. As noted above, it is well known that BWT is dependent on the degree of bladder filling. In summary, we have demonstrated normal values of BWT and UEBW using the BladderScan BVM 6500, a portative ultrasound device. The results showed that BWT and UEBW values in a healthy population did not differ significantly by gender. BWT and UEBW can be estimated with portative ultrasound devices, which are convenient, easy, non-invasive, time-efficient and hand-held, making them suitable for screening. In the future, cut-off values determined with portative ultrasound devices for conditions affecting both BWT and UEBW, such as BOO and overactive bladder, may be very useful in clinical practice.
Metformin improved health-related quality of life in ethnic Chinese women with polycystic ovary syndrome Background Few studies have assessed whether the amelioration of the clinical signs of polycystic ovary syndrome (PCOS) achieved by treatment leads to improvement in the health-related quality of life (HRQoL) of patients. This study was aimed to examine the HRQoL of ethnic Chinese women with PCOS who received metformin treatment. Methods This prospective study was conducted at a medical center in Taiwan. Study participants aged 18–45 years were diagnosed as having PCOS according to the Rotterdam criteria, and all received metformin treatment. Their HRQoL was assessed using generic (WHOQOL-Bref) and PCOS-specific (Chi-PCOSQ) instruments. Mixed effect models were used to examine the effects of metformin on repeatedly measured HRQoL. Additional analyses using stratified patients characteristics (overweight vs. normal; hyperandrogenism vs. non-hyperandrogenism) were done. Results We recruited 109 participants (56 % were overweight, 80 % had hyperandrogenism). Among the domain scores of WHOQOL-Bref, the psychological domain score was the lowest one (12.64 ± 2.2, range 4–20). Weight (3.25 ± 1.59, range 1–7) and infertility (3.38 ± 1.93, range 1–7) domain scores were relatively low among the domain scores of Chi-PCOSQ. Overweight and hyperandrogenic patients had significantly lower HRQoL as compared with those of normal weight and non-hyperandrogenic patients, respectively. Metformin significantly improved the physical domain of WHOQOL-Bref (p = 0.01), and the infertility (p = 0.043) and acne and hair loss aspects (p = 0.008) of PCOS-specific HRQoL. In the subgroup analysis, significantly improved HRQoL following metformin treatment appeared for only overweight and hyperandrogenism subgroups. Conclusions Metformin might improve health-related quality of life of polycystic ovary syndrome women by ameliorating psychological disturbances due to acne, hair loss and infertility problems, especially for overweight and hyperandrogenic patients. Background Health-related quality of life (HRQoL) is generally defined as the functional effect of a clinical condition and/or its treatments upon a patient, which is subjective and multidimensional, including physical function, psychological state, and social interactions [1]. With medical advances that have improved life expectancy, population health is measured not only on the basis of saving lives but also in terms of improving the quality of life. The ultimate goal of healthcare is to improve, restore, or preserve the quality of life of patients [2]; survival per se may no longer be perceived to be only important outcome. Hence, in addition to traditional measures of health (e.g., survival), HRQoL is an important indicator that captures the burden of illness. For chronic illnesses or clinical conditions for which there is no cure, it is critical to provide therapy that makes patients feel better. To assess HRQoL, the degree to which the disease or its treatment influences the patient's life is quantified from an individual's perspective. Assessing HRQoL helps healthcare providers understand whether patients are satisfied with their health and associated treatments. Also, HRQoL is important to consider when evaluating various symptom management plans [3] and disease treatments [4], especially when they provide similar effects on life expectancy. According to U.S. 
Food and Drug Administration's guidance for industry, HRQoL can be used as a clinical outcome to claim the effect of treatment [5]. Polycystic ovary syndrome (PCOS) is the most common endocrine disorder in reproductive-aged women [6]. Clinical presentations associated with PCOS, such as overweight/obesity, hirsutism, acne, hair loss/ androgenic alopecia, oligomenorrhea, amenorrhea and infertility and can lead to mood disturbances, affect the emotional wellbeing as well as sexual satisfaction of women, and cause a reduction in the HRQoL of patients [7,8]. Obesity, clinical signs of hyperandrogenism (i.e., acne, hair loss), and infertility are the main contributors to psychological morbidity [9][10][11]. The HRQoL of women with PCOS has been investigated in several studies for some countries [7,12,13]; however, data on the HRQoL of Chinese women with PCOS is limited. It has been recognized that clinical representations of PCOS vary with culture and ethnicity [14], and may thus have different impacts on HRQoL. For example, the prevalence of hirsutism and obesity in Chinese women with PCOS appears to be lower than that from Caucasians patients [14]; contrarily, acne and hair loss were common problems reported in ethnic Chinese women with PCOS [15]. Therefore, assessing the impact of PCOS on the HRQoL of patients across ethnic groups is important. Metformin, which increases insulin sensitivity, is one of common treatments for PCOS in Taiwan. Some studies showed that metformin improves the body weight, insulin sensitivity, acne, hirsutism, and menstrual cycle of women with PCOS, and that the effects of metformin may vary depending on a patient's characteristics (i.e., obesity, hyperandrogenism) [16][17][18][19]. However, other research found that, among obese women with PCOS, metformin may not lower body weight or improve the menstrual cycle and weight loss alone through lifestyle modifications improves menstrual function [20]. Of notice, previous research primarily focused on clinical effectiveness of metformin [16][17][18][19][20], but only a few studies [7,21] have determined whether the amelioration of the clinical signs of PCOS achieved by treatment leads to improvement in the HRQoL of patients. Therefore, this study aimed to assess the impact of PCOS on the HRQoL of ethnic Chinese women with PCOS and the effects of metformin on the HRQoL of PCOS patients. Methods This was a prospective observational study. Before commencement of the study, permission was obtained from the Institutional Review Board of National Cheng Kung University Hospital, Tainan, Taiwan (A-ER-103-287). Participants All participants were recruited from the Department of Obstetrics and Gynecology at National Cheng Kung University Hospital during February to August, 2015. They met the following inclusion criteria: (1) aged 18-45 years, (2) diagnosed with PCOS according to the Rotterdam criteria, defined as the presence of at least two of the following three criteria: (i) oligo-anovulation (a cycle length of > 35 days or amenorrhoea), (ii) clinical hyperandrogenism (hirsutism recorded as m-FG score of ≥ 6 with/ without acne or androgenic alopecia) and/or biochemical hyperandrogenism (total testosterone level of more than 0.95 ng/mL), and (iii) polycystic ovaries (≥12 follicles measuring 2-9 mm in diameter, or ovarian volume > 10 ml in at least one ovary) [22], and (3) competent in Mandarin Chinese. 
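A minimal sketch of this "two of three" inclusion rule, using the thresholds listed above, is given below; the function and variable names are hypothetical, and the clinical-hyperandrogenism item is simplified to the m-FG cut-off.

```python
# Illustrative check of the Rotterdam-based inclusion criteria described above (names are hypothetical).
def meets_rotterdam(cycle_length_days, amenorrhoea, mfg_score, total_testosterone_ng_ml,
                    max_follicle_count, max_ovarian_volume_ml):
    oligo_anovulation = amenorrhoea or cycle_length_days > 35
    hyperandrogenism = mfg_score >= 6 or total_testosterone_ng_ml > 0.95
    polycystic_ovaries = max_follicle_count >= 12 or max_ovarian_volume_ml > 10
    # PCOS requires at least two of the three Rotterdam criteria
    return sum([oligo_anovulation, hyperandrogenism, polycystic_ovaries]) >= 2

def eligible(age, has_pcos, speaks_mandarin):
    return 18 <= age <= 45 and has_pcos and speaks_mandarin

# Example: a 29-year-old with 40-day cycles and an m-FG score of 7 meets two criteria
print(eligible(29, meets_rotterdam(40, False, 7, 0.6, 8, 6.0), True))  # True
```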
We excluded those: (1) diagnosed with similar clinical presentations (e.g., hyperprolactinemia, thyroid dysfunction, Cushing syndrome), (2) diagnosed with diabetes or with fasting plasma glucose ≥ 126 mg/dL or 2-h glucose ≥ 200 mg/dL before PCOS diagnosis, (3) taking any medications that may influence insulin levels, or contraceptive pills, in the 3 months before PCOS diagnosis, or (4) who had suffered a major traumatic event within 6 months prior to data collection (i.e., divorce, separation, or death of an intimate partner or relative). This is because people experiencing these life events are likely to have negative emotions (e.g., depression, sadness, anxiety), which may affect their psychological well-being [23][24][25]. All participants gave written informed consent regarding their willingness to participate in the research. The second author conducted face-to-face interviews with each participant using the questionnaires at every visit during the 6 months of metformin treatment. Patients were scheduled for a follow-up visit within 1 to 2 months. The author confirmed that all participants had completed all study questionnaires and demographic questions measuring age, gender, residence, highest education, disease duration, comorbidities, and exercise behavior. A routine physical examination (i.e., weight, height, PCOS-specific physical appearance: acne, hirsutism) was conducted at each visit. Overweight was defined using body mass index (BMI) calculated from weight and height, with a BMI ≥ 25 kg/m2 indicating overweight. A blood test was taken at each visit to determine hormone and glycemic levels. Once PCOS was diagnosed, all patients were treated with metformin (500 mg TID). Patients who could not tolerate the immediate-release formulation (e.g., due to gastrointestinal side effects) could reduce the dose of metformin (from TID to BID) or were switched to extended-release metformin. Study measurements The following questionnaires were administered to each participant before, during, and after 6 months of metformin treatment. The Health-Related Quality of Life Questionnaire for Women with Polycystic Ovary Syndrome (PCOSQ) [26] is a disease-specific HRQoL questionnaire that contains 26 questions using a seven-point rating scale (1: maximum impairment and 7: no impairment of HRQoL) in the following five domains: emotions (7 items), hair growth (5 items), body weight (5 items), infertility (5 items), and menstruation (4 items). The psychometric properties of the PCOSQ showed good test-retest reliability (all intraclass correlation coefficients > 0.8), acceptable internal reliability (all Cronbach's α values > 0.7), and satisfactory concurrent validity with the SF-36 [27]. The present study used a Chinese version of the PCOSQ (Chi-PCOSQ), which was recently developed by Ou et al. [15]. Its score was shown to be reliable and valid in a sample of Chinese-speaking women with PCOS. The WHOQOL-Bref [28] is a short version of the World Health Organization Quality of Life (WHOQOL)-100. It has 26 items. The Taiwan version of the WHOQOL-Bref additionally includes two domestic items. The items are distributed into four domains: physical health (7 items), psychological health (6 items), social relations (4 items), and environment (11 items). Items are rated on a 5-point Likert scale (low score of 1 to high score of 5). The mean score for each domain is calculated, resulting in a mean score per domain that is between 4 and 20.
The total score of WHOQOL-Bref is the sum of all domain scores; it ranges from 16 to 80, with a higher score indicating better quality of life. Internal consistency (Cronbach's α = 0.70-0.91), test-retest reliability (r = 0.76-0.80), and construct validity (comparative fit index = 0.89) have been established for the Taiwan version scores [29]. MMSA-8 In order to account for the effect of medication behavior, the Morisky 8-item medication adherence scale (MMAS-8) [30] was applied to measure medication adherence. The MMAS-8 is one of the most commonly used self-report adherence questionnaires. The Chinese version of MMAS-8 has been validated among a convenience sample of 176 patients in China [31]. The scale showed acceptable internal consistency (Cronbach's α = 0.77) and test-retest reliability (r = 0.88), and good construct validity. Statistical analyses Descriptive analyses were used to present the demographics of the study sample and the WHOQOL-Bref and Chi-PCOSQ total and domain scores. Repeated measures analysis of variance (ANOVA) was performed to detect the changes in the HRQoL outcomes along with treatment time between subgroups. Mixed effect models were applied to assess the effects of metformin on repeated outcomes measures (including total and domain scores of WHOQOL-Bref and Chi-PCOSQ). Study patients were stratified by BMI (overweight vs. normal) and clinical and/or biochemical hyperandrogenism. Mixed effect models were applied within each subgroup to detect significant change in the HRQoL following metformin treatment. The SAS 9.4 was utilized for all aforementioned analyses. Results A total of 109 eligible women were enrolled in the study, with average follow-up time of 5.18 (±1.06) months. There were 83 patients who completed 6 months of follow-up, 12 patients who were lost follow-up, 11 patients who got pregnancy, and 3 patients who switched to oral contraceptive pills. The mean age of the participants was 28.3 years; 56 % of them were overweight and 80 % had hyperandrogenism. The detailed baseline characteristics are presented in Table 1. The changes of HRQoL were revealed by the total and domain scores of both WHOQOL-Bref and Chi-PCOSQ within the period of metformin treatment (Table 2). Overall, PCOS patients had the lowest score in the psychological aspect of HRQoL, with similar trends found within subgroups. These imply higher impact of psychological disturbances to patients' quality of life. Overweight patients had significantly lower HRQoL (relatively poorer quality of life) as compared to that of normal weight patients in WHOQOL-Bref scores. Repeated measures ANOVA indicated that the changes in psychological and social domain scores along with treatment duration between overweight and normal weight subgroups were significantly different (p = 0.027 and p = 0.016 for psychological and social domains, respectively). Hyperandrogenic patients had significantly lower HRQoL as compared to that of those without hyperandrogenism. The changes in total score and physical, psychological, and social domain scores along with treatment time between hyperandrogenism and non-hyperandrogenism subgroups were significantly different (repeated measures ANOVA showed p = 0.014, 0.012, 0.013, and 0.015 for total score and physical, psychological, and social domain scores, respectively). PCOS patients had the lowest score on the weight domain of Chi-PCOSQ and the highest score on the body hair domain. 
This means that weight associated psychological disturbances had the greatest impact on patients' HRQoL, while patients had less impact of body hair problems on their HRQoL. Overweight patients had significantly lower PCOS-specific HRQoL (relatively poorer PCOS-specific quality of life) as compared to that of normal weight patients. Repeated measures ANOVA indicated that the change in weight domain score along with treatment duration between overweight and normal weight subgroups was significantly different (p < 0.0001). Also, hyperandrogenic patients had significantly lower PCOS-specific HRQoL as compared to that of nonhyperandrogenic patients. The change in acne and hair loss domain score along with treatment time between hyperandrogenism and non-hyperandrogenism subgroups was significantly different (p < 0.0005), implying that the improved HRQoL, especially on acne and hair loss aspects, was greater in hyperandrogenic patients than that in non-hyperandrogenic patients. Table 3 shows that the physical domain of WHOQOL-Bref significantly improved with treatment time (p = 0.01). Overweight patients had significantly improved physical domain scores during the treatment period, whereas no improvement trend was found for normal weight patients. As for the hyperandrogenism and non-hyperandrogenism subgroups, significantly improved physical domain scores were found only for the former. Table 4 indicates that the infertility and acne and hair loss domains of Chi-PCOSQ significantly improved with treatment time (p < 0.05). Overweight patients had significantly improved acne and hair domain scores, and hyperandrogenic patients had significantly improved infertility and acne and hair domain scores. Discussion There is a lack of research that assesses HRQoL in ethnic Chinese women with PCOS. Also, clinical evidence showing the effect of treatment for PCOS women on HRQoL is scarce. The present study found that psychological disturbances due to PCOS associated problems (i.e., acne, hair loss, infertility) may lead to a reduction in the HRQoL for ethnic Chinese women with PCOS. Overweight and hyperandrogenic patients had significantly lower HRQoL as compared with that of normal weight and non-hyperandrogenic counterparts. Metformin may provide benefits to the HRQoL of PCOS women by ameliorating psychological disturbances due to acne, hair loss and infertility problems, especially for the patients who present overweight and hyperandrogenism. HRQoL in PCOS patients treated with metformin Few studies [7,21] have assessed treatment effects on HRQoL outcomes for PCOS women. Hahn et al. [21] examined the HRQoL of 64 German women with PCOS with 6-month treatment of metformin. They found positive effects of metformin on HRQoL outcomes, especially on psychosocial, emotional, and psychosexual aspects of well-being. The present study also observed that the emotions domain of HRQoL was generally improved following treatment, although the change was not statistically significant. Acne and hair loss/androgenetic alopecia could increase women's self-consciousness, feelings of unattractiveness and emotional distress [10,11]. Previous studies have shown that metformin alleviates clinical signs of hyperandrogenism (i.e., acne) [32,33]. However, no studies assessed potential benefits of metformin on HRQoL outcomes associated with the improved clinical signs of hyperandrogenism (i.e., acne). 
This is in part because acne and hair loss issues are not included in the original PCOSQ [26], which is the most commonly used PCOSspecific HRQoL instrument. The present study used the Chi-PCOSQ, which is a Chinese version of PCOSQ and contains the acne and hair loss domain, and found significantly relieved acne and hair loss associated psychological disturbances (i.e., worried, embarrassed) after metformin treatment. In fact, more than half of study participants having acne or hair loss problem at baseline reported no these problems after 3 -4 months of treatment, which is even profound in the hyperandrogenism subgroup; 81 % of hyperandrogenism patients without acne or hair loss problems at 3 -4 months of treatment. Thus, the improved clinical signs of hyperandrogenism as a result of metformin may lead to satisfactory HRQoL of PCOS patients. Previous research has supported that insulin sensitizers (e.g., metformin) provide fertility benefits to PCOS patients (i.e., improve pregnancy rate [16]), especially those with hyperinsulinemia or insulin resistance, which could be responsible for the abnormal ovarian response [34]. Guyatt et al.'s study observed significant improvements in the infertility aspect of the PCOS-specific HRQoL (measured via PCOSQ) after 44 weeks of troglitazone treatment [7]. Troglitazone, an insulin sensitizer like metformin, was previously used in PCOS. The present study also observed improved infertility aspect of PCOS-specific HRQoL, especially in hyperandrogenic patients. Thus, the amelioration of ovulation problems achieved by insulin sensitizers (i.e., troglitazone, metformin) may alleviate emotional distress due to infertility and thus contribute to the improvement of the HRQoL of PCOS patients. The infertility domain in Chi-PCOSQ consists of three items: "concerned about infertility problems", "afraid of not being able to have children", and "sad because of infertility problems". These items are associated with patient's psychological concerns about fertility. Although we did not find a significantly increase pregnancy rate in our reproductive-aged participants after treatment, patient's perceived benefits from metformin and reduced perceived susceptibility to infertility after treatment [35] may lead to the improved fertility aspect of HRQoL that we observed. Metformin is recommended as one of treatment options in PCOS women, especially for those who present obesity, hyperandrogenism, insulin resistance or hyperinsulinemia [18]. Consistently, our results showed significant effects of metformin on HRQoL of PCOS patients, especially in overweight and hyperandrogenic patients. Also, insulin resistance is one of important characteristics for metformin efficacy [33]. Our previous study showed that with metformin treatment, overweight PCOS women (BMI ≥ 25 kg/m 2 ) had a significant reduction in body weight as compared to those with normal weight and patients with insulin resistance had a significantly improved 2-h insulin level as compared to those without insulin resistence [19]. In the present study, we found that the prevalence of insulin resistance in the overweight group was higher than that in the normal weight group (85 versus 58 %). This may be another reason why positive effect of metformin on HRQoL outcomes was observed in overweight patients, but not in normal weight patients. Moreover, it has been argued that the combination of menstrual problems, hyperandrogenism and anovulation can be positively affected by metformin treatment [18]. 
This may explain our findings showing that metformin provided significant benefits in HRQoL outcomes for PCOS women with hyperandrogenism, especially in terms of mitigating the burden of acne and hair loss, and infertility associated psychological distress on the HRQoL of patients. Importance of study findings to clinicians and patient care Considerable burden of psychological disturbances due to acne, hair loss and infertility problems on the HRQoL of ethnic Chinese women with PCOS requires healthcare providers' attention. Regularly assessing the HRQoL of PCOS patients through generic and/or disease-specific HRQoL instruments would help clinicians detect any changes in patients' HRQoL due to clinical signs of PCOS or treatment interventions. As a supplement to lifestyle changes (e.g., exercise and weight control), the amelioration of clinical signs of PCOS achieved by metformin may lead to improvement in HRQoL. Also, PCOS patients who present overweight or hyperandrogenism may receive the most benefit from metformin treatment on their HRQoL. Potential limitations Several limitations of this study need to be addressed. First, all participants were from one medical center in southern Taiwan. Our findings may be applicable to only a subset of Chinese women. Ethnically Chinese people are distributed over a large geographic area, and are likely to have differences in dietary habits, physical activity, and even treatment approaches. Second, all participants knew that they were receiving metformin and thus, the improved HRQoL which we observed may be in part because of their motivation to improve. Third, because this was an observational study and all participants received metformin treatment after PCOS diagnosis, this study did not compare other treatments for PCOS (e.g., oral contraceptives, as active control) or have a placebo control group. However, other types of treatment (e.g., clomiphene) for PCOS may also provide benefits (e.g., reproductive) for the HRQoL of patients. Fourth, because this was not a randomized design study, selection bias could not be avoided. Moreover, our results were based on patients' reporting of HRQoL outcomes, and thus, self-reporting bias could not be avoided. Lastly, the present study did not include objective measures (e.g., pregnancy rate) as indicators for metformin treatment. So, improvement in subjective outcomes (e,g, HRQoL) may not be explained as the result of changes in objective measures of clinical outcomes. However, we did observed clinical improvement in patient's body weight and acne and hair loss problems. Conclusions This is the first study to apply Chi-PCOSQ to assess the HRQoL of ethnic Chinese with PCOS and to evaluate the HRQoL outcomes of PCOS patients after metformin treatment. The results provide important clinical implications for the care of PCOS patients and suggest that developing interventions for improving the HRQoL of PCOS patients is needed. Future studies from other countries/ethnicity are warranted to evaluate the influence of treatment on the HRQoL outcomes of PCOS women.
Elevated Expression of Serotonin 5-HT2A Receptors in the Rat Ventral Tegmental Area Enhances Vulnerability to the Behavioral Effects of Cocaine The dopamine mesocorticoaccumbens pathway which originates in the ventral tegmental area (VTA) and projects to the nucleus accumbens and prefrontal cortex is a circuit important in mediating the actions of psychostimulants. The function of this circuit is modulated by the actions of serotonin (5-HT) at 5-HT2A receptors (5-HT2AR) localized to the VTA. In the present study, we tested the hypothesis that virally mediated overexpression of 5-HT2AR in the VTA would increase cocaine-evoked locomotor activity in the absence of alterations in basal locomotor activity. A plasmid containing the gene for the 5-HT2AR linked to a synthetic marker peptide (Flag) was created and the construct was packaged in an adeno-associated virus vector (rAAV-5-HT2AR-Flag). This viral vector (2 μl; 109–10 transducing units/ml) was unilaterally infused into the VTA of male rats, while control animals received an intra-VTA infusion of Ringer’s solution. Virus-pretreated rats exhibited normal spontaneous locomotor activity measured in a modified open-field apparatus at 7, 14, and 21 days following infusion. After an injection of cocaine (15 mg/kg, ip), both horizontal hyperactivity and rearing were significantly enhanced in virus-treated rats (p < 0.05). Immunohistochemical analysis confirmed expression of Flag and overexpression of the 5-HT2AR protein. These data indicate that the vulnerability of adult male rats to hyperactivity induced by cocaine is enhanced following increased levels of expression of the 5-HT2AR in the VTA and suggest that the 5-HT2AR receptor in the VTA plays a role in regulation of responsiveness to cocaine. INTRODUCTION Cocaine addiction is marked by significant morbidity and loss of human potential, yet consistently effective and accessible recovery options remain limited. This fact underscores the continuing need to uncover the neural factors that drive vulnerability to cocaine addiction and relapse and to establish new pharmacological strategies to halt or reverse the progression of the disorder. Cocaine inhibits reuptake of monoamines, including dopamine (DA) and serotonin (5-hydroxytryptamine;5-HT;Koe, 1976) and the enhanced efflux of DA within the mesocorticoaccumbens circuit is critical in the generation of cocaine-evoked behaviors (Kelly and Iversen, 1976;Delfs et al., 1990;Callahan et al., 1994). The mesocorticoaccumbens DA neurons, which originate in the ventral tegmental area (VTA) and project prominently to subcortical [e.g., nucleus accumbens (NAc)] and cortical structures [e.g., prefrontal cortex (PFC)], are under the modulatory control of the 5-HT system (Alex and Pehek, 2007), with 5-HT neurons in the dorsal raphe nucleus innervating both cell body and terminal regions of the mesocorticoaccumbens circuit (Halliday and Tork, 1989). As such, the 5-HT system is also an important mediator of cocaine-evoked behaviors (for reviews, see Walsh and Cunningham, 1997;Muller and Huston, 2006;Bubar and Cunningham, 2008;Filip et al., 2010). The 5-HT 2A R is a G protein-coupled receptor (Berg et al., 1994) expressed throughout the nodes of the mesocorticoaccumbens circuit (Cornea-Hebert et al., 1999;Doherty and Pickel, 2000;Xu and Pandey, 2000;Nocjar et al., 2002;Miner et al., 2003). 
The 5-HT 2A R resident in the VTA is localized to both DA and non-DAergic [presumably γ-aminobutyric (GABA) or glutamate] neurons within the VTA (Doherty and Pickel, 2000;Nocjar et al., 2002), and appear to be integral in modulating psychostimulant-induced behaviors mediated by the mesocorticoaccumbens circuit. Microinfusion of the selective 5-HT 2A R antagonist M100907 into the VTA, but not the NAc, attenuated hyperactivity evoked by systemic administration of cocaine at doses that did not alter basal motor activation . Likewise, intra-VTA 5-HT 2A R antagonist administration significantly blocked amphetamine-evoked hyperactivity and associated DA release in the NAc, with no effect upon basal motor activity or DA efflux in NAc (Auclair et al., 2004). We have observed that microinfusion of the preferential 5-HT 2A R agonist 1-(2,5-dimethoxy-4-iodo)-2-aminopropane (DOI) alone into the VTA is sufficient to evoke hyperactivity in rats (Herin et al., unpublished observations). Thus, activation of 5-HT 2A R resident in the VTA results in behaviorally significant outcomes, and likewise appears to play a critical role in cocaine-evoked behaviors mediated by the DA mesocorticoaccumbens circuit. The virally mediated gene transfer technique represents a targeted means to manipulate the expression of important proteins in the brains of adult animals (Carlezon et al., 1997;Bolanos et al., 2003;Edry et al., 2011). A recombinant adeno-associated virus (rAAV) can be used to selectively transduce neurons for a long duration (weeks to months) with a minimum of toxicity and inflammation (McCown et al., 1996;Lo et al., 1999). In the present study, we have exploited rAAV-mediated gene transfer to investigate whether overexpression of the 5-HT 2A R in the VTA alters the vulnerability of adult male rats to the hypermotive effects of cocaine. We developed an rAAV containing the coding region for the 5-HT 2A R linked to a synthetic marker peptide (Flag; rAAV-5-HT 2A R-Flag), and infused vehicle or rAAV-5-HT 2A R-Flag unilaterally into the VTA of experimental animals, followed by measurement of basal and cocaine-evoked hyperactivity. Immunohistochemical analyses were used to confirm 5-HT 2A R overexpression as well as expression of the Flag peptide. ANIMALS Male Sprague-Dawley rats (Harlan Sprague-Dawley, Inc., Indianapolis, IN, USA) weighed 250-275 g at the beginning of the study. The rats were housed (initially four/cage) in standard plastic rodent cages in a temperature (21-23˚C) and humidity (55-65%) controlled environment under a 12-h light/dark cycle (lights on 07:00 h). Animals were acclimated to the colony for 3-5 days prior to surgery, after which they were single-housed and allowed to recover for at least one week prior to the start of experimental sessions. All animals were provided with food and water ad libitum. Experiments were conducted during the light phase of the light-dark cycle (1200-1800 h) and were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and with approval from the UTMB Institutional Animal Care and Use Committee. VIRAL VECTORS The cDNA containing the coding region for the rat 5-HT 2A R was obtained (Dr. J. Liu, University of Cincinnati). Primers were designed to amplify only the coding region of the 5-HT 2A R and to add a BamHI site (to the 5 end), SpeI site (to the 3 end), 24 bases coding for a synthetic marker (Flag) protein and a stop codon. 
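The actual primer sequences are not reported in the paper; purely as an illustration of the design just described, the short script below assumes the standard Flag epitope DYKDDDDK, uses one possible codon choice for it, and checks that a BamHI/SpeI-flanked insert carries the 24-base tag and a stop codon.

```python
# Illustrative sanity check of the tagging/cloning design (sequences below are assumptions,
# not the primers used in the study): BamHI = GGATCC, SpeI = ACTAGT, Flag = DYKDDDDK.
FLAG_DNA = "GACTACAAGGACGACGATGACAAG"   # one possible codon choice for DYKDDDDK (24 bases)
STOP = "TAA"
CODON = {"GAC": "D", "GAT": "D", "TAC": "Y", "AAG": "K"}  # only the codons used above

def translate(dna):
    return "".join(CODON[dna[i:i + 3]] for i in range(0, len(dna), 3))

# Hypothetical insert layout: BamHI site + receptor coding region (elided) + Flag + stop + SpeI site
insert = "GGATCC" + "..." + FLAG_DNA + STOP + "ACTAGT"

assert len(FLAG_DNA) == 24                      # "24 bases coding for a synthetic marker (Flag)"
assert translate(FLAG_DNA) == "DYKDDDDK"        # standard Flag epitope
assert insert.startswith("GGATCC") and insert.endswith("ACTAGT")  # BamHI at the 5' end, SpeI at the 3' end
print("Flag peptide:", translate(FLAG_DNA))
```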
PCR amplification was performed using rTtH-XL polymerase (Perkin Elmer, Boston, MA, USA) and the product (5-HT 2A R-Flag) was purified by preparative agarose gel electrophoresis followed by dialysis, phenol/chloroform/isoamyl alcohol (25:24:1) extraction and ethanol precipitation. The 5-HT 2A R-Flag DNA was then ligated into the pCW plasmid, which is appropriate for viral packaging. The pCW plasmid (provided by Dr. D. J. Poulsen; University of Montana) contains the inverted terminal repeats (ITR) of AAV, a chick β-actin (CAG) promoter, multiple cloning sites, and a woodchuck hepatitis virus posttranscriptional regulatory element (WPRE; Stone et al., 2005). 5-HT 2A R-Flag (300 fmol) and pCW (30 fmol) were digested with BamHI (2 units) and SpeI (5 units) for 2 h at 37°C. Ligation was performed with a kit (TaKaRa Biochemical, Inc., Berkeley, CA, USA). Ten microliters of the resultant cDNA was used to transform DH5-α ultracompetent E. coli, and transformants were selected on 50 µg/ml ampicillin. Plasmid DNA was isolated from 20 colonies and tested for inclusion of the insert by digestion with BamHI and SpeI. One positive colony was CsCl-purified and sequenced at the UTMB Molecular Biology Core Facility. Functionality of the transgene was determined by transfection into raphe RN46A cells (provided by Dr. Scott Whittemore, University of Miami; White et al., 1994) followed by immunocytochemical detection with 5-HT 2A R and Flag antibodies. The rAAV-5-HT 2A R-Flag was prepared by cotransfecting three plasmids into human embryonic kidney cells (HEK 293 cell line) based on previous protocols (Xiao et al., 1998; Wu et al., 2002) utilizing an AAV helper plasmid (pXX2) and an adenovirus helper plasmid (pXX6). The HEK cells were cultured in 150 mm dishes containing DMEM/10% FBS at 37°C, 5% CO2. When cells reached 80% confluence, calcium phosphate precipitation was used for cotransfection with pCW-5-HT 2A R-Flag, pXX2, and pXX6. Following a brief rinse with DMEM, OptiMEM (Life Technologies)/10% FBS/120 µM chloroquine was added to the cells. Then 2.5 ml of DNA-calcium phosphate solution was added per plate. This solution contained the three plasmids at a molar ratio of 7:2:4, 125 mM CaCl2 and 1× HBS (2.5 M NaCl, 0.25 M HEPES, 75 mM Na2HPO4, pH 7.1). Cells were cultured with 5% CO2 at 37°C for 18 h, and the medium was then changed to OptiMEM/10% FBS. Two days after co-transfection, cells and medium were collected, centrifuged at 1140 g for 15 min, and then resuspended in 150 mM NaCl/20 mM Tris pH 8.0 at 5 × 10⁶ cells/ml. The cell suspension was further treated with 0.54% deoxycholate (Sigma, St. Louis, MO, USA) and 50 U/ml Benzonase (Sigma) at 37°C for 1 h. Following centrifugation at 3000 g at room temperature for 20 min, supernatants were subjected to a cycle of freeze-thaw and then centrifuged again at 10,000 g at 4°C for 30 min. The supernatant was collected, filtered through a 1-µm disk filter (Fisher, Pittsburgh, PA, USA), and then run by gravity through a heparin agarose type I column (Sigma) pre-equilibrated with phosphate-buffered saline/1 mM MgCl2/2.5 mM KCl (PBS-MK). After four washes with 5 ml PBS-MK each, rAAV viruses were eluted with 9 ml of 1 M NaCl/PBS-MK. The first 2 ml was discarded. The next 7 ml was collected, desalted by running through a Centricon Plus-20/Biomax-100 (Fisher) with four changes of lactated Ringer's solution, then concentrated by centrifugation at 3000 g at room temperature, and the eluate was collected.
Dot blot indicated that the titer of the packaged virus was in the range of 10 9 -10 10 transducing units/ml. ANIMAL SURGERY Rats (n = 10/group) were anesthetized intramuscularly (IM) with 43 mg/kg of ketamine, 8.6 mg/kg of xylazine, and 1.5 mg/kg of acepromazine in physiological saline (0.9% NaCl) and placed in a Kopf rat stereotaxic apparatus (David Kopf Instruments, Tujunga, CA, USA) with the upper incisor bar at −3.8 mm below the interaural line. A Hamilton microsyringe (Hamilton, Reno, NV, USA) was then lowered unilaterally into the VTA at a 9˚from the midsaggital plane in relation to bregma: [anteroposterior (AP) −5.3 mm, mediolateral (ML) + 1.3 mm, and dorsoventral (DV) −8.1 and 8.5 mm from skull (Paxinos and Watson, 1998;Shank et al., 2007)]. The rAAV-5-HT 2A R-Flag (2 µl, 10 9 -10 10 transducing units/ml) or lactated ringer's solution vehicle control (2 µl) was infused into the VTA (n = 10 per group) using the UMP II infusion pump (WPI, Sarasota, FL, USA) at a rate of 18 nl/min; the infusion lasted 2 h. Following infusion, the needle was left in place for 10 min followed by withdrawal from the brain and wound closure. Rats received a single injection (IM) of 300,000 U of sodium ampicillin after surgery and were allowed 1 week to recover, during which time they were handled and weighed daily. Apparatus Locomotor activity was quantified using a modified open-field activity system under low-light conditions (San Diego Instruments, San Diego, CA, USA). Each enclosure consisted of a clear Plexiglas open-field (40 cm × 40 cm × 40 cm) and a 4 × 4 photobeam matrix located 4 cm above the cage floor for the measurement of horizontal activity; each monitor was housed within sound-attenuating chambers. A second horizontal row of 16 photobeams located 16 cm from the floor allowed the measurement of rearing. Activity counts were made by the control software (Photobeam Activity Software, San Diego Instruments, San Diego, CA, USA) and stored for statistical evaluation. Video cameras located above the enclosures were used to monitor activity continuously without disruption of behavior. Effects of 5-HT 2A R overexpression on basal and cocaine-evoked locomotor activity On Day 7 following surgery, animals were placed in activity monitors and horizontal activity and rearing were recorded for 60 min, followed by return to the animal colony. Additionally, activity was again measured in these same animals on Days 14 and 21 following surgery for 60 min on each day. On Day 21, following the measurement of basal locomotor activity, all animals were challenged with 15 mg/kg of cocaine [(-)-cocaine HCl salt; National Institute on Drug Abuse, Research Triangle, NC, USA dissolved in 0.9% NaCl], a dose that consistently produces hyperactivity in our laboratory (McCreary and Cunningham, 1999;De La Garza and Cunningham, 2000;Liu and Cunningham, 2006;Cunningham et al., 2013). Immediately following injection, horizontal activity and rearing were measured for 60 min. Both horizontal activity and rearing counts were totaled for each animal in 10-min time bins and across the 60-min test sessions. All data are presented as mean horizontal activity counts or rearing counts (±SEM). For basal locomotor activity, a two-way ANOVA was used to analyze the effects of intra-VTA pretreatment (control or rAAV-5-HT 2A R-Flag; factor 1) and day (Days 7, 14, 21; factor 2) with pretreatment as a between-subjects factor and day as a within-subjects factor. 
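The mixed two-way design just described, together with the Bonferroni-corrected planned comparisons described next, can be sketched as follows. The study used SAS; the pingouin/SciPy calls and the long-format `basal_activity.csv` file with `rat`, `group`, `day`, and `counts` columns are illustrative substitutions of ours, not the authors' code.

```python
# Illustrative sketch of the mixed two-way ANOVA and planned comparisons (SAS was used in the study).
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("basal_activity.csv")  # hypothetical long format: rat, group, day, counts

# Two-way mixed ANOVA: pretreatment (between-subjects) x day (within-subjects)
aov = pg.mixed_anova(data=df, dv="counts", within="day", between="group", subject="rat")
print(aov[["Source", "F", "p-unc"]])

# Planned comparisons on each test day, Bonferroni-corrected for the three days
days = sorted(df["day"].unique())
alpha = 0.05 / len(days)
for d in days:
    virus = df[(df.day == d) & (df.group == "virus")]["counts"]
    control = df[(df.day == d) & (df.group == "control")]["counts"]
    t, p = stats.ttest_ind(virus, control)
    print(f"Day {d}: t = {t:.2f}, p = {p:.4f}", "significant" if p < alpha else "ns")
```

For the cocaine challenge, the same layout applies with the six 10-min time bins as the within-subjects factor, in which case the Bonferroni threshold becomes 0.05/6, i.e. roughly the p < 0.008 per-comparison criterion reported in the results.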
Planned comparisons for each test day were made with a Student's t-test with a Bonferroni correction. To analyze the time course of cocaine-evoked activity on Day 21, a two-way ANOVA with factors of intra-VTA pretreatment (between-subjects) and time (within-subjects) was utilized, followed by planned comparisons at each time point using a Student's t-test with a Bonferroni correction. Differences in the mean total hyperactivity observed for the 60-min period following cocaine injection on Day 21 were analyzed with a Student's t-test. All statistical tests were performed using SAS for Windows (Version 8.1) with an experiment-wise α = 0.05. Histology and transgene detection At the end of behavioral testing on Day 21, animals were deeply anesthetized with an intraperitoneal (IP) injection of pentobarbital (Sigma, 100 mg/kg) and transcardially perfused with PBS followed by 3% buffered paraformaldehyde. Brains were then removed, blocked at the mid-pons, and post-fixed in paraformaldehyde at room temperature for 2 h. Tissue was then cryoprotected in 30% sucrose solution at 4°C for 48 h. Brains were frozen with crushed dry ice and stored at −80°C. Coronal sections (50 µm) were prepared with a Leica cryostat (CM 1850) at −20°C and processed to verify microinjection placement and transgene expression using immunohistochemistry (see below). Data obtained from rats with infusion sites outside of the VTA were excluded from analysis. To validate the ability of the rAAV construct to establish expression of 5-HT 2A R and Flag within the VTA, we employed immunohistochemical techniques using diaminobenzidine detection and light microscopy as described previously (Allen and MacPhail, 1991; Ross et al., 2006; Shank et al., 2007). Briefly, sections were blocked with a solution containing 1.5% normal goat serum (Vector Laboratories, Burlingame, CA, USA) in PBS with 0.4% Triton-X (PBS-T; Sigma), followed by incubation in PBS-T containing a polyclonal antibody for either the 5-HT 2A R (1:1000; courtesy of Dr. Bryan Roth, Case Western Reserve University, Cleveland, OH, USA; Garlow et al., 1993; Roth et al., 1995; Cornea-Hebert et al., 2002; Nocjar et al., 2002; Bubar et al., 2005; Ross et al., 2006) or the Flag peptide (1:1000; Sigma). Sections were washed in PBS, incubated in PBS-T containing biotinylated goat anti-rabbit IgG (1:400; Vector), incubated in an avidin-biotin-horseradish peroxidase complex (Vector), washed in Tris buffer and developed in 3,3′-diaminobenzidine (0.5 mg/ml; Sigma) with 0.005% H2O2. Sections were mounted onto slides and coverslipped, followed by visualization with an Olympus Vanox-T AH2 microscope and image capture using a Pixera Professional camera (VCS10132; Sherwood Dallas, Co., Dallas, TX, USA) connected to a personal computer. Images of the VTA ipsilateral and contralateral to the injection site were captured and each image was subsequently cropped to a fixed-size rectangle of 420 × 1238 pixels located within the parabrachial-paranigral subnuclei of the VTA (Phillipson, 1979; Swanson, 1982) for comparative analyses. In accordance with recent image guidelines (Couzin, 2006), Adobe Photoshop (Adobe Systems, San Jose, CA, USA) was employed to mask dark shadows arising as injection artifacts on several sections. The fixed-size images of the VTA were analyzed using a program written in Matlab (MathWorks, Inc., Natick, MA, USA) and the red channel of the image data was used for analysis (Hillman, 1984; Pollandt et al., 2005; Liu et al., 2007).
The intensity histogram was computed, producing a bell-shaped curve, skewed toward the dense side by the presence of darker (stained) pixels, which occupy a small fraction of the image area. The non-stained tissue density was modeled by fitting a normal curve to the upper portion of the histogram, using the Marquard non-linear least-squares method. The fitted curve was subtracted from the observed histogram on the dense side of the peak, providing an estimate of the intensity distribution of staining. A threshold was selected that was 2.6 standard deviations below the mean of the fitted background curve. Pixels darker than the threshold were considered to be stained and were displayed as a map for visual confirmation. The number of such pixels was counted to quantify immunolabeling (Hillman, 1984;Pollandt et al., 2005;Liu et al., 2007). Total immunolabeling was determined as the sum of stained pixels weighted by their density below the staining threshold and was calculated from VTA images ipsilateral and contralateral to the infusion site. The difference in total immunolabeling from the ipsilateral minus contralateral VTA from each animal was compared between infusion groups with an unpaired Wilcoxon test. To explore localization of 5-HT 2A R within VTA cells and colocalization of 5-HT 2A R in DA neurons, confocal microscopy Bubar et al., 2011;Anastasio et al., 2013) was utilized to study double-label immunofluorescence with previously validated antibodies for the 5-HT 2A R (Garlow et al., 1993;Roth et al., 1995;Cornea-Hebert et al., 2002;Nocjar et al., 2002;Bubar et al., 2005) and tyrosine hydroxylase (TH; Browning et al., 2005). Methods for immunofluorescence were similar to those described above with a few minor modifications. A separate cohort of rats (n = 7) was unilaterally infused with lactated ringer's solution control or rAAV-5-HT 2A R-Flag (as described above) and sacrificed 4 weeks following infusion. Sections (25 µm) were prepared using the Leica cryostat, followed by several washes and incubation in blocking serum (PBS-T plus 1.5% goat serum) as Invitrogen) antibodies at room temperature. Last, sections were mounted as described above and labeling visualized at the UTMB Infectious Disease Optical Imaging Core using a Zeiss LSM 510 Meta confocal microscope and image capture with LSM 5 imaging software (Carl Zeiss Microimaging, Thornwood, NY, USA) that was connected to a personal computer. EFFECTS OF 5-HT 2A R OVEREXPRESSION ON BASAL AND COCAINE-EVOKED ACTIVITY To test the hypothesis that overexpression of the 5-HT 2A R in the VTA enhances basal or cocaine-evoked hyperactivity, male rats (n = 10/group) were pretreated with intra-VTA infusion of either lactated Ringer's solution (control) or rAAV-5-HT 2A R-Flag (virus). Of these, nine control rats exhibited needle placements positioned in the VTA (see below); one animal contained a needle placement outside of the VTA, and was thus excluded. Of viruspretreated animals, five exhibited proper VTA placement as well as virally mediated overexpression of the 5-HT 2A R (see below). One virus-pretreated animal exhibited overexpression in the hypothalamus as a result of incorrect placement, and four animals did not exhibit 5-HT 2A R overexpression; these animals were excluded from analysis. Basal locomotor activity measured on Days 7, 14, and 21 was analyzed for control animals with proper VTA placements and virus-pretreated animals with overexpression confined to the VTA (Figure 1). 
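A compact sketch of the histogram-based quantification described above (fit a normal curve to the unstained side of the intensity histogram, threshold 2.6 standard deviations below its mean, then count and density-weight the stained pixels) is given below. The original program was written in Matlab, so the NumPy/SciPy version here is our illustrative translation rather than the authors' code, and the exact form of the density weighting is one plausible reading of the description.

```python
# Illustrative translation of the described immunolabeling quantification (original was Matlab).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def total_immunolabeling(red_channel):
    """red_channel: 2-D uint8 array cropped to the fixed-size VTA rectangle."""
    counts, edges = np.histogram(red_channel, bins=256, range=(0, 256))
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Model the non-stained (bright) tissue by fitting a normal curve to the upper
    # portion of the histogram, i.e. intensities at and above the histogram peak
    # (curve_fit uses Levenberg-Marquardt least squares for unbounded problems).
    peak = centers[np.argmax(counts)]
    upper = centers >= peak
    (amp, mu, sigma), _ = curve_fit(gaussian, centers[upper], counts[upper],
                                    p0=(counts.max(), peak, 10.0))

    # Threshold 2.6 standard deviations below the mean of the fitted background curve;
    # pixels darker than the threshold are considered stained.
    threshold = mu - 2.6 * sigma
    stained = red_channel[red_channel < threshold]

    n_stained = stained.size
    # Sum of stained pixels weighted by how far their density falls below the threshold
    # (one plausible interpretation of "weighted by their density below the staining threshold").
    weighted = float(np.sum(threshold - stained))
    return n_stained, weighted
```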
The levels of basal locomotor activity observed, regardless of pretreatment, were similar to levels of locomotor activity evoked upon saline injection in previous studies (Filip and Cunningham, 2002; Bubar et al., 2003). There was no main effect of pretreatment (F(1, 41) = 0.11, p = 0.747), day (F(2, 41) = 3.17, p = 0.06), or a pretreatment × day interaction (F(2, 41) = 3.18, p = 0.06) observed for basal horizontal activity on Days 7, 14, and 21 after viral injections. A priori comparisons indicated that basal horizontal activity did not differ between pretreatment groups on any test day (Figure 1A). For basal rearing activity, a main effect of day (F(2, 41) = 10.97, p = 0.0004) in the absence of a main effect of pretreatment (F(1, 41) = 0.65, p = 0.435) or a pretreatment × day interaction (F(2, 41) = 0.46, p = 0.638) was observed (Figure 1B); a priori comparisons between treatment groups failed to indicate significant differences in basal rearing activity between control and virus treatment groups on any given test day. Levels of basal activity in animals with misplaced rAAV-5-HT 2A R-Flag infusions outside of the VTA did not differ from control animals on days 7, 14, or 21 (data not shown; p > 0.05). On Day 21, following the measurement of basal locomotor activity, all animals were challenged with 15 mg/kg of cocaine and activity was recorded for 60 min (Figure 2). A main effect of pretreatment (F(1, 83) = 7.84, p = 0.016), time (F(5, 83) = 28.37, p < 0.0001), and a pretreatment × time interaction (F(5, 83) = 28.37, p < 0.0001) were observed for cocaine-evoked horizontal activity measured in 10-min time bins during the 60-min test (Figure 2A, left panel). A priori comparisons indicated that viral pretreatment was associated with significantly greater cocaine-evoked horizontal activity during each of the first two time bins (10 and 20 min) of the test period as compared to control animals (p < 0.008/comparison). A trend for increased cocaine-evoked horizontal activity was observed but was not statistically significant at the 30 min (p = 0.04) and 40 min time bins (p = 0.04). The a priori analysis indicated that virus pretreatment was associated with significantly greater levels of cocaine-evoked horizontal activity totaled for the entire 60-min test session (Figure 2A, right panel; p < 0.05). Levels of cocaine-evoked horizontal activity in animals with misplaced rAAV-5-HT 2A R-Flag infusions outside of the VTA did not differ from control animals (data not shown; p > 0.05). Rearing activity was also measured following cocaine injection on Day 21. A main effect of pretreatment (F(1, 83) = 6.61, p = 0.025) and a pretreatment × time interaction (F(5, 83) = 3.13, p = 0.014), but not a main effect of time (F(5, 83) = 0.82, p = 0.538), were observed for cocaine-evoked rearing activity measured in 10-min time bins during the 60-min test (Figure 2B). A priori planned comparisons indicated that viral pretreatment was associated with greater cocaine-evoked rearing activity at the 50 min time bin (p < 0.008/comparison), with the comparisons made at the 30 min (p = 0.047), 40 min (p = 0.01), and 60 min time bins (p = 0.023) of the test period approaching statistical significance (Figure 2B, left panel).
The a priori analysis indicated that virus pretreatment was associated with significantly greater levels of cocaine-evoked rearing activity totaled across the entire 60-min test session compared to control animals (Figure 2B, left panel; p < 0.05). Levels of cocaine-evoked rearing activity in animals with misplaced rAAV-5-HT 2A R-Flag infusions outside of the VTA did not differ from control animals (data not shown; p > 0.05). 5-HT 2A R AND FLAG IMMUNOHISTOCHEMISTRY Following the completion of behavioral testing on Day 21, animals were sacrificed and immunohistochemistry was performed to confirm overexpression of 5-HT 2A R and expression of Flag in the VTA (Figure 3). A representative photomicrograph depicting 5-HT 2A R immunolabeling in the VTA ipsilateral to the infusion site from a control animal illustrates that the majority of the 5-HT 2A R immunoreactivity seems to be confined to cell bodies, with little fiber labeling (Figure 3A). Control animals exhibited little Flag background labeling (Figure 3B). In contrast, 5-HT 2A R immunolabeling in the ipsilateral VTA from a virus animal infused with rAAV-5-HT 2A R-Flag (Figure 3C) illustrates a distinct pattern of 5-HT 2A R immunolabeling characterized by robust 5-HT 2A R immunoreactivity in both cell bodies and fibers. A brain section labeled with the anti-Flag antibody and adjacent to that shown in Figure 3C shows a similar expression of immunoreactivity, with labeled cell bodies as well as fibers (Figure 3D). Additionally, the arrows indicate that the labeled cells in Figures 3C,D appear to be identical, providing further evidence of successful overexpression. A comparison of the 5-HT 2A R immunoreactivity quantified in the VTA (Figure 3E) was made using an unpaired Wilcoxon test. Control animals exhibited similar, moderate levels of 5-HT 2A R immunoreactivity in the VTA ipsilateral (Figure 3A) and contralateral (data not shown) to the infusion site. Infusion of rAAV-5-HT 2A R-Flag resulted in overexpression of 5-HT 2A R in the ipsilateral (Figure 3C), but not contralateral, VTA (data not shown). The total net immunolabeling (ipsilateral minus contralateral immunolabeling) was then calculated in individual animals in order to normalize 5-HT 2A R overexpression to basal 5-HT 2A R levels in the brain hemisphere contralateral to viral infusion. This total net immunolabeling was then compared between pretreatment groups, and the results indicate that greater levels of 5-HT 2A R expression were exhibited in virus-pretreated animals (p < 0.05; Figure 3E). Quantification of Flag immunoreactivity with this same procedure also revealed robust levels of Flag labeling in virus-pretreated animals, as compared to control animals (p < 0.05; Figure 3F), further confirming successful expression of the transgene. Levels of 5-HT 2A R and Flag immunoreactivity in animals with misplaced rAAV-5-HT 2A R-Flag infusions outside of the VTA did not differ from control animals (data not shown; p > 0.05). Confocal microscopy was utilized to analyze tissue sections processed for double-labeled 5-HT 2A R and TH immunofluorescence in the VTA in order to assess localization of 5-HT 2A R to DA neurons (Figure 4) from animals pretreated with vehicle (Figure 4A) or AAV-5-HT 2A R-Flag (Figures 4B-F). Figure 4A demonstrates a composite confocal image (24 sections, 0.68 µm/slice) of 5-HT 2A R immunoreactivity from the VTA of a control animal.
Immunolabeling is predominantly confined to cell body regions ( Figure 4A) in keeping with our observation of 5-HT 2A R staining using DAB (above; Figure 3) and previous studies using the same anti-5-HT 2A R antibody (Nocjar et al., 2002;Bubar et al., 2005). Figure 4B demonstrates a composite confocal image (24 sections, 0.70 µm/slice) of 5-HT 2A R immunoreactivity in the VTA of an animal infused with rAAV-5-HT 2A R-Flag. Overexpression is indicated by the robust immunoreactivity in both cell bodies and fibers ( Figure 4B). Figures 4C-F represents composite confocal images of double-label immunofluorescence from an animal infused with rAAV-5-HT 2A R-Flag. Figure 4C demonstrates a composite confocal image (24 sections, 0.71 µm/slice) of 5-HT 2A R immunoreactivity in the VTA of an animal infused with AAV-5-HT 2A R-Flag, and Figure 4D demonstrates immunoreactivity for TH in the same tissue section as that shown in Figure 4C. The composite image represents localization of 5-HT 2A R immunoreactivity in a TH-positive cell ( Figure 4E). Additionally, 20 of the serial Z -sections that comprise the composite image in Figure 4E are shown in Figure 4F Bubar et al., 2011;Anastasio et al., 2013). DISCUSSION The present study is the first to demonstrate that overexpression of 5-HT 2A R protein in the VTA enhances the behavioral effects of cocaine following successful virally mediated overexpression of the 5-HT 2A R in the adult rat. Intra-VTA transduction with the rAAV-5-HT 2A R-Flag vector, which produced quantifiable overexpression of 5-HT 2A R and appearance of the Flag protein in VTA neurons, had little effect on basal levels of motor activity, but significantly enhanced cocaine-evoked motility relative to controls. These results are in line with an overall facilitatory role for the 5-HT 2A R in mediating cocaine-evoked behaviors (see, Bubar and Cunningham, 2008;Nic Dhonnchadha and Cunningham, 2008) and support the hypothesis that the VTA is a key site of action for the 5-HT 2A R to control the behavioral effects of cocaine. The current results revealing that overexpression of 5-HT 2A R in the VTA enhances cocaine-evoked hyperactivity are in accordance with a previous study from our laboratory demonstrating that intra-VTA microinjection of the selective 5-HT 2A R antagonist M100907 blocked cocaine-evoked hyperactivity . These effects parallel those of systemic injection of 5-HT 2A R ligands, as 5-HT 2A R antagonists block and 5-HT 2A R agonists enhance the hypermotive effects of cocaine Fletcher et al., 2002;Filip et al., 2004) and other stimulants (Auclair et al., 2004;Herin et al., 2005). These data implicate the VTA as a critical site of action for the positive modulatory control of 5-HT 2A R over psychostimulant-evoked motor activity. This stimulatory role for VTA 5-HT 2A R upon cocaine-evoked hypermotility appears to occur in the absence of an apparent tonic regulatory influence of this receptor on motor activity. Overexpression of the 5-HT 2A R in the VTA has no effect upon basal levels of motility evoked upon exposure to the activity monitors. These data are consistent with studies in the literature demonstrating that selective blockade of 5-HT 2A R in the VTA does not alter spontaneous locomotor behavior Auclair et al., 2004). However, we have observed the enhancement of motor activity upon intra-VTA infusion of the non-selective 5-HT 2A R agonist DOI (Herin et al., unpublished observations). 
Together, these data indicate that, despite robust enhancement of 5-HT 2A R immunoreactivity in both cell bodies and fibers relative to controls, elevated expression of the 5-HT 2A R in the VTA alone is not sufficient to induce overt alterations in basal motor activation. However, the elevated VTA 5-HT 2A R expression generates an augmented and positive modulatory effect over cocaine-evoked hyperactivity. Several characteristics of the 5-HT 2A R may account for the low levels of basal 5-HT 2A R function. For example, the 5-HT 2A R exhibits moderate affinity for 5-HT (Peroutka, 1986;Rothman et al., 2000;Leysen, 2004) and possesses modest constitutive activity in the absence of ligand binding (Berg et al., 2005). In www.frontiersin.org Frontiers in Psychiatry | Addictive Disorders and Behavioral Dyscontrol addition, although the 5-HT 2A R is thought to primarily localize to somata or dendrites postsynaptic to 5-HT terminals, ultrastructural localization studies indicate that the receptor prominently localizes to the cytoplasm, rather than the plasma membrane, and is primarily found in extrasynaptic regions (Cornea-Hebert et al., 1999;Doherty and Pickel, 2000). Such localization patterns suggest that a component of 5-HT actions at the 5-HT 2A R may occur via paracrine or volume transmission which may be minimal at baseline (Miner et al., 2000;Jansson et al., 2001). Thus, activation of the 5-HT 2A R receptors in VTA may only occur during periods of stimulated 5-HT release like that evoked following systemic cocaine administration (Chen and Reith, 1994). The differential effects of 5-HT 2A R antagonists delivered into the VTA upon basal vs. cocaine-stimulated motor activity have been attributed to their efficacy to alter the activation status of the DA mesoaccumbens pathway to control locomotor activity (Kelly and Iversen, 1976;Broderick et al., 2004), and are in accordance with a prevailing hypothesis that the 5-HT 2A R modulates DA mesocorticoaccumbens neurotransmission only under "stimulated" conditions (Schmidt et al., 1992;De Deurwaerdere and Spampinato, 1999;Di Giovanni et al., 1999;Bonaccorso et al., 2002;Kuroki et al., 2003;Auclair et al., 2004). In accordance with the lack of effects of intra-VTA administration of 5-HT 2A R antagonists on basal levels of motor activation Auclair et al., 2004), local infusion of 5-HT 2A R antagonists into the VTA failed to alter basal DA release in the NAc (Auclair et al., 2004), nor did perfusion of 5-HT 2A R antagonists alter firing rates of VTA DA neurons in a midbrain slice preparation (Olijslagers et al., 2004). Conversely, intra-VTA 5-HT 2A R antagonist administration significantly blocked systemic cocaine- and amphetamine-evoked hypermotility (Auclair et al., 2004) and associated amphetamine-evoked NAc DA release (Auclair et al., 2004). Thus the selective effects of VTA 5-HT 2A R overexpression upon cocaine-evoked as opposed to basal motor activity are likely due to 5-HT 2A R-mediated facilitation of DA mesoaccumbens neurotransmission under stimulated vs. tonic conditions, respectively. The 5-HT 2A R is natively resident within DA and non-DAergic (GABA-or possibly glutamate-containing) neurons in the VTA (Doherty and Pickel, 2000;Ikemoto et al., 2000;Nocjar et al., 2002;Yamaguchi et al., 2007) although DA neurons comprise the majority of VTA neuronal cells (Swanson, 1982;Johnson and North, 1992;Ikemoto, 2007). Indeed, our confocal immunofluorescence studies (see Figure 4) provide evidence of 5-HT 2A R overexpression in VTA DA neurons. 
Activation of elevated levels of 5-HT 2A R resident in DA neurons consequent to cocaineevoked elevations in 5-HT efflux (Chen and Reith, 1994) would be expected to increase activity of DA neurons (Pessia et al., 1994) and release of DA in terminal regions . As noted above, enhanced DA release in the NAc correlates positively with generation of hypermotility (Kelly and Iversen, 1976). Thus, overexpression of 5-HT 2A R within DA neurons that project to the NAc would serve to enhance cocaine-evoked hyperactivity, as was observed in the present study. Although the overexpression of 5-HT 2A R in the DA neurons aligns with the observed behavioral profile, 5-HT 2A R overexpression also likely occurred in non-DAergic VTA neurons, presumably GABA interneurons and/or projection neurons, or possibly glutamate neurons (Yamaguchi et al., 2007; see Figure 4), since the constitutively active promoter utilized evokes gene expression in all neuronal cell types (Kaplitt et al., 2007;St Martin et al., 2007). Future studies employing a promotor that would direct viral expression to either DA, GABA, or glutamate neurons would help to discern the contribution of 5-HT 2A R overexpression within the particular neuronal cell type to basal vs. cocaine-evoked locomotor activity. The behavioral phenotype observed following rAAV-5-HT 2A R-Flag infusion is most likely mediated by neurons intrinsic to the VTA. Measurable 5-HT 2A R overexpression was confined to the site of infusion, and adjacent brain regions (especially, substantia nigra; data not shown) did not demonstrate patterns of 5-HT 2A R overexpression. Second, in keeping with this observation, cells projecting to the VTA were not likely to be transduced, as the virus utilized in these studies (AAV-2) is not readily transported retrogradely following infusion into the brain (Chamberlin et al., 1998). Third, the cells transduced in the VTA fit the morphological profile suggestive of neurons, consistent with previous observations that AAV-2 does not readily transduce glial cells (Chamberlin et al., 1998). Fourth, while theoretically possible that viral vector transduction per se could evoke behaviorally relevant cellular changes, this possibility is highly unlikely. Previous studies have shown that transduction of neural tissue with AAV vectors does not alter the electrophysiological properties of neurons (Ehrengruber et al., 2001) or result in neurotoxicity (Lo et al., 1999), while animals infused intracranially with either vehicle or control viral vectors (Carlezon Jr. et al., 1997;Pliakas et al., 2001), including AAV (Landgraf et al., 2003), exhibit equally normal patterns of behavior. Basal locomotor activity did not differ between the animals infused with control vs. AAV in the present study, and the levels of activity observed were similar to that reported in previous studies following saline injection studies Filip and Cunningham, 2002;Bubar et al., 2003). Finally, animals with rAAV-5-HT 2A R-Flag infusions located outside the VTA exhibited levels of cocaine-evoked hyperactivity similar to that of control animals. Altogether, these data point to the overexpression of 5-HT 2A R in the VTA as the most likely contributor to the observed enhancement of cocaine-evoked hyperactivity following rAAV-5-HT 2A R-Flag infusion. 
The results of present study suggest that expression levels of the 5-HT 2A R in the VTA regulate vulnerability to the hypermotive effects of cocaine and support a possible role for VTA 5-HT 2A R in modulating other behavioral effects of cocaine mediated by DA mesocorticoaccumbens circuitry. In addition to altering cocaine-evoked hyperactivity, systemic administration of 5-HT 2A R antagonists has been shown to reduce the discriminative stimulus effects of cocaine Filip et al., 2006) and cocaine-evoked behavioral disinhibition (i.e., impulsivity; Anastasio et al., 2011;Fletcher et al., 2011), as well as to block expression of cocaine sensitization (Filip et al., 2004;Zayara et al., 2011). Furthermore, although the 5-HT 2A R does not appear to modulate cocaine intake in the self-administration assay (Fletcher et al., 2002;Nic Dhonnchadha et al., 2009), 5-HT 2A R antagonists have been shown to attenuate both cocaine-and cueevoked reinstatement of cocaine-seeking (Fletcher et al., 2002;Nic Dhonnchadha et al., 2009). However, no studies have evaluated specifically the role of VTA 5-HT 2A R receptors in cocaine-evoked behaviors other than locomotor hyperactivity , though 5-HT 2A R in the NAc (Zayara et al., 2011) and PFC (Pockros et al., 2011) have been implicated in sensitization and cue-evoked reinstatement of cocaine-seeking, respectively. Here we employed a single, low dose of cocaine (15 mg/kg) that consistently induces hypermotility in the absence of overt stereotypic behaviors (Herges and Taylor, 1998) to further establish a critical role for VTA 5-HT 2A R in the hypermotive effects of cocaine. Even with the small sample size employed in the current study, the behavioral response to the single dose of cocaine produced a robust behavioral response with little variability. Our results combined with the knowledge regarding 5-HT 2A R regulation of DA mesocorticoaccumbens activation provide the impetus to conduct more thorough investigations into the role of 5-HT 2A R regulation in the VTA not only in the hypermotive effects of cocaine, but also more complex cocaine-associated behaviors. Furthermore, the methods established here utilizing rAAV-5-HT 2A R-Flag to overexpress the 5-HT 2A R can be employed to evaluate the role of elevated 5-HT 2A R expression throughout the brain.
2016-05-17T21:49:57.541Z
2012-11-30T00:00:00.000
{ "year": 2012, "sha1": "d5154af368f6c2b80281c49fae9981954cad053a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2013.00002/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d5154af368f6c2b80281c49fae9981954cad053a", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267955459
pes2o/s2orc
v3-fos-license
Wireless Patient Monitoring System Based on Smart Wristbands and Central User Interface Software In this article, a patient monitoring system is proposed that is able to obtain heart rate and oxygen saturation (SpO2) levels of patients, identify abnormal conditions, and inform emergency status to the nurses. The proposed monitoring system consists of smart patient wristbands, smart nurse wristbands, central monitoring user interface (UI) software, and a wireless communication network. In the proposed monitoring system, a unique smart wristband is dedicated to each of the patients and nurses. To measure heart rate and SpO2 level, a pulse oximeter sensor is used in the patient wristbands. The output of this sensor is transferred to the wristband's microcontroller where heart rate and SpO2 are calculated through advanced signal processing algorithms. Then, the calculated values are transmitted to central UI software through a wireless network. In the UI software, received values are compared with their normal values and a predefined message is sent to the nurses' wristband if an abnormal condition is identified. Whenever this message is received by a nurse's wristband, an acoustic alarm with vibration is generated to inform an emergency status to the nurse. By doing so, health services are delivered to the patients more quickly and, as a result, the probability of patient recovery is increased effectively. Introduction Nowadays, hospitals use wristbands to keep track of patients' identity information, including name, birth date, and medical record number. Often, these wristbands are made from paper and are disposable. [1,2] Nurses and doctors check these wristbands before administering medicine, taking samples, or performing surgery to ensure that each patient receives his/her prescribed treatment correctly. [3] However, this identification method has significant drawbacks arising from human errors and wristband inadequacies; for example, a person received $12 million in damages from a Maryland hospital that misidentified him and treated him for cancer instead of head trauma. [4] Moreover, the National Practitioner Data Bank recorded about 3700 wrong-treatment/wrong-procedure errors in US hospitals over a period of 13 years. [5] On the other hand, using disposable wristbands in hospitals imposes a large economic burden on the health-care system; for example, the US Food and Drug Administration (FDA) estimated that adopting electronic medication systems in hospitals could save around $93 billion in treatment costs over 20 years.
[6]cording to the explanations, above, using electrical patient identity wristbands in hospitals can yield the following advantages: (i) patient identity information can be programmed into wristbands through the central information section of hospitals; hence, the errors arise from hospital personnel can be avoided effectively; (ii) Since electrical wristbands are not disposable and can be used several times, treatment cost of hospitals reduces effectively after a long period compared to when paper wristbands are used; (iii) smart wristbands can be equipped with various biomedical sensors; by doing so, preferable vital signals can be obtained easily from patients; (iv) smart wristbands can be equipped with a wireless data transceiver; by doing so, vital signals obtained from internal equipment of wristband can be transmitted to a preferable receiver and monitored for a long time.Moreover, abnormal conditions can be identified immediately; hence, patients with serious conditions receive treatment as soon as possible.As a result, the probability of patient revival from critical situation increases effectively. In this article, a wireless patient monitoring system is proposed which is able to monitor the oxygen saturation (SpO 2 ) level and heart-rate value of patients, identify critical conditions, and notify emergency conditions to the nurses immediately.To realize this patient monitoring system, a special type of smart patient wristbands is designed and implemented which is able to calculate SpO 2 level and heart-rate value of a patient through a pulse oximeter sensor and then transmit the calculated values through the wireless transceiver embedded into the wristband.Moreover, a central graphical user interface (GUI) software is proposed that receives the calculated values from the wristband through a wireless receiver. In the proposed GUI software, the received quantities are plotted.Moreover, the received values are compared with their legal values accordingly.Whenever SpO 2 level or heart-rate value of a person is outside the normal range, relevant nurses are notified immediately through the central GUI software by acoustic alarm and vibration.Furthermore, the patient information as well as his/her bed number is transmitted to the nurses' wristband to guide the nurses to the patient that has a serious condition. Details about the design and implementation of the proposed patient monitoring system are presented in this article as follows: Hardware of the proposed smart wristband is introduced in section II; the implemented wireless network is presented in section III; the proposed GUI is introduced in section IV; in section V, the developed wristband is compared with some of the recent similar commercial smart-wristwatches; experimental results are presented in section VI, and the conclusion is presented in section VII. Proposed Smart Wristbands To monitor the heart-rate value and SpO 2 level of patients, two types of smart wristbands are proposed in this study: (i) patient wristbands and (ii) nurse wristbands.In the following subsections, both types of the wristbands are introduced in detail. 
Patient wristband The patient wristbands are designed such that they can do the following tasks: (i) illustrating the patient identity information including name, birth date, national code, medical records summary, etc., on the wristband's screen; (ii) receiving data from pulse oximeter sensor; (iii) calculating heart-rate value and SpO 2 level of the patient by executing several signal processing algorithms, (iv) transmitting the calculated values to the nurse's wristband and central GUI software through a Zigbee network. To achieve the objectives, above, the following components are used in the hardware of patient wristbands: (i) a low-power 0.96" Organic light-emitting diode (OLED), (ii) STM32WB55CGU6 microcontroller which is a dual-core high-performance ARM microcontroller equipped with an internal wireless transceiver, (iii) a MAX30100 pulse oximeter sensor, (iv) 1200 mAh lithium-polymer battery, (v) two light-emitting diodes (LEDs) with different colors (red and yellow), (vi) a TTP223 proximity sensor, (vii) an AT24C08 electrically erasable programmable read-only memory (EEPROM), and (viii) a push-button. The top view of the proposed patient wristband is illustrated in Figure 1.As observed, each patient's wristband consists of two parts: a bracelet closed to the patient's wrist and a pulse oximeter probe fixed on the patient's fingertip. At reception time, patient identity information as well as patient's medical records are recorded into the central GUI software and then sent to the patient's wristband.When information is received by the wristband, it is shown on the wristband's screen, instantly.The red LED of the wristband turns on if the patient has previous medical allergies while the yellow LED of the wristband turns on if the patient requires special care (e.g. for seizures and bedsore). To manage the power consumption of the proposed wristband, TTP223 proximity sensor is placed on the bottom layer of the pulse oximeter probe.The location of the proximity sensor in the wristband is adjusted such that whenever the patient's finger is removed from the front of the pulse oximeter sensor, a trigger pulse is sent to the microcontroller.After receiving this pulse by the microcontroller, pulse oximeter sensor is forced into the standby mode of operation by the microcontroller.In contrast, whenever the patient's finger is placed in front of the pulse oximeter sensor, the pulse oximeter is turned on and SpO 2 level as well as heart rate value of the patient is calculated by the sensor.MAX30100 sensor combines two LEDs with RED and Infrared Radiation (IR) wavelengths, two photodetectors, optimized optics, and a low-noise analog-to-digital converter (ADC) unit to measure SpO 2 level and heart-rate value. [7]The SpO 2 subsystem of the MAX30100 is composed of ambient light cancellation, 16-bit sigma-delta ADC, and proprietary discrete time filter.The MAX30100's ADC is a continuous sigma-delta converter with up to 16-bit resolution.The MAX30100 digital output data is stored in a 16-deep FIFO that can be accessed through an I2C serial port of the microcontroller.The internal block diagram of MAX30100 sensor as well as its schematic is illustrated in Figure 2. 
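As a rough illustration of the FIFO access just described, the following sketch reads and unpacks samples over I2C from a Linux host (for example, during bench bring-up) rather than from the STM32 firmware itself. The I2C address (0x57), the FIFO data register (0x05), and the 4-byte IR/RED sample layout follow my reading of the MAX30100 datasheet and should be verified against it; sensor configuration (mode, LED currents, sample rate) is omitted here.

```python
from smbus2 import SMBus

MAX30100_ADDR = 0x57       # 7-bit I2C address (assumed from the datasheet)
REG_FIFO_DATA = 0x05       # FIFO data register (assumed from the datasheet)
SAMPLE_BYTES = 4           # IR MSB, IR LSB, RED MSB, RED LSB per sample

def read_samples(bus, n_samples):
    """Read n_samples from the FIFO and return (ir, red) lists of 16-bit values."""
    ir, red = [], []
    for _ in range(n_samples):
        raw = bus.read_i2c_block_data(MAX30100_ADDR, REG_FIFO_DATA, SAMPLE_BYTES)
        ir.append((raw[0] << 8) | raw[1])
        red.append((raw[2] << 8) | raw[3])
    return ir, red

if __name__ == "__main__":
    with SMBus(1) as bus:                 # I2C bus 1 on a typical Linux board
        ir, red = read_samples(bus, 16)   # the FIFO is 16 samples deep
        print("IR:", ir)
        print("RED:", red)
```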
STM32WB55CGU6 which is an ultra-low power dual-core ARM microcontroller is used in the developed wristband as the main processing unit.The first core of the microcontroller is dedicated to the execution of processing operations while the second core is dedicated to wireless data communication.The wireless core of STM32WB microcontroller supports WiFi, Zigbee, and Bluetooth low energy (BLE) protocol. The digital output of the pulse oximeter is received by I2C port of the microcontroller and stored in a digital array.In the next step, various signal processing algorithms are applied to the received data to calculate the heart-rate value and SpO 2 level of the patient.Details of the signal processing algorithms adopted in this study to calculate the heart-rate value and SpO 2 level are presented in subsection II.3 of the article. The calculated values are transmitted to nurses' wristbands and central GUI software through the wireless transceiver core of the microcontroller.Details of the wireless network implemented in this study to transfer data between the wristbands and GUI are presented in section III of this article. A push button is provided on the body of the patient's wristband; whenever it is pressed by the patient, his/her request as well as his/her name and bed number is transmitted to the relevant nurse's wristband.Therefore, a wireless nurse calling system can be realized through the developed wristbands and the implemented patient monitoring system. To keep the patient identity information unchanged whenever the wristband supply is lost, an AT24C08 EEPROM is used in patient wristbands.Whenever the wristband's battery voltage becomes lower than the predefined threshold value, patient information is stored in the EEPROM and when the battery voltage is increased again, the information is invoked from the EEPROM.Schematics of various parts of the wristband's hardware are presented in Figure 3.In Figure 3a, a schematic of linear voltage regulator adopted to achieve a fixed 3.3V supply is illustrated while in Figure 3b, schematics of MAX30100 sensor, OLED screen, and AT24C08 EEPROM are presented.In Figure 3c, the schematic of the microcontroller as well as its necessary external components is presented.In Figure 3d, schematics of the two LEDs (red and green), push button, universal-asynchronous-receiver-transmitter (UART) port, and programmer port are illustrated.In Figure 3e, the printed circuit board (PCB) of the developed hardware is presented where the top layer is shown on the left side and bottom-layer is shown in the right side.Finally, in Figure 3f, top view of the developed wristband's hardware is presented. Nurse wristband The nurse wristbands are designed such that they can do the following tasks: (i) monitoring heart-rate value and SpO 2 level of under-supervision patients on the wristband's screen, (ii) notifying emergency situation to nurses by generating acoustic alarms and vibrations. To achieve the objectives, mentioned above, nurse wristbands are implemented by using the following equipment: (i) a low-power 0.96" OLED, (ii) STM32WB55CGU6 microcontroller which is a dual-core high-performance ARM microcontroller equipped with an internal wireless transceiver, (iii) a 1200 mAh lithium-polymer battery, (iv) a push button, (v) a buzzer, (vi) a vibrator, and (vii) an AT24C08 EEPROM. The top view of nurse wristbands is just similar to that of the patient wristbands [Figure 1] with only one difference that pulse oximeter probe is eliminated in the nurse wristband. 
To minimize the power consumption of the developed nurse wristband, its screen keeps off until the push button placed on the wristband's body is pressed by the nurse. In this situation, the OLED is turned on and patient information, including patient name, bed number, heart-rate value, and SpO 2 level, is shown on the wristband's screen. By pressing the push button again, information of the next patient is shown on the OLED. Ten seconds after pressing the push button, the wristband's screen is turned off automatically. Whenever an emergency condition is alerted by the central GUI or a call request is received from a patient, an acoustic alarm with vibration is generated by an internal component of the wristband to inform the nurse. Moreover, the type of the alarm (emergency condition or patient request) as well as the patient's name and bed number appears on the wristband's screen to guide the nurse to the corresponding patient.
Signal processing algorithm
Based on the MAX30100 datasheet, whenever the internal FIFO of the sensor becomes full, a falling edge appears on the interrupt pin (INT) of the sensor. The digital words stored in the sensor's FIFO can be read through an I2C serial port of the microcontroller. Based on the explanations above, the following steps are conducted in the developed wristband to calculate the SpO 2 level and heart-rate value of a patient:
i. The INT pin of the MAX30100 is connected to one of the general-purpose input/output (GPIO) pins of the microcontroller, and the GPIO pin is configured as an external interrupt pin. The serial clock (SCL) and serial data (SDA) pins of the sensor are connected to one of the I2C ports of the microcontroller, I2C1_SCL and I2C1_SDA, respectively.
ii. In the service routine function of the GPIO interrupt, the MAX30100's FIFO is read through the I2C serial port of the microcontroller, and the received data are stored in a digital array. For convenience, the received data were first processed in MATLAB; after finalizing the signal processing algorithms and obtaining desirable simulation results, the finalized algorithms were coded in C and programmed into the microcontroller. In order to process the data in MATLAB, the received data are transmitted to a personal computer (PC) using a UART serial port of the microcontroller and a CP2102 TTL-to-USB converter module.
iii. In the PC, the data are received by the serial port of MATLAB, where the following signal processing algorithms are applied to the received data to calculate the SpO 2 level and heart-rate value:
• The second-order digital notch filter proposed in Hirano [8] is adopted to reject undesirable spikes and notches that exist on the received signals. The transfer function of the filter is given in Eq. 1, where λ is the notch frequency at which there is no transmission through the filter and b is the 3-dB rejection bandwidth. Within the frequency band centered at ω = λ and of width b, all signal components are attenuated by more than 3 dB; moreover, the DC gain of the notch filter is 0 dB.
• A weighted moving average (WMA) filter is used to reject undesirable noises that appear on the received signals.
To keep the computational complexity as low as possible, a second-order WMA filter is used in the developed wristband; its equation is given in Eq. 2, where x(i) is the i-th sample of signal x. In subsection VI.2, the performance of the adopted filters is investigated through various simulations. As will be seen, the performance of the adopted filters is acceptable in the rejection of undesirable spikes and noises.
• The golden section search algorithm [9] is adopted to find the extremum points of the IR and RED signals. Based on the simulation results presented in subsection VI.2, the performance of this algorithm is acceptable in finding the extremum points of the signals.
• After finding the extremum points, the R-parameter is calculated through Eq. 3, where DC_IR and DC_RED represent the areas below the minimum levels of the IR and RED signals, respectively, AC_IR is the area between the DC_IR level and the IR signal, and AC_RED is the area between the DC_RED level and the RED signal; both AC areas are calculated through Simpson's algorithm. [10]
• After obtaining the R-parameter, the SpO 2 level can be calculated through Eq. 4, which is obtained by applying a curve-fitting algorithm to the R-values and SpO 2 levels obtained in the calibration stage. Further explanations can be found in subsection VI.3 of the article.
iv. The signal processing algorithm mentioned above is coded in C and programmed into the patient wristband's microcontroller.
v. In the microcontroller, apart from executing the signal processing algorithms, the following processes must be conducted simultaneously: receiving data from the SpO 2 sensor, illustrating the SpO 2 level and heart-rate value on the OLED, and transmitting data to the central GUI. To achieve the desired performance, all of these processes must be managed properly by the microcontroller. To do this, the microcontroller is programmed such that the following tasks are executed consecutively:
• Whenever a transition is identified on the INT pin of the MAX30100 module, an interrupt request is sent to the microcontroller.
• After receiving the GPIO interrupt request, the service routine function of this interrupt is executed immediately. In this function, the output of the sensor is read and stored in an array.
• The signal processing algorithm, mentioned in step (iii), is applied to the received data, and the SpO 2 level and heart-rate value are calculated.
• The calculated values are compared with their normal ranges to identify unusual conditions immediately. Whenever the calculated values are beyond their normal ranges, a predefined message is sent to the central GUI.
• The calculated values as well as the status of the patient (normal or abnormal) are sent to the GUI immediately through the internal Zigbee transceiver of the microcontroller.
• The patient wristband's screen is updated with the newest calculated values.
• The above procedure is repeated continuously whenever a new transition occurs on the INT pin of the MAX30100 sensor.
Proposed Wireless Network
In the developed patient monitoring system, data are communicated between the wristbands and the central GUI through a Zigbee network. As mentioned earlier, each wristband is equipped with a low-power dual-core STM32WB microcontroller. By proper configuration of the second core of the microcontroller, each wristband can send/receive data in a wireless manner. [11]
Wireless transceiver hardware
The radio frequency (RF) part of the developed wristband is designed based on the following principle: in order to achieve the maximum accessible coverage range for the wireless network, the maximum power must be transferred from the RF pin of the microcontroller to the antenna. [12] To satisfy the constraint above, impedance matching is essential for all parts of the RF data transmission hardware, which includes the microcontroller RF pin, the band-pass filter, and the antenna. It must be noted that a proper band-pass filter must be used in the RF part of the hardware to reject undesirable harmonics in the RF line and ensure that the harmonics in the transmitted data are compliant with the Federal Communications Commission (FCC) regulation. [13] Often, to achieve the impedance matching between two parts of a circuit, a matching filter is placed between these two parts. Consequently, the following two matching filters are placed in the hardware of the developed wristband: (i) a π-type LC filter between the RF pin of the microcontroller and the input of the band-pass filter, and (ii) a π-type LC filter between the output of the band-pass filter and the on-board antenna.
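The π-type LC sections mentioned above are conventionally dimensioned by splitting the π network into two back-to-back L-sections that share a virtual resistance. The sketch below follows that textbook procedure only; the terminating impedances (treated here as purely resistive) and the target loaded Q are illustrative assumptions, since the actual values depend on the RF-pin impedance, the band-pass filter, the antenna, and the PCB layout described in the article.

```python
import math

def pi_match(r_source, r_load, q_loaded, f_hz):
    """Low-pass pi matching network sized as two back-to-back L-sections.

    The loaded Q is referenced to the higher-resistance termination through a
    virtual resistance r_v shared by both halves (standard textbook method).
    Returns shunt capacitors C1 (source side), C2 (load side) and the series L.
    """
    r_high = max(r_source, r_load)
    r_v = r_high / (q_loaded ** 2 + 1)      # virtual resistance, below both terminations

    def l_section(r_term):
        q = math.sqrt(r_term / r_v - 1)
        x_shunt = r_term / q                # reactance of the shunt capacitor
        x_series = q * r_v                  # reactance contributed to the series inductor
        return x_shunt, x_series

    x_c1, x_l1 = l_section(r_source)
    x_c2, x_l2 = l_section(r_load)

    w = 2 * math.pi * f_hz
    return 1 / (w * x_c1), 1 / (w * x_c2), (x_l1 + x_l2) / w

# Illustrative values only: a 30-ohm RF pin matched to a 50-ohm antenna at 2.45 GHz.
c1, c2, l = pi_match(r_source=30.0, r_load=50.0, q_loaded=5.0, f_hz=2.45e9)
print(f"C1 = {c1 * 1e12:.2f} pF, C2 = {c2 * 1e12:.2f} pF, L = {l * 1e9:.2f} nH")
```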
The first matching filter is placed as close as possible to the microcontroller pin, while the second one is placed as close as possible to the antenna. The schematic of the matching filters is illustrated in Figure 3c, while the part numbers of the matching filter components are tabulated in Table 1. The values of the matching filter components depend on the impedance of the microcontroller's RF pin, the impedance of the band-pass filter input, and the impedance of the antenna. Therefore, in the developed hardware, the RF line between the microcontroller and the antenna is designed based on a single coplanar structure [14,15] in which the width of the RF line on the PCB is optimized to achieve the best impedance matching. In this optimization problem, the width of the RF line is optimized according to the thickness of the PCB core, the material of the dielectric used in the PCB, the thickness of the dielectric, the thickness of the copper on the PCB layer containing the RF line, the thickness of the silkscreen, the clearance between the RF line and the ground plane, and the frequency of the RF line. [16,17] To be compliant with FCC regulation, the frequency of wireless data communication is adjusted to 2.45 GHz and a band-pass filter with a center frequency of 2.45 GHz is placed in the RF line. Moreover, two ground planes are placed on the top and bottom layers of the developed PCB, while numerous vias are placed adjacent to the RF line for shielding purposes. [18] After preparing the wristband hardware, the firmware of the Zigbee network is programmed into the wireless core of the microcontroller, while the processing algorithms as well as the interfacing operations, which are coded in C, are programmed into the processing core of the microcontroller. Whenever a wristband is powered on, its Zigbee network is initialized first, and after stabilization of the network, the other operations begin.
Wireless protocol
Through the second core of the STM32WB microcontroller, data can be sent or received using the WiFi, Zigbee, or BLE wireless protocols. In this study, the WiFi, Zigbee, and NRF protocols were investigated for implementing the wireless network, and the Zigbee protocol was selected as the best choice since it has the lowest power consumption and the highest security level among the other protocols. [19] In each Zigbee network, a device is configured as the server (or coordinator). The secure PAN ID of the network is set initially by the coordinator. Moreover, the number of endpoints that can be connected to the network as well as the unique short address of each endpoint is specified in the network configuration stage. Consequently, illegal access to the network is not possible. On the other hand, since Zigbee networks cannot be discovered by WiFi devices, the security level of Zigbee networks is considerably higher than that of WiFi networks. Based on the experimental results, the indoor coverage distance of the developed Zigbee network is about 25 m, which can be expanded to 260 m by adopting augmenting modules and external antennas.
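A minimal sketch of the kind of coordinator-side routing logic this coordinator/endpoint arrangement enables is given below; the normal ranges (SpO2 >= 94%, heart rate 60-100 bpm) and the patient-to-nurse assignment table are illustrative assumptions, since in the described system both are configured through the GUI software, as explained next.

```python
from dataclasses import dataclass

# Illustrative thresholds; actual ranges are configured in the GUI per hospital policy.
SPO2_MIN = 94.0
HR_RANGE = (60.0, 100.0)

# Hypothetical communication rules: patient bed -> assigned nurse wristband IDs.
NURSE_FOR_BED = {"bed-03": ["nurse-1"], "bed-07": ["nurse-1", "nurse-2"]}

@dataclass
class Reading:
    bed: str
    name: str
    heart_rate: float
    spo2: float

def classify(r: Reading) -> str:
    if r.spo2 < SPO2_MIN or not (HR_RANGE[0] <= r.heart_rate <= HR_RANGE[1]):
        return "abnormal"
    return "normal"

def route(r: Reading, send) -> str:
    """Forward the reading (with its status) only to the assigned nurses."""
    status = classify(r)
    for nurse in NURSE_FOR_BED.get(r.bed, []):
        send(nurse, {"bed": r.bed, "name": r.name, "hr": r.heart_rate,
                     "spo2": r.spo2, "status": status})
    return status

if __name__ == "__main__":
    # Stand-in for the Zigbee unicast to a nurse endpoint.
    log = lambda nurse, msg: print(f"-> {nurse}: {msg}")
    route(Reading(bed="bed-03", name="A. Example", heart_rate=112, spo2=91), log)
```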
In the developed patient monitoring system, since the GUI behaves as the central monitoring device of the network and receives all patients' data, it is better to configure the GUI software as the coordinator and the wristbands (both the patient and nurse wristbands) as the endpoints (or clients). This configuration has the following advantages in comparison with other server-client configurations:
i. The lowest data traffic.
ii. All settings of the wristbands can be performed graphically through the GUI.
iii. The relationship between the patients' wristbands and the nurses' wristbands can be introduced easily in the GUI. Therefore, each patient's data are only transferred to his/her dedicated nurse.
iv. Occurrences of unusual conditions can be identified immediately and then reported to any desirable group of nurses; hence, primary care can be applied to the patient with a critical condition through the nearest nurses.
Apart from the benefits above, since all parts of the Zigbee network are implemented in this study, there is no limitation in the configuration of the developed Zigbee network. That is, instead of the GUI software, one of the nurses' wristbands can be selected as the server of the Zigbee network.
Proposed Graphical User Interface Software
The proposed GUI software is designed such that it can do the following tasks: (i) initializing patient wristbands at patients' reception time and storing their identity information into the wristbands, (ii) managing the implemented Zigbee network as well as data communication between the wristbands and the GUI, (iii) receiving the heart-rate value and SpO 2 level of all the patients and illustrating them on the GUI screen, (iv) identifying emergency conditions by comparing the received values with their normal ranges, and (v) notifying nurses about the emergency conditions by sending a predefined message to the nurses' wristbands immediately. The proposed GUI software is coded in C# using Visual Studio 10 Enterprise, and the generated executable file of the software can be run on the Windows operating system. In Figure 4, various pages of the proposed GUI software are presented. As observed, the filtered pulse oximeter signal, heart-rate value, and SpO 2 level of all the patients are presented simultaneously on the main page of the software. On the main page, by clicking on the "Edit patient data" icon, another page [Figure 4b] is opened which is dedicated to recording, editing, and saving the patient identity information. Also on the main page, by clicking on the "Alarm history" icon, another page [Figure 4c] is opened which is dedicated to the time and date of all emergency conditions identified by the software over an adjustable time period. On this page, by clicking on each row, the identity information of the patient who has the emergency condition is presented. In the initialization stage of the developed Zigbee network, all configurations of the wristbands (for both the patient and nurse wristbands) are made in the GUI software. Moreover, based on the clinic or hospital policies, each patient is dedicated to a specified nurse (or specified nurses). The relationship between the patient and his/her dedicated nurse(s) is introduced as a communication rule in the GUI software. Whenever a patient's data are received by the GUI, according to the predefined communication rules, the software identifies the relevant nurse(s) of the patient and sends the patient's data only to the relevant nurse(s). By doing so, privacy and security of the patients are satisfied while the data traffic is minimized.
Comparison with Recent Similar Wristbands
A wristwatch-based wireless sensor platform for wearable health monitoring applications is presented in the study by Kumar.
[20]In this wristwatch, SpO 2 level and heart-rate value of a person are obtained by analyzing the photoplethysmography signal.In this study, at first, a comparison between the wireless performance in the 868 MHz and 2.45 GHz bands is performed and then a compact wireless sub-system for 868 MHz is designed and implemented.Moreover, a highly integrated 868 MHz antenna is designed where the antenna structure is printed on the surface of a wristwatch enclosure using laser-direct structuring technology. [20]e differences between the wristwatch developed in [20] and the wristband developed in our study can be summarized as follows: 1.The wireless network of the wristwatch proposed in Kumar [20] operates in 868 MHz band while the wireless network of our developed wristband operates in 2.45 GHz band.The 2.45 GHz band wireless network has the following advantages compared to the 868 MHz band: a. Worldwide availability since the majority of wearable devices operate in 2.45 GHz frequency band; in contrast, 868 MHz band only supported in Europa b.A higher data rate c.Compatibility with a larger number of wireless standards d.Smaller size antenna.2. In the wristwatch proposed in Kumar [20] only point-to-point communication is available for data communication between the wristwatch and a wireless gateway device; that is, several wristwatches of this type cannot be joined together to form a wireless network.However, in our developed wristband, both versions of patient and nurse wristbands can be joined together to form a wireless network.This feature made the proposed wristband an ideal wearable device for the central monitoring section of intensive care units where the vital signals of several patients must be monitored and controlled simultaneously 3.In our proposed wristband, initial configurations of the wristbands can be performed easily through the developed central GUI software.By doing so, the patient's identity information as well as the patient's medical records are transferred from the GUI to the wristband.The identity information is illustrated on the wristband screen while medical records are represented by two LEDs: a red LED to inform the nurse that the patient has previous medical allergies and a yellow LED to inform the nurse that the patient requires special care (e.g. for seizures and bedsore).However, none of these features are provided in the wristwatch proposed in Kumar [20] 4. In the hardware of the wristwatch proposed in Kumar [20] two processors are used where the first one executes signal processing algorithms and the second one manages the wireless data communications.Moreover, an NRF52 BLE module is connected to the SAM R30 platform to implement Bluetooth communication between the gateway device and a smartphone.However, in our developed wristband, only an ultra-low power dual-core STM32 microcontroller is used to perform the entire signal processing algorithms as well as wireless data communications.The first core of this microcontroller executes the signal processing algorithms, while the second core implements Zigbee and BLE protocols simultaneously.Consequently, power consumption of our developed wristband as well as its size, weight, and cost is significantly lower than that of the wristband proposed in Kumar. 
[20] Comparisons were also made with recent similar commercial smart wristwatches. [23] These comparisons indicate that all of these wristwatches suffer from a lack of networking capability and of initialization capability through GUI software. Furthermore, the hardware and software of commercial wristwatches cannot be modified or enhanced by the user. In contrast, since both the hardware and software of our developed wristband are designed and implemented in this study, any desirable modification, optimization, and enhancement can be made on the wristband in the future. Therefore, by enhancing the performance of the wristband and incorporating various medical sensors, other health parameters such as blood pressure, body temperature, and respiration rate can be measured by our developed wristband.
Experimental Results
To assess the performance of the developed wristbands as well as the proposed monitoring system, a prototype of the system is implemented.
Signal processing algorithm
In this subsection, the performance of the signal processing algorithm proposed in subsection II.3 is assessed through simulations, and the results are presented in Figure 6. As observed, first, the proposed notch filter is applied to the raw data received from the pulse oximeter sensor (top waveforms in Figure 6a and b) to eliminate undesirable spikes and notches. The output of the notch filter is presented in Figure 6a and b (middle waveforms). Second, the proposed WMA filter is applied to the output of the notch filter to eliminate undesirable noises and disturbances. The output of the WMA filter is presented in Figure 6a and b (bottom waveforms). Third, the golden section algorithm is applied to the filtered signal to find the extremum points. Fourth, the heart-rate value is calculated by counting the number of extremum points found in one second of the signals. Finally, the SpO 2 level is calculated through Eq. 4.
Pulse oximeter calibration
The noninvasive calibration method proposed in Norbert and Niwayama [24] is adopted in this study to calibrate the developed pulse oximeter sensor. This method is carried out without any blood sampling. Pulse oximeters measure the R-parameter, which is proportional to the SpO 2 level. Calibrating a pulse oximeter means finding a mathematical function between the R-parameter and the SpO 2 level. In this study, the Beurer P0-30 device, [25] which is a medical-grade calibrated pulse oximeter, is used as the reference device for calibrating the developed pulse oximeter. In the calibration process, 15-min-long measurements were performed for four persons with different skin colors while both the reference pulse oximeter (Beurer P0-30) and the developed pulse oximeter were attached to the subject's fingertips at the same time. During calibration, air with variable oxygen content was inhaled by each subject while the following two parameters were measured continuously: (i) the SpO 2 level, measured by the reference pulse oximeter, and (ii) the value of the R-parameter, measured by the developed pulse oximeter. Based on the measured data pairs, a mathematical function is determined between the R-values and SpO 2 levels. To do this, the obtained data pairs are imported into MATLAB, where the mathematical function is estimated by curve fitting. By doing so, the mathematical function presented in Eq. (4) is obtained. Although increasing the order of the estimated function can improve the accuracy of the SpO 2 calculation, to minimize the computational complexity, a second-order function, presented in Eq. (4), is adopted in the developed pulse oximeter.
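The processing chain evaluated above can be prototyped offline in a few lines. The sketch below is a simplified stand-in, not the article's exact implementation: it uses a generic IIR notch filter in place of the Eq. 1 filter, a short weighted moving average, SciPy's peak finder in place of the golden-section search, the conventional ratio-of-ratios for the R-parameter, and a second-order polynomial fitted to hypothetical calibration pairs in place of Eq. 4. The sampling rate, notch frequency, and calibration values are assumptions made for the demonstration.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt, find_peaks

FS = 200.0  # assumed sampling rate (Hz); the MAX30100 supports several rates

def preprocess(x, notch_hz=50.0, q=30.0, wma=(0.25, 0.5, 0.25)):
    """Reject a narrow interference band, then smooth with a short WMA."""
    b, a = iirnotch(notch_hz, q, fs=FS)
    x = filtfilt(b, a, x)
    return np.convolve(x, np.array(wma) / np.sum(wma), mode="same")

def heart_rate_bpm(ir):
    """Count pulsatile peaks (stand-in for the golden-section search) and scale to bpm."""
    peaks, _ = find_peaks(ir, distance=int(0.4 * FS), prominence=0.5 * np.std(ir))
    return len(peaks) * 60.0 * FS / len(ir)

def r_parameter(ir, red):
    """Conventional ratio of ratios; the article instead computes the AC/DC areas with Simpson's rule."""
    ac_ir, dc_ir = ir.max() - ir.min(), ir.min()
    ac_red, dc_red = red.max() - red.min(), red.min()
    return (ac_red / dc_red) / (ac_ir / dc_ir)

# Hypothetical calibration pairs (R, reference SpO2); real pairs come from the
# variable-oxygen-content breathing protocol described above.
r_cal = np.array([0.5, 0.7, 0.9, 1.1, 1.3])
spo2_cal = np.array([99.0, 97.0, 94.0, 90.0, 85.0])
coeffs = np.polyfit(r_cal, spo2_cal, deg=2)      # second-order fit, analogous to Eq. 4

def spo2_percent(r):
    return float(np.polyval(coeffs, r))

# Synthetic 10-s PPG-like signals, for demonstration only.
t = np.arange(0, 10, 1 / FS)
rng = np.random.default_rng(1)
ir = 50_000 + 1_000 * np.sin(2 * np.pi * 1.2 * t) + 100 * rng.standard_normal(t.size)
red = 40_000 + 600 * np.sin(2 * np.pi * 1.2 * t) + 100 * rng.standard_normal(t.size)

ir_f, red_f = preprocess(ir), preprocess(red)
print("heart rate ~", round(heart_rate_bpm(ir_f)), "bpm")
print("SpO2 ~", round(spo2_percent(r_parameter(ir_f, red_f)), 1), "%")
```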
The experimental results obtained in the calibration stage show that the average error of the developed pulse oximeter is 1.57% with respect to the Beurer P0-30 device, which is appropriate for medical practice. [25]
Developed pulse oximeter performance
To assess the accuracy of the developed pulse oximeter, apart from the examinations conducted in the calibration stage, one hundred hospitalized persons were selected randomly, and their heart-rate values and SpO 2 levels were measured with the following two devices: (a) the wristband developed in this study and (b) the Beurer P0-30 pulse oximeter. A small portion of the results obtained from this examination is presented in Table 2, where the values calculated by the developed pulse oximeter have an average error of 1.34%, which is in agreement with the values obtained from the Beurer P0-30 pulse oximeter.
Conclusion
In this article, a patient monitoring system consisting of patient wristbands, nurse wristbands, and central GUI software is proposed to (i) record patient identity information at reception time and transfer it to the patient wristband, (ii) monitor the heart-rate value and SpO 2 level of all the patients continuously, (iii) identify critical conditions, and (iv) inform emergency status to the nurses immediately. Based on the explanations provided in this article, it can be deduced that the proposed monitoring system has the following advantages compared to previous projects conducted in this context:
i. Central GUI software with the ability to: (i-1) manage the wristbands as well as the wireless network, (i-2) edit and save the patients' identity information at patient reception time, (i-3) receive the heart-rate value and SpO 2 level continuously and show them on the GUI screen, and (i-4) identify critical conditions and notify nurses immediately.
ii. Nurse wristbands with the ability to: (ii-1) receive emergency alerts from the GUI and notify nurses by generating acoustic alarms and vibrations, and (ii-2) show the heart-rate value and SpO 2 level of the patients on the wristband's screen.
iii. Calculating the heart-rate value and SpO 2 level simultaneously by adopting advanced signal processing algorithms.
iv. Communicating data between the patient wristbands, nurse wristbands, and central GUI by means of a secure low-power wireless network.
In brief, using the proposed monitoring system, emergency conditions can be identified immediately. As a result, treatment and recovery operations can begin as soon as possible. By doing so, the success rate of patient treatment and recovery is increased effectively.
Financial support and sponsorship
None.
Figure 1: Front view of the proposed wristband.
Figure 2: Internal block diagram of the MAX30100 sensor as well as the external components used to connect the sensor to the microcontroller. [7]
Figure 3: Hardware of the developed wristband: (a) schematic of the voltage regulator, (b) schematics of the MAX30100, OLED, and EEPROM, (c) schematic of the microcontroller, (d) schematic of the LEDs and push-button, (e) top and bottom layers of the developed PCB, and (f) the implemented wristband's hardware. PCB - printed circuit board; EEPROM - electrically erasable programmable read-only memory.
Figure 4: Proposed central GUI software: (a) main page, (b) page dedicated to editing patient identity information, and (c) page dedicated to the history of emergency conditions. GUI - graphical user interface.
Figure 5: Picture of the implemented patient wristband: (a) wristband and (b) pulse oximeter sensor probe.
Figure 6: Performance of the proposed algorithm in filtering the output waveforms of a pulse oximeter sensor: (a) IR waveform and (b) RED waveform. IR - infrared radiation.
Table 1: Radio frequency matching filter components.
2024-02-27T18:25:34.245Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "e9b7bfd3393254c2dd5aa34c9f8691eded85abc2", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jmss.jmss_47_22", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "765ea601564deacda178d22ce3b348c9b53795b1", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [] }
3923396
pes2o/s2orc
v3-fos-license
The Rotterdam Study: 2018 update on objectives, design and main results The Rotterdam Study is a prospective cohort study ongoing since 1990 in the city of Rotterdam in The Netherlands. The study targets cardiovascular, endocrine, hepatic, neurological, ophthalmic, psychiatric, dermatological, otolaryngological, locomotor, and respiratory diseases. As of 2008, 14,926 subjects aged 45 years or over comprise the Rotterdam Study cohort. Since 2016, the cohort is being expanded by persons aged 40 years and over. The findings of the Rotterdam Study have been presented in over 1500 research articles and reports (see www.erasmus-epidemiology.nl/rotterdamstudy). This article gives the rationale of the study and its design. It also presents a summary of the major findings and an update of the objectives and methods. Introduction The Rotterdam Study was designed in the mid-1980s as a response to the demographic changes that were leading to an increase of the proportion of elderly people in most populations [1]. It was clear that this would produce a strong rise in elderly people living with diseases, as most diseases cluster at the end of life, and that to discover the causes of diseases in the elderly one would have to study risk factors of those diseases [2]. A major approach to finding causes is the prospective follow-up study, which has proven quite effective in finding causes of heart disease and cancer. The design of the Rotterdam Study The design of the Rotterdam Study is that of a prospective cohort study among, initially, 7983 persons living in the well-defined Ommoord district in the city of Rotterdam in The Netherlands (78% of 10,215 invitees). They were all 55 years of age or over and the oldest participant at the start was 106 years [3]. The study started with a pilot phase in the second half of 1989. From January 1990 onwards participants were recruited for the Rotterdam Study. Figure 1 gives a diagram of the various cycles in the study. In 2000, 3011 participants (out of 4472 invitees) who had become 55 years of age or moved into the study district since the start of the study were added to the cohort. In 2006 a further extension of the cohort was initiated in which 3932 subjects were included, aged 45-54 years, out of 6057 invited, living in the Ommoord district. By the end of 2008, the Rotterdam Study therefore comprised 14,926 subjects aged 45 years or over [4,5]. The overall response figure for all three cycles at baseline was 72.0% (14,926 of 20,744). Since summer of 2016, another extension has started that includes all participants aged 40 years and over. The recruitment of this extension is expected to be completed in 2019 and yield around 4000 new participants. The participants were all examined in some detail at baseline. They were interviewed at home (2 h) and then had an extensive set of examinations (a total of 5 h) in a specially built research facility in the centre of the district. These examinations focused on possible causes of invalidating diseases in the elderly in a clinically state-of-the-art manner, as far as the circumstances allowed. The emphasis was put on imaging (of heart, blood vessels, eyes, skeleton and later brain) and on collecting biospecimens that enabled further in-depth molecular and genetic analyses. These examinations were repeated every 3-4 years in characteristics that could change over time. There were examination cycles from 1990 to 1993, from 1993 to 1995, from 1997 to 1999, from (Fig. 1). 
In spring 2016, the fourth examination cycle for the second cohort (RS-II-4) was finished. In summer 2016 a fourth cohort was established. The age range for this new cohort is predominantly 40-55 years, the anticipated number of participants is 4000. The participants in the Rotterdam Study are followed for a variety of diseases that are frequent in the elderly: coronary heart disease, heart failure and stroke, Parkinson disease, Alzheimer disease and other dementias, depression and anxiety disorders, macular degeneration and glaucoma, COPD, emphysema, liver diseases, diabetes mellitus, osteoporosis, dermatological diseases and cancer. , and RS-I-6 refer to re-examinations of the original cohort members. RS-II-1 refers to the extension of the cohort with persons from the study district that had become 55 years since the start of the study or those of 55 years or over that migrated into the study district. RS-II-2, RS-II-3, and RS-II-4 refer to re-examinations of the extension cohort. RS-III-1 refers to the baseline The Rotterdam Study has been approved by the institutional review board (Medical Ethics Committee) of the Erasmus Medical Center and by the review board of The Netherlands Ministry of Health, Welfare and Sports. The approval has been renewed every 5 years, as well as with the introduction of major new elements in the study (e.g., MRI investigations). In the remainder of this article the objectives and major findings will be presented with an update of the research methods for cardiovascular diseases, dermatological diseases, endocrine diseases, liver diseases, neurological diseases, ophthalmic diseases, psychiatric diseases, respiratory diseases, as well as for genetic and biomarker studies and for pharmaco-epidemiologic studies. The emphasis is on major findings from the preceding 2 years (since the previous update paper [6]. Cardiovascular diseases Objectives Research on the epidemiology of cardiovascular disease focuses on the etiology, prediction, and prognosis of cardiovascular disorders (including coronary heart disease, stroke, and heart failure), type 2 diabetes (T2D) and metabolic syndrome. The main emphasis is on prevention and management of a first cardiovascular event but prevention of secondary events is also an area of interest. Putative risk factors include five groups: lifestyle factors, endocrine factors, factors involved in hemostasis, inflammation and endothelial function, metabolomic factors and genetic factors. We have five specific focused themes: 1. Lifestyle focused on evaluating the role of lifestyle factors (including nutrition, physical activity, sleep and smoking) in maintaining cardiovascular health as well as the interactions that lifestyle factors might have on other factors (e.g. genes, epigenetic marks and medications). 2. Biomarkers and genes aimed to identify relevant biomarkers for the identification of novel mechanisms of disease. These incorporate both molecular and genetic factors together with their potential interactions. Genomics, epigenetic marks and metabolomics play a key role. 3. Prediction and women's cardiovascular health aimed to improve the identification of individuals at increased risk of developing cardiovascular disease in order to point out windows of opportunities that could permit early preventive interventions and personalised care. A special focus is given to evaluating specific factors and formulating targeted strategies to prevent cardiovascular disease in women. 4. 
4. High risk: focused on predictors and prognosis of chronic cardiovascular conditions, like heart failure, pulmonary hypertension, and atrial fibrillation. 5. Imaging: this work theme aims to identify the contribution that new technologies can provide to the maximum benefit of early diagnosis and accurate prognosis. The major focus is on non-invasive assessment of atherosclerosis to improve the understanding of the atherosclerotic process and the prediction of cardiovascular disease, including measurement of coronary calcification with electron-beam and multi-detector CT (MDCT) and carotid plaque characterization by MRI. Anthropometrics and cardiovascular disease We evaluated different anthropometric measures, including body mass index, waist circumference, waist-to-height ratio, waist-to-hip ratio and a body shape index, in association with all-cause, cardiovascular and cancer mortality. We have shown that among the different anthropometric measures, a body shape index (ABSI) was strongly associated with the risk of all-cause, cardiovascular and cancer mortality [25]. In contrast to body mass index (BMI) and waist circumference (WC), ABSI showed a differential association with fat mass and fat-free mass in men, but not in women. This could suggest ABSI as a useful tool for identifying men at higher risk of sarcopenic obesity [26]. While the role of BMI for prediction of CVD among the elderly remains controversial, we found that the presence of obesity without metabolic syndrome did not confer a higher CVD risk in the Rotterdam Study. However, metabolic syndrome was strongly associated with CVD risk, and was associated with an increased risk in all BMI categories [27]. We also observed that while obesity had no effect on total life expectancy in older individuals of the Rotterdam Study, it increased the risk of having CVD earlier in life and consequently extended the number of years lived with CVD [28]. Furthermore, among individuals who developed CVD during follow-up in the Rotterdam Study, we identified 3 distinct BMI trajectories. These trajectories marked 3 distinct groups of "stable weight", "progressive weight gain", and "progressive weight loss" during follow-up. Other cardiovascular risk factors, including glucose and lipid levels, differed between the identified BMI subgroups, further highlighting that CVD is a heterogeneous disease with different pathophysiological pathways [27]. Within the European Network for Genetic and Genomic Epidemiology (ENGAGE) consortium, using a mendelian randomization approach, we found that adiposity, as indicated by body mass index, has a causal relationship with coronary heart disease, heart failure and, for the first time, ischemic stroke [29]. Also, there were age- and sex-specific causal effects of adiposity on cardiovascular risk factors, including cholesterol, blood pressure, fasting levels of insulin and C-reactive protein [30]. Comparison of guidelines The new American College of Cardiology/American Heart Association (ACC/AHA) guidelines introduced a new cardiovascular disease (CVD) prediction model and lowered the threshold for treatment with statins to a 7.5% 10-year hard atherosclerotic cardiovascular disease (ASCVD) risk.
Using 4854 asymptomatic participants from the population-based Rotterdam Study, we determined the implications of the new ACC/AHA guideline's treatment threshold and risk prediction model and compared it with the Adult Treatment Panel III (ATP-III) and the European Society of Cardiology (ESC) guidelines. We showed that the proportions of individuals eligible for treatment with statins differed substantially among the 3 guidelines [31]. The ACC/AHA guideline would recommend statins for nearly all men and two-thirds of women, proportions exceeding those with the ATP-III or ESC guidelines. All risk prediction models underlying the 3 guidelines provided poor calibration and moderate to good discrimination in our population. To facilitate better clinical decision making, improving risk predictions and setting appropriate population-wide thresholds are necessary. Women's health Women experience multiple health issues throughout their life course differently from men. Therefore, attention to women's health is important at all stages of life. To improve women's quality of life and guarantee a long-lasting and active role for women in society, prevention of chronic diseases and disability is a key aspect. Our focus in the women's health group is therefore on the major health issues for peri- and post-menopausal women, their risk factors, and prevention strategies [32]. As menopausal health is a crucial aspect of healthy and successful aging, we aimed to characterize a concept of healthy menopause. We conceptualized healthy menopause as a dynamic state, following the permanent loss of ovarian function, which is characterized by self-perceived satisfactory physical, psychological and social functioning, incorporating disease and disability, allowing the attainment of a woman's desired ability to adapt and capacity to self-manage. Conceptualization of healthy menopause serves as a crucial step in the improvement of health in menopausal women, allowing adequate preventive and treatment strategies to be adopted [33]. Although cardiovascular disease (CVD) remains one of the leading causes of death and disability for both men and women, our research underscores considerable sex differences in the occurrence of the various manifestations of CVD. Using the long-term follow-up from the prospective population-based Rotterdam Study, we showed that despite similar lifetime risks of CVD at age 55 for men and women, considerable differences in the first manifestation exist. Men are more likely to develop coronary heart disease as a first event, while women are more likely to have cerebrovascular disease or heart failure as their first event, although these manifestations appear most often at older ages [34]. Since strategies for prevention of stroke and heart failure might differ from strategies for prevention of coronary heart disease, knowledge about the first manifestation of CVD is important for devising a sex-tailored primary prevention program. A gender perspective on health and ageing Based on 7 domains, including chronic diseases, mental health, cognitive function, physical function, pain, social support, and quality of life, we developed a healthy ageing score among women and men in the Rotterdam Study. In all age categories, we found the healthy ageing score to be lower in women compared with men. In both genders, the healthy ageing score declined with increasing age, albeit the decline was slightly steeper in women [35].
In an attempt to characterize the relation between fertile life span characteristics and mortality, we found that late first and last reproduction were protective for all-cause mortality, whereas a longer maternal lifespan, post-maternal fertile lifespan, and endogenous estrogen exposure were associated with higher all-cause mortality [36]. Further, we used seven metrics of health factors and health behaviors to define the concept of cardiovascular health in the Rotterdam Study. We showed that optimal cardiovascular health was reached by 9.3% of men and 10.4% of women in the Rotterdam Study and was associated with both sex steroids and sex hormone-binding globulin (SHBG) among men and women [37]. To further assess the impact of androgen levels on women's cardiometabolic health, we formed a multi-center study in which we assessed several cardiometabolic features among women with polycystic ovary syndrome (PCOS), women with premature ovarian insufficiency (POI), naturally post-menopausal women, and women with regular menstrual cycles. This study affirmed the potent effect of androgens on cardiometabolic features, indicating that androgens should indeed be regarded as important denominators of women's health [38]. Also, we found that women with POI exhibited an unfavorable cardiovascular risk profile, including higher abdominal fat, elevated chronic inflammatory factors, and a trend toward increased hypertension and impaired kidney function compared to premenopausal women of middle age [39]. Heart failure and atrial fibrillation The Rotterdam Study enabled accurate assessment of the incidence and lifetime risk of heart failure and atrial fibrillation in an elderly population [40][41][42]. It was shown that inflammation and resting heart rate are associated with the risk of heart failure [43,44]. In addition, we identified several new risk factors for atrial fibrillation. We found that markers of generalized atherosclerosis in persons without a history of myocardial infarction or angina were associated with a higher risk of atrial fibrillation [45]. Furthermore, high-normal thyroid function [46] and higher levels of dehydroepiandrosterone sulfate, a precursor in the biosynthetic pathway of androgenic and estrogenic sex hormones, were associated with the incidence of atrial fibrillation [47]. Among individuals free of CVD, we also found an association between epicardial fat, measured by CT scan, and atrial fibrillation (AF) that was independent of traditional cardiovascular risk factors, coronary atherosclerosis, left atrial size, and various measures of adiposity [48]. In collaboration with several community-based prospective studies we were able to develop a prediction model for atrial fibrillation using only variables that are routinely collected in primary care settings [49]. In a large collaborative study as part of the CHARGE consortium, we investigated the genetic variation responsible for 6 traits related to cardiac structure and function. We found two replicated loci for left ventricular dimension and 5 replicated loci for aortic root size [50]. Another topic of interest was the search for genetic determinants of several rhythm and conduction disturbances on the ECG, notably RR interval, QRS duration, QT(c) interval and PR interval, as well as atrial fibrillation and sudden cardiac death. For example, we identified several new loci for PR interval [51], heart rate [52], and atrial fibrillation [53,54] in meta-analyses from the CHARGE consortium. Type 2 diabetes Type 2 diabetes (T2D) has become a global epidemic.
We took a comprehensive approach to calculate the lifetime risk of the full range of glucose impairments, from normoglycaemia to prediabetes, type 2 diabetes, and eventual insulin use. At age 45 years, the remaining lifetime risk was 48.7% for prediabetes, 31.3% for diabetes, and 9.1% for insulin use. Our findings highlighted the substantial burden of impaired glucose metabolism on population health, emphasizing the need for more effective prevention strategies [55]. Using a multistate life table, we showed that obesity in the middle aged and elderly is associated with a reduction in the number of years lived free of diabetes and an increase in the number of years lived with diabetes [56]. In a mendelian randomization study, we did not find evidence for a causal role of serum gamma-glutamyltransferase on the risk of prediabetes or diabetes [57]. Among inflammatory markers, we found EN-RAGE to be a novel inflammatory marker for pre-diabetes, IL17 for incident T2D, and IL13 for pre-diabetes, incident T2D and insulin therapy start [58]. Also, we reported that serum apoCIII levels as well as the apoCIII-to-apoA1 ratio were associated with incident diabetes independent of known risk factors [59]. ADAMTS13, a novel hemostatic factor, was an independent risk factor for incident prediabetes and type 2 diabetes [60]. In women, we found that low levels of sex hormone-binding globulin and high levels of total estradiol were associated with increased risk of T2D, independent of potential intermediate risk factors such as obesity, glucose and insulin levels [57]. In both men and women, serum dehydroepiandrosterone levels were associated with lower risk of T2D, whereas no associations were found for other hormones in either sex [57,61]. Further, we provided insights into potential biological mechanisms connecting tobacco smoking to excess risk of T2D by investigating the association between smoking and DNA methylation of genes previously identified for diabetes. We found that tobacco smoking is associated with differential DNA methylation of the diabetes risk genes ANPEP, KCNQ1 and ZMIZ1 [62]. Cardiovascular risk factors and prediction Endocrine, inflammatory and hemostatic factors and risk of coronary heart disease were addressed in several studies. Subclinical hypothyroidism was an independent risk factor of atherosclerosis and myocardial infarction in older women [63]. In a recent study, we compared the change in the accuracy of risk predictions when newer risk markers, representative of various pathophysiologic pathways, were added to the established clinical risk predictors. Among the biomarkers, improvements in coronary heart disease risk prediction were most significant with the addition of amino-terminal pro-B-type natriuretic peptide (NT-proBNP) [64,65]. Furthermore, plasma C-reactive protein (CRP) and lipoprotein-associated phospholipase A2 (Lp-PLA2) activity were independent predictors of coronary heart disease [66,67]. Earlier findings included the association of tissue plasminogen activator (TPA) with incident coronary heart disease [68]. Using a comprehensive biomarker assay, we analysed multiple markers of inflammation among more than 800 individuals with incident coronary heart disease [69]. We identified EN-RAGE as a novel biomarker for incidence of coronary heart disease, independent of established risk factors and inflammatory markers, such as C-reactive protein [69].
With respect to the prediction of coronary heart disease, EN-RAGE improved prediction significantly, indicating that EN-RAGE might be useful in CHD prediction [69]. Regarding novel hemostasis risk factors, we found low ADAMTS13 activity to be associated with increased risk of coronary heart disease, ischemic stroke, and all-cause and cardiovascular mortality beyond the traditional risk factors [70][71][72]. Recently, we developed and validated a coronary heart disease prediction model tailored for the aging population based on competing risk methodology [73]. Also, we have shown that a non-laboratory-based model, based on a body shape index, could predict the risk of cardiovascular disease among men as accurately as one that relied on laboratory-based values [74]. Non-invasive measures of atherosclerosis Multiple studies focused on the predictive value of non-invasive measures of atherosclerosis for risk of coronary heart disease. Strong associations with risk of coronary heart disease were found for carotid intima-media thickness [75], pulse wave velocity [76], and coronary calcification as assessed by electron-beam CT [77]. The relatively crude measures directly assessing plaques in the carotid artery and abdominal aorta predict coronary heart disease as well as the more precisely measured carotid intima-media thickness [78]. We also found carotid stiffness to be associated with incident stroke independently of cardiovascular risk factors and aortic stiffness [79]. In persons at intermediate risk of cardiovascular disease, coronary artery calcium provided the best increment in coronary heart disease risk prediction and stratification (to reclassify persons into more appropriate coronary risk categories) [64,80,81]. The burden of coronary calcification also provides incremental predictive information for heart failure, but not for cerebrovascular disease [82,83]. In a large meta-analysis of 5 population-based studies, including the Rotterdam Study, we showed that coronary artery calcium was present in approximately one-third of women categorized as being at low CVD risk based on the new ACC/AHA guidelines. Presence of coronary artery calcium among low-risk women was associated with an increased risk of CVD and led to modest improvement in prognostic accuracy compared with traditional risk factors [84]. Genetic studies Genetic studies included candidate gene studies [85] and, more recently, genome-wide association studies of clinical disease and risk factor phenotypes. So far we have contributed to more than 100 genome-wide association (GWA) studies in the field of cardiovascular disease. These GWA studies are primarily conducted in the framework of the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium [86,87]; however, in many instances we include further studies. We identified 3 genetic loci associated with uric acid concentration and gout [88]. Three loss-of-function variants in the HAL gene were found to associate with histidine levels [89] but not with coronary heart disease. We also identified a significant association between the UMOD gene, which encodes the Tamm-Horsfall protein, and chronic kidney disease [90]. We found four genes for systolic blood pressure, six for diastolic blood pressure and one for hypertension [91][92][93]. We found multiple loci that influenced erythrocyte phenotypes in the CHARGE Consortium [94]. In a meta-analysis of more than 80,000 individuals from 25 studies, we identified 18 loci for CRP levels.
The study highlighted immune response and metabolic regulatory pathways involved in the regulation of chronic inflammation [95]. Novel associations of 12 low-frequency exonic variants with plasma levels of factor VII, factor VIII, and von Willebrand factor were also detected [96,97]. The association with these variants was independent of the previously identified common variants associated with these traits, and the effect sizes were larger. We performed the first GWA study of ADAMTS13 activity, identifying independent associations with three common variants at the ADAMTS13 locus, as well as one common variant at the SUPT3H locus [98]. Additionally, we used a genotyping array focused on rare exonic variants to identify three independent rare variants in the ADAMTS13 gene associated with ADAMTS13 activity [98]. We have also identified genetic loci associated with measures of subclinical atherosclerosis burden. Our genome-wide association studies on the 3 measures of subclinical atherosclerosis identified several new genetic loci [99][100][101]. Our exome-wide association meta-analysis demonstrated that protein-coding variants in APOB and APOE associate with multiple subclinical atherosclerosis traits as well as clinical coronary heart disease. We have contributed to GWA studies on coronary artery disease [102,103]. Also, we found that 152 known coronary heart disease SNPs improved the prediction of prevalent but not incident coronary heart disease. This difference may be explained by biases related to the use of prevalent rather than incident coronary heart disease in genome-wide association studies [104]. In addition, by using genome-wide methylation data, we found an effect of tobacco smoking on DNA methylation of 12 coronary artery disease-related genes [105] and associations of blood lipid concentrations with methylation at several metabolic disease-related genes [106], thus providing novel insights into the pathways underlying cardiometabolic disease. Thus far, a large number of genetic variants contributing to the induction and development of cardio-metabolic diseases have been identified by GWAS. Nevertheless, the vast majority of the identified variants map to non-coding regions of the genome, and their biological relevance to disease remains unclear. Non-coding RNAs play regulatory roles in various biological processes and cellular contexts. We identified a number of functional variants in microRNA genes and microRNA-binding sites in the 3'UTR of coding genes that affect miRNA gene regulation and explain some of the observed associations from GWAS of cardio-metabolic phenotypes [107][108][109]. Nutrition and lifestyle We found that dietary intake of palmitic acid, which accounts for half of the total saturated fat intake, was associated with an increased risk of coronary heart disease, as was substitution of total saturated fat with animal protein [110]. We did not confirm a consistent association between dietary fat composition and body fat distribution, but we found that intake of total polyunsaturated fatty acids, and in particular n-6 polyunsaturated fatty acids, was associated with a lower inflammatory profile [111]. We also conducted several studies on the association between nutrition and cancer. We showed that n-3 polyunsaturated fatty acid intake was associated with an increased risk of colorectal cancer, but this association was modified by dietary fiber intake [112].
We did find that dietary polyunsaturated fat intake modified the association between total serum cholesterol levels and the risk of colorectal cancer [113,114]. We also studied whether dietary mineral intake was associated with the risk of lung cancer and found that high dietary zinc and iron intakes were associated with a reduced risk of lung cancer [115]. In addition to individual nutrient analyses, we performed several studies on a priori and a posteriori defined dietary patterns and health outcomes in the Rotterdam Study. For example, we found that adherence to the Dutch dietary guidelines was inversely associated with 20-year mortality, in particular cardiovascular disease mortality [116]. We also found that a health-conscious dietary pattern, characterized by high intake of fruits, vegetables, poultry and fish, may have benefits for bone mineral density. In contrast, adherence to a 'Processed' dietary pattern, characterized by high intake of processed meat and alcohol, was associated with lower bone mineral density [117]. Additionally, we evaluated whether dietary patterns that explain most variation in bone mineral density and hip bone geometry are associated with fracture risk. We observed that a pattern high in fruit, vegetables and dairy could be associated with lower fracture risk because of high bone mineral density [118]. As part of the CHANCES consortium, we found that adherence to a healthy diet was not associated with cognitive decline [119] but that adherence to the WCRF/AICR Dietary Recommendations for cancer prevention was associated with a lower risk of cancer in older individuals, in particular colorectal and prostate cancer [120]. For physical activity, we observed that higher levels of physical activity were associated with increased life expectancy and more years lived without CVD. Of the different types of physical activity included in the study, cycling had strong effects in both men and women [121]. In line with these results, over 15 years of follow-up, it was observed that high physical activity was associated with less coronary heart disease, mainly explained by cycling and domestic work [122]. Furthermore, it was observed that sedentary behavior was, independent of other physical activity, a risk factor for all-cause mortality [123]. Methods update Clinical follow-up Information on clinical cardiovascular outcomes is collected through an automated follow-up system. The follow-up system involves linkage of the study base to digital medical records from general practitioners in the study area and subsequent collection of letters from medical specialists and discharge reports in case of hospitalisation. With respect to the vital status of participants, information is also obtained regularly from the municipal health authorities in Rotterdam. After notification, cause and circumstances of death are established by questionnaire from the treating physicians. Clinical cardiovascular outcomes are adjudicated according to established definitions based on international guidelines by study physicians and medical specialists in the field affiliated with the Rotterdam Study. Methods of follow-up data collection, adjudication of events, and definitions of cardiovascular end points have been described in detail previously in this journal [124].
Systematic follow-up data collection is done for the occurrence of cardiovascular mortality, coronary heart disease (including coronary death, myocardial infarction, and coronary revascularization procedures), heart failure, atrial fibrillation, and sudden cardiac death [124]. Diabetes mellitus is defined based on guidelines of the American Diabetes Association and the World Health Organization. We defined incident diabetes as a fasting plasma glucose level ≥ 7.0 mmol/L, or the use of oral antidiabetic medication or insulin, or treatment by diet combined with registration by a general practitioner as having diabetes. Non-invasive measures of atherosclerosis At baseline and follow-up examinations, ultrasonographic assessments of carotid intima-media thickness and carotid plaques were conducted in all participants [75]. At these examinations, measurements of the ankle-brachial index and aortic calcification (on X-rays of the lumbar spine) were also obtained [78]. Carotid-femoral pulse wave velocity, a measure of aortic stiffness, was measured in all participants of RS-I-3, RS-II-1, and RS-III-1 with an automatic device [76]. Measurements of coronary calcification by electron-beam CT, and more recently by MDCT, were conducted from 1997 onwards in RS-I and RS-II [77,80]. From 2009 (RS-I-5) onwards, measurements of structure and function of the right heart are also collected, including estimates of pulmonary artery pressure. In the same round a 3-min resting ECG was measured in all participants. Nutrition and lifestyle Dietary intake data have been collected in RS-I-1, RS-I-5, RS-I-6, RS-II-1, RS-II-3, RS-II-4, and RS-III-1 by using semi-quantitative food-frequency questionnaires (FFQ). In RS-I-1 and RS-II-1, participants completed a checklist about foods and drinks they had consumed at least twice a month during the preceding year and a standardized interview using a validated 170-item semi-quantitative FFQ [132]. For the later waves and cohorts, a more comprehensive 389-item FFQ was used during the visits, as described in detail previously [133][134][135][136]. For all cohorts, nutrient intake data were calculated using the Dutch Food Composition Tables, in close collaboration with the Department of Human Nutrition, Wageningen University, the Netherlands. In RS-I-3, RS-I-5, RS-II-3 and RS-III-1, physical activity data were assessed by means of an adapted version of the Zutphen Physical Activity Questionnaire and the LASA Physical Activity Questionnaire [137][138][139]. The questionnaires contained questions on walking, cycling, gardening, diverse sports, hobbies and housekeeping. According to the time spent in light, moderate and vigorous activity, metabolic equivalents of task were calculated. Furthermore, we are implementing objective measurement of physical activity with triaxial accelerometers in all participants. Frailty index As a proxy for overall health we developed a frailty index for the Rotterdam Study, based on predefined criteria [140]. A frailty index is based on the accumulation of health deficits, which can include an unspecified number of symptoms, diseases, laboratory measurements or disabilities, as long as they are health- and age-related [141]. The severity of frailty is represented by the number of deficits and is expressed on a continuous frailty index score, calculated as the ratio of the deficits present to the total number of variables considered (range 0-1).
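To make the deficit-accumulation calculation above concrete, the sketch below computes a frailty index as the ratio of deficit scores present to deficits assessed, yielding a value between 0 and 1. This is a minimal illustration in Python, not the study's actual scoring code: the variable names, the partial-deficit coding and the handling of unassessed items are assumptions made for the example.

    from typing import Mapping, Optional

    def frailty_index(deficits: Mapping[str, Optional[float]]) -> Optional[float]:
        # Each deficit is coded from 0 (absent) to 1 (fully present);
        # items recorded as None are treated as not assessed.
        assessed = [v for v in deficits.values() if v is not None]
        if not assessed:
            return None  # nothing assessed, so the index is undefined
        # Ratio of deficits present to deficits considered (range 0-1).
        return sum(assessed) / len(assessed)

    # Hypothetical participant, showing 4 of many possible items for brevity.
    example = {"diabetes": 1.0, "slow_gait": 0.5,
               "adl_dependence": 0.0, "grip_strength_low": None}
    print(round(frailty_index(example), 2))  # 0.5

In the study itself, 45 health-related variables covering cognition, functional status, diseases and biomarkers enter this ratio, as described next.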
We calculated a frailty index based on 45 health-related variables, related to cognition, functional status, diseases and biomarkers, for over 11,000 participants. The frailty index showed good construct and criterion validity (e.g. a strong association with mortality) [142]. For additional EJE references please see [27, ...]. Dermatological diseases Objectives Dermatoepidemiologic research in the Rotterdam Study focuses on the frequency of the most common skin conditions as well as on genetic and environmental factors associated with these skin diseases. The emphasis is on cutaneous malignancies such as basal and squamous cell carcinomas (BCC and SCC, respectively) and their precursor lesions (actinic keratosis), inflammatory dermatoses such as eczema and psoriasis, and varicose veins. Also, we examine the frequency and determinants, including genetics and environmental exposures, of skin aging (pigmentation, wrinkling and photodamage) and other visible traits in collaboration with the department of Genetic Identification. Recently, we have introduced optic measures of UV-exposed and non-exposed skin to assess whether they can function as biomarkers of skin and internal diseases. Methods In 2010, dermatology studies were introduced in the Rotterdam Study. Several items have been added to the home interview covering ultraviolet light exposure, history of (personal and familial) psoriasis, history of skin cancer, the diagnostic criteria for atopic eczema, and adjusted diagnostic criteria for psoriatic arthritis. More recently, items on skin care and seborrheic dermatitis/dandruff were added. A full-body skin examination by physicians trained in dermatology, with a focus on the most common skin diseases, is the core contribution of dermatology. The clinical presence and extent of specific skin diseases (i.e., actinic keratosis, malignancies, psoriasis, seborrheic dermatitis, xerosis, hand and flexural eczema, alopecia, and signs of chronic venous insufficiency based on the 'C' of the CEAP classification) at the time of examination are assessed in a standardized fashion. Other dermatological diseases are simply noted. The extent of skin aging, as a global score and broken down into different aspects such as wrinkling, pigmented spots, and telangiectasia, is scored using validated photonumeric scales and computer algorithms. The Norwood-Hamilton classification and the Ludwig classification are used for male and female pattern hair loss, respectively. Fully standardized 3-dimensional photographs (Premier 3dMDface3-plus UHD, Atlanta, USA) of the face are taken to further assess skin characteristics including sagging, wrinkling at different sites, telangiectasia and pigmented spots. The colour of the facial skin and of the inner side of the upper arm is measured using a spectrophotometer (Konica Minolta Sensing, spectrophotometer CM-700d, Singapore). Recently, we have included a screening venous ultrasound examination of the lower extremities assessing the deep and superficial venous system. Also, we added skin swabs of the nasolabial fold to investigate the diversity of the microbiome across a large population and assess its relationship with other (skin) diseases. As for other cancers, pathology data on the cutaneous malignancies are obtained from linkage to the national cancer registry and the Dutch pathology database (PALGA). In a further attempt to identify cohort members with psoriasis, medical files and pharmacy dispensing records have been investigated, resulting in over 350 psoriasis cases.
Major findings The first follow-up study, including skin examinations of more than 2000 cohort members, showed that actinic keratosis (AK) is very common in this elderly population (AK prevalence was 49% for men and 28% for women) [166]. After adjusting for other factors, baldness in men was associated with a strongly increased risk of actinic keratosis. A recent update yielded more than 1500 participants with a history of BCC, 450 with an SCC and 150 with a melanoma. We have demonstrated that approximately 30% of people with a BCC develop multiple tumors within 5 years and have developed a prediction model to identify these high-risk patients [167]. A first genetic analysis could not confirm any of the existing BCC polymorphisms to be associated with the development of multiple BCC [167]. A subsequent GWAS in an international consortium did not observe an association between common variants and multiple keratinocytic cancers [168]. In a new and larger international collaboration these findings are being re-evaluated. We have presented the first GWAS on actinic keratosis [169]. Several skin color genes such as IRF4, MC1R, ASIP and BCN2 were significantly associated with these premalignant skin lesions independently of skin color. Using compound heterozygosity analysis, several other pigment-related genes were identified for AK [170]. In a candidate gene study in almost 6000 people, we confirmed known variants and identified new variants associated with digitally extracted skin colour. Of the two new skin color genes, the genetic variants in UGT1A were significantly associated with hue and variants in BNC2 were significantly associated with saturation [171]. In the International Visible Trait Genetics Consortium, we identified novel pigmentation genes confirmed by functional follow-up [172]. Several pigmentation genes were also significantly associated with the presence of pigmented facial spots in a GWAS [169]. Among over 3000 individuals several components of skin aging have been investigated. The most recent finding is a study showing that individuals carrying the homozygous MC1R risk haplotype looked on average up to 2 years older than non-carriers [173]. Also, we have demonstrated that the digitally extracted wrinkle area from facial 3D photos was higher in men (median 4.5%, [interquartile range (IQR): 2.9-6.3]) than in women (3.6%, [IQR 2.2-5.6]). Age was the strongest determinant, and current smoking and lower body mass index were also statistically significantly associated with increased wrinkling. Pale skin color showed a protective effect and, in men, sunburn tendency was associated with less wrinkling. In women, low educational levels and alcohol use were associated with more wrinkling, while female pattern hair loss and a higher free androgen index were associated with less wrinkling [174]. The psoriasis patients within the Rotterdam Study have predominantly mild disease. The distribution of subclinical atherosclerosis measures, as well as cardiovascular events, was comparable between the 262 psoriasis patients and the reference population [175]. However, psoriasis patients were significantly more likely to have signs of non-alcoholic fatty liver disease based on ultrasonography than their controls after adjusting for potential confounders [176]. Moreover, psoriasis patients were more likely to have liver fibrosis than controls when comparing FibroScan data [177].
Endocrine diseases Objectives The main objective of the programme of endocrine epidemiology research is to study the frequency and etiology of major disorders of the endocrine glands (pituitary, reproductive, thyroid, parathyroid, adrenal, and neuro-endocrine pancreas). These include diabetes mellitus and hypo- and hyperthyroidism. The evaluation of risk factors for the above-mentioned conditions includes serum measurements (such as classical hormones and other endocrine molecules) and genetic determinants of endocrine diseases and traits. In addition, consequences of these endocrine disorders are studied in relation to mortality and aging-related diseases, including cardiovascular disease, eye diseases, skin diseases, neurocognitive decline and cancer. Major findings We demonstrated that high-normal thyroid function is associated with an increased risk of atrial fibrillation [46] and subsequently showed that higher FT4 levels are associated with an increased risk of sudden cardiac death (SCD), even in euthyroid participants [178]. The absolute 10-year risk of SCD in euthyroid participants increased from 1 to 4% from low-normal to high-normal FT4 levels. Higher thyroid function does not only have negative consequences for the cardiovascular system, since we also showed that higher thyroid function is associated with an increased risk of kidney function decline [179], an increased risk of any solid, lung, and breast cancer [180], as well as an increased risk of AMD [181]. Finally, high and high-normal thyroid function is also associated with an increased risk of developing depression in the elderly [182] and with an increased dementia risk [183]. Interestingly, thyroid function is not related to vascular brain disease as assessed by MRI, suggesting a role for thyroid hormone in non-vascular pathways leading to dementia. Whereas these data suggest that higher thyroid function can be detrimental during the aging process, other studies have shown negative consequences of lower thyroid function as well. We recently showed that lower thyroid function is associated with an increased risk of NAFLD [184], and that low and low-normal thyroid function are risk factors for incident diabetes, especially in individuals with prediabetes [185]. In previous studies we had already demonstrated that subclinical hypothyroidism is also an independent risk factor of atherosclerosis and myocardial infarction in older women [63]. Also for gait, both low and high thyroid function are associated with alterations in global gait, tandem walk, base of support and velocity [186]. Future studies will focus on the challenge of defining optimal thyroid function for relevant clinical outcomes and on determining which subgroups need specific reference ranges. As part of the Thyroid Studies Collaboration, we recently published four individual-participant data analyses. By analyzing individual participant data from 13 prospective cohorts (70,298 participants) we demonstrated that subclinical hyperthyroidism is associated with an increased risk of hip and other fractures, particularly among those with TSH levels of less than 0.10 mIU/L and those with endogenous subclinical hyperthyroidism [187]. An analysis combining data from 17 cohorts and led by the Rotterdam Study did not show a higher risk of stroke with subclinical hypothyroidism except in participants younger than 50 years of age [188], whereas higher levels of TSH within the reference range may decrease the risk of stroke [189].
A combined analysis in 14 cohorts focusing on risk of coronary heart disease showed no relationship between TSH levels within the reference range and the risk of CHD events or CHD mortality [190]. Much of this research is made possible by large-scale collaboration in consortia, some of which focus on one particular disease or trait while others are more broad-spectrum strategic collaborations (e.g., CHARGE, ENGAGE). We are part of several such large consortia studying genetic and epidemiological risk factors for diabetes (MAGIC) and thyroid disease (CHARGE and TSC). Major GWAS findings The main factors that influence the relationship between thyroid hormone and concentrations of TSH in our population-based cohort study are age, smoking, BMI, TPOAb levels, and common genetic variants [191]. In a meta-analysis of GWAS data on TSH and free T4 levels derived from up to 26,000 subjects, 26 loci were identified, explaining 2-5% of the genetic variation of TSH and fT4, respectively [192]. There was only limited overlap between the loci for TSH and fT4, and evidence was obtained for 5 loci to have sex-specific effects. A GWAS meta-analysis focusing on TPO autoantibodies (an important clinical marker for the detection of early AITD) in 16 cohorts identified five newly associated loci, three of which were also associated with clinical thyroid disease. With these markers we identified a large subgroup in the general population with a substantially increased risk of TPOAbs [193]. A follow-up study identified 4 additional associated loci and provided further insight into the genetic underpinnings of hypothyroidism. A Genetic Risk Score showed strong and graded associations with markers of thyroid function and disease in independent population-based studies [194]. Methods update Several specific biomarker assessments in blood/serum/plasma and urine are done for the diagnosis and evaluation of risk factors of endocrine and metabolic diseases (e.g., glucose, TSH, free T4). Fasting blood samples are collected along with challenged samples as part of a glucose tolerance test. Saliva is collected before and after a dexamethasone-suppression test. Finally, validated questionnaires evaluating nutrient intake (e.g., calcium and vitamins) and activities of daily living allow us to evaluate the role of environmental factors in endocrine conditions and diseases of the elderly. Locomotor diseases Objectives The main objective of the program of locomotor epidemiology research is to study the frequency and etiology of major disorders of the musculoskeletal system, including osteoporosis (OP), osteoarthritis (OA), sarcopenia and chronic musculoskeletal pain. The evaluation of risk factors for the above-mentioned conditions includes genomic determinants; serum biomarkers; nutrients; anthropometrics; imaging of bones and joints by X-ray and MRI; and densitometry and body composition quantification by DXA and pQCT. In addition, these locomotor conditions are studied in the context of other aging-related metabolic diseases, including cardiovascular disease and diabetes. Such deep musculoskeletal phenotyping makes the Rotterdam Study a unique resource to study determinants of OP, OA, sarcopenia, and chronic pain and constitutes one of the largest such datasets in the world.
Osteoporosis and bone health We have obtained digitized X-rays for many participants at several time points of follow-up, and have applied three different methods to score vertebral fractures: quantitative morphometry (QM), semi-quantitative morphometry (SQ), and the algorithm-based qualitative (ABQ) method [198]. A recent comparison of QM assisted by SpineAnalyzer (SA) software and ABQ showed that vertebral fracture prevalence differed substantially between the methods, with similar findings reported by the Canadian working group on vertebral fracture assessment of the CaMos study. Vertebral deformities misclassified as fractures, typically those classified by SA-QM as mild (Grade 1), drastically inflate the prevalence and are partly responsible for the observed differences across methods. Re-examining SA-QM grade 1 findings by assessing endplate depression (the ABQ hallmark) helps discriminate deformities from real fractures. Therefore, we proposed that this approach be implemented in radiological clinical practice, helping practitioners to better assess the indication for osteoporosis therapy [198]. We determined the relationship between metabolic syndrome and bone health [199], establishing that, in contrast to T2D, no association with fracture risk was identified, even though, among the metabolic syndrome components, glucose levels were associated with high FN-BMD, highlighting the need to preserve glycemic control to prevent skeletal complications. Further, we have looked at the relationship between uric acid (UA) and bone health outcomes [200], showing that higher levels of serum UA are associated with higher BMD (at the expense of thicker bone cortices and narrower bone diameters), also in interaction with age and vitamin C intake. The relationship between bone health and nutritional factors has been extensively examined within the Rotterdam Study. In relation to specific nutrients, we established a plausible favorable relation between high dietary vitamin A intake and fracture risk in overweight subjects [201]. We also determined that a diet high in acid-forming nutrients (e.g., proteins) may be detrimental to bone health in participants with a high intake of dietary fibre [202]. Further, we identified dietary patterns influencing bone health: beneficial effects on BMD were seen with 'Health conscious' patterns, whereas 'Processed food' patterns indicated potential susceptibility to low BMD [117]. In addition, we could establish how specific patterns are associated with bone configurations influencing fracture susceptibility [118]. Finally, we developed a food group-based score translated into a BMD-Diet score, capable of profiling food groups associated with higher or lower BMD levels, with great potential to be adapted into dietary guidelines focused on promoting healthy aging [203]. Although extreme phosphate levels have been associated with mineralization defects and increased fracture risk, it was not known whether phosphate levels within the normal range are related to bone health in the general population. In the Rotterdam Study we found that serum phosphate was positively related to fracture risk independently of BMD and phosphate intake, after adjustment for potential confounders, and these findings were replicated in the US Osteoporotic Fractures in Men (MrOS) study [204]. Phosphate and lumbar spine, but not femoral neck, BMD were negatively related in men only.
Our findings suggest that higher phosphate levels, even within the normal range, might be deleterious for bone health in the general population. Osteoarthritis Over the last years, we have scored all knee, hip and hand radiographs of RS-I, RS-II and RS-III for osteoarthritic features, including up to 20 years of follow-up radiographs. In addition, we have (bilateral) knee MRI images available for a subset (approximately 1000 individuals) of RS-III, including a longitudinal follow-up MRI after 6 years. In addition, pain sensitivity measurements have been performed, including a quantitative assessment of heat sensitivity on the arm using a standardized device (TSA-II neurosensory analyzer, Medoc), and indications of (wide-spread) pain in any part of the body using a manikin. Over the last 2 years several established and novel risk factors for OA were examined. No clear association between serum vitamin levels and prevalent, incident or progressive knee, hip or hand OA was observed in the Rotterdam Study and a subsequent meta-analysis [205]. We showed for the first time that a marker of tissue inflammation, matrix metalloproteinase-dependent degradation of C-reactive protein (CRPM), predicts the risk of OA progression. This risk was independent of the established biomarkers uCTX-II and COMP [206]. Biomarkers of atherosclerosis were not related to progression of knee osteoarthritis [207]. Furthermore, individuals with cam deformity and those with acetabular dysplasia, two hip shape deformities, were shown to be at higher risk for developing OA; these associations were independent of other well-known risk factors [208]. RNA expression in blood was found to associate with peripheral inflammation in the knee, as measured by joint effusion [209]. A large-scale transcriptome-wide study of muscle strength in human adults identified a total of 221 genes whose circulating expression levels were associated with muscle strength. This study confirmed associations with known pathways involved in muscle and provides new evidence for over half of the genes identified [210]. Chronic musculoskeletal pain The relationship between the presence of chronic pain and brain volumetrics was studied in the largest study to date. Grey matter volumes of the temporal and frontal lobes and the hippocampus were found to be smaller in women with pain compared to those without pain, indicating involvement of emotional processing. The volumetric differences found indicated a sex-specific neuroplasticity in chronic pain [211]. Lower sex hormone levels were found to be associated with chronic musculoskeletal pain in women, independent of lifestyle and health-related factors, suggesting that sex hormones play a role in chronic pain and should be taken into account when a patient presents with chronic pain [212]. Chronic joint pain in the lower body was found to be associated with gait differences independent of radiographic osteoarthritis, indicating that gait assessment may help distinguish individuals with OA from those having pain due to other causes [213]. Indeed, asymptomatic radiographic hip osteoarthritis was found to be associated with gait differences [214], especially in women. Central sensitization, as measured by thermal quantitative sensory testing (QST), was shown to be present in community-dwelling elderly individuals suffering from self-reported chronic pain. In addition, several determinants influencing thermal QST measurement were identified [215].
Major GWAS findings In a meta-analysis of more than 21,000 individuals, we identified six loci associated with cartilage thickness, a so-called endophenotype for osteoarthritis [216]. The four most prominent novel associated genetic loci were located in/near TGFA (rs2862851), PIK3R1 (rs10471753), SLBP/FGFR3 (rs2236995), and TREH/DDX6 (rs496547), while the other two (DOT1L and SUPT3H/RUNX2) had been identified previously. Exome sequencing data (n = 2050 individuals) indicated that there were no rare exonic variants that could explain the identified associations. This is the first report linking TGFA to human OA; TGFA may serve as a new target for future therapies. In addition, we identified a variant in the protein kinase C gene associated with neuropathic pain symptoms after total joint replacement [217]. Within an international consortium we performed a meta-analysis of GWA studies for whole-body lean mass, which consists primarily of skeletal muscle mass, and found five genetic loci to be significantly associated. The loss of lean mass with aging, which may lead to a condition called 'sarcopenia', is associated with physical disability, falls and fractures, poor quality of life and death [218]. In the field of osteoporosis, through leading participation in international consortia, we identified less-frequent variants in EN1, the first gene identified by combining whole-genome sequencing and GWAS in the field of osteoporosis [219]. Similarly, the Rotterdam Study was part of the first epigenome-wide association study in relation to BMD [220]. Furthermore, we co-led the discovery of rare coding variants influencing human stature, identified in a meta-analysis comprising more than 700,000 individuals [221]. Liver diseases Objectives The objective of liver research in the Rotterdam Study is to establish the prevalence, incidence, risk factors and prognosis of liver diseases in the general population. The two main liver traits of interest are non-alcoholic fatty liver disease (NAFLD) and liver fibrosis. NAFLD is considered the hepatic manifestation of the metabolic syndrome and has become the most common chronic liver disease in Western countries, in parallel with the epidemics of obesity and type 2 diabetes mellitus. NAFLD comprises the spectrum from simple steatosis (i.e. fatty liver) to non-alcoholic steatohepatitis (i.e. NASH, due to hepatic inflammation), fibrosis, cirrhosis, liver failure and hepatocellular carcinoma. It is estimated that about 25% progress to NASH and to more severe stages thereafter [222]. In high-risk populations with metabolic syndrome and obesity, NAFLD appears prevalent in up to 70% [223], a very worrisome trend indeed. Despite over 500 ongoing clinical trials in NAFLD and NASH (www.clinicaltrials.gov), no drug has yet been registered for use in NAFLD patients. Hence the cornerstone of treatment continues to consist of nonspecific lifestyle modifications through weight loss and exercise. We aim to study to what extent the following factors play a role in NAFLD occurring in the general, and hence unselected, population: components of the metabolic syndrome, obesity, dietary composition, dietary patterns, body composition and sarcopenia, gut microbiome, genetic predisposition and cardiovascular morbidity. With this, we aim to gain more insight into the pathogenesis and provide a rationale for more specific lifestyle interventions.
Fibrogenesis of the liver is most probably not only the result of well-known liver diseases, such as viral hepatitis, alcoholic liver disease or NAFLD, but rather the result of a complex interaction between genetic predisposition and these liver disorders. Liver research in the Rotterdam Study will concern the association between these known causes of liver disease and the occurrence, magnitude, and progression of fibrosis, in combination with genetic and environmental factors. Abdominal ultrasound From February 2009 onwards (cohorts RS-I-5, RS-II-3, RS-III-2, RS-II-4 and the currently ongoing RS-IV-1), trained technicians perform abdominal ultrasonography in Rotterdam Study participants. The liver parenchyma, biliary tract, gall bladder, spleen, pancreas and kidneys are evaluated in combination with Doppler examination of the hepatic veins, hepatic artery and portal vein. All images are stored digitally and are re-evaluated by an expert hepatologist trained in hepatic ultrasonography. Assessment of steatosis The diagnosis and grading of liver steatosis are based on ultrasonographic liver brightness, hepatorenal echo contrast, deep attenuation and vessel blurring [224]. Non-alcoholic fatty liver disease is diagnosed by the presence of hepatic steatosis on ultrasound and the exclusion of excessive alcohol consumption, viral hepatitis, use of steatogenic agents and recent bariatric surgery. Assessment of fibrosis Ultrasonographic evaluation of the liver parenchyma and liver surface is performed in order to assess severe fibrosis and/or cirrhosis. Additionally, sonographic signs of portal hypertension are studied (i.e. splenomegaly, venous collaterals, portal vein diameter and flow, hepatic venous flow, and the presence of ascites). To assess and quantify the grade of fibrosis, trained technicians perform transient elastography in all participants using the FibroScan. This test measures liver stiffness non-invasively and quantitatively using an ultrasonic transducer that transmits a vibration wave through the liver; the velocity of the wave is converted into a stiffness value expressed in kPa, which correlates directly with liver tissue stiffness and, ultimately, the degree of liver fibrosis [225,226]. Determinants of interest The associations between factors known to influence liver function and the occurrence of steatosis and fibrosis are being studied. Additionally, the associations of these conditions with age, gender, nutritional intake, concurrent alcohol intake, (risk factors for) viral hepatitis, BMI, waist-to-hip ratio, serum glucose, insulin, and diabetes mellitus, hypertension, serum cholesterol, triglycerides, dietary composition, macronutrients, dietary patterns, sarcopenia, body composition, and gut microbiome are investigated. All clinical information is obtained by interview (updated with liver-specific questions) and clinical examination. More recent efforts are focused on identifying common genetic variants associated with liver steatosis and/or fibrosis. Main findings We found a high prevalence of NAFLD of 35.1% within the Rotterdam Study population [227]. The main risk factors for NAFLD were found to be age, decreased physical activity level, smoking, increased waist circumference, glucose intolerance, hypertension, and hyperlipidemia. Conversely, the risk of NAFLD seems to decrease after statin therapy [228].
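The NAFLD definition given above (sonographic steatosis in the absence of secondary causes) can be written as a simple exclusion rule. The Python sketch below is illustrative only: the alcohol cut-offs of 30 g/day for men and 20 g/day for women are commonly used limits assumed for the example rather than values quoted from this article, and the parameter names are hypothetical.

    def is_nafld(steatosis_on_ultrasound: bool,
                 alcohol_grams_per_day: float,
                 male: bool,
                 viral_hepatitis: bool,
                 steatogenic_medication: bool,
                 recent_bariatric_surgery: bool) -> bool:
        # NAFLD requires hepatic steatosis on ultrasound plus exclusion of
        # excessive alcohol use, viral hepatitis, steatogenic agents and
        # recent bariatric surgery.
        alcohol_limit = 30.0 if male else 20.0  # assumed cut-offs in g/day
        excessive_alcohol = alcohol_grams_per_day > alcohol_limit
        return (steatosis_on_ultrasound
                and not excessive_alcohol
                and not viral_hepatitis
                and not steatogenic_medication
                and not recent_bariatric_surgery)

    # Hypothetical example: steatosis on ultrasound, modest alcohol use, no other causes.
    print(is_nafld(True, 10.0, male=True, viral_hepatitis=False,
                   steatogenic_medication=False, recent_bariatric_surgery=False))  # True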
Furthermore, using our ultrasound data as the reference, we examined the performance of the well-known fatty liver index (FLI, based on waist circumference, BMI, triglyceride and gamma-glutamyltransferase (GGT) levels) in the Rotterdam Study population, and found that the FLI is a highly valid tool to predict NAFLD [229]. In another study, we found that all serum liver enzymes are related to all-cause mortality, as well as specifically to cardiovascular (GGT) and cancer-related (alkaline phosphatase and aspartate aminotransferase) mortality [230]. Moreover, we have examined the role of genetic factors in the multifactorial etiology of liver fibrosis, and found, for example, that a single nucleotide polymorphism (SNP) in the interferon gamma receptor 2 gene, a pro-inflammatory gene known to be associated with progression to liver fibrosis in chronic hepatitis C patients, was also related to liver stiffness in Rotterdam Study participants [231]. Recently, we found that coffee consumption of three cups or more per day, which had been found to be beneficial in certain chronic liver diseases and liver fibrosis [232], appeared to be associated with lower liver stiffness values in the general population as well [233]. At this moment, we are investigating differences in dietary composition (macronutrients) and dietary patterns, body composition and differences in gut microbiota between NAFLD and non-NAFLD participants. Moreover, more studies are currently underway to look at known and unknown genetic and epigenetic factors for liver stiffness and NAFLD. For additional EJE references please see [234,235]. Neurological diseases Objectives Neuroepidemiologic research in the Rotterdam Study focuses on the frequency, etiology and early recognition of the most frequent neurologic diseases in the elderly. We study neurodegenerative diseases (dementia, including Alzheimer disease, and Parkinson disease), cerebrovascular disease (both ischemic stroke and intracerebral hemorrhage as well as transient ischemic attacks), migraine and polyneuropathy. In all of these disorders clinical symptoms typically become manifest late in the disease course, the occurrence of clinical disease does not reflect the underlying spectrum of disease-related pathology, and most of the clinical syndromes are etiologically heterogeneous. Therefore, an additional research focus is on the causes and consequences of pre-symptomatic (brain) pathology that can be assessed with non-invasive modalities, which include MR imaging, cognitive testing, gait assessment, and electromyography (EMG). Major findings In recent years, we have published contemporary data on the incidence of these major neurological diseases. We were the first to show a declining incidence of dementia [236], and in recent papers we have demonstrated similar trends for stroke [237] and Parkinson disease [238]. We have also published on the prevalence of polyneuropathy [239], showing that 5.5% of the general population suffers from this disease, which goes unrecognized in almost half of these persons. We have also published normative data for various pre-clinical markers, including cognition [240], gait [241], and various MRI markers [242][243][244]. One of the main areas of focus in recent years has been understanding how brain pathology affects motor function, with a special emphasis on gait. We have shown strong and specific associations of gait with cognition [245], DTI markers [246] and daily functioning [247].
Ongoing work regarding gait includes its longitudinal associations with clinical diseases, including stroke, dementia and Parkinson's disease. Interestingly, using a different test we have already shown motor function to be a predictor of dementia onset over a 9-year period [248]. Moreover, we have also made several contributions towards understanding the etiology of Parkinson's disease [249][250][251]. Similarly, we have now published on several determinants of polyneuropathy [259,260] and migraine [261]. In coming years we will be seeking to develop a research line on epilepsy. Given our longstanding interest in unraveling the etiology of neurodegenerative diseases, our current work also involves leveraging the longitudinal and repeated data collection from the Rotterdam Study to investigate trajectories of various pre-clinical markers and disentangle the patterns of how those relate to incident disease [262][263][264]. In the field of neurogenetics, we have contributed to or led several conventional GWAS efforts as well as more state-of-the-art genomics to discover novel genetic loci for neurologic diseases and their endophenotypes [265][266][267][268][269]. Finally, we are actively investigating how findings on the etiology of neurologic diseases can be translated towards public health issues on prevention [270,271] as well as clinical needs regarding prediction [254,272,273] and possibly even interventional studies [274].
Assessment of dementia and Alzheimer disease
In the baseline and follow-up examinations participants undergo an initial screen for dementia with the Mini Mental State Examination (MMSE) and the Geriatric Mental Schedule (GMS), followed by an examination and informant interview with the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX) in screen-positives (MMSE < 26 or GMS > 0), and subsequent neurological, neuropsychological and neuroimaging examinations [275,276]. For subjects who cannot be re-examined in person, information is obtained from the GPs and the regional institute for outpatient mental health care. A consensus panel makes the final diagnoses in accordance with standard criteria (DSM-III-R criteria; NINCDS-ADRDA; NINDS-AIREN).
Assessment of Parkinsonism and Parkinson disease
Participants are screened in the baseline and follow-up examinations for cardinal signs of parkinsonism (resting tremor, rigidity, bradykinesia, or impaired postural reflexes). Persons with at least one sign present are examined with the Unified Parkinson's Disease Rating Scale and a further neurologic exam. PD is diagnosed if two or more cardinal signs are present in a subject not taking antiparkinsonian drugs, or if at least one sign has improved through medication, and when all causes of secondary parkinsonism (dementia, use of neuroleptics, cerebrovascular disease, multiple system atrophy, or progressive supranuclear palsy) can be excluded [277].
Assessment of stroke and stroke subtypes
History of stroke at baseline was assessed through interview and verified in medical records. Putative incident strokes are identified through linkage of the study database with files from general practitioners, the municipality, and nursing home physicians' files, after which additional information (including brain imaging) is collected from hospital records. A panel discusses all potential strokes and subclassifies them into ischemic, hemorrhagic or unspecified [278,279]. We also systematically collect transient ischemic and neurological attacks [280].
Assessment of cognitive function
Global cognitive function is measured with the Mini Mental State Examination (MMSE) in all surveys. From the third survey (RS-I-3) onwards we added a 30-min test battery that was designed to assess executive function and memory function, and which includes a Stroop test, a Letter Digit Substitution Task, a Word Fluency Test, and a 15-word Word List Learning test. This test battery was expanded from the fourth survey onwards (RS-I-4) to include motor function assessment using the Purdue Pegboard Test. Moreover, from 2009 onwards we expanded further by including the Design Orientation Test (DOT) and a modified version of the International Cooperative Ataxia Rating Scale (ICARS), which assess visuo-spatial orientation and ataxia, respectively [240,281,282].
Assessment of gait patterns
Halfway through RS-III-1, we successfully implemented the assessment of gait in all participants using the GAITRite walkway (http://www.gaitrite.com/). Gait is assessed using a 5.79 m long walkway (GAITRite Platinum; CIR Systems, Sparta, NJ, USA: 4.88 m active area; 120 Hz sampling rate) with pressure sensors. Participants perform a standardized gait protocol consisting of three different walking conditions: normal walk, turning and tandem walk. In the normal walk, participants walk over the walkway at their own pace. This walk is repeated four times in both directions (yielding a total of 8 recordings). In turning, participants walk over the walkway at their own pace, turn halfway and return to the starting position (1 recording). In the tandem walk, participants walk tandem (heel-to-toe) over a line visible on the walkway (1 recording). A total of 30 spatiotemporal gait variables are calculated by the walkway software and downloaded offline for further analysis. Subsequently, principal components analysis on these thirty gait variables is performed to derive summarizing factors, referred to as gait domains. The following gait domains are used: Rhythm, Pace, Phases, Base of Support, Variability, Tandem, and Turn. Gait domains can be compared to cognitive domains, in which each domain reflects a different aspect of the overall concept [241]. Two years ago we added another walk to our protocol, namely a dual-task walk, in which participants perform a difficult calculation while walking over the walkway. The aim of this walk is to compare it with the original normal walk, thereby quantifying the amount of central (cognitive) interference with gait.
Assessment of polyneuropathy
Starting in January 2013, we have successfully implemented a protocol to assess polyneuropathy [239]. This comprises a full work-up including a questionnaire, a neurological exam, and EMG in all participants. In coming years, we will publish on the prevalence, risk factors, and clinical correlates of polyneuropathy in the general population. The continuous measures of conductivity obtained through EMG can also serve as an excellent endophenotype for genetic and biomarker studies.
Assessment of migraine
Migraine is assessed using a validated questionnaire and includes information on aura, severity, and duration of migraine [283].
Rotterdam Scan Study: brain imaging within the Rotterdam Study
In 1991, a random sample of 111 participants underwent axial T2-weighted magnetic resonance (MR) imaging to assess the presence and severity of white matter lesions [284]. In 1995, a random sample of 563 non-demented participants underwent brain MR imaging in the context of the Rotterdam Scan Study.
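To make the gait-domain construction described above more concrete, the sketch below summarizes a matrix of 30 spatiotemporal gait variables into seven orthogonal components with principal components analysis. The data and variable handling are hypothetical, and the Rotterdam Study derives its actual domains (Rhythm, Pace, Phases, Base of Support, Variability, Tandem, Turn) with its own, more elaborate pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: one row per participant, one column per gait variable
# (e.g. cadence, stride length, double-support time, step-width variability, ...).
rng = np.random.default_rng(0)
gait_vars = rng.normal(size=(500, 30))           # placeholder for real walkway output

# Standardize, then extract orthogonal components ("gait domains").
z = StandardScaler().fit_transform(gait_vars)
pca = PCA(n_components=7)                         # seven domains are used in the text
domains = pca.fit_transform(z)                    # per-participant score on each domain

print(pca.explained_variance_ratio_.round(2))     # variance captured by each domain
print(domains.shape)                              # (500, 7)
```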
From August 2005 onwards (RS-II-2 and further), a dedicated 1.5 Tesla scanner has been operational in the research center of the Rotterdam Study, and brain imaging is performed in all study participants without contra-indications [285]. Currently, the follow-up of this latter sample extends up to 12 years (see the further section on population imaging).
Ophthalmic diseases
Objectives
Ophthalmic research in the Rotterdam Study focusses on the occurrence, causally related determinants, and predictors of common eye diseases. Our main focus is on age-related macular degeneration (AMD), glaucoma, and myopia, and particularly in the last few years we investigated genetic risk variants and pathways. To this end, we connected with many other epidemiologic studies in all parts of the world and formed large international consortia.
Age-related macular degeneration (AMD)
AMD has been genetically dissected for the most part, and the past 2 years were geared towards understanding the genetic effects and their role in AMD pathogenesis. With the IAMDGC consortium, we analyzed 33,000 participants and identified 52 independently associated common and rare variants distributed across 34 loci [304]. Many of these loci harbored novel genes, and aside from many common variants, various rare variants were identified. The genes in the complement cascade as well as ARMS2 remained the major genes. A subsequent exercise of IAMDGC was to evaluate pleiotropy of the AMD risk variants, and it was found that at least 16 disorders show substantial genetic overlap with AMD [305]. In our own Rotterdam cohort, we used the findings from IAMDGC to investigate genetic variants in miRNAs and miRNA-binding sites [306]. We identified variants in miRNAs (miR-4513; miR-3591; miR-3135b), and 54 variants in miRNA-binding sites associated with AMD. Experimentally, we showed that miR-210-5p influences expression of CFB. These findings are exciting as they point to potential targets that can control the complement pathway, and halt AMD progression. Apart from genetics, we also studied phenotypic associations and the course of disease. Together with two other population-based studies (3CC), we found that 19-28% of unilateral any AMD became bilateral within 5 years, and 27-68% of unilateral late AMD became bilateral during that time [307]. Smoking and carriership of genetic risk variants increased progression rates substantially. We also investigated retinal pseudodrusen, a distinct AMD lesion, in more detail [308]. Of the Rotterdam Study participants, 5% had these lesions, women twice as often as men, as did carriers of certain genotypes.
Myopia (nearsightedness)
We continued our research in the field of refractive errors and myopia in the CREAM consortium. This time we performed a joint meta-analysis to test gene-environment interaction effects, and identified six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error [309]. In Asian populations, three genome-wide significant loci, AREG, GABRR1 and PDE10A, also exhibited strong interactions with education. These findings clearly show that genes for refractive errors need environmental triggers in order to have a significant effect. We were also interested in the susceptibility period for refractive error genes. We therefore investigated the association between age-of-onset of variants at our previously identified loci and refractive error in various cohorts of different ages, including the Rotterdam Study [310].
Specific variants could be categorized as showing evidence of: (a) early-onset effects remaining stable through childhood, (b) early-onset effects that progressed further with increasing age, or (c) onset later in childhood. This shows that most genes in a complex trait such as refractive error do not have a continuous effect, but rather act during a specific age period. Next steps in myopia research will include gene finding in very large data sets (> 100,000), identification of pathways, and the search for leads for intervention.
Primary open-angle glaucoma (POAG)
The glaucoma research entailed gene finding as well as the study of associations with glaucoma parameters. The latter included the study of intraocular pressure (IOP) across Europe in the E3 consortium (N = 43,500) [311]. Higher IOP was observed in men and in participants with a higher body mass index, shorter height, higher systolic blood pressure, and more myopic refraction. An inverted U-shaped trend was observed between age and IOP, with IOP increasing up to the age of 60 and decreasing in participants older than 70 years. Gene finding was performed in the IGGC consortium. We conducted a genome-wide association meta-analysis of IOP and optic disc parameters and validated our findings in multiple sets of POAG cases and controls. We identified 9 new loci for vertical cup-disc ratio (VCDR), 1 for IOP, 5 for optic nerve cup area, and 6 for disc area. Some genomic regions affected both IOP and the disc parameters. Furthermore, we identified a novel association between CDKN1A and POAG, statistically as well as functionally in a zebrafish model. We also evaluated sequence variations in the myocilin (MYOC) gene, a gene that accounts for approximately 2-4% of glaucoma cases [312]. Mutation Gln368Stop in this gene is known to increase intraocular pressure. We found that this variant was also very frequent among unaffecteds from the TwinsUK and Rotterdam Study samples (12.5 and 19.4%, respectively). This showed that this seemingly functional variant may not have such large effects as previously thought. Finally, we investigated the performance of a new reference panel, the Haplotype Reference Consortium (HRC), for imputation of genetic variants [313]. We showed that imputation using the HRC panel improved the concordance between assayed and imputed genotypes at common and, especially, low-frequency variants. HRC imputation significantly improved P values for genetic associations with glaucoma parameters; thus our next step is to continue gene discovery using HRC in very large data sets of multi-ethnic origin.
Retinal vasculature
We also continued this line of research and investigated the meaning of vessel diameter in the retina for pathology in other parts of the body, in particular the brain [314][315][316][317][318]. Retinal vessel calibers were associated with enlarged perivascular spaces in the brain and with white matter microstructure. Interestingly, retinal vessel caliber was also associated with survival, vitamin D, and N-terminal pro-B-type natriuretic peptide, a protein associated with ischemia. This indicates that retinal vessel diameters are important biomarkers for the vascular status elsewhere in the body, and may predict life expectancy.
Methods update
At baseline and follow-up examinations, participants undergo ophthalmic measurements including best-corrected ETDRS visual acuity, refractive error, Goldmann applanation tonometry, keratometry, slit lamp examination of the anterior segment, and visual field testing.
After pharmacological mydriasis, we make 35° color photographs of the macular area, and 20° simultaneous stereoscopic imaging of the optic disc and macular area using stereoscopic digital imaging (Topcon camera). We image retinal layers at the macula and optic disc with Fourier (spectral) domain 3D optical coherence tomography (Topcon), measure axial length and biometry of the cornea, anterior chamber, lens, posterior chamber, and retina with the Lenstar (Haag-Streit), and perform fundus autofluorescence, infra-red and red-free measurements with the Heidelberg device. For the newest cohort (RS-IV-1), we have added corneal topography measurements (Pentacam; Oculus), and replaced visual field screening by Frequency Doubling Technology C20-2 (Carl Zeiss Meditec). The classification of AMD, POAG, refractive error, and retinal vessel diameters remains unchanged.
Psychiatric epidemiology
Objectives
The aim of the psychiatric research in the Rotterdam Study is to investigate the determinants, correlates and consequences of common psychiatric problems in the elderly. The focus lies on studies of depressive and anxiety disorders, sleep disturbances, and complicated grief.
Study design update
Since 1994 (RS-I-2), most participants in the Rotterdam Study have been screened for depressive symptoms, and from the third examination (RS-I-3, 1997-1999) onwards, depressive disorders have been ascertained systematically. Assessments of anxiety disorders, sleeping disturbances, and complicated grief were added in the subsequent examination (RS-I-4) and have been performed in all follow-up visits of the original and added cohorts. Other additions to the protocol included a screening for psychotic symptoms in one cohort (RS-III) and, from January 2012 to October 2014, ambulatory polysomnography. In a subsample, taedium vitae was assessed. The most recently introduced assessments include sexual activity, aggression and neuroticism.
Major determinants
Psychiatric research in the Rotterdam Study focuses on biological risk factors. The vascular depression hypothesis was tested with different measures of atherosclerosis, arterial stiffness and cerebral blood flow [323]. We examined whether blood levels of vitamins and fatty acids, immune parameters, and markers of folate metabolism increased the likelihood of depression [324]. Diurnal patterns of cortisol secretion were studied, and recently we performed a low-dose dexamethasone test to assess the negative feedback of the hypothalamic-pituitary-adrenal (HPA) axis [325]. Moreover, several GWAS were conducted in collaborative efforts focussing on depressive symptoms, sleep, anxiety and cortisol [326][327][328]. Several, mostly cross-sectional, studies of brain morphology as possible determinants and correlates of common psychiatric disorders were completed [329]. Also, psychiatric problems and psychological traits such as happiness, sleep duration, and depression are increasingly investigated as determinants of health and mortality [330,331].
Major clinical outcomes
Information on depression is obtained from (a) psychiatric examinations, (b) self-reported histories of depression, (c) medical records, and (d) registration of antidepressant use [332]. The psychiatric examination during each visit consists of screening with the Center for Epidemiologic Studies Depression Scale (CES-D) and, in screen-positive participants, a semi-structured interview performed by a trained clinician (Schedules for Clinical Assessment in Neuropsychiatry).
To continuously monitor the incidence of depression throughout follow-up, trained research assistants scrutinize the medical records of general practitioners and copy all information mentioning depressive symptoms. The following anxiety disorders are assessed with a slightly adapted Munich version of the Composite International Diagnostic Interview: generalized anxiety disorder, specific and social phobia, agoraphobia without panic disorder, and panic disorder [333]. In addition, the HADS-A is used to assess anxiety traits continuously. Sleep quality and disturbance are measured with the Pittsburgh Sleep Quality Index. In addition, sleep duration and fragmentation are assessed with actigraphy, a method that infers wakefulness and sleep from the presence or absence of limb movement [334]. In total, nearly 2000 persons participated in this actigraphy study: they wore an actigraph and kept a sleep diary for, on average, six consecutive nights. Follow-up actigraphy assessments in these participants have been conducted. Ambulatory polysomnographic (PSG, i.e., full sleep EEG) recordings of one night have been conducted in 940 participants. We scheduled home visits during which a research assistant placed the sensors to record an ambulant PSG (Vitaport 4; Temec, Kerkrade, the Netherlands). The PSG included six EEG channels, bilateral electrooculography, electromyography, electrocardiography, respiratory belts on the chest and abdomen, oximetry, and a nasal pressure transducer and oronasal thermocouple to measure airflow [335]. All recordings were scored according to American Academy of Sleep Medicine guidelines by a registered sleep technologist. Recordings were manually scored in 30-s epochs for identification of sleep stages; each epoch was scored as Wake, N1, N2, N3 or REM sleep. In addition, we used PRANA (PhiTools, Strasbourg, France) software to automatically measure the microstructure of sleep, e.g. spindles and REM density. Polysomnography recordings are also used to calculate the apnea-hypopnea index. Circadian rhythms: sleep-wake activity patterns over a week are studied with actigraphy as a marker of circadian rhythms. In more than 1700 persons we calculated the interdaily stability, i.e. the stability of the rhythm over days, and the intradaily variability, i.e. the fragmentation of the rhythm [336]. The Inventory of Complicated Grief is used to identify traumatic grief. This is a condition distinct from normal grief and bereavement-related depression, characterized by symptoms like disbelief about the death and searching for the deceased.
Major findings
Depression
In a series of studies we found some evidence for the vascular depression hypothesis. More severe coronary and extra-coronary atherosclerosis was associated with a higher prevalence of depression, as were cerebral haemodynamic changes [323]. However, our data did not support a specific symptom profile of vascular depression as previously defined. Most importantly, we found no longitudinal relation between peripheral atherosclerosis and incident depression [337]. Recently, we prospectively studied cerebral vascular risk factors such as white matter lesions, silent infarcts or blood flow in relation to depression [338]. We found evidence that small vessel disease predicted the onset of depression. This suggests that atherosclerotic processes in the brain are a specific risk factor for depression.
Sleep
We investigated the relationships of sleep duration with both cardiovascular risk factors and psychiatric disorders.
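For the actigraphy-derived circadian measures described above, one common formulation of interdaily stability (IS) and intradaily variability (IV), following Van Someren and colleagues, is sketched below on hypothetical hourly activity counts; this illustrates the definitions and is not the study's own scoring software.

```python
import numpy as np

def interdaily_stability(activity, bins_per_day=24):
    """IS: how well the hourly activity profile repeats from day to day."""
    x = np.asarray(activity, dtype=float)
    n = x.size
    hourly_means = x.reshape(-1, bins_per_day).mean(axis=0)   # average 24-h profile
    num = n * np.sum((hourly_means - x.mean()) ** 2)
    den = bins_per_day * np.sum((x - x.mean()) ** 2)
    return num / den

def intradaily_variability(activity):
    """IV: fragmentation of the rhythm, based on hour-to-hour transitions."""
    x = np.asarray(activity, dtype=float)
    n = x.size
    num = n * np.sum(np.diff(x) ** 2)
    den = (n - 1) * np.sum((x - x.mean()) ** 2)
    return num / den

# Hypothetical week of hourly activity counts: a clean day-night pattern plus noise
rng = np.random.default_rng(1)
day = np.concatenate([np.full(8, 10.0), np.full(16, 200.0)])   # 8 h rest, 16 h activity
week = np.tile(day, 7) + rng.normal(0, 20, 7 * 24)
print(round(interdaily_stability(week), 2), round(intradaily_variability(week), 2))
```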
We also aimed to explain sex differences in subjective and actigraphic sleep parameters [339]. When assessed by diary or interview, elderly women consistently reported shorter and poorer sleep than elderly men. In contrast, actigraphic sleep measures showed shorter and poorer sleep in men. These discrepancies were partly explained by sleep medication use and alcohol consumption. The first results using polysomnography to measure sleep EEG suggest that REM density is a marker of depressive symptoms in the general population [335]. Other results suggest that sleep apnea and depressive symptoms are not related, although both result in fatigue [340].
Anxiety
We studied anxiety as a determinant of mortality and cardiovascular disease, and found that anxiety in the elderly does not predict physical morbidity independent of baseline health and behaviour [341]. In contrast, we could show that mild cognitive impairment is associated with incident anxiety disorders [342].
Complicated grief
In our population-based study of 5741 elderly persons, current grief was reported by 1089 participants; of these, 277 (25% of those with grief, or 4.8% of the total) were diagnosed with complicated grief, the vast majority of whom had no clinical symptoms of anxiety or depression. Persons with complicated grief were older, had a lower level of education, and more often had lost a child [343]. Recently published work suggests that complicated grief occurs together with structural brain atrophy more often than expected by chance [344].
Sexual activity
Almost half of partnered older adults engage in sexual activity and over two-thirds engage in physical tenderness, but very few unpartnered older adults engage in sexual behaviour [345]. The greatest barrier to being sexually active at older age is a lack of sexual partner availability, by which women are particularly disadvantaged. Moreover, sexual activity is strongly determined by well-being, in particular happiness rather than lack of depression [346].
Genetics of common psychiatric disorders
In the past years, we have performed a series of genome-wide association studies of the above psychiatric and psychological phenotypes, mostly as part of the CHARGE consortium and more recently as part of the Psychiatric Genomics Consortium. While initial analyses yielded no convincing genome-wide significant results because studies were strongly underpowered, more recent work with larger sample sizes, led by our group in CHARGE or as part of the PGC consortium, shows promising results for depression and depressive symptoms [326]. Finally, ongoing psychiatric research projects examine whether and how psychological well-being or psychiatric problems contribute to survival. Most importantly, we are interested in whether the effects are specific to certain behaviour or emotions, are independent of confounding by physical disease, or can be explained by lifestyle, immunological or hormonal regulation [347].
Respiratory diseases
In the Rotterdam Study (RS) we investigate the prevalence and incidence of respiratory diseases in middle-aged and older adults, and aim to elucidate the genetic, environmental and lifestyle risk factors for the occurrence of these diseases. Moreover, by applying systems genetics and systems biology approaches, we aim to decipher the pathogenesis and pathophysiology of respiratory diseases.
The main focus of research of the respiratory epidemiology group is on common obstructive airway diseases, encompassing asthma, ACOS (Asthma-COPD Overlap Syndrome) and Chronic Obstructive Pulmonary Disease (COPD), but respiratory infections, pneumonia, pulmonary hypertension and lung cancer are also thoroughly investigated. Lung function measurements encompassing spirometry and diffusion capacity are performed in all participants during the research centre visit of the RS using a Master Screen® PFT Pro by trained paramedical personnel according to ERS/ATS guidelines [5,354].
Lung function and Chronic Obstructive Pulmonary Disease (COPD)
In the large prospective population-based RS cohort, we have determined the prevalence and incidence of COPD in older adults according to age, sex and smoking history [355,356]. In international collaboration, we have elucidated the genetic determinants of the lung function measurements Forced Expiratory Volume in one second (FEV1), Forced Vital Capacity (FVC) and the FEV1/FVC ratio, the defining characteristic of an obstructive syndrome [357][358][359][360]. In the most recent genome-wide association study of COPD, we have discovered 22 loci of genetic susceptibility, including 9 loci which had previously been associated with lung function in the general population, and 4 new loci (EEFSEC, DSP, MTCL1 and SFTPD) [359]. Intriguingly, we highlighted that 2 loci associated with COPD (FAM13A and DSP) were shared with pulmonary fibrosis, but had opposite risk alleles. Moreover, using a systems genetics analysis approach, we have discovered molecular mechanisms underlying variations in lung function [361].
COPD, co-morbidities and frailty
COPD not only affects the lungs, but is also frequently associated with extrapulmonary manifestations and systemic consequences. Therefore, we have investigated multiple co-morbidities of COPD, encompassing cardiovascular diseases, cerebrovascular diseases (carotid artery atherosclerotic plaques, cerebral microbleeds and stroke) and osteoporosis [362][363][364][365][366][367]. Importantly, we have meticulously validated acute exacerbations of COPD in participants with COPD in the RS, and examined the impact of these exacerbations on acute cardiovascular events (e.g. atrial fibrillation, sudden cardiac death), acute cerebrovascular events (stroke), and mortality [367,368]. Moreover, we have highlighted differences in the distribution of cause-specific mortality in patients with COPD according to disease stage [363,366]. Frailty is a common geriatric syndrome, characterized by a lack of functional reserve to stressors, and defined by Fried et al. as meeting three or more of five established criteria for frailty (nutritional status, physical activity, mobility, grip strength and exhaustion). Of 2833 RS participants with sufficiently evaluated frailty criteria, 163 (5.8%) were frail, whereas the prevalence of frailty was significantly higher in subjects with COPD (10.2%) [369]. Adjusted for age, sex and co-morbidities, frail elderly had a significantly increased risk of dying within 3 years, compared to the non-frail elderly [370]. In subjects with COPD, the prevalence of frailty was highest when they suffered from severe airflow limitation, dyspnea and/or frequent exacerbations. Importantly, elderly with COPD who were frail had significantly worse survival [369]. Therefore, COPD is a key component of the chronic disease domain of the Healthy Aging Score, which has recently been developed by the RS investigators [35].
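The frailty definition used above (meeting three or more of the five Fried-type criteria) reduces to a simple count; below is a minimal sketch with hypothetical boolean deficit indicators per criterion, not the study's actual scoring code.

```python
FRAILTY_CRITERIA = ["nutritional_status", "physical_activity", "mobility",
                    "grip_strength", "exhaustion"]

def is_frail(participant):
    """Frail if at least three of the five criteria are met.

    `participant` maps each criterion name to True (deficit present) or False.
    """
    deficits = sum(bool(participant.get(c, False)) for c in FRAILTY_CRITERIA)
    return deficits >= 3

# Hypothetical participant with deficits on three criteria -> classified as frail
example = {"nutritional_status": True, "physical_activity": True,
           "mobility": True, "grip_strength": False, "exhaustion": False}
print(is_frail(example))   # True
```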
Genomics, biomarker and microbiome studies
Objectives
The team in this research line focusses on bio-banking activities of the participants of the Rotterdam Study and investigates molecular biological determinants of disease in these specimens (i.e., DNA, RNA, proteins, metabolites, microbes, etc.). Bio-banking involves collecting, storing and managing the biological tissues of participants of the Rotterdam Study at all follow-up measurements. This concerns mainly blood, urine, saliva, hair and faeces, but with the microbiome studies several other specimens are being collected (such as skin swabs, nose swabs, eye swabs, etc.). We have further stored PBMCs for the isolation of induced pluripotent stem (iPS) cells. The research focus of this group concerns the assessment of biological determinants of disease (biomarkers) in these biomaterials and the analysis of markers using genomic technologies (such as SNP arrays and next generation sequencing (NGS)). The materials and data generated by this research line now sum up to ~3 × 10^12 data points, and are actively used by all research groups of the Rotterdam Study. An overview of all the "omics" datasets in the Rotterdam Study cohorts is given in Table 1.
Major findings
Rotterdam Study investigators are playing leading roles in several of the large global consortia focused on assessing the contribution of complex disease gene variants by prospective meta-analysis across many epidemiological cohorts, such as CHARGE and ENGAGE, and in many disease/phenotype-focused efforts such as ADSP, IGAP, PERADES, GIANT, GEFOS, REPROGEN, TREATOA, DIAGRAM, etc. Since 2005 the genome-wide association study (GWAS) has changed the field of complex genetics, and identified a still growing list of thousands of common genetic variants contributing to disease risk. While this large-scale global collaboration originated in the GWAS era, similar consortia have been built around the genomics datasets with RNA expression profiles, DNA methylation profiles, and the NGS datasets on DNA, RNA and microbiomes, including the BBMRI-NL sponsored BIOS consortium and several CHARGE working groups. The Rotterdam Study has GWAS data for almost the complete dataset, summing to ~12,000 DNA samples, and is involved as a major collaborative center for meta-analysis studies of GWAS data, including national programs (BBMRI-exome chip, BBMRI-BIOS) and international consortia (see above). Especially from the CHARGE consortium, many important publications have emerged on a wide variety of phenotypes and diseases from all major research lines in the Rotterdam Study. They are discussed under the subheadings of each individual research line.
Data collection, storage and management
In the RS-III round, the collection of faeces material was initiated for intestinal microbiome analysis. For this, a collection pot is distributed at the research center visit, which is to be used at home and then returned by postal mail to Erasmus MC, where DNA is isolated and stored at -80 °C. This has been done for ~2000 samples in RS-III, and is now continuing for the whole RS study population (with the modification that participants bring their sample directly to the research center to be stored at -80 °C) following the cycles of visits to the research center, including longitudinal visits.
Serum biomarkers measured in the Rotterdam Study include total estradiol, total testosterone, sex hormone-binding globulin, dehydroepiandrosterone, dehydroepiandrosterone sulfate, androstenedione, 17-hydroxyprogesterone, cortisol, corticosterone, 11-desoxycortisol, vitamin D, thyroid stimulating hormone, free T4, interleukins, C-reactive protein, insulin-like growth factor 1, insulin, iron, ferritin, transferrin, fibrinogen, homocysteine, folic acid, riboflavine, pyridoxine, the SAM/SAH ratio, cobalamine, Lp-PLA2, Fas/Fas-L, and abeta42/40 (see Table 1).
Metabolomics
Two datasets have been created in the Rotterdam Study sub-cohorts that contain information on metabolomics in blood serum or plasma of participants.
A. As part of the COMBI-BIO consortium, we used large-scale untargeted serum metabolic profiling by proton (1H) nuclear magnetic resonance (NMR) spectroscopy and UPLC mass spectrometry to characterize the metabolic signature of 1826 individuals from RS-I-3 in relation to vascular health and cardiovascular disease.
B. High-throughput metabolomics measurements as part of the Biobanking and BioMolecular resources Research Infrastructure The Netherlands (BBMRI-NL) initiative have been performed using plasma samples which were collected in EDTA-coated tubes. Fasting samples from the RS-I (n = 2880), RS-II (n = 663), and RS-III (n = 1838) cohorts were specifically selected in order to maximize the number of samples with prospective gene expression and gut microbiome data available for research in relation to metabolomics. The plasma samples were analyzed by the biomarker platform of Nightingale Health using a proton nuclear magnetic resonance (NMR) technique. Spectra have been obtained from 600 and 500 MHz instruments, using three molecular windows, namely lipoproteins, lipids and low molecular weight compounds. The spectra were then de-convoluted by Nightingale's proprietary bioinformatics software, leading to quantification of absolute concentrations. The resulting biomarker data contain 228 measurements on apolipoproteins, lipoprotein sub-classes, amino acids, albumin, glucose, glycolysis metabolites, ketone bodies, glycoproteins, sphingolipids, phosphoglycerides, polyunsaturated fatty acids and cholesterols [375].
The Human Genomics facility (HuGe-F)
The Rotterdam Study uses the Human Genotyping Facility, HuGE-F (www.glimdna.nl), for all its genomic studies; this facility has been generating all GWAS data for the Rotterdam Study as well as its RNA expression profiles, DNA methylation profiles, and all NGS data, including whole exome sequences (WES), RNA sequencing data, and the microbiome 16S ribosomal RNA (rRNA) sequencing data.
Genome-wide association studies (GWAS) datasets
The GWAS dataset of ~12,000 DNA samples from the Rotterdam Study RS-I, -II and -III cohorts consists of (a) a small dataset of ~400 women with 500 K Affymetrix arrays (Nsp250 + Sty250; the so-called "pilot" dataset), and (b) a large dataset of ~12,000 samples consisting of 550 K (RS-I, -II; single + duo array format) and 610 K (RS-III; quattro array format) Illumina array genotypes. In the pilot dataset, other array types have also been run, such as the Illumina Omniexpress 2.5 array, the new Illumina GSA array and the Affymetrix PMRA array, allowing for comparisons. The Illumina GWAS genotype datasets of the Rotterdam Study also form the basis for generating so-called "imputed" datasets derived thereof.
In this process, the genotypes of SNPs which have been genotyped in reference datasets (such as HapMap with ~2.5 million SNPs genotyped or HRC with 40 million SNPs) are estimated for all Rotterdam Study samples, using the genotyped Illumina ~500 K SNP configurations of each subject as the basis. With the advent of large reference datasets based on whole genome/exome NGS, imputation activities using the Rotterdam Study (RS) GWAS dataset will remain an active area of development. So far, the RS GWAS datasets have been imputed to HapMap versions 2 and 3 (with ~2.5 million resulting imputed SNP genotypes obtained for the RS dataset), the 1000 Genomes (1KG) dataset versions Iv3 and IIIv5 (with ~30 and ~50 million resulting SNP genotypes, respectively), the Genome of the Netherlands (GoNL), the UK10K whole genome sequencing dataset, and, more recently, the Haplotype Reference Consortium (HRC) r1.1 dataset (~40 million SNPs). The latter imputation in particular uses up to 64,976 haplotypes as a reference and comprises 40 million SNPs, all with an estimated allele count greater than 5, which also allows the study of less frequent to rare variants.
Candidate gene SNPs and special genomic markers
About 300 SNPs in several candidate genes have been individually measured over the past 15 years (including genes such as ApoE, VDR, ESR1, fibrinogen, etc.). Additionally, for a subset of RS-I samples telomere length (n ~ 1800) and mitochondrial DNA content (n ~ 500) were measured.
Whole genome sequencing (WGS) dataset
The whole genome sequencing dataset consists of 100 samples in RS-I which were sequenced as part of the Genome of the Netherlands (GoNL) [376], with an average sequencing depth of 6x and with improved phasing because of the trio design.
Whole exome sequencing (WES) datasets
WES NGS data in RS-I are available for 2628 samples as part of the NCHA-sponsored project and were generated by the HuGe-F facility on Illumina HiSeq2000 sequencing machines. The samples for this experiment were selected to constitute a random sample from the RS-I dataset. Through a collaborative grant from the NIH Alzheimer initiative (ADSP) we have obtained an additional ~1200 samples with WES NGS data from RS-I, generated at the Broad Institute, Boston, USA, of which 50 overlap with the NCHA WES dataset (so the net total number of samples with WES data is 3778). The Rotterdam Study WES dataset is now also part of the so-called commons dataset of the CHARGE consortium with ~16,000 WES samples and 5000 WGS samples.
RNA sequencing dataset
BBMRI has sponsored a collaborative effort to create a large-scale data infrastructure for integrative omics studies in Dutch biobanks. For this, the Erasmus MC HuGe-F genomics facility has generated RNA sequencing profiles of, in total, approximately 4000 individuals from six Dutch biobanks, including the Rotterdam Study. A total of 900 RS samples were RNA-sequenced at a depth of 30 million paired-end reads. Together with colleagues at UMCG Groningen and LUMC Leiden, the dataset was quality-controlled, annotated RNA-expression profiles were generated, and relations between genetic, transcriptomic, and epigenetic measures have been analyzed (see below); the dataset is freely available for all researchers (http://www.bbmri.nl/on__offer/bios/).
Incidental findings in WES data
Based on the RS WES dataset and the exome chip dataset we have started to look for so-called incidental findings which might be clinically relevant.
This is done by determining the presence of variants in particular sets of genes, such as the list of 57 "actionable" genes established by the American College of Medical Genetics and Genomics (ACMG). This research is ongoing; we have established a working group together with Dr Chris O'Donnell, and this is done in collaboration with several groups such as the Broad Institute (Drs. Eric Minikel, Daniel MacArthur) and the University of Cologne (Prof. Hilger Ropers). A first result showed that carriers of supposedly pathogenic mutations in the prion gene did not display an evident disease phenotype [377]. WES data were also used to investigate the association between all-cause mortality and carrier status of somatic mutations in genes linked to clonal expansion of hematopoietic stem cells. We found that, unlike previous reports in predominantly middle-aged individuals, somatic mutations in genes linked to clonal expansion of hematopoietic stem cells do not compromise the 8- to 10-year survival in the oldest old [378].
Integrative genomics
Within the Rotterdam Study subcohorts, epigenetic, transcriptomic and microbiome datasets have been generated. Using these data, context-dependent expression quantitative trait loci (eQTL) were identified [379]. In addition, it was found that disease-associated genetic variants (GWAS hits) alter transcription factor levels and methylation of their binding sites, offering true biological insight into mechanisms behind the associated GWAS hits [380]. The epigenetic and transcriptomic data have increasingly been explored for associations with diseases and traits, and especially environmental factors. Unlike previous efforts in using transcriptomic datasets, this is now also done in large collaborative efforts, increasing the robustness and value of the results. Methylation signatures were identified for smoking [62,381], alcohol consumption [382], low-grade inflammation [383], liver enzymes and hepatic steatosis [384], lipids [106], body mass and the adverse outcomes of adiposity [385]. Similarly, transcriptomic profiles were identified for smoking [386], fasting glucose and insulin levels [387] and muscle strength [210]. The first epigenome-wide study was also attempted in relation to bone mineral density variation [388]. A number of studies have focused on the relationship between diverse molecular layers and (biological) aging. A large gene expression meta-analysis in 14,983 individuals identified 1497 genes that are differentially expressed with chronological age. The gene expression profiles were used to calculate the 'transcriptomic age' of an individual; differences between transcriptomic age and chronological age were associated with biological features linked to ageing [389]. In a meta-analysis of 3089 individuals in which methylation levels were used as a biomarker for "biological age", often referred to as "epigenetic age", it was shown that epigenetic age predicts all-cause mortality above and beyond chronological age and traditional risk factors [390]. Furthermore, we showed that blood RNA expression profiles undergo major changes during the seventh decade of life [391]. It was also shown to be feasible to accurately estimate human age from blood using information from different molecular layers [392].
Reproductive traits
Objective
The main objective of this program is to study the frequency and etiology of major disorders of the reproductive system and their risk factors, including age-at-menopause and fertility.
Since most analyses involve women, this program is centered around the study of women's reproductive health. The evaluation of risk factors includes serum measurements of hormones as well as genetic and genomic determinants of reproductive health and related diseases, and studies of the sex chromosomes X and Y. In addition, consequences of these conditions are studied in relation to other aging-related diseases, including cardiovascular disease and disorders of the locomotor system.
Major GWAS findings
Much of the work of this research line is made possible by large-scale collaboration in consortia, some of which focus on one particular disease or trait while others are more broad-spectrum strategic collaborations. We are part of several such large consortia studying genetic and epidemiological risk factors for reproductive traits, such as CHARGE, REPROGEN, SSCAG and PCOSGEN. Most attention so far has gone to the study of age-at-natural-menopause (ANM) and age-at-menarche in women, for which our group was the first to report the major loci for age-at-menopause [393,394]. Many of these signals were also observed for women of other ancestries [395], although the studies of other ethnicities are smaller and thus lack power. In the most recent and largest meta-analysis of GWAS of age-at-menopause so far [396], 44 loci were identified among 70,000 women, two of which harbor rare variants with large effect sizes (HELB and SLCO4A1), as discovered by exome-array-based meta-analysis. Together, the genome-wide significant variants explain ~6% of the genetic variation, which increased to 21% if we take all SNPs with P < 0.05. In Mendelian randomization studies a causal effect was established for age at natural menopause as a risk factor for breast cancer (but not prostate cancer in men), while the effect size was greater for ER-positive than ER-negative breast cancers [396,397]. Similar MR studies are now ongoing for other common diseases influenced by age-at-menopause, such as cardiovascular disease and osteoporosis. Interestingly, the majority of the loci determining age-at-natural-menopause involve genes which are important in the DNA damage response and DNA repair pathways, which points to the importance of this system in maintaining an error-free stem cell lineage which produces the oocytes. As such, the phenotype of age-at-menopause represents an interesting model for age-related changes in cell function maintenance and functions as a model to identify molecular mechanisms for damage accumulation and repair during ageing [398]. Several diseases related to infertility, such as early menopause (EM)/primary ovarian insufficiency (POI) and polycystic ovary syndrome (PCOS), are now subjected to GWAS and look-ups with ANM SNPs. In a GWAS of 3493 EM cases and 13,598 controls from 10 independent studies [399], no novel genetic variants were discovered, but the 17 variants previously associated with normal age at natural menopause as a quantitative trait were also associated with EM and primary ovarian insufficiency (POI). In a GWAS of PCOS in 5184 self-reported cases and 82,759 controls [400], 6 loci were identified in/near the genes ERBB4/HER4, YAP1, THADA, FSHB, RAD50 and KRR1. MR analyses in this study identified causal roles in PCOS aetiology for higher BMI, higher insulin resistance, later menopause, and lower serum SHBG.
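The Mendelian randomization analyses referred to above combine per-SNP effects on an exposure (for example age at natural menopause) with per-SNP effects on an outcome (for example breast cancer). A minimal sketch of the standard inverse-variance-weighted (IVW) estimator is given below with made-up summary statistics; the published analyses rely on dedicated MR software and additional sensitivity analyses.

```python
import numpy as np

def ivw_mr_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted Mendelian randomization estimate.

    Combines per-SNP Wald ratios (beta_outcome / beta_exposure), weighting each
    SNP by the precision of its outcome association.
    """
    bx = np.asarray(beta_exposure, float)
    by = np.asarray(beta_outcome, float)
    w = 1.0 / np.asarray(se_outcome, float) ** 2
    estimate = np.sum(w * bx * by) / np.sum(w * bx ** 2)
    se = np.sqrt(1.0 / np.sum(w * bx ** 2))
    return estimate, se

# Made-up summary statistics for three instrument SNPs
beta_anm = [0.20, 0.15, 0.30]       # SNP effects on age at natural menopause (years)
beta_bc = [0.030, 0.020, 0.050]     # SNP effects on breast cancer (log odds)
se_bc = [0.010, 0.012, 0.015]
print(ivw_mr_estimate(beta_anm, beta_bc, se_bc))
```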
For several endocrine biomarkers, GWAS have been performed to identify the genetic loci influencing their serum levels, i.e., testosterone [401], SHBG [402] and DHEAS [403], and these are also involved in several MR analyses in relation to major disease endpoints for which these biomarkers have been suggested to be predictive. In a collaboration with the SSCAG consortium, a recent GWAS of human fertility characteristics (defined as age at first birth (AFB) and number of children ever born (NEB)) in both sexes, including 251,151 individuals for AFB and 343,072 individuals for NEB, identified 12 loci [404]. While none of the AFB- or NEB-associated SNPs are associated with age at menopause, there was some overlap with SNPs for behavioral and reproductive phenotypes (such as educational attainment, age-at-menarche, BMI, and age at first sexual intercourse).
Methods update
Several specific biomarker assessments in ~10,000 blood/serum/plasma and urine samples have been done for the diagnosis and evaluation of risk factors of reproductive traits (e.g., steroid hormones; see under "genomics, biomarkers, and microbiome"). Current work involves analyses of X and Y chromosome mosaicisms as can be detected in genomic DNA extracted from blood, and how these mosaicisms change with ageing. In addition, DNA methylation is analyzed as well as microbiome profiles in relation to reproductive traits. The CHARGE-S WES dataset is currently being analyzed for the contribution of rare variants to ANM, while a very large meta-analysis of age-at-menopause is underway involving many more HRC-imputed GWAS datasets as well as the UK Biobank dataset of ~500,000 samples.
Pharmacoepidemiology
Objectives
Especially during the past 10 years, there has been a strong increase in the number of automated healthcare databases for pharmacoepidemiology. As most of these databases have limitations because their composition is not only healthcare-driven but may also differ between health insurance systems, they are vulnerable to potential selection and information bias. This underlines the need for prospectively gathered and standardized information on drugs and disease. In the Rotterdam Study, the role of drugs is studied as a determinant of disease in middle-aged and older community-dwelling individuals. This includes studying the efficacy and effectiveness of drugs, as well as adverse reactions to drugs. As the drugs used in the Rotterdam Study are licensed and often have been on the market for several years, research focuses on determinants which modify the safety and effectiveness of widely used drugs, because these often have a great impact on healthcare. The Rotterdam Study is a unique resource for pharmacoepidemiology because of its long follow-up since 1990, complete coverage of more than five million dispensings of prescription-only drugs via 7 automated community-based pharmacies in the region, and repeated interview data for studying drug adherence and 'over-the-counter' drugs. In combination with the very rich medical and biological information from repeated interviews and physical, laboratory, imaging data, and genetic and epigenetic determinants, this facilitates a type of pharmacoepidemiologic research which investigates biological-pharmacological mechanisms of drug response.
Major findings
Below, we summarize findings over the most recent period. Different research themes prevailed, centering around two topics, i.e. studying important drug safety problems and gene-drug interactions of established pharmacologic drug effects.
As for the first topic, an important problem for drug licensing authorities for several years has been drug-induced sudden cardiac death. In a recent analysis with data from the Rotterdam Study, we demonstrated that the incidence of sudden cardiac death declined during the period 1990-2010 [416]. Possibly, this is related to the increasing attention for the treatment of cardiovascular morbidity (secondary prevention) and of cardiovascular risk factors such as hypertension and diabetes mellitus (primary prevention). One of the well-known risk factors for sudden cardiac death is QTc-interval prolongation on electrocardiograms (ECGs). This QTc-interval prolongation is under the influence of genetic variation [417]. An important gene, ABCB1, encodes the transport protein P-glycoprotein, which is abundant in the gut and the blood-brain barrier. Users of digoxin with a certain variant of the ABCB1 gene had a higher risk of sudden cardiac death [418]. There are many drugs which are able to prolong the QTc-interval, such as serotonin reuptake inhibitors [419]. These SSRI antidepressants are considered to be safer than the traditional tricyclic antidepressants (TCAs) when treating elderly with depression, but sometimes less effective. However, SSRIs are associated with an increased risk of cerebral microbleeds [420]. On the other hand, we found that they are associated with a lower risk of myocardial infarction [421]. Although a large number of studies have been conducted aiming to identify genetic variants associated with antidepressant drug response in depression, only a few variants have been repeatedly identified [422]. Depression is the main indication for antidepressant treatment, but results from one of our studies confirmed that antidepressants are also used for off-label indications, subthreshold disorders and complex situations, which were all associated with clinically relevant depressive symptoms in the middle-aged and elderly population [423]. SSRI use was associated with better subjective sleep, after adjustment for depressive symptoms and concurrent psycholeptic drug use. This suggests that, in clinical practice in the middle-aged and elderly population, the sleep quality of some persons may benefit from continued SSRI use [424]. The stronger adverse effect of TCAs on the QTc-interval proved to be predominantly related to their more powerful anticholinergic activity. This influence on the autonomic nervous system is associated with an increased heart rate. The consequent decrease of the RR-interval mathematically leads to a prolongation of the QTc-interval according to the Bazett formula, without changing the QT-interval itself. Therefore, we demonstrated that the Fridericia correction leads to a more meaningful measure than the Bazett-corrected one when calculating the QTc-interval from ECGs [425]. We conducted race/ethnic-specific genome-wide interaction analyses of TCAs and resting RR and QT intervals in cohorts of European, African, and Hispanic/Latino (n = 13,808; n = 147 TCA users) ancestry, adjusted for clinical covariates. Among Europeans, TCA interactions with variants in BRE and UBE2E2 were identified in relation to RR intervals. Among Hispanics/Latinos, variants in TGFBR3 modified the relation between TCAs and QT intervals [426]. In contrast to what is suggested in product labelling information, concurrent use of two or more QTc-interval prolonging drugs did not further lengthen the interval to a substantial extent [427].
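To make the heart-rate correction issue discussed above concrete: Bazett divides the QT interval by the square root of the RR interval, whereas Fridericia divides it by the cube root, so a drug-induced increase in heart rate (shorter RR) inflates the Bazett-corrected QTc much more strongly. A minimal numerical sketch:

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett: QTc = QT / sqrt(RR), with QT in ms and RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia: QTc = QT / RR^(1/3), with QT in ms and RR in seconds."""
    return qt_ms / rr_s ** (1 / 3)

# Same measured QT (400 ms) at 60 bpm (RR = 1.0 s) and at 100 bpm (RR = 0.6 s)
for rr in (1.0, 0.6):
    print(rr, round(qtc_bazett(400, rr)), round(qtc_fridericia(400, rr)))
# At RR = 1.0 s both corrections leave QTc at 400 ms; at the faster heart rate
# Bazett inflates QTc (~516 ms) considerably more than Fridericia (~474 ms).
```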
However, it is clear that the association between QTc and sudden cardiac death is not one-to-one and that other risk factors are important. The role of a decreased serum level of magnesium in cardiac arrhythmias is unclear at the moment, but we demonstrated that it was associated with an increased risk of sudden cardiac death [428]. Although hypomagnesemia is uncommon in a situation of normal food intake, long-term use of proton pump inhibitors (for instance, indicated in elderly who are also chronic users of NSAIDs) can cause this electrolyte disturbance [429]. We found that users of SSRIs with a high receptor affinity had relatively high serum levels of LDL cholesterol [430]. In another analysis in the Rotterdam Study, we demonstrated that use of SSRIs was associated with a stronger weight increase [431]. Also, SSRIs decreased insulin secretion in older adults and increased the risk of insulin dependence in patients with type 2 diabetes [432]. In a methodological study we tried to find support for the hypothesis that genome-wide association studies would be able to find genetic determinants for response to SSRIs, notably the genes FSHR, HMGB4, PLCB1 and HTR2A [433]. Miscellaneous studies included, among others, a study of risk factors in the elderly for resistance to ciprofloxacin in community-acquired urinary tract infections due to E. coli. Ciprofloxacin resistance in community-acquired UTI was associated with a high intake of pork and chicken and with concomitant prescription of calcium supplements and proton pump inhibitors [434]. In another study, a nested case-control analysis was performed in which we found that participants with a bacterial gastroenteritis were more likely than controls to be current users of PPIs [435]. Furthermore, in a study in elderly from the Rotterdam Study, B-PROOF, and LASA cohorts, we were able to demonstrate that two variants in cytochrome P450 2C9 modified the fall risk of ageing benzodiazepine users [436].
Future developments
More and more, pharmacoepidemiology in the Rotterdam Study will concentrate on pharmacological-biological mechanisms of a couple of commonly used benchmark drugs with the help of genetic and epigenetic techniques, as well as proteomics and metabolomics. Several meta-analyses were performed in recent years. First, in a large international genome-wide association study of drug-gene interactions, no markers were found for the effect of antihypertensives on cardiovascular disease [437]. One class of these antihypertensives, i.e. ACE inhibitors, is associated with angioedema or coughing, which may lead to discontinuation or switching to another antihypertensive. In a second GWAS of 972 switchers from ACE inhibitors, eight SNPs within four genes reached genome-wide significance in the meta-analysis: RNA-binding protein Fox-1 homolog (Caenorhabditis elegans), gamma-aminobutyric acid receptor subunit gamma-2, sarcoma (Src) homology 2 (SH2) B adaptor protein 1 and membrane-bound O-acyltransferase domain containing 1 [438]. Third, in a large-scale GWAS of the effect of sulfonylurea hypoglycemics on QT, JT, and QRS intervals in 11 ethnically diverse cohorts that included 71,857 individuals of European, African-American and Hispanic/Latino ancestry, eight novel pharmacogenomic loci met the threshold for genome-wide significance. A pharmacokinetic variant in CYP2C9 (rs1057910) that has been associated with sulfonylurea-related treatment effects and other adverse drug reactions in previous studies was replicated [439].
Fourth, we performed a large-scale meta-analysis across the cohorts of the Metformin Genetics Consortium (MetGen). Nine candidate polymorphisms in five transporter genes (organic cation transporter [OCT]1, OCT2, multidrug and toxin extrusion transporter [MATE]1, MATE2-K, and OCTN1) were analyzed in up to 7968 individuals. None of the variants showed a significant effect on metformin response in the primary analysis, or in the exploratory secondary analyses, when patients were stratified according to possible confounding genotypes or prescribed daily dose of metformin [440]. However, the C allele of rs8192675 in the intron of SLC2A2, which encodes the facilitated glucose transporter GLUT2, was associated with a 0.17% greater metformin-induced reduction in hemoglobin A1c (HbA1c) in 10,577 participants of European ancestry. rs8192675 was the top cis expression quantitative trait locus (cis-eQTL) for SLC2A2 in 1226 human liver samples, suggesting a key role for hepatic GLUT2 in the regulation of metformin action [441]. Fifth, we performed a meta-analysis of genome-wide association studies (GWAS) to identify variants with an effect on statin-induced high-density lipoprotein cholesterol (HDL-C) changes. The 123 most promising signals were followed up in an independent group of 10,951 statin-treated individuals, providing a total sample size of 27,720 individuals. The only associations of genome-wide significance were between minor alleles at the CETP locus and a greater HDL-C response to statin treatment [442].
Imaging studies
The Population Imaging Unit within the Rotterdam Study aims to assess (quantitative) imaging biomarkers of disease in a pre-symptomatic phase at the population level [455]. Advantages of imaging measures include that they mark early disease, can be assessed reliably and reproducibly, and are quantitative rather than qualitative, which makes them more powerful than most conventional outcome measures such as clinical phenotypes. The main imaging modalities that are currently being applied in the Population Imaging Unit are multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI). The imaging infrastructure has been described extensively in the previous study design papers [6,19]. Important updates on our research since our last report are the following:
Incidental findings on imaging
We previously indicated that the assessment and management of incidental findings is of great importance in large-scale imaging studies like ours. Unfortunately, guidelines are lacking and information on the natural course is still scarce. We have tried to close these gaps by describing an ethical framework which can be used in designing studies [456,457], and we have reported the natural course and clinical management of findings in our study since 2005 [458].
Imaging of age-related brain changes and neurological diseases
An important focus in our work is on quantitative markers that signify preclinical change, preferably in the earliest state of disease. In this context, we have explored in recent years how structural connectivity in the brain changes with age [459] and also how these changes affect cognition [460]. Also, we showed that worse microstructural integrity was related to higher mortality [461]. Furthermore, we found that future stroke is predicted not only by prevalent vascular lesions (such as infarcts or white matter hyperintensities) but also by subtle alterations in the microstructure of normal-appearing white matter [254].
Inclusion of this effect in risk prediction models produced a significant advantage in stroke prediction compared with the existing Framingham Stroke Risk Profile. After the introduction of resting-state functional MRI, we have explored how (changes in) brain structure drive brain function, and found that white matter pathology can decrease tract-specific functional connectivity, both in direct and indirect connections [462]. These results provide further evidence for the so-called "connectivity hypothesis". We are currently extending this work by defining the "disconnectome" in the brain, and by studying how functional brain connectivity changes with age and affects cognitive functioning. Despite increased understanding of microbleed pathology, their clinical implications remained largely unknown. We studied microbleeds as a determinant of stroke and dementia and found that microbleeds were associated with an increased risk of recurrent and first-ever stroke, both ischemic and hemorrhagic [463]. Our results confirm that the increased risk is not confined to people with prior strokes, and can be extrapolated to people from the general population. Another finding was the correlation in anatomical location between cerebral microbleeds and intracerebral haemorrhage [463]. Finally, in longitudinal studies we found that microbleed presence was related to a decline in cognitive functioning and an increased risk of dementia, including Alzheimer's dementia [464]. Taken together, our findings suggest that cerebral microbleeds may represent an imaging marker of active vasculopathy, which serves as a predictor of both ischemic and hemorrhagic brain lesions and neurodegeneration. Imaging of atherosclerosis and cardiovascular diseases As described previously [6], we make use of both MDCT and MRI to image atherosclerotic calcifications (in multiple vessel beds), plaque burden and atherosclerotic plaque composition (in the carotid). Important new reports describe the determinants of overall plaque burden and how plaque composition relates to a history of stroke [465,466]. In recent years, we have expanded our interest in imaging markers of cardiovascular disease towards epicardial fat [467,468] and aortic valve calcification [253,469]. In a preliminary investigation, we have applied computational fluid modelling to investigate the relation between shear stress and vulnerable plaque components, and found that higher shear stress was related to intraplaque haemorrhage [470]. We are currently expanding this study to measure shear stress in over 2000 carotid MRI scans from our population. Using serial imaging, we were able to describe determinants of change in plaque components over time [471]. Future developments As also mentioned above, focus has shifted in recent years from purely structural imaging to also including functional imaging data, by incorporating resting-state functional MRI into the brain imaging protocol. Changes in the intrinsic activity of resting-state networks are presumed to represent alterations in functional brain connectivity and may mark neurodegeneration in an early, presymptomatic stage. We will further explore the value of functional imaging as an early imaging marker for dementia, by itself or in combination with other imaging markers and risk factors. Another development that has set in and will continue in the coming years is that we no longer regard the brain as a stand-alone organ, but rather view it in the context of the rest of the body and of diseases outside the brain.
In the past years, we have found abundant evidence that pathology in the brain is linked to (sub)clinical pathology elsewhere in the body [314,315,472-474], and we will explore these interconnections further. Finally, an emerging potential marker is Virchow-Robin (VR) spaces, or enlarged perivascular spaces: spaces filled with interstitial fluid that surround the blood vessels in the brain and which can become dilated. Despite increasing literature on these dilated VR spaces, a major limitation of current research is the lack of a robust and generalizable rating method on MRI. After successful implementation of a new rating method, we are currently investigating the value of VR spaces in a large consortium of other population-based studies [475] (www.uconsortium.org). Besides ever-increasing advances in imaging hardware, software and sequence design, major advances in the short and long run are to be expected from (fully) automated image analysis. Computer processing of images will make it possible to use the full information contained within the image, introducing new imaging biomarkers. Besides, the vast amount of imaging data that are acquired in population-based studies like the Rotterdam Study renders visual assessment or manual measurements virtually impossible, strengthening the need for (fully) automated methods of data extraction and analysis. Otorhinolaryngology Objectives Otolaryngological research in the Rotterdam Study focuses on the frequency, etiology and consequences of hearing loss. Age-related hearing loss is a common disorder that deprives older people of key sensory input. It leads to social withdrawal and has even been found to be independently associated with poorer cognitive functioning and incident dementia. Still, little is known about the mechanisms that are responsible for developing hearing loss and the way it affects general cognitive functions within the elderly population. Determinants of interest are genetic factors, cardiovascular disease, use of medication, endocrine diseases and neuro-epidemiological factors. Methods Hearing loss is assessed at both ears by performing pure-tone audiometry in a soundproof room. Hearing thresholds are determined with headphones at the frequencies 0.25, 0.5, 1, 2, 4 and 8 kHz. To distinguish between cochlear and middle-ear pathology, bone-conduction thresholds are also measured at the frequencies 0.5 and 4 kHz. Additionally, speech perception in noise is tested at the better ear, using a validated digit triplet test [477] with speech-shaped noise at a fixed presentation level of 65 dB SPL. The ability to understand speech in noise is a functional measure that includes both sensory and central aspects of the auditory system. In a subset of the participants, peripheral vestibular function is assessed by the head impulse test (HIT), which measures the vestibulo-ocular reflex (VOR) for a number of sudden head movements initiated by the tester [478]. Gain and delay are the main parameters that will be used to quantify vestibular function. The main goal is to analyze possible associations between cochlear and vestibular dysfunction, as both sensory organs are connected and use similar mechanisms. The general interview contains ten questions related to hearing and balance problems. In case of hearing-aid use, the participant answers five additional questions of the International Outcome Inventory for Hearing Aids (IOI-HA) [479].
In case of frequent tinnitus, ten additional questions of the Short Tinnitus Handicap Inventory (THI-S) are added [480]. Major findings As expected, we found a high prevalence of hearing loss in the population of the Rotterdam Study [481]. In the population of 65 years and older, 30% had a hearing loss of 35 dB HL or more. However, the difference in hearing between the sexes was considerably smaller than previously reported. This is probably due to changing lifestyle and environmental circumstances. A general association study including relevant determinants revealed that hearing loss was independently associated with age, education, systolic blood pressure, diabetes mellitus, BMI, smoking and alcohol consumption [482]. Remarkably, different associations were found for low- and high-frequency loss, as well as between men and women, suggesting that different mechanisms are involved in the etiology of age-related hearing loss. Furthermore, a strong and consistent relation was found between hearing loss and a decreased ability to understand speech in noise [483], which confirms the substantial impact of hearing loss on social interaction. To further analyse the possible impact of hearing on general functioning, we studied the relation of hearing loss with brain-related parameters. This study revealed that hearing loss was independently associated with a smaller brain volume [484], an association mainly driven by a smaller white matter volume throughout the brain in participants with poorer hearing. Genetic susceptibility to age-related hearing loss is another important topic that is currently being analysed in a large meta-analysis of the international CHARGE consortium. Management Emeritus principal investigators The following persons are Principal Investigator Emeritus of the Rotterdam Study:
Rheumatic Diseases – What Influence do they Really Have on the Incidence of Periprosthetic Joint Infections after Total Knee Arthroplasty? Introduction: Periprosthetic joint infections (PJI) are serious complications after total knee arthroplasty (TKA). According to the literature, predisposing diseases, operation-specific factors and postoperative factors are important in the development of PJIs. There are still divergent positions in the literature regarding the significance of individual risk factors. The aim of this study was to determine the incidence of PJI in our institution from 2008 to 2018 and to identify risk factors by comparing affected patients with a normal population. Method: Fifteen PJIs were detected during the research period. Data of 501 consecutive patients were collected retrospectively for comparison. A matched-pair analysis was performed in addition to an analysis of categorical and metric characteristics of the total population (n=516) to adjust for the unequal sizes of the two groups. Results: The incidence of PJI was 0.8% in our institution. Analysis of all patients showed significant correlations of PJI with blood transfusion, hematoma formation and postoperative urinary tract infection. The presence of renal insufficiency, nicotine consumption, and prolonged duration of surgery were identified as risk factors. After matched-pairs analysis (MPA), prolonged surgical duration, blood transfusion and preoperatively decreased haemoglobin (HB) were confirmed as independent risk factors. There was no significant difference regarding the presence of rheumatic diseases. Discussion: The determining risk factors were preoperative anaemia, blood transfusion and prolonged operation times. The presence of rheumatic disease did not appear to be a risk factor. These findings should be incorporated into surgical preparation. For infection prophylaxis, preoperative HB elevation and rapid surgical procedures with low intraoperative blood loss should be discussed. Introduction Implantation of a total knee arthroplasty (TKA) is one of the most common operations in orthopaedics. The number of TKAs is steadily increasing due to demographic changes in society [1]. The majority of patients (over 85 percent) present good to very good results after TKA [2]. However, a significant decrease in revision surgeries has not yet been observed despite improved surgical techniques and innovative implants with long prosthesis service lives. Schwartz et al. [3] even predict an increasing number of complications, with an increase in the need for second operations of 170 percent by the year 2030. Early complications include periprosthetic infections in addition to aseptic loosening, dislocation and periprosthetic fractures [4]. The rates of infection after primary TKA are inconsistent in the literature. Despite increasing prevention strategies, the incidence of infection is reported to be 1-2 percent in recent publications [5,6]. Predisposing conditions discussed in the literature include diabetes mellitus, obesity, rheumatoid arthritis, and nicotine and alcohol abuse. Patients with diabetes mellitus show worse outcomes after surgical intervention, with prolonged inpatient stays and increased mortality [7]. Existing obesity is not only a risk factor for the development of osteoarthritis, but also a risk factor for postoperative complications. Thus, obese patients tend to have poorer wound healing, wound dehiscence, prolonged secretion, and increased hematoma formation [8].
According to the literature, regular nicotine use causes peripheral vasoconstriction and associated tissue hypoxia due to activation of the sympathetic nervous system [9]. This results, among other things, in slower wound healing with an increased incidence of periprosthetic complications [10]. In their retrospective work, Crowe et al. identified nicotine abuse within one month before surgery as an independent risk factor for periprosthetic infections after primary knee arthroplasty [11]. Some authors also describe an association between the presence of renal insufficiency and an increased risk of PJI [12,13]. Regular consumption of alcohol is also associated with an increase in complications after prosthesis implantation [14,15]. Increased complication rates and infection rates have also been repeatedly described in patients with rheumatoid arthritis [16]. It is assumed that, in addition to the influence of immunomodulatory therapy, T-cell dysfunction is partly responsible [17]. The aim of the present study was to determine the frequency of periprosthetic infections in our institution over a period of 10 years. The focus was on detecting risk factors for periprosthetic infection after TKA. The supposed risk factors were compared with the results of a control group without periprosthetic infection. Supposed risk factors were: an elevated score on the physical status classification of the American Society of Anaesthesiologists (ASA), diabetes mellitus, obesity, history of malignancy, preoperative anaemia, and comorbidities such as rheumatoid arthritis or renal insufficiency. In addition, the influence of surgical time, surgeon experience, blood loss during surgery, anaesthetic procedure, and administration of foreign blood was examined. Other abnormalities such as prolonged wound secretion, electrolyte imbalances, and postoperative urinary tract infections were also considered in the study. Patients The data collection for this study was performed retrospectively by evaluating patient data from our department. For the research, the respective patient files as well as digital documents in the information management system "Orbis" were accessed. All existing documents such as physician's reports, admission notes, operation reports, anaesthesia protocols, laboratory findings, microbiological findings, and medical course documentation were considered. The period under investigation was 2008 to 2018. In the years mentioned, a total of 15 patients with periprosthetic infection after implantation of a primary knee arthroplasty were registered in our department. These patients constitute the "infection-population" (IP) in the present study. The date of the primary implantation of the first included patient of this group was in May 2008 and that of the last in April 2018. The control group consists of 501 patients who also received a primary TKA in the same department and did not develop a periprosthetic infection ("normal-population"; NP). The surgeries of this population were performed between January 2016 and December 2018, with consecutive data collection backwards from 2018. Statistics The data collection was performed with Microsoft Excel; Stata/IC 16.1 for Unix was used for statistical analysis. For descriptive analysis, patient data were categorized into categorical characteristics (such as prior surgery) and metric characteristics (such as patient age). Fisher's exact test was used to test whether group membership and the corresponding characteristic were independent.
Whether the groups differed with respect to the distribution of a characteristic was tested with the Mann-Whitney U test. To eliminate the influence of the large difference in the number of cases in the two groups (infection group and normal group) and to verify the significance of the results, an additional matched-pairs analysis (MPA) was performed. Matching criteria were gender, presence of diabetes, body mass index (BMI) (<40/≥40), ASA (≤2/>2), and age (± 2 years). The comparison of the groups was performed with McNemar's exact test for the categorical characteristics and the Wilcoxon signed-rank test for connected samples. All statistical tests were performed at a significance level of 0.05. Clinical examination and questioning For the evaluation, general data such as patient name, age, and gender as well as previous operations on the affected knee joint were recorded. Furthermore, patient-related data were collected. This included height, weight, body mass index, and alcohol and nicotine abuse. Comorbidities such as malignant diseases, relevant cardiovascular diseases, diabetes mellitus, and diseases of the respiratory tract, liver and kidney were recorded. Special attention was paid to rheumatic diseases or other inflammatory systemic diseases and associated medication with immunomodulatory drugs or anticoagulants. The surgery-dependent factors included the indication-related diagnosis, whereby a differentiation was made between primary and secondary arthrosis and inflammatory diseases. In addition, the duration of the operation, the operated side, and whether the operation was performed by an experienced surgeon or a surgeon in training were recorded. In addition, the type of anaesthesia and antibiotic prophylaxis as well as the ASA classification were documented. Results The NP included more female than male patients (69.7% versus 30.3%). In contrast, an almost equal gender distribution was observed in the IP. This difference between the groups turned out to be statistically insignificant. Pre-existing Conditions Regarding pre-existing diseases or conditions, only the presence of renal insufficiency (p=0.021) and nicotine abuse (p=0.047) showed significant differences in the group comparison. Renal insufficiency showed the largest percentage difference of all surveyed pre-existing conditions. In the infection group, 27 percent (4 of 15) reported having renal insufficiency. In comparison, only 7 percent (35 of 501) of patients in the normal group reported renal insufficiency. Patients were considered smokers if they were current smokers or reported having smoked regularly for at least one year in the past. The types of rheumatic disease included in the study were rheumatoid arthritis, spondylarthritis, and polymyalgia rheumatica. In the infection group, one in five showed a rheumatic disease, in the comparison group only one in ten. However, the difference between the two groups regarding the presence of a rheumatic disease was not found to be significant (p=0.170). There were also no significant differences between the two groups regarding previous operations on the affected knee joint (p=1.000), the presence of diabetes mellitus (p=0.158), and other previous diseases (cardiovascular, respiratory diseases, etc.) (Table 1).
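To make the statistical procedure concrete, a minimal sketch in Python is given below (the study itself used Stata/IC 16.1; scipy and statsmodels provide standard implementations of the same tests). The 2x2 table reuses the renal-insufficiency counts reported above (4 of 15 versus 35 of 501); all other values are hypothetical placeholders, not study data.

# Illustrative sketch only: apart from the renal-insufficiency counts, all numbers are hypothetical.
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# Unmatched comparison of a categorical characteristic (Fisher's exact test):
# rows = infection-population (IP) / normal-population (NP), columns = characteristic present / absent.
renal = [[4, 11],      # IP:  4 with renal insufficiency, 11 without
         [35, 466]]    # NP: 35 with renal insufficiency, 466 without
odds_ratio, p_fisher = stats.fisher_exact(renal)

# Unmatched comparison of a metric characteristic (Mann-Whitney U test), e.g. duration of surgery in minutes:
surgery_ip = [68, 75, 88, 92, 95, 101, 110, 114]   # hypothetical values
surgery_np = [43, 55, 60, 65, 70, 72, 80, 115]     # hypothetical values
u_stat, p_mwu = stats.mannwhitneyu(surgery_ip, surgery_np, alternative="two-sided")

# After 1:1 matching (gender, diabetes, BMI, ASA, age +/- 2 years):
# categorical characteristics -> McNemar's exact test on the paired table of concordant/discordant pairs,
# metric characteristics      -> Wilcoxon signed-rank test on the matched pairs.
paired = [[2, 6],                                  # hypothetical pair counts
          [1, 6]]
p_mcnemar = mcnemar(paired, exact=True).pvalue
hb_ip = [10.9, 11.2, 12.0, 11.5, 10.8]             # hypothetical preoperative HB values (g/dl)
hb_np = [13.1, 12.8, 13.5, 12.9, 13.0]
w_stat, p_wilcoxon = stats.wilcoxon(hb_ip, hb_np)

alpha = 0.05   # significance level used for all tests
print(p_fisher, p_mwu, p_mcnemar, p_wilcoxon, alpha)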
In the MPA, no significant difference could be found for any of the above-mentioned conditions (renal disease p=0.083; rheumatic disease p=0.655; nicotine consumption p=0.083), so that the preoperative patient findings in our collective are considered negligible regarding the incidence of periprosthetic infection (Figure 1). Duration of surgery A statistically highly significant difference (p<0.001) was found for the duration of surgery, which was significantly longer in the infection-population (mean 91.9 minutes) than in the normal-population (mean 70.3 minutes). The IP showed a minimum of 68 and a maximum of 114 minutes. In the NP, the minimum surgery duration was 43 minutes, while the maximum surgery duration was 115 minutes. The duration of surgery also showed the highest significance of all factors examined in the MPA (p=0.037) (Figure 2). Postoperative conditions The groups differed most regarding the postoperative course. There was a highly significant correlation of periprosthetic infections with receipt of blood transfusions (p<0.001), hematoma formation (p=0.001), and proven urinary tract infection (p=0.008). In the IP, 40 percent of patients received a blood transfusion. Comparatively, in the NP, transfusions occurred at a strikingly low rate of 3.6 percent. Thus, in the infection-population, the rate of transfusions was more than ten times higher than in the normal-population (40% versus 3.6%). In the MPA, the transfusion of blood also showed a significant difference between the groups (p=0.046). Related to this, the preoperative HB value also showed a significant difference between the two groups in the MPA (p=0.049). The formation of hematomas of significant size and other wound healing disorders was documented in seven cases. Three of these cases were in the IP and four cases in the NP. Discussion The presence of rheumatologic disease has been described by some authors as a possible risk factor for periprosthetic infection [12,16,18], with continued "disease-modifying anti-rheumatic drug" (DMARD) therapy in particular being discussed as a crucial risk factor [18]. In addition, there is competing literature that does not assign an increased risk for PJI to rheumatoid arthritis patients without immunosuppressive therapy [5,19]. In our population, we did not find a statistically significant association between the presence of rheumatic disease and an accumulation of PJI. In relative terms, the infection-population included twice as many patients with rheumatic disease as the normal-population, but without reaching statistical significance. Regarding therapy with DMARDs and biologics, the recommendations for perioperative procedures of the German Society of Rheumatology were strictly applied in our population [20]. This might have been one reason why no significant association could be found. All other preoperative conditions such as nicotine abuse or the presence of renal insufficiency also failed to reach statistical significance in the MPA, so that we could not confirm these conditions as risk factors in our population. On the other hand, a prolonged operation time turned out to be an independent risk factor with sufficient significance in the MPA in our collective. Wang et al. [21] were also able to identify prolonged surgery time as a risk factor in their study, showing that a 20-minute increase in surgery time resulted in a 25% increase in the probability of infection.
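As a rough worked illustration of that figure (assuming, purely for illustration, that the 25% increase compounds multiplicatively per 20-minute increment, which is a modelling assumption and not a result reported by Wang et al.):

\[ \mathrm{RR}(\Delta t) \approx 1.25^{\Delta t / 20\,\mathrm{min}}, \qquad \mathrm{RR}(21.6\,\mathrm{min}) \approx 1.27, \qquad \mathrm{RR}(40\,\mathrm{min}) \approx 1.25^{2} \approx 1.56, \]

where Δt is the additional operating time and RR the relative infection probability; the 21.6 minutes is simply the difference between the mean durations of the two groups reported above (91.9 versus 70.3 minutes).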
In our study, the administration of foreign blood also resulted in significant differences between the two groups, both in the overall analysis and after pairing. In the literature, studies have also demonstrated an increased incidence of infections after blood transfusion [22,23]. Pulido et al. [6] calculated a 2.1-fold increased risk for PJI following the transfusion of blood. In summary, it can be stated for our collective that both a low preoperative HB value, as described by Müller et al. [24], and the necessity of blood transfusion represent independent risk factors for a PJI, so that securing a sufficiently high preoperative HB value should be given special importance in the surgical preparations. Furthermore, a high intraoperative blood loss can be avoided by a rapid and blood-saving surgical procedure, if necessary with the use of antifibrinolytic drugs. While Pulido et al. [6] identified urinary tract infection as an independent risk factor for PJIs, Koulouvaris et al. did not show an increased rate of deep wound infections associated with urinary tract infections [25]. In our collective, postoperative urinary tract infection increased the risk of PJI significantly. However, after matching, statistical significance could not be confirmed, so we cannot consider the presence of a urinary tract infection as an independent risk factor in our study. According to the literature, the formation of a hematoma is also associated with an increased rate of wound infections [26], although sufficient significance was not achieved in some studies [6,27]. In our population, this observation can be confirmed, as here, too, the association no longer reached the defined significance level after the confounding effects were attenuated. The main limiting factor of our study might be the retrospective study design, so that a causal association between the risk factors and the infection rate cannot be proven. Some confounders could be attenuated in the study using a matched-pair analysis; nevertheless, other confounders may have influenced the results. Conclusion In our sample, the most important risk factors for the incidence of periprosthetic infection were preoperative anaemia with HB values < 11.5 g/dl, the application of blood transfusions, and prolonged surgery times. These are all factors that can be considered during surgical preparation and argue for a possible need to raise the preoperative HB level. Furthermore, surgery should be as short as possible and thus probably less bloody. In our collective, there was no evidence for an increased risk in the presence of rheumatic disease under strict adherence to the recommendations of the German Society of Rheumatology on the perioperative management of DMARD and biologicals therapy in inflammatory rheumatic diseases.
From causes of aging to death from COVID-19 COVID-19 is not deadly early in life, but mortality increases exponentially with age, which is the strongest predictor of mortality. Mortality is higher in men than in women, because men age faster, and it is especially high in patients with age-related diseases, such as diabetes and hypertension, because these diseases are manifestations of aging and a measure of biological age. At its deepest level, aging (a program-like continuation of developmental growth) is driven by inappropriately high cellular functioning. The hyperfunction theory of quasi-programmed aging explains why COVID-19 vulnerability (lethality) is an age-dependent syndrome, linking it to other age-related diseases. It also explains inflammaging and immunosenescence, hyperinflammation, hyperthrombosis, and cytokine storms, all of which are associated with COVID-19 vulnerability. Anti-aging interventions, such as rapamycin, may slow aging and age-related diseases, potentially decreasing COVID-19 vulnerability. Age-related diseases Humans and other animals (including the worm [30] and the fly [31]) do not die from aging itself but from age-related diseases such as ischemic heart disease (IHD), hypertension, diabetes, cancer, Alzheimer's and Parkinson's diseases, age-related macular degeneration, osteoporosis and sarcopenia (as we will discuss, even seemingly non-deadly diseases such as osteoporosis can lead to deadly complications). The incidence of these diseases increases exponentially with age. Some diseases, such as obesity, hypertension and diabetes, develop earlier in the course of aging. Other diseases, such as Alzheimer's disease and macular degeneration, are usually diagnosed later [32,33]. Age-related diseases may also occur in younger people with a genetic predisposition and environmental exposure hazards. But even without these factors, diseases develop because they are quasi-programmed (see the "Quasi-programmed aging" section). These diseases are not diseases of civilization, as it may seem. Humans simply now live long enough to develop them. Of course, "hazards of civilization" can accelerate them at a younger age. Aging and its diseases cannot be separated. Healthy aging, or aging without diseases, is merely slow aging, in which biological age is less than chronological age. During a period of seemingly healthy aging, pre-pre-diseases and pre-diseases are progressing until they eventually reach clinical manifestation. Thus, healthy aging progresses to unhealthy aging, and pre-diseases become diseases [34]. Age-related diseases and COVID-19 vulnerability are highly intertwined. Patients who die from COVID-19 would otherwise die from age-related diseases such as heart disease, cancer, diabetes or hypertension, just a year later. COVID-19 approximately doubles a patient's aging-dependent risk of dying during one year. For example (the numbers are very approximate), a sixty-year-old woman has a 1% chance of dying from aging before her 61st birthday. At that age, if infected, the death rate from COVID-19 is around 1% for females. If infected, a patient therefore has approximately double the chance of dying within one year compared with the usual age-related mortality. As David Spiegelhalter put it: "getting COVID-19 is like packing a year's worth of risk into a week or two" (https://medium.com/wintoncentre/how-much-normalrisk-does-covid-represent-4539118e1196). Children and young adults have a very low risk of death from aging-related diseases, so that risk remains extremely low even when doubled.
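To make the "doubling" arithmetic explicit, a minimal sketch follows (the Gompertz-type form and the value assumed for a twenty-year-old are illustrative assumptions, not estimates from the studies cited):

\[ q(a) \approx q_0\, e^{k a}, \qquad q_{\mathrm{infected}}(a) \approx q(a) + \mathrm{IFR}(a) \approx 2\, q(a) \quad \text{when } \mathrm{IFR}(a) \approx q(a), \]

where q(a) is the usual one-year mortality at age a and IFR(a) the infection fatality rate. For the sixty-year-old woman in the example, q ≈ 1% and IFR ≈ 1%, so infection roughly doubles her one-year risk to about 2%; for a twenty-year-old with a one-year risk on the order of 0.05% (an assumed value), doubling still leaves the absolute risk below 0.1%, which is the point made above about children and young adults.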
Although natural mortality is relatively high in the youngest age group, especially in infants, they do not, of course, die from age-related diseases. Instead, infants are vulnerable to bacterial infections and candida infections due to an underdeveloped immune system [35]. Low COVID-19 mortality in the pediatric age group [11] is consistent with the notion that COVID-19 vulnerability is not due to a "weak" immune system. In contrast, as we will discuss in the next section, it is the hyper-functional immune response that leads to death from COVID-19 in the elderly by causing a cytokine storm. Of course, an age-related hyperfunctional response, such as a cytokine storm, is not caused by lifelong accumulation of molecular damage. Aging is not caused by molecular damage after all. Instead, it is a continuation of developmental/growth programs that lead to hyperfunctions and in turn eventually to dysfunctions. Hyperfunction theory of quasi-programmed aging "Quasi" means "resembling" or "seemingly, but not really." A quasi-program of aging is not a program but a continuation of developmental programs that were not switched off upon their completion [24,50]. They purposelessly unfold, leading to age-related diseases, secondary organ failure and death. Quasi-programmed (program-like) aging is associated with higher than optimal cellular and systemic functions, which eventually, via cellular exhaustion and organ damage, lead to functional decline (Figure 2). For example, starting from birth, blood pressure increases and continues to increase after organismal growth is completed. Therefore, hypertension is the most prevalent age-related disease. In turn, hypertension can cause organ damage: stroke, infarction and renal failure. Similarly, obesity develops in post-development as a continuation of growth (yet it can be prevented by low-calorie diets, illustrating that the quasi-program of aging can be decelerated). Hyperfunction is an excessive normal cellular function: contraction by smooth muscle cells (SMC), adhesion and aggregation by blood platelets, insulin secretion by beta-cells, lipid accumulation by adipocytes, secretion by stromal and immune cells, oxidative burst by leukocytes, just to name a few. When higher than optimal, they cause vasoconstriction and hypertension, thrombosis, hyperinsulinemia, hypertrophy, hyperplasia, obesity, a hyper-secretory phenotype or senescence-associated secretory phenotype (SASP), hyper-inflammation and so on. Hyper-function is not necessarily an absolutely increased function. It may also be an insufficiently decreased function (relative hyperfunction). Levels of IGF-1 and growth hormone decrease during the lifespan. Despite this decrease, IGF-1 levels are still higher than optimal (relative hyper-function), because a further decrease in IGF-1 levels by genetic means extends health span and lifespan in mammals [51-53]. Cellular hyperfunctions may eventually switch to cellular exhaustion and loss of functions at late stages. During the course of type II diabetes, mTOR overactivation and hyperinsulinemia eventually lead to beta-cell exhaustion and insulin insufficiency, from prediabetes to diabetes [54,55]. As another example, after puberty, hyperstimulation of the ovary eventually leads to oocyte exhaustion and menopause (see Figure 3 in ref. [29]). Depletion of naïve lymphocytes is another example, as reviewed later here. Age-related alterations are mostly noticed when they switch to functional decline, which is a late event.
In some cases, functional decline can be primary and programmed. For example, thymus involution (replacement of T cells by adipocytes) starts early in life, accelerates at puberty and continues later. Still, the loss of thymocytes and their niches may be in part due to adipocyte hyperplasia and hypertrophy [56]. In fact, obesity accelerates involution, whereas calorie restriction decelerates it [57,58]. Furthermore, the ablation of sex hormones decelerates or even reverses thymus involution [59]. Thus, involution is triggered by adipocyte hyperplasia and increased production of sex hormones during puberty [56]. Quasi-programmed aging is not driven by molecular damage. It is driven by nutrient/hormone/cytokine-sensing and growth-promoting signaling pathways such as Target of Rapamycin (TOR; mTOR), which are involved in developmental growth and later cause hyperfunctional aging and its diseases [24,26]. COVID-19 vulnerability as an age-related syndrome What is the cause-effect relationship between age-related diseases and COVID-19 lethality? Do patients die from age-related diseases, complicated by COVID-19? Or, in contrast, do these various diseases make COVID-19 infection lethal? Both scenarios take place to some extent. However, the relationship is mostly indirect. Both age-related diseases and COVID-19 vulnerability result from the same underlying cause (Figure 3). This is why they are highly correlated. The cause is aging itself. Aging is manifested by a sum of deadly - and not so deadly - diseases and conditions ranging from cancer to grey hair. Although not all diseases seem to be deadly, they can cause complications such as stroke, ventricular fibrillation, renal failure and lung edema. Even sarcopenia and osteoporosis lead to falls and broken bones, culminating in a deadly sequence of events. Cosmetic manifestations such as aging spots and wrinkles, while not deadly by themselves, can be manifestations of other diseases. For example, baldness correlates with prostate enlargement [60], and the latter can lead to urinary obstruction and renal failure. Diseases occur together. For example, chronic obstructive pulmonary disease (COPD) is associated with diabetes, cardiovascular disease and hypertension [61]. If a person has one disease (e.g., diabetes), this patient has higher chances of having other diseases (e.g., hypertension, IHD, cancer) or conditions, including COVID-19 vulnerability, which is revealed only during infection but can be predicted by pre-existing diseases. Aging is initially driven by an increase in cellular and systemic functions (hyperfunction), leading to age-related conditions. For example, hypertension is a systemic hyperfunction due to hyperfunction of multiple cell types such as arterial smooth muscle cells (aSMC). Similarly, COVID-19 vulnerability is associated with hyperfunction of inflammatory cells that, in response to COVID-19 infection, causes a cytokine storm, hypercoagulation and damage to the lung and distant organs. The COVID-19 vulnerability syndrome is an aging-related disease, strictly dependent on biological age, associated with other age-related diseases, and exemplified by a hyper-functional response to infection. Inflamm-aging and immunosenescence With hundreds of cell types acting in concert, the immune system is so complex that we cannot discuss age-related alterations without oversimplification. The most noticeable alteration is that memory T and B cells replace naive T and B cells [62].
(This seems natural, since life-long exposure to pathogens replaces naïve cells by memory cells). Replacement of naïve immune cells decreases adaptive responses to novel antigens such as SARS-CoV-2. In contrast, immune protection by memory T cells from viral re-infection with known pathogens usually increases with age [62]. Immune responses are roughly divided into (a) innate responses, carried mostly by neutrophils, macrophages and NK cells, which react to pathogens rapidly and nonspecifically, and (b) adaptive responses, carried by T and B lymphocytes, which are delayed, slower and specific (e.g., antigen-specific clonal expansion of T and B lymphocytes and antibody production by B lymphocytes) [63-65]. In the elderly, immune responses to SARS-CoV-1/2 are "stuck in innate immunity," with insufficient progression to adaptive immunity [37]. However, decline in the adaptive response, such as antibody production, plays little role in COVID-19 mortality. It is hyper-functional innate immunity, hyper-inflammation, cytokine storm and hyper-coagulation that lead to organ failure and death. In agreement, a hyper-inflammatory response rather than high virus numbers leads to death of SARS-CoV-infected old nonhuman primates [66]. Aging is associated with diseases of immune hyperfunction such as autoimmune disorders, with a paradoxical increase in certain signaling pathways and cytokine levels [67-69]. In the elderly, innate immune cells are in a state of sustained activation, producing pro-inflammatory cytokines [67,70-72]. Increased pro-inflammatory activity by the innate immune system, especially by monocytes/macrophages, is a state of alertness and hyper-reactivity at the cost of potential age-related inflammatory diseases [67,70-72]. Whereas some functions are decreased, others are increased. According to the inflamm-aging concept, the innate immune system overtakes the adaptive immune system in aging. Cause-effect relationships are bi-directional: immunosenescence (namely, a decrease in the adaptive response) is both a cause and a consequence of inflamm-aging [67,70-72]. We can consider inflamm-aging as an example of hyper-function. While some functions are decreased, others are increased. Hyper-function is damaging. (In analogy, increased electric power, without an adaptor, would damage a laptop). Damaging hyper-functions can lead to loss of function and cellular exhaustion. And vice versa, loss of function may cause compensatory hyper-functions of other components. Geroconversion is a continuation of cellular growth [73,74]. Similarly, aging is a continuation of developmental growth (see Figure 1 in ref. [89]). When the developmental program is completed, it becomes a quasi-program of aging. As discussed in detail, chronically activated nutrient-sensing and growth-promoting pathways drive age-related diseases, culminating in organismal death [24,26]. Age-related diseases are quasi-programmed. Aging is a common cause of age-related diseases, a sum of all age-related diseases. They are diseases of hyper-function, secondary hypo-function and compensatory reactions [25]; they are deadly manifestations of aging. From activation of cellular functions to systemic hyperfunctions, from diseases to organ damage and death, the hyperfunction theory of quasi-programmed aging describes the sequence of events [26]. And as discussed in 2006, suppression of aging by gero-suppressants, such as rapamycin, will prevent and treat all age-related diseases [24].
This point of view is becoming widely accepted and, in the recent literature, the quasi-programmed model of diseases (2006) is called the "geroscience hypothesis" [2,90]. Figuratively, rapamycin rejuvenates immunity [91]. If aging were functional decline due to accumulation of molecular damage, then it would be nearly impossible to restore functions and rejuvenate the immune system. In contrast, if functional decline is secondary to hyperfunctions (see Figure 2 in ref. [89]), these hyperfunctions can be suppressed pharmacologically to restore lost functions. Typical drugs are inhibitors of their targets, rather than activators, so they decrease the functions of their targets. By decreasing hyper-functions, which otherwise lead to secondary loss of functions, rapamycin may restore "lost" functions (Figure 4). Differentiation is an increase of tissue-specific cellular functions. Terminally differentiated B, T, and NK cells can rapidly react to already known pathogens [102]. The decrease in naïve T and B lymphocytes (and thus the diminished response to novel antigens) results in part from cellular hyper-differentiation in the immune system [64,103]. Hyper-functional differentiation can be counteracted by rapamycin [98]. As another example, age-related exhaustion of stem cells is partially due to loss of quiescence caused by growth over-stimulation [92,104-106]. In general, senescent cells are characterized by a hyper-proliferative drive coupled with cell cycle arrest [77]. In young mice, mTOR hyperactivation causes senescence of hematopoietic stem cells (HSC) and decreases lymphopoiesis [92]. In old mice, rapamycin rejuvenates hematopoiesis and improves vaccination against influenza virus [92]. Third, production of lymphoid cells may be decreased because of disruption of hypoxic niches due to adipocyte hyperplasia [56,107]. Hypoxic niches can preserve HSC [108,109], probably because hypoxia inhibits mTOR and cellular senescence [110]. In agreement, rapamycin preserves HSCs [92,98,111,112], reduces the proportion of memory cells and maintains a pool of naïve T cells [92,98]. Fourth, growth factor (GF)- and insulin-resistance is a loss of function because cells cannot respond to GF/insulin. But it may be caused by over-activated mTOR, which via the S6K/IRS feedback loop blocks insulin and GF signaling. Rapamycin abrogates the loop, restoring signaling [113-118]. Anti-aging medicine A high prevalence of age-related diseases, often called "diseases of civilization," is a success story of modern medicine. In the past, most people did not live long enough to develop age-related diseases, and those who developed them died soon after. Due to medical advances, people survive to 85 on average, despite suffering from age-related diseases. Standard medicine preferentially extends life span, without necessarily affecting health span (see Figure 3 in ref. [119]). For example, defibrillation and coronary stenting can save a life but not cure heart disease. It is anti-aging interventions that extend health span, delaying diseases and thus extending lifespan. Aging is a common cause of all age-related diseases. By suppressing aging, anti-aging interventions may delay all age-related diseases [119]. As a well-known example, low-calorie diets such as calorie restriction, intermittent fasting, and low-carbohydrate diets extend both health span and lifespan. Figuratively, low-calorie diets prolong life by improving health.
Nutrients and obesity activate growth-promoting pathways (e.g., mTOR), thus accelerating the development of quasi-programmed (age-related) diseases. Obesity is associated with all age-related diseases, from cancer to Alzheimer's and from diabetes to sarcopenia. COVID-19 vulnerability is also associated with obesity [9,19,20,22]. According to the hyperfunction theory, obesity accelerates aging and all age-related conditions, including COVID-19 vulnerability. Diabetes is one of the main risk factors of death in COVID-19 [5,6,12,13,15,21]. Can type 2 diabetes, an age-related disease, be reversed? In remarkable studies, it was shown that a brief course (6-8 weeks) of very low calorie diets (VLCDs) can reverse type II diabetes. In one study, VLCD reversed diabetes in 46% of patients with up to a 6-year history of diabetes [120]. VLCD is most effective for its prevention and at early stages of diabetes [121]. This anti-aging modality is so simple that remission can be achieved at home by health-motivated individuals [122]. Simultaneously, it treats other age-related diseases such as hypertension [123]. Obesity is associated with other diseases of hyperfunction, from diabetes and sarcopenia to cancer and Alzheimer's disease. Since age-related diseases are predictors of COVID-19 mortality, VLCD in theory may decrease COVID-19 vulnerability. Rapamycin and everolimus as anti-aging drugs In the soil of Easter Island, a bacterium produces the anti-fungal antibiotic rapamycin to suppress yeast growth but, as a by-product, it also suppresses yeast aging (quasi-programmed aging is a continuation of growth). Approved for human use in 1999, rapamycin (sirolimus) and its close analog everolimus are widely used in several conditions including cancer and organ transplantation. Hundreds of clinical trials (and twenty years of clinical practice) have demonstrated their safety and good tolerability, especially in healthy older adults [119]. Currently, several anti-aging clinics prescribe rapamycin off label to prevent age-related diseases and slow aging. Hundreds of recent reviews discuss rapamycin and everolimus in detail, so I will just emphasize a few points: 1. A crucial prediction of the hyper-function theory of quasi-programmed aging in 2006 was that rapamycin will slow aging, extend healthspan and lifespan and decrease all age-related diseases [124]. It has been confirmed: it extends lifespan in animals from worm to mammals. In some strains of short-lived mutant mice, it extends life span two-fold [98,125]. 3. mTOR is a potential therapeutic target in chronic obstructive pulmonary disease (COPD) [126,127]. Rapamycin (sirolimus) is already approved and successfully used in lymphangioleiomyomatosis (LAM), a progressive, cystic lung disease associated with inappropriate activation of mTOR [128]. Long-term daily use of rapamycin improves lung function without causing serious side effects (and of course not even minor side effects in the lung, given that rapamycin improves lung function) [128]. 4. Despite widespread misunderstanding, rapamycin and everolimus do not cause diabetes. In contrast, they prevent diabetic complications in animals with diabetes (see [129] for references). In rodents, under some conditions, they may cause symptoms of starvation pseudo-diabetes similar to prolonged fasting and the ketogenic diet [129].
Although the Johnson study found a slight but significant correlation between Medicare billing for insulin and the use of rapamycin in renal transplant patients, this correlation was mechanistically explained by the interaction of rapamycin with two other drugs used in the same patients [130,131]. In cancer patients, everolimus may cause hyperglycemia as a mild, infrequent and reversible side effect after several weeks of daily high doses of everolimus and rapamycin [132]. Mechanistically, everolimus decreases insulin production; it does not cause insulin resistance [132]. If anything, everolimus and rapamycin can be considered to treat complications of type II diabetes and prevent hyperinsulinemia and obesity ([129] and references within). What actually contributes to type 2 diabetes is an excess of nutrients (and especially carbohydrates), which activate mTOR and cause hyperinsulinemia and insulin resistance. Potential applications of rapamycin/everolimus to COVID-19 As soon as the COVID-19 epidemic started, it became clear that COVID-19 vulnerability is an aging-dependent condition, and the use of rapamycin (sirolimus) was immediately suggested by independent researchers [1,3,133-137]. These proposals were based on a mixture of several rationales, which need to be clearly distinguished. In theory, there are at least three independent applications of rapamycin and everolimus for COVID-19. Currently, they are all still hypothetical. 1. Anti-aging effect (Figure 5). By decreasing biological age and preventing age-related diseases, long-term rapamycin therapy may in theory decrease the COVID-19 mortality rate in the elderly. The anti-aging application is especially important because it is beneficial regardless of COVID-19. After all, the mortality rate from aging and its diseases is 100%, causing more than 2 million deaths in the USA annually. Continuous use of rapamycin is expected to improve health, decrease age-related diseases and extend healthy lifespan, rendering individuals less vulnerable when infected with the virus. Figure 5 (caption): COVID-19 vulnerability (log scale) increases exponentially with age (blue line). The line ends at age 120, the maximum recorded age for humans. In theory, a continuous rapamycin treatment would slow down the increase of vulnerability with age (red line). The increase is still exponential but with a shallower slope, because rapamycin slows the aging process. The maximum lifespan, in the absence of COVID-19, is extended because the 100% natural death threshold is reached later. Disclaimer This review is intended for a professional audience, to stimulate new ideas and to aid the global efforts to develop effective treatments for COVID-19 disease. This article does not represent medical advice or recommendations to patients. The media should exercise caution and seek expert medical advice for interpretation when referring to this article. CONFLICTS OF INTEREST The author declares no conflicts of interest.
A Review of the Urban Development and Transport Impacts on Public Health with Particular Reference to Australia: Trans-Disciplinary Research Teams and Some Research Gaps Urbanization and transport have a direct effect on public health. A transdisciplinary approach is proposed and illustrated to tackle the general problem of these environmental stressors and public health. Processes driving urban development and environmental stressors are identified. The urbanization, transport and public health literature is reviewed, and environmental stressors are classified by their impacts, the groups affected, the geographical scale and potential interventions. Climate change and health impacts are identified as a research theme. From an Australian perspective, further areas for research are identified. Introduction Health can be viewed as a central criterion for judging human sustainability [1]. A complete understanding of this dimension of human health requires knowledge about the effects of global economic and climate change on ecosystem sustainability and on human health; on the effects of pollutants within human communities; on the interaction between environment, development, and human health; and on the management of solutions to these challenges across local, regional, and global scales. Against this perspective on health and sustainability, the scope of this review is restricted to urban development and transport as potential environmental stressors. Within this narrowed scope it is therefore economic development (more specifically, urbanization and transport infrastructure development) that can impair public health if environmental and social considerations are neglected; however, we do consider global climate change as a driving factor in shaping future environmental stressors in the city. Cities are significant "places" to analyze environmental stressors and public health problems. The early 21st century was marked by the extraordinary fact that, for the first time in history, more people on the planet live in urbanized areas than in rural ones. The geographical focus of this article is on urban settlements. In the developing world, the hinterlands around the urbanized areas have their own distinctive problems, associated partly with transport and communication: health problems are tied to poverty and to isolation and lack of access (see, for example, [2] for the specific case of Laos). These problems differ substantially from the health issues in the rapidly expanding cities of the developing world. Within any one human settlement, and especially in the larger metropolitan regions, environmental stressors clearly have both location and time dimensions (duration by time of day, seasonality, trends over time), and it is precisely these aspects that we conceptualize in Section 3 and for which we indicate research gaps, including the difficult empirical analyses needed to account for a person's life-time exposure to environmental stressors (Section 4). As humans alter the character of the natural landscape of any region in the urbanization process (the driving forces of change), they directly influence the magnitude of environmental stressors such as the impact on regional air quality, energy consumption, and local, regional and global scale climates. Urban re-development results in changes in land uses, with their associated, more intensive, economic and social activities, and these have an additional impact on environmental stressors. These various aspects of the urbanization process impact on human health.
It is the complexity of these emerging public health problems that presents a major new challenge for sustainable development, well-being and the quality of life in cities. Integrated solutions will require health care professionals, epidemiologists, engineers, environmental scientists, urban planners, designers and managers, policy specialists, economists and social scientists to find new ways to work. The trans-disciplinary approach offers a promising organizational framework. Therefore, in Section 2, "team science" is required to frame the issues, to undertake analyses of the complex interactions between urban form, transport and health, and to generate specific solutions of mitigation and adaptation. A critical case study of the interdisciplinary approach is presented, given the recent interest in the science of "team science." In Section 3, we present a descriptive, conceptual model of the characteristics of a hypothetical region that give rise to environmental stressors impacting on health. The model considers the causal relationship between human activity, the pressures it places on the environment, the feedback from those activities on people in the form of environmental pressures, and the actions that are taken by governments, businesses and society in response to these pressures. In the approach taken with the descriptive model we use the typical language of state of the environment reporting: driving forces of environmental change; pressures on the environment; the state of the environment; impacts on the population; and the response of the society. Section 4 contains the results of our literature review of publications across the fields of urban form, transport and public health, where the evidence suggests there are quite distinctive outputs of scientific research across these themes. The classification of this diffuse literature is summarized with particular emphasis on the demographic and socio-economic characteristics of those affected, on the location and geographical scale of the environmental stressor, and on the broad categories of types of policies, programs and solutions. The main research gaps in the literature can be summarized as a need to introduce long-term temporal dynamics into research investigations, including the diurnal, time-dependent nature of some stressors and the life histories of individuals, given that, as bodies age, exposures over a lifetime of environmental stressors can accelerate the aging process and trigger disease. These research gaps relate to Australia, where an identification of trans-disciplinary research centers and work in progress has led to the particular research suggested. Although these are country-specific suggestions, they probably have relevance to other countries, noting that Australia is a liberal democratic country and therefore suggested methodologies and solutions may not be transferable to many parts of the urbanized world. Trans-disciplinary Research Teams Improved transport links have both encouraged rural-urban migration flows and have provided the means for urban dwellers to access the land-use activities contained within those urban regions. It is the scale and concentration of these human activities that generate environmental stressors. Cities are objects which, by definition, pertain to many realities and are studied by numerous disciplines [3].
In the transport discipline, the premier international research body - the World Conference on Transport Research Society (WCTRS) - is represented by many disciplines drawn from academia, the professions and policy makers. The scientific society is structured with approved Special Interest Groups, including transport and the environment, which aims at seeking ways to establish effective mechanisms for mitigating environmental degradation due to transport in the international domain [4], but "transport and health" has yet to be nominated as a sub-theme. The general experience is that interdisciplinary, or multi-disciplinary, team science is becoming more important, especially through international collaborative exercises, but the trans-disciplinary approach remains limited. The public health field has similar weaknesses in its inter-connections with other disciplines. Although the last decade of the twentieth century witnessed a profusion of projects drawing together multi-disciplinary teams of social and health scientists to study and recommend solutions for a wide range of health problems, a trans-disciplinary approach is still required to provide a systematic, comprehensive theoretical framework for the definition and analysis of the social, economic, political, environmental, and institutional factors influencing human health and well-being [5]. Team research is expected to continue its dominance in the production of knowledge in the 21st century. A trans-disciplinary research approach will become more firmly entrenched as the preferred methodology in major research investigations. For example, the mission of the International Association for Ecology and Health (EcoHealth) is to strive for the sustainable health of people, wildlife, and ecosystems. Trans-disciplinary pursuits involve combining strengths to achieve hybridized, innovative approaches to problem-solving. Such challenges demand this level of strategic, team-based approach to the tightly coupled human-natural system interactions underlying many of the pressing threats to our sustained health and environment. This prediction about the trans-disciplinary approach is based on an interpretation of the analysis of 19.9 million research articles in the Institute for Scientific Information (ISI) Web of Science database and an additional 2.1 million patent records [6]. There has been a steady increase in team size over the last five decades, and teams now dominate the top of the citation distribution in all four research domains of sciences and engineering, social sciences, humanities, and patents. Even proponents of team science initiatives note that they are highly labor intensive; often conflict-prone; and require substantial preparation, practice, and trust among team members to ensure a modicum of success [7]. A growing number of studies focusing on the processes and outcomes of trans-disciplinary scientific collaboration suggest that the effectiveness of team initiatives is highly variable and depends greatly on certain contextual circumstances and collaborative readiness factors [8,9].

Trans-Disciplinary - Definition and Methodology

Before giving a specific example, and a critical assessment, of our involvement in a trans-disciplinary research project, it is important to give a concise definition that distinguishes this methodology from others. The first quote has shaped our thinking from the time a former academic colleague, and now medical practitioner, drew our attention to the methodology whilst undertaking a book review.
The second quote is the most recently published and suitable definition for our purposes that we could identify: "Transdisciplinary thinking is primarily a process of assembling and mapping the possible interconnections of disciplinary knowledge about any given…problem until the fullest possible understanding of the problem emerges" [10]. "Transdisciplinarity is an integrative process in which researchers work jointly to develop and use a shared conceptual framework that synthesizes and extends discipline-specific theories, concepts, methods, or all three to create new models and language to address a common research problem" [8]. Both sources identify the differences between the trans-disciplinary approach and single-disciplinary, multi-disciplinary, and inter-disciplinary approaches in convenient tabular form. Furthermore, transcending disciplinary boundaries also requires consideration of different types of integration [11]. "Horizontal" integration is defined as integration across knowledge perspectives, such as disciplines or sectors; "vertical" integration means integration among different types of knowledge users, and may include perspectives from academics, as well as local communities and cultures, and non-government organization (NGO) staff, for example. Collaborations by scientists with policy researchers may improve the likelihood of translating research findings into changes in policies and practices. For instance, multilevel interventions based on ecological models and targeting individuals, social environments, physical environments, and policies must be implemented to achieve behavioral change in physical activity in the four domains of active living: recreation, transport, occupation, and household [12]. Research into both environmental and policy influences on physical activity is well underway in many countries: the 2008 Active Living Research Conference theme was "Connecting Active Living Research to Policy Solutions". Selected journal papers include: principles for improving the translation of research into policy; improving the rigor of research methods; asking policy-relevant questions; presenting country-specific data; and communicating the research findings effectively to policy makers [12]. The main steps of the trans-disciplinary approach [10], which are best illustrated by an example, are:
- problem definition;
- assembling a team of researchers;
- reviewing existing knowledge on the research problem, especially disciplinary and inter-disciplinary conceptualizations and explanations;
- designing the research enquiry from research gaps;
- implementing the research enquiry;
- refining conceptual understandings and synthesizing data sets; and
- recommending the types of interventions (usually with stakeholders) to resolve the problem.
The steps are illustrated briefly by making a critical assessment of a completed trans-disciplinary project into one of the major environmental stressors of transport in major urban areas - that of aircraft noise in neighborhoods surrounding major international airports.

Example of Aircraft Noise and Community Health (Stress)

The context for the research project on the community impacts of aircraft noise was as follows. The Botany Bay Studies Unit at the University of New South Wales, Australia [13], was established as a cross-faculty and cross-university research centre to focus on issues within an area defined broadly as Botany Bay and its hydrological catchments of the Georges River and Cooks River - an area of 960 sq. km,
with 14 local government authorities and containing about 40 per cent of Sydney's population, which is currently around 4.5 million people. Botany Bay is one of Australia's most important social environments (Aboriginal heritage of settlement and land management, European colonization, industrial heritage and the growth of seaside suburbia), where there is a strong interaction of the natural and social environments. The state government was conducting a sub-regional planning study at the time, and there was a cabinet minute allocating $2 million for independent research into problems associated with the region. The former Director of the Botany Bay Studies Unit (Professor John Benzie) and Professor Tony Underwood (Sydney University) were members of the advisory committee in the formulation of the New South Wales Government's Draft Botany Bay Strategy. The research problem addressed by the research team was suggested following wide stakeholder consultation. The views of 24 key individuals representing a wide range of stakeholders were sought to formulate research issues. A stakeholder forum, Botany Bay - Moving Forward, was held on 28 February 2004 to move forward from previous studies, to identify research gaps and the capabilities of researchers, and to formulate a draft set of research priorities. The key questions for participants were "what we don't know, and how research could contribute to making the study area more sustainable". In addition, the forum actively sought public submissions on research needs, priorities, and information on relevant studies already conducted. It should be noted that the University provided exemplary support for what eventuated as a trans-disciplinary research study. The Office of the Vice-Chancellor sponsored the cost of the venue hire (the Scientia Building). Mr Norm Newlin delivered the "Welcome to the Land" address on behalf of the indigenous owners of Botany Bay. The NSW Department of Infrastructure Planning and Natural Resources contributed, along with the Botany Bay Studies Unit, to the costs of catering for the workshop. The Sutherland Shire Environment Centre had widely advertised the workshop, and this helped ensure important community and NGO participation. The forum nominated community impacts of aircraft noise as a priority research topic - hardly surprising given that the Sydney International Airport has two parallel runways that project into Botany Bay [14]. The current practice in quantifying aircraft noise retards the resolution of aircraft noise problems, as far as the community is concerned, because of misunderstandings surrounding the use of aircraft noise metrics, and an underestimation by the government of public health impacts from long-term exposure to chronic aircraft noise. The sound equivalent energy technique with time-of-day weighting (such as the Australian Noise Exposure Forecast and the Day-Night Average Sound Level (DNL)) was designed for land-use compatibility advice and regulation around commercial and military airports [15]. However, it has been widely interpreted as a metric correlated with community annoyance. Its application to the study of community reaction to the effects of aircraft take-offs, landings, and over-flights has been criticized [16]. Therefore, new metrics [17,18] and impacts were investigated. The core team responsible for the design and implementation of the research project came from different undergraduate disciplines: civil engineering; human geography; mathematics and statistics; and mechanical engineering.
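To make the energy-based, time-of-day-weighted metrics discussed above concrete, the following is a minimal sketch of how a Day-Night Average Sound Level is typically computed from 24 hourly equivalent sound levels, with a 10 dB penalty applied to night-time hours. The hourly values in the example are illustrative assumptions, not data from the Sydney study.

```python
import math

def day_night_average_sound_level(hourly_leq_db):
    """Compute DNL from 24 hourly Leq values (index 0 = midnight to 1 am).

    Night hours (22:00-07:00) receive a 10 dB penalty before the
    energy-based averaging, reflecting greater sensitivity to night noise.
    """
    assert len(hourly_leq_db) == 24
    total_energy = 0.0
    for hour, leq in enumerate(hourly_leq_db):
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total_energy += 10 ** ((leq + penalty) / 10.0)
    return 10.0 * math.log10(total_energy / 24.0)

# Illustrative profile: quieter nights, busier daytime aircraft movements.
hourly = [55] * 7 + [65] * 15 + [58] * 2  # night-morning, day, evening hours
print(round(day_night_average_sound_level(hourly), 1))
```

The sketch shows why a single averaged figure can diverge from community annoyance: a handful of loud night-time movements dominate the energy sum while remaining invisible in the headline number of events.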
Such descriptors of a trans-disciplinary research team can be misleading, because the additional postgraduate qualifications acquired by the team were in education; environmental noise; evidence-based medicine; transport engineering; and urban and regional planning. Officers from the Commonwealth Government, through AirServices Australia and the Department of Transport and Regional Development, provided data on aircraft movements and policy advice, respectively. Finally, translators from the South Sydney Area Health Service assisted in the translation of the questionnaire and covering letters into the main languages spoken in the relevant communities that were surveyed. Guest appearances on a local community radio station "Green City" program kept the activities of the Botany Bay Studies Unit in the public mind by explaining the significance of the research and publicizing the survey. The following steps of the trans-disciplinary process - reviewing existing knowledge on the health impacts of aircraft noise, especially the disciplinary and inter-disciplinary conceptualizations and explanations; designing the research enquiry from the research gaps identified; implementing the research project; and refining conceptual understandings and synthesizing data sets - have been reported in the literature [19,20]. One indicator of the quality of the research output is the fact that the paper presented at the 10th Air Transport Research Society World Conference, held in Nagoya, Japan, was one of the seven papers selected for a special journal edition [21]. As for specifying types of interventions to resolve the problem, this is where the project floundered. Stress management techniques were identified for trial in the proposed research based on the cognitive behavioral therapy literature, especially on chronic pain management and on severe asthma [22]. We invited a medical practitioner with international standing in Sahaja Yoga to join the research team. One hypothesis that we hoped to test is that the stress and other health problems caused by aircraft noise can be ameliorated by non-chemical, complementary-medicine stress management interventions (SMI) [23]. This hypothesis is based on understanding the neuroscience of the brain and its response to noise as an environmental stressor. We planned to implement an intervention with aircraft-noise-affected residents of Sydney (a parallel research proposal would examine stress in the workplace and evaluate SMI). However, the research grant applications were not funded; as is common in the case of researchers assessed by their peers as "internationally competitive", other projects were pursued, the trans-disciplinary research impetus on aircraft noise and stress was lost, and the core team has since disbanded through moves to other universities or retirement. Elsewhere, we have argued the case for the application of trans-disciplinary research into cities, transport, quality of life and environmental health [21,24,25]. Therefore, in the next section we confirm the research relevance of these specific domains and attempt to identify trans-disciplinary research on urbanization, transport and public health.
Trans-Disciplinary Research into Transport and Health

Given the importance of the transport sector of the economy, the contribution of the road sector in particular to greenhouse gas emissions [26], and the direct and indirect impacts of transport and urbanization on human health, we might expect there to be published studies organized around the trans-disciplinary approach. Three search methods were used. First, the International Association for Ecology and Health focuses on research and practice that integrates human, wildlife, and ecosystem health, and its understanding draws on integrative and cross-disciplinary approaches involving both the ecological and health sciences. An editorial overview suggests that the common, overarching purpose of the ecology, health, and sustainability research domains is to better understand the connections between nature, society, and health, and how drivers of social and ecosystem change will influence human health and well-being, where a trans-disciplinary approach is advocated [27]. Socio-economic change, public health initiatives, and gains in medical care have continued to improve basic health indices in recent decades, but economic development (which includes urbanization and transport infrastructure development) can impair public health if environmental and social considerations are neglected. Secondly, the research activities of the World Conference on Transport Research Society (WCTRS) - the pre-eminent society for all disciplines associated with transport, covering academia, professional practice and policy makers - were examined to see if the emerging link between transport and health had been identified in any of the Special Interest Groups [28]. Finally, we used key words to locate trans-disciplinary studies of urbanization, transport and health, restricting the computer-based search to Medline, but found few such over-arching studies reported in the literature, and none from Australia. Research teams examining the nexus between urbanization, transport and health appear not to have chosen the EcoHealth journal [29] for placement of their research output. Of the 295 articles published between March 2004 and March 2009, only three within the domain of our interest could be located. Only one study, which aimed to validate a new index of the bio-psycho-social cost of ecosystem disturbance [30], could be assessed with confidence as following the trans-disciplinary model, because its innovative environmental stress scale successfully measured and validated the concept of "solastalgia" - the sense of distress people experience when valued environments are negatively transformed [31]. In a descriptive narrative, the "cases", the high-disturbance group, experienced greater exposure to dust, landscape changes, vibrations, loss of flora and fauna, and building damage, as well as greater fear of asthma and other physical illnesses due to local pollution, than the "control" group. Although this study was informed by fieldwork in the open-cut mining area of Australia's Upper Hunter Valley, it could be adapted as a general tool to appraise the distress arising from people's lived experience of the desolation of their home and environment - for example, those households in proximity to the construction of major transport infrastructure.
The most relevant inter-disciplinary article on transport [32], by sociologists and psychologists, analyzed in-depth interviews about the meaning and significance of social and recreational travel (a proxy for social capital) in Auckland, New Zealand, for a diverse group of Maori (indigenous people of Aotearoa/New Zealand), Samoan (originating from the Pacific island of Samoa), and Pakeha (a Maori term commonly used to describe New Zealanders of European ancestry) participants. The findings highlight the benefits of social and recreational travel both for maintaining social and family relationships and for general health and well-being, including the opportunity to participate in physical activity and in other activities that help reduce stress. A general, comparative article on urbanization, inter alia, describes air quality trends in the "sister" cities of Wuhan, China, and Pittsburgh, USA, and offers a brief qualitative description of changes to the built environment, albeit with no assessment of health impacts. The only explicit reference to transport is that in Wuhan travelers have switched over time from walking and cycling to using public transport [33]. A scan of the nearly 1,000 paper titles at the 11th World Conference on Transport Research, hosted by the University of California, Berkeley [34], also confirmed that trans-disciplinary research studies were equally rare [35,36]. Researchers and practitioners of environmental health are poorly represented, reflecting the fact that there are more appropriate societies for their disciplinary interests. This is somewhat surprising given that transport activities impact on health, both negatively and positively, and that transport policies are now a key determinant of health [37]. As would be expected, there were specialized sessions in the conference program dealing with transport safety - especially road safety - and security. That there were so few papers relating to urbanization, transport and environmental and health impacts was somewhat surprising given the Society's earlier initiative with a multi-disciplinary, multi-country study of transport and the environment, with the Editors achieving, after much debate and argument, a common conceptual understanding. Figure 1 illustrates a similar diagrammatic representation of goals, strategies and policy instruments for integrated land use and transport solutions to environmental problems in Japanese cities [24]. The World Conference on Transport Research Society (WCTRS) and the Institute of Transport Policy Studies, Tokyo, used a similar conceptual model to guide research collaboration on urban transport and the environment [38].

Figure 1. Goals, Strategies and Policy Instruments for Integrated Land Use and Transport Solutions to Environmental Problems. Source: [30].

Conceptual Representation

All models are designed for a specific purpose. Our conceptual representation of the social and environmental (primarily human settlement and transport driven) determinants of health and well-being through the life-span of individuals is no exception. Given as a starting point the trans-disciplinary focus referred to in Section 2, the model is designed to bring together some of the enabling scientific components from a range of disciplines as they might be brought to bear on the problems, analyses and solutions in this conceptualization.
Essentially, the purpose of this model is a scoping one that will allow us, later in Section 4, to classify the literature more systematically and to help identify gaps in knowledge, especially as these might relate to one of the key drivers of environmental change - climate change (Section 4) - and the particular risks likely to affect human settlements [39][40][41]. The components of this model are best explained using a hypothetical region. In the region of our conceptualization - delimited by its geographical boundary (an assumption later relaxed) - we could consider its sustainability in terms of seven major themes: Atmosphere, Land, Inland Waters, Coasts and Oceans, Biodiversity, Human Settlements, and Natural and Cultural Heritage (common categories for State of the Environment reporting). But we are interested in health and sustainability [1], so the prime objects are the people, their exposure to environmental stressors and the cumulative effects of exposure to these stressors over their lifetime. The focus on people is because "population health and well-being is the 'bottom line' of sustainability" [42]. The purpose and objectives of trans-disciplinary research when applying the model to the specific health impacts from human settlements and transport are to provide accurate, up-to-date and accessible information on the state of conditions, trends and pressures on people's health (with full cognizance of the aging process), and to articulate responses by way of solutions and reporting to the key stakeholders who are responsible for the governance of the region. The model proposed to do this is a perfectly general one that reflects the causal relationship between human activities, the pressures they place on the environment, the feedback those activities have on people through environmental pressures, and the actions that are taken in response to these pressures. It is straightforward to see that it is a standard "state of the environment" approach in which the DPSIR Indicator Framework [43] - "Driving" forces of environmental change, "Pressures" on the environment, the "State" of the environment, "Impacts" on the population, and the "Response" of society - can be readily adapted. The pressures come from the way human settlement patterns and their transport systems are deployed and utilized by people going about their daily activities, and the associated impacts these have on the health of the population. As our bodies age, our ability to defend against environmental pressures diminishes, and exposures can accelerate the aging process and trigger, or exacerbate, disease. As stated by Hood [44], "decreased efficiency in the blood-brain barrier and the cardiovascular, pulmonary, immune, musculoskeletal, hepatic, renal, and gastrointestinal systems can alter response to environmental agents, leading to heightened susceptibility to the toxic effects of air pollution, pesticides, and other exogenous threats to health." At this point of the exposition, it is important to recognize that, across the world, there are widely varying climate zones, socio-economic conditions and human activity patterns that will influence the local details of the environmental stressors, the relative importance attached to them given the values of each society, and the responses by the respective public and private sectors of the economy, depending on the governance and political economy of each region.
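Purely as an illustrative aid, the DPSIR logic just described can be sketched as a small data structure that links one environmental stressor to indicators in the five categories. The example indicators below are hypothetical placeholders, not those of any actual state-of-the-environment report.

```python
from dataclasses import dataclass, field

@dataclass
class DPSIRRecord:
    """One environmental stressor expressed in the DPSIR framework."""
    driving_force: str          # e.g. growth in car ownership
    pressure: str               # e.g. vehicle-kilometres travelled
    state: str                  # e.g. ambient PM2.5 concentration
    impact: str                 # e.g. respiratory hospital admissions
    responses: list = field(default_factory=list)  # policies and programs

# Hypothetical record for traffic-related fine-particle pollution.
traffic_pm = DPSIRRecord(
    driving_force="rising motorization (cars per 1,000 residents)",
    pressure="vehicle-kilometres travelled on arterial roads",
    state="annual mean PM2.5 near high-traffic corridors",
    impact="excess cardiorespiratory morbidity in exposed residents",
    responses=["public transport investment", "low-emission vehicle standards"],
)
print(traffic_pm.impact)
```

Structuring indicators this way simply makes explicit which measurement belongs to which causal link, which is the accounting task that state-of-the-environment reporting performs in narrative form.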
The application of the proposed model would capture these contextual differences in the specific research problem statement. For example, and to take two extreme situations, the challenges of sea-level rise as a driving force of environmental change will impact on the large populations living in the deltas of many of the great rivers of the world [45], whereas in arid countries, such as Australia, water security becomes an issue with projected increases in annual mean temperature [46]. To validate the proposed model, research into the above factors in any real region should be conducted within a trans-disciplinary framework with team science (see Section 2 above) that also considers society's response to the environmental pressures and impacts on this life-span aging process by assessing interventions (using economic, social and environmental evaluation methodologies) in an over-arching sustainability framework of alternative scenarios, policies and programs [47,48]. Translation of findings into practical outcomes with key stakeholders is a key goal in a comprehensive evaluation of the model. The scientific rationale of the model derives from four main concepts. The first is founded on the literature on quality of life, especially its multi-dimensional conceptualization with categories of concepts and domains of health: opportunity; disadvantage; health perceptions; functional status; impairment; and duration of life and death (Table 1). This represents the way we think about the health and well-being of the individuals studied and how we might evaluate the outcomes of any intervention. Health outcomes can only be measured within the constraints of the person's environment and their perception of their health [49]. (Table 1, for example, lists under health perceptions: general health perceptions; satisfaction with health, including self-rating of health, health concern/worry, and satisfaction with physical, psychological and social function.) Also, the "Serial V" concept, which integrates health outcome measurement, process improvement and continual improvement (Figure 2), can form part of the evaluation methodology. The outcome measures can relate to: mortality and morbidity rates; physical, mental and social functioning; satisfaction; quality assurance monitors; and cost/resource usage [50]. The basic process measures might be speed, accuracy, appropriateness and efficiency. The high-leverage processes are more difficult to identify and require both professional and local knowledge and analyses of cause-and-effect. The second concept, which is derived from work by Swedish geographers led by Professor Torsten Hagerstrand [51] on "time-space" geography, allows for an accounting of people's geographical location in the space identified as the region, and of the human activities that they undertake in that region. Theoretically, this accounting should take place with longitudinal studies from "cradle to grave" to account for the aging process, but the costs and difficulties of such a survey preclude this full accounting [52]. This human activity approach - "the things people do in time and space" [53], with its associated survey methodology [54] - leads to the analysis of the precise location, activity, and time-duration for each individual (with a unique identifier of demographic and socio-economic condition, as identified in the first component of the model) and their space-time exposures to the environmental stressors, as conceptualized by Hagerstrand [55,56].
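The space-time exposure accounting described above can be illustrated with a minimal sketch: each diary episode (location, time band, duration) is matched to a stressor concentration for that place and time, and the person's exposure is the duration-weighted sum. The concentration lookup table and the diary entries below are invented for illustration only.

```python
# Minimal sketch of Hagerstrand-style space-time exposure accounting.
# Each activity episode contributes duration * local stressor intensity.

# Hypothetical PM2.5 concentrations (ug/m3) by zone and time band.
CONCENTRATION = {
    ("home_suburb", "day"): 8.0,
    ("home_suburb", "night"): 5.0,
    ("arterial_road", "day"): 35.0,
    ("cbd_office", "day"): 12.0,
}

# One person's travel/activity diary: (zone, time_band, hours).
diary = [
    ("home_suburb", "night", 9.0),
    ("arterial_road", "day", 1.5),   # commuting by car
    ("cbd_office", "day", 8.0),
    ("arterial_road", "day", 1.5),
    ("home_suburb", "day", 4.0),
]

exposure = sum(hours * CONCENTRATION[(zone, band)] for zone, band, hours in diary)
total_hours = sum(hours for _, _, hours in diary)
print(f"Time-weighted mean exposure: {exposure / total_hours:.1f} ug/m3 over {total_hours:.0f} h")
```

The same arithmetic extends naturally to other stressors (noise, heat) and, in principle, to cumulative lifetime exposure when the diary is replaced by a longitudinal activity history.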
It is especially important to note that survey data from the human activity approach will reveal all travel - including the origin and destination of each journey, the route taken, the transport mode used (walking, cycling, public transport and private transport) and the duration of those journeys. The time spent at the various origin and destination locations in the region, including, importantly, in the home, will be an important factor in calculating exposure to environmental stressors such as pollution from the transport systems. Also, the duration of time spent walking and cycling (plus other physical activities such as sport) will give quantitative information on whether that person is leading a "healthy" or sedentary lifestyle at that particular stage of the person's aging process. The third component of the model is the associated fundamental science of the stressors under investigation in the region and their impacts on the population, on which a large body of literature may be accessed. Analysis is based on the physical properties of the various stressors (exogenous threats) that have a bearing on the health of the population, including the precise amplitude and location by time of day of these stressors across the region. For illustrative reasons only, we select vehicle emissions and transport noise. The combustion of gasoline and diesel fuels in internal combustion engines results in tailpipe exhaust emissions that mix in the atmosphere and disperse according to specific meteorological conditions; these emissions constitute greenhouse gases and suspended particulate matter and contribute to the formation of smog. These fundamentals are complicated because the rate of emissions will depend on a variety of factors, including vehicle type and age, standard of maintenance, and the traffic flow conditions influencing driving behavior and speeds. These complex interactions are well described in the body of knowledge encompassed in traffic engineering [57]. Sound pressure waves generated by transport vehicles, and by their interaction with the road pavement surface (traffic), rail (rail-based freight and public transport) and the air (jet aircraft), propagate from a multitude of sources within the region according to well-understood principles of physics. However, it is the acoustical energy of differing wavelengths, again tempered a little by prevailing meteorological conditions, that the human ear receives and that the brain interprets as unwanted sound or "noise" (primarily a psychological reaction that is somewhat subject-specific). Psychologists and acoustical engineers have developed a thorough understanding of exposure to environmental noise (dose-response relationships). The fourth component of the conceptual model is to exploit both the rapid developments in computing technologies for the analysis and linking of large databases and the power of geographical information systems (GIS), with commercially available software, to map layers of interest at specific locations across the region, or in a sub-set study area. Visualization using GIS allows layers associated with the seven main themes of environment and heritage to be mapped, and the place- and time-dependent nature of the stressors to be superimposed (primarily line sources in the case of most transport infrastructure, but three-dimensional plots of flight paths for airports and aircraft noise).
The integration of land information systems, mathematical models of travel demand and of road traffic noise, and the exposure of neighboring properties to noise impacts has been achieved in a GIS framework [58][59][60]. It is the time-varying location of people going about their daily activities in the region, and the differing duration of those activities in geographical space, that, when mapped onto the locational patterns of the stressors, will enable exposure to be estimated. Of course, GIS has other important uses in validating the model, such as the mapping of land use locations (for studies of accessibility, social capital and well-being) and of traffic accident locations. A final observation on the model is that it can be further revised and refined following the formulation of hypotheses and their testing in real regions of cities and their inter-connecting hinterlands. Suitable analytical tools can be drawn from evidence-based medicine, epidemiology and statistics, and systems and risk analysis (widely used in practice by environmental and transport engineers). Risk assessment procedures in the US regulatory agencies came from the US National Academy of Sciences. The formalization of risk assessment and risk management found its way into international organizations, federal agencies and business during the 1980s. More recently, global climate change risk is seen through the broad concepts of both mitigation and adaptation. This latter point is a reminder that the hypothetical region we have outlined has a boundary that is not immutable, and so is subject to trans-border events of both a physical (climate) and a social (human migration) dimension.

Literature on Human Settlements, Density and Health

There is a long history of writing on the environmental problems of the 19th century industrial city and of the contemporary city, such as the architectural guide that attempts to define a theory and language for constructing spaces that allow for optimal human happiness and well-being [61], and on the effects of traffic on the urban environment [62]. The influence of built-form factors on health (and well-being) is now well established from evidence cited in an extensive literature [63][64][65][66][67][68][69][70][71]. There is a body of literature that links urbanization and urban layout with obesity, especially the influence of low-density suburban developments that are car dependent and encourage sedentary lifestyles, including chauffeuring children around in motor vehicles. In 1996, the US Surgeon General's report on physical activity and health established the multiple health benefits of physical activity and contended that thirty minutes of moderate physical activity several days per week could help to prevent a range of diseases [72]. The Robert Wood Johnson Foundation's multi-million-dollar Active Living portfolio aimed to increase understanding of the policy and other forces that shape the built environment, and to alter them to support the formation of environments that are more amenable to physical activity. Active Living by Design funded twenty-five communities [73]. The revised mission from 2007 to 2012 is to stimulate and support research on environments and policies that influence physical activity in order to inform effective childhood obesity prevention strategies, particularly in low-income and racial/ethnic communities at highest risk.
Active Living Research (ALR) examined and measured the design features of communities and charted their connections to levels of physical activity, aiming to accumulate evidence on how the built environments of a range of communities shaped physical activity. However, built environment characteristics near the home did not consistently predict walking for exercise in a healthy population in western Washington State, USA. Further, there was little evidence of neighborhood-level variation in walking for exercise, despite neighborhood-level variation in the built environment [74]. The policy changes to which the active-living partners aspired rarely came quickly or without conflict [75], as documented in the special edition of the Journal of Health Politics, Policy and Law [76]. Some of the suggestions that focus on fixing the problem involve transport at their core: collect travel and activity data from people before and after rail lines open, or after cycling and pedestrian improvements have been made, to analyze the link between investment interventions and travel and health benefit outcomes; require that health criteria be at least considered within cost-benefit analyses across transport modes, to be achieved by providing incentives for government bureaucrats to evaluate the health impacts of alternative transport investments; and address transport-related impacts on sedentary behavior through increased funding for public transport, walking facilities and bike paths. Doctors for the Environment Australia [77] are also advocating government spending on public transport through a national initiative addressed to all Members, Senators and Ministers in the federal parliament, seeking their recognition that public transport is both a climate change and a health issue. The economic dimensions of the problem of transport policy that has favored developers in suburban regions and promoted private vehicle usage and road building programs in the USA are becoming clearer. Recently, a submission by the American Public Health Association [78] at a workshop on the Hidden Health Costs of US Transportation Policy estimated the annual costs to be: traffic injuries and fatalities at about $200 billion; obesity/overweight societal costs at about $117 billion and the cost of inactivity at about $76 billion; and air quality at $40 to $64 billion. International comparative studies of such costs should be updated regularly as additional data become available.

Literature on Transport Impacts on Health

From the earliest days of the research that has now blossomed into the World Conference on Transport Research conferences (see Section 3), urban land use and transport have been analyzed together [79], and their interactions quantified as a key process [80,81]. The dynamics of the driving forces in the DPSIR Indicator Framework for human settlements and transport are now firmly established [82]. The key mechanisms in this driving force are illustrated graphically in Figure 3 as a systems flow diagram. It is worth observing that mathematical models of demand and supply (mechanism 4) have been exhaustively refined and verified by researchers over a long period of time. Monitoring macroscopic changes in transport and land use is a useful starting point, although transport planners would analyze detailed traffic movements and land-use change at the fine geographical detail of micro zones and consider transport pollutants at the more macro, urban level [83].
Figure 3 shows a general set of simple indicators for monitoring change [84], where the blue lines are the early phases and the red lines the latest phase of urban development. In simple outline, the processes of development are driven by economic growth (income per capita), which in turn leads to increased motorization (car ownership) and high road congestion levels (road length per registered car), which introduce additional air pollution. Economic growth is a driver of more residential space per person, an increasingly sprawled city (urban radius), and poor accessibility with increasing spatial separation and longer journeys (total trip length). In a sprawled, car-dependent city, total energy consumption for transport (and the associated tail-pipe emissions), plus the carbon burden from automobile manufacture, is high, and this is exacerbated by road congestion. That such a pattern of urban development is unsustainable has been clearly documented by numerous authors, one of whom [84] is widely cited in the mainstream transport literature. The main stressors from urban transport systems (only pollution is shown in Figure 3) are: accidents involving road vehicles, cyclists and pedestrians; transport noise; and vehicle emissions (and ambient air quality). First, we mention traffic accidents only in passing, because their causes (human factors, vehicle and road environment) and countermeasures are extensively documented in specialist journals. For example, the journal Accident Analysis & Prevention [85] provides wide coverage of the general areas relating to accidental injury and damage, including the pre-injury and immediate post-injury phases. Published papers deal with medical, legal, economic, educational, behavioral, theoretical or empirical aspects of transportation accidents, as well as with the accidents themselves.

Figure 3. Monitoring the mechanisms of urban development: 1) economic growth (GDP/capita; car ownership; road length per car); 2) suburbanization (urban radius); 6) environmental load (pollution; per capita energy consumption). After Hayashi (1996). Source: [82].

Secondly, the analysis and prediction of aircraft, road and rail traffic noise is well established, and the health effects of transport noise are well documented [86][87][88][89]. There is a large amount of evidence that negative emotional states are acutely associated with cardiovascular pathophysiology [90][91][92]. The evidence about the world-wide distribution and cause of the aircraft noise problem in suburbs surrounding airports is also compelling [93]. Annoyance from aircraft noise is well documented in the literature [94,95], but stress and hypertension have only been identified in more recent years [96][97][98][99]. In our own contribution to this topic, a self-reported questionnaire using the validated instrument SF-36 measured health-related quality of life, prevalence of hypertension, chronic noise stress, noise sensitivity, noise annoyance, confounding factors, and demographic characteristics [21]. Aircraft noise is one of the best illustrations of environmental stressors that are a major component of sustainable health in cities. For example, research on aircraft noise was of sufficient social importance to be included in The Sydney Morning Herald, Sydney Magazine (Issue #60 of April, 2008, p. 58) [100]. Thirdly, disentangling the impacts of transport pollutants from other pollutants in the atmosphere is more difficult.
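Returning to the simple monitoring indicators summarized in Figure 3: these can be tabulated directly from routinely published statistics. The sketch below uses invented figures for a hypothetical city to show how the indicators are derived; it is not data for any real region.

```python
import math

# Hypothetical decadal statistics for one city (all numbers invented).
years = [1990, 2000, 2010]
gdp = [90e9, 140e9, 210e9]              # regional GDP, $
population = [2.0e6, 2.4e6, 2.8e6]
registered_cars = [0.8e6, 1.2e6, 1.7e6]
road_km = [9000, 10500, 11500]
urbanised_area_km2 = [700, 950, 1250]

for i, year in enumerate(years):
    gdp_per_capita = gdp[i] / population[i]
    cars_per_1000 = 1000 * registered_cars[i] / population[i]   # motorization
    road_per_car = road_km[i] / registered_cars[i]              # congestion proxy
    urban_radius = math.sqrt(urbanised_area_km2[i] / math.pi)   # sprawl proxy
    print(f"{year}: GDP/cap ${gdp_per_capita:,.0f}, "
          f"{cars_per_1000:.0f} cars/1000, "
          f"{road_per_car * 1000:.1f} m of road per car, "
          f"urban radius {urban_radius:.1f} km")
```

Tracking such derived quantities over time is what allows the early and late phases of urban development to be compared in the way the figure describes.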
Determining the risk posed by environmental pollution to public health [101] requires knowledge of the fundamental components: the source of pollutants; the transport of pollutants from sources to humans; the exposure of humans to pollutants; and the dose-response relationship for those exposed. There is considerable variation between subjects; for example, there will be a large difference between those exposed and not exposed to environmental tobacco smoke [102]. Vehicle emissions are an important source of a number of potentially hazardous air pollutants, including particulate matter, nitrogen dioxide, carbon monoxide, and several air toxics. Although questions remain about some of the chemical and biological processes at play, including whether there are synergistic effects of combined pollutants in the atmosphere [103], there is strong evidence that many of the pollutants emitted by motor vehicles pose serious health risks, and that those risks are elevated for individuals, especially sensitive individuals, living in close proximity to a high-traffic road. The negative health effects of many of these pollutants have been well documented [104]:
- "Exposure to fine and coarse PM in ambient air has been associated with a short-term increase in mortality and morbidity from cardiovascular and respiratory diseases.
- Studies have found long-term average mortality rates 17%-26% higher than expected in communities with high levels of fine particulate matter.
- Diesel exhaust and nitrogen dioxide have been tied to increased asthma symptoms and response to allergens.
- Exposure to diesel exhaust has also been associated with increased rates of lung cancer and mortality and morbidity.
- Air toxics can cause negative health effects including cancer and respiratory, neurological, reproductive, and developmental effects."
The health risks posed by particulate matter (PM) in ambient air are a cause for concern (for a recent overview of research in the USA, see [105]). Fine and ultra-fine particles, because of their small size and/or chemical composition, tend to be more hazardous to human health than coarse particles. Results indicate an elevated mortality risk from short-term exposure to ultra-fine particulates [106]. PM from motor vehicles in particular has been shown by several studies to be more toxic than PM from other sources. Ambient concentrations of particulate matter have been consistently associated with daily mortality [107][108][109]. Associations between ambient concentrations of nitrogen dioxide (NO2), carbon monoxide (CO) and daily mortality have been observed [110,111]; however, the causality of the NO2 effects is currently being debated [112]. Exposure to traffic-related pollution is a complex subject. Residential traffic is associated with both current symptoms and prevalence of diagnosis of asthma and chronic bronchitis among adults in southern Sweden. Traffic has not only short-term but also long-term effects on adult chronic respiratory disease, even in a region with low overall levels of traffic pollution such as southern Sweden [113]. Traffic activity increases the level of air pollution, with locations near heavy traffic having significantly higher particulate levels than locations near lighter traffic [114][115][116][117][118]. A study of cyclists in Mol, Flanders [119] found relatively higher ultra-fine particle concentration exposure during morning office hours and moderate ultra-fine particle levels during the afternoon.
The major sources of ultra-fine particles and PM10 were identified as vehicular emissions and construction activities, respectively. In fact, air-quality models based on traffic patterns account for up to 50-73% of the variability in average annual levels of fine particulate matter [115,118,120]. Although ambient air quality can vary within a city, generally, people living near high-traffic roads have the highest levels of exposure. Wind speed and direction and building heights can also affect pollutant distribution, both adjacent to the roadway and across a metropolitan area [121]. Both PM and ozone can be transported long distances, impacting ambient air quality over a wide area and limiting the effectiveness of local pollution control efforts. Ground-level ozone and PM2.5 have been linked to negative health impacts ranging from minor respiratory problems to cardiovascular disease, hospitalizations and premature death. For example, based on data from eight Canadian cities, Health Canada has estimated that 5,900 premature deaths each year in these cities are attributable to air pollution [122]. In the city of Sao Paulo, Brazil, logistic regression revealed a gradient of increasing risk of an early neonatal death with higher exposure to traffic-related air pollution [123]. A large body of research suggests that the typical extent of elevated exposure to PM2.5 and nitrogen dioxide is roughly 100 to 500 meters from a major road [124][125][126][127]. Studies examining the impact of living near major roadways, and the consequent long-term exposure to traffic-related air pollutants, have shown a variety of health risks, including:
- a significant increase in the risk of death from cardiopulmonary causes;
- a significant increase in asthma prevalence in children;
- impacts on lung development in children; and
- increased cardiac arrhythmias.
Sensitive individuals, who are more likely to experience these effects, include the elderly, those with influenza [124], asthmatic children [125], diabetic subjects [126], and younger children (in a nationwide US study [127], increased respiratory allergy/hay fever was associated with increased summer ozone levels and increased fine particulate matter). Further research that classifies and analyzes the population by age and socio-economic characteristics, and their exposure to pollutants when going about their daily activities in places other than the home, is desirable. There is a strand of research that looks at the stress involved primarily in driving private motor vehicles (the trauma of the road toll could be included as a negative impact of using transport) and in driving trucks as an occupation [128][129][130][131][132]. The extent of metropolitan congestion is increasing in Australia: for example, motorists in Melbourne are spending an extra day a year behind the wheel. A typical driving commuter spends almost two weeks - about 336 hours - a year going to and from work. In 1999, the same driver would spend only 12 days and 17 hours in the car every year for the same trip [133]. Traffic speeds for commuters to the Sydney CBD who use the M2 toll road, the Lane Cove Tunnel and the Gore Hill Freeway fell to just 31 kilometers an hour in 2007-08, down from 38 kilometers per hour a year earlier [134].
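Returning to the statement earlier in this subsection that traffic-based models explain roughly half to three-quarters of the spatial variability in annual fine-particle levels: this usually refers to land-use-regression-style models fitted to monitoring-site data. The sketch below illustrates that approach with fabricated site data, so the coefficients and R² are purely illustrative.

```python
import numpy as np

# Hypothetical monitoring sites: predictors are traffic intensity on the
# nearest major road (veh/day) and distance to that road (m); the response
# is annual mean PM2.5 (ug/m3). All values are invented for illustration.
traffic = np.array([45000, 30000, 12000, 60000, 8000, 25000, 50000, 15000])
distance = np.array([40, 120, 400, 25, 600, 200, 60, 350])
pm25 = np.array([13.5, 11.0, 8.9, 14.8, 8.1, 10.2, 13.9, 9.4])

# Ordinary least squares: pm25 ~ intercept + traffic + distance.
X = np.column_stack([np.ones(len(traffic)), traffic, distance])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
fitted = X @ coef
r2 = 1 - np.sum((pm25 - fitted) ** 2) / np.sum((pm25 - pm25.mean()) ** 2)
print(f"R^2 of the traffic-based model: {r2:.2f}")
```

The fitted surface can then be evaluated at residential addresses, which is how such models feed the proximity-based exposure estimates cited above.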
In a study of the US National Household Travel Survey [135], higher commuting time (more than 20 minutes) was significantly associated with making no socially-oriented trips (a proxy for social capital). Finally, by way of an observation on the solutions and countermeasures to the environmental stressors and their impacts: these are usually well covered in governments' state-of-the-environment reporting at the national, regional and, sometimes, local levels. For example, searches can be made of the various Australian state government policies and programs in the various state-of-the-environment reports [136]. A systematic review of the evidence on the most effective ways of improving population health through transport interventions [137] is now dated and requires updating. An international comparative analysis of societal responses within the DPSIR Indicator Framework, much like the comparative analysis of cities summarized in Figure 1 of this paper, would give a lead as to what policies and programs have been successful in a range of urban situations.

Literature on Visualization and GIS

Visualization of the environmental stressors (and health impacts) from the urbanization process outlined in Figure 3 is highly desirable, especially when communicating scientific information to stakeholders and wider publics. Geographic information systems (GIS) provide ideal platforms for locating environmental stressors and public health information within the natural and man-made environment. They are highly suitable for analyzing epidemiological data, revealing trends and interrelationships that would be more difficult to discover in tabular format. Moreover, GIS allows policy makers to easily visualize problems in relation to existing health and social services and the urban environment, and so more effectively target resources. The World Health Organisation (WHO) has a public health and GIS mapping program, but this is only at the global or national scale and not at the spatial resolution of the city. The logical extension of this visualization challenge of GIS is to apply it to urban areas, human activity patterns and environmental stressors. As a starting point in the development of such a system, there are books on GIS and public health [138] and on public health information visualization technology [139], and a range of recent university initiatives linking geography, web-based spatial analysis (GIS) and epidemiology (for example, at McMaster University, Canada, and the University of Iowa's "Improving Public Health Through Geographical Information Systems: An Instructional Guide to Major Concepts and Their Implementation", Web Version 1.0, December 1997 [140,141]), as well as work on disaster management, emergency planning and responses to terrorist attacks [142][143][144]. A number of geographic perspectives on health and environment could create useful connections between geography and public health, via social epidemiology [145,146] - for example, the dust map, or a graphic presentation of particulate concentration, where the particulate concentration ranges are projected on a street plan or aerial photograph [119]. Measured particulate concentrations are coupled with the GPS positions and then projected onto the entire transport route (along which individuals pass and are exposed to the pollutant). In order to properly plan, manage and monitor any public health program, it is vital that up-to-date, relevant information is available to decision-makers at all levels of the public health system.
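A "dust map" of the kind cited from the Flanders cyclist study can be approximated by joining each GPS fix on a route to the particle reading logged at the same instant, and then summarising by route segment before plotting. The sketch below uses invented readings and segment labels purely to show the join-and-summarise step that precedes any mapping.

```python
# Hypothetical synchronized logs from one ride: GPS fixes labelled by route
# segment, and an ultra-fine particle count (particles/cm3) per timestamp.
gps_track = [
    (0,   -33.920, 151.180, "residential street"),
    (60,  -33.922, 151.185, "arterial road"),
    (120, -33.925, 151.190, "arterial road"),
    (180, -33.927, 151.194, "park path"),
]
particle_log = {0: 9000, 60: 42000, 120: 38000, 180: 7000}

# Join on timestamp, then average by route segment for mapping/colouring.
by_segment = {}
for timestamp, lat, lon, segment in gps_track:
    by_segment.setdefault(segment, []).append(particle_log[timestamp])

for segment, counts in by_segment.items():
    print(f"{segment}: mean {sum(counts) / len(counts):.0f} particles/cm3")
```

In a full GIS workflow the segment means (or binned concentration ranges) would be draped over the street network or an aerial photograph, which is exactly the presentation the dust-map example describes.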
As every environmental stressor requires a different response and policy decision, information must be available that reflects a realistic assessment of the situation at the local level. Geo-coding accuracy has been established for environmental exposures and health [146]. This must be done with the best available data, taking into consideration demographics, the availability of, and accessibility to, existing health and social services, as well as other geographic and environmental features, including climate change impacts. Improved information systems, which are an integral part of outcomes assessment and the continuous quality improvement approach advocated in Section 2, will result in more effective decision making. If health information systems are to make a practical contribution to the health system, then there is a need to measure the output concisely [147]. These comments are of particular relevance to integrated solutions for the sustainability of cities and regions, which include public health informatics and outcomes assessment. A stakeholder survey of 522 leaders and professionals in the 25 largest cities of the world found that health care is a major infrastructure challenge, and furthermore noted that IT in health care has a major role to play, supporting both treatment and administration [148].

Future Issues - Climate Change

Within most metropolitan regions of the world there is evidence that environmental stressors are increasing, and health impacts will be exacerbated by climate change in the 21st century [149]. Public health specialists have raised the potential impact of climate change on health, and there are several research topics emerging. Changes in temperature, humidity and rainfall, and sea-level rise, could all affect the incidence of infectious diseases, and this is the most common topic [150,151]:
a. Association between heavy rainfall and Ross River virus disease.
b. Both insects and insect-borne diseases (including malaria and dengue fever) have been experienced at increasingly higher altitudes in Africa, Asia and Latin America.
c. Heavy rainfall may cause outbreaks of cryptosporidiosis, which causes severe diarrheal disease in children and can cause death in immuno-compromised individuals.
d. An increase in temperature can activate blooms of vibrios (cholera) in fish.
e. The emergence of Hantavirus pulmonary syndrome may be linked to heavy rainfall resulting in growth in rodent populations and subsequent disease transmission.
f. Extreme flooding or hurricanes can lead to outbreaks of leptospirosis and, by their violent nature, natural disasters such as storms, floods and cyclones have the potential to cause morbidity, mortality, and property loss [152].
A comprehensive and recent review of climate change and human health research needs in Australia may be found in a late 2008 report [153]. Heat waves are expected to increase in frequency, intensity and duration this century [154,155], and the urban heat island effect is likely to cause additional problems in cities. Mortality from heat waves is related to cardiovascular, cerebrovascular, and respiratory disease and is concentrated in elderly persons and individuals with pre-existing illness, and such deaths have been examined in Europe and the USA [156][157][158]. A study of the 2006 Californian heat wave [159] analyzed county-level hospitalizations and emergency department visits for all causes and for cause groups.
During the heat wave (July 15-August 1, 2006), excess morbidity and rate ratios were calculated and compared to the periods July 8-14 and August 12-22, 2006. Emergency department visits for heat-related causes were found to increase, especially in the central part of the state, which includes San Francisco. Children (ages 0-4 years) and the elderly (ages ≥ 65 years) were found to be at greatest risk. Increases in mean temperatures and more sunny days will intensify another set of problems. Damage to the earth's stratospheric ozone layer will lead to an increase in solar ultraviolet radiation reaching the Earth's surface, possibly increasing the incidence of skin cancers [160]. An increase in global temperature causes earlier and longer pollen seasons [161], and the pollen produced may be more allergenic [162]. The quantity and seasonality of pollen are likely to be affected both by climate-forcing of phenology and by direct effects on pollen production - trees in the spring, grasses in the summer, and ragweed in the autumn [161]. One of the most common plant-induced health effects relates to aerobiology, including sneezing, inflammation of nasal and conjunctival membranes, and wheezing - in Japan, cedar tree pollen can be "the enemy of some people". In addition to increased pollen exposure, there are other consequences of increased fossil fuel burning which may be synergistic; for example, diesel particles help deliver aeroallergens deep into airways and irritate immune cells [162]. Increased temperatures increase the risk of natural forest fires (although some are deliberately or carelessly started by humans) in dry climate zones (a seminar on this topic presented by Professor David Karoly was held in Melbourne, Australia, on 27 March 2009 [163]). The recent bushfires in Victoria, Australia, which started during successive days of hot temperatures in the mid-40s °C, were a brutal reminder of the power of nature. The Deputy Prime Minister of Australia, Ms Julia Gillard, moved a condolence motion during a two-hour sitting of Parliament: "The 7th of February, 2009, will now be remembered as one of the darkest days in Australia's peacetime history." [164]. Fires burned 413,000 hectares and destroyed 1,834 homes and more than 400 rental properties in 78 affected communities of regional Victoria. As of 5 March 2009, the official death toll was 210, but that could rise as police continue their search for more bodies; because of the nature and intensity of the fire, some victims may never be known. By early March there were still 1,200 kilometers of containment lines surrounding four burning fires, which fortunately held under the "worst conditions" of temperatures and high, gusting winds on 3 March [164]. Such problems have seen climate change recognized as a driver of urban policy in Australia. The National Climate Change Adaptation Framework (the Framework) was endorsed by the Council of Australian Governments (COAG) in April 2007 as the basis for government action on adaptation over five to seven years, and up to $50 million will be invested in priority research for key sectors as identified in National Adaptation Research Plans (NARP). The main purpose of the NARP for human health is to articulate a research agenda for the next 5-7 years through which to acquire a fuller understanding of the health risks from climate change in Australia, and of how to reduce those risks via planned adaptive interventions [165].
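Returning to the excess-morbidity comparison used in the Californian heat-wave study cited above: in its simplest form it reduces to a rate ratio of visit counts per person-day between the heat-wave window and reference windows, plus a count of excess visits. The sketch below illustrates that arithmetic with invented counts and an assumed county population; it is not the published analysis.

```python
# Invented daily emergency-department visit counts for heat-related causes.
heatwave_visits = [62, 75, 90, 110, 95, 80]     # heat-wave days (illustrative)
reference_visits = [30, 28, 35, 31, 29, 33]     # comparison days (illustrative)
population = 1_500_000                          # hypothetical county population

def daily_rate(visits, pop):
    """Mean visits per person-day over the window."""
    return sum(visits) / (len(visits) * pop)

rate_ratio = daily_rate(heatwave_visits, population) / daily_rate(reference_visits, population)
expected = len(heatwave_visits) * (sum(reference_visits) / len(reference_visits))
excess = sum(heatwave_visits) - expected
print(f"Rate ratio: {rate_ratio:.2f}; excess visits over the window: {excess:.0f}")
```

Stratifying the same calculation by age group is what yields the finding that the very young and the elderly carry the greatest risk.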
These research projects are now underway in Australia, and the next section speculates on some research gaps from a similar, Australian perspective [166]. First, we will make a brief observation on the state of "team science" in Australia to complement Section 2. Then, based on components of the conceptual representation in Section 3, we classify in Tables 2 and 3 the common health impacts from urbanization and transport which have been identified in Sections 4.1 and 4.2. The schema adopted here is the geographical location of people when they are affected by environmental stressors and the geographical scale of that environmental stressor (exposure). Suggestions on obvious gaps in the literature, with particular reference to the Australian context, are offered. This will help suggest research that will bridge the gap between the traditional, individual-level health care approach and population-based health care [167] and also provide a closer link between research and research-led teaching in universities.

Classification of the Knowledge Base and Research Gaps

Searches of databases including Medline using keywords such as "public health, urbanization, transport and Australia" were found not to be effective and, in fact, showed zero citations. However, we are aware of key research centers in Australia that are making contributions relevant to the themes of this paper, including a transport component. At the University of Sydney, the Centre for Physical Activity and Health, School of Public Health, has conducted research in multidisciplinary research teams on how "walkable" local neighborhoods are. Similarly, The University of Western Australia Centre for Built Environment and Health [168] has an on-going research program focused on examining the impact of the urban environment on health indicators and disease outcomes in children, adults and older adults. This is a fertile field for research in the Australian context, given the current state of the science in the USA, where a substantial amount of research has been conducted [169]. The first comprehensive examination of built-environment measures has only recently been published; it demonstrates that, despite the considerable progress over the past decade, there is still a need to improve the technical quality of measures, to understand their relevance to various population groups, and to understand the utility of measures for science and public health [170], and to shape a research agenda [171]. The Australian National University National Centre for Epidemiology and Population Health [172] is taking a life-course approach to the study of health and wellbeing, including issues such as: "the prevalence and incidence of health problems varying with age or stage of life; the extent to which the health status and wellbeing of individuals changes over time; the risk factors over the life-course that combine to influence health and wellbeing, through cumulative or synergistic effects; and the combination of environmental influences combined with personal vulnerability (including genetic predisposition) in determining health and wellbeing". There is a well-established international public health literature that represents the way in which a variety of influences, including the social environment and the physical environment factors identified in Figure 3, interact to affect individual health and wellbeing, but we will cite one important Australian source [173].
There are also examples in Australia of the non-health sectors (including housing and public planning) which may have a role in working with the health sector [174], but these collaborations must be strengthened and solutions considered as an integral part of a trans-disciplinary team. Furthermore, it should be noted that there has been a change in approach to environmental studies within public health that emphasizes the close inter-relationship between human activity, industry, the physical environment, environmental stressors and human disease, ill health and mortality [175]. Selected examples from Australian cities will help illustrate a few of these inter-relationships (Table 2). The main drivers of the problems of Australian cities are probably the developer-led urbanization process (and consumer preference for the "quarter acre block" of house and garden) that has led to low-density, sprawling suburbs, and material affluence with high levels of motorization, where accessibility to health (and other) facilities is a key issue, especially for those without a car [176]. Some of the management, mitigation and adaptation solutions are summarized in the right-hand column of Table 2. As with practical experience of urban transport policy [177], integrated solutions across sectors are essential, and research can contribute to helping overcome barriers to implementation and to the simulation of all costs and benefits associated with such a coordinated set of policy options. For instance, integrated land-use and transport planning aims to reduce dependence on cars by improving access to public transport, walking and cycling; providing facilities nearby so people travel shorter distances; and encouraging multi-purpose trips, which reduces the total number of trips. The New South Wales Government's responses to these issues are summarized in its State of the Environment Report [178]. Human health impacts (both positive and negative) should be accounted for in the planning, development and management of urban environments, given the stressors and impacts identified in this review. Integrated solutions across urban development, transport and health sectors are required. In the Special Edition of the New South Wales Public Health Bulletin on 'Cities, Sustainability and Health', an example of this approach is a proposed ten-point checklist for the planning and development of healthy and sustainable communities [179]. Developing these sector inter-connections a bit further, an initial research project could be to classify evidence on public health and urban form with particular reference to the location and time spent in high environmental stress areas (exposure), taking into account individual life histories where possible, and to tease out built-form effects from socioeconomic confounders, including the role of the law in supporting urban dysfunction [180]. All events that impact on the health and wellbeing of the urban population, whether they result from the built form in general, or from motor vehicles or jet aircraft in particular, exhibit a geographical pattern of incidence. It is the appropriate management of such factors at the correct spatial level of resolution that presents an action-based research challenge to help contribute to more sustainable cities. Innovation is necessary to achieve socially-sustainable solutions.
Partial solutions generated by traditionally distinct professional disciplines are unlikely to result in real innovation, as argued throughout this review paper and by others. For example, some of the research challenges for urban researchers from a social-ecological perspective are [181]: "The spatial and temporal dynamics of social and environmental determinants of human health in urban systems. Who gets sick and where do they live? What are the relative contributions of social versus environmental factors? What types of interventions are available and appropriate? Measures of health in different urban forms. What contribution does urban pattern and social-ecological processes in urban environments make to the functionality of urban habitats? Can we identify the characteristics of dysfunctional and functional urban landscapes and incorporate this knowledge into better urban planning, design, construction and management?" Table 3 identifies in more detail the elements of the urban built form and its transport systems and the geographical scale of the environmental stressor that are associated with human health, including hypothesizing on the dominant group in the population that might be especially exposed to the various stressors. For example, high concentrations of air pollutants in the ambient environment can result in breathing problems in human communities, but the research challenge is to determine the spatial concentration or the geographical spread around transport systems. Effective assessment of health-impact risk to exposed populations from air pollution, and other environmental stressors, is important for supporting decisions on the related detection, prevention, and correction efforts, and therefore more research into estimating the geographical scale of exposure (from on the road in the case of traffic accidents to metropolitan air-sheds in the case of airborne pollutants) is needed. However, given the comments made in Section 2 about the need for a multi-disciplinary approach to the problem in Australia, inspiration for a collaborative research design could be drawn from INTARESE, a European research project [182] that is developing a conceptual framework within which the latest scientific evidence across all the relevant environmental sectors (including transport, housing, agricultural land use, water management, household chemicals, waste management and climate) and disciplines is brought together as a basis for integrated assessment of both environmental and health impacts and risks. The aim is to build an integrated assessment methodology that can be applied to different environmental media, settings and locations (ambient, domestic, occupational) and stressors (chemicals, solid wastes, natural hazards, noise). By quantifying and comparing environmental and health risks (including international comparisons), policy objectives and targets, and progress towards those targets, can be established and communicated to key stakeholders. Some of the Australian groundwork in looking at, for example, the time-space patterns of road traffic pollutants has been accomplished at the Institute of Transport and Logistics Studies, University of Sydney [183-186], but forging the links with health sciences remains to be undertaken.
In addition to the geographical extent of the environmental stressor, and its time-varying nature, both of which will influence population exposure given people's space-time trajectories through "polluted spaces" (including what is impacting on the residential space), there is the difficult question of how to account for time. This "time" is in addition to the diurnal fluctuations in the environmental stressors mentioned above, which have the potential to be measured with instruments or modeled (to form aggregated dose-response relationships). The difficult aspect of time to include in any analysis of the health of individuals is their history (conceptually from birth to death): their cumulative time-space trajectories through the "historical" polluted places of their experience. One modeling approach to this highly complex problem, with its longer-term migration and shorter-term travel behaviors, is to build on the generalized representation for a comprehensive urban and regional model [187] by explicitly incorporating the environmental stressors into the model. Accounting for the various definitions of time is a research challenge. An initial start in this direction of dealing more explicitly with time has been made by the first author of this article. The starting point is the quote, "as bodies age, the ability to defend against environmental stressors diminishes, and exposures over a life time can accelerate the aging process and trigger or exacerbate disease" [44]. An analysis extracting baby boomers from the Household, Income and Labour Dynamics in Australia (HILDA) database is proposed. Respondents born between 1945 and 1962 will be extracted, and place of residence (that is, city location versus suburban location versus rural location) will be used as a proxy variable for exposure to environmental pollutants such as noise, motor vehicle emissions and particulate matter. The HILDA survey is a household-based panel study which began in 2001, with information on: households and family life; incomes and wealth; employment and unemployment/joblessness; and life satisfaction and wellbeing [188]. Where the trans-disciplinary approach becomes critical is in further research designs that bring together the experience, findings and strengths of the mainstream urban planning and transport researchers, the epidemiologists and health science experts, and the key stakeholders from the policy sector.

Conclusions

The complexity of finding solutions to the impacts on public health of urbanization, climate change and, specifically in the context of this article, transport presents a major new challenge for sustainable development. Integrated solutions will require health care professionals, epidemiologists, engineers, environmental scientists, urban planners, designers and managers, policy specialists, economists and social scientists to grapple with working together in new ways, and to establish a common conceptual understanding. Team science working in a collaborative way within the trans-disciplinary framework described in this paper is a promising way forward, but such teams require skilled leadership. All leaders are schooled in some form or other of management philosophy.
However, those leaders who excel at generating and sustaining trust, who are supportive, democratic, inclusive and empowering, and who are committed to encouraging cooperation and engaging the support of others by being generous in offering constructive feedback to colleagues, will significantly enhance trans-disciplinary collaborations within the research team, within the research institute or university, and amongst key stakeholders. Their encouragement of professional staff to pursue trans-disciplinary thinking will bear fruit in creating innovation in research into cities, transport, public health, sustainable policy, and environmental stewardship. The literature cited in this paper demonstrates a richness across the themes of environmental stressors and their impacts on humans, on one hand, and of the processes of urbanization, transport and the environment, on the other. It does not claim to be comprehensive across all domains considered when searching for material, but we hope the bibliography will be useful for other researchers to shape action-based, policy-relevant research projects. Our review of this literature concludes that there is still relatively little integration of the material. Repeated search strategies of databases on keywords such as "public health, urbanization, transport and environmental stressors" failed to identify much material. It is clear from the mainstream transport literature, and through the special interest groups of the World Conference on Transport Research Society, that, within their environment group, transport and health is currently not on the main topic agenda. Similarly, although with some exceptions, the public health researchers have not connected adequately enough with the urban researchers. Champions are needed in both fields to advance trans-disciplinary research. As a suggestion on research by way of bridging that gap, and recognizing that the core research team of a trans-disciplinary project must bring their own disciplinary skills and interests to the table, we suggest an ambitious challenge would be the explicit recognition of time in all phases: from problem definition, through reviewing existing knowledge on the research problem (especially disciplinary and inter-disciplinary conceptualizations and explanations), designing the research enquiry from research gaps, implementing the research enquiry, refining conceptual understandings and synthesizing data sets, to specifying types of interventions (with stakeholders) and their costs and benefits. One time dimension is represented by an individual's life history: "as the body ages, the ability to defend against environmental stressors diminishes, and exposures over a life time can accelerate the aging process and trigger or exacerbate disease" [51]. We need to establish this lifetime exposure (in different places and locations) to the environmental stressors from transport systems and their cumulative effects on health and well-being. The methodology could be time-space geography, but with the additional complexity of mapping the locations and magnitudes of the environmental stressors through which people pass, and to which they are exposed, on their journeys. The urban modelers, familiar with spatial interaction models using cross-sectional data, need to turn their minds to the long-term dynamics of change in how people's travel patterns pass through "polluted" places as objects of investigation during their life histories as the body ages.
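To make the proposed HILDA-based analysis and the cumulative time-space exposure idea more concrete, the following minimal sketch (our illustration only; the column names, exposure weights and data layout are assumptions and are not part of HILDA or of any existing analysis) extracts a 1945-1962 birth cohort from a panel-style table, assigns a crude residence-based exposure proxy, and accumulates exposure-years per person.

```python
import pandas as pd

# Illustrative residence-based exposure weights (assumed, not from HILDA):
# higher values stand in for greater exposure to traffic noise, emissions
# and particulate matter.
EXPOSURE_PROXY = {"city": 1.0, "suburban": 0.6, "rural": 0.2}

def cumulative_exposure(panel: pd.DataFrame) -> pd.DataFrame:
    """Sum exposure-years per person over their residential history.

    `panel` is assumed to hold one row per person per survey wave with
    columns: person_id, birth_year, wave_year, residence ("city",
    "suburban" or "rural").
    """
    boomers = panel[panel["birth_year"].between(1945, 1962)].copy()
    boomers["exposure"] = boomers["residence"].map(EXPOSURE_PROXY)
    # One wave is treated as one exposure-year; a real analysis would
    # weight by the actual time spent at each residence.
    summary = (boomers.groupby("person_id")["exposure"]
                      .sum()
                      .rename("cumulative_exposure_years")
                      .reset_index())
    return summary

if __name__ == "__main__":
    waves = pd.DataFrame({
        "person_id":  [1, 1, 1, 2, 2],
        "birth_year": [1950, 1950, 1950, 1958, 1958],
        "wave_year":  [2001, 2002, 2003, 2001, 2002],
        "residence":  ["city", "city", "suburban", "rural", "rural"],
    })
    print(cumulative_exposure(waves))
```

The design choice here is simply to make "place of residence" a numeric proxy so that exposure can be summed along a life history; richer versions would attach measured or modeled stressor levels to each location and year.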
The generalised urban and regional model presented by Sir Alan Wilson [197] is the point of departure for their potential contribution; such generalised urban and regional models were also discussed at the 9th International Conference on Computers in Urban Planning and Urban Management, University College London, 2005, organized by Professor Batty. We suggest that the issues of event (environmental stressor), time (from the cradle to the grave), and place (locations, especially those with their own time-dependent variable of environmental stress) are the basis of a new type of epidemiological study.
Association of smoking and polygenic risk with the incidence of lung cancer: a prospective cohort study Background Genetic variation increases the risk of lung cancer, but the extent to which smoking amplifies this effect remains unknown. Therefore, we aimed to investigate the risk of lung cancer in people with different genetic risks and smoking habits. Methods This prospective cohort study included 345,794 European ancestry participants from the UK Biobank who were followed up for 7.2 [6.5-7.8] years. Results Overall, 26.2% of the participants were former smokers, and 9.8% were current smokers. During follow-up, 1687 (0.49%) participants developed lung cancer. High genetic risk and smoking were independently associated with an increased risk of incident lung cancer. The HR per standard deviation increase of the PRS was 1.16 (95% CI, 1.11-1.22), and, compared with never-smokers, the HR of heavy smokers (≥40 pack-years) was 17.89 (95% CI, 15.31-20.91). There were no significant interactions between the PRS and the smoking status or pack-years. Population-attributable fraction analysis showed that smoking cessation might prevent 76.4% of new lung cancers. Conclusions Both high genetic risk and smoking were independently associated with higher lung cancer risk, but the increase in risk from smoking was much greater than that from heredity. The combination of traditional risk factors and an additional PRS provides realistic application prospects for precise prevention.

BACKGROUND

Lung cancer is the most commonly diagnosed cancer and has the highest mortality worldwide among the general population and among males, and it has the second-highest mortality and the third-highest incidence among females. In 2018, there were more than 2 million new cases and 1.7 million deaths from lung cancer [1]. Tobacco exposure is the leading cause of lung cancer, despite differences in the intensity of smoking and the type of cigarettes, and ~90% of lung cancers are attributed to smoking [2]. In addition, genetic factors also play essential roles in cancer development. Twin studies [3] and heritability estimation based on genome-wide association studies (GWASs) [4,5] indicated that genetic factors contribute far less to incident lung cancer than environmental factors, including smoking. However, population-based prospective studies of smoking and genetic risk in lung cancer have not been fully validated. Over the past decade, GWASs have identified multiple susceptibility loci associated with lung cancer risk, including TP63, TERT, CDKN2A/B and CHRNA3/5 [6-9]. However, while consistently and significantly associated with lung cancer risk, each common variant's impact is modest. Aggregating multiple single-nucleotide polymorphisms (SNPs) with small individual effects to generate a composite polygenic risk score (PRS) may explain the genetic risk of complex diseases [10]. In addition, multiple genes, including CHRNA3/5, are strongly associated with lung cancer, smoking behaviours [11], and nicotine addiction [12]. Although previous studies have reported a significant association with lung cancer based on case-control designs [13,14], the relevance of combining these risk scores and smoking for individual subjects, and whether smoking and genetic risk have a synergistic effect, remains uncertain. Therefore, we hypothesised that smoking and genetic risk are independently associated with incident lung cancer.
This study's primary purpose was to investigate whether there are differences in the association between smoking and new-onset lung cancer among individuals with low, intermediate or high genetic risk in a large population-based cohort. The second aim was to investigate the possible interaction between genetic risk and smoking for incident lung cancer.

Study design

The UK Biobank study started in 2006 and, until 2010, recruited >500,000 participants aged 40-69 years from the general population at 22 assessment centres throughout the UK [15]. Participants provided information on smoking and other potentially health-related aspects through extensive baseline questionnaires, verbal interviews and physical measurements. Moreover, blood samples were collected for genotyping. Participants were excluded if they withdrew from the study (n = 1298); if their genotype data did not meet the quality control conditions, they were related to another participant at more than second degree, or they were of non-European ancestry (n = 44,072). In addition, participants with missing data on smoking or covariates were excluded (n = 75,546). Participants with a history of cancer at baseline were also excluded (n = 35,814).

Polygenic risk score

Polygenic risk scores were created following an additive model for previously published common genetic variants associated with lung cancer. To identify relevant risk loci, we began by searching the NHGRI-EBI GWAS Catalog of published GWASs [16]. Then, we reviewed both the original manuscripts and supplementary materials to identify SNPs, risk alleles, and effect sizes. SNPs were selected for each locus according to the criteria of being independent (r² < 0.1), common (minor allele frequency [MAF] > 0.01 in the 1000 Genomes Project European population), available in the UK Biobank, from a large sample size in the development cohort, and having the smallest P value. The number of risk alleles (0, 1 or 2) carried by each individual was multiplied by the effect size between the SNP and each trait and summed across SNPs. A total of 33 SNPs from eight studies were used (eTable 1 in the Supplement) [8,9,17-22]. This polygenic risk score was then z-standardised based on values for all individuals and categorised into low (lowest quintile), intermediate (quintiles 2-4) and high (highest quintile) risk.

Smoking status and pack-years

Touchscreen questionnaires collected information on smoking status and pack-years at baseline. Detailed definitions of smoking status and the pack-years of smoking are provided in eTable 2 in the Supplement. All participants were categorised as never, former or current smoking according to their smoking status, and as no (0), light (0.1-19.9), intermediate (20-39.9), or heavy (≥40) smoking according to the pack-years of smoking.

Outcomes

Participants with incident lung cancer were identified as having a diagnosis in national cancer registries after baseline assessment. Diagnoses were recorded using the International Classification of Diseases-9 (ICD-9) and ICD-10 coding systems (eTable 3 in the Supplement). Death was ascertained via linkage to death registries. We calculated the follow-up time from the date of attendance to the date of first diagnosis, the date of death, March 31, 2016 for Wales and England, or October 31, 2015 for Scotland, whichever occurred first.
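To illustrate the additive PRS construction described above (a schematic sketch only; the authors' actual code is not shown here, and the column names, example genotype matrix and effect sizes below are assumptions made purely for demonstration), the weighted allele counts can be summed, z-standardised and binned into quintile-based risk categories as follows:

```python
import numpy as np
import pandas as pd

def polygenic_risk_score(genotypes: pd.DataFrame, weights: pd.Series) -> pd.DataFrame:
    """Additive PRS: sum of (risk-allele count x effect size) over SNPs.

    `genotypes` holds risk-allele counts (0, 1 or 2), one row per person and
    one column per SNP; `weights` holds per-SNP effect sizes (log odds ratios)
    indexed by the same SNP names.
    """
    raw = genotypes[weights.index].mul(weights, axis=1).sum(axis=1)
    z = (raw - raw.mean()) / raw.std()          # z-standardise across the cohort
    quintile = pd.qcut(z, 5, labels=False) + 1  # 1 = lowest, 5 = highest
    category = pd.cut(quintile, bins=[0, 1, 4, 5],
                      labels=["low", "intermediate", "high"])
    return pd.DataFrame({"prs_raw": raw, "prs_z": z, "risk_category": category})

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    snps = [f"rs{i}" for i in range(33)]                      # 33 illustrative SNPs
    geno = pd.DataFrame(rng.integers(0, 3, size=(1000, 33)), columns=snps)
    effect_sizes = pd.Series(rng.normal(0.05, 0.02, 33), index=snps)
    print(polygenic_risk_score(geno, effect_sizes).head())
```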
Covariates

All models were adjusted for age, sex, education, socioeconomic status (household income and Townsend deprivation index [23]), body mass index (BMI), physical activity, diet, alcohol consumption, passive smoking, occupational exposure, the relatedness of individuals in the sample, and the first 20 principal components of ancestry. Body mass index (BMI) (kg/m²) was calculated for all UK Biobank participants based on their measured weight and height. Duration and intensity of physical activity were ascertained by touchscreen questionnaires based on the validated International Physical Activity Questionnaire [24]. A healthy diet was calculated based on the Dietary Approaches to Stop Hypertension (DASH) recommendation, which is associated with multiple cancer types [25,26]. Alcohol consumption was calculated based on the US Dietary Guidelines for Americans 2015-2020 [27]. Exposure to tobacco smoke from others at home or outside for more than an hour per week was considered passive smoking. Occupational exposure was based on self-reported exposure to asbestos, paints, thinners, glues, pesticides, diesel exhaust, or other chemical smog at work.

Statistical analyses

Baseline characteristics of participants were summarised across incident lung cancer status as a percentage for categorical variables, mean (standard deviation [SD]) for normally distributed variables, and median (interquartile range) for skewed variables. The associations between genetic-risk categories, smoking categories, and the combination of genetic and smoking categories (nine categories with low genetic risk and never-smoking as a reference, 12 categories with low genetic risk and no smoking pack-years as a reference) and incident lung cancer were explored using multivariable Cox proportional hazards models. The assumption of proportional hazards was evaluated by tests based on Schoenfeld residuals [28]; violation of this assumption was not observed in our analyses. The area under the curve (AUC) of receiver operating characteristic (ROC) curves was used to assess the predictive ability of each model, including PRS, smoking, and the combination. The associations between PRS and incident lung cancer were evaluated on a continuous scale with restricted cubic spline curves based on multivariable Cox proportional hazards models. Moreover, interactions between polygenic risk scores and smoking status or pack-years were tested. The population-attributable fractions (PAFs), which estimate the proportion of events that would have been prevented if all individuals had been in the never-smoking category, were calculated [29]. The distributions of smoking status in the Health Survey for England (HSE) [30] and the European Prospective Investigation into Cancer and Nutrition (EPIC) [31], which better represent the English and European populations, were included in the analysis to deal with the incomplete representativeness of the UK Biobank [32]. Several sensitivity analyses were conducted to verify the robustness of the results. The risk of incident lung cancer was analysed using genetic-risk quintiles and pack-years of smoking in more subdivided groups. The association was also adjusted for self-reported and hospital-diagnosed chronic obstructive pulmonary disease (COPD) and chronic pulmonary infections (definitions in eTable 3) at baseline, which may be important confounding factors [33,34].
The sensitivity analyses excluded participants who had third-degree or higher relatedness (to further reduce the non-random distribution of risk genes), who developed outcomes within the first two years of follow-up (to avoid reverse causality), and who had a mismatch between calculated and self-reported never-smoking. Moreover, stratified analyses were performed to estimate potential modification effects according to sex (female or male) and age (<60 or ≥60 years). Analyses were undertaken using R v3.6.1 (R Foundation for Statistical Computing, Vienna, Austria). A P value < 0.05 (two-sided) was considered significant.

Participant characteristics

A total of 345,794 European individuals with complete genotype and phenotype data were included in the analysis of incident lung cancer, and their detailed information is shown in Fig. 1. Their mean (SD) age was 56.3 (8.0) years, and 186,330 (53.9%) were female. The PRS was normally distributed among all participants (eFigure 1 in the Supplement). There were 90,727 (26.2%) former smokers and 33,994 (9.8%) current smokers, among whom 40,889 (11.8%) individuals had intermediate smoking exposure (20-39.9 pack-years) and 19,027 (5.5%) individuals had heavy smoking exposure (≥40 pack-years). The participant characteristics are provided in Table 1. Over 2,454,915 person-years of follow-up (median [interquartile range] length of follow-up, 7.2 [6.5-7.8] years), there were 1687 cases of incident lung cancer. Participants who developed incident lung cancer were slightly older, more likely to be male, had more smoking exposure, had less physical activity, and had an unhealthier diet. Meanwhile, they also had higher genetic risks.

Associations of genetic risk with incident lung cancer

With increasing genetic risk, the incidence rate and hazard ratio (HR) of lung cancer gradually increased. After additional adjustment for smoking status or pack-years, the HRs of the high genetic-risk group were 1.73 (95% confidence interval [CI], 1.48-2.02) and 1.69 (95% CI, 1.44-1.97) compared with the low genetic-risk group, and the HRs per SD increase of the PRS were 1.16 (95% CI, 1.11-1.22) and 1.16 (95% CI, 1.10-1.21). This result was almost the same as before the adjustment (Table 2). When genetic-risk quintiles were used instead of categories, the same trend of results was observed (eTable 4 in the Supplement). Figure 2a shows the cumulative risk of incident lung cancer in each genetic-risk group during follow-up.

Associations of smoking with incident lung cancer

With changing smoking status and increasing pack-years, the incidence and HR of lung cancer also increased. After additional adjustment for PRS, the HRs of the current smoking and heavy smoking groups were 14.54 (95% CI, 12.47-16.94) and 17.80 (95% CI, 15.23-20.81), respectively, compared with the never-smoking group. This result was almost the same as before the adjustment (Table 3). When the number of smoking pack-years was given in more subdivided categories, the same trend of results was observed (eTable 5 in the Supplement). Figure 2b and c shows the cumulative risk of incident lung cancer in each smoking status and pack-year group during follow-up.

Associations of smoking and genetic risk with incident lung cancer

In each genetic-risk group, the incidence and HR of lung cancer increased with deteriorating smoking status and increasing pack-years.
Compared with the low genetic risk and never-smoking group, there was no significant difference in incident lung cancer risk in the high genetic risk but never-smoking group, while the HR of the low genetic risk but current smoking group was 11.31 (95% CI, 7.84-16.33). A similar pattern was observed among the genetic risk and smoking pack-year groups. The highest risks were observed among individuals with high genetic risk and current smoking (HR, 22) compared with those with low genetic risk and no smoking (Fig. 3). There was no significant interaction between the PRS and the smoking status or pack-years (both P for interaction > 0.05). Further analyses stratified by genetic-risk category showed that the association between smoking and lung cancer appeared to increase with increasing genetic risk. The same pattern of associations was observed in a series of sensitivity analyses with additional adjustment for COPD and chronic pulmonary infections, excluding participants who had third-degree or higher relatedness, excluding participants who developed outcomes within two years of baseline, and excluding those who had a mismatch between calculated and self-reported never-smoking (eTables 6 and 7 in the Supplement). Stratified analyses were performed by age and sex (eTables 8 and 9 in the Supplement), but the results were not markedly different between males and females or between the <60 years and ≥60 years groups.

Population-attributable fractions

Since there was no significant interaction between PRS and smoking, the population-attributable fractions were calculated regardless of genetic risk.

DISCUSSION

In this large population-based prospective cohort study of more than 345,000 European individuals, high genetic risk and smoking status were independently associated with an increased risk of incident lung cancer events. Among never-smokers, there was no significant difference in incident risk among the genetic-risk groups. Among current smokers, the risk in the high genetic-risk group was two-fold higher than in the low genetic-risk group. A similar pattern was observed for the genetic risk and smoking pack-year groups. Meanwhile, there was no significant interaction between the PRS and smoking status or pack-years for incident lung cancer, and smoking cessation or reduction can provide similar protection against lung cancer regardless of genetic risk. The PAF analysis suggested that ~76% of new-onset lung cancer events might have been prevented if all individuals had never smoked. To our knowledge, this study is by far the most extensive and fully adjusted prospective study of lung cancer incidence treating smoking as a single modifiable factor and incorporating multiple genetic-risk factors. Many common variants with minor effects have been identified as associated with a high risk of lung cancer, and the PRS can indicate their combined impact. A previous study used 19 SNPs to construct a PRS for non-small cell lung cancer and showed predictive effects in a prospective study of 95,408 individuals [9]. Compared with that previous study, the present study included a larger sample size and more SNPs to increase the power for risk estimation. Meanwhile, we used the upper and lower quintiles to categorise the high and low genetic-risk groups [35,36], which may reduce the accuracy for the high genetic-risk group but warns a broader population that they need to carry out PRS-informed disease screening or life planning for life-threatening lung cancer.
It also ensured that the comparison between the combined smoking and genetic-risk subgroups had sufficient statistical power. Compared with another study based on the UK Biobank [37], the current PRS contains fewer, highly independent SNPs at each locus, to avoid overinflation of the GWAS summary results caused by many SNPs in linkage disequilibrium. Therefore, this PRS may have better generalisability to other populations [38]. The current results showed similar HRs after adjusting for confounding factors (economic and social background, lifestyle factors, occupational exposure). Compared with case-control studies [39,40], prospective studies may lose some statistical power, but estimates of the absolute risk support using the PRS to predict incident lung cancer [10,41]. Regarding the role of the PRS in never-smokers, our results suggest that their incident risk did not achieve statistical significance as the PRS group increased. Among never-smokers, the post hoc study powers for incident lung cancer in those with intermediate and high genetic risk were only 0.243-0.293. Therefore, we speculate that more outcome events may bring different results with the extension of follow-up time. To sum up, we believe that the PRS could be a powerful tool for lung cancer risk assessment, as it provides additional information independent of smoking, and combining it with traditional risk factors could contribute to a better prediction of lung cancer. We observed a strong association between smoking and incident lung cancer, independent of genetic risk, and the increased risk was much greater than the genetic risk. This means that smoking will significantly offset the benefits of low genetic risk, consistent with a previous study [9]. However, we followed the same grouping method and found that the risk estimates were much larger than those in the previous study (eTable 11 in the Supplement). Sample size, confounding factors, subtle differences in smoking habits, and outcome data sources may be the reasons for the differences. We observed associations between smoking and lung cancer similar to those in other relevant studies [42,43]. Although smoking, a long-recognised risk factor, has undergone tremendous changes in production, composition and method of use [44], this study of a contemporary population shows that it still plays a decisive role in lung cancer occurrence. Therefore, smoking cessation is still the most significant and cost-effective way to prevent lung cancer. Previous studies estimated that smoking was responsible for 80%-90% of lung cancers [2,43,45], and one study showed that 63.6% of lung cancers are attributable to comprehensive modifiable factors, including smoking and air pollution [37]. We found that the entire population would avoid 76.4% of lung cancer cases by becoming never-smokers. The slight reduction in this proportion is probably because of the reduction in smoking prevalence (23.3% of individuals were current smokers in the European Prospective Investigation into Cancer and Nutrition cohort [43]), manifesting the achievements of tobacco control. In addition, differences in the representativeness of samples, methodology, and confounders also contribute to the different PAFs between studies. Furthermore, we also estimated the attribution of smoking with a more general form of the PAF, the generalised impact fraction [46].
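For orientation, the textbook forms of these two quantities are sketched below. The paper cites its own references [29,46] for the exact estimators used, so this is a generic formulation rather than the authors' implementation.

```latex
% Levin-type population-attributable fraction, with exposure prevalence p_e
% and relative risk (or hazard ratio) RR for the exposed group:
\mathrm{PAF} = \frac{p_e\,(\mathrm{RR}-1)}{1 + p_e\,(\mathrm{RR}-1)}

% Generalised impact fraction, comparing the observed exposure distribution
% {p_i} with a counterfactual distribution {p'_i} over exposure levels i
% (for example, current smokers reclassified as former smokers):
\mathrm{GIF} = \frac{\sum_i p_i\,\mathrm{RR}_i - \sum_i p'_i\,\mathrm{RR}_i}{\sum_i p_i\,\mathrm{RR}_i}
```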
Our results showed that if all current smokers stopped smoking while former smokers remained former smokers, the expected reduction in lung cancer cases would be 26%, again highlighting the efficiency of smoking cessation. GWASs have shown that a locus may be simultaneously associated with smoking preference and lung cancer [12,47,48]. The interaction between smoking and genetic risk for lung cancer is a topic worth discussing, as it may help explain some of the missing heritability in lung cancer susceptibility [49]. Variants at the 15q25 locus have been confirmed by several studies to be associated with increased tobacco addiction and lung cancer risk [47,48], but a significant gene-environment interaction is controversial [50,51]. Some studies suggested that there were significant gene-smoking interactions at 10q25 [52], 14q22, 15q22 [53] and 19q13 [54]. In this study, there was no significant PRS-smoking interaction for lung cancer. This may be because the combination of multiple loci may mask potential interactions, and the model selection and the specific definition of smoking habits may also affect the results. In addition, the number of positive cases observed in this cohort was far smaller than in large-scale GWASs, so there may be insufficient statistical power. However, based on analyses adjusting for extensive potential confounding factors and using the two smoking measures, we still believe that the PRS and smoking promote lung cancer independently.

Strengths and limitations

This study has several strengths. Many participants from the UK Biobank study provided complete exposure information, and the extensive phenotype information provided many covariates that could be adjusted for in the models to eliminate potential confounders. A more detailed grouping of lifetime tobacco exposure showed a typical dose-response relationship. Furthermore, the study population was entirely independent of the previous GWASs that identified the risk loci and their effect sizes, which avoided overfitting to some extent. Several limitations also need to be considered. First, the analysis was conducted on overall lung cancer without constructing PRSs and assessing their effects for more detailed lung cancer classifications, which may mask heterogeneity. Second, additional variants or genetic patterns associated with lung cancer are likely to be identified in the future, which may refine estimates of genetic risk. Third, a PRS based on GWASs of European ancestry may have limited application in broader populations due to differences in risk alleles, allele frequencies, and the effect sizes of risk alleles. Fourth, smoking behaviours were self-reported and may be subject to recall and misclassification bias, and there may be differences in the distribution of individuals excluded due to missing smoking information. Fifth, smoking was not randomly assigned. Although analyses were adjusted for several covariates and sensitivity analyses were performed, the possibility of unmeasured confounding remained. Sixth, the current study included 936 (0.27%) participants with inconsistent information on never-smoking and 0 pack-years of smoking. This may be due to the difference between the self-reported status and the calculated status of participants with minimal smoking exposure. Although we excluded these people in the sensitivity analysis, there may still be potential inconsistencies. Finally, the potential "healthy volunteer" selection bias in the UK Biobank may be accompanied by a lower proportion of smokers and an underestimated PAF.
A mild increase in the PAF was found using representative English and European population structures.

CONCLUSION

In conclusion, high genetic risk and smoking were independently associated with higher lung cancer risk, and there were no interactions between these risk factors. Polygenic risk assessment can provide important information beyond a variety of environmental exposures. This study provides new insights for quantitatively evaluating the roles of smoking and genetics in lung cancer.

DATA AVAILABILITY

The dataset supporting the conclusions of this article is available from the UK Biobank upon request (https://www.ukbiobank.ac.uk/).

Table note: Cox proportional hazards regression adjusted for age, sex, education, Townsend deprivation index, income, BMI, diet, physical activity, alcohol consumption, occupational exposure, passive smoking, relatedness, and the first 20 principal components of ancestry. P values for trend were calculated treating each smoking category as a continuous variable.
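The table note above summarises the adjusted Cox model used throughout the paper. Purely as an illustration of how such a model can be specified (the authors used R; the sketch below uses the Python lifelines package, and the column names and toy data are our own assumptions), a minimal version looks like this:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000

# Toy cohort: follow-up time in years, lung cancer event indicator,
# standardised PRS, pack-years, and a couple of the adjustment covariates.
df = pd.DataFrame({
    "followup_years": rng.uniform(0.5, 8.0, n),
    "lung_cancer":    rng.binomial(1, 0.01, n),
    "prs_z":          rng.normal(0, 1, n),
    "pack_years":     rng.gamma(1.5, 10, n),
    "age":            rng.integers(40, 70, n),
    "sex":            rng.integers(0, 2, n),
    "bmi":            rng.normal(27, 4, n),
})

# Fit a multivariable Cox proportional hazards model; every column other
# than the duration and event columns enters as a covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="lung_cancer")

# Hazard ratios (exp(coef)) per unit increase of each covariate, e.g. per
# standard deviation of the PRS for prs_z.
print(cph.hazard_ratios_)
```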
Expression of Dictyostelium myosin tail segments in Escherichia coli: domains required for assembly and phosphorylation. The assembly of myosins into filaments is a property common to all conventional myosins. The ability of myosins to form filaments is conferred by the tail of the large asymmetric molecule. We are studying cloned portions of the Dictyostelium myosin gene expressed in Escherichia coli to investigate functional properties of defined segments of the myosin tail. We have focused on five segments derived from the 68-kD carboxyl-terminus of the myosin tail. These have been expressed and purified to homogeneity from E. coli, and thus the boundaries of each segment within the myosin gene and protein sequence are known. We identified an internal 34-kD segment of the tail, N-LMM-34, which is required and sufficient for assembly. This 287-amino acid domain represents the smallest tail segment purified from any myosin that is capable of forming highly ordered paracrystals characteristic of myosin. Because the assembly of Dictyostelium myosin can be regulated by phosphorylation of the heavy chain, we have studied the in vitro phosphorylation of the expressed tail segments. We have determined which segments are phosphorylated to a high level by a Dictyostelium myosin heavy chain kinase purified from developed cells. While LMM-68, the 68-kD carboxyl terminus of Dictyostelium myosin, or LMM-58, which lacks the 10-kD carboxyl terminus of LMM-68, are phosphorylated to the same extent as purified myosin, subdomains of these segments do not serve as efficient substrates for the kinase. Thus LMM-58 is one minimal substrate for efficient phosphorylation by the myosin heavy chain kinase purified from developed cells. Taken together these results identify two functional domains in Dictyostelium myosin: a 34-kD assembly domain bounded by amino acids 1533-1819 within the myosin sequence and a larger 58-kD phosphorylation domain bounded by amino acids 1533-2034 within the myosin sequence.
DICTYOSTELIUM DISCOIDEUM is an ameboid microorganism capable of many kinds of movement, ranging from transport of intracellular organelles to translocation of the entire cell. Like other eukaryotes, Dictyostelium has an organized cytoskeleton containing, in part, motors thought to drive cellular motility. One cytoskeletal component is the protein myosin, which exists in at least two forms in Dictyostelium. A conventional form, designated myosin in this paper, contains a 240-kD heavy chain protein and light chains of 16 and 18 kD (Clarke and Spudich, 1974). An unconventional form of myosin, known as myosin I, contains a heavy chain of 116 kD (Côté et al., 1985). The conventional myosin in Dictyostelium shares structural homology with myosins purified from muscle tissues. At the amino terminus of the molecule, two polypeptide heavy chains each fold into globular head domains, each of which binds two light chains. The heavy chains emerge from the heads and associate with each other in an alpha-helical coiled-coil, forming an elongated tail domain. Recent studies of Dictyostelium mutants highlight the importance of myosin filaments in a nonmuscle cell. HMM cells have wild-type myosin replaced with a myosin unable to polymerize because it lacks the carboxyl terminus of the tail. Mutant Dictyostelium cells have also been created that are completely devoid of wild-type myosin (Manstein et al., 1989). Both mutants are unable to complete the developmental cycle and are also unable to undergo cytokinesis. The identical phenotypes of the two mutants demonstrate that the enzymatic head domain is not sufficient for myosin function in vivo; a tail capable of polymerization into filaments is also required. Rod-shaped structures with the dimensions of purified myosin filaments have been visualized by immunofluorescence microscopy in Dictyostelium cells (Yumura and Fukui, 1985). These studies demonstrate the dynamic nature of myosin filaments. Myosin assembles in the posterior end of a cell undergoing chemotaxis and, at other times, in the cleavage furrow of a dividing cell. Such changes in myosin localization are correlated with phosphorylation of the heavy chain in vivo (Berlot et al., 1985; Nachmias et al., 1989). In vitro studies have suggested that these phosphorylations may regulate myosin assembly (Kuczmarski and Spudich, 1980; Côté and McCrea, 1987; Ravid and Spudich, 1989). To understand the contributions from specific regions of the myosin molecule to myosin assembly and phosphorylation, we have focused on the carboxyl terminus of the molecule, the myosin tail. Here we report the functional properties of defined segments of the Dictyostelium myosin tail expressed in and purified from Escherichia coli. This approach allows us to study purified tail segments with defined amino acid boundaries. Because the proteins are produced in E. coli, they have not been phosphorylated by endogenous Dictyostelium kinases, a condition that might alter their assembly and ability to be further phosphorylated. These studies identify an internal 34-kD domain with solubility and assembly characteristics of myosin filaments, and longer 58- and 68-kD segments that serve as substrates equivalent to myosin for a myosin heavy chain kinase.
Construction of Expression Vectors

The expression plasmid encoding LMM-58 has been described. DNA fragments encoding other segments of the tail of Dictyostelium myosin were subcloned from pBgl 4.5, a plasmid containing the 3' end of the Dictyostelium myosin heavy chain gene (De Lozanne et al., 1988). For the construct expressing LMM-68, the 2.0-kb Eco RI-Bgl II fragment encoding the 68-kD carboxyl terminus of the Dictyostelium myosin gene (1.8-kb coding sequence and 0.2-kb 3' flanking sequence) was subcloned into the Eco RI-Bam HI sites of the plasmid pIN-I-A2, gift of Dr. M. Inouye (University of Medicine and Dentistry of New Jersey at Rutgers) (Masui et al., 1983). The resulting expression plasmid encoded the 584 amino acids from the carboxyl-terminal end of the Dictyostelium myosin gene (residues 1533-2116 of the protein sequence; Warrick et al., 1986) with 4 additional amino acids derived from vector sequence added to the amino terminus. The expression vector for N-LMM-34 was constructed by subcloning the 0.9-kb Eco RI-Xho II fragment from pBgl 4.5 into the Eco RI-Bam HI sites of the plasmid pIN-I-A2. To bring a stop codon from the vector into the reading frame of the myosin fragment, the plasmid was subsequently opened with Bam HI, filled in with Klenow fragment of polymerase I, and religated. The resulting expression plasmid encoded a 287-amino acid 34-kD myosin segment (residues 1533-1819 of the myosin protein sequence). Seven additional amino acids derived from vector sequence were added to the myosin segment; four amino acids to the amino terminus and three amino acids to the carboxyl terminus. The expression vector for N-LMM-37 was constructed by subcloning the 0.95-kb Eco RI-Cla I fragment from pBgl 4.5 into the Eco RI and Bam HI sites of pIN-I-A2. The Cla I site of the insert and the Bam HI site of the expression vector were previously blunted by filling in with the Klenow fragment of DNA polymerase I. This vector encoded a 316-amino acid 37-kD myosin segment (from 1533-1848 residues of the myosin sequence) with four additional amino acids derived from vector sequence added to the amino terminus and three amino acids added to the carboxyl terminus. The expression vector for C-LMM-34 was constructed by subcloning the 1.1-kb Xho II-Bgl II fragment (0.9-kb coding sequence and 0.2-kb noncoding sequence) into the Bam HI site of pIN-I-A2. To correct the reading frame for this construct, the Eco RI site was opened, filled in with the Klenow fragment of DNA polymerase I, and religated before subcloning the appropriate myosin gene fragment. This resulted in an expression plasmid encoding a 298-amino acid 34-kD myosin segment (from 1819 to 2116 residues of the myosin protein sequence) and 10 additional amino acids derived from vector sequence added to the amino terminus of the myosin sequence. An alternate expression vector for C-LMM-34 without noncoding sequence was constructed from a 0.9-kb Xho II-Hind III fragment from the 3' end of pBgl 4.5. This fragment was cloned into the Eco RI-Hind III sites of pIN-I-A3. The Xho II site of the 0.9-kb fragment and the Eco RI site of the plasmid were previously blunted by filling in with the Klenow fragment of DNA polymerase I. To create a stop codon in the reading frame, the Hind III site was opened, filled in with the Klenow fragment of DNA polymerase I, and religated. This created a Nhe I site that was opened, filled in, and religated.
This vector encoded a 297-amino acid 34-kD myosin tail segment (residues 1819-2115) with five amino acids added to the amino terminus and two amino acids added to the carboxyl terminus from vector sequence. In addition to the 34-kD myosin tail segment, this vector also expressed a second 30-kD myosin fragment. This 30-kD fragment could have arisen from proteolysis of the 34-kD myosin segment or from a second translation start within the 0.9-kb sequence. Results from assembly and phosphorylation experiments were indistinguishable for C-LMM-34 expressed from either construct. Expression plasmids were transformed into E. coli LE392 or DH5α made competent with CaCl2. Constructs were verified after cloning either with diagnostic restriction digests or were sequenced directly. Recombinant DNA procedures were performed by standard procedures (Maniatis et al., 1982).

Purification of the Expressed Proteins

An overnight culture of bacteria containing the appropriate expression plasmid was diluted 1:100 into fresh media. Cells were grown to an OD600 of 0.8 in fermenters containing 12 or 200 liters of LB media with 50 µg/ml of ampicillin. The harvested cells were weighed, resuspended in 5 vol of lysis buffer (50 mM Tris, pH 7.5, 10 mM EDTA, 48 mM sodium pyrophosphate, 30% sucrose, 0.2 mM PMSF, 0.7 µg/ml pepstatin A, 0.7 µg/ml leupeptin) per gram of cell pellet, and lysozyme was added to 1 mg/ml. After 10 min at 0°C the lysate was frozen in dry ice and thawed at 22°C to aid cell lysis. All subsequent steps were performed at 4°C. The lysate was sonicated using a sonifier (Heat Systems-Ultrasonics, Inc., Farmingdale, NY) with 30-s bursts until no longer viscous and then sedimented at 100,000 g for 45 min. The supernatant was placed in a boiling water bath in 50-ml aliquots, stirred for 8-10 min until most proteins denatured, and then centrifuged at 27,000 g for 30 min. The supernatant was dialyzed overnight into DEAE buffer (10 mM Tris, pH 7.5, 25 mM NaCl, 1 mM EDTA, 1 mM DTT) and applied to a 100-ml column of DEAE-Sepharose Fast Flow (Pharmacia Fine Chemicals, Uppsala, Sweden) equilibrated in the same buffer. The column was eluted with a linear gradient from DEAE buffer to DEAE buffer containing an additional 500 mM NaCl (500 ml total volume). Fractions of 3.5 ml were collected, and fractions enriched in the expressed protein (as assessed by immunoblots stained with anti-Dictyostelium-myosin serum) were pooled. Purified LMM-68, LMM-58, and N-LMM-34 were collected as a pellet after dialysis against assembly buffer (10 mM Tris, pH 7.5, 2 mM MgCl2, 50 mM NaCl) and centrifugation at 100,000 g for 45 min. The proteins were resuspended in storage buffer (20 mM Tris, pH 7.5, 600 mM NaCl, 5 mM EDTA, 0.02% sodium azide). C-LMM-34 did not precipitate in assembly buffer and was purified by gel filtration chromatography. DEAE-Sepharose fractions of C-LMM-34 were concentrated to a volume of 0.5-1.5 ml with a Centriprep spin column (Amicon Corp., Danvers, MA) and loaded in 0.5-ml aliquots onto a prepacked Superose-6 column (100 mm x 30 mm; Pharmacia Fine Chemicals) equilibrated in assembly buffer. Under these conditions, C-LMM-34 eluted before contaminating proteins. Fractions of 0.5 ml were collected, and fractions of purified C-LMM-34 (as assessed by immunoblots of the fractions stained with anti-Dictyostelium-myosin serum) were pooled for subsequent study. Because myosin tail fragments seemed to adsorb easily to sticky surfaces, column fractions were collected in plastic tubes.
In addition, dialysis membranes and Centriprep columns were blocked with 1% BSA in TBS (50 mM Tris, pH 7.5, 150 mM NaCl) and washed several times with 1 M NaCl before use. Protein concentrations were determined by densitometry of Coomassie-stained gels using rabbit muscle LMM as a standard. The concentration of rabbit muscle LMM was determined by absorption at 280 nm assuming an extinction coefficient of 0.3.

Electron Microscopy

Rotary shadowing of expressed tail segments in 70% glycerol, 10 mM Tris, pH 7.5, 150 mM NaCl was as described (Flicker et al., 1985). For negative stain microscopy, samples were applied to carbon-coated formvar grids for 30 s followed by 1% aqueous uranyl acetate applied for 30 s. Grids of rotary-shadowed samples and negatively stained samples were examined with an electron microscope (201; Philips Electronic Instruments, Inc., Mahwah, NJ). The magnification was calibrated using negatively stained tropomyosin paracrystals that have a repeating periodicity of 395 Å (Flicker et al., 1985). Optical diffraction patterns were obtained according to De Rosier and King (1972).

Electrophoresis and Immunoblotting

SDS-polyacrylamide gels were used for electrophoresis of proteins and stained with Coomassie brilliant blue (Laemmli, 1970). Gels were scanned with a scanning laser densitometer (Ultroscan XL Laser Densitometer; LKB Instruments Inc., Bromma, Sweden). The areas of peaks were evaluated using the Gelscan XL software program, which integrated the area under a Gaussian curve fitted to each peak. Immunoblot analysis was performed according to the method of Towbin et al. (1979) using polyclonal anti-Dictyostelium myosin serum diluted 1:1,000 as the primary antibody (De Lozanne et al., 1985) and affinity-purified goat anti-rabbit IgG horseradish peroxidase conjugate (1:2,000) as the secondary antibody (Bio-Rad Laboratories, Richmond, CA). Antibody conjugates were visualized with 4-chloro-1-naphthol following the manufacturer's protocol (Bio-Rad Laboratories).

Phosphorylation of Dictyostelium Myosin and Expressed Segments

A myosin heavy chain kinase from developed Dictyostelium cells was purified as described previously. Myosin purified from Dictyostelium (Griffith et al., 1987) or expressed myosin segments (0.5-1 mg/ml) were incubated at 22°C with the myosin heavy chain kinase (0.01 mg/ml) in a reaction mixture containing 10 mM Hepes, pH 7.5, 6 mM MgCl2, 0.2 mM [γ-32P]ATP (500 cpm/pmol), and 1 mM DTT. The reaction was initiated by addition of ATP and stopped by addition of 10% trichloroacetic acid. After a 15-min incubation with 10% TCA at 0°C, the reaction mixture was pelleted in a microfuge (Eppendorf instruments made by Brinkmann Instruments, Inc., Westbury, NY), washed once with 10% TCA, and then resuspended in 20 µl SDS sample buffer and electrophoresed on an SDS-polyacrylamide gel. After staining with Coomassie brilliant blue, the gels were dried and exposed (XAR-5 film; Eastman Kodak Co., Rochester, NY) with an intensifying screen (Dupont Co., Wilmington, DE) at -80°C. To determine incorporation of 32P, the protein bands were quantitated by scanning densitometry and then excised from the gel and counted in a scintillation counter (LS 7500; Beckman Instruments, Inc., Palo Alto, CA).
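The incorporation measurement described above amounts to a unit conversion: counts in an excised gel band become pmol of transferred phosphate through the specific activity of the [γ-32P]ATP, and are compared with the pmol of protein estimated by densitometry. The following is a minimal illustrative sketch of that arithmetic in Python; the specific activity (500 cpm/pmol) is taken from the Methods, but the example band counts, band mass, and molecular mass are hypothetical placeholders rather than measured values from this study.

```python
# Illustrative sketch: phosphate incorporation (mol PO4 per mol protein)
# from scintillation counts of excised gel bands. Only the specific activity
# comes from the Methods; the example band values are hypothetical.

SPECIFIC_ACTIVITY = 500.0  # cpm per pmol ATP (from the Methods)

def mol_phosphate_per_mol_protein(band_cpm, band_protein_ng, protein_kda):
    pmol_phosphate = band_cpm / SPECIFIC_ACTIVITY
    pmol_protein = band_protein_ng / protein_kda  # 1 ng of a 1-kDa protein = 1 pmol
    return pmol_phosphate / pmol_protein

# Hypothetical example: a 1-ug band of a 69-kD segment giving 5,800 cpm.
stoich = mol_phosphate_per_mol_protein(band_cpm=5800, band_protein_ng=1000, protein_kda=69)
print(f"~{stoich:.2f} mol phosphate per mol protein")
```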
Expression of Myosin Tail Segments and Purification from E. coli

We have studied five different segments of the tail of Dictyostelium myosin expressed in and purified from E. coli. The expressed proteins were completely soluble upon cell lysis, indicating that neither extensive aggregation of the expressed proteins nor partition into bacterial inclusion bodies had occurred. The tail segments were expressed with varying efficiencies: whereas the level of N-LMM-34 was equivalent to major proteins of the bacterial lysate (Fig. 1, lane 3), LMM-68, N-LMM-37, and C-LMM-34 were relatively minor proteins (Fig. 1, lanes 1, 5, and 7). Although the molecular masses predicted from the sequences of expressed LMM-68, N-LMM-34, N-LMM-37, and C-LMM-34 are 69, 34, 38, and 37 kD, respectively, the purified proteins have apparent molecular masses on SDS-polyacrylamide gels of 78, 40, 45, and 44 kD, respectively. This aberrant migration on gels may reflect the highly charged nature of the myosin tail sequence (Warrick et al., 1986).

We used a purification scheme based on heat denaturation and removal of proteins sensitive to heat. Immunoblots of the supernatants revealed that the vast majority of each of the five segments remained soluble after heat treatment. Subsequent DEAE chromatography resulted in further purification and removal of nucleic acids. We used these partially purified preparations for the initial characterization of the expressed proteins. We further purified the myosin tail segments to homogeneity to ensure that contaminating proteins had not altered their properties. This last purification step was achieved by gel filtration for C-LMM-34 or by dialysis into assembly buffer and sedimentation for all other constructs. We found no difference in the behavior of the expressed proteins whether partially purified or purified to homogeneity (data not shown). The heat treatment did not affect the properties of the expressed tail segments; the functional properties of LMM-58 and LMM-68 were identical whether purified by this method or by selective solubilities in different buffers. Immunoblots stained with a polyclonal antiserum against Dictyostelium myosin (Fig. 1 B) revealed that the expressed proteins in the bacterial lysates comigrate with the final purified proteins, indicating that proteolysis did not occur during purification. The yield of expressed proteins from 50 g of bacterial cells was 0.7 mg for LMM-68, 20 mg for N-LMM-34, 5 mg for N-LMM-37, and 1 mg for C-LMM-34.

The Expressed Tail Segments Have Properties Predicted for Native Myosin Tail Segments

As shown in Fig. 2, the tail segments purified from E. coli were flexible rods, like the tail of Dictyostelium myosin, with lengths (Table I) close to the predicted lengths for alpha-helical coiled-coils calculated from their sequences (McLachlan, 1984). In relation to the ∼185-nm Dictyostelium myosin tail, N-LMM-34 and C-LMM-34 were calculated to occupy positions 98-141 and 141-185 nm, respectively, from the head-tail junction (Table I). Consistent with these assignments, immunoblots showed reaction of the anti-Dictyostelium myosin monoclonal antibodies My 5 and My 4 with N-LMM-34, N-LMM-37, and LMM-68, but not with C-LMM-34 (Table I). These antibodies have been previously mapped to 120 and 135 nm from the head-tail junction of Dictyostelium myosin (Flicker et al., 1985).
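The predicted rod lengths and tail positions quoted above (and in Table I) follow directly from the residue boundaries of the constructs if one assumes the canonical axial rise of an alpha-helical coiled-coil, roughly 0.1485 nm per residue (McLachlan, 1984). The sketch below illustrates that arithmetic; it is not the authors' code, and the rise value and the head-tail junction residue inferred from the ∼185-nm tail length are stated assumptions for illustration.

```python
# Minimal sketch: map myosin tail residue boundaries onto rod lengths and
# positions along the ~185-nm Dictyostelium myosin tail, assuming the
# canonical coiled-coil axial rise (~0.1485 nm per residue).

RISE_NM_PER_RESIDUE = 0.1485
TAIL_END_RESIDUE = 2116          # carboxyl terminus of the heavy chain
TAIL_LENGTH_NM = 185.0           # approximate tail length quoted in the text

# Residue at which the rod (tail) begins, inferred from the tail length.
tail_start = TAIL_END_RESIDUE - round(TAIL_LENGTH_NM / RISE_NM_PER_RESIDUE)

segments = {                     # residue boundaries given in the text
    "N-LMM-34": (1533, 1819),
    "N-LMM-37": (1533, 1848),
    "C-LMM-34": (1819, 2116),
    "LMM-68":   (1533, 2116),
}

for name, (first, last) in segments.items():
    n_res = last - first + 1
    length_nm = n_res * RISE_NM_PER_RESIDUE
    start_nm = (first - tail_start) * RISE_NM_PER_RESIDUE  # from head-tail junction
    print(f"{name}: {n_res} residues, ~{length_nm:.0f} nm rod, "
          f"~{start_nm:.0f}-{start_nm + length_nm:.0f} nm from the junction")
```

With these assumptions the sketch reproduces the 98-141 nm and 141-185 nm assignments for N-LMM-34 and C-LMM-34 quoted above.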
All assembly-competent segments formed highly ordered paracrystals with an observed transverse periodicity of 14 nm. Shown are examples of the largest segment, LMM-68, and the shortest segment, N-LMM-34 (Fig. 4). A novel repeat of 4.6 nm was observed in favorable areas. Optical diffraction patterns of electron micrographs of the paracrystals confirmed the presence of the 14- and 4.6-nm repeats. The diffraction patterns also revealed an additional reflection corresponding to 3.5 nm in some areas of LMM-68; this measurement is reminiscent of a 3.8-nm reflection observed in optical diffraction patterns of thick filaments of skeletal muscle stained with uranyl acetate (Hanson et al., 1971). Paracrystals were not observed in samples in high salt buffers. Paracrystals were never seen with samples of C-LMM-34 in any buffer.

Phosphorylation of the Expressed Tail Segments

To examine the ability of the expressed myosin tail segments to be phosphorylated, the expressed proteins were incubated with [γ-32P]ATP and a myosin heavy chain kinase purified from developed cells. Phosphorylation levels of Dictyostelium myosin and the expressed tail segments were monitored as a function of time of incubation of equal molar amounts of substrate. LMM-68 and LMM-58 were phosphorylated to the same extent and with similar kinetics as purified Dictyostelium myosin throughout the course of the assay. In contrast, even with extended incubation, the degree of phosphorylation of either N-LMM-34, N-LMM-37, or C-LMM-34 was <15% of either LMM-68 or Dictyostelium myosin (Fig. 5).

Discussion

Comparison of the properties of the tail segments with purified Dictyostelium myosin, summarized in Fig. 6, identifies functional domains within the tail. A 34-kD segment, N-LMM-34, bounded by amino acids 1533-1819, can assemble into structures with assembly and structural features characteristic of myosin filaments. With regard to phosphorylation kinetics, the larger domain, LMM-58, which contains amino acids 1533-2034 of myosin, is the smallest domain that can serve as a substrate equivalent to myosin for a Dictyostelium myosin heavy chain kinase. Several subdomains of LMM-58 are not phosphorylated efficiently.

The solubility of Dictyostelium myosin tail fragments in low salt buffer has been addressed in previous studies that examined proteolytic myosin fragments. Peltz et al. (1981) found a 102-kD chymotryptic peptide of Dictyostelium myosin to be insoluble in low salt buffers. In an analysis of myosin fragments produced with chymotrypsin and mapped with defined monoclonal antibodies, Pagh et al. (1984) identified an 85-kD insoluble fragment and concluded that a region comprising 50-80% of the tail was important for insolubility in low salt buffer. However, these studies analyzed mixtures of several proteolytic fragments, and thus the contribution of interactions between different segments of the myosin tail to produce insolubility could not be addressed. In contrast with previous studies, we have purified to homogeneity myosin segments with defined amino acid boundaries. We can therefore extend previous conclusions to state that N-LMM-34 (comprising the region ∼53%-76% from the head-tail junction) is not only important, it is sufficient for insolubility in low salt buffer. Indeed, despite a large difference in mass, the solubility of the smaller 34-kD N-LMM-34 is strikingly similar to that of the larger 240-kD heavy chain of the native myosin molecule in a range of salt concentrations. We have also examined the structure of the material formed in low salt.
The insoluble aggregates that form in low ionic strength buffer differ in many respects from myosin filaments, which are bipolar structures with a defined length and width. The paracrystals formed from the expressed myosin segments are both longer and wider than the bipolar filaments formed by intact Dictyostelium myosin (Stewart and Spudich, 1979). However, the paracrystals are similar to myosin filaments in that they contain a 14-nm repeat. A repeat or parallel stagger of 14 nm is a hallmark of myosin filaments from both muscle and nonmuscle sources, including filaments of Dictyostelium myosin (Huxley and Faruqi, 1983; Pagh and Gerisch, 1986). The 14-nm repeat present in paracrystals of muscle myosin tail segments is thought to arise from the parallel interaction of individual molecules, each staggered from its neighbor by 14 nm (Bennett, 1981; Quinlan and Stewart, 1987). The 14-nm stagger observed in paracrystals of N-LMM-34 suggests that the amino acid sequence contained therein may dictate the assembly and parallel spacing of myosin molecules within a thick filament. This is supported by a recent study in which the areas of interaction between Dictyostelium myosin molecules assembled into parallel dimers were measured by electron microscopy. The area most likely to be in contact between two myosin molecules (75-125 nm from the head-tail junction) is similar to the area of the tail demarcated by N-LMM-34 (Pasternak et al., 1989). N-LMM-34 is the smallest domain yet studied of a myosin tail segment from any species or tissue that can form paracrystals, and indeed it may be a minimal domain capable of assembly into paracrystals with a 14-nm repeat. The 287 amino acids contained in N-LMM-34 approach the theoretical limit of the 198-amino acid repeat of myosin rods necessary for a 14-nm parallel stagger between complementary groups of positive and negative charge in the myosin tail (McLachlan, 1984). Our attempts to express a smaller subdomain of N-LMM-34 resulted in a protein that was not stable in E. coli extracts.

Comparison of phosphopeptides in two-dimensional maps of myosin or bacterially expressed LMM-58 phosphorylated by Dictyostelium myosin heavy chain kinases derived from either developed cells or vegetative cells shows near identity (Wagle et al., 1988). These experiments indicate that the phosphorylation sites for one or more myosin heavy chain kinases are localized within LMM-58. Here we have examined a time course of phosphorylation of Dictyostelium myosin or myosin tail segments not as a means to identify phosphorylated residues, but rather as a guide to identify parameters of the tail important for efficient phosphorylation. The kinase we used has been purified to homogeneity from developed cells and characterized extensively. It appears to be distinct from previously described Dictyostelium kinases. We show that the extent and kinetics of phosphorylation by this purified heavy chain kinase are virtually indistinguishable between myosin, LMM-68, and LMM-58. Thus, neither the region of the tail adjacent to the myosin heads (analogous to the S-2 region of skeletal muscle myosin) nor the 10-kD carboxyl terminus of the tail is necessary for phosphorylation by the kinase. Phosphorylation sites on the myosin heavy chain have been roughly mapped to a domain 32 kD from the tip of the tail (Pagh et al., 1984). Wagle et al. (1988) have suggested that amino acid 1823, one in a cluster of four threonines, might be a phosphorylation site for a kinase from vegetative cells. Vaillancourt et al.
(1988) have directly identified two phosphorylated threonines at positions 1833 and 2029. Since all phosphorylation sites of LMM-58 and LMM-68 are present in either N-LMM-34 or C-LMM-34, our results suggest that additional parameters are necessary for efficient phosphorylation. Notably, all substrates for the kinase assemble into either filaments or paracrystals in low salt buffers in which the kinase is active. Thus, the low efficiency of phosphorylation could be either because of the absence of phosphorylation sites from an individual segment or because additional features of the tail are required.

Gene replacement in Dictyostelium makes possible the creation of Dictyostelium mutants containing specifically altered myosin proteins. Thus, it is now possible to create mutants with altered myosin domains and to assess the contribution of these alterations to the dynamics of myosin assembly and phosphorylation within living cells. The creation of myosin molecules with alterations in N-LMM-34 may reveal the contribution of this domain to the assembly of myosin filaments within Dictyostelium cells. Similarly, myosin mutants can now be constructed that lack C-LMM-34, a region we show here to be important for myosin phosphorylation in vitro. Ultimately, we hope to understand how functional domains such as these contribute to myosin function within living cells.
Readability of online monkeypox patient education materials: Improved recognition of health literacy is needed for dissemination of infectious disease information Background Health literacy is key to navigating the current global epidemic of misinformation and inaccuracy relating to healthcare. The American Medical Association (AMA) suggests health information should be written at the level of American sixth grade. With the monkeypox outbreak being declared a Public Health Emergency of International Concern (PHEIC) in July 2022, we sought to assess the readability of online patient education materials (PEMs) relating to monkeypox to see if they are at the target level of readability. Methods A search was conducted on Google.com using the search term ‘Monkeypox’. The top 50 English language webpages with patient education materials (PEMs) relating to monkeypox were compiled and categorised by country of publication and URL domain. Readability was assessed using five readability tools: Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and, Simple Measure of Gobbledygook Index (SMOG). Unpaired t-test for URL domain, and one-way ANOVA for country were performed to determine influence on readability. Results Three of the five tools (FRES, GFI, CLI) identified no webpages that met the target readability score. The FKGL and SMOG tools identified one (2%) and two (4%) webpages respectively that met the target level. County and URL domain demonstrated no influence on readability. Conclusion Online PEMs relating to monkeypox are written above the recommended reading level. Based on the previously established effect of health literacy, this is likely exacerbating health inequalities. This study highlights the need for readability to be considered when publishing online PEMs. Introduction Monkeypox is a viral zoonosis that presents with symptoms similar to those previously seen in smallpox patients. Routes of transmission include human-to-human via direct contact with infectious skin or mucocutaneous lesions, respiratory droplets or indirect contact from contaminated objects or materials. Whilst not considered a sexually transmitted disease, the majority of cases have been reported in men who have sex with men (MSM), especially those engaging in high-risk sex with multiple partners. The virus has been endemic in West and Central Africa for decades. However, between January 1, 2022, and July 20, 2022, around 14,500 probable and laboratory confirmed cases were reported to the World Health Organisation (WHO) from seventy-two countries across all six WHO regions. On July 23, 2022, the monkeypox outbreak was declared a Public Health Emergency of International Concern (PHEIC), a status previously achieved by COVID-19, polio and both the Ebola and Zika virus [1]. The WHO has not currently advised mass vaccinations, but they have recommended post-exposure preventive vaccination (PEPV) for contacts of cases, ideally within four days of first exposure, and primary preventive vaccination (PPV) for individuals at high risk of exposure. This group includes MSM or other persons with multiple sex partners, health workers at risk, laboratory personnel working with orthopoxviruses, clinical laboratory staff performing diagnostic testing for monkeypox [2]. 
Research suggests that the majority of individuals now choose to access health information online, especially with regard to sexual health given the accessibility and anonymity the internet provides [3]. There is currently no quality standard for online information and it is evident from previous research that many online patient education material (PEMs) are written above the recommended reading age [4]. The multitude of complexities and uncertainties associated with the monkeypox outbreak, including atypical presentations and biphasic appearance of lesions, has also undoubtedly contributed to the fuelling of misinformation being shared on a global scale about this public health event. Health literacy is key to navigating the current global epidemic of misinformation and inaccuracy, and is defined by The World Health Organisation (WHO) as 'the personal characteristics and social resources needed for individuals and services to make decisions about health.' [5] The ability to use general literacy and numeracy skills in relation to health-related resources can also be broken down into five core components: obtention, comprehension, appraisal, communication and application of information [6]. Difficulties with any of these stages can signify low health literacy, which has been linked to significant adverse health outcomes stemming from limited understanding of health and disease, including reduced engagement with healthcare providers, lower medication compliance and adherence, increased hospitalisations and increased mortality [7]. A report by National Voices in 2017 declared health literacy 'the strongest correlation to ill health e stronger than education level, deprivation, age or ethnicity' [8]. Readability is defined as the 'ease of understanding or comprehension due to the style of writing' and is frequently measured in the context of American 'grade levels' [9,10]. Readability formulas thus offer a means of ensuring a resource is suitable for its intended audience, with the aim of improving patient education and self-efficacy with regard to health. Globally, one in ten adults lack 'the most basic information processing skills considered necessary to succeed in today's world', whilst almost 50% of adults in Europe are deemed to have 'inadequate' or 'problematic' health literacy [11,12]. A publication from Public Health England and the Institute of Health Equity in 2015 demonstrated that in the United Kingdom (UK) 43% of adults lack adequate literacy skills, and 61% lack adequate numeracy skills, to routinely process health information [13]. In the United Stated of America (USA), this rises drastically to 88% of adults not meeting the 'proficient' target [14]. Whilst there is no consensus on the recommended level of reading difficulty for health information, Health Education England (HEE) advises that patient information should be written at a level comprehensible to an 11 year old in order to facilitate uptake [15]. This is equivalent to a US sixth grade level or less, which is also the level at which the American Medical Association (AMA) suggests health information should be written [16]. Given the trend in current literature, we hypothesised that online PEMs about the monkeypox outbreak are written above the recommended level appropriate for the general public, which may be contributing to public misinformation and adverse health outcomes as a result. Methods Online PEMs were identified using the Google.com search engine. 
This search engine was selected because, according to the popular online web traffic analysis website StatCounter, Google searches comprised 92% of the online search market share at the time that this study was conducted [17]. The search was carried out using the search term 'Monkeypox' on July 6, 2022. To eliminate any skewed or biased results based on previous search history or internet activity, the Internet browser was cleared of all search history, cache, cookies, and other user data. The sample set for analysis was generated by compiling the first 50 English language webpages that contained patient orientated information aimed at the general public. Webpages were excluded for the following reasons: information pages written for healthcare professionals, studies from peer-reviewed journals, news articles, personal experiences, or webpages that contained exclusively audiovisual material. PEMs that were solely audio-visual in nature were excluded because these could not undergo readability analysis as proposed in this study's methodology. By enforcing these exclusion criteria, we included only those webpages aimed at providing health education to the general population.

To allow for categorisation, basic data on the webpages' characteristics were captured prior to analysis of readability. These data included: country of origin and URL domain name (i.e., .gov, .com, .org, etc.). Text from each webpage was entered into a Microsoft Word document and edited based on previously established protocols for readability assessment studies [18,19]. Components of the text unrelated to patient education were removed, as these may influence readability score. This included the removal of disclaimers, advertisements, webpage navigation, website URLs, copyright information, acknowledgements, author information, citations and references. Supplemental editing of non-textual elements and punctuation was performed, including paragraph breaks, colons, and bullet points that may cause the readability tool to over- or underestimate the difficulty of a text [20].

Readability scores were generated for the edited text using an online readability software [21]. Text was assessed using five validated readability tools: Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and Simple Measure of Gobbledygook Index (SMOG). Each tool calculates readability by applying a mathematical formula to a passage of text (Supplementary Table 1). The formulas each consider different factors when producing an ease of readability score. These factors include a combination of mean number of syllables per word, mean number of words per sentence, number of sentences and number of complex (polysyllabic) words [9,10]; an illustrative calculation of two of these scores is sketched below. The FRES gives a score of 0-100, with a higher score being deemed easier to read, whilst the other tools (FKGL, GFI, CLI, and SMOG) give a reading grade level. The interpretation and target score for each tool is described in Table 1. The methodology used in this study is consistent with other readability studies within the field of infectious diseases, public health, sexual health and other medical and surgical specialties [4,18,19,22,23].
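To make the grade-level arithmetic concrete, the following is a minimal Python sketch of two of the measures named above, the Flesch Reading Ease Score and the Flesch-Kincaid Grade Level, using their published formulas. The crude vowel-group syllable counter is an assumption for illustration; the readability software used in this study [21] applies its own, more careful tokenisation, so scores from this sketch are indicative only.

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)          # mean words per sentence
    spw = syllables / len(words)               # mean syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease Score
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return fres, fkgl

sample = ("Monkeypox spreads through close contact. "
          "Most people recover within a few weeks without treatment.")
fres, fkgl = readability(sample)
print(f"FRES = {fres:.1f} (target >= 80), FKGL = {fkgl:.1f} (target <= 6)")
```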
Data analysis

Webpage characteristics were described using rounded frequencies (per cent). Overall, and by sub-group (based on webpage characteristics), FRES, FKGL, CLI, SMOG, GFI and median grade scores were described as mean values ± standard deviation, having been found to be normally distributed using the Shapiro-Wilk test of normality. The median grade score was calculated from the tools that compute the result as a grade score (FKGL, CLI, SMOG, GFI). The median grade score was used because the grade scores for each individual webpage were not normally distributed. An unpaired t-test was performed to determine if a government domain extension (.gov) compared to a non-government domain extension (.org, .com, .ca, .uk, .ac, .ch, .ie, .int, .nhs, .nl, .scot, .us) influenced readability. One-way analysis of variance (ANOVA) was performed to determine if country of publication (United States of America, United Kingdom, Canada or Other, which included Australia, Ireland, Netherlands, Singapore, Switzerland) had an effect on readability. All analyses were performed in R (v4.2.1).

Results

Fifty webpages were analysed (Supplementary Table 2). When separated by webpage characteristics (Table 2), 29 (58%) of these were .gov webpages, and 21 (42%) had other URL domain names, which included: .org, .com, .ca, .uk, .ac, .ch, .ie, .int, .nhs, .nl, .scot, .us. When divided by country of origin, 34 (68%) webpages originated from the United States of America (USA); the remaining webpages were published in the United Kingdom, Canada and other countries (Table 2). Readability scores are summarised in Table 3. Using FRES, none of the webpages met the target ease of reading score of ≥80. For the tools that gave a score representing a school grade, or years of education, the GFI and CLI tools found no webpages with a readability score at the target level of sixth grade. The FKGL and SMOG tools identified one (2%) and two (4%) webpages respectively that met the target level of sixth grade, with both the FKGL and SMOG identifying the same webpage in one instance. For all webpages, the FRES ranged from 34.6 (difficult, or college level) to 69.8 (standard difficulty, or grade 8 to 9), with a mean score of 54.4 (SD = 8.5). FKGL ranged from 7 to 12.8, with a mean score of 9.6 (SD = 1.6). GFI ranged from 8.9 to 16.6, with a mean score of 12.4 (SD = 1.8). CLI ranged from 9 to 14, with a mean score of 11.1 (SD = 1.3). SMOG ranged from 6.6 to 12.4, with a mean score of 9.4 (SD = 1.3). Separating the webpages by '.gov' sources and 'other URL domain names' or by country of publication showed no influence on readability (Tables 4 and 5).

Discussion

Health literacy is an important subject across all specialities, including both infectious disease and sexual health. Whilst monkeypox is not a sexually transmitted infection, PEMs created about it rely on patients having an 'ability to understand sexual health information and application of that information' to be able to make informed choices with regard to sex and safe sex practices, also defined by the WHO as sexual health literacy [24]. Electronic health literacy is also crucial in our increasingly technologized world, and refers to the processing and understanding of health information from electronic sources. In contrast to those with high electronic health literacy, research suggests people with low electronic health literacy levels are significantly more likely to believe in misinformation and also to experience information overload from online sources [25].
Although the internet remains the top source of health information for the vast majority of people as it is both accessible and offers anonymity, it is apparent from other readability studies in both infectious disease and sexual health that the majority of PEMs available online are pitched at audiences above the recommended reading level [4,18,22]. Our results are consistent with these findings, demonstrating that none of the fifty total sampled PEMs regarding the monkeypox outbreak identified via Google search were of an appropriate readability for the general public, based on the results of all five validated readability tools, and thus are likely to be suboptimal as educative resources. The COVID-19 pandemic previously underscored the scale of the 'infodemic' we are currently facing as a result of both increased information consumption, especially via a digital means, and global health illiteracy [26]. An infodemic is defined by the WHO as an 'overabundance of information -some accurate and some not -that occurs during an epidemic', which can hugely hinder the evidencebased approach to managing a public health crisis, and make it difficult for the general public to make informed decisions [27]. The COVID-19 pandemic is even more topical when we consider the heightened need for individuals to be able to respond to health information at speed. Health advice relating to monkeypox has been evolving almost daily, which requires the general public to be able to rapidly acquire and apply health information. Low health literacy rates can impair quicker processing times and, combined with online information perhaps unintentionally better suited to a higher readability than recommended, may be further complicating people's ability to comprehend and utilise this information to make informed health decisions. Limitations There are several limitations of this study. Firstly, as noted in other readability studies, information changes rapidly online and search results can change day on day [4]. Due to the cross-sectional design of a study such as this, online information is only captured at a snapshot in time. It is also important to consider that readability does not alone give a complete assessment of how understandable a webpage may be. Other factors such as visuals, both pictographic and audio-visual, and design elements, including font size and the amount of white space, have also been shown to influence comprehension of patient education information [28]. None of the readability formulas used were designed to consider the effect of these design elements. Future studies of PEMs created for monkeypox should examine 'understandability', likely using the Patient Education Materials Assessment Tool (PEMAT); 'quality', likely using with the DISCERN instrument; and, 'accountability', likely using the Journal of the American Medical Association (JAMA) benchmarks of accountability, which are 'authorship', 'attribution', 'disclosure', and 'currency'. This study was also limited to material written in English and did not include pamphlets or other downloadable content. Future research should assess readability of downloadable content, including pamphlets, and readability of non-English material, especially given the international nature of this public health issue. This study also did not include an analysis of which polysyllabic words were used most frequently. An analysis such as this could reveal the vocabulary contributing to the more difficult readability scores. 
Another limitation of this study was that the sample was limited to the first 50 webpages returned by the search term. This sample size is, however, in keeping with the design of other readability studies and in the case of this study appears to have given representative findings, given the homogeneity of our results. Furthermore, individuals most commonly access the top webpages returned from any given search, therefore our sample is representative of the webpages that the typical lay individual would access. Despite the limitations of readability tools, they remain an efficient way of testing ease of readability and their use should be encouraged so as to aid those designing PEMs to carefully select their words and content in a way that is mindful of the literacy levels in the general population. Future studies of online patient information regarding monkeypox might also assess quality of the content. Nevertheless, the findings of this study have started to fill a gap in the available literature. Conclusion Ultimately, this study has shown that PEMs relating to monkeypox are written above the recommended reading level. Based on the previously established effect of health literacy, this is likely exacerbating health inequalities. This 5 study also highlights the need for readability to be taken into account when publishing online resources to ensure simpler information is made available on this topic. Improving uptake and understanding of health information is not only the responsibility of individuals, but also that of governments, health systems and organisations, to ensure that accurate, appropriate and digestible information is made accessible for diverse audiences. Readability can be improved easily by using shorter words, shorter sentences and avoiding complex or medical vernacular. There are many tool-kits available online to simplify complicated scientific information into better designed communication materials [29]. A more symbiotic and holistic approach to address health literacy may reduce disparities in health-associated quality of life and improve health outcomes and treatmentseeking. Addressing the gap in health literacy may also tackle misinformed stigma of certain groups and the perpetuation of incorrect information online [29]. Further research, however, is needed to fully understand how best to articulate consistent and clear messages across multiple populations within a 'rapidly changing scientific and politically charged environment', especially within a sexual health context [30]. Ethics This study used information freely available in the public domain and did not include human subjects and therefore did not require ethical approval.
Self-assembly of photoresponsive azo-containing phospholipids with a polar group as the tail

Vesicles or micelles prepared from amphiphiles with azobenzene (Az) moieties and long alkyl chains have attracted much attention in drug delivery systems. To induce release behavior from smart carriers via trans–cis photoisomerization of the Az groups, UV light exposure is typically used, but it can damage DNA and hardly penetrates cells. In this paper, Az-containing phospholipids without long alkyl tails were designed and synthesized; in these compounds, the end group of the Az moiety was substituted with a –NO2 and –OCH3 group (abbreviated N6 and M6, respectively). N6 self-assembled into H-aggregates with an interdigitated bilayered structure in water through the antiparallel orientation due to π–π interactions of the Az group, the attractive van der Waals forces, and the interactions and bending behavior of the phosphocholine groups. Vesicles showing visible light stimuli-responsive behavior were obtained by mixing N6 and M6, and the release of encapsulated calcein was triggered by visible light.

Characterization of N6, M6, and N3

NMR spectra were measured in CD3OD solutions. 1H NMR spectra were recorded on a JEOL JNM-LA400 at 400 MHz. The chemical shifts of the 1H NMR signals were referenced to Me4Si as the internal standard (δ = 0.00) and are expressed as chemical shifts in ppm (δ), multiplicity, coupling constant (Hz), and relative intensity.

Crystal data and X-ray single crystal structure of N3

For the evaluation of the self-assembling structure of N6 and N6/M6 mixtures, X-ray single crystal structure analysis of N3 was carried out. The single crystal was obtained by slow cooling of a hot aqueous solution of N3 containing acetonitrile. The crystal data were collected with a R-AXIS RAPID diffractometer using multi-layer mirror monochromated Cu-Kα radiation at 90 K, and the crystal structure was analyzed using SHELXL within the OLEX-2 GUI for modelling the molecular structures. Figure 2(d) shows the part of the crystal structure of N3 focusing on the phosphocholine moiety to clarify the formation mechanism of the stacked bilayer of N6. The magenta-colored part represents the phosphocholine moiety of N3. This part bends away from the long axis of the Az skeleton to increase the affinity for the water layer through hydrogen bonding. The bending behavior of the phosphocholine moiety might enable phospholipids without a long alkyl tail to form a bilayer-type self-assembled structure through H-aggregation. Indeed, it is impossible for the hydrophilic parts of amphiphiles bearing only ammonium moieties, without a phosphocholine moiety, to bend in this way.

Figure S2. X-ray single crystal structure of N3.

Effect of the hydrophilic moiety on the formation of H-aggregates

As shown in Figure S5(a), no blue shift was observed when the concentration of NAz6TMA increased, suggesting that NAz6TMA molecules in water did not assemble into aggregates even at higher concentration. As shown in Figure S5(b), the same behavior was also found for the mixture of NAz6TMA and MAz6TMA in water.

Figure S5. Absorption spectra of (a) NAz6TMA; (b) a mixture of NAz6TMA and MAz6TMA (R = 50/50) in water.

As shown in Figure S6(a), an absorption band at approximately 362 nm was observed at lower concentration in water (e.g., c = 0.05 mM). The wavelength of the absorption band shifted to a shorter wavelength (329 nm) with increasing concentration (e.g., c = 1.0 mM).
The blue shift of this band with increasing concentration was due to the formation of H-aggregates, and the driving force is further enhanced by the stacking of the Az moieties (Figure S6).

1H NMR characterization of self-assembled vesicles in water

The solubility of the Az-containing phospholipids is poor at room temperature, which resulted in sample concentrations that were too low and signals that were too weak to be detected. Therefore, 1H NMR measurements were performed at 60 °C in D2O to investigate interactions between N6 and M6 (Figure S7). When measuring M6 in D2O at 60 °C by 1H NMR, a sharp peak was obtained at 7.7 (δ/ppm), corresponding to the symmetrical aromatic hydrogens (-ArH-N=, 4H) (M6-a). Meanwhile, the peak at 7.8 (δ/ppm) was assigned to the aromatic hydrogens (O2N-ArH-, 2H) (N6-a) of N6. However, a relatively broadened and shifted peak was observed at 7.51 (δ/ppm) for M6-a and N6-a when the N6/M6 mixture was incubated in D2O at 60 °C. The reason for the chemical shifts and broadened peaks is that electrostatic interactions were introduced between the MeO- and O2N- groups. Consequently, the change in the hydrogen environment gave rise to the chemical shifts and broadened peaks.

Four peaks at 23, 27, 30, and 38 Å were mainly observed, and the peak profiles changed with the N6/M6 mixing ratio. By comparison with the results shown in Figure 2, the scattering at 38 Å could be assigned to the bilayer thickness of the vesicles. On the other hand, the peaks at 30 and 27 Å were due to the layer thicknesses of N6 and M6 crystalline powders. As shown in Figure 2, there was neither phase separation nor recrystallization of N6 and M6. In spite of careful treatment, the freeze-drying process might have influenced the crystal formation. Further, an unassignable broad scattering at 23 Å was sometimes observed.

Changes in the absorption spectra of the vesicles as a function of UV irradiation time

The N6/M6 mixture (R = 50/50, c = 0.5 mM) in water forms vesicles with an average diameter of ca. 500 nm, as presented in Figure S11. The changes in the absorption spectra of the vesicles as a function of UV irradiation time were then recorded. As shown in Figure S12, before UV irradiation there was an absorption band at 340 nm, corresponding to H-aggregation. UV irradiation caused a decrease in absorbance at 340 nm and an increase at 450 nm due to the photoisomerization of the Az groups in N6 and M6 from the trans form to the cis form, and the reverse was observed upon visible light irradiation for less than 30 s.
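The magnitude of the H-aggregation blue shift reported above (the monomer band near 362 nm moving to roughly 329-340 nm in the aggregate) can be expressed as an energy difference, which is one simple way to gauge the strength of the excitonic interaction between stacked Az chromophores. The short Python sketch below performs that conversion; the wavelengths are the ones quoted in this supplementary text, and interpreting the full shift as an approximate exciton coupling (as in the simplest molecular-exciton picture) is a simplification assumed here, not an analysis carried out by the authors.

```python
# Minimal sketch: convert the H-aggregation blue shift (362 nm -> 329 nm)
# into an energy difference. Constants are CODATA values; reading this energy
# as an approximate exciton coupling strength is an assumed simplification.

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # J per eV

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

monomer_nm, aggregate_nm = 362.0, 329.0       # band maxima quoted in the text
shift_ev = photon_energy_ev(aggregate_nm) - photon_energy_ev(monomer_nm)
shift_cm1 = shift_ev * EV / (H * C) * 1e-2    # same shift in wavenumbers (cm^-1)

print(f"Blue shift: {shift_ev:.3f} eV (~{shift_cm1:.0f} cm^-1)")
```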
Sub-kpc star-formation law in the local luminous infrared galaxy IC 4687 as seen by ALMA

We analyze the spatially resolved (250 pc scales) and integrated star-formation (SF) law in the local luminous infrared galaxy (LIRG) IC 4687. This is one of the first studies of the SF law in a starburst LIRG at these small spatial scales. We combined new interferometric ALMA CO(2-1) data with existing HST/NICMOS Pa$\alpha$ narrow-band imaging and VLT/SINFONI near-IR integral field spectroscopy to obtain accurate extinction-corrected SF rate (SFR) and cold molecular gas surface densities ($\Sigma_{gas}$ and $\Sigma_{SFR}$). We find that IC 4687 forms stars very efficiently, with an average depletion time ($t_{dep}$) of 160 Myr for the individual 250 pc regions. This is approximately one order of magnitude shorter than the $t_{dep}$ of local normal spirals and also shorter than that of main-sequence high-z objects, even when we use a Galactic $\alpha_{CO}$ conversion factor. This result suggests a bimodal SF law in the $\Sigma_{SFR} \propto \Sigma_{gas}^{N}$ representation. A universal SF law is recovered if we normalize the $\Sigma_{gas}$ by the global dynamical time. However, at the spatial scales studied here, we find that the SF efficiency (or $t_{dep}$) does not depend on the local dynamical time for this object. Therefore, an alternative normalization (e.g., free-fall time) should be found if a universal SF law exists at these scales.

Introduction

There is a strong correlation between the star formation rate (SFR) and the cold molecular gas content in galaxies. This relation is usually referred to as the star formation (SF) law (or as the Kennicutt-Schmidt relation; Schmidt 1959; Kennicutt 1998) and it is expressed as

$\Sigma_{SFR} = A\,\Sigma_{gas}^{N}$,

where $\Sigma_{SFR}$ and $\Sigma_{gas}$ are the SFR and cold molecular gas surface densities, respectively. For galaxy integrated observations, the typical power-law index, N, is 1.4-1.5 (Kennicutt 1998; Yao et al. 2003). The physical processes leading to this value of N are not well established yet, although theoretical models suggest that variations of the free-fall time (t_ff) and the orbital dynamical time might define the observed relation (see McKee & Ostriker 2007 and Kennicutt & Evans 2012 for a review). In general, it is assumed that the normalization of the SF law, A, is constant, that is, independent of the galaxy type. However, some works have found bimodal SF laws when main-sequence (MS) galaxies and starbursts (those with higher specific SFR than MS galaxies for a given redshift) are considered (e.g., Daddi et al. 2010; Genzel et al. 2010; García-Burillo et al. 2012). In these cases, normal galaxies have depletion times (t_dep = M_H2/SFR) between 4 and 10 times longer than starburst galaxies. The possible existence of this bimodality in the SF law affects the determination of N. Actually, these works find an almost linear relation, N ∼ 1, for each galaxy population (MS and starbursts) when they are treated independently.

Recently, many studies of the resolved sub-kpc SF laws in nearby galaxies have appeared (e.g., Kennicutt et al. 2007; Bigiel et al. 2008; Leroy et al. 2008; Verley et al. 2010; Rahman et al. 2012; Viaene et al. 2014; Casasola et al. 2015). Most of these works find a wide range of N values (0.8-2.3) and a considerable scatter in the relations (0.1-0.4 dex). This could be explained if the SF law breaks down on sub-kpc scales (e.g., the location of the cold molecular gas peaks, CO, and the SFR regions, Hα, are not always coincident; Kennicutt et al. 2007; Schruba et al.
2010) and/or because some systematics affect these sub-kpc studies (e.g., the treatment of the diffuse background emission; Liu et al. 2011). These previous sub-kpc studies are focused on very nearby (d < 20 Mpc) spiral galaxies and active galactic nuclei (AGN). That is, they were the only objects where sub-kpc resolutions could be achieved before the arrival of the Atacama Large Millimeter/submillimeter Array (ALMA). Therefore, the most extreme local starbursts (i.e., luminous and ultraluminous IR galaxies) are absent in previous sub-kpc studies, although they are important to understand extreme high-z SF (e.g., Daddi et al. 2010).

In this paper, we present one of the first sub-kpc analyses of the SF law in a local extreme starburst. In particular, we study the local (d = 74 Mpc; 345 pc arcsec^-1) luminous IR galaxy (LIRG) IC 4687. This galaxy has an IR luminosity of 10^11.3 L_⊙, which corresponds to an integrated SFR of ∼30 M_⊙ yr^-1. IC 4687 belongs to the weakly interacting IC 4686/4687/4689 system (see Bellocchi et al. 2013 for details). Therefore, the starburst of IC 4687 might be induced by this weak interaction.

We obtained new ALMA ^12CO(2-1) observations with ∼100 pc spatial resolution to study the SF law in IC 4687. We combined these observations with HST/NICMOS maps of Paα (50 pc resolution; Alonso-Herrero et al. 2006) and VLT/SINFONI near-IR integral field spectroscopy (200 pc resolution; Piqueras López et al. 2012). This multi-wavelength dataset allowed us to get a novel insight into the sub-kpc SF law in extreme local starbursts. In addition, optical integral field spectroscopy data of the entire IC 4686/4687/4689 system have recently been obtained (Rodríguez-Zaurín et al. 2011; Bellocchi et al. 2013; Arribas et al. 2014; Cazzoli et al. 2015, submitted), but they were not considered in the present analysis because of their different spatial resolution.

This paper is organized as follows: We describe the observations and data reduction in Section 2. The analysis of the cold molecular gas and SFR maps of IC 4687 is presented in Section 3. In Sections 4 and 5, we discuss our results in the context of nearby and high-z galaxies, respectively. Finally, in Section 6, we summarize the main findings of the paper.

Observations and data reduction

The on-source integration times were 18 and 9 min, respectively. Both observations were single pointings centered at the nucleus of IC 4687. The extended configuration had baselines between 33.7 m and 1.1 km, while the baselines ranged between 15.1 m and 328 m for the compact configuration. For these baselines, the maximum recoverable scales are 4.9″ and 10.9″, respectively. Two spectral windows of 1.875 GHz bandwidth (0.48 MHz ∼ 0.6 km/s channels) were centered at the sky frequencies of ^12CO(2-1) (226.4 GHz) and CS(5-4) (240.7 GHz). In addition, two continuum spectral windows were set at 228.6 and 243.4 GHz. In this paper, we only present the analysis of the CO(2-1) data. The two datasets were calibrated using the standard ALMA reduction software CASA (v4.2.2; McMullin et al. 2007). For the amplitude calibration we used J1617-5848, assuming a flux density of 0.651 Jy at 226.4 GHz, and Titan, using the Butler-JPL-Horizons 2012 model, for the extended and compact configurations, respectively. The uv visibilities of each observation were converted to a common frequency reference frame (kinematic local standard of rest; LSRK) and then combined. The amplitudes of the baselines in common for both array configurations were in good agreement.
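The maximum recoverable scales quoted above follow from the shortest baseline of each configuration, and the synthesized beam translates into a physical resolution through the 345 pc arcsec^-1 scale of IC 4687. The following is a minimal sketch of those two conversions, using the common interferometric rule of thumb MRS ≈ 0.6 λ / B_min; this approximation is an assumption for illustration, not the exact value an imaging pipeline would report.

```python
import math

C = 2.99792458e8            # speed of light, m/s
NU_OBS = 226.4e9            # sky frequency of CO(2-1) quoted in the text, Hz
PC_PER_ARCSEC = 345.0       # physical scale of IC 4687 quoted in the text

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0
wavelength = C / NU_OBS     # ~1.32 mm

def max_recoverable_scale(shortest_baseline_m):
    """Rule-of-thumb largest angular scale (arcsec) for a given minimum baseline."""
    return 0.6 * wavelength / shortest_baseline_m * RAD_TO_ARCSEC

for config, b_min in [("extended", 33.7), ("compact", 15.1)]:
    print(f"{config}: MRS ~ {max_recoverable_scale(b_min):.1f} arcsec")

beam_arcsec = (0.31, 0.39)  # synthesized beam FWHM quoted below in the text
print("beam ~ {:.0f} pc x {:.0f} pc".format(*[b * PC_PER_ARCSEC for b in beam_arcsec]))
```

With the quoted minimum baselines this reproduces the 4.9″ and 10.9″ scales and the ∼100 pc × 130 pc beam size reported for these data.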
Then, the continuum (0.15-0.05 mJy beam^-1) was fitted with the line-free channels and subtracted in the uv plane. In the final data cubes, we used 4 MHz channels (∼5 km s^-1) and 256 × 256 pixels of 0.″07. For the cleaning, we used the Briggs weighting with a robustness parameter of 0.5 (Briggs 1995), which provided a beam with a full width at half maximum (FWHM) of 0.″31 × 0.″39 (∼100 pc × 130 pc) with a position angle (PA) of 35°. A mask derived from the observed CO(2-1) emission in each channel was used during the clean process. For the final 4 MHz channels, the achieved 1σ sensitivity is ∼1 mJy beam^-1. We applied the primary beam correction to the data cubes. The integrated CO(2-1) flux in the considered ALMA field of view (18″ × 18″) is 460 Jy km s^-1 with a flux calibration uncertainty of about 15%. For comparison, the single-dish CO(2-1) flux measured with the 15 m SEST telescope (24″ beam size) is 480 Jy km s^-1 (Albrecht et al. 2007). Therefore, combining both the compact and the extended ALMA array configurations we are able to recover most of the CO(2-1) flux of this source.

Ancillary HST/NICMOS and VLT/SINFONI data

We used the continuum-subtracted narrowband Paα HST/NICMOS image of IC 4687 (Alonso-Herrero et al. 2006) to determine the resolved SFR. The original Paα map (0.″15 resolution) was convolved with a Gaussian kernel to match the angular resolution of the ALMA map. To correct the Paα emission for extinction (see next section), we used the 2D extinction maps of this galaxy derived with near-IR VLT/SINFONI integral field spectroscopy (Piqueras López et al. 2013). Both the ALMA CO(2-1) and the NICMOS Paα maps cover similar fields of view. However, we limited our analysis to the smaller field of view of the SINFONI extinction map (8″ × 8″; see Figure 1) so the dataset would be homogeneously corrected for extinction. Nevertheless, this region contains ∼85% of the total CO(2-1) flux. We calculated the position of the dynamical center of the CO(2-1) emission by locating the maximum of the directional derivative of the velocity field (Figure 2; see also Arribas et al. 1997). Then, we aligned the peak of the stellar mass distribution traced by the NICMOS and SINFONI near-IR continuum with the CO(2-1) dynamical center.

Molecular and ionized gas morphology

In Figure 1, we show the Paα and CO(2-1) integrated flux and peak intensity maps of IC 4687. Four spiral-like arcs are visible in the molecular gas emission. The two more external arms are also evident in the Paα map (see also Figure 3). The Paα emission of the southern arc is dominated by a bright ∼1 kpc diameter region, while in the northern arm it is spread over ∼3 kpc along the arc. There is a 1 kpc diameter ring of molecular gas around the nucleus, which is relatively weak in the observed Paα emission. This is mainly because of the higher extinction of the nuclear ring with respect to the rest of the SF regions (see Section 3.2). Figure 3 shows the comparison of the Paα and CO(2-1) emissions. The general agreement is good. As stated above, both the Paα and CO(2-1) emissions trace similar spiral arcs and a circumnuclear ring; however, on scales of 100 pc the CO(2-1) and Paα emission peaks do not always coincide. There are regions where the Paα emission is strong and there is no clear CO(2-1) peak associated with the emission, while other regions detected in the CO(2-1) maps do not show Paα emission.
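The dynamical-center estimate mentioned above (the maximum of the directional derivative of the velocity field; Arribas et al. 1997) can be illustrated with a few lines of NumPy. This is a schematic sketch of the idea applied to a synthetic rotating-disk velocity field, not the authors' code; the toy disk parameters and orientation are arbitrary assumptions.

```python
import numpy as np

# Toy velocity field: a rotating disk sampled on a 128x128 grid. The velocity
# gradient along the kinematic major axis peaks at the dynamical center,
# where the rotation curve rises most steeply through the systemic velocity.
n = 128
y, x = np.mgrid[0:n, 0:n].astype(float)
x0, y0 = 70.0, 60.0                       # "true" center of the toy disk
r = np.hypot(x - x0, y - y0) + 1e-6
vrot = 200.0 * r / (r + 15.0)             # simple rising rotation curve (km/s)
vlos = vrot * (x - x0) / r                # toy line-of-sight velocity field

dv_dy, dv_dx = np.gradient(vlos)          # derivatives along y and x (major axis = x here)
iy, ix = np.unravel_index(np.argmax(np.abs(dv_dx)), vlos.shape)
print(f"recovered center: ({ix}, {iy});  input center: ({x0:.0f}, {y0:.0f})")
```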
Characterization of the regions

We used the integrated CO(2-1) emission map to define individual emitting regions. Emission peaks above a 10σ level were considered. This conservative σ level was chosen to exclude residual side lobes produced by the bright central region. We applied the same procedure to the Paα map and then we combined both sets of regions. In Figure 4, we plot the location of these 81 regions (dashed circles indicate regions whose Brγ or Brδ emissions are detected below a 6σ level). We assumed that regions with centroids separated by less than 0.″35 (ALMA beam) in the CO and Paα maps correspond to the same physical region. Using this criterion, 23 regions are detected in both the CO(2-1) and Paα maps. There are 43 and 15 regions detected only in the CO or Paα maps, respectively. Both CO(2-1) and Paα emissions are detected at more than 6σ for all these regions, although an emitting clump is seen only in one of the maps. The diameter of the regions was fixed to 0.″7 (∼2 times the ALMA beam), which corresponds to ∼250 pc at the distance of IC 4687. This physical scale is ideal for comparison with previous works (see Section 4).

To measure the CO(2-1) emission of each region, we extracted their spectra and integrated all the channels above a 3σ level within the velocity range of the CO(2-1) emission in this object (5000-5400 km s^-1). We estimated the cold molecular gas mass from the CO(2-1) emission using the Galactic CO-to-H2 conversion factor, α_CO(1-0) = 4.35 (Bolatto et al. 2013; see Section 3.3.3), and the CO(2-1) to CO(1-0) ratio (R_21) of 0.7 derived from the single-dish CO data of this galaxy (Albrecht et al. 2007). This R_21 value is similar to that found by Leroy et al. (2013) in nearby spiral galaxies. Using this conversion factor, the molecular gas surface density ranges from 10^2.3 to 10^3.4 M_⊙ pc^-2 within the 250 pc diameter apertures. This corresponds to molecular masses of the individual regions in the range M_H2 = 10^7-10^8 M_⊙, so they likely include several giant molecular clouds.

We estimated the SFR of the regions using the extinction-corrected Paα emission. First, we performed aperture photometry on the Paα image for each region. To correct the Paα emission for extinction, we used the Brγ/Brδ ratio map of Piqueras López et al. (2013) and assumed an intrinsic Brγ/Brδ ratio of 1.52 (Hummer & Storey 1987) and the Fitzpatrick (1999, F99) extinction law. This A_K determination is very sensitive to the uncertainty in the Brδ and Brγ fluxes. Therefore, we only considered regions where both the Brδ and Brγ transitions are detected at >6σ. Using this criterion, our final sample includes 54 out of the 81 original regions. Almost 90% of the regions detected in both the CO and Paα maps fulfill this criterion, while ∼40% of the regions detected only in CO or Paα are excluded. Most of the excluded regions are those at the low end of the CO and Paα luminosity distributions. This suggests that we are limited by the sensitivity of the Brδ and Brγ maps, which would be lower than that of the ALMA and HST/NICMOS data. The measured extinction range is A_K = 0.2-3.5 mag (A_V = 2-30 mag) with a median A_K of 1.3 mag (A_V = 11 mag). Correcting the observed Paα emission by this median extinction yields an extinction-corrected flux that is ∼4 times the observed flux.
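For concreteness, the extinction estimate described above amounts to comparing the observed Brγ/Brδ ratio with the intrinsic value of 1.52 and scaling by the differential extinction between the two wavelengths. The sketch below illustrates this, with two simplifications that are assumptions of this illustration rather than the paper's procedure: a power-law near-IR extinction curve (A_λ ∝ λ^-1.75 normalized at 2.2 μm) is used in place of the Fitzpatrick (1999) law, and the example line ratio is hypothetical.

```python
import math

# Illustrative extinction estimate from the Brgamma/Brdelta ratio.
# Assumed here (not from the paper): a power-law near-IR extinction curve
# normalized to A_K at 2.2 micron, and a hypothetical observed ratio.

INTRINSIC_RATIO = 1.52                  # Brgamma/Brdelta (Hummer & Storey 1987)
LAMBDA_BRG, LAMBDA_BRD = 2.166, 1.945   # line wavelengths, micron
ALPHA = 1.75                            # assumed NIR extinction power-law slope

def a_over_ak(lam_um):
    """Extinction at lam_um relative to A_K (2.2 micron), power-law curve."""
    return (lam_um / 2.2) ** (-ALPHA)

def a_k_from_brackett(obs_ratio):
    """A_K (mag) implied by an observed Brgamma/Brdelta flux ratio."""
    # obs/intrinsic = 10^{0.4 (A_Brdelta - A_Brgamma)}
    delta_a = 2.5 * math.log10(obs_ratio / INTRINSIC_RATIO)
    return delta_a / (a_over_ak(LAMBDA_BRD) - a_over_ak(LAMBDA_BRG))

obs_ratio = 1.95   # hypothetical observed ratio for one region
print(f"A_K ~ {a_k_from_brackett(obs_ratio):.1f} mag")
```

With these assumptions an observed ratio of ∼1.95 corresponds to A_K ≈ 1.3 mag, comparable to the median extinction quoted above.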
We show the spatial distribution of A_K in the left panel of Figure 5. The most obscured regions (A_K > 1.7 mag) are located at the ring of molecular gas around the nucleus, while the regions in the arms have lower A_K values. The extinction-corrected Paα luminosities of the regions were converted into SFR following the Kennicutt & Evans (2012) calibration for Hα (assuming Hα/Paα = 8.58; Hummer & Storey 1987). For this SFR calibration, Kennicutt & Evans (2012) adopted the Kroupa (2001) initial mass function. The SFR surface density in this galaxy is 1-100 M_⊙ yr^-1 kpc^-2 for the 250 pc regions. All of these Σ_H2 and Σ_SFR values are multiplied by cos 47° to correct for the inclination of this galaxy (i = 47°; Bellocchi et al. 2013).

Systematic uncertainties

Both the SFR and the cold molecular gas surface density estimates are affected by several systematic effects. These effects have been widely studied in the past (e.g., Rahman et al. 2011; Liu et al. 2011; Schruba et al. 2011; Bolatto et al. 2013; Genzel et al. 2013; Casasola et al. 2015). Therefore, in this section, we briefly discuss the possible systematic effects due to the region selection, SFR tracer, extinction correction, and CO-to-H2 conversion factor we used.

Region selection

In the local spiral M 33, for physical scales of 300 pc, Schruba et al. (2010) found that the depletion time (t_dep) is shorter by a factor of ∼3 for apertures centered on Hα peaks than for apertures centered on CO peaks. For IC 4687, the average log t_dep/yr is 8.3 ± 0.3 and 8.2 ± 0.4 for the CO and Paα selected regions, respectively. That is, we do not see any significant difference between the CO and Paα selected regions with the 250 pc apertures in this galaxy in terms of t_dep. Therefore, in the following, we do not distinguish between CO and Paα selected regions.

Extinction correction and SFR tracer

We used the Galactic F99 extinction law to correct for dust obscuration effects in IC 4687. This choice is appropriate because we use hydrogen recombination lines (Paα, Brδ, and Brγ) to derive A_K values (A_V = 8.6 × A_K). We also tried the Calzetti et al. (2000) attenuation law. Near-IR transitions yield higher A_K (or A_V) values than optical transitions (e.g., Balmer decrement). This is because the relative contribution of highly obscured regions to the total line emission is higher for the near-IR lines. Therefore, the equivalent A_V estimated from near-IR lines is higher as well. Consequently, SF laws depend on how the extinction correction is applied. For instance, using optical tracers, the derived SFR can vary by up to a factor of 10 for extremely obscured galaxies, which also affects the slope of the SF laws (e.g., Genzel et al. 2013). In the near-IR, the extinction effects are greatly reduced, so for our case, we estimate that the uncertainties due to the application of the extinction correction are only a factor of ∼2 for the A_K range of this object. In addition, extinction corrections can be performed region by region or using an average A_K value. For IC 4687, we find that the results are similar on average when using the region-by-region extinction and the integrated extinction (see Section 5). However, like Genzel et al. (2013), we find that the relation between the SFR and cold molecular gas is flatter if we use an average extinction value. This is because of the relation between A_K and the H2 column density (Figure 6). Regions with more cold molecular gas are more extinguished, so they are undercorrected when we apply the average extinction.
Therefore, the relation between the SFR and the cold molecular gas is flatter when the average A K is assumed. Finally, to check the extinction correction applied, in Figure 6 we plot the relation between the H 2 column density, which is derived from the Σ H2 values, and A K . In Galactic regions there is a correlation between these two quantities (Bohlin et al. 1978; Pineda et al. 2010). In IC 4687, this trend is relatively weak (Spearman's rank correlation coefficient r s = 0.30, probability of no correlation p = 0.04), although, as expected, regions with higher H 2 column densities tend to have higher A K . In particular, this occurs for regions with A K > 1.6 mag and log N H2 (cm −2 ) > 22.7. For lower A K and N H2 values, this relation disappears in IC 4687 at these spatial scales. For comparison, we also plot in Figure 6 the Galactic relation as a solid line. The measured A K values in IC 4687 are systematically lower than the Galactic prediction by a factor of ∼6. This suggests that the dust properties and/or geometry of the star-forming regions of this object differ from those found in Galactic regions. In Section 3.3.3, we explain that the Galactic α CO factor is favored for IC 4687. However, we emphasize that using the α CO of ULIRGs, which is 5-7 times lower than the Galactic α CO factor (Bolatto et al. 2013), would reconcile the observed H 2 column densities and A K with the Galactic relation.
CO-to-H 2 conversion factor
The derived cold molecular gas masses depend on the α CO conversion factor used. In Section 3.2, we assumed that the Galactic α CO factor is a good choice for IC 4687. However, we could expect a lower conversion factor, similar to that of ULIRGs, in this galaxy because of its high specific SFR (sSFR = SFR/stellar mass), ∼0.4 Gyr −1 (Pereira-Santaella et al. 2011). Genzel et al. (2015) proposed that galaxies with high sSFR, that is, galaxies that lie above the MS of SF galaxies, have reduced α CO factors. IC 4687 has an sSFR ∼ 6 times higher than a local MS galaxy with the same stellar mass (Whitaker et al. 2012). Therefore, using the α CO factor of ULIRGs could be justified. However, it is not clear if the integrated CO-to-H 2 conversion factor of ULIRGs, where CO emission is not likely confined to individual molecular clouds (Bolatto et al. 2013), applies to our 250 pc regions in IC 4687. In addition, IC 4687 is not a strongly interacting galaxy or a merger like most local ULIRGs (Figure 1); it has a velocity field dominated by rotation (Figure 2), although it is perturbed by noncircular motions. In addition, the morphology of the CO emission of IC 4687 resembles that of a normal spiral galaxy (see Leroy et al. 2008) with the SF spread over a region of several kpc. Therefore, it is possible that the cold molecular gas properties (turbulence, temperature, and density) of IC 4687 differ from those of local ULIRGs where a lower α CO factor is required. In fact, in a single-dish survey of local U/LIRGs, Papadopoulos et al. (2012) found that near-Galactic α CO values for U/LIRGs are possible when the contribution from high density gas (n > 10 4 cm −3 ) is taken into account. Also, in the case of IC 4687, if we used the α CO of ULIRGs, the t dep of the regions would be extremely short, that is, almost 100 times shorter than those of local spiral galaxies (see Section 4.1).
Comparison with local galaxies
In Figure 7, we compare the spatially resolved (200-500 pc) SFR and molecular gas surface densities of nearby galaxies presented by Leroy et al.
(2008) and Casasola et al. (2015) with those of IC 4687. Leroy et al. (2008) studied a sample of 23 nearby (d < 15 Mpc) normal spiral galaxies, while Casasola et al. (2015) studied four nearby (d < 20 Mpc) low-luminosity AGN. Figure 7 shows that IC 4687 regions have high molecular gas surface densities, log Σ H2 (M⊙ pc −2 ) = 2.9 ± 0.2, close to the high end of the Σ H2 distribution observed in nearby active galaxies. Moreover, the IC 4687 regions form stars more rapidly than normal galaxies do. These regions have log Σ SFR (M⊙ yr −1 kpc −2 ) = 0.7±0.4, which is a factor of ∼10 higher than the most extreme values measured in nearby galaxies. This is consistent with the general behavior of local LIRGs, as their H ii regions are typically a factor of 10 more luminous than those in normal star-forming galaxies (Alonso-Herrero et al. 2002). Consequently, the t dep of the IC 4687 regions is 160 (+750/−140) Myr (average log t dep /yr = 8.2 ± 0.4). This is approximately one order of magnitude shorter than in nearby galaxies, which is 1-2 Gyr (Bigiel et al. 2008, 2011; Leroy et al. 2013; Casasola et al. 2015) for similar physical scales.
Systematic uncertainties
In this section, we use data from the local studies of Leroy et al. (2008) and Casasola et al. (2015). Leroy et al. (2008) used the Hα+24 µm luminosities to derive the SFR, so our results are directly comparable. Casasola et al. (2015) instead used the extinction-corrected Hα luminosity. They used the integrated Paα/Hα ratio to derive this correction, so we expect their SFR to be systematically underestimated when compared to ours (Section 3.3.2). Since their galaxies are less extinguished (A K ∼ 0.2 mag) than the LIRG IC 4687, we estimate that this difference is less than a factor of 2. These local studies assume a Galactic CO-to-H 2 conversion factor, which is the adopted factor for IC 4687. Consequently, if this factor is valid for IC 4687, all the cold molecular gas surface density comparisons should be consistent. However, in Figure 7, we also plot the SF law assuming an α CO factor typical of ULIRGs (Downes & Solomon 1998) for IC 4687. Using this factor, the molecular gas masses and depletion times are reduced by a factor of ∼5. Therefore, the regions in IC 4687 would have Σ H2 similar to those of normal galaxies, but Σ SFR ∼100 times higher. Actually, if we apply the α CO factor of ULIRGs to IC 4687, this galaxy would be an extreme starburst compared to local and high-z galaxies (Section 5). In principle, we do not expect such extreme behavior in a weakly interacting spiral galaxy such as IC 4687. Therefore, we consider the Galactic α CO factor preferred for IC 4687.
Higher star formation efficiency?
On average, the SF regions of the LIRG IC 4687 have higher cold molecular gas surface densities than those in other nearby galaxies measured on similar spatial scales when assuming the same α CO (Figure 7). There is some overlap, however, with the regions measured by Casasola et al. (2015) in the range Σ H2 = 10 2.5 − 10 3.1 M⊙ pc −2 . If the SFR were linearly correlated with the amount of molecular gas (see e.g., Bigiel et al. 2008), we would expect higher SFR densities in IC 4687, but also similar depletion times. However, the depletion times in IC 4687 are 10 times shorter than in nearby galaxies. Therefore, for IC 4687, the SFR surface density does not follow the relation observed in nearby galaxies, even in the overlapping surface density range.
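As a brief aside on the numbers quoted above, the snippet below shows how the depletion time follows from the median surface densities and how a ULIRG-like α CO (roughly 5 times lower) rescales it by the same factor. The surface densities are the text's median values; everything else is illustrative.

```python
def t_dep_yr(sigma_h2_msun_pc2, sigma_sfr_msun_yr_kpc2):
    """Depletion time [yr]; Sigma_SFR is converted from kpc^-2 to pc^-2 so units cancel."""
    return sigma_h2_msun_pc2 / (sigma_sfr_msun_yr_kpc2 / 1.0e6)

# Median IC 4687 region from the text: log Sigma_H2 = 2.9, log Sigma_SFR = 0.7
print(f"{t_dep_yr(10**2.9, 10**0.7) / 1e6:.0f} Myr")        # ~160 Myr, as quoted
# A ULIRG-like alpha_CO (~5x lower) rescales Sigma_H2, and hence t_dep, by the same factor
print(f"{t_dep_yr(10**2.9 / 5.0, 10**0.7) / 1e6:.0f} Myr")
```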
Alternatively, a nonlinear SF law fits the data with a power-law index of N = 1.6 (Figure 7), which is similar to the indexes derived for galaxy integrated data (see Section 1). However, if we exclude the IC 4687 data from the fit, a linear relation is recovered. That is, the nonlinearity of the relation is only due to the IC 4687 regions. Sub-kpc resolved observations of more extreme starbursts will be needed to determine if they follow a nonlinear SF law or if the SF efficiency (SFE) is actually bimodal since it is more efficient in starburst galaxies (see Section 5.4).
Dispersion of t dep in IC 4687
We find that the t dep scatter within IC 4687 regions is relatively high, at 0.4 dex. A similar, although slightly lower, dispersion of the t dep values is found in nearby galaxies observed at similar spatial resolution (Casasola et al. 2015). In addition, in IC 4687 (Figure 7) the correlation between the molecular gas and the SFR surface densities is weak (r S = 0.24, p = 0.08). This suggests that, on scales of 250 pc, the relation between the SFR and the cold molecular gas breaks down in this galaxy, or at least, it is hidden by the scatter. Some works (e.g., Onodera et al. 2010; Schruba et al. 2010; Kruijssen & Longmore 2014) argue that the time evolution of the SF regions plays a key role in explaining the t dep scatter when high spatial resolution data are used. Actually, at high spatial resolution (75 pc) the distributions of the CO and ionized gas emissions are different (e.g., Schruba et al. 2010). This is also partially true on scales of 130 pc for IC 4687 (Figure 3). Therefore, the evolutionary state of the molecular clouds in IC 4687 could give rise to the scatter in the SFR vs. cold molecular gas relation. With the current observations, however, it is not possible to establish the evolutionary state of the regions in IC 4687, so we cannot test this hypothesis. Additional scatter is produced by the selected SFR tracers (e.g., Schruba et al. 2011). We use the extinction-corrected Paα emission as a tracer of the SFR. Paα traces the ionizing radiation produced by young stars and it is detectable for clusters younger than ∼10 Myr (Kennicutt & Evans 2012). Therefore, our SFR estimates are only sensitive to recent SF (<10 Myr), which might be more variable than the SFR averaged over longer time periods (∼100 Myr) traced by the UV or IR continuum (e.g., Schruba et al. 2011; Casasola et al. 2015). This short-term SFR variability can also produce part of the scatter seen in Figure 7. Finally, when the mass of the young SF regions is low (<10 5 M⊙), the incomplete sampling of the IMF can induce large variations in the SFR tracers (e.g., Verley et al. 2010). To test this effect, we estimated the mass of the young stars in each region from the Paα luminosity. For an instantaneous burst of SF, the Starburst99 code (Leitherer et al. 1999) provides the ionizing radiation produced by a cluster as a function of time. Therefore, assuming that the regions of IC 4687 are close to the peak of the ionizing radiation production (i.e., 0-3 Myr old), we determine that these young clusters have stellar masses between 10 5.5 and 10 7 M⊙ (these are lower limits if the regions are older than 3 Myr; see also Alonso-Herrero et al. 2002). Cerviño et al. (2002) showed that for young clusters more massive than 10 5 M⊙ (stellar mass) the uncertainties due to the IMF sampling are less than 25%.
Consequently, it is not likely that the IMF sampling has any effect on the correlation shown in Figure 7 at the SFR level of IC 4687 using 250 pc apertures.
Local dynamical time
An alternative formulation of the SF law uses the dynamical time (or orbital time = 2πr/v rot ) to normalize the molecular gas surface density (e.g., Kennicutt et al. 2007). This formulation uses an average dynamical time for integrated measurements; it is able to recover a universal SF law valid for objects with high SFE, such as ULIRGs or sub-mm galaxies, and for normal spirals (see also Section 5.4). For our resolved observations, it is possible to estimate the dynamical time of each region from their deprojected radius (assuming i = 47° and a major axis PA of 39°) and the rotation curve derived by Bellocchi et al. (2013) using kinemetry (Krajnović et al. 2006). In Figure 8, we show that the depletion time (or SFE) does not depend on the dynamical time (r S = 0.02, p = 0.86). In the right panel of Figure 5, we show the spatial distribution of depletion times where no clear trends are seen. This absence of correlation is also seen in resolved observations of normal spirals. Therefore, the local dynamical time does not seem to influence the local SFE at spatial scales of ∼250 pc.
Integrated properties of IC 4687
The SF laws derived for high-z galaxies are mostly based on integrated measurements. Therefore, it is useful to calculate the integrated properties of IC 4687 using an approach comparable to high-z studies. We limited our integrated study to the 3×3 kpc 2 area covered by the field of view of SINFONI (see Section 2) to obtain an accurate measurement of the extinction. This area contains about 85% of the total CO(2-1) and 90% of the observed Paα emissions. Therefore, by limiting our analysis to this area, we only miss 10-15% of the total emission. We derived an integrated extinction of A K = 1.2 ± 0.1 mag (A V = 10.1±0.8 mag) based on the integrated Brγ/Brδ ratio. The total SFR, 43±4 M⊙ yr −1 , is calculated from the extinction-corrected Brγ integrated flux because the HST/NICMOS Paα image is not sensitive to diffuse Paα emission due to its small pixel size; this diffuse emission would be included in integrated measurements of high-z objects. This SFR is ∼1.5 times higher than that derived from the IR luminosity, but this difference is within the assumed systematic uncertainties (see Section 3.3). From the integrated CO(2-1) emission, we derive a total cold molecular gas mass of 5.5×10 9 M⊙.
Fig. 9. Comparison of the SFR surface density as a function of the molecular gas surface density for high-z MS and sub-mm galaxies and IC 4687. The red circle indicates the integrated measurement for IC 4687, as described in Section 5, using the Galactic α CO factor. The error bars indicate systematic uncertainties due to the extinction correction in Σ SFR (vertical) and a change in the α CO factor from Galactic (assumed) to that considered for ULIRGs (horizontal; see Section 3.3). The green and red squares correspond to z ∼ 1.2 and 2.2 MS SF galaxies from Tacconi et al. (2013) and BzK z ∼ 1.5 galaxies from Daddi et al. (2010). For both datasets we used the Galactic α CO factor. The blue, purple, and orange stars correspond to sub-millimeter galaxies at z ∼ 2, 4.0, and 5.2 from Bothwell et al. (2010), Hodge et al. (2015), and Rawle et al. (2014), respectively, for which a ULIRG-like α CO factor was applied. The dotted lines indicate constant log t dep times.
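The local dynamical time used in Section 4.4 above can be sketched as follows. The inclination, position angle, and the 0.7 arcsec ≈ 250 pc scale are taken from the text; the simplified deprojection geometry, the pc-per-arcsec value, and the example offsets and rotation velocity are assumptions for illustration, not values from the Bellocchi et al. (2013) rotation curve.

```python
import numpy as np

PC_PER_ARCSEC = 357.0            # implied by 0.7 arcsec ~ 250 pc quoted in the text
INCL_DEG, PA_DEG = 47.0, 39.0    # Bellocchi et al. (2013)

def deprojected_radius_pc(d_ra_arcsec, d_dec_arcsec):
    """Galactocentric radius of a region from its sky offsets relative to the nucleus.
    A simplified deprojection; the sign conventions here are illustrative."""
    pa = np.deg2rad(PA_DEG)
    x_major = d_ra_arcsec * np.sin(pa) + d_dec_arcsec * np.cos(pa)
    y_minor = -d_ra_arcsec * np.cos(pa) + d_dec_arcsec * np.sin(pa)
    r_arcsec = np.hypot(x_major, y_minor / np.cos(np.deg2rad(INCL_DEG)))
    return r_arcsec * PC_PER_ARCSEC

def orbital_time_yr(r_pc, v_rot_kms):
    """Local dynamical (orbital) time t_dyn = 2*pi*r / v_rot, in years."""
    return 2.0 * np.pi * r_pc * 3.086e13 / v_rot_kms / 3.154e7

# A region ~1 kpc from the nucleus with an assumed rotation velocity of 150 km/s
r = deprojected_radius_pc(2.0, 1.5)
print(f"r ~ {r:.0f} pc, t_dyn ~ {orbital_time_yr(r, 150.0) / 1e6:.0f} Myr")
```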
In IC 4687, the extinction-corrected Paα emission from the regions defined in Section 3.2 accounts for ∼70% of the integrated SFR derived here. For the cold molecular gas, ∼65% of the CO(2-1) emission comes from these regions. These two fractions are similar. Therefore, the integrated t dep agrees with the average t dep of the individual regions. The effective CO(2-1) emitting area (that containing 50% of the total CO(2-1) emission) has an R 1/2 of 1.0 kpc, which corresponds to a ∼30% lower area than that of the individual regions combined. Therefore, because of the increased integrated CO(2-1) and Paα fluxes from diffuse emission and the lower emitting area estimate, both the integrated SFR and cold molecular gas surface densities are ∼2 times higher than the average values of the resolved regions in IC 4687, although the depletion times are similar. We find that the integrated H 2 and SFR surface densities of IC 4687 lie at the high end of the distributions of values measured for high-z MS galaxies. The Σ SFR and Σ H2 values directly depend on the size of the emitting region, which is not easy to estimate for integrated galaxies (see Section 5.1 and Arribas et al. 2012). The depletion time, however, is independent of the emitting region size. Therefore, we focus on the t dep differences in this section. A galaxy with the sSFR of IC 4687 (sSFR = ∼0.4 Gyr −1 ) would be a MS galaxy at z ∼ 0.9 (see Section 3.3.3). As shown in Figure 9, high-z MS galaxies, in general, are less efficient than IC 4687 at forming stars. On average, they have t dep that are 6 times longer than IC 4687 (using the Galactic α CO for IC 4687). Therefore, even if these z = 1.2 − 2.2 galaxies have sSFR, SFR, and stellar masses similar to IC 4687, they are more similar to local starbursts in terms of t dep . This can be explained by the correlation between the t dep and the sSFR normalized by the sSFR of a MS galaxy between z = 0 and 3 (Genzel et al. 2015). Since the sSFR of IC 4687 is ∼6 times higher than the local MS sSFR (Section 3.3.3), we expect a shorter depletion time in this local LIRG than in MS galaxies, at least for z < 3. On the other hand, the t dep of IC 4687 is similar to that of high-z sub-mm galaxies (Rawle et al. 2014; Hodge et al. 2015; Figure 9). The amount of cold molecular gas in IC 4687 and these sub-mm galaxies is also similar (∼10 10 M⊙); however, this depends on the CO-to-H 2 conversion used. For the sub-mm galaxies, the α CO used is similar to that of local ULIRGs, that is, it is lower than the Galactic α CO used for IC 4687. Consequently, if we applied the ULIRG α CO factor to IC 4687, its t dep would be at the high end of the t dep range measured in high-z sub-mm galaxies (Figure 9).
Systematic uncertainties
For high-z galaxies, the SFR is mainly obtained from spectral energy distribution fitting. This kind of analysis includes the IR emission; therefore, it is possible that our SFRs derived from the Paα luminosity are underestimated by a factor of two (see Piqueras López et al. 2015, submitted). The α CO factor applied to high-z galaxies depends on the object class (Galactic factor for MS galaxies; ULIRG-like factor for sub-mm galaxies). Therefore, the comparison with the molecular gas surface density of IC 4687 is somewhat uncertain. For reference, in Figure 9 we represent the range of Σ H2 assuming the Galactic and ULIRG α CO factors.
Bimodal SF law
Some studies have shown that the SF laws have a bimodal behavior with a factor 3-4 lower t dep in local U/LIRGs than in normal SF galaxies (e.g., Daddi et al. 2010; Genzel et al. 2010; García-Burillo et al. 2012). We observe this bimodal behavior when we compare MS galaxies (Figures 7 and 9) with IC 4687. To recover a universal SF law for integrated measurements of galaxies, several alternative formulations are proposed. We discuss two of them here. The first is to normalize the cold molecular gas surface density by the dynamical time (Σ H2 /t dyn ; Silk 1997; Tan 2000). When this normalization is applied to integrated measurements, a global t dyn is used (e.g., Kennicutt 1998; Daddi et al. 2010). In this case, the global dynamical times of U/LIRGs (∼45 Myr) are 4-5 times shorter than those of spirals (∼370 Myr), so the lower t dep values of U/LIRGs are compensated and a universal relation between Σ SFR and Σ H2 /t dyn is obtained (e.g., Genzel et al. 2010; García-Burillo et al. 2012). In Section 4.4, we showed that the SFE does not depend on the local t dyn in IC 4687. Thus, SF does not seem to be strongly affected by the local effects of the disk rotation. Therefore, the normalization by the global t dyn might be a simplification of the physical mechanisms leading to this universal SF law relation. Alternatively, the free-fall time is another proposed normalization for the cold molecular gas surface density (Σ H2 /t ff ; Krumholz & McKee 2005) to recover a universal SF law. The t ff is proportional to 1/ρ 0.5 , where ρ is the molecular gas volume density (Binney & Tremaine 1987). Therefore, systems with higher ρ have lower t ff . If the molecular gas density is higher in U/LIRGs than in normal spirals (Gao & Solomon 2004), the t ff normalization would recover a universal relation as well.
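For reference, the free-fall-time normalization mentioned above can be written down explicitly. The sketch assumes the usual spherical free-fall time t ff = sqrt(3π / 32 G ρ) and a mean molecular weight of 1.36 per H 2 molecule; the example densities are illustrative, not measurements of IC 4687.

```python
import numpy as np

G_CGS = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
M_H2_G = 2.0 * 1.673e-24    # mass of an H2 molecule [g]

def free_fall_time_yr(n_h2_cm3, mu=1.36):
    """t_ff = sqrt(3*pi / (32*G*rho)) for gas with H2 number density n_h2_cm3;
    mu ~ 1.36 accounts for helium."""
    rho = mu * M_H2_G * n_h2_cm3
    return np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho)) / 3.154e7

# Denser gas (as expected in U/LIRGs) has a shorter t_ff, so Sigma_H2/t_ff rises
for n in (1e2, 1e3, 1e4):
    print(f"n_H2 = {n:.0e} cm^-3  ->  t_ff ~ {free_fall_time_yr(n) / 1e6:.2f} Myr")
```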
Conclusions
We have analyzed the resolved (250 pc scales) and integrated SF law in the local LIRG IC 4687. This is one of the first studies of the SF laws on a starburst-dominated LIRG at these spatial scales. We combined new interferometric ALMA CO(2-1) observations with existing HST/NICMOS Paα narrowband imaging and VLT/SINFONI near-IR integral field spectroscopy to obtain accurate cold molecular gas masses and extinction-corrected SFR estimates. The main conclusions of our analysis are the following:
1. We defined 54 regions with a diameter of 250 pc centered at the CO and Paα emission peaks. The resolved Σ H2 values of IC 4687 lie at the high end of the values observed in local galaxies at these spatial resolutions, whereas the Σ SFR values are almost a factor of 10 higher than those of local galaxies for similar Σ H2 . For the resolved regions of IC 4687, the correlation between Σ H2 and Σ SFR is weak (r S = 0.25, p = 0.08). This suggests that the SF law breaks down in this galaxy on scales of 250 pc.
2. Compared with resolved SF laws in local galaxies, IC 4687 forms stars more efficiently. The range of t dep of the individual regions is 20-900 Myr with an average of 160 Myr. This is almost one order of magnitude shorter than that of local galaxies. For these estimates, we used a Galactic α CO conversion factor; using a ULIRG-like factor would make the t dep even shorter by an additional factor of 4-5.
3. The 1σ scatter in the t dep values is 0.4 dex. We suggest that this can be due to the rapid time evolution of the SFR tracer we used (Paα). We rule out that the IMF sampling causes the observed scatter for this galaxy because of the high young stellar masses (10 5.5−7 M⊙) of the studied regions. We also show that the local dynamical time does not significantly affect the SF efficiency in IC 4687 (up to ∼1.5 kpc away from the nucleus).
4. The galaxy integrated log Σ H2 (M⊙ pc −2 ) = 2.6 − 3.2 and log Σ SFR (M⊙ yr −1 kpc −2 ) = 1.1 ± 0.2 of IC 4687 give this object a t dep ∼6 times shorter than that of MS high-z galaxies. The Σ H2 lies at the high end of the Σ H2 distribution of high-z MS objects, whereas the Σ SFR is ∼10 times higher than in high-z objects with similar Σ H2 . There are some high-z MS galaxies with comparable Σ SFR levels, although they have higher Σ H2 values than IC 4687.
5. Our results suggest that SF is more efficient in IC 4687 than in nearby star-forming galaxies. This agrees with some works that propose the existence of a bimodal SF law. After normalizing the Σ H2 by the global dynamical time, IC 4687 lies on the universal SF law. However, since the local dynamical time does not affect the local SFE, this global dynamical time normalization could be contrived. Alternatively, a normalization using the t ff might recover a universal SF law. The t ff depends on the volume density; therefore, future high spatial resolution observations of dense molecular gas in LIRGs and normal galaxies will reveal whether the local t ff has any influence on the SFE at sub-kpc scales.
2016-01-12T17:25:14.000Z
2016-01-11T00:00:00.000
{ "year": 2016, "sha1": "87bf4af4922b654ab059d9aaf403369a5a1ad932", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2016/03/aa27693-15.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "87bf4af4922b654ab059d9aaf403369a5a1ad932", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10943561
pes2o/s2orc
v3-fos-license
Protective Effect of Crocin on Gastric Mucosal Lesions Induced by Ischemia-Reperfusion Injury in Rats.
The present study aimed to evaluate the protective effect of crocin on gastric mucosal lesions caused by ischemia-reperfusion (I/R) injury in rats. Forty male rats were randomly divided into sham, control (I/R injury) and three crocin-pretreated groups. To induce I/R lesions, the celiac artery was clamped for 30 min and then the clamp was removed to allow reperfusion for 3 h. Pretreated rats received crocin (7.5, 15 or 30 mg/kg, i.p.) 30 min prior to the induction of I/R injury. Samples of gastric mucosa were collected to measure the following variables: (1) mRNA expression of superoxide dismutase (SOD) and glutathione peroxidase (Gpx) by RT-PCR; (2) activity of superoxide dismutase and glutathione peroxidase; and (3) tissue levels of malondialdehyde (MDA). Pretreatment with crocin decreased the total area of gastric lesions. Messenger RNA expressions of SOD and Gpx in control I/R injury rats were significantly decreased as compared with the sham-operated group (P<0.001). Crocin pretreatment 30 min prior to I/R injury significantly increased mRNA expressions of SOD and Gpx genes. The gastric mucosal activities of SOD and Gpx in control I/R injury rats were significantly lower than in crocin-pretreated groups (P<0.01). Crocin pretreatment decreased mucosal production of MDA. Our findings showed the protective effect of crocin on the gastric mucosa against ischemia-reperfusion injury. These effects of crocin were mainly mediated by increasing the mRNA expression and enzyme activity of SOD and Gpx, as well as by inhibiting the production of free radicals.
Introduction
It has been shown that reactive oxygen species (ROS) such as superoxide, hydrogen peroxide, and hydroxyl radicals play a major role in the development of ischemia-reperfusion injury in a variety of tissues, including the stomach (1, 2). Under normal conditions, endogenous antioxidant enzymes such as SOD and GSH neutralize reactive radicals of oxygen molecules produced during metabolism and prevent their destructive effects (3). However, when there is an imbalance between the production of oxidants and the antioxidant defense systems, oxidative stress occurs. The human body possesses many natural antioxidants; however, none of them are capable of protection against the attack of oxidants induced by the ischemia-reperfusion condition (4). Antioxidant agents have been demonstrated to protect the gastrointestinal mucosa against I/R-induced damage in many animal reports (5-7). Crocin, the most important and abundant antioxidant constituent of Crocus sativus stigma (8), has been demonstrated to exert antioxidant properties (9). Its beneficial effects have been shown in different experimentally induced gastric ulcer models, such as NSAID-induced (10), pylorus-ligated (11), and water immersion restraint stress (12) models. To our knowledge, no previous study has specifically investigated the possible protective effect of crocin on the gastric mucosa following I/R injury in rats. Therefore, the aim of the present study was to evaluate the protective effect of crocin on gastric mucosal lesions induced by I/R injury in rats by evaluating the changes in the level of mRNA expression and enzyme activity of superoxide dismutase and glutathione peroxidase.
Chemicals and Animals
Crocin (Cat NO-17304) was purchased from Sigma (USA).
Male Wistar rats (body weight 130-160 g) were purchased from the animal house of Ahvaz Jundishapur University of Medical Sciences. The animals were fed conventional diets and had free access to tap water. They were maintained under standard conditions of humidity, temperature (22 ± 2 ºC) and light/dark cycle (12 h:12 h). The animals were deprived of food but not water 16 h before the experiment. All experiments were carried out in accordance with the ethics committee of Ahvaz Jundishapur University of Medical Sciences (PRC144).
Animal grouping and surgical procedures
Forty male Wistar rats were randomly assigned to one of 5 groups (n=8): sham, control (gastric ischemia-reperfusion; I/R injury) and 3 crocin-pretreated groups. Gastric I/R injury was induced according to the method of Wada (13). Briefly, under sodium pentobarbital anesthesia (50 mg/kg, i.p.), the rats underwent a midline laparotomy and the celiac artery was carefully isolated from its adjacent tissues. The celiac artery was then clamped by a ligature for 30 min to induce ischemia and the ligature was removed to allow reperfusion for 3 h. Sham-operated rats underwent laparotomy without induction of I/R injury. To investigate the gastroprotective effect of crocin against mucosal damage induced by I/R injury, 3 groups of animals received crocin (i.p.) at doses of 7.5, 15 or 30 mg/kg 30 min prior to I/R injury. At the end of the experiment, animals were killed by cardiac exsanguination. In order to calculate the gastric mucosal lesions, the stomachs of the animals were removed, opened along the greater curvature, rinsed with physiological saline and pinned out in ice-cold saline. To calculate the degree of gastric lesions, the total area of mucosal lesions was measured with ImageJ software. The lesion area is expressed as a percentage of the total area of the glandular stomach except for the fundus, using the following formula: UI(%) = [ulcerated area / total stomach area except fundus] × 100 (14). Immediately after photographing the stomachs for measurement of the surface area of gastric lesions, two samples of gastric mucosal tissue (50 mg each) including the lesion area and the surrounding ulcer margin were quickly excised, snap-frozen and stored in liquid nitrogen for molecular analysis, determination of enzyme activity and lipid peroxidation. The macroscopic evaluation of gastric mucosal lesions showed that the optimal protective dose of crocin against I/R injury was 15 mg/kg. Therefore, the molecular analysis was carried out in animals that received the optimal dose.
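The lesion quantification described above, and the band-density normalisation used later for the semi-quantitative RT-PCR read-out, reduce to two simple ratios, sketched below with made-up numbers. The function names and example values are illustrative, not data from the study.

```python
def ulcer_index_percent(ulcerated_area_mm2, glandular_stomach_area_mm2):
    """UI(%) = ulcerated area / total glandular stomach area (fundus excluded) x 100,
    as measured on the ImageJ photographs."""
    return 100.0 * ulcerated_area_mm2 / glandular_stomach_area_mm2

def relative_expression(target_band_density, gapdh_band_density):
    """Band-density ratio used for the RT-PCR read-out (target mRNA normalised to GAPDH),
    described in the RT-PCR analysis below."""
    return target_band_density / gapdh_band_density

# Illustrative numbers only (not data from the study)
print(ulcer_index_percent(35.0, 700.0))       # 5.0 -> 5% of the glandular mucosa ulcerated
print(relative_expression(1240.0, 2050.0))    # SOD/GAPDH density ratio ~0.60
```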
RNA extraction and cDNA synthesis
The total RNA was extracted from the frozen tissue samples using the RNeasy mini plus kit (Qiagen, USA). The concentration and purity of the total RNA were determined spectrophotometrically at 260 and 280 nm wavelength (Eppendorf, BioPhotometer Plus, Germany). The cDNA was synthesized from one microgram of the total RNA by using the Quantitect Reverse Transcription kit (Qiagen, USA) according to the manufacturer's instructions.
Reverse transcriptase PCR
All PCR amplifications were performed in a final volume of 25 µL containing 1 µg cDNA, 50 nM of specific primers, 2.5 µL of 10X PCR buffer, 1 U of Taq DNA polymerase and 50 nM of dNTPs. The mRNA levels of SOD, Gpx and the housekeeping gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were measured by RT-PCR using a Mastercycler Personal (Eppendorf AG, Hamburg, Germany). The specific primers (Bioneer, Daejoun, South Korea) used in this study are listed in Table 1. The thermal cycling conditions for the amplification of GAPDH, SOD and Gpx genes were as follows: initial denaturation at 94 °C for 5 min followed by 40 cycles of 1 min at 94 °C; the annealing time was 60 s at 53 °C for GAPDH, 54 °C for SOD and 55 °C for Gpx; and the elongation time was 1 min at 72 °C. A final elongation cycle at 72 °C for 5 min was also performed. The PCR products were analyzed on a 2% agarose gel and the density of each band was measured with ImageJ software. The levels of the target genes, SOD and Gpx, were determined by calculating the density ratio of each studied mRNA/GAPDH mRNA. The activities of superoxide dismutase (SOD) and glutathione peroxidase (GPx) in the homogenates of gastric mucosal tissue were measured using a commercial kit (Biocore Diagnostik Ulm GmbH, Veltlinerweg 29, Deutschland) according to the manufacturer's instructions.
Determination of lipid peroxidation
MDA levels were measured to show the level of lipid peroxidation. Briefly, in this method MDA reacts with thiobarbituric acid (TBA) as a thiobarbituric acid reactive substance (TBARS) to generate a red-colored complex that has peak absorbance at 532 nm. Three mL of phosphoric acid (1%) and 1 mL of TBA (0.6%) were added to 0.5 mL of homogenate and the mixture was heated for 60 min in a boiling water bath. Then, twenty-five µL of HCl was added to the ice-cooled mixture and vortexed. At the end, 3.5 mL of n-butanol was added to the mixture, which was incubated for 5 min and centrifuged at 15000 rpm for 10 min to separate the n-butanol phase. The supernatant was transferred to a new tube and its absorbance was measured at 532 nm. The standard curve of MDA was constructed over the concentration range of 0-40 μM (9).
Statistical analysis
Data are shown as mean ± S.E.M. Statistical analysis was performed by one-way ANOVA followed by post hoc Tukey's test. Significance was set at a P<0.05 level.
Effect of crocin pretreatment on gastric mucosal lesions induced by I/R injury
As shown in Figure 1, no macroscopic lesion was observed in the gastric mucosa of the normal rats in the sham-operated group. Pretreatment with crocin attenuated the gastric lesions induced by I/R injury (Figure 1). As shown in Figure 2, the total area of lesions induced by I/R injury was significantly decreased by pretreatment with crocin in a dose-dependent manner (P<0.01). The results also showed crocin at 15 mg/kg was the optimal protective dose.
Effect of crocin pretreatment on mucosal mRNA expressions of SOD and Gpx
Thirty minutes of gastric ischemia followed by 3 h of reperfusion significantly down-regulated the basal level of messenger RNA expression of SOD and Gpx as compared with the sham-operated animals (P<0.001). As shown in Figure 3, crocin pretreatment significantly reversed these reductions (P<0.01).
Effect of crocin pretreatment on SOD and Gpx activity
As shown in Figure 4 (A and B), the activities of SOD and Gpx in mucosal tissue of the stomach in control I/R injury rats were significantly lower than in sham-operated animals (P<0.01). These levels were significantly increased by a single administration of crocin (P<0.01).
Effect of crocin pretreatment on lipid peroxidation in gastric mucosal tissue
Following gastric I/R injury, the mucosal level
of MDA was significantly increased as compared with sham-operated rats (P<0.01). Free radical-induced lipid peroxidation was significantly decreased, as indicated by a reduction in the MDA levels of gastric mucosal tissue, by crocin pretreatment at all three studied doses (7.5, 15 and 30 mg/kg) (P<0.05, Figure 5).
Discussion
The results of the present study showed that a single administration of crocin protected the gastric mucosa against ischemia-reperfusion injury in rats. It has been shown that SOD attenuated the total area of gastric lesions in rats (15). Aqueous extract of saffron exhibited significant antiulcer activity in rats. The mechanism of this protection was proposed to be mediated by a decrease in gastric non-protein sulfhydryl contents (16). In addition, crocin and safranal produced their gastroprotective effects against indomethacin, in both diabetic and nondiabetic rats, by increasing glutathione levels and diminishing lipid peroxidation. Moreover, it has been reported that the neuroprotective mechanism of crocin against cerebral ischemia is mediated through suppression of free radical production by inhibiting lipid peroxidation (16). Consistent with these results, the present study revealed that crocin pretreatment decreased the tissue levels of MDA (Figure 5). Therefore, crocin protected the gastric mucosal tissue against I/R injury through reducing the production of free radicals. Vakili and Hosseinzadeh have shown that the neuro- and reno-protective effects of crocin against ischemia-reperfusion injury are mediated through an increase in the activity of SOD and Gpx (9, 16). The results of this study are also in agreement with these findings, showing that crocin pretreatment increased the activity of SOD and Gpx in gastric mucosa. Therefore, these reports together show that the protective effect of crocin against I/R injury could be largely mediated through an increase in antioxidant activity. It has been shown that the mRNA expression of the antioxidants SOD and Gpx is down-regulated following oxidative stress (17). Consistent with these results, our results showed that the gene expression of SOD and Gpx significantly decreased after gastric I/R injury. Therefore, following ischemia-reperfusion injury, there is an increase in the production of free radicals due to a reduction in antioxidants, which in turn damages tissues. The present study revealed that crocin pretreatment increased the mRNA expressions of superoxide dismutase and glutathione peroxidase in mucosal tissue of the stomach in male Wistar rats. Therefore, the other possible gastroprotective mechanism of crocin could largely be mediated by up-regulating the antioxidants SOD and Gpx. Hosseinzadeh and colleagues have shown that the reno-protective effect of crocin against renal ischemia/reperfusion injury increases in a dose-dependent manner (9). They showed that crocin protected renal function by inhibiting lipid peroxidation.
The antioxidant potency of crocin at 200 and 400 mg/
2018-04-03T00:20:32.118Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "38bc6b30f11a61f30dd700a3b9a5dd8f94c37e7f", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "38bc6b30f11a61f30dd700a3b9a5dd8f94c37e7f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
238695488
pes2o/s2orc
v3-fos-license
Association of deployment with maintenance of healthy weight among active duty service members in the Millennium Cohort Study Abstract Objective Understanding body size in relation to deployment readiness can inform Department of Defense fitness policies. This study examined longitudinal associations between deployment and changes in body mass index (BMI) among active duty service members. Methods Service branch‐specific changes in BMI post‐deployment were examined using logistic regression models among active duty Millennium Cohort Study participants without obesity at baseline (n = 22,995). BMI was categorized using self‐reported height and weight as healthy weight (18.5–24.9 kg/m2), overweight (25.0–29.9 kg/m2), and obese (≥30 kg/m2). Number of deployments between baseline and follow‐up and initial deployment lengths (in months, using service branch‐specific deployment times) were examined. Results Among the pooled population and specifically Army and Marine Corps service members without obesity, those with longer deployments were significantly less likely to maintain a non‐obese BMI than those deploying for shorter lengths. Each additional deployment increased the likelihood of maintaining a non‐obese BMI post‐deployment for personnel in the Army, Marine Corps, and within the pooled population. Conclusions Multiple deployments may support healthy weight maintenance; longer deployments may adversely impact weight maintenance. Future research should determine modifiable behaviors related to weight gain post‐deployment to inform fitness policies designed to optimize service member readiness and deployability. | INTRODUCTION Adherence to the Department of Defense's physical fitness policies is necessary to optimize service member readiness and deployability. While physical fitness requirements vary across service branch, one common element is meeting a specified threshold for weight and body mass index (BMI). Despite the fact that rates of obesity in the United States (US) military are lower overall than those observed in the general population, 1 research from the Millennium Cohort Study has shown that the prevalence of obesity doubled among service members over a 7-year follow-up period, increasing from 10% in 2001 to 20% in 2008. 2 Recent estimates suggest that 63.5% of US service members were classified as either overweight or obese in 2018. 1 Understanding factors related to increasing trends in overweight and obesity among military personnel are important for identifying potential health risks among service members, as having obesity is associated with increased risk for cardiovascular disease, diabetes, and certain cancers, 3 as well as excess mortality due to cardiovascular disease and obesity-related cancers. 4 Among service members specifically, obesity has been observed to be associated with physical and mental health comorbidities, such as hypertension, diabetes, sleep apnea, coronary heart disease, posttraumatic stress disorder, depression, and multiple somatic symptoms. 2 Similarly, comorbid medical conditions (such as high cholesterol, high blood pressure, diabetes, and sleep apnea) and joint and back disorders were recorded during nearly a quarter of medical encounters for clinical overweight or obesity among US military personnel serving in the active component between 1998 and 2010, 5 and it has been estimated that the Department of Defense (DoD) spends $1.1 billion per year on medical care costs associated with excess weight and obesity. 
6 In addition to the potential health consequences obesity may impose on service members, meeting weight standards is a critical element of military service since maintaining a healthy weight is imperative for carrying out certain military duties. In fact, service members who are unable to meet weight standards are either required to participate in a weight control program or may be discharged from service, since this indicates unsatisfactory levels of readiness. 7 For example, in 2008, more than 4,500 active duty service members were discharged early due to a failure to meet weight standards, 8 incurring significant monetary costs to the DoD related to recruitment and training, as well as a loss of force strength. 9 Given the often physically rigorous demands experienced in a deployed environment, such as walking and/or running long distances and carrying heavy packs, having a healthy weight is even more important to the safety and operational success of service members. Little research exists on the deployability of overweight or obese service members because in general, the most fit and healthy service members are the ones who deploy. 10,11 However, there is mounting evidence that BMI may not accurately predict a service member's ability to perform military-specific physical tasks. 12 Further, optimal performance may not align with the most strict body fat requirements. 7 While some studies have shown service members with the highest BMIs have the greatest risks for injury, 13 a 2017 study by Jones et al. showed that service members with the lowest BMI were at greatest risk for injury, regardless of their aerobic fitness level. 14 Furthermore, another 2017 study concluded that "older age and poor aerobic fitness are stronger predictors of injury than BMI". 15 However, BMI remains an easy, efficient, and cost-effective method for approximating a service member's level of body fat. Taken together, lower levels of fitness, higher rates of injuries due to elevated BMI, and failure to meet body fat standards leading to disability among recruits undermine force readiness. [16][17][18] Although service members must qualify as fit prior to deployment, there are factors during deployment, such as access to food, stressors, and unhealthy sleep patterns, that may influence one's ability to maintain a healthy weight and which may be exacerbated to a greater extent during longer deployments. 19 In addition, there is very little research examining the effects of multiple deployments on healthy weight maintenance. A 2011 study by Macera et al. indicated that a short duration between two deployments was associated with increased weight after deployment, 19 but it remains unclear whether weight maintenance is affected by deploying multiple times over an extended period of time. Thus, understanding the relationship between body size and deployment will be instrumental in the examination and potential revision of physical fitness requirements. The goal of this study was to examine the relationship between deployment and subsequent change in BMI among active duty participants enrolled in the Millennium Cohort Study. | Study population The Millennium Cohort Study is the largest and longest running longitudinal epidemiologic study in the DoD that was designed to track service members throughout their military careers and beyond. 
20 | BMI assessment Pre-deployment BMI at baseline and BMI at follow-up after deployment were calculated using participants' self-reported height and weight and classified as healthy weight (18.5-24.9 kg/m 2 ), overweight (25.0-29.9 kg/m 2 ), and obese (≥30 kg/m 2 ). At follow-up, underweight participants (<18.5 kg/m 2 ) represented 0.2% of the population and were combined with the healthy weight category. | Deployment assessment Deployment dates were ascertained from Defense Manpower Data Center electronic military records. Because deployment lengths vary by service branch, the average length of the first deployment between baseline and follow-up was defined as 9 months for Army, 6 months for Navy/Coast Guard, 8 months for Marine Corps, and 4 months for Air Force, 21 and categorized as deploying for at or above the branch-specific average number of months versus deploying for less than the branch average. The number of deployments between baseline and follow-up was examined as a continuous variable, ranging from 1 to 42 deployments. | Analyses Among those without obesity at baseline (i.e., those with a healthy weight or overweight BMI), separate logistic regression models estimated the likelihood of maintaining a non-obese BMI versus developing obesity in relation to the branch average length of deployment and the number of deployments following baseline. To assess the likelihood of obesity soon after return from deployment, sensitivity analyses were conducted that were restricted to 5,188 participants who completed follow-up surveys within 1 year after deployment. Analyses were pooled as well as stratified by service branch, and adjusted for all covariates (enrollment panel, age, sex, race/ethnicity, marital status, pay grade, education level, occupation), the number of deployments before baseline, and the total length of time between the baseline BMI measurement and the post-deployment follow-up BMI measurement. All analyses were conducted in SAS (Version 9.2, Cary, NC); p-value <0.05 was considered statistically significant. | RESULTS Baseline characteristics of the sample are listed in Table 1. The average length of time between baseline and follow-up was 4.6 years (SD: 2.3) among participants. Overall, a majority of participants (≥88.2%) maintained a non-obese BMI between baseline and follow-up, with the mean change in BMI between baseline and follow-up being less than 1.5 kg/m 2 for all service branches. Figure 1 displays adjusted odds ratios (AORs) and 95% confidence intervals (CIs) for maintaining a non-obese BMI among service members who had a deployment length at or above the branch average compared with those who deployed for less than the branch average. Army and Marine Corps service members were significantly less likely to maintain a non-obese BMI following a deployment that was at or above the average length for their branch compared with those who deployed for less time than the average, with similar trends among Navy/Coast Guard and Air Force personnel. In the pooled population, participants who deployed at or above the branch average length were significantly less likely to maintain a non-obese BMI compared with participants who deployed for less than the average length. Among the subset of personnel whose post-deployment follow-up was within 1 year of their last deployment, results are presented in Table S1.
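A minimal sketch of the analytic setup described in the Methods above is given below: BMI categorization from self-reported height and weight, and a logistic model for maintaining a non-obese BMI. The data are synthetic, the variable names are invented for illustration, only a subset of the covariates listed above is included, and the study itself used SAS rather than Python.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bmi_category(weight_kg, height_m):
    """BMI groups used in the study; underweight (<18.5) was folded into 'healthy'."""
    bmi = weight_kg / height_m**2
    if bmi < 25.0:
        return "healthy"
    return "overweight" if bmi < 30.0 else "obese"

# Synthetic data with invented variable names; not the Millennium Cohort data
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "long_deployment": rng.integers(0, 2, n),   # deployed at/above branch-average length
    "n_deployments": rng.integers(1, 6, n),
    "age": rng.integers(20, 45, n),
    "followup_years": rng.uniform(2, 7, n),
})
# Outcome simulated to loosely mimic the reported direction of effects
lin = 1.5 - 0.4 * df["long_deployment"] + 0.15 * df["n_deployments"]
df["maintained_nonobese"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

model = smf.logit(
    "maintained_nonobese ~ long_deployment + n_deployments + age + followup_years",
    data=df,
).fit(disp=0)
print(np.exp(model.params))              # adjusted odds ratios
print(model.conf_int().apply(np.exp))    # 95% confidence intervals
```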
AORs for maintaining a non-obese BMI with each additional deployment between baseline and follow-up among service members are displayed in Figure 2. Army and Marine Corps personnel were increasingly more likely to maintain a non-obese BMI with each additional deployment between baseline and follow-up, with a similar trend among Navy/Coast Guard and Air Force personnel. In the pooled population, service members were significantly more likely to maintain a non-obese BMI with each additional deployment. Among the subset of personnel whose post-deployment follow-up was within 1 year of their last deployment (results not shown), service members in the Army (AOR = 1.32, 95% CI: 1.13-1.54) and the pooled population (AOR = 1.12, 95% CI: 1.04-1.21) were increasingly more likely to maintain a non-obese BMI with each additional deployment. AORs for maintaining a non-obese BMI by mutually-adjusted relevant demographic and military covariates in models examining the number of deployments are listed in Table S2. | DISCUSSION This study examined the likelihood of maintaining a non-obese BMI following deployment as a function of deployment length and frequency. Personnel were less likely to maintain a non-obese weight if they experienced a deployment that was at or above the branch average in length. This may indicate that longer deployments adversely affect service members' ability to maintain a healthy weight. Conversely, each additional deployment between baseline and follow-up increased the likelihood of personnel maintaining a non-obese BMI post-deployment. These findings suggest that those who maintain a healthier weight status in general could potentially be more primed and available for multiple deployments, while preparation for additional deployments may motivate service members to maintain a healthy weight. However, these findings were somewhat unexpected given that a large number of deployments may lead to the same vulnerabilities that interrupt normal exercise, eating, and sleep patterns as longer deployments. Additionally, frequent deployments may be particularly stressful for certain service member populations, such as those with spouses and families from whom they are separated during deployments. However, in all adjusted models, maintenance of a healthy weight was not significantly associated with marital status. Measures of relevant health behaviors, such as physical activity and sleep duration, were also examined in initial analytic models but were eliminated from the final models because they did not change the magnitude of effect estimates by more than 5%. While it is true that a large number of deployments may create additional and undue stress, in this study, the average number of deployments among those who deployed more than once was 3.4 (SD: 2.1), with a median of three deployments. It is possible that personnel in this population may not have deployed frequently enough to experience deployment-related stressors such as interrupted daily routines and separation from family that could impact weight maintenance in the same manner as longer deployments. Thus, in the present study, the theory holds that those called up to deploy multiple times represent individuals that may indeed be the most fit and ready for deployment missions. This fitness for service is analogous to the healthy warrior effect where service members are often healthier than their civilian counterparts. 10,11 Additional research is needed to determine whether there is a threshold at which the frequency of deployment becomes detrimental to the maintenance of a healthy weight.
These analyses utilized data from a large, representative cohort of service members that conferred a high level of statistical power and generalizability. The prospective ascertainment of BMI allowed for the distinction of pre- and post-deployment body size, though measurement of BMI is limited by the reliance on self-reported height and weight, which may be less accurate than objective measures. Self-reported height and weight data in the present study could not be validated due to a lack of access to objectively collected anthropometric data from the services, though processes to obtain these data are underway for future studies. Additionally, BMI is a crude measure of body fat that does not account for variations in body types (i.e., muscular or athletic builds) and thus, it is possible that the overall body size or observed weight gain among participants may be due to increased muscle mass gained as a function of fitness maintenance and duties performed while in the field, rather than as a result of adipose tissue. While it is possible that other factors such as eating patterns, metabolism, and/or the microbiome may be altered by multiple deployments and thus, impact the observed findings, data relevant to these factors were not available for the study population and as such, their respective influences on weight maintenance could not be assessed. Due to loss to follow-up of survey non-respondents and a limited pool of participants with survey data available within a short time period after deployment, there were small sample sizes for some sub-populations (e.g., Marines with obesity), limiting statistical power in some analyses. Further, how personnel lost to follow-up may have differed from those included in the study population in their post-deployment BMI cannot be determined. | CONCLUSION These findings suggest that while multiple deployments may contribute to the maintenance of a healthy body weight, longer deployments may negatively impact post-deployment body size. Readiness involves ensuring that service members are deployable and can maintain their mission during combat or other stressful situations, and service members who are able to maintain a healthy weight may have higher levels of readiness and deployability compared with service members with obesity. Continued research in this area is necessary to determine additional modifiable behavioral factors that may be related to weight gain post-deployment and inform the optimization of service member readiness and deployability. ACKNOWLEDGMENTS Kimberly A. Roenfeldt and Felicia R. Carey carried out the analyses. All authors were involved in the conception of the study, writing the paper, and had final approval of the submitted and published versions. In addition to the authors, the Millennium Cohort Study team includes Jennifer Belding, PhD; Satbir Boparai, MBA; Ania
FIGURE 2 Adjusted odds ratios (95% confidence interval) of maintaining a non-obese Body Mass Index versus developing obesity with each additional deployment between baseline and follow-up. Adjusted for all covariates, the number of deployments before baseline, and time between baseline and follow-up surveys. Bolded values are statistically significant (p < 0.05)
2021-09-25T15:26:18.423Z
2021-08-28T00:00:00.000
{ "year": 2021, "sha1": "59e563b5300f4f827bcd539b58c991e8f72df93c", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Wiley", "pdf_hash": "e0e8a4787a98a5b2193a4ea773a303d895a6ae34", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
219885475
pes2o/s2orc
v3-fos-license
Rural Landscape as Heritage: Reasons for and Implications of Principles Concerning Rural Landscapes as Heritage ICOMOS-IFLA 2017
In 2011, the ICOMOS-IFLA International Scientific Committee on Cultural Landscapes (ISCCL) began the World Rural Landscapes Initiative (WRLI) project to develop a complete and systematic approach to cultural heritage for rural areas. Rural landscapes need further study in terms of methodology, operation and internationally recognised documents: protection and promotion, knowledge, methodology and management at international, national and local levels. The goals of the WRLI were: a principles text containing theoretical, methodological and operational criteria; a website; a glossary; an atlas of rural landscapes; and a general bibliography. The first goal has been achieved: Principles Concerning Rural Landscapes as Heritage was adopted as a doctrinal text by ICOMOS (2017). This paper presents the main cultural premises and contents of the Principles text: (I) the theoretical concepts of the 'Rural Landscape' and 'Rural Landscape as Heritage'; and (II) 'Action criteria' which guide the development of policies for rural landscapes as heritage and resources: knowledge, protection, sustainable management, communication and transmission of physical places and associated heritage values. This paper covers: the importance of time in policy strategy; the (false) contradiction of conservation and innovation and the concept of 'appropriate' transformation; the role of stakeholders; value recognition; knowledge; information; communication and public reception.
as an object, rich with traces of natural and human history. In this approach, the landscape is the result of centuries of small daily actions of construction and transformation carried out by agricultural workers, punctuated by single greater events-drainage works and works by large land-holders-and the construction of new urban settlements. In the recent ICOMOS Charter on the Conservation and Restoration of Cultural Heritage (ICOMOS, 2000) the landscape, understood as cultural heritage, has taken its place as an item of interest for the first time. Connected to this cultural journey is the evolution of legislation concerning heritage protection, which, above all in Western countries over the first decades of the 20th century, concerns not only 'historical monuments' (Jokilehto 1999), but also so-called 'natural beauties'. This is a concept that unifies the cultural vision (aesthetic and more) of nature with the scientific one and has become one of the roots of the contemporary understanding of landscape (Luginbühl 2012). In the past, the word 'landscape' was used to describe a purely artistic representation of places by painters. UNESCO is credited with considering rural landscapes as heritage from the 1980s onwards. In 1992, UNESCO introduced the concept of landscapes at a global level, as a substitute for the more generic 'site' which was a feature of the World Heritage Convention (UNESCO 1972). Rural Landscape was a category of interest for the Convention (Art. 1 and 2) but had become insufficient as attention shifted and grew towards a more precise understanding of the meanings of 'landscape', 'nature' and 'environment', ideas which had been previously confused or, paradoxically, seen as separate concepts (Leach 1980; D'Angelo 2001).
It is interesting to review the debate leading to the 1992 document, which has been little studied (Droste, Plachter and Rössler 1995; Fowler 2003a, 2003b; UNESCO 2003a; Cameron and Rössler 2013; Gfeller 2013), because this scientific elaboration started as a result of the difficulty in conceiving rural landscapes seen through separate visions of nature and culture ('natural sites' and 'cultural sites'), with stringent operational consequences in evaluation of candidates for the World Heritage List. At the beginning of the 1980s, there was growing demand for inclusion in the World Heritage List, leading to difficulty, in a scientific and operational mode, in clearly separating 'natural sites' from 'cultural sites' as foreseen by the World Heritage Convention. This was particularly acute in sites whose value was predominantly natural but which were historically used or built by man (cases like Meteora, in Greece; Capri, in Italy; the Lake District, in England) (Cameron and Rössler 2013, 60-64). On top of this, especially during the phases of candidacy, theoretical and methodological research was taking place on certain geographical landscapes (for example, 'terraced landscapes' and 'vineyard landscapes'). In 1984, the notion of 'rural landscapes' was introduced to the Committee meeting in Buenos Aires by the French delegation: 'Historically, since Neolithic times, in Europe at any rate, man has greatly transformed the land to cultivate it, to make it habitable. In transforming the land, he has modified the ecosystem … he has created a new land that often presents outstanding characteristics, for example the rice terraces of Java or the Philippines, which respond to the spirit of the Convention.' (Cameron and Rössler 2013, 61) An expert task-force from ICOMOS, IUCN and the International Federation of Landscape Architects (IFLA) was set up and worked over the years 1985/1986 to create the underlying documentation that would include 'rural landscapes' in the Committee Guidelines for the World Heritage List, formulating definitions and evaluation criteria. In 1987, at the World Heritage Committee, the concept of 'mixed sites' was coined, which possessed 'both cultural and natural attributes', with rural landscapes being part of this. Then, the definition changed again: the concept of 'cultural landscapes' was used by the Committee for the Lake District in England (in 1987 and 1989), an emblematic case leading to numerous requests for the site to be included on the World Heritage List, which was finally granted in 2017. The term 'cultural landscapes' is 'a new term that curiously replaced the term 'rural landscapes' without explanation' (Cameron and Rössler 2013, 66). In 1992, a definition was given to 'cultural landscapes' so they became part of the heritage category in the Guidelines, with three sub-categories: 'continuing landscapes', 'designed landscapes' and 'associative landscapes'. Since then, within the cultural landscapes 'continuing landscapes' category, the sub-category 'ongoing landscapes' has included rural landscapes even if the latter were not explicitly mentioned in the definition. Methodological, historical and practical on-site research can identify whether any other forms of 'ongoing landscapes' can be recognised in addition to rural landscapes (such as some mining landscapes). The second half of the 20th century saw profound transformations in the concept of landscapes (Luginbühl 2012). 
This was rooted in the European Landscape Convention (ELC) (Council of Europe 2000), endorsed by the Council of Europe, a document which has become a reference point for other continents, so much so that it is now possible for non-European countries to adhere to its principles. This introduced innovative concepts into the debate and the general operation, with landscapes understood as both a physical object and a cultural perception; the right of all people to enjoy quality of life in any place, avoiding strategies that would create 'protected islands' of exceptional cultural or natural significance; the need for widespread participation in place management, considering the vast areas in question, across all stakeholders; and the identity value of landscape, in addition to being a physical and cultural resource, i.e., landscape is to be considered from the heritage point of view too. The term used now is simply 'landscape', without further adjectives (such as cultural or natural or historic). This concept eliminates separation between nature and culture (landscape is 'the result of the action and interaction of natural and/or human factors', Art. 1) because a landscape is always physical and cultural. The nature-culture separation has been a part of Western culture over the 20th century (D'Angelo 2001; Olwig 2002; Scazzosi 1999; Scazzosi 2002; Donadieu, Küster and Milani 2008; Taylor and Francis 2014) and is partly linked to the growth of independent environmental and ecological scientific approaches and protection policies (Deléage 1991). This has become a subject for discussion and international debate (Beresford, Brown and Mitchell 2005), both for furthering international visibility of Oriental approaches to landscape (Taylor and Lennon 2012; Han 2012; Taylor 2012) and as a general approach to heritage (ICOMOS Australia 2013). There are also important convergences between the World Heritage Centre (WHC) and the European Landscape Convention (ELC) (Scazzosi 2003a, 2004). Historical articulation from Western cultures on the concept of landscape, both in the scientific approaches and in national legislation, is not widely known at the world level and could be useful to improve the current international debate. Toward a Systematic Approach to Concepts and Tools The international research and documents coming out of UNESCO in the 1990s demonstrated the need for a systematic vision on three aspects: 'classification' ('The first challenge…is to find an approach to the classification of typology of such landscapes…'); 'evaluation' ('The second… is to develop meaningful guidance for comparative evaluation of the quality of such landscapes…'); and 'management' ('The third challenge is perhaps the most daunting of all. Because the essence of this type of cultural landscape is its dependence on a living culture, the management of such landscapes has to be through the community, rather than of the landscape as such') (UNESCO 1995, 445). This touches on the question of criteria and the stakeholders active in the protection of a site whose end-users change it through continuous and capillary transformations. 
As long ago as 1985, expert working groups had identified all rural landscapes as sites of interest, not only those of exceptional quality: 'Continuing landscapes…are very widespread: all agrarian landscapes can be considered in that light' (UNESCO 1995, 445); only through widespread knowledge can sites of particular value be correctly identified ('[it is the] basis for selecting from such a potentially vast field') (UNESCO 1995, 445). Therefore, widespread study is necessary across the world's regions ('extensive consultation and comparative study on a regional basis are essential') (Cleere 1995, 55). The WRLI develops three crucial questions posed by debate in the 1980s which have certain key aspects. The first is scientific: finding a solution to the lack of methodology concerning the knowledge and management of the heritage aspect of rural landscapes, making the solution, in general terms, a shared, usable tool for all countries. This approach can have relevant consequences on operations over the current historical moment in which such a necessity is growing quickly. The perceived wisdom at the time was to start from an empirical basis, featuring large scale rural landscape categories then under study and often under consideration for inclusion on the list: 'landscapes associated with rice cultivation'; 'landscapes associated with pastoralist groups' (for example, the Saami population in northern Scandinavia); 'landscapes associated with non-agricultural societies' (for example, hunter-gatherer societies such as the Aboriginals of Australia); settlement landscapes or 'vernacular settlements' (as in Hungary and Slovakia) which are 'surrounded by land-holding patterns … still in use' (Cleere 1995, 55); 'and some other landscapes which have been fashioned by humanity (e.g. managed by fire regimes)' (UNESCO 1995, 445). More recently, other projects directed toward the creation of a global rural landscape protection inventory have taken place. The FAO, with its Globally Important Agricultural Heritage Systems (GIAHS), dedicates its attention primarily to maintaining site-specific technical traditions, the 'traditional agricultural knowledge systems' tied to rural locations and communities. The Convention for the Safeguarding of the Intangible Cultural Heritage (UNESCO 2003) is sometimes used as a tool to indirectly safeguard the physical aspect of rural landscapes, as it recognises the importance of specific agricultural technical traditions and includes them on the World Heritage List (WHL; this is the case of citrus fruit cultivation on the island of Pantelleria, Italy). In addition, there are catalogues of national projects, such as the Italian catalogue, which is promoted by the Ministry for Agricultural, Food and Forestry Policies (Agnoletti 2013). Each of these tools is intended to identify and protect exceptional sites in a kind of 'oasis of happiness' way. Today, the question is looked at more directly: the cultural landscapes on the WHL are numerous, and comparative, systematic study is possible. Certain rural landscape study programs are at an advanced phase. Systematic knowledge projects for all rural landscapes as heritage which exist in different regions across the globe can be collected, compared and improved. For example, in Europe, some international research has begun to apply a systematic knowledge of heritage to all rural landscapes (Meeus, Wijermans and Vroom 1990; Fairclough and Møller 2008; Pungetti and Kruse 2010). 
The second reason, closely linked to the first, concerns heritage protection and management in the development of rural sites. Rural territory is considered an economic, environmental, social and productive resource. Awareness also of its cultural relevance has grown over recent years, becoming a key expression of a population's identity. This awareness is specifically driven (and maybe accelerated) by the fact that rural landscapes in many parts of the world are undergoing radical transformations due to the growth of urban areas and the progressive abandonment of the countryside, intensive and industrial farming methods and the loss of local rural knowledge and traditions. It must be said that the process of recognising new categories of historical heritage and their protection has always had, among its drivers, the perception of a progressive separation from the idea of the asset representing contemporary values and the idea that its transformation will lead to a complete loss of historical foundation and remnants. While the sense of appreciation towards the historical and cultural values of rural landscapes among many populations in the world is increasing, there is a grave lack of criteria and strategies in place to maintain such sites unless they are of exceptional quality. It is also true that many projects, differing in scale and focus, publicly or privately managed, deal with rural landscapes as heritage: some of these succeed, others dramatically fail. This happens for a variety of reasons, including: lack of multi-disciplinary and multi-stakeholder approaches which would lead to policy able to overcome sectorial disagreements or the start-stop problems often experienced where many players are involved (experts, academics, public administration, local councils, local communities, farmers, land owners, citizens); and the lack of scientific background to define typology, definition and reading of a site, to support local, regional, international or even global knowledge and policies. The difficulty in taking decisions continues to grow, because landscapes are in continuous evolution thanks to their links to farming practices and farmers' ways of life. Clearly, such a situation requires research and experience concerning the double-sided coin of transformation and conservation, although these aspects are not necessarily contradictory. There is also difficulty in evaluating traces of heritage in the present day, the concept of authenticity and acceptance of elements of differing epochs, all now historical in significance. 'The criterion of authenticity here needs liberal interpretation, rejecting only discordant elements from an alien culture (for example, garish billboards advertising western consumer goods in an oriental agricultural landscape).' (Cleere 1995, 58) A key role was played by the ICOMOS-IFLA Charter on Historic Gardens, Florence 1982 (UNESCO 1982), which not only focused attention on open spaces but on built areas too. It introduced the theoretical and methodological question of how to protect heritage values when the assets in question are in rapid transformation, being made of natural materials, leading to specific problems that other types of historical heritage, like buildings of mineral origin (made up of, e.g., brick and stones), do not seem to encounter. 
It is worth bearing in mind that the reference for heritage protection criteria was the Venice Charter (ICOMOS 1964), and that over the 1970s, a heated debate took place (for example, in Italy) concerning the opposition of theoretical and operational conservation considering the importance of all historical phases of places, or reconstruction meaning returning the site to one specific period (Brandi 1963;Jokilehto 1999). A debate took place for historic gardens between one position considering only the permanence of the shape related to a specific period as a value, and another considering the values of its tangible materials as document of the past and as symbolic features, as is a very old tree (Scazzosi 1993, 27-83). Western cultural positions on historic gardens and landscapes were and are actually more varied than expected by non-experts and they are useful for going deeper into the methodological issues needed by the 'landscape as heritage' approach. From these needs came the WRLI whose aim was to form the foundation of a common thread concerning protection and promotion, knowledge methodology and management of rural landscapes at differing levels (international, national and local). The project was aimed at all bodies involved in such processes to encourage the exchange of experience and knowledge and strengthen awareness of the value of rural landscapes based on each one's particularities, traditions and sustainable usage. The goals of the WRLI are to create: a principles text containing theoretical, methodological and operational criteria; a website 1 to involve experts and stakeholders; a glossary; an atlas of rural landscapes; and a general bibliography. Research is carried out by academics, researchers and professionals from around the world, members of the ISCCL, but also by international associations, public bodies, universities, local groups and volunteers. Inside ISCCL, a multi-disciplinary group was set up. The WRLI has been developed according to traditional university scientific studies and is effective because multiple cultures and rural knowledge from across continents are leveraged leading to exchange, debate and sharing of the differing cultural approaches. The first goal has been achieved: a draft of a principles document was subject to an intense period of preparation by the ISCCL working group and passed through national and international committees in the ICOMOS association as well as other international cultural institutions (IUCN, FAO-GIAHS), individual academics and international cultural associations. The Principles Concerning Rural Landscapes as Heritage were approved by the IFLA in October 2017 and adopted as a doctrinal text by the ICOMOS General Assembly in Delhi in December 2017 (ICOMOS 2017a). Its predecessor was the ICOMOS-IFLA ISCCL Milano Declaration 2014. The Principles' preamble encompasses the document's raison d'être. The document sums up the value and peculiarities of rural landscapes as a physical and cultural resource. It underlines just how widespread such conditions are ('one of the most common types' of landscapes and its features of 'continuing cultural landscapes' , according to the UNESCO definition Guidelines): 'Rural landscapes are a vital component of the heritage of humanity. They are also one of the most common types of continuing cultural landscapes. There is a great diversity of rural landscapes around the world that represent cultures and cultural traditions. 
They provide multiple economic and social benefits, multifunctionality, cultural support and ecosystem services for human societies' (ICOMOS 2017a). The following is the document's fundamental focus: 'This document encourages deep reflection and offers guidance on the ethics, culture, environmental, and sustainable transformation of rural landscape systems, at all scales, and from international to local administrative levels. Acknowledging the global importance of culturally-based food production and use of renewable natural resources, and the issues and threats challenging such activities within contemporary cultural, environmental, economic, social, and legal contexts. ' (ICOMOS 2017a) The document, as is the case with all documents of this type, international or regional, which preceded it, constitutes a reference point either general or in part concerning its cultural significance. It thus becomes useful as information on the 'state of the art' in document format. The Principles text, therefore, is an addition to global documents concerning historical and cultural heritage: such as the Venice Charter (ICOMOS 1964), the UNESCO World Heritage Convention (UNESCO 1972), the Nara Document on Authenticity (ICOMOS 1994), and the Burra Charter (Australia ICOMOS 2013), expanding and integrating fields of interest. The Principles Document The 'Principles Concerning Rural Landscapes as Heritage' (ICOMOS 2017) is split into two sections. The first, ('I. Principles') is specific to the subject of interest, indicating its values ('I.A. Definitions; I.B. Importance'), but also the current risks and critical aspects to which the heritage related to rural landscapes is subject. It also considers the potential, opportunities and benefits that rural landscapes can bring in terms of a place's sustainability ('C. Threats, D. Challenges, E. Benefits, F. Sustainability'). The second section ('II. Action Criteria') provides principles and criteria for the protection and promotion of heritage related to rural landscapes. It is divided into chapters concerning the steps to be taken: knowledge, protection, sustainable management, communication and transmission of physical places and the values associated with them ('Specific measures are: understand, protect, sustainably manage transformation, communicate and transmit landscapes and their heritage values') (ICOMOS 2017a). Section I: Definitions and Values The first section defines the main theoretical questions through two concepts ('rural landscape' , and 'rural landscape as heritage'): these are fundamental in understanding the entire document. The first definition ('rural landscape') is packed with content and implications. The concept of 'rural landscape' signifies all areas resulting from the interaction of humans and nature to produce food and other renewable resources useful to humankind: '… rural landscapes are terrestrial and aquatic areas coproduced by human-nature interaction used for the production of food and other renewable natural resources' (ICOMOS 2017a). The section then presents a general ranking of points of interest, covering all parts of the globe with the intent of giving valid meaning across cultures to the areas resulting from the interaction of man and nature '… via agriculture, animal husbandry and pastoralism, fishing and aquaculture, forestry, wild food gathering, hunting, and extraction of other resources, such as salt' (ICOMOS 2017a). 
There is an implicit distinction between 'rural' and 'agricultural': agricultural activity (featuring the term 'agriculture') is an activity historically focused on sedentary food production, and takes into account scientific debates and terminology which have enlivened historical, geographical and agronomic studies on the question of agriculture and rurality which continue to this day. The term 'rural' , in the context of the Principles, is necessary to clearly articulate the types of production activities developed through the centuries in the various areas of the world, on top of that of simple agriculture. It acts as a kind of 'umbrella' definition inside which concepts like aquaculture and fishing, different kinds of animal husbandry, forestry management, hunting, natural product harvesting, extraction and working of shared resources such as salt, are collected, clarified and categorised. In various parts of the world, each one of these activities has given rise to specific landscapes and continues to do so (e.g. saltworks, fishing valleys, and pasture lands). In other cases inter-connected agriculture has spread (think of the many examples of aquaculture for fish production in the rice fields of Asiatic regions as well as in Europe, a good example being areas on the Po river plain). The first definition also introduces the concept of landscape and specifies the meaning given to it in the document. Such precision is necessary due to the changes undergone by the word's meaning during the 20 th century, moving from a concept of view and panorama based on aesthetic values, to the more complex definition now attributed, which has come to the fore over recent decades and has laid the foundations for international documents and treaties. In the Principles, landscape is understood as the copresence of physical features ('rural landscapes are 'areas'') and of meanings attributed to it ('rural areas have 'cultural meanings''): ' At the same time, all rural areas have cultural meanings attributed to them by people and communities.' (ICOMOS 2017a) The logical consequence is that 'all rural areas are landscapes' . In other words, rural activity creates rural spaces which can be read through the lens of landscape concepts, underlining both the physical characteristics and the multiple cultural values attributed to them. Therefore, all rural areas are landscapes. The field of interest of the Principles is vast ('terrestrial and aquatic areas') (ICOMOS 2017a). Rural landscapes read as heritage are omni-present without geographical distinction like distance from urban areas and size which can vary from large areas to fragmented sections ('They can be huge rural spaces, peri-urban areas as well as small spaces within built-up areas') (ICOMOS 2017a). Only densely built areas of a city are excluded together with zones with a clearly different function like mines, quarries and waste landfills. Further study and experience will help to formulate new criteria for or modifications to the Principles text (for example, the open question of the historical relationship between mines and forested areas as landscape), as has been requested by the decree from the ICOMOS General Assembly in Delhi 2017 which adopted the Principles as a doctrinal text. In the study of 'rural landscape as heritage' , the current state of conservation must never influence their significance: there is equal interest in 'both well-managed or degraded or abandoned areas that can be reused or reclaimed' (ICOMOS 2017a). 
It is clear, however, there must always be a distinction between the knowledge of a place (for this reason, degraded or abandoned sites can be considered as rich with heritage value as well-preserved areas) and the evaluation of the heritage values. The two aspects are separate, even if they are inevitably connected in terms of the decision-making process that will lead to action being taken. The field of interest is intentionally wide reaching, bringing with it the awareness that all rural areas have been subject to a long history of human-made transformation and use leaving clear, albeit sometimes difficult to recognise, traces to this day (for details see the following definition 'rural landscape as heritage'). The second definition from the Principles-'rural landscapes as heritage'-defines the concept of heritage in conjunction with the concept of landscape. The text defines how heritage can be present in rural landscapes and should be subject to study and eventual protection. Different types of heritage will be grouped into two macro areas in terms of tangible heritage ('physical attributes' like 'morphology' , 'vegetation' , 'settlements' , and 'hydrography') and 'intangible' heritage (knowledge, social structures, practices, cultural, spiritual and natural attributes). These are well known to those studying historical heritage. They make up the basis of the knowledge required to characterise a rural area, also known as 'biocultural diversity' and form one of its fundamental corner-stones. The Principles confirms that heritage values can be present in all rural areas: ' All rural areas can be read as heritage, both outstanding and ordinary, traditional and recently transformed by modernisation activities' while, depending on the location, 'heritage can be present in different types and degrees' (ICOMOS 2017a). The Principles do not address questions of evaluation, which would require specific scientific investigation (Calabrò 1981). This could be successively carried out by the WRLI. Value derives from changes in epoch and events taking place over the history of man-nature interaction: historical traces can be seen in the present, like a palimpsest ('related to many historic periods, as a palimpsest') (Corboz 1983). Studies into history, geography, environmental and landscape archeology, ecology, as well as anthropology, art history and semiology, are fundamental cultural references (Bloch 1952;Sereni 1961;Gambi 1972;Rackham 1980;Cosgrove 1984;Schama 1997;Emanuelsson 2009). These are not only relevant to clear signs of human intervention (e.g., buildings, pasture lands, and fields) but also the less evident traces of human civilisation and transformation of nature. Both situations are tricky in their reading as they require consideration in quantitative and qualitative terms of the traces-both material and immaterial-visible in the present day. The concepts expressed in the Principles in the two definitions represent a strongly innovative declaration when compared with the traditional political vision of protection of heritage that has lost efficacy, and which is based on a reading of heritage as specific areas to be chosen and protected for their exceptional qualities. 
In particular, there is a common over-simplification that tries to split rural areas into two categories: the first is related to industrialised production, having lost any historical memory or heritage value, the second is related to areas where any remaining traditional activity is viewed as an oasis of cherished values at risk of disappearing forever. This type of prejudice really distinguishes extremes of black and white, ignoring the many shades of 'colour' that exist in between if we are willing to search them out. Other points in section I of the Principles ('B. Importance, C. Threats, D. Challenges, E. Benefits, ') briefly develop themes on the reasons for the importance of rural landscapes and their benefits from a cultural heritage point of view for today's society, not forgetting the threats of their destruction and the challenges that must be faced to examine the situation from a different perspective. This is one of the most studied and well-known material by scholars, scientific and cultural associations and public administrations, both at international and local levels. Two declarations, contained in section I, sum up the main concepts that form the basis of detailed protection policy which are then further developed in section II. The first concerns the recognition of rural landscapes as a resource, considering their heritage value: 'Rural landscapes are multifunctional resources. ' In other words, rural landscapes are not only a productive, social and economic resource, as is well recognised, but have a socio-cultural value as well, constituting an added, strategic character. This will interact with other resources and increase the overall potential of such places to make them sustainable not only for local populations but all society. The second declaration concerns the creation of policy that will interpret, protect, enhance and correctly use the values of heritage, while recognising the inevitable transformations always characterising rural landscapes: 'rural landscape policies should focus on managing acceptable and appropriate changes over time, dealing with conserving, respecting, and enhancing heritage values' (ICOMOS 2017a). The ability to manage time is essential. Section II: Action Criteria Section II focuses on the fundamental criteria that will inspire courses of action. These will be complementary and will be defined with awareness of: knowledge (II.A Understand rural landscapes and their heritage values), protection (II.B Protect rural landscapes and their heritage values), sustainable transformation (II.C. Sustainably manage rural landscapes and their heritage values) and communication and public awareness (II. D. Communicate and transmit the heritage and values of rural landscapes). Certain transversal points within the Principles are useful to clarify the main cultural premises and contents of the document. The Importance of Time in Policy Strategy Time is strategic in heritage policy choice for rural landscapes. The document focuses on the process of designing and programming both daily and one-off operations at all levels: 'II.B.5. Prepare effective policies based on informed local and other knowledge of the landscapes, their strengths and weaknesses, as well as potential threats and opportunities. Define objectives and tools. Program actions with regard to long, medium, and shortterm management goals' (ICOMOS 2017a). 
Such an approach is stipulated by international documentation, in particular, the European Landscape Convention (Council of Europe 2000) Art. 6 and its Guidelines (Council of Europe 2008), but also the Guidelines for Management Plans of sites of the World Heritage List (ICOMOS 2010). The approach is laid out and tested by international operational research and through collaboration between regional and local public administration (e.g. in Europe the international projects L.O.T.O. and PaysMED 2). The key points for effective decision-making processes are: the detailed and deep knowledge of each landscape and its tangible and intangible characteristics; strengths, risks, potential and opportunity analysis (the well-known SWOT analysis); and defining landscape quality objectives specific to each place that identify strategies and actions to reach such objectives. Establishing goals must not only be an issue of rural landscape heritage protection and enhancement, but rather part of a holistic approach aimed at 'sustainable' landscape quality in all aspects of the current concept of sustainability. This integrates the original three pillars (economy, environment and society) with a fourth pillar of culture as this is a complementary resource. Possible actions are 'conservation, repair, innovation, adaptive transformation, maintenance, and long term management'. As with every landscape, one site may require all these types of actions at the same time but in differing measures, independently of recognised values (exceptional or ordinary), conservation status and geographical and administrative scale ('Define strategies and actions of dynamic conservation, repair, innovation, adaptive transformation, maintenance, and long term management') (ICOMOS 2017a, Section B.5.). Insistence on the concept of management, present in many sections of the Principles, underlines the fact that rural landscapes, along with all landscapes in fact, are subject to continuous, inevitable and irreversible transformation. Effective strategies that correctly consider heritage value will manage these transformations. Tools that lead to policy include 'laws, rules, economic strategies, governance solutions, information sharing, and cultural support' (ICOMOS 2017a, Section II.B.2.), as well as landscape and territorial planning and design. Thus, it is necessary to have cross-sector rural landscape policies, as should be the case in any effective landscape strategy, that give a new integration of different policies and tools (e.g., agriculture, energy, ecology, culture, tourism, urban planning, economy, and society). The (False) Contradiction of Conservation and Innovation: 'Appropriate' Transformation The document introduces the concept of 'dynamic conservation' ('II.B.3 Define strategies and actions of dynamic conservation…') to underline the fact that conservation of rural landscapes, as ongoing landscapes, must be understood in its own right. Over the history of cultural heritage protection, especially for buildings, conservation is often seen as a desire to stop, even 'freeze', the site to avoid further detrimental change, especially physical change. Critics have spoken of the 'muzzling' of sites, wanting to keep them as museums with artifacts or traditional techniques or ways of life to be guarded, understood by few and touched by even fewer (as in the origin of museums). Such an attitude has often led to conflict, above all, in the management of the lives of local populations. 
Such a radical interpretation of conservation, already theoretically difficult when concerning buildings and other heritage, is even less applicable in the case of ongoing landscapes and rural landscapes, as they are subject to continuous human energy-both individuals and the community as a whole-and to cycles of nature and environmental change, rules and opportunities for production of food, as well as economic, social, cultural, local and global impulses. The concept of dynamic conservation implies the centrality of the time perspective-short, medium and long term-in the choices made and actions undertaken and the awareness of the importance of management of small but daily transformations (daily management): ' As landscapes undergo continuous, irreversible, and inevitable processes of transformation, rural landscape policies should focus on managing acceptable and appropriate changes over time, dealing with conserving, respecting, and enhancing heritage values. ' (ICOMOS 2017a) The concept of dynamic conservation is strictly linked to the need to reflect on the question and methodology of the transformation. Recognition of a place's intrinsic dynamism does not mean allowing simply any change to take place, especially when based on sectoral or partial needs, rather, it means basing choices on knowledge and respect for a place and its inherited character. This is a pre-condition before making any other choices of transformation. The concept of 'respecting' inherited values and implied character requires awareness of the need to set limits and quality criteria for transformation ('acceptable and appropriate changes over time') (Roca, Claval, and Agnew 2011;Scazzosi 2011). There can be no destruction without awareness of values we destroy, even in inevitable transformation. Transformative sustainability is the goal as a future strategy. This also, means continuity. Transformation inevitability and irreversibility as a concrete and positive way to manage rural landscape quality using and enhancing heritage values involves conscious future construction, not nostalgia for a past that will not return. This is, in some ways, turning the received wisdom of heritage management on its head: the central point becomes the relationship between innovation and conservation in historical feature transformation management. This has been the subject of theoretical and practical study of historical heritage (artefacts, monuments, single buildings, historical cities and gardens, modern architecture) as can be clearly seen in the history of heritage management. It has been at the centre of recent debate and international documents concerning historical cities as the UNESCO Recommendation on the Historic Urban Landscape (UNESCO, 2011). The concept of dynamic conservation does not, however, mean ignoring the heritage value of rural landscapes. It is clear there must be a distinction between the requalification of neglected and degraded areas and those using innovation to seek new functionality. It is essential that each of us is aware of the heritage character present in a rural landscape. It must also be clear that the individual characteristics of a place must also determine its weighting and role. Landscape needs an interwoven fabric of protection, innovation and re-qualification. 
In a single area (whether that be an area of particular quality or completely normal, or to be re-qualified or innovated) certain aspects must be protected, other aspects reorganised, others again requalified and still others innovated ('Define strategies and actions of dynamic conservation, repair, innovation, adaptive transformation, maintenance') (ICOMOS 2017a, Section II.B.3.). The Principles always require a reading of heritage values in every transformative operation, recognising that the artificial nature of rural landscape will always produce some of these values, even where these may be hidden or faintly visible or not well conserved at all, because these are resources they possess. Here, limits must be assessed beyond which rural landscapes will be destroyed either in a physical way or in significance or used as a purely instrumental asset. The references in the Principles text to the European Landscape Convention 2000 (Council of Europe 2008), UNESCO orientations for Cultural Landscapes in WHL (Mitchell, Rössler and Tricaud 2009) and recent general views (Roe and Taylor 2014) are clear. The Role of the Stakeholders A second concept is present throughout the Principles text: this concerns the governance of rural landscape and the role of stakeholders, whether they be individuals, organised groups, associations, communities or public or private bodies. The document develops a cultural guideline of global operation concerning rural landscapes and the participation within these of populations, not only local ones, in the process of cultural heritage recognition, policy and strategy governance and daily management. This is the subject of a growing number of international documents, such as the Convention for the Safeguarding of Intangible Cultural Heritage (UNESCO, 2003b), the Recommendations for Historical Urban Landscape (UNESCO, 2011) and debates, e.g. the theme of the General Assembly at ICOMOS 2017 was 'Heritage and Democracy' (ICOMOS 2017b). It is the focus of general documents on landscape, especially the European Landscape Convention (Council of Europe 2000), and on heritage, such as the Faro Convention on the Value of Cultural Heritage for Society (Council of Europe 2005). The idea is that knowledge and use of heritage forms part of the citizen's right to participate in cultural life as defined in the Universal Declaration of Human Rights. The Principles confirm the importance of active participation from all stakeholders and hope for a support role to be played by public administration: 'Consider that effective policy implementation is dependent on an informed and engaged public, on their support for required strategies and involvement on actions. It is essential to complement all other actions. Public administrations should support pro-active and bottom-up initiatives' (ICOMOS 2017a, Section II.B.7.). In the case of rural landscapes, the reasons are even stronger than for other heritage areas: rural landscapes are a centuries-old tradition in whose upkeep all have a role and responsibility, individually or collectively. Stakeholders are not only the administrators, business community and corporations, but all those in daily contact with such sites, imperceptibly modifying them, like farmers above all in rural areas and citizens in the urban fabric of metropolitan areas. 
Farmers have a key role to play: they are responsible for the production of a population's source of sustenance; they are passers-on of knowledge, having, in many cases, contributed to shaping and conserving rural landscape to this day; they are the maintainers and guardians of their territory. The farming community has also the historical memory of tradition which would otherwise quickly disappear with the shift of population from rural areas to big cities. We must 'Recognise key stakeholders of rural landscapes, including rural inhabitants, and the local, indigenous, and migrant communities with connections and attachments to places, their role in shaping and maintaining the landscape, as well as their knowledge of natural and environmental conditions, past and present events, local cultures and traditions, and scientific and technical solutions trialed and implemented over the centuries.' (ICOMOS 2017a, Section II.C.2) The Principles underline the highly important role of local, indigenous or migrant populations in the conservation of the relationship between humans and nature, a role that in many areas of the world can be of paramount importance as more recent generations have begun to shun such inherited knowledge. Attention to the quality of life of rural workers is fundamental for effective policy for rural landscapes, as is recognition and respect for their professional status: a principle stemming from a positive change in consideration of the role of food production in many areas of the world which are increasingly at risk of starvation and drought (' Acknowledge that the good standard and quality of living for rural inhabitants enables strengthening of rural activities, rural landscapes, and transmission and continuity of rural practices and cultures') (ICOMOS 2017a Section II.C.2). There is a need to stop rural decline, the flight of population from rural areas and find solutions to 'needs of rural workers' quality of living, which is a prerequisite for the continuation of activities that generate and sustain rural landscapes' . Such necessities are now of a widespread nature and not only limited to economic aspects: 'Quality of living consists of both income and social appreciation, provision of public services including education, recognition of culture rights, etc. ' ('Find a balance') (ICOMOS 2017a, Section II.C.5) In the present-day relationship between urban and countryside dwelling, it is the large metropolitan areas which are experiencing growth across all continents. Within this phenomenon, urban dwellers and farmers are the key players and are often linked through forms of urban farming, a widespread trend but one which has only recently been specifically and systematically studied (Lohrberg, Licka, Scazzosi and Timpe 2016). Residents require multi-functionality from their rural landscape, which is a resource on many levels for their quality of life: 'recreation, food quality and quantity, firewood, water and clean air quality, food gardening' as well as 'ecosystem services' . They can be considered also as a new form of urban parks. In turn, farming activity can benefit from proximity to cities, where production can be integrated with other economically relevant operations ('recreation, education, agri-tourism, etc. '), in a process known as 'multi-functionality' (ICOMOS 2017a, Section II.C.4). 
In such cases, rural landscape heritage is a resource in terms of local identity, site and local residents' quality of life, quality of food (for example, in the short chain), as well as environmental knowledge, agricultural culture and techniques and oral memory. All stakeholders must be fundamentally engaged in the process of knowledge, decision making and management: 'Consider that effective policy implementation is dependent on an informed and engaged public, on their support for required strategies and involvement on actions. It is essential to complement all other actions. ' (ICOMOS 2017a, Section II.B.7) Public administration is key because it promotes and supports pro-active events and participation. The bottom up approach can find its place, complementing top down approaches, in public administration at various levels. Theoretical, methodological and experimental participation has been studied in many parts of the world to find tools and ways that guarantee efficacy but, at the same time, keep the competencies, roles and responsibility of public and private players specific and clear. Many international projects are experimenting with this, such as the recent European REACH 3 . Value Recognition: Knowledge, Information, Communication and Public Reception The Principles dedicate a specific section to the knowledge process and the areas of information, communication and public reception. These are all fundamentals in a strategic approach that understands the importance of widespread participation by the population in the management of heritage site character, where all players share a long-term view. The Principles consider the question in two sections, one focusing on knowledge entitled 'Understand Rural Landscapes and Their Heritage Values' (ICOMOS 2017a, Section II.A.) and one on value communication entitled 'Communicate and Transmit the Heritage and Values of Rural Landscapes' (ICOMOS 2017a, Section II.D), creating criteria and suggesting methods and tools. Knowledge has to be gained before any kind of actions (conservation, innovation or requalification): it is the basis for all planning, design, protection, management and monitoring tools, but also for informing and raising public awareness and training for technicians. As indicated in the definitions developed in Section I of the text entitled 'Rural Landscape and Heritage' , knowledge must be ever present in all rural landscape assessment ('Recognise that all rural landscapes have heritage values, whether assessed to be of outstanding or ordinary values') (ICOMOS 2017a, Section II.A.1), both in terms of physical characteristics as well as tangible and intangible values. As with every landscape, knowledge content should concern the current physical site characteristics; sociocultural perception; inherited and contemporary history; present day changes taking place concerning physical and cultural aspects; and ongoing dynamics and the challenges they pose. In the case of rural landscapes, historical knowledge also requires an understanding of spatial, functional, productive, cultural and social relationships which have led them to become 'production systems' and which can still be read to this day. The Principles recognise that knowledge of value is key ('such heritage values will vary with scale and character, shapes, materials, uses and functions, time periods, changes') (ICOMOS 2017a), but do not look at assessment difficulties, which would require rural landscape specific methodological investigation. 
Tools like inventories, catalogues and mapping allow for systematic knowledge at different scales ('world, regional, national, local') and, at the same time, are specific to each place: these are tools already widely used in all other areas of heritage management, from buildings and historical cities to gardens. However, for a complex subject like rural landscapes, experience is often limited and incomplete, methodology is not shared, tested or consolidated and results are difficult to compare, above all, when seen at large scales. This is especially true when considering all rural landscapes, not only sites classified as exceptional. Large scale description is often boiled down to analysis of maps and land use or geographical description/historical anecdotes with a list and mapping of a site's make up (e.g., buildings, channels, lines of trees, agriculture, and agricultural techniques). Comparison is difficult where sites have similarities (e.g. terraced landscapes, vine plantations, and places with studied history thanks, in part, to being candidates on the World Heritage List). The Rural Landscape Atlas project by ICOMOS-IFLA ISCCL is working towards methodological criteria for a unified and systematic approach to reading rural landscape at all levels. It puts forward a first classification level concerning macro-categories (clear physical or historical characteristics and comparative research on landscape on the World Heritage List). This includes a description of historical characteristics that maps organisation, function, production, social, economic and cultural relevance giving rise to characteristics partly or completely visible today: landscape as physical expression of production function, social organisation and cultural value, a kind of great production 'machine' (Lebeau 1969; Tricot 2013; Scazzosi 2018; Laviscio 2018). Knowledge production comes through integration of competencies from experts, ordinary citizens (not only those who are local) and stakeholders ('integrate local, traditional and scientific knowledge') (ICOMOS 2017a, Section II.A.4), in an interdisciplinary relationship (contribution from various fields) and is, itself, inter-disciplinary, with a reciprocal exchange and integration across all stakeholders: citizens, farmers, technicians, experts and owners ('Recognise local populations as knowledge-holders') (ICOMOS 2017a, Section II.A.6). In this framework, ease of data availability, reading and understanding is key both for specialists and non-specialists with organised data return systems. It is also necessary to have feasibility studies of the costs of inventory, catalogues and mapping. Other necessary elements of any inventorying and cataloguing project include the time necessary for data collection and difficulty in its processing, expert presence, non-expert involvement methods, investigative organisation (one superficial but geographically relevant, others going into further depth) and database comparison at different administrative levels. Experience gathered for other types of heritage is also paramount. Testing and reflection on positive and problematic results of involving local populations are underway in many countries around the world, covering a huge variety of traditional and cultural diversity. These show the efficacy of tools, approaches and communication practices based on best practice, guidelines, operational (e.g. 
help and technical desks), technical and professional training; educational programs in schools and training courses in universities; award ceremonies and widespread use of media ('Communicate awareness of the heritage values of rural landscapes through collaborative participatory actions, such as shared learning, education, capacity building, heritage interpretation and research activities') (ICOMOS 2017a, Section II.D.1). The role of technicians and experts (landscapers, planners, historians, geographers, botanists, naturalists and conservationists) changes and requires interaction, understanding and mediation, while recognising the differing roles, competencies and responsibilities each group has. Conclusions The Principles Concerning Rural Landscapes as Heritage and the World Rural Landscapes Initiative aim not only to recognise the importance of rural landscapes, but also to support the development of conservation and management policies that can be applied to them. They intend to offer an opportunity for further analysis and debate among experts and stakeholders who, at various levels, deal with rural landscapes as historical and cultural heritage and who participate in the definition of policies. The WRLI started from the theoretical gaps and methodological difficulties in the UNESCO elaboration. It aims to support scientific identification, description, comparison and evaluation during studies for site candidature on the UNESCO World Heritage List and related policies. The goal is to enhance the 'continuing cultural landscapes' category, sub-category 'ongoing landscapes'. At the same time, the WRLI will support the actions of administrations, farmers and people at all levels (national, regional, local) whenever they are conscious of the importance of heritage aspects of rural landscapes and are involved in their protection and use. The general goal is to clarify that rural heritage is both a resource for human development, the enhancement of cultural diversity and the promotion of intercultural dialogue, and part of an economic development model based on the principles of sustainable resource use. The Principles text, as with all universal texts, is not intended to be absolute and able to cover all questions and specificities of all places of the world. In addition, it is time-specific and, in the future, it should be subject to review, revisions, additions and updating, in connection with the ongoing transformation of the concepts and approaches to heritage. Notes 1. www.worldrurallandscapes.org 2. www.paysmed.net 3. www.reach-culture.eu
2020-06-04T09:05:59.421Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "4eadbf4e8202f3b3e924758ae35977457ac0b839", "oa_license": "CCBY", "oa_url": "https://built-heritage.springeropen.com/track/pdf/10.1186/BF03545709", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "dd42b2e3fa9d7a95accbd33ed14f945ae0b57898", "s2fieldsofstudy": [ "Environmental Science", "Geography", "History" ], "extfieldsofstudy": [ "Political Science" ] }
225254930
pes2o/s2orc
v3-fos-license
Remotely-Sensed Surface Temperature and Vegetation Status for the Assessment of Decadal Change in the Irrigated Land Cover of North-Central Victoria, Australia: Monitoring of irrigated land cover is important for both resource managers and farmers. An operational approach is presented to use the satellite-derived surface temperature and vegetation cover in order to distinguish between irrigated and non-irrigated land. Using an iterative thresholding procedure to minimize within-class variance, the bilevel segmentation of surface temperature and vegetation cover was achieved for each irrigation period (Spring, Summer and Autumn). The three periodic profiles were used to define irrigation land covers from 2008–2009 to 2018–2019 in a key agricultural region of Australia. The overall accuracy of identifying farms with irrigated land cover amounted to 95.7%. Total irrigated land cover was the lowest (approximately 200,000 ha) in the 2008–2009 crop year and increased more than three-fold in 2012–2013, followed by a gradual decline in the following years. Satellite images from the Landsat series (L-5, L-7 and L-8), Sentinel-2 and ASTER were found suitable for land cover classification, which is scalable from farm to regional levels. For this reason, the results are desirable for a range of stakeholders. Introduction Irrigated agriculture is a vital source of food and fiber [1][2][3]. Recently there has been an increasing reliance on irrigated agriculture worldwide as dryland agriculture has been adversely impacted by climatic changes [4]. Irrigated agriculture uses a large proportion of available freshwater globally, accounting for about 70% [2,5]. In the face of competing demands for limited water resources, there is a sharp focus on the management of irrigated agriculture [6]. Accurate mapping and monitoring of irrigated land cover is vitally important for effective irrigation management and judicious decision making. The past few decades have witnessed a wide application of Remote Sensing for mapping cropland [3,7,8]. Several studies have adopted standard procedures of different classification techniques for mapping at various spatial scales, including regional [9][10][11][12] and continental [9,11,13,14] levels. There is an increasing interest in cropland mapping at global scale utilizing high frequency coarse resolution data sets from various sensors/satellites including MODIS, AVHRR, MERIS and SPOT VGT [15,16]. Mapping irrigated agriculture and assessing irrigation activities is not common at a global or regional level [17]. There are very few studies on operational approaches to map and monitor irrigated agriculture on a routine basis [18,19]. Some recent reviews have adequately summarized the current status of Remote Sensing applications to agricultural mapping [3,7,18]. Studies on irrigated agriculture are notably scarcer than those on agriculture in general. What most studies on irrigated cropland have done so far is to delineate the usual 'irrigation' areas, which is not the same as the actually 'irrigated' land within an irrigation period. In order to investigate periodic or in-season variation in irrigation, it is important to identify the land cover which is actually 'irrigated' with sufficient spatial detail, which has been attempted in this study. Since the earliest attempts to develop vegetation indices in the 1960s-1970s [20,21], many new and modified indices have been researched that relate to plant parameters [8]. 
Most of these vegetation indices use a combination of spectral responses between the visible and the near- or mid-infrared range. However, the most widely used measure is the normalized difference vegetation index (NDVI), which is based on two spectral bands (near infrared and red). Vegetation indices including NDVI have been used in mapping agricultural land cover. However, for irrigated land cover, surface wetness is required as additional information. Surface temperature has been recognized as an indicator of surface moisture and crop water [22]. The differences in surface temperature are potential indicators of irrigation variations [7,13,19]. However, there is a lack of detailed studies using temperature information for irrigation mapping [7]. The objective of this study was: (a) to map irrigated areas for each season by using satellite-based surface temperature and NDVI; (b) to identify land cover classes by using the seasonal profile of irrigated areas; and (c) to evaluate the changes in irrigated land cover over ten years (2008–2009 to 2018–2019) in a key irrigation region located in the northern part of Victoria, Australia. For the accuracy assessment of our mapped irrigated areas, we used information on irrigation water deliveries of a recent season to determine whether the farms were actually irrigated or not within that season. Study Area The study area is located approximately between 35.14° S and 36.71° S latitude, and between 143.31° E and 146.03° E longitude, in the north-central part of Victoria, Australia (Figure 1). With the Murray River in the north, it is spread over the river catchments of Goulburn-Broken, Campaspe and Loddon. It covers an area of about 9950 sq km. About 75% of the land is irrigated. Of the total irrigated land, 87% is used for pastures, 4% for treed horticulture crops and the remaining 9% is used for other purposes including vegetables and grain crops. The climate is temperate, and the region is relatively dry with average annual rainfall of between 300 mm and 500 mm. Generally, winters (June to August) are wet, receiving most of the annual rainfall. Summers (December to February) are usually dry and are when the demand for supplemental water for crops is the highest. Irrigation demand during Spring (September to November) and Autumn (March to May) is variable. The major agricultural industries are dairy, and stone and pome fruit production. The flat terrain and shallow natural drainage of the region are overlaid by a network of irrigation channels. Multiple irrigation system configurations are used in the region including micro-irrigation, conventional sprinkler, flood and furrow. For management purposes, the irrigation region is divided into six sub-regions or 'irrigation areas' [23], as shown in Figure 1B: Central Goulburn, Shepparton, Murray Valley, Campaspe, Pyramid-Boort and Torrumbarry. Materials and Methods In this study, we present an operational approach to use satellite-based surface temperature and vegetation status to map irrigated areas with enough spatial detail and to monitor land cover changes. We introduce an approach to incorporate the relative differences of surface temperature with NDVI to map the areas of irrigated agriculture [24]. The operational approach adopted in this study was firstly to process the satellite images and generate surface temperature (Ts) and NDVI. Secondly, to identify relative differences of Ts and NDVI, appropriate thresholds were determined.
At this stage, seasonal matrices of individual pixel profiles based on Ts and NDVI were compiled. As a third step, irrigated land cover classes were generated using seasonal profiles of pixels, and maps were created for each crop year (September-May). Finally, temporal changes in the irrigated land covers were evaluated. Step 1: Data Preparation Satellite images were collected to represent the three irrigation periods (Spring, Summer and Autumn) of every crop year during the 2008-2009 to 2018-2019 seasons, except the three crop years (2010-2011, 2011-2012 and 2016-2017) when no suitable imagery was available over the study area; these have been excluded from the study (Table 1). Most of the data sets were acquired from three Landsat satellites (L5, L7 and L8) sourced from USGS (https://earthexplorer.usgs.gov/). Data gaps were filled using ASTER (https://search.earthdata.nasa.gov/) and, to a lesser extent, Sentinel-2 (https://scihub.copernicus.eu/) imagery. To have complete coverage over the study area, multiple adjacent images were acquired for each season (Figure 2). For vegetation status, NDVI was calculated from the reflectance of the near infrared (NIR) and red spectral bands as NDVI = (NIR − Red)/(NIR + Red) [25]. Standard procedures were used to calculate reflectance from the Landsat series [26,27] and ASTER images [28]; at-sensor top of the atmosphere (TOA) reflectance was derived from the calibrated pixel values using the band-specific rescaling factors. The downloaded Sentinel-2 images were already converted to reflectance [29]. Ts from the Landsat series was calculated using the standard procedures [26,27], as the at-sensor brightness temperature derived from the thermal-band radiance. Surface temperature from ASTER was calculated using band TIR4 (10.25-10.95 µm) [28]. NDVI from ASTER and Sentinel-2, as well as Ts from ASTER, were re-sampled to a 30 m resolution using the bilinear interpolation method and were adjusted to be comparable to Landsat equivalents. The adjustment factors used in this study were taken from our previous investigations [24,30]. Ts provides information on vegetation water status [31].
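For readers who want to reproduce the per-band calculations described above, a minimal Python/NumPy sketch is given below. This is an illustration rather than the authors' processing code: the rescaling coefficients (mult, add, k1, k2) stand for the band-specific constants distributed in each scene's metadata, and the exact constants differ between Landsat 5, 7 and 8.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), from reflectance arrays."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    total = nir + red
    return np.where(total != 0, (nir - red) / total, np.nan)

def toa_reflectance(dn, mult, add, sun_elev_deg):
    """At-sensor TOA reflectance from quantized pixel values (Landsat-style
    linear rescaling), corrected for sun elevation."""
    rho = mult * dn.astype(np.float64) + add
    return rho / np.sin(np.deg2rad(sun_elev_deg))

def brightness_temperature(radiance, k1, k2):
    """At-sensor brightness temperature (Kelvin) from thermal-band radiance."""
    return k2 / np.log(k1 / radiance + 1.0)
```

The cross-sensor steps mentioned in the text (bilinear resampling of the ASTER and Sentinel-2 layers to 30 m and the adjustment factors making them comparable to Landsat) would be applied after these per-band computations.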
In well-watered irrigation situations, Ts is often lower than the surrounding air temperature (Ta). In other situations, when vegetation is water-stressed, there is less transpiration and the vegetation surface temperature often rises above the surrounding Ta [32]. Therefore, it is considered useful to use the difference between surface and air temperatures (Ts − Ta) to assess vegetation water status in this study. Half-hourly Ta data, close to the time of satellite overpass, was sourced from the Bureau of Meteorology (www.bom.gov.au) for 13 weather stations across and close to the study area. Ta point data sets were rasterized using the inverse distance weighted (IDW) method to match the Ts extent and spatial resolution. The surface-air temperature difference (Ts − Ta) at the pixel level was then calculated. Step 2: Thresholding Process It is well known that an 'irrigated' crop has high vegetation and low temperature compared to other land covers within a region. To take this concept further, temperature (Ts − Ta) and vegetation (NDVI) were segmented into two classes each. Temperature classes were referred to as 'irrigated' (low Ts − Ta) and 'non-irrigated' (high Ts − Ta), and vegetation classes were 'crop' (high NDVI) and 'non-crop' (low NDVI). An iterative thresholding method was used to achieve the binary classification by minimizing the within-class variance, σ²Within(Ti) = ω0σ²0 + ω1σ²1 [33], where Ti is the threshold which varies by iteration i, ω0 and ω1 are the weights (proportion of pixels) of the two classes, and σ²0 and σ²1 are the variances of the two classes of NDVI and Ts − Ta each. For operational purposes, the thresholding procedure utilized the relationship of σ²Within with the between-class variance (σ²Between) and total variance (σ²Total), σ²Total = σ²Within + σ²Between [33]. The initial NDVI threshold (α) was taken as 0.4. All pixels >α were considered as 'crop'. The initial temperature threshold (β) was the median value of Ts − Ta. All pixels <β were taken as 'irrigated'. The iteration interval was set at 0.005 within the limit of ±0.025 of the initial NDVI threshold. For temperature, the iteration interval was set at 0.1 within the limit of ±0.5 °C of the initial Ts − Ta threshold (i.e., the median). Altogether, 11 iterations each for NDVI and Ts − Ta were performed for each image. The thresholds with minimum σ²Within were used for binary classification. Binary classes were assigned to each pixel with four combinations, which we termed pixel identification (PID), as shown in Figure 4. S1 denotes dry conditions with no or low vegetation; S2 denotes wet conditions with low or no vegetation; S3 indicates some vegetation, possibly crops, without irrigation; and S4 denotes vegetation with wet conditions indicating 'irrigated crop/pasture'. All pixels in an image of a season were assigned a PID. Thus, each pixel profile had three PID assignments representing the three irrigation periods (Spring, Summer, Autumn) in each crop year. For accuracy assessment, we used the water supply data of a recent season (2018-2019 summer) as a 'reference'. The mapped 'irrigated crop/pasture' PID (S4) of the same season was taken as an 'estimate'. Information on irrigation water supplies was sourced from the Victorian Water Register (VWR), a state-wide irrigation water database (https://waterregister.vic.gov.au/).
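The threshold search and PID assignment of Step 2 can be illustrated with the short sketch below. It is a schematic reimplementation under the stated settings (initial NDVI threshold 0.4 ± 0.025 in steps of 0.005; initial Ts − Ta threshold at the median ± 0.5 °C in steps of 0.1, i.e. 11 candidates each), not the authors' software, and the function and variable names are my own.

```python
import numpy as np

def within_class_variance(values, threshold):
    """sigma^2_within = w0*var0 + w1*var1 for a candidate threshold."""
    low, high = values[values <= threshold], values[values > threshold]
    if low.size == 0 or high.size == 0:
        return np.inf
    w0, w1 = low.size / values.size, high.size / values.size
    return w0 * low.var() + w1 * high.var()

def best_threshold(values, initial, half_range, step):
    """Evaluate candidates around the initial value and keep the one
    with the minimum within-class variance."""
    candidates = initial + np.arange(-half_range, half_range + step / 2, step)
    return min(candidates, key=lambda t: within_class_variance(values, t))

def assign_pid(ndvi, ts_minus_ta):
    """Combine the two binary layers into the four PID classes S1-S4."""
    valid = ~np.isnan(ndvi) & ~np.isnan(ts_minus_ta)
    t_ndvi = best_threshold(ndvi[valid], initial=0.4, half_range=0.025, step=0.005)
    t_temp = best_threshold(ts_minus_ta[valid],
                            initial=np.nanmedian(ts_minus_ta),
                            half_range=0.5, step=0.1)
    crop = ndvi > t_ndvi              # 'crop' vs 'non-crop'
    wet = ts_minus_ta < t_temp        # 'irrigated' vs 'non-irrigated'
    pid = np.full(ndvi.shape, "S1", dtype=object)   # dry, low or no vegetation
    pid[wet & ~crop] = "S2"                          # wet, low or no vegetation
    pid[~wet & crop] = "S3"                          # vegetation without irrigation
    pid[wet & crop] = "S4"                           # irrigated crop/pasture
    return pid, t_ndvi, t_temp
```

Because only the relative separation of the two classes matters, the exact candidate grid is a design choice; the ±0.025 and ±0.5 °C search windows simply keep the final thresholds close to the operational starting points.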
Step 3: Identifying Irrigated Land Cover Classes Seasonal profiles of pixels were based on the NDVI and Ts − Ta binary classes. These pixel profiles provided an indication for each season as to whether or not irrigation was actually applied and the crop was actively growing. A set of rules was developed to identify irrigated land cover classes (Table 2). Pixels with the 'S4' PID were classed as irrigated. However, when some isolated pixels with the 'S3' PID were located within or on the border of an S4 cluster, those were also recognized as irrigated. Otherwise, all pixels with S1, S2 and S3 were recognized as non-irrigated. A contiguity analysis was carried out on the land cover raster layer of each crop year to identify and filter out small isolated groups of pixels, which were considered 'noise'. Groups of six pixels or less (<0.5 ha) were considered unlikely to belong to any managed irrigated crop or pasture. Table 2. Land cover classification criteria. Step 4: Evaluating the Irrigated Land Cover Changes To evaluate the temporal changes, maps and diagrams were generated using the pixel-level land cover classes of each crop year. Maps of irrigated land cover classes were prepared for visual evaluation. Illustrations were created to show the changes of land cover classes for the total region as well as for the six individual irrigation sub-regions. Results Irrigation is the application of supplemental water to crops in order to maintain and enhance crop growth. Therefore, in the management of irrigation, due consideration is given to the amount of irrigation water that is to be applied to meet the crop water demand. As the crop water demand varies due to multiple factors including crop type, phenology and evapotranspiration [34], the irrigation application also varies. In North-Central Victoria, we have identified seven types of irrigated land cover (Table 2).
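Referring back to Step 3 above, the rule set and contiguity filter could be sketched as follows. This is an illustrative reading of the published rules (S4 pixels are irrigated, isolated S3 pixels adjoining an S4 cluster are absorbed, and clusters of six pixels or less are dropped as noise); scipy is used here only for the connected-component labelling, and the function names are my own.

```python
import numpy as np
from scipy import ndimage

def irrigated_mask(pid):
    """Start from S4 pixels, then absorb S3 pixels that touch an S4 cluster."""
    s4 = pid == "S4"
    s3 = pid == "S3"
    grown = ndimage.binary_dilation(s4)       # S4 clusters plus their immediate border
    return s4 | (s3 & grown)

def remove_small_clusters(mask, min_pixels=7):
    """Contiguity analysis: drop isolated groups of six pixels or less, treated as noise."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return mask & keep
```

Combining the filtered per-season masks for Spring, Summer and Autumn then yields the seasonal land cover classes of Table 2 (for example, a pixel irrigated in all three seasons falls into the 'perennially active' class described below).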
The land which is actively irrigated throughout all seasons is designated as 'perennially active', which refers to either perennial pasture (e.g., perennial ryegrass) or a perennial horticulture crop. Other land areas are seasonally irrigated in one or two seasons (Table 2), which refer to either annual pasture, annual horticulture or a seasonal crop. To check the accuracy of whether a land parcel was 'irrigated' or not, we used the 2018-19 summer season data sets. An accuracy assessment was carried out at the farm level (irrigation water delivery unit). Figure 5 shows the spread of farms selected for accuracy assessment. Farms receiving ≤10 ML of water within this season were considered as not actively managed for plant production and were excluded from the accuracy assessment. All farms that received irrigation water deliveries during this period were treated as 'irrigated' and others as 'non-irrigated'. These were taken as a 'reference' indicating 'actual' irrigation occurrence. The farms with an area (≥1 ha) identified as actively irrigated ('S4') were considered as 'estimates'. Altogether, 5650 farms were selected for accuracy assessment. Producer's and user's accuracies were calculated as per the standard procedure [35,36]. Table 3 shows the results of the accuracy assessment. The values in Table 3a are the number of farms. The values in Table 3b are the proportion (%) of farms. The overall accuracy was 95.7 percent. In the sub-sections below, we present the temporal changes in irrigated land cover at regional as well as sub-regional levels. Also, we present the changes in an important land cover class (i.e., 'perennially active') that has significant implications for dairy pastures and perennial horticulture. Changes in Sub-Regions The six sub-regions experienced changes in the area of irrigated land cover with moderate differences (Figure 8). Pyramid-Boort and Torrumbarry, located in the western part, appeared to have recovered from drought sooner than the rest of the region. Changes in Perennially Active Class across Sub-Regions A large proportion of the 'perennially active' land cover class is occupied by perennial pasture (Lolium perenne L.) followed by perennial horticulture. Changes in irrigation over the last 10 years have been driven by several factors including seasonal extremes, commodity prices and policy reforms in agriculture. However, these changes in irrigated land cover are not uniform across the region. Figure 9 shows the level of changes that occurred in the surface area occupied by the perennially active class of land cover in the six sub-regions during the study period.
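For reference, the farm-level producer's, user's and overall accuracies quoted above follow the standard confusion-matrix definitions; a minimal sketch is given below. The example counts are invented placeholders, not the farm numbers reported in Table 3a of the paper.

```python
import numpy as np

def accuracy_summary(confusion):
    """confusion[i, j] = number of farms with reference class i mapped to class j,
    classes ordered as [irrigated, non-irrigated]."""
    confusion = np.asarray(confusion, dtype=float)
    overall = np.trace(confusion) / confusion.sum()
    producers = np.diag(confusion) / confusion.sum(axis=1)   # per reference class
    users = np.diag(confusion) / confusion.sum(axis=0)       # per mapped class
    return overall, producers, users

# Placeholder counts only, for demonstration.
example = [[90, 5],
           [3, 102]]
overall, producers, users = accuracy_summary(example)
```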
In Central Goulburn, the total area of the perennially active class in 2008-2009 was approximately 10,000 ha, which increased to over 40,000 ha in 2013-2014. Thereafter, there has been a consistent decline, reaching down to approximately 22,000 ha in 2018-2019 (Figure 9A). The decline may be attributed to the trend of transition from dairy to annual horticulture in this sub-region. Still, a large proportion of irrigation water here is used by dairy farmers [38]. In the Shepparton Sub-region, the total area of the perennially active class in 2008-2009 was a little over 5000 ha, which increased to approximately 8000 ha in 2009-2010 and to 14,000 ha in 2012-2013. The changes in the subsequent years have not been uniform, though the area was reduced to approximately 10,000 ha in 2018-2019 (Figure 9B). There has been a reduction in dairy in the last five years, transitioning to non-dairy activities including mixed farming and cropping [38]. In Murray Valley, the area of the perennially active class in 2008-2009 was approximately 7000 ha, which increased in the subsequent years, reaching up to approximately 22,500 ha in 2014-2015. Thereafter there was a decline in area, reaching down to approximately 14,500 ha in 2018-2019 (Figure 9C). There has been a reduction in the dairy and perennial horticulture industries during the last five years. A transition from dairy to cropping has occurred in this sub-region as well [38]. In Campaspe, the area of the perennially active class also declined over the study period (Figure 9D). The decline is related to the reduction in the dairy industry in recent years. This sub-region is better known for cropping and mixed farming than for dairying [38]. The Pyramid-Boort Sub-region has the highest extent of cropping land use in the region. The other dominant land use is mixed farming. Dairying is limited, with a notable reduction found especially in the south-east in the recent past [38]. This is reflected in the relatively low area of the perennially active class, which was approximately 7000 ha in 2017-2018 and 5000 ha in 2018-2019 (Figure 9E). In the past few years, there has been some decrease in the dairy industry in the central and south-eastern half of Torrumbarry. This did not create dramatic changes in the total area of the perennially active class from 2012-2013 until 2017-2018, ranging between 17,000 ha and 19,000 ha. However, in 2018-2019, the area was reduced to approximately 12,500 ha (Figure 9F). Recently, mixed farming and grazing have increased in this sub-region [38]. Discussion The study area is part of the Murray-Darling Basin (MDB). In the MDB, there are highly regulated water management provisions, which are used to provide a reliable and equitable supply of water for irrigation and other uses.
Recent regulations have strengthened the legal, market and price aspects of water supply. This has incentivized irrigators to achieve irrigation efficiencies. Irrigators hold water entitlements specifying a nominal volume of water to use. However, to accommodate the year-to-year variability in the amount of water available, annual allocations are changed. As a result, in years of drought, the volume of water actually allocated against an entitlement may be very low. During the millennium drought, annual allocations fell to as low as 10% of the pre-drought entitlements [39]. The effect of low water availability was reflected in the low area of the 2008-2009 irrigated land cover estimated in this study. This study presented the application of relative differences in temperature and vegetation to identify and monitor actual irrigated areas. Relating low surface temperature and high vegetation cover to irrigated crops is not a new concept [31,[40][41][42]. However, it was only after space-borne thermal sensors became available that regional scale studies became possible. Temperatures derived from the Landsat series (L5, L7 and L8) and ASTER are suitable for farm-level agricultural studies, both in terms of large regional coverage and appropriate spatial resolution, as presented in this study. The coarse-resolution thermal bands (e.g., MODIS and AVHRR), though not ideal for farm-scale studies, are useful to provide information at regional and landscape levels. More recently, surface temperatures from the Ecosystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) at 60 m spatial resolution have offered increased opportunities for studies in irrigated agriculture [43]. Vegetation cover (NDVI) at medium to fine resolution is readily available from several satellites (e.g., Sentinel-2, SPOT, RapidEye, QuickBird and WorldView). However, synchronous delivery of vegetation cover and surface temperature is available only from Landsat and ASTER. In some circumstances, vegetation cover (NDVI) alone may provide some indication of irrigated land, but to investigate in-season changes in irrigation, temperature differences are needed. If the study area is large with heterogeneous climatic conditions, surface temperature may not provide the desired distinction between irrigated and non-irrigated lands across the whole region. In such a situation, using the surface-air temperature difference is a better option. The operational approach to map irrigated land cover used in this study is most suited to arid or semi-arid areas where rainfall is not a confounding factor. In this study area, the rainfall totals varied from year to year and from season to season (Figure 10). Occasional spikes in rainfall were unlikely to influence the results of this study because the seasonal totals were not high enough to meet the crop water demand.
The initial threshold of 0.4 was adopted for the bilevel segmentation of NDVI in this study on the assumption that any vegetation under 0.4 NDVI is not a managed crop or pasture. However, the initial threshold for Ts − Ta was the median value of Ts − Ta, which varied as per the surface and air temperatures at the time of image acquisition. The aim was to identify pixels on the basis of 'relative differences' in Ts − Ta. The assumption was that the relatively cool pixels indicated 'irrigation' and the relatively warm pixels indicated the absence of irrigation. This assumption holds good if a large proportion of the area under study is agricultural land with no major features of climatic extremes, as is the case in this study. The relative differences in Ts − Ta were meant to seek a distinction between two situations, i.e., (1) actively irrigated land, and (2) land with no active irrigation and/or rainfed area. It is desirable that the Ts − Ta threshold and its range be revisited if significant changes occur in the land cover composition in the study area. The study area is dominated by irrigated pastures (87% of irrigated land). The key assumption made about irrigated pasture, unlike dryland pasture, is that the greenness is maintained throughout the season at an almost uniform level by applying irrigation as required. It is therefore sufficient to use one satellite image to represent an irrigation period, as done in this study. Similarly, for irrigated horticulture crops, the key assumption is that irrigation is used to maintain an optimal level of root zone moisture throughout the period of canopy growth. Therefore, the use of one image per season is considered adequate to capture irrigation activity. Our decision to use a single image per irrigation period was in accordance with previous studies.
In a study conducted previously on perennial horticulture crops in the same region, it was found that the use of a single 'mid-season' image is adequate to assess maximum crop cover because of the strong temporal stability in the NDVI response (O'Connell 2011, p. 61) [44]. Quoting multiple studies, Velpuri et al. (2009, p. 1384) [45] reported that "single date fine-resolution imagery, acquired at critical growth stages, is sufficient to identify irrigation". However, minor temporal fluctuations in vegetation cover within a season are possible due to certain factors including over-grazing of pastures, onset of crop disease or extreme weather events. Severe cases of temporal fluctuations may warrant more than one image per season for land cover analysis. Conclusions The synchronous measures of surface temperature and vegetation cover based on the satellite images from the Landsat series (L-5, L-7 and L-8) and ASTER were found to be suitable to distinguish between irrigated and non-irrigated pasture/crop to an overall accuracy of 95.7% in North-Central Victoria, Australia. The irrigation profiles of the spring, summer and autumn seasons defined by the combined binary classes of the two measures (Ts − Ta and NDVI) were useful for land cover classification, which is scalable from farm to regional levels.
2020-09-03T09:12:27.757Z
2020-09-02T00:00:00.000
{ "year": 2020, "sha1": "4ffd2365993d8fee3e1633c25237ec2cb35a16ff", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-445X/9/9/308/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "749051511c4c8aaf08264838e143a94adde1c1a1", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
15485694
pes2o/s2orc
v3-fos-license
Effects of Light Intensity Activity on CVD Risk Factors: A Systematic Review of Intervention Studies The effects of light intensity physical activity (LIPA) on cardiovascular disease (CVD) risk factors remain to be established. This review summarizes the effects of LIPA on CVD risk factors and CVD-related markers in adults. A systematic search of four electronic databases (PubMed, Academic Search Complete, SPORTDiscus, and CINAHL) examining LIPA and CVD risk factors (body composition, blood pressure, glucose, insulin, glycosylated hemoglobin, and lipid profile) and CVD-related markers (maximal oxygen uptake, heart rate, C-reactive protein, interleukin-6, tumor necrosis factor-alpha, and tumor necrosis factor receptors 1 and 2) published between 1970 and 2015 was performed on 15 March 2015. A total of 33 intervention studies examining the effect of LIPA on CVD risk factors and markers were included in this review. Results indicated that LIPA did not improve CVD risk factors and CVD-related markers in healthy individuals. LIPA was found to improve systolic and diastolic blood pressure in physically inactive populations with a medical condition. Reviewed studies show little support for the role of LIPA to reduce CVD risk factors. Many of the included studies were of low to fair study quality and used low doses of LIPA. Further studies are needed to establish the value of LIPA in reducing CVD risk. Introduction Cardiovascular disease (CVD) remains the leading cause of death worldwide [1]. Several biological risk factors, such as male gender, family history of heart disease, high blood pressure (BP), dyslipidemia, obesity, glucose abnormalities, insulin resistance, and lifestyle risk factors, such as smoking, poor diet, lack of physical activity, low cardiorespiratory fitness, excessive alcohol use, and stress, are associated with the development and progression of CVD [2,3]. Notably, these lifestyle risk factors strongly influence the established biological CVD risk factors and also affect novel pathways of risk such as inflammation [4]. For instance, physical activity and cardiorespiratory fitness (measured by maximal oxygen consumption (VO 2 max) and heart rate (HR)) are known to improve a number of traditional biological risk factors for CVD, including BP [5], high-density lipoprotein (HDL) cholesterol [6], body fat [7], and novel risk factors such as C-reactive protein (CRP) levels [8]. There is excellent evidence that physical activity, particularly moderate-to-vigorous intensity physical activity (MVPA), is effective in the prevention and treatment of CVD [9,10]. The existing public health guidelines emphasize participation in MVPA to achieve health benefits [9,10]. However, the view that physical activity has to be moderate to vigorous to achieve cardiovascular risk reduction has been questioned [11]. It is suggested that physical activity performed at a light intensity level can also provide health benefits [12,13]. As such, although early studies demonstrate that light intensity physical activity (LIPA) is not associated with reduced CVD and overall mortality rates [15,16], there is growing recognition of the potential for LIPA to reduce disease risk, particularly CVD [17]. This is emphasized by cross-sectional studies demonstrating that LIPA is associated with CVD risk factors [12,13,18]. LIPA is important to understand, from a health perspective, as adults tend to spend a greater portion of their day (6.5 hr/day [13,14]) performing LIPA compared to MVPA (0.7 hr/day [13,14]).
Many people often find it more attractive and attainable to perform LIPA than MVPA (40 < 85% VO 2 max) [19]. Furthermore, recent evidence suggests that muscle fiber recruitment during LIPA may potentially produce cellular signals which may regulate risk factors for disease [20]. As a result, clarifying the role of LIPA in CVD prevention is important given the amount of time people spend engaged in light intensity activities and its potential as an intervention target. To date, there has been no comprehensive review of literature describing the role of LIPA on CVD risk factors. Therefore, the aim of this review is to systematically examine the effects of LIPA on CVD risk factors (body composition, BP, glucose, insulin, glycosylated hemoglobin, total cholesterol, low-density lipoprotein (LDL) cholesterol, HDL cholesterol, and triglycerides) and other CVD-related markers (VO 2 max, HR, CRP, interleukin-6, tumor necrosis factor-(TNF-) alpha, TNF receptor 1 (TNFR1), and TNF receptor 2 (TNFR2)) in adults. Methods A systematic search was performed on 15 March 2015 according to PRISMA guidelines [21]. Articles were retrieved from PubMed, Academic Search Complete, SPORTDiscus, and CINAHL using multiple search criteria provided in Supplementary Table 1 in Supplementary Material available online at http://dx.doi.org/10.1155/2015/596367. Initially, titles and abstracts of identified articles were checked for relevance by two reviewers (RB and PT). Subsequently, both reviewers independently reviewed the full text of potentially eligible papers. Any disagreement between the two reviewers for inclusion was resolved through discussion. Additional articles were identified via hand-searching and reviewing the reference lists of relevant papers. Figure 1 presents the flow of papers through the study selection process. Studies were considered to be eligible for inclusion based on the following criteria: (i) participants were ≥ 18 years of age; (ii) the study examined at least one of the following CVD risk factors/markers in humans: body mass, body mass index (BMI), waist circumference (WC), hip circumference, waist-to-hip ratio (WHR), % body fat, HR, BP, VO 2 max, glucose (fasting or postprandial), glycosylated hemoglobin, insulin, total cholesterol, HDL cholesterol, LDL cholesterol, triglycerides, CRP, interleukin-6, TNF-alpha, TNF receptor 1, or TNF receptor 2 levels; (iii) the study reported an intervention (both randomized and nonrandomized) that imposed on participants a single or periodic bouts of LIPA defined as activities between 1.6 < 3.0 METs, 20 < 40% VO 2 max, and 20 < 40% heart rate reserve (HRR) or the relative intensity of 40 < 55% HR max [14,22]; (iv) the study included quantitative analysis (statistical comparison of intervention to baseline or a control group) of the effect of LIPA on at least one of the outcome measures; (v) the study was published or accepted for publication in refereed journals from 1970 up to and including the search date; (vi) the study was published in the English language. Due to the lack of a standardized definition of LIPA for resistance training, only aerobic/flexibility exercises were included in the study. Two authors (RB and PT) independently assessed the quality of the studies that met the inclusion criteria ( Table 1). 
The risk of bias and strength of evidence from individual studies were assessed using Downs and Black Checklist [23], allowing for the assessment of the methodological quality of randomized controlled trials and nonrandomized studies of health care interventions. This 27-point checklist assesses the strength of reporting, external validity, internal validity, and statistical power. As some questions are worth more than one point, the maximum score that can be received is 32. Adapted from another systematic review [24], the score obtained by each study was divided by 32 and multiplied by 100 to provide a "study quality percentage." Study quality percentages were then classified as high (66.7% or higher), fair (between 50.0 and 66.6%), and low (less than 50.0%) [24]. Following data extraction, the interventions included in this review were heterogeneous in terms of the type, frequency, and duration of physical activities, as well as body mass, physical fitness, and dietary intake of the participants. Thus, meta-analyses or pooling of data across studies would be inappropriate so a qualitative synthesis of the evidence was performed instead. A modified form of coding system described by Sallis et al. [25] was used to summarize the effect of LIPA on CVD risk factors/markers. If 0-33% of the studies reported a statistically significant difference between LIPA and CVD risk factors/markers, the result was categorized as no effect (0). If 34-59% of the studies reported a statistically significant difference, the result was categorized as inconsistent (?). If 60-100% of the studies reported a statistically significant difference, the result was rated as positive (+) or negative (−), respective of the direction of the effect. When four or more studies supported a difference or no difference, it was coded as ++, − −, or 00 to indicate consistent observations. The ?? code indicated a marker that has been examined in four or more studies with inconsistent findings (e.g., out of 5 studies, 3 indicated a significant positive effect and 2 indicated a significant negative effect). Results were then stratified by health status of the population (healthy or those with a medical condition). Studies in which participant physical activity was less than 150 min/wk of moderate intensity physical activity or 75 min/wk of vigorous intensity physical activity or participants were not engaged in regular physical activity/exercise (as described in the primary study) or participants were defined as sedentary were subsequently classified as "physically inactive" and the results are summarized separately for these studies. A summary table of the effect of LIPA on CVD risk factors and markers can be found in Table 2; the effects of LIPA on CVD risk factors/markers reported in each study are presented in Supplementary Table 3. Results demonstrated LIPA training interventions to have no significant effect on markers of body composition in physically inactive or healthy, with a medical condition, adults. All studies that examined the effect of LIPA on body mass [26,29,35,53], WC [31,32,35,41], BMI [26,31,32,41], and % body fat [30,32] reported no significant change. LIPA was found to have no effect on systolic or diastolic BP in healthy adults while improvements in BP were found in physically inactive populations with a medical condition. Three [31,32,39] of 9 studies (33%) reported significant decreases in systolic BP while 2 [31,39] of 9 studies (22%) reported significant decreases in diastolic BP. 
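The two scoring procedures described in the Methods above (the Downs and Black quality percentage and the modified Sallis et al. coding of study results) can be summarised in a few lines; the sketch below is a simplified, illustrative rendering of those rules rather than the reviewers' actual workflow, and the function names are my own.

```python
def quality_class(downs_black_score, max_score=32):
    """Convert a Downs and Black checklist score into the review's quality bands."""
    pct = downs_black_score / max_score * 100
    if pct >= 66.7:
        return "high"
    if pct >= 50.0:
        return "fair"
    return "low"

def summary_code(n_positive, n_negative, n_total):
    """Simplified rendering of the modified Sallis et al. coding:
    the share of studies reporting a significant effect sets the symbol,
    and four or more supporting studies double it to mark consistency."""
    n_sig = n_positive + n_negative
    pct_sig = 100 * n_sig / n_total
    if pct_sig <= 33:
        symbol, n_support = "0", n_total - n_sig   # most studies found no effect
    elif pct_sig < 60:
        symbol, n_support = "?", n_total           # inconsistent findings
    else:
        symbol = "+" if n_positive >= n_negative else "-"
        n_support = max(n_positive, n_negative)
    return symbol * 2 if n_support >= 4 else symbol
```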
CVD markers (waist-to-hip ratio, heart rate maximal, and tumor necrosis factor receptor 2) with only one study demonstrating the effect of light intensity activity were excluded in this summary table. BF: body fat; BMI: body mass index; BP: blood pressure; CRP: C-reactive protein; CVD: cardiovascular disease; Hba1c: glycosylated hemoglobin; HDL: highdensity lipoprotein; HR: heart rate; LDL; low-intensity lipoprotein; NA: not applicable; TNF: tumor necrosis factor; VO 2 max: maximal oxygen uptake; WC: waist circumference. LIPA was found to have no effect on glucose and insulin response in physically inactive or healthy, with a medical condition, adults. Three [33,38,40] of 16 studies (19%) reported significant decreases in glucose and 1 of 13 (8%) reported significant decrease in insulin level. When the effect of LIPA on blood lipid markers was examined, no significant changes were found for total cholesterol, HDL cholesterol, LDL cholesterol, or triglycerides in physically inactive or healthy individuals and inconsistent findings on triglycerides in healthy adults. One [50] of 11 studies (9%) reported a significant increase in total cholesterol while 2 [26,46] of 11 studies (18%) reported significant decreases in total cholesterol. One [50] of 13 studies (8%) reported a significant increase in HDL cholesterol, 5 [36,46,49,54,57] of 13 studies (38%) reported significant increases in triglycerides, and 0 of 6 studies (0%) reported an effect on LDL cholesterol. Regarding other CVD-related markers, the effect of LIPA on VO 2 max is inconclusive in physically inactive or healthy adults. Three [27,32,56] of 8 studies (38%) reported significant increases in VO 2 max. LIPA was also found to have no effect on resting HR in physically inactive or healthy adults. One [32] of 5 studies (20%) reported a significant reduction in resting HR. All studies that examined the effect of LIPA on CRP [30,37], interleukin-6 [30,34,37,43], and TNF-alpha [30,34] reported no significant effect in physically inactive, with a medical condition, adults. Discussion The effect of LIPA on markers of cardiovascular risk factors was systematically reviewed. LIPA resulted in no significant improvements in body composition, glucose, insulin (in physically inactive or healthy, with a medical condition, adults), total cholesterol, HDL cholesterol, LDL cholesterol (in physically inactive or healthy adults), or triglycerides (in physically inactive adults) and inconsistent findings on triglycerides in healthy adults. On the other hand, LIPA was found to improve systolic and diastolic BP in physically inactive populations with a medical condition. Additionally, when examining CVD-related markers, we found inconsistent results regarding the effect of LIPA on VO 2 max in physically inactive or healthy adults, no significant changes on resting HR in physically inactive or healthy adults, and no significant changes on inflammatory markers in physically inactive or with a medical condition adults. Nine studies [26, 29-32, 35, 41, 53, 55] examined the effect of LIPA on body composition and found no effect in either physically inactive or healthy, with a medical condition, populations. One study concluded that LIPA performed 30 min, 8 times a day, for 5 days, did not result in any significant change on body mass and WC [35]. 
The rest of the studies demonstrated that LIPA performed 30-90 min, 3 to 5 times per wk, for ≥7 wk, did not result in any significant effect on body mass [26,29,53,55], WC [31,32,41], BMI [26,31,32,41], WHR [26], or % body fat [30,32]. This result is consistent with previous research findings that conclude at least 250 min/wk of moderate intensity (≥3 METs) training is needed if the primary purpose of the training program is to elicit reductions in body mass and fat mass [58,59]. There are no recommended durations of LIPA required to elicit weight loss; however, the amount of LIPA required to improve body composition is likely to be much greater than that required for MVPA given the reduced intensity level. Physical activity alone if greater than 250 min/wk without caloric restriction has a limited influence on body composition [27,28,60] and may only cause 1-3% change in body mass and adipose tissue [61]. In addition, evidence suggests that the total volume of physical activity is a key factor in achieving weight loss [62]. An individual intending to lose weight through physical activity without dietary restriction would need to engage in a large volume (26 MET-hr per wk) of physical activity to achieve a 5% weight reduction [62]. In most studies to date, the volume of LIPA used is less (<10.5 MET-hr per wk) than the 26 MET-hr per wk that may be required to improve body composition. Findings of this review also indicate no significant changes in resting BP in healthy adults but found significant improvements in physically inactive individuals with a medical condition. Participants who followed a single bout (15 min) [52] and periodic bouts (2 min every 20 min over 5 hr period) [40] of treadmill walking, and long term (30-60 min 3-5x/wk, ≥10 wk) LIPA [30,32,41,53,56] demonstrated no significant changes in resting BP. Two studies [31,32] reported a decrease in systolic BP and one study reported [31] a decrease in diastolic BP following ≥ 10 wk of walking [31] and a combination of treadmill walking, stationary cycling and stepping [32]. In both studies, the improved BP response was found in physically inactive participants with hypertension. Similarly, in prehypertensive and hypertensive physically inactive, obese adults, one study [39] with a different study design (randomized cross-over study breaking up prolonged sitting with LIPA breaks) found significant reductions in systolic and diastolic BP in individuals interrupting sitting time with light intensity walking relative to individuals with uninterrupted sitting. Thus, LIPA appears unlikely to influence the BP response in normotensive populations but may be able to provide an effect in hypertensive, physically inactive populations. There were no significant improvements in glucose and insulin response following LIPA in either physically inactive or healthy, with a medical condition, adults. All 6 studies [36,44,45,47,49,63] reported no effects of glucose and insulin response during a single bout (35-237.5 min) of LIPA. Following periodic bouts (214.5 ± 28 min divided in 9 bouts, 30-60 min 3x/wk, 30 min 8x/day, 4 hr walking, and 2 hr standing/day) of LIPA, 7 [30,32,35,41,53,54,57] of 10 studies reported no significant changes in glucose and 6 [30,32,35,53,54,57] of 7 studies reported no significant changes in insulin. 
These results are consistent with epidemiological data showing no significant association between fasting glucose and time spent performing LIPA (5.7-6.0 hr/day) (but not with 2 hr plasma glucose which was found to be significantly associated with LIPA) [12,13]. In contrast, 2 studies [38,40] reported a decrease in postprandial glucose (and insulin [38]) after interrupting sitting with light intensity standing/walking. These laboratory-based studies compared a light intensity standing/walking group to a sitting group, employed LIPA (14 sessions of 2 min LIPA separated by 20 min sitting period) dispersed throughout the day, and measured postprandial glucose. These findings were validated in a recent meta-analysis that found significant reductions in blood glucose postprandial response and insulin levels after interrupting sedentary periods with LIPA breaks [64]. Another study [33] with a longer, structured, light intensity walking intervention period (120-160 min/wk walking for 6 wk) also demonstrated reductions in capillary glucose concentrations post-intervention compared to baseline. This study used obese women with gestational diabetes. In summary, there is no consistent intervention evidence to support improved glucose metabolism with LIPA in healthy adults. Studies [33,38,40] suggesting that LIPA may improve glucose and insulin response examined individuals with higher glucose baseline values or compared LIPA to a sedentary (sitting) group or used multiple bouts of LIPA dispersed throughout the day. Thus, there may be some evidence to support the view that LIPA influences glucose and insulin metabolism, but this evidence appears to be limited to individuals (1) with impaired cardiometabolic function or (2) who are compared to no activity (sedentary) control groups. In this review, 5 [36,46,49,54,57] of 10 studies demonstrated significant reductions in triglycerides following LIPA in healthy adults. These studies used short intervention periods (≤4 days) of light intensity walking and 3 studies [36,49,54] used a high fat test meal prior to blood sampling. This immediate lowering of serum triglycerides following LIPA is most likely due to enhanced triglyceride peripheral tissue uptake of serum triglycerides that result from exerciseinduced activity of lipoprotein lipase, the rate limiting enzyme for the hydrolysis of triglyceride-rich lipoproteins [67]. The increased activity of lipoprotein lipase (persisting up to 18 hr) following muscular contractions causes an increase in the removal of triglycerides from the circulation [68]. Unfortunately, none of these studies explored whether or not triglyceride reductions persisted for more than 24 hr following LIPA bout. VO 2 max, resting HR, and inflammatory markers that are known to impact CVD risk factors were also examined. LIPA had inconsistent results in regard to the effects on VO 2 max and no effect on resting HR in physically inactive or healthy adults. Studies employing long term (≥8 wk) LIPA protocols generally reported no change [28-30, 42, 48, 53, 56], while others reported improvement in VO 2 max [27,32,56] and HR [32]. It is possible that certain types of aerobic exercise may lead to health-related benefits and yet may not be of sufficient quantity or quality to improve VO 2 max or decrease resting HR [69]. 
Despres and Lamarche [70] proposed that prolonged (exact duration not specified) low intensity (approximately 50% VO 2 max) endurance exercise performed 45-60 min on an almost daily basis significantly improved insulin sensitivity and lipoprotein metabolism through mechanisms that are likely to be independent of the training-related changes in cardiorespiratory fitness. The proposed mechanisms included the net increase in energy expenditure and losses in total body fat and abdominal adipose tissue which contributed to improved carbohydrate and lipid metabolism [70]. This hypothesis, however, remains to be established. At present, only 3 [27,32,56] of 7 studies reported a positive effect on VO 2 max; 2 [27,32] of these 3 studies examined physically inactive, overweight adults. The third study [56] neglected to report baseline physical activity and BMI. Thus, the beneficial effects of LIPA, in regard to adaptations to VO 2 max, are equivocal and may be most pronounced in individuals with low levels of physical activity [71,72] suggesting that the benefits of LIPA on VO 2 max may be limited to populations who are least active. No significant changes in inflammatory markers (CRP, interleukin-6, and TNF-alpha) were found in physically inactive or with a medical condition participants engaging in a single bout (40-60 min) [37,43], periodic bouts (40 min/day for two wk) [34], or long term (30 min 3x/wk for 16 wk) [30] LIPA. Research in this area is limited and more studies are needed to clarify the effect of LIPA on inflammatory markers. Results from the interventions (3 out of 4 studies) included in this review demonstrate that acute effects are unlikely to occur and future research should seek to examine changes in inflammatory markers following participation in LIPA over longer time periods. This review provides consistent evidence that LIPA is not effective at improving CVD risk factors and other CVDrelated health markers in apparently healthy individuals. Some evidence surfaced suggesting that LIPA may improve markers of CVD risk factors (BP) in physically inactive adults with a medical condition. These findings provide some support to cross-sectional studies suggesting that LIPA may be beneficial in elderly, physically inactive, with a medical condition, individuals [65,73,74]. However, due to limited intervention studies available that have examined these cohorts of individuals, it is difficult to make conclusions with full certainty. Future studies should attempt to elucidate the effects of LIPA in elderly, physically inactive, with a medical condition, adults. Since LIPA is low intensity and appears to be most practical in physically inactive populations, daily LIPA and MVPA of participants should be accounted for in future work. The dose of LIPA used in the reviewed studies was modest in comparison to the volume of LIPA typically performed by individuals (e.g., ≤150 min/wk which equates to <10.5 METhr per wk). Therefore, future studies are encouraged to use greater doses (much higher than the recommended 150 min/wk moderate intensity physical activity due to the reduced intensity level of LIPA) to assist in clarifying the role of LIPA to elicit positive changes in CVD risk factors and CVD-related markers. Conclusions Although cross-sectional research findings [12,13,18] suggest that LIPA may help to improve an individual's metabolic profile, there is no evidence to support the effect of LIPA in providing positive changes in CVD risk factors in healthy adults. 
Little intervention evidence was found to support the positive effect of LIPA in CVD risk factors in physically inactive adults with a medical condition. In particular, significant improvements in BP following LIPA were achieved by physically inactive, hypertensive individuals [31,32,39]. However, it should be noted that many studies reviewed did not control, either statistically or by design, for potential confounding variables such as controlling for accumulated MVPA or monitoring dietary intake. Most of the studies have also used small doses of LIPA (<10.5 MET-hr per wk). Given that adults spend a considerable proportion of their day (6.5 hr/day [13,14]) performing LIPA, it may be possible that this volume of LIPA is not enough of a stimulus to promote favorable adaptations in the examined biological markers of CVD risk. Aside from increasing the volume, it may also be worthwhile to examine the effects of LIPA dispersed throughout the day similar to recent studies [35,[38][39][40] that have used regular short bouts of LIPA to interrupt prolonged periods of sitting. This may be useful as recent meta-analysis found these breaks in sitting to be associated with improved glucose and insulin response [64]. In summary, there may be some evidence to support the view that LIPA influences some CVD risk factors in certain populations, but more welldesigned experiments with greater control of confounding factors are required to confirm this.
2018-04-03T02:31:13.420Z
2015-10-12T00:00:00.000
{ "year": 2015, "sha1": "915120ec3494ed0420dee8f7a0cdb7e1cd19ff1b", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2015/596367.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ca3f98ddf8d2e8491dc550c65277f82004b9dd9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251713120
pes2o/s2orc
v3-fos-license
Higher RUNX1 expression levels are associated with worse overall and leukaemia-free survival in myelodysplastic syndrome patients Abstract RUNX1 mutations are frequently detected in various myeloid neoplasms and implicate unfavourable clinical outcomes in patients with myelodysplastic syndrome (MDS) and acute myeloid leukaemia (AML). On the other hand, high expression of RUNX1 is also correlated with poor prognosis in AML patients. However, the clinical relevancy of RUNX1 expression in MDS patients remains elusive. This study aimed to investigate the prognostic and biologic impacts of RUNX1 expression in MDS patients. We recruited 341 MDS patients who had sufficient bone marrow samples for next-generation sequencing. Higher RUNX1 expression occurred more frequently in the patients with Revised International Prognostic Scoring System (IPSS-R) higher-risk MDS than in the lower-risk group. It was closely associated with poor-risk cytogenetics and mutations in ASXL1, NPM1, RUNX1, SRSF2, STAG2, TET2 and TP53. Furthermore, patients with higher RUNX1 expression had significantly shorter leukaemia-free survival (LFS) and overall survival (OS) than those with lower expression. Subgroup analysis revealed that the higher-RUNX1 group consistently had shorter LFS and OS than the lower-RUNX1 group, regardless of whether RUNX1 was mutated or not. The same findings were observed in IPSS-R subgroups. In multivariable analysis, higher RUNX1 expression appeared as an independent adverse risk factor for survival. The prognostic significance of RUNX1 expression was validated in two external public cohorts, GSE114922 and GSE15061. In summary, we present the characteristics and prognosis of MDS patients with various RUNX1 expressions and propose that RUNX1 expression complements RUNX1 mutation in MDS prognostication, wherein patients with wild-type RUNX1 but high expression may need more proactive treatment. KEYWORDS: leukaemic stem cell signature, myelodysplastic syndrome, prognostication, RUNX1 expression, survival INTRODUCTION Myelodysplastic syndromes (MDSs) represent a heterogeneous group of malignant haematopoietic stem cell (HSC) disorders, with cardinal features of ineffective haematopoiesis, dysplasia of haematopoietic cells, genetic alterations and an inherent propensity of transformation to acute myeloid leukaemia (AML) [1]. The clinical and molecular heterogeneities make these diseases arduous to model and study, highlighting the importance of personalized management [2]. The International Prognostic Scoring System (IPSS) and the revised IPSS (IPSS-R) have been broadly utilized to risk-stratify MDS patients and guide treatments [3,4]. Nonetheless, the prognosis of patients may vary considerably, even within the same risk groups. Therefore, it is crucial to identify novel prognostic biomarkers for better risk classification of patients with MDS. Besides cytogenetical abnormalities, genetic mutations also correlate with disease phenotypes and clinical outcomes of MDS [5,6]. Interestingly, growing evidence showed that normal RUNX1 also played a critical role during leukemogenesis. Leukemic cells of core-binding factor AML and certain types of leukaemia with MLL rearrangements require normal RUNX1 to survive [14]. More recently, Wesely et al.
delicately demonstrated that the RUNX1 transcription factor is essential for maintaining LSC across various genetic subgroups in AML, implicating RUNX1 as a potential therapeutic target [19]. Moreover, high expression of RUNX1 has been shown to be intimately associated with poor prognosis in cytogenetically normal AML (CN-AML) patients [20]. Meanwhile, although MDS is also considered an LSC-derived myeloid malignancy, the clinical relevancy of RUNX1 expression in MDS patients remains obscure. Thus, this study aimed to investigate the prognostic and biologic impacts of RUNX1 expression in MDS patients. Patients We Cytogenetic study and molecular mutation analysis by targeted next-generation sequencing (NGS) Cytogenetic analyses were performed as previously described and interpreted according to the International System for Human Cytogenetic Nomenclature [25,26]. We employed the TruSight myeloid sequencing panel and the HiSeq platform to analyze gene alterations and mutant allele burden of 54 myeloid-neoplasm relevant genes (Table S1) as previously described [27] on BM samples of 333 MDS patients. The library preparation and sequencing followed the manufacturer's instructions. The Catalogue Of Somatic Mutations In Cancer database version 86 [28], ClinVar [29], dbSNP database version 151 [30], PolyPhen-2 (Polymorphism Phenotyping v2) [31], and SIFT [32] were used to evaluate the result of each variant. Library preparation and RNA sequencing We prepared the sequencing library with purified RNA as previously described [33], using the TruSeq Stranded mRNA Library Prep Kit (Illumina, San Diego, CA, USA) and following the manufacturer's recommendations. The detailed methods are described in Supplementary Method 1. Bioinformatic analysis and statistical analysis The normalized signals for RNA sequencing data were analyzed using Patient characteristics The patient characteristics are summarized in Comparison of clinical characteristics and genetic alterations between patients with higher and lower RUNX1 expression Histograms representing the distribution of RUNX1 expression were plotted in Figure S1A. We first explored the expression of RUNX1 in various IPSS-R subgroups and found that patients with higher-risk IPSS-R had higher expression of RUNX1. Additionally, Pearson's correlation revealed that RUNX1 expression significantly correlated with IPSS-R subgroups (r = 0.41, p < 0.001, Figure S1B). Specifically, patients with MDS with excess blasts (MDS-EBs) had substantially higher RUNX1 expression than those with non-EB MDS (p < 0.001, Figure 1A). The same was true in the GSE114922 cohort, wherein gene expression of BM CD34+ cells was available (p < 0.001, Figure 1B). More intriguingly, in the GSE145733 cohort, in which gene expression of CD34+ cells was analysed, RUNX1 expression was significantly correlated with blast counts (r = 0.355, p = 0.003), while patients with AML-MRC had higher RUNX1 expression than those with either MDS-EB or non-EB MDS (p = 0.037, Figure 1C), indicating that RUNX1 might play a role during the acute transformation of MDS. Meanwhile, patients with a poor-risk karyotype had higher RUNX1 expression than those with normal karyotypes and others in the NTUH cohort (p = 0.003, Figure 1D). We next examined the difference in RUNX1 expression between patients with wild-type and mutated RUNX1. Patients with RUNX1 mutations had higher RUNX1 expression than unmutated patients. The expression was remarkably higher in those carrying C-terminal mutations than others (median, unmutated vs.
N-terminal mutated vs. C-terminal mutated: 59.6 vs. 81.2 vs. 95.4, p < 0.001), as illustrated in Figure S1C. The mutation details and expression levels of RUNX1 in patients with mutant RUNX1 are displayed in Table S2. (p < 0.001). Furthermore, the higher-RUNX1 patients had higher frequencies of IPSS-R high and very-high risk MDS but lower frequencies of low and very-low risk MDS (p < 0.001). Complex karyotypes were more common in the higher-RUNX1 patients than the lower ones (19.8% vs. 6.7%, p = 0.001, Table S3). There were also more higher-RUNX1 patients harbouring poor or very poor risk karyotypes per IPSS-R classification (23.4% vs. 7.3%, p < 0.001). Regarding molecular gene alterations, 260 (78.1%) of the 333 patients with available data had at least one mutation in the 54 genes analysed. As listed in Table S4, higher RUNX1 expression was associated with mutations in ASXL1, NPM1, RUNX1, SRSF2, STAG2, TET2 and TP53, whereas lower RUNX1 expression was associated with SF3B1 mutation (p = 0.002). The effects of RUNX1 expression on LFS and OS Parameters including age, sex and those associated with RUNX1 expression (as shown above) were examined for potential confounding with RUNX1 expression (Table S5). By a threshold of 10% change in the hazard ratio (HR), IPSS-R and excess of blasts were identified as potential confounders. The higher-RUNX1 patients consistently had shorter LFS and OS than the lower-RUNX1 patients across IPSS-R lower-risk and higher-risk (high and very high risk) subgroups (Figure 5A,B and Figure S3C,D). We further analysed the influence of RUNX1 expression on clinical outcomes of MDS patients receiving different treatment regimens. The higher-RUNX1 patients consistently had inferior LFS and OS (Figure S4 and Figure S6A,B). Additionally, higher-RUNX1 patients who underwent allo-HSCT had a comparable OS to lower-RUNX1 patients with or without HSCT (Figure S6B). In multivariable analysis, we included age, sex and the confounders IPSS-R and TP53 mutations in the analysis for LFS and OS. Higher RUNX1 expression, either divided at the median or analysed as a continuous variable, remained an independent adverse risk factor (Tables S8 and S9). DISCUSSION To the best of our knowledge, this is the first study to investigate the prognostic significance of RUNX1 expression levels in MDS patients. We found that the patients with higher RUNX1 expression showed distinct clinical and biological characteristics and had shorter LFS and OS. Higher RUNX1 expression was an independent poor prognostic factor, irrespective of other risk factors in MDS patients. Furthermore, the prognostic implication of RUNX1 expression remained significant in both IPSS-R higher- and lower-risk patients as well as RUNX1-mutated and wild-type groups. RUNX1 encodes the DNA binding alpha subunit of the core binding transcription factor, which is a pivotal regulator of definitive haematopoiesis [14]. RUNX1 controls the expression of various target genes involved in haematopoietic differentiation [34,35]. The roles of RUNX1 in normal haematopoiesis are juxtaposed with high frequencies of RUNX1 mutations and translocations in leukaemia [36,37]. RUNX1 is involved in recurrent chromosomal translocations, such as t(8;21) (RUNX1-RUNX1T1) and t(3;21) (EVI1-RUNX1) in AML [21]. Besides balanced rearrangements, recurrent intragenic mutations have also been identified in AML, MDS and chronic myelomonocytic leukaemia [36,38,39]. RUNX1 somatic mutations are detected in roughly 15% of adult patients with de novo AML [36]. They are closely associated with older age, male gender and inferior prognosis compared to AML patients without RUNX1 mutations. In MDS, somatic mutations in RUNX1 occur in approximately 10% of patients.
These patients had a higher propensity and shorter latency for progression to AML than patients with wild-type RUNX1 [40]. Although RUNX1 is generally considered to be a tumour suppressor, accumulated evidence reveals that it plays a central role in leukemogenesis and can act as an oncogene as well [14,41]. Wild-type RUNX1 is required for the development of CBF-AML, including t(8;21)/RUNX1-RUNX1T1 and inv(16)/CBFB-MYH11 leukaemia, which suggests that a delicate balance between wild-type RUNX1 and the RUNX1-fusion protein contributes to leukaemia cell survival [14]. RUNX1 is also indispensable for MLL-fusion leukaemia [42,43]. Moreover, AML harbouring FLT3-ITD has higher levels of RUNX1 [44,45]. In such a context, the relationship between upregulated RUNX1 expression and disease progression, as well as the dynamics and physiologic implications of RUNX1 expression in the BM, awaits further investigation. In summary, the investigations herein provide evidence that RUNX1 expression can be prognostic for LFS and OS in patients with MDS, corresponding to findings in patients with AML. The prognostic relevance remained valid across IPSS-R subgroups, among patients with different RUNX1 mutation statuses, and in two external independent cohorts. Higher expression of RUNX1 was also confirmed to be prognostically detrimental in the multivariable analysis. In connection with the above, experimental studies will be needed to foster our understanding of the regulation of RUNX1 in the heterogeneous cellular contexts of MDS and ultimately deliver patient-tailored therapeutic avenues.
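To make the multivariable survival analysis above concrete, the following sketch shows how such a Cox proportional hazards model could be fitted in Python with the lifelines package. This is an illustration only, not the authors' code: the variable names (runx1_expr, ipssr_score, tp53_mut), the simulated data, and the effect sizes are all assumptions.

```python
# Minimal sketch of a multivariable Cox model for OS with RUNX1 expression,
# age, sex, IPSS-R score and TP53 mutation as covariates.  All data are
# simulated; column names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "runx1_expr": rng.normal(60, 20, n),    # normalized RUNX1 expression
    "age": rng.normal(68, 10, n),
    "male": rng.integers(0, 2, n),
    "ipssr_score": rng.uniform(1, 8, n),    # IPSS-R risk score
    "tp53_mut": rng.integers(0, 2, n),
})
# Simulate survival times whose hazard rises with expression, IPSS-R and TP53.
risk = 0.02 * (df["runx1_expr"] - 60) + 0.4 * (df["ipssr_score"] - 4) + 0.5 * df["tp53_mut"]
event_time = rng.exponential(scale=36.0 * np.exp(-risk))   # months to death
censor_time = rng.exponential(scale=48.0, size=n)          # months to censoring
df["os_months"] = np.minimum(event_time, censor_time)
df["death"] = (event_time <= censor_time).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()   # the exp(coef) column gives the hazard ratio for each covariate
```

In a real analysis the covariates would come from the clinical database rather than simulation, and RUNX1 expression could equivalently be entered as a binary higher/lower variable split at the median.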
2022-08-22T15:03:23.205Z
2022-08-19T00:00:00.000
{ "year": 2022, "sha1": "105986832c555e411828249aaa13d5bab7e98e46", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "5b636e0f7ec8c69f21f8ecc9b77fa8e7540d2b64", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14419407
pes2o/s2orc
v3-fos-license
From Random Matrices to Stochastic Operators We propose that classical random matrix models are properly viewed as finite difference schemes for stochastic differential operators. Three particular stochastic operators commonly arise, each associated with a familiar class of local eigenvalue behavior. The stochastic Airy operator displays soft edge behavior, associated with the Airy kernel. The stochastic Bessel operator displays hard edge behavior, associated with the Bessel kernel. The article concludes with suggestions for a stochastic sine operator, which would display bulk behavior, associated with the sine kernel. Introduction Through a number of carefully chosen, eigenvalue-preserving transformations, we show that the most commonly studied random matrix distributions can be viewed as finite difference schemes for stochastic differential operators. Three operators commonly arise, namely the stochastic Airy, Bessel, and sine operators, and these operators are associated with three familiar classes of local eigenvalue behavior: soft edge, hard edge, and bulk. For an example, consider the Hermite, or Gaussian, family of random matrices. Traditionally, a random matrix from this family has been defined as a dense Hermitian matrix with Gaussian entries, but we show that such a matrix is equivalent, via similarity, translation, and scalar multiplication, to a matrix of the form (1/h^2) ∆ + diag_{-1}(x_1, . . . , x_{n-1}) + (2/√β) · "noise", in which ∆ is the n-by-n second difference matrix, diag_{-1}(x_1, . . . , x_{n-1}) is an essentially diagonal matrix of grid points, and the remaining term is a random bidiagonal matrix of "pure noise." We claim that this matrix encodes a finite difference scheme for the operator −d^2/dx^2 + x + (2/√β) · "noise", which is the inspiration for the stochastic Airy operator. (The "noise" term will be made precise later.) The idea of interpreting the classical ensembles of random matrix theory as finite difference schemes for stochastic differential operators was originally presented in July 2003 [3], and the theory was developed in [16]. The present article contains several original contributions, including firm foundations for the stochastic Airy and Bessel operators. The standard technique for studying local eigenvalue behavior of a random matrix distribution involves the following steps. (1) Choose a family of n-by-n random matrices, n = 2, 3, 4, . . . , (2) Translate and rescale the nth random matrix to focus on a particular region of the spectrum, and (3) Let n → ∞. When this procedure is performed carefully, so that the eigenvalues near zero approach limiting distributions as n → ∞, the limiting eigenvalue behavior often falls into one of three classes: soft edge, hard edge, or bulk. The largest eigenvalues of many random matrix distributions, notably the Hermite (i.e., Gaussian) and Laguerre (i.e., Wishart) ensembles, display soft edge behavior. The limiting marginal density, as the size of the matrix approaches infinity, of a single eigenvalue at the soft edge is associated with the Airy kernel. Tracy and Widom derived formulas for these density functions in the cases β = 1, 2, 4, relating them to solutions of the Painlevé II differential equation. See Figure 1.1(a). Relevant references include [5,8,9,19,20,22,23]. The smallest eigenvalues of some random matrix distributions, notably the Laguerre and Jacobi ensembles, display hard edge behavior. The limiting marginal density of a single eigenvalue at the hard edge is associated with the Bessel kernel.
Formulas exist for these density functions as well, expressible in terms of solutions to Painlevé equations. See Figure 1.1(b). Relevant references include [5,6,11,21]. The eigenvalues in the middle of the spectra of many random matrix distributions display bulk behavior. In this case, the spacing between consecutive eigenvalues is interesting. The spacing distributions are associated with the sine kernel, and formulas for the density functions, due to Jimbo, Miwa, Môri, Sato, Tracy, and Widom, are related to the Painlevé V differential equation. See Figure 1.1(c). Relevant references include [7,11,12,18]. This article contends that the most natural setting for soft edge behavior is in the eigenvalues of the stochastic Airy operator A suggestion for a stochastic sine operator, along the lines of (1.1-1.2), is presented at the end of the article. The correct interpretations of the "noise" terms in (1.1) and (1.2) will be specified later in the article, as will boundary conditions; see Definitions 3.2 and 3.4. The parameter β has its usual meaning from random matrix theory, but now the cases β = 1, 2, 4 do not seem special. Numerical evidence is presented in The stochastic Airy, Bessel, and sine operators were discovered by interpreting the classical ensembles of random matrix theory as finite difference schemes. We argue that (1) When scaled at the soft edge, the Hermite and Laguerre matrix models encode finite difference schemes for the stochastic Airy operator. (2) When scaled at the hard edge, the Laguerre and Jacobi matrix models encode finite difference schemes for the stochastic Bessel operator. See Section 3.2 for an overview. Exactly what is meant by "scaling" will be developed later in the article. Typically, scaling involves subtracting a multiple of an identity matrix and multiplying by a scalar to focus on a particular region of the spectrum, along with a few tricks to decompose the matrix into a random part and a nonrandom part. The structured matrix models introduced by Dumitriu and Edelman [1] and further developed by Killip and Nenciu [10] and Edelman and Sutton [4] play vital roles. The original contributions of this article include the following. • The stochastic Airy and Bessel operators are defined. Care is taken to ensure that the operators involve ordinary derivatives of well behaved functions, avoiding any heavy machinery from functional analysis. • The smoothness of eigenfunctions and singular functions is investigated. In the case of the stochastic Airy operator, the kth eigenfunction is of the form f k φ, in which f k is twice differentiable and φ is a once differentiable (specifically C 3/2− ) function defined by an explicit formula. This predicts structure in the eigenvectors of certain rescaled matrix models, which can be seen numerically in Figure 1.4. Figure 1.5 considers analogous results for the stochastic Bessel operator. • The interpretation of random matrix models as finite difference schemes for stochastic differential operators is developed. This approach is demonstrated for the soft edge of Hermite, the soft and hard edges of Laguerre, and the hard edge of Jacobi. Notable work of others includes [2] and [14]. Although the stochastic Airy operator is not explictly mentioned in the large β asymptotics of Dumitriu and Edelman [2], it appears to play an important role. 
The stochastic operator approach has very recently been given a boost by Ramírez, Rider, and Virág [14], who have proved a conjecture contained in [3,16] relating the eigenvalues of the stochastic Airy operator to soft edge behavior. In addition, they have used the stochastic Airy operator to describe the soft edge distributions in terms of a diffusion process. The next section reviews necessary background material and introduces notation. Section 3 provides formal definitions for the stochastic Airy and Bessel operators and provides an overview of our results, which are developed in later sections. Background Much work in the field of random matrix theory can be divided into two classes: global eigenvalue behavior and local eigenvalue behavior. The entrywise ratio of two eigenvectors of the rescaled Hermite matrix model H β soft is "smoother" than either individual eigenvector. The plots are generated from a single random sample of H β soft with β = 2 and n = 10 5 . A log scale is used for visual appeal. ∇ refers to Matlab's gradient function, and ∇ 2 indicates two applications of the gradient function. See Section 7.3 for details. Global eigenvalue behavior refers to the overall density of eigenvalues along the real line. For example, a commonly studied distribution on nby-n Hermitian matrices known as the Hermite ensemble typically has a high density of eigenvalues near zero, but just a scattering near √ 2n by comparison. Such a statement does not describe how the eigenvalues are arranged with respect to each other in either region, however. In contrast, local eigenvalue behavior is observed by "zooming in" on a particular region of the spectrum. The statistic of concern may be the marginal distribution of a single eigenvalue or the distance between two consecutive eigenvalues, for example. Local eigenvalue behavior is determined by two factors-the distribution of the random matrix and the region of the spectrum under consideration. For example, the eigenvalues of the Hermite ensemble near zero display very different behavior from the eigenvalues near the edge of the spectrum, at √ 2n. Conceivably, the eigenvalues of a different random matrix may display entirely different behavior. Interestingly, though, the eigenvalues of many, many random matrix distributions fall into one of three classes of behavior, locally speaking. Notably, the eigenvalues of the three classical ensembles of random matrix theory-Hermite, Laguerre, and Jacobi-fall into these three classes as the size of the matrix approaches infinity. In this section, we present background material, covering the three most commonly studied random matrix distributions and the three classes of local eigenvalue behavior. 2.1. Random matrix models. There are three classical distributions of random matrix theory: Hermite, Laguerre, and Jacobi. The distributions are also called ensembles or matrix models. They are defined in this section. Also, joint distributions for Hermite eigenvalues, Laguerre singular values, and Jacobi CS values are provided. We use the word spectrum to refer to all eigenvalues or singular values or CS values, depending on context. Also, note that the language earlier in the article was loose, referring to eigenvalues when it would have been more appropriate to say "eigenvalues or singular values or CS values." 2.1.1. Hermite. The Hermite ensembles also go by the name of the Gaussian ensembles. 
Traditionally, three flavors have been studied, one for real symmetric matrices, one for complex Hermitian matrices, and one for quaternion self-dual matrices. In all three cases, the density function is in which β = 1 for the real symmetric case, β = 2 for the complex Hermitian case, and β = 4 for the quaternion self-dual case. The entries in the upper triangular part of such a matrix are independent Gaussians, although the diagonal and off-diagonal entries have different variances. The eigenvalues of the Hermite ensembles have joint density Dumitriu and Edelman extended the Hermite ensembles to all β > 0 [1]. Below, X ∼ Y indicates that X and Y have the same distribution. Definition 2.1. The n-by-n β-Hermite matrix model is the random real symmetric matrix in which G 1 , . . . , G n are standard Gaussian random variables, χ r denotes a chi-distributed random variable with r degrees of freedom, and all entries in the upper triangular part are independent. The β = 1, 2 cases can be derived by running a tridiagonalization algorithm on a dense random matrix with density function (2.1), a fact first observed by Trotter [24]. H β is the natural extension to general β, and it has the desired eigenvalue distribution. As β → ∞, the β-Hermite matrix model converges in distribution to This matrix encodes the recurrence relation for Hermite polynomials. In fact, the eigenvalues of this matrix are the roots of the nth polynomial, and the eigenvectors can be expressed easily in terms of the first n − 1 polynomials. See [16] and [17] for details. 2.1.2. Laguerre. The Laguerre ensembles are closely related to Wishart matrices from multivariate statistics. Just like the Hermite ensembles, the Laguerre ensembles come in three flavors. The β = 1 flavor is a distribution on real m-by-n matrices. These matrices need not be square, much less symmetric. The β = 2 flavor is for complex matrices, and the β = 4 flavor is for quaternion matrices. In all cases, the density function is in which A * denotes the conjugate transpose of A. For a Laguerre matrix with m rows and n columns, let a = m − n. It is well known that the singular values of this matrix are described by the density in which λ i is the square of the ith singular value. As usual, β = 1 for real entries and β = 2 for complex entries. Dumitriu and Edelman also extended this family of random matrix distributions to all β > 0 and nonintegral a. The notation in this article differs from the original notation, instead following [16]. in which χ r denotes a chi-distributed random variable with r degrees of freedom, and all entries are independent. The (n + 1)-by-n β-Laguerre matrix model, parameterized by a > 0, is with independent entries. Notice that (L β,a )(L β,a ) T and (M β,a ) T (M β,a ) are identically distributed symmetric tridiagonal matrices. This tridiagonal matrix is actually what Dumitriu and Edelman termed the β-Laguerre matrix model. For more information on why we consider two different random bidiagonal matrices, see [16]. The β = 1, 2 cases can be derived from dense random matrices following the density (2.3), via a bidiagonalization algorithm, a fact first observed by Silverstein [15]. Then the general β matrix model is obtained by extending in the natural way. As β → ∞, the n-by-n β-Laguerre matrix model approaches in distribution, and the (n + 1)-by-n β-Laguerre matrix model approaches in distribution. 
The nonzero singular values, squared, of both of these matrices are the roots of the nth Laguerre polynomial with parameter a, and the singular vectors are expressible in terms of the first n − 1 polynomials. See [16] and [17] for details. 2.1.3. Jacobi. Our presentation of the Jacobi matrix model is somewhat unorthodox. A more detailed exposition can be found in [4]. Consider the space of (2n+a+b)-by-(2n+a+b) real orthogonal matrices. A CS decomposition of a matrix X from this distribution can be computed by partitioning X into rectangular blocks of size (n + a)-by-n, (n + a)-by-(n + a + b), (n + b)-by-n, and (n + b)-by-(n + a + b), and computing singular value decompositions for the four blocks. Because X is orthogonal, something fortuitous happens: all four blocks have essentially the same singular values, and there is much sharing of singular vectors. In fact, X can be factored as in which U 1 , U 2 , V 1 , and V 2 are orthogonal and C and S are nonnegative diagonal. This is the CS decomposition, and the diagonal entries c 1 , c 2 , . . . , c n of C are knows as CS values. An analogous decomposition exists for complex unitary matrices X, involving unitary U 1 , U 2 , V 1 , and V 2 . The Jacobi matrix model is defined by placing Haar measure on X. The resulting distribution on CS values is most conveniently described in terms of λ i = c 2 i , i = 1, . . . , n, which have joint density The Jacobi matrix model has been extended beyond the real and complex cases (β = 1, 2) to general β > 0, first by Killip and Nenciu and later by the authors of the present article [4,10,16]. The following definition involves the beta distribution beta(c, d) on the interval (0, 1), whose density is a distribution on orthogonal matrices with a special structure called bidiagonal block form. It is defined in terms of random angles θ 1 , . . . , θ n and φ 1 , . . . , φ n−1 from [0, π 2 ]. All 2n − 1 angles are independent, and their distributions are defined by The entries of the β-Jacobi matrix model are expressed in terms of Theorem 2.6 ( [4,10]). Partition the 2n-by-2n β-Jacobi matrix model into four blocks of size n-by-n. The resulting CS values, squared, have density function (2.8). This is true for all β > 0. As β → ∞, the angles θ 1 , . . . , θ n and φ 1 , . . . , φ n−1 converge in distribution to deterministic anglesθ 1 , . . . ,θ n andφ 1 , . . . ,φ n−1 , whose cosines and sines will be denotedc i , . Because the angles have deterministic limits, the matrix model itself converges in distribution to a fixed matrix J ∞,a,b . The entries of J ∞,a,b encode the recurrence relation for Jacobi polynomials. The CS values, squared, of J ∞,a,b are the roots of the nth Jacobi polynomial with parameters a, b, and the entries of U 1 , U 2 , V 1 , and V 2 are expressible in terms of the first n − 1 polynomials. See [16] and [17] for details. 2.2. Local eigenvalue behavior. The three classes of local behavior indicated in Figure 1.1 are observed by taking n → ∞ limits of random matrices, carefully translating and rescaling along the way to focus on a particular region of the spectrum. This section records the constants required in some interesting rescalings. For references concerning the material below, consult the introduction to this article. Note that much of the existing theory, including many results concerning the existence of large n limiting distributions and explicit formulas for those distributions, is restricted to the cases β = 1, 2, 4. 
Further progress in general β random matrix theory may be needed before discussion of general β distributions is perfectly well founded. Although these technical issues are certainly important in the context of the stochastic operator approach, the concrete results later in this article do not depend on any subtle probabilistic issues, and hence, we dispense with such technical issues for the remainder of this section. In the case of Hermite, the kth largest eigenvalue In the case of Laguerre, the kth largest singular value σ n+1−k (L β,a ) displays soft edge behavior. Specifically, −2 2/3 n 1/6 (σ n+1−k (L β,a ) − 2 √ n) approaches a soft edge distribution as n → ∞. Of course, the kth largest singular value of the rectangular model M β,a displays the same behavior, because the nonzero singular values of the two models have the same joint distribution. In the case of Laguerre, the kth smallest singular value σ k (L β,a ) displays hard edge behavior. Specifically, approaches a hard edge distribution as n → ∞. Of course, the kth smallest nonzero singular value of M β,a displays the same behavior, because the singular values of the two matrix models have the same joint density. In the case of Jacobi, the kth smallest CS value displays hard edge behavior. Specifically, let c kk (J β,a,b ) be the kth smallest diagonal entry in the matrix C of (2.7), when applied to the β-Jacobi matrix model. Then (2n + a + b + 1)c kk (J β,a,b ) approaches a hard edge distribution as n → ∞. 2.2.3. Bulk. Bulk behavior is seen in the interior of spectra, as opposed to the edges. Suppose that the least and greatest members of the spectrum of an n-by-n random matrix are O(L n ) and O(R n ), respectively, as n → ∞. Then bulk behavior can often be seen in the spacings between consecutive eigenvalues near the point (1 − p)L n + pR n , for any constant p ∈ (0, 1), as n → ∞. This is true for the Hermite, Laguerre, and Jacobi ensembles. Because this article does not consider bulk spacings in great detail, the constants involved in the scalings are omitted. 2.3. Finite difference schemes. The solution to a differential equation can be approximated numerically through a finite difference scheme. This procedure works by replacing various differential operators with matrices that mimic their behavior. For example, the first derivative operator can be discretized by a matrix whose action amounts to subtracting function values at nearby points on the real line, essentially omitting the limit in the definition of the derivative. With this in mind, ∇ m,n is defined to be the m-by-n upper bidiagonal matrix with 1 on the superdiagonal and −1 on the main diagonal, The subscripts are omitted when the size of the matrix is clear from context. Up to a constant factor, ∇ m,n encodes a finite difference scheme for the first derivative operator when certain boundary conditions are in place. The matrix ∆ n is defined to be the symmetric tridiagonal matrix with 2 on the main diagonal and -1 on the superdiagonal and subdiagonal, Note that ∆ n = ∇ n,n+1 ∇ T n,n+1 . Under certain conditions, ∆ n discretizes the second derivative operator, up to a constant factor. A few other matrices prove useful when constructing finite difference schemes. Ω n denotes the n-by-n diagonal matrix with −1, 1, −1, 1, . . . along the main diagonal. F n denotes the n-by-n "flip" permutation matrix, with ones along the diagonal from top-right to bottom-left. In both cases, the subscript is omitted when the size of the matrix is clear. 
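The matrices just defined are easy to build explicitly. The following short sketch (a NumPy illustration added for this discussion, not code from the article) constructs ∇_{m,n}, ∆_n, Ω_n and F_n and checks the stated identity ∆_n = ∇_{n,n+1} ∇_{n,n+1}^T numerically.

```python
# Finite difference matrices of Section 2.3, as a NumPy illustration.
import numpy as np

def nabla(m, n):
    """m-by-n upper bidiagonal matrix: -1 on the main diagonal, +1 on the superdiagonal."""
    return -np.eye(m, n) + np.eye(m, n, k=1)

def delta(n):
    """n-by-n second difference matrix: 2 on the diagonal, -1 on the sub/superdiagonals."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def omega(n):
    """Diagonal matrix with -1, 1, -1, 1, ... on the main diagonal."""
    return np.diag([(-1.0) ** (i + 1) for i in range(n)])

def flip(n):
    """'Flip' permutation matrix, with ones from top-right to bottom-left."""
    return np.fliplr(np.eye(n))

n = 6
# Identity stated in the text: Delta_n = nabla_{n,n+1} * nabla_{n,n+1}^T.
assert np.allclose(delta(n), nabla(n, n + 1) @ nabla(n, n + 1).T)
print(delta(n).astype(int))
```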
Finally, the "interpolating matrix" S m,n = − 1 2 Ω m ∇ m,n Ω n proves useful when constructing finite difference schemes for which the domain and codomain meshes interleave. S m,n is the m-by-n upper bidiagonal matrix in which every entry on the main diagonal and superdiagonal equals 1 2 . Subscripts will be omitted where possible. Results This section defines the stochastic Airy and Bessel operators, briefly mentions the stochastic sine operator, and states results that are proved in later sections. An eigenvalue-eigenfunction pair consists of a number λ and a function v such that A ∞ v = λv. The complete eigenvalue decomposition is for k = 1, 2, 3, . . . , in which Ai denotes the unique solution to Airy's equation f ′′ (x) = xf (x) that decays as x → ∞, and λ k equals the negation of the kth zero of Ai. As typical with Sturm-Liouville operators, A ∞ acts naturally on a subspace of Sobolev space and can be extended to all L 2 ((0, ∞)) functions satisfying the boundary conditions via the eigenvalue decomposition. Intuitively, the stochastic Airy operator is obtained by adding white noise, the formal derivative of Brownian motion, However, white noise sometimes poses technical difficulties. To avoid these potential difficulties, we express the stochastic Airy operator in terms of a conjugation of a seemingly simpler operator, i.e., by changing variables. The stochastic Airy operator A β acts on functions v( or, to abbreviate, An eigenvalue-eigenfunction pair consists of a real number λ and a func- Note that φ(x) is defined in terms of a Riemann integral of Brownian motion, which is continuous. This is not a stochastic integral, and nothing like an Itô or Stratonovich interpretation must be specified. To see this, apply A β to v = f φ, proceeding formally as follows. Combining The stochastic Airy operator acts naturally on any function of the form f φ, in which f has two derivatives. Also, the Rayleigh quotient defined by is well defined and does not require an Itô or Stratonovich interpretation if v is deterministic, decays sufficiently fast, and is sufficiently smooth, say, if it has a bounded first derivative. See [13]. Stochastic Bessel operator. Definition 3.3. The classical Bessel operator with type (i) boundary conditions, parameterized by a > −1, is the operator whose action is acting on functions v satisfying a v)(0) = 0. We will abuse notation and also denote the classical Bessel operator with type (ii) boundary conditions by J ∞ a . The action of this operator is also x , and it is defined for all a > −1, but its domain consists of functions v satisfying The adjoint of the classical Bessel operator (with either type (i) or type (ii) boundary conditions) has action The singular value decompositions are defined in terms of the Bessel functions of the first kind j a by The purposes of the boundary conditions are now clear. The condition at x = 1 produces a discrete spectrum, and the condition at x = 0 eliminates Bessel functions of the second kind, leaving left singular functions that are nonsingular at the origin. Intuitively, the stochastic Bessel operator is obtained by adding white noise to obtain However, the following definition, which avoids the language of white noise, offers certain technical advantages. Definition 3.4. Let a > −1 and β > 0, let B(x) be a Brownian path on (0, 1), and let The stochastic Bessel operator, denoted J β a , has action Either type (i) or type (ii) boundary conditions may be applied. (See (3.4) and (3.5).) 
The function ψ involves a stochastic integral, but because the integrand is smooth and not random, it is not necessary to specify an Itô or Stratonovich interpretation. Note that when β = ∞, the stochastic Bessel operator equals the classical Bessel operator. When β < ∞, equations (3.6) and (3.8) are formally equivalent. To see this, apply J β a to v = f φ, proceeding formally as follows. The stochastic Bessel operator acts naturally on any function of the form f ψ for which f has one derivative, assuming the boundary conditions are satisfied. Its adjoint acts naturally on functions of the form gψ −1 for which g has one derivative and the boundary conditions are satisfied. Sometimes, expressing the stochastic Bessel operator in Liouville normal form proves to be useful. The classical Bessel operator in Liouville normal form, denotedJ ∞ a , is defined bỹ with either type (i) or type (ii) boundary conditions. The singular values remain unchanged, while the singular functions undergo a change of variables. The SVD's are Note that although the change of variables to Liouville normal form affects asymptotics near 0, the original boundary conditions still serve their purposes. Definition 3.5. Let a > −1 and β > 0, and let B(x) be a Brownian path on (0, 1). The stochastic Bessel operator in Liouville normal form, denoted J β a , has action ψ is defined in (3.7). Either type (i) or type (ii) boundary conditions may be applied. This operator acts naturally on functions of the form f ψ 3.1.3. Stochastic sine operator. The last section of this article presents some ideas concerning a third stochastic differential operator, the stochastic sine operator. This operator likely has the form "noise" "noise" "noise" "noise" . Key to understanding the stochastic Airy and Bessel operators are the changes of variables, in terms of φ and ψ, respectively, that replace white noise with Brownian motion. No analogous change of variables has yet been found for the stochastic sine operator, so most discussion of this operator will be left for a future article. Much of the remainder of the article is devoted to supporting the following claims, relating the stochastic differential operators of the previous section to the classical ensembles of random matrix theory. The claims involve "scaling" random matrix models. This is explained in Sections 5 and 6. Claim 3.8. The Laguerre matrix models, scaled at the hard edge, encode finite difference schemes for the stochastic Bessel operator. L β,a hard encodes type (i) boundary conditions, and M β,a hard encodes type (ii) boundary conditions. See Theorems 5.6, 5.8, 6.7, and 6.9. Claim 3.9. The Jacobi matrix model, scaled at the hard edge, encodes a finite difference scheme for the stochastic Bessel operator in Liouville normal form with type (i) boundary conditions. See Theorems 5.10 and 6.12. 3.3. Eigenvalues/singular values of stochastic differential operators. Based on the claims in the previous section, we propose distributions for the eigenvalues of the stochastic Airy operator and the singular values of the stochastic Bessel operators. The conjecture now appears to be a theorem, due to a proof of Ramírez, Rider, and Virág [14]. Conjecture 3.11. The kth least singular value of the stochastic Bessel operator with type (i) boundary conditions follows the kth hard edge distribution, with the same values for β and a. With type (ii) boundary conditions, the hard edge distribution has parameters β and a + 1. The conjecture should be true for both J β a andJ β a . 
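As a numerical companion to the eigenvalue proposal in Section 3.3, the stochastic Airy operator can also be discretized directly: replace −d²/dx² by (1/h²)∆ on a uniform grid, x by diag(x_i), and white noise by independent mean-zero Gaussians with standard deviation 1/√h, in the spirit of Section 6.1 below. The sketch that follows is an illustration added here, not code from the article; the interval length, mesh size and sample count are arbitrary choices, and the truncation at x = L imposes an artificial boundary that is harmless for the smallest eigenvalues once L is moderately large.

```python
# Finite difference sampling of the smallest eigenvalues of the stochastic Airy
# operator  -d^2/dx^2 + x + (2/sqrt(beta)) * "noise"  on (0, L), Dirichlet at 0,
# truncated at x = L.  Parameters are illustration choices only.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def stochastic_airy_eigs(beta=2.0, L=10.0, h=0.01, k=3, rng=None):
    rng = rng or np.random.default_rng()
    x = h * np.arange(1, int(L / h) + 1)                     # grid points x_i = h*i
    noise = (2.0 / np.sqrt(beta)) * rng.normal(0.0, 1.0 / np.sqrt(h), size=x.size)
    main = 2.0 / h**2 + x + noise                            # (1/h^2)*Delta + diag(x_i) + noise
    off = -np.ones(x.size - 1) / h**2
    vals, _ = eigh_tridiagonal(main, off, select="i", select_range=(0, k - 1))
    return vals                                              # the k smallest eigenvalues

smallest = np.array([stochastic_airy_eigs()[0] for _ in range(200)])
print("least-eigenvalue sample mean and standard deviation:", smallest.mean(), smallest.std())
```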
Some matrix model identities This section establishes relations between various random matrices which will be useful later in the article. The identities are organized according to their later application. This section may be skipped on a first reading. 4.2. Identities needed for the hard edge. The remaining identities are derived from the following two completely trivial lemmas. The first operates on square bidiagonal matrices, and the second operates on rectangular bidiagonal matrices. The lemmas immediately establish the following three identities. Consult Section 2.3 for the definitions of Ω and F . For the square β-Laguerre matrix model, we have For the rectangular β-Laguerre matrix model, we have 1 . The random angles θ 1 , . . . , θ n and φ 1 , . . . , φ n−1 are independent, and their cosines and sines are denoted by c i , s i , c ′ i , and s ′ i , as usual. The constants c i ,s i ,c ′ i , ands ′ i are introduced after the definition of the β-Jacobi matrix model in Section 2.1.3. Zero temperature matrix models as finite difference schemes As seen in Section 2.1, the Hermite, Laguerre, and Jacobi matrix models approach nonrandom limits as β → ∞. We call these matrices "zero temperature matrix models" because of the well known connection with statistical mechanics. By appropriately transforming the zero temperature matrix models-via operations such as translation, scalar multiplication, similarity transform, and factorization-we can interpret them as finite difference schemes for the classical Airy and Bessel operators. This approach anticipates analogous methods for the β < ∞ case. In short, β = ∞ matrix models discretize nonrandom operators, and β < ∞ matrix models discretize stochastic operators. in which DH ∞ D −1 is the matrix of (4.2). Note that "scaling at the soft edge" modifies eigenvalues in a benign way. The translation and rescaling are designed so that the smallest k eigenvalues of H ∞ soft approach distinct limits as n → ∞. (The largest eigenvalues of DH ∞ D −1 are first pulled toward the origin, and then a scalar factor is applied to "zoom in." The scalar factor is negative to produce an increasing, as opposed to decreasing, sequence of eigenvalues starting near zero.) The following theorem interprets H ∞ soft as a finite difference scheme for the classical Airy operator A ∞ = − d 2 dx 2 + x on the mesh x i = hi, i = 1, . . . , n, with mesh size h = n −1/3 . Furthermore, for fixed k, the kth least eigenvalue of H ∞ soft converges to the kth least eigenvalue of A ∞ as n → ∞, Proof. The expression for H ∞ soft is straightforward to derive. For the eigenvalue result, recall that the kth greatest eigenvalue of H ∞ is the kth rightmost root of the nth Hermite polynomial, and the kth least eigenvalue of A ∞ is the kth zero of Ai, up to sign. The eigenvalue convergence result is exactly equation (6.32.5) of [17]. (The recentering and rescaling in the definition of H ∞ soft is designed precisely for the purpose of applying that equation.) It is also true that the eigenvectors of H ∞ soft discretize the eigenfunctions of A ∞ . This can be established with well known orthogonal polynomial asymptotics, specifically equation (3.3.23) of [17]. We omit a formal statement and proof for brevity's sake. in which D L is the matrix D of Lemma 4.2 (with β = ∞) and P L is the matrix P of the same lemma. The (2n + 1)-by-(2n + 1) ∞-Laguerre matrix model scaled at the soft edge is Proof. For odd j, the (j + 1, j) entry of E L equals −h(2a + 1), and every other entry of E L equals zero. 
For even j, the (j + 1, j) entry of E M equals −h(2a − 1), and every other entry of E M equals zero. For the eigenvalue result, check that the kth greatest eigenvalue of (4.3), resp., (4.4), equals the kth greatest singular value of L ∞,a , resp., M ∞,a , and that this value is the square root of the kth rightmost root of the nth Laguerre polynomial with parameter a. The eigenvalue convergence then follows from equation (6.32.4) of [17], concerning zero asymptotics for Laguerre polynomials. Also, the kth eigenvector of L ∞,a soft , resp., M ∞,a soft , discretizes the kth eigenfunction of A ∞ , via (3.3.21) of [17]. Theorem 5.6. Let h and x i be defined as in the previous paragraph, and make the approximation with S defined as in Section 2.3. Then the error term E is upper bidiagonal, and the entries in rows ⌈ ε h ⌉, . . . , n of E are uniformly O(h), for any fixed ε > 0. Furthermore, for fixed k, the kth least singular value of L ∞,a hard approaches the kth least singular value of J ∞ a with type (i) boundary conditions as n → ∞, . By a Taylor series expansion, , uniformly for any set of x values bounded away from zero. This implies that the (i, i) entry of E is O(h), for any sequence of values for i bounded below by ⌈ ε h ⌉ as n → ∞. , from which similar asymptotics follow. For the singular value result, recall that the kth least singular value of L ∞,a is the square root of the kth least root of the Laguerre polynomial with parameter a and that the kth least singular value of J ∞ a is the kth positive zero of j a , the Bessel function of the first kind of order a. The convergence result follows immediately from (6.31.6) of [17]. In fact, the singular vectors of L ∞,a hard discretize the singular functions of J ∞ a with type (i) boundary conditions as well. This can be proved with (3.3.20) of [17]. Analogous results hold for the rectangular β-Laguerre matrix model. with S defined as in Section 2.3. Then the error term E is upper bidiagonal, and the entries in rows ⌈ ε h ⌉, . . . , n of E are uniformly O(h), for any fixed ε > 0. Furthermore, for fixed k, the kth least singular value of M ∞,a hard approaches the kth least singular value of J ∞ a−1 with type (ii) boundary conditions, . By a Taylor series expansion, , uniformly for any set of x values bounded away from zero. This implies that , from which similar asymptotics follow. For the singular value result, the proof of Theorem 5.6 suffices, because L ∞,a hard and M ∞,a hard have exactly the same nonzero singular values, as do J ∞ a with type (i) boundary conditions and J ∞ a−1 with type (ii) boundary conditions. Also, the singular vectors of M ∞,a hard discretize the singular functions of J ∞ a−1 with type (ii) b.c.'s, although we omit a formal statement of this fact here. Jacobi → Bessel. Definition 5.9. The n-by-n ∞-Jacobi matrix model scaled at the hard edge is with S defined as in Section 2.3. Then the error term E is upper bidiagonal, and the entries in rows ⌈ ε h ⌉, . . . , n are uniformly O(h), for any fixed ε > 0. Furthermore, for fixed k, the kth least singular value of J ∞,a,b hard approaches the kth least singular value ofJ ∞ a with type (i) boundary conditions as n → ∞, Rewriting this expression as it is straightforward to check that the entry is 1 2h + (a + 1 2 ) · 1 2 1 x2i + O(h), uniformly for any sequence of values i such that x 2i is bounded away from zero as n → ∞. The argument for the superdiagonal terms is similar. 
For the singular value result, note that the CS values of J ∞,a,b equal the singular values of its bottom-right block, and that these values, squared, equal the roots of the nth Jacobi polynomial with parameters a, b. Also recall that the kth least singular value ofJ ∞ a with type (i) boundary conditions is the kth positive zero of j a , the Bessel function of the first kind of order a. The rescaling in the definition of J ∞,a,b hard is designed so that equation (6.3.15) of [17] may be applied at this point, proving convergence. It is also true that the singular vectors of J ∞,a,b hard discretize the singular functions ofJ ∞ a with type (i) boundary conditions. As presented here, the theorem only considers the bottom-right block of J ∞,a,b , but similar estimates have been derived for the other three blocks [16]. Briefly, the bottom-right and top-left blocks discretizeJ ∞ a and (J ∞ a ) * , respectively, while the top-right and bottom-left blocks discretizẽ J ∞ b and (J ∞ b ) * , respectively, all with type (i) boundary conditions. Random matrix models as finite difference schemes The previous section demonstrated how to view zero temperature matrix models as finite difference schemes for differential operators. Because the matrices were not random, the differential operators were not random either. This section extends to the finite β case, when randomness appears. The eigenvalues of H β soft display soft edge behavior as n → ∞. The underlying reason, we claim, is that the matrix is a discretization of the stochastic Airy operator. The next theorem interprets H β soft as a finite difference scheme with mesh size h = n −1/3 and grid points x i = hi, i = 1, . . . , n. Proof. The derivation of the expression for H β soft is straightforward. The mean ofχ 2 (n−j)β is exactly 0, and the variance is exactly 1 − h 2 x j . We claim that the matrix W discretizes white noise on the mesh from Theorem 5.2. The increment of Brownian motion over an interval (x, x + h] has mean 0 and standard deviation √ h, so a discretization of white noise over the same interval should have mean 0 and standard devation 1 √ h . The noise in the matrix W has the appropriate mean and standard deviation. in which D L is the matrix D of Lemma 4.2 and P L is the matrix P of the same lemma. The (2n + 1)-by-(2n + 1) β-Laguerre matrix model scaled at the soft edge is in which D M is the matrix D of Lemma 4.3 and P M is the matrix P of the same lemma. The eigenvalues of L β,a soft and M β,a soft near zero display soft edge behavior as n → ∞. The underlying reason, we claim, is that the matrices themselves encode finite difference schemes for the stochastic Airy operator, as the next theorem shows. Theorem 6.4. The 2n-by-2n and (2n + 1)-by-(2n + 1) β-Laguerre matrix models scaled at the soft edge satisfy . . . ,χ 2 2β ,χ 2 (a+1)β ,χ 2 β ,χ 2 aβ ), and h = (2n) −1/3 . All 2n − 1 subdiagonal entries of W L and all 2n subdiagonal entries of W M are independent, withχ 2 r denoting a random variable with distributionχ 2 r ∼ 1 √ 2βn (χ 2 r − r). The entries of W L and W M have mean approximately 0 and standard deviation approximately 1 √ h . Therefore, we think of W L and W M as discretizations of white noise on the mesh from Theorem 5.4. The situation is very similar to that in Theorem 6.2, so we omit a formal statement. 6.1.3. Overview of finite difference schemes for the stochastic Airy operator. In light of Theorems 5.2, 5.4, 6.2, and 6.4, we make the following claim. 
H β soft , L β,a soft , and M β,a soft discretize the stochastic Airy operator The conjecture now appears to be a theorem, due to a proof of Ramírez, Rider, and Virág [14]. The least singular values of L β,a hard display hard edge behavior as n → ∞. We claim that this can be understood by viewing the matrix as a finite difference scheme for the stochastic Bessel operator with type (i) boundary conditions. The next theorem demonstrates this, using the same mesh seen in Theorem 5.6. Theorem 6.7. Let L β,a hard be a matrix from the n-by-n β-Laguerre matrix model scaled at the hard edge. Adopting the notation of (4.5) and Theorem 5.6 and settingg i = − βx i /h g i , we have . . ,g 2n−1 are independent, and, for any ε > 0, the random variablesg ⌈ε/h⌉ , . . . ,g 2n−1 have mean O( √ h) and standard deviation 1 + O(h), uniformly. Proof. Conclusions L β,a hard appears to be a finite difference scheme for the stochastic Bessel operator with type (i) boundary conditions, L β,a hard ∼ e Deven L ∞,a hard e −D odd n→∞ ψJ ∞ a ψ −1 ∼ J β a . Definition 6.8. The n-by-(n + 1) β-Laguerre matrix model scaled at the hard edge is . F and Ω are defined in Section 2.3. The small singular values of M β,a hard display hard edge behavior as n → ∞, because, we claim, the matrix is a finite difference scheme for the stochastic Bessel operator with type (ii) boundary conditions. The next theorem demonstrates this, using the same mesh seen in Theorem 5.8. Theorem 6.9. Let M β,a hard be a matrix from the n-by-(n + 1) β-Laguerre matrix model scaled at the hard edge. Adopting the notation of (4.6) and Theorem 5.8 and settingg i = − βx i /h g i , we have . . ,g 2n are independent, and, for any ε > 0, the random variablesg ⌈ε/h⌉ , . . . ,g 2n−1 have mean O( √ h) and standard deviation 1 + O(h), uniformly. The point of (3) is that the sequence 1 √ hg 1 , . . . , 1 √ hg 2n is a discretization of white noise. Hence, the expression for e di in (2) is a discretization of ψ(x i ). (1) and (2) are simply restatements of facts from (4.6). For (3), the independence ofg 1 , . . . ,g 2n was already established in the context of (4.6). For the asymptotic mean and standard deviation, use the asymptotics for chi-distributed random variables from the proof of the previous theorem. Proof. Conclusions Hence, M β,a hard can be viewed as a finite difference scheme for the stochastic Bessel operator, is the bottom-right block of the 2n-by-2n β-Jacobi matrix model J β,a,b . F and Ω are defined in Section 2.3. As n → ∞, the small singular values of J β,a,b hard display hard edge behavior. We explain this fact by interpreting the rescaled matrix model as a finite difference scheme for the stochastic Bessel operator in Liouville normal form with type (i) boundary conditions. First, though, a lemma is required. Lemma 6.11. Suppose that θ is a random angle in [0, π 2 ] whose distribution is defined by cos 2 θ ∼ beta(c, d). Then log(tan 2 θ) has mean Γ ′ (d) Proof. tan 2 θ = 1−cos 2 θ cos 2 θ has a beta-prime distribution with parameters d, c. Hence, tan 2 θ has the same distribution as a ratio of independent chisquare random variables, with 2d degrees of freedom in the numerator and 2c degrees of freedom in the denominator. Let X ∼ χ 2 2d and Y ∼ χ 2 2c be independent. Then the mean of log( , and the variance equals Theorem 6.12. Let J β,a,b hard be a matrix from the n-by-n β-Jacobi matrix model scaled at the hard edge. 
Adopting the notation of (4.7) and Theorem 5.10 and settingg 2i−1 = − (βx 2i−1 )/(2h) 1 2 (log(tan 2 θ i ) − log(tan 2θ i )) andg 2i = (βx 2i )/(2h) 1 2 (log(tan 2 θ ′ i ) − log(tan 2θ′ i )), we have (1) J β,a,b hard ∼ e Deven J ∞,a,b hard e −D odd . 1 is odd and greater than one, or R = (log s j − logs j ) − (log s n − logs n ) if i = 2j is even. (3)g 1 , . . . ,g 2n−1 are independent, and, for any ε > 0, the random variablesg ⌈ε/h⌉ , . . . ,g 2n−1 have mean O( √ h) and standard devia- The point of (3) is that the sequence 1 √ hg 1 , . . . , 1 √ hg 2n−1 is a discretization of white noise. Hence, the expression for e di in (2) is a discretization of ψ(x i ) √ 2 . (The remainder term R has second moment O(h) and is considered negligible compared to the sum containing 2n − i terms of comparable magnitude.) Proof. Conclusion (1) is direct from (4.7). Now, we prove conclusion (2). According to (4.7), when i = 2j is even, Compare with The remainder term R is designed to cancel terms that occur in one expression but not in the other. The argument for odd i is similar. The asymptotics in conclusion (3) can be derived from the explicit expressions in the previous lemma. The details are omitted. 6.2.3. Overview of finite difference schemes for the stochastic Bessel operator. Considering Theorems 5.6, 5.8, 5.10, 6.7, 6.9, and 6.12, (1) L β,a hard discretizes J β a with type (i) boundary conditions, for finite and infinite β. hard discretizesJ β a with type (i) boundary conditions, for finite and infinite β. Based on these observations and the fact that the small singular values of L β,a hard , M β,a hard , and J β,a,b hard approach hard edge distributions as n → ∞, we pose the following conjecture. Conjecture 6.13. Under type (i) boundary conditions, the kth least singular value of the stochastic Bessel operator follows the kth hard edge distribution with parameters β, a. Under type (ii) boundary conditions, the hard edge distribution has parameters β, a + 1. This is true both for the original form, J β a , and for Liouville normal form,J β a . 7. Numerical evidence 7.1. Rayleigh-Ritz method applied to the stochastic Airy operator. This section provides numerical support for the claim that stochastic Airy eigenvalues display soft edge behavior. Up until now, our arguments have been based on the method of finite differences. In this section, we use the Rayleigh-Ritz method. To apply Rayleigh-Ritz, first construct an orthonormal basis for the space of L 2 ((0, ∞)) functions satisfying the boundary conditions for A β . The obvious choice is the sequence of eigenfunctions of A ∞ . These func- Note that the stochastic integral ∞ 0 v i v j dB is well defined, and its value does not depend on specifying an Itô or Stratonovich interpretation, because v i and v j are well behaved and not random. (In fact, the joint distribution of the stochastic integrals is a multivariate Gaussian, whose covariance matrix can be expressed in terms of Riemann integrals involving Airy eigenfunctions.) Introducing the countably infinite symmetric K, According to the variational principle, the least eigenvalue of A β equals inf v =1 v, A β v , which equals min c =1 c T Kc, which equals the minimum eigenvalue of K. This suggests a numerical procedure. Truncate K, taking the top-left l-by-l principal submatrix, and evaluate the entries numerically. Then compute the least eigenvalue of this truncated matrix. This is the Rayleigh-Ritz method. 
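The Rayleigh-Ritz procedure just described can be implemented in a few lines. The sketch below is an added illustration, not the code used for Figure 1.2: it uses a much shorter interval, a coarser mesh and a smaller truncation, but follows the same steps of building the truncated matrix K from the classical Airy eigenfunctions and one discretized Brownian path, and then taking the least eigenvalue of K.

```python
# Rayleigh-Ritz sketch for the least eigenvalue of the stochastic Airy operator.
# Basis: normalized eigenfunctions v_k(x) = Ai(x + a_k) of the classical Airy
# operator, with a_k the kth zero of Ai and lambda_k = -a_k.  Then
#   K = diag(lambda_k) + (2/sqrt(beta)) * W,   W_ij ~ int v_i v_j dB.
# Interval, mesh and truncation sizes are small illustration choices.
import numpy as np
from scipy.special import airy, ai_zeros

def least_eigenvalue(beta=2.0, l=30, x_max=40.0, dx=0.01, rng=None):
    rng = rng or np.random.default_rng()
    x = np.arange(dx, x_max, dx)
    lam = -ai_zeros(l)[0]                                   # lambda_k = -a_k > 0
    V = np.stack([airy(x - lk)[0] for lk in lam])           # v_k(x) = Ai(x + a_k)
    V /= np.sqrt((V**2).sum(axis=1, keepdims=True) * dx)    # normalize in L^2(0, x_max)
    dB = rng.normal(0.0, np.sqrt(dx), size=x.size)          # Brownian increments
    W = V @ (V * dB).T                                      # W_ij ~ int v_i v_j dB
    K = np.diag(lam) + (2.0 / np.sqrt(beta)) * W
    return np.linalg.eigvalsh(K).min()

samples = [least_eigenvalue() for _ in range(100)]
print("sample mean of the least eigenvalue:", np.mean(samples))
```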
The histograms in Figure 1.2 were produced by running this procedure over 10 5 random samples, discretizing the interval (0, 86.9) with a uniform mesh of size 0.05 and truncating K after the first 150 rows and columns. The histograms match the soft edge densities well, supporting the claim that the least eigenvalue of A β exhibits soft edge behavior. 7.2. Rayleigh-Ritz method applied to the stochastic Bessel operator. Now consider applying the Rayleigh-Ritz method to the stochasic Bessel operator in Liouville normal form with type (i) boundary conditions. Liouville form is well suited to numerical computation because the singular functions are well behaved near the origin for all a. We omit consideration of type (ii) boundary conditions for brevity. Two orthonormal bases play important roles, one consisting of right singular functions and the other consisting of left singular functions ofJ ∞ a , from (3.9). For i = 1, 2, 3, . . . , let v i (x) be the function √ xj a (ξ i x), normalized to unit length, and let u i (x) be the function √ xj a+1 (ξ i x), normalized to unit length, in which ξ i is the ith zero of j a . The smallest singular value ofJ β a is the minimum value for Expressing v as v = f ψ √ 2 and expanding f in the basis In terms of the countably infinite symmetric matrix K, , which equals the square root of the minimum solution λ to the generalized eigenvalue problem Kc = λM c. To turn this into a numerical method, simply truncate the matrices K and M , and solve the resulting generalized eigenvalue problem. The histograms in Figure 1.3 were produced using this method on 10 4 random samples of the stochastic Bessel operator, discretizing the interval (0, 1) with a uniform mesh of size 0.001 and truncating the matrices K and M after the first 75 rows and columns. The histograms match the hard edge densities well, supporting the claim that the least singular value of the stochastic Bessel operator follows a hard edge distribution. 7.3. Smoothness of eigenfunctions and singular functions. Up to this point, we have proceeded from random matrices to stochastic operators. In this section, we reverse direction, using stochastic operators to reveal new facts about random matrices. Specifically, we make predictions regarding the "smoothness" of Hermite eigenvectors and Jacobi CS vectors, using the stochastic operator approach. Verifying the predictions numerically provides further evidence for the connection between classical random matrix models and the stochastic Airy and Bessel operators. First, consider the eigenfunctions of the stochastic Airy operator. The kth eigenfunction is of the form f k φ, in which f k ∈ C 2 ((0, ∞)) and φ ∈ C 3/2− ((0, ∞)) is defined by (3.2). In light of the claim that H β soft encodes a finite difference scheme for A β , the kth eigenvector of H β soft should show structure indicative of the kth eigenfunction f k φ of A β . For a quick check, consider the ratio of two eigenfunctions/eigenvectors. The kth eigenfunction of A β is of the form f k φ, which does not have a second derivative (with probability one) because of the irregularity of Brownian motion. However, the ratio of the kth and lth eigenfunctions is f k φ f l φ = f k f l , which, modulo poles, has a continuous second derivative. Therefore, we expect the entrywise ratio between two eigenvectors of H β soft , L β,a soft , or M β,a soft to be "smoother" than a single eigenvector. Compare Figure 1 , in which f k , g k ∈ C 1 ((0, 1)) and ψ ∈ C 1/2− ((0, 1)) is defined by (3.7). 
The situation is similar to the Airy case. Any one singular function may not be differentiable in the classical sense, because of the irregularity of Brownian motion. However, the ratio of two singular functions is smooth. We expect the singular vectors of $L^{\beta,a}_{\mathrm{hard}}$, $M^{\beta,a}_{\mathrm{hard}}$, and $J^{\beta,a,b}_{\mathrm{hard}}$ to show similar behavior. Compare Figure 1.5. Preview of the stochastic sine operator. We have seen that the eigenvalues of the stochastic Airy operator display soft edge behavior, and the singular values of the stochastic Bessel operator display hard edge behavior. Is there a stochastic differential operator whose eigenvalues display bulk behavior? Because of the role of the sine kernel in the bulk spacing distributions, it may be natural to look for a stochastic sine operator. In fact, [16] provides evidence that an operator of the form (8.1), whose entries each involve a noise term, may be the desired stochastic sine operator. This operator is discovered by scaling the Jacobi matrix model at the center of its spectrum, and an equivalent operator, up to a change of variables, is discovered by scaling the Hermite matrix model at the center of its spectrum. The exact nature of the noise terms in (8.1) is not completely understood at this point. A change of variables analogous to those that transform (3.1) to (3.3) and (3.6) to (3.8) would be desirable.
2014-10-01T00:00:00.000Z
2006-07-19T00:00:00.000
{ "year": 2006, "sha1": "35b7e4717a5406ff512a648c05064fc4fd6f1803", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math-ph/0607038", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "49b053b2e0893e5e9f5f906af67630c2893ddfbc", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
265368424
pes2o/s2orc
v3-fos-license
Alcohol intake and cause-specific mortality: conventional and genetic evidence in a prospective cohort study of 512,000 adults in China Background Genetic variants strongly influencing alcohol use in East Asians can help assess the causal effects of alcohol consumption on cause-specific mortality. Methods The prospective China Kadoorie Biobank enrolled 512,724 adults aged 30-79 years from ten areas during 2004-2008, and recorded 56,550 deaths during 12-years follow-up, including 23,457 deaths among 168,050 participants genotyped for ALDH2-rs671 and ADH1B-rs1229984. Adjusted hazard ratios (HR) for cause-specific mortality by self-reported and genotype-predicted alcohol intake were estimated using Cox regression. Findings Among men, 33% drank alcohol in most weeks. In conventional observational analyses, compared with moderate drinkers, ex-, non-, and heavier drinkers had higher risks of death from most major causes. Among current drinkers, higher alcohol intake was associated with higher mortality risks from cancers, CVD, liver diseases, non-medical causes, and all-causes (HRs 1.18, 1.19, 1.51, 1.15 and 1.18 per 100 g/week, respectively). In men, ALDH2-rs671 and ADH1B-rs1229984 genotypes predicted 60-fold differences in mean alcohol intake. Genotype-predicted alcohol intake was linearly and positively associated with risks of death from all-causes (n=12939; HR=1.07, 95%CI=1.05−1.10) and from pre-defined alcohol-related cancers (n=1274; 1.12, 1.04−1.21), liver diseases (n=110; 1.31, 1.02−1.69), and CVD (n=6109; 1.15, 1.10−1.19), chiefly due to stroke (n=3285; 1.18, 1.12-1.25) rather than IHD (n=2363; 1.07, 1.00-1.15). Results were largely consistent using a polygenic score to predict alcohol intake, with no departure from linearity for alcohol-related cancers, CVD and all-cause mortality across low to high alcohol intake strata. Among women, ~2% drank alcohol, and although power was low to assess observational associations of alcohol with mortality, the genetic evidence suggested that the excess risks in men were due to alcohol not pleiotropy. Interpretation Higher alcohol intake linearly increased the risks of death overall and from major diseases in Chinese men. There was no genetic evidence of protection with moderate drinking for all-cause and cause-specific mortality, including CVD. Funding Kadoorie Charitable Foundation, National Natural Science Foundation of China, British Heart Foundation, Cancer Research UK, GlaxoSmithKline, Wellcome Trust, and Medical Research Council, Chinese Ministry of Science and Technology Background Worldwide the harmful use of alcohol accounted for an estimated ~3 million deaths in 2016. 1 The main alcohol-attributed causes of death include liver cirrhosis, cardiovascular disease (CVD), certain cancers (e.g.mouth and throat, oesophagus, liver), tuberculosis, pneumonia, mental health problems, and injuries. 1,2Estimates of the disease burden attributed to alcohol intake have been typically based on risk estimates derived from observational studies of predominantly Western populations.A recent study highlighted the importance of evidence from diverse populations, with different region-and age-specific disease rates. 
36][7] However, systematic differences in health characteristics and behaviours (such as prior ill health, socio-economic status, or smoking behaviours) between non-drinkers, moderate and heavy drinkers, often influenced by selection into cohort studies and their demographic characteristics, can lead to reverse causation (where health status affects drinking patterns), confounding and other biases.7,8 In China, where alcohol consumption has increased steadily in recent decades, there is limited evidence available on alcohol drinking and cause-specific mortality in the general adult population.9,10 Mendelian randomisation (MR) uses genetic variants as instrumental variables to assess the causal relevance of alcohol intake while minimising the biases inherent in conventional observational studies.11 An MR study of European ancestry individuals associated alcohol intake with a higher risk of all-cause mortality, but specific causes of death were not investigated, nor was the shape of the association across different levels of intake.12 In East Asian populations, two common genetic variants (ALDH2-rs671 and ADH1B-rs1229984) alter the function of enzymes involved in alcohol metabolism and strongly affect alcohol tolerability and alcohol intake.4 These genetic variants have been used to assess the causal relevance of alcohol intake for incidence of CVD and other diseases, and overall mortality.4,13,14 Ascertaining the causal relevance of alcohol for major causes of death, particularly CVD where associations of alcohol with fatal compared with non-fatal events may differ, can improve estimations of the global burden of alcohol use and inform policies for prevention of alcohol-related harms. This study investigated the associations between alcohol consumption and cause-specific mortality among >512,000 adult men and women from the prospective China Kadoorie Biobank (CKB). In addition to assessing conventional observational associations, we used an MR approach to assess the strength, shape and causal relevance of genotype-predicted alcohol intake with cause-specific mortality among a subset of >168,000 men and women with data on ALDH2-rs671 and ADH1B-rs1229984 genotype. Additional analyses used a polygenic score to predict alcohol intake and evaluate linearity of the associations with mortality. Study design and participants CKB is a prospective cohort of 512,724 adults aged 30-79 years and without major disability at enrolment (response rate 28%) during 2004-2008 from ten areas of China.15 At baseline, participants attended survey clinics and completed an interviewer-administered laptop-based questionnaire covering socio-demographic and lifestyle characteristics (e.g. smoking, alcohol drinking) and medical history. Physical measurements were taken (e.g. blood pressure, anthropometry), and a 10 ml blood sample was collected. Resurveys of ~5% of surviving participants, following similar procedures, were undertaken in 2008 (n=19,786), 2013-14 (n=25,041), and 2021-22 (n=25,087). Ethics approval was obtained from local, national and international ethics committees and all participants provided written informed consent. Assessment of alcohol drinking Alcohol drinking patterns were self-reported at baseline and resurveys.16,17
Participants were classified as current drinkers (some alcohol use in most weeks in the past year), non-drinkers (no alcohol use in the past year and never drank in most weeks), occasional drinkers (occasional alcohol use in the past year and never drank in most weeks), and ex-drinkers (occasional or no alcohol use in the past year but previously drank in most weeks). Current drinkers provided further details about their drinking patterns (including frequency, amount, beverage type), and were further classified by weekly intake (<140, 140-279, 280-419, 420+ g/week for men; <70, 70+ g/week for women). To account for measurement error and within-person variability in self-reported alcohol use over time, for each of these baseline-defined groups the usual mean level of alcohol intake of the group was estimated from the average of intakes at two resurveys (appendix p8).18 Follow-up for cause-specific mortality Cause-specific mortality was ascertained through linkage via unique national identification number to local death registries managed by China Centre for Disease Control (CDC). All deaths were reviewed by regional CDC staff and the underlying cause of death was assigned using the International Classification of Diseases, tenth revision (ICD-10). By 1.1.2019, after median 12 years follow-up (interquartile range 11-13), 56,550 (11%) participants had died, and 4,028 (1%) were lost to follow-up. For the present study, deaths were grouped into broad categories (e.g. CVD ICD-10 chapter I00-I99), specific causes (e.g. IHD ICD-10 I20-I25), or by previously assigned relationship to alcohol (e.g. cancers or other diseases and injuries designated as related to alcohol by IARC or WHO) (appendix p9).2,19 Genotyping and estimation of genotype-predicted mean alcohol intake 168,050 participants were genotyped for ALDH2-rs671 and ADH1B-rs1229984, including 151,347 randomly-selected (included in all genetic analyses), and 16,703 who had been selected for nested case-control studies of CVD or COPD (only included as cases in analyses of relevant outcomes) (appendix p10). Using a previously-described approach, alcohol intake was predicted using a combination of genotype and study area, both of which had strong associations with alcohol intake, enabling a wide range of alcohol intake levels to be assessed.4 Mean alcohol intake was calculated among men within each of the 90 combinations of genotypes (ALDH2-rs671 and ADH1B-rs1229984 each AA, AG or GG, resulting in nine combined genotypes) across the ten areas. Thresholds at 10, 25, 50, 100, and 150 g/week were applied to group the genotype-predicted mean alcohol intake into six categories (C1-C6) for genetic analyses among all genotyped participants. Combining genotype with study area enabled a reliable assessment of the shape and strength of associations with outcomes across a wide range of genotype-predicted mean alcohol intake, rather than the smaller range predicted by the genotypes alone. Women were assigned into the same six categories as men based on their genotype and area, without reference to their mean alcohol intake, to assess potential pleiotropic effects of the genotypes studied, i.e. effects of genotype not mediated by alcohol. Supplemental analyses among 85,386 men and women used a weighted polygenic score of 825 alcohol-related variants from a multi-ancestry genome-wide meta-analysis to predict alcohol intake.20 See Supplementary Methods for details.
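The grouping into the six genotype-predicted categories can be illustrated with a short sketch; the column names below (aldh2, adh1b, area, alcohol_g_wk) are placeholders rather than the actual CKB variable names.

```python
import pandas as pd

def predicted_intake_category(men: pd.DataFrame) -> pd.Series:
    """Illustrative sketch of the genotype-by-area instrument described above."""
    # mean self-reported intake within each of the 9 genotype x 10 area cells
    cell_mean = (men.groupby(["aldh2", "adh1b", "area"])["alcohol_g_wk"]
                    .transform("mean"))
    # thresholds at 10, 25, 50, 100 and 150 g/week define six categories C1-C6
    bins = [-float("inf"), 10, 25, 50, 100, 150, float("inf")]
    return pd.cut(cell_mean, bins=bins, labels=[f"C{i}" for i in range(1, 7)])

# women are then assigned the category of their genotype-area cell as estimated
# in men, without reference to their own intake, for the pleiotropy check
```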
Statistical methods Analyses were conducted among men and women separately. In conventional observational analyses, Cox proportional hazards regression models were stratified for age-at-risk (5-year groups from 35-84 years) and ten areas, and adjusted for education, household income, smoking, physical activity, and fresh fruit intake. Participants reporting prior diseases at baseline were excluded. To allow comparisons in analyses involving more than two exposure groups, the variance of the log risk in each group, including the reference group, was calculated to obtain group-specific 95% CIs.21 To account for measurement error and within-person variability in alcohol use over time (i.e. regression dilution bias), among current drinkers the log HRs were plotted against usual alcohol intake.18 The slope of a weighted linear regression through the plotted log HRs was used to estimate the HR per 100 g/week (~1-2 drinks/day, assuming 1 drink=10g alcohol) usual alcohol intake. Sensitivity analyses excluded the first five years of follow-up and additionally adjusted for red meat intake and self-rated health. In genetic analyses, associations of genotype-predicted alcohol categories with alcohol intake and with potential confounders were assessed. Cox proportional hazards regression models were stratified for age-at-risk and ten areas, and adjusted for genomic principal components (PCs).22 Log HRs were plotted against mean alcohol intake in each genotype-predicted alcohol intake category. To estimate the HR per 280 g/week, analyses were performed separately within each area with adjustment for age-at-risk and regional PCs. The slopes of a weighted linear regression within each area were meta-analysed with inverse-variance weighting (IVW-MA). To assess potential pleiotropy of the genetic instrument, a heterogeneity test compared the meta-analysed slopes between men and women. Sensitivity analyses included adjusting for covariates; excluding prior diseases; using logistic regression or a two-stage least-square (2SLS) MR approach; using the 90 genotype-area combinations as a continuous exposure; and excluding the highest category of predicted alcohol intake.23 Analyses of the individual genetic variants included a comparison of GG vs. GA genotypes, and interaction between genotypes and self-reported alcohol intake. Supplemental analyses with a polygenic score used a 2SLS approach within areas, followed by IVW-MA. Beta estimates from the regression of alcohol against the polygenic score in men were applied to the polygenic score values in women, to facilitate an assessment of pleiotropy. Non-linear MR stratified participants by average alcohol intake levels using the doubly-ranked method.24 Local Average Causal Effects (LACE) within strata calculated the ratio of the associations of the polygenic score with alcohol intake (log-transformed) and with mortality. Fractional polynomial smoothing was used to generate risk curves. Sensitivity analyses included use of the residual method for stratification.24
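The inverse-variance-weighted pooling of the per-area slopes, and the conversion of the pooled log-HR into an HR per chosen dose, amount to the following simple calculation (a minimal sketch of the approach described above, not the analysis code used for the study):

```python
import numpy as np

def ivw_meta(slopes, ses):
    """Inverse-variance-weighted pooling of per-area slopes (log-HR per g/week
    of genotype-predicted alcohol intake).  Returns the pooled slope and SE."""
    slopes, ses = np.asarray(slopes, float), np.asarray(ses, float)
    w = 1.0 / ses ** 2
    pooled = np.sum(w * slopes) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

def hr_per(dose_g_wk, slope, se):
    """Convert a pooled log-HR per g/week into an HR and 95% CI per dose_g_wk."""
    hr = np.exp(dose_g_wk * slope)
    lo = np.exp(dose_g_wk * (slope - 1.96 * se))
    hi = np.exp(dose_g_wk * (slope + 1.96 * se))
    return hr, lo, hi

# e.g. hr_per(280, *ivw_meta(area_slopes, area_ses)) gives the HR per 280 g/week
```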
Since all-cause mortality is a competing risk for cause-specific mortality, Cox regression models censored participants at death from any cause (or loss to follow-up or the global censoring date 1.1.2019) to estimate cause-specific HRs, which compared event rates in participants who were alive and free of the event of interest. Comparing the HRs for the first 6 and subsequent years of follow-up showed no evidence of departure from the proportional hazards assumption, apart from liver disease deaths in genetic analyses, with greater HRs in the earlier follow-up period (p-heterogeneity=0.002). See Supplementary Methods for details of statistical methods. Analyses used R software (version 4.0.5). Role of the funding source The funders had no role in the study design, data collection, data analysis and interpretation, writing of the manuscript, or the decision to submit the article for publication. Results Among 512,724 study participants, the mean age at baseline was 52 years (SD 11), 210,205 (41%) were men and 226,191 (44%) were from urban areas. Among men, 69,900 (33%) reported drinking alcohol in most weeks (current drinkers), which varied across the ten study areas (Table 1; appendix p11). Non- and ex-drinkers were older than occasional and current drinkers, were more likely to live in rural areas, and had poorer health at baseline. Education and household income levels were highest among moderate drinkers (up to 140 g/week). Heavier drinkers were more likely to smoke, and consumed fresh fruit less frequently. Alcohol was consumed mainly as spirits, and with meals, and 18% of current drinkers reported flushing after drinking (appendix p12). Among 302,519 women, 101,285 (33%) drank alcohol occasionally, but only 6244 (2%) were current drinkers.
Among men, there were J-shaped or U-shaped associations between self-reported alcohol consumption and major causes of death, with higher risks in ex-and non-drinkers and heavier drinkers, compared with occasional or moderate drinkers, in analyses adjusted for age-at-risk, area, education, household income, smoking, physical activity, and fresh fruit intake (Figure 1; appendix p18).Consistent with the J-shaped association with allcause mortality, the estimated survival rate was higher in occasional and current drinkers, compared with non-and ex-drinkers (appendix p19). For specific causes of death, usual alcohol intake was associated with higher risks of IHD and stroke types (Figure 2), cancers of the oesophagus, liver, and stomach, ALD and liver cirrhosis, and self-harm (appendix p18).Associations were stronger for cancers pre-defined by IARC as alcohol-related (1.33, 1.27−1.41),compared with other cancers (1.09, 1.04−1.13),and for causes pre-defined by WHO as alcohol-related (1.25, 1.21−1.28),compared with other causes (1.09, 1.05−1.12)(Figure 2; appendix p21).The patterns of association were unaltered in sensitivity analyses to further address reverse causation and residual confounding (appendix p22). Among women, ex-and non-drinkers had higher risks of deaths from most causes compared with occasional/moderate drinkers, but among the few current drinkers, usual alcohol intake was only significantly associated with CVD mortality (1.50, 1.06−2.13))(appendix p23). Associations of genotype-predicted alcohol intake with mortality in men Among genotyped participants, there were 23,457 deaths (13,177 in men, 10,280 in women) (appendix p17).In contrast with the J-or U-shaped associations seen with self-reported alcohol consumption, among men mortality risks increased linearly across the range of genotype-predicted mean alcohol intake for CVD (HR per 100 g/week 1.15, 95% CI 1.10−1.19),liver diseases (1.31, 1.02−1.69),and all-causes (1.07, 1.05−1.10), in pooled within-area analyses adjusted for age-at-risk and genomic PCs (Figure 1; Table 2).The genetic results were somewhat weaker than the corresponding estimates in the observational analyses (e.g.1.07 vs. 1.18 per 100 g/week for all-cause mortality).There were no associations with respiratory, other medical or non-medical causes of death.Although there was no association of genotype-predicted alcohol intake with overall cancer mortality (1.01, 0.97−1.06),there was a positive association with the aggregated alcohol-related cancers (1.12, 1.04−1.21)(Figure 2), including cancer of the oesophagus (1.16; 1.02−1.31)(Table 2).In contrast to the positive association in conventional analyses, there was no associations with the aggregated other cancers (0.91; 0.91−1.01). Sensitivity analyses, including those which excluded the highest category of genotypepredicted alcohol intake, did not materially alter the main genotypic findings, and although the magnitude of the excess risks varied e.g.7-10% per 100 g/week for all-cause mortality, the 95% CIs all overlapped (appendix p24-26). ALDH2-rs671 GG was associated with higher risks of CVD and all-cause mortality, and ADH1B-rs1229984 GG with higher risks of alcohol-related cancer, CVD and all-cause mortality, compared with GA genotypes (appendix p27-28). 
There were interactions between ALDH2-rs671 genotype and self-reported alcohol intake for alcohol-related cancers, other cancers, and all-cause mortality, with higher risks among male drinkers with AG compared with GG genotypes (appendix p29).When cancers were excluded, the interaction for all-cause mortality was null.There were no interactions with ADH1B-rs1229984 (appendix p30).The HR per 100 g/week genotype-predicted alcohol intake for all-cause mortality excluding cancers was 1.10 (1.07−1.13)(appendix p21). Genetic associations in women to assess pleiotropy Among women, using the same genotype-area categories as in men, there were no excess risks of cause-specific mortality (appendix p31).There were, however, lower risks of deaths from other medical causes (n=868 deaths; 0.85, 0.78-0.92),all-causes (n=10,057; 0.97, 0.94-0.99),and colorectal cancer, lung cancer, and diabetes.Genotype-predicted risks differed substantially between men and women, with excess risks among men for alcoholrelated cancer, CVD (including stroke types), liver, and all-cause mortality (Figures 1 & 2).For both individual variants, there were excess risks among men for CVD and all-cause mortality, compared with women (appendix p27-28).Restricting the genetic analyses to 292,724 women non-smokers did not alter the main findings (appendix p32). Among men, the associations with mortality from alcohol-related cancers, CVD, WHO alcohol-related causes, and all-causes were generally linear and uniform, with similar LACE estimates across five strata with mean alcohol intakes from 4-371 g/week (appendix p37-38).Although LACE estimates varied when alcohol was not log-transformed, or using the residual method to define strata, these factors may cause bias, particularly for alcohol intake which has an irregular distribution. 24 Discussion In this large prospective study of Chinese adults, using a strong genetic instrument to predict alcohol intake, we demonstrated that genotype-predicted alcohol intake was associated with higher risks of mortality from CVD, particularly stroke, certain cancers, liver diseases, and all-causes.In contrast to the J-shaped associations seen in conventional observational analyses, there was no genetic evidence for a protective effect of moderate drinking for major causes of death, including stroke and IHD, or overall mortality.For stroke, mortality risks increased linearly with amount of genotype-predicted alcohol intake, while for IHD mortality, there was a non-significant positive trend.Moreover, analyses among Chinese women, who had very low intakes of alcohol, showed that the excess mortality hazards among men were likely to be chiefly due to alcohol itself, rather than to genetic pleiotropy. Over the past several decades, numerous prospective studies have reported the lowest mortality risks among moderate drinkers (i.e., 1-2 drinks per day), driven mainly by CVD deaths, in particular IHD. 5,6,9,25In a combined analysis of 83 prospective studies, involving mainly Western populations and ~48,000 deaths, the adjusted all-cause mortality risks were higher in ex-and non-drinkers and heavier drinkers, compared with moderate drinkers, and among current drinkers, risks did not increase until a threshold of ~2 drinks/day. 5While stroke mortality increased with higher alcohol intake, associations with IHD were less clear, with potentially different patterns for fatal and non-fatal events. 
5 In the present study, with ~20,000 deaths in Chinese men, we found similar lower risks among moderate drinkers, for mortality overall, and for most major causes, including IHD and stroke, despite rigorous approaches to control for reverse causation and residual confounding. Among male drinkers, however, there were continuous positive associations with major causes of death, apart from respiratory diseases, even at lower intake levels, with no evidence of a threshold below which alcohol was unrelated to risk. In recent years, MR has been used to evaluate the likely causal relevance of alcohol for different diseases, but although a few previous MR studies have reported higher risks of all-cause mortality with alcohol intake, they did not assess cause-specific mortality, or evaluate causal relevance at different levels of intake.12,14,26 A study including 13,700 deaths in UK Biobank reported higher risks of all-cause mortality associated with each additional drink/day, using ADH1B-rs1229984 (OR 1.44, 95% CI 1.09-1.90) or a 25-SNP score (1.31, 1.08-1.59).12 A study in Australian men with 1,329 deaths reported 47% higher all-cause mortality risk for ADH1B-rs1229984 GG compared with GA/AA genotypes (who drank less).26 In the present genetic analyses, with 12,939 deaths in men, there was 17% (10-24%) higher risk of all-cause mortality for ADH1B-rs1229984 GG compared with GA genotypes. In East Asians, where the common ALDH2-rs671 variant is a strong determinant of alcohol intake, previous studies, including CKB, have assessed the causal relevance of alcohol in incident risk of CVD, cancer, and other diseases.4,13,27,28 For cause-specific mortality, however, evidence from prospective studies is limited. A study with 2037 deaths in Chinese men reported a nominal trend for higher all-cause mortality with alcohol intake predicted by ALDH2-rs671.14 In Biobank Japan participants (31,403 deaths) both ALDH2-rs671 and ADH1B-rs1229984 A alleles were weakly associated with lower all-cause mortality, but the findings were adjusted for alcohol so causal relevance could not be properly evaluated.29 In the present MR study of ~23,000 deaths (~13,000 in men), with a genetic instrument that predicted a 60-fold difference in mean alcohol intake in men, we found a linear dose-response causal association of alcohol intake with risks of death from all-causes, CVD (particularly stroke), certain cancers (e.g., oesophageal) and liver diseases, consistent with well-established hazards considered by WHO to be alcohol-related.2 For IHD mortality, there was no genetic evidence of any apparent protective effects of moderate drinking; if anything there was a positive trend towards higher risks with alcohol intake, which differs somewhat from the null association with non-fatal IHD.4,13
Given the very low alcohol consumption among women in the study, there was a unique opportunity to assess pleiotropy of the genetic variants, which provided strong support that the excess risks for CVD, certain cancers, liver and overall deaths in men were due to alcohol itself. Although the ALDH2/ADH1B instrument had inverse associations in women for some outcomes, these were modest, and if anything would have attenuated the genetic associations in men towards the null. For causes pre-defined as unrelated to alcohol, the null genetic associations in men, in contrast to positive associations with self-reported alcohol intake, indicate that the genetic approach is robust to confounding. Moreover, the lower genetic risk estimates, compared with the conventional dose-response estimates, also suggest potential uncontrolled residual confounding in the conventional analyses. Estimation of the alcohol-attributable disease burden generally uses evidence from observational studies which may not always reflect causal associations (e.g. the apparently lower risks of CVD with moderate drinking), and large-scale randomised trial evidence is unavailable.1,3 We demonstrate that alcohol itself is likely to be causally associated with deaths from several major causes in a linear and graded manner, with no apparent protective effects of moderate drinking for major causes of death, including CVD. Based on the ~7% excess risks for overall mortality per 100 g/week genotype-predicted alcohol intake, and the reported mean alcohol intake levels among men in the study, we estimate that alcohol drinking accounted for ~7-8% of male deaths in this Chinese population. This is somewhat lower than that reported by other studies in China (e.g. ~12% of male deaths at age 40-70 years in the 2016 GBD report).1,16 In addition to differences in relative risk estimates, sex-, region- and age-specific drinking levels, and the proportions of deaths from different causes in different settings, could greatly affect the estimation of alcohol-attributed mortality in China (and elsewhere).3 Our study has several strengths, including a large number of deaths, use of strong genetic instruments, and ability to assess genetic pleiotropy. However, it also has limitations. First, we lacked statistical power to study the effects of alcohol on less frequent causes of death (e.g. tuberculosis), causes only affecting women (e.g. breast cancer), or causes such as injuries which may relate to alcohol differently among younger people or in different social contexts.1,3 Second, our cohort study may have recruited disproportionately fewer heavy drinkers, or more healthy people who had survived to middle-age, leading to potential selection biases. Third, we did not assess associations of longitudinal drinking measurements with cause-specific mortality. Fourth, using the genetic methods available, we could not assess the causal relevance of drinking patterns (e.g. heavy drinking episodes or consumption with meals), and beverage types (e.g. wine compared with spirits) for cause-specific mortality. Finally, the causal estimates varied somewhat by the methodology used, and were lower than the estimates in conventional analyses. However, this variation was small, and different methods, including use of an alternative polygenic score, gave generally consistent findings.
This study has shown that alcohol use uniformly increases the risks of death overall, and from major causes including CVD, certain cancers and liver diseases, among Chinese men, with no evidence of protection conferred by moderate alcohol intake.Genetic evidence about the causal relevance of alcohol consumption for mortality from different causes, in populations of diverse ancestry and demography, can improve the estimation of the global harms of alcohol use.Evidence on the harms of alcohol use is important to inform and support public health strategies to reduce population levels of alcohol consumption.This has started to be reflected in policy changes in some countries, for example, Canada has recently introduced guidance for low-risk drinking at a threshold of 1-2 drinks/week, 30 Research in context Evidence before this study Moderate alcohol intake has been associated with lower risks of mortality overall and from certain specific diseases, in particular, IHD.However, these associations may be largely non-causal as conventional observational studies of alcohol use are susceptible to bias from reverse causation and residual confounding.Genetic evidence from Mendelian randomisation studies, in particular using the ALDH2-rs671 and ADH1B-rs1229984 variants which strongly affect alcohol intake and are common in East Asian populations, can help assess the causal relevance of alcohol intake for cause-specific mortality. The genetic evidence on alcohol consumption and mortality, was ascertained by searching PubMed from database inception to 25 February 2023 using the following search terms (title/abstract): ((Alcohol AND Mendelian) or (ALDH2 or ADH1B or rs671 or rs1229984 or aldehyde dehydrogenase or alcohol dehydrogenase)) AND (mortality or death or fatal), and reviewing bibliographies within the identified publications. Two previous MR studies of alcohol and mortality in European ancestry populations, and one in Chinese men, reported that higher alcohol intake was associated with higher risks of all-cause mortality.However, these studies did not assess causal relevance across a wide range of alcohol intakes, and did not evaluate effects on cause-specific mortality. 
Added value of this study The present prospective study used both conventional and genetic approaches within the same population.The genetic analyses minimised artefacts of confounding and reverse causation, and assessed potential causal relevance across a wide range of alcohol levels, from negligible, to moderate and heavy intakes.Among Chinese men, conventional observational analyses demonstrated characteristic J-shaped associations of self-reported alcohol intake categories with overall and cause-specific mortality, with highest risks among ex-, non-, and heavy drinkers, and lowest risks among moderate drinkers, consistent with findings from similar studies in Western populations.Genetic analyses, using two genetic variants that predicted a 60-fold difference in mean intake from 4 g/week in the lowest to 255 g/week in the highest category, showed that higher alcohol intake was associated with a linear dose-response increase in risks of death overall and from certain cancers, CVD and liver diseases.There were no genetic associations with respiratory or non-medical (mainly accidents and injuries) causes of death.There was no genetic evidence that moderate alcohol intake (i.e.10-20 g/day) had substantial protective effects for cause-specific or overall mortality, including for IHD deaths.In separate genetic analyses using a polygenic score to predict alcohol intake, there were similar and apparently linear associations with mortality overall, and from alcohol-related cancers and CVD, across different alcohol intake levels. Alcohol intake was extremely low among women in the study, and the genetic variants had little effect on mortality overall or from specific causes, suggesting that the higher risks in men were chiefly mediated by alcohol, rather than by any pleiotropic effects of the genotypes studied. 
Implications of all the available evidence Genetic studies, in East Asian ancestry populations in particular, have helped to reliably clarify the causal relevance of alcohol intake with mortality.Although this question has been extensively studied using conventional observational approaches, these methods have been unable to fully account for biases.The genetic evidence provides strong support for causal harmful effects of alcohol use with risks of deaths from CVD, cancer, liver disease and all-causes.There is no genetic evidence of any beneficial effects of moderate drinking compared with not drinking, for any causes of death, including CVD.These genetic studies which assess causal relevance have improved our understanding of the adverse effects of alcohol use on mortality, particularly at lower intake levels.This can improve estimation of the regional and global burden of alcohol use, and inform public health policies to address the risks of moderate as well as heavier drinking.Conventional epidemiological analyses relate self-reported drinking patterns at baseline to mortality from major causes (all major causes are shown except for infectious diseases where numbers of deaths were lower) and all-causes.Current drinkers with the lowest mean alcohol intake are the reference group.The black squares represent findings from the main model adjusted for age-at-risk, area, education, household income, smoking, physical activity, and fresh fruit intake, with exclusion of participants with prior chronic disease.The HRs for current drinkers are plotted against usual alcohol intake and a weighted linear regression through the plotted estimates gives the HR (95% CI) per 100 g/week (~1-2 drinks/day, assuming 1 drink contains 10g alcohol).The grey squares represent findings from sensitivity analysis which further exclude the first five years of follow-up.Genetic epidemiological analyses relate mean alcohol intake in six categories of genotype-predicted intake to mortality from major causes.The lowest mean intake group is the reference, and analyses are adjusted for age-at-risk, area and genomic national principal components.HRs are plotted against the mean alcohol intake in each category.The HR (95% CI) per 100 g/week is the inverse-variance-weighted mean of a weighted linear regression through the plotted estimates within each study area, adjusted for age-at-risk, and genomic regional principal components.The HR (95% CI) across six genetic categories in women applied the mean male intakes for each category, and the heterogeneity of effects was compared between men and women, to assess pleiotropy.The HR is plotted on a log scale.Each box represents HR with the area inversely proportional to the variance of the group-specific log hazard within each subplot.The vertical lines indicate group-specific 95% CIs.HR: hazard ratio; CI, confidence interval. Figure 1 . Figure 1.Conventional and genetic associations of alcohol intake with major cause-specific and all-cause mortality, in men Figure 2 . Figure 2. Conventional and genetic associations of alcohol intake with mortality from aggregated cancers and cardiovascular disease types, in menAlcohol-related cancers: Lip, oral cavity, pharynx, larynx, oesophagus, liver, colon-rectum, and female breast, defined as related to alcohol by the International Agency for Cancer Research (IARC).Conventions as Figure2. 
/d, metabolic equivalent of task per hour per day; SD, standard deviation; y/yr, yuan/year Means and percentages are adjusted for the age and study area structure of the CKB population for the four drinking groups, and for the CKB drinker population for the weekly intake groups, using direct standardisation separately by sex. a 4+ days/week; b Chronic diseases included self-reported history of coronary heart disease, stroke, transient ischaemic attack, diabetes, tuberculosis, cirrhosis, hepatitis, rheumatoid arthritis, peptic ulcer, emphysema/chronic bronchitis, gallstone/gallbladder disease, rheumatic heart disease, and kidney disease. and new evidence from the present study may help accelerate the policy changes in other countries.
2023-11-23T16:17:53.175Z
2023-11-21T00:00:00.000
{ "year": 2023, "sha1": "6eef3961cd575edfe23e57f003affa7ccffe1b6c", "oa_license": "CCBY", "oa_url": "http://www.thelancet.com/article/S2468266723002177/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f6dfa7e308c63afc940ab5e09241c4a0f2dc1583", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
121379615
pes2o/s2orc
v3-fos-license
Characterisation of an inhomogeneously irradiated microstrip detector using a fine spot infrared laser A prototype silicon microstrip detector for the LHCb vertex locator (VELO) has been partially irradiated using a 24 GeV/c proton beam at the CERN-PS accelerator. The detector possesses a radial strip geometry designed to measure the azimuthal coordinate (Phi) of tracks within the VELO. The peak fluence received by the detector was measured to be 4.6 × 10 14 p/cm 2 though the non-uniform nature of the exposure left part of the detector unirradiated. The inhomogeneous irradiation introduced a damage profile in the detector approximating to that expected in the VELO. High irradiation gradients are important to study as they can modify the electric field within the silicon. Of special interest are changes in the component of the electric field parallel to the strip plane but perpendicular to the strips which lead to systematic shifts in the reconstructed cluster position. If these (flux and position dependent) shifts are sufficiently large they could contribute to a degraded spatial resolution of the detector. In order to quantify these effects a precise fine light spot infrared laser was used to investigate the charge collection properties of the sensor. Particular attention was devoted to the regions where a high gradient of the fluence introduced a large gradient in the effective local space charge. The results reported below place limits on the “distortions” expected in the VELO due to non-uniform irradiation. I. Introduction The silicon detectors for the VELO tracker [1] are designed to provide the azimuthal (phi measuring sensor) and radial (R measuring sensor) coordinates [2]. An LHCb Phi measuring detector was non-uniformly irradiated, with 24 GeV/c protons in the CERN-PS/T7 experimental area, to study the effects of an inhomogeneous damage profile in the detector bulk. To study the position dependent properties (e.g. the depletion voltage and charge collection) of the irradiated sensor a precise method of injecting electron-hole pairs is necessary. An infrared laser system has been built that allows the sensor to be scanned using a fine beam. The results of the study on a partially irradiated phi-type detector are presented below. II. LHCb prototype phi detector The LHCb prototype phi measuring sensors are 300 µm thick, p-strip in n-bulk silicon, with a semicircular shape covering 182 degrees. The sensor masks were designed at the University of Liverpool and fabricated by Micron Semiconductor [3] using an oxygen enriched (density ~2×10 17 cm -3 ) [8] FZ silicon 6" wafer. The sensors are divided into inner and outer radial sections each containing 1024 strips (Fig. 1). The inner strip and outer strips are collinear and are AC coupled to the p-implants on a "first" metal layer. The outer strips have bond-pads at the outer radius of the sensor. The inner strips are routed to the bonding pads, also located at the outer radius of the sensor, via metal strips. These metal routing strips are fabricated on a "second" metal layer and are insulated from the first metal layer by a 3.7-4 µm thick oxide layer. The routing lines from the inner section run in between the strips in the outer section. Some dimensions of the detector geometry are given below. • The bonding pads have a pitch of 62 µm. To enable easier bond the pads for the inner strips are interleaved and staggered with those for the outer ones so that the effective pitch is 124 µm. 
• The pitch of the innermost end of an outer strip is ~55.5 µm. • The pitch of the innermost and outermost ends of an inner strip are ~24.4 µm and ~55.4 µm respectively. • The strip width increases with the radius. The width of the innermost end of an inner strip is ~12 µm and the outermost end is ~ 16.7 µm. The width of the innermost end of an outer strip is ~16.8 µm and the outermost end is ~ 37 µm. • The detector is also divided in 8 sectors, each one with bonding pads for the 128 inner and 128 outer strips. III. Inhomogeneous irradiation of the Phi detector A schematic of the set-up used to irradiate the Phi detector with 24 GeV/c protons in the CERN-PS East Hall is shown in Fig. 2. A holder containing the detector was mounted in a motorised carrier (Shuttle) in the IRRAD-1 facility [4]. The Shuttle allows positioning of the detector in the beam with the beam centre aligned as shown in Fig. 2. The beam has a nearly Gaussian profile with a FWHM of approximately 2 cm. Fig. 3 shows the fluence profile as measured using the activated aluminium foil technique [5] during the irradiation. The error on the absolute magnitude of the fluence is about 10%. The sensor position relative to the beam centre is known to an accuracy of about 2mm. The radiation damage introduces changes in the detector reverse current and effective space charge (N eff ). The reverse current and, for high doses, N eff increase proportionally to the fluence. As a consequence of the beam profile, the detector N eff has a nearly Gaussian profile across the device. The detector was irradiated during ~14 hours between the 7 th and the 8 th of October 2000. The temperature of the irradiation area was about 32 o C. After irradiation the detector was kept at ~-20 o C to freeze the annealing process, until it was sent to Liverpool for measurement. During transport (~48 h) the temperature was not monitored, but it is safe to assume that it never exceeded 20 o C. On arrival at Liverpool the detector was stored at ~-25 o C. The detector was removed from cold storage and glued onto a support (necessary for bonding) and kept in a room at 25 o C for 12 hours to allow curing of the glue. The equivalent annealing time corresponds to 5 days at room temperature (20 o C in Oct). a. The detector support To study the various sectors of the detector using a single read-out chip with 128 channels, a dedicated rebondable support, an infrared laser system and a precise x-y table have been set-up. The support (Fig. 4) has been designed to allow the sensor to be rotated hence enabling each sector to be bonded in turn. A rebondable fan-in extension was added to permit multiple bonding to the chip fan-in. A single chip with 128 channels was used to read-out the outer radial section. In order to study the effect of the routing lines (from the inner radial section) on the electrical properties of the detector, two sets of read-out channels (generally 10) at both sides of each sector studied, were bonded to the outer strips and the intermediate routing lines alternatively. All the remaining channels were bonded to the outer strips only, leaving the routing lines floating. b. The read-out system The sensor was read-out by the SCT128 LHC speed electronics [7]. The output of the chip is a data stream of 128 channels divided into 25 ns time bins containing single channel information. This output was acquired by the LeCroy LC574AL 1GHz oscilloscope and averaged over 1000 sweeps to record the height of the signal. 
This averaging technique strongly suppresses the effect of electronic noise. The read-out speed was 40 MHz and the peaking time about 25 ns. The timing of the readout relative to the laser pulse was adjusted to optimise the signal size. The signal was recorded for different trigger delay times, allowing the shape of the signal to be deduced. This shape is compared to the design pulse shape in Fig. 5. An asymmetry due to the read-out system was found. The time "bin" that follows the hit channel exhibits a negative signal, shown in Fig. 6, where the pulse is injected every four channels using the internal calibration of the SCT128 chip. Two heights of calibration pulse were injected and, in both cases, a large (10-15%) negative signal was observed in the adjacent time bin. A correction for this effect has been applied. c. Infrared laser set-up and mechanics An infrared diode laser with 1060 nm wavelength was driven by an external pulse generator to deliver short (< 5 ns FWHM, Fig. 7) pulses. The light was guided by a single mode optical fibre to a light splitter with three output lines delivering about 60%, 20% and 20% of the input intensity respectively. The 60% output was connected via a single mode optical fibre and a coupler to a TTI TIA-950 Optical Electrical converter connected to the oscilloscope. This was used to monitor the stability of the laser power emission. A 6 µm diameter core fibre, terminated with an optical focuser of 12 mm focal length, was connected to one of the 20% output lines. The light beam had a slightly elliptical profile with full-width-half-maxima of 7 µm and 6.6 µm, and widths of 13.9 µm and 12.9 µm at 13% of the maximum peak intensity on the major and minor axes. The laser power output was adjusted to produce a signal corresponding to 3 minimum ionising particles in the sensor. The sensor was mounted on an x-y table perpendicular to the beam (Fig. 8). Two micromanipulators allowed precise movement with a resolution of two microns. a. Full depletion voltage (V fd ) profile The charge collection efficiency (CCE) allows the extraction of the local V fd [9]. V fd is proportional to N eff : V fd = q 0 w 2 N eff / (2ε Si ) (1), where q 0 is the electron charge, w is the thickness of the detector and ε Si is the dielectric constant of silicon. The collected (cluster) charge was defined to be the sum of the charge collected on a strip and the two neighbouring strips (and routing lines where they were connected) on each side. An example of CCE as a function of bias voltage for a low irradiation region (<5 × 10 13 p/cm 2 ) of the sensor is shown in Fig. 9a. The CCE curves are normalised to the maximum charge collected with strong over-depletion (500 V). The collected charge can be seen to rise approximately linearly at both very low and high voltages, with a smooth transition between these two regions of linear behaviour. At low voltages the charge collected rises rapidly, as the depleted depth increases, and at high voltages the sensor is in a plateau region corresponding to full depletion. The two linear regions were fitted with straight lines, which were extrapolated into the transition region. The intersection of the lines characterizes the centre of the transition from one behaviour to the other and this was taken as an estimate of V fd . The low irradiation region (Figure 9(a)) may be compared with a region that has received a high dose (Figure 9(b)) (about 4.4 × 10 14 p/cm 2 ). It can be seen that the irradiated region has a higher V fd and that the plateau region still exhibits a significant rising trend.
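A minimal sketch of this two-straight-line estimate is given below; the split of the scan points into a "rising" and a "plateau" subset is an illustrative assumption, whereas in the measurement the choice of points was varied to estimate the error on V fd.

```python
import numpy as np

def estimate_vfd(bias_v, cce, n_low=4, n_high=4):
    """Two-straight-line estimate of the full depletion voltage from a
    CCE-vs-bias scan: fit the lowest n_low points (rising region) and the
    highest n_high points (plateau region), then take the intersection."""
    bias_v, cce = np.asarray(bias_v, float), np.asarray(cce, float)
    order = np.argsort(bias_v)
    bias_v, cce = bias_v[order], cce[order]
    m1, c1 = np.polyfit(bias_v[:n_low], cce[:n_low], 1)      # rising region
    m2, c2 = np.polyfit(bias_v[-n_high:], cce[-n_high:], 1)  # plateau region
    return (c2 - c1) / (m1 - m2)                             # intersection voltage

# e.g. estimate_vfd([10, 20, 40, 60, 100, 150, 250, 350, 450, 500],
#                   [0.1, 0.2, 0.4, 0.55, 0.8, 0.9, 0.96, 0.98, 0.99, 1.0])
```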
For this high-dose region the maximum bias limited the number of measurements in the plateau region. This led to a larger uncertainty in the evaluation of V fd . The error on V fd was estimated using a different choice of points for the two trend lines. The CCE is arbitrarily normalized at the maximum of the CCE-voltage curve. Further studies are being performed to establish the absolute amount of charge collected at a specific bias voltage. Fig. 10 shows an example of charge collection efficiency (CCE) curves in three different positions and estimated radiation fluences (<5 × 10 13 , 1.2×10 14 , 3.4×10 14 p/cm 2 ) of the detector. Fig. 11 shows the map of V fd measured across the whole detector at the outer end of the outer radial section. The shape of V fd closely follows the radiation damage profile. Between strip #570 and #670 the gradient of V fd (and therefore of N eff ) is positive, while between strip #690 and #760 it is negative. The transverse component of the electric field generated by the inhomogeneity of the irradiation would thus be expected to point in opposite directions in these two regions. The comparison of the CCE properties in these two regions enables the study of the effects of a possible transverse electric field. b. Noise In the low depletion voltage region (low fluence), noise due to micro-discharges [10] may be observed. In the more irradiated area the bias could be raised above 400 volts without any increase in the noise. Fig. 12 shows the noise measured in outer radial sector 5 of the sensor (Fig. 1) as a function of voltage. The noise is not flat across the chip because of the poor condition of the fan-in extension. This measurement was not intended to determine the absolute noise level of the detector, but to give evidence of micro-discharges above 200 volts in the low fluence area (<1.0×10 14 ). With the same applied bias voltage, the electric field is much higher in the low V fd region than in the high one, explaining the shape of the measured noise. In Fig. 12(c) and (d) the noise values for the strips between #549 and #559 (in the low fluence region) are missing because of an automatic masking, within the analysis program, of channels with extreme noise. c. Charge sharing between adjacent strips A detailed study of the charge division between adjacent strips (called Left and Right for simplicity) has been carried out in different areas of the detector. The charge division has been evaluated using the pulse height as a function of the local coordinate x, defined as the distance in microns of the centre of the light spot from the centre of the left (L) strip. The following algorithm has been used to evaluate the ratio (η) of the charge seen by the right strip (R) to the total charge: η = H R / (H L + H R ), where H L and H R are the heights of the signals seen by the left and right strips respectively. The read-out system introduces a distortion in the time bin that follows the signal bin, as described above. Fig. 13 shows the charge sharing as a function of x for the two different bonding schemes. In Fig. 13a two adjacent strips were bonded to adjacent read-out channels so that the signal of the right strip is in the time bin that follows the left strip signal. In this case an asymmetry is found in the η function: no charge is seen on the R strip at the local coordinates 30 and 40 µm. This is because the over-shoot compensates the small charge picked up by strip R. In Fig.
13b the intermediate routing line is also bonded to the read out chip, in the time bin that follows the signal on the left strip. In this second case the over-shoot does not affect the signal in the right strip, which is two time bins away from the strip signal. The over-shoot affects the routing line and does not disturb the charge seen by strip R. In this second case the symmetry of the charge-sharing scan across two strips is very good. This allows the application of a correction for the η scan performed between strips measured leaving the intermediate routing line floating. Figures from 14 to 19 show the η scan as measured in the outer end of the outer radial section. Fig. 14 shows the η scan for a non-irradiated area (strips #244-245) (<5 × 10 13 p/cm 2 ). Fig. 15 is taken from strips located in the area where the gradient of V fd is small (#490-491) (<5 × 10 13 p/cm 2 ) and the depletion voltage is about 50 Volts. Fig. 16 is from an area where the gradient of V fd is increasing from strip L towards strip R(#634-635)( 3.9× 10 14 p/cm 2 ). Fig. 17 is from strips located in the area where the gradient of V fd is small but with a high radiation level (#690-691) ( 4.4× 10 14 p/cm 2 ). Fig. 18 shows the area where the gradient is decreasing from strip L towards strip R (#760-761) )( 2.2× 10 14 p/cm 2 ) . There is no evidence of distortion due to a transverse electric field in the region with maximum gradient of V fd . Fig. 19 shows the comparison of the charge sharing scan in the non-irradiated area with the two highly irradiated areas with opposite V fd gradient. The three sets of data look similar with no evidence of an asymmetry leading to the conclusion that the effects of the transverse electric fields produce distortions of less than 2 microns in the reconstructed cluster position for fluence gradients of ∼4× 10 14 p/cm 3 . This is approximately the spatial resolution expected from this procedure. More precise measurements are planned to reduce this uncertainty. The strips #760-761 have been studied as a function of radial position in the outer radial section. Fig. 18, 20 and 21 show the η scan in the innermost, intermediate and outer part of the strip, where the pitch is 78 µm, 95 µm and 118 µm respectively. The irradiation was not collinear with the strips and there is a gradient along the strips as well as across them. Fig. 22 shows the CCE curves measured in the three positions, clearly indicating a difference in V fd . This explains the difference in the different bias voltage behaviour of the η scan for the different regions. In Fig. 20 and 21 the detector is non-depleted at 150 and 200 volts (positions 1 and 2 of Fig. 22) and the data are well separated from the data at 400 volts, when the detector is fully depleted. In Fig. 18 the detector is depleted at 200 volts (Pos. 3 of Fig. 22) and the set of data taken at 200 and 400 volts of applied bias almost superimpose each other. In the η scan the solid line represents the curve for ideal resolution. It is apparent that the charge sharing between neighbour strips in the irradiated part of the detector is much closer to the ideal resolution when the detector is biased below depletion. The presence of a non-depleted layer next to the read out strips enhances charge division, as it is shown in Fig. 23, where the laser was focused between strips #740-741, close to the #741. A significant fraction of the charge is distributed over 4 strips at low bias voltages and decreases significantly by increasing the bias. 
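For reference, the η ratio defined above, together with a rough version of the read-out overshoot correction, can be sketched as follows; the 12% correction factor is only a placeholder within the quoted 10-15% range, since the exact correction applied in the analysis is not specified here.

```python
import numpy as np

def eta(h_left, h_right):
    """Charge-sharing ratio eta = H_R / (H_L + H_R), computed from the averaged
    pulse heights on the left and right strips."""
    h_left, h_right = np.asarray(h_left, float), np.asarray(h_right, float)
    return h_right / (h_left + h_right)

def corrected_height(h_strip, h_previous_bin, fraction=0.12):
    """Rough sketch of the read-out correction: the time bin following a hit
    shows a negative signal of roughly 10-15% of the preceding bin, so that
    fraction of the previous bin's height is added back.  The value 0.12 is an
    assumed placeholder, not the factor used in the measurement."""
    return np.asarray(h_strip, float) + fraction * np.asarray(h_previous_bin, float)
```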
A similar situation is not found in non-irradiated detectors, where the charge division for bias voltages below and above full depletion is more similar (Fig. 14). Data were taken in a region with wide inter-strip pitch and intermediate routing lines. The presence of the 17 µm wide routing line prevented an accurate study of the middle of the inter-strip space. A further study to be carried out in the outermost part of the inner sector (strip pitch ~50 µm and no routing line) will give a more precise measurement of the η function. d. Effect of the routing line on the collected charge The non-uniform irradiation permits the study of the effect of the routing lines on the CCE as a function of the dose. After the type inversion of the detector bulk from n-type to p-type because of the radiation damage, in the highly irradiated regions the junction side migrates from the back plane of the detector towards the read-out strips. When the detector is under-depleted, a low electric field region separates the active volume from the read-out strips (double junction effects [11] are of lesser importance and are not addressed here). When the detector has not type inverted, the p-n junction, and therefore the high field region, is on the side of the implanted p-type strips. The electric field distribution influences the charge trapping as well as the signal shape and duration. In the case of the particular geometry of the LHCb Phi detector, a small signal could be induced on the routing lines in the outer segment (Fig. 5), which is dependent on the electric field distribution. The fraction of the signal measured on the routing lines has been studied as a function of the irradiation dose and of the bias applied to the detector. In the non-irradiated region of the detector no charge was observed on the routing line at any bias voltage. Some loss of charge to the routing line has been observed in the irradiated and type inverted region, as shown in Fig. 24. The highest charge observed on the routing line coincides with the highest fluence region. In general, the charge in the routing line decreases strongly with bias, and becomes negligible for bias voltages above V_fd. The ratio of the width of the routing line to the width of the read-out strips can influence the amount of charge collected by the routing lines, because of the high relative capacitance for high ratios. The width of the routing line is constant (17 µm) while the width of the outer strips increases with radius, as described in section II. Fig. 25 shows the ratio of the charge collected by the routing line to the total charge for five radial positions from the inner to the outer part of the detector. The strip width, as measured from the deficit of signal caused by the metal read-out strip screening the laser light, was 16, 20, 25, 30 and 35 µm for positions 1 to 5 respectively. The effect of the lower depletion voltage in the outer part of the strip is clearly shown by the earlier decrease of the signal on the routing line for the outer positions. The effect of the relative width of the routing line to the strip, which varies from ~1 to ~0.5 going outwards, is not evident. (This would appear as a trend in the maximum sharing observed as a function of voltage.) VI. Conclusions The infrared laser set-up is an effective tool for studying the effects of inhomogeneous irradiation. The measurement of the charge collection properties across the irradiated LHCb-Phi detector allowed the irradiation profile to be reconstructed.
The study of the charge sharing between neighbouring strips as a function of the local position shows no evidence of distortion of the resolution as a consequence of a distorted electric field resulting from inhomogeneous irradiation. Evidence has been found of increased charge sharing in the irradiated and type inverted part of the detector when biased below V_fd. This effect is expected because of the presence of the non-depleted bulk next to the read-out strip. The enhanced resolution implied by the increased charge sharing is not necessarily beneficial. Operating (p-in-n) sensors under-depleted implies a large reduction in charge collection efficiency. Post-irradiation it is important to operate the detector with high charge collection to optimise the signal-to-noise ratio. An alternative choice of the diode structure, such as n-strips in n-bulk (n-in-n) or n-strips in p-bulk, would greatly benefit the operation of irradiated detectors [12][13][14]. The measured shape of the noise at high bias voltages on the irradiated sector of the detector (where V_fd varies from 50 V to 280 V) is compatible with noise induced by micro-discharges. The micro-discharge effect depends on the strength of the electric field and not on the applied bias. With the same applied bias voltage the electric field is higher for lower V_fd. In this region the noise is inversely proportional to V_fd, confirming the hypothesis on the origin of the noise. The unique geometry of the LHCb-Phi detectors includes metal lines to route the innermost strips to the bonding pads located on the outer end of the detector. The metal lines run symmetrically between the outer strips. The effect of these metal lines on the charge collection has been studied. When the detector is type inverted and biased below depletion, a fraction of the total charge is seen by the routing line and therefore lost. This effect is suppressed when the detector is depleted. Moreover, this effect is not seen in the non-irradiated part, suggesting that it is correlated with the presence of the non-depleted layer next to the strips. In the case of read-out from the junction side, as for n-in-n or n-in-p detectors, no charge loss in the routing lines is predicted, improving the performance for non-depleted operation compared with p-in-n detectors.
2019-04-19T13:05:36.239Z
2003-10-11T00:00:00.000
{ "year": 2003, "sha1": "c196c0dfa34a66209c8f02b0ff8a7ea537a79768", "oa_license": "CCBY", "oa_url": "http://cds.cern.ch/record/691530/files/lhcb-2001-053.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "b6986cab32b727beb154e37e04413688fa03675c", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
258958042
pes2o/s2orc
v3-fos-license
FRAUD RELATED TO EU FUNDS. THE CASE OF ROMANIA According to the latest PIF report on the protection of the financial interests of the European Union, in 2021 Romania reported to the European Commission fraud related to European projects worth 1.4 billion euros. The reported amount is quite impressive as it represents over 80% of the total amounts reported as being fraudulently obtained in 2021 by all member states, making Romania a true performer in this regard at the European level. Starting from these values, using descriptive statistics, the study analyzes the data reported by Romania in the last ten years, compared to the other EU member states, data extracted from the PIF Reports (2012-2021), trying to verify if Romania's status as a champion in terms of fraud with European funds is fully deserved or circumstantial. Although the numbers show that Romania is indeed a performer in this regard, both in terms of value and the number of reported cases, these values must be viewed in context, as they are influenced by a wide series of factors, including reporting errors, the capacity and willingness of member states to detect irregularities, as well as the particularities of the awarding procedures and contracting periods. Also, in the end, the paper presents some recommendations for strengthening efforts to combat the phenomenon of fraud related to EU funds in Romania. Introduction One of the major concerns of the European Union throughout its evolution has been the protection of its financial interests and the fight against fraud in this area.To this end, in order to assist in identifying, preventing, and detecting fraud against the EU's financial interests, the EU has adopted an appropriate regulatory framework, equipped itself with stronger analytical capacity, a centralized supervisory system and established an "early detection and exclusion system" regarding unreliable economic operators from EU funding.Additionally, every year, the Commission presents a report on the protection of the EU's financial interests (the "PIF report") to the European Parliament and the Council of Ministers.The most recent initiative is the launch of the European Public Prosecutor's Office (EPPO), which has been given the responsibility of investigating, prosecuting and bringing to trial offenses against the EU's financial interests (i.e.various types of fraud, VAT fraud with damage exceeding 10 million euros, money laundering, corruption etc.). For the 2021-2027 period the EU makes available for member states, a package of over 1.800 billion EUR, representing the largest recovery plan in Europe since the Marshall Plan.Faced with such sums, there is strong concern among all actors involved in managing and controlling EU funds regarding the actual level of fraud affecting these funds, with fears of an increase in the phenomenon in the near future. 
There are several estimates of the proportion of EU funding that has been affected by fraud and other criminal activities.Precise estimates are impossible due to the nature of fraud, which is illegal by definition and largely hidden.Comparative studies between member states regarding this aspect are also missing.The Commission did not have comprehensive information on the extent, nature and causes of fraud against the Union budget (European Court of Auditors, 2019a).Some estimates in this regard indicate that between 2.0 and 2.7 billion EUR of EU finances are lost annually due to organized crime.Given the EU budget for 2020 of 160 billion EUR, this amount represents between 1% and 1.6% of the EU budget (CSES, 2021). The data that appears in the PIF reports represent a valuable statistical source in this regard.Based on sophisticated reporting tools and procedures, they contribute to creating an image regarding the dimensions and evolution of the phenomenon each year. According to the latest PIF report, in 2021, Romania reported to the Commission fraud related to European projects worth 1.4 billion EUR.The reported amount represents over 80% of the total amounts reported as fraudulently obtained in 2021 by all member states, making Romania a true performer in this regard at the European level. In the absence of comparative studies about this aspect, using descriptive statistics, the study analyses the data reported in the last ten years, by all EU member states, trying to verify Romania's status as a champion in terms of fraud with European funds is fully deserved or circumstantial.The results show that Romania is indeed a performer in this regard, both in terms of the number of reported cases and their value.However, these values must still be viewed in context, being influenced by a whole series of factors, namely reporting errors, the capacity and willingness of other member states to detect these irregularities, as well as the peculiarities of procurement procedures and contracting periods. The rest of this paper is structured as follows: section 1 presents a short literature review of how the estimation of the level of fraud can be made, about its connection with corruption and regarding the perception of corruption and tax evasion in Romania; in section 2 we present the data used, as extracted from the 2012-2021 PIF reports and the methodology (descriptive statistics) used to get insights of the dimension of this phenomenon with a special focus on Romania; section 3 is dedicated to the analysis of the data that show that Romania is indeed a performer in this regard, both in terms of the number of reported cases and their value.The paper ends with the formulation of the final conclusion warning that these values must still be viewed in context, being influenced by a whole series of factors, namely reporting errors, the capacity and willingness of other member states to detect these irregularities, as well as the peculiarities of procurement procedures and contracting periods.At the same time are presented some recommendations for consolidating efforts to reduce the phenomenon of fraud related to EU funds in Romania. 
Literature review As already mentioned, the extent to which EU spending in member states is subject to the diversion of funds, either by organized crime or other perpetrators, has proved particularly difficult to quantify.Existing figures, which are found in official reports, are acknowledged to be conservative and underestimate the magnitude of the problem.On the other hand, it is highlighted the fact that the Covid-19 pandemic provided exceptional opportunities for bribery and corruption, especially in public procurement and healthcare (Dikmen and Çiçek, 2023;Bîzoi and Bîzoi, 2023).Also, Batrancea et al. (2023) found that crunching numbers could sometimes uncover corruption that would otherwise go unnoticed. However, perception and experience-based indicators like the Corruption Perceptions Index (CPI) and the World Bank Control of Corruption Index (WB-CCI) have been used to measure corruption, which is also a hidden crime.Given the strong link between fraud related to EU funds and corruption (Fazekas and King, 2018), the same indicators used to measure corruption were used to determine the real level of this type of fraud.Such evaluations are not carried out to replace official statistics, but rather to complement them, their accuracy being the subject of much criticism.It has been argued that perceptions may not be related to experience (Rose and Peiffer, 2012), they can be influenced by economic growth (Kurtz and Schrank, 2007) or the publicity of high-profile corruption cases (Golden and Picci, 2005).Moreover, it has been shown that these indicators vary very little over time, suggesting that they are too insensitive to change (Mungiu-Pippidi, 2011).It can be said that perceptions of grand corruption are even more uncertain than perceptions of everyday corruption, as experts and citizens have almost no direct experience of this type of corruption.As these indicators are derived mainly from non-representative surveys, distortions of representativeness and reflexivity (i.e., respondents influenced by previous and future measurements) are likely to be exaggerated by the small size of the sample (Golden and Picci, 2005). Other authors, recognizing the shortcomings of the above indicators, have embarked on developing objective tools based on directly observable behavioural indicators, which indicate likely corrupt behaviour.These studies analyse corruption in various contexts, such as elections and high-level politics or social services and redistributive policies.For example, Golden and Picci (2005) propose a new corruption measurement tool based on the difference between the amount of infrastructure and public expenditures for it.In addition, Olken (2007) uses independent engineers to review road projects and calculates the amount and value of missing inputs to determine corruption.Other authors use indicators such as the political connections of winning firms (Goldman, Rocholl and So, 2013) or the use of certain exceptional procedures. Several studies (Fazekas and Kocsis, 2020; Fazekas, Cingolani and Tóth, 2018; Fazekas, Tóth and King, 2016) have estimated the risk or level of corruption control in public procurement by using large volumes of administrative objective data from public procurement databases 1 .The solidity of this method was also confirmed by Roman, Popescu and Achim's study (2022a), as the level of fraud was better explained by an objective indicator than a perception and experience-based fraud indicator. 
The studies referring to Romania focus on the analysis of the phenomenon of corruption and tax evasion, as well as their determinants. According to Treisman (2000) and Paldam (2002), corruption is a "poverty disease" that will disappear as countries become richer. A high level of corruption is correlated with a low level of development and well-being (Achim and Borlea, 2020). This also applies in the case of Romania, where the study carried out by Duțulescu and Nișulescu-Ashrafzadeh (2016) identified, among the main causes of corruption, the low standard of living (compared to that of citizens of Western Europe), as well as a general attitude that proves to be rather permissive towards such behaviour. An 82-country study of how often voters tended to be bribed ranked Romania 39th, which is only slightly above average, slightly more often than Montenegro and Ethiopia and slightly less often than Indonesia and Iran. Romanian male voters were only slightly more likely to be offered a bribe than Romanian female voters (McGee and Petrides, 2023a). A related issue is how risky it is to take or receive a bribe. A study of 56 countries ranked Romania 23, meaning that in Romania taking or receiving a bribe was slightly less risky than in most other countries (McGee and Petrides, 2023b). A 52-country study on the prevalence of bribery ranked Romania 40, meaning that bribery is more prevalent in Romania than in most of the other countries in the study, slightly more prevalent than in Vietnam and Lebanon and slightly less prevalent than in Pakistan and Nigeria (McGee and Zhou, 2023). Footnote 1 (to the public procurement databases mentioned above): These are used to construct an innovative Corruption Risk Index (CRI), which is based on a deep understanding of the phenomenon of rent extraction, is derived exclusively from objective data describing fraudulent behaviour, allows coherent temporal comparisons within and between countries, and has a calculation methodology that can be replicated for many countries using pre-existing data. The method uses large volumes of data from large public procurement databases (Tenders Electronic Daily), data from trade registers, as well as financial and property data. Tenders Electronic Daily (TED) is the online version of the 'Supplement to the Official Journal' of the EU, dedicated to European public procurement. TED publishes 643 thousand procurement award notices a year, including 244 thousand calls for tenders, which are worth approximately €545 billion.
When asked whether bribery could ever be justifiable, Romanian women were significantly more opposed to bribery than Romanian men (McGee and Benk, 2023a).Although all social classes of Romanians had strong opposition to bribery, the lower middle class and working class showed the strongest opposition, while the lower class showed the least opposition (McGee and Benk, 2023b).The relationship between education and attitude toward the acceptability of bribery was linear.The more education a person had, the stronger the opposition to bribery for the Romanian sample (McGee and Benk, 2023c).Although opposition to bribery increased as income level increased, the differences in mean scores were not significant (McGee and Benk, 2023d).Age was not a significant demographic variable in the Romanian sample, meaning that individuals of all ages had about the same degree of opposition to bribery These studies basically find that, although tax evasion is generally considered to be unethical, there are cases where it has been justified, either in the theoretical or empirical literature.Some of the main reasons why people have justified tax evasion over the past few millennia are in cases where the government is corrupt or inefficient, or where people do not feel that they are receiving much in exchange for their tax payments.Other reasons have included the inability to pay or the case where the government engages in human rights abuses.Pardisi and McGee (2024a) examined the relationship between confidence in government and attitude toward tax evasion and found that people in the Romanian sample who did not have much confidence in government had less aversion to tax evasion than did those who held government in higher regard.A ranking of 88 countries on attitude toward the acceptability of tax evasion ranked Romania 65, indicating that it was less opposed to tax evasion than 64 other countries (Pardisi and McGee, 2024b). Another study found that Romanian men and women both had strong opposition to the acceptability of tax evasion.The difference in their mean scores was not significant (p = 0.8395) (Pardisi and McGee, 2024c).The Romanian working class and lower middle class had the strongest opposition to tax evasion (Pardisi and McGee, 2024d).The relationship between education level and attitude toward the acceptability of tax evasion was curvilinear.Although all three education levels showed strong opposition to tax evasion, those with a middle level education were significantly less opposed to tax evasion than those with either more or less education (Pardisi & McGee, 2024e).Another study found that Romanians in the lower income level were significantly less opposed to tax evasion than those in the middle and upper income levels (Pardisi and McGee, 2024f). The relationship between age and attitude toward tax evasion was found to be linear.The older groups were significantly more opposed to tax evasion than the younger groups (Pardisi & McGee, 2024g).Married Romanians were found to be significantly more opposed to tax evasion than were single Romanians (p < 0.0001).(Pardisi and McGee, 2024h).Urban and Rural Romanians were equally opposed to tax evasion (Pardisi & McGee, 2024i). 
Regarding the comparative analysis of the situation in the EU member states regarding the level of fraud affecting EU funds, studies are quite limited or focus on related aspects, such as the link between the level of absorption of European funds and subjective indicators of corruption or the performance of new member states in the process of absorbing these funds (Roman, Popescu, Achim, 2022b; Incaltarau, Pascariu and Surubaru, 2020; Achim and Borlea, 2015; Tosun, 2014). On the other hand, in the PIF reports, although data on member states are presented in parallel, there is no particular approach or special focus on one country or another.The Commission avoids making rankings between countries and specifies each time that the presented data may be influenced by a large number of variables. In this context, starting from the most recent available data, our paper seeks to analyze the particular case of Romania, in comparison with the other member states, trying to verify if Romania's status as a champion in terms of fraud with European funds is fully deserved or circumstantial. Research methodology In this paper the centre point of the analysis was investigating and understanding the official data reported by Member States on the number of cases of fraud related to EU funds, attempting to see which are the states that report the most fraud and what lies behind these numbers.The main purpose was to explore and evaluate what is Romania's position in this context. Using descriptive statics, we try to get a clear picture of the magnitude of the phenomenon, and implicitly of the main trends, analysing the data reported by the Member States on the reported fraudulent irregularities related to EU spending on agriculture and fisheries, cohesion policy and pre-accession policy, as shown in the 2012-2021 PIF reports.Unfortunately, due to the way, these reports have been compiled over the years, it has been difficult to obtain data on the problem analysed before 2012.We analyse the total number and the total amount of fraud (intentionally committed irregularities) reported by all Member States (EU28) over the period 2012-2019, and then in 2020-2021 by all Member States (EU27) without the United Kingdom, trying to see which the main trends are in the last years.Based on these values, we focused on the data reported by Romania over the past ten years, compared to other Member States, in an attempt to verify whether Romania's status as a champion in terms of fraud with European funds is fully deserved or circumstantial. 
The data were extracted from the annual reports on the protection of the EU's financial interests ("PIF" Reports). Each year, the Commission, in cooperation with the EU member states, submits a report to the European Parliament and the Council. The report presents the measures taken to protect the EU budget and to counter fraud and any other illegal activities affecting the financial interests of the EU. EU countries are obliged by law to report all irregularities - both fraudulent and non-fraudulent - to the European Commission, which then compiles the information in this annual report. An irregularity is a non-compliance with the EU rules and requirements connected to EU funds spending. Oftentimes irregularities are genuine errors, e.g. not filling out a form correctly, or not complying 100% with the tendering procedure. Fraud is an intentionally committed irregularity (an act or omission relating to the use or presentation of false, incorrect or incomplete statements or documents, or to non-disclosure of information in violation of a specific obligation) set off by a malicious intent. The report, being part of the Commission's policy of transparency for financial management, provides an in-depth analysis of the approaches, procedures and tools used by EU Member States in their fight against fraud, details the level of fraud on both the revenue and the expenditure side of the EU budget, helps to assess which areas are most at risk - thereby helping to better target action at both EU and national level - and follows up on the previous year's recommendations. This report is compiled mainly using data and information submitted by the EU Member States, given that they are on the frontline of managing and controlling 74% of EU expenditure and they collect the Traditional Own Resources (customs duties). The information available at the Commission level is also used.
To make it easier to report irregularities, a dedicated electronic system has been developed and put at the disposal of Member States and beneficiary countries: the Irregularity Management System (IMS).The IMS is operated within the Anti-Fraud Information System (AFIS) and is used by 35 countries.Member states, candidate countries and other non-EU countries have established a hierarchical reporting structure with different levels of responsibility.Approximately 700 reporting organizations with more than 3,000 IMS users are responsible for the timely reporting of irregularities.The reporting flow has different hierarchical levels and different roles within the same level to ensure multiple quality checks before reports are sent to the Commission.Reporting authorities provide information about who committed the irregularity/fraud (involved persons), the measure of support, such as the fund, program, project, and budget line, the financial impact (expenditures and irregular/fraudulent amount), how/when/where the irregularity/fraud was committed, the method of detecting the irregularity/fraud, administrative, judicial, or criminal follow-up/sanctions imposed.Also, behind this reporting system lies (even if not directly) the risk assessment tool ARACHNE, an integrated IT tool for data mining and data enrichment.ARACHNE establishes a comprehensive database of EU projects implemented under the Funds, provided by managing authorities and paying agencies, and enriches these data with publicly available information in order to identify, based on a set of risk indicators, the projects, beneficiaries, contracts and contractors which might be susceptible to risks of fraud, conflict of interest and irregularities.The tool provides highly valuable risk alerts 5 to enrich management verifications, but it does not supply any proof of error, irregularity or fraud.ARACHNE can increase the efficiency of project selection, and management checks and further strengthen fraud identification, prevention and detection. Results The graph below (Graph no. 
1) shows the evolution of the number and the amount of fraud reported by all Member States (EU28) over the period 2012-2020. There is a declining trend in their number in the period 2013-2019, while in 2020 there is a slight increase. As for the amounts affected by fraud, they stay below the limit of EUR 400 million per year, except for the years 2015 and 2018 (just in the middle of the 2014-2020 programming period), when they reach approximately EUR 550 million and EUR 1 billion respectively; in 2020 their value is approximately EUR 250 million, a decrease of about 30% from the previous year. The figures, although important, do not stand out. And maybe that is exactly why there should be some cause for concern. The information in the global fraud register created by the Chartered Institute of Public Finance and Accountancy, together with the accountancy firm Moore Stephens, suggests that the risk of fraud could be high in grant spending (which accounts for a big share of EU spending). This register is based on a global survey of over 150 accountancy and fraud risk professionals across 37 countries, in order to gauge the most serious risk areas across the globe. Respondents considered 18 different types of fraud and bribery risk, scoring them from 1 (lowest risk) to 5 (highest risk). Almost half (48%) of all respondents surveyed said that grant fraud posed a high or very high risk, putting it at number one on the register (European Court of Auditors, 2019a). Therefore, the reported figures do not seem to reflect the above-mentioned risks at all. According to the latest PIF Report on the protection of the financial interests of the European Union in 2021, at the level of the entire Union (EU27), a total of 466 cases of fraud related to European projects, with an estimated value of approximately 1.66 billion euros, were reported. Most of the reported cases come from Romania, namely 174 cases of fraud, which represents approximately 37% of the total cases reported in 2021 by all member states. It is followed at a great distance by Hungary (39), Slovakia (37) and Poland (36) (see Graph no. 2). Graph no. 2: Number of frauds reported in EU27 - 2021. Source: Own processing. Regarding the value of the amounts affected by fraud (Graph no. 3), Romania is once again in first place with 1.4 billion euros, which represents approximately 80% of the total reported at the European level, followed by Slovakia with 159 million euros and Portugal with 35 million euros. Graph no. 3: Value of reported fraud in EU27 - 2021 (in € million). Source: Own processing. Based on these values, we further analysed the data reported by Romania over the past ten years, as reflected in the PIF Reports, compared to other Member States, in an attempt to verify whether Romania's status as a champion in terms of fraud with European funds is fully deserved or circumstantial. Thus, we observed (Graph no. 4) that during the reference period Romania is in first place, leading in terms of the number of reported frauds, and in second place in terms of the amounts involved, competing with countries such as Slovakia and Poland (Graph no. 5). Graph no. 5: Value of fraud in EU28 - 2012-2020 (country-level analysis). Source: Own processing.
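As a minimal illustration of the descriptive-statistics step, the sketch below recomputes the country shares quoted above from the 2021 figures cited in this paper. The values are the approximate numbers quoted in the text (not a new extraction from the PIF report), and the script is only illustrative, not the processing actually used to produce the graphs.

```python
# Country shares of fraudulent irregularities reported in 2021, recomputed from
# the figures quoted in the text (approximate; amounts in EUR million).
eu_total_cases = 466
eu_total_amount = 1660.0   # ~EUR 1.66 billion

reported_2021 = {
    # country: (number of reported fraud cases, reported amount in EUR million)
    "Romania":  (174, 1400.0),
    "Hungary":  (39,  None),   # amount not quoted in the text
    "Slovakia": (37,  159.0),
    "Poland":   (36,  None),
    "Portugal": (None, 35.0),
}

for country, (cases, amount) in reported_2021.items():
    case_share = f"{100 * cases / eu_total_cases:.0f}% of cases" if cases else "-"
    value_share = f"{100 * amount / eu_total_amount:.0f}% of value" if amount else "-"
    print(f"{country:10s} {case_share:>15s} {value_share:>15s}")

# Romania comes out at ~37% of the cases and ~84% of the reported value,
# consistent with the "over 80%" share mentioned above.
```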
It can also be observed that both at the top and at the bottom of the ranking roughly the same countries appear, even if not in the same order. Thus, Romania, Poland and Italy compete for the top positions both in terms of the number of reported fraud cases and in terms of their value. In contrast, Luxembourg seems to be the country with the lowest number of reported fraud cases (only 2), followed by countries such as Finland, Malta, Belgium, Sweden, Austria, Cyprus, and Ireland. It is worth noting the peculiar case of Slovakia, which, although not in the top three in terms of the number of reported cases, leads in terms of fraudulent amounts. With few exceptions (Italy), it can be observed that the countries where the most fraud cases are reported are the Eastern countries (Romania, Poland, Slovakia, Hungary, Czech Republic), former communist countries that later became EU members, while in the founding countries of the EU (Belgium, Netherlands, Luxembourg, France, Spain), in Nordic countries (Finland, Sweden), or in island countries (Malta and Cyprus), the cases are fewer. Regarding fraudulent irregularities related to the way these funds were spent, starting from 2013 Romania reports over 100 cases each year, with peaks in 2016 and 2020, when it even exceeded the threshold of 200. Thus, out of the total of 6,628 reported fraud cases at the EU level during this period, almost a quarter (1,525 cases, over 23%) belong to Romania, ranking it first in the EU, followed by Poland (938) and Italy (718) (Graph no. 6). Graph no. 6: Number of frauds reported by Romania (2012-2021). Source: Own processing. Regarding their value, it has increased significantly in the last two years, from 166 million euros to 1.4 billion euros. Moreover, the amount reported as defrauded in 2021 is impressive, representing over 80% of the total amounts defrauded at the EU level (Graph no. 7). Graph no. 7: Value of fraud reported by Romania (2012-2021) (€ million). Source: Own processing. On the other hand, out of the total of approximately 5.5 billion euros defrauded at the EU level, 1,991,438,883 euros are attributed to Romania, which represents almost 37%. With these figures, Romania is clearly in first place in the ranking of defrauded amounts, followed by Slovakia with 1,534,801,074 euros and Poland with 512,960,008 euros. It should be emphasized that, to a large extent, this situation is due to a single case, an irregularity reported in connection with a major rail infrastructure project worth 1.2 billion euro (see footnote 6 below). Discussions The data show that in Romania the issue of fraud is serious, but the country's system has been recognized as having effective mechanisms for identifying potential fraud cases. There are many institutions through which fund documents pass, which reduces the risk of fraud going unnoticed. This efficiency may explain the large number of suspected fraud cases detected.
Furthermore, there is always a time lag between the moment irregularities are committed and the moment they are discovered and reported (on average, between two and three years). Additionally, a large portion of EU spending follows multi-annual cycles, with a progressive increase in execution until the program is closed, which also causes years with the highest reports of irregularities. For these reasons, the annual comparison of irregularity reports does not provide a reliable picture of the situation, especially regarding the financial impact, as this can be influenced by the existence of a very small number of cases with high values (in 2021, in Romania, a single irregularity - wrongly reported - represented 1.27 billion euro; in 2018, in Slovakia, two irregularities represented, respectively, 300 million and 290 million euro). Then, it should also be taken into account that not all suspicions of fraud are proven true in the end. In this regard, there are also the data available in the latest OLAF and EPPO reports. The EPPO report from 2022 shows that out of a total of 3,318 complaints received at the European level, only 1,117 cases resulted in actual investigations, solved or ongoing. Of these, only 124 are related to Romania, ranking it third among the most investigated countries in the EU, after Italy (285) and Bulgaria (143), with Germany ranking fourth (114), while 5 EU countries do not recognize the authority of this institution, including Poland and Hungary, where the appetite for fraud is well-known (see Graph no. 8). Footnote 6: Trying to find details about this case, we approached the Management Authority of the Large Infrastructure Operational Program 2014-2020, managed by the Ministry of Investment and European Projects, which reported the fraud. According to them, the affected project is the Curtici-Simeria railway infrastructure project, part of the Pan-European Corridor IV, and the irregularity was discovered by the DLAF and transmitted for investigation to the DNA, concerning the change of destination of the funds received as an advance - worth about 70 million lei. However, it seems that in the meantime, at the beginning of this year, the value of this irregularity was radically revised (from the initial reported value of 1.2 billion euro for the project to about 300 million euro for the economic contract), and the sum in question was temporarily not included in the expenditure statements (Article 19 of Government Ordinance 66/2011). Given these last-minute clarifications, which will only be included in the official reports at the end of the year, the statistics change radically, in the sense that out of the 5.5 billion euro affected by fraud in the period 2012-2021 at EU level, only approximately 1 billion euro belongs to Romania, i.e., 18%, Romania being surpassed by Slovakia in the ranking of countries with the highest amounts defrauded.
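To make the point about single large cases concrete, the sketch below redoes the share calculation using the cumulative 2012-2021 figures quoted in the text and the revision described in footnote 6. The numbers are the approximate values quoted in this paper, and keeping the EU total fixed at EUR 5.5 billion mirrors the convention used in that footnote; the script is illustrative only.

```python
# Sensitivity of Romania's cumulative 2012-2021 share to a single large case,
# using the approximate figures quoted in the text (EUR billion).
eu_total = 5.5
romania = 1.991
slovakia = 1.535

print(f"Romania, as reported:    {100 * romania / eu_total:.0f}% of the EU total")

# Footnote 6: the rail-infrastructure irregularity initially reported at ~1.2 bn EUR
# was later revised to ~0.3 bn EUR, leaving Romania at roughly 1 bn EUR.
romania_revised = romania - 1.2 + 0.3
print(f"Romania, after revision: {100 * romania_revised / eu_total:.0f}% of the EU total "
      f"(the text rounds this to ~1 bn EUR, i.e. about 18%)")
print(f"Slovakia, unchanged:     {100 * slovakia / eu_total:.0f}% of the EU total")
# With the revision, Slovakia (~28%) overtakes Romania, as noted in footnote 6.
```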
Most fraud cases in Romania occur (CSES, 2022) during the granting phase, in the procurement procedure, with the most common type of fraud being document forgery (bidders present false documents, such as certificates attesting to a certain experience and forgery of signatures).In the implementation phase, frequently encountered types of fraud are: changing the destination of planned investments (i.e., once the money is obtained, it is used for other purposes, either for personal interests or for other company projects), illegal claims for VAT reimbursement, cases of tax evasion, research plagiarism and conflict of interest.Finally, in the sustainability phase, a form of fraud is the production of false documents to justify compliance with sustainability obligations. At the institutional level, starting in 2002, Romania has gradually equipped itself with all the necessary monitoring, control and reporting instruments.Diverse authorities, such as the Management Authority (MA), the Certification and Payment Authority (CPA), the Audit Authority (AA), the Department for Anti-Fraud Fight (DLAF), and the National Anti-Corruption Directorate (DNA) share their responsibilities related to this subject, constituting an effective network of control and monitoring in which DLAF holds the role of anti-fraud coordinator and IMS administrator (European Court of Auditors, 2019b). From the perspective of anti-fraud strategies, Romania is among the 13 European states that have adopted a national anti-fraud strategy.The most recent form of this strategy covers the period of 2017-2023 7 and focuses on strengthening the national anti-fraud system through preventive measures, increasing the efficiency of fraud detection/irregularities and administrative investigations/controls, and consolidating inter-institutional cooperation in the area of budgetary debt recovery.In addition to this strategy, Romania has a solid legislative framework that regulates the prevention of fraud in the use of European funds 8 .Despite this rigorous system, institutions also identify a series of difficulties in the process of identifying or analyzing a complaint, which could turn out to be irregularities, fraud, or nothing at all.The biggest challenge is the time that passes between the moment the offense is committed and its discovery, which generally makes subsequent investigations more difficult.Another difficulty arises from the lack of sufficient collaboration between Managing Authorities and other institutions involved in the system.Last but not least, there is also the problem of the small number of people working in fraud control departments within Managing Authorities, as well as their lack of proper training. 
Starting from these reported difficulties, there are proposed measures (CSES, 2022) that could contribute to improving the system of preventing and combating fraud related to EU funds, including: consolidating institutional cooperation between AMs and DLAF to harmonize procedures, solutions, and competences, which vary widely from one ministry to another; creating an inter-institutional system for digital transmission of information and concluding collaboration protocols between the various institutions involved in case analysis; improving reporting procedures, as it has been found that ongoing investigations or decisions to initiate criminal proceedings are not systematically reported in the IMS system (due to legislative gaps) so that reporting in IMS is unnecessarily delayed and some cases pending in the courts could be completely excluded from the system; improving control systems, as the information in ARACHNE needs to be supplemented with data from other national databases (the Trade Registry Office database), or with other sources of information or by introducing new categories of data; training courses for institutions involved in program management, both in terms of correctly using specific terminology in fraud cases and using the ARACHNE system, developing information exchange between AMs (related to different case solutions), promoting dialogue with the Romanian Audit Authority, as well as the European Court of Auditors and using the Commission guidelines; simplifying access and control procedures to EU funds; awareness campaigns with beneficiaries regarding what constitutes fraud and its consequences. Conclusions There are several estimates of the proportion of EU funding that has been affected by fraud and other criminal activities.Precise estimates are difficult due to the nature of fraud, which, being illegal by definition, is largely hidden.Some estimates in this regard indicate that between 2.0 and 2.7 billion EUR of EU finances are lost annually due to organized crime. The data that appears in the PIF Reports represent a valuable statistical source in this regard.Based on sophisticated reporting tools and procedures, they contribute to creating an image regarding the dimensions and evolution of the phenomenon each year.On the other hand, in these reports, although data on member states are presented in parallel, there is no particular approach or special focus on one country or another. The Commission avoids making rankings between countries and specifies each time that the presented data may be influenced by a large number of variables. In this context, starting from the most recent available data, which shows Romania as the absolute European champion in terms of fraud with European funds, this paper analyses the reported data from all member states over the past 10 years to see if this status is fully deserved or just a matter of circumstance.The results of the study shows that in Romania, the issue of fraud is serious, both in terms of value and the number of reported cases.However, these values must be viewed in context, as they are influenced by a wide series of factors, including reporting errors, the capacity and willingness of member states to detect irregularities, as well as the particularities of the awarding procedures and contracting periods. 
Thus, there is always a time lag between the moment irregularities are committed and the moment they are discovered and reported (on average, between two and three years). Additionally, a large portion of EU spending follows multi-annual cycles, with a progressive increase in execution until the program is closed, which also causes years with the highest reports of irregularities. For these reasons, the annual comparison of irregularity reports does not provide a reliable picture of the situation, especially regarding the financial impact, as this can be influenced by the existence of a very small number of cases with high values (in 2021, in Romania, a single irregularity - wrongly reported - represented 1.27 billion euros; in 2018, in Slovakia, two irregularities represented, respectively, 300 million and 290 million euros). At the same time, Romania's system has been recognized as having effective mechanisms for identifying potential fraud cases. At the institutional level, Romania gradually equipped itself with all the necessary monitoring, control, and reporting tools. There are many institutions through which fund documents pass, which reduces the risk of fraud going unnoticed. This efficiency may explain the high number of cases of alleged fraud detected, although it should also be taken into account that not all suspicions of fraud are proven true in the end. Despite this rigorous system, institutions also identify a series of difficulties in the process of identifying or analysing a complaint, which could turn out to be an irregularity, fraud, or nothing at all. The biggest challenge is the time between the commission of the offense and its discovery, which generally makes subsequent investigations more difficult. Another difficulty arises from the lack of sufficient collaboration between MAs and other institutions involved in the system. Last but not least, there is the issue of the small number of people working in fraud control departments within the MAs, as well as the lack of proper training for them. Based on these highlighted difficulties, there are proposed measures (CSES, 2022) that could contribute to improving the system for preventing and combating fraud related to EU funds, such as improving institutional cooperation, developing information exchange, organizing continuous training courses, simplifying procedures for accessing and controlling funds, as well as initiating awareness campaigns for beneficiaries regarding what constitutes fraud and its consequences. On the other hand, it is becoming increasingly evident that there is a need for the most accurate estimation possible of the extent to which EU expenditures in member states are subject to the diversion of funds, either by organized crime or other actors, given the unanimously accepted fact that existing figures, as highlighted in the PIF reports, underestimate the scale of the problem, and that, with certain exceptions, the scientific community has not been particularly focused on the subject. Existing evaluations generally concern the method of estimating corruption, which, like fraud, is a hidden crime. In this context, future research directions aim to address the sensitive issue of undetected fraud, in order to identify and extract from relevant databases all the necessary elements to build a proper model for predicting the actual level of fraud related to European funds, which can subsequently be used in the activities of institutions with responsibilities in preventing and combating the phenomenon.
(McGee and Benk, 2023e). Romanian urban dwellers and rural dwellers expressed equal opposition to bribery (McGee and Guadron, 2023). Several other studies have been done on bribery (McGee, 2022a; McGee and Benk, 2023, 2024), some of which include Romania. McGee, Achim and Mureșan (2024) examine a number of aspects of bribery in Romania, including culture. Another study (Achim and McGee, 2023) examines bribery as well as other forms of corruption in Romania. Corporate governance in Romania was also studied and found to be somewhat weak (McGee, 2008), although such governance has hopefully improved since that study was done. Many studies have been done on the ethics of tax evasion (McGee, 2012, 2022b; McGee and Shopovski, 2024a, 2024b) and why people evade taxes (McGee, 2023). A few studies have been conducted on Romanian opinion toward tax evasion. McGee (2006) surveyed the opinions of Romanian business students and faculty. Comparative studies have been done on Romania and Bosnia (McGee, Basic & Tyler, 2008) and on Romania and Moldova (McGee, 2009). McGee and Vlasin (2024) surveyed Evangelical Christians in Romania. They found that the participants voiced strong opposition to tax evasion. McGee, Achim and Mureșan (2024) compared Romanian views on the ethics of tax evasion to views on other offenses. Graph no. 1: Irregularities reported as fraudulent in EU28 2012-2019 + EU27 (less UK) 2020. Source: Own processing. Just to have a clear image of what these figures mean, we compared the total amounts foreseen in the long-term EU budget for the period 2014-2020 in terms of expenditure (€908.51 billion) with the amount reported as being affected by fraud or irregularities in the same period. This resulted in a percentage of 0.35% (€3.23 billion) of the total payments made as being affected by fraud.
2023-05-29T15:05:08.693Z
2023-05-24T00:00:00.000
{ "year": 2023, "sha1": "8afe3f3d8439e5e753f4e731e78ec8216e579225", "oa_license": "CCBY", "oa_url": "https://revista.isfin.ro/wp-content/uploads/2023/05/9.-Roman-Achim-Mc.-Gee.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6842047b068b62f09b62290f483531af7c0f103b", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
231874074
pes2o/s2orc
v3-fos-license
Cross-Feeding of a Toxic Metabolite in a Synthetic Lignocellulose-Degrading Microbial Community The recalcitrance of complex organic polymers such as lignocellulose is one of the major obstacles to sustainable energy production from plant biomass, and the generation of toxic intermediates can negatively impact the efficiency of microbial lignocellulose degradation. Here, we describe the development of a model microbial consortium for studying lignocellulose degradation, with the specific goal of mitigating the production of the toxin formaldehyde during the breakdown of methoxylated aromatic compounds. Included are Pseudomonas putida, a lignin degrader; Cellulomonas fimi, a cellulose degrader; and sometimes Yarrowia lipolytica, an oleaginous yeast. Unique to our system is the inclusion of Methylorubrum extorquens, a methylotroph capable of using formaldehyde for growth. We developed a defined minimal “Model Lignocellulose” growth medium for reproducible coculture experiments. We demonstrated that the formaldehyde produced by P. putida growing on vanillic acid can exceed the minimum inhibitory concentration for C. fimi, and, furthermore, that the presence of M. extorquens lowers those concentrations. We also uncovered unexpected ecological dynamics, including resource competition, and interspecies differences in growth requirements and toxin sensitivities. Finally, we introduced the possibility for a mutualistic interaction between C. fimi and M. extorquens through metabolite exchange. This study lays the foundation to enable future work incorporating metabolomic analysis and modeling, genetic engineering, and laboratory evolution, on a model system that is appropriate both for fundamental eco-evolutionary studies and for the optimization of efficiency and yield in microbially-mediated biomass transformation. Introduction The global economy relies substantially on fossil fuels as a source of carbon compounds with applications ranging from energy to medicine. The pressing need to reduce dependency on these nonrenewable sources has inspired interest in the development of sustainable energy and bioproduct feedstocks. Due to its availability and energy density, lignocellulosic biomass has long been a target for the bioproduction of fuels, bioplastics, and other commodity chemicals [1][2][3]. However, while biological methods for upcycling the cellulosic portion are well developed, a significant challenge to the economic feasibility of using lignocellulose for bioenergy is the chemical complexity of lignin and its recalcitrance to breakdown. Lignin can compose 15-40% of unprocessed plant matter; initial processing of lignin yields complex mixtures of aromatic compounds, which may vary between types of feedstock [4,5]. There is no known single organism capable of catabolizing every compound in this complex mixture, and degradation can result in toxic intermediates that inhibit the growth of some of the very organisms involved. For instance, lignin-derived aromatic compounds are heavily substituted by methoxy (-OCH 3 ) groups, which are transformed to formaldehyde in the process of microbial degradation [6][7][8][9][10][11]. Chemical pretreatment of lignin requires extreme chemicals or heat and may result in the destruction of important organisms or enzymes. Lignin is, therefore, considered an economically ineffective waste product of the food and other commercial industries. 
For lignocellulose to compete with nonrenewable carbon feedstocks, efficient, cost-effective processes of transformation must be developed that are robust over time and adaptable to diverse feedstocks. In nature, lignocellulose is consumed by complex and dynamic communities of microbes, where distinct catabolic niches allow symbiosis by crossfeeding and detoxification of dangerous intermediates. While industries may mimic this "natural" strategy by using complex, undefined microbial communities (as is done in wastewater treatment [12]), there are long-standing problems with an approach in which the chemical transformations are not fully understood, particularly when it comes to the efficient production of specific desired bioproducts. To the same ends, with very different means, synthetic and systems biology research frequently attempts to build a single metabolic powerhouse: one well-understood, often genetically engineered species capable of carrying out the entire process (e.g., [13]). This strategy has the advantage of not requiring the careful balance of growth conditions tailored to multiple species, and it is theoretically possible to maintain a consistent community in a fermentation system. However, when the specific enzymes responsible for particular transformations-as in the case of many lignocellulose components-are unknown, or function poorly in organisms with well-developed systems, complex and multistep processes can be a challenge. Incorporating the best of both systems, the synthetic ecology approach describes a highly defined and specifically engineered community of organisms [14][15][16][17]. By narrowing the number of organisms, compared to "natural" undefined communities, it is possible to create conditions where each can thrive, especially if organisms are engineered or evolved for their roles. It is also possible, in contrast to the "powerhouse" strain approach, to select organisms with native affinities and tolerances suited for their role that are complex to engineer, including resistances to heat, acid, toxic intermediates, or the ability to store carbon or biomass in a form that can be used economically in downstream processes. In this work, we describe a synthetic microbial community designed for the degradation of lignocellulose, with a particular focus on addressing the problem of a toxic compound generated in lignin degradation: formaldehyde. Formaldehyde is a small aliphatic aldehyde that is often overlooked in bioprocessing. Yet it is inhibitory to ethanol-generating yeast at concentrations as low as 1.0 mM, lower than is found in many chemically pretreated lignocellulosic feedstocks, resulting in a reduction in product generation of up to tenfold [18]; formaldehyde accumulation is often a challenge overcome via engineered resistance [19]. It can be inhibitory even to the organisms that generate it: formaldehyde detoxification can prove a rate-limiting step in the microbial degradation of methoxylated aromatic compounds [8,9,20]. In this study, we have taken advantage of the single-carbon (C 1 ) metabolism of the model methylotroph Methylorubrum extorquens (formerly Methylobacterium extorquens), which uses formaldehyde as a central metabolic intermediate [21]. Our aim was to investigate the potential for M. extorquens, as a member of a defined lignocellulose-degrading community, to increase the efficiency of lignocellulosic breakdown by consuming the inhibitory formaldehyde. 
The Methylobacterium and Methylorubrum clade encompasses a diverse range of species in a variety of metabolic niches, including commensal relationships with plants as well as independently, in soil and leaf litter [22]. While the extent of the metabolic capabilities of the genus is not fully characterized-and recent work suggests some species may have the ability to utilize lignin-derived aromatic compounds [23]-we chose to include in our consortium, M. extorquens PA1, a model organism for which extensive metabolic and physiological data exist [21,[24][25][26]. The other bacterial members of this defined community included Pseudomonas putida, a canonical lignin degrader that has been studied extensively for its aromatic catabolism [20,[27][28][29][30], and Cellulomonas fimi, a cellulose degrader of interest for its ability to utilize diverse polysaccharides and to channel the products of their degradation to other organisms in co-culture [31][32][33]. In some experiments, a fourth microbial strain, the oleaginous yeast Yarrowia lipolytica, was included for its ability to grow on organic acids generated by other consortium members, and to produce neutral lipids as a potential end product [33][34][35]. We envision, ultimately, developing a stirred aerobic bioreactor for the transformation of lignocellulose hydrolysate; for this reason, we chose not to work with mycelial fungi or filamentous bacteria. In place of a complex and undefined plant biomass substrate, we opted for a simple and defined set of compounds to stand in for lignocellulose: cellobiose (a disaccharide of glucose, a product of cellulose hydrolysis), xylose (a 5-carbon sugar found in hemicellulose), and vanillic acid (a simple methoxylated aromatic compound, a derivative of guaiacyl lignin phenylpropanoids, for which formaldehyde generation is the first step in catabolism by P. putida [20]). Of these compounds, P. putida could consume only vanillic acid and C. fimi only cellobiose and xylose (with cellobiose the preferred substrate [33]); M. extorquens and Y. lipolytica could consume neither and, therefore, subsisted on metabolites generated by P. putida and C. fimi. The hypothesized interactions around which this community is built are shown in Figure 1. Our ultimate goal is the development of a metabolically efficient and ecologically robust model microbial lignocellulose-degrading community, in which we can take advantage of recent developments in metabolomic measurement and metabolic modeling to enable a flexible, predictive strategy to maximize community output [36]. To achieve this, we sought to establish robust and reproducible methods for the culture of this novel community and for measurement of organism growth, activity, and interactions; to characterize the dynamics of formaldehyde during the consumption of lignin-derived aromatic compounds. Materials and Methods Specifics of the strains and culture conditions used in each experiment are given in Table S1. Details are provided below; abbreviations used in Table S1 [26]. Strain CM3745 is M. extorquens PA1 ∆celABC ∆efgA, a mutant with increased tolerance to formaldehyde [37]. CM3745 was used in early experiments, but as the formaldehyde concentrations in coculture never exceeded the maximum inhibitory concentration (MIC) of wild-type M. extorquens and we observed no difference in performance between the two strains, later experiments were conducted with CM2730. 
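Referring back to the community design and the Model Lignocellulose substrates described above, the sketch below simply restates that design in code: which member can use which substrate directly, and which members therefore depend on cross-feeding. The mapping is taken from the text; the helper itself is illustrative and not part of the study's analysis.

```python
# Direct substrate use in the synthetic community, as described in the text.
can_consume = {
    "P. putida":     {"vanillic acid"},
    "C. fimi":       {"cellobiose", "xylose"},   # cellobiose is the preferred substrate
    "M. extorquens": set(),                      # subsists on metabolites (e.g. formaldehyde)
    "Y. lipolytica": set(),                      # subsists on organic acids from other members
}
model_lignocellulose = {"cellobiose", "xylose", "vanillic acid"}

for organism, substrates in can_consume.items():
    direct = substrates & model_lignocellulose
    if direct:
        print(f"{organism:14s} grows directly on {', '.join(sorted(direct))}")
    else:
        print(f"{organism:14s} depends on metabolites cross-fed by other members")
```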
Strain CM4744 was a methionine-overproducing strain developed for this study by taking advantage of the fact that methionine overproduction can confer resistance to the methionine analog ethionine [38,39]. M. extorquens CM2730 was grown to stationary phase in MP + 15 mM methanol. A total of 200 µL of culture was spread-plated onto each of two plates of MP-methanol agar medium containing 1 mg/mL ethionine. Eight colonies were chosen and re-streaked onto fresh MP-methanol-ethionine plates. These isolates were then tested for their ability to promote the growth of C. fimi in the absence of methionine, and all performed equally well. Results shown in this text (Figure 9) are from a single isolate, CM4744.

For all experiments, freezer stocks were streaked onto MP agar plates with 15 mM succinate or 125 mM methanol (M. extorquens), or onto Nutrient Agar (Difco) (other strains), to obtain colonies. A single colony was used to inoculate 5 mL of liquid culture medium and grown overnight until stationary phase. Unless otherwise noted, the pre-growth medium was MP with methionine/thiamine/biotin supplements at standard concentrations and 15 mM methanol (M. extorquens) or 10 mM glucose (other strains). When necessary, this inoculum was subcultured once into a different medium (e.g., Model Lignocellulose) for an additional 24 h of growth to acclimate it for the experiment. After 24 h, stationary-phase cultures of all species were diluted to normalize their ODs to match that of the least-dense culture; then equal volumes of all cultures were inoculated into the experimental medium, at a dilution of 1:64 (vol:vol) into the final medium, unless otherwise stated.

Basal medium and buffer: The medium used as a basis for all growth experiments was a modified PIPES-buffered medium (MP) previously described [26].

Carbon substrates: All were stored as sterile aqueous stock solutions and added to cultures by pipet. Because of their poor solubility in water, vanillic acid (VA) and protocatechuic acid (PCA) were stored as 50 mM stocks in MP medium so that their addition would not dilute the final culture medium. Whereas most carbon substrates were sterilized by autoclaving, VA and PCA were sterilized by filtration out of an abundance of caution, because we found that autoclaving changed their color (brown and purple, respectively).

For the experiments identifying the nutritional needs of C. fimi (Figure S5), all amino acid stocks were made in MP medium at 10 g/L (10 mL volumes), except asparagine (6.67 g/L), aspartic acid (2 g/L, with NaOH added to bring the pH to 6.5), and tyrosine (50 g/L in DMSO), due to solubility. For all stocks, regardless of concentration, 100 µL of stock was added to 10 mL of culture medium, with the exception of the tyrosine-DMSO stock, of which only 10 µL was added. Wolfe's vitamins were made according to [40] and provided at 1×. Yeast extract (VWR) was provided at a final concentration of 0.3 g/L and tryptone (Peptone from Casein, Sigma Aldrich) at 0.16 g/L. Methionine, thiamine, and biotin aqueous stocks were provided at final concentrations of 2 mg/L, 5 µg/L, and 40 µg/L, respectively (the standard concentrations), unless otherwise noted. Stocks were made in water, filter-sterilized, stored at 4 °C, and added to the medium prior to each experiment. For experiments testing vanillic acid toxicity, glucose was provided for C. fimi and methanol for M. extorquens as additional carbon substrates, as these species cannot grow on vanillic acid; no additional carbon substrate was provided for P. putida.
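The stock additions described above reduce to simple dilution arithmetic. As a minimal sketch in R (the language used for the analyses in this study), where the function name and the assumption that the added stock volume is included in the final volume are ours:

```r
# Final concentration of a supplement after adding a small volume of stock
# solution to a culture. Volumes in mL; result is in the same units as the
# stock concentration.
final_conc <- function(stock_conc, vol_stock_ml, vol_medium_ml) {
  stock_conc * vol_stock_ml / (vol_stock_ml + vol_medium_ml)
}

# 100 uL of a 10 g/L amino acid stock into 10 mL of medium: ~0.099 g/L (~99 mg/L)
final_conc(10, 0.1, 10)

# 10 uL of the 50 g/L tyrosine-in-DMSO stock into 10 mL of medium: ~0.05 g/L
final_conc(50, 0.01, 10)
```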
For experiments testing the effect of iron concentration, a 17 mM (1000×) aqueous, autoclave-sterilized FeSO4 stock was added. For use in enumerating colony-forming units of the different species from cocultures, standard MP medium was prepared with 1.8 g/L glucose and 4.05 g/L sodium succinate dibasic hexahydrate, methionine/thiamine/biotin at standard concentrations, and 15 g/L agar. Formaldehyde was produced fresh weekly as a 1 M stock by combining 0.3 g paraformaldehyde powder (Sigma Aldrich, St. Louis, MO, USA), 9.95 mL ultrapure water, and 50 µL of 10 N NaOH solution in a sealed tube and immersing it in a boiling water bath for 20 min to depolymerize. The stock was stored at room temperature and removed from the sealed tube using a syringe when needed.

Vessels and incubation conditions: All cultures were grown at 30 °C, either in culture flasks, culture tubes, multiwell culture plates, or culture plates with solid medium [agar]. For multiwell culture plates [multiwell], we used 48-well tissue culture plates (Corning Costar, Tewksbury, MA, USA) with a total volume of 640 µL per well, incubated in an LPX44 Plate Hotel (LiCONiC, Mauren, Liechtenstein) with shaking at 650 RPM. For glass tubes, we originally used Balch tubes with serum stoppers [Balch] (Chemglass, Vineland, NJ, USA) in order to ensure that volatile compounds such as formaldehyde were not lost to the gas phase; however, experiments demonstrated no difference in any measured compounds between Balch tubes and simple 16 × 150 mm glass culture tubes with loose-fitting lids [tube], so we ultimately used aerobic tubes for convenience. All culture tubes contained 5 mL of liquid medium unless otherwise stated and were incubated with shaking at 250 rpm. For culture flasks [flask], we used 50 mL-capacity glass Erlenmeyer flasks containing 10 mL of liquid culture, also shaken at 250 rpm.

Measurements: In growth experiments conducted in tubes or flasks with multiple timepoints, a typical sampling procedure was as follows: 100 µL of culture was removed from the vessel (by syringe and needle through the stopper for Balch tubes, or by pipet otherwise), transferred into a microcentrifuge tube, and centrifuged for 1 min at 14,000× g to pellet the cells. 60 µL of supernatant was used for formaldehyde measurement and 20 µL for GC-MS or HPLC analysis. The cell pellet was reserved for species-specific analysis of cell abundance as colony-forming units [CFU]. Not all analyses were carried out for all timepoints, but the sampling procedure nonetheless remained the same. (The bracketed terms used here are the abbreviations that appear in Table S1.)

For CFU measurement, cell pellets were resuspended in 980 µL of MP medium without carbon substrate (a 1:10 dilution) and then subjected to serial 1:10 dilutions down to 10^-6 (7 dilutions total). These dilutions were either spread-plated or spot-plated onto solid culture medium. For spot-plates, three replicate spots of 10 µL of each dilution were pipetted onto plates; spots were dried under a laminar flow hood, then incubated at 30 °C for 4 days or until colonies were visible. Species were identified by colony morphology (Figure S1). The colonies in each series of 7 dilution spots were counted; the numbers of colonies in the two spots at the highest dilution levels that had countable colonies were summed, then multiplied by 1.1 times the lower of the two dilution factors to calculate the number of colony-forming units (CFU) per mL in the original undiluted sample.
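As a concrete illustration of that spot-count arithmetic, here is a minimal sketch in R. The way the 10 µL spot volume and the pooling of two adjacent dilutions are folded into the scaling is our reading of the procedure, not a formula given explicitly in the text:

```r
# CFU/mL from spot plates: colonies from the two highest countable dilutions
# are pooled; together the two 10-uL spots contain 1.1 * spot_vol_ml * d of
# the original sample, where d is the dilution fraction of the less dilute spot.
cfu_per_ml <- function(colonies_d, colonies_d10, d, spot_vol_ml = 0.010) {
  pooled <- colonies_d + colonies_d10   # counts at dilutions d and d/10
  pooled / (1.1 * spot_vol_ml * d)
}

# Example: 23 colonies in the 1e-4 spot and 3 colonies in the 1e-5 spot
cfu_per_ml(23, 3, 1e-4)   # ~2.4e7 CFU/mL
```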
The mean and standard deviation were calculated for the three replicate spot series representing each sample. For spread-plates, between 1 and 3 dilutions were chosen for plating based on predicted cell abundance; from each dilution, 100 µL was spread onto a culture plate. Plates were dried and incubated as for spot-plates. CFU/mL was calculated by multiplying by the dilution factor and accounting for the volume plated.

Measurement of optical density at 600 nm (OD600) for cultures in glass tubes was carried out nondestructively by reading the whole tube with a Spectronic 200 spectrophotometer (Thermo Fisher, Waltham, MA, USA). For cultures in flasks, a 100 µL sample was transferred to a trUVue low-volume cuvette (Bio-Rad, Hercules, CA, USA) and read in a SmartSpec Plus spectrophotometer (Bio-Rad). For experiments in multiwell plates, optical density was assessed using a Wallac 1420 Victor2 microplate reader (Perkin Elmer, Waltham, MA, USA), reading OD600 for 0.4 s. In experiments involving different carbon substrates, blank wells were included for each medium composition for blanking purposes, as solutions containing PCA and VA were purple and brown, respectively.

Formaldehyde was measured using the method of Nash [41]. Reagent B was prepared as described (2 M ammonium acetate, 50 mM glacial acetic acid, 20 mM acetylacetone); for each assay, equal volumes of sample (or standard) and Reagent B were combined in a microcentrifuge tube and incubated for 6 min at 60 °C. Absorbance was read on a spectrophotometer at 412 nm, and the formaldehyde concentration was calculated using a standard curve made from freshly prepared formaldehyde stock. To assay large numbers of samples, a 96-well polystyrene flat-bottom culture plate (Olympus Plastics, San Diego, CA, USA) was used, with a total volume of 200 µL per well and an incubation time of 10 min, before absorbance at 432 nm was read using a Wallac 1420 Victor2 plate reader. A clean plate was used for each assay; each plate contained each sample in triplicate as well as a standard curve run in triplicate. The absorbance of vanillic acid was not found to interfere with formaldehyde measurements.

Vanillic acid was measured by gas chromatography-mass spectrometry (GC-MS) using an extraction and derivatization procedure modified from [42]. 20 µL of culture supernatant was combined with 1.2 µL of 1 M HCl to acidify to pH ~2. The sample was combined with 100 µL of a 1:100 mixture of 2-chlorobenzoic acid:ethyl acetate and vortexed to extract the vanillic acid into the organic phase. The sample was centrifuged at 14,000× g for 1 min to separate the phases, and 80 µL of the organic phase was transferred to a clean GC-MS sample vial (1 mL capacity). Samples were dried in a fume hood, then 400 µL of derivatization reagent was added. The derivatization reagent consisted of a 99:1:1000 mixture of N,O-bis(trimethylsilyl)trifluoroacetamide:trimethylsilyl chloride:acetonitrile (that is, BSTFA-TMCS (TCI America, Portland, OR, USA) diluted 1:10 in acetonitrile). The sample was incubated, sealed, at 70 °C for 30 min, then cooled. Samples were analyzed on a Shimadzu GCMS-QP2010 Plus with a 30 m × 0.25 mm dimethyl polysiloxane column (Rxi-1ms, Restek, Bellefonte, PA, USA) in splitless mode with a 1-min injection at 280 °C. The GC run program was as follows: hold at 80 °C for 1 min; ramp to 110 °C at 20 °C/min; ramp to 240 °C at 10 °C/min; ramp to 280 °C at 40 °C/min; hold at 280 °C for 5 min. The MS was run in SIM (selected ion monitoring) mode.
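Concentrations in the colorimetric formaldehyde assay, and likewise in the chromatographic assays, are read off standard curves. A minimal sketch of that conversion in R, with invented absorbance readings and a helper function of our own naming:

```r
# Fit a standard curve (absorbance vs. known formaldehyde concentration) and
# use it to convert sample absorbances to concentrations (mM).
standards <- data.frame(
  conc_mM = c(0, 0.125, 0.25, 0.5, 1, 2),          # freshly prepared standards
  abs_412 = c(0.02, 0.07, 0.12, 0.23, 0.45, 0.88)  # illustrative readings
)
fit <- lm(abs_412 ~ conc_mM, data = standards)

abs_to_mM <- function(absorbance, fit) {
  (absorbance - coef(fit)[["(Intercept)"]]) / coef(fit)[["conc_mM"]]
}

abs_to_mM(c(0.31, 0.15), fit)   # estimated concentrations for two samples
```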
Vanillic acid was detected as 3-methoxy-4-[(trimethylsilyl)oxy]benzoic acid trimethylsilyl ester, with a retention time of 10.4 min. Characteristic fragments used for quantitation were at m/z 267 and 297, with other fragments at 126, 193, 253, and 312. Standard curves were generated from vanillic acid stocks made in the lab and extracted alongside the samples.

Cellobiose and xylose were measured using a Shimadzu LC-20 high-performance liquid chromatograph (HPLC). Supernatant samples were filtered through a 0.2 µm syringe filter to remove any particles and diluted in water if necessary to keep the signal within the quantifiable range. They were run on an Aminex HPX-87H column (Bio-Rad) at a flow rate of 0.6 mL/min and a column temperature of 30 °C, with 5 mM H2SO4 as eluent. Peaks were detected using a RID-20A refractive index detector: cellobiose at a retention time of 7.1 min and xylose at 9.3 min.

Formaldehyde tolerance distributions: The distribution of formaldehyde tolerance phenotypes within a population was assessed by counting colony-forming units on agar medium containing formaldehyde, as described previously [43]. MP medium was prepared with the necessary carbon substrates and supplements for each species; after autoclaving, the medium was cooled to 50 °C and formaldehyde was rapidly mixed in. Agar was poured into 100 mm culture plates, dish lids were replaced, and plates were allowed to cool on the benchtop. Plates were stored at 4 °C for no longer than 1 week. Cultures were grown to stationary phase on MP medium with a preferred carbon source (methanol for M. extorquens; glucose for other species), then CFU were spot-plated as described above onto a series of plates containing a range of formaldehyde concentrations, and the number of cells capable of forming colonies at each concentration of formaldehyde was calculated. Note that an abundance of 34 CFU/mL is necessary to observe 1 cell per 30 µL plated, so for a population of ~2 × 10^8 CFU/mL (as was typical for M. extorquens samples), this method has a limit of detection of 1.65 × 10^-7.

Spent medium experiment: To generate P. putida spent medium, P. putida was grown on Model Lignocellulose medium (Table 1) to stationary phase. The culture was then centrifuged and the supernatant filtered through a 0.2 µm filter to remove cells. Vanillic acid was added again to a final concentration of 4 mM, to replenish the vanillic acid consumed by P. putida. This was then used as the growth medium for C. fimi.

Data analysis and visualization: Original data are available as spreadsheets in Supplemental Data File 1. All data were analyzed using R v. 4.0.2 in RStudio v. 1.3.959. Growth rates (r) were calculated by fitting the exponential portion of the growth curve to the model N(t) = N0 e^(rt). Lag time was calculated as the intersection of the fitted growth curve with OD = 0.0126 (the threshold of detection in multiwell plates). The relationships between lag time and substrate concentration, or between growth rate and concentration, were calculated by fitting a linear model using the lm function in R. Inkscape v. 1.0 was used to generate the conceptual figures (Figures 1 and 10) and for customizing layout and annotations on other figures.

Results

Both Vanillic Acid and the Formaldehyde Generated during Its Degradation Are Toxic to Consortium Members

Our initial experiments aimed to define the role of toxic compounds in our community, particularly formaldehyde generation and consumption, and formaldehyde-mediated growth inhibition.
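The growth rates and lag times reported in the results that follow were derived with the fitting procedure described in the data-analysis methods above. A minimal sketch in R, using simulated optical densities; only the exponential model and the 0.0126 detection threshold come from the text, and fitting by linear regression on log-transformed OD is one common way to fit this model rather than necessarily the exact routine used:

```r
set.seed(1)
# Simulated OD readings for the exponential portion of a growth curve
time_h <- seq(2, 12, by = 2)
od     <- 0.005 * exp(0.30 * time_h) * exp(rnorm(length(time_h), 0, 0.02))

# Fit N(t) = N0 * exp(r * t) via linear regression on log(OD)
fit <- lm(log(od) ~ time_h)
r   <- coef(fit)[["time_h"]]             # growth rate (per hour)
N0  <- exp(coef(fit)[["(Intercept)"]])   # fitted initial density

# Lag time: where the fitted curve crosses the detection threshold (OD = 0.0126)
lag_h <- (log(0.0126) - log(N0)) / r

c(growth_rate = r, lag_time_h = lag_h)
```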
Previous work in our lab has shown that P. putida growing on vanillic acid can generate formaldehyde that escapes the cell and accumulates in the growth medium [23]. However, the effect of that formaldehyde on other members of the microbial community was unknown. Moreover, given that lignin-derived aromatic compounds can themselves be toxic, we reasoned it was possible that vanillic acid could also inhibit growth in our consortium, and that the benefit of its degradation might even outweigh the risks of formaldehyde production. We, therefore, assessed the tolerance of consortium members to both compounds. Because P. putida can grow on vanillic acid as a sole carbon source and would be required to do so in our consortium, we tested its tolerance to a range of vanillic acid concentrations while it used the compound as its sole carbon substrate. For the other organisms, we were required to provide an alternative growth substrate in addition to the vanillic acid (see Table S1 for details). Remarkably, vanillic acid had a very small effect on the growth rate (slope = −0.0032, p < 0.05), and no detectable effect on the lag time, of M. extorquens, up to 14 mM (Figure 2). However, vanillic acid did inhibit the growth of both C. fimi and P. putida; both organisms showed a significant increase in lag time (slope = 1.77 and 0.93, respectively; p < 0.05), and C. fimi showed a decrease in growth rate (slope = −0.0097, p < 0.05), with increasing vanillic acid concentrations (Figure 2, Figure S2). For C. fimi, no growth was detectable at 14 mM vanillic acid, and it was not possible to calculate lag time above 8 mM.

Figure 2. Growth rate (left) and lag time (right) of consortium members across a range of vanillic acid concentrations (full growth curves are shown in Figure S2). C. fimi was provided with cellobiose and M. extorquens with methanol as growth substrates, whereas P. putida was able to use the vanillic acid as a growth substrate. Each point represents a biological replicate. Only growth rates for which R^2 > 0.9 are shown here. In the left panel, error bars denote the standard error of the fitted growth rate. For C. fimi and P. putida, lag time and growth rate are dependent on vanillic acid concentration.

For P. putida, we conducted similar experiments with protocatechuic acid (PCA), another lignin-derived aromatic compound that lacks the methoxyl group but is otherwise identical to vanillic acid. While P. putida showed a slight decrease in growth rate with increasing PCA concentration, the effect was much smaller than that of vanillic acid, and there was no detectable change in lag time (Figure S3). It is therefore likely that much of the toxic effect on P. putida is due to the methoxyl group of the vanillic acid, either directly or via the formaldehyde that can be generated from it. We also measured the effect of formaldehyde on the consortium members, by assessing growth in liquid medium and by measuring the frequency distribution of formaldehyde-tolerant individuals as colony-forming units on formaldehyde agar [43]. In both assays, we found that C. fimi showed no growth at concentrations of 0.5 mM and higher, establishing it as the most formaldehyde-sensitive of the organisms in our consortium. In contrast, 100% of M. extorquens cells were able to grow in the presence of 1 mM formaldehyde, and both P. putida and M. extorquens showed some growth at concentrations of 3 mM or higher (Figure 3).

Figure 3. Formaldehyde tolerance of consortium members. (A-D) Growth in liquid medium containing a range of formaldehyde concentrations. Note that OD is shown here on a linear scale for ease of interpretability; note also that the color scale for formaldehyde concentrations is different between panels (A-C) and panel (D). Data from panel (D) are reproduced from [43].
(E) An alternative method of assessing formaldehyde tolerance is the enumeration of cells that are able to form colonies on agar medium containing formaldehyde. In M. extorquens, 100% of the plated population formed colonies at 1 mM and 1 in 10,000 formed colonies at 2 mM. P. putida cells also formed colonies at those concentrations, but at a lower frequency. In C. fimi, no colonies were observed at 0.5 mM formaldehyde or higher. Error bars show the standard deviation of three replicate platings.

In our experiments with P. putida growing on vanillic acid as the sole carbon source, formaldehyde levels in the medium increased throughout the period of growth and decreased only when vanillic acid was depleted and the culture entered stationary phase (Figure 4). Regardless of the initial concentration of vanillic acid, the formaldehyde concentration uniformly reached a peak of between 0.6 and 0.8 mM; however, because higher vanillic acid concentrations cause P. putida to grow more slowly, formaldehyde remained present in the medium for a longer time. In contrast, growth on PCA was faster than growth on vanillic acid and produced no formaldehyde (Figure 4). Notably, because C. fimi growth is inhibited at concentrations of 0.5 mM and higher, these results pointed to the potential for lignin degradation to interfere with cellulose degradation unless a detoxification mechanism for formaldehyde could be introduced. Furthermore, while P. putida showed very little inhibition from formaldehyde concentrations lower than 1 mM in our single-species experiments, we reasoned that removal of formaldehyde from the medium might also help relieve some of the burden of intracellular detoxification for P. putida.

Figure 4. Growth and formaldehyde dynamics of cultures growing on aromatic substrates. In cultures containing P. putida, formaldehyde was generated during exponential growth on vanillic acid, and the duration and peak of the formaldehyde pulse were lower in cultures containing M. extorquens. The data in panel (B) are from a larger experiment testing M. extorquens cultures from different pre-growth conditions and inoculation ratios; full results for that experiment are shown in Figure S4. All data shown here are from cultures initiated in stationary phase and from M. extorquens pre-grown on methanol.

M. extorquens Reduces the Formaldehyde Concentrations in Cocultures Growing on Vanillic Acid

We therefore tested the hypothesis that co-culturing M. extorquens with P. putida growing on vanillic acid could lower the formaldehyde concentrations in the medium, as M. extorquens can use formaldehyde as a growth substrate. We conducted a number of experiments at different vanillic acid concentrations and with the two organisms added at different ratios. We consistently found that including M. extorquens did indeed result in lower concentrations of measurable formaldehyde (Figure 4). While preliminary experiments indicated that the amount of formaldehyde reduction could be influenced by the conditions in which M. extorquens was grown prior to being added to the coculture (Figure S4), in the spirit of creating a robust and sustainable community that might withstand serial culture, we ultimately proceeded with experiments in which all organisms were grown to stationary phase in similar conditions before being combined. The modest change in formaldehyde concentrations did not noticeably alter the overall growth rate or yield of the P. putida + M. extorquens coculture (Figure 4). We next needed to test whether it would have an effect on C. fimi, the most formaldehyde-sensitive member of the consortium.
To do so required developing a set of culture conditions that would support growth of all community members together.

A Minimal Growth Medium Can Support All Members and Facilitates Metabolomic Analysis, with Modest Amino Acid and Vitamin Supplements and Reduced Buffer Concentrations

For experiments on the community dynamics of the consortium, we needed to develop a new culture medium that would not only support the growth of each consortium member individually, but also facilitate full chemical characterization of their interactions and enable reproducible experiments. Because a defined mineral medium can simplify metabolomic analysis and avoid the batch effects sometimes observed in complex media, we began with a PIPES-buffered mineral medium (MP) that had originally been optimized for M. extorquens [26]. We found that it supported P. putida growth well on multiple carbon substrates. However, C. fimi showed no measurable growth on MP on any carbon source. The addition of yeast extract or peptone did aid growth, leading us to test the possibility of an amino acid or vitamin auxotrophy. As most published media for Cellulomonas species contain undefined components or supplemental vitamins (e.g., [44,45]), we conducted a set of trials using different combinations of amino acids and vitamins and deduced that C. fimi required supplementation with methionine and thiamine in order to grow on MP (Figure S5). Biotin was also added to the medium, owing to an early indication that it might help C. fimi growth and to literature suggesting that it might be necessary for Y. lipolytica [46,47] and C. fimi [45], although in many cases its addition did not noticeably affect the growth of any of the organisms (Figure 5). The addition of these supplements had no measurable effect on the growth of M. extorquens and only a very small effect on P. putida (Figure 5).

During the single-species growth assays, we also measured the growth rate of each organism alone in MP medium under optimal growth conditions (Figure S6, Table S1). Growth rates differed markedly among organisms: P. putida grew most rapidly by far (r = 0.40), C. fimi the slowest (r = 0.21), and M. extorquens and Y. lipolytica at similar, moderate rates (r = 0.24 and 0.25, respectively). While these differences posed a challenge in terms of our understanding of metabolic interactions among consortium members, they ultimately made it relatively easy to interpret growth curves of mixed cultures (as described further below).

A final amendment to the growth medium recipe resulted from preliminary work testing the feasibility of untargeted metabolomic analysis of the community. We found that the 30 mM PIPES buffer in the medium interfered with sample processing for LC-MS and GC-MS analysis. To remedy this problem, we explored the possibility of lowering the buffer concentration and examined the effect of this on the growth of the consortium members. Lowering the PIPES concentration had very little effect on the growth of most organisms, with P. putida the exception; however, P. putida was still able to maintain reasonable growth at a PIPES concentration of 3 mM (tenfold lower than the original) (Figure S7). Supplementation with extra phosphate aided growth as well, but because phosphate also interferes with metabolomic analysis, only some experiments were conducted with 5× the original phosphate concentration.
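For intuition, the single-species growth rates reported above translate directly into doubling times via t_d = ln(2)/r. A quick check in R, assuming the rates are expressed per hour (a unit the text does not state explicitly):

```r
# Doubling times implied by the fitted single-species growth rates
rates <- c(P.putida = 0.40, C.fimi = 0.21, M.extorquens = 0.24, Y.lipolytica = 0.25)
round(log(2) / rates, 1)   # ~1.7, 3.3, 2.9, 2.8 hours under the per-hour assumption
```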
As mentioned above, we chose as carbon substrates a set of relatively simple compounds with chemical similarity to the key components of lignocellulose: cellobiose, xylose, and vanillic acid (Figure 1, Table 1). When necessary, we used 15 mM methanol or 3.5 mM succinate to support the growth of M. extorquens, though in many experiments we chose not to supplement M. extorquens growth beyond the formaldehyde and organic acids generated by the other consortium members. To decide upon the concentration of each lignocellulose-derived compound to include, we considered the typical balance among lignin, cellulose, and 5-carbon sugar components found in plant-based feedstocks [48] and chose concentrations high enough to support measurable microbial growth within an experiment lasting 1-3 days, while being low enough to mitigate the effects of vanillic acid toxicity. We began with the 4:5:4 molar ratio of cellobiose:xylose:vanillic acid listed in Table 1, which translates to a 46:24:30 ratio by carbon molarity or a 49:27:24 ratio by mass. We then tested different concentrations. C. fimi grew fastest on the most dilute medium; at the intermediate concentration it would likely have achieved a higher final yield but did not do so within the length of the experiment (60 h); at the highest concentration it showed no growth at all. P. putida also showed earlier growth at the lower carbon concentrations (the time to reach OD ~0.6 on 1× carbon was approximately 2 h earlier than on 3×), but due to its faster growth rate and moderate tolerance to vanillic acid, it recovered rapidly from the toxic effects of the concentrated medium and ultimately achieved the highest final optical density in the medium with the most carbon (Figure 6). The final recipe for our basal growth medium, which we called Model Lignocellulose, is given in Table 1. Details on the individual growth conditions of each experiment described in this manuscript are given in Table S1.

Figure 6. Higher carbon concentrations in Model Lignocellulose medium result in slower growth and higher final yield; C. fimi is more strongly inhibited than P. putida. Each species was grown in pure culture in Model Lignocellulose medium (Table 1) with either 1× carbon (4 mM cellobiose, 5 mM xylose, 4 mM vanillic acid), or 0.25 times or 3 times those concentrations.

We developed methods to measure the production and consumption of all the compounds of interest: a colorimetric method for formaldehyde, GC-MS for vanillic acid, and HPLC for cellobiose and xylose. We also explored ways to measure the dynamics of individual species as they grew together in liquid co-culture. While we were unable to distinguish among species using flow cytometry, they were easily distinguished by colony morphology when plated onto agar medium (Figure S1). Furthermore, because C. fimi, M. extorquens, and P. putida had such distinctive growth rates, each formed a different part of the growth curve, and it was therefore possible to make an initial qualitative assessment of culture dynamics based on optical density alone. In general, P. putida accounted for the earliest, most rapid part of the growth curve, which was followed by a slight dip in OD once P. putida had exhausted its carbon substrate (likely due to the accumulation of P. putida cells that we observed on the walls of the culture vessel, or to a change in cell shape, as has been observed for M. extorquens [26]). C. fimi, growing much more slowly, was recognizable as the gradual increase in OD that became prominent well after P. putida reached its maximum.
With its moderate growth rate, M. extorquens was recognizable as any increase in the growth curve relative to C. fimi growth alone, though some of that increase in OD may also have been attributable to stimulation of C. fimi growth by M. extorquens. An example of these dynamics is visible in Figure 7.

P. putida Inhibits Growth of C. fimi in Model Lignocellulose Medium, and C. fimi Supports the Growth of M. extorquens

Having developed a medium in which we could grow all organisms in coculture, we next set out to measure the dynamics of formaldehyde production and consumption in the microbial community, specifically to address the question of whether formaldehyde production by P. putida had an adverse effect on the activity of C. fimi (the most formaldehyde-sensitive of the organisms), and whether the presence of M. extorquens could modulate that effect. While the pulse of formaldehyde produced by P. putida during growth on vanillic acid was above the maximum inhibitory concentration (MIC) of C. fimi, it was transient, and we therefore could not predict its effects. We conducted several experiments in which we observed members of the consortium alone or in different combinations, growing in Model Lignocellulose medium or similar conditions (Figure 7 and Figure S8). We consistently found that while C. fimi (with or without M. extorquens) was able to reach a high OD after 60-80 h of growth, when we added P. putida to the culture, that high OD was never reached. This was true in medium with 4 mM vanillic acid, in which we observed an early peak in OD at <20 h due to P. putida growth, and in medium with only 2 mM vanillic acid, in which much less P. putida growth was supported and yet C. fimi growth was still inhibited. The inhibited growth was also reflected in slower cellobiose consumption by C. fimi and in reduced final colony counts (Figure 7). From the colony counts, we also observed that the presence of C. fimi supported the growth of M. extorquens, even when no carbon substrate was provided for M. extorquens, in agreement with our initial hypothesis that C. fimi might produce substrates upon which M. extorquens could cross-feed. Yet when P. putida was present, M. extorquens abundance was depressed; this may have been an indirect effect of the reduced activity of C. fimi, or a direct interaction between P. putida and M. extorquens (Figure 7).

In cultures with P. putida, we observed the expected peak in formaldehyde production between 10 and 20 h of growth; as observed previously, the addition of M. extorquens to the coculture reduced the magnitude and duration of this peak (Figure 7 and Figure S8). Furthermore, the addition of M. extorquens resulted in a slight increase in OD relative to a coculture with only P. putida and C. fimi. However, M. extorquens was not able to completely eliminate either the formaldehyde pulse or the growth inhibition. This observation led us to investigate whether the production of formaldehyde was in fact the primary reason for the effect of P. putida on C. fimi. If it was, then improving the performance of M. extorquens could dramatically improve the efficacy of the consortium in degrading our model lignocellulose compounds.

Figure 7. Growth, substrate consumption, formaldehyde dynamics, and viable cell counts for species combinations grown in Model Lignocellulose medium. Formaldehyde peaked and disappeared, and vanillic acid was consumed, within 12 h, indicating activity by P. putida despite the fact that the change in OD was not measurable. (E) Viable cells from each species, as measured by colony counts, at the beginning of the experiment
("inoc." = inoculum) and at the end of the experiment (78 h; each panel is titled with the species present in that culture tube). While no growth substrate was explicitly included to support M. extorquens, it increased slightly in abundance by the end of the experiment, and the greatest increase was in the presence of C. fimi and the absence of P. putida. Replicate culture tubes are shown along the x-axis; points indicate replicate measurements from each tube. (F) Results from a similar experiment, testing only a subset of the species combinations in Model Lignocellulose medium. Only optical density is shown here.

P. putida Inhibition of C. fimi Growth May Be Due to Multiple Mechanisms

To test our hypothesis that formaldehyde production was the reason for the effect of P. putida on C. fimi growth, we tested growth on a modified lignocellulose medium in which we replaced vanillic acid with protocatechuic acid (PCA). PCA is the immediate product of vanillic acid demethoxylation and, therefore, can serve as a control for P. putida growth on aromatic compounds without formaldehyde production. We found that P. putida also inhibited C. fimi in these formaldehyde-free growth conditions, in a manner consistent with our observations on vanillic acid and independent of the addition of M. extorquens, leading us to conclude that formaldehyde production was not the most important aspect of the interaction between C. fimi and P. putida (Figure 8A).

We next explored several other hypotheses to explain the inhibitory effect of P. putida on C. fimi. To assess whether P. putida was competing with C. fimi for important compounds in the medium, we tested higher concentrations of methionine, thiamine, and biotin, or of iron. In case the inhibition was due to an excreted compound, we also tested whether spent medium from P. putida could inhibit C. fimi growth. We found evidence supporting all three potential mechanisms. Addition of iron and of vitamin/amino acid supplements, both alone and in combination, improved growth of the coculture (Figure 8). This was true only to a point; the highest iron concentrations tested resulted in no growth, likely due to toxic effects. Whereas the addition of iron resulted in an increase in the growth rate during the first 15 h, indicating an effect on P. putida growth, the effect of methionine, biotin, and thiamine was seen primarily in the second portion of the growth curve (C. fimi and M. extorquens). Thus, although our observations from pure culture indicate that thiamine and biotin provide P. putida with only a slight improvement in growth and methionine has no measurable effect, the species might still be actively removing these compounds from the medium. Moreover, while the addition of P. putida spent medium did not affect the initial growth rate of a C. fimi-M. extorquens coculture, it did reduce the final yield (Figure 8). This could indicate the effect of a soluble compound produced by P. putida, or the depletion by P. putida of important medium components, or both. One possibility might be siderophores, as P. putida produces siderophores that can inhibit the growth of competitors and has been observed to accelerate their production in response to the availability of aromatic growth substrates [49,50]. Because increasing the iron concentration in the medium benefitted the growth of the consortium, we considered changing the base recipe for our Model Lignocellulose medium. However, we found that adding high concentrations of iron was in fact inhibitory to C. fimi growth when
P. putida was not present to consume it (Figure S9). Clearly, the ideal growth medium to support interactions among the members of the consortium was not the same as the medium that best supported each organism individually.

3.6. Methionine-Overproducing M. extorquens Can Support the Growth of C. fimi

Given our observation that adding supplements to the medium could create unintended consequences, we opted to explore the possibility of including a consortium member able to provide the methionine required by C. fimi for growth. This would not only eliminate the need for us to add it to the medium, but also create the opportunity for positive feedback between C. fimi and M. extorquens, given our data suggesting that C. fimi could support M. extorquens growth. We used a previously published method to generate a methionine-overproducing strain of M. extorquens by selecting on medium containing ethionine [38,39]. When we cultured this M. extorquens strain with C. fimi, or with both C. fimi and P. putida, in medium without added methionine, we found that it was able to support the growth of C. fimi to abundances similar to or better than those supported by 20 mg/L methionine (Figure 9). This effect was unique to the overproducing strain: wild-type M. extorquens did not promote C. fimi growth. Moreover, the effect was dependent upon the presence of either methanol or succinate to support the growth of the methionine producer (Figure 9), indicating that the benefit required a high abundance of growing M. extorquens cells. The inclusion of a methionine-overproducing strain in the consortium might therefore prove to be a remedy for one of the several negative interactions between C. fimi and P. putida: the competition for methionine.

Figure 9. A methionine-overproducing strain of M. extorquens can support the growth of C. fimi without the addition of methionine to the medium. Top: growth curves of C. fimi with different strains of M. extorquens (symbols) on MP medium with glucose and supplements (panels). Without succinate, M. extorquens growth is negligible. For C. fimi alone, growth with methionine is much greater than without. However, when succinate is present to enable M. extorquens growth, the culture of C. fimi plus the methionine excreter reaches substantially higher density than that of C. fimi plus wild-type M. extorquens, suggesting a benefit to C. fimi from the methionine. Bottom: colony counts of each species (symbols) from the endpoint of an experiment in which the three-species consortium was grown on Model Lignocellulose medium with either of the two strains of M. extorquens (x-axis) and supplemented with either nothing, methanol to support M. extorquens growth, or both methanol and methionine (panels). C. fimi grows only when methionine is present or when both methanol and the methionine-excreting M. extorquens strain are present. P. putida does not show the same reliance on methionine or M. extorquens.

Discussion

We have established a system for studying microbial lignocellulose degradation that uses tractable, well-characterized microbial species in ecological roles that take advantage of their evolved metabolic capabilities, and we have developed a simple, defined Model Lignocellulose medium with which to investigate interspecies interactions.
This groundwork will enable future investigations involving metabolomic analysis and modeling for a more complete understanding of metabolic exchange within the community, and further work using engineering and laboratory evolution to optimize the efficiency and yield of the biomass transformation. While much remains to be learned about the mechanisms underlying the dynamics observed in this microbial community, the methods and initial findings described here comprise the first step toward using this promising model consortium for studying the microbial transformation of lignocellulosic biomass.

A valuable lesson learned from this study relates to the prevalence of unexpected ecological interactions that may be discovered even in a simple three-species community (Figure 10). Dynamics we discovered that were not originally predicted included the dramatic interspecies differences in nutritional needs, individual growth rates, pH sensitivity, and tolerance to toxins such as formaldehyde and vanillic acid. Some of these differences may provide clues about the organisms' ecological niches. For example, formaldehyde tolerance shows a link to species' metabolic capabilities: in M. extorquens, a methylotroph, formaldehyde is detoxified by the dephospho-tetrahydromethanopterin pathway [51,52], and vanillic-acid-consuming P. putida carries a set of redundant glutathione-dependent formaldehyde dehydrogenases [30]. In C. fimi, however, formaldehyde detoxification has, to our knowledge, not been characterized. Another complication in our understanding of the community's dynamics was the consumption by P. putida of compounds (iron and methionine) seemingly in excess of what was limiting for its growth. Critically, very little about these findings would be captured by genome-scale metabolic modeling, a method often based on the assumption that each member achieves optimal metabolism on the available substrates [53]. Our results point to the importance of taking into account traits outside of metabolism for their roles in interspecies interactions. Because many model experimental systems in microbial ecology focus primarily on metabolite exchange and chemical warfare (e.g., [38,54-56]), our microbial consortium could prove a useful model for the study of alternative modes of ecological interaction.

To our original model, we have added competition between P. putida and C. fimi for iron and methionine, and inhibitory effects of vanillic acid on C. fimi and P. putida and of high iron concentrations on C. fimi. We have found little evidence for inhibitory effects of formaldehyde (at the concentrations produced here) on any of the members besides C. fimi; there is evidence that C. fimi, but not P. putida, supports M. extorquens growth, likely through organic acid production. In addition, we have added the ability of M. extorquens to support C. fimi through the production of methionine. The discovery of C. fimi's dependence on methionine for growth provided a fortuitous opportunity to enrich the existing interspecies interactions through the development of a methionine-excreting strain of M. extorquens (Figure 9). This is especially promising, as we have observed that C. fimi already has the capacity to promote M. extorquens growth (Figure 7), likely through the generation of carbon substrates. Prior work using exometabolomic analysis of
C. fimi growing on cellulose and galactomannan has shown it to produce several organic acids (e.g., alpha-ketoglutaric acid, 3-hydroxypropionic acid, D-malic acid, and citric acid) and amino acids that are among the compounds known to support the growth of M. extorquens [33]. The exchange of methionine for carbon substrates by cross-feeding is, in fact, the basis of another well-characterized model microbial consortium, which was developed through laboratory evolution and has been studied extensively as an example of a stable, bidirectional, costly mutualism [38,57,58]. It is likely that the growth promotion observed here could similarly be developed, through laboratory evolution, into a true codependence.

A central goal of our work was to address the issue of formaldehyde production during lignin degradation and to test the hypothesis that a methylotrophic organism could improve the efficiency of lignocellulose degradation by removing a toxic burden. We made significant contributions in that area by documenting formaldehyde accumulation over time during P. putida growth on vanillic acid and the effect of formaldehyde consumption by M. extorquens. However, much still remains to be investigated. Because formaldehyde concentrations in the consortium are dynamic, their effect may be dramatically different in different growth conditions. Formaldehyde-induced cell damage occurs over time [43], and as formaldehyde remains in the medium as long as P. putida is actively consuming vanillic acid, a continuous-culture growth environment might produce a more substantial effect of formaldehyde on C. fimi viability. Yet it was clear in the present study that other interspecies interactions had a stronger effect on the community than formaldehyde, and those interactions must be resolved before the role of formaldehyde can be accurately explored and quantified.
Communicating for conservation: circumventing conflict with communities over domestic dog ownership in north Morocco

Conservationists consider open and direct communication as best practice, even when their data conflict with local beliefs. However, ensuring the effective delivery of a controversial message without overtly challenging community identity is difficult. Such a scenario needs high levels of meaningful contact and trust-building dialogue between conservationists and communities, as well as innovative means of communicating controversial information. Indirect communication is one such strategy, allowing people to draw their own conclusions about controversial information. We present an example of successful indirect communication of such information in the context of a long-term Barbary macaque community conservation project in Morocco. Dogs in the area kill macaques and domestic livestock in the forest, and local shepherds believed these dogs to be feral. However, our observations identified these dogs as being owned, free-roaming village dogs rather than feral dogs. To impart this controversial information, we developed a dog health programme to communicate our findings and improve the health of domestic dogs to safeguard human and animal health. We administered rabies vaccinations to dogs in three villages and provided their owners with brightly coloured dog collars. After observing collared dogs hunting in the forest, the shepherds realised the dogs had owners. Community participation was high, and we vaccinated 242 dogs, achieving 60-81% vaccination coverage. An additional benefit of the activity was to successfully convey the message that the conservation team is committed to local people's welfare as well as to Barbary macaque conservation.

Introduction

Including local people in conservation initiatives in a meaningful manner is a complex undertaking. Poor-quality relationships between the various actors involved, along with imbalances in power relationships which are often unacknowledged, cause many failures (Russell and Harshbarger 2003; Geoghegan 2009; Madden and McQuinn 2014). Trust and meaningful engagement between conservationists and local communities are fundamental to successful conservation outcomes (Bell et al. 2008; Sprague and Draheim 2015; Madden and McQuinn 2017; Setchell et al. 2017). The way in which conservationists present quantitative data that oppose local beliefs can be a major cause of alienation and conflict between and among stakeholders (Peterson et al. 2013; Redpath et al. 2013). Communities may feel that the conflicting information challenges their identity, causing resentment and reinforcing their incorrect beliefs (Peterson et al. 2013; Redpath et al. 2013; Sprague and Draheim 2015). For instance, cattle ranchers in Florida believe the population of the Endangered Florida panther (Puma concolor coryi) is much higher than state officials claim. The ranchers' refusal to accept scientific information may be based on their community identity as landowners who resent the state's protection of a recognised cattle predator (Kreye et al. 2017). Such clashes between the differing realities of conservationists and local people are common and may be related to the different relationships the two parties have with wildlife (Milton 2000; Theodossopoulos 2003; Bell et al. 2008).
Badly managed or culturally inappropriate communication has led to costly, acrimonious, and long-term disputes, often characterised by important stakeholders feeling excluded from participatory processes when their views are left unheard or belittled by conservationists or bureaucrats (Saunders 2011; Sprague and Draheim 2015). Such disputes, in which stakeholders' positions become polarised and entrenched, often leave members of local communities sceptical about the need for wildlife conservation measures (Krange and Skogen 2011; Redpath et al. 2015; Sprague and Draheim 2015) and feeling excluded from conservation activities (Peterson et al. 2002; Skogen et al. 2008). Where a controversial message conflicts with the long-held beliefs of some members or groups of stakeholders, oblique or circumspect communication may be more effective than direct presentation of the information. Tacit communication can help avoid loss of credibility by conservationists, local people, or both. Moreover, tacit communication may also be effective as a form of expression in societies where direct communication and contradiction are culturally inappropriate (Cohen 1987). Ideally, the method of message delivery will effectively communicate information and provide a benefit to the target communities. One way to provide a benefit to communities is to work with them to combat zoonoses transmitted between people, wildlife, and livestock, an approach often referred to as One Health (Cleaveland et al. 2014).

Here, we describe how a carefully considered intervention can facilitate communication of empirical data that do not accord with local beliefs. We developed a communication delivery programme in the context of a Barbary macaque (Macaca sylvanus) conservation programme in Bouhachem, Morocco, the cornerstone of which was to vaccinate village dogs against rabies. Domestic dogs harass and kill Barbary macaques and kill villagers' cows in the forest of Bouhachem (Waters et al. 2018). Shepherds blamed this loss of livestock on a pack of feral, forest-dwelling dogs which, they believed, originated from the closest town, Mulay Abdesalam, where they were abandoned by visiting pilgrims. However, we found that the free-ranging dogs in the forest were owned by villagers (see Waters et al. 2018). Our observations conflicted with the shepherds' belief that the dogs they observed hunting in the forest were feral. The shepherds did not recognise village dogs because they did not view dogs as individuals (Waters et al. 2018). When we tentatively conveyed our finding that the supposedly feral dogs were village dogs to four shepherds sympathetic to the work of our Barbary macaque conservation project, the information was met with general amusement or disagreement. This alerted us to a possible future conflict situation if we continued to impart our findings directly. We were aware of the history of local people's exclusion from and resistance to top-down development initiatives and were thus mindful of potentially alienating them if we persisted with a direct communication strategy.

As in other developing countries (Knobel 2005), rabies affects the health of people, domestic animals, and wildlife in Morocco. Between 1978 and 2008, it claimed the lives of approximately 22 people per year, with approximately 406 cases reported annually in animals over the same period. A national campaign to eradicate rabies began in 1986, with free vaccinations offered to dog owners at the veterinary service offices in large provincial cities and towns (Fassi-Fihri 2008).
There are no data on vaccination coverage in each town, as officials do not census dog populations. The programme effectively excludes the rural village dog population, because people must take their dogs to veterinary service offices in towns and cities for vaccination (Fassi-Fihri 2008). Around Bouhachem, shepherds reported livestock deaths from rabies, and we knew that human deaths, though rare, also occur. Shepherds told us that their dogs had never been vaccinated against rabies, and there were no recent records of veterinary visits to Bouhachem to vaccinate dogs. We have observed potentially rabid domestic dogs attacking macaques, with the risk of an injured macaque contracting rabies. A rabid macaque attacking people could have disastrous consequences for both people and macaques. Based on this, we developed a dog vaccination programme with the following aims:

i. To communicate our findings about dog ownership to shepherds and other local people without threatening community identity.

ii. To collaborate with villagers to improve the health of their domestic dogs and reduce the risk of rabies transmission to people and livestock.

After presenting our methods, we describe how we conducted a programme that we hoped addressed local people's beliefs and concerns. We explain how the programme was successful in its aims but had unforeseen consequences.

Study site

Bouhachem Nature Reserve (Fig. 1), which we refer to as Bouhachem hereafter, is an area of mixed oak forest situated west of the Rifian mountain chain in the north of Morocco. It is a mountainous area of approximately 142 km² and home to the endangered Barbary macaque, now only present in fragmented populations in Morocco and Algeria. In October 2009, we initiated an ongoing research and conservation project focusing on the Barbary macaque in Bouhachem with the aim of including communities in conservation activities. We applied a biosocial approach integrating quantitative and qualitative methods to develop conservation strategies which addressed the local situation for people, their livestock, and the Barbary macaque. Ten villages are adjacent to or directly on the periphery of the forest. There has been no recent census at the household level, so no population data are available. The villagers are agropastoralists. Domestic livestock include goats (Capra hircus) and cows (Bos taurus). Cows graze in the forest unattended, but shepherds herd goats into and out of the forest and use livestock guarding dogs to protect goats from the African wolf (Canis lupus lupaster) and feral dogs. The remote location of the villages means that their inhabitants have been historically marginalised and excluded from decisions concerning the forest they use to sustain their livelihoods, as well as being discriminated against by city dwellers. To avoid their further exclusion, we engaged local shepherds in project research activities, integrating our different knowledge systems to co-produce information about Barbary macaque population status in Bouhachem. In the context of conservation, this approach provided an entry point into engagement with local people by including them in the conservation research effort, and it had enormous benefits in terms of establishing a dialogue and a close relationship with a group of people who regularly used Barbary macaque habitat. Our regular engagement allowed us to identify and, if possible, address issues that were important to shepherds and their communities.
Methods

We collected data between January 2009 and April 2011. Study participants were men aged 14-84 years working as shepherds, regularly or occasionally, at the time of the study. We interviewed five shepherds from each of the ten villages on the periphery of Bouhachem forest. We encountered many of these individuals regularly while conducting Barbary macaque surveys in the forest. We collected interview data from March to November 2010 using semi-structured interviews, which enable interviewees to communicate their depth of knowledge and their thoughts about the subject matter in their own words (Huntington 1998; Drury et al. 2011). Our interviews focused on the shepherds' knowledge of the macaques' locations. However, many shepherds spontaneously expressed their beliefs and views about Barbary macaques, domestic dogs and other species, as well as about livestock depredation. During 2010, we also visited all study villages at least once every 8 weeks (weather permitting) to familiarise people with our presence. We collected data on accompanied and unaccompanied dogs in the forest during spring 2010 (see Waters et al. 2018). We conducted the vaccination programme over 2 weeks in late September 2010, and at the end of November we distributed dog owners' vaccination certificates. In the spring of 2011, we briefly interviewed 30 shepherds from participating and non-participating villages, asking them about any feral dog activity.

We chose three villages, Lahcene, Talyamin and Mtahen, for the first phase of the vaccination programme because their inhabitants asked us many questions about our activities. We conducted house-to-house visits in these villages in September 2010, wearing t-shirts with the conservation project logo on the front. On the first visit, we introduced ourselves and explained the dog health programme, even if the householder did not own a dog. We enquired whether householders with dogs wished to participate in the vaccination programme and ascertained approximately how many dogs we would be vaccinating. We informed each dog owner of our return date so they could try to keep their dogs close to their home. We recorded the number, sex and age of each dog reported to us by villagers so the veterinary authorities could provide us with the necessary number of rabies vaccines and vaccination certificates. On our second visit, two days after the first, we vaccinated as many dogs from participating households against rabies as possible. We provided the owners of vaccinated dogs with brightly coloured collars to prevent duplicate vaccinations. We completed the certificates and retained them for validation by the provincial veterinary authorities. We distributed the certificates to participating households 6 weeks later, in November 2010. In spring 2011, we asked 15 shepherds from the three participating villages about the feral dog pack, to understand whether our method of communicating dog ownership using coloured collars had been successful. We also interviewed 15 shepherds from three villages 25-30 km away for information about their recent observations of feral dogs. We had previously interviewed all these shepherds in 2010. The first author kept field notes to identify the themes emerging from all our engagements with local men.
Our analysis followed an iterative grounded approach where we used open coding to further analyse and identify emerging themes based on the qualitative data as opposed to identifying them beforehand (Tadie and Fischer 2013) and we continued the analysis until these themes became stable (Cassidy 2017). Results Eighty percent (116/145) of households owned 1-7 dogs used to guard property and livestock. Almost all dog owners wanted their dogs vaccinated. However, some dogs were pregnant, infirm or of the wrong age to be vaccinated. Others were used to accompanying the goats into the forest and their owners could not stop them from doing so on the vaccination day. We vaccinated some of these dogs in the forest a few days later when the village shepherds presented them to us. Four Mtahen shepherds declined to participate in the programme. However, when we returned to the village for the second day of vaccinations, all four men approached the team asking us to vaccinate their dogs, which we duly did. We vaccinated a total of 242 dogs and achieved 60-81% vaccination coverage for the three villages (Table 1). When we asked 15 shepherds from the three participating villages if they would take their dogs to the closest town if the regional veterinary authorities set up rabies vaccination services there, they all said they would find it impossible as their dogs could get lost, be attacked by other dogs, or attack people on the way. This confirms that despite the free provision of rabies vaccines for dogs, the strategy of administering them in a nearby town discourages rural dog owners from participating due to the logistical difficulties of travelling any distance with untrained dogs. When we asked 15 shepherds from the three participating villages about the feral dog pack after the vaccination programme, in spring 2011, four shepherds said that there were no feral dogs, and 11 others informed us that the feral dogs had moved from the area. In contrast, the 15 shepherds from three villages 25-30 km away from those that had been offered the programme reported that the feral dog pack had increased and killed many cows in the forest over the winter. These results suggest that the shepherds from the three participating villages understood that the dogs were from those villages, through their observations of collared dogs, and so no longer mentioned the pack of feral dogs. There was, however, some confusion about our project among younger boys from Mtahen who had only ever seen us in the village when vaccinating dogs. When we encountered these young boys in the forest following the vaccination programme, they shouted excitedly that we were injecting the macaques. The experienced shepherds quickly corrected them and told the boys that we were protecting the macaques. One older shepherd explained further: "The Monkey People do not vaccinate the macaques. The macaques don't need to be vaccinated as they live in the forest. Village dogs must be vaccinated to keep our livestock and us well." This shepherd's explanation indicated that he had adequately understood the rationale behind the programme. Discussion Our conversations with shepherds suggest that they understood that the "feral" dogs had owners when they observed unaccompanied collared dogs in the forest, 6 mo after the programme had taken place.
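To make the coverage figures concrete, the per-village percentages in Table 1 follow from simple arithmetic on the numbers of dogs recorded and vaccinated. The short Python sketch below reproduces them; the pairing of recorded and vaccinated counts is our reading of the table, not code used in the study.

```python
# Minimal sketch: per-village vaccination coverage from Table 1.
# The column interpretation (dogs recorded vs. dogs vaccinated) is an assumption
# based on the reported coverage percentages, not stated table headers.
table1 = {            # village: (dogs recorded, dogs vaccinated)
    "Lahcene":  (84, 63),
    "Mtahen":   (183, 148),
    "Talyamin": (52, 31),
}

total_recorded = total_vaccinated = 0
for village, (recorded, vaccinated) in table1.items():
    coverage = 100 * vaccinated / recorded
    total_recorded += recorded
    total_vaccinated += vaccinated
    print(f"{village}: {vaccinated}/{recorded} dogs vaccinated = {coverage:.0f}% coverage")

print(f"Overall: {total_vaccinated}/{total_recorded} = "
      f"{100 * total_vaccinated / total_recorded:.0f}%")
# Lahcene ~75%, Mtahen ~81%, Talyamin ~60% — all at or above the 60% threshold
# cited for eventual rabies eradication (Hampson et al. 2009).
```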
We succeeded in communicating our information to shepherds without prioritising our knowledge over theirs and without threatening anyone's identity, maintaining and increasing our good community relations which continue to this day. Our initiative avoided the risk of information being misunderstood or distorted by adult villagers, although some children who were unfamiliar with our work misinterpreted our activity. Conservationists often erroneously assume that everyone shares the same interpretations of a community conservation initiative but this incident highlights the risks of conducting community actions without adequate awareness-raising appropriate to social and cultural norms. We responded to the boys' misinterpretation of our work by conducting annual visits to village schools around Bouhachem to inform children about our activities. This study highlights the importance of consistent contact with and commitment to local people by conservationists. Our inclusive strategy of visiting every household in a village to ascertain whether they were dog owners ensured that all households had some social interaction with the team and the programme in the first instance, acknowledging their status as stakeholders in our work. Our study also illustrates the importance of social factors in recruiting shepherds and others to activities initiated by the conservation team. The four shepherds who initially refused the vaccinations found they had excluded themselves from a social activity and changed their minds. The majority of villagers welcomed the initiative, viewing it as directly benefitting themselves and their livestock. Our subsequent engagement with many villagers established our reputation as "good people", and part of the social landscape. Participation in the vaccination programme had no financial benefit for the villagers, but subsequent requests from four other villages to participate in the programme show the value people place on it. This supports the suggestion that financial incentives are not the only incentive to which local people respond in conservation initiatives, although they are important (Kuriyan 2002; Madden and McQuinn 2014; Silva and Mosimane 2014). The accepted vaccination coverage for the eventual eradication of rabies from an area is 60% (Hampson et al. 2009). Visiting individual households to vaccinate dogs appears to be an effective strategy to ensure vaccination coverage in rural areas of north Morocco. The high uptake of the vaccinations indicates that, if we continue them, along with a dog sterilisation programme, then human and livestock deaths from rabies should decrease in these three villages. The Dog Health Programme provided salient and meaningful benefits to local communities and has stimulated their interest in conservation activities. Some villagers believe that the vaccination initiative lessened the risk of rabies transmission from village dogs to other livestock. For example: "It [the programme] avoided problems for other animals like mules because dogs infect other animals with rabies too" (Anon, ~70 years, Talyamin). There are no official data available to substantiate these beliefs as villagers do not report rabid dogs or other livestock to the authorities. The programme appeared to empower some villagers to control the dog population as, in a follow-up study in 2014, we found that shepherds had begun to sterilise their male dogs to prevent them roaming in the forest.
We suggest that this behaviour change means that shepherds have accepted some responsibility for their dogs' behaviour, instead of placing the blame on outsiders visiting Mulay Abdesalam. The vaccination programme facilitated the development of management strategies which balanced Barbary macaque conservation needs with the important role dogs play in protecting villagers' livestock in the forest. An additional benefit of the activity was to successfully convey the message that the conservation team is committed to local people as well as to the conservation of the Barbary macaque. Local people may feel excluded from conservation because they feel that they are treated as less important than endangered wildlife (Tumusiime and Svarstad 2011). People's differing priorities often underlie human-wildlife conflicts, which are more suitably framed as human conflicts about wildlife based on the diverging interests of conservationists and communities (Madden and McQuinn 2014; Redpath et al. 2015; Madden and McQuinn 2017). Failure to develop inclusive and meaningful relationships with local communities can lead to ineffective dialogues, and hinder conservation work (Madden and McQuinn 2014). Conservation practitioners should be aware that directly communicating controversial findings may be culturally inappropriate and threaten local identities. In our case, building trustful relations included using indirect communication of controversial information (i.e. identifying village dogs using brightly coloured collars). This strategy allowed local people to assimilate this information for themselves on observing the collared dogs in the forest, thus avoiding loss of credibility for all involved. By communicating indirectly in situations where direct communication may be unwelcome, it is possible to avoid a build-up of resentment, subversive behaviour and ultimately full-blown conflict with the very people who must co-exist with the species we are trying to conserve. Our method seems to have encouraged accountability for dog behaviour among some villagers. Our efforts to prevent conflict succeeded but the sustainability of our approach depends on our constant reflection on how local people view us, our activities and the macaques. Conflict prevention efforts need good community relations backed up by appropriate methods of communication and will only be effective if conservation practitioners have a profound understanding of the situational context of their study site.
[Table 1: dogs recorded and vaccinated per village — Lahcene (Yellow collars): 63 of 84 vaccinated, 75% coverage; Mtahen (Green): 148 of 183, 81%; Talyamin (Pink): 31 of 52, 60%; total: 242 of 319 dogs vaccinated.]
2018-11-15T18:38:58.458Z
2018-11-10T00:00:00.000
{ "year": 2018, "sha1": "1956b64670b5960f59b2457ac43867032e492efd", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10344-018-1230-x.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "44568adde469cb7834bd0cb7d5ec46e19ad2dea6", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Business" ] }
255722979
pes2o/s2orc
v3-fos-license
Prevalence of Multidrug-Resistant Bacteria (Enteropathogens) Recovered from a Blend of Pig Manure and Pinewood Saw Dust during Anaerobic Co-Digestion in a Steel Biodigester South Africa adopts intensive livestock farming, embracing the employment of huge quantities of antibiotics to meet the increased demand for meat. Therefore, bacteria occurring in the animal products and manure might develop antibiotic resistance, a scenario which threatens public health. The study investigated the occurrence of Gram-negative bacteria from eighteen pooled samples withdrawn from a single-stage steel biodigester co-digesting pig manure (75%) and pine wood saw dust (25%). The viable counts for each bacterium were determined using the spread plate technique. The bacterial isolates were characterised based on cultural, morphological and biochemical characteristics, using the Analytical Profile Index (API) 20E test kit. In addition, isolates were characterised based on susceptibility to 14 conventional antibiotics via the disc diffusion method. The MAR index was calculated for each bacterial isolate. The bacterial counts ranged from 10⁴ to 10⁶ cfu/mL, indicating manure as a potential source of contamination. Overall, 159 bacterial isolates were recovered, which displayed diverse susceptibility patterns with marked sensitivity to amoxicillin (100% of E. coli), streptomycin (96.15% for Yersinia spp.; 93.33% for Salmonella spp.) and nitrofurantoin (75% of Campylobacter spp.). Varying resistance rates were equally observed, but a common resistance was demonstrated to erythromycin (100% of Salmonella and Yersinia spp., 90.63% of E. coli and 78.57% of Campylobacter spp.). A total of 91.19% of the bacterial isolates had a MAR index > 0.2, represented by 94 MAR phenotypes. The findings revealed multidrug resistance in bacteria from the piggery source, suggesting they can contribute immensely to the spread of multidrug resistance; thus, they point to the need to enforce regulations on antibiotic use on piggery farms. Therefore, to curb the level of multidrug resistance, piggery farms in the study area should implement control measures. Introduction South Africa has profound interest in livestock farming since it contributes immensely to its socio-economic capacity, creating jobs for the population, most especially for individuals in the poverty-stricken rural communities [1]. Consequently, 70% of its agricultural land is employed in livestock farming with the distribution in numbers, breeds and species throughout all the provinces being influenced by grazing, environment and production systems [2]. Livestock farming generates copious quantities of manure containing significant organics, nutrients, heavy metals, antibiotics and pathogens; however, the heavy metals and pathogens can result in serious environmental pollution [3]. Specifically, in South Africa, the province of Eastern Cape is viewed as the poorest province amongst all the provinces of the country following its demographic, health and socioeconomic patterns [4]. Owing to its high level of food insecurity, the local people depend on natural resources for daily living and subsistence. According to Ngumbela et al. [5], the food insecurity challenge can be addressed via agricultural productivity.
Therefore, some local inhabitants may utilise raw manure as fertiliser on farms to promote the growth of plants and crops; this is because animal manure is regarded as the oldest and a universal fertiliser and as a source of humus, micro and macro nutrients, useful organisms, which indirectly and positively affect the chemical, physical and biological components of the soil when applied on agricultural lands [6]. Moreover, Blaiotta et al. [7] noted that livestock manure constitutes a variety of pathogens, of which some are pathogenic causing infections in humans, whereas others are highly host-adapted and not pathogenic to humans. The accidental or deliberate release of the manure (containing microbes and other chemical contaminants such as antibiotics) into the environment and water bodies might occur through uncontrolled application of animal manure onto land, heavy rainfall wash offs and by leachate seeping through the soil from heaped/stockpiled manure on farms where composting is allowed [6]. Globally, antibiotics are employed in animal farming for several reasons, including growth promotion, treatment and/or prophylaxis. However, there appears to be a variation in the association between antibiotic use in the livestock industry and meat production in European countries, the United States and in other nations of the world [8]. Precisely, in developing countries, Van et al. [9] pointed out that the use of antibiotics in the food industry is to promote the wellbeing and growth of the animals and such a practice provides a host of economic benefits to the producers and the consumers in general. Similarly, Meissner et al. [10] highlighted that in South Africa, livestock farming is practised either as intensive farming to ensure sustainability or associated with communal and rural farming, but, on the whole, livestock is kept owing to its contribution to food security, sustenance and the economy. Despite these positive contributions, livestock farming is associated with serious negative environmental and health impacts, including climate change effect (global warming), damage to the ecosystems, reduction in biodiversity as well as pollution that might result in antibiotic resistance through animal manure, animals and animal products [11]. It is worrisome as some of the veterinary antibiotics utilised are similar or surrogates for those employed in clinical settings for the treatment of humans [10]. The frequency of use of antimicrobials in food production directly mirrors the level/degree of emergence of resistant foodborne pathogens. However, the overuse or the frequent use of the antibiotics administered either to the animals for treatment/prophylaxis and/or included in small quantities in their feeds leads to constant exposure to these molecules, thus creating selective pressure that can ultimately lead to the selection and/or emergence of antibiotic-resistant microorganisms [8]. In addition, antibiotic resistance of microorganisms found in animals, animal wastes and animal products cannot be secluded as these organisms can eventually be transmitted through the food chain, direct contact with animals and the environment to humans by contamination/pollution [12]. Consequently, antibiotic resistance has been regarded as one of the major serious public health threats confronting humanity [11]. 
More elaborately, there are plausible routes through which antibiotic-resistant bacteria can reach humans or the environment; administered drugs are being excreted and can remain unaltered in the environment [13], and, during slaughtering, gastric lavage might occur, a situation wherein the intestinal contents are spilled over the meat products. Slaughterhouse processing of infected animals as well as animal husbandry staff and farm workers with a greater probability of developing resistant infections may in turn disseminate these resistant bacteria through meat and even the staff themselves to the population at large [10,12]. It is of no doubt that the environment harbours bacteria that are resistant to antibiotics and, naturally, bacteria can develop resistance to antibiotics over time [10]. Nevertheless, the food industry is fraught with a fundamental challenge, which seems to be antibiotic-resistant bacteria occurring in the food chain [8]. The presence of antibiotic resistance in humans can cause difficult to treat infections, long hospital stays, high treatment costs, less sustainable production of food causing food shortages, serious treatment side effects owing to the employment of the last line antibiotics involving the increasing use of broad-spectrum antibiotics and, finally, increased morbidity and mortality [14,15]. It is worth mentioning that testing the susceptibility of bacteria to antibiotics helps in the management of infections by clinics, hospitals and national programmes for the control and prevention of infectious diseases. In particular, antibiotic susceptibility testing has great impact on a patient's management via identifying the specific diagnosis and targeting the specific disease-causing agent responsible for the disease condition [16]. Notwithstanding, antibiotic susceptibility testing has been implemented continuously in surveillance activities for resistance patterns in bacteria. Data have demonstrated that the antibiotic susceptibility profiles of bacteria vary with time, the individual and the geographical area owing to mutations in the bacterial DNA and the possibility of transmission of resistance genes through horizontal gene transfer from one bacterium to another [17]. Interestingly, Leopold et al. [18] noted the high disease burden in Sub-Saharan Africa that includes bacterial infections (diarrhoea, pneumonia, typhoid, sepsis, sexually transmitted diseases) caused by the bacteria of the Gram-negative category, and they are said to be the leading cause amongst others of infections/diseases and deaths occurring throughout the region. Furthermore, in this region, antimicrobial resistance (AMR) is exacerbated following the imprudent use of antibiotics that are bought over the counter without appropriate prescription, scarcity of clinical microbiology laboratories that perform sensitivity assays, the non-existence of a harmonised AMR observatory and frail regulatory frameworks for the exposure and implementation of antimicrobial agents added to the endorsement of the use of antibiotics by a great portion of the population having the human immunodeficiency virus to eradicate opportunistic diseases, a practice that has aggravated the emergence of resistant pathogens [18,19]. Against this background, it is apparent to investigate possible sources of origin and the distribution of antibiotic resistant bacteria in a bid to generate the information necessary to update the existing national and international database of antibiotic resistance. 
Such findings will be relevant in developing disease control measures, in policy making as well as in improving the principles guiding effective antimicrobial stewardship [20]. In this light, the study was conducted to screen manure samples and the co-digesting mixture (pig manure and pine wood sawdust) during anaerobic digestion for the occurrence of bacterial pathogens (Gram negative) of known environmental and public health concern, to elucidate their antibiotic resistance patterns as well as to determine the multiple antibiotic resistance indices and phenotypes. Sampling Each sampling entailed the collection of multiple samples (between 5 and 7 mL) from different sites of the biodigester following stirring to ensure consistency, homogeneity and even distribution of the microorganisms throughout the digesting mixture. Then, the samples were pooled together, representing the sample for each day. Overall, 18 pooled samples (between 15 and 21 mL) were withdrawn from a single-stage steel biodigester that was charged with substrates, comprising pig manure (75%) and pine wood saw dust (25%) in the ratio 3:1 for anaerobic co-digestion. The pig manure was procured from a piggery farm and the saw dust from a sawmill, both located in proximity to the Fort Hare University, Alice Campus, and were used to charge the biodigesting chamber. The biodigesting chamber of 100 L capacity but of 75 L working volume was batch-operated under a psychrophilic temperature range of 13.16 to 26 °C and the samples were withdrawn after stirring for 2-3 min (to mix the contents) with the help of a stirrer inserted into the digester during the designing and deployment of the biodigester. Mixing was done daily, since mixing has prominent effects on microbial community, methane content and volatile fatty acids and mixing at intervals is preferable [21]. However, samples were only collected every 7 or 14 days over seven months for the evaluation of bacterial counts and the cultivation of enteropathogens of public health concern as per the procedures of Poudel et al. [22]. The overall number of samples constituted both the untreated and the treated biomasses. The untreated biomasses were the original samples procured from the sawmill and the piggery farm while the treated biomass was the mixture, or a blend of the samples withdrawn from the digester during the anaerobic digestion process. Each sample was collected and introduced into a sterile centrifuge tube containing tryptic soy broth (Liofilchem Diagnostics, Roseto degli Abruzzi, Italy) and the tubes were placed on ice [23] upon transportation to the laboratory for subsequent processing, bioassays and analysis. Determination of the Counts of Viable Bacteria The counts of viable bacteria (four) classified as Gram-negative, including Escherichia coli, and species of Salmonella, Yersinia and Campylobacter that are mostly associated with gastrointestinal diseases, were determined using the spread plate technique, performed according to the method previously described [22]. Briefly, each sample was carefully vortexed and 1 mL was transferred into 9 mL of sterile physiological saline (0.9%) in a test tube to prepare an initial 1:10 dilution, which was 10-fold serially diluted to constitute dilutions, including 10⁻¹ to 10⁻⁵. Depending on the bacteria of interest, different microbiological media were prepared following manufacturer's instructions, which consisted of E.
coli chromogenic agar (Conda, Madrid, Spain), Salmonella/Shigella agar (Conda, Madrid, Spain), Yersinia selective agar base (Conda, Madrid, Spain) and, lastly, modified charcoal cefoperazone desoxycholate agar (Conda, Madrid, Spain). For the growth of Yersinia and Campylobacter species, the microbiological media were enriched with CIN (cefsulodin, 7.5 mg: irgasan, 2.0 mg: novobiocin, 1.2 mg) supplement and CCDA (cefoperazone, 0.016: amphotericin B, 0.005) supplement, respectively, depending on the volume of media prepared. All the supplements were purchased from Oxoid, United Kingdom. While observing aseptic conditions, the solidified agar plates were inoculated by dispensing 100 µL of each dilution and the inoculum spread using a sterile glass spreader. Subsequently, the inoculated plates were incubated aerobically at 37 °C for between 18 and 24 h for the cultivation of E. coli and Salmonella spp., at 30 °C for 24-48 h to enable the growth of Yersinia spp. and at 42 °C under a controlled oxygen atmosphere constituting 5% O₂, 10% CO₂ and 85% N₂ (BR0038, Oxoid, UK) for 24-48 h for the growth of Campylobacter spp. Presumptively, the bacteria were identified following incubation as follows: dark blue or violet colonies observed on E. coli chromogenic agar were considered as E. coli [24], pink colonies with a black centre on SSA were counted as Salmonella spp. [25], colourless colonies with a red centre or which appeared with red bull's eyes were regarded as Yersinia spp. [26] and the presence of colourless, tiny, smooth, convex and translucent to grey coloured colonies were enumerated as Campylobacter spp. [27]. The bacterial counts were enumerated as colony forming units and the values were recorded as a mean of triplicate assays on respective tables. Isolation, Biochemical Characterisation and Storage of the Bacteria After enumerating, well isolated and suspected/presumed colonies on the different agar plates were selected and streaked several times either on freshly prepared sterile nutrient agar (Merck, Modderfontein, South Africa) or Mueller Hinton agar (Conda, Madrid, Spain) plates, to generate pure culture as per the method of Manyi-Loh et al. [28]. Oftentimes, subcultures are performed on nutrient agar, a multipurpose and cheap agar. However, when fastidious organisms that need special nutritional requirements for growth are involved, the subculture medium must be chosen carefully. Therefore, nutrient agar was used for the subculture of E. coli and Salmonella isolates while Mueller Hinton agar, a standard medium recommended for antibiotic susceptibility testing, was employed and supplemented with the antibiotics for the growth of Yersinia and Campylobacter species [29]. This is because Mueller Hinton is a loose agar and permitted a better diffusion of the supplements (antibiotics) throughout the medium, resulting in adequate bacterial growth. Purified cultures of each bacterium were harvested and introduced into tryptic soy broth supplemented with 20% glycerol (a cryopreservation agent that inhibits intracellular ice formation during freezing) and were stored at −80 °C as stock cultures [30] for further analysis. Bacterial identification was established by growth on selective microbiological media, morphological characteristics along with biochemical characterisation [31].
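The plate counts described above convert to cfu/mL from the number of colonies counted on a plate, the volume plated (100 µL) and the dilution of the tube that was plated. A minimal Python sketch of that conversion follows; the colony count in the example is hypothetical and chosen only to reproduce the order of magnitude reported for E. coli at digester charging.

```python
# Minimal sketch of the cfu/mL calculation behind spread-plate counts.
# The colony count is hypothetical; the 0.1 mL plated volume and 10-fold
# dilutions follow the procedure described above.
def cfu_per_ml(colony_count: int, dilution_exponent: int, plated_volume_ml: float = 0.1) -> float:
    """cfu/mL = colonies / (plated volume in mL * dilution factor)."""
    dilution_factor = 10.0 ** dilution_exponent      # e.g. -4 for the 10^-4 tube
    return colony_count / (plated_volume_ml * dilution_factor)

# Example: 20 colonies on the 10^-4 plate of E. coli chromogenic agar
# -> 20 / (0.1 mL * 10^-4) = 2.0 x 10^6 cfu/mL, the order of magnitude
# reported for E. coli at the start of co-digestion.
print(cfu_per_ml(20, -4))   # 2000000.0
```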
The analytical profile index (API) 20E kit was used for the biochemical characterisation of Enterobacteriaceae, while other biochemical tests, including the presence of enzymes (catalase, oxidase, urease), fermentation of sugars, susceptibility to nalidixic acid, microaerobic growth at 37 °C and 42 °C and the production of hydrogen sulphide gas (Triple Sugar Iron) and indole test, were employed for the confirmation of the other bacteria. Confirmed isolates were preserved at −80 °C in tryptic soy broth plus 20% glycerol. Determination of Antibiotic Resistance Phenotypes The growth inhibition caused by antibiotics was assayed using the Kirby-Bauer disc diffusion technique while employing a collection of 14 traditional antibiotics (Mast Diagnostics, Bootle, UK). Stock cultures were resuscitated on either nutrient agar (Merck, Modderfontein, South Africa) or Mueller Hinton agar (Conda, Madrid, Spain) considering the species of bacteria. The growth of the bacterial isolates was harvested and employed in the susceptibility testing, which was performed guided by the procedures of the Clinical and Laboratory Standards Institute (CLSI) [32]. A standardised inoculum, containing 10⁸ cfu (matching the 0.5 McFarland standard), was prepared for each bacterial isolate by emulsifying 2-3 distinct colonies from each growth in sterile physiological saline (0.9%) contained in a test tube. Mueller Hinton (Conda, Madrid, Spain) agar plates were prepared following manufacturer's instructions and were allowed to solidify. Each solidified plate was swabbed with inoculum-impregnated cotton sticks to generate an even growth pattern of the organism on the plate. The plates impregnated with different bacterial inocula were left for a while to reduce wetness on the plates and then the antibiotic discs were aseptically placed on each plate at equal distance from each other, but not too close to the borders of the plates. This action aided in preventing the zones of inhibition from overlapping following antibiotic action. The discs employed included ampicillin (AMP; 25 µg), gentamicin (GM; 10 µg), chloramphenicol (C; 30 µg), ciprofloxacin (CIP; 5 µg), amoxicillin (AMOX; 10 µg), nalidixic acid (NA; 30 µg), tetracycline (TET; 25 µg), amoxicillin-clavulanic acid (Augmentin, AUG; 30 µg), trimethoprim-sulfamethoxazole (co-trimoxazole, TS; 25 µg), erythromycin (E; 15 µg), streptomycin (S; 10 µg), nitrofurantoin (NI; 300 µg), sulfamethoxazole (SMX; 300 µg) and cefotaxime (CTX; 30 µg). The tested plates were incubated at different temperatures and atmospheric conditions based on the bacterium. The examination of incubated plates was conducted to locate zones of inhibition and the diameter of the emergent zone of inhibition around each disc was measured in millimetres and recorded in the respective tables. Each measurement represented the mean of triplicate assays. The interpretation criteria based on the diameter of the zone of inhibition were adopted from CLSI [32], describing the isolate as susceptible, intermediate or resistant. As a positive control, Escherichia coli ATCC 25922 was evaluated alongside the test bacterial isolates. Calculating Multiple Antibiotic Resistance (MAR) Index and Presentation of Their Resistance Patterns The MAR index can be obtained from the formula MAR = a/b, where 'a' denotes the number of antibiotics that expressed no significant activity against the tested bacterium, referred to as resistance, and 'b' is the total number of antibiotics to which the isolate was exposed in the susceptibility study [33].
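The MAR formula above, together with the ≥3-antibiotic criterion used later for multidrug resistance, can be written out directly. The sketch below uses a hypothetical susceptibility profile expressed with the disc abbreviations listed above; it is an illustration of the formula, not the study's analysis code.

```python
# Minimal sketch of the MAR index (a/b) and the >=3-antibiotic MDR criterion.
# The example profile is hypothetical; 'R' = resistant, 'I' = intermediate,
# 'S' = susceptible, keyed by the disc abbreviations used in the study.
def mar_index(profile: dict) -> float:
    resistant = sum(1 for result in profile.values() if result == "R")   # 'a'
    return resistant / len(profile)                                      # divided by 'b'

profile = {"AMP": "R", "GM": "S", "C": "S", "CIP": "I", "AMOX": "R",
           "NA": "I", "TET": "R", "AUG": "S", "TS": "S", "E": "R",
           "S": "S", "NI": "S", "SMX": "R", "CTX": "R"}

mar = mar_index(profile)
resistant_count = sum(1 for r in profile.values() if r == "R")
print(f"MAR index = {mar:.2f}")                          # 6 of 14 discs -> 0.43
print("MDR" if resistant_count >= 3 else "not MDR")      # resistant to >=3 antibiotics
print("high-risk source" if mar > 0.2 else "low-risk source")
```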
An estimated MAR value above 0.2 was indicative of resistance expressed to multiple antibiotics and of the bacterial isolate originating from a hypothetically unsafe source where antibiotics are repeatedly being used, thus posing a great risk of contamination [34]. Multiple antibiotic resistance phenotypes were developed for all 159 bacterial isolates that were tested against 14 commercial antibiotics and MAR was considered as resistance to ≥3 antibiotics. Results Bacterial counts (colony forming units per millilitre) indicated differences between bacterial species at the time of charging of the digester. Data showed that the co-digesting mixture contained 7.1 × 10⁴ cfu/mL of Yersinia spp., 9.0 × 10⁴ cfu/mL of Campylobacter spp., 2.0 × 10⁶ cfu/mL of E. coli and 7.0 × 10⁴ cfu/mL of Salmonella spp. As the anaerobic co-digestion progressed with time, the bacterial counts of the different species were reduced by 1 log, and E. coli had the shortest survival period of 77 days, followed by Salmonella spp. (84 days) and Yersinia spp. (98 days), while Campylobacter survived for the longest period of 112 days. Moreover, 18 pooled samples were withdrawn from the digester at 7- or 14-day intervals over a period of seven months. The prevalence rates ranged from 22.22% to 50%, with nine samples (9/18; 50%) found positive for Campylobacter spp., five samples (5/18; 27.78%) positive for both Salmonella spp. and Yersinia spp., and four samples (4/18; 22.22%) positive for E. coli. In total, 159 bacterial isolates belonging to the genera Yersinia (26), Salmonella (45) and Campylobacter (56), as well as Escherichia coli (32) bacterial strains, were recovered from the co-digesting medium (pig manure plus pine wood sawdust) before and during anaerobic digestion taking place in a single-stage steel biodigester. These bacterial species were confirmed into the various genera as follows: E. coli isolates showed a positive reaction to the presence of the enzymes (lysine decarboxylase, ornithine decarboxylase), fermentation of various sugars (glucose, sorbitol, mannose, rhamnose, melibiose, arabinose), hydrolysis of o-nitrophenyl-β-D-galactopyranoside and the production of indole from tryptophan. Additionally, Salmonella spp. fermented various sugars (glucose, mannose, sorbitol, rhamnose and arabinose), produced hydrogen sulphide gas, utilised citrate as the sole source of carbon and hydrolysed o-nitrophenyl-β-D-galactopyranoside. Similarly, Yersinia spp. showed the presence of catalase enzymes, fermenting glucose and sucrose with no hydrogen sulphide gas produced; in the indole, urease and oxidase tests, reactions were both positive and negative, implying we had both negative and positive isolates. Furthermore, Campylobacter spp. showed the presence of oxidase and catalase enzymes and exhibited both susceptibility and resistance to nalidixic acid. Table 1 shows the percentages of sensitive, intermediate and resistant bacterial isolates realised from the sensitivity assay, evaluating the activity of a panel of 14 antibiotics against 159 bacterial isolates. Overall, the susceptibility profiles depended on the tested antibiotics and the bacterial isolates; the maximum sensitivity of the different bacterial isolates included a 100% sensitivity of E. coli to amoxicillin, 96.15% and 93.33% sensitivity demonstrated by Yersinia and Salmonella spp. to streptomycin, respectively, as well as nitrofurantoin's exhibited effect against 75% of Campylobacter sp.
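The count, log-reduction and prevalence figures reported above reduce to simple arithmetic: a 1-log reduction is a ten-fold drop in counts, and prevalence is the number of positive samples over the 18 pooled samples. A brief sketch follows; the final counts used in the log-reduction example are hypothetical, while the initial counts and positive-sample numbers are those reported above.

```python
# Minimal sketch of the arithmetic behind the results above: a 1-log
# reduction in counts and sample prevalence rates. Final counts are
# hypothetical; initial counts are those reported at digester charging.
import math

initial_counts = {"E. coli": 2.0e6, "Salmonella": 7.0e4,
                  "Yersinia": 7.1e4, "Campylobacter": 9.0e4}
final_counts = {"E. coli": 2.0e5, "Salmonella": 7.0e3,   # hypothetical: one log lower
                "Yersinia": 7.1e3, "Campylobacter": 9.0e3}

for species, n0 in initial_counts.items():
    reduction = math.log10(n0 / final_counts[species])
    print(f"{species}: {reduction:.1f} log10 reduction")  # 1.0 for each species

# Prevalence = positive samples / total pooled samples (n = 18)
for species, positives in {"Campylobacter": 9, "Salmonella": 5,
                           "Yersinia": 5, "E. coli": 4}.items():
    print(f"{species}: {positives}/18 = {100 * positives / 18:.2f}%")
# Campylobacter 50.00%, Salmonella 27.78%, Yersinia 27.78%, E. coli 22.22%
```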
In addition, appreciable intermediate activity was exerted by nalidixic acid against 86.67%, 76.92% and 68.75% of Salmonella spp., Yersinia spp. and E. coli, respectively. On the other hand, the greatest resistance amongst the bacterial isolates occurred against erythromycin, as a common antibiotic, as follows: 100% of Yersinia spp. and Salmonella spp. and 90.63% of E. coli, but 98.21% of Campylobacter spp. displayed resistance to cefotaxime. However, in particular, Salmonella isolates showed complete resistance (100%) to two (2) other antibiotics, including tetracycline and amoxicillin. Similarly, E. coli isolates showed a notable resistance (81.25%) to tetracycline. In addition, Yersinia isolates exhibited a profound resistance of 92.31% to sulfamethoxazole. Lastly, Campylobacter isolates also presented with a huge resistance to erythromycin (78.57%). From the calculated multiple antibiotic resistance index, it was revealed that 145 bacterial isolates (91.19%) had a MAR index > 0.2, indicating resistance to ≥3 antibiotics, and the range of the MAR index for each bacterium is as shown in Table 2. More elaborately, only five isolates (3.45%) had a MAR index less than 0.2; however, two (2) isolates displayed a MAR index of 0.9 (i.e., resistance to 12 or 13 antibiotics of the 14 antibiotics employed in this study), comprising one (1) Salmonella spp. and one Campylobacter spp. Taking into consideration the data on the calculated multiple antibiotic indices of the bacterial isolates, the antibiotic resistance fluctuated over time, showing a reducing trend as presented in Figure 1. As shown in Table 2, most of the isolates with a MAR index > 0.2 belonged to the genera Campylobacter (31.03%) and Salmonella (31.03%), followed by E. coli (20.00%). In addition, a total of ninety-four (94) multidrug resistance phenotypes were observed in the bacterial isolates evaluated against a suite of 14 conventional antibiotics. The distribution of the MDR phenotypes occurred as follows: 29 for E. coli, 27 for Campylobacter spp., 22 for Salmonella spp. and 19 for Yersinia spp. The most common MDR profiles demonstrated in this study were resistance to five antibiotics (E, TET, SMX, AUG, AMOX) by Yersinia spp. and Salmonella spp. and resistance to four antibiotics (E, TET, CTX, AMOX) by E. coli and Campylobacter spp. as well as E, AUG, AP and AUG by Yersinia spp. and E. coli. Overall, only one (1) Campylobacter spp. demonstrated resistance to the highest number of antibiotics (13), presenting with the MAR phenotype TS, E, C, TET, CIP, SMX, AUG, S, CTX, NA, GM, AP, AMOX. Remarkably, all Yersinia spp. (100%), Salmonella spp. (100%) and 90.63% of E. coli plus 80.36% of Campylobacter spp. were multidrug-resistant; however, the MAR phenotypes varied with the bacteria as displayed in Table 3.
Accordingly, the predominant MAR phenotypes were in the following order: 28.89% (Salmonella sp.) presented as TS, CIP, E, C, TET, SMX, AUG, CTX, AMOX, NI; 26.79% (Campylobacter) observed as TS, CIP, E, C, TET, SMX, S, CTX, NA, GM; 15.38% (Yersinia spp.) associated with the MAR pattern E, SMX, AUG, AP, AMOX; and, lastly, 6.25% (E. coli) represented by E, TET, CTX, AP, AMOX, NI and E, TET, AUG, CTX, NA, AP, AMOX, NI. Discussion One Health embodies three main components: human, animal and environmental health, emphasising that the health of these three is interdependent or interconnected, meaning any problem facing the health of one component will affect the other two. Accordingly, Mackenzie and Jeggo [35] defined One Health as a collaborative, multisectoral, and transdisciplinary approach, working at the local, regional, national and global levels, with the objective of accomplishing ideal health outcomes, recognising the interconnectedness between people, animals, plants and their shared environment. Antimicrobial resistance (AMR) is viewed as a critical and major One Health problem as it might affect global public health, food safety and food security [14]. The One Health approach is essential in combating antimicrobial resistance as various bacteria, including E. coli, Yersinia spp., Salmonella spp. and Campylobacter spp., are becoming increasingly resistant and livestock manure may be a reservoir. Animal manure is viewed as a favourable environment for the survival of pathogens and the number and type of the microbial pathogens occurring in livestock wastes is closely associated with the animal species, physicochemical composition of the manure and the geographic location of the farm, and the feeding habits can determine the biochemical and biological properties of the manures [7]. The level of bacterial counts is a measure of the hygienic condition of the manure; in this study, the bacterial counts ranged from 10⁴ to 10⁶ cfu/mL, with the highest estimated E. coli counts of 2 × 10⁶ cfu/mL. This can be affirmed by the findings of Dawangpa et al. [36], who mentioned that swine farms have installed water treatment facilities to treat the water prior to discharge into the environment. However, the treated water is being reused on farms after treatment, therefore creating the likelihood that E. coli could be recycled back to the swine. This could explain the high E. coli counts enumerated in this study. In addition, the evidence of the varying counts of E. coli, Yersinia spp., Salmonella spp. and Campylobacter spp. in the manure indicated that the pig manure can be a potential source of contamination to water, soil and agricultural products, thereby causing gastrointestinal infections in children, the elderly and immunocompromised individuals [11]. Details of the findings on the influence of time on the dynamics of the bacterial species in terms of bacterial counts have been published by Manyi-Loh and Lues [37]. The authors demonstrated further that the different bacterial species were inactivated by 1 log reduction, and E. coli had the shortest survival period of 77 days, followed by Salmonella spp. (84 days) and Yersinia spp. (98 days), while Campylobacter survived for the longest period of 112 days. This may suggest that the fastidious organisms requiring selective supplements for growth survived for a longer period than Salmonella and E. coli as they had an adequate supply of nutrients encouraging their growth.
Moreover, over the period of the study, 56 Campylobacter spp., 45 Salmonella spp., 32 E. coli and 26 Yersinia spp. were recovered. Although Yersinia spp. lasted longer as opposed to Salmonella spp. and E. coli, it can be suggested that the higher number of isolates of the latter two bacteria could be attributed to the initial concentration of these bacteria in the procured samples [38]. The prevalence of the bacteria ranged from 22.22% (4/18; E. coli) to 50% (9/18; Campylobacter spp.), while Salmonella sp. and Yersinia spp. had a common prevalence rate of 27.78%. Contrary to our findings, Dikonketso and Olayinka [39] enumerated viable cells in seepage samples recovered from a pig farm in the range from 4.30 × 10² to 1.29 × 10⁹ cfu/mL, while Peng et al. [40] noted a prevalence of 2.76% of Y. enterocolitica in swine faeces collected from Sichuan and Shandong provinces of China. Considering the variation in the bacterial counts and the prevalence rates in manure reported in the different studies, this could be attributed to the fattening stage of the pigs, the sampling times and points [41], the chemical composition (which depends on the feed composition), the type or variety of vegetation available added to the microbiological compositions of the pig manure, environmental temperature as well as the waste collection and management systems on the farms [40]. It is worth mentioning that South Africa is regarded as one of the major contributors to the total global increase in meat consumption [42]. As a consequence, the intensive animal production approach involving frequent and huge applications of antibiotics is adopted to meet the increased demand in meat and meat products. Therefore, the South African Veterinary Association (SAVA) published the guidelines for using antimicrobials in the South African pig industry and advocated for the use of critically and highly important drugs, including streptomycin, gentamicin, erythromycin, ampicillin, ciprofloxacin and tetracycline for pigs, despite their crucial and high relevance in human medicine [43]. With regard to the susceptibility data as shown in Table 1, overall, the bacterial isolates showed diverse susceptibility patterns to all the tested antibiotics, represented by different percentages. The sensitivity displayed, however, depended on the bacterial isolates and the antibiotics in question: E. coli (100% amoxicillin; 68.75% chloramphenicol), Salmonella spp. (93.33% streptomycin, 68.89% gentamicin), Yersinia spp. (96.15% streptomycin, 80.76% ciprofloxacin) and Campylobacter spp. (75.00% nitrofurantoin, 71.00% augmentin). These antibiotics are from different classes and as such possess different chemical structures as well as differing modes of action. The findings are similar to those of Musonye and colleagues [44], who mentioned varied susceptibility of the bacterial isolates from urine samples of livestock and wildlife against co-trimoxazole, amoxicillin, ciprofloxacin, streptomycin, nalidixic acid, chloramphenicol and gentamicin. More elaborately, intermediate sensitivity was displayed by 86.67% of Salmonella sp., 76.92% of Yersinia sp. and 68.75% of E. coli to nalidixic acid. The demonstration of intermediate sensitivity can be viewed as the bacterial isolates gradually losing sensitivity to this drug; a situation which can be explained by the fact that this drug is normally considered as a strategic therapy against resistant isolates.
Therefore, this promoted the wide use of this drug to treat a variety of bacterial diseases in humans, leading to growth in the number of resistant isolates. This is affirmed by the recent study of Moyen et al. [45], who demonstrated a total resistance to nalidixic acid of Enterobacteriaceae recovered from household wastewater in Brazzaville, Republic of Congo. On the other hand, a high and common resistance was shown in all the bacterial isolates to erythromycin, a macrolide. With the exception of the Campylobacter isolates, the finding is congruent with the study of Kilonzo-Nthenge et al. [46], who reported a total resistance (100%) to erythromycin of bacteria recovered from retailed chicken and beef products that were grouped in the family Enterobacteriaceae. In detail, E. coli isolates in this study demonstrated a considerable resistance to tetracycline (81.25%); consistent with this study were the high resistance rates between 33.3% and 93% reported for tetracycline-resistant E. coli from dairy farms operated in the UK, Asia and the USA [47,48]. Clearly, zoonoses are crucial to note, owing to their impact on human health. In this light, significant resistance was displayed by Salmonella spp. to amoxicillin (100%), tetracycline (100%) and sulfamethoxazole (97.78%). This finding is similar to the study of Rasschaert and co-authors [49], who demonstrated that Salmonella species from 89 samples of pig manure collected in Flanders (northern part of Belgium) were highly resistant to ampicillin (54.7%), tetracycline (45.3%) and sulfamethoxazole (47.2%). Additionally, 92.13% of Yersinia spp. were observed to present with resistance against sulfamethoxazole, contradicting the study of Peng et al. [40], who noted complete resistance (100%) of Yersinia enterocolitica to ampicillin, augmentin and cefazolin. Overall, it is inferred that there are differences in the use and management of antibacterial agents on farms between countries [8]. Seemingly, the acquisition and the distribution of antibiotic resistance genes and resistant bacteria can be affected by a host of factors, including the weather and climate, the breed of the animals, the antibiotic dosage for administration, the duration of the treatment, the capacity of the farm [36], the animal husbandry practices, the hygienic conditions of the farm as well as the commitment to control measures and disease prevention [50]. Additionally, the presence of the carrier animal moving among the animal herds and through vector action is important. Nevertheless, our data showing huge resistance to the commonly used antibiotics describe the complex use of antibiotics in pig farming, which employs more antibiotics than poultry and cattle farming and is associated with the interconnecting areas of animal health, welfare and economics [51]. However, there are observed discrepancies in the use of antibiotics across the different stages of pig production, owing to the differences in the diseases, the epidemiology and the route of administration of the available drugs [52]. This could be the reason for the resistance observed to the advocated antibiotics though in varying percentages in the present study; therefore, the findings call for the need for stringent antimicrobial surveillance programmes for the use of antibiotics in animal farming. The members of the Enterobacteriaceae exhibited resistance to antibiotics owing to the mobilisation of continuously expressed single genes that encode efficient drug-modifying enzymes.
This could be the reason Salmonella (100%) isolates in this study demonstrated resistance to amoxicillin. Our findings corroborate those of Singh et al. [53], who reported the resistance to amoxicillin of bacterial isolates recovered from spring water that were categorised as Enterobacteriaceae. Furthermore, susceptibility to nalidixic acid has been implemented as a criterion to differentiate between C. jejuni and C. coli. Only seven percent (7.14%) of the Campylobacter isolates were susceptible to nalidixic acid, but 26.79% displayed susceptibility in an intermediate range. On the other hand, Campylobacter species demonstrated resistance (66.07%) to nalidixic acid, substantiating the study of Ogbor et al. [54], who noted resistance of a higher magnitude (100%) to nalidixic acid by C. coli isolated from poultry farms in Lagos State, Nigeria. Clearly, the difference in the percentage resistance of Campylobacter spp. in both studies can be due to the type of animals investigated (pig against poultry) and the countries in question (South Africa against Nigeria). In detail, Sibanda et al. [55] emphasised that chickens (avian species) represent a significant reservoir for the transmission of Campylobacter species owing to their high body temperature which is necessary for the optimum growth of the pathogen, which can colonise the caeca of chickens in very high numbers. Although a developing country, South Africa is a more advanced country than Nigeria (in terms of GDP) and is governed by more stringent policies relating to antibiotic consumption; however, policies regarding the purchase and consumption of antibiotics in both humans and animals will differ between the countries as they have different disease burdens, environmental conditions and socioeconomic status [56]. A host of authors pointed out that the quality of governance, the availability and conditions of health facilities, poverty, education and sanitation are strongly related to the differences observed in antibiotic resistance and antibiotic consumption profiles existing between regions/countries [57,58]. Almost fifty-four percent (53.57%, approx. 54%) of the isolates were resistant to ciprofloxacin (another quinolone) and seventy-nine percent (78.57%, approx. 79%) were resistant to erythromycin, both drugs included in the treatment regimen recommended for the treatment of campylobacteriosis caused by Campylobacter species [54]. In addition, the Campylobacter isolates established an extreme resistance (98.12%) to cefotaxime. Taking into consideration the above-mentioned prevalence of resistance, public health attention is aroused because the organism is considered as one of the key zoonotic pathogens responsible for gastroenteritis in humans worldwide [59]. The multiple antibiotic resistance (MAR) index describes resistance to three or more antibiotics. Monitoring of resistance to multiple antibiotics is critically necessary for effective containment programmes, and more especially in Gram-negative bacteria, which is the case with this study. Taking into consideration the data on the calculated multiple antibiotic indices of the bacterial isolates, the antibiotic resistance fluctuated over time as presented in Figure 1. The effect of time on the bacterial antibiotic resistance seems to mirror the growth curve of a bacterium.
In any environment that a bacterium is found, it tends to thrive by first adapting to the environmental conditions, then grows and multiplies via binary fission (the parent bacterium produces a replica of itself, including antibiotic resistance genes), thus causing a rise in antibiotic resistance genes. An increase in the antibiotic resistance could also be attributed to bacterial isolates taking up resistance genes through horizontal gene transfer mediated by integrons, transposons and plasmids via transduction, transformation and conjugation [60]. However, when growth of the bacterial cells becomes limited, cell division ceases and the bacterial cells tend to die out, encountering a decline phase [61] owing to unfavourable conditions, including limited space, exhaustion of nutrients and the accumulation of toxic wastes/substances in the environment. These conditions have been reported to occur in an anaerobic digester as the process progresses with time. Accordingly, Jiang et al. [62] noted a combination of factors causing bacterial reduction via anaerobic digestion in a biodigester. Therefore, from Figure 1 it can be seen that the antibiotic resistance was reduced over time. The findings corroborate those of Katada et al. [63] who noted that antibiotic resistance genes such as tetA, tetB and blaTEM were reduced through the anaerobic treatment of livestock manure. Clearly, a biodigester, a bioengineered environment, equally creates an impact on the trend of antibiotic resistance of the bacteria present; however, the extent of this effect will depend on whether the digester is continuous or batch-operated. This is because fresh substrates are added intermittently and digestate is discharged simultaneously in the continuous mode, affecting both the microbial population and antibiotic resistance, as the microbial composition of the added substrate can influence the antibiotic resistance gene content of the digester [60] and the inactivation of the bacterial population will not be efficient since the starting bacterial population is not allowed to spend substantial time inside the digester. Contrarily, in batch operation, which is the case with this study, the substrates were fed once into the digester and were discharged only after the anaerobic digestion process was completed, thus leading to the reduction of the bacterial isolates as presented by Manyi-Loh and Lues [37] as well as of the antibiotic resistance. The study revealed a very high prevalence of 91.19% (145/159) of MAR bacterial isolates exhibiting resistance to ≥3 antibiotics and MAR indices > 0.2, indicating that the MAR bacteria arose from a probably hazardous source where antibiotics are used often, such as in the pig farm, either for growth promotion, treatment or prevention of diseases [8], as well as the occurrence of high selective pressure in this environment. Varying levels of resistance and MDR were noted in relation to the different antibiotic classes tested against the bacterial isolates. The display of different MDR phenotypes in different permutations and combinations provides evidence of the complexity and diversity of resistance in animal manure [27], which has been considered as a reservoir of pathogenic bacteria, resistant bacteria and resistance genes. The demonstration of resistance to the antibiotics recommended by SAVA indicated the possibility of high exposure of these organisms on the farm. A remarkable fraction of the bacterial isolates (91.19%) were MDR with Yersinia spp., Salmonella spp., E.
coli and Campylobacter spp. recording 100%, 100%, 90.63% and 80.36%, respectively (Table 3). As shown in Table 3, overall, the MDR bacterial isolates displayed a total of 94 MAR phenotypes, with E. coli demonstrating more diverse resistance profiles (29 MDR phenotypes), followed by Campylobacter spp. (27 MDR phenotypes). In accordance with our findings was the 95.5% resistance to three or more classes of antimicrobials noted by Chala et al. [64] of bacterial isolates recovered from humans, animals and water sources in livestock-owning households, residing in the peri-urban areas of Addis Ababa, Ethiopia. It is striking and a cause for concern that almost all the isolates belonging to Enterobacteriaceae (Yersinia spp., Salmonella spp. and E. coli) were multidrug-resistant. In addition, two (2) isolates presented with a MAR index of 0.9, showing resistance to almost all the antibiotics employed in the investigation and also higher selective pressure. Therefore, these strains offer higher chances of contamination and the spread of antibiotic resistance genes (serving as vehicles for antibiotic resistance genes) via horizontal gene transfer. Moreover, one (1) Campylobacter isolate showed a MAR phenotype of TS, E, C, TET, CIP, SMX, AUG, S, CTX, NA, GM, AP, AMOX, indicating resistance to 13 of the tested antibiotics. Thus, Ogbor et al. [54] opined that there seems to be a paradigm shift, with Campylobacter isolates now becoming multidrug-resistant, notably in their resistance to tetracycline and fluoroquinolones. According to the One Health concept, our findings of antibiotic-resistant and multidrug-resistant bacterial pathogens would have serious implications in humans when the untreated manure is applied on agricultural lands and ultimately enters ground water and spring water utilised by humans for domestic and sanitation purposes via hydrological processes (heavy rainfall or storms), creating opportunities for diseases and infections [6]. The diseases and infections caused by bacteria expressing decreased sensitivity to the highly recommended drugs employed for antimicrobial chemotherapy, therefore, result in failed treatment, deterioration of the patient's condition as well as elevated financial constraints on the people and the facility engaged in the delivery of health care services [15]. Additionally, humans might ingest these resistant bacteria from meat that is contaminated with faeces/manure during slaughtering at the abattoirs. The higher prevalence of MDR Enterobacteriaceae (Gram-negative bacilli) can cause difficult-to-treat infections, consequently endangering a greater number of hospitalised patients [15], interfering with many facets of antimicrobial stewardship. Similarly, the observation of high levels of resistance and multidrug resistance is rather disturbing, considering that South Africa has a high prevalence of HIV/AIDS, a population who depend on the use of antibiotics to boost their immune system during the management of more than a few bacterial infections, including gastrointestinal problems that are recurrent in the said population [22]. In addition, spillover of the resistant pathogens and resistance genes can occur from the faeces of one animal to another via horizontal transfer as these animals live in proximity on the farm [27]. However, in this study, remarkable resistance was shown against erythromycin, amoxicillin, sulfamethoxazole, tetracycline and cefotaxime.
Future studies are needed to ascertain the occurrence of the resistance genes provoking the resistance observed in the bacterial isolates recovered from this study; screening for virulence factors will be performed simultaneously, since the two are interconnected.
Conclusions
Clearly, the pig manure has been shown to contain zoonotic enteropathogens at varying levels or counts, with the maximum counts due to E. coli. This study ascertained the presence of multidrug-resistant bacteria in a mixture of pig manure and pine wood sawdust. The results showed that the bacterial isolates demonstrated great resistance to erythromycin, sulfamethoxazole, amoxicillin, tetracycline and cefotaxime. Additionally, the MAR index ranged between 0.1 and 0.9, with 91.19% of the isolates exhibiting resistance to ≥3 antibiotics, a situation that merits public health attention. This is so because some of the tested bacteria are major waterborne or foodborne pathogens, which are capable of causing close to 2 million deaths per year in developing countries. Having the resistant counterparts of these bacteria will apparently compound the problem further, posing a serious threat to health care systems as the organisms spread from the environment to clinical settings. Acknowledging the fact that antimicrobial resistance is a growing global problem, it is therefore pertinent to conduct periodic monitoring of the resistance patterns of common bacteria of public health and environmental significance, so as to receive updates on their susceptibility/resistance profiles that might in turn be beneficial to both the patient and the clinician in the selection of chemotherapy. The profile of antibiotic resistance fluctuates with socioeconomic strata and geographical criteria and differs between studies; bacterial resistance is affected by the time span, the study design and the type of population involved in the investigation [17]. Nonetheless, following the findings of this study presenting the high prevalence of resistance to erythromycin, sulfamethoxazole, amoxicillin and tetracycline, it is worth concluding that these individual antibiotics should not be used as monotherapy, as our results argue against employing them as empirical treatment of infections caused by these bacterial isolates and for effective hospital infection control. Additionally, the pig manure tested should not be employed as a fertiliser according to traditional custom unless treated further, as it will serve as a source of contamination and of dissemination of antibiotic resistance genes to the microbial population in the environment and in clinical settings. In addition, the findings represent a baseline for future investigations into identifying the genes responsible for antibiotic resistance and/or virulence in these organisms, in order to devise alternative therapies in the form of vaccines and antimicrobials to prevent and cure infections.
Data Availability Statement: Data available on request due to restrictions (privacy). The data presented in this study are available on request from the corresponding author. The data are not publicly available as the intellectual property belongs to the University.
2023-01-12T17:30:27.480Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "c7de7d7a8ddf5c0dc91ea47e71e9a690c81727da", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/20/2/984/pdf?version=1672912062", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "842b9e69ca4f9a2f4c789b81f22cb943f0864323", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
118640927
pes2o/s2orc
v3-fos-license
Advances in QCD sum rule calculations We review the recent progress in the applications of QCD sum rules to hadron properties with the emphasis on the following selected problems: (i) development of new algorithms for the extraction of ground-state parameters from two-point correlators; (ii) form factors at large momentum transfers from three-point vacuum correlation functions; (iii) properties of exotic tetraquark hadrons from correlation functions of four-quark currents. INTRODUCTION The method of sum rules is 35 years old. In spite of this respectable age, the method is being permanently enriched by new ideas and new calculations and remains one of the widely used and competitive tools both for the determinations of the fundamental QCD parameters (e.g., quark masses and α s ) and for the calculation of hadron properties. In this talk we review the recent progress in the applications of QCD sum rules to hadron properties with the emphasis on the selected topics: (i) sum rules for two-point vacuum correlation functions and leptonic decay constants of heavy mesons; (ii) sum rules for three-point vacuum correlation functions, form factors and three-meson couplings; (iii) sum rules for exotic tetraquark states. QCD sum rules [1] (see also [2,3] for further references) is one of the main analytic methods for the study of hadron properties from the field-theoretic Green functions (correlators) in full QCD. The correlators are calculated by means of the Wilsonian operator product expansion (OPE) which provides the rigorous framework for the separation of long and short distances, in QCD being dominated by nonperturbative and perturbative physics, respectively [4]. The OPE clearly identifies, e.g., the origin of chiral symmetry breaking and the emergence of hadron masses, leads to factorization of complicated amplitudes of hadron interactions at large momentum transfers. • QCD sum rules provide hadron amplitudes which satisfy all rigorous properties imposed by perturbative QCD and, at the same time, contain nonperturbative contributions determined in a unique way. As an OPE-based method, QCD sum rules are formulated in the Euclidean region. However, by combining OPE with the knowledge of the analytic structure of the Green functions and resummation schemes, the analytic continuation to the Minkowski space may be performed. In this respect QCD sum rules may have a broader range of applicability than lattice QCD. Last but not least, as an analytic method, QCD sum rules provide physics insights in the hadron structure, which are not easy to get from the numerical results of lattice QCD. • The method of QCD sum rules favourably compares with other analytic methods, such as effective theories or functional methods: the method of sum rules is based on the Wilsonian OPE in full QCD and therefore involves no other implicit assumptions often present in other analytic method. a. OPE and the sum rule for the correlator The basic object in the method of QCD sum rules -as well as in lattice QCD -is the vacuum-to-vacuum correlator, i.e., the vacuum average of the T -product of quark and gluon currents. In lattice QCD, one finds this correlator numerically at large values of the Euclidean time τ. In the method of QCD sum rules, one calculates the correlator analytically as the Taylor expansion in τ. Technically, one considers a so-called Borelized correlator, i.e. applies the Borel transform to the Feynman diagrams, written as spectral representations in the energy variables. 
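To make the step just described concrete, the schematic relation below shows, in its standard textbook form (normalisation conventions differ between papers), how the Borel transform acts on a dispersive representation of the correlator; it is an illustrative reminder rather than a formula taken from the original works.

```latex
% Schematic illustration (standard form; conventions vary): the Borel transform
% maps the dispersion representation in p^2 into an exponentially weighted
% spectral integral in the Borel parameter tau,
\Pi(p^2) \;=\; \frac{1}{\pi}\int ds\,\frac{\mathrm{Im}\,\Pi(s)}{s-p^2} \;+\; \text{subtractions}
\quad\xrightarrow{\;\mathcal{B}_{p^2\to\tau}\;}\quad
\Pi(\tau) \;=\; \frac{1}{\pi}\int ds\; e^{-s\tau}\,\mathrm{Im}\,\Pi(s),
% so polynomial subtraction terms drop out and the contributions of high-mass
% intermediate states are exponentially suppressed, as stated in the text.
```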
The inverse Borel mass parameter is related to τ. The OPE provides the analytic double expansion of this correlator in form of a perturbatively calculable power series in the strong coupling constant α s and in powers of τ; the "power corrections" -terms involving powers of τ -are given via condensates, expectation values of gauge-invariant operators over the physical vacuum in QCD; these condensates describe in an unambiguous way nonperturbative QCD contributions. Alternatively, one may derive a representation for the Borelized correlator in terms of the intermediate hadron states. The two representations for the Borelized correlator -by OPE and by sum over hadron states -constitute the two sides of the QCD sum rule. b. Isolating the ground-state contribution from the Borelized correlator At large τ, the ground-state dominates the correlator which thus fully determines the ground-state parameters. In the region of small and intermediate τ, where the truncated OPE gives a good description of the correlator, excited hadronic states give sizeable contributions. In order to get rid of the excited states and to isolate the ground-state contribution from the correlator, one invokes the idea of quark-hadron duality [5][6][7]: the excited states are dual to high-energy parts of Feynman diagrams of perturbative QCD. The ground-state contribution is then equal to the "dual correlator" -the correlator in which the spectral integrals for perturbation theory diagrams are cut at a certain effective continuum threshold s eff , or simply "effective threshold". The effective continuum threshold differs from the physical continuum threshold determined by masses of low-lying hadrons. Obviously, apart from a truncated OPE for a correlator, the effective continuum threshold is a crucial ingredient of every sum-rule extraction of ground-state parameters; this quantity governs the accuracy of the quark-hadron duality and determines to large extent the numerical value of the extracted parameters of the bound state. The truncated OPE itself cannot provide precise values of the ground-state parameters. Therefore, the method of QCD sum rules provides hadron parameters with some uncertainty which is referred to as systematic uncertainty [8]. Understanding the properties of the effective continuum threshold and finding a criterion for fixing this quantity is the key to obtaining reliable hadron parameters from sum rules. TWO-POINT CORRELATION FUNCTION AND THE OPE Let us start with the simplest object -the two-point correlation function; the perturbative expansion for this object is known to a higher accuracy compared to more complicated correlators. Because of that, the formulation and application of the appropriate and reliable algorithms for the extraction of the hadron parameters from this correlator is becoming increasingly important. The two-point function, i.e. the vacuum average of the T -product of two interpolating quark currents is the basic object for the sum-rule calculation of the decay constants of the heavy-light mesons such as B, B s , D, D s or their vector analogues. For instance, for heavy-light pseudoscalar currents j 5 = m bq iγ 5 b (here m b is the scale-dependent MS mass of the heavy quark and M b will denote its pole mass; the light-quark mass is neglected) one obtains The Wilson OPE for the T -product and for the correlation function has the following form: and Here the physical QCD vacuum |Ω is a complicated object which differs from perturbative QCD vacuum |0 . 
The properties of the physical vacuum are characterized by the condensates -the nonzero expectation values of gaugeinvariant operators over this physical vacuum: The numerical estimates for the condensates may be found in [2,3]. Here we only list the recent determinations of the lowest-dimension condensates which claim an extremely high accuracy: Ω|qq(2 GeV|Ω MS = (282 ± 2 MeV) 3 [9], Ω| α s π G a µν G a,µν |Ω = 0.013 ± 0.0016 GeV 4 [10]. The two-point function satisfies the dispersion representation (which requires subtractions not shown here) and may be calculated both using OPE (which gives it in the form Π OPE (p 2 )) and using the sum over the hadron intermediate states (which gives it in the form Π hadr (p 2 )). The sum rule is the statement that both forms represent the same quantity and thus should be equal to each other The spectral densities for the two representations read Here M B denotes the heavy-meson mass, f B is its decay constant defined as The truncated OPE series has quark and gluon singularities and does not have the hadron ones; therefore, comparison of the truncated OPE and the hadron representation in (7) may be done in the region of p 2 far from hadron thresholds and resonances. Performing the Borel transform which serves several purposes (suppressing the contribution of the excited states, killing the subtraction terms in the dispersion representation for Π(p 2 ), improving the convergence of the perturbative expansion [1]) one arrives at the Borel image of the two-point function where s phys = (M B * + M P ) 2 is the physical continuum threshold, determined by the masses of hadrons which may appear as the intermediate states, and where power corrections Π power (τ, µ) are given via the condensates and radiative corrections to them. The sum rule now takes the form Recall that the hadron (i.e. full-QCD) representation Π hadr (τ) is an infinite sum of the exponential terms, whereas power corrections in Π OPE (τ) contain polynomials in τ multiplied by exp(−M 2 b τ). Therefore the truncated OPE provides a good description of Π hadr (τ) at "not too large" values of τ. This determines the choice of the Borel windowthe working τ-range where the OPE gives an accurate description of the exact correlator (i.e., all higher-order radiative and power corrections are under control) and at the same time the ground state gives a "sizable" contribution to the correlator. The best-known 3-loop calculations of the perturbative spectral density [11] have been performed in form of an expansion in terms of the MS strong coupling α s (µ) and the pole mass M b : An alternative option [12] is to reorganize the perturbative expansion in terms of the running MS mass, m b (ν), by substituting M b in the spectral densities ρ (i) (s, M 2 b ) via its perturbative expansion in terms of the running mass m b (ν) Advanced algorithms for an isolation of the ground-state contribution The hadron representation contains the sum over all hadron intermediate states, whereas we are primarily interested in the ground state contribution. To exclude the excited-state contributions, one adopts the duality Ansatz: all contributions of excited states are counterbalanced by the perturbative contribution above an effective continuum threshold, s eff (τ, µ) which differs from the physical continuum threshold. Applying the duality assumption yields: The rhs is the dual correlator Π dual (τ, s eff (τ)) (we shall not explicitly write µ as an argument of s eff but this dependence should be kept in mind). 
Obviously, even if the QCD inputs ρ pert (s, µ) and Π power (τ, µ) are known, the extraction of the decay constant requires s eff (τ, µ). Let us emphasize, that the effective threshold should be the function of τ and µ: (i) one can easily check that s eff should depend on τ in order the τ-dependences of the r.h.s. and the l.h.s. of (15) match each other; (ii) since the truncated OPE is used in the r.h.s. of (15), the effective threshold also depends on the choice of the scale µ. In early applications of the method of sum rules, it was common to use the approximation s eff (τ) = const; the value of this constant has been fixed by requiring the maximal stability (i.e. the least unphysical dependence of the hadron observable on the Borel parameter τ). This procedure proved to work reasonably well, although it did not allow one to probe the uncertainty of the extracted hadron parameter induced by using the approximation of a constant effective continuum threshold. It should be emphasized that even if the OPE for the correlation function is known with very high accuracy in the Borel window, the hadron parameters can still be determined with some uncertainty which reflects the limited intrinsic accuracy of the method of sum rules. We refer to the corresponding uncertainty as to the systematic uncertainty. The latter is related to the adopted prescription for fixing the effective continuum threshold s eff (τ). As the accuracy of the OPE for the correlation functions has increased, one faced the acute necessity to provide more accurate and reliable procedures for the extraction of hadron parameters: gaining control over the systematic uncertainties has become mandatory [8]. The results of [14] demonstrated that in those cases where the bound-state mass M B is known, one can use it and improve the accuracy of the decay constant. We introduce the dual invariant mass M dual and the dual decay constant f dual The deviation of M dual (τ) from M B measures the contamination of the dual correlator by excited states. Starting with any trial function for s eff (τ) and minimizing the deviation of M dual from M B in the τ-window yields a variational solution for s eff (τ). As soon as the latter is found, one readily obtains the corresponding decay constant f B from (15). We consider polynomials in τ and obtain their parameters by minimizing the squared difference between M 2 dual and M 2 B in the τ-window: As shown in several exactly solvable models, the band of the estiamtes for f B corresponding to the variational solutions for linear, quadratic, and cubic trial s eff (τ), provides a realistic estimate for the systematic uncertainty of the decay constant [15,16]. The resulting f B obtained according to the procedure described above is sensitive to the input values of all the OPE parameters (quark masses, α s , the condensates) which are known with some uncertainties thus yielding the OPE-related uncertainty of f B . To obtain the latter, one assumes the Gaussian distributions for the OPE parameters mentioned above. Moreover, because of the truncation of the OPE series, the decay constants exhibit an unphysical dependence on the precise value of the renormalization scales µ. A priori, any choice of the scale is equivalently good; therefore, we average over the scale in some intervals assuming the uniform distribution of µ. 
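The following toy numerical sketch, written purely for illustration, mimics the variational procedure described above: a polynomial ansatz for the effective threshold s_eff(τ) is fitted by minimising the deviation of the dual mass from the known ground-state mass over a Borel window, after which the ground-state residue is read off from the dual correlator. The spectral density and all numerical values are invented toy inputs, not the three-loop QCD expressions used in the actual analyses, and the threshold is held fixed when the τ-derivative is taken (one simple convention).

```python
# Toy sketch of the variational fixing of s_eff(tau); all inputs are invented.
import numpy as np
from scipy.optimize import minimize

M_B = 5.28          # known ground-state mass (GeV), used as input
s_min = 4.2**2      # toy lower end of the spectral integral (GeV^2)
taus = np.linspace(0.05, 0.20, 16)   # toy Borel window (GeV^-2)

def rho(s):
    return 3.0 / (8.0 * np.pi**2) * s   # toy "perturbative" spectral density

def dual_correlator(tau, s_eff):
    s = np.linspace(s_min, s_eff, 400)
    return np.trapz(np.exp(-s * tau) * rho(s), s)

def dual_mass_sq(tau, s_eff, h=1e-4):
    # M_dual^2 = -d/dtau log Pi_dual, via a symmetric finite difference
    lo, hi = dual_correlator(tau - h, s_eff), dual_correlator(tau + h, s_eff)
    return -(np.log(hi) - np.log(lo)) / (2 * h)

def chi2(coeffs):
    # s_eff(tau) = c0 + c1*tau + ...  (polynomial ansatz in tau)
    s_eff = np.polyval(coeffs[::-1], taus)
    return np.sum((np.array([dual_mass_sq(t, se) for t, se in zip(taus, s_eff)])
                   - M_B**2) ** 2)

best = minimize(chi2, x0=[33.0, 0.0], method="Nelder-Mead")   # linear ansatz
s_eff_fit = np.polyval(best.x[::-1], taus)
residue = [np.exp(M_B**2 * t) * dual_correlator(t, se) for t, se in zip(taus, s_eff_fit)]
print("fitted s_eff(tau):", np.round(s_eff_fit[:3], 2), "...")
print("dual estimate of the ground-state residue (toy units):", round(float(np.mean(residue)), 4))
```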
• Another simple algorithm for fixing the τ-dependent effective threshold in the Borel sum rule has been recently adopted in [17]: for each value of τ the authors calculated M dual (τ) neglecting the τ-dependence of s eff (τ) and then easily obtain s eff by solving the equation M dual (τ) = M B . Obviously, the resulting effective thresholds do depend on τ; neglecting their τ-dependence while calculating the dual mass leads to some intrinsic inconsistencies. Following our old idea, we tested the algorithm of [17] in a quantum-mechanical potential model for the case of a potential which contains the confining and the Coulomb parts [15]. This analysis shows that in quantum mechanics the algorithm with the variational solutions described above provides more reliable and accurate estimates for the decay constants of the heavy-light mesons compared with the algorithm of [17]. • An interesting approach to the extraction of the ground-state parameters within the finite-energy sum rule has been formulated and applied to the decay constants of heavy-light and heavy-heavy mesons in [18]. We have also tested this algorithm in the potential model [15]. For the potential-model parameters appropriate for for heavy-light mesons the algorithm of [18] was shown to provide rather accurate estimates for the decay constants such that the "invisible" systematic error remains at a few percent level only. Charm sector For the extraction of the decay constants of the charmed pseudoscalar and vector mesons, one makes use of the best-known three-loop expression for the spectral densities of the two-point functions for pseudoscalar and vector currents. The OPE in terms of the pole mass M b calculated in [11] does not exhibit a perturbative hierarchy, therefore one rearrange the OPE in terms of the running MS-mass [12]. Then, the perturbative hierarchy of the correlation function starts to depend on µ; this feature allows one to choose the range of µ where the perturbative hierarchy is visible. The negative effect of this rearrangement of the perturbative expansion is that, because of the truncation of the OPE series, the extracted decay constants acquire an unphysical dependence on the scale µ. In the charm sector this however does not lead to any serious problems. Figure 1 shows the dependence of the decay constants of the charmed pseudoscalar and vector mesons for the central values of all other OPE parameters after applying the algorithm for fixing the effective thresholds described above. One can see a weak µ-dependence of the decay constants of the pseudoscalar mesons mesons, whereas for vector mesons this µ-dependence is more pronounced. Averaging over the OPE parameters in their respective intervals and over the scale in the range 1 ≤ µ[GeV ] ≤ 3 one arrives at the following results [19] For the ratio we reported f D * / f D = 1.221 ± 0.080 OPE ± 0.008 syst , which compares nicely with the lattice QCD result f D * / f D = 1.20 ± 0.02. The results for the charmed mesons from other sum-rule analyses [17,20] agree well with each other and with the results from lattice QCD [21]. Beauty sector Similar to the charm sector, the OPE for pseudoscalar and vector currents containing the b-quark, does not show any perturbative hierarchy; there is no reason to assume that the unknown higher-order perturbative contributions are small. 
Rearranging the perturbative expansion in terms of the running mass introduces a dependence on the scale µ and opens the possibility to choose a working range of µ in which the perturbative hierarchy is explicit, thus allowing one to hope that the unknown higher orders do not contribute substantially to the correlation function. In the b-sector one encounters two interesting features of the sum-rule analysis: • The sum-rule results for the beauty-meson decay constants correlate very strongly with the b-quark mass [22]. The sum-rule results for the decay constants corresponding to this value of the b-quark mass read • For the decay constant of B * , one observes an unexpectedly strong µ-dependence [24]: Averaging over the scale range 3 < µ[GeV] < 6 leads to Taking into account only low-scale results for 2.5 < µ[GeV] < 3.5, yields f B * / f B = 0.994 ± 0.01. The sum-rule analysis [17] also gives indications that f B * / f B ≤ 1 (see Table II of [17]). Surprisingly, the QCD sum-rule prediction for f B * / f B is below the corresponding results from lattice QCD, which seem to favour a value slightly above unity [21,25]. Clearly, such tension calls for further detailed investigations.
µ-dependence of the physical quantities
The heavy-light correlators are known with an impressive three-loop accuracy and are therefore rather weakly sensitive to variations of the scale. Nevertheless, the dual correlator of the vector currents, which includes only the low-energy region of the Feynman diagrams, and, correspondingly, the vector-meson decay constants are rather sensitive to the choice of the scale. In many cases this scale-dependence is the main source of the OPE uncertainty in the decay constants. We should mention that in some publications the µ-dependence is treated in a specific way [20]: one just chooses one scale at which the decay constant has, e.g., an extremum in µ, and provides the results for this very scale, assigning no theoretical uncertainty to the scale fixing. This of course strongly reduces the total uncertainty of the decay constant obtained with the sum-rule technique, but from our point of view such a treatment is not justified: the (unphysical) µ-dependence is an effect of the truncation of the OPE series and thus reflects an essential feature of QCD. Any scale for which a reasonable perturbative hierarchy is seen may be used for the determination of the hadron parameter; the unpleasant µ-dependence of the sum-rule results should thus be properly reflected in the theoretical uncertainty of the hadron parameter obtained using a QCD sum rule.
SUM RULES FOR THREE-POINT VACUUM CORRELATION FUNCTIONS
Let us now discuss the calculation of the meson elastic and transition form factors from the three-point vacuum correlation functions [26,27]. The basic object in this case has the form The three-point Green function in full QCD contains the double pole related to the mesons in the p 2 and p ′2 -channels in the timelike region. The residue in this double pole is the form factor of interest. The Green function in the spacelike region may be calculated using the same method as the two-point function, i.e. by performing the OPE.
One represents the Green function Γ(p 2 , p ′2 , q 2 ) as a double spectral integral in p 2 and p ′2 , performs the double Borel transform p 2 → τ and p ′2 → τ ′ (which, similar to the two-point function, kills the subtraction terms and suppresses the contributions of the excited states), equate to each other the OPE and the hadron representations for Γ(p 2 , p ′2 , q 2 ), and use duality property to isolate the ground-state contribution, thus relating the meson form factor to the low-energy region of the triangle diagrams of perturbative QCD and power corrections given through the condensates. For instance, the pion elastic form factor, in which case one sets τ = τ ′ , has the form [27] An essential feature of the three-point sum rule is that the effective threshold now depends on the Borel parameter τ and the momentum transfer Q [28][29][30]; obviously, one faces a serious problem of finding appropriate algorithms for fixing s eff (Q 2 , τ). It should be understood that the effective continuum threshold for the form factor differs from the effective threshold for the decay constant. 1 For large Q 2 , power corrections calculated in terms of the local condensates rise as polynomials with Q 2 , thus preventing a direct use of the sum rule (23) at large Q 2 . There are essentially only two possibilities to study the region of large Q 2 starting with the vacuum correlators: • use nonlocal condensates which are aimed at the resummation of the local condensate effects [31,32]. • work in the so-called local-duality (LD) limit τ = 0 [31]. A specific feature of this limit is that all power corrections vanish in this limit and details of non-perturbative dynamics are hidden in one complicated object -the Q 2 -dependent effective threshold s eff (Q 2 ). A similar treatment may be performed for, e.g., the π 0 → γγ * transition form factor [33][34][35] for which one obtains the single spectral representation in the LD limit: Due to properties of the spectral functions ∆ pert (s 1 , s 2 , Q 2 ) and σ pert (s, Q 2 ), the form factors obey the factorization theorems as soon as the effective thresholds satisfy Remarkably, due to the QCD factorization theorems for the hard form factors, the effective thresholds at Q 2 → ∞ are given through the decay constants of the participating mesons. It should be emphasized that the only feature of theory relevant for this property of s eff (Q 2 ) is factorization of hard form factors. For finite Q 2 , the effective thresholds s eff (Q 2 ) ands eff (Q 2 ) depend on Q 2 and differ from each other [36,37]. Nevertheless, setting s rme f f (Q 2 ) = s rme f f (Q 2 → ∞) for all not too small Q 2 [27] provides an approximate parameterfree prediction for the form factors which is becoming increasingly accurate as soon as Q 2 increases. The results of [37] give convincing evidences that s eff (Q 2 ) ands eff (Q 2 ) are close to their asymptotic values already at relatively low values Q 2 ≈ 4 − 8 GeV 2 . Thus, the LD approximation for the form factors-which requires as its crucial ingredient the knowledge of O(1) and O(α s ) double spectral densities-is increasingly accurate in the region not too close to zero recoil. The LD approximation is very promising for the application to, e.g., heavy-to-light weak form factors. A still missing ingredient here is the two-loop O(α s ) double spectral density of the triangle diagram for different currents and arbitrary quark masses in the loop. 
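For orientation, the schematic local-duality relations below spell out the structure described in this section; the prefactors are indicative only (they depend on how the spectral densities are defined), and the asymptotic threshold value quoted is the one commonly used in the local-duality literature for f_π ≈ 130 MeV, not a number taken from the present text.

```latex
% Schematic local-duality (tau = 0) representations; normalisations depend on the
% adopted definitions of the double and single spectral densities,
F_\pi(Q^2) \;\simeq\; \frac{1}{f_\pi^2}\int_0^{s_{\rm eff}(Q^2)}\! ds_1
\int_0^{s_{\rm eff}(Q^2)}\! ds_2\;\Delta_{\rm pert}(s_1,s_2,Q^2),
\qquad
F_{\pi\gamma}(Q^2) \;\simeq\; \frac{1}{f_\pi}\int_0^{\bar s_{\rm eff}(Q^2)}\! ds\;\sigma_{\rm pert}(s,Q^2),
% with the factorization theorems for the hard form factors fixing the asymptotic
% thresholds, commonly quoted as
s_{\rm eff}(Q^2\!\to\!\infty)\;=\;\bar s_{\rm eff}(Q^2\!\to\!\infty)\;=\;4\pi^2 f_\pi^2\;\approx\;0.67\ \mathrm{GeV}^2 .
```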
This is a really challenging calculation which however opens the possibilities of very interesting applications. So far the only known results correspond to all massless quarks in the loop [38] and to HQET [39,40]. SUM RULES FOR THE EXOTIC POLYQUARK CURRENTS The OPE for the correlation functions of the exotic polyquark currents involving 4 (or 5) quark fields of the type whereÔ is an appropriate combination of the Dirac matrices and possibly also of the (covariant) derivatives, have specific features compared to the OPE for the bilinear currents of the form j(x) =q 1 (x)Ôq 2 (x) used for usual "nonexotic" mesons. Namely, the lowest-order O(1) contribution to the OPE for any correlator involving the exotic current, e.g. Π DD = 0|T (D(x)D(0)|0 , is given by the disconnected diagrams. As known from the general features of the Bethe-Salpeter equation and also emphasized recently by Weinberg [41], these disconnected diagrams are not related to the exotic bound states. The connected diagrams relevant for the exotic states emerge in the OPE for any correlator at the order O(α s ) and higher; therefore for the analysis of the exotic states the knowledge of the radiative corrections is mandatory. This makes the analysis of the exotic states a more technically involved problem than the analysis of the normal hadrons. Nevertheless, due to the fact that the observed exotic states are narrow, the procedure of extracting their parameters from the OPE has the same features and the same challenges as for the normal hadrons. Our experience in the analysis of the usual hadrons proves that a truncated OPE for the correlation function does not allow one to study at the same time both the existence of the isolated ground state and of its properties. However, if the mass of the narrow bound state is known, the method of sum rules allows one to obtain reliable predictions for its decay constants and the form factors. Structure of the exotic tetraquark states Obviously, the exotic tetraquark states may have a rather complicated "internal" structure; two most popular scenarios of this structure are a confined tetraquark state (i.e. a bound state in a confining potential between the two color-triplet diquarks) and a molecular "nuclear-physics like" bound state in the system of two colorless mesons. However, an important question about the structure of the exotic state-which to large extent determines also its production mechanism-is not easy to answer [42]: (i) by a combined color-spinor Fierz rearrangement of the tetraquark interpolating current D(x) one can write it either in diquark-antidiquark or meson-meson form; (ii) the same quantum numbers of the exotic interpolating current may be obtained by different combinations of its diquarkantidiquark or meson-meson bilinear parts. The simplest characteristic of a usual meson is its decay constant, i.e. the transition amplitude between the vacuum and the meson induced by its interpolating current; for a heavy quarkonium state the decay constant is analogous to its wave function at the origin ψ(r = 0). For an exotic tetraquark state one should considers the connected self-energy functions and study the corresponding sum rules. However, for an exotic state one may obtain a set of the decay constants, related to different structure of the interpolating current with the quantum numbers of the exotic tetraquark of interest. The answer to the question of the dominant structure of the tetraquark may be given only by the analysis of a large set of the decay constants. 
• As the first step, one needs to systematically study the interpolating currents for tetraquarks with different quantum numbers. As the next step one can calculate the set of Π DD . Because of the factorization property of the two-point function of the local tetraquark currents [43], the radiative corrections to Π DD are given via radiative corrections to the various two-point functions of the bilinear quark currents. For some of these two-point functions (namely, VV and AA) the radiative corrections are well known; for some of them (such as TT, where T is the tensor bilinear current) these corrections still have to be calculated. • Then, the set of the sum rules for the different two-point functions Π DD should be studied, and only then may the answer about the structure of the observed narrow exotic candidates be obtained. Especially interesting cases here are the narrow charged tetraquark Z − (4430) (J P = 1 + , width ≃ 45 MeV, valence-quark content cc̄ūd) and the X(3872) (J PC = 1 ++ , width < 24 MeV). Another interesting possibility, so far not discussed in the literature, is considering nonlocal interpolating currents for the exotic mesons. The nonlocality of the interpolating currents should allow one to access in a better way the subtle details of the tetraquark structure.
Strong fall-apart decays of the exotic tetraquark states
In the last decade, QCD sum rules have been extensively applied to the analysis of strong decays of exotic multiquark states (see e.g. [44,45] and references therein). The basic object for the analysis of these decays in QCD is the three-point function of the type This correlator contains the triple pole in the Minkowski region where the dots stand for less singular terms. Here g XM 1 M 2 is the three-hadron coupling which describes the X → M 1 M 2 transition; f X , f M 1 , and f M 2 are the decay constants of the mesons, describing the strength of their interaction with the interpolating currents, ⟨X|D(0)|0⟩ = f X and ⟨M 1,2 | j 1,2 (0)|0⟩ = f 1,2 (we omit here all Lorentz indices and for simplicity neglect the spins of the hadrons and of the interpolating currents). The OPE allows one to calculate the expansion of this correlator at spacelike momenta far from the hadron thresholds. Again, the leading contribution in α s is given by a disconnected diagram (see Fig. 3a) which factorizes and does not depend on the momentum of the exotic current p 2 at all: Performing the Borel transform p 2 → τ, which comprises one of the steps of the sum-rule analysis, we see that the Borel image of the disconnected leading-order contribution vanishes. 2 Therefore any attempt to extract the tetraquark decay amplitude from the leading-order contribution is inconsistent. Relevant for the exotic-state properties are the O(α s ) corrections, which are technically very difficult. This is a difficult calculation, but it should be done before one may hope to get reliable predictions for the tetraquark properties. So far these corrections have been calculated only for the three-point function of the bilinear currents in two cases: (i) for massless quarks and (ii) for an infinitely heavy active quark and a massless spectator. For the O(α s ) corrections to the three-point functions Γ, involving one tetraquark and two bilinear currents, no results exist in the literature. Nevertheless, the common feature of all previous calculations of these decays within QCD sum rules (e.g.
[45,46]) was the attempt to study the tetraquark (and pentaquark) decays basing on the factorizable leading-order contribution which intrinsically has no relationship with the tetraquark properties (which is clear both from the factorization property Γ(p, p ′ , q) = Π(p ′2 )Π(q 2 ) and from the large-N c behaviour of the QCD diagrams emphasized by Weinberg [41]. Therefore the existing analyses should be strongly revised by calculating and taking into account the nonfactorizable two-loop O(α s ) corrections. Nonzero results based on the leading-order correlation function may be obtained only by a trick. Let us consider e.g. the decay Z → ψ ′ + π − . One makes use of the tetraquark current j(x) =c(x)c(x)ū(x)d(x) (we again omit the Dirac matrices for simplicity). The corresponding three-point correlation function of interest is A nonzero result for the Borel transform of the disconnected zero-order contribution may be obtained by first considering the soft-pion limit q → 0, i.e. p ′ = p, which gives for the disconnected contribution Π(p 2 )Π(0) and then performing the Borel transform p 2 → τ. However, the decay rate obtained in this way is not really trustworthy. We therefore conclude that the "fall-apart" decay mechanism of exotic hadrons differs from the decay mechanism of the ordinary hadrons and requires the appropriate treatment within QCD sum rules. The calculation of the radiative corrections is mandatory for a reliable analysis of the properties of the exotic states. SUMMARY AND OUTLOOK In the recent years, great progress has been seen both in the calculations of the OPE series for various correlation functions and in the direction of formulating advanced algorithms for the extraction of the individual hadron parameters from these correlators. We could not discuss all the developments in this talk but let us try to mention in this summary the interesting open issues to be addressed in the future analyses: • Let us recall that combining moment QCD sum rules with experimental/lattice data gives the most accurate estimates of the heavy-quark masses [47]. • Hadron properties from 2-point functions: a. We have seen a visible progress in developing the new algorithms for extracting ground state parameters from the OPE of the correlators and gaining control over the systematic errors of the decay constants (finite-energy sum rules, Borel sum rules). Although it seems impossible to predict both masses and decay constants with a controlled accuracy, using the mass of the ground state as input, systematics can be controlled). b. We have encountered interesting puzzles in the b-sector: (i) The b-quark mass 4.18 GeV [23] when used in the Borel sum rules for f B leads to tension with lattice results for f B . (ii) Unexpectedly strong scale-dependence of decay constants of vector mesons and of f B * / f B even using the O(α 2 s ) correlation function. c. Calculation of the decay constants of heavy-quarkonium states within the method of QCD sum rules is still not fully settled: The problem here is that the OPE for the doubly-heavy correlation functions contain relatively small nonperturbative power corrections. Therefore in QCD, the structure of OPE for the heavy-quarkonium system is somewhat similar to the structure of OPE for a purely coulomb system. Obviously, the algorithms adopted and tested for light or heavy-light hadrons in which cases the nonperturbative corrections are large, may work differently for heavy quarkonium states. 
This feature may be the origin of the tensions between the sum-rule predictions and the results from lattice QCD and other nonperturbative approaches for e.g. the decay constants of B c mesons and some charmonium states [20,48]. A more critical analysis of the procedures of an isolation of the ground-state contribution from the correlation function and in particular of the way of obtaining the systematic uncertainties is necessary. d. Since the accuracy of the isolation of the ground-state contribution from the correlation function can be controlled, one may try to apply the method for the analysis of the excited states. Very little efforts in this direction have been done so far. • Meson elastic and transition form factors from three-point functions The Borel sum rules at τ = 0 (the so-called local duality limit) open an interesting possibility of obtaining parameterfree predictions for the elastic and the transition form factors of light mesons in a broad range of the momentum transfers. The crucial ingredients necessary for these calculations are the O(1) and O(α s ) spectral densities of the triangle diagrams. As soon as these are known, the effective thresholds are determined in a unique way by the QCD factorization theorems for hard form factors. Assuming the effective thresholds to weakly depend on the momentum transfers-a hypothesis which finds support in the data for the pion form factors-one obtains the parameter-free predictions for the form factors in a broad range of the momentum transfers. Our analysis suggests that these representations for the form factors work with a few percent accuracy for Q 2 ≥ a few GeV 2 . It seems very promising to apply the same ideas to heavy-to-light transition form factors. The main problem here is the necessity to calculate the radiative corrections to the triangle diagrams which is a very difficult task which needs serious efforts. As soon as this ambitious task is fulfilled, QCD sum rules could provide parameter-free predictions for the form factors, increasingly accurate with increase of Q 2 . • Baryon elastic and transition form factors The calculations for baryons are obviously technically extremely involved. Many sum-rule analyses of the baryon elastic and transition form factors have been presented in the recent years (see e.g. [49][50][51][52] and references therein). With a few exceptions (e.g. [49]), these calculations are based on the leading-order O(1) correlation functions and use the traditional approaches to fix the effective thresholds, usually neglecting the τand Q 2 -dependence of the latter. These analyses are expected to provide reasonable ball-park estimates for the form factors; however, in most of the cases, the estimates of the OPE-errors (related to the uncertainties of the QCD parameters, to the missing radiative corrections, and in particular to a strong dependence on the scale µ) and the systematic errors, related to the adopted procedures of fixing the effective continuum thresholds) are not done properly. Many efforts are still to be done in the domain of the baryon form factors. • Three-meson strong couplings of the type g D * Dπ These quantities have been extensively addressed using three-point vacuum correlators and the corresponding sum rules. Again, the radiative corrections to the correlation functions have not been taken into account. Moreover, the results for the decay constants require extrapolations over large ranges of the momentum transfers. Therefore, one cannot expect good accuracy of these estimates. 
In many cases, the results from sum rules lead to an unrealistic picture of the SU(3)-breaking effects (see [53] and refs therein). For a real progress, one needs the calculation and the inclusion of the radiative corrections to the appropriate three-point functions. • Properties of the exotic tetraquark states The "fall-apart" decay mechanism of exotic hadrons differs from the decay mechanism of the ordinary hadrons and requires the appropriate treatment within QCD sum rules. In distinction to the decays of the usual hadrons, where the knowledge of the radiative corrections is necessary for improving the accuracy of the sum-rule form factor calculations, the calculation of the radiative corrections is mandatory for a reliable analysis of the properties of the exotic states. The decays of the exotic states are intrinsically unrelated to the O(1) disconnected correlation functions; the results obtained from these O(1) correlators cannot be treated as fundamental and reliable. From this summary of the recent advances and still open issues it seems obvious that the future progress in the sum-rule calculations of the properties of the usual and the exotic hadrons will be related (i) to the calculations of the radiative corrections to the correlation functions and (ii) to further development of the appropriate algorithms for the extraction of the properties of the individual hadrons from these correlators.
2015-01-26T10:41:29.000Z
2015-01-26T00:00:00.000
{ "year": 2015, "sha1": "cb5846f0553fd405ff47f88bd8eec19938200523", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1501.06319", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cb5846f0553fd405ff47f88bd8eec19938200523", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
210307479
pes2o/s2orc
v3-fos-license
Spatial-temporal Pattern Evolution of Ecological Land Use in Hebei Province
Ecological land use is an important component of the ecosystem. This study presents the spatial and temporal pattern evolution characteristics of ecological land for the period from 2009 to 2017, based on GIS technology and mathematical statistics. The results show that the ecological land structure is stable, while the total amount tends to decrease, with an average annual decrease of 21,000 hm². Ecological land was mainly transformed into farmland and urban land, and internal transformation occurred mainly between woodland and grassland. Spatial aggregation existed, with the aggregation degree: forest land > grassland > water area and wetland > desert. However, the high-concentration areas ("HH" related areas) were reduced from 11 counties to 9 counties, and the low-concentration areas ("LL" related areas) increased from 37 counties to 40 counties, from 2009 to 2017. According to the results, proposals for ecological land use were put forward.
Introduction
Ecological land use is an important research subject in global environmental change and sustainable development, because it can promote the sustainable use of natural resources. Bailey [1] put forward the concept of the composite ecosystem. Zonneveld [2] analysed ecological land by grade distribution theory. Ecological land use is also an important component of the European land use classification system [3] and of land use planning in the United States [4]. The research of scholars has mainly concentrated on the definition, spatial and temporal pattern evolution and driving forces. Dong Yawen [5] described ecological land from the patch and corridor forms in the study of ecological protection in urbanized areas. Yuejian [6] described ecological land qualitatively, noting that its status could be classified as "unused". On the basis of the classification, scholars have explored the temporal and spatial patterns and driving forces at the national level, in key areas or in specific areas. Some scholars have also measured and analysed the value of ecological services. The research methods mainly include geographic detector analysis, transfer matrices, regression models, and so on. This paper analyses changes in the structure, quantity and types of ecological land, and then presents the spatial and temporal pattern evolution characteristics of ecological land, based on GIS technology and mathematical statistics, from 2009 to 2017.
2 Data sources and methods
Study area
Hebei Province, extending from 36° 03'N to 42° 40'N and 113° 27'E to 119° 50'E, has a total area of 18.88 × 10⁴ km². It borders the Bohai Sea in the east, surrounds Beijing and Tianjin in its inner ring, and is bounded by the Taihang Mountains in the west, the Yanshan Mountains in the north, and the Zhangbei Plateau to the north of the Yanshan Mountains. In the geomorphological pattern, from northwest to southeast, plateau, mountain and plain are arranged in turn, with obvious zonal distribution characteristics. There are four main river systems in the province: the Haihe River System, the Luanhe River System, the Inland River System and the Liaohe River System. With the coordinated development of the Beijing-Tianjin-Hebei region, Hebei Province has become increasingly important as the ecological environment supporting area for the Beijing-Tianjin-Hebei Urban Agglomeration.
Data sources
There is no unified standard for the classification of ecological land.
By estimating global spatial autocorrelation statistics such as Moran's I, Global Geary's C and the Join Count, the spatial correlation degree and spatial difference degree of a regional attribute are analysed. The most commonly used index is Moran's I. The calculation formula is as follows:

I = \frac{n \sum_{i}\sum_{j} w_{ij}(x_i - \bar{x})(x_j - \bar{x})}{\left(\sum_{i}\sum_{j} w_{ij}\right)\sum_{i}(x_i - \bar{x})^2}

In the formula, n is the number of spatial units, x_i the observed value, \bar{x} the average value of x_i, and w_{ij} the spatial connection matrix between units i and j; the spatial connection matrix represents the potential forces of interaction between spatial elements. The spatial connection matrix is generally expressed as an n × n matrix W, which is determined by spatial adjacency and spatial distance. Moran's I value lies between -1 and 1: I > 0 indicates positive spatial autocorrelation, i.e. the spatial entities show an aggregated distribution; I < 0 indicates negative spatial correlation, i.e. a discrete distribution; and I = 0 means the spatial entities are randomly distributed. The larger the I value, the greater the correlation of the spatial distribution. Local spatial autocorrelation can measure the spatial location and range of spatially heterogeneous aggregates of phenomena or attribute values. Local Moran's I statistics and LISA indicators are used to reveal the degree of spatial autocorrelation of each regional unit. LISA essentially decomposes Global Moran's I into regional units.
Flow direction analysis of ecological land
The total amount of ecological land in Hebei Province decreased from 8.88 × 10⁶ hm² to 8.71 × 10⁶ hm² during the research period, an average annual decrease of 2.1 × 10⁴ hm². The land type with the largest reduction was grassland, with a reduced area of 6.2 × 10⁴ hm²; secondly, the decrease of water-wetland was 4.2 × 10⁴ hm². On the whole, the ecological land tends to decrease annually, with the annual rate of decrease changing from 0.2% to 0.3%. The rate of decrease of forestland changed from 0.05% to 0.16%, and that of water-wetland from 0.4% to 1.34%. The proportions of forest land : grassland : water-wetland : desert were 0.5 : 0.3 : 0.1 : 0.1. Some ecological land was changed to other land types. In terms of variation, the largest change was ecological land converted to cultivated land, about 9.6 × 10⁴ hm²; secondly, 5.0 × 10⁴ hm² was converted to urban land, and together cultivated land and urban land accounted for 85.9% of the total ecological land changed. Internal transformation of ecological land occurred mainly between forestland and grassland. The forestland changed to grassland was 3213 hm², followed by forestland changed to water-wetland, 770.63 hm². The total area of other land types changed to forestland was 3691.00 hm²; the main source was cultivated land, accounting for 69.50%. Of the forestland changed to other land types, urban land accounted for 42.69%. The grassland gained came mainly from woodland, accounting for 47.57%, while the grassland lost mainly changed to cultivated land, accounting for 62.34%. The ecological land transferred from water-wetland was 6155.72 hm². Forestland and water conservancy facility land changed to water-wetland were 770.63 hm² and 608.46 hm², respectively. The total area of water-wetland changed to other land types was 4.9 × 10⁴ hm². The water-wetland mainly changed to arable land, accounting for 49.94% of the total water-wetland converted.
Global spatial autocorrelation analysis of ecological land use in Hebei Province
GeoDa software was used to calculate the Moran's I index of ecological land and the various types of land.
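As a minimal illustration of the global Moran's I defined above, the short sketch below evaluates the formula directly for a small hypothetical example; the adjacency matrix and attribute values are invented, and the study itself performed this computation in GeoDa.

```python
# Illustrative computation of global Moran's I exactly as defined above, using a
# small hypothetical example (5 spatial units with a made-up adjacency matrix and
# made-up ecological-land shares).
import numpy as np

def morans_i(x, w):
    """Global Moran's I for attribute vector x and spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()
    num = n * np.sum(w * np.outer(z, z))   # n * sum_ij w_ij (x_i - xbar)(x_j - xbar)
    den = w.sum() * np.sum(z**2)           # (sum_ij w_ij) * sum_i (x_i - xbar)^2
    return num / den

# Hypothetical rook-adjacency matrix for 5 counties and their ecological-land shares.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]])
shares = [0.72, 0.70, 0.65, 0.20, 0.15]
print(round(morans_i(shares, W), 3))   # positive value -> aggregated (clustered) distribution
```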
The results showed that the Moran's I indices of ecological land, woodland and grassland were tested by the Monte Carlo simulation method; the Moran's I values were 0.745, 0.744 and 0.695, respectively. The larger the index, the greater the degree of spatial aggregation; forestland and grassland therefore have a greater impact on the spatial aggregation of ecological land. According to the results, the spatial aggregation degree was forestland > grassland > water area-wetland > desert; compared with 2009, the spatial aggregation degree of ecological land decreased in 2017.
Spatial autocorrelation analysis of ecological land in Hebei Province
In order to further reveal the location of agglomeration, local autocorrelation analysis was used, and the cluster maps of ecological land in 2009 and 2017 were obtained. (1) The high-concentration areas ("HH" related areas): in 2009, there were 11 counties, which decreased to 9 counties in 2017. This area is mainly distributed in the eastern part of Zhangjiakou and the northern part of Chengde. The Bashang Plateau has been an important forest area in the study area. The famous Saihanba Forest Farm is also located in this area, and the ecological land in the area has historically remained at a high level. There are regional differences in ecological land use: forest land accounted for about 70% of the total ecological land in Chengde City, while in Zhangjiakou City grassland accounted for about 40%. The highest proportion of woodland, in Weichang County, is 77%, and that of grassland, in Huailai County, is 44%. (2) The low-concentration areas ("LL" related areas): in 2009, there were 37 counties in this area, increasing to 40 in 2017. Human activities are frequent in this area. Agricultural land and construction land are the priority in land use, while ecological land is scarcer. From the point of view of geomorphology, the region is located in the central and southeastern part of the province. It is mainly composed of piedmont plain, central plain and coastal plain, formed by the alluviation of the ancient Yellow River, the Haihe River and the Luanhe River. The Haihe River basin runs through the whole territory. The Haihe River system collects water from the Yanshan and Taihang Mountains, forming a fan-like water system. The ecological land here mainly consists of water area, wetland and woodland, while grassland and desert land are less common. The proportion of water-wetland in ecological land ranges from 0.1 to 0.9, and in 26 counties it ranges from 0.3 to 0.9. The distribution characteristics clearly follow the river courses. Because most of Baiyangdian Lake lies in Anxin County, the proportion of water-wetland in the county's ecological land is as high as 0.8. The Luanhe River system is located in eastern Hebei Province; after crossing the eastern Hebei Plain, it enters the sea in Changli and Leting counties, so the proportion of water-wetland in these areas is higher. The proportion of forestland in ecological land in 28 counties ranges from 0.3 to 0.7. The distribution of forestland is more dispersed; the forestland in this area is different from the natural forestland in mountain and plateau areas such as Zhangjiakou and Chengde, as most of the forestland in this area is planted forest. (3) HL and LH regions do not exist in the spatial distribution of ecological land in Hebei Province.
Conclusions and suggestions
The paper systematically analysed the status of ecological land use in Hebei Province from 2009 to 2017.
The results showed that: (1) The proportion of forest land, grassland, water area, wetland and desert in the total ecological land use was relatively stable at 0.5:0.3:0.1:0.1.The ecological land decreased by 2.1×10 4 hm 2 annually. The absolute value of ecological land use change decreased from 0.2% to 0.3%. (2) The conversion of ecological land to agricultural land and construction land was 9.6 ×104 hm 2 of cultivated land. The internal conversion of ecological land mainly occurred between forest land and grassland. The forest land changed to grassland was 3213 hm 2 . (3) The spatial aggregation degree of forest land > grassland > water area and wetland > desert. (4) The "HH" correlation area was reduced from 11 counties in 2009 to 9 counties, mainly distributed in the eastern part of Zhangjiakou and northern part of Chengde. The "LL" correlation area was from 37 counties added to 40. The main ecological land in this area is water area, wetland and woodland. The wetland in water area has obvious distribution characteristics according to the trend of rivers. From the research results, it can be seen that the structure of ecological land is relatively stable and the total amount of ecological land tends to decrease. With more and more attention to ecological protection, the combination of advanced engineering technology and ecological protection will reverse the trend of ecological land reduction. In addition to focusing on quantity, we can consider breaking administrative boundaries and dividing protection units according to geographic factors. Such as topography, landform and river distribution. Based on the 9 counties in HH area, they are further divided into the grassland protection area of Bashang Plateau, the forest land protection area of central and southern Yanshan Mountains, and the forest and grassland belt of Liaohe-Luanhe River. LL areas should not only consider the actual requirements of economic development for ecological land use, but also strengthen the protection and restoration of ecological environment.
2019-10-10T09:33:50.304Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "74e59bccf323a757394747038199f8abea425711", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/44/e3sconf_icaeer18_03045.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8ce2dfbddde1a38123eec9e5f47bde311c68e73d", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
17275000
pes2o/s2orc
v3-fos-license
CP violation in B decays to charm and charmonium at Belle
We present studies of CP violation in B decays to charm and charmonium, using a data sample corresponding to 657 \times 10^6 B\bar{B} events collected with the Belle detector at the \Upsilon(4S) resonance at the KEKB asymmetric-energy e^+e^- collider. We report measurements of the polarization fraction and time-dependent CP-violation parameters of the decay B^0\to D^{*+}D^{*-} and of the branching fraction and charge asymmetry in the Cabibbo- and color-suppressed process B^{\pm} \to \psi(2S) \pi^{\pm}.
INTRODUCTION
The study of exclusive B meson decays to charm and charmonium has played an important role in exploring CP violation [1][2][3]. Amongst them, the Cabibbo-suppressed decays have an increased sensitivity to New Physics effects and can be studied at the B factories which, due to their high integrated luminosities, overcome the suppression factor. The B 0 → D * + D * − and B − → ψ(2S)π − decays proceed primarily via a b → cc̄d tree diagram, while penguin contributions are expected to be small in the Standard Model (SM). A large deviation of the measured CP parameters from the SM prediction can be a hint of New Physics.
DATASET AND EVENT RECONSTRUCTION
Both analyses are based on a data sample containing 657 million BB̄ pairs, collected with the Belle detector [4] at the KEKB asymmetric-energy e + e − collider [5] operating at the Υ(4S) resonance. The Υ(4S) meson is produced with a Lorentz boost βγ = 0.425 along the z axis, opposite to the positron beam direction, and decays mainly into a B 0 B̄ 0 or a B + B − pair.
Theoretical Motivation
The main B − → ψ(2S)π − diagram is not only Cabibbo-suppressed but also color-suppressed. A measurement of its branching fraction, unknown so far, is presented in this paper. Assuming tree dominance and factorization, the branching fraction B(B − → ψ(2S)π − ) is expected to be about 5% of that of the Cabibbo-favored mode B − → ψ(2S)K − [6]. Furthermore, under these assumptions, CP violation should be negligibly small. However, if penguin contributions or new physics effects are present, a non-zero charge asymmetry can occur.
Results
The ψ(2S) meson is reconstructed through the ℓ + ℓ − and J/ψπ + π − decay channels, where the J/ψ decays to ℓ + ℓ − (ℓ = e or µ). Inclusion of charge-conjugate modes is implied throughout the paper. Contamination from B − → ψ(2S)K − decays, where a kaon is misinterpreted as a pion, results in a peak at ∆E ≈ −0.07 GeV, which is modeled using a Monte Carlo (MC) sample. A fit performed simultaneously to all the considered ψ(2S) decay modes (Fig. 1) provides the branching fraction. [Fig. 1 caption fragment: The curves show the signal (dashed) and background components (dot-dashed and dotted) as well as the overall fit (solid).] Branching fractions for the B + and B − decays are extracted to measure the charge asymmetry A. Signal yields of 89 ± 11 and 93 ± 11 events for B + and B − , respectively, result in a charge asymmetry that is consistent with no direct CP violation. Finally, we measure the ratio of the B − → ψ(2S)π − and B − → ψ(2S)K − branching fractions, which is consistent with the theoretical prediction of the factorization hypothesis.
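As a simple illustration of how a charge asymmetry follows from the quoted signal yields, the sketch below applies standard error propagation to the two yields; the sign convention and the closed-form uncertainty are our own simplification, since the paper obtains the asymmetry from a simultaneous fit rather than from this expression.

```python
# Small sketch of the charge-asymmetry calculation from the quoted signal yields
# (89 +/- 11 events for B+ and 93 +/- 11 events for B-).  The sign convention and
# the simple Gaussian error propagation below are illustrative only.
from math import sqrt

def charge_asymmetry(n_minus, err_minus, n_plus, err_plus):
    a = (n_minus - n_plus) / (n_minus + n_plus)
    # standard propagation for A = (x - y)/(x + y) with independent x and y
    da = 2.0 / (n_minus + n_plus) ** 2 * sqrt((n_plus * err_minus) ** 2 +
                                              (n_minus * err_plus) ** 2)
    return a, da

a, da = charge_asymmetry(93, 11, 89, 11)
print(f"A = {a:+.3f} +/- {da:.3f}")   # small asymmetry, consistent with zero
```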
The CP -violating parameters are defined as where λ is a complex observable depending on the B 0 and B 0 decay amplitudes to the final state and the relation between the B meson mass eigenstates and its flavor eigenstates. When ignoring penguin corrections, the SM predictions for the CP parameters are A D * + D * − = 0 and S D * + D * − = −η D * + D * − sin 2φ 1 , where φ 1 = arg[−V cd V * cb ]/[V td V * tb ] and η D * + D * − is the CP eigenvalue of D * + D * − , which is +1 when the decay proceeds through the S or D wave, or −1 for the P wave. A significant shift of the CP parameters from the SM predictions can be a sign for New Physics. Yield and angular analysis The D * ± mesons are reconstructed in the D 0 π + and D + π 0 modes. The signal is extracted from a two-dimensional unbinned maximum likelihood fit in the M bc vs. ∆E plane. We obtain 553 ± 30 signal events with a signal purity of 55%. The first two plots in Fig. 2 show the projections of the fitted M bc and ∆E distributions in the signal region. To obtain the CP -odd fraction we perform a time-integrated angular analysis in the transversity basis [7]. The differential decay rate as a function of the transversity angle θ tr reads where R 0, and R ⊥ are the CP -even and CP -odd fractions of the three transversity components respectively. A one-dimensional fit of the cos θ tr distribution allows the extraction of the CP -odd fraction. Its distortion due to the angular resolution and the slow pion reconstruction efficiency is modeled using signal MC samples. The fraction R 0 /(R 0 + R ) is taken from the previous Belle analysis [3]. The signal-to-background ratio is determined on an event-by-event basis using the M bc − ∆E distribution. The result (shown in the right plot of Fig. 2) is Time-dependent CP violation measurement Because the B 0 and B 0 are approximately at rest in the Υ(4S) CM frame, the ∆t value can be determined from the separation in z of the two decay vertices, ∆t ≃ ∆z/(βγc), where c is the speed of light. To obtain the ∆t distribution, we reconstruct the tag-side B vertex and its flavor inclusively from properties of particles that are not associated with the reconstructed B 0 → D * + D * − decay [8]. The tagging information is represented by two parameters, the flavor of the tagging B 0 , q, and the tagging quality given by seven r intervals from r = 0 meaning no flavor discrimination to r = 1 for unambiguous flavor assignment. with a statistical correlation of 10.7%. The total significance of non-zero values of S ′ and A is 3.1 σ. We define the raw asymmetry in each ∆t bin as (N + − N − )/(N + + N − ), where N + (N − ) is the number of observed candidates with q = +1(−1). Figure 3 shows the ∆t distribution and the raw asymmetry for events with a good-quality tag (r > 0.5). Our measurement of S ′ and A is consistent with the SM expectation for a tree-dominated b → ccd transition. Conclusion We reported a measurement of the CP -violating parameters in B − → ψ(2S)π − and B 0 → D * + D * − decay using 657 million BB pairs recorded with the Belle detector. Both measurements are compatible with the SM predictions in absence of penguins. The branching fraction of B − → ψ(2S)π − is extracted as well as its ratio with respect to B − → ψ(2S)K − . The result supports the factorization hypothesis. In the B 0 → D * + D * − analysis the CP -odd fraction is obtained to allow for an undiluted measurement of sin 2φ 1 .
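For reference, in the conventions commonly used at Belle (an assumption on my part, not a quotation from this article), the CP-violation parameters and the one-angle transversity distribution are usually written as

\[ S = \frac{2\,\mathrm{Im}\,\lambda}{1+|\lambda|^2}, \qquad A = \frac{|\lambda|^2 - 1}{|\lambda|^2 + 1}, \qquad \frac{1}{\Gamma}\frac{d\Gamma}{d\cos\theta_{tr}} = \frac{3}{4}\,(1-R_\perp)\sin^2\theta_{tr} + \frac{3}{2}\,R_\perp\cos^2\theta_{tr}, \]

where \lambda is the complex observable introduced above, R_\perp is the CP-odd fraction, and the CP-even fractions R_0 and R_\parallel enter this one-dimensional projection only through 1 - R_\perp.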
2014-10-01T00:00:00.000Z
2008-10-17T00:00:00.000
{ "year": 2008, "sha1": "8d030403b74398446617e7fe44af52597be9fedf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "00f74ccba37887facde0145e954c454eee929ad6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
18647760
pes2o/s2orc
v3-fos-license
Dramatic Response of Nail Psoriasis to Infliximab Nail psoriasis, affecting up to 50% of psoriatic patients, is an important cause of serious psychological and physical distress. Traditional treatments for nail psoriasis, which include topical or intralesional corticosteroids, topical vitamin D analogues, photochemotherapy, oral retinoids, methotrexate, and cyclosporin, can be time-consuming, painful, or limited by significant toxicities. Biological agents may have the potential to revolutionize the management of patients with disabling nail psoriasis. We present another case of disabling nail psoriasis that responded dramatically to infliximab. Introduction Nail psoriasis, affecting up to 50% of psoriatic patients, is an important cause of distress, impairment of manual dexterity, and pain [1]. Nail involvement varies from pitting to nail dystrophy. Psoriatic nail disease is a therapeutic challenge, and to date, patients and physicians are often dissatisfied with current standard therapeutic approaches. Infliximab, which is approved for the treatment of moderate to severe plaque psoriasis and psoriatic arthritis, is a chimeric monoclonal antibody that inhibits the action of tumour necrosis factor alpha (TNFα). Several reports have confirmed its usefulness in treating nail psoriasis [2]. The present paper describes another case of disabling nail psoriasis that responded dramatically to infliximab. Case Presentation A 34-year-old farmer with a history of stable localized plaque psoriasis was referred to our department with psoriatic nail disease involving four fingernails. The toenails did not show any significant lesions, and no symptoms of psoriatic arthritis were noted. He had previously been prescribed topical treatments, including potent topical corticosteroids and vitamin D 3 analogs, without any efficacy. The patient's discomfort was affecting his ability to work and his quality of life because of psychosocial impairments. On examination, subungual hyperkeratosis, pitting, and onycholysis were seen in four fingernails (Figures 1(a) and 1(b)). Because of the presence of disabling symptoms in this patient, acitretin (25 mg daily) was initially started, but this drug proved ineffective after 6 months of therapy. After screening for infection, neoplasm, and autoimmunity, all of which were negative, the patient was started on infliximab at a dose of 5 mg/kg administered at weeks 0, 2, and 6. A dramatic improvement was noticed after the second infusion of infliximab at week 6 ( Figure 2). The patient's nails remained free of psoriatic lesions after the third infusion. This treatment regimen was followed by a single infusion every 8 weeks for maintenance therapy. Discussion Biological agents have demonstrated efficacy for the treatment of plaque psoriasis and psoriatic arthritis and are now widely used. Even if the severity of psoriatic skin lesions remains the primary reason to start biological treatments, nail psoriasis should be considered a valid reason to start these therapies because of its large impact on daily living activities and quality of life. Furthermore, nail psoriasis can be a predictor of future inflammatory joint damage, a precursor of psoriatic arthritis, and a visible indicator of disease activity [3]. There are currently no standardized therapeutic regimens for nail psoriasis. 
Traditional treatments for nail psoriasis, which include topical or intralesional corticosteroids, topical vitamin D 3 analogues, photochemotherapy, oral retinoids, methotrexate, and cyclosporin, can be timeconsuming, painful, or limited by significant toxicities [4]. Biological agents that target cytokines may have the potential to revolutionize the management of patients with disabling nail psoriasis. These agents include anti-TNFα agents such as infliximab, adalimumab, and etanercept, and antiinterleukin (IL)-12/-23, the first drug of a new class of biotherapy agents. Adalimumab is a fully human anti-TNFα antibody that is administered subcutaneously every 2 weeks. In a prospective, open-label, uncontrolled study conducted in nine European countries, patients with active psoriatic arthritis received adalimumab 40 mg every other week for 12 weeks in addition to their pre-existing antirheumatic treatment. Of 442 patients, 259 had nail involvement. After the relatively short treatment duration of 12 weeks, the median reduction in Nail Psoriasis Severity Index (NAPSI) score was 57%. Clearance of psoriasis of the nails was increasing in those patients who continued adalimumab up to week 20. The NAPSI improvements were independent of other changes in skin assessment measures and occurred regardless of joint response status [5]. Etanercept is a TNFα receptor fusion protein administered subcutaneously, which binds with and antagonizes the action of TNFα. In one post hoc analysis, etanercept 25 mg twice weekly produced a mean reduction in NAPSI score of 51% after 54 weeks in 711 patients with psoriasis, 80% of whom had nail involvement [6]. In another retrospective study of 66 patients treated with etanercept at 25 mg or 50 mg twice weekly at intermittent intervals over a period of 3.4 years, nail involvement improved significantly in each of the treatment cycles [7]. Ustekinumab is a fully human anti-IL-12/-23 monoclonal antibody that binds with high specificity and affinity to the shared p40 protein subunit of the cytokines IL-12 and IL-23, blocking the differentiation and expansion of T-helper (Th)1 and Th17 populations. It has recently been approved in the USA, Europe and Canada for the treatment of moderate to severe plaque psoriasis. Recently, ustekinumab has also been reported as an effective therapeutic alternative in nail psoriasis [8]. Infliximab is a chimeric monoclonal antibody that inhibits TNFα and is administered intravenously. At the present time, the best evidence for the efficacy of a TNFα inhibitor in nail psoriasis comes from a phase III, multicentre, double-blind, placebo-controlled trial designed to evaluate long-term efficacy and safety of infliximab in patients with moderate to severe plaque psoriasis. The primary endpoint of the study was the proportion of patients achieving at least 75% improvement in Psoriasis Area and Severity Index (PASI) compared with baseline. The percentage improvement in NAPSI at weeks 10, 24, and 50 was also specifically investigated. In this well-controlled trial, infliximab resulted in complete clearing of nail psoriasis in 6.9% of patients within 10 weeks, rising to 26.4% after 24 weeks, and 44.7% after 50 weeks. Nail clearance was observed in 1.7% and 5.1% of patients at weeks 10 and 24 in placebo recipients, but this Case Reports in Medicine 3 increased to 34.5% at week 38 and to 48.2% at week 50 after the patients had switched to infliximab [2,9]. 
Infliximab is one of the most extensively studied biological agents in dermatological practice and is considered by many to be the most effective treatment for nail psoriasis to date [10,11]. No drug is completely safe, and several safety issues should be considered. However, cumulative evidence indicates that treatment with infliximab is safe and well tolerated, especially if physicians are thoughtful in diagnosing infections and infusion reactions early [12]. In our patient, infliximab showed remarkable and rapid effectiveness in the treatment of psoriatic nail disease. In addition, the present case illustrates the need to further evaluate biological therapies and their cost effectiveness, especially as first-line systemic agents, for the treatment of severe psoriatic nail disease. We believe that infliximab may represent a treatment of choice for the many patients with this distressing condition.
2014-10-01T00:00:00.000Z
2011-05-10T00:00:00.000
{ "year": 2011, "sha1": "38900e738d4e7b807cb2f316f57c9ccda9ce9880", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/crim/2011/107928.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6a71eaada878a33dfeb9173ef25ad9db74c4615f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11021785
pes2o/s2orc
v3-fos-license
Addressing Language Barriers: Building Response Capacity for a Changing Nation The absence of universally available language services is a national healthcare system failure, the burden of which is suffered by patients with limited English proficiency and their healthcare providers. Conceptualizing mandatory provision of language access as an unfair, unfunded mandate ignores massive and fundamental social changes taking place. Overcoming language barriers is essential to safe, quality health care. This paper, informed by the experience of Hablamos Juntos, a national demonstration project funded by the Robert Wood Johnson Foundation, argues that national and health industry investments are needed to develop population-based approaches supported by communication and information technology, and that these investments may prove useful to improving healthcare communication for English-speaking patients as well. INTRODUCTION Around the world, 160 million people live outside their country of origin. 1 The face of America is changing too. In 1950, there were nine White persons under age 40 for every one person of color; 2 by 2000, this ratio was 1.7. 3 Today, one in eight Americans is foreign-born, 4 and 45% of children under age 5 are children of color. 5 These demographic changes signal fundamental social changes that, in health care, will translate into increased cultural and language diversity among patients. Because communication in health care is vital to safe and quality health care, language barriers are emerging as a new risk that few doctors and healthcare organizations are prepared to handle. Conceptualizing mandatory provision of language access as an unfair, unfunded mandate ignores massive and fundamental social changes taking place in the U.S. and abroad. As healthcare leaders, we can continue to leave patients and providers to figure this out, one encounter at a time, or we can act boldly by investing in broader strategies and policies, those that can lead to building response capacity for the healthcare industry as a whole. To do more than just say no requires that healthcare leaders accept that our nation will include LEP populations well into the future and that overcoming language barriers is essential to safe, quality care. CREATING POPULATION-BASED MODELS In 1896, Henry Ford built his first car in a little brick shed in his garden. 6 Thin Lizzie, as it was called, consisted of a twocylinder, four-cycle motor, mounted on bicycle wheels with no reverse gear or brakes. Others were also building cars at the time, and Ford's initial attempt was not a big success. Later, as we know, Ford did succeed, by systematizing the manufacture of automobiles and by tapping the efficiencies and qualitycontrol advantages of mass production. It is fair to say that Ford accelerated the adoption of automobiles as a mode of transportation and that, largely due to his big-picture thinking, we drive cars today whose performance, safety, and reliability, for the most part, we take for granted. The healthcare industry's current response to language barriers is essentially requiring each provider to invent his or her own car. Hospitals and doctors, on their own, are expected to readily respond to all languages spoken in their communities, a daunting challenge when we consider there are more than 300 languages spoken in the U.S. What is even more amazing is that some healthcare providers are investing significant resources to meet the language needs of their communities. 
They are traveling health care's highways and byways in their versions of Thin Lizzie. Henry Ford's innovative thinking has not taken hold in the field of healthcare interpreting in the U.S. In contrast, Australia has taken a population-based approach to translation and interpreting (together referred to as language services). The Translating and Interpreting Service (TIS), established in 1973 and operated by the Commonwealth Department of Immigration and Multicultural and Indigenous Affairs, is the oldest interpreting service in Australia. 7 Initially, the objective of the agency was to enable communication for immigration and naturalization services and for emergencies. Soon, other government and commercial businesses pressed for routine access to these language resources. Today, TIS is the largest language agency in Australia, competing for clients and interpreters against several other government and privately run language services. This paper has not been presented at any conferences. Hablamos Juntos and the work described in this paper were funded by The Robert Wood Johnson Foundation. I recently site-visited TIS's national office and was able to observe their operations firsthand. The agency is highly computerized and enables access to more than 1,500 interpreters, who can be reached through a national call center located in Melbourne using a standard, toll-free number. Daily, interpreters speaking over 120 languages and dialects report their availability to accept assignments from their home computers or telephones. Interpreters can use cell phones, land lines, or computers to provide services and can be deployed to nearby assignments when in-person interpreting is needed. By simply calling the toll-free number and providing a personal identification number, federal, state, and local government offices, hospitals, doctors, and businesses can all be connected within seconds to an interpreter who speaks the language for which they need interpretation. Healthcare callers are given priority by the technology, and fees are waived for services to government-sponsored patients. 8 TIS call-center operators simply ask what language is needed and call up a list of interpreters currently signed on to the system. The information technology supporting the network generates miniprofiles of interpreter qualifications and their certification, training, and interpreting experience to enable operators to match interpreters to the assignment. As impressive are the proficiency exams that have been developed for 57 of the 120 languages spoken in Australia. These exams are the work of the National Accreditation Authority for Translators and Interpreters, LTD (NAATI), an Australian government-owned company established in 1977 to develop standards and accredit interpreters and translators. 9 NAATI serves as an advisory body for the translation and interpreting industry in Australia and is the accreditation body of first resort for new emerging languages. It is charged with creating methods to train and assess the skills of interpreters of less frequently used languages. Australia and the U.S. are significantly different. The TIS model may not be suited to our market-driven healthcare system, but applying technology and population-based strategies may offer opportunities not imaginable in today's environment. 
State or federal grants to develop publicly funded regional models to pace fees set by private language agencies, and establish quality standards, can lead to benefits beyond cost savings and are certainly worth exploring. INTERPRETERS FOR HEALTH CARE Hablamos Juntos is a national program, funded by the Robert Wood Johnson Foundation, supporting 10 demonstration projects aimed at improving language services in healthcare organizations. As Director of the National Program Office overseeing these demonstrations, I have learned that assessing for language proficiency and training interpreters can be challenging and time-consuming. Through this program, I have also learned firsthand the level of effort required to develop trained interpreters for one language and wondered, "Why must each healthcare organization do this alone?" Foundation funding, critical to incubate practical and innovative solutions for language services, is not enough to develop the resources needed or to match the scale of demand. Nationally coordinated efforts to assure readily available, trained interpreters and translators would be more efficient. The federal Department of Education's Office of Special Education and Rehabilitative Services offers an example of a nationally coordinated approach using competitive grants to support Regional Interpreter Education Centers charged with growing the number of sign-language interpreters in the nation. 10 Located in colleges and universities, these centers receive congressional funding to teach interpretation skills to new interpreters for the deaf and hard-of-hearing. 11 This sustained investment, over the last 30 years, has led to numerous American Sign Language interpreter training programs. The last round of funding, for the first time, designated a coordinating center to promote and encourage collaboration among the regional centers to advance sign-language interpreter training. A similar national investment is needed to develop the pedagogy, assessment tools, and teaching methods needed to ensure consistent development of trained interpreters and translators for healthcare environments. DEVELOPING HEALTH COMMUNICATION RESEARCH CENTERS When Peter Sutherland, honorary Ambassador for the United Nations Industrial Development Organization, was asked how companies can prepare for success in an age of globalization, he responded "The only point of view any of us depends on is the view from where we are standing. Stand in many places to get many points of view." 12 So it is with language barriers. We need to stand in many places to envision new ways to meet the language and communication needs of diverse communities; interpreters are but one essential element. Language and communication are decidedly different. Adjunct to a shared language is content knowledge, a mysterious mixture of health literacy, culture-bound notions related to health and illness, and potentially other influences not yet defined. Key to communication is having a common language, but clearly, when 90 million English-speaking Americans have trouble understanding and acting on health information, shared language alone does not assure effective communication. 13 As healthcare professionals, we need to develop a deeper understanding of communication issues that come with diverse patient populations and to distinguish health literacy challenges from language and cultural barriers. 
In doing so, we may be able to apply what we learn to improve communication with English-speaking and non-English-speaking patients alike. We also need to test and learn the benefits and drawbacks of different ways to deploy or use interpreters effectively and ways to ensure quality and safe health care for every patient without incurring more cost than value. With scarce resources being an ever-present challenge, effectiveness researchto guide responses in communities, within organizations, and between patients and doctorsis imperative. Without evaluation, we can end up foolishly spending healthcare dollars for Cadillac services when a Volkswagen may do. The issues that need examination are complex and require the contributions of a variety of experts. Action-oriented research centers are needed to examine language barriers in the context of health communication, bringing together disciplines from different fields (applied and social linguistics, communication, etc.), as well as healthcare practitioners and patients, to study different aspects of interpreting and applying what is learned to model development and best practices. Our current experience-based knowledge needs to be supplemented with disciplined examination of the benefits and limitations of different styles of interpreting (e.g., dialogue, simultaneous, consecutive) and the different mediums for providing interpreter services (via Internet, telephone, in person, or through video conferencing). Language access research can also explore the viability of virtual translation centers and repositories of translated documents and promote local registries of qualified interpreters and translators. We also need to test different ways to pay for these services (e.g., subscriptions, per-minute fees) and to explore other solutions to leverage economies of scale across the healthcare industry or within regions or communities. If we are willing to redefine language barriers as a national concern visited on healthcare providers, we can see new approaches to address language barriers to health care. CONCLUSION Clear communication is essential for safe, quality healthcare services. Poor communication can lead to disastrous outcomes, especially for patients with limited English ability. Through the work of Hablamos Juntos, it has become clear that national and health industry investments are needed to develop the field of language services and that these investments may prove useful to improving health communication for English-speaking patients as well. The absence of universally available language services is a national healthcare system failure, the burden of which is suffered by patients with LEP and their healthcare providers. Healthcare organizations borrow and replicate untested solutions and programs and struggle to grow trained interpreters. There is no valid reason that healthcare organizations should independently develop, from scratch, the resources needed to provide language access for LEP patients. Lack of coordinated efforts is wasteful and contributes to wide variations in quality of interpretation and, ultimately, in quality of care and health outcomes. Eliminating language barriers in health care requires a calibrated and focused effort to develop response capacity across the nation. Attending to language barriers at the provider level is essential, but working only at this level leaves communication gaps that undermine the benefits of these investments. 
Other than Hablamos Juntos, there have been few national investments to address language barriers to health care. Healthcare organizations expend precious resources reinventing the wheel without assuring quality and safe health care for all patients. Sustained investments in population-based solutions that leverage the power of computers and communication technology can lead to solutions that can reach across boundaries of responsibility to enable large and small healthcare provider organizations to serve patients of many languages. Funding for action-oriented research and evaluation, and to stimulate innovations and use of technology to make language services more affordable for everyone, is needed, as are investments in the training of interpreters and development of healthcare materials in many languages. As our nation grows ever more linguistically diverse, we need to face the needs posed by language barriers in health care and develop efficient, coordinated solutions to meet them, rather than continue to reinvent the wheel, one provider at a time.
2014-10-01T00:00:00.000Z
2007-10-24T00:00:00.000
{ "year": 2007, "sha1": "bdb60f74d30804c8008c1d694d9953d8394cd0fd", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11606-007-0367-1.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "bdb60f74d30804c8008c1d694d9953d8394cd0fd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
11548736
pes2o/s2orc
v3-fos-license
Events and children’s sense of time: a perspective on the origins of everyday time-keeping In this article I discuss abstract or pure time versus the content of time, (i.e., events, activities, and other goings-on). Or, more specifically, the utility of these two sorts of time in time-keeping or temporal organization. It is often assumed that abstract, uniform, and objective time is a universal physical entity out there, which humans may perceive of. However, this sort of evenly flowing time was only recently introduced to the human community, together with the mechanical clock. Before the introduction of mechanical clock-time, there were only events available to denote the extent of time. Events defined time, unlike the way time may define events in our present day culture. It is therefore conceivable that our primeval or natural mode of time-keeping involves the perception, estimation, and coordination of events. I find it likely that events continues to subserve our sense of time and time-keeping efforts, especially for children who have not yet mastered the use of clock-time. Instead of seeing events as a distraction to our perception of time, I suggest that our experience and understanding of time emerges from our perception of events. Introduction The ability to keep track of events, activities, and other goings-on in our environment is of fundamental importance for our adaptation to the conditions of our earthly habitat. In everyday life, we need to organize and coordinate our own activities with that of others in our community. This ability for perceiving the constellation of events around us, how they are configured in relation to each other as well as to ourselves, is what makes the cross-temporal organization of our everyday lives at all possible. Both the ability to perceive these events and the ability to organize our own activities in concordance with the configuration of these events, is often referred to as having a Sense of time. Time in itself is not the main objective, though, but the events that may be gauged in terms of their temporal extent. This is so, because every event has a temporal aspect. Cross-temporal organization of behavior would perhaps, be another way of describing this ability, but I have chosen the shorter term time-keeping. Initially my project was to investigate the ability for time-keeping in children, but for reasons explicated in the following account, I found it necessary to take a closer look at time-keeping generally. In our present day culture we have a magnificent tool for time keeping at our disposal in the form of standardized time units or clock-time. The duration of any event or activity can be translated into uniform and objective time units. This way, events may be measured, added up and compared, forward, backward, and sideways, any way you like, in a perfectly objective and reliable manner. The process is somewhat analogous to how we use money for reasoning about and carrying out transactions concerning value. Money is a token of value or an abstraction of value. Any and every traded commodity may be translated into the abstract value of money. In the same way we may reason about and carry out transactions involving events and activities in terms of standardized clock-time. Children, however, do not have access to this tool as their skills in time-keeping by means of clock-time is limited. 
Even though they learn how to read a clock, to tell time, during their early school years, it takes them a long time to learn to translate their experience into standardized time units (Harner, 1982;Friedman, 1986;Levin, 1992;Pouthas, 1993). How long is an hour? How much of a certain activity can fit within an hour or 20 min? What do I have to do now in order to be ready to leave for school in 10 min? These are the sort of temporal tasks children struggle with and for which they will need support from parents and teachers for many years. Consequently, when investigating children's developing sense of time or time-keeping ability, any method involving clock-time is unsuitable. Neither would it be meaningful to look for developmental precursors of clock-time mastery. Since clock-time is such a late contrivance of the human community, we cannot expect to find an innately based capacity for clock-time. Now, someone might object, even though the mechanical clock is of a recent date, time itself has always been the same and the mechanical clock is only a more efficient way of keeping track of it. From our viewpoint of the 21st century this is how it may seem, immersed as we are in standardized, uniform, and abstract clocktime. However, it is a misunderstanding. The introduction of the mechanical clock, meant more than just a more efficient technology, it also introduced a new and different sort of time -a uniform, evenly flowing time. While other modern instruments enabled the detection of previously undetectable natural phenomena such as radiation, the mechanical clock created its own new phenomenon. A Short History of Time-Keeping Devices Temporal organization involving days, months, and years has been around in the human community for millennia. Cyclically recurring celestial events such as the day-night cycle, the cycle of the sun, the cycles of the moon's phases have informed the construction of calendars since the early days of human communities. It is in principle not too mysterious. The cyclic events are there for the counting, the only requirement is to keep your eyes open and devise a method for keeping track of the cycles. These cyclic natural events then become a back-drop, against which other events may be gaged. This is a simplified description to make a point. In reality, there are records of systematic observations of temporal patterns in the movement of heavenly bodies of all kinds. Not only the most salient, like the sun and the moon. But in principle, even an uneducated stone-age man could construct a simple calendar based on these most obvious celestial events. However, for temporal organization within the day (the 24 h cycle of the earth's rotation around its axis), there are no natural cycles to count. In ancient civilizations such as Egypt and Babylon sundials and water-clocks were used to aid time-keeping. The principle of the sundial is to subdivide the cycle of the sun into equal units. The Egyptians divided the day in 12 h, but these hours were not standardized to be uniform the way our modern hours are. They would vary in length with the season. (These unequal hours are sometimes referred to as temporal hours or true time; Landes, 2000) Thus the temporal units of the sundials could not be used as an objective measure of time, e.g., the duration of an event. They could unambiguously only indicate points in time such as sunrise, high noon, and sunset, which also could be determined simply by eyeballing the sky. 
In overcast weather and at night, when the sundial does not work, the water-clock was useful. The principle of the water-clock is different than that of the sundial. Instead of subdividing the duration of a known event (the suns movement across the sky) an event is created (the slow drip of water in or out of a vessel) and then the accumulated events (the volume of water) are measured. Interestingly, in antiquity the water-clock was calibrated to conform to the sundial. It had a different scale for different months, even though the technology would easily have allowed for the introduction of standardized time units. This means that, a question such as: -What time does the sun set? -would, in ancient Greece, be met with incredulity, and your interlocutor would, while speaking very slowly, explain to you that at sunset the time is SUNSET! Thus, for most of human history, time-keeping has been a matter of gauging one event against another. Alexander Zsalai's comment about time in antiquity comes to mind: "In his time (Heroditus'), and even much later, human activity served much more as a measure of time and not the other way around." (Szalai, 1966, described in Levine, 2006. In other words, rather than having time define events, events defined time. This sort of eventtime is still in use in some places. If you were to ask a person in rural Burundi when he wants to meet, he might say that he will meet you when the young cows go out. In some parts of Madagascar, a question about how long time something takes might produce an answer like the time of a rice-cooking (about half an hour; Levine, 2006). The Mechanical Clock -A Paradigm Shift in Our Conceptualization of Time The mechanical clock dates back to the end of the 13th century. The principle of its operation was similar to the water-clocka uniform, artificial event was generated, and then the event was repeated, while keeping an accumulative count. Unlike the clepsydra, the mechanical clock technology did not allow for calibration with a sundial. It could not handle the ever-changing temporal hour. A standard had to be chosen and the choice fell on (subdivisions of) a mean solar-day. The mechanical clocks were at first not very good and they did not indicate minutes. Eventually they improved and in the 17th century when a pendulum was added to the construction, the resulting clock looked like our modern clocks and performed almost as well (Lundmark, 1989;Dorn-van Rossum, 1996;Landes, 2000;NIST, 2009). The mechanical clock brought about a new sort of time; uniform, objective, and abstract, free of its content. It created uniform units for abstract time. People have always known of an abstract time, beyond or behind the events, i.e., chores could be finished sooner or later, the length of the day varied with the seasons. But without units, abstract time is truly evasive and of little practical use in time-keeping. Summary So Far I think it is safe to say that for time-keeping, the event-time mode has been the standard for a vastly longer period of human existence than has time-keeping by means of clock-time. Therefore I find it unlikely that humans would be equipped with a built-in ability to detect abstract, uniform, and objective time as this sort of time is a product of the mechanical clock. I think it is more likely that our ability to operate with clock-time overlays the older event-time mode. 
Perceiving abstract, uniform, and objective clock-time is likely a learned skill which entails translating our experience of events into clock-time units. Events One of the most repeated passages, a Locus Classicus, in the literature on psychological time, is an anecdote of how events sometimes distort our estimation of time. It typically reads something like this: -Have you noticed how, when you are engaged in or observing a rousing, entertaining or novel event, time seems to pass rapidly, while time seems to drag when nothing much is happening or the event is a boring one. Generally the analysis ends there. The assumption seems to be that the temporal information embedded in events is inherently unreliable, and events are therefore rejected as a source of temporal information. I think this rejection may be a bit premature. Undoubtedly, there are extraordinarily captivating events which make us forget about everything else, as well as sluggish ones that never seem to end, but these are at the extreme ends of the scale. There are also events somewhere in the middle, appealing or important enough to keep your attention up, but not so to make us lose sight of other matters of the day. Of particular interest for the account presented here is a class of events which we have experienced many times, and regarding which we possess a substantial amount of knowledge or eventknowledge. These are the events and activities of everyday life which are so familiar to us that the memories of them come to possess a schema-or script-like character. This type of events are frequently referred to as everyday events, routine events, or recurring events. Given that it is logically impossible for the same event to happen more than once, our minds are apparently not conforming to the rules of logic in this matter. This is more than a lucky accident, since our event-scripts are so useful to us. An event-script may scaffold our memory so that we don't have to remember everything from scratch; we know how the type of event usually unfolds. It guides our perception and attention so that we may interpret a situation quicker; we know what to look for. If we know how an event usually unfolds, we may make better predictions about what will happen next and what actions to take. (Zacks et al., 2007;Sargent et al., 2013). Furthermore, it is a matter of cognitive ergonomy; to process a routine event requires less resources than if we had to perceive or interpret it from scratch, as a novel event, every time. This way we may reserve resources for dealing with unexpected and perhaps dangerous occurrences. As the event scripts are acquired through individual experience, we might expect a certain amount of variation between individuals. And there are differences, but also a surprisingly good agreement between individuals regarding what constituent parts makes up a certain type of event (Bower et al., 1979), and between and within individuals in how events are temporally structured (Newtson, 1976;Zacks and Tversky, 2001;Speer et al., 2003). The consistency in how we perceive everyday events implies that our experience can be communicated and reasoned about together with others, which is very helpful in temporal organization endeavors. 
Children and Events Contrary to the traditional belief that young children's skills are poor in representing and remembering an event sequence (Piaget, 1926(Piaget, , 1969Fraisse, 1963), Nelson and Gruendel (1981), found that even quite young children have generalized, temporally organized representations of familiar, everyday events (Nelson, 1986(Nelson, , 1996. Children, as young as 3 years, can when asked about familiar events, such as going to a birthday party or having lunch at their preschool, verbally report the component acts of these events in correct temporal order. And already at the age of 4, children begin to grasp temporal relations among everyday events, such as waking, eating lunch, eating dinner, and going to bed (Friedman, 1977(Friedman, , 1982(Friedman, , 1990. Young children accomplish these tasks with the help of script-like event representations. The event scripts help them predict the course of events in everyday life as well as guiding action and attention; they serve as representation of past experience, and helps with the interpretation of present experience of events. For children the event scripts also have a more profound function as they may be the child's earliest form of knowledge representation and as such a basic building block of cognition that serve as a foundation for more complex cognitive structures (Nelson and Gruendel, 1981, p. 150, 155;Lucariello and Rifkin, 1986). Children's event representations eventually give rise to more abstract forms of knowledge, such as concepts and categories and also support language acquisition. Logical and temporal relations first appears in the context of event representations. Time and temporal relations "is a basic dimension of action, activity, and event structure" (Nelson, 1996, p 259) and thus part of the child's experience of events. Two basic dimensions of time, duration, and sequence (e.g., Fraisse, 1963), are also basic and indispensable dimensions of events. In turn, these basic dimensions embed other time concepts such as before and after, while, now, and soon. Thus, a basic understanding of temporal relations is implicit in the young children's knowledge of events. The trick that children are expected to, and eventually come to master is to translate their event experience into clock-time and its linguistic representations (Nelson, 1996). In Nelson's view, language is crucial to children's development of time knowledge because language is an important mediator of knowledge. Language makes it possible to construct abstract concepts and complex representations that go beyond the more basic ones acquired from direct experience of events (Nelson, 1996). I agree with Nelson, but I would like to add that perhaps the experience and understanding of events also have a more direct effect on children's emerging knowledge of time, as our primordial mode of experiencing and understanding time may be by way of events. What Events? In psychology and philosophy events are sometimes defined as simply a change (e.g., Rey, 2015) (Ducasse, 1926;von Wright, 1963, described in Casati andVarzi, 2014). Though, for the time-keeping discourse presented here, a single change doesn't qualify as an event. Neither does a series of unrelated changes, although they may (or may not) give rise to some sort of temporal experience. To be functional in time-keeping, the event must consist of a series of related changes, i.e., a coherent everyday event with a beginning, a middle, and an end. 
This sort of event is perceived as a unit because it has a meaning; a purpose or an end state. A broad and informal definition would be Gotogether goings-on. Most importantly, this is the sort of events which make up much of everyday life and from which we may form event-scripts and event configuration scripts. A change could be part of this event but any random change does not necessarily constitute a time-keeping event. Thus, this sort of event contains not only changes, but also continuity. In everyday life we experience events such as going to work, cleaning up after dinner, playing a game of soccer. Events of this type and on this scale are the ones we need to choreograph as we maneuver through an ordinary day in real life. Consequently it is events of this sort and on this scale that are of interest here. Katherine Nelson's description captures the gist of everyday events well: ". . .they involve people in purposeful activities, and acting on objects and interacting with each other to achieve some result" (Nelson, 1986, p 11). Thus, the meaning of event in Frontiers in Psychology | www.frontiersin.org this article has more in common with its meaning in everyday language than with its meaning in Philosophy or any other academic discipline. The aim of the research described above was not to uncover the processes underlying time-keeping, but I think these results indicate that event representations play an important role in everyday life and furthermore, that events are not merely random noise but are perceived in a consistent and lawful manner. This in turn suggests that event-time and time-keeping by way of events possibly still is part of our cognitive repertoire. In my view, our modern way of timekeeping most likely consists of event-time together with clocktime. Children may, however, rely more on event-time and it may therefore be advantageous to investigate the development of time-keeping ability or sense of time in the context of events. Event Based Time-Keeping So What Would a Time-Keeping Task by Means of Event-Time be Like? The Figure (Figure 1 ) Shows an Example Reasoning in terms of temporal relations among events entails a sort of mental time-travel. By constructing a mental event-model we may stop time for a moment, so that we may, in our minds, travel forward in time, and also backward to try out different alternatives. As some of the events overlap, we must also travel sideways. With their greater repertoire of event-representations and greater general processing resources, adults are obviously more competent event based time-keepers than children. In my opinion, it is the precursors of this competency we should look for when investigating children's sense of time.
2016-05-12T22:15:10.714Z
2015-03-11T00:00:00.000
{ "year": 2015, "sha1": "75b99eabe86e6d89b8dec2e262402ff117bb62ea", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00259/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ae2674297ad78228ca77bac38518fb3c94b3531e", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
258841210
pes2o/s2orc
v3-fos-license
Enhancing Accuracy and Robustness through Adversarial Training in Class Incremental Continual Learning Adversarial attacks on deep learning models are a serious security issue in real-world deployments. However, the issue has rarely been discussed in the widely used class-incremental continual learning (CICL) setting. In this paper, we address the problems that arise when adversarial training, a well-known defense against adversarial attacks, is applied to CICL. A well-known problem of CICL is class imbalance, which biases a model toward the current task because only a few samples of previous tasks are available. Combined with adversarial training, this imbalance causes a secondary imbalance of attack trials over tasks. Lacking clean data for the minority classes and facing an increased number of attacks from the majority classes, adversarial training distorts the optimal decision boundaries. The distortion eventually decreases both accuracy and robustness relative to standard adversarial training. To exclude these effects, we propose a straightforward but significantly effective method, External Adversarial Training (EAT), which can be applied to any method using experience replay. EAT performs adversarial training on an auxiliary external model using only the current task data at each time step, and uses the generated adversarial examples to train the target model. We verify the effects on a toy problem and show their significance on CICL image-classification benchmarks. We expect these results to serve as a first baseline for robustness research in CICL. Introduction Deep learning has achieved remarkable performance in various fields of computer vision. However, it remains vulnerable to adversarial attacks, which add minuscule perturbations to an image that are almost imperceptible to the human eye but cause the model to make incorrect predictions. This has made adversarial attacks a major concern for researchers, as they pose a significant security risk when deep learning is applied in real-world scenarios. Therefore, developing defenses against, and methods for launching, adversarial attacks has become a focus of research in the field. Despite the significance of continual learning (CL) in real-world applications of deep learning, there has been limited research on adversarial attacks and defenses in this context. CL examines how models can effectively learn from a continuous stream of data. In our empirical analysis of the impact of attacks, we found that the class-incremental CL (CICL) setting is vulnerable to adversarial attack. Furthermore, adversarial training (AT), the most widely used adversarial defense method, is ineffective in CICL settings. Compared with the expected robustness gain and small clean-accuracy loss of AT on a single task, AT in class-incremental CL shows a larger drop in clean accuracy and only a small improvement in robustness. We argue that the cause of this problem is that class imbalance, an inherent property of CL, deepens the model-disturbing effect of AT. To address these problems, we propose External Adversarial Training (EAT), an adversarial training method that creates adversarial examples free of the class-imbalance problem of CICL. EAT can easily be applied to any method using experience replay (ER), which includes the state-of-the-art models. To the best of our knowledge, EAT is the most effective method for defending against adversarial attacks while maintaining clean accuracy.
We verify and analyze these points on state-of-the-art and well-known rehearsal-based CICL methods on the split CIFAR-10 and split Tiny-ImageNet benchmarks. In summary, our contributions are as follows. • verifying that AT is ineffective in CICL • analyzing the causes of the problem based on attack overwhelming • presenting a simple but effective EAT method to exclude the causes • providing a robustness baseline for several rehearsal-based methods 2 Background Class-Incremental Continual Learning Continual learning refers to environments in which a model, called the target model, is trained to learn new tasks or classes sequentially without forgetting previously learned tasks or classes. This means that the model is continually exposed to new data and must adapt to the new information while retaining the knowledge gained from previous tasks. There are many different settings for continual learning; following the recent CL literature [Mai et al., 2022; Cha et al., 2021; Buzzega et al., 2020], we consider the supervised class-incremental continual learning setting, in which a model must learn new classes continually without task IDs. The stream D is a sequence of disjoint subsets whose union equals the whole training data, denoted {T_1, ..., T_N}, where T_i indicates the subset, called a task, at the i-th time step. Each task is a set of input and ground-truth label pairs. Training in class-incremental continual learning has two constraints: 1) a single target model, composed of an encoder and a single-head classifier, is shared over all tasks, and 2) the model learns only from the task at each time step, without access to the other tasks. The single-head classifier predicts over all classes in D, not only the classes of the current task, which is a more challenging environment than settings that use task IDs or separate classifiers per task. In the CICL setting, the model suffers from class imbalance because the previous task data is inaccessible. Rehearsal-based methods Rehearsal-based methods, also known as replay-based methods, are a popular approach for addressing catastrophic forgetting in CL. These methods use a memory buffer composed of a small fraction of previous training samples to reduce forgetting of previously learned information; some recent rehearsal-based methods further improve performance through contrastive learning. These methods have been shown to be effective in reducing forgetting and improving performance in class-incremental CL scenarios. Adversarial Attack Adversarial examples/images were first introduced by [Szegedy et al., 2013]. These examples are modified versions of clean images that are specifically designed to confuse deep neural networks. Adversarial attacks are methods for creating adversarial examples. These attacks can be classified into various categories based on their goals and specific techniques. In this paper, we focus on white-box attacks, which assume knowledge of the model's parameters, structure, and gradients. The Fast Gradient Sign Method (FGSM) [Szegedy et al., 2013; Goodfellow et al., 2014] is a popular white-box attack that uses gradient information to update the adversarial example in a single step, in the direction of maximum classification loss. The FGSM update rule is x' = clip_[0,1]{x + ε · sign(∇_x L(x, y; θ))}. The Basic Iterative Method (BIM) [Kurakin et al., 2018] is an extension of FGSM that generates adversarial examples through multiple iterative updates.
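As a concrete illustration of the FGSM update rule quoted above, here is a minimal PyTorch-style sketch; it is not the authors' code, and the model, loss, and variable names are placeholders introduced for illustration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Single-step FGSM: perturb each input by epsilon in the direction of the sign
    # of the loss gradient, then clip back to the valid image range [0, 1].
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)    # L(x, y; theta)
    loss.backward()                            # gradient with respect to the input
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

A typical call would be x_adv = fgsm_attack(model, images, labels, epsilon=8/255), after which model(x_adv) can be evaluated to estimate robustness.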
Projected Gradient Descent (PGD) is similar to BIM, but with the added feature of randomly selecting an initial point in the neighborhood of the benign example as the starting point of the iterative attack. PGD can be interpreted as an iterative algorithm to solve the problem max_{x': ||x' − x||_∞ < α} L(x', y; θ). PGD is recognized by [Athalye et al., 2018] as one of the most powerful first-order attacks. The use of random noise was first studied by [Tramèr et al., 2017]. In the PGD attack, the number of iterations K is a crucial factor in determining the strength of the attack as well as the computation time for generating adversarial examples. In this paper, we refer to a K-step PGD attack as PGD-K. Adversarial defense methods have been widely studied in recent years due to increasing concern for the security of deep learning models. These methods aim to improve the robustness of deep neural networks against adversarial attacks, which are specifically designed to exploit the weaknesses of the model by introducing small, imperceptible perturbations to the input data. Adversarial training (AT) [Goodfellow et al., 2014] is a popular method that trains the model with generated adversarial examples, making the model more robust against similar attacks. Robustness is used as a measure of how well the model defends against an attack: it is the accuracy obtained after applying an adversarial attack to the clean test data. To avoid confusion, in this paper accuracy means clean accuracy on clean test data, and robustness means accuracy under adversarial attacks on the clean test data. Critical Drawback of Adversarial Training in CICL Naive application of AT to CICL causes serious problems for both robustness and accuracy. Figure 1 shows the negative impact of AT on CICL data. This experiment was conducted on sequential CIFAR-10, with the detailed settings the same as in Section 5. In the figure, applying AT to joint training decreases clean accuracy slightly but increases robustness dramatically. This is the well-known effect of AT [Goodfellow et al., 2014; Zhang et al., 2019]. However, AT in ER shows results that differ greatly from this well-known effect: clean accuracy decreases significantly and robustness also drops below that of joint adversarial training. This example shows the potential risk of AT in the CICL framework. Problem of AT in CICL Attack Overwhelming by Class-Imbalance The class imbalance of CICL increases the number of adversarial attacks of a majority class, which overwhelms the number of clean examples of the minority class. The class imbalance, which is a well-known but still unsolved problem of CICL, causes the imbalance of adversarial attacks, because AT generates them by distorting all clean samples in a mini-batch. For example, AT using 10% of the training data for one class and 90% for the others passes exactly this ratio on to the generated adversarial examples [Goodfellow et al., 2014]. In usual CICL settings [Buzzega et al., 2020], class imbalance is a common property, and therefore the imbalanced attack occurs in most CICL methods that naively adopt AT. Weak Resistance to Inbound Attacks by Class-Imbalance The small number of clean examples caused by class imbalance weakens the resistance to inbound attacks. We use the term inbound attack for a class to indicate closely located adversarial examples generated from the other classes.
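The following sketch makes the PGD-K attack and the imbalance mechanism described above concrete: adversarial examples are generated for every sample in a mixed current-plus-replay mini-batch, so they inherit the class ratio of that batch. This is an illustrative sketch under assumed interfaces, not the authors' implementation.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha, k):
    # PGD-K: random start inside the epsilon-ball, then K signed-gradient steps,
    # each projected back into the epsilon-ball and the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(k):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def naive_at_step(model, optimizer, x_cur, y_cur, x_mem, y_mem, epsilon, alpha, k):
    # One adversarial-training step on an experience-replay mini-batch.
    # Attacks are generated for every sample, so the adversarial examples inherit
    # the (typically heavily imbalanced) current/previous class ratio of the batch.
    x = torch.cat([x_cur, x_mem])
    y = torch.cat([y_cur, y_mem])
    x_adv = pgd_attack(model, x, y, epsilon, alpha, k)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()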
Weak Resistance to Inbound Attacks by Class-Imbalance
The small number of clean examples caused by class-imbalance weakens the resistance to inbound attacks. We use the term inbound attack for a class to denote adversarial examples originating from the other classes that are located close to that class. When inbound attacks are used for training in AT, the number of clean examples plays an important role in resisting the distortion, by these attacks, of information already stored in the model. This resistance is weakened for the minority classes in CICL, which have insufficient clean examples compared to the majority classes. For example, in rehearsal-based methods the model can access the full current task data but only a very small memory of previous task data compared to the current task size. This imbalance between previous and current tasks grows as CL progresses.

Problem: Decision Boundary Distortion
The two properties, attack overwhelming by the majority classes and weakened resistance of the minority classes, cause critical distortion of the information trained from clean data. This phenomenon appears as the distortion of decision boundaries. The overwhelming attacks increase the inbound attacks on the minority classes, and the minority classes have insufficient resistance to them.

Settings for Empirical Analysis
We prepared a toy binary classification task to preliminarily verify the distortion phenomenon. In the task, we generated the same number of crescent-shaped input representations for each of the two classes, as in Figure 2, similar to [Altinisik et al., 2022]. Each class has 1000 input samples. We trained a simple two-layer feed-forward network with three hidden nodes on the data under four training conditions: 1) balanced clean data, 2) balanced clean data with balanced adversarial examples, 3) imbalanced clean data, and 4) imbalanced clean data with imbalanced adversarial examples (1:9). Training used the SGD optimizer with a learning rate of 0.1 for 500 epochs. For adversarial training, we used a PGD attack with 10 iterations. The trained models are used for plotting their decision boundaries by generating predicted classes over the representation space, as shown in Figure 2. The boundary is tested on balanced clean samples and balanced adversarial examples, shown as dot distributions in the first and second rows of the figure. Detailed accuracy can be seen in Table 1.

Distortion Compared to Clean Test
In Figure 2a, the model trained with balanced clean data shows a clear decision boundary that distinguishes the clean test samples. In Figure 2b, the model trained with balanced adversarial examples changes the boundary slightly, but still maintains the boundary learned from clean data. Using the imbalanced adversarial examples (Figure 2d), the model largely moves the boundary from the majority class (red) toward the minority class (blue) and incorrectly classifies more blue test samples. The results imply that imbalanced adversarial training has the potential to distort the boundary and destroy the original decision boundary built from clean data. Note that there is no critical clean accuracy degradation in imbalanced clean training (Figure 2c). The degradation in performance and the poor robustness occur only when AT is combined with the imbalanced setting; they do not happen in the simple imbalanced setting.
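For reference, a minimal sketch of this toy setup follows (condition 4, imbalanced clean data with imbalanced AT). The data generator, the exact class ratio, and the attack radius below are assumptions based on the description above, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.datasets import make_moons

def pgd_2d(model, x, y, eps, alpha, steps):
    # PGD for unbounded 2-D inputs: only the eps-ball projection, no pixel clamp.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
    return x_adv.detach()

# Crescent-shaped two-class data; subsample one class to roughly a 1:9 imbalance.
X, y = make_moons(n_samples=2000, noise=0.1)
X, y = torch.tensor(X, dtype=torch.float32), torch.tensor(y, dtype=torch.long)
minority = (y == 0).nonzero().squeeze()[:111]            # ~1:9 against the 1000 majority samples
idx = torch.cat([minority, (y == 1).nonzero().squeeze()])
X_imb, y_imb = X[idx], y[idx]

# Simple two-layer feed-forward network with three hidden units, trained with SGD.
model = nn.Sequential(nn.Linear(2, 3), nn.ReLU(), nn.Linear(3, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(500):
    x_adv = pgd_2d(model, X_imb, y_imb, eps=0.1, alpha=0.02, steps=10)   # imbalanced AT
    opt.zero_grad()
    loss = F.cross_entropy(model(X_imb), y_imb) + F.cross_entropy(model(x_adv), y_imb)
    loss.backward()
    opt.step()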
Distortion Compared to Robustness Test
In Figure 2d, balanced clean training shows the base robustness to the adversarial attacks generated for its trained model. Applying AT to the balanced data (Figure 2e), the trained model shows significantly improved robustness, which is the desirable gain of AT in an ordinary balanced training environment. However, imbalanced AT shows less improvement compared to imbalanced clean training. In the balanced case the boundary barely changes, but in the imbalanced case the boundary shifts toward the blue area when AT is applied. Then most robustness test samples of the blue class are incorrectly classified. This result also provides evidence of robustness degradation caused by decision boundary distortion.

Method
Simple Solution: External Adversarial Training
Figure 3 shows the details of applying EAT to CICL in the experience replay setting. Compared to typical AT, EAT creates an additional external model whose backbone has the same network architecture as the CL model shared over tasks (the target model). At each step, the method creates an external model, trains it from scratch via AT only on the current task of that step, generates adversarial examples, and then deletes it. The target model is then trained on the current task data, the replayed samples from memory, and the generated adversarial samples, without AT. The detailed process is described in Algorithm 1. Note that EAT does not need any extra external memory: the external model is deleted after generating adversarial examples and is not saved for future tasks.

Motivation: Effective Exclusion of AT on Class-Imbalance
The motivation of EAT is to effectively exclude imbalanced AT in order to reduce the distortion effect. In CICL, imbalanced AT arises from the imbalanced sizes of the current task data and the replayed samples, and therefore AT across different tasks suffers from the distortion problem. Excluding the cases where AT is applied across different tasks is a practically achievable way to reach this goal, because class-imbalance is inherent to CICL methods and has no clear solution in a limited computing environment. A simple way to achieve the exclusion is to train the target model with AT only on the current task data, called current task adversarial training (CAT) in this paper. However, this method still generates attacks from the current task toward other tasks, because CICL settings incrementally expand the class set used for prediction. To enhance the exclusion, EAT uses an external model focused on attacks between classes of the current task. Figure 4 shows the rate of adversarial examples confined to classes of the same task rather than crossing between different tasks. This experiment is conducted on split CIFAR-10, and the other settings are as in Section 5. In the results, EAT shows a higher rate than CAT over all training epochs, which verifies the more effective exclusion of EAT. In fact, the incomplete exclusion of CAT largely decreases accuracy while improving robustness only slightly, as shown in Figure 1.

Memory Update
After training the target model at each step, the external memory is updated by inserting samples randomly selected from the task at that step. This memory update method is known as reservoir sampling. If the memory is already full, we randomly choose a sample in the memory and replace it with the new sample.
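A minimal sketch of such a reservoir-style memory in Python is shown below; this is the standard reservoir sampling update commonly used in rehearsal-based CL, and the paper's exact implementation may differ in its details.

import random

class ReservoirMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []       # stored (x, y) pairs
        self.num_seen = 0        # number of examples offered to the memory so far

    def update(self, x, y):
        self.num_seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append((x, y))
        else:
            # Replace a random slot with probability capacity / num_seen, so that
            # every example seen so far is retained with equal probability.
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.examples[j] = (x, y)

    def sample(self, k):
        # Draw replay samples for the next training step.
        return random.sample(self.examples, min(k, len(self.examples)))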
Datasets
We use three datasets: Split-CIFAR-10, Split-CIFAR-100, and Split-MiniImageNet. Each set is created by splitting the original data by classes, composing classes into tasks, and ordering the tasks as a stream. The task composition and ordering determine what information can transfer across tasks, and different choices can cause large changes in results. For clear analysis, we fix the task composition in ascending order of labels.

Results and Discussion
Performance Comparison with State-of-the-Art
The accuracy and robustness of several state-of-the-art models are shown in Table 3. The results are grouped into two cases using buffer sizes of 200 and 500 for experience replay. In each memory setting, we reproduce the state-of-the-art methods, and their results are close to their reported reference accuracies with some variance. In the accuracy results, AT significantly decreases the accuracy of experience replay methods in all cases compared to their original accuracy, whereas EAT yields significantly higher accuracy than AT. In the robustness results, AT improves the robustness of all base methods; EAT increases robustness further and shows the best value of all methods in the table. The results imply that EAT effectively solves the accuracy and robustness drops of AT in CICL. Furthermore, EAT is the most effective method for enhancing robustness in CICL. Its lower accuracy compared to the best original method is the accuracy-robustness trade-off that is commonly observed with AT.

Robustness and Accuracy on Each Task After Training
Figure 5 shows the detailed robustness of AT and EAT for each task after training over all tasks on CIFAR-10. In the results, EAT shows higher robustness than AT on all tasks, whereas AT barely improves robustness on any previous task except the current one (Task 5). The accuracy of EAT is higher on Task 1 and Task 2, the two oldest tasks, whereas AT shows slightly higher accuracy on the recent tasks. Considering the total accuracy and robustness increase of EAT, the results imply that EAT improves both, improves the accuracy of older tasks in particular, and significantly improves robustness on all tasks. Note that EAT never learns from inter-task adversarial attacks, yet its robustness increases over all tasks. This is strong evidence of the drawbacks of the unnecessary class-imbalanced attacks that AT generates between tasks.

Performance Difference over Time Steps
Figure 6 shows accuracy and robustness averaged over the tasks involved at each step of training. Accuracy gradually decreases over steps in CICL settings, as the model is repeatedly trained on a new task and forgets information about previous tasks. This phenomenon appears for both AT and EAT, but overall accuracy is slightly higher with EAT. The robustness at step 1 is similar but not exactly equal, which is caused by the randomness of the adversarial attacks of AT. The difference in robustness increases significantly at step 2 and remains similar until step 5. As step 2 includes only Task 1 and Task 2 while step 5 includes all tasks, the remaining difference implies that CICL settings with AT suffer a large robustness degradation whenever a new task is added.

Reducing Computational Cost
Both EAT and AT are computationally expensive, because they build and train on adversarial examples. EAT is particularly expensive because it trains and uses new external models. For practical use this cost may be a limitation, so we also verify the performance of EAT in a more efficient CICL setting that uses the faster FGSM attack [Szegedy et al., 2013]. Compared to a 4-step PGD attack, this method reduces the time complexity to about 25% [Szegedy et al., 2013]. Table 4 shows the performance in this efficient setting. Accuracy and robustness are still significantly improved by EAT, so the computational limitation of EAT can be sufficiently alleviated.
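For concreteness, the accuracy and robustness values reported above can both be computed with the same evaluation loop; robustness is simply accuracy measured after an attack is applied to the clean test inputs. The sketch below assumes a PyTorch model, a test data loader, and an attack function such as the pgd_attack sketch shown earlier; names are illustrative.

import torch

def evaluate(model, loader, attack=None):
    # Clean accuracy when attack is None; robustness (accuracy under attack) otherwise.
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        if attack is not None:
            x = attack(model, x, y)   # e.g. lambda m, a, b: pgd_attack(m, a, b, 8/255, 2/255, 10)
        with torch.no_grad():
            pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total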
Conclusion
In this paper, we show that existing AT does not work well in the class-incremental continual learning setting with experience replay. We argue that the cause lies in applying AT to class-imbalanced data, and that the resulting distortion of decision boundaries leads to drops in both accuracy and robustness. To solve the distortion, we introduced EAT, which effectively excludes imbalanced AT between different tasks. In experiments on CICL benchmarks, we verify that our method significantly improves both accuracy and robustness compared to AT, which suffers from the negative effect of class-imbalance. Moreover, EAT provides new state-of-the-art defense performance (robustness) in the CICL with ER environment.

Future Work
Although the robustness of several methods has been investigated in this paper, the robustness of many CL methods is still insufficiently studied. In addition, there is a lack of study of how adversarial defense methods other than adversarial training affect CL. Broad and varied study of adversarial robustness in CL is needed in future work. To the best of our knowledge, this study is the first to examine adversarial defenses specialized for CL. Affordable and effective adversarial defenses specialized for CL should also be studied in the future.

Related Work
Continual learning
CL can be divided into several categories according to problem settings and constraints. One group extends the architecture of the model for each new task. Another approach regularizes the model with respect to previous task knowledge while training new tasks. Rehearsal methods use stored data or samples from generative models to resist catastrophic forgetting. Rehearsal methods are very effective in class-incremental CL, but they incur additional computational and memory costs. Recent rehearsal-free methods have shown high performance with little memory cost using vision transformers and prompt tuning. This setting is more realistic and shows higher performance than settings that start from scratch. In this paper, we focus on the setting of class-incremental CL from scratch.

Adversarial Defense
Various adversarial defense methods have been proposed in the literature, including adversarial training, defensive distillation, input preprocessing methods, and model ensemble methods. Defensive distillation [Papernot et al., 2016] improves the robustness of the model by distilling the knowledge from a robust model into a less robust one. Input preprocessing methods [Dziugaite et al., 2016] aim to preprocess the input data to remove adversarial perturbations before feeding it to the model. Model ensemble methods [Pang et al., 2019], on the other hand, aim to increase robustness by combining the predictions of multiple models. Other methods such as gradient masking, randomized smoothing, and adversarial detection have also been proposed in recent years. Gradient masking [Lee et al., 2020] hides the gradients of the model to prevent gradient-based attacks. Randomized smoothing [Cohen et al., 2019] makes the model more robust by adding random noise to the input data. Adversarial detection [Liu et al., 2018] aims to detect adversarial examples and discard them before they are fed to the model.

Continual learning with adversarial defense
Efforts to incorporate adversarial robustness into CL have only recently begun. [Khan et al., 2022] studied how to increase robustness in joint training using a continual pruning method, but did not study how to increase robustness in CL.
[Chou et al., 2022] use the robust and non-robust datasets identified in [Ilyas et al., 2019] to increase the clean accuracy of continual learning models. They also report experiments on the robustness of CL models, but only on sequential CIFAR-10 with a large memory (16,000 samples). Since their goal is to increase clean accuracy, they did not study how to increase adversarial robustness. In this paper, we are the first to study how to increase adversarial robustness in CL, and we measure the robustness of various methods in various settings.
2023-05-24T01:16:32.832Z
2023-05-23T00:00:00.000
{ "year": 2023, "sha1": "0eb7da5dc9ae9c6ce0ebc7d4149c6344271c76e2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0eb7da5dc9ae9c6ce0ebc7d4149c6344271c76e2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
253498861
pes2o/s2orc
v3-fos-license
Emergence of Concepts in DNNs?
The present paper reviews and discusses work from computer science that proposes to identify concepts in internal representations (hidden layers) of DNNs. It is examined, first, how existing methods actually identify concepts that are supposedly represented in DNNs. Second, it is discussed how conceptual spaces, that is, sets of concepts in internal representations, are shaped by a tradeoff between predictive accuracy and compression. These issues are critically examined by drawing on philosophy. While there is evidence that DNNs are able to represent non-trivial inferential relations between concepts, our ability to identify concepts is severely limited.

Introduction
There is a well-known story of how deep neural networks (DNNs) predict classes in an image classification task [5,22]: in the hidden layers of DNNs, progressively more abstract concepts are represented. Take a model that classifies animals such as cats, dogs, and cows. According to the story, the model detects low-level concepts such as colors and textures in the first layers. In intermediate layers, the model detects higher-level concepts, such as body parts (eyes, ears) or complex textures (fur), by composing low-level concepts. In the final layer, the model detects animals by composing higher-level concepts. Importantly, these concepts are emergent, i.e., they are not hard-wired into the models and do not correspond to the labeled classes, but are acquired through the learning process. The direct route to verifying this story is to examine the internal representations (hidden layers) of DNNs and to identify the concepts supposedly represented there. The main goal of the present paper is to review and discuss work from computer science that proposes to do this. Two issues with concepts in internal representations of DNNs are discussed. First, how do these methods actually identify concepts that are supposedly represented in a DNN? Second, how are conceptual spaces (sets of concepts in internal representations) shaped by the classes to be predicted and by the representational capacities of DNNs? These questions are critically examined by drawing on philosophy.

Background
Concepts
Before concepts in internal representations are discussed, criteria for concept possession should be stated. There are various philosophical theories of concepts [24]. Here an undemanding theory, or explication, of concept possession is used, which does not assume that concept possession requires mental states or consciousness, or that concepts are abstract objects [9]. Rather, concepts are taken to be associated with abilities. An important distinction is between the extension and the meaning (intension) of a concept; the distinction goes back to Frege [43]. The extension of a concept is the collection of entities falling under it. Here, DNNs are taken to "recognize" (partial) extensions through activation patterns in their hidden or output layers. Labeled input instances can be seen as (partial) extensions of a concept. When humans label instances, they do this on the basis of prior knowledge about the instances, that is, the meaning of a concept. In DNNs, the possession of meaning encompasses the representation of some inferential relations, e.g.: a cat is an animal, has four legs, a head, fur (usually), and so on. It will be assumed that both the extension and the meaning of a concept are relevant.
As we will see below, one of the main challenges in the context of DNNs is that concepts are primarily identified through (partial) extensions, which underdetermines meaning. We will see evidence that DNNs learn non-trivial inferential relations between concepts. Internal representations arise as a function of the objective of predicting n classes. The concepts we will focus on are not the predicted classes, but emerge as a function of the objective of learning to predict the classes. In particular, DNNs apparently learn concepts that are shared by several classes. To use the example of classifying animals, in order to classify cats and dogs, a DNN may learn concepts such as 'fur', 'head', 'paw', 'eye', and so on that are shared by cats and dogs. This issue has been explored for some time, e.g., DNNs supposedly exploit that many classes have shared features at least at a low level in order to improve generalization [5]. If these findings can be confirmed and DNNs are in fact able to learn non-trivial inferential relations between concepts, the representation of concepts contains information about meaning rather than extensions. It could be argued that the exercise of examining concepts in internal representations is superfluous, because the predictive successes of DNNs shows that they are able to automatically identify predictively salient concepts. If this were not the case, DNNs would not be able to generalize as well as they do. However, DNNs are not always successful; there are known failure modes such as adversarial examples [36]. Also, predictively successful models are not necessarily models that represent their target system adequately; this is true for scientific models as well as for DNNs, as philosophers know [18,32]. Conceptual Spaces DNNs may be able to learn concepts that are relevant to predict more than one class. Other factors may shape how concepts are represented in DNNs as well. First, the predicted classes may not share certain concepts and be mutually exclusive (to some extent). In the example of animal classification, in order to classify cows, a useful concept to be learned by a DNN may be horns. This concept does not contribute positively to the classification of cats and dogs, because cats and dogs are not horned. Second, the concepts populating the internal representation take up some space in the internal representation, they are in competition for a finite amount of representational space. This competition may lead to compression and thus shape the internal representation of all concepts. Third, individual concepts may be compressed as well: if the representation of a concept contains predictively irrelevant details, this will lead to overfitting. All these factors contribute to the formation of a conceptual space, the set of concepts in an internal representation of a DNN. Conceptual spaces are formed as a function of both the set of predicted classes and the representational capacity of the DNN. There have been some studies of how conceptual spaces are formed in DNNs, but less is known about this than about the emergence of individual concepts. Below we will see some evidence for compression due to competing concepts, and it will be argued that understanding conceptual spaces may be necessary to understand how individual concepts are represented in DNNs. Limitations Not all aspects of the internal representation of a DNN have an interpretation in terms of concepts. 
An internal representation may not relate to any concept in that a) there may be a failure to represent, as in adversarial examples [36] or b) what is represented may not be accessible or comprehensible to humans and therefore not correspond to a concept [8]. Note that adversarial examples may also constitute predictively useful patterns, or artifacts [10]. Cases a) and b) are examples of non-conceptual content of an internal representation. Understanding the scope and limits of non-conceptual content is important, but the following discussion will focus on the modes of representation of concepts that can be grasped by humans. The discussion will focus on post-hoc methods to extract concepts from trained models, excluding methods like concept whitening [12] or concept bottleneck [21] that modify the architecture of DNNs to enhance interpretability. Identifying Emergent Concepts In this section, empirical work on the emergence of concepts in the internal representation of DNNs is reviewed. The focus is on concepts that are relevant to several of the predicted classes, because such concepts indicate non-trivial inferential relations. Network Dissection Network dissection by Bau et al. [3] proposes to identify concepts associated with individual neurons in CNNs. Specifically, the emergence of object detectors in scene classifiers is examined. For example, the CNN learned the concept 'airplane' in the process of classifying 'airfield' and 'hangar'. Concepts are identified by matching the region of the input that maximizes activation of a neuron with a region associated with a concept given by an image segmentation method. Many concepts were found to be important for the classification of multiple scenes. Network dissection has several advantages: it is automatic, allows for a quantitative evaluation of similarity, and for visual inspection of image regions. A drawback is that the image segmentation method can only identify a fixed, limited set of concepts. If a concept is not included in this set, it cannot be identified. Therefore, network dissection falls prey to a version of the "bad lot" argument by van Fraassen [37,14]: If we explain scientific evidence (here: region with high activation by a neuron) using the best hypothesis from a limited set (here: concepts from image segmentation) it is not clear that the best hypothesis is also true, because the true concept may simply not be in the scope of the image segmentation method. It could be thought that this problem can be overcome by visually inspecting the regions with high activation to identify the concept. Such a region, however, is only (part of) an extension of a concept, and it may be unclear what meaning is associated with that region -a segment containing a plane can also be described as a tube with wings. This is a version of the so-called indeterminacy of reference described by Quine [30,25]. Feature Visualization Feature visualization by Olah et al. [27] proposes to identify concepts by constructing input instances that maximize the activation of neurons (or other parts) of CNNs. The method generates synthetic images that maximize activation of a neuron. Olah et al. note that direct, unregularized optimization can lead to degeneracies (akin to adversarial examples), and that different kinds of regularization have to be used to obtain natural-looking images. 
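The core of the method can be summarised in a few lines of code. Below is a minimal sketch of activation maximization with a simple weight-decay regulariser; the regularisers actually used by Olah et al. are more elaborate, and all names and sizes here are illustrative assumptions.

import torch

def visualize_unit(model, layer, unit, steps=256, lr=0.05, decay=1e-4):
    # Gradient ascent on the input image to maximize the mean activation of one
    # channel ("unit") in a chosen layer; only the input is optimized, not the model.
    stash = {}
    handle = layer.register_forward_hook(lambda m, i, o: stash.update(value=o))
    x = torch.randn(1, 3, 224, 224, requires_grad=True)      # start from random noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        model(x)
        activation = stash["value"][0, unit].mean()
        loss = -activation + decay * x.pow(2).mean()          # maximize activation, keep x small
        loss.backward()
        optimizer.step()
    handle.remove()
    return x.detach()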
Feature visualization can be used to show that low-level concepts combine into higher-level concepts, e.g., a car detector is assembled from features like windows, car body, and wheels [26]. Feature visualization has the advantage that one does not need to infer the meaning of a concept from a set of instances. Rather, it provides a single visualization (or a few). Of course, one still needs to determine meaning from an instance. As Olah et al. acknowledge, while many visualizations have a rather clear semantic interpretation, some visualizations appear to have a mixed meaning (so-called polysemantic neurons, more on these below), and some visualizations have no discernible meaning at all. Thus, the indeterminacy of reference is an issue here as well. Furthermore, the visualizations depend on the choices made in optimization, the regularizations in particular, which may introduce artifacts. The use of optimization raises further concerns; for example, the method could get stuck in a local optimum. Optimization in DNNs is not very well understood from a theoretical point of view [38,6], and the possibility of local optima makes the method susceptible to the "bad lot" argument.

TCAV
Testing with Concept Activation Vectors (TCAV) by Kim et al. [19] is a method to examine how strongly a user-defined concept is associated with a predicted class in a particular layer. Concepts are defined extensionally by a user through a set of input examples of that concept and a set of random counterexamples. The concept activation vector of a layer is the vector normal to the hyperplane that best separates the activations of examples and counterexamples. One can test how strong the association of this concept with a predicted class is by measuring how well its vector aligns with the vector of that class. Kim et al. claim that DNNs learn emerging concepts with considerable accuracy. Classifiers of low-level concepts (colors, shapes) achieve high accuracy in early layers, while more complex concepts (race, gender) achieve higher accuracy in later layers. Note that other researchers have also explored the activations of layers with linear classifiers [2]. The main advantage of TCAV is that it allows users to choose the concepts to be identified through customized sets of examples. TCAV thereby overcomes, to some extent, the philosophical problem of the indeterminacy of reference we encountered above: in principle, there is no limit on the number and variety of instances used to define a concept extensionally. There are, of course, practical limitations. Also, the extensional definition of concepts limits the control over the meaning of the concept being defined. A further drawback of the method is its limitation to testing for linear information in the layers.

Non-local Representation of Concepts
The above methods differ in how they identify concepts, but they also vary in where they take concepts to be represented (in single neurons, layers, spread over several layers). It is known that concepts are not (only) represented by individual neurons, but have distributed representations. There is evidence that the representation of concepts is not limited to single layers. Yosinski et al. [42] examined concept representations from the perspective of transfer learning. They found that feature representation in intermediate layers is distributed over consecutive layers: freezing only a portion of consecutive intermediate layers led to worse performance than freezing all intermediate layers in question.
This constitutes indirect evidence that the relevant concepts are distributed over these layers. Conceptual Spaces In this section, it is discussed how conceptual spaces arise as a function of the predicted classes and of compression. The discussion is more speculative than the last section, because there is less empirical work on the global perspective of conceptual spaces. Polysemantic Neurons Some indirect, local evidence for competing concepts is provided by feature visualization, see above. If concepts are disjunctive and in competition for representational space, say, in a layer of a DNN, then one observable consequence may be that some concepts have imperfect representations and become mixed. This phenomenon has been observed by Goh et al. [17]. They find that while many neurons maximize activation for a single, identifiable concept, so-called polysemantic neurons are composites of different, seemingly unrelated concepts, e.g., a neuron representing a mix of cats and cars. Goh et al. point out that one possible explanation of this sort of disjunctive neuron is that they could make "concept packing more efficient" [17]. This idea is discussed in more detail by Olah et al. [26] as the superposition hypothesis. Completeness-aware Concept-based Explanations Completeness-aware concept-based explanations (CCE) proposed by Yeh et al. [41] is a method geared towards discovering sets of concepts that are not only positively relevant to the predicted classes, but complete. A complete set is akin to a sufficient statistic, that is, a function of the input that retains all predictively relevant information [11]. CCE identifies concepts by partitioning linear directions in the activation space of a hidden layer, such that similar concepts are as close as possible, and dissimilar ones as distant as possible. The meaning of concepts is determined by inspecting input instances. This approach distinguishes itself by identifying complete sets of concepts as opposed to single concepts. However, it affords little control on whether the discovered concepts are meaningful. Minimal Sufficient Statistics and the Information Bottleneck It is plausible that DNNs learn compressed, efficient representations of concepts, because the space to represent concepts is limited. From a statistical point of view, the layers of a DNN form a Markov chain, which means that information is lost and internal representations become more abstract in deeper layers [2,1]. But what are the rules that guide how concepts are compressed? To understand these rules, a more global perspective on the representation of concepts is necessary. The CCE approach provides a global perspective in the form of a complete set of concepts, i.e., a sufficient statistic. However, in order to account for the idea of an efficient representation of concepts, minimal sufficient statistics (MSS) are relevant [11]. MSS are sufficient statistics that are as coarse as possible and thus provide the most efficient representation without losing predictive power. It has been argued that DNNs cannot learn minimal sufficient statistics [35]. MSS only yields a useful degree of compression for a very particular kind of data distribution [11], which is not given for most empirical datasets processed by DNNs. A helpful framework that generalizes MSS and can be applied to DNNs is the so-called Information Bottleneck (IB) [34,35,31]. 
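For reference, the IB objective in its standard Lagrangian form (following the IB literature; X is the input, Y the prediction target, and T the compressed internal representation) can be written as

\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y),

where I(\cdot\,;\cdot) denotes mutual information and the multiplier \beta \geq 0 controls the tradeoff: a small \beta favours compression (low I(X;T)), while a large \beta favours predictive sufficiency (high I(T;Y)).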
Tishby and collaborators propose that the IB explains how internal representations of DNNs arise as a tradeoff between predictive accuracy (sufficiency) and compression (minimality). Formally, the IB tradeoff is a constrained optimization problem, which yields a predictively optimal representation for a given level of compression (information loss). Tishby et al. have argued that layers of DNNs in fact approximate the optimum given by the IB tradeoff. Note that while the IB framework has been applied extensively, it has been contested whether it is an adequate account of how internal representations arise in DNNs [33,15]. Visualizing Conceptual Spaces: Color Naming The IB framework provides a theoretical picture of how entire conceptual spaces emerge in DNNs. Unfortunately, there is a lack of work establishing relations between this theoretical picture and the actual representation of concepts in given DNNs. In order to illustrate what picture could emerge if conceptual spaces in DNNs were investigated, we will now consider an application of the IB framework to concepts in an empirical context. Zaslavsky et al. [45] use the IB framework to explain how color naming systems arise as a result of efficiency. Different natural languages use different systems to name colors. Based on a standard representation of colors (the WCS stimulus palette, see Figure 1), one can determine how speakers of different languages name the color chips on this palette. (Figure 2, bottom row). The main difference between languages is the number of color concepts they use. A language with more color concepts yields a more fine-grained partition, a language with less colors a more coarse-grained partition. The partitions derived from the IB framework are determined to a large degree by the tradeoff between accuracy and compression, controlled by the parameter β l , which yields different numbers of concepts (the theoretical predictions also depend on the so-called least informative prior). The close fit between theoretical and empirical partitions suggests that color naming systems in different languages have evolved to communicate accurately about colors at a given level of compression, and that the level of compression is due to the different communicative needs of the societies using the languages. How is this related to conceptual spaces in DNNs? To spell out the analogy, the naming systems correspond to internal representations, e.g., partitions of activation patterns in a hidden layer. The cells of the partition (colors) correspond to clusters of activation patterns with a meaning (concepts). The degree of compression is measured by the number of concepts, and the number of concepts is determined by communicative need in the case of color naming systems, and the predicted classes and representational capacity in the case of DNNs. The analogy is substantive to the extent that both color spaces and representations in DNNs are driven by the IB objective. The analogy allows us to get a sense of how sets of concepts may emerge holistically in DNNs, that is, as a function of predicting classes while having a limited representational capacity. If we compare the different partitions in Figure 2, we can see that as the number of colors changes, the entire partition changes. This illustrates how the representation of concepts depends on the representational capacity. 
Note that polysemantic neurons would indicate an internal representation that is too compressed for the concepts to be represented, such that two concepts (colors) merge. The analogy has its limits. For one, color naming is special because colors are disjunctive, which need not be the case for other concepts. Also, the representation of colors is non-hierarchical, in contrast to complex representations in DNNs. Note that the conceptual spaces of other kinds of objects have been investigated, but they do not allow for similarly striking visualizations [44]. An important open question about compressed representations concerns the mechanism by which compression is achieved. It is unclear whether compression is due to limited representational space, because many successful DNNs are overparametrized, as witnessed by the double-descent risk curve [4,6]. Compression could also be an effect of randomness induced by stochastic gradient descent [35]. Discussion Robust Identification of Concepts Network dissection and TCAV identify concepts via partial extensions, which leads to problems because partial extensions underdetermine the meaning of concepts. Feature visualization relies on optimization, which raises other issues. These problems can be seen as in-principle, philosophical obstacles to identifying concepts in DNNs. From a more pragmatic perspective, the individual weaknesses of these methods could be overcome to some extent by combining them and performing what is known as robustness analysis. Robustness analysis, first proposed in population biology, determines whether different, imperfect methods arrive at the same pre-diction to increase reliability, under the slogan: truth is at the intersection of independent lies [23, 40,39,20]. One could apply methods like TCAV and feature visualization to the same model. If different methods identify the same concept independently, this should raise our confidence that the methods are somewhat reliable. Robustness analysis is limited in that it will not yield an absolute confirmation of concepts [29] -it is only as good as the set of methods in combination -but it is better than using only one method. A combination of different methods contributing to interpretability has been proposed and explored [28,19]. Testing Methods with Synthetic Data The methods for identifying concepts considered here are limited to extracting local or linear information. It would be desirable to extend the scope of the methods to encompass the identification of concepts with distributed and non-linear representations. However, this will be hard to carry out by sticking to the extensional paradigm of identifying concepts. Defining concepts with sets of instances only allows for limited control on the meaning of concepts. One possibility to gain more control on meaning would be to create synthetic datasets in which not only the predicted classes (animals) are labeled, but also intermediate concepts (body parts, textures, etc.), which may re-emerge in internal representations of DNNs -interpretable datasets, so to speak. This approach has been proposed in the context of interpretable architectures [21]. However, synthetic datasets could also be used to test methods for non-interpretable architectures, such as TCAV or feature visualization. In the context of physical modeling, the use of simulation data has led to some progress in developing DNN emulators for which emerging, high-level properties (e.g. energy conservation) can be checked [16,7,13]. 
One of the main challenges of this approach would be to come up with a principled labeling system for the intermediate concepts.

The Need for Conceptual Spaces
We have discussed the identification of both concepts and conceptual spaces. It could be asked whether both are really necessary, because once we have identified all the concepts in an internal representation, we have arguably also identified the conceptual space. This argument presupposes that the identification of individual concepts in internal representations is reliable and leads to a neat partition of an internal representation. However, this presupposition is not realized in practice. Methods to identify particular concepts, and also sets of concepts, are not (yet) reliable. Also, conceptual spaces may contain elements like polysemantic neurons, as well as artifacts, which do not have neat conceptual counterparts. Understanding how entire conceptual spaces are formed is an additional path to understanding how individual concepts are formed. Bottom-up methods, which identify concepts, and top-down methods, which examine entire conceptual spaces, should not be seen as competing, but as complementary ways of triangulating concepts in internal representations, ultimately making the triangulation more reliable.

Conclusion
There is evidence that DNNs are able to represent non-trivial inferential relations between predicted classes and emergent concepts that are represented internally. This indicates that DNNs may be able to acquire information that is not purely extensional. However, our ability to identify emergent concepts in the first place is severely limited, because existing methods rely on limited extensions of concepts, which makes them susceptible to philosophical problems such as the indeterminacy of reference and the bad lot argument. These limitations should give us pause, given that we have used an undemanding theory of concepts. Finally, the problem of understanding how entire sets of concepts arise holistically in internal representations through tradeoffs between predictive accuracy and compression is underexplored. Novel methods to identify concepts as well as conceptual spaces are urgently needed.
2022-11-14T06:41:52.470Z
2022-11-11T00:00:00.000
{ "year": 2022, "sha1": "5aca3e7bc7eee0a17b4354267db4b42e3067051f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "5aca3e7bc7eee0a17b4354267db4b42e3067051f", "s2fieldsofstudy": [ "Philosophy", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
203929950
pes2o/s2orc
v3-fos-license
“Unrigging the support wheels” - A qualitative study on patients’ experiences with and perspectives on low-intensity CBT Background Low-intensity treatments imply reduced therapist contact due to an emphasis on self-help and the use of technologies to deliver treatment. The role of the remoteness, the reduced therapist contact, and the interplay of these components has not been differentiated from a patients’ perspective so far. This study’s purpose is to capture patients’ experiences with telephone-based self-help cognitive behavioural therapy (tel-CBT). Methods A subsample of mildly to moderately depressed patients (N = 13) who finished tel-CBT as part of a larger randomised controlled trial (RCT) in routine care were interviewed using a semi-structured questionnaire. Interviews were audiotaped, transcribed verbatim, and independently coded by two coders blind to treatment outcome. Using qualitative content analysis with deductive and inductive procedures, a two-level category system was established. Results The category system contains four category clusters regarding expectations, self-help related aspects, telephone-related aspects, and implications for patients’ treatment pathway, and subsumes a total of 15 categories. Self-help related aspects circulate around the interplay between written materials and professional input, trust and support in the therapeutic relationship and its relation to the initial personal contact, as well as CBT principles. Telephone-related aspects entail perceived advantages and disadvantages of the telephone on an organisational and content level as well as a discourse around distance and closeness in the interaction. Although patients raised doubts regarding the long-term effect of the intervention on symptomatology, patients expressed satisfaction with the treatment and reported an immediate as well as a longer lasting personal impact of the treatment. These results indicate user acceptance with tel-CBT. Conclusions This qualitative analysis captures patients’ experiences with tel-CBT and the perceived helpfulness of the diverse treatment components. This can facilitate refining aspects of low-intensity treatments and might improve dissemination. Trial registration ClinicalTrials.gov NCT02667366. Registered on 3 December 2015. Electronic supplementary material The online version of this article (10.1186/s12913-019-4495-1) contains supplementary material, which is available to authorized users. Background The introduction of low-intensity treatments as one way to mitigate the overwhelming disproportion of demand and supply in evidence-based mental health services has been called a "revolution in mental health care" (Bennett-Levy et al., 2010, p. 3). The aim of this transformation is to improve access to evidence-based care by providing the least resource-intense care for common mental disorders. Lowintensity interventions entail a variety of different formats, modalities, and levels of intensity regarding duration and frequency of treatment as well as extent of therapist contact (Benett-Levy et al., 2010; [7]). Their commonality is the reduction of time and encounters between therapist and patient, by encouraging patients to manage their symptoms and by employing "health-technologies". This is achieved through a) a shift to applying written and self-help materials, and b) the use of communication technologies (internet and telephone) intending to reach patients from a distance and to support therapists in conveying therapeutic content. 
A growing body of literature supports the feasibility, acceptability, and efficacy of low-intensity treatments, including guided self-help cognitive behavioural therapy (CBT) [15,18] and telephone-based interventions [4,29]. Telemedicine has a long tradition in health care, and the use of the telephone as a facilitator of guided self-help dates back to early studies on self-help interventions for depression (e.g., [10]). Increasingly, the telephone is recognised as a mediator of full psychological treatments with clinical effectiveness [4]. Telephone-delivered CBT (tel-CBT) as a stand-alone treatment normally comprises a highly structured treatment program based on core elements of CBT [23,27,33,39]. Tel-CBT thus combines a low-threshold approach with the benefits of personal therapeutic support [34]. While the regular, scheduled sessions in tel-CBT might resemble traditional psychotherapy, the short sessions and the anchoring of the self-help approach in the treatment program imply reduced therapeutic input and augmented self-reliance on the patients' side. Within the broad variety of low-intensity interventions, tel-CBT can be located at the higher end of the low-intensity spectrum due to regular therapist contact combined with a strong emphasis on (guided) self-help. Despite these advantageous features and the promising results of low-intensity treatments, their uptake in routine care has been slow. With a few exceptions in the international context, where technology-mediated psychological treatments are delivered as part of a stepped care model (e.g., Improving Access to Psychological Therapies (IAPT) in the UK), traditional face-to-face therapies still dominate the landscape of treatment provision models in many countries. One reason for a cautious implementation on a system level might be the relatively large dropout rates connected to low-intensity treatments. Attrition rates in some studies exceed those in face-to-face treatments [2]. This is particularly the case for internet-based psychological treatments, with some unguided (internet-based) interventions having shown adherence rates lower than 7% [12]. While dropout rates in self-help treatments vary widely [9] and are comparable to face-to-face treatments [15], there is contrasting evidence that attrition rates are lower in tel-CBT compared to face-to-face treatment (e.g., [28]), highlighting the capacity of the therapist-telephone combination to overcome barriers that could occur in face-to-face treatments. However, the therapists' role is not only important for patient engagement and for ensuring greater adherence to treatment. The therapist-client relationship is also considered a robust predictor of the therapeutic outcome. In psychotherapy research, a strong therapeutic alliance is deemed necessary and even responsible for therapeutic success (Horvath & Symonds [20]; Norcross & Lambert [30]). Telephone-delivered treatment is limited in that non-verbal cues such as gesture, facial expression, and eye contact, which are considered important determinants of a therapeutic relationship, are missing (Haas et al. [19]). Clinicians fear that the lack of visual cues would curtail their ability to form an effective working alliance [32]. Yet, initial empirical investigations show that the establishment of a working alliance in tel-CBT is equally possible relative to face-to-face CBT [35].
Although the therapist-client relationship might also be compromised by the reduced therapist contact in guided self-help treatments, there is evidence that the therapist-patient relationship is possible with minimal contact [15], suggesting that the relationship itself is essential rather than the intensity or frequency of the contact [22]. With respect to guided self-help interventions specifically, one meta-analysis demonstrates that professional guidance in self-help interventions leads to improved treatment outcomes compared to pure self-help approaches [18]. This indicates that a certain level of therapist support is indeed necessary to achieve clinical benefit. Qualitative research on patients' perspectives on low-intensity interventions for patients with depression and anxiety disorders has thus far been primarily concerned with examinations of online and computerised CBT [3,17]. These studies have revealed both perceived advantages and disadvantages of the reduced therapist contact and highlight individual differences in the value assigned to professional support. It is noticeable that only a segment of patients with certain characteristics, such as those taking responsibility for their treatment and mustering motivation for it, appears to benefit from the low-intensity treatment format [6]. Macdonald et al. [24] examined expectancies of and experiences with a minimal intervention that served to bridge the time to starting regular psychotherapy. The patients' narratives revealed that the minimal intervention's focus on resolving symptoms was largely incompatible with the patients' need to seek insight into the cause of their current condition [24]. This result implies that the lower end of the intensity spectrum (in this case: up to four brief, 15-30 minute sessions in addition to the use of a written manual) might not suffice to fulfil patients' expectancies regarding treatment process and outcome. With regard to tel-CBT, there exists one extensive qualitative exploration of patients' acceptance of remote treatment delivery [5], which concludes that a shared construct amongst patients of accessing professional help outweighs the conventional construct of personal encounter in mental health service provision [5]. This finding implies that patients easily adapt to the telephone as a treatment modality, largely driven by the potential of increased access to care. However, the role of the self-help approach, and the interplay between the remoteness of care and the emphasis on patients' self-reliance, remain unclear. If low-intensity treatments are to be implemented broadly and in a sustained manner, it is important to identify unique aspects of this type of treatment format, including advantages and disadvantages of the type of treatment, as well as to explore mechanisms of change of low-intensity CBT with a special focus on tel-CBT. Patients' insights can thus help to refine components of a treatment. Given the large variety of treatment intensity in terms of extent of therapist support within the low-intensity interventions described above, qualitative results might help to differentiate the relative importance of the components involved by disentangling aspects of structured and guided self-help CBT from distinct properties of the telephone.
In view of the promising results regarding lowintensity interventions for patients with common mental health disorders, we aim at examining patients' narratives on their experiences with a telephone-delivered guided self-help CBT. Our qualitative study was conducted alongside a randomised controlled trial (RCT) investigating tel-CBT compared to treatment as usual (TAU) in routine care. We conducted this process evaluation for two main reasons: 1) We are interested in patients' evaluation of unique aspects of a low-intensity and tel-CBT in order to better understand both the role of the guided self-help approach, the telephone as a delivery mode, and their interplay; 2) Little is known about predictors and mediators of change in tel-CBT; an explorative methodology regarding subjectively helpful treatment elements and processes can help to generate hypotheses about mechanisms of change in processoutcome research. The overall aim is to understand patients' experiences with the treatment that might inform treatment conceptualisation and dissemination. Study context and Intervention This study was designed as a qualitative process evaluation with N = 13 patients nested within a larger RCT investigating the effectiveness and cost-effectiveness of a tel-CBT compared to TAU plus a mini intervention involving regular text messages about depression. Further study information can be found in the study protocol of the trial [41]. The study was approved by the local Ethics Committee. Patients randomised to tel-CBT were invited to take part in the qualitative interview study in order to contribute their experiences with and their views on the treatment concept. A total of 54 patients were recruited into the trial and randomised to either tel-CBT (n = 29) or TAU plus text messages (n = 25). Patient recruitment took place between January 2016 and August 2018 and followed a twofold strategy: Adult patients were either acquired by cooperating General Practitioners (GPs) of the Canton of Zurich or referred themselves to the study programme through newspaper reports and announcements. Trial inclusion criteria comprised a score between 6 and 15 on the Patient Health Questionnaire (PHQ-9) as well as a diagnosis of mild to moderate Depression according to the International Classification of Diseases (ICD-10). In addition, participants were required to provide written informed consent. Exclusion criteria were severe or chronic depression, suicidal ideation, participation in a psychological or psychiatric therapy at present or in the past 3 months, insufficient German proficiency, or physical or cognitive inability to complete questionnaires. Eligible and consenting patients received a short-term CBT, the German adaptation of "Finding inner balance" ( [33]; Tutty et al., [38]; [34]). The program comprises an initial face-to-face session and 8-12 subsequent, weekly and later biweekly scheduled telephone sessions of 30-40 minutes duration. The content is organised around four evidence-based elements of depression treatment (psychoeducation, behavioural activation, cognitive restructuring, and relapse prevention) and is provided in the form of a workbook for patients with 8 chapters. These include psychoeducational material, case vignettes, and exercise and homework sheets. The therapy was delivered by one of four trained study therapists. Therapists were clinical psychologists, who were in advanced CBT training and had several years of experience in treating patients with depression. 
All therapists received additional training in the tel-CBT manual prior to the trial and monthly supervision from a senior researcher and clinician (BW) during the intervention period. Participants The qualitative interview study took place between October 2017 and February 2018. Due to organisational reasons the time frame for the interviews was restricted to these months, implying that only patients of the RCT who had finished tel-CBT between October 2016 and February 2018 (i.e., whose termination of treatment took place in the previous 12 months before or within the interview period) could participate. Following a consecutive sampling strategy, all eligible patients (n = 14) were contacted and invited to participate in the interviews to describe their experiences with the intervention. One patient could not be reached, whereas all others were contactable and agreed to participate. Further 10 participants of the RCT who finished tel-CBT before October 2016 or after Februar 2018 could not be included in the interview study. One of these 10 patients was the only individual that dropped out from the RCT, because of a lack of time to undertake tel-CBT due to a new job situation (in-service training). The interviewed sample is predominantly female (87%), highly educated, and the age ranges from 26 to 79. All participants had at least one previous depressive episode and all interviewed participants were selfreferred to the RCT. Characteristics of the selected sample are displayed in Table 1. The sample does not differ from the overall RCT sample regarding clinical and socioeconomic characteristics. The only difference pertains the referral source: All of the interviewed patients were self-referred, whereas one third of the overall RCT sample was referred by a GP. Interview procedure We conducted semi-structured interviews to assess patients' experiences with and views on tel-CBT. The interview guide was developed by the study team and contained 16 open-end, inductive questions, which revolved around the themes reason for starting tel-CBT, experience with telephone as a therapy medium, therapy structure and content, therapist and working alliance. The interview guide started with broad questions and ended with the opportunity to express whatever patients wished. The interview guide was revised by a senior researcher (BW). The final version was tested in two training interviews and last changes in the exact wording of the questions were made thereafter. The English translation of the interview guide can be found in the supplementary materials (Additional file 1). Interviews were conducted over the telephone by one member of the research team (NB), who was a graduate student in clinical psychology and was previously trained by senior researchers and clinicians of the study team in conducting the interviews. The interviewer was neither involved in the treatment that patients had received nor in the study procedure and had no knowledge of patients' clinical history, treatment course and outcome, or personal information, apart from last name and contact details. All patients gave their written consent prior to the interviews. The interviews' duration ranged from 30 to 60 minutes. Data analysis The interviews were audiotaped and transcribed verbatim according to predetermined transcription rules by two research assistants. The interview data were subjected to a qualitative content analysis [26]. 
In accordance with current recommendations, data were analysed without prior knowledge of any trial outcomes in order to avoid any bias in interpretation [31]. The Software MAXQDA 2018 (Verbi Software, 2017) was used to assist the qualitative content analysis. A sequential model of inductive and deductive development of categories was applied whereby a category can be understood as a conceptual assignment to text-based codes. The coding units referred to statements of participants that convey a meaningful message. First, deductive categories based on previous empirical results and the themes of the interview guide were established in order to structure the content. Deductive categories mostly related to the category clusters. For example, one deductive category was labelled "Expectations towards treatment". The next step was to perform an iterative process of establishing inductive categories by grouping and coding text passages using the first 6 interviews. Two researchers (EH, NB) independently established additional categories inductively and subsequently discussed their codings, which resulted in a first version of the category system. Emergent topics were elaborated on in later interviews, following an iterative process of data reading, data coding, and data collection. In order to increase agreement, category clusters and categories were modified until consensus was reached. Inter-coder reliability was then established with three subsequent calculations of the inter-coder agreement in assigning statements to the categories within the category system. After each calculation, categories were modified and restructured until consensus was reached. The first calculation comprised 5% of all statements. No agreement was reached in 15% of those statements, mostly because of different codings made on subcategory level. Following this, the remaining interviews were coded and the resulting category system was again discussed and revised. For the subsequent second and third calculation of inter-coder agreement, 10% of all statements were coded and compared. In the last calculation, the two coders reached complete agreement in 68% of the codings, partial agreement in 98%, and no agreement in 6%. Partial agreement refers to a minimum of one congruent coding within the statement. Subsequently, a two-level category system with four category clusters and 15 categories was established. Results The categories (c) were structured within four category clusters (C). These include: 1) expectations and fears towards the intervention, 2) aspects of guided self-help, 3) aspects of the telephone as a treatment delivery modality, and 4) conclusion and implications for treatment pathway. In the following section the categories that emerged within the second, third, and fourth category cluster are explicated due to their significant informative content. They are illustrated with representative quotations, which have been translated from German into English. The first category cluster is described on the higherranking level, because the results on category level are of secondary importance for our research question. Expectations and fears towards low-intensity CBT When patients were asked to think about their preexisting expectations before the telephone-based therapy had started, most of them initially denied having had any specific expectations beforehand. Despite a general curiosity and simple interest in how the therapy was going to work out, they had rather neutral conceptions about the whole procedure. 
When asked for further details, they reported expectations concerning therapeutic and content aspects of the therapy as well as the telephone as a therapy medium. Most patients were unfamiliar with the cognitive behavioural approach and were unsure about what the therapeutic content would include. Their understanding was that they would observe their thinking and behaviour and that one goal would be to adapt and change them. There was an underlying scepticism as to whether this would be achievable through a telephone-mediated therapy. The most commonly reported concerns were the missing visual cues during telephone contact and whether the therapist would be able to acquire an adequate understanding of the patients' feelings without any visual information. "Of course, in the beginning I have definitely thought … I mean, you cannot see each other … , it could be weird, it might not work or it might not help ( … ). That it might be less personal, because you cannot see each other, or that you don't get into it as much." (P11) Aspects of guided self-help related to treatment process and outcome The second category cluster (C2) contains patients' accounts of specific aspects of the guided self-help CBT approach. Six categories were established and are outlined in the next section: the first four categories pertain to the therapists' role in the guided self-help CBT, while the last two revolve around CBT principles and structural aspects of guided self-help. Initial personal therapist contact (C2c1) The personal contact between patient and therapist served as a kick-off and was able to address initial concerns. Getting acquainted with the therapist was retrospectively considered important, although not necessary for all patients. The patients' accounts revealed that the face-to-face meeting with the therapist encouraged trust between them and made the therapy feel more personal. For these patients, the trustworthiness proved beneficial in two ways: first, it enabled the patients to be more open and honest about their private matters. Second, it appeared as though the initial personal contact strengthened the patient's therapy commitment. "I think it is better like this, compared to if we would not have seen each other at all. I don't think that I would have engaged with the same openness and honesty with a complete stranger, or a completely unknown voice or with knowing only the voice." (P3) "Well, for me it was very important, because I always had a mental picture of that person, who was on the phone. And I think, if it would have been completely without the first personal contact, it would not have had the same meaning, actually. In any case I have a positive memory of the first interview and it was a good opening, definitely." (P7) Patients also pointed out an initial appeal of the therapist, which was largely enabled through the initial, personal clinical interview. Getting to know the therapist made the therapy more personal and provided a basis for trust and security. "Somehow yes … it is not like: you call or you receive a call, but instead you somehow know … well I know where this person is calling from and who I am speaking with." (P4) Trust and emotional support in the therapist-client relationship (C2c2) Patients unanimously had a positive perception of their therapists, and this held for all three therapists involved in the study. Highlighted qualities were being sympathetic, considerate and empathetic. Patients generally embraced professionalism and warmth in the client-therapist relationship.
Some patients considered the good therapeutic relationship and the open, non-judgmental discussions the most helpful aspects of their interaction with the therapist. "I simply felt comfortable and felt like I was being in safe hands in terms of having the feeling that I can open up, without it getting to anyone or that someone would make fun of it or … No, not at all, I felt comfortable and was able to open up. I always had the feeling that I could tell whatever I want and that she [the therapist] absorbs it, embraces it and reacts to it." (P11) The non-judgemental atmosphere of the interactions with the therapist made the patients feel seen and heard, affording the patients' impression that their problems are accepted, acknowledged and taken seriously. What appeared beneficial for these patients was the therapist's recognition of their efforts and the fact that patients were sharing their success and insights with someone and felt vindicated. "[it is] important that for example one receives affirmation. When I say, I will do this or that. And that [the therapist] saw that from another perspective and could somehow validate me, or motivate me, and say: 'Yes, of course it is good that you do that'." (P9) In view of these narratives, it seems that the therapist successfully recognised and addressed patients' needs despite physical distance. Although the therapist's role within this tel-CBT was not considered equally essential by all patients, the therapist's support was relevant for all patients to be able to immerse into the therapy contents. Interplay between (professional) guidance and independent activity (C2c3) Patients welcomed the close supervision and support from the therapist, particularly in the beginning phase of the treatment. In the course of treatment, patients increasingly recognised the workbook as a therapeutic tool with personal relevance. The workbook did evolve into some sort of a "personal guidebook" with long-lasting value. " … that I could potentially have a look there in the future, like a recipe book:´How was it, when I took my notes, and what did I do back then?'. I find that very helpful." (P9) In particular, the textualisation of the spoken words during the therapy sessions helped to deepen the understanding of the contents and facilitated adequate preparations for the next session. Putting thoughts and experiences on paper enabled the patients to be more concrete about strategies and personal areas of concern and enabled a tangible treatment output. "I basically found it [the workbook] quite useful. Because if we would have only had a conversation, I can imagine that one would remain in a vague space, but if I still have the book afterwards … then I have something black on white; a support, or a possibility to engage in between sessions. I can then also elaborate the tool box more purposefully, when I have the book. I find it almost indispensable that there is a written support, or a written record of the topics." (P3) All patients valued the pragmatic approach of being provided with the therapeutic content on one hand and the possibility of autonomously engaging with the elaborated content on the other hand. Notwithstanding the independence, patients were in overall need of the professional support through the therapist. It was the combination of independent activity and the input received by the therapist that appeared central. It appears as though the therapists' contribution was needed for the contents described in the workbook to unfold. 
"With her together, yes, that was actually the solution to my problems. Or the approach … , in a way I could elaborate them by myself and I additionally received an input by a professional or further ideas and tips or affirmation as well. That was very helpful." (P9) The shared narratives about the therapeutic dialogue circulated around an appreciation that there was room for personal issues despite the strong structure of the therapy program. The set briefness and the structured agenda of the telephone sessions did not restrict flexible adjustments of the conversations to the patients' needs. "I always had the feeling that she [the therapist] also makes time for the other things ( … ) I could always tell her what was difficult. This was extremely important for me, that there was also room for that. And I would say that sometimes the bookit was certainly always an adressed topic and I always did my homework, but it is not … well there was space for the other stuff as well." (P12). Within the therapeutic dialogue, another therapist factorthe perceived competency of the therapistwas revealed, which was reflected in patients' portrayals of helpful input, suggestions, and new perspectives provided by the therapist. "I think she also took stunningly many notes, because she often referred to previous statements of mine. ( … ) This was sometimes indeed illuminating. Which sometimes also contradicted itself, and then I obviously asked myself why. And I could then look at it together with her [the therapist] and I found that somehow … this is what I meant when I said she impressed me. When she said something I was not prepared for. And then I started thinking about it properly, focused on it and I found that always very thrilling, I have to say." (P8) Cognitive behavioural principles (C2c5) Patients valued the solution-focused and pragmatic approach of the intervention and noted that a few simple and memorable strategies suffice to attain improved mood. With regard to CBT-specific principles all patients provided specific examples of elaborated techniques and described a clear function of the acquired techniques. By paying deliberate attention to feelings and thoughts, it was possible to become aware of thoughts, emotions, behaviour, and the interplay, for example, between social activity and mood regulation: "...during this time, I completely shut myself away, I stopped socialising ( … ) well now I go out again but it really struck me that this withdrawal can be part of the depression and I am now paying attention to arranging appointments with other people." (P4) Monitoring symptoms helped recognising that social withdrawal was part of the depressive symptomatology and encouraged patients to be mindful about fluctuations and triggers of (depressive) mood. While one patient perceived behavioural activation as the most helpful component (also due to its easy implementation), cognitive work formed the core of the treatment for almost all patients. This was reflected by shared accounts on concrete, individual cognitive techniques. One patient summarised this well by explaining to be "able to think my way up to the top in a downward spiral." (P10) Structure (C2c6) The continuity of the sessions and the regular interaction with the therapist were considered crucial for progressing in the therapy and for improving the mental state. The structure provided stability and afforded a prospect of goal attainment. "It is like a suspension bridge. 
There are two ropes, one on each side, and the deck underneath and one can … one knows, this is the way, when it gets stormy and [ … ] I then just trusted in that I can walk this way … and that it helps me along." (P4) The majority of the patients noted that the biweekly sessions allowed for more autonomous and independent engagement with the therapy content and with themselves. The rhythm provided opportunities to integrate therapeutic strategies into their daily lives by road-testing suggested strategies and by ascertaining the applicability of strategies in certain situations. While the intervals and the number of sessions were accepted by most patients, they provided diverse suggestions for improving the strict structure and limited number of contacts: one patient (P12), for example, despite being aware of it, was negatively surprised by the "sudden" change to biweekly periodicity; another patient wished for a more gradual and slow reduction of treatment sessions (P3), while a third patient would have appreciated the possibility of additional sessions (P7). Telephone-related aspects of therapy process and outcome Three categories evolved inductively within the third category cluster (C3) regarding aspects related to the telephone. Telephone as an advantageous delivery medium (C3c1) Patients expressed advantages of the telephone modality on a practical and executive level but also on a content level. With no need for co-location, the therapy becomes independent of time and location, which makes it more adaptable to different lifestyles and suitable for people with demanding jobs or family commitments. In addition, conducting therapy sessions at home in a familiar environment has the potential of increasing comfort in talking to the therapist. For most patients, the absence of visual cues contributed to that effect (see category "closeness vs. distance"). Perceived judgement and subconscious reactions to it in a personal interaction diminish, which increases focus on the essential. "I had the feeling that I was less distracted, almost more focused. When I am facing another person, for example, I always have a body language. Sometimes I get the feeling that I have to sit like this or in a certain way, or whatever you can conceive. Not on the telephone though." (P5) Another benefit of time and location flexibility is that there is no perceived need for disclosure of a patient's participation in psychotherapy. This allows the patient to avoid the social stigma associated with traditional face-to-face psychotherapy. "I am very busy in my job, I am in a leading position and it is rather difficult, if I -I would not be able to say that I will go to therapy in the afternoon. This would make them [co-workers] feel insecure, and I was really relieved, let's say a win-and-win situation that I can do the therapy like this." (P8) The omission of the periodic journey to the therapist allowed for a perceived anonymity, which facilitated the first step in starting psychotherapy. Perceived difficulties with the telephone (C3c2) The flexible character of the treatment did not result only in perceived advantages. For some patients these exact aspects posed challenges in their experience of tel-CBT. However, most of the reported difficulties were practical in nature, such as finding an appropriate location for receiving the phone calls or making sure that the mobile phone is charged.
Patients reported two challenges imposed by the telephone, which related to the therapeutic content: one patient observed that it was particularly challenging to convey emotions and feelings through words and descriptions alone, which resulted in only partially processing the mental state together with the therapist. "Maybe with the limitation, and that is probably part of the telephone therapy, that the deepening of single emotional states and also expressing these, this was not very possible. It remained more on the cognitive level. And even though that was also helpful and accommodated me, but real feelings, which are associated with specific mental states, like anxiety or depressiveness, this maybe for me was not so easy to express." (P7) The second difficulty emerged from the experience that the therapy taking place in the private environment involved a lack of distance from the problem area: "In a regular counselling, well I don't really know, but one also has fixed appointments. But one goes there and is basically out of the daily life, pulled out of what one is currently doing. The telephone conversations are more mood-dependent, because one does not prepare for it equally like going to the therapist, where one might get ready for it on the way there. Like this, the conversation comes bursting in and one then naturally has a snap-shot of the current mood." (P10) Closeness versus distance (C3c3) Properties ascribed to the telephone were placed between the poles of perceived (physical and emotional) closeness and distance. "I found it very special, because these telephone conversations do not take place in the therapist's office, but in my own environment. And that's what makes it more intimate, it makes the therapist be closer to me, strangely enough. ( … ) I am lying on my couch, within my own four walls and I perceive this as very close and personal. In fact, more personal than if I go to a therapist and sit decently on a chair. That for me is almost more distance." (P13) While the transfer of the therapeutic interaction into the patients' personal environment created a perceived emotional nearness to the therapist, the physical distance allowed for regulating the closeness. The interplay between closeness and distance is captured in the combination of being able to open up, yet deciding the extent of disclosure in the interaction with the therapist. "At some point I was somewhat sentimental and, in this moment, it was good for me to weep freely, without anyone seeing it ( … ). I think one could tell from my voice but you somehow do not need to show your face. This anonymity was pretty good in these moments. It is somehow not so embarrassing and one is protected by the telephone." (P4) Relevance of low-intensity CBT (C4c1) Putting the experiences with the treatment in a wider context, one quintessence was the relevance of the low-intensity CBT for the personal treatment path. The impact of the therapy concept is ascribed to an immediate dimension and to a more comprehensive, overarching one. Parallel to the narratives on concrete CBT strategies, on the content level patients reported on a tangible, directly available, and individualised outcome resulting from the treatment: "In the end, I think it is about packing the tool box or so. Mine is probably poorly equipped, but I have the exact right tools, I believe."
(P8) More specifically, the relevance of low-intensity CBT as a whole is attributed to the insight of having achieved stability and a sense of control: "I have the feeling that I have unrigged the support wheels, I am actually riding freely." (P8) On an overarching dimension, the narratives reveal a relevance in the long run, arising from the treatment's capability of increasing the patients' self-efficacy, also with an eye towards future difficult situations: "This is one main effect of this telephone-based therapy in fact … -support, and that it truly improved my own competency regarding coping with difficult mental states." (P7) Some patients mentioned that the low-intensity CBT launched a reflection process and the transfer of insights to other areas of life. "Well I simply noticed that this is also something, that is not just for a short period of time and that you are done with it and can leave it aside ( … ) I think it requires a constant staying on the ball and practicing and making aware of." (P12) Ultimately, for three patients with previously unsatisfying psychotherapy experience, the current treatment presented a corrective experience. For two patients, participating in tel-CBT served as a stepping stone to a regular, high-intensity evidence-based psychotherapy. Ephemerality of low-intensity CBT (C4c2) This category evolved due to expressed doubts regarding the sustainability of the treatment effect. Despite the perceived tangible and longer-lasting output of the guided self-help approach in the form of the personalised workbook and acquired individual skills, some patients recognised that an enduring benefit depends on continuing work and usage of the tools. For these patients it is questionable whether the therapy length and dose are capable of solidifying the therapeutic effect. "What is difficult to say, is that you never know what happens in half a year or a year. Will it last? Or would one be happy to go back to it, to taking up the thread again. Or strengthening or so … I find that now actually a little bit of a pity that somehow … It somehow disappeared, or that it is vanishing." (P3) The two patients who started high-intensity CBT after tel-CBT would have wished for a longer therapy in order to internalise the strategies learned, particularly the cognitive part of the treatment. Acceptance and satisfaction (C4c3) Generally, there was a positive evaluation of the treatment concept, a high satisfaction with the performance of the therapists, with the CBT emphasis, and with the procedure in general. "I think there is a wide spectrum of people, who might benefit from such a treatment format, from this kind of support." (P3) Discussion The overall purpose of the current process evaluation was to shed light on helpful factors inherent to guided self-help and factors associated with the telephone as a treatment medium, by qualitatively exploring patients' views and experiences with a telephone-based guided self-help CBT. The content analytic techniques of structuring, summarising, and explicating revealed four category clusters, which subsume 15 categories. The first category cluster is constituted of expectations as well as fears toward the treatment, both on a practical/executive and on a content level. Similar to findings from a qualitative study on expectancies in a minimal intervention [24], positive and negative expectations toward the treatment procedure and outcome were rather vague.
Patients' participation was largely driven by the appeal of the novelty of the therapy format and a general curiosity and interest in how therapy would work over the telephone. While a previous exploration of patients' views on remote technology-based care [4] highlights the perceived accessibility enabled through the telephone, our data reveal that facilitated access to psychotherapy was just one of several reasons for starting this type of treatment. Patients' narratives rather indicate the perception of being in the right place at the right time. It is important to note that, in contrast to Bee et al.'s [5] study, our patient sample was exclusively self-referred, even though the RCT was set in routine care, meaning that all interviewed patients explicitly chose this treatment out of the other alternatives available. Interestingly, three patients explicitly mentioned poor previous experiences with psychotherapy, which prompted them to try a new and seemingly less personal therapy. The second category cluster contains therapist- and treatment-related aspects of guided self-help CBT, corresponding largely to specific and common factors in psychotherapy research. The exclusively positive qualities attributed to the therapists underline the significance of the therapists' role in the guided self-help CBT. While we did not include standardised measures of the working alliance, parallels to Bordin's [8] conceptualisation of the therapeutic alliance can be drawn. The initial feeling of safety and trust as well as the interpersonal connectedness reported by the patients correspond largely to the solid foundation of bonds as one facet of the therapeutic alliance. Part of the emotional bond between therapist and patient was, in retrospect, attributed to the initial personal contact. Although this finding might support the assumption that therapeutic properties and contextual characteristics of the client-therapist interaction are contesting the legitimacy and quality of remote mental health consultation (May et al. [25]), it remains unclear whether the same level of trust and wellbeing would have emerged without the personal encounter at the beginning of the treatment. More importantly, the results show that patients varied in their perception of the personal encounter being an essential requirement for continuing the treatment with the same openness and comfort. Our data might support previous notions that the therapist contact per se is more important than whether the patient-therapist encounter is personal or not (Knaevelsrud & Maercker, 2007). However, since we were not able to compare patients' experiences of tel-CBT with and without a personal encounter, our results are unique to tel-CBT involving one initial face-to-face contact between therapists and patients. A central treatment-related category encompasses patients' narratives on CBT principles. In summary, the problem-focused approach and the emphasis on resolving current problems were deemed important determinants of a successful treatment. This contrasts with previous findings on the evaluation of a low-intensity CBT, where patients would have preferred to understand the roots of their condition rather than battle current symptoms [24]. It might be that the rather intensive therapeutic contact in our study facilitated a sufficiently profound elaboration of patients' history, which is also reflected in patients' accounts of a flexible adjustment of the programme to the patients' individual issues.
Moreover, the fact that patients made sense of diverse therapeutic strategies (e.g., monitoring the interplay between mood, thoughts, and behaviours; having ready individual variants of cognitive restructuring techniques) points towards the therapists' competency in delivering specific techniques. Additionally, the value placed on the therapists' input within the therapeutic dialogue and the patients' reliance upon the therapist bringing the contents of the workbook to life, testifies to the therapists' expertise and the perceived importance of the therapists' competency. It needs to be considered that the therapists in our study were clinical psychologists with advanced training in CBT in contrast to less specialised personnel (lay-therapists, study nurses, psychological well-being practitioner), which are commonly employed to deliver low-intensity CBT. The trade-off between the therapists' level of expertise, the intensity and length of treatment, and the associated costs needs to be evaluated in additional studies. Helpful "specific factors" in the treatment were primarily symptom monitoring and cognitive restructuring (e.g., recognising, challenging, and replacing irrational thoughts and beliefs). This result might be owed to the fact that our patient sample is middle-aged and predominantly shows mild to moderate depression severity, while behavioural activation has shown effectiveness in younger people [37] and more severely depressed patients [16]. The patients in our study show a comparatively high level of functioning, which is reflected in the high employment rate (70%) of the interviewed sample and the generally high level of social and physical engagement in both the employed and retired patients. It is possible that patients did therefore not perceive additional benefit in scheduling pleasant activities. Monitoring symptoms and reflecting on thought patterns, on the other hand, might have provided more tangible and new skills to understand and manage depressive symptoms. Future research might focus on the role and interrelation of symptom monitoring, cognitive work, and behavioural activation in low-intensity interventions. Ultimately, encouraging patients to become "experts" on themselves and to find ways to help themselves was one product of tel-CBT. These findings largely correspond to qualitative studies on experiences with traditional CBT suggesting that specific techniques as well as common psychotherapeutic ingredients are important from the patients´point of view [13,36]. The engagement with the workbook was placed at the core of the treatment. All patients reported of having used the workbook regularly during treatment, and most of them felt comfortable with writing down their feelings and thoughts. It needs to be noted that our patient sample is highly educated, so while writing down thoughts and reflections is feasible and helpful, generalisability is not possible; other patient populations (e.g. more severely depressed ones) might have more difficulties with written material so that personal therapeutic guidance might be even more important. Despite emphasising the workbook as a concrete and helpful aspect of the treatment, the accounts in this category illustrate that much of the workbook's impact was enabled by the therapist. This finding corresponds largely to a growing body of evidence on the impact of therapeutic homework on treatment effect (e.g., [21]). 
Within the homework literature, single studies have demonstrated that specific homework-related therapist behaviours lead to increased homework engagement on the patient side [11,14]. However, there is a lack of studies that directly compare the influence of therapist support and autonomous engagement with self-help activities on patients' homework compliance or on treatment outcome. Our finding that CBT techniques found complete expression in the synergy of workbook and therapist guidance might inform future studies that intend to empirically test the relationship between therapist behaviours regarding homework assignment and review, patients' homework engagement, and treatment outcome. The third category cluster arose from reported facets relating to the telephone as a treatment modality. The discourse on the treatment medium was characterised by a perception of establishing closeness in the absence of physical proximity [5], which highlights the capacity of the telephone to convey central therapeutic "ingredients" (in this case psychological closeness). Moreover, the value assigned to therapist skills and qualities, as indicated by previously reported categories, shows that it is possible to mediate aspects of a working alliance remotely, consistent with previous conclusions (e.g., [5]). Challenges that are often linked to the telephone as a medium, such as restricted communication due to missing visual cues, did not materialise. In addition, negative effects connected to the therapy taking place in the home setting (e.g., distraction) were not confirmed. Although one patient pointed out the potentially intruding character of the telephone calls, this was not mentioned as being problematic by others. Interestingly, most of the interviewed patients reported the opposite: that the mediation of therapy content by the telephone helped them focus on the therapy contents. The content dimension was placed at the centre of the therapeutic interaction while the interpersonal dimension stayed in the background. This also implies that telephone-based treatments might not be the most suitable option for patients with interactional difficulties. The fourth category cluster comprises implications for the patients' treatment pathways. One conclusion touched upon the ephemerality of the treatment effect. Some patients expressed doubts about the sustainability and long-term effects of the low-intensity CBT. While some patients pointed out the necessity to practice and internalise acquired skills in more depth, there is also quantitative evidence of an inconclusive long-term effect on symptomatology, with one study demonstrating high relapse rates in people completing low-intensity CBT [1]. However, more investigations of patients' pathways with a long-term perspective are necessary. The category satisfaction and acceptance emerged due to a generally positive evaluation of all treatment components and the treatment concept as a whole. While this finding is in line with previous positive evaluations and corresponds to low dropout rates in tel-CBT [28], there might be other reasons involved, such as low expectancies towards the treatment. It could be that, on the one hand, patients were positively surprised by the therapeutic comprehensiveness of what might at first have appeared to be simple telephone counselling, while on the other hand previous negative experiences with psychotherapy (reported by some study participants) might have set low expectations toward the treatment in the first place.
Several limitations warrant further discussion: First, we would like to point to problems at the core of the qualitative methodological approach, such as the question of whether we could reach saturation of data. Although the sample was broadly representative of the study participants of the RCT in terms of sociodemographic and clinical characteristics, we were not able to include patients who were not or only partially motivated for treatment due to their self-referral to the study. It would be interesting and important to explore patients' experiences and views when referred by GPs or other providers. Given the suitability of low-intensity CBT to be integrated into primary care, experiences of patients who did not actively seek this type of treatment, would allow for more externally valid results. However, the reports tended to be homogeneous within the interviewed sample and repetitive within the individual interviews after completing the interview guide, which is indicative of data saturation. Second, the predominantly positive evaluation of the intervention might have been influenced by a socially desirable response style. We tried to circumvent response bias by assigning an independent member of the study team as interviewer. Speaking about experiences in an interview might, however, still be more inhibiting compared to questionnaire-based items on satisfaction with the treatment. Third, although the study`s purpose was to differentiate between helpful aspects specific to the guided self-help approach from distinct properties of the treatment delivery medium, this type of data and analysis does not allow to draw definite conclusions about the relative impact of each component. Conclusion In summary, interviewed patients report positive experiences with this type of low-intensity CBT touching on structure, content, and procedure. While critical evaluations are related to practical aspects of tel-CBT, helpful facets are mostly afforded to the dimension of the treatment modality, the support by the therapist, and the interplay of those. We believe that the combination of professional guidance and self-help represents an appropriate balance between therapeutic input and patient's self-reliance for individuals in need of care.
CluSTerinG aS a Tool for ManaGinG inDuSTrial enTerPriSe ISSN 2071-2227, E-ISSN 2223-2362, Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu, 2020, No 3 M. i. ivanova1, Dr. Sc. (econ.), assoc. Prof., orcid.org/0000-0002-1130-0186, S. o. faizova2, Cand. Sc. (econ.), assoc. Prof., orcid.org/0000-0002-7243-0726, M. V. Boichenko1, Dr. Sc. (econ.), assoc. Prof., orcid.org/0000-0002-9874-3085, o. K. Balalaiev3, Cand. Sc. (Biol.), Senior research fellow, orcid.org/0000-0002-9389-4562, V. l. Smiesova4, Cand. Sc. (econ.), assoc. Prof., orcid.org/0000-0002-0444-4659 https://doi.org/10.33271/nvngu/2020­3/096 introduction. In today's conditions, stable performance of industrial enterprises and their competitive advantages are only possible on the basis of the inevitable organisational and technical restructuring of the management mechanism in ac cordance with the uptodate level of knowledge, technology, and organisation of activity. Most manufacturers are faced with the need for flexible adaptation to the changing market environment, globalisation, informatisation and regionalisa tion. They need to systematise and optimise all business pro cesses in order to reduce total costs, review the organisation of logistics and actively implement the latest technologies and concepts of doing business. Among the variety of ways to de velop the market, means of production, and new areas of ac tivity, clustering is a priority as a tool for managing industrial enterprises. Today, the formation of clusters is a prospect for the further development of many entities at the micro and macro levels. Due to the rapid changes in market conditions, the study of today's clustering methods, comparison of the re sults obtained and identification of economic entities that can form a cluster of industrial enterprises, becomes a burning problem that needs urgent solution. literature review. A large number of publications have been devoted to the issue of cluster formation. The researchers focus on identifying the efficiency correlation between enter prises that have similar operating parameters; these correlation links may serve as a basis for the mutual integration of the en terprises. It should be noted that the interest of scientists has been directed to the search for these patterns at both macro and micro levels. For example, the scientific approach to clustering of the countries of the world was substantiated in [1]. This approach is based on a neural network analysis that uses the selforga nizing Kohonen maps. This analysis allows clustering and, ac cordingly, grouping the countries of the world according to similar patterns of economic processes; revealing explicit and implicit relationships between them; establishing the weight and significance of each link (neuron) in the formed clusters and their correspondence to the input parameters. The methodology for formation of regional clusters, which takes into account the comparative economic advantages of the regions to ensure their sustainable economic development, was investigated by V. Glinskiy [2]. The author proved the in appropriateness of the regular regression model as it cannot be used in dynamics; he suggested eliminating this disadvantage by introducing the indicator of "cluster formation probabili ty". The coefficients of industry specialisation of the regional economy were used as predictors (variables) in the regression equation. 
However, the author of the model did not take into account the geographical proximity of the regions, which af fects the presentation of the results. The problem of cluster formation at the enterprise level has been investigated by domestic and foreign scientists. In par ticular, the Russian scientists A. Babkin, T. Kudryavtseva, S. Utkina used the category of "virtual enterprise" in the pro cess of constructing a cluster model [3]. This approach made it UDC 330.46:334.012.4 possible to combine multiple consumers into one virtual con sumer, thus simplifying the visualisation and analysis of the resulting cluster formation. This approach was further devel oped by G. Jucevicius, who focused on the formation of clus ters in developing countries. The scientist has highlighted the factors that contribute to and hinder this process. However, the scientist does not propose tools for assessing the possibility of cluster formations under the conditions of internationalisa tion and globalisation of the economy [4]. Lu Yi focused on the problem of 'a transfer' of small enter prises into industrial clusters. The researcher concluded that politics, market conditions, governance, finance and techno logical progress are key factors that influence making deci sions on the integration of small businesses into interfirm al liances [5]. Industrial symbiosis as a type of a cluster was studied in detail by B. Baldassarre [6]. This scientist shares the opinion of V. Glinskiy that the formation of a cluster is a prerequisite for sustainable development, which, to his mind, is based on two concepts -industrial ecology and circular economy. He car ried out a comparative analysis of these concepts, which re vealed that they are complementary and should be applied si multaneously for a full description of the cluster. V. Kayvanfar, in his turn, focused on the analysis of interconnections within an industrial cluster from the point of view of supply chain management (SCM) [7]. The scientist constructed a model of economic activity of enterprises using a twostage model of stochastic programming, followed by the use of acceleration strategies of Benders decomposition. The scientist considers the findings as a basis for making managerial decisions to min imize the total cost of a particular supply chain. However, the issue of evaluating the performance efficiency of the formed cluster remained unresolved. Therefore, in [8] these indicators were formalised; among them, the indicators of business activ ity and socioeconomic effect were singled out. The clustering process is the construction of an economic, organisational and legal model for the merger of enterprises and the development of integration processes; this involves improving the effectiveness of organisational innovations, making effective management decisions, identifying the main motives that encourage economic entities to establish control over the functioning of other enterprises in order to optimise the market mechanism for resource redistribution [9]. The ef fectiveness of clustering is affected by low qualifications of the workforce and insufficient level of its reproduction, lack of available capital, poor technologies, insufficient level of the educational and research system, and imperfect institutions. 
At the same time, clustering ensures an increase in the efficiency of interaction between enterprises, financial and research institutions and the government, an active use of the most important interrelations in the spheres of technology, qualifications, information, marketing and consumer demands, which are characteristic of the whole complex of enterprises and industries. These interrelations influence the direction and pace of innovation, as well as the competitiveness of end products. Therefore, there is a need for a further scientific insight into the methodology of clustering. Unsolved aspects of the problem. Despite considerable achievements of scientists in the investigation of clustering, the methodological base for the formation of industrial enterprise clusters remains underdeveloped in the current scientific publications; also, there is no single approach to the methods and tools that can be used to identify the enterprises that can be combined in a cluster. Purpose. The purpose of the article is to substantiate the methodological base of forming a cluster of industrial enterprises and establish a system of relationships between their cluster groups. To achieve this, a cluster analysis of enterprises was carried out by the hierarchical, competitive and graph methods; the obtained results were compared; the advantages and disadvantages of each method, when used for clustering industrial enterprises, were revealed; a new technique for a cluster analysis has been proposed, which is based on the search for communities in multilayer network graphs; entities that can form a cluster and establish partnership relations were identified by studying Ukraine's industrial enterprises. Methods. For a more thorough study, the three methods of cluster analysis were compared. The obtained results can be used to construct a cluster through the merge of industrial enterprises of the extractive and processing industry. In Approach 1, a hierarchical cluster analysis was conducted to organise multiple objects into an assigned number of clusters, using the raw data. It was based on the assumptions formulated by B. Duran and P. Odell: the object belongs to only one subset and the objects belonging to the same cluster are similar. At the same time, objects belonging to different clusters are dissimilar. In Approach 2, a competitive method of the geometric proximity of neurons to objects was used. This approach is based on the neural network technology of self-learning that uses the Kohonen self-organizing maps (SOM). The Kohonen map algorithm is based on the clustering of multidimensional vectors that characterise the objects under study using a given feature space. As a rule, all nodes in this neural network are arranged in the form of a certain organisational structure. When applied, the latter is preferably a two-dimensional network [10]. Approach 3 is a new clustering method proposed by the authors for finding communities in multilayered network graphs. This approach simultaneously considers the similarity of objects in several spaces that reflect various manifestations of economic activity of enterprises. The method, developed by T. Kamada, is based on a fundamentally different methodology for analysing multidimensional data, the so-called graph theory, according to which the connections between enterprises are presented as adjacency matrices [11,12]. The main sources of statistics were indicators of the Stock Market Infrastructure Development Agency of Ukraine [13].
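Approach 1 can be illustrated with a short script. The sketch below is not the SPSS procedure used by the authors; it is a minimal Python equivalent, assuming the eight financial indicators are available as a table with one row per enterprise. The file name is a placeholder, and the column labels follow the indicator abbreviations used later in the article.

```python
# Minimal sketch of Approach 1 (hierarchical clustering), not the SPSS run itself.
# "enterprises.csv" is a hypothetical input with one row per enterprise.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from scipy.stats import zscore

df = pd.read_csv("enterprises.csv", index_col="enterprise")
X = zscore(df[["Return", "Charge", "Gain", "Asset",
               "Nasset", "Casset", "Staff", "Cost"]].values)   # standardised indicators

d = pdist(X, metric="sqeuclidean")      # squared Euclidean distances between enterprises
Z = linkage(d, method="average")        # between-groups (average) linkage

labels = fcluster(Z, t=2, criterion="maxclust")                # cut the tree into two groups
print(dict(zip(df.index, labels)))
```

Average linkage on squared Euclidean distances is chosen here to stay close to the settings described in the article; other cut levels of the dendrogram can be inspected in the same way.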
For the convenience of analysis, each enterprise was assigned a conditional number. Results. The results of the hierarchical cluster analysis were obtained using SPSS 22, a statistical software package developed by Achim Buehl and Peter Zoefel (A. Bühl and P. Zöfel). Eight indicators were used to characterise the financial and economic activity of each enterprise: net sales revenue (Return), operating expenses (Charge), net profit (Gain), assets (Asset), non-current assets (Nasset), current assets (Casset), number of staff (Staff), and cost of sales (Cost). The names of the enterprises that form a cluster were used as the observation labels. Dendrograms were obtained using the between-groups (inter-group relations) linkage method. This allows identifying merged clusters and presenting proximity matrices that are constructed from squared Euclidean distances. In this case, at the maximum distance between the clusters, two unequal groups of enterprises were identified: the first group included No. 8 and No. 15; the second group involved all the remaining enterprises (Table 1). This result is explained by the use of data for a one-year period only. As a result, most businesses differ little from each other and easily fall into one mega-cluster. The results obtained in the hierarchical cluster analysis were based on economic indicators collected over eight years; they cannot be recommended for use because of the peculiarities of the analysed data; moreover, the above method has a fundamentally significant drawback, which is the absence of a single, rigorous, and mathematically proven criterion for an optimal split of a dendrogram into clusters when their number is not known in advance. In addition, agglomerative clustering algorithms (except the single linkage clustering method) assign to a cluster the objects that are grouped around the identified cluster centre. In this case, the produced clusters have the shape of hyperspheres, and the cluster structure changes depending on the radius of the circle. This does not allow solving the problem unambiguously. According to Approach 2, which uses the Kohonen self-organizing maps, an input matrix was formed, consisting of 31 observation vectors (by the number of enterprises) with a dimension of 40 (8 indicators over the last 5 years). All the variables were pre-standardized by subtracting the mean from the sample items and dividing by the standard deviation. Therefore, for all input variables, the mean is 0 and the standard deviation is 1. The neurons in the Kohonen layer were arranged according to a hexagonal topology of 50 × 50 neurons. The proximity measure was the Euclidean distance metric. A Gaussian function was used for smoothing the kernel. The initial learning rate was μ = 0.1. The learning was performed using stochastic gradient descent. To make the clustering robust, an iterative procedure for a phase adjustment of the clusters [14] was initiated by transferring the winning neurons with a correction radius s_CR = 0.7 over 100 epochs. The resulting U-matrix (Fig. 1) shows a large cluster of enterprises in the centre of the map. All the other objects do not form clusters with each other. SOM is primarily used to visualize the link densities between multidimensional objects on a flat map and to identify the agglomerated neural nodes into which the investigated subset of objects falls. The distribution of points on the map allows roughly estimating the topography of multidimensional data.
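For readers who want to reproduce the self-organizing map step, the configuration described above (a standardised 31 × 40 input matrix, a 50 × 50 hexagonal grid, Euclidean distance, a Gaussian neighbourhood and an initial learning rate of 0.1) can be approximated with the MiniSom library. This is a hedged sketch rather than the authors' code: the input file name, the neighbourhood width sigma and the number of iterations are assumptions, and the iterative phase-adjustment procedure [14] is not reproduced.

```python
# Sketch of Approach 2: a 50 x 50 hexagonal Kohonen map on the standardised
# 31 x 40 matrix (8 indicators x 5 years per enterprise). Illustration only.
import numpy as np
from minisom import MiniSom

X = np.loadtxt("indicators_5y.txt")            # hypothetical 31 x 40 input matrix
X = (X - X.mean(axis=0)) / X.std(axis=0)       # mean 0, standard deviation 1

som = MiniSom(50, 50, input_len=X.shape[1],
              sigma=2.0,                       # assumed neighbourhood width
              learning_rate=0.1,               # initial learning rate from the text
              neighborhood_function="gaussian",
              topology="hexagonal",
              activation_distance="euclidean",
              random_seed=42)
som.random_weights_init(X)
som.train_random(X, num_iteration=10000)       # stochastic, sample-by-sample updates

u_matrix = som.distance_map()                  # U-matrix: mean distance to neighbouring nodes
winners = np.array([som.winner(x) for x in X]) # map coordinates of each enterprise
```

The `winners` coordinates, i.e. the positions of the winning neurons on the map, are the natural input for the density-based step discussed next.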
In addition, the SOM algorithm tends to produce rounded cluster zones. For a more accurate identification of the groups of enterprises that tend to form an arbitrarily shaped cluster, the DBSCAN algorithm (Density-based spatial clustering of applications with noise) was used, which had been proposed by M. Ester, H.-P. Kriegel, J. Sander and X. Xu. Identification of clusters containing more than one object is only possible if there is one neighbour (m = 1) and a sufficiently sized ε-neighbourhood (Fig. 2). This pattern is typical of the elongated, closely spaced agglomerations in the data. Fig. 2 shows the enterprises that tend to cluster depending on the search radius. Thus, according to a breadth-first traversal of the graph with restrictions, it is advisable to combine the following 17 enterprises into a cluster: No. 5, No. 10, No. 11, No. 12, No. 15, No. 18, No. 19, No. 20, No. 21, No. 22, No. 23, No. 24, No. 25, No. 26, No. 27, No. 28, and No. 29. In Approach 3, based on identifying communities in multilayered network graphs, the following matrices were chosen for analysis: 1) the supplier-consumer relationship matrix (Links); 2) the weighted geographic distance matrix (Distance); 3) the matrix of forms of ownership (Property), which includes several state-owned enterprises and those belonging to financial-industrial groups. Approach 3 implies constructing weighted matrices that consider each layer individually and demonstrate the way the nodes (their communities) are connected within the layer. A number of restrictions are taken into account: it is assumed that excessively long distances between enterprises do not contribute to their integration due to inevitable logistics costs. Therefore, only enterprises spaced within 600 km are included in the weighted distance matrix, the other matrix elements being equal to 0. The distance of 600 km was found as the histogram minimum (Fig. 3). Only the cross-correlation coefficients significant at the p < 0.01 level were included in the enterprise metrics matrix. With these requirements, the absolute value of the coefficient exceeds 0.77. Hence, the model only takes into account 10.4 % of all possible correlations between enterprises, which are represented by arcs on the corresponding graphs (Fig. 4). The structure of the presented data corresponds to a multidimensional multilayer network. Each of the presented adjacency matrices can be displayed as a network graph in a separate layer. The mathematical basis for the analysis of multilayer networks has been developed relatively recently [12]. Network clustering is possible by searching for communities (groups of nodes) that have more edges with each other than with other nodes. The community structure analysis is based on optimisation of the Girvan-Newman Q modularity, which is equal to the probability of finding the same community structure in a randomly formed network [11]. Modularity is calculated as the normalised sum of the differences between the actual and the expected number of edges that connect pairs of vertices within a cluster, the expected number being taken for a randomly generated graph of the same dimensions. In other words, it is a measure of the deviation of the link density in the network from a randomly distributed density. The three-dimensional visualization of the multilayered network presents three communities (Fig. 5). The number of communities and the modularity of the individual layers vary from 2 to 5 and from 0.017 to 0.566, respectively (Table 2).
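Both steps described above, the density-based search for elongated groups of enterprises and the community analysis of the three-layer network, can be sketched in a few lines. The snippet below is an illustration only: the input files, the DBSCAN radius and the layer-aggregation rule are assumptions, and NetworkX's greedy modularity optimisation is used as a stand-in for the Girvan-Newman-based modularity optimisation referred to in the text.

```python
# Sketch of the DBSCAN step and of Approach 3 (multilayer network communities).
# File names, eps and the aggregation rule are assumptions for illustration.
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN
from networkx.algorithms.community import greedy_modularity_communities, modularity

# DBSCAN on the SOM map coordinates: min_samples=2 means "at least one neighbour" (m = 1)
winners = np.loadtxt("som_winners.txt")                       # hypothetical n x 2 coordinates
labels = DBSCAN(eps=5.0, min_samples=2).fit(winners).labels_  # -1 marks noise points

# Three adjacency layers: supplier-consumer links, distances (<= 600 km), ownership.
# In practice the distance layer should be rescaled so that closer = larger weight.
layers = {name: nx.from_numpy_array(np.loadtxt(f"{name}_matrix.txt"))
          for name in ("links", "distance", "property")}      # hypothetical files

for name, G in layers.items():
    comms = greedy_modularity_communities(G, weight="weight")
    print(name, len(comms), "communities, Q =",
          round(modularity(G, comms, weight="weight"), 3))

# One possible aggregation: sum the layer matrices and analyse the combined graph
A_sum = sum(np.loadtxt(f"{name}_matrix.txt") for name in layers)
G_agg = nx.from_numpy_array(A_sum)
agg = greedy_modularity_communities(G_agg, weight="weight")
print("aggregated Q =", round(modularity(G_agg, agg, weight="weight"), 3))

pos3d = nx.kamada_kawai_layout(G_agg, dim=3)   # 3-D spring layout of the kind shown in Fig. 5
```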
The aggregated modularity is quite low, Q = 0.072, and approaches the standard confidence level of 0.05, which indicates a confident combination of nodes in the communities. Since the number of vertices of the graph does not exceed 100, the geometric location of the network nodes is optimized according to the minimum internal energy of the spring-embedded particle system, using a force-directed algorithm developed by T. Kamada and S. Kawai. In this algorithm, the graph is a system of springs, but if a pair of vertices is geometrically spaced very close or very far apart, then a force of attraction or repulsion is applied to the vertices. The algorithm searches for the values of the variables that minimise the potential energy function E(x_1, x_2, …, x_n, y_1, y_2, …, y_n, z_1, z_2, …, z_n); for the 3D model these are the Cartesian coordinates of the nodes, since at a global (single or absolute for a given function) minimum, all the partial derivatives of the energy function are 0. In our opinion, it is relevant to use the results obtained by applying Approach 3 to search for multilayered network communities; also, the enterprises that formed the first community are prone to form a cluster that will aggregate the above enterprises, educational institutions and innovative bodies. Identifying a cluster is the basis for its productive functioning, development and obtaining unique competitive advantages. In particular, forming and establishing partnerships will significantly increase the competitiveness of each enterprise in the cluster and enhance the potential of the metallurgical complex. For example, it opens up opportunities for more extensive use of up-to-date R&D, advanced production technologies, and new types of economic resources; it allows manufacturing innovative products, improvement of workforce qualifications, obtaining external sources of financing, crediting and investing; solving the problem of resources; expanding markets, and others. Therefore, this will be the basis for improving the efficiency of each cluster member. Conclusions. The relevancy of the formation of industrial clusters is proved by the results of the cluster analysis. The authors have applied a hierarchical cluster analysis and used it to identify two unequal groups of enterprises: the first group included two companies; all the remaining enterprises fall into the second group, which is explained by the use of data for only a one-year period. The addition of metrics over five years in Approach 2 (the competitive approach of geometric proximity of neurons to objects, which is based on the neural network technology of self-learning and uses Kohonen self-organizing maps) does not fundamentally change the cluster structure. Only the use of a completely new clustering method (Approach 3), i.e. the search for multilayer network communities based on the modularity of the graph, which involves the use of multiple object proximity matrices (supplier-consumer relationships, geographical distances, patterns of ownership), allowed distinguishing three enterprise communities that are network analogues of clusters. Considering the crisis in the functioning of metallurgical enterprises, we recommend forming a cluster that will include metallurgical enterprises of the mining and processing industry and academic and research institutions; this will provide a good basis for effective development, efficient performance and obtaining additional competitive advantages.
The formation and establishment of partnerships will significantly increase the competitiveness of each enterprise in
2020-07-16T09:07:12.086Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "471125f6f295c6e762f0fc20fd0a8f72d69b8608", "oa_license": null, "oa_url": "http://nvngu.in.ua/jdownloads/pdf/2020/03/03_2020_Ivanova.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "405d821992f74cf4d7615fddd9767d5d94123dea", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
218849159
pes2o/s2orc
v3-fos-license
COPD-derived fibroblasts secrete higher levels of senescence-associated secretory phenotype proteins COPD-derived fibroblasts have increased cellular senescence. Senescent cell accumulation can induce tissue dysfunction by their senescence-associated secretory phenotype (SASP). We aimed to determine the SASP of senescent fibroblasts and COPD-derived lung fibroblasts, including severe, early-onset (SEO)-COPD. SASP protein secretion was measured after paraquat-induced senescence in lung fibroblasts using Olink Proteomics and compared between (SEO-)COPD-derived and control-derived fibroblasts. We identified 124 SASP proteins of senescent lung fibroblasts, of which 42 were secreted at higher levels by COPD-derived fibroblasts and 35 by SEO-COPD-derived fibroblasts compared with controls. Interestingly, the (SEO-)COPD-associated SASP included proteins involved in chronic inflammation, which may contribute to (SEO-)COPD pathogenesis. INTRODUCTION Accelerated lung ageing has been postulated to contribute to the pathogenesis of COPD. 1 Several mechanisms of accelerated ageing have been identified in COPD, 1 2 of which cellular senescence is most extensively described to be increased in lung tissue and structural cells from patients with COPD. 3 Cellular senescence is an irreversible cell cycle arrest that prevents cell death. 4 Senescent cells secrete (pro-inflammatory) proteins, called the senescence-associated secretory phenotype (SASP), to recruit immune cells for their clearance. However, on accumulation of senescent cells, high levels of SASP proteins can have detrimental effects on the surrounding tissue, by inducing chronic inflammation and tissue dysfunction. 5 The SASP is cell type specific and its potential (negative) impact on surrounding cells largely depends on the composition and level of secretion of these SASP proteins. Examples of previously described SASP proteins include interleukins, chemokines, growth factors and proteases. 6 7 Recently, we demonstrated higher levels of cellular senescence in lung fibroblasts and lung tissue from patients with older, mild-moderate COPD and patients with severe, early-onset (SEO)-COPD compared with their matched controls. 8 Patients with SEO-COPD develop very severe COPD at a relatively early age with relatively low numbers of pack-years. Thus, accelerated lung ageing, including cellular senescence, may contribute to SEO-COPD. The SASP of senescent primary lung fibroblasts and COPD-derived fibroblasts is not defined yet and thus the potential impact of senescent fibroblasts on the surrounding lung tissue is unclear. Therefore, we aimed to first identify SASP proteins of senescent primary human lung fibroblasts and second to determine which of these SASP proteins are secreted at higher levels by COPD-derived fibroblasts, including SEO-COPD, compared with their matched non-COPD control-derived fibroblasts. METHODS Cell culture supernatants from lung fibroblasts from 10 patients with SEO-COPD and 11 patients with older, mild-moderate COPD and, respectively, 9 and 10 matched non-COPD controls were used (table 1), which were collected as previously described 8 (a detailed description of the methods can be found in the online supplemental). Briefly, cellular senescence was induced in fibroblasts from all subject groups by paraquat (PQ) treatment (250 µM for 24 hours), which by occupational exposure is a risk factor for COPD, and can induce senescence specifically via mitochondrial reactive oxygen species production. 
9 10 Senescence induction was confirmed by a 40% increase in SA-β-gal positive cells and a sevenfold increase in p21 expression. 8 Cell culture supernatants were collected 4 days after senescence induction. The highly sensitive Olink Proteomics (Olink Proteomics, Uppsala, Sweden) panels Inflammation and Cardiovascular III were used to measure the secretion of 184 proteins, of which 165 proteins passed quality control. Since cell numbers at the end of culture were significantly different between COPD and control and between PQ and untreated (online supplemental figure S1), levels of secreted proteins were corrected for these cell numbers. Significant differences between PQ-treated and untreated cells were tested using the Wilcoxon signed-rank test adjusted for multiple testing using Benjamini-Hochberg. Proteins were defined as SASP proteins when a significant (FDR<0.05) ≥threefold increase in secretion was observed after PQ treatment. Next, statistical differences in SASP protein secretion between untreated COPD-derived and control-derived fibroblasts were tested using the Mann-Whitney U test. FDR p<0.05 was considered statistically significant. Finally, pathway analysis of COPD-associated SASP proteins was performed using the STRING database (V.11.0) to provide more insight into the function of the SASP proteins and their potential role in COPD, while it should be noted that the selected panels may have caused a bias in the analysis.

(Table 1 note: Data are presented as medians with interquartile ranges unless otherwise stated. Significant differences between groups were tested using Mann-Whitney U tests or unpaired t-tests. P values are stated. GOLD stage based on FEV1 % predicted. %pred, % predicted; SEO, severe, early-onset.)

RESULTS

First, the secretion of 124 proteins was significantly increased ≥threefold after senescence induction by PQ, and these proteins were thus defined as SASP proteins of senescent primary lung fibroblasts (the top 50 are shown in figure 1A; see online supplemental table S1 for all SASP proteins). We compared our SASP composition with the recently published SASP Atlas 7 and other literature and included the overlap in online supplemental table S1. Of the 124 identified SASP proteins, 70 were previously described, including GDF-15 and CCL-3 (figure 1B). In addition, our approach revealed 54 potentially novel SASP proteins, including GDNF and TGF-α (figure 1C). We validated the Olink Proteomics platform by measuring IL-8 using ELISA. A similar increase in IL-8 secretion was detected by ELISA after PQ-induced senescence, with a significant positive correlation with IL-8 levels measured by Olink Proteomics (figure 1D). Next, the secreted levels of these 124 defined SASP proteins were evaluated in untreated cell culture supernatants from patients with COPD compared with their matched control-derived fibroblasts. We observed higher levels of 42 SASP proteins in supernatants from COPD-derived fibroblasts (figure 2A; see online supplemental table S2 for a detailed overview). The three proteins with the highest median fold change were RANKL, FABP4 and IGFBP-1 (figure 2B). Several of the COPD-associated SASP proteins were previously found to be more highly expressed at the transcription level in COPD-derived lung tissue compared with controls, including vWF, CHIT1, SPON1, TR-AP, TIMP4, PECAM1, CDH5, PSP-D, IL-15RA. 11 Furthermore, several COPD-associated SASP proteins were associated with ageing in lung tissue at the transcription level, including t-PA, CHIT1, SPON1, IL-10RA and CXCL9. 12
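The SASP-calling rule described in the Methods (paired Wilcoxon signed-rank tests between PQ-treated and untreated supernatants, Benjamini-Hochberg correction, and a ≥threefold increase) can be sketched as follows. The arrays below are random stand-ins, not the study's Olink measurements, and the fold-change definition shown is one of several possible choices.

# Minimal sketch of the SASP-calling criterion described in the Methods.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_proteins, n_lines = 165, 20                      # proteins passing QC, fibroblast lines
untreated = rng.lognormal(1.0, 0.3, size=(n_proteins, n_lines))
treated = untreated * rng.lognormal(0.8, 0.5, size=(n_proteins, n_lines))

# Paired test per protein, then Benjamini-Hochberg FDR across proteins.
p_values = np.array([wilcoxon(treated[i], untreated[i]).pvalue for i in range(n_proteins)])
_, fdr, _, _ = multipletests(p_values, method="fdr_bh")
fold_change = np.median(treated / untreated, axis=1)   # illustrative fold-change definition

is_sasp = (fdr < 0.05) & (fold_change >= 3.0)
print(f"{int(is_sasp.sum())} proteins meet the FDR < 0.05 and >= 3-fold criterion")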
On subgroup analyses, 35 of the 42 COPD-associated proteins were secreted at higher levels by fibroblasts from patients with SEO-COPD compared with their matched controls (online supplemental table S2), whereas this was not the case for the patients with older, mild-moderate COPD compared with their matched controls. Finally, STRING pathway analysis revealed that responses to stimuli, immune responses and cytokine-related pathways are associated with the COPD-associated SASP proteins (data not shown). COPD-associated SASP proteins include cytokines (IL12B, TNFSF14 and RANKL) and chemokines (CCL15, CCL23 and CXCL9) that are known to be involved in inflammatory processes. These findings suggest that the SASP proteins that are secreted at higher levels by COPD-derived fibroblasts might be involved in the chronic inflammatory response in COPD.

CONCLUSION

By using a proteomic-based approach, we provide insight into the SASP of primary human lung fibroblasts. Interestingly, 42 of the 124 identified SASP proteins were secreted at higher levels by fibroblasts from patients with COPD compared with matched controls. The COPD-associated SASP proteins include proteins that have been implicated in chronic inflammation, and thus may contribute to disease pathology in COPD. Remarkably, 35 of these 42 COPD-associated SASP proteins are secreted at higher levels by patients with SEO-COPD compared with their matched controls, whereas none were significantly different between patients with older, mild-moderate COPD compared with their matched controls. This lack of significance is likely due to higher biological variation in these older subgroups, as the fold changes are comparable (online supplemental table S2) and the interquartile ranges are higher in these groups (online supplemental figure S2). These results suggest a role for these SASP proteins in COPD. The fact that both cellular senescence and SASP protein secretion were higher in COPD-derived lung fibroblasts compared with their matched controls suggests that senescence accumulation is involved in the pathogenesis of COPD. It should be noted that until now it is unknown whether the higher senescence observed in COPD is driven by acute exposures or chronic exposures, which may result in a different SASP profile. In addition, different senescence-inducing stimuli may result in a different SASP profile as well. The identified (COPD-associated) SASP proteins of primary lung fibroblasts can be used for further studies to understand the role of senescent cell accumulation and its potential detrimental impact in SEO-COPD pathogenesis.

(Figure 2 caption, partial: for details see online supplemental table S2. Significant differences were tested using Mann-Whitney U tests. Benjamini-Hochberg adjusted FDR<0.05 was considered statistically significant. Medians with 95% CI are plotted. The SEO-COPD-associated SASP proteins are indicated with a star behind the protein names. No older, mild-moderate COPD-associated SASP proteins were found. The three COPD-associated SASP proteins with the highest fold change in medians are plotted in dot plots (B). Green=SEO-COPD-matched controls (n=9), red=SEO-COPD (n=10), blue=older, mild-moderate COPD-matched controls (n=10), yellow=older, mild-moderate COPD (n=11). Protein levels are depicted as Olink NPX values corrected for cell numbers. Lines represent medians. SASP, senescence-associated secretory phenotype; SEO, severe, early-onset; FDR, false discovery rate.)
2020-04-23T09:14:41.776Z
2020-03-05T00:00:00.000
{ "year": 2020, "sha1": "d3f2a134291818c53ee5eaaa6a764f3608c8a2a8", "oa_license": "CCBY", "oa_url": "https://thorax.bmj.com/content/thoraxjnl/76/5/508.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "5bc61691dd6beb93792810df4e2e1461f6959e37", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232135201
pes2o/s2orc
v3-fos-license
Inverse design of Raman amplifier in frequency and distance domain using Convolutional Neural Networks We present a Convolutional Neural Network (CNN) architecture for inverse Raman amplifier design. This model aims at finding the pump powers and wavelengths required for a target signal power evolution, both in distance along the fiber and in frequency. Using the proposed framework, the prediction of the pump configuration required to achieve a target power profile is demonstrated numerically with high accuracy in C-band considering both counter-propagating and bidirectional pumping schemes. For a distributed Raman amplifier based on a 100 km single-mode fiber, a low mean set (0.51, 0.54 and 0.64 dB) and standard deviation set (0.62, 0.43 and 0.38 dB) of the maximum test error are obtained numerically employing 2 and 3 counter, and 4 bidirectional propagating pumps, respectively. INTRODUCTION In long-haul optical communications, amplifiers play a crucial role in compensation of the link losses. Designing a desired optical amplification scheme is challenging as the requirements in noise figure (NF) and gain profile can be strict. Regarding this, Erbium-doped fiber amplifiers (EDFAs) and distributed Raman amplifiers (DRAs) have been extensively researched. EDFAs are more power efficient while DRAs provide low NF. Moreover, DRAs' power profile can be adjusted easily by changing the power and wavelength of the pumps [1, 2], which makes them more attractive for wideband wavelength division multiplexed (WDM) scenarios [3]. One of the main challenges with inverse DRA design is to realize the pump configuration based on a desired gain at the end of the link. Several machine learning solutions have been proposed for this problem in the literature. In [4], a neural network (NN) model averaging along with a fine tuning technique is employed to learn the relationship between the desired gain profile at the end of fiber and the corresponding pump powers and wavelengths in a SMF link. Furthermore, [5] proposes an Autoencoder (AE) scheme by embedding a differentiable Raman amplifier model in the training procedure of a NN to predict the pump parameters for a specific family of gains in a FMF link. Alternatively to designing the power spectral density at the amplifier output (frequency domain), being able to control the power evolution over the transmission span (spatial domain) offers a variety of advantages. Uniform distribution of the power along the span results in a quasi-lossless transmission which minimizes the amplified spontaneous emission (ASE) noise level [6][7][8]. Such a power distribution would also help several of the Kerr nonlinearity mitigation techniques currently being investigated, e.g. transmission based on the nonlinear Fourier transform theory which assumes lossless transmission [9] or nonlinearity mitigation using mid-link optical phase conjugation which requires a symmetric power distribution [10]. A significant research effort has been devoted both numerically and experimentally into achieving a desired signal power evolution over a narrow frequency bandwidth but with limited work presented on full C-band. So far, no optimization method has been proposed for addressing the power evolution design jointly in spatial and frequency domain. In this paper, we propose a supervised deep CNN architecture for inverse DRA design in a SMF link to find the pump powers and wavelengths based on the two dimensional signal power profile in frequency and distance along the fiber. 
The proposed method employs two networks trained in an end-to-end procedure: a CNN as the feature extraction stage followed by a multi-layer NN as the regression stage for predicting the pump power and wavelength values based on the extracted features. The proposed method reduces the high spatial redundancy of the signal power in frequency and distance and also extracts informative features for prediction of the pump parameters. For the evaluation of this framework, numerical simulations are demonstrated in C-band utilizing both counter-propagating and bidirectional pumping schemes. The remainder of the paper is organized as follows. In Section II, the system-level CNN-based architecture and the training configuration for inverse DRA design are described. Section III demonstrates the numerical simulation results of the proposed method for counter- and bidirectional propagating schemes in C-band. Finally, Section IV concludes the paper.

(Fig. 1 block labels: target power profile, min-max normalization, feature extraction network, flatten layer.)

SYSTEM LEVEL INVERSE DRA DESIGN

A. CNN architecture for inverse DRA design

Considering a system with a forward mapping denoted as Y = f(X), the inverse design problem aims at modeling the mapping f^-1(.) in order to find the input X providing a desired target output Y. In many cases, the forward mapping is straightforward and can be solved numerically, while the inverse mapping may be very complex or even unknown. Regarding this, machine learning methods, especially deep learning algorithms, have shown promising performance in learning the approximate inverse mapping between X and Y based only on samples (X_s, Y_s) generated from the forward model f(.) [4]. For simplicity in our further analysis, we will refer to X and Y as the sampled versions of the input and output spaces. In inverse DRA design for a SMF link based on signal power evolution, the forward mapping can be described as P_s(f, z) = f(P_pump, λ_pump), where P_s(f, z) = [p_ij] of size N_ch × N_z is the two-dimensional signal power, in which p_ij is the signal power at the i-th frequency channel and j-th distance index in a WDM system with N_ch channels and N_z distance points, f(.) is a system of nonlinear differential equations for the Raman amplification scheme [1], P_pump = [P_1, · · · , P_Np]^T is the pump power vector with T denoting the transpose operator, and λ_pump = [λ_1, · · · , λ_Np]^T is the pump wavelength vector. To be able to simultaneously predict the pump powers and the wavelengths, we present a deep learning algorithm to model the inverse mapping [P_pump; λ_pump] = f^-1(P_s(f, z)). The dimensionality of the problem does not allow simply applying the frameworks of [4,5], since in order to make the input compatible with these methods, the power profile for each sample should be converted to an array of length N_ch × N_z, which results in an extremely large and complex network. For instance, if a system has N_ch = 40 channels and a 100 km span length with a distance resolution of 1 km, N_z = 100, the number of nodes of the input layer will be N_ch × N_z = 4000. The mapping between such a high-dimensional input and the pumping configuration requires a network with a high number of trainable parameters, which not only takes too much time to be trained but is also prone to problems like overfitting and local minima. Additionally, using the approach of [4,5] would be unnecessarily complex as it would not take advantage of the inherent correlation within the input data.
Clarifying this, each point in the WDM frequency-distance space, resembling a pixel of a two-dimensional image, has a high spatial redundancy. This means that adjacent points have a large amount of information in common; however, fully-connected NNs are not capable of reducing these spatial redundancies. Concerning these two main problems, we found CNNs more attractive since they have been designed to process data coming in the form of multiple arrays, like images, and moreover, they can successfully capture the spatial and temporal dependencies in two-dimensional data through the application of relevant filters [11] and weight sharing. For a CNN-based demonstration of inverse DRA design, we consider the distance-frequency power evolution matrix P_s(f, z) as a two-dimensional input to the network, aiming to predict the pump configuration leading to the target P_s(f, z). A diagram of the CNN-based method is illustrated in Fig. 1. The proposed framework is made up of two stages trained end-to-end, a feature extraction and a regression network, with trainable parameters θ_R and θ_F, respectively. First, a pixelwise min-max normalization is performed on the input data as a pre-processing step. The minimum and maximum values selected for each frequency-distance point are equal to the minimum and maximum values of that point over the training set, respectively. Afterwards, the normalized profile is passed to the feature extraction network R(.; θ_R), which consists of three CNN layers with n_1, n_2 and n_3 filters of size f_1 × f_1, f_2 × f_2 and f_3 × f_3, respectively. Moreover, each CNN layer is followed by a rectified linear unit (ReLU(x) = max(0, x)) as the activation function, which can speed up the training process due to its simplicity in gradient calculation. Furthermore, spatial pooling is carried out by three average-pooling layers inserted in between successive CNN layers with window sizes of m_1 × m_1, m_2 × m_2 and m_3 × m_3, respectively. The function of the pooling layers is to progressively reduce the spatial size of the input feature maps, resulting in a lower number of parameters and computations in the network. It is worth noting that the reduction in the scale of the representation by each pooling layer is equal to its window size. Consequently, each layer of this network generates informative and compact representations of the input through nonlinear mappings. The output of the last CNN layer is a three-dimensional representation of the input profile consisting of n_3 different two-dimensional representations generated by the different filters of the last layer, each with spatial sizes of q = N_ch/(m_1 × m_2 × m_3) and r = N_z/(m_1 × m_2 × m_3). This 3D representation is thereafter converted to a vector of length q × r × n_3 by the flatten layer and then passed to the regression network F(.; θ_F). The objective of this network, modeled as a deep fully-connected network, is to map the extracted features to the pumping setup. This network has four layers: the flatten layer of size q × r × n_3, two hidden layers of size N_h1 and N_h2, and the last layer of size 2N_p, representing the pumping configuration vector. The values of N_h1 and N_h2 are optimized depending on the proposed pump configuration.
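A minimal PyTorch sketch of the two-stage architecture described above is given below. The layer structure follows the paper's symbols (n_i, f_i, m_i, N_h1, N_h2, N_p), but the concrete filter counts, kernel and pooling sizes, hidden widths and the dynamic computation of the flattened size are illustrative choices, not the authors' implementation.

# Illustrative PyTorch sketch of the feature extraction + regression network.
import torch
import torch.nn as nn

class InverseRamanCNN(nn.Module):
    def __init__(self, n_ch=40, n_z=100, n_pumps=2,
                 filters=(32, 32, 32), kernel=3, pool=2, hidden=(40, 40)):
        super().__init__()
        layers, in_ch = [], 1
        for n_filters in filters:
            layers += [
                nn.Conv2d(in_ch, n_filters, kernel_size=kernel, padding=kernel // 2),
                nn.ReLU(),
                nn.AvgPool2d(pool),        # reduces each spatial dim by the window size
            ]
            in_ch = n_filters
        self.feature_extractor = nn.Sequential(*layers)   # R(.; theta_R)

        # Infer the flattened feature length (q * r * n_3) with a dummy pass,
        # which also handles dimensions not exactly divisible by the pooling windows.
        with torch.no_grad():
            flat = self.feature_extractor(torch.zeros(1, 1, n_ch, n_z)).numel()

        self.regressor = nn.Sequential(                    # F(.; theta_F)
            nn.Flatten(),
            nn.Linear(flat, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], 2 * n_pumps),             # [P_pump; lambda_pump]
        )

    def forward(self, power_profile):                      # (batch, 1, N_ch, N_z)
        return self.regressor(self.feature_extractor(power_profile))

model = InverseRamanCNN()
print(model(torch.rand(4, 1, 40, 100)).shape)              # -> torch.Size([4, 4])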
B. Training and evaluation

Since the proposed approach relies on supervised learning, a data-set D = {Y_k, X_k | k = 1, · · · , K} needs to be generated, where K is the number of samples, and Y_k = [P_pump^k; λ_pump^k] and X_k = P_s^k are the pumping configuration vector and the corresponding 2D signal profile of the k-th sample, respectively. In this paper we focus on a data-set generated by solving the Raman amplifier differential equations [1], denoted as the Raman solver, for different pump powers and wavelengths. For each sample, each value of the pump parameters, denoted as the m-th value of the vector Y, has been drawn from a uniform distribution between y_m^min and y_m^max, the minimum and maximum values allowed to be taken by the m-th value of Y, respectively. After the data generation, as in most supervised learning approaches, we divide the data into separate training, testing and validation sets. Also, we make sure that the training set contains the minimum and the maximum values of each dimension of the input signal power evolution matrix X_k to have a good generalization property [4]. The overall model of the inverse design network can be described as Ŷ = F(R(X; θ_R); θ_F), in which both R and F are jointly trained to minimize the average cost over the training set, (1/L) Σ_{l=1..L} C(Y_l, Ŷ_l), where L is the number of training samples, Ŷ_l = F(R(X_l; θ_R); θ_F) is the approximated value, and, for each sample, C is the mean square error (MSE) between the target and the approximated pump set-up values Y_l and Ŷ_l. The parameters of the network are updated in an iterative approach by means of the gradient descent algorithm and backpropagation [11]. Furthermore, the advanced optimization algorithm RMSprop [12] is employed for updating the parameters, as it provides a fast and robust convergence for each parameter. Once the training of the network has been completed, we fix the set of learnt parameters of the network θ = {θ_R, θ_F} to evaluate its performance. To this end, we put the network into a schematic as illustrated in Fig. 2. In this scheme, for each input power profile, the corresponding pump powers and wavelengths are predicted using the network in Fig. 1 and then passed to the Raman solver RS(.) to compute the power profile based on the predicted pumping setup. Afterwards, the maximum absolute difference between the predicted and the input power profile, calculated over the frequency (f) and distance (z) domain, is taken as the final prediction error for each sample.

SIMULATION RESULTS

In this section, we investigate the CNN-based framework presented in the previous section for the design of Raman amplifiers in C-band. The data-sets are generated using the Raman solver provided by GNPy [13], an open-source application developed recently for analyzing optical networks. We consider a single span and analyze the evolution of the power profile jointly over the distance and the entire C-band (between 192 and 196 THz). Also, three propagation cases are deployed for the evaluation of the proposed method: two counter-propagating cases with 2 and 3 pumps and a bidirectional propagating case with 4 pumps (2 co- + 2 counter-propagating). The ranges for pump powers and wavelengths are specified in Table 1. The superscripts (-) or (+) on the power ranges specify the counter- or co-propagation of the corresponding pump, respectively. We divided the C-band into 40 channels with 100 GHz spacing.
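As a companion to Section B, the sketch below outlines the normalization and training loop, reusing the InverseRamanCNN module from the previous sketch. The tensors are random stand-ins rather than Raman-solver profiles, and the epoch count and normalization details are illustrative assumptions.

# Illustrative training loop: pixelwise min-max input normalization,
# normalized pump targets, MSE loss and RMSprop, as described in Section B.
import torch
import torch.nn as nn

K, n_ch, n_z, n_pumps = 5000, 40, 100, 2
profiles = torch.rand(K, 1, n_ch, n_z)           # X_k: 2D signal power profiles (stand-ins)
pump_cfg = torch.rand(K, 2 * n_pumps)            # Y_k: [P_pump; lambda_pump], already scaled to [0, 1]

# Pixelwise min-max normalization using training-set statistics only.
x_min = profiles.amin(dim=0, keepdim=True)
x_max = profiles.amax(dim=0, keepdim=True)
profiles_norm = (profiles - x_min) / (x_max - x_min + 1e-12)

model = InverseRamanCNN(n_ch=n_ch, n_z=n_z, n_pumps=n_pumps)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(profiles_norm, pump_cfg),
    batch_size=128, shuffle=True,
)
for epoch in range(10):                           # epoch count is arbitrary here
    for x_batch, y_batch in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x_batch), y_batch)
        loss.backward()
        optimizer.step()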
Input signal power per channel is set to 0 dBm, which results in a total WDM signal power of 16 dBm. Furthermore, a standard silica fiber with the following parameters is assumed: span length L_span = 100 km, signal attenuation α_s = 0.2 dB/km, pump attenuation α_p = 0.25 dB/km, effective area A_eff = 80 µm², non-linear coefficient γ = 1.26 1/W/km. In order to determine the size of the training data, for each pumping case, different sizes from 1000 to 8000 have been investigated. Models trained on training data-sets of different sizes have been evaluated by the MSE on a validation data-set with 1000 points generated separately. Fig. 3 shows the validation MSE as a function of the size of the training data-set. Based on the validation MSE and also the training time, we found that for the 2 and 3 counter-propagating and the 4 bidirectional pumping cases, the best training data sizes are 5000, 6000 and 7000 samples, respectively; increasing the training size does not result in a remarkable improvement. Regarding the parameters of the feature extraction network, the number of filters (n_1, n_2, n_3), filter sizes (f_1, f_2, f_3), and the average-pooling window sizes (m_1, m_2, m_3) have been set and evaluated based on the most common values in the literature. For the number of filters of each layer, we tried 32 and 64 and observed that 64 filters greatly increase the training time with no improvement in performance. Moreover, a filter size of 3 × 3 showed a better validation MSE than a bigger 5 × 5 filter. We also found that for the window size of the average-pooling layers, a commonly used 2 × 2 window has a better MSE than a window of size 3 × 3. Furthermore, regarding the regression network parameters, we evaluated the validation MSE by setting N_h1 and N_h2 based on the set {20, 40, 80, 100} and found that for the 2-pump case, N_h1 = 40 and N_h2 = 40 with ReLU activation, and, in contrast, for both the 3- and 4-pump cases, N_h1 = 100 and N_h2 = 40 with ReLU activation minimize the validation loss. For all pumping schemes, the batch size in the training phase has been set to 128 and the learning rate of RMSprop is set to 0.001. Furthermore, the best distance resolution for 2 and 3 pumps is 2 km and for 4 pumps is 1 km. Moreover, due to the different value ranges of pump powers and wavelengths at the output layer, the network is trained on the min-max normalized pump configuration vector. The resulting normalized pump configuration vector can be linearly mapped to the desired interval of powers and wavelengths based on the specified ranges for these parameters. Final evaluation of the trained models is performed based on the scheme illustrated in Fig. 2 on test data-sets with 2000 samples generated for each pumping case. Fig. 4 indicates the probability density function (pdf) of the maximum absolute error (Error_max) of the reconstructed power profile together with its mean (µ) and standard deviation (σ) for all cases. The µ values for the 2 counter-propagating, 3 counter-propagating and 4 bidirectional propagating cases are approximately 0.51 dB, 0.54 dB and 0.61 dB, respectively, and the σ values for these cases are approximately 0.62 dB, 0.43 dB and 0.38 dB, respectively. We can therefore assert that the proposed method is highly accurate for designing Raman amplifiers based on the signal power profile over a wide band and along the span.
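The evaluation scheme of Fig. 2 can be expressed compactly as below. Here raman_solver and denormalize_pumps are placeholder callables (for example, a wrapper around GNPy's Raman solver and the inverse of the min-max target scaling); they are assumptions for illustration and are not defined in the paper.

# Illustrative evaluation loop: re-simulate predicted pump configurations and
# collect the maximum absolute profile error per test sample (mu and sigma as in Fig. 4).
import numpy as np
import torch

def max_abs_error_db(target_profile_db, predicted_profile_db):
    """Maximum absolute difference over frequency and distance, in dB."""
    return float(np.max(np.abs(target_profile_db - predicted_profile_db)))

def evaluate(model, test_profiles_norm, test_profiles_db, raman_solver, denormalize_pumps):
    errors = []
    model.eval()
    with torch.no_grad():
        for x_norm, target_db in zip(test_profiles_norm, test_profiles_db):
            pumps = denormalize_pumps(model(x_norm.unsqueeze(0))[0])   # pump powers + wavelengths
            predicted_db = raman_solver(pumps)                          # re-simulated power profile
            errors.append(max_abs_error_db(target_db, predicted_db))
    errors = np.asarray(errors)
    return errors.mean(), errors.std()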
CONCLUSION

A CNN framework is presented for inverse DRA design based on a desired signal power profile in the frequency and distance domain. The proposed method consists of two networks trained end-to-end: 1) a feature extraction network with 3 CNN layers employed to extract informative features of the 2D signal power profile and 2) a regression network aiming to predict the pump power and wavelength values based on the extracted features. Numerical simulations show that the proposed framework provides high accuracy in terms of predicting the pump parameters for both counter- and bidirectional propagating pumps in C-band.
2021-06-02T06:16:57.636Z
2021-05-19T00:00:00.000
{ "year": 2021, "sha1": "3800b1228b910253ed4d1e9106bd9d640957f7c4", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Arxiv", "pdf_hash": "3800b1228b910253ed4d1e9106bd9d640957f7c4", "s2fieldsofstudy": [ "Physics", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
206119009
pes2o/s2orc
v3-fos-license
Prevalence of human metapneumovirus in children with acute lower respiratory infection in Changsha, China† Abstract Human metapneumovirus (hMPV) causes acute respiratory infections in children. The prevalence and clinical characteristics of hMPV were determined in nasopharyngeal aspirates of children in Changsha, China. Reverse transcription‐polymerase chain reaction (RT‐PCR) or PCR was employed to screen for both hMPV and other common respiratory viruses in 1,165 nasopharyngeal aspirate specimens collected from children with lower respiratory tract infections from September 2007 to August 2008. All PCR products were sequenced, and demographic and clinical data were collected from all patients. Seventy‐six of 1,165 (6.5%) specimens were positive for hMPV, of which 85.5% (65/76) occurred in the winter and spring seasons. The hMPV coinfection rate was 57.9% (44/76), and human bocavirus was the most common virus detected in conjunction with hMPV. Phylogenetic analysis revealed that 94.7% of the hMPV detected were of subgroup A2, 5.3% were subgroup B2, and none belonged to either the A1 or B1 subgroups. No significant differences were found in terms of the frequency of diagnosis and clinical signs between either the co‐ and mono‐infection groups, or between patients with and without underlying diseases. It was concluded that hMPV is an important viral pathogen in pediatric patients with lower respiratory tract infections in Changsha. Only hMPV genotypes A2 and B2 were co‐circulating in this locality; human bocavirus was the most common coinfecting virus, and coinfection did not affect disease severity. J. Med. Virol. 85:546–553, 2013. © 2013 Wiley Periodicals, Inc. INTRODUCTION Human metapneumovirus (hMPV) was first identified in 2001 in nasopharyngeal specimens from children with acute respiratory tract illness in the Netherlands [Van Den Hoogen et al., 2001]. This virus has been classified as a member of the genus Metapneumovirus of the subfamily Pneumovirinae of the family Paramyxoviridae [Van Den Hoogen et al., 2001;Boivin et al., 2002;Bastien et al., 2003]. Subsequently, hMPV was described as a cause of acute respiratory disease in many countries, including Canada, the United States, Australia, Japan, France, Hong Kong, and Korea [Ordá s et al., 2006]. hMPV is recognized as a common cause of respiratory infections, ranging from upper respiratory tract infection to severe lower respiratory tract infection, in individuals of all ages, particularly in infants and children Freymouth et al., 2003;Peiris et al., 2003;Van Den Hoogen et al., 2003;McAdam et al., 2004;Loo et al., 2007]. However, limited data exist for hMPV infection, especially concerning its prevalence, and molecular and clinical characterizations of hMPV in children with lower respiratory tract infections in China, with the exception of a previous study in Lanzhou City [Xiao et al., 2010]. hMPV strains are divided into two main groups, A and B, based on their nucleotide sequences. Each group is subdivided into two sublineages, A1 and A2, and B1 and B2 [Loo et al., 2007]. In addition, parti-tioning further the sublineage A2 into two genetic clusters designated A2a and A2b has been suggested [Huck et al., 2006], and the relationship of strain differences to clinical features has not been elucidated fully [Schildgen et al., 2005;Agapov et al., 2006;Manoha et al., 2007;Pitoiset et al., 2010]. 
In this study, 1,165 children with lower respiratory tract infections in Changsha City were screened for hMPV and several other common respiratory viruses, and the epidemiological and clinical features of infection with the various hMPV genotypes were characterized. The objective of this study was to investigate the prevalence and clinical characteristics of hMPV in Chinese children with lower respiratory tract infections.

Patients and Specimens

Nasopharyngeal aspirate samples were collected from 1,165 children with lower respiratory tract infection in the Hunan Province People's Hospital, China, on 2 days each week from September 2007 to August 2008. All patients were 14 years of age or younger, and informed consent was obtained from their parents/guardians. All patients had symptoms of lower respiratory tract infection on admission. All nasopharyngeal aspirate samples were collected 1-3 days after the onset of lower respiratory tract infection. Demographic data and details of the clinical findings and severity of disease were recorded. The study protocol was approved by the hospital ethics committee.

Collection and Processing of Nasopharyngeal Aspirate Samples

All nasopharyngeal aspirate specimens were collected and transported immediately to the laboratory at the National Institute for Viral Disease Control and Prevention, China CDC, and stored at −80°C until required for further testing. Viral DNA and RNA were extracted from 140 µl of each nasopharyngeal aspirate specimen using the QIAamp viral DNA and the QIAamp viral RNA Mini Kits (Qiagen, Shanghai, China) according to the manufacturer's instructions. cDNA was synthesized using random hexamer primers with the SuperScript II RH− reverse transcriptase (Invitrogen, Carlsbad, CA).

Detection of hMPV

Screening for hMPV was conducted using conventional polymerase chain reaction (PCR) methods. hMPV forward (5′-CCC TTT GTT TCA GGC CAA-3′) and reverse (5′-GCA GCT TCA ACA GTA GCT G-3′) primers, which target the M gene and generate a 416-bp product, were used as described previously [Pujol et al., 2005]. All PCR products were purified using the QIAquick PCR purification kit (Qiagen) and sequenced by SinoGenoMax (Beijing, China). The reaction mix contained 10 pmol of each primer and 1.25 units of EXTaq DNA polymerase (Takara Bio, Tokyo, Japan). Reactions were incubated at 94°C for 8 min, followed by 35 cycles at 94°C for 30 sec, 55°C for 30 sec, and 72°C for 45 sec, followed by a final extension at 72°C for 10 min.

Screening for Other Respiratory Viruses

A standard reverse transcription (RT)-PCR was used to screen for respiratory syncytial virus (RSV), human rhinovirus, influenza A virus, influenza B virus, parainfluenza virus, human coronaviruses HKU1 and NL63, and PCR for adenovirus and human bocavirus [Hierholzer et al., 1993; Pujol et al., 2005].

Nucleotide Sequence Analysis

All positive sequences were determined and analyzed using the DNASTAR software package. A neighbor-joining tree was constructed using the MEGA software package (version 3.1).

Clinical Severity Score

Based on variables reported in previous studies [Caracciolo et al., 2008], a severity index was defined a priori by assigning one point to each of the following: use of supplemental oxygen, duration of hospital stay of more than 7 days, and admission to an intensive care unit (ICU).

Statistical Analysis

The significance of differences in rates among various groups was evaluated using the chi-square test, Fisher's exact test, or Student's t-test.
All analyses were performed using SPSS version 13.0 software (SPSS, Inc., Chicago, IL). P < 0.05 was considered statistically significant.

Patient Characteristics

In total, 1,165 patients were included; the study sample represented 36.7% of the total of 3,174 admissions for acute lower respiratory disease to the Hunan Province People's Hospital from September 2007 to August 2008. Patient ages ranged from 3 hr to 156 months with a median of 15.4 months. The majority of patients (97.5%) were 5 years old or younger. The male:female ratio was 1.9:1 (763:402). All subjects were inpatients.

Detection of hMPV and Other Viral Agents

At least one respiratory virus was detected in 871 of the 1,165 samples and 76 (6.5%) were positive for hMPV by RT-PCR. hMPV accounted for 8.7% of the total viral agents detected. Forty-four of 76 (57.9%) children who were hMPV-positive were found to be coinfected with other respiratory viruses, including 16 with human bocavirus, 13 with RSV, 10 with human rhinovirus, 7 with parainfluenza 3 virus, 4 with adenovirus, 3 with influenza B virus, 2 with HCoV-HKU1, and 1 with HCoV-NL63. Human bocavirus was the most common coinfecting virus, accounting for 16/56 (28.6%) coinfecting virus detections. No differences in coinfection rates were observed between hMPV A and hMPV B (P = 0.106).

Epidemiology of hMPV

hMPV was detected in every month except for September and October 2007 and August 2008. The number of positive specimens peaked in March (n = 20; 26.3%) and April (n = 19; 25%; Fig. 1). The age of patients infected with hMPV varied from 20 days to 12 years of age (median, 15.9 months) and 93.4% (71/76) were 5 years of age or younger. Of a subset of 42 children >60 months of age, five (11.9%) acquired hMPV infection (Fig. 2). The male:female ratio of the patients infected with hMPV was 3.5:1 (χ² = 5.300; P = 0.021). No significant differences were observed in any of the epidemiological characteristics or clinical presentations between the hMPV mono-infection (group 1) and coinfection groups (group 2), or between the hMPV mono-infection and human bocavirus coinfection groups (group 3; Table II). Note that a significant difference was observed in the rate of coinfection with RSV between those ≤12 months and those >12 months (P = 0.045). We also found that wheezing was more prevalent in subjects coinfected with RSV than in those with hMPV mono-infection (69.2% vs. 37.5%), although this difference was not significant (P = 0.053; Table II). A total of 163 of 1,165 (13.99%) patients with lower respiratory tract infection and 14 of 76 (18.4%) who were hMPV-positive had an underlying illness. No significant difference was observed in the detection rate between these two groups (P = 0.250). A significant difference was detected in the prevalence of diarrhea between the two groups (P = 0.024). However, no significant difference was found for the duration of hospital stay, age, gender, the majority of clinical diagnoses, and all clinical symptoms (Table III).

Phylogenetic Analysis of hMPV

The sequences of positive products shared high homology with standard sequences from GenBank (97-100%). Single nucleotide mutations and nucleotide insertions were found, indicating a slow genetic variation rate. Phylogenetic analyses indicated that the 76 hMPV specimens were classified into the two main genetic lineages, A and B. Seventy-two (94.7%) hMPV strains were group A2, four (5.3%) strains were subgroup B2, and none were either subgroup A1 or B1 (Fig. 3).
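The sex-distribution result above can be reproduced with a standard chi-square test on the 2×2 table of hMPV-positive versus hMPV-negative children by sex. The sketch below uses the counts reported in this study (59 of the 76 hMPV-positive children were male, out of 763 boys and 402 girls overall) and scipy's implementation without continuity correction; it illustrates the calculation rather than reproducing the authors' SPSS output.

# Chi-square test of hMPV positivity by sex, using the counts reported in the text.
from scipy.stats import chi2_contingency

table = [
    [59, 763 - 59],   # boys: hMPV-positive, hMPV-negative
    [17, 402 - 17],   # girls: hMPV-positive, hMPV-negative
]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, P = {p:.3f}")   # close to the reported chi2 = 5.300, P = 0.021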
During the epidemic season, sublineages A2 and B2 co-circulated, with 94.7% (72/76) of the circulating viruses belonging to sublineage A2. Between the A2 and B2 genotype strains, the sequence identities of M gene fragments were 86.06-87.15% and 81.14-82.46% at the nucleotide and amino acid levels, respectively. The identities within subgroup A2 were 98.91-99.56% and 98.58-99.13%, and within subgroup B2 were 95.91-97.98% and 96.37-97.38%, respectively.

DISCUSSION

Of the 1,165 children with lower respiratory tract infection included in this study, 6.5% were hMPV-positive by RT-PCR. A similar incidence was reported in Singapore [Loo et al., 2007], the United States [McAdam et al., 2004], Hong Kong [Peiris et al., 2003], and in Lanzhou City [Xiao et al., 2010], although the findings were different from those of other studies [McAdam et al., 2004; Chung et al., 2006; Bosis et al., 2008; Heikkinen et al., 2008]. Independent of the techniques used, several studies have demonstrated that hMPV infection occurs predominantly early in childhood [Esper et al., 2003; Dollner et al., 2004; McAdam et al., 2004]. By the age of 5 years, >90% of individuals screened have evidence of hMPV infection. In this study, the majority of patients who were hMPV-positive (93.4%, 71/76) were less than 5 years old. No significant differences in the age distribution rate were detected, but a study from Chongqing, a city in southwestern China, reported that younger children (less than 6 months old) had the highest rate of hMPV infection [Chen et al., 2010]. Van Den Hoogen et al. [2001] and Bastien et al. [2003] reported that no significant difference in the prevalence of hMPV existed between male and female patients. In this study, the majority of patients infected with hMPV (59/76) were male, and statistical analysis found differences in prevalence between males and females, which indicated that male patients are at a higher risk of hMPV infection. hMPV was detected in each month with the exception of September and October 2007 and August 2008. Positive specimens peaked in March and April, in agreement with findings from the United States [McAdam et al., 2004], Canada [Bastien et al., 2003], and a previous study [Xiao et al., 2010]. However, hMPV was detected year-round in Singapore [Loo et al., 2007] and peaked during the summer months in Hong Kong [Peiris et al., 2003]. Although these data suggest that hMPV may follow varying epidemiologic patterns in different regions, elucidation of the exact epidemiologic characteristics of hMPV infection requires further investigation. Approximately 58% of patients who were hMPV-positive were coinfected with other respiratory viruses, in agreement with previous reports [Caracciolo et al., 2008; Cilla et al., 2008; Xiao et al., 2010]. Human bocavirus and RSV were the most common coinfecting viruses (36.4% and 29.5%, respectively). The data regarding the impact of coinfections of human bocavirus and RSV with hMPV on disease severity are conflicting [Greensill et al., 2003; Semple et al., 2005; Caracciolo et al., 2008]. In this study, no significant difference in clinical symptoms, age, sex, or duration of hospital stay between the mono- and coinfection groups was detected (Table II). Considerable evidence suggests that hMPV is responsible for both upper respiratory tract infection and lower respiratory tract infection in infants and young children [Freymouth et al., 2003; Van Den Hoogen et al., 2003]. Indeed, Arabpour et al.
[2008] reported a higher prevalence of lower respiratory tract infection in children infected with hMPV. However, the present study encompasses only hospitalized children with lower respiratory tract infection; therefore, the prevalence of upper respiratory tract infection and lower respiratory tract infection could not be elucidated. Additionally, Williams et al. [2004] reported that one-third of children with hMPV-associated lower respiratory tract infection were diagnosed with concomitant acute otitis media. However, in this study, no acute otitis media occurred in subjects who were infected with hMPV. Bronchopneumonia and bronchiolitis were the most frequent clinical diagnoses in this study, as has been reported previously [Peiris et al., 2003; Loo et al., 2007; Xiao et al., 2010]. Fever, cough, respiratory crepitations, and wheezing were the most common symptoms of these patients. These symptoms are identical to those reported in children in Canada and Korea [Chung et al., 2006; Caracciolo et al., 2008]. No significant difference was observed in the frequencies of cough, fever, respiratory crepitations, wheezing, vomiting, diarrhea, or duration of stay in hospital between the hMPV mono- and coinfection groups. More than one-half of patients with hMPV had normal erythrocyte sedimentation rates, C-reactive protein and leukocyte counts, and a majority had hepatic and renal injury. Data on whether hMPV infection can induce hepatic and renal injury are limited; therefore, this aspect requires further study. Chest radiographs of the majority of patients who were hMPV-positive showed sporadic consolidation of the lung, 21.1% (12/57) showed interstitial lung disease and emphysema, and only one patient showed consolidation in lobar distribution, pleural effusion, and unilateral hilar swelling, which is consistent with a report from Korea [Chung et al., 2006]. A majority of patients who were hMPV-positive reportedly had underlying diseases [Kaida et al., 2007], and 14 of 76 such patients had at least one underlying disease (Table III). A significant difference in the incidence of diarrhea was observed between subjects who were hMPV-positive with and without underlying illnesses (P = 0.024), which suggests that hMPV infection may cause different clinical presentations in patients depending on the underlying conditions. However, no significant difference was detected in the detection rate, age, sex, duration of stay in hospital, the majority of clinical diagnoses, or clinical symptoms between the two groups (Table III). These data suggest that hMPV infection did not aggravate clinical symptoms or contribute to duration of hospital stay in subjects with underlying illnesses. Three subjects had congenital heart disease, which ranked first in terms of underlying diseases. Two each had gastroesophageal reflux and measles. Further study is needed to investigate whether congenital heart disease, gastroesophageal reflux, or measles poses a major risk factor for hMPV infection. Previous data have suggested that the two hMPV genotypes co-circulate and that different subgroups may predominate from year to year [Mackay et al., 2004]. In this study, phylogenetic analysis demonstrated the simultaneous existence of two groups (A and B) and two of the four subgroups (A2 and B2). The majority of these strains (94.7%, 72/76) clustered predominantly with group A hMPV, and all belonged to subgroup A2 (100%, 72/72), which is consistent with the work of Boivin et al. [2004].
Moreover, a previous study in Lanzhou City showed that sublineages A1, A2 (A2a and A2b), and B1 co-circulated during the 2006-2007 epidemic, but only A2 circulated during the 2007-2008 epidemic [Xiao et al., 2010]. These data suggest that the circulation pattern of hMPV in China is complex, which poses a challenge for future vaccine development and underscores the need for more molecular epidemiologic studies. To summarize, the prevalence and clinical characteristics of hMPV in children with lower respiratory tract infection in Changsha, China were described. hMPV was detected in 76 of 1,165 (6.5%) nasopharyngeal aspirate specimens collected. Approximately 58% of subjects infected with hMPV were coinfected with other respiratory viruses, most commonly human bocavirus. The most common symptom and clinical diagnosis in those infected with hMPV were cough and bronchopneumonia, and the predominant circulating genogroup was subgroup A2. Statistical analysis indicated that male subjects and those less than 5 years of age were at a higher risk of hMPV infection, and coinfection with other respiratory viruses did not affect disease severity.

Fig. 3. Phylogenetic analysis of the partial M gene sequences of 76 human metapneumovirus strains from nasopharyngeal aspirate specimens. Phylogenetic trees were constructed by the neighbor-joining method using MEGA ver. 3.1. Marked viral sequences were generated from the present study; other reference sequences were obtained from GenBank. Bootstrap values are shown at each branching point.
2018-04-03T02:03:14.730Z
2013-01-07T00:00:00.000
{ "year": 2013, "sha1": "3c3f67289ef01b08eade4d3a8b4ed617e92768d8", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc7166472?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "3c3f67289ef01b08eade4d3a8b4ed617e92768d8", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
9022345
pes2o/s2orc
v3-fos-license
Immunity-related genes in Ixodes scapularis—perspectives from genome information Ixodes scapularis, commonly known as the deer tick, transmits a wide array of human and animal pathogens including Borrelia burgdorferi. Despite substantial advances in our understanding of immunity in model arthropods, including other disease vectors, precisely how I. scapularis immunity functions and influences persistence of invading pathogens remains largely unknown. This review provides a comprehensive analysis of the recently sequenced I. scapularis genome for the occurrence of immune-related genes and related pathways. We will also discuss the potential influence of immunity-related genes on the persistence of tick-borne pathogens with an emphasis on the Lyme disease pathogen B. burgdorferi. Further enhancement of our knowledge of tick immune responses is critical to understanding the molecular basis of the persistence of tick-borne pathogens and development of novel interventions against the relevant infections.

INTRODUCTION

Although several hundred tick species are known to exist (Jongejan and Uilenberg, 2004), only a handful transmit human diseases. Ixodes scapularis is one of the predominant tick species that spread a wide array of serious human and animal pathogens, including Borrelia burgdorferi, which causes Lyme borreliosis (Burgdorfer et al., 1982; Anderson, 1991). Our understanding of arthropod innate immune responses, primarily involving the fruit fly and mosquito, has advanced over the past decades (Vilmos and Kurucz, 1998). However, our knowledge of tick immune responses, especially the occurrence of immune-related genes, pathways, and specifically how these components respond to invading pathogens, remains under-explored. Notably, many pathogens that persist in and transmit through ticks are evolutionarily distinct and possess unique structures (Hajdusek et al., 2013). For example, key pattern recognition molecules (PAMPs), such as peptidoglycan (PG) and lipopolysaccharides (LPS), are structurally different or completely absent, respectively, in major tick-borne pathogens, such as in B. burgdorferi (Schleifer and Kandler, 1972; Takayama et al., 1987; Fraser et al., 1997). Thus, the wealth of knowledge generated in other model arthropods, especially regarding the genesis of host immune responses against classical Gram-positive or Gram-negative bacterial pathogens, might not be readily applicable for tick-borne pathogens, like B. burgdorferi. The primary goal of this review is to present a general overview of tick immune components, as gathered from the sequenced genome and published data, and discuss their potential for modulating infection, with a focus on a major tick-borne pathogen, B. burgdorferi. A better understanding of the I. scapularis immune response to invading pathogens could contribute to the development of new strategies that interfere with relevant pathogen persistence and transmission. While a number of studies detailed characterization of I. scapularis proteins, predominantly salivary gland proteins, that influence immunity and pathogen persistence in the vertebrate hosts (Wikel, 1996; Das et al., 2001; Gillespie et al., 2001; Narasimhan et al., 2002, 2004, 2007; Hovius et al., 2008; Dai et al., 2010; Pal and Fikrig, 2010; Kung et al., 2013), relatively limited information is available on how tick proteins shape vector immunity and influence pathogen persistence.
In order to generate a list of tick immune genes and related pathways, we sought to perform a comprehensive analysis of the recently sequenced I. scapularis genome data that are available through several publicly accessible databases (Hill and Wikel, 2005;Pagel Van Zee et al., 2007). To accomplish this, we initially searched the National Institute of Allergy and Infectious Diseases Bioinformatics Resource Center (www.vectorbase.org) for annotated I. scapularis immune-related genes. In addition, we also reviewed the relevant literature to identify additional innate immune genes, including those discovered in related tick species or in fruit fly, mosquito, and mammalian genomes (Sonenshine, 1993;Hoffmann et al., 1999;Dimopoulos et al., 2000;Christophides et al., 2002;Hoffmann and Reichhart, 2002;Janeway and Medzhitov, 2002;Govind and Nehm, 2004;Osta et al., 2004;Saul, 2004;Tanji and Ip, 2005;Dong et al., 2006;Ferrandon et al., 2007;Tanji et al., 2007;Jaworski et al., 2010;Kopacek et al., 2010;Yassine and Osta, 2010;Valanne et al., 2011). The latter information was then used to search for possible Ixodes orthologs via BLASTP against the VectorBase database. In total, 234 genes were identified and categorized into one of the following nine major immune pathways or components (number of unique genes): gut-microbe homeostasis (17), agglutination (37), leucine-rich repeat (LRR) proteins (21), proteases (33), coagulation (11), non-self recognition and signal transduction via Toll, IMD, and JAK-STAT pathways (55), free radical defense (13), phagocytosis (33), and anti-microbial peptides (14). These genes are listed in Tables 1-9; unless stated otherwise, all annotations are based on the VectorBase database. We recognize that although our list might not be comprehensive as there might be additional published data inadvertently overlooked in our literature/database searches or yet-to-be identified genes involved in tick immune defense, we believe that it still represents the majority of genes that are potentially involved in the tick immune response. In the following sections, occurrence of these components and pathways are systematically discussed for their occurrence in the tick genome; we also highlighted their potential influence on the persistence and transmission of tick-borne pathogens like B. burgdorferi. I. SCAPULARIS GENOME The I. scapularis genome is relatively large, approximately 2.1 Gb in size and contains nearly 70% repetitive DNA (Ullmann et al., 2005). Recently it was completely sequenced by the I. scapularis genome project -a partnership between a number of tick research communities and institutions (Hill and Wikel, 2005;Pagel Van Zee et al., 2007). Toward the end of 2008, sequencing centers announced the annotation and release of the whole genome sequence data (IscaW1, 2008; GenBank accession ABJB010000000). The sequence data were derived from purified genomic DNA preparations isolated from an in-bred tick colony and sequenced to approximately 6-fold coverage using a combined whole genome shotgun and clone-based approach. The genome information are organized and displayed by a bioinformatics resource center focused on invertebrate vectors of human disease called VectorBase (www.vectorbase.org), which is funded by the National Institute of Allergy and Infectious Diseases, National Institutes of Health. The I. 
scapularis gene

GUT MICROBE HOMEOSTASIS

Gut microbiota serve a critically important function in shaping host immunity in a number of organisms, including model arthropods (Dillon and Dillon, 2004; Round and Mazmanian, 2009; Hooper et al., 2012; Buchon et al., 2013; Kamada et al., 2013; Schuijt et al., 2013). Characterization of gut microbiota in ticks, including I. scapularis, as well as their influence on the persistence of tick-borne pathogens like B. burgdorferi, has been a focus of a number of recent studies (Clay et al., 2008; Carpi et al., 2011; Narasimhan et al., 2014). As many of these gut microbes play a beneficial role in the physiology of the host, the immune system must therefore be able to differentiate between commensal microbes and pathogenic microorganisms (Macpherson and Harris, 2004). While mechanisms that contribute to microbial surveillance and pathogen elimination while tolerating the indigenous microbiota remain obscure in ticks, these are well-researched in many arthropods, particularly in D. melanogaster (Buchon et al., 2013). Studies have established that immune reactivity within the fly gut ensures preservation of beneficial and dietary microorganisms, while mounting robust immune responses to eradicate pathogens (Buchon et al., 2013). There are at least two models of fly immunity for sensing and preserving beneficial bacterial associations while eliminating potentially damaging ones (Lazzaro and Rolff, 2011). The first occurs by recognition of non-self molecules (invading microbes), while the second involves the recognition of "danger" signals that are released by damaged host cells. However, it is also likely that they work together to maintain effective gut microbe homeostasis. Recent studies suggest that dual oxidase (DUOX) and peroxidase enzymes play a key role in this process (Kim and Lee, 2014). While a number of other regulatory molecules may participate in gut homeostasis, we classified 17 different genes within the I. scapularis genome to this pathway, including a single dual oxidase (DUOX) and several peroxidase proteins (Table 1). Additional studies have recently detailed how DUOX plays an essential role in gut mucosal immunity and homeostasis (Bae et al., 2010; Deken et al., 2013). DUOX, a member of the nicotinamide adenine dinucleotide phosphate (NADPH) oxidase NOX family (Geiszt and Leto, 2004), has previously been shown to be a key source of local microbicidal reactive oxygen species (ROS) production within the fly gut (Kim and Lee, 2014). Targeted depletion of DUOX in flies has resulted in the overproduction of commensal gut bacteria and renders the flies susceptible to infection (Buchon et al., 2013; Kim and Lee, 2014). As originally discovered in Caenorhabditis elegans (Edens et al., 2001), in addition to ROS generation, DUOX is also implicated in catalysis of protein cross-linking that contributes to maintenance of gut microbiota in Anopheles gambiae (Kumar et al., 2010). In mosquitoes, DUOX, along with a specific heme peroxidase, catalyzes the formation of an acellular molecular barrier, termed the dityrosine network (DTN), which forms in the luminal space along the gut epithelial layer during feeding (Kumar et al., 2010). The DTN decreases the gut permeability to various immune elicitors, protecting the gut microbiota, both commensal and pathogenic species.
Another recent study revealed that an ovarian dual oxidase is essential for insect eggshell hardening through the production of H2O2, which ultimately promotes protein cross-linking (Dias et al., 2013). Further studies on how DUOX and peroxidase systems maintain the gut microbiota in I. scapularis could give novel insight into how pathogens that are transmitted through ticks are able to evade the immune system and persist within the vector. AGGLUTINATION Agglutination, the biological phenomenon by which cells or particles clump together, has been described within various tick species (Kibuka-Sebitosi, 2006). A group of carbohydrate-binding proteins called lectins (Grubhoffer et al., 1997, 2004), which are often produced in a tissue-specific manner within arthropods, especially in the gut, hemocytes, or fat bodies, could be key mediators of the process (Grubhoffer et al., 2004). Agglutination of pathogens by lectins, which also function as host recognition receptors for pathogen-associated molecular patterns (Dam and Brewer, 2010), has been reported in many arthropod vectors, including mosquitoes and tsetse flies, where they play an important role in the pathogen-host relationship (Abubakar et al., 1995, 2006;Barreau et al., 1995;James, 2003). While lectins can function as signaling factors for the maturation of the African trypanosome or as lytic factors (Abubakar et al., 1995, 2006), in mosquitoes they act as agonists of the development of malarial parasites within the vector (Barreau et al., 1995;James, 2003). While tick lectins, particularly those in hard ticks (Ixodidae), have not been studied as extensively as other arthropod lectins, previous reviews summarized available information on lectins of I. ricinus (Grubhoffer and Jindrak, 1998;Grubhoffer et al., 2004). Since most lectins isolated from arthropods are those from the hemocoel, studies have focused on their localization or hemagglutinating activity in the hemolymph (Sonenshine, 1993;Kuhn et al., 1996). In I. ricinus, this activity was characterized as a Ca2+-dependent binding activity (Grubhoffer et al., 2004). An 85 kDa lectin produced by the granular hemocytes and basal laminae surrounding the hemocoel was identified to have a strong binding affinity for sialic acid (Grubhoffer et al., 2004). This immunoreactivity supports the idea that lectins may function as recognition molecules of the immune system in ticks, implying that they could influence the persistence of tick-borne pathogens like B. burgdorferi. In fact, the hemocytes in I. ricinus can also phagocytize B. burgdorferi through the coiling method, which has previously been thought to be a lectin-mediated process (Grubhoffer and Jindrak, 1998). Specifically, two agglutinins/lectins were isolated from the gut, one 65 kDa and the other 37 kDa in size; the former was shown to be the main agglutinin with a binding affinity for mucin, while the latter protein was found to have a strong affinity for a specific glucan (Grubhoffer and Jindrak, 1998;Grubhoffer et al., 2004). It is also suggested that a gut agglutinin has the potential to bind LPS and, in cooperation with other digestive enzymes, is thought to affect the persistence of Gram-negative bacteria and spirochetes that pass through the gut lumen (Grubhoffer et al., 2004).
In addition to the hemolymph and gut, lectin activities are also documented in the salivary gland; a 70 kDa protein has been identified as being responsible for the hemagglutinating activity in this organ (Grubhoffer et al., 2004). It is thus possible that a lectin or a related protein in the salivary glands could influence pathogen transmission. In fact, a tick mannose-binding lectin inhibitor that is produced in the salivary glands has been shown to interfere with the human lectin complement cascade, significantly impacting the transmission and survival of B. burgdorferi (Schuijt et al., 2011). Taken together, it is likely that lectins could play a role in the immunity of I. scapularis, which encodes at least 37 lectins or related proteins (Table 2). LEUCINE-RICH REPEAT PROTEINS LRRs have previously been shown to occur in more than 2000 proteins throughout the plant and animal kingdoms, including Toll-like receptors, and are thought to play an essential role in host defense (Boman and Hultmark, 1981;Kobe and Kajava, 2001;Bell et al., 2003;Enkhbayar et al., 2004). LRR proteins typically contain repeats of 20-29 amino acid residues (with repeat numbers ranging from 2 to 42) and are involved in protein-protein interactions at diverse cellular locations and with diverse functions. While the biological significance of LRR-containing proteins in ticks remains unknown, notably, the I. scapularis genome encodes at least 22 potential LRR proteins (Table 3). Unlike in ticks, the roles of LRR proteins in the immunity of other arthropods, including blood-meal-seeking arthropods, are relatively well characterized (Povelones et al., 2009, 2011). For example, in Anopheles gambiae, LRR-containing proteins, such as LRIM1 and APL1C, have been identified as potent antagonists of malarial parasites, limiting Plasmodium infection by activating a complement-like system (Fraiture et al., 2009;Povelones et al., 2009, 2011;Baxter et al., 2010). In Manduca sexta, an LRR-containing protein, termed leureptin, is shown to bind lipopolysaccharide and is involved in hemocyte responses to bacterial infection (Zhu et al., 2010). Further studies into how tick LRR-containing proteins contribute to vector immunity and influence pathogen persistence are warranted. PROTEASES/PROTEASE INHIBITORS A number of immune cascades that serve to recognize and control invading pathogens are dependent on the activity of specific proteases or protease inhibitors (Janeway and Medzhitov, 2002;Sojka et al., 2011). Proteases, specifically serine proteases, have previously been shown to be key regulatory molecules for several of these immune response pathways, including coagulation, antimicrobial peptide synthesis, and melanization of pathogens (Gorman and Paskewitz, 2001;Janeway and Medzhitov, 2002;Jiravanichpaisal et al., 2006). Such a serine protease-dependent cellular response, as demonstrated for coagulation in the horseshoe crab, for example, manifests through the rapid activation of immune pathways in response to pathogen detection (Fujita, 2002). Activation of this pathway has been shown to be controlled by three serine proteases: factor C, factor B, and a pro-clotting enzyme (Tokunaga et al., 1987). When LPS is present, clotting factors that are stored within hemocytes are readily released into the hemolymph, which ultimately results in the immobilization of the invading pathogen.
Protease inhibitors also control a variety of proteolytic pathways and are known to play an important role in arthropod immunity (Kanost, 1999). A group of serine protease inhibitors, termed serpins, have been the focus of many recent studies that demonstrate the critical contribution of these proteins to the regulation of inflammation, blood coagulation, and complement activation in mammals (Kanost, 1999). Serpins are also shown to contribute to immunity and physiology in arthropods, as shown in mosquitoes (Gulley et al., 2013) and flies (Reichhart et al., 2011). A detailed characterization of serpins in ticks, including I. scapularis, has been reported by Mulenga et al. (Mulenga et al., 2009). These authors reported the presence of at least 45 serpin genes within the I. scapularis genome, interestingly, most of which are differentially expressed in the gut and salivary glands of unfed and partially fed ticks (Mulenga et al., 2009). It is speculated that ticks could utilize some of these serpins to manipulate host defense to facilitate tick feeding and subsequent disease transmission, although the precise role of serpins in the physiology and immunity within the tick vector awaits further investigation. More recently, a novel serpin, termed IRS-2, was described in I. ricinus (Chmelar et al., 2011). IRS-2 was shown to inhibit cathepsin G and chymase, thereby inhibiting host inflammation and platelet aggregation. This particular protein was also thought to act as a modulator of vascular permeability. Although whether serpins play a role in host microbe interactions remains unknown, studies also explored their potential as target antigens for development of a tick vaccine (Muleng et al., 2001). COAGULATION Injury as well as the presence of microbes in arthropods could result in the induction of two major proteolytic pathways -coagulation and melanization (Theopold et al., 2004). Key enzymes for these processes that cross-link the clot or induce a proteolytic pathway similar to the vertebrate clotting cascade include transglutaminase and phenoloxidase, respectively. Studies in the horseshoe crab have provided a breakthrough in our understanding of the coagulation pathway in arthropods (Theopold et al., 2004). This pathway is characterized by a rapid sequence of highly localized serine proteases and culminates in the generation of thrombin; the process is tightly regulated to ensure excessive clot formation does not occur (Crawley et al., 2011). The I. scapularis genome encodes for at least 11 genes that may be part of the coagulation pathway (Table 5), although precisely how this pathway controls wound healing or affects microbial survival remains unknown. Notably, while the I. scapularis genome lacks genes related to the melanization (phenoloxidase) pathway, phenol oxidase activity was detected in the hemolymph of the soft ticks, Ornithodoros moubata (Kadota et al., 2002). NON-SELF RECOGNITION AND SIGNAL TRANSDUCTION PATHWAYS (TOLL, IMD, AND JAK-STAT) Three major pathways, namely Toll, immune deficiency (IMD), and Janus kinase (JAK)-signaling transducer and activator of transcription (STAT) pathways, contribute to the activation of the immune response within arthropods, as previously detailed (Belvin and Anderson, 1996;De Gregorio et al., 2002;Hoffmann and Reichhart, 2002;Govind and Nehm, 2004;Lemaitre, 2004;Rawlings et al., 2004;Kaneko and Silverman, 2005;Tanji and Ip, 2005;Zambon et al., 2005;Tanji et al., 2007;Xi et al., 2008;Souza-Neto et al., 2009;Valanne et al., 2011;Liu et al., 2012). 
Notably, the I. scapularis genome encodes many representative genes from all three pathways (Figure 1; for further details on these pathways, including abbreviations, please refer to the text and earlier publications: Lemaitre, 2004;Liu et al., 2012). While Toll pathways are activated in the presence of bacterial, viral, and fungal pathogens, the IMD pathway is induced by Gram-negative bacteria. The arthropod JAK-STAT pathway, analogous to a cytokine-signaling pathway in mammals (Shuai et al., 1993), has also previously been shown to be activated in the presence of bacterial or protozoan pathogens (Buchon et al., 2009;Gupta et al., 2009;Liu et al., 2012). The Toll pathway is most extensively studied in Drosophila, which encodes nine Toll receptors (Valanne et al., 2011). Cell wall components in Gram-positive bacteria stimulate this pathway, whereas the precise fungal component that induces specific Tolls is not well defined. In both cases, stimulation of the Toll pathway causes cleavage of the protein Spätzle, which eventually leads to the activation of the NF-κB transcription factor family members Dif and Dorsal, which are homologous to mammalian c-Rel and RelA, resulting in the production of different antimicrobial peptides (AMPs) (Irving et al., 2001;Christophides et al., 2002;Hetru et al., 2003). Specifically, research in Drosophila has shown that Gram-positive bacteria induce the Toll pathway, leading to the generation of Toll-specific AMPs, such as drosomycin (Zhang and Zhu, 2009). Although how the Toll pathway affects tick-borne pathogens like B. burgdorferi remains obscure, we list at least 33 genes that potentially belong to this pathway (Table 6). The IMD pathway, on the other hand, is activated by the peptidoglycan molecules present on the surface of Gram-negative bacteria that are recognized by host cells via peptidoglycan recognition receptors (PGRP) (Ferrandon et al., 2007). This recognition leads to the activation of an adaptor protein and further downstream signaling molecules, such as the transcription factor Relish, a compound Rel-Ank protein homologous to mammalian p100 and p105, ultimately resulting in the production of AMPs (Matova and Anderson, 2006;Ferrandon et al., 2007). Although the tick genome encodes at least 20 potential genes from this pathway (Table 6), similar to Toll, how the IMD pathway affects Gram-negative pathogens, including B. burgdorferi, is unknown. A critical and common aspect in the response of both pathways is the ability to induce a specific AMP to combat microbial infections through the recognition of non-self. Interestingly, it is also thought that these two pathways can work synergistically to activate the expression of the same AMP (Tanji et al., 2007). FREE RADICAL DEFENSE Free radicals, such as ROS, which include superoxide radicals (O2·−), hydroxyl radicals (·OH), and other compounds, are able to react with biomolecules and cause damage to DNA, proteins, and lipids, and also play a critical role in cell signaling (Thannickal and Fanburg, 2000). While ROS are important in arthropod development (Owusu-Ansah and Banerjee, 2009), they are indispensable in arthropod immunity, including the activation of specific immune pathways (Pereira et al., 2001;Bubici et al., 2006;Molina-Cruz et al., 2008;Morgan and Liu, 2011). For example, mosquitoes that were previously infected with Wolbachia bacteria were observed to produce much higher levels of ROS (Pan et al., 2012). Nitric oxide (NO), a highly unstable free radical gas generated by nitric oxide synthase (NOS), is another component of free radical defense shown to be toxic to both parasites and pathogens (James, 1995;Wandurska-Nowak, 2004).
In insects, NOS is known to be induced following parasite infection (Dimopoulos et al., 1998;Davies, 2000). A family of superoxide dismutases (SODs) that catalyze the conversion of these free radicals to non-toxic O2 and the less toxic hydrogen peroxide (H2O2) is responsible for destroying free radicals generated in the host. Glutathione-S-transferases (GSTs) also detoxify stress-causing agents, including toxic oxygen free radical species (Sharma et al., 2004). The genes encoding GSTs are shown to be induced in model arthropods upon oxidative stress and microbial challenge, including in ticks infected with B. burgdorferi. Despite these studies, how different free radicals or SOD detoxification systems play roles in pathogen persistence or clearance within I. scapularis, which encodes at least 13 genes of this pathway (Table 7), remains uncharacterized. PHAGOCYTOSIS In phagocytosis, cells recognize, bind, and ingest relatively large particles (Walters and Papadimitriou, 1978). This process is considered a major evolutionarily conserved cellular immune response in arthropods, mostly studied in model insects (Sideri et al., 2008), and is mediated by hemocytes, also known as blood cells, which are primarily present in the hemolymph and, less frequently, within various organs. Phagocytosis of microbes plays a critical role in arthropod defense, as blocking of phagocytosis in Drosophila mutants significantly impairs the flies' ability to survive subsequent bacterial infection (Elrod-Erickson et al., 2000). Hemocytes within the hemolymph have previously been shown to phagocytize various pathogens (Inoue et al., 2001). Although solid experimental evidence of phagocytosis of B. burgdorferi within I. scapularis is lacking, certain cell lines derived from ticks have been shown to be phagocytic to spirochetes (Mattila et al., 2007). Further studies into the phagocytic pathway of I. scapularis, which encodes 33 potentially related genes (Table 8), would provide insight into whether or how pathogens, such as B. burgdorferi, are phagocytized, as well as how tick-borne pathogens are able to escape this cellular immune response. Notably, the I. scapularis genome encodes five small GTPases belonging to the Rho family that, in addition to other cellular functions, are shown to play central roles in phagocytosis (Etienne-Manneville and Hall, 2002;Bokoch, 2005). ANTI-MICROBIAL PEPTIDES The production of AMPs, a hallmark of systemic humoral immune responses, is an important aspect of host defense in arthropods (Bulet et al., 1999). At least eight different classes of AMPs have been observed in the fruit fly, Drosophila. These AMPs are mainly produced by fat bodies and secreted into the hemolymph, and can be further grouped into three different families based on their intended target: Gram-negative bacteria, Gram-positive bacteria, and fungi (Imler and Bulet, 2005). In arthropods, specific AMPs are produced as a result of activation of the Toll, IMD, or JAK-STAT pathway by the presence of bacteria, fungi, or viruses. Among effector molecules of innate immune defense, AMPs are relatively well studied in ticks, which likely generate classical AMPs in the gut and hemocoel (Saito et al., 2009). AMPs have been found to be produced in hard ticks, such as I.
scapularis and Dermacentor variabilis, as well as in the soft tick Ornithodoros moubata (Nakajima et al., 2002;Sonenshine et al., 2002;Hynes et al., 2005;Rudenko et al., 2005;Saito et al., 2009). I. ricinus induces a defensin-like gene in response to B. burgdorferi in a tissue-specific manner that is not capable of clearing the infection . I. scapularis encodes for at least 14 AMPs ( Table 9). The exact role of defensin or other AMPs in clearance of tick-borne pathogens remains unclear. In addition, ticks may also produce non-classical AMPs. Although gastric digestion in ticks is primarily intracellular, degradation of blood components, such as hemoglobin, could create peptides with antimicrobial activities . Whether these fragments would protect against pathogenic bacteria has currently not been reported. CONCLUDING REMARKS I. scapularis ticks are known to transmit a diverse set of disease agents ranging from bacterial to protozoans to viruses. A number of studies explored the immunomodulatory activities of tick saliva or components of the salivary gland in mammalian hosts or how these activities benefit tick-transmitted pathogens (Hovius et al., 2008;Pal and Fikrig, 2010). However, limited investigation addressed how vector immune responses influence the survival or persistence of specific pathogens within the tick. It is rather surprising that although ticks are known to encode components of a number of immune effector mechanisms, including humoral (classical AMPs) or cellular (phagocytosis) immune responses as well as evolutionary conserved signaling molecules or potentially active pathways (Toll, IMD, or JAK/STAT), their contribution in shaping I. scapularis immunity remains largely obscure. Tick-borne pathogens are evolved to persist and be transmitted by a specific tick species. Thus, it is conceivable that these pathogens coevolved and developed a successful and intimate relationship with the host. Additionally, to be successful in nature, these pathogens must have also evolved specific mechanisms to persist in the vector and evade innate immune insults. For example, when artificially challenged with the Lyme disease pathogen, I. scapularis ticks mount slower phagocytic responses and therefore, remain practically immunotolerant against spirochete infection (Johns et al., 2001). In contrast, another hard tick species, D. variabilis, when challenged with the same pathogen, generates a rapid and effective increase in phagocytic cells and clears the infection and thus, is highly immunocompetent against spirochete infection. With the availability of I. scapularis genome information and development of robust functional genomics and bioinformatics as well as the advent of efficient high-throughput genome sequencing tools, we expect exciting future research enhancing our knowledge of I. scapularis immunity and hope to address specific questions on the biology of tick immune responses against a diverse group of human pathogens. Together, these studies will contribute to a better understanding of the special biology of vector-microbe interaction and specific aspects of tick immunity and at the same time, contribute to the development of new strategies to combat pathogen transmission.
Surveying Oregon’s Digital Heritage Collections In 2018, the Oregon Heritage Commission conducted a survey of heritage organizations across the state to capture data regarding digitization efforts. The goal of the survey was to collect a baseline of information on the types of digital collections in Oregon, existing digital infrastructure, and a level of interest in collaborative options. Data gathered was shared with our partners, including the Orbis Cascade Alliance, to aid their work in considering how to create an on-ramp for smaller collections to enter into the Digital Public Library of America. This work followed the 2013 Environmental Scan of Digital Collections conducted by the State Library of Oregon and the outcomes of the 2015 Northwest Digital Summit, which identified overall gaps in support for digital collections at heritage organizations in Oregon and Washington. Unlike previous statewide assessments, the 2018 survey strove to capture data from heritage organizations of all types and sizes, both with and without digital collections, so that the Oregon Heritage Commission and our partners can determine strategies, tools, and trainings to best assist organizations at all stages of the digitization process. 42 For the purpose of the survey, digital collections were defined as cultural heritage materials that have been scanned (like photographs, postcards, court records, or letters) or that originate in digital form (like digital photos or oral histories recorded digitally). Digital collections include the type of content you'll find in Washington Rural Heritage and Oregon Digital. They do not include published digital content (like eBooks). Who Responded? The results of the survey included responses from 178 organizations of varying size and sophistication. The majority of responders were museums and libraries, followed by historical societies, genealogical societies, and government agencies. Other responders included a public garden, a historic cemetery group, and nonprofits formed to preserve historic houses. Of those who responded, 128 organizations reported that they have digital collections, and 114 of those organizations are either currently digitizing their collections or have digitized collections in the past. This indicates that 64 percent of responding organizations have some level of infrastructure in place to complete digitization work. What Did We Learn? The majority of Oregon's cultural heritage organizations, large and small, are dealing with digital collections. Some are actively digitizing while others are caring for digital collections that have been donated to them. All together, we estimate these collections account for 231,000 to 580,000 digital heritage objects in the state. Photograph and textual collections account for the largest portion of digital objects, but moving images, artifacts, and artwork are also prominent. • Prioritizing Items to Digitize: Overall, organizations recognize digital collections are a way to preserve material of importance and value. Prioritizing items to digitize is mainly driven by significance to the mission of the organization (57 percent), as well as the need to preserve materials that are fragile and deteriorating (53 percent). Many organizations also indicated that available grants are a driving factor in determining what collections to digitize. • Training of Staff and Volunteers: Survey results indicate that heritage organizations have limited training when it comes to digitizing. 
Responders reported that over half of their collections staff and volunteers (56 percent) have no training in collections care. While many of the larger organizations that are digitizing their cultural heritage collections rely on professional experience, the smaller organizations rely on knowledge gained at workshops and trainings, as well as materials found online. • Equipment: The type and quality of equipment used by organizations to digitize collections are varied. Many organizations own or have access to a scanner and photo editing software (99 percent and 86 percent respectively). The vast majority also report having access to digital storage, including a combination of hard drives, servers, and cloud storage. Only about half of responders report access to audio conversion and audio editing software. • Metadata: This survey recorded little about metadata standards, other than whether or not organizations are creating metadata. Survey takers were asked if their collections have metadata (defined as descriptive information that explains and locates the file) which can be used to retrieve digital items. Only 40 of 114 organizations with digital collections responded, which indicates the question was either unclear or that many organizations are not creating metadata. Of the 40 organizations that responded, 28 said they do have metadata, six said they don't, and six were unsure. • Online Access: Of the 128 organizations with digital heritage collections, 42 report their collections are available online to the public. Institutions are utilizing a variety of systems to place their collections online including Past Perfect Online, CONTENTdm, web pages, and various social media platforms such as Flickr, and Facebook. Our partners at Orbis Cascade Alliance noted that only nine of those online collections have public-facing systems that offer digital collections in a structured way that can interface with other systems. Past Perfect software is commonly used by museums, however, the system presents difficulties harvesting metadata for aggregation. • Organizations Not Digitizing: Of the 50 organizations that responded without digital collections, 45 expressed an interest in digitizing. Organizations that aren't digitizing are largely choosing not to due to lack of staff and volunteer capacity. A common theme in survey comments is, "We are all volunteers without training," and "Older volunteers don't like to use computers" (Q24, Comments 6 & 9). Other organizations acknowledge that turnover in volunteers is a huge set-back, "One of the big problems for small organizations is continuity of knowledge; one person learns how to participate, then when they leave it's hard to pass on the knowledge" (Q33, Comment 18). Interesting Trends Several trends emerged from the survey responses. One is that heritage organizations see providing public access to collections as a priority. However, when asked for the top three goals in creating or acquiring digital collections, access was second to preservation. We are curious to follow up with responders to understand what access means to them and how they view online digital items as access points to their collections. There may be opportunities to reframe how heritage organizations think of access in general. A pleasant surprise for Oregon Heritage staff was that our assumption that heritage organizations feel a sense of ownership about their collections as a reason not to digitize was disproven by the results. 
When asked why organizations don't digitize or acquire digital collections, the lowest percent (less than 10 percent) responded that it was because they want to retain control of the content. This reflects a noted change in staff 's previous experience working with small heritage organizations. Rather, the barriers to digitizing fall in line with constraints of staff time, expenses, and prioritization. While organizations didn't reflect a desire to retain control of content as a reason not to digitize, a clear concern that emerged in survey comments was that many heritage organizations rely on revenue from the sale of their digital images, and they don't want online access to restrict their ability to sell images. The seven organizations that expressed this concern were genealogical societies, small historical societies, and a rural public library. One responder wrote, "Some of the board is concerned about losing the opportunity to raise money for copies of our digitized photos if we have a cooperative venture" (Q31, Comment 3). Another responder wrote, "My organization is strict about maintaining revenue opportunities since we charge for access to our digital materials" (Q31, Comment 23). Collaborative partnerships must take this concern into account and educate groups about the quality and use of access images. With Collaboration in Mind The final section of the survey was designed to gauge interest in a variety of collaborative options that have been discussed by statewide partners. One set of questions asked survey takers for their level of interest in collaborative options for digitizing collections. The second set of questions asked their level of interest in collaborative options for providing online access to digital collections. Responders generally expressed interest in both areas. In response to digitizing, the majority were in favor of a loan system where equipment could be checked out and used for brief periods of time. A close second was interest in a "hub" where you could bring items to be digitized by someone else. The idea of a regional hub with shared equipment was less well received. Geographic distance and the cost of transferring items were referenced as barriers for some to participate in this type of collaboration. For online access, a majority of responders were interested in the idea of contributing digital items to a more localized online system, either university-driven or a regional collaboration, rather than national. Organizations made clear that they are looking for trustworthy partners in collaboration. Several responders simply felt more comfortable with the materials staying in the community. One responder wrote, "We need collaborative options for making collections available online because we can never afford to have and maintain online collections ourselves. However, we want people to find us and be aware of us, for their support as members/donors or future visitors. So perhaps the further away from our location and community the materials go, the less visible we feel as a community resource" (Q31, Comment 46). Smaller organizations also want content available on their website in addition to a local repository. One survey responder wrote, "I want the records to be available at least locally, but the more people who have access the better" (Q31, Comment 1)! Conclusion The 2018 Survey of Digital Heritage Collections in Oregon documents a snapshot in time of existing digital heritage collections. 
A clear finding is that cultural heritage organizations in Oregon are actively digitizing their collections and have expressed an interest in working collaboratively. As a follow-up to the survey, the Heritage Commission is reaching out to individual organizations for more information and will continue to create basic tools and trainings that will be available through the Oregon Heritage MentorCorps program. The Heritage Commission shares the results of our survey with our partners and the library community in order to continue seeking collaborative solutions for stewarding Oregon's heritage collections, particularly looking to larger repositories to assist with the preservation of and access to smaller heritage collections. We know that small heritage organizations house unique collections that tell the story of our state. We also know that many small organizations do so with limited resources of time and money. The issue of capacity is well summed up by this survey comment, "We are great at digitizing, but it comes at the expense of our other collections work" (Q33, Comment 81). For a complete copy of the results, contact Beth Dehn.
Experimental Study of Compressive Strength And Flexural Strength of Standard Concrete by Using Metakaolin as Mineral Admixture This experimental study investigates the influence of metakaolin as a mineral admixture on the compressive strength and flexural strength of standard concrete (M-25). Metakaolin, a highly reactive pozzolanic material, was introduced as a partial replacement for cement by weight. Metakaolin is a valuable addition to many concrete mixes. It can improve the strength, durability, and resistance of concrete, making it a more versatile and sustainable material. Some of the key properties of metakaolin are summarized in the Introduction below. Introduction Concrete is one of the most widely used construction materials worldwide due to its versatility, affordability and durability (Ref 6). However, the production of conventional concrete contributes significantly to CO2 emissions, posing environmental concerns (Ref 7). Additionally, the depletion of natural resources like sand and gravel necessitates the exploration of sustainable alternatives. Enhancing the mechanical properties of concrete, such as compressive strength and flexural strength, has been a focal point of research to meet the increasing demands of the construction industry (Ref 9). One promising approach to improve these properties is the incorporation of mineral admixtures into the concrete mix. Metakaolin, a highly reactive pozzolanic mineral obtained by calcining kaolin clay, is a popular choice as a mineral admixture in concrete. It exhibits several advantageous properties, including: • Improved strength: Metakaolin reacts with calcium hydroxide in concrete to form additional C-S-H gel, leading to enhanced compressive and flexural strength. • Increased durability: Metakaolin refines the concrete microstructure, densifying the matrix and enhancing resistance to water penetration, chloride ingress, and chemical attack. • Reduced CO2 emissions: Replacing a portion of cement with metakaolin reduces the overall clinker content, thereby lowering the CO2 footprint of the concrete. • Sustainable alternative: Metakaolin utilizes industrial waste (kaolin clay) and reduces reliance on natural resources like sand and gravel. When used in appropriate proportions by weight of cement, metakaolin can significantly enhance the performance of standard concrete. This study aims to investigate the effects of metakaolin on the compressive strength and flexural strength of a standard concrete mix. The addition of metakaolin as a mineral admixture introduces several potential benefits. Metakaolin is known to enhance the pozzolanic reaction in concrete, leading to increased density and reduced porosity (Ref 12). This, in turn, can improve the mechanical properties, making the concrete more durable and resistant to cracking. Moreover, metakaolin can contribute to sustainability in construction by reducing the demand for cement, a material associated with significant carbon emissions (Ref 14). In this experimental study, M-25 grade concrete was prepared with varying percentages of metakaolin as a replacement for cement by weight. Compressive strength and flexural strength tests will be conducted to assess the concrete's ability to withstand loads. These tests will be essential in understanding how metakaolin affects the mechanical performance of the concrete. The study's findings will contribute valuable knowledge towards the development of sustainable concrete with improved mechanical properties and reduced environmental impact. It will provide insights into the optimal
replacement levels of cement with metakaolin for achieving desired performance characteristics, potentially paving the way for a more sustainable and eco-friendly construction industry. Coarse Aggregates - Angular, machine-crushed stone was used as coarse aggregate in two different fractions, i.e., 20 mm and 10 mm, whose specific gravities are 2.85 and 2.82 respectively. Water - Potable tap water free from chemical substances and suspended particles was used for mixing of concrete and curing of the concrete mix. Metakaolin - Metakaolin is a pozzolanic material that is formed by the calcination of kaolin clay, typically at temperatures between 600 and 850 degrees Celsius. The specific gravity of metakaolin is 2.6. Specimen Preparation - Cube molds of size 150 mm x 150 mm x 150 mm and beam molds of size 150 mm x 150 mm x 700 mm were prepared for casting. The molds were filled with the concrete mix containing metakaolin in varying proportions and compacted thoroughly using a vibrating table to remove air voids. The cube and beam molds were placed in the curing chamber for 24 hours, and the specimens were then cured following the standard curing procedures. After 7 and 28 days of curing, the cube specimens were carefully demolded and tested in a compressive testing machine, and after 28 days of curing, the beam specimens were carefully demolded and tested in a flexural testing machine using the center-point loading method. From the above results, it is observed that the concrete mix with 20% metakaolin gives higher strength after 28 days as compared to normal concrete. Conclusions The experimental study investigated the compressive strength and flexural strength properties of M-25 grade concrete using metakaolin as a mineral admixture by weight of PPC 33 grade cement. The study was conducted by casting specimens with 0%, 10%, 20% and 30% replacement levels of metakaolin. The methodology involved the collection of materials, casting of specimens, and testing and analysis of the properties of the concrete as per IS 10262:2019. Based on the above study, the outcomes are as follows: • The 7- and 28-day compressive strengths of the various mixes are shown in Graph 2, and the results are indicated in Table 8; the compressive strength of the concrete mix with 20% metakaolin is higher than that of the other percentages. • The 28-day flexural strength of the various mixes is shown in Graph 3, and the results are indicated in Table 8; the flexural strength of the concrete mix with 20% metakaolin is higher than that of the other percentages. • The study concluded that the use of 20% metakaolin as a partial replacement of PPC in concrete can increase its compressive strength by 5.34% and flexural strength by 1.02% as compared to conventional concrete. • Graph 4 and Graph 5 show the 28-day compressive strength and flexural strength for the different percentages of metakaolin, with three corresponding values for each percentage of metakaolin. Graph 4. 28 Days Compressive Strength Analysis. Graph 5. 28 Days Flexural Strength Analysis. In summary, the experimental study showed that the use of metakaolin as a mineral admixture in M-25 grade concrete can improve its compressive strength, flexural strength and durability. The study also highlighted the importance of testing and analyzing the properties of concrete to determine the effects of mineral admixtures on its strength properties.
Research Gap While the given research topic investigates the influence of metakaolin on the mechanical properties of concrete, there are potential research gaps that could be explored further: • Life Cycle Assessment (LCA): Conduct an LCA to compare the environmental footprint of metakaolin concrete with conventional concrete, considering factors like embodied energy, greenhouse gas emissions, and resource consumption. • Waste utilization: Explore the potential of using industrial waste products like rice husk ash or ground granulated blast furnace slag as partial replacements for metakaolin, promoting sustainable construction practices. • Additional Points: o The study could benefit from a more detailed literature review to identify existing research gaps and avoid redundancy. o Consider incorporating numerical modeling techniques like finite element analysis (FEA) to validate the experimental findings and predict the behavior of metakaolin concrete under different loading conditions. o Emphasize the practical implications of the research findings and provide recommendations for the potential implementation of metakaolin concrete in construction projects. By addressing these research gaps, the study can contribute valuable insights into the effectiveness of metakaolin as a mineral admixture and promote its wider adoption in the construction industry for sustainable and high-performance concrete applications. Acknowledgement The satisfaction and euphoria of the successful completion of any task would be incomplete without the mention of the people who made it possible, whose constant guidance and encouragement crowned our effort with success. It gives me enormous pleasure to express my most profound sense of gratitude and sincere thanks to my highly respected guide, Prof. Anubhav Rai, Head, Department of Civil Engineering, Gyan Ganga Institute of Technology and Science, for their valuable guidance, supervision, and encouragement throughout my work, which made this task pleasant. Figure 1. Metakaolin Powder. Mix design calculations (absolute volume method, per 1 m3 of concrete): Volume of cement = cement content / (specific gravity of cement x 1000) = 0.135 m3; Volume of water = water content / (specific gravity of water x 1000) = 0.178 m3; Volume of mineral admixture = mass of admixture / (specific gravity x 1000) = 0.000 m3; Volume of all-in aggregate = 1 - (volume of entrapped air + volume of water + volume of cement + volume of mineral admixture) = 0.677 m3; Mass of 20 mm aggregate = volume of all-in aggregate x blending proportion x coarse aggregate ratio x specific gravity x 1000 = 637.15 kg; Mass of 10 mm aggregate = volume of all-in aggregate x blending proportion x coarse aggregate ratio x specific gravity x 1000 = 515.82 kg; Mass of sand = volume of all-in aggregate x fine aggregate ratio x specific gravity x blending proportion x 1000 = 707.27 kg. Table 6 - Grading of Fine Aggregate. Table 7 - Test Results for Fine Aggregate. Mix Proportions (target mean strength of mix proportion): the trial mixes produced a homogeneous and cohesive concrete mix and achieved the specified strength at 7 and 28 days; the material quantities were tabulated by S.No., ingredient, quantity of material per m3 in SSD condition (kg), ratio, and material required per bag of cement (kg). The mix proportion for the M-25 concrete mix with 20% replacement of PPC by metakaolin is 1:2.16:3.52, and with 30% replacement of PPC by metakaolin it is 1:2.42:3.93.
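The mix-design figures listed above follow the absolute-volume bookkeeping used in IS 10262-style proportioning. As a worked illustration, the short Python sketch below reproduces that bookkeeping: it computes the target mean strength for M-25 and the aggregate masses per cubic meter. Only the water content of 178 kg (matching the 0.178 m3 above) and the specific gravities reported in the paper (2.85 for 20 mm aggregate, 2.82 for 10 mm aggregate, 2.6 for metakaolin) are taken from the text; the cement content, air content, aggregate fractions, blending split, cement and sand specific gravities, and the standard deviation of 4 MPa are assumptions for illustration, so the printed masses will not exactly match the paper's 637.15 kg, 515.82 kg, and 707.27 kg.

```python
# Illustrative sketch of IS 10262-style absolute-volume mix proportioning.
# Numeric inputs below are assumptions for demonstration unless they are
# explicitly reported in the paper (specific gravities, water content).

def target_mean_strength(fck_mpa: float, std_dev_mpa: float = 4.0) -> float:
    """Target mean strength = fck + 1.65 * sigma (IS 10262 approach)."""
    return fck_mpa + 1.65 * std_dev_mpa

def absolute_volume(mass_kg: float, specific_gravity: float) -> float:
    """Volume in m^3 occupied by a material of given mass and specific gravity."""
    return mass_kg / (specific_gravity * 1000.0)

def mix_masses(cement_kg, water_kg, admixture_kg,
               sg_cement, sg_admixture,
               air_fraction, coarse_fraction,
               blend_20mm, sg_20mm, sg_10mm, sg_sand):
    """Return masses (kg per m^3) of 20 mm aggregate, 10 mm aggregate, and sand."""
    v_cement = absolute_volume(cement_kg, sg_cement)
    v_water = water_kg / 1000.0                      # specific gravity of water taken as 1.0
    v_admix = absolute_volume(admixture_kg, sg_admixture)
    v_agg = 1.0 - (v_cement + v_water + v_admix + air_fraction)
    fine_fraction = 1.0 - coarse_fraction
    m_20 = v_agg * coarse_fraction * blend_20mm * sg_20mm * 1000.0
    m_10 = v_agg * coarse_fraction * (1.0 - blend_20mm) * sg_10mm * 1000.0
    m_sand = v_agg * fine_fraction * sg_sand * 1000.0
    return m_20, m_10, m_sand

if __name__ == "__main__":
    print("Target mean strength for M-25:", target_mean_strength(25.0), "MPa")  # 31.6 MPa
    # Hypothetical trial mix: only the specific gravities and water content below
    # come from the paper; the remaining inputs are placeholders.
    m20, m10, sand = mix_masses(cement_kg=380.0, water_kg=178.0, admixture_kg=0.0,
                                sg_cement=2.9, sg_admixture=2.6,
                                air_fraction=0.01, coarse_fraction=0.62,
                                blend_20mm=0.55, sg_20mm=2.85, sg_10mm=2.82,
                                sg_sand=2.65)
    print(f"20 mm aggregate: {m20:.1f} kg, 10 mm aggregate: {m10:.1f} kg, sand: {sand:.1f} kg")
```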
Effect of phosphorus dopant concentration on the carrier mobility in crystalline silicon This study investigated the effect of phosphorus dopant concentration on the mobility of crystalline silicon (c-Si). It considers different temperature ranges, from 100 K to 500 K, and dopant concentrations from 10^12 cm^-3 to 10^20 cm^-3 in relation to their effect on the mobility of the crystalline silicon. This study indicates that the mobility of phosphorus-doped silicon, n-type silicon, at different dopant concentrations, tends to decrease as the temperature is increased. On the other hand, the mobility of the doped semiconductor, at different temperatures, showed different trends as the dopant concentration increases: I) mobility decreased between 10^15 and 10^17 cm^-3, II) mobility saturates at doping concentrations less than 10^14 cm^-3, and III) mobility is not significantly affected by increasing the temperature for high doping concentrations of 10^18 to 10^20 cm^-3. The two mechanisms, lattice and impurity scattering, dominate one another depending on the doping concentration and temperature, and thus contribute to the dependence of mobility on temperature with different trends, consistent with the fundamental theory of doping in semiconductors. Based on the study, as the temperature gets higher for higher doping concentrations, the mobility limited by impurity scattering increases while that limited by lattice scattering decreases; the two cases balance one another, and as a result the mobility becomes almost constant, that is, the rate of change of mobility is relatively insignificant. INTRODUCTION Silicon is by far the dominant semiconductor material used in electronics and photonic devices. Since the birth of the semiconductor industry, it has been the key semiconductor material, and it is seen as the backbone of the electronics and photovoltaic industries (Lukasiak and Jakubowski, 2010). Its application in electronic devices, mainly in transistors, as well as in photonic and photovoltaic materials, has entirely revolutionized our lifestyle (Seto, 1985;Fourmond et al., 2011). In relation to its applications, the electronic properties have been deeply investigated in many studies, many of which are based on doping. These studies have made significant progress in the detailed understanding of the material and played a key role in advanced device applications. The studies of the electronic properties of the material are mainly based on p- and n-type doped crystalline silicon, and they mainly focus on conductivity/resistivity, mobility and related parameters. The mobility can refer to the majority and/or the minority carrier mobility, that is, electron and hole mobility (Cardona, 2010;2011). The carrier mobility is affected by scattering, and theoretically the scattering can be of the following types: i) phonon (lattice) scattering, ii) ionized impurity scattering, iii) scattering by neutral impurity atoms and defects, iv) carrier-carrier scattering, and v) piezoelectric scattering (Pierret, 2003). Of these, the charge carrier mobility is dominantly affected by lattice scattering and impurity scattering (Bulusu and Walker, 2008). Apparently, a higher mobility results in better device performance (Chan, 1994;Yacobi, 2003;Doering and Nishi, 2014).
As a result, mobility is particularly a key factor in the performance of electronic devices (Watanabe, 1999). A mobility model that takes into account the anisotropic scattering effects has been reported (Arora et al., 1982); in this framework, the impurity scattering mobility at a given temperature T is given by Equation (1), where N_I is the number of ionized impurity atoms and G(b) is a function given by Equation (2), with b given by Equation (3), where n' = n[2-(n/N)]; the model assumes the acceptor concentration to be zero, n being the electron density per cubic centimeter. Besides, a detailed review of the charge transport properties of silicon has been made (Jacoboni et al., 1977). Later, Arora et al. (1982) derived analytical relations for the electron and hole mobility in silicon as a function of concentration and temperature. In that study, it was noted that the lattice scattering mobility can be fitted very well by Equation (4). Thus, after taking into account both effects, μ_L and μ_I, the total charge carrier mobility can be calculated. These earlier studies made significant contributions to the understanding of charge carrier transport. However, the deeper details of charge transport are still not fully explained, and theoretical modeling has been improved over time to match experimental results, one example being Klassen mobility modeling. In particular, this modeling has not yet been fully used in detailed studies of the carrier mobility of phosphorus-doped silicon. Thus, the effect of dopant concentration, used to make p- or n-type Si, on the mobility of crystalline silicon is one key issue, and the case of a phosphorus dopant, examined with Klassen mobility modeling, is our scientific focus. Consequently, we focus our attention on the characteristics of the semiconductor material that can be altered significantly by the addition of impurities into the crystalline silicon, here n-type c-Si doped with phosphorus. In other words, the study aims at extended insight into the effect of phosphorus dopant concentration on the carrier mobility of crystalline silicon. The study examines the effect of temperature and doping concentration on the mobility of the n-type semiconductor. Here, the study considers the combined effect of both temperature and doping concentration and the related trends in the electronic properties of the material. Additionally, the two important factors affecting the mobility, lattice and impurity scattering, are discussed in detail in relation to the temperature and doping concentration. The study generally indicates that lattice and impurity scattering dominate one another, with different trends, depending on the doping concentration and the temperature; such a trend is well investigated in this study. RESULTS AND DISCUSSION Figure 1 shows the electron and hole mobility as a function of dopant concentration, over a range of 10^12 cm^-3 to 10^20 cm^-3, for phosphorus-doped crystalline silicon at room temperature (300 K). As seen from the figure, the charge carrier mobility of holes and electrons tends to decline as the doping concentration increases. Besides, the charge carrier mobilities of electrons and holes tend to be constant below 10^15 cm^-3 and above 10^19 cm^-3 dopant concentrations. Furthermore, it is evident that the carrier mobility of electrons is higher than that of holes. In an intrinsic crystalline semiconductor, e.g., crystalline silicon, the only factor that affects mobility is the temperature or phonon effect, and the mobility declines as the temperature rises.
However, in an extrinsic semiconductor, like phosphorus-doped c-Si, there are two factors or contributions affecting the charge carrier mobility, namely impurity scattering and lattice scattering (Beadle et al., 1995). Thus, in phosphorus-doped c-Si, the mobility is affected by these two factors: lattice and impurity scattering. As shown in Figure 1, the charge carrier mobility of electrons exceeds that of holes, which is due to the fact that the effective mass of electrons in c-Si is less than that of holes. Besides, the decline of charge carrier mobility with increasing doping concentration appears to be related to the increased impurity scattering, which induces more carrier (electron-electron) interactions and tends to reduce the charge carrier mobility. The mobility of electrons is about three times greater than that of holes for low doping concentrations, e.g., 10^16 cm^-3, while it is less than twofold at higher concentrations beyond 10^19 cm^-3. For low dopant concentrations, less than 10^16 cm^-3, the electron mobility declines with temperature due to phonon/lattice scattering, and the lattice scattering has a dominant effect, particularly at low temperature. The relatively high carrier mobility in the first regime, i.e., concentrations less than 10^15 cm^-3 and mainly at low temperatures, originates from the relatively low doping concentration. However, in the same regime and at the same doping concentration, the decrease of mobility with increasing temperature is due to the increase of lattice scattering, while the impurity scattering remains relatively the same. Besides, as the temperature gets higher for higher doping concentrations, the mobility limited by impurity scattering increases while that limited by lattice scattering decreases; the two cases balance one another, and as a result the mobility as a function of temperature becomes relatively constant and the rate of change of mobility is relatively insignificant (Figure 3). Additionally, for a doping concentration less than 10^15 cm^-3 the mobility of electrons is almost constant and is primarily limited by phonon scattering, that is, the rate of change of electron mobility is insignificant (Figure 3). However, for higher doping concentrations, above 10^17 cm^-3, the carrier mobility is dominantly hindered by the impurity scattering, and the electron mobility is significantly influenced even in cases with low temperature. Here, it is important to note that at very high doping concentrations, dominated by impurity scattering, the semiconductor almost reaches its lowest mobility range and enters a regime in which temperature no longer brings a relatively significant change in charge carrier mobility, and thus the mobility appears to be constant. The effect of temperature on carrier mobility is insignificant at higher doping concentrations, and correspondingly the effect of doping concentration is also insignificant at higher temperatures. Generally, as the temperature gets higher, the electron mobility decreases, since lattice vibration increases with increasing temperature. At lower temperatures, impurity scattering dominates and it is governed as T^(-3/2) (Masetti et al., 1983). As a result, as the temperature increases, impurity scattering increases and the mobility decreases. To get further understanding, the electron mobility as a function of temperature was evaluated at different doping concentrations ranging from 10^12 cm^-3 to 10^20 cm^-3. Correspondingly, the temperature range was from 100 K to 500 K.
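To make this temperature and concentration sweep concrete, the minimal Python sketch below evaluates an Arora-type empirical electron-mobility fit over the same ranges (100-500 K, 10^12-10^20 cm^-3). The functional form and the coefficient values (88, 1252, 1.26e17, 0.88 and the temperature exponents) are those commonly quoted for the Arora et al. (1982) fit in the device-modeling literature; they are assumptions here and may differ from the exact expressions and constants used in this paper, so the sketch is only a qualitative illustration of the trends, not a reproduction of the paper's figures.

```python
# Minimal sketch: Arora-type empirical fit for electron mobility in Si,
# mu(N, T) = mu_min(T) + mu_0(T) / (1 + (N / N_ref(T))**alpha(T)).
# Coefficients are commonly quoted values and are assumptions here,
# not constants taken from this paper.

def electron_mobility_arora(n_dopant_cm3: float, temperature_k: float) -> float:
    tn = temperature_k / 300.0
    mu_min = 88.0 * tn ** -0.57             # cm^2/(V*s)
    mu_0 = 1252.0 * tn ** -2.33             # cm^2/(V*s)
    n_ref = 1.26e17 * tn ** 2.4             # cm^-3
    alpha = 0.88 * tn ** -0.146
    return mu_min + mu_0 / (1.0 + (n_dopant_cm3 / n_ref) ** alpha)

if __name__ == "__main__":
    dopings = [1e12, 1e15, 1e17, 1e19]        # cm^-3
    temperatures = [100, 200, 300, 400, 500]  # K
    print("N (cm^-3)  " + "  ".join(f"{t:>7d} K" for t in temperatures))
    for n in dopings:
        row = "  ".join(f"{electron_mobility_arora(n, t):9.1f}" for t in temperatures)
        print(f"{n:9.0e}  {row}")
    # Expected qualitative trends: mobility drops steeply with temperature at low
    # doping (lattice scattering dominates) and is much flatter at high doping,
    # where impurity scattering limits the mobility.
```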
At a given temperature, the electron mobility tends to decrease as the doping concentration increases (Figure 4). This effect is particularly noticeable at low temperature, mainly for temperatures less than 300 K. However, at higher temperatures, greater than 350 K, the effect of concentration on the charge carrier mobility is insignificant. The phenomena mentioned above confirm that the effect of doping concentration on carrier mobility is relatively insignificant at higher temperatures, which is consistent with the finding in Figure 2. It is evident that lattice/phonon and impurity scattering dominate roughly above and below 400 K, respectively. For the first case, the electron mobility as a function of temperature is studied and presented in Figure 5. CONCLUSION In this study, we observed that the electron mobility changes strongly over short temperature ranges and finally saturates at low doping concentrations. The mobility of crystalline silicon decreases as the dopant concentration increases, and this effect is pronounced at low temperature, particularly at 100 K. Furthermore, as the temperature increases the carrier mobility decreases, and such a trend is more observable at low doping concentrations, particularly at 10^12 cm^-3. The study generally indicates that lattice and impurity scattering dominate one another, with different trends, depending on the doping concentration and the temperature.
Effects of long-term supplementation of probiotics on cognitive function and emotion in temporal lobe epilepsy Cognitive impairment and neuropsychiatric disorders are very common in patients with temporal lobe epilepsy (TLE). These comorbidities complicate the treatment of epilepsy and seriously affect the quality of life. So far, there is still no effective intervention to prevent the development of epilepsy-associated comorbidities. Gut dysbiosis has been recognized to be involved in the pathology of epilepsy development. Modulating gut microbiota by probiotics has shown an antiseizure effect on humans and animals with epilepsy. Whether this treatment strategy has a positive effect on epilepsy-associated comorbidities remains unclear. Therefore, this study aimed to objectively assess the effect of probiotics on cognitive function and neuropsychiatric performance of patients with TLE. Participants enrolled in an epilepsy clinic were randomly assigned to the probiotic and placebo groups. These two groups were treated with probiotics or placebo for 12 weeks, and then the cognitive function and psychological performance of participants were assessed. We enrolled 76 participants in this study, and 70 subjects were finally included in the study (35 in the probiotics group and 35 in the placebo group). Our results showed significant seizure reduction in patients with TLE treated with probiotics. No significant differences were observed in cognitive function (including intelligence and memory) between groups. For neuropsychiatric performances, supplementation of probiotics significantly decreased the Hamilton Anxiety Rating and Depression Scale scores and increased the 89-item Quality of Life in Epilepsy Inventory score in patients with TLE. In conclusion, probiotics have a positive impact on seizure control and improve anxiety, depression, and quality of life in patients with TLE.
KEYWORDS probiotics, cognitive function, temporal lobe epilepsy, supplementation, neuropsychiatric disorders Introduction Epilepsy is a very common disease of the central nervous system, affecting approximately 70 million people worldwide (1). Despite the emergence of multiple antiepileptic drugs (AEDs) for the treatment of epilepsy, approximately one in three patients develop drug-resistant epilepsy (DRE) (2). Temporal lobe epilepsy (TLE) is the most common type of epilepsy and is prone to develop into DRE. Because abnormal brain discharges originate focally in the temporal lobe and limbic system, which are involved in cognitive function and emotion, patients with TLE often suffer from different degrees of cognitive impairment and neuropsychiatric disorders (3)(4)(5)(6). These comorbidities severely affect patients' quality of life and burden the family and society (7). Although various drugs targeting a series of comorbidities have emerged, there is still no effective intervention to prevent comorbidity development in patients with epilepsy. Therefore, understanding the pathophysiology of comorbidities associated with epilepsy may help develop therapeutic interventions. The gut microbiota has gradually been recognized to play an important role in the pathology of epilepsy development (8-10). The gut-brain axis is a bidirectional communication pathway connecting the central nervous and enteric nervous systems, and it modulates neural, immunological, and hormonal pathways to balance the body (11,12). Gut dysbiosis changes the levels of neurotransmitters, metabolites, and activities in the gut and affects glial function, neuroinflammation, myelination, blood-brain barrier permeability, and neurotransmission, leading to various neurological disorders (8, 13). Recently, a series of studies showed alterations of the gut microbiome in patients with DRE, such as increased relative abundance of Firmicutes and Proteobacteria and decreased Bacteroidetes and Actinobacteria, suggesting that gut dysbiosis may be involved in the pathogenesis of epilepsy (14,15). Modulation of gut microbiota may be a potential therapeutic strategy for epilepsy. Another study provided evidence that supplementation with probiotics in patients with DRE showed a positive impact on seizure control (16). Interestingly, a recent study observed that functional gastrointestinal disorders related to microbiota-gut-brain axis dysregulation were significantly associated with the temporal lobe (17). Considering the role of the temporal lobe in cognitive function and emotion, there may be some relationship between microbiota-gut-brain axis dysregulation and neurobehavioral comorbidities. However, there is still a lack of relevant research on whether modulating the gut microbiota has a therapeutic effect on epilepsy-associated comorbidities. Probiotics, regarded as living microorganisms without known harmful side effects, provide health benefits to animals and humans by interacting with the intestinal microbiome (18). Modulation of Bifidobacterium spp. and Lactobacillus spp. has been suggested as an effective therapeutic strategy for the treatment of DRE (14). BIFICO capsules, containing Bifidobacterium longum, Lactobacillus acidophilus, and Enterococcus faecalis, have been widely used in China for more than 20 years (19). Enterococcus spp. increases the colonization of Bifidobacterium spp.
and provides an anaerobic environment suitable for Bifidobacterium spp. L. acidophilus produces some growth factors that promote the proliferation of Bifidobacterium spp. The combination of these three strains maximizes the prebiotic effect for maintaining gut microbiota balance (20). In this study, we aimed to investigate the effect of BIFICO on cognitive function and emotional symptoms in patients with TLE, to provide guidance for treating epilepsy-related comorbidities and thereby improving quality of life. Participants and inclusion and exclusion criteria Subjects were recruited from the epilepsy clinic of the neurology department at Capital Medical University Affiliated Beijing Friendship Hospital from January 2020 to December 2021. This was a double-blind, randomized controlled study with an experimental and a placebo control group. This study was approved by the Human Research Ethics Committee of the Capital Medical University Affiliated Beijing Friendship Hospital. All enrolled patients signed an approved informed consent document after meeting the inclusion criteria. The inclusion criteria were as follows: (1) aged 50-75 years; (2) TLE diagnosis; (3) seizures that occurred at least twice a month for ≥2 years before entering the study; (4) absence of epilepsy induced by other causes, including encephalitis, stroke, brain tumors, diabetes, or metabolic syndrome; (5) absence of generalized motor seizures, mental motor seizures, or other idiopathic syndromes; (6) no history of neurological or psychiatric disorders; and (7) an ability to read and understand the study documents, as evaluated by the investigators. The exclusion criteria were as follows: (1) use of topiramate or phenobarbital, which affect cognition; (2) use of other probiotics or yogurts with live or immune-enhancing supplements within the past 3 months; (3) use of antibiotics or anti-inflammatory therapy within the past 3 months. We assessed 160 participants for eligibility, and 76 were enrolled in this study according to the inclusion and exclusion criteria (Figure 1). The 76 enrolled participants were equally and randomly assigned to the probiotic and placebo groups. During the 12 weeks of treatment, three participants from the placebo group dropped out of the study: one due to worsened seizures, one due to the long distance involved, and one was lost to follow-up. Additionally, three participants from the probiotic group dropped out in this period: one discontinued due to headaches, and two were lost to follow-up. Therefore, this study eventually included 70 participants (35 in the probiotic group and 35 in the placebo group). Management and procedures Participants in the probiotic group were assigned to take two capsules after breakfast and dinner per day for 12 weeks (each capsule contained B. longum, L. acidophilus, and E. faecalis; every living bacterium had >1 × 10⁷ colony-forming units). In the placebo group, each capsule contained only 210 mg of starch. The probiotic products and placebos could not be distinguished by package, color, taste, or smell (provided by Shanghai Xinyi Pharmaceutical Co., Ltd., Shanghai, China). Antiseizure treatment for the participants was not changed during the treatment period for either group. All participants were evaluated for seizure frequency, cognitive function, anxiety, depression, and quality of life, before and after the intervention. All participants, investigators, and researchers were blinded throughout the study period. After the study was completed, all data were used for statistical analysis.
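The 1:1 random allocation described above can be illustrated with a short script. This is a generic sketch only; the seed, participant IDs, and the exact shuffling procedure are hypothetical and do not represent the trial's actual allocation method.

```python
import random

def allocate_1_to_1(participant_ids, seed=42):
    """Illustrative 1:1 allocation into probiotic and placebo arms.

    Generic sketch: the seed and IDs are hypothetical, not the trial's
    actual randomization procedure.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"probiotic": ids[:half], "placebo": ids[half:]}

# Example: 76 enrolled participants split into two equal arms.
groups = allocate_1_to_1(range(1, 77))
print(len(groups["probiotic"]), len(groups["placebo"]))  # 38 38
```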
Outcome assessments Demographic information included sex, age, education years, and body mass index (BMI, kg/m²). The clinical features included seizure frequency, epileptic history, seizure focus location, and number of AEDs. The number of episodes in the month before enrollment was regarded as the baseline, and the number of episodes in the third month of probiotic/placebo treatment was regarded as the endpoint. We evaluated the change in seizure frequency between the baseline and the endpoint. The Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) was used to assess cognitive function (21). The WAIS-IV contains ten subtests that contribute to four cognitive spheres: the verbal comprehension index (VCI), perceptual reasoning index (PRI), working memory index (WMI), and processing speed index (PSI). Full-scale IQ (FSIQ) was calculated from all subtests. The scores of the four cognitive spheres and the FSIQ were transformed into an index according to the test manual (M = 100, SD = 15). The Wechsler Memory Scale-Fourth Edition (WMS-IV) was used to assess participants' memory performance in the probiotic and placebo groups (22). This scale consists of five subtests that produce five indices: the auditory memory index (AMI), visual working memory index (VWMI), visual memory index (VMI), immediate memory index (IMI), and delayed memory index (DMI). According to the test manual, the scores for the five indices were transformed into an index (M = 100, SD = 15). The 89-item Quality of Life in Epilepsy Inventory (QOLIE-89) was used to measure quality of life. The total score of this inventory provides an estimate of overall health-related quality of life. Statistical analyses All data were analyzed using the Statistical Package for Social Sciences (version 21; IBM, Armonk, NY) and GraphPad Prism (version 9; GraphPad Software, San Diego, CA). The Shapiro-Wilk test was used to verify the normal distribution of continuous variables. Data on general characteristics and epileptic information were expressed as means and standard deviations and analyzed using Student's t-tests. Categorical data were analyzed by the chi-square test and Fisher's exact test. Significant differences were set at a p-value < 0.05. Within-patient contrasts were analyzed using the Mann-Whitney U test and adjusted using the Bonferroni correction (p < 0.005 for the cognitive measures and p < 0.008 for the neuropsychiatric measures). Correlations between variables were assessed using the Spearman test. Group differences in the WAIS-IV and WMS-IV index scores On the WAIS-IV, as shown in Table 2, there were no significant differences between the probiotic and placebo groups in the four cognitive indices and FSIQ at baseline. After 3 months of intervention, the scores of the probiotic group on the PSI, WMI, and FSIQ were higher than those of the placebo group, but the differences were not significant. Similarly, on the WMS-IV, the participants in the probiotic and placebo groups were at the same level on the five indices at baseline (Table 2). At the endpoint, there were no significant differences between the probiotic and placebo groups on the AMI, VWMI, VMI, IMI, and DMI. Group differences in the HAMA, HAMD, and QOLIE-89 scales Neuropsychological investigations of the participants are shown in Table 3. There were no significant differences between the probiotic and placebo groups in the baseline HAMA, HAMD, and QOLIE-89 scores.
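The nonparametric comparisons and correlations described above can be reproduced with standard SciPy routines. The sketch below is illustrative only: the score vectors are invented, not the study data, and the thresholds simply restate the Bonferroni adjustment quoted in the text.

```python
import numpy as np
from scipy import stats

# Hypothetical endpoint HAMA scores for the two groups (not the study data).
probiotic = np.array([9, 7, 12, 10, 8, 11, 6, 13, 9, 10])
placebo = np.array([14, 12, 16, 13, 15, 11, 17, 12, 14, 13])

# Mann-Whitney U test for a group contrast on one scale.
u_stat, p_raw = stats.mannwhitneyu(probiotic, placebo, alternative="two-sided")

# Bonferroni adjustment as stated in the text: p < 0.05 / 6 ≈ 0.008
# for the neuropsychiatric measures.
alpha_adjusted = 0.05 / 6
print(f"U = {u_stat:.1f}, p = {p_raw:.4f}, significant: {p_raw < alpha_adjusted}")

# Spearman correlation between seizure reduction and change in a scale score
# (again, invented numbers for illustration only).
seizure_reduction = np.array([3, 5, 2, 6, 4, 1, 7, 2, 5, 4])
hama_change = np.array([-4, -7, -2, -8, -5, -1, -9, -3, -6, -5])
rho, p_rho = stats.spearmanr(seizure_reduction, hama_change)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```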
After 3 months of intervention, the participants treated with probiotics showed a significant reduction in the HAMA (9.54 ± 5.51 vs. 13.57 ± 6.16, p = 0.003) and HAMD (11.83 ± 5.49 vs. 15.23 ± 5.56, p = 0.006) scores, as well as an increase in the QOLIE-89 score (60.29 ± 14.01 vs. 51.91 ± 13.20, p = 0.006), compared with those treated with placebo. In addition, an analysis of the effects of seizure reduction on these scales using the Spearman test showed that, among participants treated with probiotics, seizure reduction was negatively correlated with the HAMA (r = −0.775, p < 0.001, Figure 3A) and HAMD (r = −0.696, p < 0.001, Figure 3B) scores and positively correlated with the QOLIE-89 score (r = 0.840, p < 0.001, Figure 3C). Discussion This study was a prospective examination of the effect of probiotics on cognitive function and neuropsychiatric manifestations, using comprehensive tests in an adjunctive trial in patients with TLE. This study supports the assertion that supplementation with probiotics effectively increases the antiseizure effect and improves intelligence, memory impairment, anxiety, depression, and low quality of life to a certain extent. The role of the gut-brain axis in epilepsy has been gradually recognized. A study observed elevated α-diversity in the composition of gut microbiota in patients with DRE, especially those with 4 or more seizures per year (14). Linear discriminant analysis showed increases in the relative abundance of Firmicutes and decreases in Bacteroides in patients with DRE (14). Another study examined fecal microbiota in healthy children and in those with DRE, found differences in fecal microbial β-diversity rather than α-diversity between groups, and also observed an increased relative abundance of Firmicutes and Proteobacteria and a decreased abundance of Bacteroidetes and Actinobacteria in infants with DRE (26). Modulation of the gut microbiome may be a potential strategy for treating epilepsy. (Table note: the WAIS-IV and WMS-IV scores are presented as age-adjusted index scores, M = 100, SD = 15; "Z" indicates the Mann-Whitney U test; the Bonferroni correction was applied to determine statistical significance. Figure 3C shows the correlation between changes in QOLIE-89 scores and seizure reduction.) A case study showed that seizures were controlled in a patient diagnosed with Crohn's disease and epilepsy after receiving fecal microbiota transplantation (27). Ketogenic diets have an antiseizure effect in DRE, and the possible mechanism involves changes in microbiota, microbial interactions, and variance in neurotransmitter and neuroactive peptide levels (15,28,29). Probiotics have been recognized as another promising therapy in epilepsy. In an open-label study, 28.9% of patients with DRE displayed more than 50% seizure reduction following treatment with a cocktail of L. plantarum, L. acidophilus, L. helveticus, L. casei, B. lactis, Streptococcus salivarius, and L. brevis (16). Lactobacillus and Bifidobacterium spp. were found to have positive impacts on neurological disorders and psychological diseases (18,30,31). Modulating the levels of Lactobacillus and Bifidobacterium spp. could therefore constitute a potential therapeutic strategy for epilepsy (14). BIFICO is a cocktail probiotic capsule consisting of B. longum, L. acidophilus, and E. faecalis. In this study, we observed that supplementation with BIFICO improved seizure frequency in patients with TLE.
Current studies have indicated that gut microbiota-derived metabolites and cellular components maintain brain homeostasis. Gut dysbiosis caused by any insult results in disturbance of metabolites and neurotransmitters involved in neural regulation, including 5-hydroxytryptamine, tryptophan, glutamine, gamma-aminobutyric acid (GABA), histamine, short-chain fatty acids, lipopolysaccharides, branched-chain amino acids, bile acids, and catecholamines (13). Abnormalities in these molecules further affect the function of glia, synaptic pruning, myelination, and the blood-brain barrier, which are closely related to seizure susceptibility (10,13). Supplementation with probiotics may therefore restore brain homeostasis by balancing the gut microbiota. Cognitive impairment is one of several comorbidities of epilepsy. Gut dysbiosis has been found to be involved in diseases with cognitive impairment. Bialecka et al. (32) showed that variance in Bacteroidetes and Firmicutes was correlated with mild cognitive impairment, dementia, and Alzheimer's disease (AD), suggesting that alterations in these phyla leading to gut dysbiosis may contribute to cognitive impairment. Probiotics might be an adjustable intervention for cognitive impairment. Razaeiasi et al. (33) observed that treatment with probiotics (L. acidophilus, B. bifidum, and B. longum) significantly improved spatial learning and memory in rats with AD. Recently, an animal study showed that bacteriotherapy attenuated seizure activity and partially improved spatial learning and memory in pentylenetetrazole-induced kindled rats (34). This may be attributable to the improvement in the antioxidant/oxidant ratio and the increased level of GABA stimulated by beneficial bacterial strains in the gut microbiota (34). However, in this human study, supplementation with probiotics was not observed to significantly improve cognitive function. Some individual participants may have shown obvious improvement in intelligence or memory, but there was no statistically significant difference. It is necessary to expand the sample size for further analysis. In addition, in animal experiments, timely intervention may effectively prevent the occurrence of cognitive dysfunction in epilepsy, possibly because the pathological changes in the temporal lobe that affect cognition have not yet formed. In the clinical setting, however, pathological changes related to cognitive deficits may already exist in patients with TLE. At this stage, it is difficult to reverse these pathological changes, and cognitive deficits may therefore not improve. More experiments are needed for verification in the future. Patients with epilepsy have a higher prevalence of anxiety and depression (35). Previous studies have shown that anxiety and depression are risk factors for refractory epilepsy and have suggested a pathological association between neuropsychiatric comorbidities and uncontrolled seizures (36). This special pathological link highlights great challenges in the intervention of epilepsy-associated depression and anxiety. Recently, the gut microbiota was found to be involved in the development of neuropsychiatric disorders (31, 37). Lactobacillus and Bifidobacterium spp. are regarded as psychobiotics with positive impacts in patients with depression and anxiety (38,39). In this study, we observed that measurements of anxiety, depression, and low quality of life in patients with TLE were significantly improved by supplementation with BIFICO probiotics. These changes were closely related to seizure control.
We speculate that there are two possible reasons for this: first, the improvement in anxiety and depression may only contribute to seizure reduction, and second, probiotic-modulated changes in metabolism and neurotransmitters such as dopamine, serotonin, noradrenaline, and GABA may have an effect on TLE pathology (40,41). This study had some limitations. First, the sample size was small, and future studies are needed to expand the sample size to verify our results. Second, we conducted our research based on previous studies and did not analyze the metabolic and neurotransmitter effects of probiotics on epilepsy-associated comorbidities. In the future, more studies are needed to explore the potential mechanism of the positive impact of probiotics and to build a pathological link between the gut microbiome, epilepsy, and epilepsy-associated comorbidities. Third, clinical seizures were used in this study as the observation index to evaluate whether the improvements in cognitive function and neuropsychiatric disorders in patients with TLE treated with probiotics were affected by seizure reduction. The impact of interictal activity was not observed in this study and needs to be evaluated and discussed in future studies. Conclusions In conclusion, the results of this study suggested that BIFICO probiotics have a positive effect on seizure control, anxiety, depression, and quality of life in patients with TLE. This study indicated that BIFICO probiotics were beneficial to patients with TLE as an adjuvant therapy. Data availability statement The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. Ethics statement The studies involving human participants were reviewed and approved by the Human Research Ethics Committee of the Capital Medical University Affiliated Beijing Friendship Hospital. The patients/participants provided their written informed consent to participate in this study. Author contributions XW wrote the draft of this article. RM and XL collected and analyzed the data. YZ revised the manuscript and gave the final approval. All authors contributed to the article and approved the submitted version.
2022-07-21T15:13:35.364Z
2022-07-19T00:00:00.000
{ "year": 2022, "sha1": "0f7dd5b8fc1d2a8a00374ad3858448ef2fd6f1ac", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2022.948599/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c8c1d5c87cafc6b2065b7c1aeb733164be973461", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [] }
259909137
pes2o/s2orc
v3-fos-license
Diagnosis and Management of Neonatal Hypoglycemia: A Comprehensive Review of Guidelines Hypoglycemia represents one of the most frequent metabolic disturbances of the neonate, associated with increased morbidity and mortality, especially if left untreated or diagnosed after the establishment of brain damage. The aim of this study was to review and compare the recommendations from the most recently published influential guidelines on the diagnosis, screening, prevention and management of this common neonatal complication. Therefore, a descriptive review of the guidelines from the American Academy of Pediatrics (AAP), the British Association of Perinatal Medicine (BAPM), the European Foundation for the Care of the Newborn Infants (EFCNI), the Queensland Clinical Guidelines-Australia (AUS), the Canadian Pediatric Society (CPS) and the Pediatric Endocrine Society (PES) on neonatal hypoglycemia was carried out. There is a consensus among the reviewed guidelines on the risk factors, the clinical signs and symptoms of NH, and the main preventive strategies. Additionally, the importance of early recognition of at-risk infants, timely identification of NH and prompt initiation of treatment in optimizing the outcomes of hypoglycemic neonates are universally highlighted. All medical societies, except PES, recommend screening for NH in asymptomatic high-risk and symptomatic newborn infants, but they do not provide consistent screening approaches. Moreover, the reviewed guidelines point out that the diagnosis of NH should be confirmed by laboratory methods of BGL measurement, although treatment should not be delayed until the results become available. The definition of NH lacks uniformity and it is generally agreed that a single BG value cannot accurately define this clinical entity. Therefore, all medical societies support the use of operational thresholds for the management of NH, although discrepancies exist regarding the recommended cut-off values, the optimal treatment and surveillance strategies of both symptomatic and asymptomatic hypoglycemic neonates as well as the treatment targets. Over the past several decades, ΝH has remained an issue of keen debate as it is a preventable cause of brain injury and neurodevelopmental impairment; however, there is no clear definition or consistent treatment policies. Thus, the establishment of specific diagnostic criteria and uniform protocols for the management of this common biochemical disorder is of paramount importance as it will hopefully allow for the early identification of infants at risk, the establishment of efficient preventive measures, the optimal treatment in the first hours of a neonate’s life and, subsequently, the improvement of neonatal outcomes. Introduction Neonatal hypoglycemia (NH) is the most common neonatal metabolic disturbance [1] and constitutes a leading cause of term admission to neonatal units worldwide [2]. Its incidence is estimated to be 5-15% in otherwise healthy neonates [3,4]. The definition of clinically significant hypoglycemia remains one of the most controversial issues in contemporary neonatology, as blood glucose (BG) concentration is not routinely measured in healthy asymptomatic infants who may experience transient hypoglycemia as part of their normal adaptation to extrauterine life [5]. Thus, the normal range of blood glucose levels (BGL) in the first 48 h of life is yet to be determined [1]. 
Delayed diagnosis, as well as the suboptimal management of NH, is associated with adverse short- and long-term sequelae in the offspring; acute brain injury, visual-motor impairment, executive dysfunction and neurodevelopmental impairment have been reported [6][7][8]. It is worth noting that, despite the fact that several studies and clinical trials have attempted to identify the BGL considered to be safe and to provide a valid estimate of the effect of neonatal hypoglycemia on neurodevelopment [9], evidence from the current literature does not support a specific concentration of BG that can potentially result in acute or chronic irreversible neurologic damage, and neither the duration nor the severity of NH can accurately predict permanent neurological damage [6]. Although occasions where NH is severe enough to cause long-term neurodevelopmental harm, with subsequent significant costs for the family, the patients and the health systems, are rare [10], clinicians should implement practices to prevent harm stemming from failure to recognize or treat NH whilst eliminating unnecessary interventions and admissions to neonatal units and, therefore, avoiding the pointless separation between the mother and the neonate. To date, there is insufficient and inconclusive evidence regarding the definition and treatment protocols of NH, leading to significant discrepancies in the existing guidelines. Thus, the development of international evidence-based algorithms for the early identification, the effective prevention and the successful management of clinically significant low BGL seems to be of paramount importance and will hopefully drive favorable neonatal outcomes. The aim of this descriptive review was to synthesize and compare recommendations from influential guidelines on the diagnosis and management of neonatal hypoglycemia. Evidence Acquisition The most recently published guidelines by influential medical societies on NH were retrieved and a descriptive review was conducted. In particular, six guidelines were identified from: the American Academy of Pediatrics (AAP 2011) [11], the British Association of Perinatal Medicine (BAPM 2017) [12], the European Foundation for the Care of the Newborn Infants (EFCNI 2018) [13], the Queensland Clinical Guidelines-Australia (AUS 2019) [14], the Canadian Pediatric Society (CPS 2020) [15] and the Pediatric Endocrine Society (PES 2015) [16]. An overview of recommendations is presented in Table 1 (risk factors and clinical signs of NH) and Table 2 (screening, diagnosis and management of NH), respectively. Of note, five of the reviewed guidelines focus mostly on transitional NH in the immediate postnatal period; however, the recommendations made by PES mainly address the subject of persistent NH, including the diagnosis and management of disorders causing recurrent or prolonged hypoglycemia that persists or occurs beyond the first 72 h of life. Definition of Neonatal Hypoglycemia Many healthy infants experience transient hypoglycemia as part of their normal adaptation to extrauterine life, resulting from the discontinuation of nutrients due to the separation from the placental circulation [5]. This leads to a transient reduction in BGL beginning at 1 to 2 h after birth, known as "physiologic" hypoglycemia (as low as 30 mg/dL (1.6 mmol/L) according to the AAP and BAPM or 20-25 mg/dL (1.1-1.4 mmol/L) according to EFCNI and AUS).
The lowest point is usually reached in the first 2 to 4 h of life; at 4 to 6 h, the BGL usually stabilize at 2.5-4.4 mmol/L (45-79 mg/dL) [17]. Glucose is the major oxidative fuel of the brain; however, this transient, asymptomatic form of hypoglycemia can be relatively easily compensated through the production of alternative sources of energy, such as ketone bodies released from fat. After the first 2 postnatal hours, the glucose concentration begins to rise, mainly due to endogenous production (glycogenolysis and gluconeogenesis) rather than feeding. This is the result of a mild and transient form of hyperinsulinism where the mean threshold of BGL for the suppression of insulin secretion is lower in newborn babies (55-65 mg/dL (3.0-3.6 mmol/L)) than in older infants and children (80-85 mg/dL (4.4-4.7 mmol/L)) [18]. The mechanism responsible for glucose-stimulated insulin secretion matures with age, resulting in an increase in the mean threshold of BGL, which, by 72 h of age, is similar to that in older infants and children [18]. It is common for healthy, breast-fed newborns to present low BGL (<36 mg/dL (2 mmol/L)) during the first 24 h of life [19] without abnormal clinical signs or symptoms. A randomized controlled trial, called "The Sugar Babies Study", which enrolled 514 infants of 35-42 gestational weeks, younger than 48 h old and identified as at risk for NH, found that 51% of babies became hypoglycemic (BGL < 47 mg/dL (2.6 mmol/L)) and 19% had severe hypoglycemia (BGL < 36 mg/dL (2.0 mmol/L)). The majority of the hypoglycemic ones, i.e., 79%, showed no clinical signs [3]. Given these facts, defining a clinical diagnosis of NH is crucial to provide guidance for when and whether therapy should be initiated. If any infant shows clinical manifestations compatible with significantly low BGL, such as apnea, jitteriness and seizures, the plasma glucose (PG) or BG concentration should be measured immediately. The AAP and PES support measuring PG levels to define hypoglycemia, while the BAPM, EFCNI, AUS and CPS recommend whole BGL measurement. PG values tend to be higher than whole blood glucose levels by approximately 10-18% (AAP), 10-15% (BAPM, EFCNI), 15% (PES) or 10% (CPS), because the concentration of water in the plasma is higher than in whole blood [18]. However, the definition of NH lacks uniformity among the reviewed guidelines. First, although all societies divide newborns into two groups depending on their postnatal age, to make a distinction between transient and persistent NH, the AUS, BAPM and PES use a cutoff of 48 h, while CPS and EFCNI draw the line at 72 h of age. The PES guidelines are based not only on the neonate's age but also on the presence or absence of a known or suspected hypoglycemic congenital disorder, as they mostly address the matter of evaluation and management of persistent NH. Furthermore, the CPS recommends a different cutoff of glucose levels in transient (within the first 72 h of life) than in persistent NH (beyond the first 72 h of life), as the former is defined by BGL lower than 2.6 mmol/L (47 mg/dL) (also endorsed by AUS and AAP), while the latter is defined by BGL lower than 3.3 mmol/L (59 mg/dL). The definition of persistent NH given by the EFCNI is consistent, i.e., NH lasting beyond 72 h of postnatal life.
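Because the guidelines quote thresholds interchangeably in mmol/L and mg/dL, and distinguish plasma from whole-blood values, the following minimal sketch illustrates the standard conversions. The 12% plasma offset used here is only a mid-range illustration of the 10-15% difference quoted above, not a validated correction, and the helper names are ours.

```python
MGDL_PER_MMOL = 18.0  # standard conversion factor for glucose

def mmol_to_mgdl(mmol_per_l: float) -> float:
    """Convert a glucose concentration from mmol/L to mg/dL."""
    return mmol_per_l * MGDL_PER_MMOL

def whole_blood_to_plasma_equivalent(whole_blood_mgdl: float,
                                     offset: float = 0.12) -> float:
    """Approximate plasma-equivalent value from a whole-blood reading.

    The 12% offset is an illustrative mid-range of the 10-15% difference
    quoted by BAPM/EFCNI; it is not a validated clinical correction.
    """
    return whole_blood_mgdl * (1.0 + offset)

# Example: the commonly cited 2.6 mmol/L threshold expressed in mg/dL.
print(round(mmol_to_mgdl(2.6)))                     # ~47 mg/dL
print(round(whole_blood_to_plasma_equivalent(47)))  # ~53 mg/dL plasma equivalent
```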
In contrast, the BAPM guidelines define transient NH (during the first 48 h of life) by BGL between 1.0 and 1.9 mmol/L (18-34 mg/dL) documented on one or two occasions, whereas persistent NH (beyond the first 48 h of life) is defined by BGL lower than 2.0 mmol/L (36 mg/dL) on more than two occasions. The AUS and PES also propose the cut-off point of 48 h to distinguish transitional from persistent hypoglycemia and the AUS describes recurrent NH as BGL below 2.6 mmol/L (47 mg/dL) on more than three occasions in a row. The definition of severe NH is also controversial. More specifically, the BAPM mentions that NH should be characterized as severe when BGL are <1.0 mmol/L (18 mg/dL), while the AUS suggests a definition of BGL < 1.5 mmol/L (27 mg/dL), BGL not recordable or symptomatic hypoglycemia. This distinction has implications on management as transient NH in the absence of associated clinical manifestations does not require further investigation [20], while severe and persistent NH should prompt urgent medical attention and additional investigations because it may be the first sign of a severe metabolic disorder, like hyperinsulinemic hypoglycemia or hypopituitarism [21]. On the other hand, the term "clinical hypoglycemia" is used by the PES and AUS guidelines to describe the concentration of PG that is low enough to cause brain injury [22]. Screening for Neonatal Hypoglycemia There is no consensus regarding the exact timing when screening should be performed (AAP). Data regarding both the optimal timing and time intervals for screening blood glucose are limited and it remains controversial whether it is necessary to screen the at-risk newborns who do not present any signs or symptoms of NH during the time that BGL reach their normal lowest point (approximately within 1-2 h after delivery) [23]. Furthermore, the evidence supporting routine screening for NH of asymptomatic infants who have no risk factors for hypoglycemia, after a non-complicated pregnancy and delivery, is insufficient. Five of the reviewed guidelines (AAP, BAPM, EFCNI, AUS, CPS) provide guidance for the screening of NH. They all agree that screening for NH should be performed only for infants with suspected or well-established risk factors for developing hypoglycemia; any infant with abnormal feeding behavior, absence of feeding cues or any other clinical manifestations should be promptly screened for NH at any time; in fact, screening is recommended within minutes, not hours, of the appearance of symptoms and with a duration and frequency of BGL testing that depend on individualized risk factors. With regard to the initial screening, BAPM, EFCNI and AUS support that the optimal time for screening of asymptomatic, at-risk neonates is just before the second feed (practically no longer than 2-4 h after delivery) provided that the newborn is offered feeding within the first hour after birth. On the contrary, according to AAP and CPS, the recommended time for screening high-risk infants is 30 min after the first feed (practically up to 2 h of age) followed by intervention with feeding or IV glucose depending on the glucose values. The AAP and CPS agree with the BAPM and AUS on the timing of the initial feed, which should be offered to the neonate within the first hour after delivery. Regarding the subsequent BGL measurements, after the initial screening of asymptomatic at-risk infants, all five medical societies agree that measurements should be performed prior to feedings. 
Breast milk or formula feedings should be offered to newborns every 2-3 h or more frequently. Furthermore, the AUS and BAPM guidelines suggest a second BGL screening before the third feed and no later than six hours (AUS) or eight hours (BAPM) of age. However, the subsequent steps differ. More specifically, according to the AUS, if BGL is within the normal range (≥2.6 mmol/L, >47 mg/dL), screening should continue to be performed before every second feed (every three to six hours depending on feeding frequency) for 24 h. On the contrary, if the second BGL measurement is above 2.0 mmol/L, the BAPM proposes no further glucose measurements, unless signs or symptoms indicative of hypoglycemia are present, and only recommends observation for 24 h, providing continuous support of breastfeeding. According to the CPS, testing should also be performed one or two times during the second day of life, to ensure that the BGL remain above 2.6 mmol/L (47 mg/dL), whereas the AAP suggests repeated testing prior to feedings after the first 24 h of age only if PG values remain lower than 45 mg/dL (2.5 mmol/L). Additionally, the AAP and the CPS agree upon continuing measurements through multiple feed-fast cycles depending on the risk factors of each newborn. On the one hand, small-for-gestational-age (SGA) and late-preterm neonates should be screened for at least the first 24 h before each feeding (every 2-3 h); in addition, if the BGL remain above 2.6 mmol/L (47 mg/dL), screening should be discontinued [24]. On the other hand, large-for-gestational-age neonates and those of diabetic mothers should be screened only for the first 12 h after birth, with the same cut-off glucose value used for discontinuing measurements. This difference in the duration of BGL screening is based on studies showing that IDM and LGA infants are more likely to become hypoglycemic by 12 h after the birth, in contrast to preterm and SGA infants, who usually develop asymptomatic NH within 24 h [24][25][26]. Diagnosis of Neonatal Hypoglycemia Diagnosing NH using a single glucose value is neither feasible nor simple [19]. Thus, monitoring, managing and preventing NH remain highly pressing issues [27]. According to the AUS, CPS and AAP, the generally adopted PG concentration cut-off for otherwise healthy infants is 47 mg/dL (2.6 mmol/L). More specifically, the CPS guideline refers to the existence of four approaches to the diagnosis of NH based on the following aspects: 1. the neonate's clinical condition; 2. epidemiological data from studies on exclusively breastfed, appropriate-for-gestational-age (AGA), term infants and their measured BGL [4,21,28]; 3. the presence or absence of normal physiological responses to NH; and 4. the presence or absence of brain injury and long-term sequelae. However, as stated by AAP, there is no robust scientific justification for the generally adopted cut-off of blood glucose for NH in all infants (47 mg/dL, 2.6 mmol/L) [23,28] and the normal range of blood glucose concentration in neonates depends on various factors, such as their birthweight, gestational age, clinical manifestations, energy sources and metabolic demands. 
The reasons that make it difficult to form and adopt a substantial, evidence-based definition for NH and an accurate value for BG that requires intervention in all neonates are the frequent co-existence of other severe medical conditions and the lack of evidence on the levels of BG and the duration of NH that can cause brain injury and long-term neurological sequelae, alone or in concert with comorbidities [4,22]. This is why the approach of the "operational threshold" has been introduced by a panel of experts that convened in 2000 [4] and has been endorsed by all six medical societies to guide interventions intended to restore BGL. An operational threshold constitutes the concentration of BGL (either plasma or whole blood) that should raise awareness of physicians to consider intervention based on evidence available in the current literature, distinguishing between the BG value that requires action and the target BGL that interventions aim for [4]. This "operational threshold" approach has been widely adopted for all neonates at risk of impaired metabolic adaptation and adverse outcome, but the threshold values for whole BG or PG for the diagnosis of NH and consequent intervention remain a matter of keen debate. Thus, according to BAPM, the most important threshold concentrations at which clinicians should consider intervention include: 1. a BG value < 1.0 mmol/L (<18 mg/dL) at any time, 2. a single value < 2.5 mmol/L (45 mg/dL) in a neonate with abnormal clinical signs, and 3. a value < 2.0 mmol/L (36 mg/dL) that remains that low in a subsequent measurement, in case of a newborn with one risk factor for impaired metabolic adaptation but not presenting any abnormal clinical signs and/or symptoms. These thresholds are higher when it comes to symptomatic newborn infants with recurrent or persistent hyperinsulinemic hypoglycemia (HH). In such cases, therapeutic levels of 3.0 mmol/L (54 mg/dL) or more are suggested [12]. According to AUS, any neonate with BGL < 1.5 mmol/L or an unrecordable measurement, as well as any symptomatic neonate, requires urgent management and further investigation, while the value used as an operational threshold is BGL below 2.6 mmol/L (47 mg/dL) in all at-risk neonates. The PES recommends PG levels to be kept >2.8 mmol/L (50 mg/dL) during the first 48 h of postnatal life and >3.3 mmol/L (60 mg/dL) after 48 h for high-risk neonates without a suspected congenital hypoglycemic disorder. The same operational threshold for blood glucose but in a different time window (after 72 h of life) is recommended by the CPS guidelines, while for the first 72 h postpartum, the CPS suggests the threshold glucose value of 2.0 mmol/L, below which further management is required. The PES recommends that the operational threshold for neonates with a suspected congenital or confirmed hypoglycemic disorder is higher, as in such cases the PG must be maintained >70 mg/dL (3.9 mmol/L), in contrast with 3.0 mmol/L suggested by the BAPM and 3.3 mmol/L by the AUS. Moreover, PES defines the considered-to-be-normal PG values for neonates as 55-65 mg/dL in the first 48 h of age and 70-100 mg/dL for older ones. The AAP recommends operational thresholds for PG concentration in high-risk newborns that differ depending on the hours of age: 25-40 mg/dL (1.4-2.2 mmol/L), 35-45 mg/dL (1.9-2.5 mmol/L) and 45 mg/dL (2.5 mmol/L), from birth to 4 h of life, from 4-24 h of life and after 24 h of life, respectively.
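For illustration only, the age-dependent AAP operational thresholds quoted above can be written as a simple lookup. This is a restatement of the text, not a clinical decision tool; the function name and structure are ours.

```python
def aap_operational_threshold(age_hours: float) -> str:
    """Restate the AAP operational PG thresholds for at-risk newborns,
    by postnatal age, exactly as quoted in the text. Illustrative only;
    not a clinical decision tool."""
    if age_hours < 4:
        return "25-40 mg/dL (1.4-2.2 mmol/L)"  # birth to 4 h of life
    if age_hours <= 24:
        return "35-45 mg/dL (1.9-2.5 mmol/L)"  # 4-24 h of life
    return "45 mg/dL (2.5 mmol/L)"             # after 24 h of life

for age in (2, 12, 36):
    print(f"{age} h: {aap_operational_threshold(age)}")
```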
The AAP also recommends intervention for all neonates with clinical signs and a PG concentration less than 40 mg/dL. Finally, the EFCNI, adopts the operational threshold approach on guiding interventions and clinical decisions based on glucose values approved by professionals in all maternity and neonatal units; however, they underline the profound controversy among recommendations of different organizations, due to the lack of evidence-based data on cerebral damage provoked by NH [29]. Thus, the EFCNI does not specifically define NH, only stating that BGL as low as 1.0 mmol/L (18 mg/dL) are associated with acute neurological impairment [9,23]. Diagnostic Methods of Neonatal Hypoglycemia The accurate measurement of BGL is crucial for the diagnosis and treatment of NH. Therefore, the optimal methods of BGL assessment are discussed in all guidelines reviewed. Blood glucose levels are usually measured using chemical strips or bedside handheld glucose meters (non-enzymatic methods) and most of the time they are not validated using laboratory diagnostic tests [15,30]. However, the accuracy of bedside reagent test-strip glucose analyzers is limited, especially in the low range of BG concentrations. This low range is defined as 10-15 mg/dL (0.6-0.8 mmol/L) by the PES, and as 0-36 mg/dL (0-2.0 mmol/L) by the BAPM and EFCNI, whereas no specific values are provided by the other societies. It is also crucial to keep in mind that the neonatal packed cell volume (PCV) could be a cause of inaccuracy in handheld glucometers due to the fact that they do not auto-correct for this variable. Samples with high PCV can generate falsely low glucose values and vice versa [12]. Moreover, even though only few devices that measure true whole BG values by rupturing red blood cells are available, most handheld test-strip glucometers report results that demonstrate a reasonable correlation with PG concentrations and that are considered to be "PG equivalents". Whole BG and PG levels may vary up to 10 to 20 mg/dL, but the gap becomes wider at low glucose concentrations. These are the reasons why these point of care methods are not reliable enough to be used as the sole method for NH screening [30,31], as highlighted by all six guidelines. More specifically, the AAP, PES, CPS and AUS guidelines state that the initial screening could be performed using "rapid" bedside tests (including handheld reflectance colorimeter and electrode methods validated for neonatal samples), to prevent any delay for the rapid diagnosis and initiation of treatment, provided that the clinician is aware of their limited accuracy. Capillary samples obtained from a warmed heel can be used for screening, as agreed by all these guidelines. However, due to the limitations of these handheld glucometer devices, before establishing a diagnosis of NH, glucose concentration (plasma or whole blood) must be confirmed using laboratory enzymatic methods (glucose oxidase, hexokinase and dehydrogenase methods). According to AAP, although not rapidly available, laboratory testing is the most accurate method for BGL measuring. 
The AUS specifies that, if the initial screening of BGL is <2.6 mmol/L (47 mg/dL) in neonates with clinical manifestations compatible with hypoglycemia or with risk factors for NH, glucose values should be validated using point-of-care diagnostic tests (such as enzymatic handheld glucometers with glucose oxidase or glucose dehydrogenase methodology, if available), blood gas analyzers or laboratory enzymatic methods (in a fluoride oxalate tube, if feasible to be performed immediately). The same diagnostic methods are recommended by the AUS, in case of initial BGL < 2.0 mmol/L (36 mg/dL), in all newborn infants. As delineated by the AUS, AAP and CPS guidelines, treatment should not be delayed while waiting for the results to be confirmed using a laboratory test, especially for severe, persistent or recurrent NH [4]. Additionally, the CPS guideline mentions another diagnostic method for NH, continuous glucose monitors (CGMs), which, however, have numerous limitations that question their accuracy; the development of other more promising and more accurate point-of-care devices for bedside glucose measurement may improve the screening methods for NH. On the contrary, the BAPM and EFCNI state that blood gas analyzers are quick, widely available and accurate for measuring BG values. Furthermore, they report the glucose result as a "PG equivalent" concentration, which in most cases is similar to the result obtained from a laboratory enzymatic diagnostic method. Thus, blood gas biosensors are considered to be the gold standard in the screening of NH, as they support real-time clinical decision making and they could be set up to provide a 'glucose only' reading on a tiny neonatal blood sample [32]. If handheld glucometers are used (necessarily compliant with the specific ISO15197:2013 standard), it is highly important for clinicians to remember their limited accuracy at low BGL and to confirm their results with more accurate techniques to ensure that hypoglycemic infants are assigned to the optimal care pathway. As stated by the BAPM, a laboratory confirmation may not be practical, not only because of the delay in obtaining results but also due to inconsistency of the results, caused by variability in the inhibition of glycolysis in fluoride oxalate tubes. Lastly, a new technology, currently under development, based on transdermal, minimally invasive, constant and accurate blood sugar measurements provided by biosensors is discussed in the BAPM guidelines as a very promising tool for future research [33]. Prevention There is general agreement on the basic principles of NH prevention among the BAPM, EFCNI, AUS and CPS guidelines. These include the following: 1. the antenatal or immediate postnatal identification of all at-risk infants; 2. the avoidance of cold stress and hypothermia, ideally by providing skin-to-skin contact with the mother; 3. the early and timely energy provision and feeding support; 4. the regular BGL monitoring at predetermined times with accurate devices that provide results with no delay; 5. the constant observation of both the feeding behavior and the overall clinical condition of the neonate; and 6. a thorough discussion with the parents regarding the neonate's feeding and well-being. The BAPM, EFCNI and AUS guidelines describe these principles in detail.
On the other hand, the AAP does not mention any measures for the prevention of NH, the CPS focuses on the neonate's feeding standards to prevent NH, and the PES only refers to disorders with persistent NH, such as hyperinsulinism, in which the main goal of prevention is trying to avoid recurrent episodes of hypoglycemia that may increase the risk of subsequent, possibly unrecognized hypoglycemic episodes. Clinicians should keep in mind that early recognition is vital to avoid serious health disorders and improve outcomes. First, the risk factors for NH must be identified at birth to provide meticulous care and extra support to the newborns. More specifically, the AUS highlights that preterm infants of ≤35 gestational weeks should be admitted to neonatal units and receive special care by managing other possible co-existing clinical conditions, ensuring thermal care and providing early and frequent feeds, assisted with gavage if needed or indicated for neonates not nippling well (AAP, AUS, BAPM). Additionally, a thorough and regular assessment of the neonate's clinical condition when awake is important. The general appearance, muscle tone, body measurements, body malformations or deformations (indicative of a syndrome potentially responsible for NH), skin color, body temperature (normal range within 36.5-37.5 °C measured via the axilla), level of consciousness, response to external stimuli, respiratory and heart rate and all feeding cues should be evaluated [10]. Abnormal feeding behaviors that should raise awareness and call for action include not waking for meals, not latching at the breast, not sucking effectively and appearing unsettled. The BAPM and AUS guidelines point out that when signs or symptoms suggestive of NH make their appearance, BGL should be immediately measured and a pediatrician or a neonatal nurse practitioner should be called for assistance and further guidance. Moreover, the BAPM, EFCNI and AUS thoroughly describe all the steps that should be followed to prevent hypothermia of the at-risk neonate, including the use of a hat, the avoidance of cold draughts, a warm ambient temperature and immediate skin-to-skin contact with the mother, while the CPS suggests that the first bath should be delayed for at-risk infants, as this has been found to decrease the incidence of NH [34]. The crucial role of the parents in the monitoring and management of infants at risk for impaired metabolic adaptation is highlighted by three of the reviewed guidelines (BAPM, CPS and EFCNI). They point out that parents should participate actively in the care pathway of at-risk neonates, being aware not only of the reasons behind their newborns' requirement of extra care and why they undergo regular blood testing for measuring BGL, but also of all the signs and symptoms that could indicate hypoglycemia. Thus, parents can learn about the importance of early energy provision and help physicians with BG monitoring. If risk factors for NH are known before delivery, health care providers should communicate with the parents to inform them antenatally. The BAPM suggests that this information should be given to parents in both verbal and written form, while the EFCNI suggests giving this information only verbally. The BAPM, EFCNI, AUS and CPS note that breast milk is the optimal source of energy for all neonates during their postpartum metabolic adaptation.
The early initiation of feeds plays a significant role in preventing NH, and it should be ensured that the neonate is offered the breast within the first 60 min (BAPM) or 30-60 min (AUS) of life [10,35]. Efficient support should be provided to all mothers to make them feel capable of initiating and establishing effective breastfeeding and to enable them to recognize both early feeding cues and signs of effective attachment. Feeding effectiveness should be assessed at each feed, and breastfeeding should be offered at least 8-10 times in 24 h, according to feeding cues. As stated by the BAPM, there should be no gap of more than three hours between meals until BGL exceed 2 mmol/L (36 mg/dL) on two or more consecutive measurements [12]. The main goal is to cover the neonate's energy demands as much as possible using breast milk or expressed colostrum/breast milk. In formula-fed infants, the timing of the initial feed and the time intervals between feedings are practically the same. The AUS guideline supports that complementary feeds are not required in the first 24 h of life, unless one BGL measurement is <2 mmol/L (36 mg/dL) or two or more BGL values are <2.6 mmol/L (47 mg/dL), whereas it mentions that if formula feeding is chosen, meals should be up to 60-75 mL/kg/day for at-risk newborns. In cases where complementary feeds are required, a minimum of 7.5 mL/kg/feed should be provided [10]. The CPS guidelines differ in that they suggest supplementing feeds with breast milk or a breast milk substitute; the total volume of both oral and IV intake should not exceed 100 mL/kg/day so as to avoid fluid overload and serum electrolyte disorders. This medical society also highlights the importance of continuing to feed high-risk infants regularly, while continuing to measure BGL prior to meals, as well as the use of a pump to achieve slow feeding (breast milk or formula) rather than bolus feeding. Management of Asymptomatic Neonatal Hypoglycemia The goals of managing NH are as follows: first, to identify at-risk newborns and newborns with serious underlying hypoglycemic disorders [36]; second, to correct BGL; and third, to avoid unnecessary treatment of normal transitional NH, which will likely resolve without intervention [37]. It is crucial to keep in mind that the treatment of hypoglycemia is a stepwise process depending on the presence or absence of symptoms and signs and on the infant's response at each step. All of the reviewed guidelines highlight the importance of recognizing and treating asymptomatic NH early and agree on the main principles of management, which are as follows: 1. the antenatal or immediate postpartum identification of risk factors, 2. the provision of thermal care, 3. the early energy provision and feeding support, 4. the regular monitoring of BGL and infusion of IV dextrose when necessary, and 5. the effort not to interrupt the mother-infant relationship and breastfeeding when possible. For asymptomatic newborns at risk, the AAP suggests a treatment plan that is divided into two time periods, up to 4 h of age and between 4 and 24 h of age. An initial feed should be offered to all neonates within the first hour of age and an initial screen of BGL should be performed 30 min after the first feed. If the PG is <25 mg/dL (1.3 mmol/L), another feed with a repeat PG check within one hour is recommended, and if PG remains <25 mg/dL, IV glucose administration is indicated (glucose dose 200 mg/kg, 2 mL/kg dextrose 10% D/W).
If the PG is between 25 and 40 mg/dL (1.3-2.2 mmol/L), another attempt to feed may be made before progressing with glucose administration [38]. For newborns aged 4 to 24 h, feeding every 2-3 h (after the initial feed) and PG measurements prior to each feed are recommended. If PG is <35 mg/dL (1.9 mmol/L) in one sample, it is suggested to refeed and recheck the PG concentration within 1 h. If PG remains <35 mg/dL, intravenous glucose should be administered (same dose as before). However, if PG is between 35 and 45 mg/dL (1.9-2.5 mmol/L), active support of feeding should continue before the initiation of treatment with IV dextrose solution. According to the BAPM and AUS guidelines, at-risk neonates should be placed in two care pathways based on their first pre-feed BGL. For the BAPM, the first cut-off point is BGL between 1.0 and 1.9 mmol/L (18-34 mg/dL). The BAPM suggests that when BGL are between 1.0 and 1.9 mmol/L (18-34 mg/dL) and no clinical manifestations are present, the administration of 40% oral dextrose gel (dose of 200 mg/kg) should be considered as part of the feeding plan, alongside breastfeeding or formula feeding, if the mother chooses so. The AUS recommendations for at-risk asymptomatic infants with BGL 1.5-2.5 mmol/L (27-45 mg/dL) and the CPS recommendations for at-risk infants with BGL < 2.6 mmol/L (47 mg/dL) agree with those of the BAPM, as a dose of 40% dextrose gel is suggested to be given buccally (dose of 0.5 mL/kg, equivalent to 200 mg/kg) in conjunction with oral feedings. The EFCNI also aligns with the aforementioned guidelines on this matter, as it is generally stated that oral dextrose gel may be considered as an adjunct to a feeding plan in high-risk newborns. This oral 40% dextrose gel of 0.5 mL/kg provides a dose of 200 mg/kg glucose, which is equivalent to the intravenous bolus glucose dose of 2 mL/kg of the 10% DW solution. Its administration is indicated only in late preterm and term infants (CPS) or neonates > 35 weeks of gestational age (BAPM, AUS) during the first 48 h after delivery, with a maximum of six doses during this period of time (AUS, BAPM). The "Sugar Babies" study, which is described in the CPS and BAPM guidelines, assessed the effectiveness of dextrose oral gel treatment over feeding alone in hypoglycemic neonates and showed that therapy with dextrose gel leads to significantly lower treatment failure rates compared to placebo. The buccal gel has also been found to reduce the number of NICU admissions due to NH, alongside the need for supplementation with formula at 2 weeks of age [39]. In fact, if glucose gel administration is followed by immediate breastfeeding, the quality of subsequent breast feeds is improved [40]. However, although it decreases the need for IV glucose administration, it cannot achieve the complete avoidance of IV therapy [39]. Furthermore, according to the BAPM, BG should be measured again prior to the third feed and no later than 8 h of age, and if BGL fail to rise above 2 mmol/L (36 mg/dL), another cycle of oral dextrose gel and feeding should be repeated. A re-check of BGL is also recommended by the AUS (30 min after the first dose of oral dextrose gel), and a subsequent dose of dextrose gel is considered safe to administer if the BGL remain between 2.0 and 2.5 mmol/L (36-45 mg/dL).
Similarly, according to the CPS, BGL should be remeasured 30 min post-feed, and if they remain between 1.9 and 2.6 mmol/L (34-47 mg/dL), another loop of 40% oral dextrose gel (same dosage) followed by enteral supplementation (breastfeeding or formula feeding) and a further glucose measurement 30 min after feeding is recommended. On the contrary, if BGL are <1.9 mmol/L (34 mg/dL) (CPS), <1.0 mmol/L (18 mg/dL) (BAPM) or <1.5 mmol/L (27 mg/dL) (AUS), the initiation of an IV glucose infusion at hourly requirements (10% DW) is strongly advised without repeating the loop of oral dextrose gel-breastfeeding/formula feeding/EBM. In addition, if more than two measurements between 1.0 and 1.9 mmol/L have been documented or if two consecutive doses of 40% glucose gel have been given, the neonatal team should be informed to investigate possible causes of NH and to exclude other disorders that mimic hypoglycemia, like sepsis. Admission to the Neonatal Intensive Care Unit (NICU) is required (BAPM, AUS) in such cases. An increase in the feeding frequency and the insertion of a nasogastric tube should also be considered, and IV glucose administration (10% DW) is also suggested at this point. It is important to remember that buccal dextrose gel can be used as first-line treatment for hypoglycemia, allowing the infant-mother relationship not to be interrupted, avoiding NICU hospitalization and improving the chances of effective breastfeeding after discharge [39]. Additionally, as stated by the BAPM, if BGL are >2.0 mmol/L, breastfeeding or formula feeding and/or EBM should continue to be offered, glucose should be measured again prior to the next feed, and if BGL remain >2.0 mmol/L (after two consecutive pre-feed BG measurements) and no clinical manifestations are present, it is advised that BG measurements be discontinued. According to the AUS, the conditions under which cessation of BGL monitoring is indicated are as follows: (a) BGL ≥ 2.6 mmol/L or ≥3.3 mmol/L for 24 h, within or beyond the first 48 h of life, respectively, (b) the neonate is feeding effectively, and (c) the neonate is asymptomatic and has not required IV glucose. For neonates who were treated with IV dextrose but are now feeding well and have not received IV glucose during the past 12 h, monitoring should be ceased when BGL exceed 3 mmol/L for two successive measurements. The CPS suggests ceasing pre-feed glucose monitoring when two consecutive BG samples are above 2.6 mmol/L and the neonate fully tolerates enteral feeds. Management of Symptomatic Neonatal Hypoglycemia The appearance of hypoglycemic clinical signs and symptoms constitutes a red flag for the urgent initiation of therapy because severe, prolonged, symptomatic hypoglycemia may result in neuronal injury [38,41]. First, a laboratory confirmation of the low BGL must always be performed before starting IV treatment, according to the AAP, BAPM and AUS, because it is essential for both the identification and the optimal management of hypoglycemia. However, therapy should not be delayed while waiting for laboratory results. Blood samples during the hypoglycemic period should be collected to perform further diagnostic evaluation [42]. The AAP recommendations for symptomatic infants with BGL < 40 mg/dL (2.2 mmol/L) involve immediate IV glucose treatment, either as an IV bolus glucose dose of 200 mg/kg (2 mL/kg of 10% DW) or as an IV glucose infusion of 80-100 mL/kg of 10% DW per day, to maintain PG concentrations between 40 and 50 mg/dL (2.2-2.7 mmol/L).
The CPS guideline agrees with this approach of immediately treating symptomatic infants, or infants who cannot be fed orally, with an IV infusion of 10% DW or a bolus IV glucose administration (dose of 2 mL/kg over 15 min) when BGL are lower than 1.8 mmol/L. The administration of a bolus dose at the start of glucose infusion therapy is believed to stabilize BGL more rapidly. The PES instructions also align with this treatment for any episode of severe symptomatic hypoglycemia, namely IV dextrose infusion at an initial dose of 200 mg/kg, followed by infusion of 10% DW at a maintenance rate. A response to the intravenous administration of glucose is expected within the next 30 min, and this should be confirmed in a timely manner [43]. The recommendations of the EFCNI and BAPM on symptomatic hypoglycemia or newborns presenting with very low glucose levels (<1.0 mmol/L, 18 mg/dL) are consistent, as they suggest that in such cases infants should be treated with IV glucose as an initial bolus of 2.5 mL/kg of 10% DW (instead of 2 mL/kg of 10% DW) as soon as possible, followed by a glucose infusion of 60 mL/kg of 10% DW per day (instead of 80-100 mL/kg/day). The AUS recommendation for the initial IV bolus glucose dose in symptomatic newborns, or when BGL fall below 1.5 mmol/L (27 mg/dL), is 1-2 mL/kg of 10% DW, followed by re-measurement of BGL within the next 30 min and, if necessary, a further bolus glucose dose of 1 mL/kg IV while monitoring for rebound hypoglycemia. The IV glucose infusion rate should commence at 60 mL/kg/day of 10% DW. The AUS also gives instructions for treating newborns with BGL between 1.5 and 2.5 mmol/L who are not feeding well (symptomatic newborns). In such cases, one dose of 40% oral dextrose gel should be given, a neonatal nurse practitioner or a pediatrician should be informed, a lactation consultant should be notified and BGL should be re-measured within 30 min. If the BGL are between 2.0 and 2.6 mmol/L, a second dose of 40% oral dextrose gel can be administered and breastfeeding or formula feeding and/or EBM should be continued. If the BGL are <2 mmol/L, the neonate must be admitted to the NICU in order to initiate IV treatment. There is a consensus among the reviewed guidelines that for the management of symptomatic NH, intravenous access should be obtained (peripheral or central). The AUS points out that in case the required IV glucose infusion concentration is more than 12%, an umbilical venous catheter or central line should be inserted; however, the CPS questions previous data that dictated the need for a central vein for glucose solutions with a concentration ≥ 15% and supports the integrity of peripheral veins with dextrose concentrations up to 20%, based on a randomized controlled trial of 121 hypoglycemic newborns which showed that 20% and 15% glucose solutions can be infused equally safely into peripheral veins in neonates [44]. Nevertheless, in case IV access cannot be established easily or immediately, two alternatives are proposed as urgent interventions: 40% dextrose gel 200 mg/kg (equivalent to 0.5 mL/kg) administered orally via buccal massage (BAPM), or an intramuscular injection of glucagon 200 microgram/kg (BAPM, AUS, CPS). It is important, however, to keep in mind that if the BGL are <1.0 mmol/L, the buccal dextrose gel should only be used as an interim measure while trying to establish an IV line [45]. The continuation of treatment is based on the regular assessment of the neonate's clinical condition and on BGL monitoring.
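The weight-based doses quoted in this and the preceding paragraphs reduce to the same glucose load, and the mmol/L and mg/dL figures are linked by a single conversion factor. The short Python sketch below is offered purely as a reading aid for those numbers; the helper names and the 3 kg example weight are illustrative assumptions, not part of any guideline, and the snippet is not clinical software.

```python
# Illustrative arithmetic for the figures quoted above; not clinical software.

def mmol_to_mgdl(mmol_per_l: float) -> float:
    """Glucose unit conversion: 1 mmol/L ~= 18 mg/dL."""
    return mmol_per_l * 18.0

def glucose_mg_per_kg(volume_ml_per_kg: float, dextrose_percent: float) -> float:
    """A p% dextrose preparation contains p * 10 mg of glucose per mL."""
    return volume_ml_per_kg * dextrose_percent * 10

assert round(mmol_to_mgdl(2.6)) == 47        # the 2.6 mmol/L threshold in mg/dL
assert glucose_mg_per_kg(2.0, 10) == 200     # IV bolus: 2 mL/kg of 10% DW
assert glucose_mg_per_kg(0.5, 40) == 200     # buccal gel: 0.5 mL/kg of 40% gel
assert glucose_mg_per_kg(2.5, 10) == 250     # the larger BAPM/EFCNI bolus

# Absolute volumes for a hypothetical 3 kg neonate:
weight_kg = 3.0
bolus_ml = 2.0 * weight_kg              # 6 mL of 10% DW (CPS: given over 15 min)
infusion_ml_day = 80 * weight_kg        # 240 mL/day at the lower AAP infusion rate
print(bolus_ml, round(infusion_ml_day / 24, 1))   # 6.0 mL bolus, 10.0 mL/h
```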
The PES, AAP and EFCNI guidelines do not discuss in detail the next steps of the neonate's ongoing management, whereas the BAPM, AUS and CPS recommendations agree that if the first intervention fails to raise BGL, a stepwise increase in glucose supply may be necessary. The AUS recommends that the glucose infusion volume should be increased by 20 mL/kg/day, without exceeding a total daily fluid intake of 100 mL/kg on the first day of life, to prevent fluid overload. The concentration of the IV dextrose solution could also be increased from 10% DW to 12% or higher, keeping in mind the necessity to always measure BGL after any change to the glucose concentration. The same applies to the increase in the glucose delivery rate proposed by the BAPM (described as a rise of 2 mg/kg/min), achieved either by increasing the volume or the concentration of the IV dextrose solution. At this point, these medical societies agree that if the glucose infusion rate (GIR) is higher than 8 mg/kg/min in the first 24 h after delivery (or, according to the BAPM, if the BGL is <2.0 mmol/L on more than two measurements during the first 48 h of life), a clinical suspicion of hyperinsulinism should be raised and treatment with glucagon should be commenced. BGL should be measured again within the next 30 min. According to the BAPM, if the BGL remain <1.0 mmol/L or there are abnormal clinical signs, another cycle of treatment should be repeated with an IV bolus of 10% DW (2.5 mL/kg), followed by an increase in the glucose infusion delivery rate and re-measurement of BGL 30 min afterwards. If the BGL are between 1.0 and 2.5 mmol/L with no abnormal clinical manifestations, it is suggested that the GIR be increased by 2 mg/kg/min without another IV bolus dextrose administration, and feedings should continue unless there are contraindications. If the BGL are >2.5 mmol/L, a slow and gradual weaning of the IV infusion should start and the enteral feeds should also continue. It is necessary to continue BGL monitoring until the infant is on full enteral feeds and the BGL are >2.5 mmol/L (or 3.0 mmol/L in cases of hyperinsulinism) for several feed-fast cycles during the first 24 h of life. Alternative Treatments The use of alternative medications for the management of NH in cases where BGL do not normalize after the administration of IV glucose or 40% buccal dextrose gel is addressed by the CPS, PES, BAPM and AUS guidelines. The decision for a long-term therapy for hypoglycemic disorders (either persistent or recurrent) should be made in consultation with an experienced neonatologist, a pediatric endocrinologist or a pediatric metabolic specialist in cases where either the glucose infusion rate is very high (>10 mg/kg/min according to the CPS or >8 mg/kg/min according to the AUS) or glucose infusions fail to maintain the BGL at acceptable levels (more than two blood sugar measurements of 1.0-1.9 mmol/L during the first 48 h postnatally, according to the BAPM; failure to maintain BGL greater than 2.6 mmol/L up to 48 h of age or 3.3 mmol/L after the first 48 h, according to the AUS). Blood samples for further investigations (such as serum cortisol and insulin) should be collected immediately, while the newborn remains hypoglycemic and before administering any medications, because recurrent or persistent NH may be the first sign of an underlying disorder associated with the metabolism of glucose, such as hyperinsulinism, disorders leading to cortisol and growth hormone deficiency and inborn errors of metabolism [42,46].
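The thresholds in the preceding paragraph (a GIR above 8 mg/kg/min raising suspicion of hyperinsulinism, stepwise increases of 2 mg/kg/min, and volume or concentration adjustments) all rest on the standard conversion GIR (mg/kg/min) = dextrose % × 10 × volume (mL/kg/day) / 1440. A minimal sketch of that formula follows; the example volumes are taken from figures quoted in this review, and the function name is an illustrative assumption.

```python
def gir_mg_per_kg_min(dextrose_percent: float, volume_ml_per_kg_day: float) -> float:
    """Glucose infusion rate: (p% * 10 mg/mL * mL/kg/day) / 1440 min per day."""
    return dextrose_percent * 10 * volume_ml_per_kg_day / 1440

print(round(gir_mg_per_kg_min(10, 60), 2))    # 4.17 -> 10% DW at 60 mL/kg/day starting rate
print(round(gir_mg_per_kg_min(10, 80), 2))    # 5.56 -> +20 mL/kg/day is ~33% more glucose
print(round(gir_mg_per_kg_min(10, 100), 2))   # 6.94 -> at the 100 mL/kg/day fluid ceiling
print(round(gir_mg_per_kg_min(12, 100), 2))   # 8.33 -> exceeds the 8 mg/kg/min threshold
```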
Regarding these alternatives to glucose administration, the AUS and CPS suggest the utilization of glucagon, hydrocortisone, diazoxide and octreotide, while the AUS also proposes hydrochlorothiazide and the BAPM only mentions glucagon, as an alternative when an IV line is difficult to access. On the other hand, the PES discourages non-specific treatment with glucocorticoids for NH and recommends the use of glucagon, surgical intervention and nutritional therapies. Glucagon stimulates gluconeogenesis and glycogenolysis, and it can raise BGL in term and preterm hypoglycemic infants (AUS, PES, CPS). The CPS guideline states that glucagon may be given via IV bolus or infusion, whereas the AUS, BAPM and PES point out that an intramuscular or subcutaneous injection could be considered, apart from IV administration, if IV access cannot be established easily [47]. The IV infusion of glucagon is preferred by the AUS because it prevents an exaggerated stimulation of the pancreas due to a high glucose infusion rate and it does not interfere with the effective establishment of breastfeeding. Additionally, the AUS does not align with the PES regarding the onset of action and duration of glucagon, as the former states that the BGL rise within one hour of administration and the effect lasts approximately up to two hours [47], while the latter indicates that the BGL increase within 10-15 min and remain at these levels for at least 1 h. Hypoglycemia non-responsive to glucagon may be provoked by glycogen storage disease [48]. Moreover, hydrocortisone is proposed as an alternative treatment for NH by the AUS and CPS because its mechanism of action includes the stimulation of gluconeogenesis and the reduction in glucose utilization in peripheral tissues. Notably, hydrocortisone has a slower onset of action than glucagon [49]. Hydrocortisone may be preferred when hyponatremia is suspected, the infant is hypotensive, evidence indicative of hypoadrenalism is present or the response to previously administered glucagon is insufficient. Diazoxide is a potassium channel activator used in cases of persistent NH as long-term management. Its mechanism of action is the inhibition of pancreatic insulin release, and it can be used in conjunction with hydrochlorothiazide in order to achieve weaning from the glucose infusion. Hydrochlorothiazide (proposed as an alternative treatment by the AUS) is a diuretic with a mechanism of action similar to that of diazoxide. Octreotide is a pharmacological analog of natural somatostatin, usually recommended for known or suspected cases of hyperinsulinemic hypoglycemia, and is not indicated for the newborn period. When medical therapy fails to maintain the BGL in a safe range, surgical intervention is proposed by the PES for neonates with hyperinsulinemic hypoglycemia. The importance of nutritional therapy is emphasized by the PES, especially for disorders of glycogen metabolism or hereditary fructose intolerance. Although it is not a pharmacological intervention, the AUS describes an increase in fluid volume as an effective alternative measure to manage severe, persistent or recurrent NH. Increasing the volume of IV glucose prior to increasing the concentration of glucose to 12% will result in an immediate change in the glucose delivery rate whilst a solution of increased glucose concentration is prepared.
In particular, a rise of 20 mL/kg/day in the total fluid volume (provided it does not exceed the maximum daily fluid intake) leads to an approximate 33% increase in BGL. The maximum tolerated total fluid intake is 100 mL/kg/day for most babies of less than 24 h of age, without risk of fluid overload. Serum electrolytes should be monitored at regular intervals in order to avoid hyponatremia and over-hydration. Target Glucose Concentration and Discharge Plan The reviewed guidelines, based on the physiology of normal neonatal glucose homeostasis, the normal age-related increase in glucose concentrations over the first few days of life, and the various pathophysiological conditions that may result in clinical hypoglycemia, recommend steps of treatment in order to initiate therapy in a timely manner and to avoid the complications of NH. This treatment is a long process that depends on BG or PG measurements, the presence or absence of symptoms and/or signs, and the infant's clinical response. Glucose target values vary among these guidelines, as do the discharge criteria for at-risk neonates. The AAP recommends that the target PG concentration should be >45 mg/dL (2.5 mmol/L) pre-prandially and that neonates should be capable of maintaining normal PG values throughout at least three feed-fast periods. The BAPM suggests that the therapeutic goal should be a BGL value > 2.0 mmol/L (36 mg/dL). The AUS states that the BGL target for neonates younger than 48 h of age is >2.6 mmol/L (47 mg/dL) for three feed-fast cycles, while for those older than 48 h with a known hypoglycemic disorder, the target is >4.0 mmol/L (72 mg/dL) for three feed-fast cycles. The CPS supports that the BGL target for newborns younger than 72 h should be >2.6 mmol/L (47 mg/dL), and >3.3 mmol/L (60 mg/dL) for newborns older than 72 h. Finally, the PES states that neonates with a suspected hypoglycemic congenital disorder, as well as older infants and children, should have BGL > 70 mg/dL (3.9 mmol/L) to achieve the therapeutic goal. For high-risk neonates without a congenital hypoglycemic disorder, the target value of PG is >50 mg/dL (2.8 mmol/L) for those up to 48 h of age and >60 mg/dL (3.3 mmol/L) for those older than 48 h. The therapeutic target for glucose levels is not discussed by the EFCNI. With regard to the discharge plan, the BAPM and EFCNI agree that newborns should not be discharged until at least two consecutive pre-prandial glucose measurements are within the normal range and neonates have been feeding effectively over several feed-fast cycles. The BAPM clarifies that, in order to cease monitoring, pre-feed BG measurements should be >2.0 mmol/L for neonates with initial BGL measurements between 1.0 and 1.9 mmol/L and no clinical signs, and >2.5 mmol/L (or 3.0 mmol/L) for neonates with initial BGL below 1.0 mmol/L with/without clinical signs. The AAP states that neonates should maintain normal PG concentrations for at least three feed-fast periods before discharge. The AUS aligns with the recommendations of the PES on the management and follow-up of neonates (older than 48 h of age) with a known or suspected cause of persistent or prolonged hypoglycemic disorder or with clinically significant NH (requiring a GIR > 6 mg/kg/min or medication such as diazoxide or hydrochlorothiazide), proposing a safety test of six hours of fasting with regular BG measurements during the interval.
This fasting test should be performed after consultation with a pediatric endocrinologist or metabolic specialist and should take place before discharge from the nursery, to ensure that high-risk neonates are capable of remaining normoglycemic if a feeding is missed and to identify infants who need further investigation and additional management for a persistent hypoglycemic disorder. Conclusions To summarize, there is an overall agreement among the reviewed guidelines regarding the risk factors associated with NH, the wide variety of non-specific clinical manifestations and the main principles of NH prevention. All medical societies underline that the timely identification of hypoglycemic neonates and the immediate initiation of treatment are crucial in preventing permanent brain injury. In addition, the AAP, BAPM, EFCNI, AUS and CPS recommend screening for NH using BG measurement for all symptomatic neonates as well as for all asymptomatic high-risk ones. The diagnosis of NH should be confirmed via laboratory tests; however, a single BG value cannot accurately define NH. Thus, all guidelines endorse the "operational threshold approach" for guiding subsequent interventions. On the other hand, there is inconsistency concerning the screening algorithms, the definition of NH, the threshold values of glucose for the diagnosis of NH and the treatment protocols for asymptomatic hypoglycemic newborns. Minor discrepancies were also identified regarding the initial intravenous bolus dose of glucose, the subsequent rate of continuous infusion and the alternative therapies for symptomatic neonates, as well as the treatment targets. It should be noted that one of the major limitations of this descriptive review, which may partially explain the inconsistency identified across the different medical organizations, is that NH represents a complex condition which may occur due to a variety of causes. The controversy among the guidelines regarding the management of NH and the lack of universal applicability, due to inconsistent definitions and the paucity of a substantial body of evidence, are clearly outlined. However, NH remains one of the most common and severe metabolic disturbances in perinatal medicine, with destructive consequences when left untreated. This descriptive review attempts to distill the burgeoning literature and place emphasis on the importance of adopting and implementing consistent international protocols for the definition, diagnosis, operational thresholds, prevention and treatment of NH, with the goal of assisting healthcare providers in best managing hypoglycemic neonates and subsequently minimizing the rates of associated neonatal morbidity and mortality. New evidence is constantly being published and the understanding of NH is evolving; further large-scale randomized studies are required to validate and modify the diagnostic and therapeutic approaches suggested by the guidelines.
2023-07-16T15:18:57.977Z
2023-07-01T00:00:00.000
{ "year": 2023, "sha1": "dc609e80bb79150418214f9c6c05fc8c4c9f1925", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9067/10/7/1220/pdf?version=1689312988", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6bf8c6dd253d0f24780874d37a33ece3db0ecf51", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
216081971
pes2o/s2orc
v3-fos-license
5-Hydroxytryptamine Receptors and Tardive Dyskinesia in Schizophrenia Background Tardive dyskinesia (TD) is a common side effect of antipsychotic treatment. This movement disorder consists of orofacial and limb-truncal components. The present study is aimed at investigating the role of serotonin receptors (HTR) in modulating tardive dyskinesia by genotyping patients with schizophrenia. Methods A set of 29 SNPs of the genes of the serotonin receptors HTR1A, HTR1B, HTR2A, HTR2C, HTR3A, HTR3B, and HTR6 was studied in a population of 449 Caucasians (226 females and 223 males) with a verified clinical diagnosis of schizophrenia (according to ICD-10: F20). Five SNPs were excluded because of low minor allele frequency or for not passing the Hardy-Weinberg equilibrium test. Affinity of antipsychotics to 5-HT2 receptors was defined according to previous publications. Genotyping was carried out with the SEQUENOM MassARRAY Analyzer 4. Results Statistically significant associations of rs1928040 of the HTR2A gene with the orofacial type of TD and with the total diagnosis of TD were found for alleles, with a statistical trend for genotypes. Moreover, statistically significant associations were discovered in the female group for rs1801412 of HTR2C, for both alleles and genotypes. Excluding patients who used HTR2A or HTR2C antagonists, respectively, changed little in the associations of HTR2A polymorphisms, but caused a major change in the magnitude of the association of HTR2C variants. Due to the low patient numbers, these sub-analyses did not yield significant results. Conclusion We found significant associations for rs1928040 of HTR2A and for rs1801412 of the X-bound HTR2C in female patients. The associations were particularly related to the orofacial type of TD. Excluding patients using the relevant antagonists particularly affected the rs1801412-related, but not the rs1928040-related, associations. This suggests that rs1801412 is directly or indirectly linked to the functioning of HTR2C. Further study of variants of the HTR2C gene in a larger group of male patients who were not using HTR2C antagonists is necessary in order to verify a possible functional role of this receptor. INTRODUCTION The abnormal involuntary movement disorder tardive dyskinesia (TD) is a common side effect of both first- and second-generation antipsychotic drugs (Carbon et al., 2017; Widschwendter and Hofer, 2019). The clinical picture and course of TD are not unambiguously defined (Loonen et al., 2019), and several other extrapyramidal movement reactions occur simultaneously in patients treated with antipsychotic drugs (Loonen et al., 2000; Loonen et al., 2001). These other drug-induced movement disorders may also have a tardive course and often accompany TD in fluctuating intensity, which can make it hard to correctly diagnose a case with dyskinesia in epidemiological studies (Waln and Jankovic, 2013; D'Abreu et al., 2018). Dyskinesia is known to occur spontaneously, particularly in elderly persons (Woerner et al., 1991; Clark and Ram, 2007), and also in drug-naïve persons with schizophrenia (Kane and Smith, 1982; Pappa and Dazzan, 2009) as well as in their direct relatives (McCreadie et al., 2003; Koning et al., 2010). This spontaneously occurring dyskinesia may obscure the prevalence of truly drug-induced movement disorders. Trying to identify biomarkers which could predict vulnerability to develop TD in individual patients, using epidemiological data as a starting point, is probably not a fruitful approach.
Pharmacogenetic information can better be applied to help clarify the possible pharmacological mechanisms required to explain the pathogenesis of the movement disorder, and/or the modification of the intensity of its symptoms (Loonen et al., 2019). TD is characterized by involuntary repetitive movements which are usually abrupt and irregular in nature (Loonen and van Praag, 2007). The movements may affect the tongue, lips and jaw as well as the forehead, eyelids (blinking), lower face and throat (orofacial dyskinesia); the neck, trunk (rocking movements) and upper and lower limbs may also show rapid repetitive contractions (peripheral or limb-truncal dyskinesia). When the diaphragm and intercostal muscles are affected, dyskinetic movements result in grumbling, snoring, groaning, and/or sniffing noises (respiratory dyskinesia). This last type is usually not considered in epidemiological or treatment studies. In the earliest reports about TD, this movement disorder was characterized by involuntary orofacial movements (Loonen et al., 2019). This should be considered to be "classic" TD (Waln and Jankovic, 2013). We have observed that different gene variants are associated with orofacial versus limb-truncal dyskinesia (Al Hadithy et al., 2009). We have also noticed that in levodopa-induced dyskinesia (LID), specific genetic associations exist with limb-truncal dyskinesia, but not with orofacial dyskinesia (Ivanova et al., 2012). This may correspond with the observation that in Huntington's disease (HD) and LID, large muscle groups are often affected, while in TD more subtle movements are present, most often in the orofacial area. We could imagine that in TD the dysregulation is primarily localized within another histological striatal compartment than in HD and LID (Loonen et al., 2019). TD most likely results from dysregulation within so-called dorsal extrapyramidal cortico-striatal-thalamic-cortical (CSTC) circuits (Loonen and Ivanova, 2013; Loonen et al., 2019). CSTC circuits comprising the putamen as the striatal entry station to the basal ganglia regulate the intensity (amplitude and velocity) of voluntary muscle contractions. The activity of these CSTC circuits is in turn primarily regulated by ascending dopaminergic nigrostriatal neurons. However, ascending serotonergic input modulates the activity of CSTC circuits too (Loonen et al., 2019). 5-Hydroxytryptamine (5-HT, serotonin)-containing fibers originating within the brainstem upper raphe nuclei have a widespread distribution within the midbrain and forebrain. They are heavily connected with, for example, the substantia nigra pars compacta (SNc), the dorsal striatum, and the frontal cerebral cortex, next to many other structures. By affecting several components of the CSTC circuits, they modulate their activity. Moreover, they affect the activity of CSTC circuits indirectly via striatal interneurons and by modulating the stress response (Loonen et al., 2019). Seven types of 5-HT receptors (HTRs) can be distinguished, most of them having several subtypes. All but one (HTR3) are G-protein-coupled receptors (GPCRs) (Hannon and Hoyer, 2008; Loonen and Ivanova, 2016). For their role in TD and LID, HTR1A, HTR2A and HTR2C have been studied most extensively (Meltzer, 2012; Huot et al., 2013). HTR1A are inhibitory (coupled to Gi/o) and HTR2 are excitatory (coupled to Gq/11) receptors. A special characteristic of HTR2C and, to a lesser extent, HTR2A, is constitutive activity (Hannon and Hoyer, 2008).
This offers the possibility of certain atypical antipsychotic drugs having inverse agonistic activity (Aloyo et al., 2009), which means that they affect HTR2 in a direction opposite to 5-HT itself. This complicates pharmacogenomic studies because it is not known whether a genetic variant of the HTR2 gene affects the activity, the expression, the inducibility or the constitutive activity of the corresponding receptors. Therefore, an urgent need exists to develop an in vivo or ex vivo test suitable to assess the activity of receptor complexes corresponding to specific HTR2 variants (Ivanova et al., 2018). Moreover, the use of drugs with inverse agonistic activity, like atypical antipsychotics, could obscure the possible consequences of HTR2 inactivity (Loonen et al., 2019). The present study is aimed at investigating the possible role of serotonin receptors in tardive dyskinesia and its subtypes in patients with schizophrenia by studying associations with specific variants of HTR genes. Patients The present study was carried out in accordance with The Code of Ethics of the World Medical Association (Declaration of Helsinki 1975, revised in Fortaleza, Brazil, 2013) for experiments involving humans, and after the study had been approved (protocol N63/7.2014) by the Local Bioethics Committee of the Mental Health Research Institute. Participants providing written informed consent were recruited from three psychiatric hospitals in the Tomsk, Kemerovo and Chita oblasts in Siberia. The study population has been described previously (Levchenko et al., 2019). Patients were recruited who were, or had been, using antipsychotic medication for more than 3 months and were on a stable dosage for at least 3 months prior to entry. The inclusion criteria were a clinical diagnosis of schizophrenia according to ICD-10 (F20) and an age of 18-75 years. Exclusion criteria were non-Caucasian physical appearance (e.g., Mongoloid, Buryats or Khakassians); any relevant (e.g., unstable or acute) physical disorders, relevant pharmacological withdrawal symptoms or organic brain disorders (e.g., epilepsy, Parkinson's disease). The severity of dyskinesia was assessed with the Abnormal Involuntary Movement Scale (AIMS) (Loonen et al., 2000; Loonen et al., 2001; Loonen and van Praag, 2007). The AIMS scores were converted into binary form (presence or absence of tardive dyskinesia) according to Schooler and Kane's criteria (Schooler and Kane, 1982). The Schooler-Kane criteria require: (i) at least 3 months of cumulative exposure to neuroleptics; (ii) the absence of other conditions that might cause involuntary movements and (iii) at least moderate dyskinetic movements in one body area (≥3 on AIMS) or mild dyskinetic movements in two body areas (≥2 on AIMS). To compare antipsychotic medications, all drug doses taken at the time of investigation were converted into chlorpromazine equivalents (CPZeq) (Andreasen et al., 2010). In order to exclude a possible influence of drug-induced receptor inactivation on TD scores, patients using 5-HT2A and/or 5-HT2C receptor blocking antipsychotics (according to Loonen and Ivanova, 2016) were excluded in a sub-analysis. DNA Analysis Blood samples were obtained from each participant by antecubital venepuncture. Blood with EDTA was stored in several aliquots at −20 °C until DNA isolation. DNA from leukocytes of whole peripheral blood was isolated according to the standard phenol-chloroform protocol. Genotyping was performed without any knowledge of the patient's clinical status.
Genotyping was carried out for 29 polymorphic variants of HTR1A (rs6295, rs1364043, rs10042486, rs1800042, rs749099), HTR1B (rs6298, rs6296, rs130058), HTR2A (rs6311, rs6313, rs6314, rs7997012, rs1928040, rs9316233, rs2224721), HTR2C (rs6318, rs5946189, rs569959, rs17326429, rs4911871, rs3813929, rs1801412, rs12858300), HTR3A (rs1062613, rs33940208, rs1176713), HTR3B (rs1176744) and HTR6 (rs1805054) on the MassARRAY Analyzer 4 (Agena Bioscience). Rs1800042, rs6314, rs12858300, rs33940208, and rs1176744 had a minor allele frequency of less than 5% or did not pass the Hardy-Weinberg equilibrium test (p < 0.05) and were excluded from analysis. We used the SEQUENOM Consumables iPLEX Gold 384 set. DNA sample preparation for the SEQUENOM MassARRAY Analyzer 4 comprises several steps: a standard PCR reaction, a shrimp alkaline phosphatase reaction to neutralize the unincorporated dNTPs in the amplification products, the PCR iPLEX Gold extension reaction, and placing the samples on a special chip (SpectroCHIP array) using the Nanodispenser RS1000 prior to loading them into the analyser. DNA concentrations were measured with a Thermo Scientific NanoDrop 8000 UV-Vis Spectrophotometer. Possibly relevant SNPs were selected according to literature data on associations with schizophrenia and other mental disorders and using the LD TAG SNP Selection program (TagSNP). More detailed information about the selected SNPs is presented in Supplementary Table S1. Statistical Analysis Statistical analysis was performed in the R statistical environment using basic R functions, PredictABEL and the SNPassoc package (Gonzalez et al., 2014). We conducted the statistical analysis in several steps. Firstly, we tested all polymorphic variants for deviation from Hardy-Weinberg equilibrium with a chi-square test, except those variants located on the X-chromosome (HTR2C polymorphic variants). Polymorphic variants of HTR2C were divided by sex and tested separately because of the hemizygous status of X-chromosomal markers in men. In addition, polymorphic variants with a minor allele frequency of less than 5% were excluded from further testing. Secondly, we conducted an association analysis of genotypes and alleles with tardive dyskinesia and its orofacial (AIMS items 1-4) and limb-truncal (AIMS items 5-7) subtypes with the chi-square test and Fisher's exact test, where necessary. Odds ratios and 95% confidence intervals (significance level 0.05) were also calculated. The third step was to assess variables through binary logistic regression. RESULTS The total number of patients was 449 (226 females and 223 males). According to the predefined criteria, the total number of patients with tardive dyskinesia was 121, and thus the number of patients without tardive dyskinesia was 328. The main demographic and clinical parameters are presented in Table 1. The mean age of patients with tardive dyskinesia (patients with TD) was significantly higher than that of patients without tardive dyskinesia (patients without TD), the duration of schizophrenia was significantly longer in patients with TD, and higher doses of antipsychotics were used in patients with TD. One hundred and ninety-four patients received typical antipsychotics (of which Haloperidol was the most often used, in 115 patients, and Chlorpromazine in 44 patients, as well as Chlorprothixene, Zuclopenthixol, Thioridazine and Periciazine).
172 patients received atypical antipsychotics (mainly Risperidone, Clozapine and Quetiapine and, to a lesser extent, Olanzapine, Sertindole, Paliperidone and Amisulpride); 83 patients received combined therapy. The first step was to estimate a possible association between the autosomal genotypes or alleles of the HTR genes and the presence or absence of total tardive dyskinesia (Supplementary Table S4). Almost all HTR variants showed no association with TD or one of its subtypes. An exception was formed by rs1928040 of the HTR2A gene. We found a significant association of the T allele with a total diagnosis of tardive dyskinesia (p = 0.02) and with orofacial TD (p = 0.02). The T allele was also associated with limb-truncal TD, but this association did not reach statistical significance (p = 0.07). The same was true for the association of the genotypes with these first two forms of TD (p = 0.061 and p = 0.058, respectively) (Table 2). Excluding patients who were treated with 5-HT2A antagonists according to Loonen and Ivanova (2016) or periciazine (N = 185) hardly changed the calculated ORs of the possible associations, although significance was lost due to the lower patient numbers and the resulting loss of statistical power (data not shown). The second step was to assess possible sex differences for the X-bound HTR2C variants (Supplementary Tables S5-S7). To our surprise, we observed no association between any of the HTR2C polymorphisms and total, orofacial or limb-truncal TD in (hemizygous) males. In women, however, both genotypes and alleles of the HTR2C polymorphism rs1801412 were significantly associated with total (p = 0.027 and p = 0.03, respectively) and orofacial TD (p = 0.008; p = 0.009). Limb-truncal TD was not associated with this polymorphism in women (Table 3). Excluding patients who used 5-HT2C antagonists changed the measured ORs. In 141 female patients, the observed association decreased for total and orofacial TD, but in 102 male patients it increased for all three types of TD (Supplementary Tables S8, S9). Due to the low patient number, a statistical trend (p = 0.08) was reached only for orofacial TD in male patients. The third step was to assess variables through binary logistic regression (Table 4). We used the status of the types of tardive dyskinesia as the dependent variable, and "age," "gender," "duration of disease" and the significant polymorphic variant rs1928040 as predictors. For orofacial TD, the Hosmer-Lemeshow test result was chi-square = 6.627, p = 0. The present study shows good values of the AUC, which indicates approximately good-fitting regression models for the different types of tardive dyskinesia in patients with schizophrenia. The analysis revealed that, compared with the group of patients without TD, patients with TD had significantly increased odds for the variable "duration of disease" and significantly decreased odds for the variables "gender" and "rs1928040" (Table 4). DISCUSSION The present paper describes the results of an association study of, ultimately, 24 polymorphic variants of the HTR1A, HTR1B, HTR2A, HTR2C, HTR3A, HTR3B, and HTR6 genes with TD and two of its subtypes in 449 patients with schizophrenia. As HTR2C is X-bound, the possible association was studied in male and female patients independently. We found a significant association of the T allele of rs1928040 of the HTR2A gene with total and orofacial TD. In addition, both genotypes and alleles of the HTR2C polymorphism rs1801412 were significantly associated with total and orofacial TD in women, but not in men.
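The allele-level association tests summarized above reduce to 2×2 tables of allele counts in patients with and without TD, analyzed with a chi-square or Fisher's exact test plus an odds ratio and 95% confidence interval. As the study itself was analyzed in R with SNPassoc, the Python sketch below is only a rough, language-agnostic illustration of that workflow; the allele counts are invented for the example and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical allele counts for a biallelic SNP (e.g., rs1928040, T vs. C);
# rows: patients with TD / without TD. These numbers are invented.
table = np.array([[150,  92],
                  [340, 316]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
or_fisher, p_fisher = fisher_exact(table)

# Approximate 95% CI for the odds ratio via the Woolf (log) method.
a, b = table[0]
c, d = table[1]
or_sample = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(or_sample) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
print(f"OR = {or_sample:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), Fisher p = {p_fisher:.3f}")
```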
Rs1928040 is a C > T intron 2 variant of the HTR2A gene, which is localized on chromosome 13. This variant has been reported to be associated with the response to treatment with selective serotonin reuptake inhibitors (SSRIs) in patients with major depressive disorders, but the results are inconsistent (Kishi et al., 2010; Lucae et al., 2010; McMahon et al., 2006). Rs1801412 is a T/G variant at position 114908141 of the HTR2C gene on the X-chromosome. To our knowledge, an association with central nervous system disorders has not been observed (Serretti et al., 2009; Bakker et al., 2012); this includes antipsychotic-induced movement disorders in a Dutch population of long-stay psychiatric patients (Bakker et al., 2012). An essential component of the pathogenesis of tardive dyskinesia (Loonen and Ivanova, 2013), as well as of levodopa-induced dyskinesia, is excitotoxic damage to striatal medium spiny projection neurons (MSNs) of the indirect extrapyramidal pathway (Loonen et al., 2019). This slow damage could explain the late onset of the movement disorder and the resulting dominance of dopamine D1 receptor-carrying MSNs of the direct pathway, which is essential for mediating dyskinesia (Westin et al., 2007; Darmopil et al., 2009). We have hypothesized that second generation antipsychotics (SGAs) can protect indirect pathway MSNs against excitotoxicity by inverse agonism of HTR2A and particularly HTR2C, which these neurons carry (Loonen et al., 2019). Modulation of the activity of HTR2A or HTR2C resulting from genetic variability could then also result in differences in the incidence of tardive dyskinesia during exposure to antipsychotic drugs. In addition, fast-spiking interneurons which inhibit the activity of striatal dopaminergic terminals also express HTR2A or HTR2C (Loonen et al., 2019). Inverse agonism by SGAs would increase the release of dopamine, which could directly stimulate direct pathway MSNs, causing acute dyskinesia. Hence, present usage of HTR2A or HTR2C antagonists could obscure a possible genetically induced change in prevalence. In our statistical analysis, both rs1928040 and rs1801412 were linked more intensively to orofacial than to limb-truncal TD. This corresponds to our previous observations that the genetic background of these two types of TD may be different (Al Hadithy et al., 2009). It has been suggested that the orofacial variant corresponds to "classical" TD (Waln and Jankovic, 2013) and is possibly related to a dysfunction within another striatal tissue compartment (i.e., striosomal) than HD and LID (i.e., matrix) (Loonen et al., 2019). The usage of drugs which inactivate 5-HT receptors may decrease the difference between genotypes corresponding to more or less activity of the corresponding receptor product. As several SGAs have significant affinity to HTR2A, and somewhat less often to HTR2C, the usage of SGAs may obscure possible associations. This was apparently not true for HTR2A variants in our study. Excluding patients who were using HTR2A antagonists hardly changed the observed associations with TD (data not shown). This was also true for rs1928040, although significance was lost, apparently due to decreasing statistical power. This finding might also cast some doubt on the functional consequences of the rs1928040 variant. Excluding patients who were using HTR2C-antagonizing SGAs, however, changed the magnitude of the associations.
Due to the low number of remaining patients, no statistically significant differences were found, but the size of the association between the G allele and the presence of total and orofacial TD increased to a major extent in the 102 studied (hemizygous) men and decreased in the 144 remaining women after excluding HTR2C antagonist users. Our findings in men correspond to the prediction in Loonen et al. (2019). The most important limitation of our study is that our finding cannot be applied to the identification of rs1928040 and rs1801412 as biomarkers, because we assessed a possible association with a total of 24 variants of 7 HTR genes. We wanted to use our genotyping data to confirm or falsify our predefined hypothesis about the possible role of HTR2 in mediating "classical" TD symptoms. We did not find evidence supporting a possible role for HTR2A, but the effect of excluding patients who were using HTR2C antagonists indicates that HTR2C may have a specific role in developing (especially orofacial) TD. The number of male and female patients who remain after excluding persons using HTR2C antagonists is small, which limits the reliability of this conclusion. Our findings should be verified in a prospective study of (hemizygous) male patients who are only using classical antipsychotics (or benzamides) devoid of HTR2 affinity. A third limitation would be that the treatment history of our patients cannot be adequately assessed. It should be emphasized that our findings are at least partly related to acute effects on the severity of the symptoms of TD (reflected by its prevalence). HTR2C could also have a role in the pathogenesis (related to the incidence) of this movement disorder, but this cannot be estimated. CONCLUSION In this study we obtained evidence for an association between the HTR2A polymorphism rs1928040 and the HTR2C variant rs1801412 and, particularly, the orofacial form of tardive dyskinesia. The effect of excluding patients who were using 5-HT2 antagonists suggests that HTR2C variation has functional consequences, which may indicate a role of this receptor in modulating the severity of TD. However, this hypothesis needs verification in a larger group of male patients who are not using HTR2 antagonists. DATA AVAILABILITY STATEMENT Data are available from Prof. Dr. Svetlana A. Ivanova (ivanovaniipz@gmail.com) on reasonable request and with permission of the MHRI. ETHICS STATEMENT The studies involving human participants (protocol N63/7.2014) were reviewed and approved by the Local Bioethics Committee of the Mental Health Research Institute. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS SI and AL designed and supervised the study. EK collected the clinical information. IP and DP isolated DNA and genotyped the samples. AS and NB supervised the clinical work. SI, AL, and BW supervised the technical work. IP, SI, and AL designed and carried out the statistical analysis. AL wrote the first draft of the manuscript. IP, OF, SI, and BW commented on this draft and contributed to the final manuscript. ACKNOWLEDGMENTS We greatly appreciate the help of Mrs. Kate Barker (BA, PGCE), who proofread the manuscript. This work resulted from a collaboration between the Mental Health Research Institute (Tomsk National Research Medical Center of the Russian Academy of Sciences) in Tomsk and the Groningen Research Institute of Pharmacy (GRIP) of the University of Groningen.
This work was carried out within the framework of the Tomsk Polytechnic University Competitiveness Enhancement Program.
2020-04-24T13:13:29.356Z
2020-04-24T00:00:00.000
{ "year": 2020, "sha1": "50e4c43a4485056f4aec0d1b772101b2ecbf842e", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fnmol.2020.00063", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "50e4c43a4485056f4aec0d1b772101b2ecbf842e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258911217
pes2o/s2orc
v3-fos-license
Reintroduction of resistant frogs facilitates landscape-scale recovery in the presence of a lethal fungal disease Vast alteration of the biosphere by humans is causing a sixth mass extinction, driven in part by an increase in emerging infectious diseases. The emergence of the lethal fungal pathogen (Batrachochytrium dendrobatidis; “Bd”) has devastated global amphibian biodiversity, with hundreds of species experiencing declines or extinctions. With no broadly applicable methods available to reverse these impacts in the wild, the future of many amphibians appears grim. The once-common mountain yellow-legged (MYL) frog is emblematic of amphibians threatened by Bd. Although most MYL frog populations are extirpated following disease outbreaks, some persist and eventually recover. Frogs in these recovering populations have increased resistance against Bd infection, consistent with evolution of resistant genotypes and/or acquired immunity. We conducted a 15-year landscape-scale reintroduction study and show that frogs collected from recovering populations and reintroduced to vacant habitats can reestablish populations despite the presence of Bd. In addition, results from viability modeling suggest that many reintroduced populations have a low probability of extinction over 50 years. To better understand the role of evolution in frog resistance, we compared the genomes of MYL frogs from Bd-naive and recovering populations. We found substantial differences between these categories, including changes in immune function loci that may confer increased resistance, consistent with evolutionary changes in response to Bd exposure. These results provide a rare example of how reintroduction of resistant individuals can allow the landscape-scale recovery of disease-impacted species. This example has broad implications for the many taxa worldwide that are threatened with extinction by novel pathogens. Significance Statement Understanding how species persist despite accelerating global change is critical for the conservation of biodiversity. Emerging infectious diseases can have particularly devastating impacts, and few options exist to reverse these effects. We used large-scale reintroductions of disease-resistant individuals in an effort to recover a once-common frog species driven to near-extinction by a disease that has decimated amphibian biodiversity. Introduction of resistant frogs allowed reestablishment of viable populations in the presence of disease. In addition, resistance may be at least partially the result of natural selection at specific immune function genes, which show evidence for selection in recovering populations. The evolution of resistance and reintroduction of resistant individuals could play an important role in biodiversity conservation in our rapidly changing world. 
Host resistance (the ability to limit pathogen burden) and tolerance (the ability to limit the harm caused by a particular burden) are key mechanisms to reduce disease impacts (15) and facilitate population persistence and recovery (16). Host immunity and evolution both play important roles in the development of resistance and tolerance, and utilizing these factors would seem a promising approach to developing effective strategies to mitigate disease impacts in the wild (17,18). However, several aspects of the amphibian-Bd system present difficult obstacles, including (i) the general inability of amphibians to mount an effective immune response against Bd infection (19-21), and (ii) the apparent rarity of evolution of more resistant/tolerant genotypes (but see 22,23). These factors suggest that reintroduction of amphibians into sites to reestablish populations extirpated by Bd will often result not in population recovery, but instead in the rapid reinfection and mortality of the introduced animals and/or their progeny (24-27). If true, the future of many amphibian species threatened by Bd appears bleak. The mountain yellow-legged (MYL) frog, composed of the sister species Rana muscosa and Rana sierrae (28), is emblematic of the global declines of amphibians caused by Bd (8). Once the most common amphibian in the high elevation portion of California's Sierra Nevada mountains (USA, 29), during the past century this frog has disappeared from more than 90% of its historical range (28). Due to the severity of its decline and the increasing probability of extinction, both species are now listed as "endangered" under the U.S. Endangered Species Act. In the Sierra Nevada, this decline was initiated by the introduction of non-native trout into the extensive historically-fishless region (30,31) starting in the late 1800s. The arrival of Bd in the mid-1900s and its subsequent spread (32) caused additional large-scale population extirpations (33,34). These Bd-caused declines are fundamentally different from the fish-caused declines because fish eradication is feasible (35) and results in the rapid recovery of frog populations (36,37). In contrast, Bd appears to persist in habitats even in the absence of amphibian hosts (38), and therefore represents a long-term alteration of invaded ecosystems that amphibians will need to overcome to reestablish populations.
Despite the catastrophic impact of Bd on MYL frogs, wherein most Bd-naive populations are extirpated following Bd arrival (33), some populations have persisted after epizootics (during which Bd infection intensity on frogs is very high, 39) and are now recovering (Figure 1) (14). Frogs in these recovering populations show reduced susceptibility to Bd infection (14), with infection intensity ("load") on adults consistently in the low-to-moderate range (39-41). This reduced susceptibility is evident even under controlled laboratory conditions (14), indicative of host resistance against Bd infection (and not simply an effect of factors external to individual frogs, e.g., environmental conditions). In addition to frogs from recovering populations having higher resistance to Bd infection than those from naive populations, they could also have higher tolerance, but no data are currently available to evaluate this possibility. Therefore, we focus on resistance throughout this paper. The observed resistance of MYL frogs could be the result of several non-mutually exclusive mechanisms, including natural selection for more resistant genotypes (22,23), acquired immunity (21), and/or inherent between-population differences that pre-date Bd exposure. The possible evolution of MYL frog resistance and subsequent population recovery is consistent with that expected under "evolutionary rescue", whereby rapid evolutionary change increases the frequency of adaptive alleles and restores positive population growth (42,43). This intriguing possibility also suggests an opportunity to expand recovery beyond the spatial scale possible under natural recovery by utilizing resistant frogs from recovering populations in reintroductions to vacant habitats (Figure 1) (41,44). In the current study, we had three primary objectives. First, to determine whether the reintroduction of resistant MYL frogs obtained from populations recovering from Bd-caused declines allows the successful reestablishment of extirpated populations despite ongoing disease, we conducted a 15-year landscape-scale frog reintroduction effort (Figure 1). Second, to extend our inferences of population recovery well beyond the temporal extent of our reintroduction study, we developed a model to estimate the probability of persistence for the reintroduced populations over a multi-decadal period (Figure 1). Third, given the importance of resistance for frog survival, population establishment, and long-term viability (this study), we conducted a genomic study using exome capture methods to determine whether MYL frogs in recovering populations show evidence of selection and whether these genomic changes are associated with resistance (Figure 1). Following translocation, we estimated adult survival and recruitment of new adults from capture-mark-recapture (CMR) surveys and obtained counts of tadpoles and juveniles from visual encounter surveys (VES). Across all translocation sites, the duration of survey time series was 1-16 years (median = 5).
Of the 12 reintroduced populations, 9 (0.75) showed evidence of successful reproduction in subsequent years, as indicated by the presence of tadpoles and/or juveniles. For these 9 populations, one or both life stages were detected in nearly all survey-years following translocation (proportion of survey-years: median = 0.9, range = 0.29-1). These same populations were also those in which recruitment of new adults (i.e., progeny of translocated individuals) was detected. As with early life stages, recruits were detected in the majority of post-translocation survey-years (proportion of survey-years: median = 0.79, range = 0.12-1). In summary, survey results indicate that translocations resulted in the establishment of reproducing MYL frog populations at most recipient sites despite the ongoing presence of Bd. Bd loads were fairly consistent before versus after translocation, and loads were nearly always well below the level indicative of severe chytridiomycosis (i.e., the disease caused by Bd) and associated frog mortality (Figure S2) (33,41). Although it is possible that the observed relatively small changes in load are a consequence of individuals with high Bd loads dying and therefore being unavailable for sampling during the post-translocation period, the fact that there was little difference in pre- versus post-translocation Bd loads even in those populations that had very high frog survival (70556, 74976; see below; Figure S2) suggests a true lack of substantial change in Bd load. The ultimate measure of reintroduction success is the establishment of a self-sustaining population. Given that it can take years or even decades to determine the self-sustainability of a reintroduced population (for an example in MYL frogs, see 41), the use of proxies is essential for providing shorter-term insights into reintroduction success and the factors driving it. Results from our CMR surveys allowed us to accurately estimate frog survival, including over the entire CMR time series for each site and during only the 1-year period immediately following translocation. These estimates were made using site-specific models analyzed using the mrmr package. We use these estimates to describe general patterns of frog survival in all translocated cohorts, and in an among-site meta-analysis of frog survival to identify important predictors of 1-year frog survival (e.g., Bd load). Estimates of 1-year frog survival indicate that survival was highly variable between recipient sites, but relatively constant within recipient sites (for the subset of sites that received multiple translocations; Figure 2). These patterns indicate an important effect of site characteristics on frog survival. In addition, 1-year survival was higher for frogs translocated later in the study period than earlier: 5 of the 7 populations translocated after 2013 had estimated survival ≥ 0.5, compared to only 1 of 5 populations translocated prior to 2013. We suggest this resulted primarily from our improved ability to choose recipient sites with higher habitat quality for R. sierrae (see Materials and Methods - Frog population recovery - Field methods for details). This increased survival has direct implications for population viability (see Results - Long-term population viability).
The goal of our meta-analysis was to identify important predictors of 1-year frog survival.We were particularly interested in whether Bd load had a negative effect on adult survival, as would be expected if frogs were highly susceptible to Bd infection.This analysis was conducted in a Bayesian framework and included a diversity of site, cohort, and individual-level characteristics as predictors and 1-year frog survival (Figure 2) as the response variable.The best model of 1-year frog survival identified several important predictors, but Bd load at the time of translocation was not among them (Figure S3).Instead, important predictors included winter severity in the year following translocation (snow_t1), site elevation, and donor population (Figure 3, Figure S3).Males had somewhat higher survival than females, but this effect was small (Figure 3, Figure S3).The absence of Bd load as an important predictor of frog survival is consistent with frogs in recovering populations having sufficient resistance to suppress Bd loads below harmful levels. In summary, results from our frog translocation study indicate that translocations resulted in (i) relatively high 1-year survival of translocated adults, as well as reproduction and recruitment, at the majority of recipient sites, (ii) 1-year survival of adults is influenced by site characteristics, weather conditions, and donor population (but not Bd load), and (iii) based on the relatively small changes in Bd load after translocation, loads appear more strongly influenced by frog characteristics (e.g., resistance) than site characteristics.Together, these results indicate that frogs translocated from recovering populations can maintain the benefits of resistance in non-natal habitats.In addition, in 3 locations where longer CMR time series allowed us to assess the survival of new adults recruited to the population, naturally-recruited adults had equivalent or higher survival probabilities than the originally translocated adults (Figure S4).This suggests that frog resistance is maintained across generations.All of the conditions described above are supportive of population establishment and long-term population growth. Long-term population viability.Results from the frog translocation study indicated that most populations showed evidence of successful reproduction and recruitment, and that adult survival was often relatively high (described above).Although suggestive of population establishment, a decade or more of surveys may be necessary to confirm that populations are in fact self-sustaining (41).To extend our inferences of population establishment beyond those possible from the site-specific CMR data, we developed a population viability model.Specifically, to test whether the observed yearly adult survival probabilities in translocated populations were sufficient for long-term viability, we built a stage-structured matrix model that captured known frog demography and included demographic and environmental stochasticity.We parameterized the model using CMR data from translocated populations and known life history values in this system (Table S1). 
Given observed yearly adult survival probabilities of translocated frogs (from site-specific mrmr CMR models; provided in the legend of Figure 4B) and a yearly survival probability of the year-1 juvenile class (σ_J1) greater than 0.09, at least six of twelve translocated populations should experience a long-run growth rate λ greater than 1 in the presence of Bd (Figure 4A; median predicted λ ranges from 1.19 to 1.40 for these six populations). These six populations all had observed yearly adult survival greater than 0.5. As year-1 juvenile survival probability increased above 0.2, the deterministic long-run growth rate of eight of twelve populations was greater than 1 (Figure 4A).

Even when incorporating (i) demographic stochasticity and (ii) environmental stochasticity in year-1 juvenile survival and recruitment (the transition that we expect to be the most subject to environmental variability in the presence of Bd), populations with high adult survival are likely to persist over a 50-year time horizon. Our model predicted that, following a single introduction of 40 adult individuals into a population, the six populations with the highest adult survival probabilities (σ_AR > 0.5) had 50-year extinction probabilities of less than 0.5 when the average year-1 juvenile survival was greater than 0.10 (Figure 4B). This indicates strong potential for long-term persistence in the presence of Bd and environmental variability in survival and recruitment. In contrast, for the six populations where yearly adult survival probability σ_AR < 0.5, extinction probability over 50 years was always predicted to be > 50% regardless of the value of mean year-1 juvenile survival between 0 and 0.25. To test the validity of our model predictions, we demonstrated that our stochastic model could describe the general recovery trajectory of our translocated population with the longest survey history (Figure 4C; population 70550, surveyed for 16 years).

In summary, our model demonstrates that given observed yearly adult survival probabilities of translocated frogs, 50% of our translocated populations have a high probability of population growth and long-term viability in the presence of Bd. This is likely a conservative estimate because there is evidence that naturally-recruited adults have higher survival probability than translocated adults (Figure S4), but we considered these probabilities to be equal in all but three of our populations where we had sufficient data to distinguish these different probabilities.

Frog evolution in response to Bd. Results from the preceding sections indicate the critical role of frog resistance in post-translocation frog survival, population growth, and population viability. As such, identifying the mechanisms underlying this resistance would fill a key gap in our understanding of the factors that promote population resilience in the presence of disease. Although natural selection for more resistant frog genotypes, and evolutionary rescue, may be foundational to the ability of frogs to recover despite ongoing Bd infection, for MYL frogs the role of disease-mediated selection in these processes remains unknown.
To determine whether MYL frog populations show genomic patterns consistent with an evolutionary response to Bd, we compared frog exomes (i.e., the coding region of a genome) between populations with contrasting histories of Bd exposure. Specifically, we compared frog genomes sampled in 4 populations that have not yet experienced a Bd-caused epizootic ("naive") (45) versus in 5 populations that experienced a Bd epizootic during the past several decades and have since recovered to varying degrees ("recovering"; Figure 5) (14, 33). Bd-exposure histories of the 9 study populations are based on 10-20 years of VES and Bd surveillance using skin swabbing (e.g., 14, 45, 46). Naive populations are characterized by large numbers of adults (i.e., typically 1000s), Bd prevalence that is generally 0% except during occasional failed Bd invasions (during which Bd loads remain very low, 46), and no history of Bd epizootics since we first surveyed these populations in the late 1990s and early 2000s (45). In contrast, recovering populations exist in an enzootic state (39), characterized by smaller numbers of adults (generally < 500), high Bd prevalence (often > 80%, 40), and, in adults, moderate Bd loads that are typically well below the level expected to cause mortality (33). Naive and recovering populations can be identified unambiguously using these differences in Bd prevalence and load. Finally, there is no potential for frog dispersal between the 9 study populations due to intervening distances and topography, as well as the presence of introduced (predatory) fish and fish-induced habitat fragmentation. (See Supporting Information - Frog evolution in response to Bd - Study design for additional details regarding the study design.)

We conducted a principal component analysis (PCA) of the genomic data to describe the relationships between sampled populations, and then used two complementary approaches to identify regions of the genome that differed between naive and recovering populations (i.e., regions under selection). First, we used a multivariate linear mixed model to evaluate associations between population type (i.e., naive versus recovering) and individual variants, including single nucleotide polymorphisms (SNPs) and insertions/deletions (INDELS), while accounting for population structure. Second, we used a splined window analysis to identify larger genomic regions showing differences between population types in F_ST and nucleotide diversity (π_diff = π_naive − π_recovering).

Individual frogs clustered into 3 separate groups in principal component space (Figure S6A), and clusters reflected the species split (i.e., R. muscosa versus R. sierrae) and the strong signature of isolation-by-distance that is characteristic of MYL frogs (47-49). Importantly, each cluster contained at least one population from both the naive and recovering groups, allowing us to distinguish allelic associations of individuals sampled in the 2 population types versus allelic associations resulting from population structure and genetic drift.
Results from the individual variant and splined window analyses show that recovering populations have signatures of selection on multiple regions of the genome. The analysis of individual variants identified 11 "outlier" SNPs (i.e., showing significantly different allele frequencies between naive versus recovering populations) from 7 distinct genes across 4 contigs (Figure S6B, C). One of the 7 identified genes (LOC108802036) does not have an associated annotation. For the outlier SNPs, frequency differences between the naive and recovering populations ranged from 0.41 to 0.86. Most of these SNPs showed frequency differences in only a subset of the sampled populations (Figure 5A, B), but the SNP in the RIN3 gene showed consistent differences in frequencies across all populations (Figure 5C). This is suggestive of parallel selection at this locus across multiple populations. The other 6 outlier variants showed less consistent frequency differences across the study populations, but for these we still found a statistically significant signal of selection in 2 of the 3 genetic clusters (containing populations 5-9; Figure 5, Figure S6A). Therefore, although some outlier variant associations have a more limited geographic extent than RIN3, they still describe results that suggest parallel evolutionary changes following Bd exposure.

The splined window analysis identified 33 outlier regions for π_diff and 58 outlier regions for F_ST (Figure 6A, B). Of these, 9 regions were outliers for both metrics ("shared regions") and 2 of these shared regions also contained one or more of the outlier SNPs described above. A total of 35 annotated genes were found in the 9 shared regions. Given this large number of genes, here we focus on those with the strongest signal of selection and/or immune-related functions. The largest π_diff, indicative of directional selection, occurred in a 163 kb region on Contig19, 12.9 Mb upstream of the RIN3 outlier SNP (Figure 6C). This region contains approximately 500 SNPs and one annotated gene called "interferon-induced very large GTPase 1-like" (GVINP1). Additionally, a shared outlier region on Contig1 contained two complement factor genes (C6 and C7). Interestingly, this region had a large negative π_diff value, consistent with balancing selection. Finally, one shared outlier region on Contig8 contained one outlier SNP (TCF19) and was within 360 kb of another outlier SNP (VARS) (Figure 6D, Figure S7). This region (854 kb from the beginning of the outlier window to the VARS SNP) contained a total of 8 annotated genes. In Xenopus, five of these genes occur in the extended major histocompatibility complex (MHC) Class I region (FLOT1, TUBB, MDC1, CCHCR1, TCF19) and three occur in the extended MHC Class III region (HSP70, LSM2, VARS) (50). Therefore, this region under selection is part of the extended MHC Class I and III complex and shows synteny with other amphibian genomes.
Although the joint processes of Bd-caused population declines and selection in response to Bd exposure could affect genetic diversity of recovering populations, we found no consistent differences in individual-level heterozygosity or population-level π between naive and recovering populations (see Supporting Information -Frog evolution in response to Bd -Genetic diversity for details).Thus, despite localized selection in particular regions of the genome, we did not find evidence for reduced genetic diversity across the genome in recovering populations.In addition, no gene ontology (GO) biological functions, molecular functions, or cellular processes were over-represented in either the outlier variants or the 35 genes located in the overlapping F ST and π dif f splined windows (see Supporting Information -Frog evolution in response to Bd -GO analysis for details). In summary, our genomic results indicate that the exomes of frogs from naive and recovering populations show substantial differences, consistent with parallel evolutionary changes following Bd exposure.The regions under selection contain several immunologically-relevant genes and gene families that are directly linked to disease resistance in other taxa. Discussion Disease-induced population declines are decimating global biodiversity ( 5), but broadly-applicable strategies to recover affected species are generally lacking (e.g., 17).Here, we tested the possibility that populations of resistant individuals from naturally recovering populations can be used to reestablish extirpated populations of the endangered MYL frog in the presence of a highly virulent fungal pathogen (Bd).Our results indicate (i) the capacity of reintroduced populations to become established and eventually recover despite ongoing disease, (ii) that the recovering populations are likely to persist over a 50-year period, (iii) that there are substantial genomic differences between naive and recovering MYL frog populations, consistent with evolutionary change in frogs following Bd exposure, and (iv) that some of the genomic regions under selection contain genes related to disease resistance.Collectively, these results (Figure 1) provide a rare example of amphibian recovery in the presence of Bd, and have important implications for the conservation and recovery of amphibians and other taxa worldwide that are endangered by escalating impacts from emerging infectious diseases.In light of the generally low success rate of amphibian reintroduction efforts (51), our success in reestablishing MYL frog populations via translocation of resistant individuals is striking, and even more so given that MYL frogs were driven to near-extinction by Bd. 
In the following discussion, we follow the sequence of frog recovery described in Figure 1 to structure our key points. Previous field studies in MYL frogs show that frog-Bd dynamics and frog survival in the presence of Bd are fundamentally different between naive and recovering populations. Following the arrival and establishment of Bd in previously-naive populations, adult frogs develop high Bd loads that lead to mass die-offs (33). In contrast, in recovering populations adult frogs typically have low-to-moderate and relatively constant Bd loads and mass die-offs are not observed (39, 40, see also Figure S2). The differences in Bd load of frogs from naive and recovering populations are also observed in controlled laboratory studies (see Figure 4 in 14), and clearly indicate that frogs from recovering populations exhibit resistance against Bd infection. This resistance could in theory be due to several factors, including natural selection for more resistant genotypes, acquired immunity, and/or inherent between-population differences that pre-date Bd exposure, but until now evidence to evaluate the role of evolution was lacking.

Results from our genomic analyses suggest that natural selection for adaptive alleles is at least partially responsible for the increased resistance of frogs in recovering populations. We identified multiple specific alleles and genomic regions showing signatures of selection between adjacent naive and recovering MYL frog populations, consistent with selection following Bd exposure. These analyses are based on samples collected from virtually all of the MYL frog populations remaining in a naive state, as well as adjacent recovering populations. This study design produced genetic clusters that each contained at least one naive and one recovering population, allowing us to detect selection without the confounding effects of population structure. In addition, we did not find a reduction in overall genetic variation in the recovering populations, suggesting that despite localized selection in the genome, these populations retain adequate genetic diversity for long-term persistence.
Importantly, some genomic regions that we identified as under selection are associated with cellular and immunological mechanisms known to contribute to disease resistance, including in amphibians (52). For example, the MHC plays an important role in immunity. In our study, we identified a region that shows evidence of selection in recovering populations and contains eight genes associated with either the MHC Class I or Class III regions. These results corroborate numerous previous studies linking MHC genes to amphibian resistance against Bd (e.g., 53, 54). Similarly, the region with the strongest indication of directional selection (as measured by π_diff) contains the interferon-related gene GVINP1. Several previous studies of amphibians have found this gene to be differentially expressed during Bd infection (e.g., 23, 55) and in populations differing in Bd susceptibility (23). This gene is also strongly linked to disease in salmon, explaining a notable 20% of the resistance phenotype (56, 57). We also identified a region, characterized by high F_ST and low π_diff, that contained the complement genes C6 and C7. The complement system plays an important role in innate immunity (58), and our results could indicate that balancing selection is acting in this region of the genome to favor a diverse set of alleles, as is known for C6 in humans (59). Based on the analysis of individual outlier variants, the RIN3 gene showed a consistent pattern of allele frequency differences across all nine of the frog populations sampled in this study, indicating consistent selection in populations distributed across a wide geographic area. This gene is associated with immune response and in Xenopus is expressed during appendage regeneration (60). Finally, the outlier variant with the lowest p-value was the uncharacterized gene LOC108802036. In the genome of another frog species, this gene is located adjacent to a type I interferon gene (Np-IFNi2) (61), and together with GVINP1 further suggests the importance of interferon-related genes in this system. Collectively, the genes associated with these genomic differences may confer at least some degree of resistance against Bd infection, an attribute that may be critically important to population reestablishment and recovery in the presence of Bd.
Reintroduction of resistant MYL frogs was remarkably successful in reestablishing viable populations in the presence of Bd.Of the 12 translocated populations, approximately 80% showed evidence of both successful reproduction and recruitment of new adults.Year-1 survival for 12 of the 24 translocated cohorts exceeded 50%, and > 70% of translocated cohorts had survival above this 50% level when the earliest translocations are excluded (i.e., translocations conducted when methods were still being refined; see Materials and Methods -Frog population recovery -Field methods for a brief description of these refinements).The fact that the relatively low Bd loads and correspondingly high frog survival was maintained when frogs were moved from donor populations to recipient sites indicates that these characteristics of naturally-recovering populations were not solely an effect of site characteristics, but were also strongly influenced by resistance inherent in the frogs.Although it could be argued that the relatively invariant Bd loads before versus after translocation are a consequence of similar pathogen pressure in the donor and translocated populations, this is at odds with the fact that in the first year after translocation frog densities are typically 1-2 orders of magnitude lower in the translocated versus donor populations and pathogen pressure should follow a similar pattern.In addition to the maintenance of Bd load and frog survival between natal and translocation sites, the relatively high survival of translocated frogs was maintained in their progeny, as expected if resistance has a genetic basis. Results from the population viability model were also encouraging.In particular, translocated populations with > 50% survival in the first year post-translocation were predicted to have a low probability of extinction over 50 years (probability of extinction < 0.5 when year-1 juvenile survival probability was greater than 0.10).The viability model highlighted the important role of frog survival in affecting long-term population viability, and allowed us to extend the temporal scale of our study beyond the years covered by our post-translocation surveys.These long-term forecasts are important, given that reintroduced MYL frog populations may often take decades to achieve our ultimate goal of self-sustainability (41).Making well-supported projections about the long-term outcome of reintroduction efforts from shorter-term information is critically important to the process of adaptive management of species reintroduction programs (62), including the one we are carrying out for MYL frogs.Specifically, the combined results from our reintroduction study and viability modeling indicate that survival of frogs in the first year following translocation is an effective proxy of longer-term survival and population viability.In addition, given the repeatability of frog survival at a site, 1-year frog survival also serves as an effective proxy of site quality (i.e., the ability of a site to support high frog survival and a viable frog population over the long term).This proxy of site quality is important in the MYL frog system because accurately predicting the ability of a site to support a viable frog population a priori remains difficult, even after conducting 24 translocations over 16 years. 
Despite the demonstrated resistance of adult MYL frogs against Bd infection, individual and population-level impacts of Bd are still evident. In an earlier study of 2 of our 12 translocated populations (41), Bd infection and load had detectable effects on the survival of adults and may have influenced population establishment (sites referred to as "Alpine" and "Subalpine" in (41) are identified as "70550" and "70505" in the current study). Applying similar analyses to all 12 of our translocated populations would likely provide a broader perspective of the ongoing effect of Bd. In addition to these important but relatively subtle effects of Bd on adults, the impacts on younger life stages are more apparent. MYL frogs immediately following metamorphosis ("metamorphs") are highly susceptible to Bd infection (63) and as a result experience high mortality (34). This high susceptibility of metamorphs is documented in numerous species of anurans, and may result from the poorly developed immune system characteristic of this life stage (64). In naturally recovering and translocated MYL frog populations, we suggest that the high mortality of metamorphs is an important limitation on subsequent recruitment of new adults. Therefore, although adult MYL frogs appear relatively resistant, Bd infection continues to have important limiting effects on recovering populations (see also 65).

The recent emergence of Bd worldwide has contributed to the decline of hundreds of amphibian species, some of which are now extinct in the wild (8). This extraordinary impact on global amphibian biodiversity is compounded by the lack of any effective and broadly applicable strategies to reverse these impacts (17, 25). Importantly, in addition to the natural recovery documented for MYL frogs (14), other amphibian species are also showing evidence of post-epizootic recovery in the presence of Bd (12, 13), suggesting the possibility of also using animals from these recovering populations to reestablish extirpated populations. As with MYL frogs, the feasibility and long-term success of such efforts will depend on the availability of robust donor populations containing individuals that have the adaptive alleles necessary to allow frog survival and population growth in the presence of Bd. Despite the hopeful example of successful reestablishment of MYL frogs despite ongoing Bd infection, the challenge of recovering hundreds of Bd-impacted amphibian species globally is a daunting prospect. Although we now have a proven strategy to reestablish extirpated MYL frog populations, recovery across their large historical range will require substantial resources over many decades. The results of this study provide a hopeful starting point for that endeavor and other future efforts worldwide.

In our rapidly changing world, evolution is likely to play an important role in facilitating the resilience of wildlife populations.
Whether the documented disease resistance in MYL frogs and concurrent recovery of decimated populations provides an airtight example of evolutionary rescue will likely always be uncertain (given that we can never have a perfect understanding of the past). Regardless, we provide an example from the wild that suggests that evolution can produce individuals that harbor adaptive alleles and allow population recovery in a novel (i.e., Bd-positive) environment, and show conclusively that individuals from these recovering populations can be used to reestablish extirpated populations and expand the scale of natural recovery (Figure 1). We expect that similar species recovery actions will be an essential tool in wildlife conservation in an era of accelerating global change.

Materials and Methods

Frog population recovery

Field methods. For the 24 translocations we conducted, we identified donor populations from which adult frogs (≥ 40 mm snout-vent length) could be collected using several years of VES and skin swab collections (14), and results from population genetic analyses (48). The populations that we selected contained hundreds of R. sierrae adults and thousands of tadpoles. These relatively high abundances were the result of recent increases following previous Bd-caused declines (14). As is typical for recovering MYL frog populations, Bd prevalence in the donor populations was high (0.69-0.96) and Bd load (median log10(load) = 3.06-3.78 ITS copies) was two or more orders of magnitude below the level at which frog mortality is expected (log10(load) ≈ 5.78 copies) (33, 41). Recipient sites to which frogs were translocated were chosen based on previous R. sierrae presence (determined from VES and/or museum records) or characteristics that suggested high quality habitat for this species (66). At the beginning of this study, we had a relatively limited understanding of the factors that affect habitat quality. In subsequent years, we improved our site selection process by incorporating new information about important habitat features, in particular, overwinter habitats such as submerged boulders and overhanging banks. R. sierrae were absent from all recipient sites prior to the first translocation.
We conducted 1-4 translocations per site (Figure 2, Figure S1) and each translocated cohort included 18 to 99 frogs (median = 30).In preparation for each translocation, adult frogs were collected from the donor population and measured, weighed, swabbed, and PIT tagged.Frogs were transported to the recipient site either on foot or via helicopter.Following release, we visited translocated populations approximately once per month during the summer active season and conducted diurnal CMR surveys and VES (summer active season is generally July-August but can start as early as May and end as late as September; range of survey dates = May-25 to Sep-29, range of translocation dates = Jun-28 to Sep-02; median number of visits per summer = 2, range = 1-10).CMR surveys allowed estimation of adult survival, recruitment of new adults, and adult population size, and VES provided estimates of tadpole and juvenile abundance.During 2006-2012, we conducted CMR surveys on a single day (primary period) per site visit, during which we searched all habitats repeatedly for adult frogs.Frogs were captured using handheld nets, identified via their PIT tag (or tagged if they were untagged), measured, weighed, swabbed, and released at the capture location.During 2013-2022, we generally used a robust design in which all habitats were searched during several consecutive days (median number of secondary periods per primary period = 3; range = 3-7), and frogs were processed as described above.However, when the number of frogs detected on the first survey day was zero or near zero, we typically conducted only a single-day CMR survey.When using a robust design, within a primary period, frogs that were captured during more than one secondary period were measured, weighed, and swabbed during the first capture, and during subsequent captures were only identified and released. During each site visit, we conducted VES either immediately before CMR surveys or during the first day of CMR surveys.VES was conducted by walking the entire water body perimeter, first 100 m of each inlet and outlet stream, and any fringing ponds and wetlands, and counting all R. sierrae tadpoles and juveniles.These R. sierrae life stages have high detectability, and counts are highly repeatable and provide estimates of relative abundance (31). Frog counts and reproductive success.For each of the translocated populations, we used the presence of tadpoles and/or juveniles from VES and counts of new recruits (i.e, untagged adults) in CMR surveys to provide two measures of successful reproduction.To calculate the proportion of years in which tadpoles/juveniles were present at a site, we excluded surveys conducted in the year of the initial translocation to that site.This exclusion accounted for the fact that all translocations were conducted after the breeding period and reproduction would therefore not occur until the following year.Similarly, to calculate the proportion of years in which new recruits were present at a site, we excluded surveys conducted during the 3 years following the initial translocation.This accounted for the multi-year tadpole and juvenile stages in MYL frogs (Table S1). 
Estimation of frog survival and abundance.For each translocation site, we estimated survival of translocated frogs, recruitment of new frogs into the adult population, and adult population size using a site-specific Bayesian open-population Jolly-Seber CMR model with known additions to the population (i.e., translocated cohorts), as implemented by the mrmr package (67) and using R Statistical Software (v4.4.4,68) (see Supporting Information -Frog population recovery -CMR model structure for details).Briefly, the model tracks the states of M individuals that comprise a superpopulation made up of real and pseudo-individuals (see 41, for details).The possible states of individuals include "not recruited", "alive", and "dead".The possible observations of individuals include "detected" and "not detected".We assume that individuals that are in the "not recruited" or "dead" states are never detected (i.e., there are no mistakes in the individual PIT tag records).We also assume that new recruits were the result of within-site reproduction and not immigration from adjacent populations.This assumption is justified by the fact that no R. sierrae populations were present within several kilometers of the translocation sites.For all models, we used mrmr defaults for priors, number of chains (4), and warmup and post-warmup iterations (2000 for each).We evaluated convergence of the Markov chain Monte Carlo (MCMC) algorithm using trace plots and Gelman-Rubin statistics (Rhat). Predictors of post-translocation frog survival. To identify important predictors of frog survival following translocation, we used multilevel Bayesian models (69,70).Included predictor variables describe characteristics of sites, translocated cohorts, and individuals (Bd load, sex, frog size, site elevation, winter severity in the year of translocation, winter severity in the year following translocation, donor population, day of year on which a translocation was conducted, and translocation order).We used 1-year post-translocation survival estimates from CMR models as the response.Estimated survival was rounded to integer values to produce a binary outcome, and modeled with a Bernoulli distribution.Group-level (random) effects included site_id, translocation_id, or translocation_id nested within site_id.We performed all analyses with the rstanarm package (71) and R Statistical Software (v4.4.4, 68).For all models, we used default, weakly informative priors, four chains, and 5000 iterations each for warmup and post-warmup.We checked MCMC convergence using trace plots and Rhat, and evaluated model fit using leave-one-out cross-validation (72), as implemented by the loo package (73).(See Supporting Information -Frog population recovery -Among-site survival modeling for details.) 
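The among-site survival analysis described above treats the rounded 1-year survival estimates as Bernoulli outcomes with site-, cohort-, and individual-level predictors and group-level effects fit with rstanarm. As a rough illustration of just the fixed-effect core of such a model, the sketch below fits a plain logistic regression by maximum likelihood to simulated data; the predictor names, values, and effect sizes are placeholders, not the study data or the full Bayesian multilevel model.

```python
# Minimal sketch of the fixed-effect core of the survival meta-analysis:
# a Bernoulli (logistic) model of 1-year survival on a few site/cohort
# predictors. Data and predictor names are simulated placeholders; the
# real analysis used rstanarm with group-level (random) effects.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 24                                   # e.g., one row per translocated cohort
X = np.column_stack([
    np.ones(n),                          # intercept
    rng.normal(size=n),                  # standardized winter severity (snow_t1)
    rng.normal(size=n),                  # standardized site elevation
    rng.normal(size=n),                  # standardized Bd load at translocation
])
true_beta = np.array([0.3, -0.8, 0.6, 0.0])   # assumed effects (Bd load ~ none)
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)                   # rounded 1-year survival (0/1)

def neg_log_lik(beta):
    eta = X @ beta
    # Bernoulli log-likelihood, written with a numerically stable log(1+exp)
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

fit = minimize(neg_log_lik, x0=np.zeros(X.shape[1]), method="BFGS")
print("estimated coefficients:", np.round(fit.x, 2))
```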
Changes in Bd load following translocation.We analyzed skin swabs using standard Bd DNA extraction and qPCR methods (74, see Supporting Information -Frog population recovery -Laboratory methods for details).To assess the magnitude of changes in Bd load on frogs following translocation, we compared Bd loads measured before versus after translocation.Before-translocation loads were quantified using skin swabs collected from all to-be-translocated frogs at the donor site on the day before or the day of the translocation.After-translocation Bd loads were based on all swabs collected from translocated frogs at the recipient site in the year of and the year following translocation.Individual frogs and their associated Bd loads were included in the dataset only if frogs were captured at the recipient site at least once during the 1-year period following translocation. Population viability modeling Model description.To determine the implications of observed 1-year adult survival on the long-term viability of populations established via translocation, we developed a population model for MYL frogs.Our central question was: How does the magnitude and variation in observed adult survival probability across translocated populations affect the long-term persistence probability of populations?We developed a model that tracked seven state variables of a frog population: density of translocated adults (A T ), density of adults naturally recruited into the population (A R ), density of first-year tadpoles (L 1 ), density of second-year tadpoles (L 2 ), density of third-year tadpoles (L 3 ), density of first-year juveniles (J 1 ), and density of second-year juveniles (J 2 ).We divided adults into two classes A T and A R because there is evidence that the survival probability of translocated adults and naturally recruited adults differs (Figure S4). We modeled the dynamics of these seven state variables using a discrete-time, stage-structured model where a time step is one year. The dynamics are given by equation 1. The parameters in this model are yearly survival probability σ• (the subscript "•" indicates a particular state variable), probability that a female frog reproduces in a given year p F , number of eggs produced by a female frog in a year that successfully hatch F , probability of a first-year tadpole remaining as a tadpole p L1 , probability of a second-year tadpole remaining as a tadpole p L2 , and probability of a first-year juvenile remaining as a juvenile p J1 .First-year juvenile survival and recruitment σ J1 is the parameter that we think is most influenced by environmental stochasticity. In this model we ignore density-dependent recruitment because we were interested in the growth of the population from an initial reintroduction and whether this growth was sufficient to prevent extinction over 50 years following the introduction.We also did not directly consider the dynamics of Bd in this model.We made this decision because (i) translocated populations are infected with Bd at high prevalence (41), and (ii) host density does not seem to play a significant role in multi-year Bd infection dynamics in this system (46). Thus, ignoring Bd infection dynamics and instead assuming all host vital rates are in the presence of high Bd prevalence significantly simplifies the model without much loss of realism.Additional details are provided in Supporting Information -Population viability modeling -Incorporating yearly variability in survival rates and Estimating model parameters. 
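The long-run growth rate λ of a stage-structured model like the one described above is the dominant eigenvalue of the yearly projection matrix, and the viability analysis then adds demographic and environmental stochasticity on top of that deterministic skeleton. The sketch below builds one plausible projection over the seven stages named in the text, computes λ, and runs a toy stochastic simulation of 50-year extinction probability. All vital-rate values and the exact transition wiring are illustrative assumptions, not the published parameterization (Table S1).

```python
# Sketch of (i) the deterministic long-run growth rate and (ii) a toy
# stochastic 50-year viability simulation for a 7-stage frog model
# (stages: L1, L2, L3 tadpoles; J1, J2 juveniles; A_T, A_R adults).
# All parameter values and the transition wiring are assumptions made
# for illustration only.
import numpy as np

def projection_matrix(s_A=0.6, s_J1=0.10):
    s_L1, s_L2, s_L3 = 0.5, 0.5, 0.5      # tadpole survival (assumed)
    s_J2 = 0.5                            # second-year juvenile survival (assumed)
    p_L1, p_L2, p_J1 = 0.8, 0.8, 0.5      # prob. of remaining in stage (assumed)
    p_F, F = 0.5, 100.0                   # breeding prob., hatched eggs per female

    # Stage order: [L1, L2, L3, J1, J2, A_T, A_R]
    M = np.zeros((7, 7))
    M[0, 5] = M[0, 6] = 0.5 * p_F * F     # reproduction (half of adults female)
    M[1, 0] = s_L1 * p_L1                 # L1 stays a tadpole -> L2
    M[3, 0] = s_L1 * (1 - p_L1)           # L1 metamorphoses -> J1
    M[2, 1] = s_L2 * p_L2
    M[3, 1] = s_L2 * (1 - p_L2)
    M[3, 2] = s_L3                        # L3 -> J1
    M[4, 3] = s_J1 * p_J1                 # J1 -> J2
    M[6, 3] = s_J1 * (1 - p_J1)           # J1 recruits to the adult class
    M[6, 4] = s_J2                        # J2 -> A_R
    M[5, 5] = M[6, 6] = s_A               # adults persist in their class
    return M

def long_run_growth_rate(M):
    return max(abs(np.linalg.eigvals(M)))

def extinction_probability(s_A, mean_sJ1, years=50, n_reps=1000, n_init=40,
                           sd_sJ1=0.05, rng=np.random.default_rng(0)):
    """Toy stochastic projection: Poisson demographic noise plus
    year-to-year (environmental) variation in first-year juvenile survival."""
    extinct = 0
    for _ in range(n_reps):
        n = np.zeros(7)
        n[5] = n_init                      # introduce 40 translocated adults
        for _ in range(years):
            sJ1 = float(np.clip(rng.normal(mean_sJ1, sd_sJ1), 0.0, 1.0))
            M = projection_matrix(s_A=s_A, s_J1=sJ1)
            n = rng.poisson(M @ n).astype(float)   # integer-valued stochastic update
            if n.sum() == 0:
                extinct += 1
                break
    return extinct / n_reps

for s_A in (0.3, 0.6):
    lam = long_run_growth_rate(projection_matrix(s_A=s_A))
    p_ext = extinction_probability(s_A, mean_sJ1=0.10)
    print(f"adult survival {s_A:.1f}: lambda = {lam:.2f}, "
          f"P(extinct in 50 y) ~ {p_ext:.2f}")
```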
Model analysis.After parameterizing our model with CMR-estimated adult frog survival probabilities and other known vital rates (Table S1), we performed four analyses.First, we compute the long-run growth rate λ for each of our 12 translocated populations to determine if the populations were deterministically predicted to grow or decline in the long-run.Second, we compute the elasticity of λ to four key model parameters to quantify how much changes in these parameters affected the long-run growth rate (Figure S5).This also helped us determine where in the model environmental variation in juvenile survival and recruitment would have the largest effects on population dynamics.Third, we included demographic stochasticity and environmental stochasticity in σ J 1 in our model and simulated the 50-year viability (i.e., 1 -extinction probability) of populations given an introduction of 40 adult individuals into an unoccupied habitat.Finally, we fit our model to our longest translocation trajectory to confirm that our model could reasonably reproduce the observed recovery trajectories of MYL frogs following reintroductions.Additional details are provided in Supporting Information -Population viability modeling -Model analysis and simulation.Code to replicate the analyses can be found at https://github.com/SNARL1/translocation. Frog evolution in response to Bd Sampling and sequencing.We collected DNA samples via buccal swabbing (75) from 53 Rana muscosa/Rana sierrae individuals: 24 from 4 naive populations, and 29 from 5 recovering populations.These populations are located in the southern Sierra Nevada, from northern Yosemite National Park to northern Sequoia National Park (Figure 5).Samples were collected from 5-6 frogs per population.To minimize potential confounding effects caused by variation in frog genotypes across latitude (49), we selected sampling sites such that both population types were represented across similar latitudinal ranges.DNA was extracted following Qiagen DNEasy manufacturer's protocols. We sequenced the samples using an exome capture assay as described in (49).Briefly, genomic libraries were prepared and captured using a custom Nimblegen capture pool.Capture baits were designed based on the coding regions of the R. muscosa transcriptome (GenBank accession GKCT00000000).Captured libraries were then pooled and sequenced on a NovaSeq 6000 150PE Flow Cell S1 at the Vincent J. Coates Genomics Sequencing Lab at UC Berkeley.Raw sequencing reads are available from NCBI SRA (PRJNA870451). 
We then further filtered our dataset at the individual and variant level. First, we trimmed our variants to only include those with minor allele frequency > 0.03, a maximum depth of 250 and minimum depth of 5, a minimum genotype quality of 20, and a maximum missing proportion of 0.5. This filter resulted in 427,038 sites, of which 353,172 were SNPs and 73,866 were INDELS. Finally, we trimmed samples with an average depth across filtered sites < 7x (n = 3). Our final dataset included 50 samples, 23 from naive and 27 from recovering populations, with an average depth of 16.7x (range = 7.4x-26.1x).

Data analysis. To visualize the genomic relationships of our populations, we conducted a PCA using the glPCA function in the adegenet R package (80). To detect regions of the genome that differed between naive and recovering populations, i.e., regions under selection, we used two approaches: (1) a multivariate linear mixed model to evaluate individual variants (SNPs and INDELs), and (2) a splined window analysis to evaluate larger genomic regions. For the variant analysis, we first used a stringent data filter to include only variants with < 5% missing data (missing for no more than 2 individuals), and then calculated the likelihood ratio statistic for the resulting set of 148,307 high quality variants across 127 contigs using GEMMA (81). GEMMA calculates and incorporates a relatedness matrix for input samples, allowing us to account for relatedness and population structure when calculating likelihood ratio statistics. We identified variants showing different allele frequencies between naive versus recovering populations ("outliers") using a Bonferroni-corrected significance level of 0.01. We visualized the results using a Manhattan plot and QQ plot. We developed a more liberal set of outlier variants using a Bonferroni-corrected significance level of 0.05 and used this set solely for the gene ontology (GO) analysis (see below and Supporting Information - Frog evolution in response to Bd - GO analysis; Dataset S1, S2).

In the analysis of individual variants, for each outlier variant we determined whether the variant was synonymous (protein sequence the same for each variant) or non-synonymous (protein sequence differs between variants), and where in the gene it was located. To do this, we first extracted the reference genome sequence surrounding the variant using the bedtools "getfasta" function (82). Next, we re-annotated each sequence using BLAST to get the predicted gene location based on the closest annotated reference (83). We then translated each variant to amino acids and aligned this translation to that of the gene annotation to ensure proper frame of reference using Geneious (84). After ensuring proper translation, we characterized variants as within or outside the coding sequence of the gene and as either synonymous or non-synonymous.

In the splined window analysis, we identified outlier regions using F_ST and differences in nucleotide diversity (π_diff) between naive and recovering populations. First, we calculated per-site F_ST between the naive and recovering individuals for all bi-allelic SNPs in the 30 largest contigs (98% of all SNPs) using VCFtools (85). Next, we calculated per-site nucleotide diversity π separately for individuals from the naive and recovering populations using VCFtools, then calculated π_diff for each population (π_diff = π_naive − π_recovering).
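The outlier-calling step described above is a Bonferroni-corrected significance threshold applied to the per-variant association tests. The following sketch shows only that thresholding step on a placeholder vector of p-values; the GEMMA likelihood-ratio output itself is not reproduced here.

```python
# Sketch of the Bonferroni outlier-calling step applied to per-variant
# association p-values (placeholder values; in the study these come from
# GEMMA likelihood-ratio tests on ~148,000 variants).
import numpy as np

rng = np.random.default_rng(0)
n_variants = 148_307
p_values = rng.uniform(size=n_variants)          # placeholder p-values
p_values[:11] = 1e-9                             # pretend a few strong signals

alpha_strict, alpha_liberal = 0.01, 0.05
thr_strict = alpha_strict / n_variants           # Bonferroni-corrected cutoffs
thr_liberal = alpha_liberal / n_variants

outliers_strict = np.flatnonzero(p_values < thr_strict)
outliers_liberal = np.flatnonzero(p_values < thr_liberal)
print(f"strict (0.01) outliers:  {outliers_strict.size}")
print(f"liberal (0.05) outliers: {outliers_liberal.size}  (used only for the GO analysis)")
```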
We concatenated the values for F_ST and π_diff in order of size-sorted chromosome number and adjusted the SNP position based on the relative position in the genome (for more efficient data processing and to better contextualize the strength of the outlier signals in different regions of the genome). We then used the GenWin R package (86) to conduct a splined discrete window analysis for F_ST and π_diff. This method calculates where non-overlapping window boundaries should occur by identifying inflection points in the spline fitted to F_ST and π_diff values along the genome, therefore balancing the false positive and false negative results that occur using other window-based methods (86). This method also calculates a W-statistic allowing for outlier identification. We identified outliers as those with a W-statistic greater than 4 standard deviations above the mean for F_ST or above/below the mean for π_diff. These standard deviations represent strict criteria to select only the top ~0.3% of windows. Shared outliers were then identified as those that were outliers in both analyses, meaning that they showed (i) high differentiation between naive and recovering populations, and (ii) differential patterns of nucleotide diversity in the same region. Finally, we extracted gene transcripts mapped within each region and retrieved annotation for that region using BLAST (Supporting Information: Datasets S3, S4). (A simplified numerical sketch of this windowing logic is given after the figure captions below.)

Fig. 1. For MYL frogs, a conceptual model depicting the Bd-caused decline and subsequent natural recovery (black text), facilitated recovery via reintroductions, and the linkages between these two pathways. Rectangles and hexagons represent outcomes and processes, respectively. Blue text indicates components that are included in the current study. The timeline shows the general sequence of the components, with the dotted portion indicating a projection into the future.

Fig. 2. Median 1-year survival for each cohort of translocated frogs at the 12 recipient sites, as estimated for each site from the mrmr CMR model. Error bars show the 95% uncertainty intervals. Sites are arranged along the x-axis using the average of the median 1-year survival per translocation at each site. Dot colors indicate the donor population from which frogs in each translocated cohort were collected. When multiple translocations were conducted to a site, points and error bars are slightly offset to avoid overlap.

Fig. 6. Evidence for selection on genomic regions in recovering MYL frog populations. Manhattan plot of the results from the splined window analysis showing outlier regions for the difference in (A) nucleotide diversity π_diff and (B) F_ST. In (A), outlier regions are shown above the upper red dashed line and below the lower red dashed line. In (B), outlier regions are shown above the single dashed red line. Outlier regions for either π_diff or F_ST are shown in blue and outlier regions for both π_diff and F_ST are shown in red. (C) Magnified Contig19 from (A) showing two adjacent outlier regions for π_diff 12.9 Mb upstream of the RIN3 outlier SNP (indicated with a dashed vertical blue line). (D) Magnified Contig8 from (B) showing the F_ST outlier region that includes the outlier SNPs TCF19 and VARS. This region of the genome contains 8 annotated genes known to occur in the extended MHC Class I and III regions.
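GenWin places window boundaries at inflection points of a smoothing spline fitted to the per-site statistic and reports a W-statistic per window. The sketch below mimics that idea with scipy (spline fit, boundaries at sign changes of the second derivative, windows flagged when their mean exceeds the mean + 4 SD); it is a simplified stand-in for the package on simulated data, not GenWin's actual algorithm or W-statistic.

```python
# Simplified stand-in for the splined window analysis: fit a smoothing
# spline to per-site F_ST, place window boundaries at inflection points
# (sign changes of the spline's second derivative), and flag windows whose
# mean F_ST exceeds mean + 4 SD. The data are simulated and the threshold
# only mirrors the criterion described in the text.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
pos = np.sort(rng.uniform(0, 1e6, size=5000))          # SNP positions (bp)
fst = np.clip(rng.normal(0.05, 0.03, size=pos.size), 0, 1)
fst[(pos > 4.0e5) & (pos < 4.2e5)] += 0.25             # implant a divergent region

spline = UnivariateSpline(pos, fst, k=3, s=len(pos) * np.var(fst))
curvature = spline.derivative(2)(pos)
boundaries = pos[np.flatnonzero(np.diff(np.sign(curvature)) != 0)]
edges = np.concatenate(([pos[0]], boundaries, [pos[-1]]))

window_means = []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (pos >= lo) & (pos < hi)
    if sel.any():
        window_means.append((lo, hi, fst[sel].mean()))

means = np.array([m for _, _, m in window_means])
cutoff = means.mean() + 4 * means.std()
outliers = [(lo, hi, m) for lo, hi, m in window_means if m > cutoff]
print(f"{len(window_means)} windows, {len(outliers)} outlier window(s) above mean + 4 SD")
```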
Cluster Interpretation of Properties of Alternating Parity Bands in Heavy Nuclei

The properties of the states of the alternating parity bands in actinides, Ba, Ce and Nd isotopes are analyzed within a cluster model. The model is based on the assumption that cluster-type shapes are produced by the collective motion of the nuclear system in the mass asymmetry coordinate. The calculated spin dependences of the parity splitting and of the electric multipole transition moments are in agreement with the experimental data.

I. INTRODUCTION

The low-lying negative parity states observed in actinides and in the heaviest known Ba, Ce, Nd and Sm isotopes are definitely related to reflection-asymmetric shapes [1,2]. There are several approaches to treat collective motion leading to reflection-asymmetric deformations. One of them is based on the concept of a nuclear mean field which has a static octupole deformation or is characterized by large amplitudes of reflection-asymmetric vibrations around the equilibrium shape [2,3,4,5,6]. In this approach the parity splitting is explained by octupole deformation. Another approach [7,8,9,10] is based on the assumption that the reflection-asymmetric shape is a consequence of alpha-clustering in nuclei [11,12,13]. In the algebraic model [7,8,9,10] the corresponding wave functions of the ground and excited states consist of components without and with dipole bosons (in addition to the quadrupole bosons), which are related to mononucleus and alpha-cluster components, respectively. A variant of the algebraic model including the octupole bosons in addition to the dipole bosons has been applied in [14,15] to the description of the low-lying negative parity states in actinides. In [16,17,18,19] a cluster configuration with a light cluster heavier than 4He was used in order to describe the properties of the low-lying positive and negative parity states. In both models [7,8,9,10] and [16,17,18,19] the relative distance between the centers of mass of the clusters at fixed mass asymmetry is the main collective coordinate for the description of the alternating parity bands. Nuclear cluster effects are mostly pronounced in the light even-even N = Z nuclei with the alpha-particle as the natural building block. There is a close relationship between the alpha-cluster description and the deformed shell model [11]. It is known from Nilsson-Strutinsky type calculations for light nuclei that nuclear configurations corresponding to the minima of the potential energy contain particular symmetries which are related to certain cluster structures [20,21,22]. By using the antisymmetrized molecular dynamics approach [23,24], the formation and dissolution of clusters in light nuclei, like 20Ne and 24Mg, are described. The idea of clusterization applied to heavy nuclei does not contradict the mean field approach. The coexistence of the clustering and of the mean field aspects is a unique feature of the nuclear many-body system. The problem of the existence of a cluster structure in the ground state of heavy nuclei has attracted much attention, especially because of the experimentally observed cluster decay [25]. The available experimental and theoretical results provide evidence for the existence of fission modes created by the clustering of the fissioning nuclei [26]. Indications of clusterization of highly deformed nuclei are demonstrated in [27,28].
The aim of the present paper is the development of a cluster-type model which provides not only a qualitative but also a quantitative explanation of the properties of alternating parity bands. The description of the excitation spectra, Eλ-transition probabilities (λ = 1, 2, 3) and the angular momentum dependence of the parity splitting [29,30] are the main subjects of this paper. Our model is based on the assumption that the reflection-asymmetric shapes are produced by the collective motion of the nuclear system in the mass asymmetry coordinate [31]. The values of the odd-multipolarity transition moments (dipole and octupole) are strongly correlated with the mass asymmetry deformation of the nucleus. In general, the value of the quadrupole moment is related to the degree of the quadrupole correlations (deformation) in the nucleus. However, the collective motion in the mass asymmetry degree of freedom simultaneously creates a deformation with even and odd multipolarities. Therefore, the calculations of Eλ-transition moments are of interest in the proposed model. The single-particle degrees of freedom are not taken explicitly into consideration since our aim is to show that the suggested cluster model gives a good quantitative explanation of the observed properties of the low-lying negative parity states. If this is so, this model can serve as a good ground for the development of an extended model with additional degrees of freedom. It should be noted that the first results of calculations of the alternating parity spectra for a few actinides within the cluster model are already presented in the Letter [31]. Besides Ra, Th and U isotopes, in the present paper we present the results of calculations of the energies of alternating parity bands in 240,242 Pu, 144,146,148 Ba, 146,148 Ce, and 146,148 Nd. The electromagnetic transitions are described in this paper with the cluster model for many nuclei, and the spin dependence of the intrinsic quadrupole transition moment is predicted for 238 U. Simple analytical expressions obtained for the parity splitting and the spectra of alternating parity bands are useful for estimations. The dependence of alpha-clusterization in actinides on the angular momentum is shown for the first time.

II. MODEL

A. Hamiltonian in mass asymmetry coordinate

Dinuclear systems consisting of a heavy cluster A_1 and a light cluster A_2 were first introduced to explain data on deep inelastic and fusion reactions with heavy ions [32,33,34]. The mass asymmetry coordinate η, defined as η = (A_1 − A_2)/(A_1 + A_2), which describes the partition of nucleons between the nuclei forming the dinuclear system, and the distance R between the centers of the clusters are used as the relevant collective variables [35]. The wave function in η can be thought of as a superposition of different cluster-type configurations, including the mononucleus configuration with |η| = 1, which are realized with certain probabilities. The relative contribution of each cluster component to the total wave function is determined by the collective Hamiltonian described below. Our calculations have shown that in the considered cases the dinuclear configuration with an alpha cluster (η = η_α) has a potential energy which is close to or even smaller than the energy of the mononucleus at |η| = 1 [28,31]. Since the energies of configurations with a light cluster heavier than an α-particle increase rapidly with decreasing |η|, we restrict our investigations to configurations with light clusters not heavier than Li (η = η_Li), i.e.
to cluster configurations near |η| = 1.

The Hamiltonian describing the dynamics in η has the following form:

H = −(ℏ²/2) (∂/∂η) (1/B_η) (∂/∂η) + U(η, I),   (1)

where B_η is the effective mass and U(η, I) is the potential. In order to calculate the dependence of the parity splitting on the angular momentum and the electric dipole, quadrupole and octupole transition moments, we search for solutions of the stationary Schrödinger equation describing the dynamics in η:

H Ψ_n(η, I) = E_n(I) Ψ_n(η, I).   (2)

The eigenfunctions Ψ_n of this Hamiltonian have a well-defined parity with respect to the reflection η → −η. Before we come to the results of Eq. (2), we discuss the calculation of the potential U(η, I), the mass parameter B_η and the moments of inertia ℑ(η) appearing in H.

B. Potential energy

The potential U(η, I) in Eq. (1) is taken as the dinuclear potential energy for |η| < 1:

U(η, I) = B_1 + B_2 − B + V(R_m, η, I).   (3)

Here, the internuclear distance R = R_m is the touching distance between the clusters and is set to be equal to the value corresponding to the minimum of the potential in R for a given η. The quantities B_1 and B_2 (which are negative) are the experimental binding energies of the clusters forming the dinuclear system at a given mass asymmetry η, and B is the binding energy of the mononucleus. The quantity V(R, η, I) in (3) is the nucleus-nucleus interaction potential. It is given as

V(R, η, I) = V_coul(R, η) + V_N(R, η) + V_rot(R, η, I),   (4)

with the Coulomb potential V_coul, the centrifugal potential V_rot = ℏ²I(I + 1)/(2ℑ(η, R)) and the nuclear interaction V_N. In the realization of the cluster model developed in this paper, where the overlap of the clusters is much smaller than in the model of [16], the choice of the relevant cluster configuration follows the minimum of the total potential energy of the system with a cluster-cluster interaction taken additionally into consideration. As a result, we describe the same nuclear properties as in [16] with configurations of clusters having larger mass asymmetry and a smaller overlap. The potential V(R, η, I) and the moment of inertia ℑ(η, R) are calculated for special cluster configurations only, namely for the mononucleus (|η| = 1) and for the two cluster configurations with the α- and Li-clusters as light clusters, respectively. These calculated points are used later to interpolate the potential smoothly by a polynomial. The energies of the Li-cluster configurations are about 15 MeV larger than the binding energies of the mononuclei considered. Therefore, for small excitations only oscillations in η which lie in the vicinity of |η| = 1 are of interest, i.e. only cluster configurations up to Li-clusters need to be considered. The potential V_N is obtained with a double folding procedure with the ground-state nuclear densities of the clusters. Antisymmetrization between the nucleons belonging to different clusters is accounted for by a density dependence of the nucleon-nucleon force, which gives a repulsive core in the cluster-cluster interaction potential. Details of the calculation of V_N are given in [36]. The parameters of the nucleon-nucleon interaction are fixed in nuclear structure calculations [37]. Other details are presented in [31].

Our calculations show that the potential energy has a minimum at |η| = η_α in 218,220,222,224,226 Ra and 222,224,226 Th isotopes. In order to demonstrate the dependence of the potential on the neutron number, we present in Fig. 1 the calculated values of U(η_α, I = 0) ≡ U(η_α) of configurations with an α-cluster, taking the long chain of Ba isotopes as an example.
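Equation (2) is a one-dimensional eigenvalue problem in η whose even- and odd-parity solutions form near-degenerate doublets when the potential has two reflection-symmetric minima separated by a barrier. The sketch below illustrates this mechanism by diagonalizing a finite-difference Hamiltonian for a schematic quartic double well; the potential shape, the kinetic scale and the barrier values are illustrative choices, not the folding-potential calculation described in the text.

```python
# Toy illustration of the parity splitting implied by Eq. (2): solve a 1D
# Schroedinger equation with a reflection-symmetric double-minimum potential
# by finite differences and compare the lowest even- and odd-parity levels.
# All numbers (well positions, kinetic scale, barrier heights) are schematic.
import numpy as np

def parity_splitting(barrier_height, n=800):
    x = np.linspace(-1.0, 1.0, n)            # mass-asymmetry-like coordinate
    dx = x[1] - x[0]
    hbar2_over_2B = 0.01                      # schematic kinetic scale
    x0 = 0.6                                  # positions of the two minima
    U = barrier_height * ((x / x0) ** 2 - 1.0) ** 2   # quartic double well

    # Finite-difference Hamiltonian with Dirichlet boundaries
    kin = hbar2_over_2B / dx ** 2
    H = np.diag(2 * kin + U) - kin * np.eye(n, k=1) - kin * np.eye(n, k=-1)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]                        # splitting of the lowest doublet

# A higher barrier between the two reflection-related minima suppresses the
# splitting of the lowest doublet. In the full model, rotation tends to favor
# the cluster configuration (larger moment of inertia), which is one way to
# read the angular-momentum dependence of the parity splitting discussed in
# the text; here the barrier is simply varied by hand.
for Ub in (0.3, 0.6, 1.2):
    print(f"barrier = {Ub:.1f}: parity splitting = {parity_splitting(Ub):.3e}")
```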
In the neutron deficient isotopes U(η α ) is smaller than zero and an α-clusterization is more likely. When the neutron number approaches the magic value of 82, the nucleus becomes stiffer with respect to vibrations in η and U(η α ) is larger than zero. The appearance of two neutrons above shell closure is in favor for an α -clusterization. In this case U(η α ) drops much and again becomes smaller than zero. Further addition of neutrons increases the nuclear stiffness with respect to η vibrations. C. Moments of inertia The calculation of the moment of inertia ℑ(η) = ℑ(η, R m ) needed to determine the potential energy at I = 0 has been described in [31]. For completeness, we repeat in this subsection the most important information. As was shown in [28], the highly deformed states are well described as cluster systems and their moments of inertia are about 85% of 6 the rigid-body limit. Following this, we assume that the moment of inertia of the cluster configurations with α and Li as light clusters can be expressed as Here, ℑ r i , (i = 1, 2) are the rigid body moments of inertia for the clusters of the dinuclear system, c 1 =0.85 [28,31] for all considered nuclei and m 0 is the nucleon mass. It should be noted that the angular momentum is treated in this paper as the sum of the angular momentum of the collective rotation of the heavy cluster and of the orbital momentum of the relative motion of the two clusters. Single particle effects, like alignment of the single particle angular momentum in the heavy cluster, are presently disregarded. For |η| = 1, the value of the moment of inertia is not known from the data because the experimental moment of inertia is a mean value between the moment of inertia of the mononucleus (|η|=1) and the ones of the cluster configurations arising due to the oscillations in η. We assume that where ℑ r is the rigid body moment of inertia of the mononucleus with A nucleons calculated with deformation parameters from [38] and c 2 is a scaling parameter which is fixed by the energy of the first 2 + or other positive parity state, for example 6 + . The chosen values of c 2 vary in the interval 0.1 < c 2 < 0.3. So, in our calculations there is a free parameter c 2 . However, this parameter is used to describe the rotational energies averaged over the parity and not the parity splitting studied in this paper. D. Mass parameter The method of the calculation of the inertia coefficient B η used in this paper is given in [39]. Our calculations show that B η is a smooth function of the mass number A. As a consequence, we take nearly the same value of B η =20 ×10 4 m 0 fm 2 for almost all considered actinide nuclei with a variation of 10%. However, for 222 Th and 220,222 Ra we varied B η in the range B η =(10-20)×10 4 m 0 fm 2 to obtain the correct value of E 0 (I = 0). These variations of B η lead to better results for light Ra isotopes than those in [31], where the obtained values of the parity splitting at the beginning of the alternating parity band are smaller than the experimental ones. Using a smooth mass dependence of B η [39] we get B η =4.5×10 4 m 0 fm 2 in the Ba, Ce and Nd region. However, better results we obtain for B η =3×10 4 m 0 fm 2 . 
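As a rough companion to the moment-of-inertia prescription of the previous subsection, the sketch below adds the rigid-body moments of inertia of the two clusters to the relative-motion term m₀(A₁A₂/A)R_m² and scales the sum by c₁ = 0.85, as described in the text. The spherical radii (r₀A^{1/3} with r₀ = 1.2 fm) and the touching distance are crude assumptions, so the number is indicative of the order of magnitude only.

```python
# Schematic estimate of the dinuclear moment of inertia
#   I(eta) ~ c1 * (I1_rigid + I2_rigid + m0 * (A1*A2/A) * Rm**2)
# with c1 = 0.85 as in the text; radii and Rm are crude spherical assumptions.

M0 = 1.0   # work in units of m0*fm^2 (nucleon mass set to 1)
R0 = 1.2   # fm, assumed radius parameter

def rigid_sphere_moi(a):
    """Rigid-body moment of inertia of a sphere of A nucleons, in m0*fm^2."""
    r = R0 * a ** (1.0 / 3.0)
    return 0.4 * a * M0 * r ** 2

def cluster_moi(a1, a2, c1=0.85):
    a = a1 + a2
    rm = R0 * (a1 ** (1.0 / 3.0) + a2 ** (1.0 / 3.0))   # touching distance, assumed
    relative = M0 * (a1 * a2 / a) * rm ** 2              # relative-motion contribution
    return c1 * (rigid_sphere_moi(a1) + rigid_sphere_moi(a2) + relative)

print(f"I(alpha-cluster 226Ra) ~ {cluster_moi(222, 4):.0f} m0*fm^2")
```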
For very asymmetric dinuclear systems, we can use simple analytical expressions to establish a connection between the relative distance and mass asymmetry coordinates on one side and the multipole expansion coefficients β 2 and β 3 on the other side [28] Here, R 0 is the spherical equivalent radius of the corresponding compound nucleus. One In the actinide region for an α -particle configuration, η ≈0.96 and (dβ 3 /dη) 2 ≈11.25. Then the mass parameters for β 3 and η-variables are related as If we take the value of B β 3 = 200h 2 MeV −1 known from the literature [4], then B η ≈9.3 ×10 4 m 0 fm 2 . This value is compatible with the one used in our calculations. III. INTRINSIC ELECTRIC MULTIPOLE MOMENTS Solving the eigenvalue equation (2), we obtain the wave functions of the positive and negative parity states for different values of the quantum number I of angular momentum. These wave functions are used then to calculate transition matrix elements of the electric multipole operators by integration over η. The electric multipole operators for a system of a dinuclear shape have been calculated [28] by using the following expression For slightly overlapping clusters when the intercluster distance R m is about or larger than the sum of the radii of clusters (R 1 + R 2 ), the nuclear charge density ρ Z can be taken as a sum of the cluster charge densities Using (11) and assuming axial symmetry of the nuclear shape, we obtain [28] the following expressions for the intrinsic electric multipole moments (2)), (14) where Of course, other effects related to degrees of freedom, which are not included in the model, like the alignment of the single particle momenta or interaction with other negative parity bands with different K quantum number can contribute as well. However, a general agreement between the experimental data and the results of calculations shows that the simple cluster model used in this paper gives a firm ground for the consideration of the alternating parity bands. In the considered nuclei the ground state energy level lies near the top of the barrier in η, if exists, and the weight of the α−cluster configuration (Fig. 2) estimated as that contribution to the norm of the wave function which is located at |η| ≤ η α is about 5 × 10 −2 for 226 Ra, which is close to the calculated spectroscopic factor [25]. This means that our model is in qualitative agreement with the known α−decay widths of the nuclei considered. The spectra of those considered nuclei whose potential energy has a minimum at the alpha cluster configuration can be well approximated by the following analytical expression Here, the parity splitting δE(I) is given as with . were obtained by fitting the experimental spectra for the nuclei considered (see Fig. 3). These formulae clearly demonstrate that there are two important quantities which prede- However, we found numerically that Eq. (17) works quite well also at low I. 16 To combine the two limits at I=0 and for I ≫ 1, we use the following expression Substituting this result into (A3), we obtain The last expression can be rewritten as
2019-04-14T03:08:23.807Z
2002-09-23T00:00:00.000
{ "year": 2002, "sha1": "c978427078c9fc889c04ecef5705a7717611fe2a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/nucl-th/0209070", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c978427078c9fc889c04ecef5705a7717611fe2a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
268476053
pes2o/s2orc
v3-fos-license
FGF21 overexpression alleviates VSMC senescence in diabetic mice by modulating the SYK-NLRP3 inflammasome-PPARγ-catalase pathway Diabetes accelerates vascular senescence, which is the basis for atherosclerosis and stiffness. The activation of the NOD-like receptor family pyrin domain containing 3 (NLRP3) inflammasome and oxidative stress are closely associated with progressive senescence in vascular smooth muscle cells (VSMCs). The vascular protective effect of FGF21 has gradually gained increasing attention, but its role in diabetes-induced vascular senescence needs further investigation. In this study, diabetic mice and primary VSMCs are transfected with an FGF21 activation plasmid and treated with a peroxisome proliferator-activated receptor γ (PPARγ) agonist (rosiglitazone), an NLRP3 inhibitor (MCC950), and a spleen tyrosine kinase (SYK)-specific inhibitor, R406, to detect senescence-associated markers. We find that FGF21 overexpression significantly restores the level of catalase (CAT), vascular relaxation, inhibits the intensity of ROSgreen fluorescence and p21 immunofluorescence, and reduces the area of SA-β-gal staining and collagen deposition in the aortas of diabetic mice. FGF21 overexpression restores CAT, inhibits the expression of p21, and limits the area of SA-β-gal staining in VSMCs under high glucose conditions. Mechanistically, FGF21 inhibits SYK phosphorylation, the production of the NLRP3 dimer, the expression of NLRP3, and the colocalization of NLRP3 with PYCARD (ASC), as well as NLRP3 with caspase-1, to reverse the cleavage of PPARγ, preserve CAT levels, suppress ROSgreen density, and reduce the expression of p21 in VSMCs under high glucose conditions. Our results suggest that FGF21 alleviates vascular senescence by regulating the SYK-NLRP3 inflammasome-PPARγ-catalase pathway in diabetic mice. Introduction Diabetic cardiovascular, cerebrovascular, and peripheral vascular diseases are the main causes of death and disability in diabetic patients [1,2].The senescence of vascular smooth muscle cells (VSMCs) is the underlying pathological change of vascular calcification, remodeling and stiffening of the vascular wall, and impaired relaxation ability, leading to serious consequences such as myocardial infarction and stroke [3][4][5].High glucose (HG) conditions induce senescence in VSMCs, characterized by a phenotypic switch from a contractile phenotype to a secretory phenotype, increased proliferation and migration, excessive collagen secretion, disruption of the microenvironmental balance of the vascular wall, and cytokine-mediated elastin damage, ultimately resulting in vascular sclerosis and impaired relaxation function [2,3,[6][7][8].Therefore, combating hyperglycemia-induced vascular smooth muscle layer senescence is one of the crucial strategies for preventing and treating diabetic vascular diseases and their severe adverse prognosis.Hyperglycemia-induced oxidative stress is a vital process in VSMC senescence.Studies have proven that HG induces the upregulation of reactive oxygen species (ROS) levels, which in turn triggers proliferation and migration, phenotypic switching, and calcification in VSMCs [9][10][11].However, there is a relative lack of research on high glucose-induced ROS upregulation during VSMC senescence. 
Recent studies have shown that activation of the NOD-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome plays a key role in ROS-mediated VSMC senescence [12,13].However, further research is still needed to understand the relationship between the NLRP3 inflammasome and ROS production.The NLRP3 inflammasome is an important component of the innate immune system and has been shown to be associated with numerous major human diseases [14].Studies have shown that NLRP3 inflammasome activation leads to dysfunction of endothelial cells (ECs) and VSMCs, as well as DNA damage-mediated senescence in these cells [12,15].Moreover, our previous study suggested that NLRP3 inflammasome activation is likely one of the key mechanisms inducing vascular senescence in the diabetic environment [16].However, the mechanisms by which the HG environment activates the NLRP3 inflammasome remain uncertain.The present study revealed that the generation of NLRP3 dimers in response to HG represents a potential early event in inflammasome activation. Fibroblast growth factor 21 (FGF21) is predominantly expressed in the liver and acts systemically through secretion into the bloodstream as a cytokine; it plays a role in regulating glucose and lipid metabolism, improving insulin sensitivity, suppressing appetite, and reducing the preference for sweet foods [17,18].FGF21 is considered a potential novel therapy for type 2 diabetes and nonalcoholic fatty liver disease [17].Our previous studies showed that FGF21 downregulates NLRP3 inflammasome activity, inhibits VSMC proliferation and migration, and alleviates diabetesaggravated neointimal hyperplasia [19].Some studies have demonstrated that FGF21 alleviates senescence of human brain vascular smooth muscle cells by regulating the adenosine monophosphate-activated protein kinase (AMPK)-p53 pathway [20].However, it remains uncertain whether FGF21 has similar alleviating effects on vascular senescence induced by the diabetic environment, particularly in terms of its protective effects on vascular smooth muscle layer senescence, which is still extremely lacking. In this study, we describe how FGF21 reduces vascular smooth muscle layer senescence by inhibiting NLRP3 inflammasomedependent oxidative stress in diabetic mice. Primary VSMC isolation and culture According to our previous studies [19], primary VSMCs were isolated from wild-type (WT) mice (20-22 g, 9 weeks; purchased from GemPharmatech, Nanjing, China) using the tissue block adhesion method.Briefly, the mouse was sacrificed, the aorta was quickly removed without tearing, the adventitia was gently peeled off under an MSD540T operating microscope (Murzider, Dongguan, China), the remaining vessel was opened longitudinally, the endothelium was scraped out, and the remaining piece was cut into tissue blocks approximately 3 mm square.The tissue blocks were seeded with media and cultured in Dulbecco's modified Eagle's medium (DMEM; Gibco, Shanghai, China) supplemented with 15% fetal bovine serum (FBS; Gibco), 100 IU/mL penicillin and 100 μg/mL streptomycin (C0222; Beyotime, Shanghai, China) at 37°C in a 5% CO 2 humidified incubator.After reaching confluence, the VSMCs were passaged and cultured in regular glucose DMEM (5 mM) or high-glucose DMEM (30 mM, HG; Gibco) containing 10% FBS, 100 IU/mL penicillin and 100 μg/mL streptomycin at 37°C in the 5% CO 2 humidified incubator.Before measurement of the senescence indicators, VSMCs were induced with HG for more than 72 h as described in a previous study [16]. 
Transfection of VSMCs was performed using electroporation.Plasmids were diluted in electroporation-specific reagent (98668-20, Entranster TM -E; Engreen).VSMCs (10 5 ) were treated with 4 μg of plasmid and subjected to one electric shock at 150 V using a gene introduction instrument (SCIENTZ-2C; Scientz, Ningbo, China).After being left to stand for 2 min, the cells were inoculated into the culture medium. Hydrogen peroxide detection The accumulation of hydrogen peroxide in aortas and VSMCs was detected by staining with ROSgreen (MX5202; Maokangbio, Shanghai, China), a specific hydrogen peroxide probe [26,27].ROSgreen was first dissolved in DMSO and then diluted with HEPES solution (C0215; Beyotime).The mice were intravenously injected with a diluted ROSgreen solution (20 μM) and were sacrificed after 1 h.The aortas were then removed, the adventitia was peeled off, and the ROSgreen fluorescence was detected using the MSD540T operating microscope.VSMCs were treated with a diluted ROSgreen solution (5 μM) and incubated for 20 min.The cells were washed and fixed, and ROSgreen fluorescence was detected.The integrated density of ROSgreen fluorescence was calculated using Image J software. Senescence-associated β-galactosidase (SA-β-gal) staining The accumulation of SA-β-gal was detected using a commercial SAβ-Gal staining kit (C0602; Beyotime).Briefly, aortas without adventitia were removed, and VSMCs were fixed immediately after treatment, washed with PBS three times, and stained with SA-β-gal staining solution.Images of aortas were captured with an MSD540T operating microscope and images of cells were captured with an FRD-6C inverted microscope (Cossim, Beijing, China).The area of SA-β-gal staining (green staining) was calculated using Image J software. Masson staining Collagen accumulation in aortas without adventitia was measured using a Masson staining kit (WLA045; Wanleibio, Shenyang, China).Images of the sections were captured with the FRD-6C inverted microscope.The area of collagen staining (blue staining) was calculated using Image J software. Vascular tension recording The relaxation function of the aortas was detected by a tension detection system (BL-420S; TaiMeng, Chengdu, China) as described previously [16,19].The mice were anesthetized, and the aortas were quickly removed and immersed in Krebs Henseleit (KH) solution (pH 7.4, 119 mM NaCl, 25 mM NaHCO 3 , 11.1 mM glucose, 2.4 mM CaCl 2 , 4.7 mM KCl, 1.2 mM KH 2 PO 4 , 1.2 mM MgSO 4 , 0.024 mM Na 2 EDTA).Aortas were carefully dissected into transparent tubes and then cut into vascular rings of approximately 2 mm in width.The endothelium of the aortas was removed using a flexible wire (0.38 mm in diameter).The vascular rings were then suspended in a water-jacketed tissue bath, and the tension was tested.KH solution was maintained at 37°C, and mixed gas containing 95% O 2 and 5% CO 2 was continuously bubbled through the bath.When the tension of the rings stabilized at the basal level, the aortic rings were contracted with phenylephrine (Phe; 1 μM, S161304; Aladdin, Shanghai, China) to obtain a maximal response, and the rings were assessed for relaxation function using sodium nitroprusside (SNP; 1×10 -9 to 1×10 -5 M, S305727; Aladdin).The record of relaxation induced by SNP (1×10 -4 M) in the Ctrl group was set as 100% response to the SNP. 
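The normalization used for the relaxation recordings (each SNP response expressed as a percentage of the control response to 1×10⁻⁴ M SNP) amounts to simple bookkeeping on the tension traces. The sketch below is a hypothetical illustration of that calculation with made-up tension values; the exact definition of percent relaxation and the variable names are assumptions, not output of the BL-420S acquisition software.

```python
# Hypothetical illustration of the relaxation normalization described in the text:
# relaxation at each SNP dose = (Phe plateau - tension) / (Phe plateau - baseline),
# then expressed relative to the control group's response at SNP 1e-4 M (= 100%).

def relaxation_percent(baseline, phe_plateau, tension_at_dose):
    """Fractional reversal of phenylephrine-induced tone at a given SNP dose."""
    return 100.0 * (phe_plateau - tension_at_dose) / (phe_plateau - baseline)

# Made-up tension readings (mN) for one aortic ring.
baseline, phe_plateau = 2.0, 8.0
snp_doses = [1e-9, 1e-8, 1e-7, 1e-6, 1e-5]   # mol/L
tensions  = [7.8, 7.0, 5.5, 3.6, 2.4]        # mN at each dose

control_max = relaxation_percent(baseline, phe_plateau, 2.0)  # control response to 1e-4 M SNP

for dose, t in zip(snp_doses, tensions):
    rel = relaxation_percent(baseline, phe_plateau, t)
    print(f"SNP {dose:.0e} M: {100.0 * rel / control_max:5.1f} % of control maximum")
```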
4°C. The supernatants were collected and quantified with a BCA assay kit (P0010; Beyotime). Caspase-1 activity in an equal amount of protein, approximately 200 μg, was determined immediately. Ac-YVAD-pNA was added to the supernatant and incubated for 60-120 min at 37°C. When the solution exhibited a distinct yellow color, the absorbance of the samples was measured using a microplate reader (Thermo Fisher Scientific, Waltham, USA) at 405 nm. The detection of IL-1β, IL-18, and FGF21 was performed according to the manufacturer's instructions, and the optical density (OD) was measured at 450 nm.

Statistical analysis
Statistical analysis was performed with GraphPad PRISM 9.0 software. Data are presented as the mean±SE. Significant differences between and within multiple groups were examined using ANOVA for repeated measures, followed by Duncan's multiple-range test. An independent-samples t-test was used to detect significant differences between two groups. P<0.05 was considered statistically significant.

Overexpression of FGF21 inhibits SYK phosphorylation and NLRP3 inflammasome activation in the aortas of diabetic mice and HG-induced VSMCs
By measuring the levels of FGF21 in both serum and vascular tissue homogenates, we observed that injection of the FGF21 overexpression (FGF21OE) plasmid led to an increase in FGF21 levels in both mouse serum and the vascular smooth muscle layer (Figure 1A,B). Additionally, we verified that the expression of FGF21 was activated in both mouse aortas and VSMCs (Supplementary Figure S1A,B,I,J). These results demonstrate that FGF21OE plasmid intervention can increase the levels of FGF21 in the serum and vascular wall of diabetic mice. To investigate the effect of FGF21 on NLRP3 inflammasome activation induced by diabetes, we assessed the extent of NLRP3 inflammasome activation in the aortas of diabetic mice and smooth muscle cells. In diabetic mice, the colocalization of NLRP3 (red) with ASC (green, also known as PYCARD; TMS1, a bridging adaptor protein of the inflammasome) (Figure 1C,D) and the colocalization of NLRP3 (red) with caspase-1 (green, also known as IL-1 converting enzyme, a core effector of the NLRP3 inflammasome) (Figure 1E,F) were found to be increased. Moreover, the phosphorylation of SYK in blood vessels was increased (Figure 1G,H), the NLRP3 dimer was produced (Figure 1G,I), the levels of NLRP3 were increased (Figure 1G,J), the activity of caspase-1 was increased (Figure 1K), and active IL-1β and IL-18 were also produced (Supplementary Figure S1C-H). Overexpression of FGF21 inhibited the colocalization of NLRP3 with ASC, the colocalization of NLRP3 with caspase-1, the phosphorylation of SYK, NLRP3 dimerization, NLRP3 expression, caspase-1 activity, and active IL-1β and IL-18 levels in diabetic aortas (Figure 1C-K and Supplementary Figure S1C-H).
The smooth muscle layer is the main structural constituent of blood vessels, so we detected the activation of the NLRP3 inflammasome in VSMCs under HG conditions.Similar to the results observed in blood vessels, the HG environment caused the upregulation of SYK phosphorylation in VSMCs (Figure 1L,M), as well as NLRP3 dimerization (Figure 1L,N).NLRP3 expression was also upregulated (Figure 1L,O), along with an increase in the activity of caspase-1 (Figure 1P) and the production and secretion of active IL-1β and IL-18 (Supplementary Figure S1K-P).Additionally, HG induced the colocalization of NLRP3 with ASC (Figure 1Q,R), as well as the colocalization of NLRP3 with caspase-1 (Figure 1S,T).Treatment with the FGF21 plasmid partially reversed these changes (Figure 1L-T and Supplementary Figure S1K-P). These findings suggest that FGF21 prevents hyperglycemiainduced NLRP3 inflammasome assembly and activation in the smooth muscle layer of diabetic mouse aortae and that the inhibition of SYK phosphorylation and NLRP3 dimerization may play a key role in this process. Overexpression of FGF21 alleviates senescence in diabetic aortas and HG-treated VSMCs We measured the expression of catalase (CAT), the accumulation of hydrogen peroxide (ROSgreen, a specific probe for detecting hydrogen peroxide accumulation), the expression of the cellular senescence marker p21, SA-β-gal staining, collagen accumulation, and relaxation ability in the aortas.We found that the CAT level decreased in the aortas of diabetic mice (Figure 2A,B), the fluorescence intensity of ROSgreen increased (Figure 2C,D), the p21 expression level increased (Figure 2E,F), the area of SA-β-gal staining (green staining) increased (Figure 2G,H), the area of bluecolored collagen accumulation increased (Figure 2I,J), and the relaxation ability of diabetic aortas decreased (Figure 2K).Additionally, PPARγ underwent cleavage, resulting in the formation of a 40 kDa fragment (Figure 2L,M).Overexpression of FGF21 restored CAT levels, inhibited ROSgreen fluorescence, reduced p21 expression, decreased SA-β-gal staining and collagen deposition areas, preserved relaxation ability, and limited cleaved-PPARγ production in the aortas of diabetic mice (Figure 2A-M). Furthermore, we examined the level of senescence in primary VSMCs under HG conditions.We found that HG caused the cleavage of PPARγ in VSMCs (Figure 2N-O).HG impaired the expression of CAT (Figure 2P,Q) while increasing the fluorescence intensity of ROSgreen (Figure 2R,S) and the expression of p21 (Figure 2T,U).HG also increased the area of SA-β-gal staining (Figure 2V,W) in VSMCs.Overexpression of FGF21 inhibited PPARγ cleavage and partially reversed the changes in VSMC senescence (Figure 2N-W). These results demonstrate that the overexpression of FGF21 protects CAT levels, mitigates vascular hydrogen peroxide accumulation, alleviates senescence in VSMCs, and protects the relaxation ability of blood vessels.The protective effects of FGF21 may be associated with its protective effect on PPARγ. 
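The quantities reported above (SA-β-gal area percentage, ROSgreen integrated density) are computed in ImageJ per the methods. The following NumPy sketch is a hypothetical stand-in showing what such measurements amount to on a single image channel; the threshold value and the random placeholder image are assumptions for illustration only.

```python
# Hypothetical stand-in for the ImageJ measurements described in the methods:
# stained-area percentage (thresholded pixels / total pixels) and integrated density.
import numpy as np

rng = np.random.default_rng(1)
channel = rng.random((512, 512))          # placeholder single-channel image, values in [0, 1]
threshold = 0.8                           # assumed cutoff separating stain from background

mask = channel >= threshold
area_percent = 100.0 * mask.mean()        # fraction of pixels counted as stained
integrated_density = channel[mask].sum()  # summed intensity over the stained region

print(f"stained area: {area_percent:.1f} %")
print(f"integrated density: {integrated_density:.1f}")
```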
FGF21 activates the PPARγ-CAT pathway to alleviate senescence in the vascular smooth muscle layer of diabetic mice and HG-treated VSMCs
To investigate the pivotal role of PPARγ in diabetes-induced senescence of the vascular smooth muscle layer, we activated the PPARγ pathway in diabetic mice and vascular smooth muscle cells (VSMCs) through treatment with RSG (a known PPARγ agonist). We observed that RSG restored the expression of CAT (Figure 3A,B), suppressed the expression of p21 (Figure 3C,D) in the blood vessel wall, and reduced the area of SA-β-gal (green staining) (Figure 3E,F) and collagen deposition (blue staining) (Figure 3G,H). Similar results were obtained in VSMCs, where RSG restored the HG-induced impairment of CAT expression (Figure 3I,J), suppressed the HG-induced increase in p21 expression (Figure 3K,L), and inhibited the green-stained area of SA-β-gal (Figure 3M,N). FGF21 overexpression had effects similar to those of RSG (Figure 3A-N). These results indicate that FGF21 alleviates senescence in the vascular smooth muscle layer of diabetic mice and HG-treated VSMCs by activating the PPARγ-CAT pathway.
Discussion This study revealed that FGF21 protects the PPARγ-CAT pathway by inhibiting the SYK-NLRP3 inflammasome pathway, thereby alleviating the accumulation of hydrogen peroxide in VSMCs under HG conditions and attenuating the senescence of the vascular smooth muscle layer in diabetic mice.This study suggested that the generation of the NLRP3 dimer may play a crucial role in the assembly and activation of the NLRP3 inflammasome.Additionally, this study is the first to observe the alleviation of diabetes-induced VSMC senescence by FGF21 and to investigate its potential mechanisms.The findings of this study provide novel insights into the mechanisms underlying premature vascular senescence in-duced by the diabetic pathological environment and provide additional evidence for the vascular protective effects of FGF21.Some studies have demonstrated that FGF21 mitigates vascular wall calcification and remodeling by counteracting oxidative stress [29,30].However, the mechanisms responsible for the antioxidant effects of FGF21 remain incompletely understood.Our study revealed that FGF21 maintains CAT level by protecting PPARγ, thereby reducing the accumulation of hydrogen peroxide.This conclusion provides new evidence for the role of FGF21 in counteracting the increase in oxidative stress induced by diabetes.FGF21 is believed to play a role in alleviating the senescence of mesenchymal stem cells, chondrocytes, and neural cells, and its mechanisms are often associated with the inhibition of ROS levels [31][32][33].Some studies have suggested that FGF21 contributes to alleviating vascular senescence.FGF21 delays the occurrence of hydrogen peroxide-induced EC senescence by protecting the SIRT1 pathway [34]; FGF21 inhibits the AMPK-p53 pathway to suppress the ROS induced by angiotensin II, thereby alleviating the senescence of cerebral vascular smooth muscle cells [20].However, the protective effects of FGF21 on the senescence of the peripheral vascular smooth muscle layer, particularly in the context of diabetic pathological conditions, still require further research and elucidation.Our findings contribute to filling this research gap, as we demonstrate that FGF21 overexpression effectively alleviates the senescence of the aortic vascular smooth muscle layer induced by a diabetic pathological environment characterized primarily by high blood glucose levels.This protective effect is accompanied by the inhibition of excessive collagen deposition and the restoration of impaired vascular relaxation.The findings of the present study revealed the novel pharmacological effects of FGF21 in protecting VSMCs, specifically by alleviating the senescence of the smooth muscle layer induced by diabetes. 
It is generally recognized that the upregulation of ROS levels is one of the main mechanisms of NLRP3 inflammasome activation. Some studies have indicated that ROS-induced NLRP3 inflammasome activation induces EC dysfunction and VSMC calcification [15,35]. Our study showed that HG-induced NLRP3 inflammasome activation leads to decreased CAT expression in VSMCs, the accumulation of hydrogen peroxide, and the upregulation of ROS. These results suggest that NLRP3 inflammasome activation induces the upregulation of ROS levels, which differs from the aforementioned findings. These fragmented results may support our conclusion. For example, in the treatment of multiple sclerosis, the restoration of CAT level was observed while inhibiting NLRP3 inflammasome activation [36]; in the study of airway epithelial cell injury, the restoration of CAT expression was observed while inhibiting NLRP3 inflammasome-associated RNA and protein expression [37]. Additionally, studies have shown that knockout of NLRP3 reduces ROS production, protecting human aortic endothelial cells [38]. Our research findings, together with these existing research results, may suggest the occurrence of a ROS-NLRP3 inflammasome-CAT-ROS cycle. We believe that this vicious cycle is likely to be one of the key mechanisms of cell damage and senescence mediated by ROS and the NLRP3 inflammasome, which is worth exploring further.

Furthermore, it has been reported that the production of NLRP3 oligomers is a necessary initial step in NLRP3 inflammasome assembly [39,40]. Surprisingly, in this study we observed the occurrence of NLRP3 dimerization. We highly suspect that the formation of the NLRP3 dimer may play a significant role in the activation of the NLRP3 inflammasome induced by HG in VSMCs. Our study provides new insight into a novel mechanism by which HG induces NLRP3 inflammasome activation.

The inhibitory effect of FGF21 on the NLRP3 inflammasome has been confirmed. FGF21 protects the vascular endothelium and smooth muscle layers by inhibiting the NLRP3 inflammasome [41,42], and our previous research showed that FGF21 inhibits VSMC proliferation and migration through inhibition of the SYK-NLRP3 inflammasome pathway, alleviating diabetes-induced neointimal hyperplasia [19]. The present research findings reinforce the possibility that SYK serves as a pivotal pathway through which FGF21 mitigates NLRP3 inflammasome activation. In the current study, we further observed that FGF21 inhibits SYK phosphorylation-mediated NLRP3 dimerization, thereby suppressing NLRP3 inflammasome activation to protect the PPARγ-CAT pathway. This protective mechanism limits oxidative stress and mitigates HG-induced VSMC senescence. This finding provides a novel perspective on the inhibition of SYK-induced ROS production and the NLRP3 inflammasome by FGF21.
Studies have indicated the diverse roles of the PPARγ pathway in cellular senescence [43,44]. Our findings support the view that the PPARγ pathway has a protective effect on mitigating VSMC senescence. This finding is somewhat similar to the conclusions of other studies showing that PPARγ agonists alleviate cellular senescence [45,46]. However, the role of the PPARγ pathway in VSMC senescence still requires further investigation. PPARγ is cleaved by the core product of the NLRP3 inflammasome, caspase-1, and this cleavage plays a critical role in inducing TAM infiltration [47,48]. We hypothesize that this cleavage effect may also play a role in diabetic vascular lesions. Our results indicate that PPARγ cleavage coincides with the downregulation of CAT level and the senescence of the vascular smooth muscle layer, and that activation of the PPARγ pathway slows HG-induced VSMC senescence. We propose that NLRP3 inflammasome-mediated blockade of the PPARγ-CAT pathway may induce VSMC senescence under hyperglycemic conditions. We speculate that NLRP3 inflammasome-mediated PPARγ cleavage could be a novel target for diabetic vascular lesions and warrants further investigation.

FGF21 upregulates the PPARγ pathway to alleviate inflammation in microglia and macrophages and protect brain microvascular endothelial cells, playing a positive role in stroke [49,50]. FGF21 activation of the PPARγ pathway eliminates the production of inflammatory factors, suppresses pulmonary artery smooth muscle cell proliferation, and alleviates pulmonary arterial hypertension [51,52]. These findings imply that the FGF21-PPARγ pathway has a protective effect on VSMCs. Our findings support this conclusion, as our results show that FGF21 increases PPARγ level and alleviates VSMC senescence. However, further studies are needed to investigate the effect of the FGF21-PPARγ pathway on other VSMC processes, such as proliferation, migration, apoptosis, and calcification.

In conclusion, our study revealed a novel mechanism by which HG conditions induce NLRP3 protein dimerization, triggering NLRP3 inflammasome assembly and activation, leading to downregulation of PPARγ-mediated CAT expression, accumulation of hydrogen peroxide, and accelerated VSMC senescence. Overexpression of FGF21, through the inhibition of HG-induced SYK phosphorylation and NLRP3-mediated PPARγ cleavage, restores CAT levels and alleviates oxidative stress, reducing vascular smooth muscle layer senescence in diabetic mice. Our study reveals a new mechanism by which diabetes accelerates vascular senescence and identifies the pharmacological effect of FGF21 in mitigating diabetes-induced vascular senescence, providing new evidence for its potential clinical application.

Animal handling and experimental procedures were approved by the Ethics Committee of Changzhi Medical College (DW2022053) following the guidelines of the US National Institutes of Health and the Animal Research Reporting In Vivo Experiments (ARRIVE).
Figure 1.Overexpression of FGF21 inhibits SYK phosphorylation and NLRP3 inflammasome activation in the aortas of diabetic mice and HGinduced VSMCs (A,B) FGF21 levels in serum and aortic tissue homogenates detected by ELISA.(C-F) Representative immunofluorescence images (400×, scale bar: 20 μm) and summarized integrated density of NLRP3 and ASC, NLRP3 and caspase-1 colocalization in aortas.(G-J) Representative western blot images and summarized data showing the phosphorylation of SYK, the production of NLRP3 dimers, and the expression of NLRP3 in aortas.(K) The summarized data showing the activity of caspase-1 in aortas.(L-O) Representative western blot images and summarized data showing the phosphorylation of SYK, the production of the NLRP3 dimer, and the expression of NLRP3 in VSMCs.(P) The summarized data showing the activity of caspase-1 in VSMCs.(Q-T) Representative immunofluorescence images and summarized integrated density of NLRP3 and ASC, NLRP3 and caspase-1 colocalization (800×, scale bar: 10 μm) in VSMCs.n=6 in mice; n=3 in VSMCs.*P<0.05 vs Ctrl;# P<0.05 vs db/db or HG group.In the animal experiment: Ctrl, wild-type mice from the same litter that were not subjected any intervention; db/db, db/db mice that were not subjected to any intervention; FGF21OE, db/db mice that received intravenous injection of the FGF21-activating plasmid.In the VSMC experiment: LG, cells with regular glucose (5 mM); HG, cells with high glucose (30 mM); HG+null-ACT, cells treated with HG and transfected with the empty vector through electroporation; FGF21OE, cells treated with HG and transfected with the FGF21-activating plasmid through electroporation. FGF21 protects PPARγ and alleviates senescence in the vascular smooth muscle layer of diabetic mice and HGtreated VSMCs by suppressing NLRP3To explore the critical role of NLRP3 in diabetes-induced senescence of the vascular smooth muscle layer, we inhibited NLRP3 in diabetic mice and VSMCs through MCC950 treatment.We observed that MCC950 restored the expression of CAT (Figure4A,B), suppressed the expression of p21 (Figure4C,D) in the blood vessel wall, reduced the green-stained area of SA-β-gal (Figure4E,F), and decreased the deposition of blue-stained collagen (Figure4G,H).Similar results were obtained in VSMCs, where MCC950 alleviated the PPARγ cleavage triggered by HG (Figure4I,J), suppressed the expression of p21 induced by HG (Figure4K,L), and inhibited the green-stained area of SA-β-gal (Figure4M,N).FGF21 overexpression produced results comparable to those of MCC950 (Figure4A-N).These results demonstrate that FGF21 mitigates senescence in the vascular smooth muscle layer of diabetic mice and HG-treated VSMCs by inhibiting the NLRP3 pathway.FGF21 protects PPARγ and CAT, alleviating senescence in the vascular smooth muscle layer of diabetic mice and HG-treated VSMCs by inhibiting the SYK pathwayTo explore the underlying role of SYK in diabetes-induced senescence of the vascular smooth muscle layer, we utilized the SYK-specific inhibitor R406 in diabetic mice and VSMCs to block the SYK pathway.We found that R406 restored the expression of CAT (Figure5A,B), suppressed the expression of p21 in the blood vessel wall (Figure5C,D), limited the green-stained area of SA-β-gal (Figure5E,F), and reduced blue-stained collagen deposition (Figure5G,H).Furthermore, in VSMCs, R406 reversed the HG-induced NLRP3 dimerization (Figure5I,J) and PPARγ cleavage (Figure5I,K), restored HG-impaired CAT expression (Figure5L,M), inhibited the integrated ROSgreen fluorescence 
intensity (Figure5N,O), suppressed HG-induced p21 expression (Figure5P,Q), and decreased the green-stained area of SA-β-gal (Figure5R,S).FGF21 overexpression mimicked the effects of R406 treatment (Figure5A-S).These results indicate that FGF21 alleviates senescence in HGinduced VSMCs by inhibiting the SYK-NLRP3 pathway to protect PPARγ and CAT. Figure 5 . Figure 5. FGF21 protects PPARγ and CAT, alleviating senescence in the vascular smooth muscle layer of diabetic mice and HG-treated VSMCs by inhibiting the SYK pathway (A,B) Representative immunofluorescence images of CAT (400×, scale bar: 20 μm) and the summarized data of CAT integrated density in aortas.(C,D) Representative immunofluorescence images of p21 (400×, scale bar: 20 μm) and the summarized data of p21 integrated density in aortas.(E-H) Representative images of SA-β-gal staining (40×, scale bar: 200 μm) and Masson staining (400×, scale bar: 20 μm), the summarized data of SA-β-gal (green) and collagen deposition (blue) area percentage in aortas.(I-K) Representative western blot images and the summarized data of the production of the NLRP3 dimer and the cleavage of PPARγ in VSMCs.(L,M) Representative IHC images of CAT (400×, scale bar: 20 μm) and the summarized data of the CAT-positive (yellow) area in VSMCs.(N,O) Representative images of ROSgreen staining (200×, scale bar: 40 μm) and the summarized data of ROSgreen integrated density.(P,Q) Representative IHC images of p21 (400×, scale bar: 20 μm) and the summarized data of the p21-positive (yellow) area in VSMCs.n=6 in mice; n=3 in cells.*P<0.05vs Ctrl; # P<0.05 vs db/db or HG-treated group.
2024-03-17T17:17:17.566Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "7f8a170457200f78291226f4e150f287e4ec230e", "oa_license": "CCBY", "oa_url": "https://www.sciengine.com/doi/pdfView/8B2BA5C9AC3C49009B8A123415FD52A6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f89cc2f3d0a9e846470378f16ea5b78888b8f91", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
119625476
pes2o/s2orc
v3-fos-license
A dimension gap for continued fractions with independent digits - the non stationary case We show there exists a constant $0<c_{0}<1$ such that the dimension of every measure on $[0,1]$, which makes the digits in the continued fraction expansion independent, is at most $1-c_{0}$. This extends a result of Kifer, Peres and Weiss from 2001, which established this under the additional assumption of stationarity. For $k\ge1$ we prove an analogues statement for measures under which the digits form a $*$-mixing $k$-step Markov chain. This is also generalized to the case of $f$-expansions. In addition, we construct for each $k$ a measure, which makes the continued fraction digits a stationary and $*$-mixing $k$-step Markov chain, with dimension at least $1-2^{3-k}$. Introduction Let X denote the set of irrational numbers in (0, 1). It is well known each x ∈ X has a unique continued fraction expansion of the form where A 1 (x), A 2 (x), ... are positive integers. Given a probability measure ν on X, each A n defines a random variable on (X, ν) and the digits {A n } ∞ n=1 form a discrete time stochastic process. In 1966, Chatterji [Ch] has shown every probability measure ν on [0, 1], which makes the digits in the continued fraction expansion independent variables, is singular with respect to the Lebesgue measure. In 2001, Kifer, Peres and Weiss [KPW] Assuming A 1 , A 2 , ... are i.i.d. with E[log A 1 ] < ∞ and H(A 1 ) < ∞, where H(A 1 ) is the entropy of A 1 , Kinney and Pitcher [KP] have proven that The Gauss measure µ G (E) = 1 log 2ˆE dx 1 + x is the unique equilibrium state of the Gauss map T x = 1 x (mod 1) with respect to the function x → log x 2 . This follows from the thermodynamic formalism approach of Walters [Wa1]. Hence under the i.i.d. assumption where h η (T ) is the entropy of T with respect to a T -invariant measure η. Since h ν (T ) = H(A 1 ), we get from (1.1) that dim H ν < 1 in this case. When A 1 , A 2 , ... are not identically distributed the formula (1.1) is no longer valid, and so it is not even clear that dim H ν is strictly less than 1. As mentioned above, we shall show that there exists a global constant c 0 > 0 such that dim H ν ≤ 1 − c 0 , assuming A 1 , A 2 , ... are independent. We actually prove more generally that for every integer k ≥ 0 there exists 0 < c k < 1, which depends only on k, such that dim H ν ≤ 1 − c k if the digits form a k-step Markov chain which is * -mixing. This is the main result of this paper. The * -mixing condition was introduced in [BHK], and is a bit less restrictive than the more familiar ψ-mixing condition. The definitions are given in Section 2. In the last section we generalize our main result to the case of f -expansions. Given k ≥ 0 it was shown in [KPW] that there exists 0 < c ′ k < 1, for which dim H ν ≤ 1 − c ′ k whenever ν makes the digits a stationary and ergodic k-step Markov chain. Our proof is a modification of the argument given there for this result. We shall also construct for each k a measure ν k , under which the digits form a stationary and ψ-mixing k-step Markov chain, with dim H ν k ≥ 1 − 2 3−k . This of course shows c k and c ′ k are at most 2 3−k . The rest of the paper is organized as follows. In Section 2 we give some necessary definitions and state our results. In Section 3 we establish a uniform bound on the dimension of subsets of X, which are defined via certain digit frequencies. This is the key ingredient in the proof of our main result, which is carried out in Section 4. 
In Section 5 we construct the measures ν k mentioned above. In Section 6 we generalize our main result to the setup of f -expansions. Yuri Kifer, for suggesting to me the problem studied in this paper, and for many helpful discussions. Preliminaries and results First, we define the mixing conditions mentioned above. Given random variables {A i } i∈I , all defined on the same probability space, denote by σ{A i } i∈I the smallest σ-algebra with respect to which each A i is measurable. is called * -mixing if there exist an integer N ≥ 1 and a real valued function f , defined on the integers n ≥ N , such that If such an f exists for N = 1 the sequence is said to be ψ-mixing. Remark 2.2. A sequence of independent random variables is clearly ψ-mixing. It is not hard to show that the ψ-mixing condition is satisfied for a finite state Markov Examples of * -mixing countable state Markov chains can be found in Section 3 of [BHK]. Another important example of a ψ-mixing sequence is obtained by the continued fraction digits with respect to the Gauss measure µ G (see [Ad] or [He]). for the Hausdorff dimension of E. Given a Borel probability measure ν on X its Hausdorff dimension is defined by The following theorem is our main result. Then dim H (ν) ≤ 1 − c k , where 0 < c k < 1 is a constant depending only on k. Remark 2.4. As mentioned in the introduction, it was shown in [KPW] that there exists 0 < c ′ k < 1, for which dim H ν ≤ 1 − c ′ k whenever ν makes the continued fraction digits a stationary and ergodic k-step Markov chain. It might be desirable to estimate c k and c ′ k . The next claim shows these constants are at most 2 3−k . Claim 2.5. For each k ≥ 3 there exits an N-valued k-step stationary and ψ-mixing The main ingredient in the proof of Theorem 2.3 is Theorem 2.6 stated below, for which we need some more notations. Let T : X → X be the Gauss map, which is defined by Denote by µ G the Gauss measure, which satisfies It is well known that µ G is invariant and ergodic with respect to T . For (a 1 , ..., a k ) = a ∈ N k set I a = {x ∈ X : α i (x) = a i for each 1 ≤ i ≤ k}, and define I a : X → {0, 1} by Given L > 1 denote by Q L the set of maps q : N → N with q(n + 1) > q(n) for each n ∈ N and lim inf n→∞ q(n) n < L . Theorem 2.6. For every L > 1 and δ > 0 there exists 0 < c L,δ < 1 with Remark 2.7. The proof of theorem 2.6 resembles the proof of the main result (Theorem 2.1) of [KPW]. There an upper bound, which depends only on δ, is obtained for the dimension of sets of the form Here we need to consider the families Q L , and the more general averages due to the lack of stationarity. As a result we must define Γ δ q,a with lim inf, as opposed to the sets (2.1) which are defined with lim sup. Proof of Theorem 2.6 The following large deviations estimate will be needed. Its proof is almost identical to the proof of Lemma 3.1 from [KPW], but we include it here for completeness. n=1 is a stationary and * -mixing sequence of random variables. Let k ≥ 1 and F : and let q : N → N be strictly increasing. Then for every δ > 0 there exists a constant M = M (S, δ, k) > 1, independent of q and F , such that for every n ≥ 1, Let N be the integral part of n/M , and for 1 ≤ j ≤ M set Let ζ 0 , ζ 1 , ... be independent {0, 1}-valued random variables with mean p. Since q is strictly increasing it follows easily from (3.1) that, i=0 ζ i , then Z is a binomial random variable with parameters N and p, and By the exponential estimate for the binomial distribution (see e.g. Cor. 
A.1.7 in This together with (3.3) gives, The lemma now follows from (3.2). As mentioned in Remark 2.2, the sequence {α i } ∞ i=1 is ψ-mixing with respect to µ G . From this and Lemma 3.1 we get the following corollary. Corollary 3.2. Given k ≥ 1 and δ > 0 there exists a constant M = M (δ, k) > 1, such that for every strictly increasing q : N → N, a ∈ N k and n ≥ 1, Given n ≥ 1 write Proof of Theorem 2.6. Let δ > 0, L > 1, q ∈ Q L , k ≥ 1 and a ∈ N k . Given λ > 0 set, Fix N ≥ 1 and for n ≥ 1 set then Γ δ,N q,a ⊂ Υ δ,n q,a for all n ≥ N . Let M = M (δ, k) > 1 be as in Corollary 3.2, set s = 1 − 1 λLM and let η > 0. From q ∈ Q L we get lim inf n→∞ q(n) n < L. From this and β n n → 0 it follows that there exists n ≥ N such that β n < η and q(n) < nL. By the definition of Υ δ,n q,a there exists B n ⊂ N q(n)+k with Υ δ,n q,a = ∪ b∈Bn I b . From Corollary 3.2 we get By the definition of Υ δ,n q,a , Hence from (3.6), (3.5), q(n) < nL and s = 1 − 1 λLM , As this holds for every η > 0 As this holds for every N ≥ 1 it follows from (3.4) that, . We shall now complete the proof of the theorem. We continue to fix δ > 0 and L > 1. Let Set a δ = (a 1 , ..., a k δ ), then since I a δ ≥ I a it follows from (3.8) that Γ δ/2 q,a δ ⊃ Γ δ q,a , and so dim H (Γ δ/2 q,a δ ) ≥ dim H (Γ δ q,a ) . This together with (3.7) gives which completes the proof of the theorem. Proof of the main result Proof of Theorem 2.3. Fix k ≥ 0, let {A n } ∞ n=1 an N-valued k-step Markov chain which is * -mixing, and let ν be the distribution of [A 1 , A 2 , ...]. Given words a ∈ N m and b ∈ N l we denote by ab ∈ N m+l their concatenation. As noted in observation 2.2 in [KPW], the continued fraction digits under µ G do not form a k-step Markov chain. It follows that there exist m ∈ N, a ∈ N k , b ∈ N m and c ∈ N with and so If k = 0, i.e. when A 1 , A 2 , ... are independent, a is the empty word and I a = X. where |d| stands for the length of d, and set p d,i := P(E d,i ). Let d ∈ ∪ ∞ k=1 N k and assume lim sup is also * -mixing, where 1 E denotes the indicator of the event E. By the law of large numbers for sums of * -mixing bounded random variables (see Theorem 2 in [BHK]), Hence for ν-a.e. x ∈ X, From this and (4.3) we get that for ν-a.e. x ∈ X, and so there exists q ∈ Q 10 with (4.4) where p a,q(i)+m > 0 by (4.4) and µ G (I a ) > ǫ. The sequence {1 E bac,q(i) } ∞ i=1 is * -mixing, so by the law of large numbers for sums of * -mixing random variables, lim n 1 n n i=1 1 E bac,q(i) − p bac,q(i) = 0 almost surely . Construction of the measures ν K In the proof below we use the notation for the Kolmogorov-Sinai entropy from Chapter 4 of [Wa2]. In particular the entropy of a Borel probability measure θ on X, with respect to a countable Borel partition ξ of X, is denoted by H θ (ξ). If F is a sub-σ-algebra of the Borel σ-algebra of X, then H θ (ξ | F ) is the entropy of θ with respect to ξ conditioned on F . If θ is T -invariant the entropy of T with respect to θ is denoted by h θ . If θ is also ergodic we write γ θ for the Lyapunov exponent of the system (X, T, θ), i.e. [a 1 , ..., a m ] = 1 In order to establish the ψ-mixing property in the proof of Claim 2.5 we shall need the following proposition. It follows directly from Theorem 1 in [Br]. Proposition 5.1. Let {A n } ∞ n=1 be a stationary and mixing sequence of random variables. Assume there exists a constant 0 < C < ∞ with Proof of Claim 2.5. Fix k ≥ 3 and for every a ∈ N k and c ∈ N set p a = µ G (I a ) and p a,c = µ G (I ac ) µ G (I a ) . 
Then c∈N p a,c = 1 for each a ∈ N k and p = {p a } a∈N k is a probability vector. Let {A n } ∞ n=1 be the k-step N-valued Markov chain corresponding to the transition probabilities {p a,c } (a,c)∈N k+1 and initial distribution {p a } a∈N k . For each b ∈ N k−1 as a 1-step Markov chain on the state space N k , it is easy to see it is irreducible and aperiodic. From this and Theorem 8.6 in [Bi] it follows {A n } ∞ n=1 is mixing. Let us show {A n } ∞ n=1 is in fact ψ-mixing. From (3.22) in chapter 3 of [EW] it follows there exists a constant 1 < C < ∞ with, ≤ C l for l ≥ 1 and a 1 , ..., a l ∈ N . (I (b1,...,b k ) ) · k j=1 µ G (I (a l−k+j ,...,a l ,b1,...,bj) ) µ G (I (a l−k+j ,...,a l ,b1,...,bj−1) ) . This together with (5.1) gives From Proposition 5.1, combined with a monotone class argument, it now follows that {A n } ∞ n=1 is ψ-mixing. Let ν be the distribution of [A 1 , A 2 , ...], then ν is T -invariant and ergodic. In order to prove the claim it remains to show that dim H ν ≥ 1 − 2 3−k . Set then it is easy to check that which shows h ν , γ ν , h µG and γ µG are all finite. From this and Section 2 of [BH] it follows that Moreover, it is well known By an argument similar to the one given in Theorem 4.27 in [Wa2], From this and the definition of conditional entropy, Now from Theorems 4.3 and 4.14 in [Wa2], Assume k is even for the moment, then [a 1 , ..., a k ] ≤ x ≤ [a 1 , ..., a k + 1] for every (a 1 , ..., a k ) = a = N k and x ∈ I a . It follows that, Let p, q ∈ N be with gcd(p, g) = 1 and p q = [a 1 , ..., a k ]. From inequalities (3.6), (3.7) and (3.14) in [EW] it follows that q, p ≥ 2 (k−2)/2 and and so γ ν − γ µG ≤ 2 3−k . By exchanging between γ µG and γ ν it can be shown that γ µG − γ ν ≤ 2 3−k . From k ≥ 3 and (5.3) we get γ ν ≥ 1, hence A similar argument shows (5.5) holds when k is odd. From (5.2), (5.4) and (5.5) we now get which completes the proof of the claim. x ∈ (0, 1) with r i (x) = 0 for every i ≥ 0, then (0, 1) \ X is clearly countable. Write where [·] is the integer part of a number. For x ∈ X and i ≥ 1 set then α i (x) ∈ N . We shall assume that and call the expression on the right hand side the f -expansion of x. Regularity conditions on f were given by Rényi [R], which ensure that (6.1) is satisfied. The main example of the decreasing case is f (x) = 1/x, which leads to the continued fraction expansion, and of the increasing case is f (x) = x/M, which leads to the base-M expansion. For more details on f -expansions see [R], [KP], [He] and the references therein. We use the notation I a and I a , introduced in Section 2, with X and α i as We shall assume that (1) the restriction of T to f (a, a + 1) is C 2 for each a ∈ N ; (2) there exists ℓ ∈ N and β > 0 with |(T ℓ ) ′ (x)| ≥ β for all x ∈ X; (3) there exists 1 ≤ Q < ∞ with T ′ (y)T ′ (z) ≤ Q for all a ∈ N and x, y, z ∈ I a . Then by Theorem 22 in [Wa1], there exists an absolutely continuous T -invariant mixing probability measure µ T on X, such that 0 < dµT dL ∈ C[0, 1]. Here, as above, L is the Lebesgue measure. For q ∈ Q L with Q L defined in Section 2, a ∈ ∪ ∞ k=1 N k and δ > 0 let The following theorem is an analogue of Theorem 2.6, and can be proven in exactly the same manner. Theorem 6.1. Suppose that T satisfies conditions (1)-(3) and assume, in addition, that for some t < 1, Then for every L > 1 and δ > 0 there exists 0 < c f,L,δ < 1 with Remark 6.2. The condition (6.2) is needed in order to apply Theorem 4.1 from [KPW], as we did at the beginning of the proof of Theorem 2.6. 
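To make the digit extraction concrete, the sketch below reads the construction as α₁(x) = [f⁻¹(x)] followed by iteration of T x = f⁻¹(x) (mod 1), and applies it to the two examples named above, f(x) = 1/x and f(x) = x/M. This is only an illustrative numerical reading with the inverse of f supplied by hand; it is not meant to reproduce the r_i notation or the regularity conditions of the text.

```python
from math import floor, pi

def f_expansion_digits(x, f_inv, n):
    """First n digits: alpha_1 = [f_inv(x)], then iterate T x = f_inv(x) mod 1."""
    digits = []
    for _ in range(n):
        y = f_inv(x)
        a = floor(y)
        digits.append(a)
        x = y - a
    return digits

x = pi - 3                                           # an irrational point of (0, 1)

# f(x) = 1/x   ->  continued fraction digits
print(f_expansion_digits(x, lambda t: 1.0 / t, 5))   # [7, 15, 1, 292, 1]

# f(x) = x/10  ->  ordinary decimal digits
print(f_expansion_digits(x, lambda t: 10.0 * t, 5))  # [1, 4, 1, 5, 9]
```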
Since is a ψ-mixing sequence with respect to µ T (see [Ad] or [He]), the large deviations estimate from Corollary 3.2 is valid for µ T . Now the proof of Theorem 6.1 follows almost verbatim the proof of Theorem 2.6. An important ingredient in the proof of Theorem 2.3 is the fact that, for any k ≥ 0, the continued fraction digits under µ G do not form a k-step Markov chain. Hence, in order to generalize Theorem 2.3 to the case of f -expansions we shall need the following lemma. For t ∈ [0, 1] set F (t) = µ T ([0, t]), and let S = F • T • F −1 . Given a ∈ N write I a := f (a, a + 1). do not form a k-step Markov chain under µ T for any k ≥ 1. Proof. Note that F µ T = L and SL = L. From the chain rule it follows that for every a ∈ N and x ∈ F I a , and so S ′ is continuous on F I a . Let β 1 : F X → N be such that β 1 (x) = a for a ∈ N and x ∈ F I a . For i ≥ 1 set β i = β 1 • S i−1 , then β i = α i • F −1 . Given (a 1 , ..., a l ) = a ∈ N l let then J a = F I a . Note that (6.3) L(J a ) = µ T (I a ) for every l ≥ 1 and a ∈ N l . Since S ′ is continuous on F I c and it follows easily from (6.6) that S ′ must be constant on F I c . On the other hand, by (6.4) and (6.6) this is not possible. We have thus reached a contradiction, which shows that {α i } ∞ i=1 does not form a k-step Markov chain under µ T . Remark 6.4. In Proposition 7.1 from [KPW] it is shown that {α i } ∞ i=1 are independent under µ T if and only if S is linear on F I a for each a ∈ N . From this and Lemma 6.3 it follows that if S is not linear on F I a for some a ∈ N , then {α i } ∞ i=1 do not form a k-step Markov chain under µ T for any k ≥ 0. The following theorem is an analogue, for the case of f -expansions, of Theorem 2.3 above and Corollary 2.3 from [KPW]. It can be derived from Theorem 6.1, Theorem 2.1 in [KPW], and Lemma 6.3, by an argument similar to the one given in the proof of Theorem 2.3. Given a 1 , a 2 , ... ∈ N denote by [a 1 , a 2 , ...] the unique x ∈ X with α i (x) = a i for i ≥ 1. Theorem 6.5. Suppose that T satisfies the conditions (1)-(3) and, in addition, that (6.2) holds for some t < 1. Assume the digits {α i } ∞ i=1 of the f -expansion are not independent under µ T . Let k ≥ 0 and let {A n } ∞ n=1 be an N -valued k-step Markov chain (when k = 0 this means A 1 , A 2 , ... are independent). Assume {A n } ∞ n=1 is *mixing or that it is stationary and ergodic. Let ν be the distribution of the random variable [A 1 , A 2 , ...]. Then dim H (ν) ≤ 1 − c f,k , where 0 < c f,k < 1 is a constant depending only on f and k.
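As a purely numerical illustration of the random points [A₁, A₂, ...] whose distribution ν appears in Theorem 6.5 (and in Theorem 2.3 for the continued fraction case), the sketch below samples independent digits from a made-up distribution and evaluates a truncated continued fraction, so that the resulting sample approximates draws from a measure of the kind considered above. The digit law is an arbitrary assumption and is not one analyzed in the paper.

```python
import random

def truncated_cf(digits):
    """Evaluate the finite continued fraction [a_1, ..., a_n] in (0, 1)."""
    value = 0.0
    for a in reversed(digits):
        value = 1.0 / (a + value)
    return value

def sample_point(n_digits, rng):
    """Draw independent digits A_i from a made-up law and form [A_1, ..., A_n]."""
    digits = rng.choices([1, 2, 3, 4], weights=[0.4, 0.3, 0.2, 0.1], k=n_digits)
    return truncated_cf(digits)

rng = random.Random(0)
sample = [sample_point(30, rng) for _ in range(5)]
print([round(x, 6) for x in sample])   # five approximate draws from the measure nu
```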
2017-03-09T07:25:35.000Z
2017-03-09T00:00:00.000
{ "year": 2018, "sha1": "a96d36b844c1f3f272bea29d1dd4ebeeb690b1a9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1703.03164", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a96d36b844c1f3f272bea29d1dd4ebeeb690b1a9", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
73503695
pes2o/s2orc
v3-fos-license
Detecting Non-cognitive Features of Prodromal Neurodegenerative Diseases Abstract: Background: Prodromal Neurodegenerative Disease (ND) due to tauopathies such as Alzheimer’s Disease (AD) and Synucleinopathies (SN) such as Parkinson's Disease (PD) and Dementia with Lewy Bodies (DLB) present subtly. Although ND are considered cognitive disorders, in fact ND present with behavioral and even medical symptomatology years to decades prior to the onset of cognitive changes. Recognizing prodromal ND syndromes is a public health priority because ND is common, disabling and expensive. Diagnosing prodromal ND in real world clinical settings is challenging because ND of the same pathology can present with different symptoms in different people. Individual variability in nature and variability in nurture across the life course influence how ND pathology manifests clinically. The objective of this study was to describe how non-cognitive symptoms from behavioral, medical, neurological and psychiatric domains cluster in prodromal and early stages of ND. Methods: This was an observational study of patients receiving routine clinical care for memory disorders. All patients receiving a standardized evaluation including complete neurological history and examination and standardized brief neuropsychological testing. A Principal Component Analysis (PCA) considering emotion, motor, sensory and sleep factors was performed on the entire sample of patients in order to identify co-occurring symptom clusters. All patients received a consensus diagnosis adjudicated by at least two dementia experts. Patients were grouped into Cognitively Normal, Detectable Cognitive Impairment, and Mild Cognitive Impairment categories due to AD and/or PD/LBD or NOS pathology. Symptom cluster scores were compared between clinical diagnostic groups. Results: In this study 165 patients completed baseline neuropsychological testing and reported subjective measures of non-cognitive symptoms. Four syndrome specific symptom factors emerged and eight non-specific symptom factors. Symptoms of personality changes, paranoia, hallucinations, cravings, agitation, and changes in appetite grouped together into a cluster consistent with an “SN Non-motor Phenotype”. Appetite, walking, balance, hearing, increased falls, and dandruff grouped together into a cluster consistent with an “SN Motor Phenotype”. The Prodromal AD phenotype included symptoms of anxiety, irritability, apathy, sleep disturbance and social isolation. The fourth factor included symptoms of increased sweating, twitching, and tremor grouped into a cluster consistent with an Autonomic phenotype. Conclusions: Non-cognitive features can be reliably measured by self-report in busy clinical settings. Such measurement can be useful in distinguishing patients with different etiologies of ND. Better characterization of unique, prodromal, non-cognitive ND trajectories could improve public health efforts to modify the course of ND for all patients at risk. INTRODUCTION Better description of non-cognitive syndromes that include medical, neurological and psychiatric symptoms could enable earlier identification of prodromal Neurodegenerative Disease (ND). The most common ND syndromes include Alzheimer's Disease (AD), Parkinson's Disease (PD), Dementia with Lewy Bodies (DLB), and Frontotemporal Lobar *Address correspondence to this author at the Hunter College, Nursing. New York, USA; Tel: 212-481-4338; E-mail: cganzer@hunter.cuny.edu Degeneration (FTLD). 
For all ND's, pathophysiologic changes can be observed in the peripheral and central nervous system years before a patient meets clinical diagnostic criteria [1,2]. Despite most ND being associated with cognitive changes, in fact the earliest symptoms of ND are noncognitive. During prodromal phases of ND, specific symptoms in the domains of mood, motor, sleep and sensation become clinically detectable [3][4][5][6]. For example, in prodromal AD, non-cognitive changes including anxiety, irritability, and apathy and sleep inefficiency tend to predominate. In prodromal SN, non-cognitive symptoms including constipation, REM behavior and depression tend to predominate [7,8]. Such features can be helpful in differential diagnosis, especially in the early stages [9]. Prodromal non-cognitive syndromes in AD relate to amyloidosis and other pre-tau pathological processes [10]. Prodromal syndromes in PD and DLB relate to the presence and location of Synucleinopathy (SN) [11]. Despite a growing understanding of the clinical phenomenology, epidemiology and pathology of ND in its later stages, the timing and significance of non-cognitive symptomatology in prodromal ND remains poorly understood. Cognitive and behavioral disorders are difficult to diagnose, especially at early stages. For people with Subjective Cognitive Impairment (SCI) or with mild cognitive symptoms below the threshold for a diagnosis of Mild Cognitive Impairment (MCI), clinicians lack pathologically-based clinical diagnostic criteria. Even at the Mild Cognitive Impairment (MCI) stage of AD, a significant number of cases are missed by trained clinicians [12]. Prodromal ND symptoms can be non-specific (Markopoulou et al., 2016). A host of factors including genetic, environmental, psychosocial, neurological, psychiatric and medical factors, modify neuropsychiatric function in adults [13]. Childhood developmental traits and late-life, age-related concomitant brain pathologies modify clinical presentations, creating diagnostic complexity. These are critical research gaps because disease-modifying interventions are most effective during prodromal stages. The objective of this study was to describe medical, neurological and psychiatric symptomatology in patients clinically diagnosed with prodromal or early AD and/or PD/DLB. The hypothesis of this study was that non-cognitive symptoms would group together into different, recognizable symptom groups that are consistent with previously described, pathology-specific prodromal ND syndromes. This hypothesis is based on the epidemiological literature on prodromal ND [14][15][16]. A better understanding of the patterns of non-cognitive symptoms in prodromal ND could enable clinicians to more rapidly identify patients at risk. This is particularly important considering that interventions with risk reduction and disease modification are becoming in clinical and research settings. The global burden of ND due to AD, PD/LBD, and other dementias could be reduced through better identification of individuals harboring prodromal ND [17]. Accelerating efforts to identify preclinical stages of AD is therefore a key strategy of the U.S. National Plan to Address Alzheimer's Disease [18]. MATERIAL AND METHODS This retrospective, observational study involved patients presenting to the Weill Cornell Medicine and New York-Presbyterian Memory Disorders Clinic between 2014 and 2017. 
The subject population included patients seen at the Alzheimer's Prevention Clinic who consented to the Comparative Effectiveness Dementia & Alzheimer's Registry (CEDAR). The CEDAR study is an observational study of clinical care delivered to patients seeking risk reduction and treatment services for dementia. Informed consent was obtained from all participants via a protocol approved by the institutional review board at Weill Cornell Medicine. Patients with incomplete data or prior dementia diagnoses were excluded. As part of routine care, all subjects completed standardized assessments including neurological history, neurological examination, standardized cognitive testing, self-reported assessments, and diagnostic laboratory and imaging tests as indicated. The standardized assessment included National Institutes of Health Patient Reported Outcomes Measurement & Information System (NIH PROMIS) scales assessing depression, anxiety, alcohol use, and sleep [19], as well as other validated scales measuring sleep and perceived stress [20,21]. Non-cognitive symptoms were identified through self-reported assessments using yes or no responses. These measures were chosen based on extensive literature review of the epidemiological risk factors and prodromal symptoms specific to different types of dementias [9,16,[22][23][24][25]. Table 1 lists all of the measures used for evaluation [26][27][28][29]. The NIH Toolbox Cognition Battery (NIHTB-CB) tests were chosen because of their validity for assessing cognitive function across a wide range of populations [30]. After the completion of each patient's initial visit, all relevant clinical information from each case was presented and interpreted at a weekly team-based consensus conference where at least one neuropsychologist and one neurologist specializing in dementia were present. These conferences included a complete review of clinical history, neurological exam results, cognitive testing results, routine labs, and neuroimaging when available. Subjects were assigned to diagnostic groups using published diagnostic criteria [31][32][33][34][35]. Groups included: MCI due to AD, Detectable Cognitive Impairment (DCI) due to AD, MCI due to PD/LBD, DCI due to PD/LBD, MCI not otherwise specified (NOS), DCI NOS, Subjective Cognitive Impairment (SCI), and Normal Cognition (Table 2). DCI, a diagnostic category introduced in a prior manuscript [36], was assigned when patients could not be classified as having normal cognition and did not meet the threshold for MCI. While semantic in nature, DCI may be more accurately defined as a Detectable Cognitive Indicator, considering patients at this stage have no or minimal subjective complaints. Patients with DCI can be classified as DCI-AD, DCI-SN, or DCI-NOS by taking into account parkinsonian features from the neurological history and examination, AD-like cognitive findings (semantic, amnestic features), and the family history (Fig. 1). DCI groups were subdivided in this study as shown in Table 2. For simplicity of analysis, patients with mixed diagnostic categories, for example individuals classified as dual AD-DLB/PD, were excluded. Laboratory-based biomarkers of neurodegenerative disease risk, including APOE4 and uric acid, were measured using standard clinical procedures. Patients were grouped as either APOE4 positive or negative. Uric acid was included in this analysis because it has been associated with the occurrence of PD-MCI in multiple studies [37]. 
Patients in the lowest quartile of uric acid were compared to patients in highest quartile of uric acid, since no established cutoff level currently exists [38]. In the primary analysis of this study, a Principal Component Analysis (PCA) was performed with data from the entire sample to identify the symptoms that clustered together. Factors were named according to the clinical syndrome each factor appeared to represent, if possible. Non-specific factors were assigned a generic name: "Other Factor 1", "Other Factor 2", etc. To test sampling adequacy for variables within the model the Kaiser-Meyer-Olkin (KMO) test was used and Bartlett's test of sphericty was used to test for interrelationships between variables prior to proceeding with Factor Analysis. To exclude a-priori assumptions regarding specific clinical diagnosis, (PCA) was performed on samples obtained from all diagnostic groups regardless of their clinical diagnoses. In sensitivity analyses, both symptom clusters and individual symptoms with significant contribution (factor loading > 0.7) were further analyzed using linear regression to identify a best-fitting regression model. The purpose of these steps was to test whether the clusters differed across diagnostic groups. In particular, subjects with any stage of AD (DCI-AD, MCI-AD, and AD) were compared to subjects with any stage of PD/LBD (DCI-PD, MCI-PD, and PD/LBD), Other Neuro, and Cognitively Normal. RESULTS A total of 165 patients were included in the study. Thirtythree patients were classified as having AD underlying pathology, 29 with synucleinopathy, 67 with Other ND, and 36 as Cognitively Normal. Table 3. represents study sample demographics. Using PCA, twelve factors emerged, which accounted for 63% of the total variance of the non-cognitive symptoms in the cohort (n = 165). Four syndrome-specific symptom factors emerged, and eight non-specific symptom factors. The first factor group was named "Non-Motor phenotype" and consisted of personality changes, paranoia, hallucinations, cravings, agitation, and changes in appetite. The second factor group was named "Motor phenotype" and consisted of changes in appetite, walking, balance, hearing, increased falls, and dandruff [39]. The third factor group was named "AD-affective" and consisted of increased anxiety, depression, sleep disturbance, and social isolation, all measured using the PROMIS scales. The fourth factor group was named "Autonomic" and consisted of increased sweating, twitching, and tremor. The remaining factors and their symptom clusters are shown in Table 4. The between group and within group differences are summarized in Table 5 and Fig. (2) outlines the predicted non-cognitive total score differences between diagnostic groups. Fig. (2). Predicted non-cognitive total score differences between diagnostic groups. DISCUSSION In this study, medical, neurological and psychiatric symptoms in patients with different stages of early AD and PD/DLB were explored, analyzed and described. From clinical information derived from routine neurological history and evaluation, delivered to 165 patients with early stages of AD and PD and/or DLB, 4 specific syndromes and 8 nonspecific factors (groups of symptoms) emerged. Specific factors included a Non-Motor Syndrome, a Motor Syndrome, an AD Affective Syndrome, and an Autonomic syndrome. These findings suggest that non-cognitive syndromes that could be indicative of prodromal ND can be reliably measured by self-report in busy clinical settings. 
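The clustering workflow described above (sampling-adequacy checks followed by a PCA over the pooled sample, with loadings above 0.7 used to define each factor) can be sketched roughly as follows. The file name, column layout, and use of the factor_analyzer package are illustrative assumptions, not details taken from the study.

```python
# Rough sketch of the symptom-clustering analysis described above: KMO and Bartlett
# checks, then a principal component analysis of yes/no symptom items pooled across
# all diagnostic groups. File name and column names are hypothetical; the study's
# actual item set is the one listed in Table 1.
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo
from sklearn.decomposition import PCA

symptoms = pd.read_csv("noncognitive_symptoms.csv")  # one row per patient, 0/1 items

chi_square, p_value = calculate_bartlett_sphericity(symptoms)
kmo_per_item, kmo_overall = calculate_kmo(symptoms)
print(f"Bartlett chi2 = {chi_square:.1f}, p = {p_value:.3g}; overall KMO = {kmo_overall:.2f}")

pca = PCA(n_components=12).fit(symptoms)
print("total variance explained:", round(pca.explained_variance_ratio_.sum(), 2))

# Loadings (components scaled by the square root of the eigenvalues); items with
# absolute loadings above 0.7 are taken to define a factor.
loadings = pd.DataFrame(
    pca.components_.T * np.sqrt(pca.explained_variance_),
    index=symptoms.columns,
    columns=[f"factor_{i + 1}" for i in range(pca.n_components_)],
)
print(loadings[loadings.abs() > 0.7].dropna(how="all"))
```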
The findings suggest that clinical approaches to detecting prodromal ND could be feasible. Non-motor symptoms (personality changes, paranoia, agitation, hallucinations, cravings, and changes in appetite) were more prominent in our PD/LBD group than in our AD group. Interestingly, all factors were strongest in our PD/LBD diagnostic group. One unexpected finding was that the presence of REM Behavioral symptomatology did not correlate strongly with any of the factors, even though REM behaviors have been reported to be strongly associated with PD/LBD [40]. This finding could be due to the lack of a sensitive measurement for RBD and also to RBD being a relatively later manifestation of synucleinopathy, which is unlikely to occur in a predominantly younger, minimally symptomatic population. Supporting this notion was the sensitivity analysis which showed that, when dementia patients were included in the analysis, the PCA included a new factor with REM behavior alongside other traditional symptoms of PD/LBD. Regarding sensitivity of detection, REM behaviors are difficult to detect and not noticed until they are more severe or have been present for a longer time period. The findings of Motor, Neuropsychiatric, AD Affective and Autonomic syndrome factors are in line with several studies that compared non-cognitive symptoms to biomarker and imaging data. Babulal and colleagues found that AD biomarkers, including higher values of PET-Pittsburgh Compound B, cerebrospinal fluid (CSF) total tau, CSF phosphorylated tau, and lower CSF β-amyloid, are associated with mood changes in cognitively normal older adults [10]. In addition, anxiety and depression among cognitively normal elderly adults have been linked to abnormalities in brain glucose metabolism, as measured by FDG-PET, in regions associated with AD [3]. Further research has found that anxiety and irritability are associated with greater amyloid deposition in the neurodegenerative process leading to AD [41]. Lower levels of CSF β-amyloid have also been associated with decreased quality of sleep as measured as a percentage of time in bed spent asleep [5]. Direct evidence has linked non-motor symptoms, including hyposmia, constipation, depression, visual changes, small fiber neuropathy, and autonomic symptoms to PD, particularly in the pre-motor phase [11]. However, correlations with underlying pathology have been more difficult than in AD, given the lack of PD biomarkers. One study did demonstrate that deficits in dopamine transporters, measured by β-CIT SPECT imaging, were associated with hyposmia and constipation in patients not meeting diagnostic criteria for PD [42]. This study adds to the prior literature by assessing a comprehensive set of symptoms, in a real-world clinical setting, in well-characterized patients at different stages of prodromal ND, including pre-MCI. Discerning which symptoms present together could give greater insight into the underlying pathology that is occurring during the development of NDs. Few studies have described differences in non-cognitive features between different diagnoses of ND. Several limitations of this study are worth mentioning, as well as several strengths. Lack of pathological biomarkers for classifying each patient is a major limitation considering that the study is attempting to segregate non-cognitive features based on pathology. In addition, current diagnostic criteria have been written mostly with a research setting in mind. 
Although we used the conventional criteria, previous studies have shown high false positive (34.2%) and false negative (7.1%) rates [43,44]. Validating the MCI and other diagnoses with CSF biomarkers or imaging data would improve accuracy. We attempted to address this limitation by conducting consensus conferences that included two dementia experts assessing each patient. Another important limitation is the absence of criteria for classifying patients who are considered "not normal" but rather in between Normal and MCI. We addressed this limitation by developing a systematic algorithm for classifying these patients into a novel diagnostic category termed DCI [45], which may be most accurately referred to as Detectable Cognitive Indicator. DCI is characterized by cognitive deficits below the MCI threshold, in the presence of at least one non-cognitive symptom, as well as family history/genetic testing consistent with ND risk, without other explainable etiologies. Since non-cognitive features were included in the diagnosis of the patients, the possibility of reverse causation remains important. However, non-cognitive symptoms were not the only variable taken into account when classifying the DCI patients. In fact, the neurological examination and family history was more important for classifying patients with PD/LBD features. CONCLUSION These findings may help clinicians begin to learn to recognize symptoms that may be part of prodromal ND syndromes. Greater physician awareness may ultimately lead to timely and accurate referrals for disease-modifying interventions to prevent ND. A thorough clinical history that takes into account prodromal, non-cognitive features could increase the accuracy of diagnoses made using biomarker testing which can be discordant in early stages of AD [46]. In addition, non-cognitive features might be useful in context where biomarker testing is less available. For patients in the prodromal stage of AD, biomarker testing such as CSF tests for amyloid and tau, or FDG-PET to assess for brain glucose hypometabolism, are not available through commercial insurance. Ultimately, earlier and more accurate diagnoses could lead to interventions that could modify disease if initiated in pre-MCI stages. Comprehensive lifestyle interventions, as well as anti-amyloid therapies appear to slow pathology in those at risk [47,48]. Identifying the people who would benefit most from these interventions is a major national priority [18]. As such, additional work is indicated to develop clinical diagnostic criteria for pre-symptomatic stages. The possibility of applying such a framework in patients with other ND, such as frontotemporal dementia, and/or with multiple age-related and medical co-morbidities, is a major research priority. ETHICS APPROVAL AND CONSENT TO PARTICI-PATE This study was approved by the Institutional Review Board at Weill Cornell Medicine, New York, USA. HUMAN AND ANIMAL RIGHTS No animals were used in this study, the reported experiments on humans were in accordance with the ethical standards of the committee responsible for human experimentation (institutional national), and with the Helsinki Declaration of 1975, as revised in 2008 (http://www.wma.net/). CONSENT FOR PUBLICATION Informed consent was obtained from all participants. CONFLICT OF INTEREST The authors declare no conflict of interest, financial or otherwise.
2019-03-11T17:25:24.212Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "d77a6dc9452532c8ef9342eeb36af2ebafefbe33", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc6635426?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "d77a6dc9452532c8ef9342eeb36af2ebafefbe33", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
1025166
pes2o/s2orc
v3-fos-license
Rapid and slow gating of veratridine-modified sodium channels in frog myelinated nerve. The properties of voltage-dependent Na channels modified by veratridine (VTD) were studied in voltage-clamped nodes of Ranvier of the frog Rana pipiens. Two modes of gating of VTD-modified channels are described. The first, occurring on a time scale of milliseconds, is shown to be the transition of channels between a modified resting state and a modified open state. There are important qualitative and quantitative differences of this gating process in nerve compared with that in muscle (Leibowitz et al., 1986). A second gating process, occurring on a time scale of seconds, was originally described as a modified activation process (Ulbricht, 1969). This process is further analyzed here, and a model is presented in which the slow process represents the gating of VTD-modified channels between open and inactivated states. An expanded model is a step in the direction of unifying the known rapid and slow physiologic processes of Na channels modified by VTD and related alkaloid neurotoxins. INTRODUCTION The alkaloid neurotoxin veratridine (VTD) exerts its toxic effects by modifying voltage-dependent Na channels in nerve and muscle (Ulbricht, 1969). VTD may modify Na channels by binding rapidly to open channels (Sutro, 1986; Rando, 1987b). There is also a VTD-modified permeability that develops over several seconds under the condition of prolonged membrane depolarization (Ulbricht, 1969). By whichever pathway the modification is effected, the modified current persists when the membrane is depolarized and declines over several seconds when the membrane is held at -100 mV. The VTD-modified channel gates rapidly between an open and a closed state (Leibowitz et al., 1986; Rando, 1987b). Leibowitz et al. (1986) demonstrated that in frog muscle, the kinetics of that gating reaction could be described by the sum of two exponential processes, suggesting that there are two closed states, but that the energy of activation of the channel was the same as that for unmodified channels. Ulbricht (1969) described the slowly developing VTD-modified permeability in frog nerve as the gating of channels between a modified closed state and a modified open state. This transition had exponential kinetics, although a thousand times slower than unmodified channels, and a much greater energy of activation than unmodified channels. The purpose of this investigation was to study both rapid and slow transitions of the VTD-modified Na channel in the same tissue, the frog node of Ranvier. Studies of the rapid gating process revealed important qualitative and quantitative differences in nerve compared with muscle. Investigation of the slow transitions of the VTD-modified channels and the dissociation of VTD from the channel led to a reinterpretation of the slowly developing and slowly declining VTD-modified permeability. A kinetic scheme is presented to unify the rapid and slow transitions of the VTD-modified channels. Some of these results have been presented in preliminary form (Rando, 1987a). METHODS Single myelinated fibers were isolated from sciatic nerves of the frog Rana pipiens. Fibers ranged in diameter from 10 to 15 µm and were voltage-clamped according to the method of Dodge and Frankenhaeuser (1958). The fibers were dissected in a standard Ringer's solution containing 110 mM NaCl, 2.5 mM KCl, 2.0 mM CaCl2, 12.0 mM tetraethylammonium chloride (to block the delayed K current), and 5.0 mM HEPES buffer adjusted to pH 7.2 with 1 N NaOH. 
The fibers were mounted in a plexiglass chamber and petroleum jelly (Vaseline) seals were laid down beneath the surface of the solution. The solution level was lowered creating the four pools described by Dodge and Frankenhaeuser (1958). The solution in the end pools was then changed to an "intracellular" solution of 110 mM CsCl, 10 mM NaCl, and 5.0 mM HEPES, adjusted to pH 7.2 with 1 N CsOH. The adjacent internodes were cut again allowing the intracellular solution to diffuse to the inside of the node. The plexiglass chamber was transferred to the voltage-clamp apparatus where cooled agar bridges (stored in 1 M KCl) were connected to each pool. A cooling solution that was circulated through the base plate of the apparatus maintained the preparation at 12 ± 2°C. The holding potential was set at -100 mV. Linear leakage and capacitance currents were subtracted using analogue circuitry and current traces were filtered with an active four-pole Bessel low-pass filter (3200; Krohn-Hite Corp., Avon, MA) at 10 kHz when measuring responses on a millisecond time scale, and 1 kHz for responses on a time scale of seconds. All data were displayed on an analogue storage oscilloscope (5441; Tektronix Inc., Beaverton, OR), photographed with an oscilloscope camera (Tektronix C-SB), and digitized for analysis using a Digiplot (Houston Instrument Co., Austin, TX) in conjunction with an eight-bit microcomputer (Horizon 2; North Star Computers, San Leandro, CA). Some of the figures are hand tracings of the photographic records for clarity as indicated. The currents were calibrated by dividing the measured internodal potential differences by an assumed internodal resistance of 20 MΩ. Where indicated, Na currents have been converted to Na permeabilities using the Goldman-Hodgkin-Katz equation (Goldman, 1943; Hodgkin and Katz, 1949). Wherever data represent the mean of several experimental results, the error values or error bars indicate ±SD about the mean. VTD was obtained from Sigma Chemical Co., St. Louis, MO. Solutions containing VTD were prepared by diluting the drug from a 20-mM stock solution in dimethylsulfoxide (DMSO). The highest concentration of DMSO in any solution in these experiments (1.58%) had no effect on nodal Na currents. All experiments were done in the continued presence of 200 µM VTD unless otherwise indicated. "Instantaneous" currents are described that refer to the current measured upon changing the clamp potential from one value to another. The settling time of the voltage clamp is ~30 µs and thus would contribute an error of <5% toward the estimation of the true instantaneous current at time zero over most of the potential range studied (-180 to -80 mV). For the extreme potentials of <-180 mV and >-80 mV, where the error could approach 10%, extrapolation of the currents (which were all changing exponentially) back to time zero eliminated the error introduced by the clamp settling time. RESULTS The modulations of VTD-modified Na permeability in frog nerve are complex functions of voltage and time. Apparent steady state permeabilities can be attained within several milliseconds in response to changes in membrane potential. However, these rapidly attained "steady state" currents, at constant membrane potential, slowly change over several seconds reaching a new steady state level. The time courses of these processes differ by three to four orders of magnitude and thus can be studied essentially independently. The results described below are divided into two sections. 
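Where the Methods above convert measured Na currents into permeabilities with the Goldman-Hodgkin-Katz equation, the calculation looks roughly like the sketch below. The Na concentrations follow the solutions described above; the whole-node formulation (permeability in cm³/s) and the example current value are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the Goldman-Hodgkin-Katz conversion of a measured Na current
# into a whole-node Na permeability (cm^3/s). Concentrations follow the solutions
# described in the Methods; the example current value is illustrative only.
import numpy as np

F = 96485.0           # Faraday constant, C/mol
R = 8.314             # gas constant, J/(mol*K)
T = 273.15 + 12.0     # preparation held near 12 degrees C
NA_OUT = 110e-6       # external Na, mol/cm^3 (110 mM Ringer)
NA_IN = 10e-6         # internal Na, mol/cm^3 (10 mM end-pool solution)

def ghk_na_permeability(i_na, v_m):
    """P_Na (cm^3/s) from a whole-node Na current i_na (A) at membrane potential v_m (V)."""
    u = F * v_m / (R * T)
    ghk_factor = F * u * (NA_IN - NA_OUT * np.exp(-u)) / (1.0 - np.exp(-u))
    return i_na / ghk_factor

# Example: a 5-nA inward current measured at -80 mV
print(ghk_na_permeability(-5e-9, -0.080))
```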
The first concerns the rapid gating of VTD-modified Na channels, with time constants in the order of 1 ms or less. The second section concerns a slower gating reaction which is measured over several seconds. The Rapid Gating of the VTD-modified Permeability When VTD was applied to a voltage-clamped myelinated axon, the family of Na currents elicited by positive voltage steps appeared to be little affected (Fig. 1). The magnitude and time course of the transient currents seemed virtually unchanged compared with those in the control Ringer's solution. When the membrane was returned to the holding potential, an inward current persisted, indicated by the arrow in Fig. 1 B. The effect of VTD is shown more clearly in Fig. 1 C, a tracing from another axon in which VTD produced more dramatic changes. In the absence of VTD, a single pulse to +100 mV produced a transient outward current that inactivated within 2 ms. In the presence of VTD, the same pulse produced a transient current of the same amplitude, but the current did not inactivate completely. Rather, there remained a persistent outward current at that potential. When the membrane was repolarized to -100 mV, this current became an inward current that decayed to zero over several seconds. This persistent current flows through only VTD-modified channels, thus the gating characteristics of such channels can be studied independently of unmodified channels. VTD-modified currents generated by brief depolarizing pulses followed a characteristic pattern when the membrane was repolarized to -100 mV. There was a rapid, small decay of the current that was complete within several milliseconds, after which the current maintained an apparent steady state level (Fig. 2). If the membrane were instead repolarized to -200 mV, the initial decay of the current was greater and more rapid, and the steady state current was smaller. Upon the return of the membrane to -100 from -200 mV, the current increased and reached the same level as that when the membrane was repolarized to -100 mV initially. The greater reduction of the current when the membrane was repolarized to -200 mV was not due to dissociation of VTD from Na channels. If it were, then there would have to be a reassociation process to explain the increase of the current upon a return to -100 from -200 mV. However, there is evidence that no association of VTD with channels occurs with a step from -200 to -100 mV; that is, in the absence of a depolarizing prepulse, hyperpolarization of the membrane to -200 mV followed by a step to -100 mV resulted in no VTD-modified current. Thus any dissociation of VTD from channels at -200 mV would have manifested itself as a smaller modified current upon return to -100 mV. FIGURE 1. Na currents in the node of Ranvier in the absence and presence of VTD. With the nodal membrane under voltage-clamp control, currents were generated by 8-ms depolarizations in 10-mV increments from a holding potential of -100 mV. (A) A family of Na currents in standard Ringer's solution. (B) A family of Na currents 5 min after the addition of 200 µM VTD to the Ringer's solution. Note the small persistent current following the test depolarizations (arrow). (C) Na currents from a different node in the presence and absence of 200 µM VTD. A single 5-ms pulse to +100 mV was delivered before and 1 min after the addition of VTD.
VTD remained associated with the channels at -200 mV, yet the modified channels were in a nonconducting state. Since these modified channels were readily opened with a depolarization to -100 mV, this process represents the conversion of channels from resting to open states. FIGURE 2. VTD-modified channels gate between resting and open states. A VTD-modified current was induced by a 20-ms depolarization to 0 mV. The nodal membrane was then hyperpolarized to -100 or -200 mV and tracings of the resulting currents are superimposed here. The dashed horizontal line in this and all subsequent figures of current traces represents the zero-current level. After 8 ms at -200 mV, the membrane was depolarized back to -100 mV. The modified current increased to the same level as when the membrane had been repolarized to -100 mV initially. I studied the voltage dependence of this gating process by inducing VTD-modified currents and then stepping the membrane to new test potentials. For test potentials more positive than -60 mV, the current at the end of the test pulse varied nearly linearly with membrane potential (Fig. 3, A and C). For test potentials more negative than -60 mV, the current at the end of the test pulse did not vary linearly but decreased as the test potential was made more negative (Fig. 3, B and C). The currents at the end of the test pulse were not truly at steady state but were slowly changing (see below): they were decreasing for pulses more negative than ~-80 mV (the slowly decaying current) and they were increasing for pulses more positive than ~-80 mV (the slowly developing VTD-modified current). Thus, I refer to the relationship of the current at the end of the test pulse (10 ms) to the test pulse potential as an isochronal current-voltage (I-V) relationship. This is to be distinguished from the steady state I-V relationship of VTD-modified channels (Fig. 7B). FIGURE 3. I-V relationships for VTD-modified channels. VTD-modified currents were induced with 20-ms depolarizations to 0 mV, and the membrane was then stepped to test potentials for 10 ms. (A) Test pulse range, -80 to +80 mV. (B) Test pulse range, -200 to -100 mV. At each new potential, the current instantaneously changed and then, for test potentials more negative than -60 mV, decreased to reach an apparent steady state level (see text). The current at the beginning of the test pulse is referred to as the instantaneous current and that at the end of the test pulse (10 ms) as the isochronal current. (C) The values of the instantaneous and isochronal currents for VTD-modified channels are plotted as a function of the test potential. Although the shape of the isochronal I-V curve (Fig. 3 C) resembles that of unmodified channels, the two curves differ in several ways. First, the reversal potential for the modified channels was less positive than that of unmodified channels (modified, 23.7 ± 3.4 mV; unmodified, 46.6 ± 3.7 mV; n = 6). This shift in reversal potential is consistent with a decrease of Na selectivity of channels when modified by VTD (Rando, 1987b). Second, the modified channels conducted a maximum inward current at test potentials of -80 mV, compared with -30 mV for unmodified channels. This was not simply a function of the shift of the reversal potential since the permeability-voltage relationship was also shifted to more negative potentials (see below). Finally, current continued to flow through VTD-modified channels even at
potentials as negative as -200 mV, whereas current through unmodified channels decays completely with hyperpolarization of the membrane beyond ~-60 mV. Instantaneous Rectification of VTD-modified Channels VTD-modified currents were reduced at hyperpolarized potentials for two reasons. First, as just described, there was a voltage-dependent gating reaction that favored a closed state as the membrane was increasingly hyperpolarized. Second, there was a voltage-dependent reduction of the current through the open channels. If the conductance of the open channels were independent of voltage, then the instantaneous current would be a linear function of membrane potential. However, the instantaneous current did not increase linearly, but rectified with membrane potentials more negative than -60 mV (Fig. 3 C). This rectification is consistent with voltage-dependent Ca block of the channels, as has been described for both batrachotoxin-modified and tetramethrin-modified Na channels (Mozhayeva et al., 1982; Yamamoto et al., 1984). Thus, both voltage-dependent gating and open channel rectification contributed to the reduction of the VTD-modified current at membrane potentials more negative than the normal resting potential. Permeability-Voltage Relationship of VTD-modified Channels The isochronal I-V relationship of VTD-modified channels was converted to a permeability-voltage relationship in Fig. 4 A (solid line). The dashed line shows the same data corrected for open channel rectification (footnote 1). The modified channels showed voltage-dependent gating over the range of -180 to -60 mV. As mentioned above, a fraction of the modified channels did not close, even at -200 mV. This current was not an artifact of improper leakage subtraction since hyperpolarization of the membrane to -200 mV without a previous depolarization resulted in no inward current. Furthermore, this current was sensitive to block by tetrodotoxin and thus was mediated by voltage-dependent Na channels. The permeability-voltage relationship of the VTD-modified channels and of unmodified channels is shown in Fig. 4 B. The curve representing the VTD-modified channels is normalized to exclude the fraction of channels that remained open at -200 mV. The solid line through the points for the activation of unmodified channels is drawn according to the equation A = {1 + exp[(Eo - Em)/k]}^-1 (1), where A is the ordinate value of fractional activation, Em is the abscissa value of membrane potential, Eo is the membrane potential at which A = 50%, and k is a slope factor. For the unmodified channels, Eo was -30 mV and k was 5.16 mV. The solid line through the points for the activation of VTD-modified channels was drawn according to the model presented in the Discussion, but it is fit by Eq. 1 with Eo = -120 mV and k = 12.0 mV. The dashed line shows the curve through the points for unmodified channels when shifted by -90 mV (i.e., Eo = -120 mV, k = 5.16 mV). The curve for the VTD-modified channels is clearly less steep than the curve for unmodified channels. A model for this voltage-dependent gating of VTD-modified channels is presented in the Discussion. Footnote 1: The correction for open channel rectification was done as follows. The permeability of the VTD-modified channels was calculated using the Goldman-Hodgkin-Katz equation for the linear portion of the I-V curve (between -60 and 0 mV). Then the current expected at more negative potentials for that permeability was calculated using the same equation. 
The ratio between the calculated and the observed values of the instantaneous current, as a function of membrane potential, was the correction factor in the permeability-voltage relationship in Fig. 4 A. Kinetics of Opening and Closing of VTD-modified Channels There was a time-dependent closing and reopening of VTD-modified channels when the membrane potential was changed to a new value (Figs. 2 and 3). At every membrane potential tested, both the opening and closing processes were well fit by single exponential functions. These relaxations proceeded without any apparent delay after each voltage step. The time constants of these relaxations as a function of membrane potential (Fig. 5) were determined by fitting both the closing process (closed circles) and the reopening process (open circles) with exponential functions. FIGURE 4. (A) The isochronal currents (Fig. 3) were converted to permeabilities and plotted as a function of membrane potential (P*Na). Each symbol represents the mean value from six nodes and, where error bars are not shown, the standard deviation was smaller than the size of the symbol. The mean permeability at each potential was then corrected (corrected P*Na) for open channel rectification as seen in the instantaneous I-V relationship of Fig. 3 (see text for discussion of the correction factor). This corrected curve should represent the voltage-dependent gating of VTD-modified channels over this potential range. (B) The corrected P*Na curve of A was normalized from its minimum to its maximum value on a scale of 0-1.0 and is shown as the open circles (P*Na). Also shown is the voltage dependence of the activation of unmodified channels (triangles, PNa). The solid lines through the triangles are drawn according to Eq. 1 and the values given in the text; the solid lines through the circles are drawn according to the model presented in the Discussion. The dashed line is the solid curve through the triangles shifted -90 mV along the abscissa. The solid line in Fig. 5 represents the time constants predicted from the model for these transitions as presented in the Discussion. Because the gating of VTD-modified channels in frog muscle has been described as having two components, a rapid component analogous to that described here and a slower component (Leibowitz et al., 1986), I analyzed the records carefully for a slow component in nerve. No slow component was found. In particular, I considered the possibility that the fraction of the current that persisted at -200 mV was a slowly decaying current. However, no change of this current was detectable over the final 12 ms of 15-ms test pulses. If this current had been slowly changing, but to an extent equal to the error in the measurements so that it was not detected, then the time constant of that decay would have to be >100 ms at -200 mV; the maximum slow time constant observed in muscle was <20 ms (Leibowitz et al., 1986). Slow Changes of the VTD-modified Permeability The VTD-modified permeability induced by a brief depolarization, as in Fig. 1, was not at steady state but decayed over several seconds when the membrane was repolarized. The decay was exponential and, in nine fibers treated with 200 µM VTD, the time constants of the decay at -100 mV averaged 2.21 ± 0.21 s. The rate of decay was independent of VTD concentration over a range of 10 to 316 µM (Rando, 1987b). The nature of this slowly decaying current is examined below. 
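For reference, the activation relation quoted above (Eq. 1, reconstructed in its usual Boltzmann form from the definitions given in the text) can be evaluated with the fitted parameters stated for unmodified and VTD-modified channels to see how much shallower the modified curve is. This is purely an illustrative sketch, not analysis from the paper.

```python
# Evaluate Eq. 1 with the parameter values quoted in the text: Eo = -30 mV, k = 5.16 mV
# for unmodified channels; Eo = -120 mV, k = 12.0 mV for the Boltzmann fit to the
# VTD-modified activation curve.
import numpy as np

def fractional_activation(em, e0, k):
    """Eq. 1: A = 1 / (1 + exp((Eo - Em) / k)), potentials in mV."""
    return 1.0 / (1.0 + np.exp((e0 - em) / k))

for em in np.arange(-200.0, 1.0, 20.0):
    a_unmod = fractional_activation(em, e0=-30.0, k=5.16)   # unmodified channels
    a_vtd = fractional_activation(em, e0=-120.0, k=12.0)    # VTD-modified channels
    print(f"{em:6.0f} mV   unmodified A = {a_unmod:5.3f}   VTD-modified A = {a_vtd:5.3f}")
```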
Voltage Dependence of the Decay of the VTD-modified Current To test if the decay rate of the modified current was dependent on the membrane potential, the repolarization potential was varied (Fig. 6). The decay of the modified current was exponential for potentials between -80 and -140 mV, and within that range the decay rate increased with increasingly negative repolarization potentials. I was unable to clamp the membrane to potentials more negative than -140 mV for a time sufficient to measure a time constant without doing irreversible damage to the nodal membrane. An increase in the decay rate of the VTD-modified current at increasingly negative potentials was also observed in a preliminary study by Yoshii and Narahashi (1984). FIGURE 6. Voltage dependence of the rate of decay of the VTD-modified current. VTD-modified currents, generated with 10-ms pulses to 0 mV, were found to decay exponentially with different time constants at different repolarization potentials. (A) Superimposed tracings of the decay of the VTD-modified current at -100 and -140 mV. The trace at -140 mV has been scaled by a factor of two for easier comparison. The initial amplitude of the current at -140 mV is smaller than that at -100 mV because of instantaneous rectification and voltage-dependent closing of VTD-modified channels at negative membrane potentials (Fig. 3 C). (B) The time courses of decay of the VTD-modified current at the indicated membrane potentials are normalized and plotted semilogarithmically. The currents decay exponentially at all potentials but the time constants decrease with hyperpolarization: τ = 2.20 s (-80 mV), 1.95 s (-100 mV), 1.73 s (-120 mV), and 1.51 s (-140 mV). Averaged values of these time constants from several experiments are presented in Fig. 10. The Slowly Developing VTD-modified Permeability As mentioned above, a slowly developing VTD-modified current appears during depolarizing pulses that last many seconds; this was originally described by Ulbricht (1969). This slowly developing permeability may represent the slow binding of VTD to channels or the slow opening of channels already modified by VTD. From the results of the following experiments, it seems as if the latter of these two possibilities more accurately describes the process. Time and Voltage Dependence of the Slow VTD-modified Current When a node was depolarized to various potentials for many seconds in the presence of VTD, an inward current increased slowly in an exponential fashion (Fig. 7 A). This slow current reached a steady state within 8 s at all pulse potentials. After each depolarization, there was an exponentially decaying current whose time constant was in the same range as those generated by brief depolarizations (Fig. 6). The steady state I-V and permeability-voltage relationships are shown in Fig. 7, B and C. The most positive potential achieved without instability of the voltage clamp was +40 mV in three nodes, +20 mV in one node, and 0 mV in two nodes. The modified permeability was not at a maximum at +40 mV, so the normalization was arbitrarily determined by the model presented in the Discussion. The dashed curve is drawn according to the model. Included in the graph is the permeability-voltage relationship for unmodified channels from the same node used in A and B (closed circles). In agreement with Ulbricht's report (1969), this slow process was detectable at potentials ~30 mV more negative than the minimum potential for detecting the activation of unmodified Na channels. 
Furthermore, compared with the peak permeability of unmodified channels, the steady state permeability of the VTD-modifled channels was a less steep function of voltage. The reversal potential of the steady state current is considerably less positive than that of the peak Na current (48.7 + 3.5 mV for the six nodes described below). The average reversal potential of the VTD-modified current from three nodes was 27.6 + 1.5 mV. From three other nodes that could not be polarized to such positive potentials, estimates of the reversal potentials from extrapolation of the steady state I-V curves were all between 25 and 35 mV. This change of reversal potential in the presence of VTD is due to a reduction of the selectivity of the channels for Na over the carriers of outward current, K and Cs (Rando, 1987b). The slow current developed with an exponential time course from the zero-current level for depolarizations to potentials more negative than ~-60 mV. For potentials more positive than -60 mV, the transient Na current was activated and thus the fast VTD-channel interaction occurred (as in Fig. 1). The slow current then developed exponentially from this small current level. The fast interactions appear as initial jumps in the currents for pulses to -40 and -20 mV in Fig. 7 A. Another way to study the development of the slow permeability is to measure the initial amplitude of the slowly decaying current as a function of pulse duration; as the slow permeability develops, so will the amplitude of this current (Fig. 8 A). When the initial amplitudes were plotted semilogarithmically as a function of pulse duration for a depolarization to -20 mV, the exponential nature of the development of the slow permeability was evident (Fig. 8 B). Similar analyses for pulses to -40 and -60 mV showed that over this voltage range the development of the slow permeability continued to be a first-order process with a time constant that decreased with increasing depolarization (Fig. 8 B). This type of analysis was not done for depolarizations more positive than -20 mV because of the difficulty of maintaining a node polarized at those potentials. The slowly developing VTD-modified current was studied using different VTD concentrations. The amplitude of the current at steady state increased as the VTD concentration was increased from 60 to 200 #M. The time course of that development, however, as studied in Fig. 8, was independent of VTD concentration over this concentration range. In four nodes, the time constants at -40 mV, derived as in Fig. 8, were 1.62 • 0.08 s at 60 #M VTD, 1.60 _+ 0.07 s at 100 #M, and 1.62 _+ 0.10 s at 200 #M VTD. This would suggest that the slowly developing current cannot be equated with the slow binding of VTD. Although the slow current developed with a very different time course and voltage dependence from the rapid modification of channels by VTD, the resulting permeabilities of the two processes appeared to be identical by three criteria. First, when the membrane was repolarized after a prolonged depolarization, the slowly decaying current had identical kinetics to that after a brief depolarizing pulse. Second, the rapid gating of VTD-modified channels described above was the same whether the modification was induced by a brief or a prolonged depolarization. Finally, the reduction of selectivity of VTD-modified channels for Na over K was the same regardless of how the modification was achieved (Rando, 1987b). 
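Time constants like those reported above are obtained by fitting single exponentials to the decaying (or developing) currents. A minimal sketch of that fitting step is shown below on a synthetic trace (τ = 2.0 s, chosen only for illustration); it is not data from the study.

```python
# Fit a single exponential to a digitized tail current to extract its time constant.
# The trace here is synthetic, generated only to demonstrate the procedure.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)                                    # s after repolarization
current = -4.0 * np.exp(-t / 2.0) + rng.normal(0.0, 0.05, t.size)  # nA, synthetic

def single_exponential(t, amplitude, tau):
    return amplitude * np.exp(-t / tau)

(amplitude, tau), _ = curve_fit(single_exponential, t, current, p0=(-4.0, 1.0))
print(f"fitted amplitude = {amplitude:.2f} nA, tau = {tau:.2f} s")
```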
The relationship between the slowly decaying current after a brief depolarization and the slowly developing current during a prolonged depolarization is clarified by the comparison of Figs. 6 and 8. The first implies a process by which channels convert from a conducting to a nonconducting state with first-order kinetics, in a voltage-dependent manner, and with a time constant of ~ 1-2 s. The second implies a process by which channels convert from a nonconducting to a conducting state with first-order kinetics, in a voltage-dependent manner, and with a time constant of ~ 1-2 s. The possibility that these two processes are manifestations of a single gating reaction is considered in the Discussion. Reversal of the Interaction between VTD and the Na Channel The VTD-modified current decayed slowly when the membrane was returned to the holding potential. This decay was the same whether the modification was induced by a brief depolarization as in Fig. 1, or by a prolonged depolarization as in Fig. 7. What is the process responsible for the decline of this current? There are two general possibilities. First, the conducting channels may remain modified by VTD but enter a nonconducting state with the observed time course at -100 inV. Second, VTD may dissociate from the channels leading directly to the decline of the modified current. Since VTD binds rapidly to unmodified channels during a depolarization and decays very slowly upon repolarization, repetitive depolarizations in the presence of VTD leads to a cumulative increase of the magnitude of the VTD-modified permeability and a concomitant decrease in the unmodified permeability (Sutro, 1986;Rando 1987b). After a pulse train, the unmodified permeability recovers to its original value. The time course of this recovery is thus the time course of conversion of VTD-modified channels to unmodified channels. If the slowly decaying VTD-modifled permeability (as in Fig. 6 A) is a result of the dissociation of VTD from the channel, the time course of the decay should be identical to the time course of the recovery of the unmodified current after a pulse train. The following experiments test this hypothesis and show that it is incorrect. Rather, the data suggest that the decay of the VTD-modified permeability is a result of the conversion of modified channels from a conducting to a nonconducting state. Nodes were stimulated at 10 Hz in the presence of VTD until a steady state was achieved. This resulted in an increase of the VTD-modified current and a concomitant reduction of the peak (unmodified) current. At increasing time intervals after the end of the pulse train, a single test pulse was given to assay the recovery of the peak current (Fig. 9 A). Fig. 9 B shows the time course of the recovery of the peak VTD current (PNa) and the decline of the VTD-modified current (PNa) on the same graph. Long after the VTD-modified current had completely decayed (> 12 s), the amplitude of the peak current continued to increase. Thus, the decline of the VTDmodified current cannot be equated with the dissociation of VTD from the channels. It should be noted that, in the absence of VTD, stimulation of a nodal membrane at 10 Hz results in no change of the peak current amplitude. The recovery of the unmodified permeability after repetitive depolarizations as shown in Fig. 9 B occurred in two phases. The time course was found to follow the sum of two exponential processes. The time constant of the more rapid recovery was 10.2 s; that of the slower recovery was 40.0 s. 
In a total of five such experiments, the rapid time constant averaged 9.4 ± 1.9 s and the slower time constant averaged 36.4 ± 3.9 s. The contribution of the slower component was variable from experiment to experiment and made up between 12 and 31% of the recovery of the peak current. These data suggest that the conversion of VTD-modified channels to unmodified channels occurs by two distinct processes. In muscle, repetitive depolarization leads to an increase in the VTD-modified permeability to a maximum value, then it declines with continued stimulation (Sutro, 1986). From these data, it was suggested that there is a slow inactivated state that VTD-modified channels may enter with repetitive pulsing. More channels could be driven to this slow inactivated state with higher pulse frequencies, longer pulse durations, and higher VTD concentrations (Sutro, 1986). This phenomenon was barely detectable in nerve (Rando, 1987b). FIGURE 9. The relationship between the slow decay of the VTD-modified current and the dissociation of VTD from Na channels. (A) A node was stimulated at 10 Hz for 40 pulses to drive many channels to a VTD-modified state. The return of channels to an unmodified state was then assayed by a single test pulse at variable intervals after the pulse train and measuring the amplitude of the peak current. The traces are the responses to the test pulses. Indicated on the left and right are recovery times corresponding to certain modified and unmodified (peak) current traces. (B) The modified and unmodified (peak) currents shown in A were converted to permeabilities and the values of those permeabilities are plotted as a function of recovery time after the high frequency stimulation. The peak permeability increased from a minimum value (immediately after the train) to its initial value before the pulse train (dashed horizontal line) over ~2 min. The values of the peak permeabilities were adjusted for the variable contribution of the VTD-modified permeability during the test pulse. The modified permeability decreased from its maximal value (immediately after the train) to zero in ~12 s. The values of the modified permeability were increased by a factor of 1.33 to adjust for voltage-dependent gating and instantaneous rectification at -100 mV (see Fig. 3), both of which reduced the apparent modified permeability. When nerve was stimulated at 40 Hz for 12 s, the VTD-modified permeability at the end of stimulation had decreased by <5% of the maximal value, whereas in muscle the permeability declined by nearly 50% (see Sutro, 1986 and Fig. 3). Furthermore, the experiments in nerve were done in four times the VTD concentration. Nonetheless, to be sure that the results of Fig. 9, in particular the delayed and biphasic nature of the recovery of the unmodified permeability, were not due to this slow inactivation process, alternative stimulation parameters were used. With parameters of 1 Hz for 15 s or 10 Hz for 1 s, the amplitude of the modified current was smaller than in the experiment in Fig. 9. That is, fewer channels had been driven to a modified state. Nevertheless, after the cessation of the stimuli, the peak current recovered in two phases just as in Fig. 9, and the time constants of both processes were always similar to those with the stimulation parameters used in Fig. 9. The range of the slow time constants was 23-42 s for stimulation at 1 Hz to steady state (n = 3), and 28-37 s for stimulation at 10 Hz for 1 s (n = 3). 
Thus, it does not appear that the prolonged recovery of the peak current seen in Fig. 9 is due to the stimulation parameters of that experiment (10 Hz for 4 s). Another interesting aspect of Fig. 9 is that, when a steady state was achieved by repetitive stimulation, the peak permeability was reduced by ~2.85 × 10⁻⁹ cm³/s (79%), whereas the corresponding VTD-modified permeability equalled 0.91 × 10⁻⁹ cm³/s (or 25% of the unmodified peak permeability). The modified permeability accounted for only about one-third of the "missing" peak permeability. This would suggest that, compared with an unmodified channel, a modified channel has a smaller single channel conductance, a lower probability of being open at -30 mV, or both. DISCUSSION The rapid and slow gating of VTD-modified Na channels described here can be studied essentially independently because the time constants of the two processes differ by three to four orders of magnitude, depending on the membrane potential. To be able to study both processes in the same tissue is useful for comparisons with other systems in which rapid or slow effects of VTD have been described. The Activation of VTD-modified Channels The gating properties of VTD-modified channels in frog muscle fibers have been described by Leibowitz et al. (1986). The results presented here show that there are qualitative differences between the gating characteristics of such channels in nerve and muscle. One significant difference was that in nerve the time course of opening and closing of VTD-modified channels was well modeled by a single exponential process; in muscle the processes were well fit by the sum of two exponential components. The rapid component in muscle was very similar to the single component in nerve. The slow component in muscle, with time constants in the range of tens of milliseconds (Leibowitz et al., 1986), had no parallel in nerve. Whether this difference represents inherent differences between the Na channels in nerve and muscle, or differences between the tissues (e.g., Na channels in the T tubule system of muscle), remains to be determined. Another significant difference is the sensitivity of the activation process to changes in membrane potential. In muscle, Leibowitz et al. (1986) reported that the slope of the curve of permeability vs. membrane potential was the same for VTD-modified channels as for unmodified channels. In nerve (Fig. 4), the curve for VTD-modified channels was clearly less steep than that for unmodified channels. It would be interesting to compare gating current studies in the presence of VTD in the two tissues to search for differences in the more fundamental aspects of the voltage dependence of charge movement. The Gating of VTD-modified Channels between a Resting and an Open State VTD-modified channels gate between a nonconducting and a conducting (open) state in a voltage-dependent manner (Fig. 2). The nonconducting channels could be readily driven to the open state by a depolarization of the membrane (Figs. 2 and 3). This nonconducting state is, therefore, by traditional definition, a resting state. The transitions of the VTD-modified channels between the resting and open states followed an exponential time course, the time constant of which depended on the membrane potential (Fig. 5). These relaxations were modeled as the conversion of channels between a single resting state, R*, and a single open state, O*. 
In this model, the transitions between those states were governed by voltage-dependent rate constants, Ω1 and Ω-1, according to the scheme R* ⇌ O*, with Ω1 governing the R* → O* transition and Ω-1 the reverse. The values of Ω1 and Ω-1 were derived using the time constants at -120 and -140 mV (Fig. 5), and the distribution of channels between the resting and open states at these same membrane potentials (Fig. 4 B): Ω1 = 151 · exp(0.0524 · Em) s⁻¹ (2) and Ω-1 = 0.006 · exp(-0.0321 · Em) s⁻¹ (3), where Em is the membrane potential. The solid curve in Fig. 5 shows the predicted relationship between the time constant of the relaxation and the membrane potential based on these rate constants, where τ = 1/(Ω1 + Ω-1) (4). The model was then applied to the voltage dependence of the distribution of channels between resting and open states. The solid line through the points for VTD-modified channels in Fig. 4 B is drawn according to the voltage-dependent rate constants above, where the fractional activation (A) is derived by the equation A = Ω1/(Ω1 + Ω-1) (5). The model accurately describes the voltage dependence of the activation of VTD-modified channels. One aspect of the data that is not included in the model is the fraction of channels that remained open at -200 mV, which is depicted graphically in Fig. 4 A. This may represent a distinct population of channels that do not undergo this gating reaction. Alternatively, this phenomenon may be indicative of a second modified open state, which is stable at very negative membrane potentials and which a fraction of channels enter instead of R*. This point needs further investigation.
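A minimal numerical sketch of this R* ⇌ O* scheme is shown below, using the rate expressions as recovered above (Eqs. 2, 3, and 5). The symbol names and prefactors are reconstructions from a degraded scan, so the absolute rates should be checked against Fig. 5 of the original; the voltage dependence of the open fraction (midpoint near -120 mV, effective slope factor of about 12 mV) is the part that matches the Boltzmann fit quoted in the text.

```python
# Steady-state behavior of the two-state R* <-> O* model, Eqs. 2, 3 and 5 as
# reconstructed above. Rate-constant names and prefactors are taken from the
# recovered equations and should be treated as a sketch, not a definitive transcription.
import numpy as np

def omega_1(em_mv):        # R* -> O* rate (Eq. 2)
    return 151.0 * np.exp(0.0524 * em_mv)

def omega_minus_1(em_mv):  # O* -> R* rate (Eq. 3)
    return 0.006 * np.exp(-0.0321 * em_mv)

def open_fraction(em_mv):  # Eq. 5: A = omega_1 / (omega_1 + omega_minus_1)
    return omega_1(em_mv) / (omega_1(em_mv) + omega_minus_1(em_mv))

for em in (-200, -160, -140, -120, -100, -80, -60):
    print(f"{em:5d} mV   fraction of modified channels open = {open_fraction(em):.3f}")

# The effective slope factor implied by the two exponents is ~12 mV, as quoted in the text.
print("effective slope factor:", round(1.0 / (0.0524 + 0.0321), 1), "mV")
```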
If that model were correct, then one would expect the unmodified permeability to be reduced in the presence of VTD. This is not what is observed (Fig. 1). If there is a binding of VTD to channels that then leads to a slowly developing current, that binding must occur after the activation of unmodified channels. I propose that the slow permeability develops by the binding of VTD to channels in the fast inactivated state, and that it is these modified, inactivated channels that then slowly open (see below). For prolonged depolarizations, the fast inactivated state is a transient state as channels progress to slow inactivated states. Thus, like the open state for the rapid binding of VTD (Sutro, 1986; Rando, 1987b), the transient nature of the state of the channel to which VTD preferentially binds limits the extent of the modification. Another possibility is that the slowly developing current arises from the slow reopening of inactivated channels and the binding of VTD to those open channels. However, an argument is presented below that the slow development and slow decay of the VTD-modified current arise from the conversion of VTD-modified channels between an open and an inactivated state. The binding of VTD to the fast inactivated state of the channel may be the simplest hypothesis that is consistent with the data to explain the slowly developing current. A Reinterpretation of the Decay of VTD-modified Currents: The Inactivation of VTD-modified Channels The decay of the VTD-modified current after a brief depolarizing pulse has been called a "tail current" because it is a decaying current that appears after returning the membrane to the holding potential at the end of a test pulse. For unmodified channels, the decay represents the "deactivation" transition of channels from an open to a resting state. However, there is a second process, namely inactivation, by which channels pass from an open to a nonconducting (inactivated) state while the membrane is maintained at a constant potential. I propose that the decay of VTD-modified currents, as shown in Fig. 6, represents the inactivation of VTD-modified channels. This postulate is based on the traditional distinction between resting and inactivated states. Both resting and inactivated states are nonconducting states of the Na channel, distinguished by the kinetics of their transitions to open states. By convention, a resting closed Na channel is one that can open rapidly and with high probability in response to depolarization; an inactivated Na channel is one that opens extremely slowly and with low probability upon membrane depolarization. Using these criteria, the following argument leads me to conclude that VTD-modified channels pass from an open state to an inactivated state as the modified current decays. After the train of depolarizations illustrated in Fig. 9, ~80% of the channels were modified as judged by the reduction of the unmodified current. The VTD-modified current decayed completely within 20 s after the train, thus any channel still modified at that time would have been in a nonconducting state. This nonconducting state must be an inactivated state since, as shown in Fig. 4, VTD-modified resting channels would tend to open at -100 mV. It seems most likely that this inactivated state is a VTD-modified inactivated state and that the slow recovery of the peak, unmodified current represents the slow unbinding of VTD.
Since inactivated states, whether modified by VTD or not, are physiologically "silent," it would seem that biochemical experiments would best answer the question of the rate of VTD dissociation. If, however, this hypothesis is correct, then VTD-modified currents decay because modified channels undergo a transition from an open state to an inactivated state. This inactivation of VTD-modified channels is not simply a modification of normal fast inactivation of unmodified channels; it is a process with no known analogue in the kinetics of gating of unmodified channels. VTD-modified channels inactivate orders of magnitude more slowly than unmodified channels, and the open state is favored at more negative potentials for VTD-modified channels. This is the opposite voltage dependence to that of unmodified channels. In fact, this voltage dependence led Ulbricht (1969) to conclude that this slow process represented a modification of the activation process of unmodified channels. However, the reversal of inactivation at positive potentials is not unprecedented in the literature of Na channel physiology and pharmacology. In the presence of Leiurus scorpion α-toxin, increasingly positive pulse potentials produce Na currents with less inactivation at steady state in the node of Ranvier (Wang and Strichartz, 1985). In the squid axon, in the absence of any neurotoxin, Chandler and Meves (1970) also found a reduction of steady state inactivation of the Na current with very positive depolarizations. It is very important to distinguish the inactivation of modified channels discussed here from other definitions of the inactivation of VTD-modified currents. Leicht et al. (1971b) described an inactivation of VTD-modified channels in giant neurons of the snail Helix pomatia. This inactivation was a partial decay of the induced current during a depolarization of several seconds; this decay was not observed in my studies of frog nerve (Fig. 7). As discussed above, Sutro (1986) described an inactivation of VTD-modified channels that occurred during a series of depolarizing pulses. With pulsing, the magnitude of the VTD-modified current first increased to a maximal value, then slowly decreased with continued stimulation. This type of inactivation is barely detectable in the node (Rando, 1987b). Clearly, the inactivation processes described by both Leicht et al. (1971b) and Sutro (1986) are different phenomena from the inactivation described in this report. The Gating of VTD-modified Channels between an Open and an Inactivated State As suggested by the data in the figures above, the modified channels were modeled as converting between a single open state O* and a single inactivated state I*, with transitions governed by voltage-dependent rate constants g₁ (for O* → I*) and r₁ (for I* → O*). The values of g₁ and r₁ were derived by assuming that, at most, 1% of the modified channels could be in O* at -120 mV and that 8% would be in O* at -80 mV. Then, using the time constants of the decay of the modified currents at these two potentials, the rate constants were defined by the equations: g₁ = 0.27 · exp(-0.0057 · Em) s⁻¹ and r₁ = 1.73 · exp(0.048 · Em) s⁻¹. The time constant of this slow gating reaction was calculated as a function of membrane potential using the analogue of Eq. 4. The solid line in Fig. 10 is drawn according to the model and fits well the observed time constants over the range of -140 to -20 mV. Similarly, the analogue of Eq. 5 was used to calculate the fraction of modified channels in the open state for the reversible inactivation gating. One of the predictions of this model is that the membrane potential at which channels would be equally distributed between O* and I* is -35 mV.
It was on this basis that the data in Fig. 7 B were normalized in Fig. 7 C. The VTD-modified permeability had not reached a maximal value at the most positive membrane potential that was possible to test, so the normalization was arbitrary. The data points were set on the scale such that a curve drawn by eye through them had a value of 0.5 (i.e., equal distribution of channels between O* and I*) at -35 mV. With the data normalized as such, the predicted values based on the model (Fig. 7 C, dashed line) followed the observed values, although there was some deviation at the more positive potentials. The calculated midpoint potential, -35 mV, is close to that assumed by Ulbricht (1969) in his studies of a slowly developing VTD permeability. He was able to maintain a node depolarized at potentials up to 60 mV more positive than the resting potential (i.e., ~ -10 mV), and he took the midpoint voltage to be ~30 mV by extrapolation.
FIGURE 10. Time constants of the relaxation of modified channels between an open and an inactivated state. The solid circles represent time constants of the decay of modified currents as shown in Fig. 6. The open circles represent time constants of the development of the slow current as shown in Fig. 8. These time constants are combined on the same graph based on the model presented in the Discussion, and the solid line is drawn according to that model. All points represent the mean of determinations from five individual nodes except where the number of experiments is indicated in parentheses.
Dissociation of VTD from the Channel Since the work of Ulbricht (1969), it has been known that the modification of Na channels by VTD can be promoted or reversed depending on the membrane potential. It seems implicit in that and subsequent studies that the reversal of VTD binding was equated with the slowly decaying (inactivating) modified current. That is, the assumption has been that the modified current decayed because VTD dissociated from the channel. The data in Fig. 9 refute this assumption. Clearly, long after the modified current completely inactivated there remained a significant proportion of modified channels. Leibowitz et al. (1986) considered the slowly decaying VTD-modified permeability to represent the dissociation of VTD from the channel. An interesting result that they obtained is that at membrane potentials sufficiently negative to close many VTD-modified channels, the time constant of the current decay increases appreciably. Their conclusion is that VTD unbinds more slowly from closed channels than from open channels. In light of the results presented here, however, I would interpret that result as showing that closed modified channels inactivate more slowly than do open modified channels. It is possible that instead of a single inactivated state as presented in the model, VTD-modified channels may proceed from one inactivated state to a second inactivated state. If the rate of dissociation of VTD from the two inactivated states were different, this could explain the biphasic nature of the dissociation of VTD from the channel as seen in Fig. 9. According to such a scheme, the time constants obtained from experiments such as those in Fig. 10 (~9 and ~36 s) would be a function both of actual dissociation rates of VTD and of the kinetics of transitions between the two inactivated states. Measurements of the actual dissociation rates would best be done using biochemical techniques.
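To make the two kinetic schemes above concrete, the following short Python sketch (an illustration added here, not code from the original study) evaluates the rate-constant expressions quoted in the text for the fast R*/O* reaction (Eqs. 2 and 3) and for the slow O*/I* reaction, and computes the relaxation time constant and the equilibrium fraction of open channels for each reversible two-state scheme. The formulas τ = 1/(k_forward + k_backward) and fraction open = (rate into O*)/(k_forward + k_backward) are the standard two-state results corresponding to Eqs. 4 and 5 and their analogues; the numerical constants are reproduced as printed above and may carry transcription errors.

import math

# Rate constants quoted in the text (Em in mV, rates in s^-1).
# Fast gating: R* <-> O* (activation of VTD-modified channels).
def q1(em):    # R* -> O*
    return 151 * math.exp(0.0524 * em)

def q_1(em):   # O* -> R*
    return 0.006 * math.exp(-0.0321 * em)

# Slow gating: O* <-> I* (inactivation of VTD-modified channels).
def g1(em):    # O* -> I*
    return 0.27 * math.exp(-0.0057 * em)

def r1(em):    # I* -> O*
    return 1.73 * math.exp(0.048 * em)

def two_state(into_open, out_of_open, em):
    """Time constant and equilibrium open fraction of a reversible two-state
    scheme, where 'into_open' is the rate constant leading into the open state."""
    f, b = into_open(em), out_of_open(em)
    tau = 1.0 / (f + b)         # Eq. 4 and its analogue
    frac_open = f / (f + b)     # Eq. 5 and its analogue
    return tau, frac_open

for em in (-140, -120, -100, -80, -35):
    tau_fast, open_fast = two_state(q1, q_1, em)   # R*/O* reaction
    tau_slow, open_slow = two_state(r1, g1, em)    # O*/I* reaction
    print(f"Em = {em:4d} mV: fast tau = {tau_fast:8.4f} s, open = {open_fast:.2f}; "
          f"slow tau = {tau_slow:6.2f} s, open = {open_slow:.2f}")

With these expressions the slow reaction reproduces the values stated in the text: about 1% of modified channels open at -120 mV, about 8% at -80 mV, and a midpoint (50% open) near -35 mV.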
Many of the alkaloid neurotoxins, pyrethroids, and other lipid-soluble toxins have slow effects on Na channels (see Strichartz et al., 1987). It may be that all such modified channels gate slowly between conducting and nonconducting states along with the known rapid gating of channels modified by toxins such as batrachotoxin and aconitine. Although more attention has been paid to the rapid gating processes, further evaluation of the slow processes is likely to provide more information on the kinetics of the actions of these toxins on resting cells and on the behavior of modified channels in the planar bilayers. I am grateful for the generous advice and support of Dr. Gary Strichartz in whose laboratory this work was done. Thanks are due to Dr. Ging Kuo Wang for useful discussions and critical reading of the manuscript. Rachel Abrams and Mary Gioiosa provided excellent secretarial assistance.
2014-10-01T00:00:00.000Z
1989-01-01T00:00:00.000
{ "year": 1989, "sha1": "fd1565a87c688d0a0cf45c10008fce173cef4717", "oa_license": "CCBYNCSA", "oa_url": "http://jgp.rupress.org/content/93/1/43.full.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "fd1565a87c688d0a0cf45c10008fce173cef4717", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
245014087
pes2o/s2orc
v3-fos-license
Bioreactance-guided fluid therapy for excision of a giant brain tumor in an infant under general anesthesia: A case report and literature review Pediatric patients are particularly susceptible to brain tumors. Surgical resection is often the optimal treatment. Perioperative management of pediatric brain tumor resection poses great challenges to anesthesiologists, especially regarding fluid therapy. In this case, the patient was a 69-day-old infant weighing 6 kg who was facing excision of a giant brain tumor (7.9 cm × 8.1 cm × 6.7 cm). The infant was at great risk of hemorrhagic shock, cerebral edema, pulmonary edema, congestive heart failure, and coagulation dysfunction. However, we used the parameters obtained from the bioreactance-based NICOM® device (Cheetah Medical) to guide the infant's intraoperative fluid therapy, successfully avoided these complications, and achieved a good prognosis. Introduction Pediatric brain tumors are considered the second most common pediatric neoplasm, and about 50% of them are nonmalignant [1]. Currently, the best strategy for most of these tumors is radical resection as soon as possible [2], because intracranial tumors (> 5 cm in diameter) in children cause elevated intracranial pressure, nerve damage, and even life-threatening conditions. However, the intraoperative mortality rate remains high [3], especially in children with low weight and life-threatening conditions such as blood loss and coagulopathy, which makes such surgery a great challenge for anesthesiologists. Notably, traditional static parameters such as heart rate (HR), arterial blood pressure (BP), and central venous pressure (CVP) are unreliable for predicting fluid responsiveness [4]. Goal-directed fluid therapy (GDFT) based on the functional hemodynamic parameters provided by a cardiac output (CO) monitor has been proven to reduce the incidence of postoperative complications in adults [5,6]. However, because of technical constraints of age and weight, few hemodynamic monitoring devices can be used during surgery in children [7]. Traditional hemodynamic monitoring equipment that can be used in children, such as pulmonary artery floating catheters and esophageal ultrasound, is difficult to use during neurosurgery because of the constraints of patient positioning. Recently, the NICOM® device (Cheetah Medical), a simple, safe, and completely noninvasive monitor, has attracted the attention of many clinicians. Studies have shown that the hemodynamic parameters provided by NICOM, based on bioreactance technology, have acceptable reliability [8,9]. Thus, we used NICOM in this case to achieve precise control of fluid input. Case presentation A 69-day-old infant who weighed 6 kg was scheduled for excision of a giant brain tumor (7.9 cm × 8.1 cm × 6.7 cm) adjacent to the third ventricle, which was confirmed by magnetic resonance imaging (MRI) (Fig. 1). After her birth, the tumor grew rapidly; thus, the neurosurgeons decided to remove the tumor surgically after 17 days of chemotherapy. The results of preoperative tests, including electrocardiography, complete blood count, serum creatinine, liver function, and serum electrolytes, were normal. Six hours before her arrival at the operating room (OR), the baby was fed breastmilk. After her arrival, an electrocardiogram, pulse oximeter, bispectral index monitor, and a NICOM® system (Cheetah Medical, UK) were set up.
Her baseline vital signs included BP of 118/85 mmHg, HR of 144 beats/min, and pulse oxygen saturation (SpO2) of 99%. After sevoflurane inhalation, venous and radial arterial catheters were placed; general anesthesia was then induced with intravenous administration of 0.3 μg/kg sufentanil, 0.2 mg/kg cisatracurium, and 5 mg/kg propofol, and tracheal intubation was conducted without difficulty. After anesthesia induction, two central venous catheters were inserted into the left and right femoral veins under ultrasound guidance, and general anesthesia was maintained with sevoflurane inhalation, continuous infusion of propofol, remifentanil, and dexmedetomidine, and intermittent administration of sufentanil and cisatracurium. Rectal temperature was monitored throughout the surgery. Intraoperative blood salvage was used to collect autologous blood from the surgical site. At 10:45 am, the surgery began, and the procedure of tumor removal lasted 4 h 35 min. Fluid therapy was guided by hemodynamic parameters obtained from NICOM. The baseline parameters included a cardiac index (CI) of 2.3 L/(min·m²), total peripheral resistance index of 3258 dyn·s·cm⁻⁵·m², stroke volume index (SVI) of 16 mL/m², thoracic fluid content (TFC) of 23.5 kΩ⁻¹, and stroke volume variation (SVV) of 15%. Preoperatively, a "passive leg raising (PLR) test" was performed, and the measured ∆SVI was 12%. Therefore, the basal volume of the patient was considered sufficient, and the hemodynamic parameters in this state were used directly as the baseline values for fluid management. Accordingly, 20 mL/kg fluid in total was given during the operation as a basic fluid supplement, while SVV and SVI were maintained within a 20% float of baseline to optimize the intravascular volume. A volume expansion of 10 mL/kg crystalloid over 5 min was given whenever the parameters fell outside the expected range. For instance, rapid bleeding occurred during the third hour of tumor removal: HR rose to 135 beats/min, mean arterial pressure (MAP) dropped to 44 mmHg, SVI dropped below 10 mL/m², and SVV rose to 18%, all of which indicated an abnormal volume status. Therefore, 60 mL of suspended red blood cells was infused over 5 min. Furthermore, the volume status was evaluated every 5 min until the bleeding stopped and the parameters returned to the expected range. Additionally, thromboelastographic (TEG) analysis, blood gas analysis, and coagulation tests were performed before transfusion to guide the administration of blood products and coagulation substances. During the operation, SVI was used to monitor cardiac function and TFC was used to monitor extravascular lung water content. Meanwhile, TEG analysis was used to guide the supplementation of clotting factors or the continuous administration of tranexamic acid. Table 1 presents the detailed treatments applied. Figure 2 shows that the hemodynamic parameters remained relatively stable during the whole procedure. At the end of the operation, the MAP increased significantly after the anesthesia was stopped, and the hemodynamic parameters indicated sufficient volume. During the 9.5-h procedure, 2600 mL of fluid, approximately 5 times her blood volume, was infused. Meanwhile, about 700 mL of blood loss and 1200 mL of urine were recorded during the entire operation. The patient remained hemodynamically stable throughout the procedure owing to the fluid treatment.
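The volume-management rule described above (keep SVV and SVI within a 20% float of their baseline values, and give a 10 mL/kg crystalloid bolus over 5 min with reassessment every 5 min whenever a parameter drifts outside that range) can be summarized as a small decision sketch. This is an illustrative simplification written for this report, not vendor or clinical software; the thresholds and baseline values are taken from the case description, and the function names are hypothetical.

# Illustrative goal-directed fluid therapy (GDFT) decision rule based on the
# 20% "float" around baseline used in this case. Not clinical software.

BASELINE = {"svi": 16.0, "svv": 15.0}   # baseline SVI (mL/m^2) and SVV (%)
FLOAT = 0.20                            # accepted relative deviation from baseline
BOLUS_ML_PER_KG = 10                    # crystalloid bolus, given over 5 min
WEIGHT_KG = 6                           # the infant's weight

def outside_float(value, baseline, tolerance=FLOAT):
    """True if a parameter deviates from its baseline by more than the tolerance."""
    return abs(value - baseline) / baseline > tolerance

def gdft_step(svi, svv):
    """Return the suggested action for one 5-minute reassessment cycle."""
    if outside_float(svi, BASELINE["svi"]) or outside_float(svv, BASELINE["svv"]):
        return (f"give {BOLUS_ML_PER_KG * WEIGHT_KG} mL volume expansion over 5 min, "
                "reassess in 5 min")
    return "maintain basic fluid supplement, reassess in 5 min"

# Example: the bleeding episode described above (SVI below 10 mL/m^2, SVV 18%).
print(gdft_step(svi=9.5, svv=18))    # -> volume expansion recommended
print(gdft_step(svi=16.0, svv=14))   # -> maintenance only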
The patient's internal environment also remained stable, as presented in Table 2 and suggested by the intermittent blood gas analyses performed during the operation. Furthermore, the postoperative blood and coagulation tests also indicated a stable internal environment, as follows: hemoglobin level of 12 g/dL, platelet count of 361 × 10³ μL⁻¹, activated partial thromboplastin time of 38.9 s, international normalized ratio of 1.11, and fibrinogen of 1.78 g/L. The trachea was extubated 5 h after the surgery. The patient's postoperative course was uneventful. Postoperative tests, including electrocardiography, complete blood count, serum creatinine, liver function tests, and serum electrolytes, were all within normal limits. At 6 months, no recurrence or extension of the tumor was observed on computed tomography (Fig. 3). Pathological analysis after the operation revealed that the tumor was a mature teratoma. Discussion The incidence of brain tumors in children is high, ranking first among solid tumors of childhood, and about 50% of them are nonmalignant [10]. Brain tumors in children are usually large and more likely to arise in the posterior fossa [11], causing hydrocephalus and high intracranial pressure. Furthermore, emergency surgery is often required, and radical tumor resection is normally quite beneficial to the patient's prognosis [12,13]. However, infants with low body weight face a substantial risk when undergoing resection of giant intracranial tumors (> 5 cm in diameter), because they are prone to hypovolemia owing to their larger surface-to-weight ratio, higher total body water content, limited renal concentrating ability, and greater water loss through thin skin. Conversely, a conventional emergency fluid therapy strategy of liberal supplementation (including blood transfusion) would carry other risks such as pulmonary edema, cerebral edema, and even cerebral herniation. In adults, optimal perioperative fluid management with GDFT has been proven to be an important component of enhanced recovery after surgery pathways [14]. However, investigations have demonstrated that static parameters such as HR, arterial BP, and CVP cannot properly predict fluid responsiveness [4]. Multiple dynamic indices such as SVV, pulse pressure variation (PPV), and systolic pressure variation have been used as key indicators to predict whether an adult patient is fluid responsive. A PPV or SVV over 13% is considered a reliable predictor of fluid responsiveness [15]. However, it is still unclear whether this threshold is also reliable in pediatric patients. Meanwhile, volume responsiveness predictors can be affected by many factors in both adult and pediatric patients [16]. More importantly, many minimally invasive hemodynamic monitoring devices designed for adults, especially calibrated ones, may not be suitable for children. Thus, anesthesiologists are looking for suitable hemodynamic monitoring equipment for children during surgery, and noninvasive equipment may be the ideal choice. Transthoracic bioreactance is a new technique based on the analysis of the frequency variations of a delivered oscillating current traversing the thoracic cavity [17]. Unlike the bioimpedance method, the bioreactance technique analyzes changes in the spectrum of the oscillating currents transmitted through the chest cavity [18]. The stroke volume (SV) is then determined by continuously measuring the phase shifts.
Therefore, this method poses little infection risk and can be used together with other types of monitoring equipment in the OR. Studies have confirmed that devices using the bioreactance technique provide monitoring performance equivalent to thermodilution, pulse contour analysis, and ultrasonic detection [8,19,20]. This monitoring method is simple and easy to apply in children, avoiding the difficulty of arterial puncture. The SVI and SVV based on bioreactance measurement have been found to effectively predict fluid responsiveness in children after craniosynostosis repair [22]. The SVV measured using the bioreactance method has also been found to reliably predict fluid responsiveness in mechanically ventilated children after ventricular septal defect repair [23]. However, the device still has some limitations: parameters may be inaccurate in patients with pulmonary edema, a pacemaker, or aortic stenosis, among other conditions. Li Huang used NICOM, suprasternal USCOM, and esophageal CardioQ to monitor CO and found that CO measured by NICOM was inconsistent with that measured by suprasternal USCOM and esophageal CardioQ during upper abdominal laparotomy (with a surgical retractor in place) and laparoscopic surgery [21]. Currently, there is little reported clinical research on the application of this noninvasive technique to the resection of giant brain tumors in pediatric patients, especially low-weight infants, or on the appropriate SVI values for them. The parameters of children are expected to differ considerably from those of adults because of the physiologic and anatomic characteristics of the pediatric population. Therefore, adult hemodynamic management goals cannot appropriately guide fluid therapy in children. Fortunately, one study showed that the ∆CI and ∆SV induced by PLR can be used to predict fluid responsiveness in children (95% confidence interval lower limits of 0.55 and 0.59, respectively) [24]. Thus, relative values were used in this case to guide fluid management instead of absolute values: the baseline value was used as a reference and a 20% float was accepted. When SVV or SVI fluctuated by more than 20%, rapid infusion was given, with the choice of fluid based on the results of the TEG and coagulation tests. In this case, the operation was completed without using inotropes or vasopressors, thanks to the chief surgeon's precise and gentle technique and the baby's underlying condition. The hemodynamic parameters monitored by NICOM were used to dynamically guide the fluid treatment of this child, so she received sufficient fluid without developing the tissue edema associated with fluid overload, and she came through this high-risk procedure successfully. Conclusion Infants with low body weight usually have a small blood volume and hence a high risk from hemorrhage during brain surgery. Intraoperative monitoring of changes in blood volume is therefore very important but usually difficult. In this report, a successful case using noninvasive evaluation and prompt fluid adjustment was described, illustrating the potential of bioreactance-guided fluid therapy to make anesthesia and surgery safer for future pediatric patients. Ethical approval This work was approved by the Ethics Committee of Tsinghua University Yuquan Hospital. The guardians of the infant agreed to the anonymous publication of her medical records. Consent The guardians of the infant were informed and signed written consent.
2021-12-11T14:07:22.713Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "ecc406c8a575a41957301e1470d169190d0ffda7", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.26599/BSA.2021.9050007", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "ecc406c8a575a41957301e1470d169190d0ffda7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
113592435
pes2o/s2orc
v3-fos-license
Energy harvester using contact-electrification of magnetic fluid droplets under oscillating magnetic field This paper reports a fluidic energy harvester that generates electric power through contact electrification of ferrofluid droplets, which allows power generation using an oscillating magnetic field without vibrating any mechanical structure such as a membrane or cantilever. The proposed device consists of top and bottom plates, each with a conducting electrode coated with a hydrophobic layer, and a water-based ferrofluid droplet. The contact area between the ferrofluid and the solid surface is changed by the magnetic field applied by a magnet, which generates AC output power through contact electrification at the ferrofluid-solid interface. Introduction When the surface of a solid material makes contact with an aqueous solution such as water, an electrical double layer (EDL) is formed at the interface by the interaction of surface charges and their counter-ions in the solution [1][2][3]. Recently, a number of research results regarding electric energy harvesting using the EDL mechanism have been reported. Lin et al. presented energy harvesting devices based on contact electrification between water and a hydrophobic polytetrafluoroethylene (PTFE) surface and the corresponding EDL formation at the interface [4,5]. In addition, it has been demonstrated that electric energy can be harvested by mechanical modulation of the EDL capacitors formed by the contact of water with conducting plates [6,7]. However, those studies used droplets falling onto a substrate from some height, or mechanical vibration of the substrate itself, to create continuous dynamic contact conditions, which hinders packaging of the devices and limits their application to a restricted area. In this paper, we propose and demonstrate an energy harvester using ferrofluid droplets that are deformed by an external magnetic field and consequently provide a dynamically changing contact area at the interface, resulting in continuous charge flow by contact electrification. Proposed device and operational principle Ferrofluids are categorized as oil-based or water-based according to their base medium. When a water-based ferrofluid is in contact with a solid surface, an EDL is formed at the interface. Figure 1 shows the power generation mechanism of the proposed energy harvester using a ferrofluid and an oscillating external magnetic field. The device consists of top and bottom plates, each with a conducting electrode covered by a hydrophobic layer, and a water-based ferrofluid droplet. The ferrofluid droplet is dispensed by a pipette. The ferrofluid droplet in this experiment becomes positively charged while separating from the dispenser [8]; consequently, the electrode surface becomes negatively charged to balance the electrical potential. In the initial state, no current flows because there is no potential difference between the top and bottom electrodes, as shown in figure 1(a). When the magnet moves close to the bottom plate, the ferrofluid is pulled downward and finally separates from the top plate, as shown in figure 1(b) and figure 1(c). During this operation, the contact area with the ferrofluid decreases on the top plate and increases on the bottom plate. As a result, electrons move from the top electrode to the bottom electrode through the load resistance until electrical equilibrium is achieved.
In figure 1(d), the ferrofluid returns to its initial shape, restoring the balance of surface tension on the hydrophobic bottom surface, and makes contact with the top plate again as the magnetic field applied to the ferrofluid decreases when the magnet moves away from the bottom plate. During this step, the positive charges on the ferrofluid draw electrons from the bottom electrode back to the top electrode, reversing the direction of current flow. Continuous output power can be generated by the reciprocating motion of the external magnet. Fabricated device and experimental setup In this experiment, we used gold as the electrode material, as shown in figure 2. The top and bottom electrodes were coated with Teflon (AF 1600) to provide hydrophobic surfaces and prevent stiction of the ferrofluid during operation. A ferrofluid droplet was placed between the two plates. The gap distance between the top and bottom plates was set by acrylic spacers. The fabricated energy harvester was mounted on a z-axis stage. A cylindrical neodymium magnet was fixed on a computer-controlled linear actuator and placed under the device. Figure 3 shows the profile of the open-circuit output voltage from the device with a 1.8-mm gap between the top and bottom plates. Positive and negative output signals were observed as the magnet moved up and down. Furthermore, the magnitude of the output voltage clearly increases with the actuation frequency, because the output is determined by the rate of change of the charge flow with time. Conclusions In conclusion, we have proposed an energy harvester using contact electrification with a water-based ferrofluid operated by an oscillating magnetic field. The modulation of the contact area between the ferrofluid and the electrodes and the resultant power generation were successfully demonstrated. One important advantage of the proposed device is that it eliminates the need to vibrate the device itself to actuate the ferrofluid droplet and harvest electric power, because the shape of the ferrofluid inside the device can be modulated by an external magnetic field.
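As a rough illustration of the operating principle described above, in which the interfacial charge tracks the ferrofluid-electrode contact area so that current flows as that area is modulated, the following sketch estimates an idealized short-circuit current for a sinusoidal area modulation, i(t) = σ · dA/dt. All numerical values (effective interfacial charge density, mean contact area, area swing, and actuation frequency) are assumptions chosen only for illustration and are not taken from the paper.

import math

# Assumed, illustrative parameters (not from the paper):
sigma = 1e-5          # effective interfacial charge density, C/m^2
area_mean = 20e-6     # mean ferrofluid-electrode contact area, m^2 (20 mm^2)
area_swing = 10e-6    # peak area modulation, m^2
freq = 2.0            # magnet oscillation frequency, Hz

def contact_area(t):
    """Sinusoidally modulated contact area A(t) driven by the moving magnet."""
    return area_mean + area_swing * math.sin(2 * math.pi * freq * t)

def short_circuit_current(t, dt=1e-4):
    """i(t) = dQ/dt = sigma * dA/dt, estimated by a central finite difference."""
    dA = contact_area(t + dt) - contact_area(t - dt)
    return sigma * dA / (2 * dt)

# Scan one oscillation period for the peak current magnitude.
peak = max(abs(short_circuit_current(k * 1e-3)) for k in range(int(1e3 / freq)))
print(f"estimated peak short-circuit current ~ {peak * 1e9:.1f} nA")

With these assumed values the peak current is of order a nanoampere (σ · area_swing · 2πf); the point of the sketch is simply that the output scales with both the area swing and the actuation frequency, consistent with the frequency dependence reported for the measured open-circuit voltage.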
2019-04-15T13:05:30.974Z
2015-12-10T00:00:00.000
{ "year": 2015, "sha1": "ae5830e343b4d36be6f1d14ff0e9a6ec523494b6", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/660/1/012108", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a477fbe63dca668ae9708f68c609f2cd9bcb4fbc", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
3186483
pes2o/s2orc
v3-fos-license
The D-TUNA Corpus: A Dutch Dataset for the Evaluation of Referring Expression Generation Algorithms We present the D-TUNA corpus, which is the first semantically annotated corpus of referring expressions in Dutch. Its primary function is to evaluate and improve the performance of REG algorithms. Such algorithms are computational models that automatically generate referring expressions by computing how a specific target can be identified to an addressee by distinguishing it from a set of distractor objects. We performed a large-scale production experiment, in which participants were asked to describe furniture items and people, and provided all descriptions with semantic information regarding the target and the distractor objects. Besides being useful for evaluating REG algorithms, the corpus addresses several other research goals. Firstly, the corpus contains both written and spoken referring expressions uttered in the direction of an addressee, which enables systematic analyses of how modality (text or speech) influences the human production of referring expressions. Secondly, due to its comparability with the English TUNA corpus, our Dutch corpus can be used to explore the differences between Dutch and English speakers regarding the production of referring expressions. Introduction In everyday communication, speakers often produce referring expressions. Such expressions (for example: 'the grey chair') have therefore been studied extensively in research on Natural Language Generation (NLG). NLG is a subfield of Artificial Intelligence and aims to build systems that automatically convert non-linguistic information (e.g. from a database) into coherent natural language text (Reiter & Dale, 2000). Practical applications of NLG include, among others, the automatic generation of weather forecasts (Goldberg et al., 1994;Reiter et al., 2005), and summarization of medical information (Portet & Gatt, 2009). Given the ubiquity of referring expressions in natural language, it is no surprise that NLG systems typically require algorithms that compute distinguishing descriptions to objects (Mellish et al., 2006). Various Referring Expression Generation (REG) algorithms have been proposed, including the Full Brevity Algorithm (Dale, 1989;1992), the Incremental Algorithm (Dale & Reiter, 1995;van Deemter, 2002), and the Graph Algorithm (Krahmer et al., 2003). These REG algorithms, each in their own way, compute how a specific target can be identified to an addressee by distinguishing it from a set of distractor objects. Many REG algorithms aim at generating referring expressions that match human referential behaviour (Dale & Reiter, 1995). Although some of the current REG algorithms generate distinguishing descriptions that are judged to be more helpful and better formulated than human-produced descriptions , their applicability is still limited (Krahmer, 2010). Based on several psycholinguistic studies, Krahmer suggests that REG algorithms base the generation of their target descriptions on the wrong psycholinguistic assumptions. For example, while psycholinguistic research shows that human speakers adapt to their addressee when referring (e.g. Clark & Wilkes-Gibbs, 1986;Brennan & Clark, 1996), most current REG algorithms do not take the addressee into account. Furthermore, while human speakers often overspecify their referring expressions and include more information than is strictly needed for identification (e.g. 
Engelhardt et al., 2006;Pechmann, 1989), none of the current REG algorithms accounts for a systematic way to deal with such referential overspecification. Given the above limitations, it is important to evaluate the performance of the current REG algorithms, and also to further improve the human-likeness of their generated output. Evaluating REG algorithms often occurs against human corpus data, and these data must be semantically transparent: All expressions need to be provided with information regarding the properties of both the target and the distractor objects. Semantic annotation usually occurs in XML format (Gatt, 2007). This format on the one hand permits the automatic generation of logical forms that correspond to human target descriptions, and on the other hand enables direct comparison of human target descriptions with the generated output of REG algorithms (for example in terms of the selected target attributes). Until now, only few semantically transparent corpora that can be used for the evaluation of REG algorithms were collected, and they all have limitations. The MAPTASK CORPUS (Anderson et al., 1991) and the COCONUT CORPUS (Di Eugenio et al., 1998) both consist of dialogues between two participants, but the referring expressions that occur in these corpora are rather specific to the kind of task used for collecting them (direction giving and furniture buying). This makes them less suitable for the evaluation of general REG algorithms (Gatt, 2007). This limitation was addressed by the TUNA corpus 1 , which consists of English written referring expressions that are annotated in such a way that their underlying semantics is made explicit. However, also the TUNA corpus has some crucial limitations. Firstly, the corpus consists of written referring expressions, while speech is arguably the primary modality of communication. Secondly, the referring expressions were not uttered in the direction of an addressee, which contrasts with everyday communicative situations. Thirdly, the TUNA corpus contains only English referring expressions, which disables the possibility to investigate language differences in the production of referring expressions. In order to address the limitations of other corpora, we decided to collect the Dutch D-TUNA corpus. In the current paper we describe the collection and annotation of this corpus, and its applications to psycholinguistic and computational linguistic research on the production of referring expressions. Collection of the corpus In order to collect the D-TUNA CORPUS, we performed a large elicitation experiment in which participants were asked to describe target objects and distinguish them from surrounding objects. This resulted in a corpus of 2400 Dutch referring expressions. Data collection was inspired by the English TUNA experiment . Participants Sixty undergraduate students (14 males, 46 females) from Tilburg University participated in the experiment, either on a voluntary basis or for course credit. All participants (mean age 20.6 years old, range 18-27 years old) were native speakers of Dutch. Materials The materials consisted of forty trials, which all contained one or more target referents and six distractor objects. The target referents were clearly marked by red borders, so that they could easily be distinguished from the distractor objects. For each participant and each trial, the target and distractor objects were positioned randomly on the screen in a 3 (row) by 5 (column) grid. 
In order to manipulate the properties of the target referents, the trials varied in terms of their types of domains and in terms of cardinality. For several reasons, the people domain was the more complex of the two. Firstly, targets in the people domain cannot be distinguished in terms of their type (since they all have 'type = person'). Secondly, the pictures of the persons are arguably more similar to each other than the furniture items, which makes them more difficult to distinguish from the distractor objects. Furthermore, the pictures of people were not as controlled as the artificial pictures in the furniture domain, and hence there may be more information in them that participants may use in their references. Last, the possible descriptions of people are somewhat open-ended, in that there are many unpredictable attributes that can be mentioned. Two types of domains Since speakers need a head noun in their references and therefore always use 'type' in their formulation (Levelt, 1989), trials were built in such a way that the attribute 'type' could never be a distinguishing attribute. 2.2.2. Two levels of cardinality A second manipulation of target properties was that trials differed in terms of cardinality, i.e. the number of target referents that they contained. Twenty trials were singular (SG, ten per domain) and contained one target referent. Furthermore, twenty trials (again ten per domain) were plural (PL) trials containing two target referents. An extra manipulation of the target properties occurred by including two levels of similarity. Plural/similar (PS) trials (five per domain) contained two target objects with identical distinguishing attributes, for example 'the table and the sofa that are both red', where the two target objects are distinguished from the distractors by means of their (shared) red colour. The plural/dissimilar trials (again five per domain) contained two target objects with different distinguishing attributes, for example 'the large fan and the red sofa', where the two target objects are distinguished by means of different attributes: size and colour. Procedure Each participant was presented with the forty trials in a different random order. The experiments were performed individually in an experimental room, with an average running time of twenty minutes. All participants were filmed during the experiment. The participants were asked to describe the target referents in such a way that an addressee could uniquely identify them. In order to manipulate properties of the communicative setting, the participants were randomly assigned to three conditions (text, speech and face-to-face). The text condition was a replication (in Dutch) of the TUNA experiment: participants produced written identifying descriptions of the target referents in the experimental room. In the speech condition and the face-to-face condition, participants were asked to utter their descriptions to an addressee inside the experimental room. The addressee was a confederate of the experimenter, instructed to act as though he understood the references, but never to ask clarification questions. In the instructions, the participants were told that the location of the objects on the addressee's screen had been scrambled; hence, they could not use location. In the face-to-face condition, the addressee was visible to the participants; in the speech condition this was not the case, because a screen was placed in between speaker and addressee.
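To illustrate the kind of algorithmic output against which such human descriptions are later compared, here is a minimal sketch of Incremental Algorithm-style attribute selection (in the spirit of Dale and Reiter, 1995) over a toy furniture domain resembling the trials described above. The attribute names, values, and preference order are illustrative assumptions rather than the actual D-TUNA domain encoding, and the sketch is simplified in that it realizes the head noun ('type') only when it rules out a distractor.

# Minimal sketch of incremental attribute selection: walk through a
# preference-ordered list of attributes and keep any attribute of the target
# that removes at least one remaining distractor.

PREFERENCE_ORDER = ["type", "colour", "size"]   # illustrative ordering

def incremental_description(target, distractors):
    description = {}
    remaining = list(distractors)
    for attr in PREFERENCE_ORDER:
        value = target.get(attr)
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                      # attribute has discriminatory power
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                  # target is uniquely identified
            break
    return description

target = {"type": "chair", "colour": "grey", "size": "large"}
distractors = [
    {"type": "chair", "colour": "red",  "size": "large"},
    {"type": "sofa",  "colour": "grey", "size": "small"},
]
print(incremental_description(target, distractors))
# -> {'type': 'chair', 'colour': 'grey'}, i.e. a description like 'the grey chair'

Comparing such generated attribute sets with the annotated attribute sets of the human descriptions (for example, counting attributes the algorithm omits but speakers include) is one way the corpus supports the evaluation of REG algorithms.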
Experimental design The experiment had a 2 x 2 x 3 design. Data annotation The 2400 (3 x 20 x 40) identifying descriptions of the D-TUNA corpus were all semantically annotated using an XML annotation format: they were provided with information regarding attributes of both the target and distractor objects. For this annotation, we used the XML annotation scheme of the TUNA corpus (Gatt, van der Sluis & van Deemter, 2008b). The annotation tool Callisto was used for the annotation of the expressions. An example of an XML file of a reference to the target shown in figure 1 is depicted in figure 3. In this expression, the target is (in Dutch) referred to as 'De man met een witte baard en zonder bril' (meaning 'The man with the white beard and without glasses'). All XML files consist of a trial node, containing a trial ID and the specific conditions under which the expression was produced (such as domain, modality and cardinality). Furthermore, each trial node subsumes four nodes: a domain node, a string-description node, a description node and an attribute-set node. • The DOMAIN node contains a representation of the domain of the particular trial and consists of seven entity nodes: one or two target entities (depending on cardinality) and five or six distractor entities. Each entity node depicts a list of properties of the particular entity. • The STRING-DESCRIPTION node contains the full target description, as produced by the participant. • The DESCRIPTION node contains the annotated version of the target description. All determiners and content words that are part of the string description were provided with the attributes that they represent. For example, the adjective 'witte' (meaning 'white') corresponds to the attribute <hair colour: light>. In case a participant mentioned an attribute that was not present in the domain at all (e.g. 'the laughing man'), the attribute 'laughing' was annotated as <other: other>. • The ATTRIBUTE-SET node contains an overview of all properties that are mentioned in the string description and thus represents the flat semantic structure of the referring expression. Applications The D-TUNA corpus can be used in computational linguistic and psycholinguistic studies on the production of referring expressions. The D-TUNA corpus is a useful tool in computational linguistic research on the generation of referring expressions, since its semantic annotation in XML format permits using the referring expressions as input for REG algorithms. In line with Gatt et al. (2009), who used the English TUNA corpus to evaluate and compare the performance of several REG algorithms, Theune et al. (2010) used the Dutch references of the D-TUNA corpus as input for the Graph Algorithm (Krahmer et al. 2003). Since the data collection of the Dutch D-TUNA corpus was inspired by the data collection of the English TUNA corpus, it is possible to explore the differences between Dutch and English speakers regarding the production of referring expressions. For example, Koolen et al. (2010) used the two corpora to compare Dutch and English referring expressions in terms of overspecification. They found roughly similar patterns for references in the two languages regarding which and how many redundant target attributes they contain. In line with Theune et al. (2010), this suggests that our Dutch corpus can be used to train and improve non-Dutch REG algorithms. Furthermore, the D-TUNA corpus is a useful tool in psycholinguistic research on human referring behaviour.
Since it contains both written and spoken references that are produced for an addressee, the D-TUNA corpus enables systematic analyses of how modality (text or speech) influences the human production of referring expressions. For example, Koolen et al. (2009) used the corpus to explore which factors cause speakers to overspecify their referring expressions. They found that references to plural targets uttered in the complex people domain contain more redundant target attributes than references to singular targets uttered in the simple furniture domain. Koolen et al. also found that written and spoken referring expressions do not differ in terms of redundancy, but do differ in terms of the number of words they contain: speakers need more words to provide the same information than people who type their expressions. Conclusion We have presented the D-TUNA corpus, which is the first semantically annotated corpus of referring expressions in Dutch. Due to the XML annotation format, the corpus can be used for evaluating and improving the performance of REG algorithms. Furthermore, due to its comparability with the English TUNA corpus, our Dutch corpus can be used to explore the differences between Dutch and English speakers regarding the production of referring expressions. Last, the D-TUNA corpus is a useful tool in psycholinguistic studies on human referential behaviour. Acknowledgements We thank Albert Gatt and Martijn Goudbeek for their technical support during the annotation of the corpus, and three anonymous reviewers for their comments concerning our abstract.
2015-07-20T18:47:21.000Z
2010-05-01T00:00:00.000
{ "year": 2010, "sha1": "1746acabca4afa9a8899aca2bc228a730c01b771", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ACL", "pdf_hash": "2ef8783505d32f827c4d57f8cf4412aa58244897", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science", "Psychology" ] }
44686449
pes2o/s2orc
v3-fos-license
Strategies of highly pathogenic RNA viruses to block dsRNA detection by RIG-I-like receptors: Hide, mask, hit Highlights • dsRNA species are byproducts of RNA virus replication and/or transcription. • Prompt detection of dsRNA by RIG-I-like receptors (RLRs) is a hallmark of the innate immune response. • RLR activation triggers production of the type I interferon (IFN)-based antiviral response. • Highly pathogenic RNA viruses encode proteins that block the RLR pathway. • Hide, mask and hit are 3 strategies of RNA viruses to avoid immune system activation. 1. Introduction 1.1. Viral commandment: stay unseen, do not let your dsRNA be detected The intrinsic innate immune system is the primary line of defense against virus invasion, which detects the presence of a virus in an infected cell through a set of proteins named pathogen recognition receptors (PRRs). PRRs recognize viral components, both proteins and nucleic acids, as signatures of "non-self" that are the sign of an infection in place and are named pathogen-associated molecular patterns (PAMPs). Upon PAMP recognition, PRRs initiate a signaling cascade which culminates with the induction of type I alpha/beta interferons (IFN-α/β). These cytokines act in both an autocrine and paracrine manner to promote viral clearance or induce apoptosis in infected cells, to establish an antiviral state in non-infected cells and to modulate the priming of optimal antigen-specific T cell and antibody adaptive responses (Randall and Goodbourn, 2008). Viral PAMPs mainly consist of nucleic acids that may originate from the uncoating process of newly-infecting virions, the transcription of viral genes and the replication of genomic intermediates. Among these, double-stranded RNA (dsRNA) moieties, especially those with the atypically-featured 5′-triphosphate (5′-ppp) termini, have no homologues in the cytoplasm and therefore represent the most efficient triggers for PRR activation (Gerlier and Lyles, 2011). Given the central dsRNA role in stimulating IFN-α/β induction, a subset of PRRs is specifically dedicated to its detection and, depending on their cell compartment localization, can be ascribed to two families. dsRNA recognition in endosomes, lysosomes and even at the extracellular surface is accomplished by Toll-like receptor 3 (TLR-3), a member of the TLR family (Barbalat et al., 2011), while dsRNA recognition in the cytosolic environment is carried out by RNA helicases belonging to the retinoic acid-inducible gene I (RIG-I)-like receptors (RLRs) family (Wilkins and Gale, 2010), named after the first characterized member, RIG-I (Yoneyama et al., 2004). Because RNA viruses tend to accumulate nucleic acid intermediates and byproducts in the host cytoplasm during their replication cycle, preventing recognition by host PRRs of such PAMPs is of crucial importance. Therefore, most, if not all, RNA viruses encode proteins that display IFN-antagonism properties aimed to circumvent the host innate immune system. Among those proteins, most have been revealed as involved in targeting the RLR pathway at several levels (Versteeg and Garcia-Sastre, 2010). The impact of this counteraction becomes particularly evident for those RNA viruses that cause severe diseases in humans, since in several cases the fatal outcome has been related to their ability to subvert the type I IFN-mediated innate immune response (Bray, 2005).
It is therefore not surprising that, among the different viral proteins identified as inhibitors of IFN-α/β production and signaling, most are also molecular determinants of virulence and pathogenesis (Bowie and Unterholzner, 2008; Versteeg and Garcia-Sastre, 2010). This review focuses on the strategies adopted by several highly pathogenic RNA viruses to suppress type I IFN induction at the level of dsRNA detection by RLRs. Although they rely on the extremely varied arsenal found in the biodiversity of viral proteins, these strategies can be categorized into three main lines of action: (i) hiding the dsRNA to make it inaccessible to RLRs, (ii) masking the dsRNA PAMP signatures to avoid their recognition by RLRs, (iii) hitting the components of the RLR pathway to destroy their functionalities (Fig. 1). As representatives of the three different strategies, some prominent human RNA viruses have been chosen here on the basis of their lethality, their status of emerging or re-emerging pathogens and the substantial lack of therapeutics to counter the deadly diseases they cause. In light of recent functional, biochemical and structural findings, the mechanisms of action by which these virally-encoded IFN-antagonists cause a failure in RLR activation and signaling are discussed. The origin of dsRNA as a viral PAMP The existence of dsRNA as a byproduct of replication and transcription steps in the replication cycle of all viruses was proposed about half a century ago (Field et al., 1972a; Montagnier and Sanders, 1963) and it was hypothesized that it could serve as a potent stimulus for IFN-α/β induction (Field et al., 1970). Accordingly, it was demonstrated that the addition of dsRNA, of either viral (Nemes et al., 1969; Tytell et al., 1967) or synthetic origin (Field et al., 1972b), to the culture medium of several cell lines, as well as its intracellular delivery, resulted in a robust type I IFN response (Marcus, 1983). Thereafter, several studies aimed to dissect the cellular IFN-based antiviral response by using polyinosinic:polycytidylic acid (polyI:C) to mimic the dsRNA intermediates that are present during viral infections.
Fig. 1. Schematic representation of RLR-mediated type I IFN induction and viral strategies aimed at its suppression. Recognition of viral dsRNA by RLRs leads to the production of type I IFN-α/β, triggering an innate immune antiviral response. Viruses prevent it by processing dsRNA (Mask), sequestering dsRNA (Hide) or physically interacting with the host proteins involved in the RLR pathway (Hit).
After more than two decades, such efforts led to the identification of TLR3 as the receptor responsive to dsRNA internalized by endocytosis (Alexopoulou et al., 2001) and, later, to the discovery of RLRs as the class of molecules that detects dsRNA in the cytoplasm (Andrejeva et al., 2004; Yoneyama et al., 2004, 2005). Indeed, how and when base-paired RNA sequences were generated and/or were freely exposed during infection by different viruses has represented the main question of several studies. On the one hand, the use of polyI:C as a viral mimetic, while demonstrating that the double-helical nature of RNA is itself a potent IFN inducer (Hausmann et al., 2008), somehow depicted an artificial recognition process by RLRs.
On the other hand, while viral dsRNAs of genomic length were able to activate the RLR pathway to induce IFN-α/β (DeWitte-Orr et al., 2009; Kato et al., 2008), immunofluorescence analysis with a dsRNA-specific monoclonal antibody showed detectable amounts of dsRNA only in cells infected by some positive-stranded (+)-RNA viruses, whereas no dsRNA was detected in cells infected by negative-stranded (−)-RNA viruses. Currently, apart from dsRNA viruses for which the dsRNA PAMP is the viral genome itself, it is generally assumed that other viruses produce this PAMP as a replicative byproduct. In DNA viruses, for example, dsRNA may be generated through the self-annealing of convergent bidirectional transcription products. (+)-RNA viruses, whose single-stranded genomes directly serve for the translation of viral proteins, generate (−)-ssRNA antigenome templates and newly-synthesized (+)-ssRNA genomes during their replication, and both these products are susceptible of base-pairing into dsRNA with their complementary template strand (Ahlquist, 2006). (−)-RNA viruses transcribe monocistronic mRNAs from every gene and during the replication step synthesize viral genomes and antigenomes from a template that is tightly encapsidated by nucleoproteins. This encapsidation occurs concomitantly to RNA synthesis, since it is strictly required by the viral RNA-dependent RNA polymerase (RdRp) complex for its attachment and progression along the template (Morin et al., 2013). As a result, RNA encapsidation also prevents double-strand formation, since no genomic or antigenomic intermediates are normally found naked in the cytoplasm and they therefore cannot anneal to each other to produce dsRNA moieties (Gerlier and Lyles, 2011). Notably, prevention of dsRNA formation may also involve the recruitment of host factors, as the downregulation of the cellular helicase UAP56, which was found to unwind self-paired transcripts from influenza A virus (IAV), resulted in cytosolic accumulation of viral dsRNA (Wisskirchen et al., 2011). However, viral replication and transcription are both processes susceptible to errors, which may lead to occasional production of defective transcripts and abortive 5′-genome and antigenome ends that, annealing to each other or self-pairing into panhandle-like secondary structures, become ideal IFN-inducing viral PAMP candidates (Gerlier and Lyles, 2011). In line with this, defective interfering (DI) RNAs generated through genome copyback during Sendai virus (SeV) infection, as well as subgenomic DI particles from IAV, were found to potently activate the IFN response and act as RLR ligands (Strahle et al., 2006; Baum et al., 2010). Proteins having dsRNA-binding motifs recognize this nucleic acid in a sequence-independent manner and on the basis of its A-type helical grooves and negative charge (Chang and Ramos, 2005; Tian et al., 2004). Moreover, the presence of an RNA double helix is essential for RLRs to induce a type I IFN-based response, as ssRNA molecules which did not undergo self-pairing and form double-stranded secondary structures were unable to bind and activate RIG-I (Schlee et al., 2009; Schmidt et al., 2009).
However, the fact that some endogenous short dsRNAs of annealed antisense transcripts and microRNA precursors with secondary stem-loops - which are occasionally present at low levels in mammalian cells - do not elicit any IFN response demonstrated that RNA double-strandness alone may not be sufficient for proper RLRs stimulation (Lehner et al., 2002;Marques et al., 2006;Werner, 2005). In fact, other features are also important, as the ability of viral dsRNA species to trigger a type I IFN-induced antiviral response is dictated by their atypical length, abundance, cytoplasmic localization (DeWitte-Orr et al., 2009;Kato et al., 2008) and, only for RIG-I, by the presence of a 5′-ppp at their termini (Hornung et al., 2006;Pichlmair et al., 2006;Schlee et al., 2009;Schmidt et al., 2009). 5′-ppp moieties originate from the first-added nucleotide during the primer-independent initiation of viral RNA synthesis, within both transcription and replication processes (Choi, 2012). At the end of transcription, covalent attachment of a cap structure assures the complete removal of the 5′-ppp (Decroly et al., 2011), whereas the triphosphates at 5′ genome termini are either processed or enwrapped by viral nucleoproteins into homo-polymeric nucleocapsids (Choi, 2012). However, 5′-ppp groups may be inadvertently exposed to the cytosolic environment in byproducts such as leader RNAs, as well as in defective replicative intermediates that remain uncapped (Gerlier and Lyles, 2011). In addition to viral genome replication and transcription, the uncoating process to which viral nucleocapsids are subjected immediately after infection represents another critical moment for the recognition of 5′-ppp and panhandle structures. In line with this, a recent study showed that RIG-I was able to physically interact with, and to be activated by, nucleocapsids from several negative-stranded (−)-RNA viruses, independently of viral RNA synthesis. Thus, even though it remains to be shown whether such interaction involves intact nucleocapsids, or rather implies the exposure of dsRNA tracts accessible within them, these findings establish the cytosolic release of incoming virions as the possible earliest event for RIG-I detection of viral dsRNA. In summary, byproducts of viral replication and transcription (Rehwinkel et al., 2010) or DI RNAs (Baum et al., 2010;Strahle et al., 2006), as well as unmodified 5′-ppp ends and dsRNA panhandle structures that come from newly-infecting virions, are all theoretically prone to provide double-stranded structures, originated through self-pairing or annealing with complementary sequences. Therefore, it is conceivable that all of these together represent the overall source of dsRNA ligands for RLRs activation.

RLRs share a similar structural organization (Fig. 2), consisting of an RNA-binding C-terminal domain (CTD) (Takahasi et al., 2008, 2009) and a central DExD/H-box helicase domain comprising two RecA-like helicase domains, namely Hel1 and Hel2, separated by the insert domain Hel2i (Luo et al., 2011). In RIG-I and MDA5, but not in LGP2, the helicase domain is preceded by two tandemly-repeated N-terminal caspase activation and recruitment domains (CARDs) (Yoneyama et al., 2005). Unlike other members of the SF2 helicases, which bind one strand of the nucleic acid to unwind its double helix (Fairman-Williams et al., 2010), RLRs have double-strand specificity and do not seem to display unwinding activity.
Indeed, a weak unwinding activity was initially described for a RIG-I deletion mutant (Marques et al., 2006) as well as for the full-length protein in the presence of dsRNA substrates with 3′ overhangs longer than 5 nt (Takahasi et al., 2008). However, lack of RIG-I helicase activity was reported by subsequent studies (Myong et al., 2009;Jiang et al., 2012). Instead, RLRs bind to both dsRNA strands and translocate onto the duplex in an ATP hydrolysis-dependent manner (Myong et al., 2009). According to the current model (schematically depicted in Fig. 3), physiologically inactive RIG-I is found in an auto-inhibited state, in which the two CARDs are kept engaged in intramolecular interactions with the Hel2i domain by a repressor pincer motif located between the CTD and the helicase domains (Kowalinski et al., 2011). Upon dsRNA binding to the CTD (Civril et al., 2011), the pincer motif is relieved from the CARDs, which are then displaced from Hel2i, allowing the latter to bind dsRNA (Luo et al., 2011). Following dsRNA enwrapping by the Hel domains, RIG-I dimerization and activation of its ATPase activity take place. ATP hydrolysis propels the translocation of the RIG-I dimer along dsRNA (Myong et al., 2009) and an overall conformational change releases the N-terminal CARDs for signaling. Freely exposed RIG-I CARDs then interact with unanchored Lys-63-linked poly-ubiquitin (poly-Ub) chains (Zeng et al., 2010). Next, RIG-I homo-oligomerization and ATPase-driven translocation along dsRNA take place (Patel J.R. et al., 2013), leading to additional conformational changes that provide the signaling platform for interaction with the CARD of the IFN-b promoter stimulator 1 (IPS-1) (Kawai et al., 2005). This is a mitochondrion-associated protein, now called mitochondrial antiviral signaling protein (MAVS) (Seth et al., 2005), formerly also known as IFN-b inducing CARD adaptor (Cardif) (Meylan et al., 2005) or virus-induced signaling adaptor (VISA) (Xu et al., 2005).

Fig. 2. RIG-I, MDA5 and LGP2 share an overall domain organization that consists of two tandemly-repeated N-terminal CARDs (orange boxes, absent in LGP2) followed by a Hel domain (green box in RIG-I and MDA5, blue box in LGP2) and a C-terminal RNA-binding CTD (violet, azure and cyan boxes in RIG-I, MDA5 and LGP2, respectively).

Fig. 3. Activation of the type I IFN antiviral response triggered by the RLRs pathway upon dsRNA detection. Viral short 5′-ppp dsRNA and long dsRNA are preferentially recognized by the CTD of RIG-I (violet) and of MDA5 (azure), respectively, with LGP2 modulating the activity of the RIG-I helicase. Upon dsRNA binding to their Hel domain (green), ATP-mediated homo-oligomerization, translocation onto dsRNA, and TRIM25- or Riplet-mediated ubiquitination, RLRs interact through their CARDs (orange) with the CARD of mitochondrion-associated MAVS (red). Signaling progression involves recruitment of the TRAF3, NEMO and STING adaptors and the assembly of the TBK1-IKK-e complex, which phosphorylates IRFs. Activated IRF dimers translocate to the nucleus and, together with other transcription factors, induce the expression of IFN-a/b. Type I IFNs are secreted and bind to their cognate receptor, activating STAT transcription factors for the induction of several ISG products with antiviral activity and the overexpression of RLRs pathway components.
Essential for the progression of immune signaling, the redistribution of RIG-I from the cytosol to mitochondrial membranes is sustained by the formation of a translocon that involves the mitochondrion-targeting chaperone 14-3-3e and the tripartite motif 25 alpha (TRIM25a) E3 ligase. Furthermore, in addition to RIG-I activation via unanchored polyubiquitin chains, Lys-63-linked polyubiquitination of the RIG-I CARDs by TRIM25a was found to be important for the interaction with MAVS (Gack et al., 2007). In turn, such activity of TRIM25a requires another E3 ligase, namely the RING finger protein leading to RIG-I activation (Riplet) (Oshiumi et al., 2009, 2010), whose polyubiquitination of the RIG-I CTD facilitates the release of the CARDs and their association with TRIM25a (Oshiumi et al., 2013). The mechanism described thus far, even though proposed as a general model for dsRNA recognition by RLRs, is mostly based on the characterization of RIG-I. However, the activation and signaling of MDA5 show some differences. For instance, the CARDs are not sequestered in the MDA5 resting form, and its activation - for which both ATPase activity and Lys-63-linked ubiquitination are also important - results in the cooperative assembly of MDA5 protomers into helical filaments along dsRNA (Jiang et al., 2012;Peisley et al., 2011). Next, progression of the pathway involves MAVS prion-like polymerization (Hou et al., 2011) and the recruitment of two adaptors, tumor necrosis factor (TNF) receptor-associated factor 3 (TRAF3) and the nuclear factor-kB (NF-kB) essential modifier (NEMO), which connect and regulate signaling between the upstream sensory domain of MAVS and the downstream complex formed by TRAF family member-associated NF-kB activator (TANK)-binding kinase 1 (TBK-1) and inducible IkB kinase epsilon (IKK-e) (Oganesyan et al., 2005;Zhao et al., 2007;Belgnaoui et al., 2012;Wang et al., 2012). The two kinases TBK-1 and IKK-e carry out the phosphorylation of the latent IFN regulatory factors 3 and 7 (IRF-3/7), leading to their dimerization and nuclear translocation (Fitzgerald et al., 2003). It is worth noting that the recruitment of RIG-I to MAVS and the interaction of the TBK1-IKK-e complex with MAVS are two critical events for IRF phosphorylation. In particular, the latter was found to involve a newly-identified adaptor that acts as an RLR signaling enhancer, independently discovered by three research groups and variously termed stimulator of interferon genes (STING), mediator of IRF-3 activation (MITA) and endoplasmic reticulum IFN stimulator (ERIS) (Ishikawa and Barber, 2008;Sun et al., 2009;Zhong et al., 2008). Once in the nucleus, IRF-3 and IRF-7 homo- or hetero-dimers associate into an enhanceosome with other transcription factors, such as cAMP response element-binding (CREB) binding protein/p300, NF-kB, activating transcription factor 2 (ATF-2) and c-Jun, to drive the transcription of IFN-a/b genes (Panne et al., 2007). In turn, secreted type I IFNs act on the same cell, or on neighboring ones, by binding to their ubiquitous cognate receptors and activating a signaling cascade that ultimately leads to the expression of hundreds of IFN-stimulated genes (ISGs) with antiviral properties (Sadler and Williams, 2008).
Some ISGs encode for effectors that are able to directly interfere with the viral replication cycle, such as the interferon-induced transmembrane (IFITM), ribonuclease L (RNase L), 2 0 , 5 0 -oligoadenylate synthetase (OAS), dsRNA-dependent protein kinase R (PKR), Viperin, myxovirus resistance 1 (Mx1) and interferon stimulated gene 15 (ISG15) proteins . In addition, since among the ISGs products are also the PRRs, their adaptor molecules and transcription factors such as the IRFs, the expression of these proteins determines an amplification loop that increases both IFN-a/b and ISGs production, thereby establishing an overall antiviral state to limit virus spread and eventually clear the infection (Sadler and Williams, 2008). RLRs: differential recognition of dsRNA agonists RIG-I and MDA5 helicases have been shown to preferentially sense different types of dsRNA PAMPs, based on their length, bluntness and end overhangs. RIG-I optimal ligands are short 5 0 -pppcontaining dsRNA molecules less than 20 bp in length (Lu et al., 2010;Jiang et al., 2011;Wang et al., 2010) and, while it can also bind 5 0 -hydroxyl (OH) blunt-ended dsRNA Luo et al., 2011;Kowalinski et al., 2011), the presence of the 5 0 -ppp group is essential for its proper activation (Hornung et al., 2006;Pichlmair et al., 2006;Rehwinkel et al., 2010). The 5 0 -ppp recognition is imparted by a set of key basic residues that form a positively-charged pocket within the RIG-I CTD, thus allowing efficient accommodation of the three phosphates Lu et al., 2010;Takahasi et al., 2009). However, even though RIG-I recognizes 5 0 -ppp dsRNA by end-capping it Luo et al., 2011), the 5 0 -ppp feature alone is not sufficient for efficient IFN induction, and at least small regions of dsRNA terminal base-pairing are required for proper interaction with the helicase . In particular, double-strandness must encompass the 5 0 nucleotide carrying the triphosphate and 5 0 overhang at the 5 0 -ppp end abolishes RIG-I activity (Schlee et al., 2009). Instead, 3 0 overhangs are tolerated, although decreasing RIG-I binding (Schlee et al., 2009) or supporting its ATPase activity without inducing IFN response . Moreover, despite the fact that 5 0 -monophosphate and 5 0 -OH groups at dsRNA ends are correlated with diminished RIG-I binding affinity and IFN induction (Schlee et al., 2009;Schmidt et al., 2009), such detrimental effect of 5 0 -ppp lack tends to vanish as the length of dsRNA substrate increases. In fact, provided the substrate is long enough, multiple internal initiation binding sites can compensate for the absence of the 5 0 -ppp at dsRNA ends . In agreement with the cooperative oligomerization of RIG-I along dsRNA, whose efficiency is in turn correlated with the strength of IFN induction (Patel J.R. et al., 2013), such flexible behavior allows RIG-I to integrate the two distinct signals of high-affinity motif and dsRNA length, thereby displaying the highest possible sensitivity in response to diverse PAMPs to be detected . Regarding MDA5, its optimal ligands were initially identified as blunt-ended dsRNA moieties more than 1 kbp in length (Kato et al., 2008), while a subsequent study showed that this helicase is activated by an RNA web formed by branched dsRNA stretches (Pichlmair et al., 2009). 
Moreover, such high molecular weight moieties are bound by MDA5 regardless of the nature of the dsRNA 5 0 -ends, since the MDA5 CTD has a flat basic surface that lacks the RIG-I-like 5 0 -ppp-lodging pocket and does not cap dsRNA terminus (Li et al., 2009a;Takahasi et al., 2009). Instead, MDA5 binds to the dsRNA backbone parallel to the helix axis and multimerizes into helical filaments along dsRNA . Less is known about LGP2, the third member of the RLRs, which was shown to bind blunt-ended dsRNA in an ATP hydrolysis-dependent manner (Bruns et al., 2013;Murali et al., 2008) and regardless the presence of 5 0 -ppp (Li et al., 2009b;Pippig et al., 2009). Displaying a complex phenotype, LGP2 plays negative as well as positive roles in IFN induction. In fact, it negatively regulates RIG-I (Rothenfusser et al., 2005;Yoneyama et al., 2005) while it was shown to facilitate viral dsRNA recognition (Bruns et al., 2013;Satoh et al., 2010) and to act as positive regulator of MDA5, with which it co-operates in response to polyI:C stimulation (Childs et al., 2013). The observed non-redundant abilities of RLRs to bind to diverse RNA ligands reflect their differential capacity to recognize various viral groups. In fact, while RIG-I preferentially recognizes 5 0 -pppcontaining self-paired panhandle RNAs, such those exhibited by several (À)-ssRNA viruses, (+)-ssRNA viruses that produce long blunt-ended RNA duplexes are mostly detected by MDA5. Notably, these patterns partially overlap, as some viruses are recognized by both RLRs (Loo et al., 2008;Kato et al., 2006). Hide, mask, hit: three viral strategies to block RLRs dsRNA detection Obeying the ultimate goal of replicating in the host cell to generate new infectious virions, viral pathways include functionalities aimed to circumvent, downregulate or even totally suppress IFN-a/ b induction, IFN-a/b signaling or ISG function (Versteeg and Garcia-Sastre, 2010). Any of the several components involved in the different phases of the innate antiviral response can be targeted by viruses. However, the inhibition of IFN-a/b production achieved by blocking the RLR pathway in its upstream portion, at the level of dsRNA recognition, is a target shared by most viral antagonists, further evidencing the potency of such PAMP in triggering a robust innate immune response (Bowie and Unterholzner, 2008). RNA viruses, of which several are pathogenic to humans, have developed a plethora of solutions to avoid dsRNA-mediated activation of RLRs. These can be ascribed to three strategies, hereby named as 1. Hide, that indicates dsRNA sequestration to prevent its binding to RLRs, and is achieved either by viral proteins competing with RLRs for dsRNA binding or by shielding dsRNA through its compartmentalization within the cytoplasm (Fig. 4A); 2. Mask, that relies on the ability of viral proteins to process double-helix and 5 0 -ppp termini, removing the PAMP signatures recognized by RLRs (Fig. 4B); 3. Hit, that involves physical interaction between viral proteins and either RLRs or their downstream adaptors to suppress their functionalities (Fig. 5). Due to the multifunctional nature of the viral proteins involved in these strategies, a certain degree of redundancy is present, with a given virus concomitantly adopting more than one strategy at a time through the action of one or more viral IFN-antagonists. 
Notably, thanks to the recent availability of crystallographic structures of several viral proteins bound to their dsRNA ligands, some molecular details of the dsRNA Hide strategy are beginning to be unraveled. Conversely, the direct Hit towards RLRs, or their effectors, appears to be the most multifaceted strategy, given the high number of cellular components involved, so that in most cases the mechanism through which viral antagonists exert such inhibition still remains to be elucidated. According to these three strategies, we review here the solutions adopted by a number of RNA viruses that are highly lethal to humans and for which effective medical countermeasures are currently lacking (Table 1).

Arenaviruses

Arenaviruses are mostly rodent-borne zoonotic pathogens with bi-segmented, ambisense ssRNA genomes that belong to the single genus Arenavirus of the family Arenaviridae and are the largest and most widely distributed group of viruses causing hemorrhagic fever in humans (Charrel et al., 2011). On the basis of their antigenic properties, genetic relationships and geographical endemicity, members of this group are classified into Old World (OW) and New World (NW) arenaviruses. Among them, the OW Lassa fever virus (LASFV) and Lujo virus (LUJV), as well as the NW Chapare virus (CHPV), Guanarito virus (GTOV), Junin virus (JUNV), Machupo virus (MACV), Sabia virus (SABV) and Whitewater Arroyo virus (WWAV), have all been associated with fatal disease in humans (Briese et al., 2009;Charrel et al., 2011;Delgado et al., 2008). 5′-ppp dsRNA moieties belonging to arenaviral genomes have been shown to activate the RLR pathway (Habjan et al., 2008;Zhou et al., 2010;Huang et al., 2012). Moreover, suppression of the type I IFN response in cell culture assays by both OW and NW arenaviruses has been reported and related to the properties displayed by two of the four proteins encoded by the arenaviral genome: the nucleocapsid protein NP and (only for NW arenaviruses) the matrix protein Z (Carnec et al., 2011;Groseth et al., 2011;Huang et al., 2012;Martinez-Sobrido et al., 2006, 2007;Müller et al., 2007).

Mask

The arenaviral NP is a multifunctional protein with essential roles in the replication and transcription of viral RNA (Hass et al., 2004;Pinschewer et al., 2003), in the encapsidation of the viral genome into a ribonucleoprotein (RNP) complex (López et al., 2001) and, together with the Z protein, in virion assembly (Eichler et al., 2004). Early studies identified NP as sufficient for the inhibition of IRF-3 phosphorylation and the consequent suppression of IFN induction, thereby highlighting its nature as an innate immune antagonist. Recently, structural and functional characterization of LASFV NP revealed that this property relies on the ability of the protein to process viral dsRNA with a 3′-5′ exoribonuclease activity (Fig. 6A). In fact, while the N-terminal portion of NP folds into a domain important for mRNA capping during transcription, a large cavity within its C-terminal domain exhibits a dsRNA 3′-5′ exoribonuclease activity, with a catalytic site comprising residues D389, E391, D466, D533 and H528 and structurally resembling the active sites of the DEDDH exonuclease superfamily (Qi et al., 2010).
Notably, these amino acids reside within the same NP region (residues 370-553) mapped for the anti-IFN activity of the OW lymphocytic choriomeningitis virus (LCMV), which contains critical residues along its DIEGR motif (residues 382-386) as well as at positions D459, H517 and D522 (Jiang et al., 2013;Martinez-Sobrido et al., 2009;Qi et al., 2010). Alanine point mutation at each position of the LASFV NP 3 0 -5 0 exoribonuclease active site, as well as at the two additional resi-dues G392 and R492, resulted in markedly reduced exoribonuclease activity, without affecting the NP cap-binding function (Hastie et al., 2011;Qi et al., 2010). Notably, two of these residues were found to be critical for IFN suppression (Martinez-Sobrido et al., 2009), as the substitution into alanine of all of them caused complete inability of LASFV NP to inhibit SeV-or polyI:C-induced activation of the IFN-a/b promoter (Hastie et al., 2011;Jiang et al., 2013;Qi et al., 2010). Unlike other known DEDDH exonucleases, which split dsDNA to start digestion from an unpaired 3 0 -terminus, LASFV NP binds to dsRNA without unwinding its strands and, regardless of whether it bears overhanging or blunt-ended termini, with or without 5 0ppp, it readily digests the 3 0 -strand in the 5 0 -direction. (Hastie et al., 2011;Jiang et al., 2013;Qi et al., 2010). It is therefore possible to envision the possibility that such processing removes from the viral dsRNA any PAMP hallmark that would be otherwise recognized by RLRs. The fact that critical residues for dsRNA exoribonuclease-mediated IFN inhibition are highly conserved among the NPs of all arenaviruses strongly suggests that this form of the Mask strategy is widely shared within the family (Harmon et al., 2013;Jiang et al., 2013;Qi et al., 2010). Hit As a remarkable example of IFN-antagonism redundancy, the same C-terminal NP region that is responsible for dsRNA masking was also found to exert a Hit strategy in both OW and NW arenaviruses. In fact, confocal microscopy and co-immunoprecipitation (co-IP) analyses have revealed that arenaviral NPs bind to the kinase domain of IKK-e, blocking its autocatalytic activity and inhibiting the phosphorylation of IRF-3 substrate. IKK-e sequestration by arenaviral NPs impedes the interaction between this kinase and its mitochondrial partner MAVS, thereby shutting down signaling along the RLR pathway (Pythoud et al., 2012). Importantly, no interactions were observed between NP and other components of the pathway, such as TBK-1, MAVS, TRAF3 and even RIG-I or MDA5 (Pythoud et al., 2012), although for the latter another study reported a co-IP with the NP of LCMV (Zhou et al., 2010). Notably, the NP-IKK-e interaction is disrupted by mutating the same NP residues that are critical for its 3 0 -5 0 exoribonuclease activity, suggesting the existence of an overlap in the domains responsible for the two IFN-antagonist functions (Pythoud et al., 2012). In the case of the NW arenaviruses, a Hit strategy against the RLRs pathway is also carried out, through the function exerted by the Z matrix protein, a small RING finger polypeptide that acts as a co-factor of the viral polymerase complex and also mediates both RNP incorporation into virions and their release from the plasma membrane (Fehling et al., 2012). 
Overexpression of the JUNV Z protein blocked IRF-3 and nuclear factor-kappa-B (NF-kB) phosphorylation and nuclear translocation, and decreased IFN-b mRNA levels in response to both SeV and 5′-ppp dsRNA stimulation (Fan et al., 2010;Martinez-Sobrido et al., 2006). Furthermore, confocal microscopy and co-IP revealed that the Z proteins of GTOV, JUNV, MACV and SABV, but not of LCMV and LASFV, are able to interact with RIG-I, preventing its recruitment to MAVS for downstream signaling (Fan et al., 2010). The mechanism of such an inhibiting interaction, as well as the molecular details in terms of the domain(s) and critical residues involved in the Z protein, remain to be elucidated.

Bunyaviruses

Viruses of the Bunyaviridae family have a tripartite RNA genome with a negative or ambisense coding strategy and are classified into the five genera Orthobunyavirus, Hantavirus, Nairovirus, Phlebovirus and Tospovirus (Walter and Barr, 2011). This group comprises zoonotic pathogens that are etiologic agents of severe disease in humans, characterized by high lethality, lack of preventive or therapeutic treatments and an emerging pattern at the human-wildlife interface (Walter and Barr, 2011). Among these, the arthropod-borne Rift Valley fever virus (RVFV) from the genus Phlebovirus and the Crimean-Congo hemorrhagic fever virus (CCHFV) from the genus Nairovirus cause fulminating hemorrhagic fever (Ikegami, 2012;Keshtkar-Jahromi et al., 2011), while the rodent-borne Hantaan virus (HTNV), Sin Nombre virus (SNV) and Andes virus (ANDV) from the genus Hantavirus cause the typical hemorrhagic fevers with renal and cardio-pulmonary syndromes, respectively (Charrel et al., 2011). A new member of the genus Phlebovirus, severe fever with thrombocytopenia syndrome virus (SFTSV), was recently discovered and identified as the causative agent of a hemorrhagic fever with high fatality rates that has emerged in several provinces of China (Zhang et al., 2013). As shown by in vivo studies, IFN-a/b is poorly induced in RVFV-infected animal models, and the late onset of the type I IFN response is correlated with increased viral pathogenicity (Elliott and Weber, 2009). IFN-a/b levels are kept very low in hantavirus-infected patients during the entire course of disease, and hantaviral species that are pathogenic to humans were found to suppress IFN induction and signaling in cell culture assays much more efficiently than non-pathogenic viruses (Elliott and Weber, 2009). Moreover, a biphasic pattern was observed during hantavirus infection, in which IFN production is suppressed early, but a dramatic increase in ISGs expression occurs at later stages (Matthys and Mackow, 2012). Likewise, type I IFN production and secretion are strongly delayed during CCHFV infection, and virus-replicating cells were found to be totally insensitive to IFN-a treatment (Weber and Mirazimi, 2008). A similar pattern of type I IFN response dysregulation was also observed for SFTSV, with human monocytes being susceptible to infection and mounting a restrained antiviral response with upregulation of some IFN-inducible proteins, but with very low levels of induced type I IFN (Qu et al., 2012). Taken together, these data indicate that the virulence and pathogenesis of bunyaviruses may be due, at least in part, to their ability to counteract cellular IFN responses (Walter and Barr, 2011). Accordingly, the Mask and Hit strategies described here are adopted by members of the genera Hantavirus and Nairovirus to escape RIG-I sensing.
For RVFV, as well as for other viruses of the genera Orthobunyavirus and Phlebovirus, potent inhibition of the type I IFN-based antiviral response has been found as the result of the properties displayed by their NSs proteins (Bouloy et al., 2001). Indeed, NSs proteins do not specifically target any of the above described components of the RLRs pathway; however, given that their action ultimately impedes the production of IFN-b (Elliott and Weber, 2009), the NSs strategy is de facto a Hit towards the pathway at its most basic level. By contrast, members of the genera Hantavirus and Nairovirus adopt Mask and Hit strategies aimed at directly escaping RIG-I sensing. Mask Digestion with a specific 5 0 -3 0 ssRNA exoribonuclease demonstrated that HTNV and CCHFV genomic RNAs bear a monophosphate group at the 5 0 terminus, which may explain why such nucleic acids are not sensed by RIG-I (Habjan et al., 2008). HTNV 5 0 -ppp moieties are trimmed off during the synthesis of viral genome through a ''prime and realign'' process that cleaves the first-incorporated nucleotide leaving a 5 0 monophosphorylated terminus (Garcin et al., 1995). Furthermore, in addition to being devoid of RIG-I-stimulating 5 0 -ppp, the HTNV genome has complementary 5 0 -and 3 0 -strands that form perfectly paired panhandle structures that are promptly encapsidated by viral nucleocapsid proteins (Garcin et al., 1995). For CCHFV, whose genomic RNA bears a terminal pyrimidine residue at the 5 0 -end, in place of the purine normally used by viral polymerases to initiate RNA synthesis, a similar ''prime and realign'' process likely occurs during replication (Garcin et al., 1995). However, the exact mechanism and viral proteins through which CCHFV removes protruding 5 0 -ppp from its genome ends is currently unknown (Habjan et al., 2008). Hit Even though genomic HTNV and CCHFV RNAs do not significantly activate the RLRs pathway, dsRNA-like secondary structures able to stimulate RIG-I were observed in the mRNA transcripts of HTNV viral nucleoprotein (Lee et al., 2011). Moreover, while HTNV replication was efficient in RIG-I knock-down A549 alveolar epithelial cells, it was strongly inhibited by transient overexpression of RIG-I in Huh7.5 cells that do not constitutively express this helicase (Lee et al., 2011). It is not surprising that also pathogenic han-taviruses display a redundant Hit strategy to target components along the RLRs pathway. The ectopic expression of the hantaviral glycoprotein Gn, from the pathogenic New York-1 virus (NY-1V), was reported to inhibit IRF-3 phosphorylation as well as IRF-3-directed transcriptional response of IFN-b promoter (Alff et al., 2006). Subsequent studies showed that such IFN antagonism is related to a 142-residue cytoplasmic Gn tail that is able to sequester the TRAF3 adaptor by binding to its N-terminal region, and that consequently inhibits the formation of a functional TBK1-TRAF3 complex (Alff et al., 2008). The presence of a viral homologue of the ovarian tumor protease domain (vOTU) (Fig. 6B) within the CCHFV RdRp polymerase L (Frias-Staheli et al., 2007) allows a cross-deubiquitinase proteolytic process for the removal of both poly-Ub and ISG15 from target proteins (Capodagli et al., 2011;James et al., 2011). Notably, RIG-I ubiquitination by TRIM25a stabilizes its interaction with MAVS and enhances downstream signaling (Gack et al., 2007(Gack et al., , 2008. 
Similarly to ubiquitination, K63-linked isopeptide-bond conjugation with ISG15 (ISGylation) of several target proteins is also critical for IFN-a/b induction and antiviral activity in response to RNA virus infection, exerting both positive and negative regulation of the RLRs pathway. For example, IRF-3 ISGylation prevents its degradation and is required for efficient translocation to, and permanence in, the nucleus (Lu et al., 2006), while RIG-I ISGylation lowers its cellular levels to finely tune the strength of the antiviral response (Kim et al., 2008). Therefore, vOTU deconjugation of Ub and ISG15 from RLRs pathway components seems to be a Hit strategy by which CCHFV evades the IFN-based immune response; however, further investigations are required to precisely define both its mechanism of action and its target proteins. As the major virulence factor and IFN-antagonist of RVFV, NSs induces a complete shut-off of transcriptional activity in the infected cell, exerted through the sequestration of the p44 and XPD subunits of the RNA polymerase II transcription factor TFIIH (Le May et al., 2004) and by promoting the degradation of its p62 subunit (Kalveram et al., 2011). Moreover, in addition to such generalized downregulation of host cell gene expression, the RVFV NSs was shown to directly target IFN-b production. In fact, by recruiting the repressor protein SAP30, NSs forms with it a multiprotein complex on the IFN-b promoter, thereby inhibiting its induction (Le May et al., 2008). Recently, the NSs of SFTSV, together with its N protein, was also found to act as an IFN-antagonist, as revealed by reporter gene assays showing suppression of IFN-b and NF-kB promoter activity in response to polyI:C and IAV infection (Qu et al., 2012). However, while the mechanism by which blockage of IFN production is achieved by both of these proteins is currently unknown, it is likely that inhibition is exerted by SFTSV NSs through a yet-to-be-determined Hit strategy, as it was found able to co-precipitate with TBK-1 (Qu et al., 2012).

Coronaviruses

Coronaviruses of the genus Betacoronavirus are (+)-ssRNA viruses in the family Coronaviridae, whose members cause important respiratory disease in humans (Perlman and Netland, 2009). In particular, the spillover from a zoonotic reservoir into the human population of Guangdong province in southern China led in 2002 to the identification of a novel coronavirus (CoV) belonging to the genus Betacoronavirus (Ksiazek et al., 2003). Causing a "flu-like" disease characterized by severe acute respiratory syndrome (SARS) with high mortality rates, the SARS-CoV rapidly spread to many countries (Hui and Wong, 2004). More recently, a novel CoV was identified in the Middle East as the causative agent of a severe acute respiratory syndrome with renal failure (Zaki et al., 2012;Khan, 2013). Tentatively named Middle East respiratory syndrome (MERS)-CoV, this pathogen has been responsible for more than 100 cases of lethal infection (de Groot et al., 2013). Patients with SARS typically show aberrant type I IFN, ISGs and cytokine responses, suggesting that the virulence and pathogenicity of the SARS-CoV are related to a profound dysregulation of the innate immune antiviral response (Totura and Baric, 2012). Regarding the RLRs pathway, even though both RIG-I and MDA5 are overexpressed during SARS-CoV infection in vitro (Yoshikawa et al., 2010), no IRF-3 phosphorylation or dimerization was observed and loss of IFN induction was reported in SARS-CoV-infected fibroblasts (Thiel and Weber, 2008).
A similar pattern of poor IFN production and pro-inflammatory cytokine expression was also observed in human bronchial epithelial cells infected with the MERS-CoV, and transcriptomic analysis of host gene expression showed upregulation of RIG-I, MDA5, IRFs and other ISGs (Kindler et al., 2013;Josset et al., 2013). In addition, comparative analysis of cell tropism and innate immune evasion showed that both viruses prevent IRF-3-mediated IFN-a/b induction, with MERS-CoV showing a higher sensitivity to IFN treatment (Chan et al., 2013;de Wilde et al., 2013;Zielecki et al., 2013). Together, these data suggest that SARS-CoV and MERS-CoV employ strategies to counteract RLRs dsRNA recognition.

Hide

The observation that no type I IFN production was found in SARS-CoV-infected cells, whereas treatment with polyI:C resulted in both IRF-3 activation and IFN-a/b induction (Versteeg et al., 2007;Zhou and Perlman, 2007), strongly suggests that viral dsRNA is somehow buried and rendered inaccessible to cytoplasmic RLRs. In line with this, electron tomography studies have revealed that SARS-CoV infection alters the cytoplasmic membrane architecture by inducing the formation of a series of double-membrane vesicles (DMVs) which, interconnected with each other but not communicating with the cytosol, converge on the endoplasmic reticulum (ER) and the Golgi apparatus (Knoops et al., 2008). Membrane convolution into DMVs seems to be directed by the SARS-CoV nsp4 protein, as its mutation results in aberrant and open vesicles (Clementz et al., 2008). Furthermore, the nsp3, nsp5 and nsp8 proteins, which form the SARS-CoV replicase, as well as the SARS-CoV genomic RNA, all localize inside the DMVs, suggesting that viral RNA synthesis likely occurs at these intracellular sites (Snijder et al., 2006;Stertz et al., 2007). Notably, such extensive membrane rearrangement, with DMV formation and co-localization with dsRNA, was also observed in Vero cells infected with MERS-CoV (de Wilde et al., 2013). Hence, despite the substantial amount of dsRNA byproducts generated during the SARS-CoV replication cycle, it is likely that dsRNA shielding through its segregation into the DMV microenvironment is a Hide strategy through which CoVs effectively elude dsRNA detection (Knoops et al., 2008). However, such a passive mechanism is not exclusive, since the SARS-CoV N protein, a 46-kDa highly basic protein whose primary function is to encapsidate the viral genome, has recently been found to target RLRs activation and signaling (Lu X. et al., 2011). The SARS-CoV N protein inhibits IFN-a/b production induced by both SeV and polyI:C (Kopecky-Bromberg et al., 2007;Lu X. et al., 2011) whereas, by contrast, it is not able to suppress type I IFN induction upon overexpression of components such as RIG-I-CARD, MAVS, TBK1 and IKK-e, suggesting that the N protein blocks the initial step of dsRNA recognition by RLRs (Lu X. et al., 2011). In addition, co-IP experiments have shown that the SARS-CoV N protein does not interact with either RIG-I or MDA5, thereby excluding the possibility that such a block is exerted by directly impairing RLRs function (Lu X. et al., 2011). SARS-CoV N has been reported to be an RNA-binding protein (Chang C.-K. et al., 2009;Tang et al., 2005) through both its N-terminal and C-terminal domains (Chang C.-K. et al., 2009;Chen et al., 2007). Therefore, it is likely that SARS-CoV N binds dsRNA to prevent RIG-I and MDA5 activation and subsequent type I IFN induction (Lu X. et al., 2011).
Mask A 3 0 -5 0 exoribonuclease DEDDH domain was recently identified in the SARS-CoV nsp14 protein, demonstrating its ability to digest ssRNA as well as dsRNA with one 3 0 -mismatched nucleotide (Bouvet et al., 2012). Primarily placed in the context of viral RNA synthesis, this nsp14 activity has been related to nucleotide misincorporation repair (i.e. proofreading) putatively exerted by the SARS-CoV polymerase complex. However, it is possible that this function is also related to the IFN-response evasion. In fact, as observed for the LASFV NP, such 3 0 -5 0 exoribonuclease activity may serve for nsp14 degradation of dsRNA intermediates, thereby leading to inhibition of IFN induction (Bouvet et al., 2012). However, albeit plausible, the formal involvement of nsp14 in a Mask strategy to prevent dsRNA detection requires further investigation. Hit In addition to hiding and masking dsRNA, SARS-CoV IFN antagonism relies on directly targeting the RLR pathway. Several SARS-CoV encoded proteins have been found to display such properties, but the mechanism of action remains to be elucidated for most of them. The SARS-CoV ORF3b and ORF6 proteins are involved in suppressing IFN induction at the level of RLR signaling with their downstream adaptors. In fact, preferential localization and redistribution at the mitochondrial outer membrane of both ORF3b and ORF6 has been reported, seemingly associated with inhibition of RIG-I and MDA5 recruitment of MAVS (Freundt et al., 2009;Kopecky-Bromberg et al., 2007). Another SARS-CoV-encoded IFN antagonist protein is the structural glycosylated M protein, essential for the assembly of viral particles. SARS-CoV M exerts a double Hit strategy that results in the failure of IRF-3 phosphorylation and activation. First, SARS-CoV M interacts with RIG-I and MAVS, sequestering and re-localizing them into discrete membrane compartments associated with the Golgi apparatus (Siu et al., 2009). Second, SARS-CoV M binds TRAF3 and, stably associating with it, abolishes the formation of a functional complex with the TBK1 and IKK-e kinases (Siu et al., 2009). Similarly, the papain-like protease (PLP) domain contained within the SARS-CoV nsp3 protein, an essential component of the viral replicase complex, interacts with the adaptor STING, impeding its dimerization and activation and disabling the STING essential function of MAVS recruiter to the TBK1-IKK-e complex for IRF-3 phosphorylation (Devaraj et al., 2007;Sun et al., 2012). Furthermore, a redundant inhibiting effect on RLR signaling is also due to the deubiquitinating activity of SARS-CoV PLP, that has been found to remove Ub from several RLR pathway components, such as RIG-I, STING, TBK1 and IRF-3 (Clementz et al., 2010;Frieman et al., 2009). Filoviruses Filoviruses are (À)-ssRNA viruses that belong to the family Filoviridae, which includes one species in the genus Cuevavirus, five species of Ebolaviruses (EBOVs) in the genus Ebolavirus and one species of Marburgvirus (MARV) in the genus Marburgvirus, and are among the most virulent known pathogens (Kuhn et al., 2010). EBOVs and MARV are causative agents of fulminant hemorrhagic fever in humans and nonhuman primates, with up to 90% case fatality rates (Hartman et al., 2010). Even though rare and mostly confined to Sub-Saharan Africa and South-East Asia, filoviruses actually pose a worldwide concern. 
In fact, due to the risk of travel-imported cases, their demonstrated aerosol infectivity and transmissibility between pigs, as well as their potential misuse as biological weapons in a bioterrorist attack, they are considered one of the highest priorities for global health security (Bray, 2003;MacNeil and Rollin, 2012). The extreme lethality of EBOVs and MARV is the result of uncontrolled viral replication associated with a total impairment of the innate immune system, related, at least in part, to the properties displayed by three determinants of virulence and pathogenicity, namely the VP24, VP35 and VP40 proteins. However, only the multifunctional polymerase cofactor VP35 acts by suppressing IFN-a/b production, while the EBOV VP24 and MARV VP40 matrix proteins are involved in inhibiting the type I IFN signaling pathway (Ramanan et al., 2011). Initially identified as the only filoviral protein able to complement the growth of a mutant IAV lacking its IFN-antagonist NS1 protein (see section 12), filoviral VP35 was found to suppress production of endogenous IFN-b as well as transcriptional activation of IFN-b, ISG54 and ISG56 promoter-driven reporter genes, induced by either mutant IAV infection, SeV infection or polyI:C dsRNA transfection (Basler et al., 2003;Hartman et al., 2004). These properties were correlated with the inhibition of IRF-3 phosphorylation, dimerization and nuclear translocation (Cárdenas et al., 2006;Hartman et al., 2004, 2006), as well as with the inhibition of IFN-a/b promoter activation upon overexpression of RIG-I, RIG-I CARD, MAVS, TBK1 or IKK-e (Cárdenas et al., 2006). Together, these data indicate that the filoviral VP35 protein exerts its IFN-antagonism by targeting the RLR pathway.

Hide

The innate immune escape capabilities exhibited by the filoviral VP35 all reside in its uniquely folded C-terminal RNA-binding domain (RBD), also called the IFN-inhibitory domain (IID), comprising a central basic patch (CBP) that is highly conserved among filoviruses and shares high sequence identity with the IAV NS1 RBD (Hartman et al., 2004;Leung et al., 2009, 2010a). Indeed, although the RBD/IID is sufficient by itself to display an anti-IFN phenotype, a full-length VP35 capable of homo-oligomerization is required for fully efficient IFN inhibition (Leung et al., 2009;Reid et al., 2005). EBOV VP35 was found to be able to interact with dsRNA (Cárdenas et al., 2006), binding to both blunt-ended (Feng et al., 2007;Kimberlin et al., 2010;Zinzula et al., 2009) and 5′-ppp dsRNA molecules (Leung et al., 2010b;Zinzula et al., 2012) with very high affinity and in a sequence-independent manner. Notably, residues within the CBP that are critical for dsRNA binding, such as R305, K309, R312 and K339, were found to be important for IFN inhibition, since their substitution with alanine decreased dsRNA binding to different extents (Cárdenas et al., 2006;Zinzula et al., 2012) and led to failure in suppressing both ISG56 and IFN-b promoter activity induced by various stimuli (Cárdenas et al., 2006;Hartman et al., 2004, 2006). In particular, with respect to wt VP35, R312A VP35 showed the most dramatic defects, eliciting a different ISGs expression profile in infected cells, and EBOV containing the R312A VP35 mutation showed delayed viral growth and attenuation of infectivity in mice (Hartman et al., 2008a,b).
A similar defective phenotype was also generated by the double mutation K319A/R322A, causing total loss of VP35's dsRNA binding activity, abolishing IFN inhibition and rendering the corresponding mutant EBOV avirulent in guinea pigs (Prins et al., 2010). Notably, this double mutant was immunogenic and protected animals from subsequent challenge with wt EBOV (Prins et al., 2010), strongly reinforcing the notion of a correlation between VP35 dsRNA binding, inhibition of IFN-a/b production and the pathogenicity of filoviruses. The molecular details of the VP35 interaction with dsRNA have been solved by four crystallographic structures: two of the EBOV VP35 RBD/IID from the Zaire and Reston viruses bound to 8 and 18 bp blunt-ended dsRNA molecules, respectively (Kimberlin et al., 2010;Leung et al., 2010b), and two of the MARV VP35 RBD/IID bound to 12 and 18 bp blunt-ended dsRNA, respectively (Bale et al., 2012;Ramanan et al., 2012). As observed in these crystals, EBOV VP35s form an asymmetric dimer at each of the dsRNA ends (Fig. 7A). One monomer, termed the backbone-binding RBD/IID, binds to the sugar-phosphate backbone of both dsRNA strands, while the second monomer, referred to as the end-capping RBD/IID, binds to the dsRNA terminal bases and the proximal phosphate backbone. As a result of this bimodal strategy, VP35 RBD/IID dimers mimic the RLRs shape and hide their recognition site at dsRNA ends, possibly accommodating the 5′-ppp hindrance into a positively-charged pocket (Leung et al., 2010b;Kimberlin et al., 2010). In contrast, MARV VP35 RBDs/IIDs have been reported to helically coat dsRNA along its entire phosphate backbone, with a footprint of four monomers every 4-5 bp (Fig. 7B) (Bale et al., 2012;Ramanan et al., 2012). Notably, even though in this case no VP35 RBD/IID capping of the nucleic acid ends was observed, isothermal titration calorimetry and dot blot assays revealed that a binding event with a VP35:dsRNA stoichiometry of 1:1 almost disappears in the presence of dsRNA overhangs, thereby suggesting that an end-capping-like binding modality at dsRNA blunt ends may also take place for MARV VP35 (Bale et al., 2012;Ramanan et al., 2012). More recently, a similar behavior was also described for the EBOV VP35 RBD/IID, and a model was proposed in which the end-capping monomer constitutes the first binding event, which is followed by the attachment of backbone-binding monomers (Bale et al., 2013). As revealed by biochemical studies, EBOV VP35 RBDs/IIDs bind with similar affinities regardless of dsRNA length (Leung et al., 2010b;Kimberlin et al., 2010;Ramanan et al., 2012;Zinzula et al., 2012), while MARV RBD/IID binding affinity decreases as dsRNA shortens (Bale et al., 2012;Ramanan et al., 2012). In addition, the presence of dsRNA overhangs is barely tolerated by EBOV VP35, for which a 5′-ppp is otherwise important for binding to dsRNA with high affinity (Kimberlin et al., 2010;Leung et al., 2010b;Zinzula et al., 2012), while overhangs do not affect MARV VP35 binding function (Bale et al., 2012;Ramanan et al., 2012). Such different abilities to recognize diverse dsRNA PAMPs reflect the diverse properties of the two filoviral VP35 proteins in antagonizing RLRs. In fact, while both EBOV and MARV VP35 were able to inhibit the dsRNA-induced ATPase activity of MDA5, only EBOV VP35 exerted the same effect on RIG-I for all tested dsRNA ligands (Leung et al., 2010b;Ramanan et al., 2012).
Overall, the architecture of the VP35-dsRNA complex in the four crystallographic structures accounts for the importance of several residues in dsRNA binding. Side chains of residues R305/294, K309/298, R312/301, K339/K328 in the Zaire/Reston EBOV VP35 CBP (Kimberlin et al., 2010;Leung et al., 2010b), as well as in the corresponding residues N261, Q263, R294, K298, S299, R301 and K328 in the MARV VP35 CBP, are all involved in direct interactions with the dsRNA phosphate backbone (Bale et al., 2012;Ramanan et al., 2012). In addition, residues R312/301, K319/308, R322/311 and K339/328 in the Zaire/Reston EBOV VP35 CBP are also involved in protein-protein interactions at the RBD/IID dimer interface, while residues I340/329 and F239/228, in the end-capping monomer, interact with the adjacent K339/328, Q274/263 and I278/ 267 residues, which in turn bind dsRNA terminal bases (Kimberlin et al., 2010;Leung et al., 2010b). Most importantly, the proposed model for filoviral VP35 homo-oligomers shielding dsRNA PAMP signatures has been validated by mutagenesis studies, since alanine substitutions of the same residues resulted in a decrease or loss of dsRNA binding function, as well as in diminished or abolished inhibition of SeV-induced IFN-b promoter activation and IRF-3 phosphorylation (Bale et al., 2012;Kimberlin et al., 2010;Leung et al., 2010b;Ramanan et al., 2012). Hence, given the high degree of conservation of these amino acid residues among all known EBOV and MARV species, it is plausible that dsRNA sequestration by VP35 represents a key strategy through which these pathogens circumvent host innate immunity. Hit In line with the other viral IFN-antagonist redundancy observed to concomitantly target multiple levels of the RLRs pathway, filoviral VP35 also employs several Hit strategies to suppress IFN-a/b induction. In fact, both EBOV and MARV VP35 were shown to block IRF-3 phosphorylation and nuclear translocation induced by TBK1 and IKK-e overexpression, but not to inhibit IFN-b promoter activation induced by a constitutively active form of IRF-3 (Prins et al., 2009;Ramanan et al., 2012). Consistent with a target upstream in the RLRs pathway, co-IP studies showed that EBOV VP35 interacts with the N-terminal domain of TBK1 and IKK-e, it decreases their catalytic activity and is phosphorylated by these kinases (Prins et al., 2009). In addition, overexpression of VP35 resulted in disruption of IKK-e interactions with IRF-3, IRF-7 and MAVS (Prins et al., 2009). Therefore, filoviral VP35 targets the RLRs pathway also in a dsRNA binding-independent way by acting as alternative substrate for the TBK1-IKK-e complex in place of IRF-3 and 7. Furthermore, these two IRFs are also directly targeted by VP35 that prevents their migration to the nucleus (Chang T.H. et al., 2009). In fact, EBOV VP35 was found to physically interact with both IRF-3 and 7 and to promote their Ub-like modification by the two members of the small Ub-like modifier cascade PIAS1 and Ubc9, thereby inhibiting the IRFs transcriptional function and subsequently suppressing the activation of IFN-b promoter (Chang T.H. et al., 2009). Finally, a novel mechanism through which VP35 suppresses the RLRs pathway was recently described, in which the EBOV IFNantagonist interacts with cellular PKR activator (PACT), a dsRNAbinding protein involved in the activation of RIG-I and stimulation of its ATPase activity (Fabozzi et al., 2011;Luthra et al., 2013). 
The formation of a complex between VP35 and PACT abolishes critical interaction of the latter with the CTD of RIG-I, thereby preventing dsRNA-induced immune signaling (Luthra et al., 2013). Flaviviruses The genus Flavivirus of the family Flaviviridae includes two mosquito-borne (+)-ssRNA viruses, dengue virus (DENV) and West Nile virus (WNV), which are emerging as global life-threatening pathogens. WNV infects millions of people, with thousands of deaths annually, while DENV has spread worldwide, with a recent increase in morbidity and outbreak frequency (Gould and Solomon 2008;Heinz and Stiasny 2012). DENV and WNV human infections are often asymptomatic or characterized by mild and self-limiting fever. In some cases, however, the infection develops into severe and fatal illness, characterized by hemorrhagic fever and shock syndrome for DENV, and by neurological diseases such as meningitis, encephalitis or acute flaccid paralysis in the case of WNV (Gould and Solomon 2008; Heinz and Stiasny 2012). The disease severity caused by these pathogenic flaviviruses has been correlated with their ability to counteract the IFN-a/b response (Keller et al., 2006), and both DENV and WNV have been found to encode several IFN-antagonists acting on different targets (Muñoz-Jordán and Fredericksen, 2010). However, most studies have focused on characterizing DENV and WNV suppression of the TLRs-mediated IFN induction, the IFN signaling pathway and the ISGs-mediated antiviral response, whereas less is known about viral proteins and mechanisms that interfere with the RLR pathway (Morrison et al., 2012;Suthar et al., 2013). Notably, RIG-I, MDA5 and LGP2 are strongly up-regulated in several DENV-and WNV-infected cell lines (da Conceição et al., 2013;Fredericksen et al., 2008;Surasombatpattana et al., 2011;Qin et al., 2011), while viral replication is enhanced in the absence of RLR expression (Nasirudeen et al., 2011;Suthar et al., 2010). Inhibition of IRF-3 phosphorylation has been reported during productive infection by both DENV and WNV, indicating that subversion of the RLR pathway is important for their replication (Chang et al., 2006;Fredericksen and Gale 2006;Keller et al., 2006). Hide DENV is able to efficiently downregulate RLR signaling (Loo et al., 2008), even though its genomic RNA can be recognized by both RIG-I and MDA5 (Rodriguez-Madoz et al., 2010a). Similarly, WNV is able to elude RIG-I recognition early post-infection (Fredericksen et al., 2004), although its RNA subgenomic fragments fold into secondary structures that have recently been shown to potently activate RIG-I (Shipley et al., 2012). Together, these data make it plausible that flavivirus counteraction of the type I IFN-response may include mechanisms that avoid dsRNA detection. Supporting this hypothesis, two studies based on 3D electron tomography have shown that DENV and WNV modify ER and Golgi apparatus morphology, inducing the formation of convoluted membranes (CMs) in the cytoplasm. Tightly associated with the viral NS4A protein, CMs invaginations wrap around other NS proteins that form the viral replication machinery which, as revealed by immuno-labeling, co-localizes into the CMs vesicles packed with dsRNA (Gillespie et al., 2010;Welsch et al., 2009). By providing a hidden compartment where RNA synthesis and virus budding take place, such membrane modifications represent a passive strategy by which DENV and WNV hide dsRNA from RLRs. Hit The RLRs pathway is also actively targeted by these two flaviviruses. 
In DENV infection, inhibition of SeV- or polyI:C-mediated transcriptional activity of the IFN-b promoter was found to be dose-dependently related to the catalytic activity of the NS2B3 protease (Rodriguez-Madoz et al., 2010b). As recently demonstrated, this correlation is due to the cleavage of the STING protein (Yu et al., 2012) by a proteolytic core, comprising the last 40 amino acids of NS2B and the first 180 residues of NS3, that targets the consensus sequence LRRQ96G of STING, eliminating its ability to interact with TBK1 and to activate this kinase for IRF-3 phosphorylation (Yu et al., 2012). In WNV infection, a possible Hit strategy to block RLRs signaling has been identified, based on the viral protein NS2A. By using a WNV subgenomic replicon, severe inhibition of IFN-b promoter-driven transcription was observed, and this phenotype was abolished by the occurrence of an adaptive alanine-to-proline substitution at position 30 of the NS2A protein (Liu et al., 2004). With respect to wt WNV, the A30P NS2A mutant WNV led to faster production of higher IFN-a/b levels in infected cells and determined an attenuation of viral growth and neuro-invasiveness in mouse models (Liu et al., 2006). Furthermore, immunization of mice with this mutant WNV conferred protection against subsequent challenge with a lethal dose of a highly virulent WNV strain (Liu et al., 2006). These data reflect the importance of WNV NS2A as a determinant of virulence and pathogenesis, even though the exact mechanism by which this protein modulates IFN-a/b expression remains to be determined.

Henipaviruses

Within the viral family Paramyxoviridae, the (−)-ssRNA viruses Hendra virus (HeV) and Nipah virus (NiV) of the genus Henipavirus have recently emerged as zoonotic bat-borne pathogens characterized by extreme lethality for humans and a broad mammalian host range (Eaton et al., 2006). Endemic in Australia and South-East Asia, respectively, HeV and NiV typically affect livestock but also cause severe neurologic and respiratory disease in humans that often progresses to encephalitis and multiorgan failure (Marsh and Wang, 2012). As for the viruses discussed above, the fatal outcome following henipavirus infection might be correlated with the ability of these pathogens to potently subvert the host innate immune system (Basler, 2012). In agreement with this concept, daily treatment with IFN-inducers prevented death in NiV-infected animals (Georges-Courbot et al., 2006). Henipaviruses counteract the type I IFN response by blocking IFN-a/b production at multiple levels (Basler, 2012). This is the effect of the properties displayed by the four proteins encoded by the henipaviral P gene, which include the homonymous phosphoprotein P together with the viral products V, W and C (Shaw, 2009). The first three proteins are expressed from the same open reading frame (ORF), share a common N-terminal region and have a unique C-terminus. In fact, while the viral polymerase cofactor P is produced by exact transcription of the entire gene, both the V and W proteins are generated by an editing process based on the insertion of non-templated G nucleotide residues into the viral mRNA. By contrast, the C protein is expressed from an alternative ORF within the P gene, and therefore has a completely different sequence (Kulkarni et al., 2009;Lo et al., 2009).
All proteins encoded by the henipaviral P gene act as determinants of virulence, inhibiting IFN-α/β production by targeting both the TLR and RLR pathways, as well as suppressing type I IFN signaling (Basler, 2012). With regard to counteraction of the RLR pathway, the discussion here is limited to the functions displayed by the V and W proteins.

Hit

The HeV and NiV V and W proteins are able to inhibit the SeV- and dsRNA-induced transcriptional activation of the ISG54 promoter by IRF-3 (Basler, 2012; Shaw, 2009). The V protein was shown to bind MDA5 (Andrejeva et al., 2004; Childs et al., 2007) and LGP2 (Parisien et al., 2009) by targeting homologous regions in the two helicases that encompass residues 676-816 of MDA5 and residues 327-465 of LGP2, corresponding to the boundaries of their Hel2 domain (Childs et al., 2009; Parisien et al., 2009). As a consequence of this interaction with V, MDA5 dsRNA binding and homo-oligomerization were abolished (Childs et al., 2009) and the ATP-hydrolysis activity of both helicases was disrupted (Parisien et al., 2009). Moreover, even though henipaviral V proteins do not directly bind RIG-I (Childs et al., 2009), their interaction with LGP2 induced the formation of a stable LGP2-RIG-I complex, rendering the latter unable to recognize 5′-ppp dsRNA ligands. The V protein domain responsible for such IFN-inhibiting activity was mapped to a region of 49-68 amino acids at its C-terminus, with one histidine and seven cysteine residues that are highly conserved among all paramyxoviruses and fold into a zinc-finger domain to coordinate two zinc atoms (Ramachandran and Horvath, 2010). As revealed by mutagenesis studies, four residues in the zinc-coordinating domain, namely C194, C206, C208 and C211, as well as the two nearby conserved residues R409 and I414, are essential for the V protein's ability to suppress MDA5 functionality (Ramachandran and Horvath, 2010). Conversely, the single amino acids R806 of MDA5, R455 of LGP2 and L714 of RIG-I are critical for the interaction with the viral protein, as their point mutation renders these helicases unable to co-immunoprecipitate with overexpressed V in luciferase-based cell culture assays (Rodriguez and Horvath, 2013). The crystallographic structure of MDA5 in complex with the V protein of parainfluenza virus 5 (PIV5), a paramyxovirus closely related to the henipaviruses, was recently obtained (Fig. 6C) (Motz et al., 2013). As revealed by this structure, the zinc-finger β-hairpin located at the C-terminal domain of V is folded in a way that mimics the two β-strands in the MDA5 Hel2 domain (Motz et al., 2013). As a result, upon interaction with MDA5, V displaces the β-sheet motif of the Hel2 domain and accommodates its zinc-finger β-hairpin into the MDA5 fold, thereby inhibiting MDA5 ATP-hydrolysis activity, dsRNA binding ability and oligomerization properties, which ultimately blocks the MDA5-induced production of IFN-α/β (Motz et al., 2013). The henipaviral W protein also downregulates type I IFN production, but the molecular mechanisms by which it targets the RLR pathway are less clear. In fact, the W protein was shown to potently block transcription from promoters activated by IRF-3, such as the IFN-α/β and ISG54 promoters (Shaw et al., 2005).
Moreover, since transcription starting at these IRF-3-responsive promoters was inhibited by henipaviral W even with concomitant overexpression of both the TBK1 and IKK-ε kinases, it is conceivable that the target hit by this viral protein lies downstream along the RLR pathway (Shaw et al., 2005). Notably, due to a frameshift-extended ORF, W is about 43 amino acids longer than V; it lacks the domain responsible for interaction with RLRs and has a unique C-terminal basic stretch that functions as a nuclear localization signal (NLS) (Lo et al., 2009). Therefore, given that W is almost exclusively nuclear and that it does not block IRF-3 phosphorylation, it is likely that this protein directly hits the transcription factor once in the nucleus, possibly rendering IRF-3 dimers less stable in order to prevent IFN-α/β promoter activation (Lo et al., 2009; Shaw et al., 2005). This mechanism of action would explain the ability of henipavirus W to impair both the RLR and TLR3 pathways, consistent with the hypothesis that it acts at a point where the two signaling cascades converge (Lo et al., 2009; Shaw et al., 2005).

Influenza A viruses

Influenzaviruses are segmented (−)-ssRNA viruses that belong to the three genera Influenzavirus A, B and C within the family Orthomyxoviridae. They are the most important respiratory pathogens affecting the human population. IAVs are maintained in several vertebrate host species and are subtyped according to the antigenic properties of their 16 different hemagglutinin and 9 different neuraminidase glycoproteins (Medina and Garcia-Sastre, 2011). Subtypes that circulate in humans undergo antigenic drift due to the accumulation of mutations in these two proteins, which results in overcoming pre-existing immunity to cause seasonal flu epidemics worldwide. Occasionally, profound antigenic change may derive from interspecies transmission or gene re-assortment during concomitant infection of the same host by different IAV subtypes. If such antigenic shift occurs, viruses that are new to the human population can emerge and generate devastating pandemics with millions of deaths (Neumann et al., 2009, 2010). Human infections by highly pathogenic IAVs may result in acute respiratory distress syndrome with fatal pneumonia, due, at least in part, to the ability of IAVs to efficiently suppress the host innate immune response (Ramos and Fernandez-Sesma, 2012). Regarding interference with the dsRNA-induced production of IFN-α/β, IAVs exert their antagonism through the properties of at least three proteins, namely NS1, PB2 and PB1-F2, which target the RLR pathway at several levels and with different mechanisms (van de Sandt et al., 2012).

Hide

The notion that the multifunctional NS1 protein of IAVs is an IFN antagonist derived from the finding that viruses deleted of its encoding gene (delNS1) showed attenuated growth and induced large amounts of IFN-α/β in IFN-competent cells, but replicated well in IFN-deficient cells (García-Sastre et al., 1998). Subsequent studies showed that this phenotype was the result of counteraction of the RLR pathway, as NS1 abrogated virus- and dsRNA-induced activation of IRF-3, NF-κB and the ATF-2/c-Jun enhanceosome (Ludwig et al., 2002; Talon et al., 2000; Wang et al., 2000). Such inhibition was correlated with the ability of NS1 to bind dsRNA, since a binding-defective double mutant R38A/K41A NS1 led to virus attenuation in vivo and to IFN-α/β induction in cell cultures (Donelan et al., 2003; Talon et al., 2000).
Notably, a third substitution at position 42 in the revertant NS1 mutant R38A/K41A/S42G was reported to regain IFN-suppression capability but did not restore dsRNA binding, suggesting that, even though dsRNA binding is necessary and sufficient for efficient inhibition of IFN synthesis, it is not the sole function that mediates IFN antagonism by the IAV NS1 (Donelan et al., 2003). The impact of alternative mechanisms through which NS1 counteracts the host innate immune response (see below) was also highlighted by comparing NS1 proteins from different IAVs, which showed strain specificity in suppressing type I IFN and ISG expression. On the other hand, the importance of the NS1 dsRNA-binding function for IFN antagonism was supported by the fact that the ability of RIG-I to detect IAV-derived RNAs was inhibited in the presence of NS1, but it was efficiently activated by the same PAMPs in cells infected with a virus lacking NS1 (Baum et al., 2010; Hornung et al., 2006; Pichlmair et al., 2006). NS1 binds to blunt-ended and 5′-ppp dsRNA as well as ssRNA panhandle structures through an N-terminal RBD that encompasses residues 1-73 (Hatada and Fukuda, 1992; Qian et al., 1995). Moreover, NS1 proteins deleted in their C-terminal effector domains were still active in blocking activation of the IFN-β promoter (Guo et al., 2007; Hayman et al., 2006). Functional proteins were obtained even when the sole NS1 N-terminal dsRNA-binding domain was fused to a heterologous C-terminal domain, reinforcing the importance of dsRNA sequestration in preventing RLR recognition and subsequent IFN-α/β production. Structural evidence indicates that NS1 exists as a homo-dimer and interacts with the phosphate backbone of dsRNA, so that the two RBD monomers lie on the nucleic acid strands like parallel tracks on a slot (Chien et al., 2004; Liu et al., 1997; Wang et al., 1999). Along these tracks, several conserved basic and hydrophobic residues such as T5, P31, D34, R35, R38, K41, G45, R46 and T49 have been identified as critical for the NS1 dsRNA-binding ability (Wang et al., 1999; Yin et al., 2007). In fact, as revealed by the crystal structures of the NS1 RBD in complex with a 21-bp dsRNA (Fig. 7C), these residues form a concave surface that recognizes the minor groove of the dsRNA A-form (Cheng et al., 2009). In particular, residues R38, S42 and T49 of each RBD, as well as arginines at positions 35 and 46 from different monomers, are involved in establishing key hydrogen bonds with the phosphate groups of both dsRNA strands (Cheng et al., 2009). Furthermore, an even more evocative description of how IAVs prevent the detection of dsRNA by RLRs has come from the solved structure of a full-length NS1 of the highly pathogenic H5N1 IAV in complex with dsRNA (Bornholdt and Prasad, 2008). As shown by a combined X-ray crystallography and cryo-EM approach, NS1 dimers multimerize through alternating interactions between the RBD and the C-terminal effector domain, forming a tubular structure with a central tunnel suitable to accommodate dsRNA (Bornholdt and Prasad, 2008). Again, a critical orientation is observed for key residues such as R38, which is projected towards the tunnel center in order to establish direct bonds with dsRNA, while other residues like K41 contribute to enwrapping the phosphate backbone along the entire nucleic acid length (Bornholdt and Prasad, 2008).
Hit

NS1 also exerts its IFN-inhibiting properties against the RLR pathway through Hit strategies, since it was found to be able to block both RIG-I and MAVS functions by physically interacting with these components (Guo et al., 2007; Mibayashi et al., 2007; Opitz et al., 2007). Notably, the presence of 5′-ppp dsRNA enhanced co-precipitation with RIG-I, and such interaction was abolished in the dsRNA binding-defective R38A/K41A mutant, suggesting either the possibility of overlapping functions in the NS1 RBD or a role of dsRNA as an intermediate in the formation of the NS1-RIG-I complex (Pichlmair et al., 2006; Talon et al., 2000). RLR activation and signaling are also blocked by NS1 through the suppression of RIG-I K63-linked ubiquitination by TRIM25. Specifically, NS1 interacts with the coiled-coil domain of TRIM25 and disrupts its homo-oligomerization, which is in turn indispensable for poly-ubiquitination of RIG-I. Moreover, alanine substitution of the E96 and E97 residues, as well as the R38A/K41A mutation responsible for loss of dsRNA binding, impaired NS1 interaction with TRIM25, caused attenuation of viral growth in vivo and increased IFN-α/β production. Also, the TRIM25 orthologue Riplet was recently found to be a target, in human cells, of NS1 from human IAV strains. However, while the E96A/E97A mutant was in this case still capable of such interaction, the dsRNA binding-defective R38A/K41A NS1 could not exert it (Rajsbaum et al., 2012). Finally, the NS1 proteins of most (but not all) IAVs are able to alter innate immune responses through a global restriction of host gene expression. To achieve this, NS1 binds the 30 kDa subunit of the cleavage and polyadenylation specificity factor (CPSF30) (Nemeroff et al., 1998; Noah et al., 2003; Das et al., 2008), preventing polyadenylation of the 3′ ends of host pre-mRNA, and interacts with the poly-(A)-binding protein II (PABPII), resulting in inhibition of host pre-mRNA nuclear export (Chen et al., 1999). In addition to NS1, the other IAV proteins PB1, PB2, PA and PB1-F2 all display IFN antagonism against the RLR pathway by targeting the mitochondrial adaptor MAVS through a Hit strategy. The three RNA polymerase subunits PB1, PB2 and PA were all found to inhibit dsRNA-induced and RIG-I- or MDA5-mediated activation of the IFN-β promoter in a gene reporter assay, and such inhibition was related to their binding to MAVS (Iwai et al., 2010). In PB2, whose inhibitory effect and binding ability were stronger, amino acids 1-37 were found to be critical for the interaction with the N-terminal 150 residues of MAVS, while an asparagine at position 9 in the PB2 sequence was found to be essential for its mitochondrial localization and for inhibition of RLR pathway signaling (Graef et al., 2010; Patel D. et al., 2013). Regarding PB1-F2, this small protein encoded by a +1 ORF in the PB1 gene is found only in some IAVs, and its presence is related to high virulence in a strain-dependent manner. PB1-F2 strongly suppresses RIG-I-mediated IFN-α/β induction, whereas infection with delPB1-F2 viruses results in increased IRF-3 activation and production of IFN-β mRNA (Dudek et al., 2011). Initially, such activity was found to be mediated by a region spanning residues 39-87 of the PB1-F2 C-terminus, where a highly conserved sequence serves as a mitochondrial targeting signal (MTS) (Gibbs et al., 2003; Yamada et al., 2004).
More recently, the presence of a serine in place of the asparagine at position 66 within the PB1-F2 MTS was found to be related to severe pathogenicity and a strong reduction of ISG expression, and to be responsible for the increased virulence of the avian H5N1 and the 1918 Spanish flu IAV strains (Conenello et al., 2007, 2011). In line with these data, the mechanism of action through which PB1-F2 shuts down type I IFN production has been elucidated by recent work demonstrating that PB1-F2 physically interacts with MAVS, and that the N66S mutation enhances this interaction (Varga et al., 2012). PB1-F2 also targets the transmembrane region of MAVS, dissipating the mitochondrial membrane potential that is an important factor for the recruitment of downstream components and the progression of the RLR signaling cascade (Varga et al., 2012).

Concluding remarks and future challenges

Based on the production of type I IFN in response to viral infections, the intrinsic innate immune system is an integral part of the broader immune system of vertebrates, and both cell-mediated and humoral adaptive immune responses depend on its proper function. Prompt detection of the pathogen, which relies on the recognition of its molecular components, is a prerequisite for IFN production, and dsRNA is, above all other PAMPs, the hallmark of virus presence. In the cytoplasm, dsRNA recognition is assigned to the RLRs. Viruses, in turn, circumvent this detection through an array of mechanisms aimed at suppressing the RLR pathway. Within this array, we have described three main viral strategies, which are respectively directed to hide dsRNA from RLRs, to mask its presence by modifying its molecular features, and to directly hit RLRs and their adaptors and/or effectors to impair their functions. These strategies, even though put in place through different molecular mechanisms, represent one aspect of the overall process by which adaptation between any given virus and the organism it infects was established. However, while the result of this coevolution and interplay is a fine modulation of the maintenance host's innate immune system, sufficient to circumvent its effects, spillover of the virus to a new host species may lead to new fortuitous interactions with diverse intracellular machineries, often causing the dramatic effects of an overwhelming infection. In line with this consideration, for all of the highly pathogenic RNA viruses examined here, the proteins involved in countering host dsRNA-induced IFN production were also found to act as determinants of virulence and pathogenesis. Moreover, in several cases, point mutations that impair their anti-IFN functions cause a loss of virus lethality. Together with the fact that the functional domains responsible for IFN antagonism are highly conserved within each viral group, this indicates that these viral proteins are attractive and promising targets for antiviral development. Additionally, given the importance of the RLR pathway, it is also possible to envision the strategy of developing compounds that, mimicking the triggering power of dsRNA, may be able to boost the intrinsic innate immune system to obtain a preventive or adjuvant activation against viral infections. To this aim, tremendous advances in our knowledge are provided by the crystallographic structures of several viral IFN antagonists as well as by the elucidation of their mechanisms of action.
Finally, it is worth noting that many questions remain unanswered regarding the different effects of the described viral proteins on the RLR pathway in human cells, on the one hand, and in the natural reservoirs of these lethal pathogens, on the other, since these animals are infected asymptomatically and do not develop disease. In fact, assuming that viral proteins have evolved to finely tune the host innate antiviral response, rather than to completely suppress it, understanding the subtle differences between the innate immune antiviral response in humans and in these viruses' animal reservoirs will also be of paramount importance for the development of effective antiviral therapeutics, making it possible to shift the balance from a fatal outcome to the clearance of infection.
2018-04-03T05:49:10.828Z
2013-10-12T00:00:00.000
{ "year": 2013, "sha1": "123860180484aab96efeabe59a7faf48004344c2", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.antiviral.2013.10.002", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "d3fa15bbb590b2fed0c45b107fe5021aa0bac19d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258518224
pes2o/s2orc
v3-fos-license
A Cross-Sectional Study to Assess the Prevalence of Obesity among Second Professional MBBS Students of One of the Medical Colleges of Indore, Madhya Pradesh

Introduction: Obesity is a complex multifactorial preventable disease. The problem of obesity is important to discuss because it is closely associated with an increasing risk of many diseases. Objective: To assess the prevalence of obesity among second MBBS students and find the anthropometric parameters of obesity. Method: This was a cross-sectional, observational study conducted among 100 second MBBS students. Demographic data and anthropometric measures, such as height, weight, body mass index, and waist circumference, were recorded.

Introduction: Overweight and obesity, as well as their related non-communicable diseases, are largely preventable and are the fastest growing public health problems in developed and developing countries [1]. It is a "New World syndrome" which affects all age groups. The problem of obesity has tripled in the past decade, and it currently affects approximately 30-35% of the general population in the USA and 25% in the UK; by 2030 an estimated 38% of the world adult population will be overweight, and another 20% will be obese [2]. Currently, the global prevalence of obesity in children and adolescents is 7-10% and is expected to double by 2025. Obesity is associated with an increased risk of mortality and morbidity as compared to those who have an ideal body weight [3]. Even a moderate weight reduction in the range of 5-10% of the initial body weight improves overall health [4]. The problem of obesity is important to discuss because it is closely associated with an increasing risk of cardiovascular disease, dyslipidemia, hypertension, and type 2 diabetes. The burden is increasing due to growing economic growth, industrialization, transportation facilities, urbanization, sedentary lifestyles and the nutritional transition to a high-calorie processed diet. It is a complex disease which also has genetic, behavioral, socioeconomic, and environmental factors. It is also associated with a psychosocial stigma, compounded by economic costs when coupled with comorbidity.

Several indirect methods widely used to measure obesity are anthropometric measures such as body mass index (BMI), waist circumference (WC) and waist/hip ratio (W/H ratio). BMI is a measure of weight corrected for height, which reflects total body fat and has been the most accepted parameter for defining overweight [5]. Obesity is defined as a BMI of >30 kg/m² and is largely due to an imbalance between calorie intake and expenditure (according to WHO, overweight is a BMI >25 and obesity a BMI >30) [1]. A waist circumference >102 cm in men and >88 cm in women is an estimate of central obesity. The waist-hip ratio is the dimensionless ratio of the circumference of the waist to that of the hips, calculated as the waist measurement divided by the hip measurement. The normal W/H ratio is <0.90 in males and <0.85 in females [6].
Medical students, being future doctors, are role models for society in reflecting a healthy lifestyle. Many research articles suggest that obesity is increasing among them due to unhealthy eating habits, lack of physical activity and stress [7]. Moreover, due to the COVID-19 pandemic, they had to remain indoors because of imposed lockdowns and were bound to study through e-learning while sitting in their homes. So, this study was planned, once they resumed offline learning at the institute, to assess the prevalence of obesity among them. The authors also tried to create awareness regarding obesity and its complications, so as to help the students maintain a healthy body.

Method: A cross-sectional observational study was conducted from September to November 2021. The second professional MBBS students of the 2019 batch of MGM Medical College, Indore were selected purposively for the study. Those who consented to participate were requested to fill in a questionnaire containing basic demographic details. Only one student, who was prescribed an antipsychotic drug, was excluded from the study. A total of 100 students were recruited according to a convenient sampling method. Assessment of obesity was carried out using the BMI formula: BMI = weight (kg)/height (m²); the normal range for BMI is 18.5-24.9 kg/m² (as per WHO) [8].

The weight of the students was measured using a calibrated weighing machine, with the students wearing light-weight clothes and removing heavy items from their pockets; weight was recorded to the nearest kilogram. For recording the height of subjects, a vertical measuring scale was fixed to a wall and students were asked to remove their shoes and stand on a flat floor in front of the measuring scale with the feet parallel and the heels, buttocks, shoulders and back of the head touching the vertical scale. The head was held completely erect with the lower border of the orbit in the same horizontal plane as the external auditory meatus. The arms were kept hanging by the sides in a natural manner. The horizontal bar of the measuring scale was lowered to touch the head. The height was recorded to the nearest centimeter (cm).

Grading of BMI was done according to the WHO grading, in which individuals with BMI below 18.5 kg/m² are underweight, individuals with BMI ranging from 18.5-24.9 kg/m² are considered normal, those with BMI ranging from 25-29.9 kg/m² are overweight and those with BMI above 30 are considered obese. For waist circumference measurements, the students were made to stand with feet 25-30 cm apart, weight evenly distributed. The measurement was taken midway between the inferior margin of the last rib and the crest of the ileum in a horizontal plane. Hip circumference was measured at the widest part of the buttocks.

The authors also recorded physical activity, dietary preference, addiction history and history of hypertension, diabetes, and thyroid disorder in the structured proforma. All data were entered into a Microsoft Excel sheet and statistical analysis was done using SPSS version 21; p < 0.05 was considered statistically significant. The association between overweight/obesity and various factors was assessed using the Chi-square test. Ethics clearance was obtained from the institutional ethics committee.

Various contributory factors associated with obesity were also assessed using the questionnaire. Out of the total 100 students, 48% of male and 84% of female students were vegetarian, while 36% of male and 16% of female students were non-vegetarian.
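For concreteness, the WHO BMI grading and the central-obesity cut-offs described above can be written as a short routine. The Python sketch below is purely illustrative: it is not the authors' analysis script, and the example measurements are hypothetical.

```python
# Illustrative sketch (not the study's analysis code): classifying one student's
# measurements using the WHO BMI grading and the cut-offs quoted in the Introduction.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / (height_m ** 2)

def who_bmi_grade(bmi_value: float) -> str:
    """WHO grading used in the study: <18.5 underweight, 18.5-24.9 normal,
    25-29.9 overweight, >=30 obese."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal"
    if bmi_value < 30:
        return "overweight"
    return "obese"

def central_obesity(waist_cm: float, hip_cm: float, sex: str) -> dict:
    """Flags based on waist circumference (>102 cm men, >88 cm women) and
    waist/hip ratio (normal <0.90 in males, <0.85 in females)."""
    whr = waist_cm / hip_cm
    wc_cutoff = 102 if sex == "male" else 88
    whr_cutoff = 0.90 if sex == "male" else 0.85
    return {
        "waist_hip_ratio": round(whr, 2),
        "high_waist_circumference": waist_cm > wc_cutoff,
        "high_waist_hip_ratio": whr >= whr_cutoff,
    }

# Hypothetical example: a 70 kg, 1.68 m male student with 94 cm waist and 100 cm hip.
b = bmi(70, 1.68)
print(round(b, 1), who_bmi_grade(b))      # 24.8 normal
print(central_obesity(94, 100, "male"))   # WHR 0.94 -> high ratio; WC below the 102 cm cut-off
```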
When the authors inquired about physical activity, it was found that 76% of male and 56% of female students had moderate physical activity. This result was also not statistically significant (p value 0.158) (Figure 3). Out of the total participants, only one male student, who had hemophilia, was taking factor VIII, and one female student, who had an anxiety disorder, was on etizolam and desvenlafaxine. Family history revealed a history of diabetes in the parents of 28% of students, hypertension in 29.33%, hyperthyroidism in 12% and hemophilia in 1%.

This study was done for the assessment of obesity in medical students. Out of the total 100 students who participated, the prevalence of obesity was found to be higher among male as compared to female students. The higher prevalence of obesity among boys may be due to the fact that, being more outdoors, they tend to eat more junk food such as fried snacks and fast-food items. Girls consume fewer calories as they perceive body image more accurately and try to change their body weight towards normal. Also, girls perform more household activities as compared to boys, although some previous studies indicate an increased prevalence of obesity/overweight among female students [9-11].

During COVID, students were confined to their homes and spent many hours watching television, mobiles, and computers. This extra screen time was also due to online teaching-learning, which added to their reduced physical activity, leading to overweight. In the current study, it was found that females had a higher waist circumference as compared to males. This may be due to hormonal imbalance or consumption of extra calories. Abdominal obesity, which occurs because of visceral fat deposition, is associated with cardiovascular risks such as hypertension, type II diabetes and dyslipidemia [12]. A study from north Chennai, India found that less physical activity, consuming junk food and watching television are associated with a higher prevalence of obesity [13]. A study done by Debnath et al. mentioned a positive correlation between waist circumference and BP (systolic, diastolic and mean) among female students aged 16-22 years [14]. Overweight students are more at risk of developing obesity and related co-morbidities [15-17].

The present study also revealed a large number of students to be underweight. These nutritionally deficient students may suffer from anemia, lack of concentration toward studies, and weakness. This might result in decreased academic performance.

Limitation of the study: Only second MBBS students were selected for the study, so the findings cannot be generalized.

Conclusion: Almost half of the male and female students had a normal BMI. More female students had a BMI lower than normal. Students falling into the overweight category outnumbered obese students. Such students were advised non-pharmacological measures of weight reduction through proper exercise, a healthy balanced diet and regular physical activity, so as to maintain proper body weight and prevent future complications of obesity.

Declaration: Funding: Nil. Conflict of Interest: Nil.
2023-05-06T15:19:21.801Z
2023-03-31T00:00:00.000
{ "year": 2023, "sha1": "253811b6a1e15660820e36e3ded150dcf62cbef1", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.51957/healthline_447_2022", "oa_status": "CLOSED", "pdf_src": "ScienceParsePlus", "pdf_hash": "4049258043890346341306bab9ed815c0df07ec0", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [] }
261984436
pes2o/s2orc
v3-fos-license
The Effect of Live Streaming on Social Commerce Platforms on Generation Z's Purchase Intention

The use of social media has become an integral part of Generation Z's daily activities. Currently, social media's role is not only as a communication tool but also as a platform for buying and selling activities, known as social commerce. Sellers use the various existing social commerce features to support their buying and selling activities, live streaming being one of them. Sellers often use the live streaming feature to show goods tangibly and in real time and to interact with Generation Z customers, in order to increase purchase intention and product sales. This study aims to examine the influence of utilitarian value, hedonic value, information quality, and service quality on perceived value, which impacts the level of involvement of Generation Z customers in live streaming sales activity and, in turn, their purchase intentions. The sample in this study comprised 434 respondents, collected via an online questionnaire distributed in the Jakarta region. Hypothesis testing was carried out with PLS-SEM analysis using SmartPLS software. According to the outcomes of this study, perceived value is positively influenced by utilitarian value, hedonic value, information quality, and service quality; perceived value influences customer engagement, which in turn influences Generation Z purchase intention.

Introduction

Because of social commerce, customers may now utilize social media platforms to discover and purchase products or services while social media maintains its primary role, communication. The utilization of social media for e-commerce transaction purposes has developed social commerce into a business model that people now recognize [1]. Compared to e-commerce, social commerce platforms such as TikTok, Facebook, and Instagram allow more customers to create value by collaborating with sellers and engaging in online discussions with other customers [2]. Customers in the study were defined as Generation Z (born between 1997 and 2012), since Generation Z is Indonesia's largest generation group, representing 27.94% of the overall population (74.93 million people) [3].

Generation Z is much savvier in technology than earlier generations (e.g., millennials) since technology has been present throughout their youth. The capacity of Generation Z to effortlessly absorb and master technology and information, especially the internet, has become a culture and habit [4]. Furthermore, the second most common activity Indonesian Generation Z performs on the internet is accessing social media, at 86%, while internet purchasing, performed by 52% of Generation Z, is ranked fifth [3].
As business firms, sellers use social media features, in particular online chats, review systems, video-based content, and live streaming, to sell products or services, connect and engage with their customers, and develop loyalty. Live streaming has become increasingly popular in Indonesia over the past two years following the outbreak of COVID-19 [5] and has emerged as a business model that allows sellers to deliver fun and engaging demonstrations [6]. This popularity is also supported by Generation Z's preference for video-based and livestream content, at 75% and 13%, respectively [3]. Generation Z is interested in the concept of social commerce because it allows customers to interact directly with sellers, such as asking about product-related issues; additionally, the live streaming feature on social commerce platforms minimizes customers' concerns about the integrity of the seller arising from the lack of in-person interaction [7]. Purchase intention can arise directly through social interaction, particularly when the interaction facilitates the exchange of information and recommendations [8].

A prior study on the social commerce platform Instagram [9] found that information and service quality affect perceived value, whereas perceived value affects customer engagement. The study also recommended that future research include other variables in predicting customer engagement on social commerce platforms. Utilitarian and hedonic values in live streaming collectively have a significant impact on e-commerce customers' purchase intention, with customer engagement mediating the relationship [5].

This study discusses the influence of information quality, service quality, utilitarian value, and hedonic value on perceived value, which can affect customer engagement and thereby impact customer purchase intentions in live streaming on social commerce platforms. We also hypothesized that perceived value and customer engagement would be associated with increased customer purchase intention in live streaming on social commerce platforms.

Social Commerce and Live Streaming

The use of social media today is aimed at sharing information and improving the overall business experience, where users can contribute to purchasing and selling products and services in social commerce, which is a part of e-commerce [10-12]. On social commerce websites, users also use social networks and promote social interaction, which is one of the characteristics of online brand communities. However, in contrast to participants in brand communities, social commerce allows users to make purchases on the website [13].

Recent technology breakthroughs enable sellers to present and promote their items online in real time. This is referred to as live streaming. It allows sellers to connect with customers, demonstrate particular product features and respond to client inquiries in real time, while scheduled live broadcasting encourages customers to purchase the products exhibited online [14]. As such, authenticity and visualization can reduce customer uncertainty, increase seller loyalty, and improve customers' knowledge of what they need, as they would when shopping conventionally [5].
Utilitarian Value (UTV)

Utilitarian values in customers' decisions to buy a product are related to logical thinking and are directed at the purpose of purchasing the product [15]. From a utilitarian perspective, customers are seen as homo economicus because they emphasize functional thinking, product-centeredness, and a decision process aimed at fulfilling predetermined consumption [16]. One way customers fulfil this predetermined consumption is through online shopping, where they expect to shop while saving time, effort, and money. However, in online shopping, customers' doubts about the seller and the product become an obstacle to buying, because there is no guarantee of product authenticity provided by the social commerce platform.

Therefore, live streaming shopping is one of the best solutions for customers, because the seller provides more information about the product, answers customer questions, and shows the product in real time in a way that is not edited like an advertisement, thus satisfying utilitarian values, which emphasize rational and practical aspects [17]. As a result, utilitarian value is a form of customer attitude in shopping activities that highlights the use value of a product and an efficient way of obtaining it, rather than other aspects.

Hedonic Value (HDV)

The word "hedonism" implies that the hedonic lifestyle applied by customers is a strong impetus from within when doing online shopping; this is because the hedonic lifestyle prioritizes pleasure and satisfaction, so that when customers decide to buy products, they tend to rely on feelings rather than logic [18]. This lifestyle refers to customer activities and habits focused on fulfilling desires, personal pleasure, and satisfying secret wishes. Idea shopping and gratification shopping are among the dimensions of hedonic value [19]. Adventure shopping has also been added to the list of hedonic value dimensions [17]. Therefore, it is evident that utilitarian and hedonic values play different roles in purchasing goods, especially in social commerce. Hedonic value is based on outcomes related to spontaneous, subjective and personal judgments, such as customer satisfaction, while utilitarian value is objective and prioritizes utility [15].

Information Quality (IQT)

The use of information and the extent of willingness to adopt it are influenced by a number of factors, with the quality of information being a prerequisite for decision-making when making online purchases. This means that information quality is a subjective consumer assessment of the characteristics of information and of whether it meets the needs and objectives of use. High confidence in the quality of the information provided can support the success of social commerce, where higher customer satisfaction with, and trust in, the information provided can support customer decision-making in buying products or goods [20].
High-quality information can be judged by several criteria [21]. First is the accuracy of the information provided: the information customers obtain must be error-free, not misleading, and accurate, so that there is no doubt about its correctness. Second, the information provided to customers must be timely, so that the information reaching the recipient is still valuable for decision-making. Third, the information provided must be relevant, meaning that it is useful and has value according to what customers need. Fourth, the information must be up-to-date so that customers always know and get the latest information. In addition, information quality is also a main pillar that can influence customer attitudes and interaction in social commerce, being a valuable resource for customers [13].

From the definitions above, information quality, including the most up-to-date, accurate, and relevant information, plays a major role in customer purchase decisions through live streaming.

Service Quality (SQT)

Service quality is the degree to which the level of service delivered matches customer expectations, which can be achieved through the fulfilment of the customer's needs and delivery accuracy [22]. Service quality, often known as e-service quality, is an evaluation by customers of the quality of service they receive from a business or service provider over the internet [23]. Service quality is used to determine customer satisfaction with online shopping activities or with the services of an internet-based seller in facilitating effective and efficient online shopping, which includes purchasing and delivering products or services.

Service quality has five main dimensions [24]. First, reliability is the seller's ability to maintain the quality of service. Second, responsiveness is the willingness and ability to help and respond quickly to customer requests and to help solve customer problems. Third, assurance is the good performance and knowledge the seller possesses to generate customer trust and confidence. Fourth, empathy is a form of care and attention given to customers and an effort to understand customer desires. Fifth is tangibility, which describes the company's physical facilities, equipment, materials and employees' appearance.

When making sales via live streaming, sellers and customers interact with each other, for instance when sellers answer questions posed by customers or respond to customer comments about the products being sold. The quality of service provided by the seller during live streaming, within a short time, is one of the things that must be considered and maintained, because it is a driving factor in attracting customers' attention and interest in buying products.
Perceived Value (PCV)

Perceived value is the set of benefits that customers expect to get, including product value; service value, such as the seller's friendliness, accuracy, and speed in serving; employee value, in terms of appearance and communication shown to customers; and image value [25]. Perceived value is the evaluation customers give a product based on the value received from utilizing its functions relative to the value provided [9]. More concretely, perceived value weighs the perceived advantages (quality, benefits, usefulness, value) against the disadvantages (price, sacrifice) of using a product or service [26]. The perceived value dimension consists of four main aspects [27]: first, emotional value, derived from the positive feelings that arise from using the product/service; second, social value, which improves the customer's social self-concept and is obtained from product utility; third, quality value, the use value obtained from the impression of excellence and the desired performance of the product; and fourth, monetary value, the use value obtained from products/services due to cost reductions in both the short and the long term.

From these definitions, perceived value is an assessment comparing the sacrifices customers have made with the benefits received from using the goods or products, relative to customer expectations. Therefore, perceived value is significant in sales activities, because if a product cannot produce satisfying value, it will not be able to compete with other competing products.

Customer Engagement (CSE)

Customer engagement is the amount of interaction and relationship each existing or future customer has with corporate operations [28], establishing trust and long-term loyalty to a product or service. In live streaming activities on social commerce platforms, customer engagement can be seen in word-of-mouth activities, helping fellow consumers, providing reviews and recommendations [29], and providing support in the form of comments or likes. Sellers utilize the various features available in social commerce to interact with customers when selling products via live streaming. Four indicators are the main points in measuring customer engagement [30]: first, enthusiasm, which is a strong level of individual interest in a product; second, attention, the individual's level of focus on a brand/product; third, interaction, which takes place outside the buying process between fellow customers and sellers; and fourth, identification, which refers to the customer's perception of belonging to a particular brand.
Purchase Intention (PUI)

The purchase intention that arises in consumers towards a product or service indicates that consumers plan to buy the product or service at some time. Previous research states that increasing purchase intention means an increased likelihood of purchasing goods/services, so that if consumers' purchase intention is positive, it will positively affect consumer involvement, which encourages consumers to make purchases [31]. In the context of social commerce, consumer purchase intention is understood as a consumer's desire to purchase through a social commerce platform [31, 32]. Purchase intention also refers to the likelihood of a customer deciding to buy something, which results from the customer's evaluation of the related product [33]. This study describes purchase intention as a measure of consumer willingness to buy products online [34].

Figure 1 depicts the proposed hypotheses on the influences between variables. Therefore, we postulate:
H1: UTV has a positive influence on PCV
H2: HDV has a positive influence on PCV
H3: IQT has a positive influence on PCV
H4: SQT has a positive influence on PCV
H5: PCV has a positive influence on CSE
H6: CSE has a positive influence on PUI

This quantitative study uses PLS-SEM, the partial least squares structural equation model, as an analysis method well suited to validation and the evaluation of predictive ability [35]. PLS enables researchers to analyze even small samples of fewer than 500 [36] and is appropriate for this study because it only includes respondents from a specific geographic area. The PLS-SEM algorithm and bootstrapping test were run using the SmartPLS software to test the hypotheses.

Data were collected in DKI Jakarta, a province of Indonesia with some of the world's most active internet users, focusing on Generation Z users [37]. We used the Slovin formula to calculate the minimum sample size required. The latest census shows that Jakarta's total Generation Z population is 854,382 [38]. As a result, a minimum of 400 samples was required. Since we were trying to reach as many respondents as possible from the various municipal regions of Jakarta, we used the snowball sampling method. We obtained 434 samples from Jakarta-based respondents who had watched live streaming on social commerce.

Questionnaire and Measures

A questionnaire was constructed in Indonesian and shared online with respondents through social media groups on various platforms whose members frequently use social commerce and have seen, or often shop via, live streaming on social commerce platforms. A five-point Likert scale was used, with anchors ranging from (1) "strongly disagree" to (5) "strongly agree." To ensure the respondents' validity, a screening question was added to verify that the respondents had experience using social commerce platforms and had seen or shopped through live streaming.

All measurement items were adapted from relevant prior research and adjusted to the social commerce context. UTV and HDV were measured by five and four items, respectively [17, 19]; IQT and SQT were each measured by four items [9]; PCV was measured by four items [9]; CSE was measured by four items [17, 39]; PUI was measured by four items [17, 39].
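The Slovin sample-size calculation mentioned above can be reproduced in a few lines. The sketch below is illustrative only: the paper does not state the margin of error, so a conventional e = 0.05 is assumed, which reproduces the reported minimum of roughly 400 respondents.

```python
# Illustrative sketch of the Slovin sample-size formula referred to above.
# Assumption: a 5% margin of error (e = 0.05), which the paper does not state explicitly.
import math

def slovin(population: int, e: float = 0.05) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * e ** 2))

print(slovin(854_382))  # ~400, matching the minimum sample size reported for Jakarta's Generation Z
```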
Prior to data collection, a few experts were appointed to help review and proofread the questionnaire, which resulted in some minor changes to improve the ease of understanding the questions without altering the original scale and measurements. According to the demographic profiles of the respondents who completed the questionnaire, as depicted in Table 1, the frequencies of male and female respondents are close, with females slightly dominating. Since the gap is not large, we could still obtain comparable perspectives from both sides. Since our paper mainly focuses on Generation Z internet users (born 1997-2012), we provided the corresponding age ranges for respondents to fill in. The majority of respondents were in the 20-22 age range, at 39%, with the 23-25 age range coming second. Respondents also filled in their domicile: most were from West Jakarta (22%), followed by East, South, and Central Jakarta (20% each); the remaining respondents were from North Jakarta (18%).

As seen in Table 2, loading factor testing generated loading values above 0.5 (>0.5) [40]. This figure is used to measure the validity of the indicators, so the 29 indicators in this study meet the validity criteria.

Validity and Reliability Test

As seen in Table 3, the research model's reliability and validity were further measured by calculating the CA, CR, and AVE. The Cronbach's alpha measurements show internal consistency values above 0.6 for all latent variables, and composite reliability values above 0.9 for all latent variables, which indicates high internal consistency [17]. The rho_A value is displayed because it helps measure reliability; rho_A is expected to be 0.7 or more. The rho_A values of the variables shown in Table 3 are all above 0.7, indicating good reliability [41]. The AVE shown in Table 4 was calculated to determine convergent validity. We use the Fornell-Larcker criterion for all latent variables, where the AVE value must exceed 0.5. The AVEs of all variables in the table are greater than 0.5, indicating that more than 50% of the indicators' variance can be accounted for. Based on the recommended AVE threshold of 0.5 [35], this also shows adequate validity.
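The reliability and validity figures discussed above (composite reliability and AVE) are simple functions of the standardized outer loadings. The following sketch is illustrative only; it is not the authors' SmartPLS output, and the loadings in it are hypothetical placeholders rather than values from Table 2.

```python
# Illustrative sketch: computing composite reliability (CR) and average variance
# extracted (AVE) for one construct from its standardized outer loadings.
# The loadings below are hypothetical, not taken from the paper's tables.

def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

example_loadings = [0.82, 0.78, 0.85, 0.80]  # hypothetical indicator loadings for one latent variable
print(round(composite_reliability(example_loadings), 3))  # ~0.89, above the usual 0.7 threshold
print(round(ave(example_loadings), 3))                    # ~0.66, above the 0.5 threshold
```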
Discussion

After analysing the results, we examine the structural model and perform hypothesis testing. The coefficient of determination (R²) assesses the amount of influence the independent factors have on the dependent variable; R² is deemed substantial at 0.75, moderate at 0.50, and weak at 0.25 [42]. The hypotheses proposed in this study are evaluated using the direct path coefficients. The path coefficients are used to determine the influence of the independent variables UTV, HDV, IQT, and SQT on PCV, the relationship between PCV and CSE, and the influence of CSE on PUI on social commerce platforms. To determine the significance of the path coefficients, a bootstrapping calculation was run with 499 subsamples [42]. The bootstrapping findings are presented in Table 6 below, which shows that all the proposed hypotheses are supported.

With a path coefficient of 0.210, the UTV variable positively influences PCV on social commerce platforms, so hypothesis H1 is accepted. The findings are consistent with a prior study [19], which found that utilitarian value had a favourable impact on perceived value. Utilitarian value, based on the customer's experience or judgment after joining live streaming on social commerce platforms, can increase the customer's perceived value.

HDV positively influences PCV on social commerce platforms with a path coefficient of 0.260, so hypothesis H2 is accepted. The findings are in accordance with previous research [19], which found that hedonic value positively impacted perceived value. Hedonic value reflects the customer's desire to reduce stress and keep up with trends by purchasing online via live streaming on social commerce platforms, especially when obtaining great deals.

The test findings show that IQT positively influences PCV on social commerce platforms, with a path coefficient of 0.260, so hypothesis H3 is accepted. Previous studies [9, 39] on Instagram as a social commerce platform report aligned findings: customers form a favourable impression of sellers/online stores on Instagram if the information provided is up to date and reliable.

Next, the result for hypothesis H4 shows that SQT positively influences PCV on social commerce platforms with a path coefficient of 0.239, so hypothesis H4 is accepted. Prior studies [9, 39] likewise report a positive influence of service quality on perceived value. If the seller provides excellent assistance to a customer when there is an issue, the customer whose problem has been fixed will write a positive review, which can raise the perceived value.

PCV positively influences CSE on social commerce platforms with a path coefficient of 0.866, so hypothesis H5 is accepted. This is the highest value, in line with prior studies [9, 39]. Customer engagement may grow if the seller can portray the goods according to the value paid and the risk the customer will face. Once customer engagement has risen, a relationship emerges between the customer and the seller, and the customer will be interested in getting involved in live streaming activities.
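The bootstrapping step referred to above can be illustrated schematically. The sketch below is not the SmartPLS procedure; it uses an ordinary regression slope on synthetic data as a stand-in for a path coefficient, merely to show how 499 resamples yield a t-value for significance testing.

```python
# Illustrative sketch of bootstrap significance testing with 499 subsamples (as above).
# Synthetic data and a plain regression slope stand in for a PLS path coefficient.
import random
import statistics

def slope(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(1)
x = [random.gauss(0, 1) for _ in range(434)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]       # synthetic "PCV -> CSE"-like relation

original = slope(x, y)
boot = []
for _ in range(499):                                   # 499 bootstrap subsamples
    idx = [random.randrange(len(x)) for _ in range(len(x))]
    boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

t_value = original / statistics.stdev(boot)
print(round(original, 3), round(t_value, 2))           # |t| > 1.96 -> significant at the 5% level
```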
Finally, the hypothesis testing results for H6 show that CSE positively influences PUI, with a path coefficient of 0.766. These findings align with a recent study [5]. When a customer develops a connection with the seller, customer loyalty improves; customers prefer purchasing a product or service from a seller/online business they acknowledge and trust.

Conclusion

This study aims to determine the factors that influence Generation Z's purchase intention during live streaming on social commerce platforms, starting from the customer engagement that occurs during live streaming, which is influenced by perceived value, where perceived value is in turn influenced by utilitarian value, hedonic value, information quality, and the service quality provided by the seller. Of the 434 Generation Z respondents who participated, all had watched live streaming on social commerce platforms. Six hypotheses were formed from 29 indicators, describing the relationships between the UTV, HDV, IQT, and SQT factors and PCV, between PCV and CSE, and between CSE and PUI through live streaming on social commerce platforms, and all hypotheses were supported.

Significance and Implications

The present study is significant because it addresses live streaming as a trending feature of social commerce used by Generation Z. Given the widespread use of social commerce among this age group, understanding the effects of live streaming on social commerce can help increase user purchasing activity for products and services. This study contributes to the existing literature by examining the factors that may affect the relationship between live streaming on social commerce and purchase intention.

The study's implications mainly provide insight to Generation Z and to sellers who use live streaming on social commerce to buy/sell goods, by identifying the factors that influence customers' purchase intention for these products.

Limitations and Future Research

The limitations of this study are as follows. First, it does not test or focus on symbolic value as an independent variable, but only mentions its theoretical influence; it is hoped that future researchers will explore symbolic value as an influencing variable. The researchers therefore suggest that future research add other variables that can strengthen the explanation of purchase intention on live-streaming social commerce platforms, so that further research obtains stronger, broader, and more accurate predictions that can support previous research on purchase intention. Second, this research examines only some of the main dimensions related to perceived value, omitting social value [27], so further research can include social value to explain perceived value more completely. Third, this research does not focus on one social commerce platform, so customer behaviour on each individual social commerce platform is not known.
The researchers also suggest that sellers use the live streaming feature in social commerce to increase perceived value, which influences customer purchase intention. What needs to be done is to continuously improve the quality of information by providing information that customers need, that is reliable, and that is up to date. Another point is improving service quality when selling products to customers during live streaming and prioritizing two-way interaction between sellers and customers, in order to answer customer questions, solve problems, and maintain a friendly attitude. In addition to information quality and service quality, sellers can pay attention to utilitarian values: products must suit customers' needs and be easy to obtain, so that sellers who sell products via live streaming can attract customers' attention and interest in buying these products. Another value sellers must consider when selling via live streaming is hedonic value; sellers must ensure that the products being sold attract customers and can provide satisfaction and happiness when buying them. With these values fulfilled, the value felt by customers will increase, which strongly influences customer engagement in live streaming on social commerce and can in turn impact customer purchase intentions.

(Figure and table captions: Fig. 1 Research Model; Table 1 Respondent Demographics; Table 2 Outer Loading Factors; Table 3 Reliability Test; Table 4 Validity Test; Table 5 Path Coefficient.)
2023-09-17T15:04:45.991Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "00f5f7ccd9681a353493965c3e640d1846d3ccd5", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/63/e3sconf_icobar23_01081.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5236203a10f82f09c8d23df461a9bea808977f21", "s2fieldsofstudy": [ "Business", "Computer Science" ], "extfieldsofstudy": [] }
235790280
pes2o/s2orc
v3-fos-license
Power series as Fourier series

An abstract theory of Fourier series in locally convex topological vector spaces is developed. An analog of Fejér's theorem is proved for these series. The theory is applied to distributional solutions of the Cauchy-Riemann equations to recover basic results of complex analysis. Some classical results of function theory are also shown to be consequences of the series expansion.

1. Introduction

1.1. Motivation. Two types of series expansion ubiquitous in mathematics are the power series of an analytic function and the Fourier series of an integrable function on the circle. In the complex domain, well-known theorems of elementary complex analysis guarantee the existence of locally uniformly convergent power series expansions of holomorphic functions in domains with ample symmetry such as disks and annuli, where "holomorphic" can be taken in the sense of Goursat, i.e., the function is complex-differentiable at each point. On the other hand, the convergence of a Fourier series is a subtle matter, and its study has led to many developments in analysis (see [Zyg02]). The close connection between these two types of outwardly different series expansions is a recurring theme in many areas of classical analysis, e.g., in the theory of Hardy spaces. It is, however, not difficult to see that at a certain level of abstraction, Fourier series and power series are in fact two examples of the same phenomenon, the representation theory of the circle group. The aim of this article is to take this idea seriously and use it to recapture some basic results of complex analysis. The take-home message is that many properties of holomorphic functions, such as the almost-supernatural regularity phenomena, are profitably thought of as expressions of symmetry, more precisely the invariance of certain locally convex spaces under the Reinhardt action of the torus group on $\mathbb{C}^n$. It is hoped that the point of view taken here has pedagogical as well as conceptual value, and will be of interest to students of complex analysis.

1.2. Abstract Fourier series. We begin in Section 2 with an account of Fourier series associated to a continuous representation of the $n$-dimensional torus $\mathbb{T}^n$ on locally convex topological vector spaces, using some ideas of [Joh76]. This provides the unifying language of sufficient generality to encompass both classical Fourier expansions and the power series representations of complex analysis. Even at this very general and "soft" level, one can establish a version of Fejér's theorem on summability in the Cesàro sense (Theorem 2.1), which can be thought of as a "completeness" statement for the holomorphic monomials.

1.3. Analyticity of holomorphic distributions. Using the abstract framework of Section 2, we recapture in Section 3, in a novel way, some of the basic classical facts about holomorphic functions. We show that on a Reinhardt domain, a distribution which satisfies the Cauchy-Riemann equations (which we call a holomorphic distribution) has a complex power series representation that converges uniformly, along with derivatives of all orders, on compact subsets of a "relatively complete" log-convex Reinhardt domain (see Theorem 3.1 below); thus a holomorphic distribution is a function, in fact an analytic function. Traditionally, to prove such an assertion, one would start by showing that a holomorphic distribution is actually a (smooth) function.
This can be done either by ad hoc arguments for the Laplacian going back to Weyl (see [Wey40]), or, in a more general way, by noticing that the fundamental solution $\frac{1}{\pi z}$ of the Cauchy-Riemann operator $\frac{\partial}{\partial \bar{z}}$ is smooth, in fact real-analytic, away from the singularity at the origin. It then follows by standard arguments about convolutions (see [Hör03, Theorem 4.4.3]) that any distributional solution of $\frac{\partial u}{\partial \bar{z}} = 0$ is real-analytic. Therefore, a holomorphic distribution is a holomorphic function in the sense of Goursat, and classical results of elementary complex analysis give the power series expansion via the Cauchy integral formula. The extension of the domain of convergence to the envelope of holomorphy can be obtained by convexity arguments (see [Ran86]). While this classical argument has the admirable advantage of placing holomorphic functions in the context of solutions of hypoelliptic equations, it also has the shortcoming that the crucial property of complex-analyticity (and the associated Hartogs phenomenon in several variables) of holomorphic distributions is proved not from an analysis of the action of a differential operator on distributions, but by falling back on the Cauchy integral formula. After using the real-analytic hypoellipticity of $\frac{\partial}{\partial \bar{z}}$ to conclude that holomorphic distributions are real-analytic, we discard all of this information, except that holomorphic functions are $C^1$ and satisfy the Cauchy-Riemann equations in the classical sense. In our approach here, however, complex analyticity of holomorphic distributions is proved directly in a conceptually straightforward way, by expanding a holomorphic distribution on a Reinhardt domain in a Fourier series, and then showing that the resulting series (the Laurent series of the holomorphic function) converges in the $C^\infty$-topology, not only on the original set where the distribution was defined, but possibly on a larger domain, thus underlining the fact that the Hartogs phenomenon can be thought of as a regularity property of the solutions, of the same nature as smoothness. Our proof also clearly locates the origin of the remarkable regularity of holomorphic distributions in (a) the invariance properties of the space of holomorphic distributions under Reinhardt rotations and translations, (b) symmetry and convexity properties of the Laurent monomial functions $z \mapsto z_1^{\alpha_1} \cdots z_n^{\alpha_n}$, $\alpha_j \in \mathbb{Z}$, and (c) the fact that radially symmetric holomorphic distributions are constants. It is also interesting that the fact that $\frac{1}{\pi z}$ is a fundamental solution of the Cauchy-Riemann operator, which is key to many results of complex analysis including the Cauchy integral formula, does not play any role in our approach. The method of proving analyticity via Fourier expansion can be used in other contexts. For example, replacing the representation theory of $\mathbb{T}^n$ by that of the special orthogonal group $SO(n)$, the method can be used to show that a harmonic distribution in a ball of $\mathbb{R}^n$ is in fact a real-analytic function and admits an expansion in solid harmonics (see, e.g., [CH53]), which converges uniformly along with all derivatives on compact subsets of the ball. Similarly, one can obtain the Taylor/Laurent expansion of a monogenic function of a Clifford-algebra variable in "spherical monogenics", the analogs for functions of a Clifford-algebra variable of the monomial functions $z \mapsto z^n$, $n \in \mathbb{Z}$ (see [BDS82]).

1.4. Analyticity of continuous holomorphic functions.
While the theory of generalized functions forms the natural context for the solution of linear partial differential equations such as the Cauchy-Riemann equations, for aesthetic and pedagogical reasons it is natural to ask whether it is possible to develop the Fourier approach to power series, as outlined in Section 3, using only classical notions of functions as continuous mappings and derivatives as limits of difference quotients, and without anachronistically invoking distributions. We accomplish this in the last Section 5 of this paper. We start from the assumption that a continuous function on the complex plane satisfies the hypothesis of the Morera theorem, and show directly that it is complex-analytic without any recourse to the Cauchy integral representation formula (which, being a case of Stokes' theorem, requires the differentiability of the function, at least at each point). Not unexpectedly, one of the steps of the proof uses the classical triangle-division used in the standard proof of the Cauchy theorem for triangles.

1.5. Acknowledgements. The first author would like to thank Luke Edholm and Jeff McNeal for many interesting conversations about the topic of power series. He would also like to thank the students of MTH 636 and MTH 637 at Central Michigan University over the years for their many questions, which led him to think about the true significance of power series expansions. Sections 2 and 4 of this paper are based on part of the Ph.D. thesis of the second author under the supervision of the first author. The second author would also like to thank his other committee members, Dmitry Zakharov and Sonmez Sahutoglu, for their support. Other results from the thesis have appeared in the paper [Daw21].

2. Fourier Series in Locally Convex Spaces

2.1. Nets, series and integrals in LCTVS. We begin by recalling some notions and facts about functional analysis in topological vector spaces. See the textbooks [Trè67, Rud91, Bou04] for more details on these matters. Let $X$ be a locally convex Hausdorff topological vector space (we use the standard abbreviation LCTVS). Recall that the topology of $X$ can also be described by prescribing the family of continuous seminorms on $X$: a net $\{x_j\}$ in $X$ converges to $x$ if and only if for each continuous seminorm $p$ on $X$, we have $p(x - x_j) \to 0$ as a net of real numbers. In practice, we describe the topology of an LCTVS by specifying a generating family of seminorms (analogous to describing a topology by a subbasis): a collection of continuous seminorms $\{p_k : k \in K\}$ on $X$ is said to generate the topology of $X$ if for every continuous seminorm $q$ on $X$, there exists a finite subset $\{k_1, \ldots, k_n\} \subset K$ and a $C > 0$ such that
$$q(x) \le C \cdot \max\{p_{k_1}(x), \ldots, p_{k_n}(x)\} \quad \text{for all } x \in X, \tag{2.1}$$
and further, for every nonzero $x \in X$ there exists at least one $k \in K$ such that $p_k(x) \ne 0$ (this separating property ensures that the topology of $X$ is Hausdorff). Then clearly a net $\{x_j\}$ converges in $X$ if and only if $p_k(x_j - x) \to 0$ for each $k \in K$. Let $\Gamma$ be a directed set with order $\ge$. Recall that a net $\{x_\alpha\}_{\alpha \in \Gamma}$ in $X$ is said to be Cauchy if for every $\epsilon > 0$ and every continuous seminorm $p$ on $X$, there exists $\gamma \in \Gamma$ such that whenever $\alpha, \beta \in \Gamma$ and $\alpha, \beta \ge \gamma$, we have $p(x_\alpha - x_\beta) < \epsilon$. The space $X$ is said to be complete if every Cauchy net in $X$ converges. Observe that in the above definition we can use a generating family of seminorms rather than all continuous seminorms.
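To make the role of a generating family of seminorms concrete, here is a minimal numerical sketch in Python. The space, the seminorms, and the sample sequence are arbitrarily chosen for illustration: the Fréchet space $C(\mathbb{R})$ with the seminorms $p_k(f) = \sup_{|x| \le k} |f(x)|$, and a sequence that converges uniformly on every compact set but not uniformly on all of $\mathbb{R}$, which is exactly the situation a single norm could not capture.

```python
import numpy as np

# Hypothetical illustration: the Fréchet space C(R) with generating seminorms
# p_k(f) = sup_{|x| <= k} |f(x)|.  The sequence f_j(x) = exp(-x^2 / j) converges
# to the constant function 1 with respect to every p_k (uniformly on compacts),
# although sup over all of R of |f_j - 1| equals 1 for every j.

def seminorm(f, k, num_points=2001):
    """Numerical approximation of p_k(f) = sup over [-k, k] of |f|."""
    x = np.linspace(-k, k, num_points)
    return np.max(np.abs(f(x)))

f_limit = lambda x: np.ones_like(x)

for k in (1, 3, 5):                      # a few seminorms from the generating family
    for j in (1, 10, 100, 1000):         # members of the sequence
        f_j = lambda x, j=j: np.exp(-x**2 / j)
        gap = seminorm(lambda x: f_j(x) - f_limit(x), k)
        print(f"p_{k}(f_{j} - 1) = {gap:.4f}")
# For each fixed k the printed values tend to 0 as j grows: this is convergence
# with respect to the generating family of seminorms.
```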
If tS k u kPN is a sequence in a vector space, we can define the sequence of the corresponding Cesàro means tC n u nPN by The following is the analog for sequences in LCTVS of an elementary fact well-known for sequences of numbers: Proposition 2.1. Let tS k u kPN be a convergent sequence in an LCTVS X. Then the sequence of Cesàro means tC n u nPN is also convergent, and has the same limit. Proof. Let S " lim kÑ8 S k . If p is a continuous seminorm on X and ǫ ą 0, there is N 1 such that ppS k´S q ă ǫ{2 for k ě N 1 . Set s " ř N 1 k"0 ppS k´S q. Then for n ą N 1 , we have Therefore, if we choose N ą N 1 so large that s n`1 ă ǫ 2 , then for n ě N we have ppC n´S q ă ǫ, so C n Ñ S in X. For a formal series 8 ÿ j"0 x j in an LCTVS X, convergence is defined in the usual way, i.e. the sequence of partial sums converges in X. A formal sum ř αPA x α over a countable index set A, where x α are vectors in an LCTVS X, is said to be absolutely convergent if there exists a bijection τ : N Ñ A such that for every continuous seminorm p on X, the series of non-negative real numbers 8 ÿ j"0 ppx τ pjq q is convergent (see [KK97]). To check that a series is absolutely convergent, we only need to check the convergence of the above series for seminorms in a fixed generating family. If X is a locally compact Hausdorff space, absolute convergence in the Fréchet space CpXq of continuous complex valued functions on X is what is classically called normal convergence (see [Rem91,pp. 104 ff.]). Absolute convergence is typical for many spaces of holomorphic functions, e.g. in the space of holomorphic functions on a Reinhardt domain smooth up to the boundary (see [Daw21]). The following result, whose proof mimics the corresponding result for numbers, shows that absolutely convergent series in LCTVS behave very much like absolutely convergent series of numbers: Proposition 2.2. Let X be a complete LCTVS, and let ř αPA x α be an absolutely convergent series of elements of X. Then the series is unconditionally convergent: there is an s P X such that for every bijection θ : N Ñ A, the series ř 8 j"0 x θpjq converges in X to s. The element s P X is naturally called the sum of the series, and we write s " ř αPA x α . Proof. By definition, there exists a bijection σ : N Ñ A, such that for each continuous seminorm p on X, the series ř 8 j"0 ppx σpjq q converges. Let y j " x σpjq and s k " ř k j"0 y j . Since ř 8 j"0 ppy j q converges, for ǫ ą 0 there exists N 0 P N such that whenever m, ℓ P N with m ě ℓ ě N 0 , ř m j"ℓ`1 ppy j q ă ǫ. Therefore for m ě ℓ ě N 0 , Therefore ts k u is Cauchy sequence in the complete LCTVS X, and therefore converges to an s P X. In order to complete the proof, it suffices to show that for every bijection τ : N Ñ N, the series ř 8 j"0 y τ pjq converges to the same sum s. Let s τ k " ř k j"0 y τ pjq . Choose u P N such that the set of integers t0, 1, 2,¨¨¨, N 0 u is contained in the set tτ p0q, τ p1q,¨¨¨, τ puqu. Then, if k ą u, the elements y 1 ,¨¨¨, y N 0 get cancelled in the difference s k´s τ k and we have pps k´s τ k q ă ǫ by (2.2). This proves that the sequences ts k u and ts τ k u converge to the same limit. So, s τ k Ñ s as k Ñ 8. By a famous theorem of Dvoretzky and Rogers, the converse of the above result fails when X is an infinite dimensional Banach space ([KK97, Chapter 4]). We will also need the notion of the weak (or Pettis) integral of a LCTVS valued function which we will now recall (see [Bou04,p. INT III.32-39] for details). 
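Before recalling the Pettis integral, here is a small numerical illustration of Proposition 2.1 in the simplest possible case $X = \mathbb{C}$, where the only seminorm needed is the absolute value; the particular convergent sequence below is an arbitrary choice made for this sketch.

```python
import numpy as np

# Hypothetical numerical check of Proposition 2.1 in the scalar case X = C:
# if S_k -> S, then the Cesaro means C_n = (S_0 + ... + S_n) / (n + 1) also
# converge to S (here with visibly damped oscillation).

S = 2.0
k = np.arange(0, 5001)
S_k = S + (-1.0)**k / (k + 1.0)          # a convergent, oscillating sequence
C_n = np.cumsum(S_k) / (k + 1.0)         # Cesaro means C_n of the sequence S_k

for n in (10, 100, 1000, 5000):
    print(f"n={n:5d}  |S_n - S| = {abs(S_k[n] - S):.2e}   |C_n - S| = {abs(C_n[n] - S):.2e}")
# Both columns tend to zero, as Proposition 2.1 asserts for any convergent
# sequence in a locally convex space (here the seminorm is the absolute value).
```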
Let K be a compact Hausdorff space and let µ be a Borel measure on K, and let f be a continuous map from K to an LCTVS X. An element x P X is called a Pettis integral of f on K with respect to µ if for all φ P X 1 , where X 1 denotes the dual space of X (the space of continuous linear functionals on X), and the right hand side of (2.3) is an integral of a continuous function. If X is complete, one can show that there exists a unique x P X such that (2.3) holds and we denote the Pettis integral of f on K with respect to µ by ż K f dµ " x. In fact, the integral exists uniquely, as soon as the space X is quasi-complete, i.e. if each bounded Cauchy net in X converges, where a net tx α u αPΓ is bounded if for each continuous seminorm p, the net of real numbers tppx α qu αPΓ is bounded. While there are situations in which this more refined existence theorem for Pettis integrals is useful (e.g. when the space X is a dual space with weak-* topology), in this paper, we only consider integrals in complete LCTVS. Since each Hausdorff TVS has a unique Hausdorff completion, we can define Pettis integrals in any LCTVS, provided we allow the integral to have a value in the completion. If T : X Ñ Y is a continuous linear map of complete LCTVSs, and f : K Ñ X is continuous then we have (2.4) since for any linear functional ψ P Y 1 , using the fact that ψ˝T P X 1 and the definition (2.3) of the Pettis integral. Representation of the torus on an LCTVS X. Let T n " tλ " pλ 1 , ..., λ n q P C n : |λ j | " 1 for every 1 ď j ď nu be the n-dimensional unit torus. With the subspace topology inherited from C n and binary operation defined as pλ, ξq Þ Ñ λ¨ξ " pλ 1 ξ 1 ,¨¨¨, λ n ξ n q, is a compact abelian topological group. For a LCTVS X, and a continuous function f : T n Ñ X, we denote the Pettis integral of f with respect to the Haar measure of T n (normalized to be a probability measure) by ż T n f pλqdλ. Let X be an LCTVS, and let λ Þ Ñ σ λ be a continuous representation of T n on X. Recall that this means that for each λ P T n , the map σ λ is an automorphism (i.e. linear self-homeomorphism) of X as a topological vector space, the map λ Þ Ñ σ λ is a group homomorphism from T n to AutpXq, and the associated map σ : T nˆX Ñ X, σpλ, xq " σ λ pxq is continuous. Given a representation σ of the group T n on an LCTVS X, a continuous seminorm p on X is said to be invariant (with respect to σ) if ppσ λ pxqq " ppxq for all x P X and λ P T n . Proposition 2.3. A representation σ of T n on an LCTVS X is continuous if and only if the following two conditions are both satisfied: (a) the topology of X is generated by a family of invariant seminorms, and (b) for each x P X the function from T n to X given by λ Þ Ñ σ λ pxq is continuous at the identity element of T n . Proof. Assume that σ is continuous, i.e., σ : T nˆX Ñ X is continuous, so for x P X, the function λ Þ Ñ σ λ pxq is continuous on T n , in particular at the identity. Therefore (b) follows. It remains to show that p is continuous. For x, y P X, we have ppxq " sup λPT n q pσ λ pyq`σ λ px´yqq ď sup λPT n pqpσ λ pyqq`qpσ λ px´yqqq ď ppyq`ppx´yq, so that |ppxq´ppyq| ď ppx´yq for all x, y P X, and it follows that the seminorm p is continuous on X if and only if it is continuous at 0. We show that for each ǫ ą 0, there exists a neighbourhood V of 0 in X such that for all λ P T n , q˝σ λ ă ǫ on V . 
For each ξ P T n , since q and σ are continuous, there exists a neighborhood U ξ of ξ in T n and a neighbourhood V ξ of 0 in X such that qpσ λ pxqq ă ǫ for all x P V ξ and λ P U ξ . The collection tU ξ u ξPT n forms an open cover of T n . Since T n is compact, let tU ξ 1 , ..., U ξ k u be a finite subcover of T n corresponding to the open cover. Then for all x P Ş k j"1 V ξ j and λ P T n , we have qpσ λ pxqq ă ǫ. Now assume the two conditions (a) and (b). Let pΓ, ěq be a directed set and let pλ α q αPΓ and px α q αPΓ be nets in T n and X respectively with pλ α , x α q Ñ pλ, xq in T nˆX . We need to show σ λα px α q Ñ σ λ pxq in X, i.e., ppσ λα px α q´σ λ pxqq Ñ 0 for each invariant continuous seminorm p of X. But we have, by the invariance of p: p pσ λα px α q´σ λ pxqq " p´x α´σ λ´1 α λ pxq¯ď p px α´x q`p´x´σ λ´1 α λ pxq¯. The term p px α´x q goes to zero since x α Ñ x, and the term p´x´σ λ´1 α λ pxq¯also goes to zero since λ α Ñ λ and µ Þ Ñ σ µ pxq is continuous at µ " 1. The result follows. 2.3. Abstract Fejér Theorem. Let X be a complete LCTVS and let σ be a continuous representation of T n on X. For each α " pα 1 , ..., α n q P Z n and x P X, define the Pettis integral of the continuous function λ Þ Ñ λ´ασ λ pxq on T n with respect to the Haar probability measure of T n . We will say that π σ α pxq is the α-th Fourier component of x with respect to the representation σ. We use the standard convention with respect to multi-index powers, i.e., λ α " λ α 1 1 . . . λ αn n . We will say that the subspace of X defined as rXs σ α " tx P X : σ λ pxq " λ α¨x for all λ P T n u (2.6) is the α-th Fourier mode of the space X, and we will call the map π σ α the α-th Fourier projection, both with respect to the representation σ. We note the following facts: Proposition 2.4. As above, let X be a complete LCTVS and σ be a continuous representation of T n on X. (1) For each α P Z n , the α-th Fourier mode rXs σ α is a closed σ-invariant subspace of X, and the Fourier projection π σ α is a continuous linear projection from X onto rXs σ α . (2) Let Y be another complete LCTVS, and let τ be a continuous representation of T n on Y , and let j : Y Ñ X be a continuous linear map intertwining σ and τ , i.e., for each λ P T n , j˝τ λ " σ λ˝j . Then for each α P Z n , we have j˝π τ α " π σ α˝j . (2.7) Proof. (2) For x P X, we have where we have used (2.4) to go from the second to the third step. Remark. The inequality (2.8) p pπ σ α pxqq ď ppxq, (2.9) which holds for each α for an invariant seminorm p can be thought of as an abstract form of the familiar Cauchy inequalities of complex analysis. If X is L 1 pTq, the Banach space of integrable functions on T, and if σ is the continuous representation of T on L 1 pTq given by an easy computation shows that for φ P R, and f P L 1 pTq, we have the α-th term of the Fourier series of f . It is therefore natural to define, for x P X, the Fourier series of x with respect to σ to be the formal series x " ÿ αPZ n π σ α pxq. (2.10) For an integer k ě 0, define the k-th square partial sum of the Fourier series in (2.10) by where |α| 8 :" max |α j | , 1 ď j ď n ( . We are ready to state an abstract version of Fejér's theorem. Theorem 2.1. Let σ be a continuous representation of T n on an LCTVS X and let x P X. Then the Cesàro means of the square partial sums of the Fourier series of x (with respect to σ) converge to x in the topology of X. Proof. Write the Cesàro means of the square partial sums of the Fourier series of x as, is the classical Fejér kernel. 
Introducing polar coordinates, λ j " e iθ j on T n , and summing, we obtain the classical representation t is well-known that the Fejér kernel has the properties that (a) F N ě 0 for all N , where Bp1, δq is the ndimensional ball centered at 1 " p1, 1, ..., 1q and radius δ. Let p be a continuous σ-invariant seminorm on X. Then for x P X, we have Since λ Þ Ñ σ λ pxq is continuous and p is a continuous seminorm, there exists δ ą 0 such that ppσ λ pxq´σ 1 pxqq " ppσ λ pxq´xq ă ǫ 2 whenever λ P T n X Bp1, δq. Then on the set T n zBp1, δq we have ppσ λ pxq´xq ď ppσ λ pxqq`ppxq " 2¨ppxq. (2.13) Now from (2.12), we have Since by Proposition 2.3 the topology of X is generated by σ-invariant seminorms, the result follows. Corollary 2.5. Let X be a complete LCTVS and suppose that we are given a continuous representation of the group T n on X. Then if the Fourier series of an element x P X with respect to this representation is absolutely convergent in X, the sum of the Fourier series equals x. Proof. Since the series ř αPZ n π σ α pxq is absolutely convergent, by Proposition 2.2, there exists the sum of the series, i.e., an r x P X such that for every bijection θ : N Ñ Z n we have ř 8 j"0 π σ θpjq pxq " r x. Let S N " ř N j"0 π σ θpjq pxq; then the sequence of partial sums tS N u converges to r x in X. By Proposition 2.1, the sequence of Cesàro means tC N u of the partial sums converges to r x as well. However, by Theorem 2.1, the Cesàro means converge to x. Therefore r x " x. Recapturing Complex analysis We now use the machinery developed in the previous section to give a conceptually simple account of the remarkable regularity properties of holomorphic distributions. So we will pretend that we have forgotten everything about complex analysis, but do remember the rudiments of the theory of distributions, accounts of which can be found in the classic treatises [Hör03,Trè67,Sch66]. First we clarify notations and recall a few facts. The basic spaces. For an open Ω Ă R n , the space DpΩq of test functions is the LF -space of smooth compactly supported complex valued functions, topologized as the inductive limit of the Fréchet spaces D K consisting, for a given compact K Ă Ω, of those elements of DpΩq which have support in K. Recall that a subset B Ă DpΩq is bounded, if and only if there is a compact K Ă Ω such that B Ă D K , and for each nonnegative multi-index α P N n , we have where, here and later, we will use standard multi-index conventions such as The space of distributions D 1 pΩq on Ω is the dual of DpΩq, consisting of continuous linear forms on DpΩq. We denote the value of a distribution T P D 1 pΩq at a test function φ P DpΩq by xT, φy . The space D 1 pΩq is endowed with the usual strong dual topology. Recall that this topology is generated by the family of seminorms In this topology, the space D 1 pΩq is complete. Given a locally integrable function f P L 1 loc pΩq, we can associate to f a distribution T f defined by where dV denotes the Lebesgue measure of R n . Then the locally integrable distribution T f is said to be generated by f , and as usual we grant ourselves the the right to abuse language by identifying the distribution T f P D 1 pΩq with the function f P L 1 loc pΩq. We will use the abbreviations for the basic constant coefficient differential operators of complex analysis, acting on functions or distributions on C n . If Ω Ă C n is an open set, define the space of holomorphic distributions on Ω. 
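As a concrete sanity check of this defining condition, the following small Python sketch approximates the Wirtinger derivative $\partial/\partial\bar{z} = \tfrac{1}{2}(\partial/\partial x + i\,\partial/\partial y)$ by finite differences; the sample functions, base point, and step size are arbitrary choices made for the illustration.

```python
import numpy as np

# Hypothetical finite-difference check of the condition defining holomorphic
# distributions, in the special case of smooth functions: the Wirtinger
# derivative  df/dzbar = (1/2)(df/dx + i df/dy)  vanishes for f(z) = z**2
# but not for the non-holomorphic f(z) = conj(z).

h = 1e-5

def dbar(f, z):
    """Central-difference approximation of (df/dzbar)(z)."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # d/dx
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # d/dy
    return 0.5 * (fx + 1j * fy)

z0 = 0.7 - 0.3j
print("f(z) = z^2      :  |df/dzbar| =", abs(dbar(lambda z: z**2, z0)))        # ~ 0
print("f(z) = conj(z)  :  |df/dzbar| =", abs(dbar(lambda z: np.conj(z), z0)))  # ~ 1
```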
The subspace OpΩq is closed in the space D 1 pΩq by the continuity of the operators D j , and is therefore a complete LCTVS in the subspace topology. 3.2. The main theorem. We begin with some definitions and notational conventions. For λ P T n we denote by R λ the Reinhardt rotation of C n by the element λ, the linear automorphism of the vector space C n given by Let Z denote the union of the coordinate hyperplanes of C n : Recall that a Reinhardt domain Ω Ă C n is said to be log-convex, if whenever z, w P ΩzZ, the point ζ P C n belongs to Ω if there is a t P r0, 1s such that for 1 ď j ď n For α P Z n , let e α be the monomial function given by For a Reinhardt subset Ω Ă C n and 1 ď j ď n, let Ω pjq " tpz 1 , . . . , ζz j , . . . , z n q : z P Ω, ζ P Du, (3.10) where D " t|ζ| ď 1u Ă C is the closed disk. This can be thought of as the result of "completing" Ω in the j-th coordinate. Following [JP08], we say that the Reinhardt domain Ω Ă C n is relatively complete if for each 1 ď j ď n, whenever we have Ω X tz j " 0u " H, we also have Ω pjq Ă Ω. We prove the following well-known structure theorem for holomorphic distributions on Reinhardt domains, as an application of the ideas of Section 2: Theorem 3.1. Let Ω be a Reinhardt domain in C n and let SpΩq " tα P Z n : e α P C 8 pΩqu. (3.11) Let p Ω be the smallest relatively complete log-convex Reinhardt domain in C n that contains Ω. The for each α P SpΩq there is a continuous linear functional a α : OpΩq Ñ C such that for each T P OpΩq, the series ÿ αPSpΩq a α pT qe α (3.12) converges absolutely in C 8 p p Ωq to a function f P C 8 p p Ωq, and f | Ω generates the distribution T . Remarks: (1) For n " 1, all Reinhardt domains in the plane, i.e., disks and annuli, are automatically relatively complete and log-convex. For n ě 2, it is easy to give examples of Reinhardt domains which are not log-convex, or not relatively complete, or perhaps both. For such a domain Ω, it follows that each holomorphic distribution f P OpΩq extends to a holomorphic function F P Op p Ωq. This is the simplest example of Hartogs phenomenon, the compulsory extension of all holomorphic functions from a smaller domain to a larger one, characteristic of domains in several complex variables. (2) The functionals a α are called the coefficient functionals, and the series (3.12) is of course the Laurent series of the function f (the Taylor series if SpΩq " N n ). (3) It is known (by a direct construction of a plurisubharmonic exhaustion) that relatively complete log-convex Reinhardt domains are pseudoconvex. This means that such a domain Ω admits a holomorphic distribution whose Laurent expansion converges absolutely precisely on Ω. As an immediate consequence of Theorem 3.1 we have the following: Corollary 3.1. Let Ω Ă C n be open. Then each distribution T P OpΩq is complexanalytic, i.e., for each p P Ω there is a neighborhood U of p, where the function f generating T is represented by a Taylor series centered at p. Holomorphic functions and maps. A holomorphic distribution T P OpΩq will be called a holomorphic function if it is generated by a C 8 function f . We denote the space of holomorphic functions on Ω temporarily by pO X C 8 qpΩq. Once Theorem 3.1 is proved, it will follow that pO X C 8 qpΩq " OpΩq. Let Ω 1 , Ω 2 be domains in C n . By a holomorphic map Φ : Ω 1 Ñ Ω 2 , we mean a mapping each of whose components is a holomorphic function on Ω 1 . A holomorphic map is a biholomorphism, if it is a bijection, and its set-theoretic inverse is also a holomorphic map. 
(It is of course known that the assumption of the holomorphicity of the inverse map is redundant, but this is a consequence of complex-analyticity, which is exactly what we are proving here). If Φ : Ω 1 Ñ Ω 2 is a biholomorphism, then for a distribution T P D 1 pΩ 2 q, we can define in the usual way the pullback distribution Φ˚T P D 1 pΩ 1 q: if T is generated by a test function f P DpΩ 2 q, then Φ˚T is the distribution generated by the function f˝Φ, and for general T , we extend this definition by continuity, using the density of test functions in D 1 pΩq, see [Hör03, Theorem 6.1.2] for details. Extending the chain rules for the complex derivative operators from test functions to distributions, we have the following relations analogous to [Hör03, formula (6.1.2)] for the Wirtinger derivatives (3.4): (3.13) and D j pΦ˚T q " where as above, Φ : Ω 1 Ñ Ω 2 is a biholomorphism of domains in C n , written in components as Φ " pΦ 1 , . . . , Φ n q, and T P D 1 pΩ 2 q. Therefore, we have the following immediate consequence of (3.14): Proposition 3.2. If Φ : Ω 1 Ñ Ω 2 is a biholomorphism and T P OpΩ 2 q, then we have Φ˚T P OpΩ 1 q. If f P pO X C 8 qpΩ 2 q, then Φ˚f P pO X C 8 qpΩ 1 q. Therefore the spaces of holomorphic distributions and functions are invariant under pullbacks under biholomorphic maps. In fact, in the proof of Theorem 3.1, only two simple special cases of Proposition 3.2 noted below are needed: (1) Translation by a vector a P C n is the map M a : which is obviously a biholomorphic automorphism of C n . For a domain Ω Ă C n , we therefore have a pullback isomorphism of spaces of holomorphic distributions Må : OpM a pΩqq Ñ OpΩq. This can be thought of as an expression of the fact that the operator B " pD 1 , . . . , D n q is translation invariant. (2) The Reinhardt rotations R λ of (3.6) are clearly biholomorphic automorphisms of Reinhardt domains, and the pullback operation induces a representation of the group T n on the space of holomorphic distributions (see (3.32) below). A domain Ω is said to be Reinhardt centered at a for an a P C n if there is a Reinhardt domain Ω 0 such that Ω " M a pΩ 0 q. Every open set in C n has local Reinhardt symmetry, in the sense that each point has neighborhood which is a Reinhardt domain centered at that point. 3.4. Mean value property of the monomials. It is easily verified by direct computation that for each α, we have e α P pO X C 8 qpC n zZq, where e α is the monomial of (3.9). We now note a remarkable symmetry property (the Mean-Value Property) of the functions e α : Lemma 3.3. Let α P Z n , let z P C n zZ, and let ψ P DpC n zZq be a test function which has radial symmetry in each variable around the point z, i.e., there are functions ρ 1 , . . . , ρ n P DpRq such that ψpζq " ś n j"1 ρ j p|ζ j´zj |q, and whose integral is 1: ż Then we have xe α , ψy " e α pzq (3.17) Proof. First consider the case n " 1. If α ě 0, the formula ż 2π 0 pz`re iθ q α dθ " 2πz α (3.18) holds for r ą 0, by expanding the integrand using the binomial formula, and integrating the finite sum term by term. The formula (3.18) also holds for α ă 0, provided z " 0, and 0 ă r ă |z|. This follows on noticing that we have an infinite series expansion where by the M -test, the convergence is uniform in θ, and then integrating the series on the right term by term. Now ψpζq " ρp|ζ´z|q and ψ is supported in some disc Bpz, Rq Ă C˚" Czt0u, which means R ă |z|. Then the normalization (3.16) is equivalent to which establishes the result for n " 1. 
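Before the general case, the one-variable identity (3.18), $\int_0^{2\pi} (z + re^{i\theta})^\alpha\, d\theta = 2\pi z^\alpha$, is easy to test numerically; the following sketch averages the monomial over a circle and compares with its value at the center (the point $z$, the radius, and the exponents are arbitrary choices made for the illustration, with $r < |z|$ so that negative exponents are allowed).

```python
import numpy as np

# Hypothetical numerical check of (3.18): the average of (z + r e^{i theta})^alpha
# over theta in [0, 2pi) equals z^alpha, for all r > 0 when alpha >= 0 and for
# 0 < r < |z| when alpha < 0.

theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
z, r = 1.5 + 0.8j, 0.6                      # note r < |z|, so alpha < 0 is allowed

for alpha in (-3, -1, 0, 2, 5):
    circle_average = np.mean((z + r * np.exp(1j * theta)) ** alpha)
    print(f"alpha={alpha:+d}:  circle average = {np.round(circle_average, 6)},"
          f"  z^alpha = {np.round(z**alpha, 6)}")
# The two columns agree to quadrature accuracy, which is the content of
# Lemma 3.3 for n = 1 (and, after radial averaging, for the test functions psi).
```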
In the general case, notice that for α P Z n and ζ P C n zZ we have e α pζq " ś n j"1 e α j pζ j q. Therefore, since C n zZ " C˚ˆ¨¨¨ˆC˚, xe α , ψy " ż C n zZ n ź j"1 e α j pζ j qρ j p|ζ j´zj |q dV pζq " n ź j"1 ż C˚e α j pζ j qρ j p|ζ j´zj |q dV pζ j q " n ź j"1 e α j pz j q " e α pzq. If Ω Ă C n is a Reinhardt domain, then for each λ, the map R λ of (3.6) is a biholomorphic automorphism of Ω. Define a representation τ of T n on the space DpΩq of test functions by Recall that a net tφ j u converges in the space DpΩq, if each φ j is supported in a fixed compact K Ă Ω, and the net tφ j u converges in the Fréchet space D K , i.e. all partial derivatives converge uniformly on K. Using this it is easily verified that τ is a continuous representation of T n in the space DpΩq. Notice that C n zZ is a Reinhardt domain. For a positive integer k we define the norm }ψ} k with respect to the polar coordinates for a function ψ in DpC n zZq : where the tuples r P pR`q n , θ P R n are the polar coordinates on C n zZ specified by z j " re iθ j , and B β Br β , B γ Bθ γ are partial derivatives operators in the polar coordinates defined as in (3.2). From the formulas we see that for 0 ă ̺ 1 ă ̺ 2 ă 8, there is a constant B k p̺ 1 , ̺ 2 q such that for each compact set K such that K Ă tz P C n : (3.23) We will need the following elementary estimate: Proposition 3.4. Let τ be the representation of T n on C n zZ given by (3.20). For integers m, k ě 0, and a compact K Ă C n zZ there is a constant C ą 0 such that for each ψ P D K and each α P Z n , such that |α j | ě 2k for 1 ď j ď n, we have Proof. For r P pR`q n , α P Z n , define the α-th Fourier coefficient of ψ by p ψpr, αq " 1 p2πq n ż r0,2πs n ψpre iθ qe´i xα,θy dθ, (3.24) where re iθ " pr 1 e iθ 1 , . . . , r n e iθn q P C n zZ, xα, θy " ř n j"1 α j θ j and dθ " dθ 1 . . . dθ n is the Lebesgue measure. We will also write pψq^for p ψ whenever convenient. We note the following properties: (1) We clearly have sup rPpR`q nˇp ψpr, αqˇˇď sup C n zZ |ψ| . (3.25) (2) If α " 0, by a standard integration by parts argument, for each ℓ P N n , where, as usual, we set for ζ P C n and β P N n , ζ β " ζ β 1 1 . . . ζ βn n . Now for re iθ P C n zZ the evaluation DpC n zZq Ñ C, ψ Þ Ñ ψpre iθ q is continuous, so using (2.4) we see that for each α P Z n and each ψ P DpC n zZq, we have π τ α ψpre iθ q " ż T λ´αpτ λ ψqpre iθ qdλ " 1 p2πq n ż r0,2πs n e´i xα,φy ψpe iφ¨r e iθ qdφ " e ixα,θy¨p ψpr, αq, where in the last step we make a change of variables in the integral from φ to φ`θ. Therefore, if β, γ P N n with β j " α j for 1 ď j ď n, then Now in the above formula, if we have |β|`|γ| ď k, and let ℓ " pk`m, . . . , k`mq " pk`mq1, where 1 P N n is the multi-index each of whose n entries is 1, we obtain, after taking absolute values of both sidešˇˇˇB Bθ pm`kq1 B β Br β ψ¸^pr, α´βqˇˇˇˇ. (3.29) In the first factor on the right hand side, we have by hypothesis for each j that |α j | ě 2k and 0 ď β j , γ j ď k. Therefore, we have |α j´βj | ě 1 2 |α j |. The first factor can be estimated as (3.30) Using (3.25), the second factor can be estimated ašˇˇˇˇ˜B where in the last step we use the norm introduced in (3.21), and used the fact that |pm`kq1`β| " pm`kqn`|β| ď pm`kqn`nk, since each β j ď k. Combining (3.29), (3.30) and (3.31) we see thaťˇˇˇB from which, taking a supremum on the left hand side over re iθ P CzZ, and remembering that |β|`|γ| ď k, we conclude that Recall that K is a compact set in C n zZ such that ψ P D K , i.e. , the support of ψ is contained in K. 
Suppose that ̺ 1 , ̺ 2 are such that K Ă A, where A is the product of annuli A " tz P C n : ̺ 1 ă |z j | ă ̺ 2 , 1 ď j ď nu. Formulas (3.24) and (3.28) show that the compact support of π τ α ψ is also contained in the set A. Therefore, passing to the equivalent C k -norms using (3.23), we have which completes the proof of the result. Let Ω be a Reinhardt domain. Since for each λ P T n , the map R λ maps Ω biholomorphically (and therefore diffeomorphically) to itself. Consequently, we can define a representation of T n on the space of distributions D 1 pΩq using the pullback operation σ λ pT q " pR λ q˚pT q, T P D 1 pΩq. (3.32) The representation σ λ is closely related to the representation τ λ introduced in (3.20). Clearly, DpΩq is an invariant (dense) subspace of σ, on which σ restricts to τ . So σ is simply the extension of the representation τ of (3.20) by continuity to the space of distributions. The representation σ on D 1 pΩq is also "dual" to the representation τ on DpΩq: xσ λ pT q, φy " xpR λ q˚pT q, φy " so that σ λ is the transpose of the map τ λ´1 . The second equality in the chain may be proved by change of variables when T is a test function, and then using density. Proposition 3.5. The representation σ of T n on D 1 pΩq defined in (3.32) is continuous. Proof. Let tλ j u be a sequence in T n converging to the identity element. Since pointwise convergence of a sequence in D 1 pΩq on each test function implies convergence in the strong dual topology (see [Trè67]), to show that σ λ j pT q Ñ T in D 1 pΩq we need to show that for φ P DpΩq we have @ σ λ j pT q, φ D Ñ xT, φy. But by (3.33), @ σ λ j pT q, φ D " A T, τ λ´1 j φ E and by the continuity of the representation τ , to follows that σ λ j pT q Ñ T in D 1 pΩq. To complete the proof, by Proposition 2.3, we need to show that the topology of Ω is generated by a collection of σ-invariant seminorms. Let B be a bounded subset of DpΩq. Let r B " tσ λ pφq : λ P T n , φ P Bu. Then it is clear from (3.1) that r B is also a bounded subset of DpΩq. With notation as it (3.3), it is clear that for each T P D 1 pΩq, we have p B pT q ď p r B pT q. Therefore the topology of D 1 pΩq is generated by the family of continuous invariant seminorms tp r B u. 3.7. Fourier series of distributions on Reinhardt domains. Let Ω be a Reinhardt domain and let T P D 1 pΩq be a distribution. Then by the results of Section 2 we can expand T in a formal Fourier series with respect to the representation σ of (3.32). For simplicity of notation, whenever there is no possibility of confusion, we denote the Fourier components of T by T α " π σ α pT q, α P Z n , (3.34) so that the Fourier series of T is written as T " ř αPZ n T α . We notice the following properties of the Fourier components: Proposition 3.6. Let Ω and T be as above and let α P Z n . (a) If the distribution T lies in one of the linear subspaces OpΩq, C 8 pΩq, pO X C 8 qpΩq or DpΩq of D 1 pΩq, the Fourier component T α also lies in the same subspace. (b) If U is another Reinhardt domain such that U Ă Ω, we have (3.35) Proof. All these are consequences of Part 2 of Proposition 2.4. To see Part (a), let X be the space D 1 pΩq and let Y be one of the spaces OpΩq, C 8 pΩq or DpΩq. 
Each of these has a natural structure of a complete LCTVS (the space OpΩq as a closed subspace of D 1 pΩq, C 8 pΩq in its Fréchet topology and DpΩq in its LF -topology), and for each one of these topologies, the representation σ restricts to a continuous representation (in the stronger topology of the subspace), which we will call τ (this coincides with the τ introduced in (3.20)). If j is the inclusion map of any one of these subspaces into D 1 pΩq, then clearly it is continuous (where the subspace has the natural topology). Therefore, by (2.7), we have j pπ τ α T q " π σ α pjpT qq. By definition, the right hand side is precisely T α , i.e. the Fourier component of T as an element of D 1 pΩq. Notice that π τ α T P Y , since it is the Fourier component of T as an element of Y . Since j is the inclusion of Y in X, it now follows that T α P Y as well. Since the result is true for OpΩq and C 8 pΩq it follows for pO X C 8 qpΩq. For part (b), in part 2 of Proposition 2.4, let X " D 1 pU q with representation defined in (3.32), which we call σ 1 for clarity, and let Y " D 1 pΩq with the representation σ and let j : Y Ñ X be the restriction map of distributions, which is clearly continuous and intertwines the two representations. Then (2.7) takes the form j˝π σ α " π σ 1 α˝j . Applying this to the distribution T , we see that T α | U " j pπ σ α T q " π σ 1 α pjpT qq " pT | U q α . We establish the following lemma: Lemma 3.7. Let Ω Ă C n be a Reinhardt domain, and let A " tT P OpΩq : each Fourier component T α belongs to C 8 pΩqu. Suppose that for each T P A the Laurent series ř αPZ n T α converges absolutely in the Fréchet space CpΩq . Then the Laurent series of each element of A converges absolutely in the space C 8 pΩq. Proof. For a function f P CpΩq and a compact set K, let p K be the seminorm: Notice that the family tp K : K Ă Ω compactu is a generating family of seminorms for CpΩq, and the hypothesis on A can be expressed as ÿ αPZ n p K pT α q ă 8, (3.37) for each compact K Ă Ω. The Fréchet topology of C 8 pΩq is generated by the seminorms p K,L , where K ranges over compact subsets of Ω, L is a constant coefficient differential operator on C n , and p K,L pf q " sup K |Lf | . We will continue to write p K for the seminorm (3.36) corresponding to L " 1. From the distributional chain rule (3.13), we have for each j that D j σ λ pT q " D j pRλT q " pD j R λ q¨RλpD j T q " λ j¨σλ pD j T q. Let β P N n , and denote D β " D β 1 1 . . . D βn n . Then, from the above we have where pD β T q α´β " π σ α´β pD β T q. It follows that all Fourier coefficients of the distribution D β T are in C 8 pΩq, so D β T P A . Therefore, It follows therefore that ÿ By δ on C n can be rewritten as a polynomial in the commuting differential operators D 1 , . . . , D n , D 1 , . . . , D n , and by hypothesis the derivatives D j are zero. Therefore, it follows that ř α p K,L pT α q ă 8. Consequently the series ř α T α is absolutely convergent in C 8 pΩq. 3.8. Proof of Theorem 3.1: case of annular domains. We now prove a weaker version of Theorem 3.1. A Reinhardt domain Ω Ă C n will be called annular if it is disjoint from the coordinate hyperplanes: where Z is as in (3.7). We have the following: Proposition 3.8. Let Ω Ă C n zZ be an annular Reinhardt domain in C n . 
Then for each α P Z n there is a continuous linear functional a α : OpΩq Ñ C such that for each T P OpΩq, the series of terms in C 8 pΩq given by ř αPZ n a α pT qe α converges absolutely in C 8 pΩq to a function f , and this C 8 -smooth function generates the distribution T . Proof. Let T P OpΩq and let T α be its α-th Fourier component as in (3.34), and let S " e´α¨T α . The distributional Leibniz rule D j pf U q "`D j f˘¨U`f¨`D j U˘, f P C 8 pΩq and U P D 1 pΩq, (3.39) shows that S P OpΩq, since T α P OpΩq by Part (1) of Proposition 3.6. By Part (1) of Proposition 2.4, the Fourier component T α lies in the α-th Fourier mode of OpΩq, i.e., σ λ pT α q " λ α¨T α , so σ λ pSq " λ´αe´α¨λ α T α " S. Therefore, using polar coordinates z j " r j e iθ j , with ǫ j phq " p1, . . . , 1, e ih , . . . , 1q P T n (with e ih in the j-th position) we have Since D j S " 0, and in polar coordinates we have we see that B Br j S " 0 also. Now from the representations (3.22) we have that B Bx j S " 0 and B By j S " 0 in D 1 pΩq for 1 ď j ď n. Therefore, by a classical result in the theory of distributions (see [Sch66, Theorème VI, pp. 69ff.]) we have that S is locally constant on Ω, or more precisely, S is generated by a locally constant function. Since Ω is connected, it follows that there is a constant a α pT q P C which generates the distribution S, so that T α " a α pT qe α . This can also be written as a α pT q " e´α¨T α " e´α¨π σ α pT q. Since multiplication by the fixed C 8 -function e´α is a continuous map on D 1 pΩq (and therefore on the subspace OpΩq), and π σ α : OpΩq Ñ OpΩq is continuous by part 1 of Proposition 2.4, it follows that a α : OpΩq Ñ OpΩq is continuous. But we know that a α takes values in the subspace of distributions generated by constants. By a well-known theorem, on a finite-dimensional topological vector space, there is only one topology. Therefore the topology induced from OpΩq on the subspace of constants coincides with the natural topology of C. Therefore a α : OpΩq Ñ C is continuous. By the above, the Fourier components T α of the Fourier series of T actually lie in pO X C 8 qpΩq, and thus any partial sum (i.e. sum of finitely many terms) of this series also lies in pO X C 8 qpΩq. We will now show that the series (3.34) is absolutely convergent in the topology of C 8 pΩq, in the sense of Proposition 2.2. Let K Ă Ω be compact. Let δ ă distpK, C n zΩq, and let Ψ P DpC n q be a test function such that supportpΨq Ă Bp0, δq, ż C ΨpζqdV pζq " 1, and Ψpζq " Ψp|ζ 1 | , . . . , |ζ n |q for ζ P C n , i.e. Ψ is radial around the origin in each complex coordinate. For z P K, define ψ z pζq " Ψpζ´zq, so that ψ z is radially symmetric around z in each complex direction. Therefore, for z P K, we have, by Lemma 3.3 for each α that xT α , ψ z y " xa α pT qe α , ψ z y " a α pT qz α . By the continuity of the mapping T : DpΩq Ñ C, there is a constant C ą 0 and an integer k ě 0 such that for all ψ P DpΩq with support in K, we have that |xT, ψy| ď C¨}ψ} C k pΩq . (3.41) Also notice that for any ψ P DpΩq we have (3.42) In the following estimates, let C denote a constant that depends only on the compact K and the distribution T , and may have different values at different occurrences. By combining (3.41) with (3.42), we see that for z P K, φ P DpΩq and each α with |α| ě 2k , using Lemma 3.3 recalling that the local order k depends only on the distribution T and the compact set K. Now, for each z we have }ψ z } C 2nk`2n " }Ψ} C 2nk`2n by translation invariance of the norm. 
Therefore, for each α ě 2k we obtain the estimate for the seminorm (with the same convention as above on the constant C) (3.43) Clearly therefore ř αPZ n p K pT α q ă 8. By Lemma 3.7, the series ř α T α converges absolutely in C 8 pΩq. Let f be its sum. Since the inclusion C 8 pΩq Ă D 1 pΩq is continuous, we see easily that the Fourier series ř α T α of T P D 1 pΩq converges absolutely in D 1 pΩq. Now it follows from Corollary 2.5 that the sum of the series is T . Thus T is the distribution generated by f , and this completes the proof of the proposition. Smoothness of holomorphic distributions. Corollary 3.9. Let Ω Ă C n be an open set. Then each holomorphic distribution on Ω is generated by a C 8 -smooth function, i.e. OpΩq " pO X C 8 qpΩq. Proof. Let T P OpΩq. It suffices to show that T is a smooth function in a neighborhood of each point p P Ω. Without loss of generality, thanks to the invariance of the space of holomorphic functions under translations (Proposition 3.2 and comments after it) , we can assume that p " 0. Let r ą 0 be such that the polydisc P prq given by P prq " t|z j | ă r, 1 ď j ď nu Ă C n is contained in Ω. We will show that T is a smooth function on P 1 " P p r 6 q, i.e., T P pO X C 8 qpP 1 q. By the previous section, holomorphic distributions on an annular Reinhardt domain are smooth. By translation invariance, the same is true if the annular domain is centered at a point a P C n different from the origin, i.e. for a domain of the form M a pAq where A is an annular Reinhardt domain centered at the origin, with M a as in (3.15) is the translation by a. Indeed, let a "`r 3 , . . . , r 3˘, and set Q " M a`P p r 2 qzZ˘, so that Q is the annular Reinhardt domain centered at a given by Q " ! z P C n : 0 㡡z j´r 3ˇˇˇă for which it is easy to verify that P 1 Ă Q Ă P prq. Now since T | Q P pO X C 8 qpQq is a holomorphic function, the result follows. 3.10. Extension to the log-convex hull. In this section, we prove the following special case of Theorem 3.1 for annular Reinhardt domains: Proposition 3.10. Let Ω be an annular Reinhardt domain in C n . Then the Laurent series of a distribution T P OpΩq converges absolutely in C 8 p p Ωq where p Ω is the smallest log-convex annular Reinhardt domain containing Ω. Introduce the following notation: for a compact subset K Ă C n zZ " pC˚q n , we let (3.44) Lemma 3.11. Let Ω be an annular Reinhardt domain in C n and let p Ω be the log-convex annular Reinhardt domain containing Ω. Then for each point p P p Ω there is a compact neighborhood M of p in p Ω and a compact subset K Ă Ω such that M Ă p K. Proof. Let Λ : C n zZ Ñ R n be the map Λpz 1 , . . . , z n q " plog |z 1 | , . . . , log |z n |q. In other words Λppq lies in the convex hull of the m-element set tΛpq 1 q, . . . , Λpq m qu. (By a theorem of Carathéodory, m ď n`1, but we do not need this fact.) We will show that we can "fatten" one the points in this set so that the convex hull of the fattened set contains a neighborhood of the point Λppq. Without loss of generality, we can assume that t m " 0. Consider the affine self-map F of R n given by which is clearly an affine automorphism of R n . Therefore if L is a compact neighborhood of Λppq in R n , then F´1pLq is a compact neighborhood of Λpq m q, which after shrinking can be taken to be contained in ΛpΩq. Let K 0 " Λ´1pF´1pLqq, and set K " tq 1 , . . . , q m´1 u Y K 0 . We claim that p K contains the compact neighborhood Λ´1pLq of the point p. Let ζ P Λ´1pLq. 
Then there is a point z P K 0 such that Λpζq " ř m´1 k"1 t k Λpq k q`t m Λpzq, i.e. for each 1 ď j ď n we have For simplicity of writing, introduce new notation as follows: we let z k " q k for 1 ď k ď m´1 and z m " z. Then for a multi-index α P Z n and the point ζ P Λ´1pLq we have this shows that ζ P p K, and completes the proof. Proof of Proposition 3.10. By Lemma 3.7 it is sufficient to show that the Laurent series ř α a α pT qe α (whose existence was proved in Proposition 3.8, and whose terms lie in pO X C 8 qpΩq) converges absolutely in Cp p Ωq. It is sufficient to show that each point p of p Ω has a compact neighborhood M such that we have ÿ α p M pa α pT qe α q ă 8. Now by Lemma 3.11, there is a compact subset K Ă Ω such that M Ă p K. We observe, using the definition (3.44) that so that ÿ α p M pa α pT qe α q ď ÿ α p K pa α pT qe α q ă 8, using the estimate (3.43) which holds since K Ă Ω. 3.11. Laurent series on non-annular Reinhardt domains. We now consider the case of a general (i.e. possibly non-annular) Reinhardt domain Ω. We begin by showing that the only monomials that occur in the Laurent series are the ones smooth on Ω : Proposition 3.12. Let Ω be a (possibly non-annular) Reinhardt domain in C n . Then for each α P SpΩq (where SpΩq is as in (3.11)), there is a continuous linear functional a α : OpΩq Ñ C such that the Fourier series of a T P OpΩq is of the form T " ÿ αPSpΩq a α pT qe α . (3.48) Proof. Since T P OpΩq we have T | ΩzZ P OpΩzZq. Notice that ΩzZ is an annular Reinhardt domain and therefore the proof of Proposition 3.8 shows that the Fourier components are given for α P Z n by`T | ΩzZ˘α " p a α pT | ΩzZ qe α , where p a α : OpΩzZq Ñ C is the α-th coefficient functional associated to the domain ΩzZ (see Proposition 3.8). Thanks to (3.35), we know that`T | ΩzZ˘α " pT α q| ΩzZ , so we have pT α q| ΩzZ " p a α pT | ΩzZ qe α . By Proposition 3.6, the Fourier components of a holomorphic distribution are holomorphic distributions, so we have T α P OpΩq since T P OpΩq. Therefore, since by Corollary 3.9, each holomorphic distribution is a smooth function, we know that T α P C 8 pΩq. Therefore, the function T α | ΩzZ " p a α pT | ΩzZ qe α admits a C 8 extension through Z. If p a α pT | ΩzZ q " 0 for some T P OpΩq, this means that e α itself admits a C 8 extension to Ω, i.e., α P SpΩq, where SpΩq is as in (3.11). Therefore if α R SpΩq, the corresponding term in the Laurent series of T | ΩzZ vanishes, and the series takes the form T | ΩzZ " ÿ αPSpΩq p a α pT | ΩzZ qe α . Now for each α P SpΩq define the map a α : OpΩq Ñ C by a α pT q " p a α pT | ΩzZ q. Since both the restriction map and the coefficient functional p a α are continuous, it follows that a α is continuous. The extension of e α from ΩzZ to Ω is still the monomial e α , so the Fourier series of T in OpΩq is of the form (3.48). Notice that each term of (3.48) is in C 8 pΩq and by Proposition 3.8, the series converges absolutely in C 8 pΩzZq when its terms are restricted to ΩzZ, i.e., it converges uniformly along with all derivatives on those compact sets in Ω which do not intersect Z. 3.12. Extension to the relative completion. Given a Reinhardt domain D Ă C n , its relative completion is the smallest relatively complete domain containing D (see above before the statement of Theorem 3.1 for the definition of relative completeness of a domain.) Notice that the relative completion of D coincides with the unions of the sets D pjq of (3.10), where the union is taken over those j for which D X tz j " 0u is nonempty. 
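The effect of completing a Reinhardt domain in one coordinate can be seen numerically. The sketch below works only with the moduli $(r_1, r_2) = (|z_1|, |z_2|)$ and uses the classical Hartogs figure as an arbitrarily chosen example: it meets $\{z_2 = 0\}$, and completing it in the $z_2$-coordinate (the set $H^{(2)}$ of (3.10)) fills in the whole unit bidisc, which is why the Hartogs figure is not relatively complete and why, by Proposition 3.13, every holomorphic function on it extends.

```python
import numpy as np

# Hypothetical sketch of relative completion for the Hartogs figure
# H = {|z1| < 1, 1/2 < |z2| < 1}  U  {|z1| < 1/2, |z2| < 1},
# represented by a predicate on the moduli (r1, r2).

def in_H(r1, r2):
    return (r1 < 1.0 and 0.5 < r2 < 1.0) or (r1 < 0.5 and r2 < 1.0)

def in_H2(r1, r2, samples=200):
    """(r1, r2) lies in H^(2) iff (r1, s) lies in H for some s >= r2."""
    return any(in_H(r1, s) for s in np.linspace(r2, 1.0, samples))

grid = np.linspace(0.0, 0.999, 120)
gained = sum(1 for r1 in grid for r2 in grid
             if in_H2(r1, r2) and not in_H(r1, r2))
bidisc_minus_H = sum(1 for r1 in grid for r2 in grid if not in_H(r1, r2))

print("grid points gained by completing in z2:", gained)
print("grid points of the bidisc missing from H:", bidisc_minus_H)
# The two counts agree (up to boundary effects): the relative completion of
# the Hartogs figure is the unit bidisc.
```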
The following general proposition, which encompasses classical examples of the Hartogs phenomenon, e.g. in the "Hartogs figure", will be needed to complete the proof of Theorem 3.1. Proposition 3.13. Let D be a Reinhardt domain. Then each holomorphic function on D extends holomorphically to the relative completion of D. Proof. We may assume that n ě 2 since each Reinhardt domain in the plane is automatically relatively complete. If for each j, the intersection DXtz j " 0u " H, then the domain D is annular, and its relative completion is itself so there is nothing to prove. Suppose therefore that there is 1 ď j ď n such that D X tz j " 0u " H. We need to prove that each function in OpDq extends holomorphically to D pjq . Without loss of generality we can assume that j " 1. Write the coordinates of a point z P C n as z " pz 1 , r zq where r z P C n´1 . Let f P OpDq. By Proposition 3.12 f admits a Laurent series representation f " ÿ αPSpDq a α pf qe α with SpDq as in (3.11), and the series converges absolutely in C 8 pDzZq. To prove the proposition it suffices to show that the series in fact converges absolutely in C 8 pD p1q q, which by Lemma 3.7 is equivalent to the following: for each point p P D p1q there is a compact neighborhood M of p in D p1q such that ÿ αPSpDq p M pa α pf qe α q ă 8. (3.49) We claim the following: for each p P D p1q there is a compact neighborhood M of p and a compact subset K Ă D such that p M pe α q ď p K pe α q for each α P SpDq. Assuming the claim for a moment, we see that we have p M pa α pf qe α q " |a α pf q| p M pe α q ď |a α pf q| p K pe α q " p K pa α pf qe α q, so that (3.49) follows since by (3.43) we do have ř αPSpDq p K pa α pf qe α q ă 8 for a compact subset K of D. Since each point p P D p1q has such a neighborhood this completes the proof, modulo the claim above. To establish the claim, we may assume that p R D, since otherwise there is nothing to prove. Therefore p P D p1q zD, and consequently, there is a z P D and a λ P D such that p " pλz 1 , r zq, where r z " pz 2 , . . . , z n q. Let K be a compact neighborhood of z in D of the form K " K 1ˆr K, where r K Ă C n´1 and K 1 " tζ P C : |ζ´z 1 | ď ǫu is a closed disk. Let L 1 be the disk tζ P C : |ζ| ď |z 1 |`ǫu, so that K 1 Ă L 1 , and γ " z 1`z 1 |z 1 | ǫ is a point of maximum modulus (i.e. maximum distance from the origin) in both sets K 1 and L 1 . Note that We set M " L 1ˆr K. We will show that these sets K, M satisfy the conditions of the claim. Now let α P SpDq. Since D X tz 1 " 0u " H, it follows that α 1 ě 0. Let r α " pα 2 , . . . , α n q P Z n´1 , and set B " sup r wP r Kˇr w r αˇw here r w " pw 2 , . . . , w n q P C n´1 andˇˇr w r αˇ" |w α 2 2 |¨¨¨|w αn n |, so that we have p M pe α q " sup wPM |e α pwq| " sup On the other hand Consequently, in fact we have p M pe α q " p K pe α q, and the claim is proved, thus completing the proof. 3.13. End of proof of Theorem 3.1. It only remains to put together the various pieces to note that all parts of Theorem 3.1 have been established. If Ω is annular, i.e. Ω has empty intersection with the set Z of (3.7), then Proposition 3.8 takes care of the complete proof. When Ω is allowed to be non-annular, we see from Proposition 3.12 that the Laurent series representation has only monomials which are smooth functions on Ω. Now it is not difficult to see that the smallest log-convex relatively complete Reinhardt domain p Ω containing Ω can be constructed from Ω in two steps. First, we construct the log-convex hull Ω 1 of the set ΩzZ. 
Notice that ΩzZ and Ω 1 are both annular. The second step consists of constructing the relative completion of the domain Ω 1 , thus obtaining the domain p Ω. Now by Proposition 3.10, the Laurent series of a holomorphic distribution on Ω converges absolutely in C 8 pΩ 1 q. Applying Proposition 3.13 (with D " Ω 1 ), we see that the Laurent series actually converges absolutely in the space C 8 p p Ωq. The sum of this series is the required holomorphic extension of a given holomorphic distribution on Ω. The result has been completely established. Missing monomials In Theorem 3.1, we considered the natural representation of the torus T n on the space OpΩq of holomorphic functions on a Reinhardt domain Ω. In applications, one often deals with a subspace of functions Y Ă OpΩq such that (1) the subspace Y is invariant under the natural representation σ, i.e., if f P Y then f˝R λ P Y for each λ P T n , where R λ is as in (3.6), (2) there is a locally convex topology on Y in which it is complete, and such that the inclusion map j : Y ãÑ OpΩq is continuous, and (3) when Y is given this topology, the representation σ restricts to a continuous representation on Y . The locus classicus here is the theory of Hardy spaces on the disc. We can make the following elementary observation: Proposition 4.1. Let Y and σ be as above, and set SpY q " tα P Z n : e α P Y u. Then the Laurent series of a function f P Y is of the form f " ÿ αPSpY q a α pf qe α . Proof. It suffices to show that if for α P Z n , if the monomial e α does not belong to Y , we have that a α pf q " 0, where a α pf q is the Laurent coefficient of f as in (3.12). By part (1) of Proposition 2.4, we see that π σ α pf q P Y . By part (2) of the same proposition, taking X to be the space OpΩq, for f P Y we have that π σ α pjpf qq " jpπ σ α pf qq, and from the description of the Fourier components of a holomorphic function in the proof of Theorem 3.1, we see that π σ α pjpf qq " a α pjpf qqe α " a α pf qe α . Therefore jpπ σ α pf qq " a α pf qe α , which contradicts the fact that π σ α pf q P Y unless a α pf q " 0. This simple observation can be called the "principle of missing monomials", since it says that certain monomials cannot occur in the Laurent series of the function f . It can be thought to be the reason behind several phenomena associated to holomorphic functions. We consider two examples: (1) Bergman spaces in Reinhardt domains: Let Ω be a Reinhardt domain in C n and let λ ą 0 be a radial weight on Ω, i.e., for z P Ω we have λpz 1 , . . . , z n q " λp|z 1 | , . . . , |z n |q. The L p -Bergman space A p pΩ, λq is defined to be the subspace of the weighted L p -space L p pΩ, λq consisting of holomorphic functions, where the norm on the weighted L p -space is given by where dV is the Lebesgue measure. It is well-known that A p pΩ, λq is a closed subspace of the Banach space L p pΩ, λq and therefore a Banach space ( [DS04]). It is also easy to see (using standard facts about L p -spaces) that the natural representation σ of T n on L p pΩ, λq is continuous for 1 ď p ă 8, so it follows that the representation on A p pΩ, λq is also continuous. It now follows from Proposition 4.1 that the Laurent series expansion of a function f P A p pΩ, λq consists only of terms with monomials e α such that e α P L p pΩ, λq. The case λ " 1 of this fact was deduced by a different argument in [CEM18]. 
(2) Extension of holomorphic functions smooth up to the boundary: Let Ω ⊂ ℂ^n be a Reinhardt domain such that the origin (which is the center of symmetry) is on the boundary of Ω. A classic example of this is the Hartogs triangle {|z_1| < |z_2| < 1} in ℂ². In [Cha19], the following extension theorem was proved: there is a complete Reinhardt domain V ⊂ ℂ^n such that Ω ⊂ V and each function in the space O(Ω) ∩ C^∞(Ω̄) of holomorphic functions on Ω smooth up to the boundary extends holomorphically to the domain V. This was noted for the Hartogs triangle in [Sib75], where V is the unit bidisk. To deduce this from the principle of missing monomials, it suffices to consider the case when Ω is bounded. We notice that O(Ω) ∩ C^∞(Ω̄) is a Fréchet space in its usual topology of uniform convergence on Ω̄ together with all partial derivatives. A generating family of seminorms for this topology is given by the norms {p_k}, where p_k(f) = ‖f‖_{C^k(Ω̄)}. The natural representation σ on O(Ω) restricts to a continuous representation on O(Ω) ∩ C^∞(Ω̄). Therefore, the principle of missing monomials applies, and the only monomials e_α that occur in the Laurent expansion of a function f ∈ O(Ω) ∩ C^∞(Ω̄) are such that e_α ∈ O(Ω) ∩ C^∞(Ω̄). For such a multi-index α, write α = β − γ, where β_j = max(α_j, 0) and γ_j = max(−α_j, 0). Then β, γ ∈ ℕ^n, and we can apply the differential operator (∂/∂z)^β to obtain

(∂/∂z)^β e_α(z) = (∂/∂z)^β (e_β(z) e_{−γ}(z)) = β! z^{−γ}.

Since e_α ∈ C^∞(Ω̄), this means that e_{−γ} = z^{−γ} ∈ C^∞(Ω̄), which is possible only if γ = 0. Thus α ∈ ℕ^n, and the Laurent series of each function f ∈ O(Ω) ∩ C^∞(Ω̄) is a Taylor series which converges in some complete (log-convex) Reinhardt domain V, and this V must strictly contain Ω, since Ω is not complete.

Classical characterizations of holomorphic functions

In this section we show how one can avoid the machinery of generalized functions and weak derivatives altogether, and still use Fourier methods to prove the basic facts of function theory. We confine ourselves to one complex variable and the simple geometry of the disk. Recall that a function f on an open set Ω ⊂ ℂ is holomorphic in the sense of Goursat if at each point w ∈ Ω the limit

lim_{z → w} (f(z) − f(w))/(z − w)   (5.1)

exists. The result that a holomorphic function in this sense is infinitely many times complex differentiable and even admits a convergent power series representation near each point is rightly celebrated as one of the most elegant and surprising in all of mathematics. Unfortunately, we cannot use it as a definition if we want to apply the theory of abstract Fourier expansions as developed in Section 2. Denoting by G(Ω) the collection of holomorphic functions in the sense of Goursat in an open set Ω ⊂ ℂ, we notice that the space G(Ω) does not have a nice a priori linear locally convex topology in which it is complete and such that, when Ω is a disc or an annulus, the natural action of the group 𝕋 on the space G(Ω) is a continuous representation. Though Goursat's definition carries the weight of a century of academic tradition, we will start from an alternative definition which lends itself better to the application of the methods of Section 2. We also note that the characterization of holomorphic functions by complex-differentiability cannot be used for natural generalizations of complex analysis, e.g. quaternionic analysis, analysis on Clifford algebras etc. (see [GM91, Hef55, MW67]). Following Morera, we say that a continuous function f on an open set Ω ⊂ ℂ is Morera-holomorphic if for each closed triangle T ⊂ Ω we have ∫_{∂T} f(z) dz = 0, and we write O_M(Ω) for the collection of Morera-holomorphic functions on Ω. Notice that the a priori regularity of Morera-holomorphic functions (assumed to be only continuous) is even less than that assumed for Goursat-holomorphic functions (assumed also to admit the limit (5.1) at each w).
It is immediate from the definition that O_M(Ω) is a closed linear subspace of the Fréchet space C(Ω) of continuous functions. "Closed" means that the limit of a sequence of Morera-holomorphic functions converging uniformly on compact subsets of Ω is itself Morera-holomorphic, a fact that was already noted in [Mor86]. The proof of this crucial fact starting from the Goursat definition must pass through a lengthy development of integral representations, so this is definitely a pedagogical advantage of Morera's definition over Goursat's. The notion of Morera-holomorphicity is local: i.e., f ∈ O_M(Ω) if and only if there is an open cover {Ω_j}_{j∈J} of Ω such that f|_{Ω_j} ∈ O_M(Ω_j) for each j. One half of this claim is trivial, and for the other half, for a triangle T in Ω, we can perform repeated barycentric subdivisions till the triangles so formed are each contained in some element of the open cover {Ω_j}_{j∈J}. We therefore conclude that O_M is a sheaf of Fréchet spaces on ℂ. The following local description of Morera-holomorphic functions is well-known, and a proof can be found in e.g. [Rem91].

Proposition 5.1. Let Ω ⊂ ℂ be convex. Then the following statements about a continuous function f ∈ C(Ω) are equivalent:
(A) f ∈ O_M(Ω);
(B) f has a holomorphic primitive, i.e., there is an F which is complex-differentiable on Ω and F′ = f;
(C) for each piecewise C^1 closed curve γ in Ω we have ∫_γ f(z) dz = 0.

Recall that for an integer n, we use the notation e_n(z) = z^n for the holomorphic monomials.

Proposition 5.2. If n ≥ 0 then e_n ∈ O_M(ℂ), and if n < 0 then e_n ∈ O_M(ℂ\{0}).

Proof. First note that e_n is continuous, on all of ℂ if n ≥ 0 and on ℂ\{0} if n < 0. If g_n(z) = z^{n+1}/(n+1) for n ≠ −1, we can verify from the definition (5.1) that g_n is complex-differentiable and g_n′ = e_n, so that by part (B) of Proposition 5.1 the result follows for n ≠ −1. For n = −1, we can construct for each p ∈ ℂ\{0} a local primitive of e_{−1} near p by setting g_{−1}(z) = ln|z| + i arg z, where arg denotes a branch of the argument defined near the point p. A direct computation shows that g_{−1}′ = e_{−1} near p, so that again we see that e_{−1} ∈ O_M(ℂ\{0}).

Products of Morera-holomorphic functions. In the proof of Theorem 3.1, an important role is played by the fact that if U is a holomorphic distribution (in the sense of (3.5)) and f is a holomorphic function (i.e. a holomorphic distribution which is C^∞, Section 3.3 above), then the product distribution fU is also a holomorphic distribution. This is an immediate consequence of the distributional Leibniz formula (3.39). A similar result, proved in [MW67], will be needed in order to develop the properties of holomorphic functions starting from Morera's definition.

Proposition 5.3. Let f, g ∈ O_M(Ω), and assume that g is locally Lipschitz at each point, i.e. for each w ∈ Ω and each compact K ⊂ Ω such that w ∈ K, there is an M > 0 such that for z ∈ K we have

|g(z) − g(w)| ≤ M |z − w|.   (5.4)

Then the product fg also belongs to O_M(Ω).

The proof is based on a version of the classical Goursat lemma ([Gou00, Pri01]). This is of course the main ingredient in the standard textbook proof of the Cauchy theorem for triangles for Goursat-holomorphic functions. Recall that two triangles are similar if they have the same angles.

Lemma 5.4.
Let Ω be an open subset of ℂ and let λ be a complex valued function defined on the set of triangles contained in Ω such that the following two conditions are satisfied:
(1) λ is additive in the following sense: if a triangle Δ is represented as a union of smaller triangles Δ = ⋃_{k=1}^n Δ_k with pairwise disjoint interiors, then

λ(Δ) = ∑_{k=1}^n λ(Δ_k).   (5.5)

(2) For each w ∈ Ω and each triangle Δ_0, we have

lim λ(Δ)/|Δ| = 0,   (5.6)

where |Δ| denotes the area of the triangle Δ, and the limit is taken along the family of triangles similar to Δ_0 and containing the point w, as these triangles shrink to the point w.
Then λ = 0.

Proof. For completeness, we recall the classic argument. Let T be a triangle contained in Ω. We construct a sequence of triangles {T_k}_{k=0}^∞ with T_0 = T using the following recursive procedure. Assuming that T_k has been constructed, we divide T_k into four similar triangles with half the diameter of T_k by three line segments, each parallel to a side of T_k and passing through the midpoints of the other two sides. Denote the four triangles so obtained by Δ_j, 1 ≤ j ≤ 4. Then, by (5.5), we have

λ(T_k) = ∑_{j=1}^4 λ(Δ_j).

Choose T_{k+1} to be one of the Δ_j, 1 ≤ j ≤ 4, such that the value of |λ(T_{k+1})| is the largest. Then, by the triangle inequality we have |λ(T_k)| ≤ 4 |λ(T_{k+1})|, and by induction it follows that

|λ(T)| ≤ 4^k |λ(T_k)| = 4^k |T_k| · |λ(T_k)|/|T_k| = |T| · |λ(T_k)|/|T_k|,   (5.7)

where in the last step we have used the fact that T_{k+1} has one-fourth the area of T_k, so that |T_k| = 4^{−k} |T_0| = 4^{−k} |T|. Since the diameters of the T_k go to zero, by compactness there is a unique point w in the intersection ⋂_{k=0}^∞ T_k. Since the family {T_k} is a subfamily of all the triangles containing w, and each T_k is similar to T_0, therefore by letting k → ∞ in (5.7) and using (5.6) the result follows.

Proof of Proposition 5.3. For a triangle Δ contained in Ω, set λ(Δ) = ∫_{∂Δ} f(z) g(z) dz. Since fg is continuous, to prove the result we need to show that λ = 0. Since condition (5.5) of Lemma 5.4 is obvious, we need to show (5.6) to complete the proof. Let w ∈ Ω, let K be a compact neighborhood of w in Ω, and denote by M the Lipschitz constant corresponding to this w and this K in (5.4). Now, let Δ_0 be a triangle and let Δ be a triangle similar to Δ_0 such that w ∈ Δ ⊂ K. Then observe that, by the hypothesis of Morera-holomorphicity of f and g, we have

λ(Δ) = ∫_{∂Δ} f(z) g(z) dz = ∫_{∂Δ} (f(z) − f(w))(g(z) − g(w)) dz, so that |λ(Δ)| ≤ C |Δ| · sup_{z ∈ ∂Δ} |f(z) − f(w)|

for a constant C independent of the triangle Δ, as long as Δ is similar to Δ_0 and w ∈ Δ ⊂ K. Letting Δ shrink to w, we have (5.6) and the proof is complete.

5.4. Fourier expansion of a Morera-holomorphic function. We will now prove the following analog of Theorem 3.1. In particular, it shows that holomorphic functions in the sense of Morera are identical to the holomorphic distributions considered in Section 3.

Theorem 5.5. Let f ∈ O_M(D), where D is the unit disc. Then there are numbers a_n ∈ ℂ such that

f = ∑_{n=0}^∞ a_n e_n,

where e_n is as in (3.9), and the series on the right converges absolutely in C(D) to the function f.

Let σ be the natural representation of 𝕋 on C(D) given by

σ_λ(f)(z) = f(λz),  λ ∈ 𝕋, z ∈ D.   (5.9)

It suffices to show that the representation σ is continuous on C(D). For 0 < r < 1, let p_r be the seminorm on C(D) given by

p_r(f) = sup_{|z| ≤ r} |f(z)|.   (5.10)

It is clear that p_r(σ_λ(f)) = p_r(f) for each 0 < r < 1, λ ∈ 𝕋 and f ∈ C(D). Also f_j → f in C(D) if and only if for each r we have p_r(f − f_j) → 0, so the family {p_r} is a σ-invariant family of seminorms that generates the topology of C(D). Further, given f ∈ C(D), by uniform continuity, for each 0 < r < 1 we have

p_r(σ_λ(f) − f) = sup_{|z| ≤ r} |f(λz) − f(z)| → 0 as λ → 1,

so that lim_{λ→1} σ_λ(f) = f in the space C(D).
Therefore both conditions of Proposition 2.3 are satisfied, and the representation σ is continuous. In view of the above, the machinery of Section 2 applies. We now compute the Fourier components (2.5).

Proposition 5.6. For f ∈ O_M(D), and with σ the natural representation (5.9), the Fourier components of f are of the form

π^σ_n(f) = a_n e_n if n ≥ 0, and π^σ_n(f) = 0 if n < 0,

where a_n ∈ ℂ and e_n(z) = z^n.

The proof will use the following lemma:

Lemma 5.7. A radial function in O_M(D\{0}) is constant.

Proof. Let f ∈ O_M(D\{0}) be radial, and define the complex valued continuous function u on the interval (0, 1) by restriction, u(r) = f(r), so that we have f(z) = u(|z|) by the radiality of f. To prove the theorem, it suffices to show that u is a constant. Fix 0 ≤ α < β < π and 0 < ρ < 1. For R in the interval (ρ, 1) consider the curvilinear quadrilateral defined by

S(R) = {re^{iθ} : ρ ≤ r ≤ R, α ≤ θ ≤ β},   (5.11)

and notice that S(R) lies in the upper half disc, which is convex. The region S(R) is bounded by the two circular arcs

AB = {ρe^{iθ} : α ≤ θ ≤ β},  CD = {Re^{iθ} : α ≤ θ ≤ β},

along with the two radial line segments

AD = {re^{iα} : ρ ≤ r ≤ R},  BC = {re^{iβ} : ρ ≤ r ≤ R}.

Therefore we have

∫_{∂S(R)} f(z) dz = 0,   (5.12)

where the vanishing of the integral follows from part (C) of Proposition 5.1 above. Parametrizing AD by z = re^{iα}, where ρ ≤ r ≤ R, and using dz = e^{iα} dr, we get

∫_{AD} f(z) dz = ∫_ρ^R f(re^{iα}) e^{iα} dr = e^{iα} ∫_ρ^R u(r) dr.

Since u is continuous, the right hand side is a function of R which is continuously differentiable on (ρ, 1). Thus u ∈ C^1(ρ, 1). Differentiating both sides of (5.12) with respect to
2021-07-12T01:15:59.990Z
2021-07-09T00:00:00.000
{ "year": 2021, "sha1": "721507f50e8eb9a9d6a6e15b1223f0ab03f5426b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4f669a9217ce3ad38615e8794a58a9f5f19f95ee", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
213553722
pes2o/s2orc
v3-fos-license
The Analysis on the Implementation of Mobile - Assisted Language Learning Strategy Through Quizizz Application to Improve Student’s Reading Comprehension at Undiksha Singaraja — This research aimed at improving student's reading comprehension by applying MALL-based learning strategy through Quizizz application in literal reading class. Subjects of the research were the second-semester students of D class at Undiksha Singaraja who took the literal reading subject. The research was a classroom action research conducted in two cycles. The data were collected by giving reading comprehension test to measure the student's reading comprehension; observing the teaching-learning process; and giving questionnaire for collecting the data of student’s perception toward the implementation of MALL-based learning strategy. The results showed that (1) the score of student's reading comprehensions test improved significantly at the end of each cycle; (2) the students had a generally positive perceptions toward the implementation of MALL-based learning strategy. In short, the student's reading comprehension in II D class could be improved by applying MALL-based learning strategy through Quizizz application. I. INTRODUCTION In higher education, several fields of science must be mastered by students. One such field of study is English. English is one of the important fields of study that must be mastered by students to compete in the current era of globalization. In teaching English, skills that must be mastered are four language skills, namely listening, speaking, reading, and writing. These skills are taught in an integrated manner because one skill cannot be taught without applying the other skills. Among the four main skills above, reading is one of the basic skills in English that must be mastered by students because through reading, students are expected to be able to get more knowledge, information and certain pleasures. Reference [1] states that reading also provides opportunities to improve language skills such as enriching vocabulary, grammar, punctuation, and how we construct sentences, paragraphs, and texts. In line with this, reference [2] states that there are three different definitions in learning to read. First, learning to read means learning how to pronounce words. Second, learning to read means learning to identify words and get their meaning. Third, learning to read means learning to understand the text to get the meaning contained in the text. Because reading is very important in language learning, the learning process must be given serious attention. In the curriculum, it is stated that one of the competencies that must be possessed is to understand functional written texts and short essays related to the environment and everyday life. Therefore, the first thing to consider in the reading process is understanding because reading without understanding would not be able to get the information contained in the text. An understanding was closely related to the knowledge a person has. In other words, it can be said that in order to understand a text, a person must be able to use the background knowledge he has and relate it to new information when reading a text. Therefore, the first point to be made about the reading process is reading comprehension, because reading without comprehension cannot be called reading, because the interaction between the reader's prior knowledge comprehension of the knowledge from the text is the basic element for comprehension. 
In other words, it can be said that in order to be able to comprehend a printed material, a reader should be able to use their existing prior knowledge and relate it with the new information as she or he reads. Some important abilities must be mastered by students to understand the text. These components were: 1) the ability to obtain general and specific information from written texts, both explicitly and implicitly, 2) the ability to obtain the main ideas listed in the text both explicitly and implicitly, 3) the ability to search for meaning words, phrases, or sentences based on context, and 4) the ability to understand the reference words used in the text. Because of that, the students were expected to have the ability to understand the text, especially to find the main ideas, specific information, reference words, and the meaning of words contained in the text. In understanding the contents of the text, students encountered many problems both from students themselves who lack vocabulary and from the strategies used by the teacher during the learning process. These problems included difficulties in finding the main ideas, specific information, reference words, and the meaning of words in the text. One of the factors causing the problem was the ineffectiveness of the strategies implemented by the teacher. For example, the teacher only read the text for students, and then students were asked to read the text themselves before they answered some questions related to the text. In addition, the vocabulary of students was not much on the topics discussed in the text. This situation made students bored and they were not motivated to continue reading the given reading text. That condition was also experienced by students in English Language Education Department who took literal reading courses. They experienced problems in terms of finding the main ideas, specific information, reference words, and the meaning of words in the text which had an impact on students' reading comprehension. Most students got low scores on the reading comprehension test. Based on interviews with some students who experienced problems in reading comprehension, several things cause this to happen, namely: the application of constructivist learning models was still not as expected, students' low willingness to learn because it was difficult to find information contained in the text, and the time allotted to find the information sought was very limited. In addition to this, the efforts made by lecturers in learning to read tend to be only through the provision of theoretical explanations, then the lecturer assigned students to read with free topics or topics that are determined. The method used by the lecturer is productoriented, not process-oriented. These efforts did not guide and did not provide experience to students scientifically to find their own and learn to solve their problems because the learning patterns were still oriented to the teacher (teachercentered) which should be oriented to students (studentcentered). This condition needed to be addressed immediately by finding practical steps in learning to read. Related to the problem above, the existence of models and techniques of learning to read is very helpful for students in finding main ideas, specific information, reference words, and meaning of words listed in the text. There were several written models and techniques that can be used to help students understand the text's contents. 
Among the many models and strategies, MALL (Mobile Assisted Language Learning) strategies can help students to achieve reading competence. MALL is a language learning approach in which the language learning is assisted or enhanced by the use of use of handheld mobile devices. MALL is part of Mobile Learning (m-learning) and Computer-assisted language learning (CALL). When applying MALL, the language learning is often supported by mobile, cellular devices such as Cell phones (cellphones) (including iPhone or iPad.), MP3 or MP4 players (for example iPods), Personal Digital Assistants [3]. With MALL, students could access language learning materials, quizzes related to teaching materials and communicate with their teachers and colleagues anytime, anywhere. One online quiz application that was very useful to help students to increase their mastery of vocabulary is 'Quizizz'. This application was equipped with features for lecturers and students. Lecturers could make quiz in the form of multiple choices and enter the answer key (through the "create quiz" feature) in accordance with the teaching material and students could not follow and answer the quiz (through the "join quiz" feature) that had been made by the lecturer [4]. This application was also equipped with a timekeeper to answer every question. If a student answered quickly, the student would get more points compared to students who answer in a long time. In addition, this application also provided a direct evaluation that allows lecturers to see student achievements directly when answering quizzes online and after answering them [5]. On the lecturer-quizizz screen, it would be seen students who scored highest to lowest, questions that most students answered incorrectly as further enrichment material. Some research results indicated that learning by using MALL could improve students' reading comprehension levels. Reference [6] conducted a study on MALL entitled "Effects of Gloss Type on Text Recall and Incidental Vocabulary Learning in Mobile-Assisted L2 Listening". This research investigated the effects of multimedia glosses on text recall and incidental vocabulary learning in MALL-based L2 listening tasks. The results of this study indicated that access to glosses facilitates the introduction and production of vocabulary so that it could improve student understanding in reading a text. Besides, students became very interested in participating in learning to read because they can access material easily and quickly. Based on the explanation above, the implementation of MALL strategies through the "Quizizz" application was investigated in order to know whether it can improve the reading comprehension of students in the English Language Education Department in Undiksha or not. This research was a classroom action research in Literal Reading course by applying MALL strategy that the output was expected to improve students' reading comprehension. Classroom action research is a way to solve problems found in the classroom, then repaired so that it can improve the quality of learning. The classroom action research model used was reference [7] model, with a series of activities namely planning, acting, observing, and reflecting. There were two cycles that were employed in this study and each cycle involved planning, action, observation, and reflection. A. Planning Before conducting the action, there were some preparations that had been designed, namely teaching scenario, materials, questionnaires, reading tests, and teaching diary. B. 
Action The plan which had been made was implemented at this stage. The researcher did all the planning in the classroom, and the class management was done based on the teaching scenario. C. Observation This refers to the process of observing the action. The observation was carried out both during and after the action. During the action, the researcher observed the classroom situation and the students' behaviors in the diary. After the action, the post-test and the questionnaires were given to the students, and then the results of the post-test and the questionnaires were analyzed at this stage. Analyzing the students' post-tests was intended to find out the result of the action, namely whether the students' comprehension in reading improved or not. On the other hand, analyzing the results of the questionnaires was intended to know the students' responses toward the action. D. Reflection Reflection refers to the diagnosis of the effect of the treatment. It was done at the end of the action in order to identify the strengths and weaknesses of the action. In this stage, the researcher found out the reasons for the students' failure. The researcher could also decide whether the next cycle needed to be conducted. Reflection also served as feedback to improve the next action. The research began with a problem in learning. The existing problems were discussed and explored together by the research team. The next activity was to conduct a survey to capture the initial conditions of the research subjects before giving the action. Another thing that was also done was measuring the level of the students' reading comprehension in understanding a text. The results obtained from both were diagnosed together and formed the basis of research planning. Planning was done both generally and specifically. General planning covered the whole research, while specific planning covered the actions of each research cycle and was always done at the beginning of the cycle. Furthermore, acting and observing were done as long as the action was given. At the end of the cycle, reflection was carried out to see the process and the achievement of the results of the actions that had been given. The action taken was MALL-assisted learning to improve students' reading comprehension, implemented in the classroom in each cycle. After that, a reflection of the first cycle was made as a basis for determining the next action by making some modifications. The subjects of this study were second-semester students of D class in the English Language Education Department, Faculty of Language and Art, Ganesha University of Education, who took part in the Literal Reading course. They were 32 students. Based on the direct observation of the researchers, who were also the lecturers of the Literal Reading course, and discussions with the research team, several reasons underlay the decision to choose class D, semester 2, as the class given the action with MALL-assisted learning, namely (1) the level of reading comprehension of the students was relatively low, as seen from the pre-test; and (2) most of the students in the class showed little enthusiasm for reading and attending lectures. On average, they had no interest in attending Literal Reading lectures.
This study aimed to find out whether MALL-assisted learning can improve the level of reading comprehension of students, so the object of this research was the process of reading learning and reading comprehension of students who got action on applying the MALL strategy. Data collection was done by observing, giving tests and questionnaires. Tests were carried out to measure students' reading comprehension by using reading tests. There were two kinds of reading tests: pre-test and post-test. The pre-test was administered before giving the action in order to know the students' achievement in reading comprehension. Meanwhile, the post-test was administered at the end of each cycle in order to know the students' achievement in reading comprehension after the action given. The post-test was administered in order to know whether or not the students' achievement has improved. The test instrument used in this study consisted of four devices, namely for the pretest, twice in the learning of the MALL-assisted reading period, and the post-test. The designed test was in the form of an essay test that was measured by using a reading assessment rubric. Based on the data collection techniques above, two types of data were collected from this study, namely quantitative data, obtained from test results, and qualitative data obtained from observations during the administration of actions and questionnaires. Data obtained from reading tests were analyzed quantitatively to obtain an average score achieved by students before and after being given an action. The average score was then compared in each cycle to find out an increase in students' reading comprehension. Unlike the data obtained from the reading comprehension test results, the data obtained from observations and questionnaires (qualitative data) were analyzed descriptively. This qualitative data was used to describe the process of reading learning in the classroom during the administration of actions and responses or responses of students to the implementation of actions, in this case, namely MALL-assisted learning in reading learning. The observation sheet was used in order to make a note the classroom activities during the learning process. It was used to record students' behaviors and the condition of the class during the teaching-learning activities. The data from the observation sheet was used to know the students' problems during the teaching-learning process so that the researcher could make a decision about what should be done to minimize these problems in order to get better result in each cycle. The form of the diary could be shown as follows: Table 1 shows that six items should be written in the diary. The first one is the teacher's activities followed by the students' activity. The time allotment given for the activity is also written in the diary. The students' response to the activity carried out in the classroom is placed in the next column. The classroom situation also must receive attention. Supposed that there are any further explanations for the activities or the response, it can be put under the notes column. The success criteria in this study included success in the process and product. Success in the process could be seen from an increase in the Literal Reading learning process. The increase was marked by the presence of a more enthusiastic and enthusiastic student learning attitude. All of these improvements could be observed during lectures. 
Thus, in the process, indicators of the success of this study could be observed during lectures that showed active lecture interactions, solid collaboration in groups, and their enthusiastic attitude. This step could be taken through an open questionnaire and observation. The criterion for product success was shown by the increase in students' reading comprehension. In every reading learning process, the product would always be measured. The measurement instrument was in the form of a reading comprehension test that was measured using an essay reading test rubric. III. FINDING AND DISCUSSION Due to the fact that this study was a classroom action research, there were two findings obtained namely, the quantitative and the qualitative findings. The quantitative findings could be seen from the students' results of pre-test and post-tests while the qualitative findings could be seen from the result of questionnaires and students' activity. In addition, the purpose of this study was to improve students 'reading comprehension in the Literal Reading course. The improvement could be seen from the diary notes, test results and questionnaires. From the research diary, it was known that there were changes in student behavior during the learning process. The result of the pre-test showed that the students had low achievement in reading comprehension. The students' mean score was 70.16 and categorized as sufficient. Looking at the result of the pre-test, it was considered important to give some kinds of treatments to the students in order to help them achieve improvement in reading comprehension. On the other hand, the questionnaire was distributed after conducting a pre-test. After calculating the questionnaires' scores, it was found that the highest questionnaire score of the students was 36 and the lowest score was 22. In addition, the mean score of the questionnaire in pre-test was 27.74, so the criterion was negative. It could be concluded that the students had problems in reading class, especially about students' responses when joining the reading course. From the result of pre-observation, the researcher decided to give treatment to the students by implementing a MALL-based learning strategy in teaching reading. It was hoped that the implementation of it could overcome students' reading problems and improve their achievement in reading comprehension. Cycle I was carried out in three sessions in which two sessions were intended for action and one session was for test. The questionnaire was administered at the end of the test. The post-test was conducted to measure whether the students had gained improvement after being given the treatment using a MALL-based learning strategy. There were some students felt confused and some of them did not focus on doing online quizzes. This indicated their unpreparedness in learning by using new strategies that utilize Smartphones for learning. Some of them complained that their cellphone batteries were low, there was no signal and the quota suddenly ran out. This happened at the first meeting because they still did not understand correctly about the stages that they had to do in this Literal Reading Online learning. But this was immediately resolved at the second meeting in cycle I. The ten stages were carried out smoothly because they already had experience from previous activities. They looked very enthusiastic and have a high will in following the learning process. 
So that at the end of the activity they can already get good grades from the results of answering online quizzes on the Quizziz application. The improvement of students' reading comprehension was clearly seen in the results they achieved in post-test 1. From the results of post-test 1, it was seen that the average score of students was 78.08, which was categorized as good. This increased by 7.92 points from the score on the pre-test. Even though the average score of students had increased, there were still 5 students (12%) who could not experience an increase in reading comprehension. On the other hand, 32 students (89%) showed positive responses and only 4 students (11%) gave negative responses to the application of MALL-based learning strategies. This was obtained after the calculation of the results of the questionnaire answered by students at the end of cycle 1. It implied that students were very happy and enthusiastic during the learning process that implemented MALL-based learning strategies. This study would be considered successful if all students were able to improve reading comprehension and all students respond positively to the application of MALL-based learning strategies. Therefore, the second cycle was carried out to achieve indicators of success (100% of all students had increased reading comprehension and gave positive responses). Several problems arose during the teaching and learning process in cycle I. Most students still could not participate properly in this activity. At the first meeting, some students felt confused by the activity because this activity was new to them. In addition, during the activities, some students did not focus on doing online quizzes which adversely affected their final grades. To overcome these problems, modifications were made to the learning process in the next cycle. In the first modification, the instructor instructed students to be able to ensure that the HP battery did not run out during the online quiz work. Therefore, students must charge their cellphones fully before college started. Second, students must ensure that the network used for online had 4G speed and was stable in the classroom where learning takes place. Third, students must ensure that the internet quota available on each of their cellphones was sufficient to take online quizzes a maximum of the day before class. The topic developed was determined by the lecturer where students were instructed to find information about the topic discussed before the meeting was held and bring that information to the class meeting. This was done to make time-efficient and easier for them to work on online quizzes. Second, remind students to be disciplined with time, so that activities at each stage could be carried out efficiently. This modification was applied during the learning process in cycle II. Modifications made students more cheerful and excited when participating in learning activities. Only a few were seen still confused because they had to answer every question on the online quiz with very limited time. Most of the students had been seen to be calmer and more fluent in doing online quizzes so that at the end of the stage, they could achieve very good grades. In this activity, they were more confident in carrying out their tasks and student enthusiasm was higher than before. The results could be seen in the increase in the post-test score 2. The average value of students in post-test 2 was 85.69. This value increased by 9.61 points compared to the results of the post-test 1. 
The increase in student scores was caused by some modifications made in the application of MALL-based learning strategies in cycle 2. Students also became more active compared to the first cycle. Students became more confident and comfortable when following the stages of a MALL-based learning strategy. On the other hand, all students experienced an increase in reading comprehension as seen from the students' scores on the post-test 2. Add to that the responses of all students who were categorized positively. The results were obtained after calculating the questionnaire filled out by students after taking the post-test 2. Judging from the reading comprehension of all students that improved and the responses of all students categorized positively, the indicators of the success of this study had been achieved. So the data concluded that this study was successful and could be stopped. The result of this study reflected the same condition with several researchers that were conducted before. Reference [6] also conducted research on MALL, especially concerning the effects of Gloss Type on Text Recall and Incidental Vocabulary Learning in a second language acquisition class where the learning process was assisted by mobile devices. The study revealed that there was a significant effect of Gloss Type and Text Recall strategies toward the mastery of new vocabulary in the second language learning that was assisted with mobile device. The study also revealed that there was a significant effect of Gloss Type and Text Recall strategies toward the production of vocabulary in the second language learning under investigation. Also inline with the result of the present study is the research conducted by [4] where the MALL was used for developing good toward English as foreign language learning in Japan. The main objective of this research was to find out whether certain MALL practices could foster an advanced form of independent learning (SRL). SRL on students is responsible for arousing and maintaining their motivation to create, implement, and evaluate strategic learning plans. It was concluded that the use of the MALL learning module encourages the student to study without teacher intervention, which is often referred to as independent learning. In this case, teacher's intervention is absent from the determination of time spent on learning assignments, the levels of satisfaction that were obtained from assignments, and self-measured achievements. Furthermore, SRL was observed in terms of the specificity of the objectives, the creation of learning tasks and applications in the classroom. From the explanation above, it can be concluded that the MALL-based learning strategy through the Quizziz application was one of the innovative learning strategies that can help students to improve English competency especially their reading comprehension. IV. CONCLUSION AND SUGGESTION Based on the results of the research that had been achieved and the discussion in the previous chapter IV, it could be concluded that the application of the MALL strategy was able to improve the process of literal reading in secondsemester students of Class D of English Education Study Program. Some indicators that could be seen are students who were more enthusiastic and enthusiastic in participating in reading learning, the reading learning process carried out in the classroom took place more dynamically, and there was an increase in students' courage to express ideas about the reading they were facing. 
Improvements that occurred in the literal reading learning process as described above had implications for improving the ability of students to understand the contents of reading. This was evidenced by an increase in students' reading comprehension scores. An increase in students' reading comprehension scores occurs at the end of each cycle. The positive findings from this research have several implications namely, 1) the importance of reading techniques can increase student motivation, build curiosity for a concept, and involve all language skills to support the learning process; 2) MALL strategy can be applied in reading activities with various condition because it can improve the students' reading comprehension and can minimize the error while reading activity conducted.
2020-02-13T09:25:08.000Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "d66b0e82df35fe5b55b86c574dcbdd133d95733e", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/assehr.k.200115.053", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c60f72f6981cd9a7d5ad5ef16abedbadeaa14880", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
249903006
pes2o/s2orc
v3-fos-license
Efficiency of a biological growth regulator in the cultivation of branched seedlings . The article presents the results of studies on the effectiveness of biological preparations Gibbersib - obtained on the basis of the Fusarium moniliforme strain - a polygibberellin preparation containing a set of gibberellic acids, as well as the preparation 6-Benzyladenine - a synthetic cytokinin intended to activate the vital activity of a plant associated with the growth and development of lateral shoots that improve the crowning of seedlings. According to the results of preparations testing with separate and combined use, the effectiveness of using each of the regulator separately, in comparison with the control, was confirmed. The greatest efficiency was obtained with the use of treatment with combined regulators, which makes it possible to enhance the branching of seedlings. Introduction When laying intensive-type apple orchards, much attention is currently paid to the quality of planting material. Branched apple seedlings can already produce the first crop in the year of planting, and industrial fruiting begins already in 3-4 years. Selecting seedlings for intensive orchards, the quality indicators of planting material (the number of side shoots, their average length, the number of fruit buds, etc.) are important [1][2][3][4][5]. But the ability to branch is not sufficiently developed in many popular apple varieties, which makes it difficult to obtain crowned seedlings in the conditions of southern Russia. Seedlings of fruit crops are good for crowning when using growth regulators by spraying the apical point of growth. Plant growth regulators or phytohormones are organic substances naturally produced by higher plants that control growth or other physiological functions and are active in small amounts [6][7][8][9][10][11]. Currently, when growing planting material, 6-Benzyladenine is used as a broad-spectrum growth regulator, which is a synthetic cytokinin that stimulates the appearance of lateral buds, the formation of basal shoots, cell division. Gibberellins belong to a well-known and widespread group of plant hormones that regulate the vigor of growth, awakening of dormant buds and branching of axillary shoots, leading to crowning of seedlings and intensification of various developmental processes. A large number of varieties of these organic compounds are known, most of the gibberellins are acids. Gibberellins stimulate cell division, cause cell elongation, including shoot growth, stem growth, shoot branching, due to activation of the synthesis of nucleic acids and proteins [12][13][14][15][16][17][18][19]. Gibberellins, with the help of specific proteins, help reduce the effect of environmental stress factors on plants (late spring frosts, abnormally high temperatures in summer, droughts, etc.) [20][21][22][23]. Gibbersib -first obtained from a parasitic fungus of the genus Fusarium and a chemical growth stimulator. Gibbersib is a polygibberellin preparation containing a set of gibberellins from the fungus Fusarium moniliforme. The composition of the Gibbersib preparation, along with gibberellic acid A3, includes gibberellins A4, A7, A9 and a number of other gibberellic acids, the total content of which significantly exceeds the content of gibberellin A3. The active ingredient of the preparation are sodium salts of gibberellic acids. One kilogram of the product contains 90 grams of the active ingredient. Gibbersib is based on the Fusarium moniliforme strain. Gibbersib -gray powder. 
6-Benzyladenine is a synthetic cytokenin. Considering that 6-Benzyladenine is used in the production of crowned seedlings, the experiments included options for the combined use of this drug with gibbersib 6-benzyladanine -is a plant growth regulator with a diverse spectrum of action, provides a quick exit of the plant from dormancy, stimulates the formation of side shoots from axillary buds. Due to the action of the drug, the rate of synthesis of chlorophyll is enhanced, which improves the processes of photosynthesis, providing the leaves with a darker green color. 6-Benzyladenine is able to help the plant adapt to prolonged low temperatures in late spring, cold rains, as well as during dry, hot summers, helps to reduce stressful weather effects by enhancing immune processes, ensuring a better course of physiological processes during the formation of side shoots and crowning of seedlings [24][25]. The purpose of the research is to determine the effectiveness of growth regulators when crowning apple seedlings that are not prone to branching. Methods and materials As objects of study were used one-year-old seedlings of the Modi variety apple tree on M9 rootstocks in the second field of the nursery. A distinctive feature of seedlings of the Modi variety is the absence of side shoots when using the usual technology for growing seedlings. During the growing period, growth regulators (6-Benzyladenine and Gibbersib) were sprayed six times from a manual spray gun (06/15/2021; 06/30/2021; 07/15/2021; 07/30/2021; 08/15/2021; 08/30/2021) only on the apical bud of seedlings apple trees of the Modi variety in the nursery according to three options: Option 1 -6-Benzyladenine -0.01% solution; Option 2 -Gibbersib -1% solution; Option 3 -6-Benzyladenine (0.01% solution) + Gibbersib (1% solution). Adjuvant H-408 (0.3 ml), which has a high wetting ability, was used in all variants. The control was seedlings of the Modi apple tree treated with water. The studies were carried out in the nursery of IP Gevorkyan R.M., st. Novotitarovskaya, Krasnodar region. The first treatment of experimental plants was carried out on June 15 when the plants reached a height of 50-60 cm. Processing was carried out in the morning at an average air temperature of 18 ° C, an average wind speed of 3 m / s, without precipitation. In the course of the research, the Program and Methodology for Variety Study of Fruit, Berry and Nut Crops VNIISPK was used [1]. Results and Discussions The first treatment of seedlings according to different options was completed on June 15, 2021. Thirty days after -07/15/2021 the seedlings of all variants were almost the same? however, in the variants with the growth regulators had the small side shoots in the axillary buds, with a greater number of branches in option 3 (Table 1). In the control variant, no natural branching was observed on seedlings of the Modi variety apple tree. The control seedlings had a maximum height of 59 cm, but did not have any lateral branching, and also had the smallest stem diameter of -0.65 cm. As of 07/30/2021 after the fourth treatment, the height of seedlings increased up to 65-72 cm, while the maximum height was also observed in seedlings of the control variant and amounted to 72 cm. At the same time, the same pattern was preserved on these seedlingsthe absence of lateral branches. 
(Figure: control vs. treatment with 6-BA + Gibbersib, variant 3.)
Forty-five days after the first treatment of the apical bud with growth regulators, 6-Benzyladenine caused more branching (4 shoots) compared to Gibbersib (3 shoots); however, with the combined action of these growth regulators, better crowning of seedlings was noted, where the number of branches reached 5. With the combined action of growth regulators in the third variant, the seedlings had the following biometric characteristics: the height of the seedlings was the smallest and amounted to 65 cm, but the seedlings had the greatest length of lateral shoots, 14 cm, and the largest trunk diameter. At the same time, the side branches had a maximum angle of departure from the trunk of 45°, which is important in assessing the quality and standard of seedlings. The following treatments of the apical bud contributed to the awakening of the lateral buds and the formation of lateral branches. Already after the fifth treatment, the number of branches in option 3 reached 11 lateral branches, which corresponds in terms of quality to the requirements of GOST for seedlings of the first grade. The best crowning of seedlings was observed when using 6-Benzyladenine, where the number of branches reached 9 lateral branches, while the total length of lateral shoots was 82 cm. In the variant where Gibbersib was used, with a smaller number of branches (4), a larger angle of deviation of the lateral branches was observed throughout, up to 80°, from which it can be concluded that the preparation provides better crowning of seedlings, bringing the angle of departure closer to 90°. At the end of the growing season, the seedlings of the Modi variety in the control variant had a maximum height of 150 cm, which was 8-15% more compared to the experimental variants, but the trunk diameter remained minimal, 0.98 cm, which is 0.39 cm less than in the seedlings of the third variant. As a result of treatment with growth regulators, an improvement in branching of the apple seedlings was noted. Good results were obtained in option 2 (6-Benzyladenine), where the number of side shoots reached 10, although the height of the seedlings was lower than in the control (Table 2). When applying the combined treatment of seedlings with 6-Benzyladenine and Gibbersib, the best biometric indicators of seedlings were noted, with a greater number of branches, a greater length of lateral shoots, and a greater diameter of the stem, while the height of the plants in this variant was 15% less than in the control. Analyzing the results of the use of the various preparations (Gibbersib, 6-Benzyladenine) to improve seedling crowning, we can conclude that the combined use of gibberellic acids and cytokinins increases the efficiency of physiological processes in the grown seedlings, which made it possible to obtain seedlings with a large number of lateral branches having an angle of departure close to 90 degrees. In the course of studying the effectiveness of growth regulators aimed at enhancing the crowning of seedlings, timely technological measures are also important, aimed at providing plants with mineral fertilizers and sufficient moisture, which together contribute to the intensive growth of seedlings. Only under these conditions is it possible to obtain a greater number of high-quality branched seedlings suitable for laying intensive orchards.
Our studies have shown that the result of applying special agricultural practices to obtain branching in annual seedlings largely depended on the activity of their growth processes. Considering that cytokinins are able to reduce the synthesis of auxin in the apex, it can presumably be said that, due to their action in the zone of lateral branches, together with an additional influx of gibberellins as a result of spraying with growth regulators, better branching of annual seedlings was noted. According to the results of the research, it was found that, in order to obtain apple seedlings suitable for the further formation of spindle-shaped crowns, the most promising method is the use of a complex of preparations (Gibbersib, 6-Benzyladenine), which made it possible to increase the yield of branched seedlings up to 100%, with a greater number of side shoots (up to 12) and the maximum total increase (up to 112 cm). Conclusion According to the results of testing the preparations "Gibbersib" and "6-Benzyladenine" with separate and combined use, the effectiveness of using each of the preparations separately, in comparison with the control, was confirmed. The highest efficiency was obtained with the combined use of gibberellins and cytokinins, which improved the biometric parameters of branched seedlings by increasing the number of side shoots (12 shoots) and achieving the maximum total growth (112 cm). Thus, the results of the study of the use of the preparation "Gibbersib" alone and in combination with the preparation "6-Benzyladenine" in the nursery allow us to conclude that six-fold treatment of the apical meristem of the seedling every 15 days, once the plants reach a height of 50-60 cm, is effective, with a consumption of 100 ml of the preparation (1 ml of 6-Benzyladenine, 0.1 g of Gibbersib and 0.3 ml of Adjuvant H-408) per 500 seedlings. With 2,500 seedlings per hectare, the consumption of a working solution consisting of 50% benzyladenine and 50% gibbersib is 0.5 l/ha (5 ml of 6-Benzyladenine, 0.5 g of Gibbersib and 1 ml of Adjuvant H-408 per 500 ml of water). The use of an adhesive, Adjuvant H-408, which has a high wetting ability, makes it possible to increase the efficiency of the treatment. The results of testing the influence of the growth regulator "Gibbersib" alone and together with the growth regulator "6-Benzyladenine" on the growth of the vegetative mass, the formation of side shoots and the formation of the crown allow us to consider them promising for use in horticulture when growing planting material.
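As a rough illustration of the dosing arithmetic behind these figures, a minimal sketch follows (our own; it takes the per-500-seedling consumption and the planting density of 2,500 seedlings per hectare from the conclusion above and simply scales the working-solution volume).

# Per-batch consumption reported for the six-fold apical treatment
working_solution_ml = 100.0   # working solution per 500 seedlings
seedlings_per_batch = 500
seedlings_per_ha = 2500

# Scale the working-solution volume to one hectare
batches_per_ha = seedlings_per_ha / seedlings_per_batch           # 5 batches
solution_per_ha_l = working_solution_ml * batches_per_ha / 1000   # litres per hectare
print(f"Working solution per hectare: {solution_per_ha_l:.1f} L")  # 0.5 L/ha, as stated above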
2022-06-22T15:11:05.121Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "ef459914fc1c35783abcac8833b015dc2922e229", "oa_license": "CCBY", "oa_url": "https://www.bio-conferences.org/articles/bioconf/pdf/2022/06/bioconf_itia2022_08003.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9828a2a1c2717ce137bf561de529aa791dd35cdd", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
261794349
pes2o/s2orc
v3-fos-license
Metabolic syndrome, adiposity, diet, and emotional eating are associated with oxidative stress in adolescents Background Metabolic syndrome (MS), a condition related to adiposity and oxidative stress, can develop in adolescence, a critical stage in life that impacts health in adulthood. However, there is scarce scientific research about the relationship between lifestyle factors, emotion management, and oxidative stress in this phase of life. Aim To analyze whether nutritional parameters, lifestyle factors, emotion management, and MS in adolescents are associated with oxidative stress measured by the biomarker 8-isoprostane. Methods A cross-sectional study was carried out in 132 adolescents (48.5% girls, aged 12 ± 0.48 years) and data were collected on nutritional parameters (anthropometric measurements, biochemical analyzes, and blood pressure), lifestyle factors (physical activity, sleep, and diet), and emotion management (self-esteem, emotional eating, and mood). 8-isoprostane was analyzed in spot urine samples. The study population was categorized in three groups (healthy, at-risk, and with MS) using the International Diabetes Federation definition of MS in adolescents. To capture more complex interactions, a multiple linear regression was used to analyze the association between 8-isoprostane and the aforementioned variables. Results Urinary 8-isoprostane levels were significantly higher in the MS group compared to the healthy group (1,280 ± 543 pg./mg vs. 950 ± 416 pg./mg respectively). In addition, univariable analysis revealed positive significant associations between 8-isoprostane and body mass index, waist circumference, waist-to-height ratio, body fat percentage, blood lipid profile and glucose, emotional eating, and refined cereal intake. Conversely, a negative significant association was found between 8-isoprostane and sleep duration and fish intake. The multiple linear regression analysis revealed associations between 8-isoprostane and LDL-c (β = 0.173 value of p = 0.049), emotional eating (low β = 0.443, value of p = 0.036; high β = 0.152, value of p = 0.470), refined cereal intake (β =0.191, value of p = 0.024), and fish intake (β = −0.187, value of p = 0.050). Conclusion The MS group, LDL-c, emotional eating, and high refined cereals and low fish intakes were associated with higher levels of oxidative stress in an adolescent population. Introduction Adolescence is a critical stage in life, when healthy or risk behaviors of adulthood are first established, and social and emotional development affects decision-making and behavior (1,2).The development of obesity or cardiovascular diseases can be accompanied by the emergence of mental health disorders such as depression, eating disorders, and low self-esteem (3,4).Furthermore, oxidative stress has been linked with depression (5,6), whereas a healthy dietary pattern is associated with good mental health (1,7). 
Obesity and metabolic syndrome (MS) are multifactorial diseases estimated to affect 5% of the global adolescent population in 2020 (8). MS is a cluster of risk factors for cardiovascular disease, type 2 diabetes, or prediabetes. These conditions include high blood pressure (BP), high triglycerides, and abdominal obesity, the latter being the key factor for MS diagnosis in adolescents (9). When these pathological triggers are present chronically, mitochondrial dysfunction can develop and cause oxidative stress (10,11), increasing the levels of biomarkers such as 8-isoprostane (12,13). Isoprostanes are biosynthesized by free radical-catalyzed peroxidation, primarily from arachidonic acid but also from docosahexaenoic and eicosapentaenoic acids (14,15), and 8-isoprostane serves as a biomarker of oxidative stress status (16,17). Given the scant research on the relationship between 8-isoprostane and MS in adolescents, it is important to study this topic and its association with factors such as lifestyle, emotion, and nutrition. The aim of the present study was to analyze the association of nutritional parameters, lifestyle, emotion management, and MS with oxidative stress, measured by 8-isoprostane levels, in a cohort of adolescents enrolled in the SI! Program for Secondary Schools in Spain. Materials and methods The present cross-sectional analysis was performed in a sub-sample of participants enrolled at baseline (1st grade of Secondary School) in the SI! Program for Secondary Schools in Spain (clinical trials register: NCT03504059); all details have been previously published (18). A schematic view of the design and different phases of the present study is provided in Supplementary Figure S1. Anthropometric measurements Measurements were obtained after overnight fasting by trained nutritionists. Height was measured with a Seca 213 portable stadiometer (0.1 cm precision). Body weight, body fat percentage, and skeletal muscle percentage were measured by bioelectrical impedance analysis (OMRON BF511, 0.1 kg precision), with participants wearing light clothes and no shoes. Body mass index (BMI) was calculated as body weight divided by height squared (kg/m²). Waist circumference (WC) was measured three times with a Holtain tape to the nearest 0.1 cm. Waist-to-height ratio (WHtR) was calculated as WC divided by height. BMI, WC, and WHtR were adjusted for age and sex to obtain z-score values (19,20). Biochemical analyses Biochemical blood analysis was performed by trained nurses using samples taken early in the morning after overnight fasting. Glucose, triglycerides, total cholesterol, high-density lipoprotein cholesterol (HDL-c), and low-density lipoprotein cholesterol (LDL-c) were determined in capillary blood samples using the Cardio Check Plus device and PTS Panels test strips (21). Fasting spot urine samples were collected in the morning. The urine was aliquoted and stored at −80°C for subsequent analysis. 8-Isoprostane concentration in urine was determined using an ELISA kit protocol (Cayman Chemical, Ann Arbor, MI, United States, Item No. 516351). Creatinine was measured by the validated Jaffé alkaline picrate method (22). 8-Isoprostane levels were normalized by creatinine and the results were expressed as pg./mg creatinine.
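For reference, the index calculations described above can be written compactly. The sketch below is illustrative only: the function names and the example input values are invented, not study data.

```python
# Illustrative calculation of the anthropometric indices and the
# creatinine-normalised 8-isoprostane value described above.
# All input values are made-up examples, not study data.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m ** 2

def waist_to_height_ratio(waist_cm: float, height_cm: float) -> float:
    """WHtR = waist circumference / height (same units)."""
    return waist_cm / height_cm

def normalised_isoprostane(isoprostane_pg_per_ml: float,
                           creatinine_mg_per_ml: float) -> float:
    """8-isoprostane expressed per mg of urinary creatinine (pg/mg)."""
    return isoprostane_pg_per_ml / creatinine_mg_per_ml

print(round(bmi(52.0, 1.56), 1))                       # e.g. 21.4 kg/m^2
print(round(waist_to_height_ratio(68.0, 156.0), 2))    # e.g. 0.44
print(round(normalised_isoprostane(1150.0, 1.2), 1))   # e.g. 958.3 pg/mg
```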
Blood pressure BP was measured with an OMRON M6 monitor with 2-3 min intervals between measurements.When the differences between the measurements were less than 10 mmHg for systolic blood pressure (SBP) and less than 5 mmHg for diastolic blood pressure (DBP), two Abbreviations: BMI, body mass index; BP, blood pressure; DBP, diastolic blood pressure; EE, emotional eating; HDL-c, high density lipoprotein cholesterol; LDL-c, low-density lipoprotein cholesterol; MS, metabolic syndrome; SBP, systolic blood pressure; WC, waist circumference; WHtR, waist-to-height ratio. Physical activity and sleep characteristics Moderate and vigorous physical activity levels and sleep duration were estimated from data from the triaxial accelerometer (Actigraph wGT3X-BT) worn on the non-dominant wrist for seven consecutive days.Activity information was considered valid if data were available for a minimum of 4 days, with at least 600 min of wear time per day.Physical activity intensities were estimated using the cut-off points of Chandler et al. (25).The sleep algorithm proposed by Sadeh et al. ( 26) was used to obtain sleep duration (total of hours asleep), sleep efficiency (number of sleep minutes divided by the total number of minutes the subject was in bed), awakenings (the number of awakenings per night), and time spent awake after initially falling asleep (the average length of all awakening episodes in minutes).All measurements were obtained using ActiLife software (Version 6.13.4,LLC). Emotion management To assess emotion management, three different subscales were used, namely "self-esteem, " "emotional eating" (EE), and "mood, " which were measured through validated questionnaires filled out by the participants.Four response categories on the Likert scale were used for self-esteem and EE items, and five response categories for mood items. Self-esteem was assessed with five items of the Child Health and Illness Profile-Adolescent Edition test (CHIP-AE) (27).An example of an item is, "I like the way I am, " to which there are four response options: strongly disagree = 1, disagree = 2, agree = 3, and strongly agree = 4. Thus, higher scores designate better health-related outcomes.The score was obtained by calculating the mean scores.Internal reliability (Cronbach's ⍺) was 0.797.EE was assessed with three items of the Three-Factor Eating Questionnaire-R18 (Tfeq-Sp) (28).An example of an item is, "When I feel anxious, I find myself eating, " to which there are four response options: definitely false = 1, mostly false = 2, mostly true = 3, and definitely true = 4. Higher scores indicate greater levels of EE.Scores were categorized as "No EE, " "Low EE, " and "High EE. " The "No EE" category reflected a score of 3. To create the "Low EE" and "High EE" categories, a median split was used (excluding the "No" category), and scores below the median were categorized as "Low EE" and scores above the median as "High EE" (4).Internal reliability (Cronbach's ⍺) was 0.791. 
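The emotional-eating categorization described above can be sketched as follows. This assumes the three item scores are summed (consistent with the stated minimum score of 3 for the "No EE" category) and that scores at or below the median of the remaining participants are labeled "Low EE"; both are plausible readings rather than details given explicitly in the text, and the example scores are invented.

```python
# Sketch of the emotional-eating (EE) categorisation described above:
# three items scored 1-4, "No EE" for the minimum total of 3, then a median
# split of the remaining totals into "Low EE" / "High EE". Data are made up.
import statistics

def ee_score(item1: int, item2: int, item3: int) -> int:
    return item1 + item2 + item3          # possible range: 3-12

def categorise(scores: list[int]) -> list[str]:
    eaters = [s for s in scores if s > 3]                 # exclude the "No EE" group
    median = statistics.median(eaters) if eaters else None
    labels = []
    for s in scores:
        if s == 3:
            labels.append("No EE")
        elif s <= median:
            labels.append("Low EE")
        else:
            labels.append("High EE")
    return labels

scores = [3, 4, 6, 9, 5, 3, 7, 12]
print(categorise(scores))
# ['No EE', 'Low EE', 'Low EE', 'High EE', 'Low EE', 'No EE', 'High EE', 'High EE']
```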
Mood was assessed with a six items of the validated FRESC (Factors de Risc en Estudiants de SeCundària) lifestyle risk-factor survey for secondary school students (29).An example of an item is, "I am too tired to do anything, " with the following response options: never, almost never, sometimes, frequently, and always.The variable was dichotomized, whereby the response "always" or "frequently" to three or more of the six items indicated a negative mood state, whereas a positive mood state was assigned for the other responses.Internal reliability (Cronbach's ⍺) was 0.673 as reported by Ahonen et al. (30). Dietary data A validated semi-quantitative food frequency questionnaire with 157-items was filled out by the families of the participants to provide information about their dietary habits from the previous year (31,32).The items were organized by food groups and the response categories were as follows: never or almost never, 1-3 per month, 1 per week, 2-4 per week, 5-6 per week, 1 per day, 2-3 per day, 4-6 per day, and 6 or more per day.This questionnaire results were analyzed using Evaldara software and the food composition tables of the Centro de Enseñanza Superior de Nutrición y Dietética and adjusted for total energy intake (33,34). Metabolic groups The study population was categorized in three metabolic groups (healthy, at-risk, and MS) as defined by the International Diabetes Federation (9).Those in the MS group had abdominal obesity (≥ 90th percentile of WC) plus two or more clinical symptoms: ≥ 150 mg/dL of triglycerides or < 40 mg/dL of HDL-c or ≥ 100 mg/dL of blood glucose or high BP (SBP ≥ 130 mm Hg or DBP ≥ 85 mm Hg).The at-risk population included participants with MS symptoms but not enough to belong to the MS group.Those in the healthy population showed none of the aforementioned symptoms.After the participants were categorized, a randomized sampling from each group was carried out to define the at-risk and healthy groups.Supplementary Figure S2 shows a schematic view of the process followed in the different phases of defining the study groups. Statistical analyzes A minimum of 118 participants were required to provide 95% power of test and a significance level of 5% when performing multiple regression.Given that 44 participants presented MS, a total of 132 participants were enrolled to achieve a ratio of 1:1:1 for the three groups (44 in each); more details are provided in Supplementary Figure S2. 
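As an aside before the statistical details, the grouping rule from the "Metabolic groups" subsection above can be written compactly. The thresholds below are the IDF cut-offs quoted in the text; the age- and sex-specific 90th-percentile waist-circumference cut-off is assumed to be supplied externally, and the "at-risk" rule is one reasonable reading of "some symptoms but not enough for MS".

```python
# Sketch of the three-way grouping (healthy / at-risk / metabolic syndrome)
# following the IDF criteria as summarised in the text. The WC 90th-percentile
# cut-off must be supplied for the participant's age and sex (assumed input).

def metabolic_group(wc_cm, wc_p90_cm, tg_mgdl, hdl_mgdl,
                    glucose_mgdl, sbp_mmhg, dbp_mmhg) -> str:
    abdominal_obesity = wc_cm >= wc_p90_cm
    risk_factors = [
        tg_mgdl >= 150,                      # high triglycerides
        hdl_mgdl < 40,                       # low HDL-c
        glucose_mgdl >= 100,                 # high fasting glucose
        sbp_mmhg >= 130 or dbp_mmhg >= 85,   # high blood pressure
    ]
    n_risk = sum(risk_factors)

    if abdominal_obesity and n_risk >= 2:
        return "metabolic syndrome"
    if abdominal_obesity or n_risk > 0:
        return "at-risk"
    return "healthy"

print(metabolic_group(82, 78, 160, 38, 95, 118, 72))   # metabolic syndrome
print(metabolic_group(75, 78, 155, 45, 92, 120, 70))   # at-risk
print(metabolic_group(65, 78, 90, 55, 85, 105, 65))    # healthy
```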
A seven-step statistical process was followed; a p-value ≤ 0.05 was considered statistically significant. First, a 98% winsorizing technique was used to minimize the influence of outliers. Second, numerical variables without prior standardization (i.e., those not already adjusted for age, sex, and height) were standardized (z-scores) prior to the statistical analysis. Third, the normality of variables was determined by the Kolmogorov-Smirnov test. Fourth, to estimate differences, a chi-square test was used for categorical variables, an analysis of variance (ANOVA) for parametric numerical variables, and a Kruskal-Wallis test for non-parametric numerical variables, whereas Dunn-Bonferroni correction was used to ascertain differences between the metabolic groups. Fifth, a simple linear regression was used to identify the association between 8-isoprostane levels and each studied variable. Sixth, principal component analysis was used to eliminate collinearity among variables. Finally, to capture more complex interactions, a multiple linear regression was used to analyze the association between 8-isoprostane and the aforementioned variables. Results The present study enrolled 132 adolescents (48.5% girls) aged 12 ± 0.48 years. The healthy group had the highest percentage of girls (56.8%), and the MS group the highest percentage of boys (63.6%) (Table 1). The characteristics of the study population and the differences between the three metabolic groups are described in Table 1, and Supplementary Table S1 shows the differences between sexes. Significant differences in all anthropometric measurements and blood glucose levels were observed among the metabolic groups. Triglycerides, HDL-c, 8-isoprostane levels, BP, sleep duration, time spent awake after initially falling asleep, and fish intake differed significantly between the healthy and MS groups. It is worth noting that the adolescents in the healthy group had longer periods of sleep interrupted by shorter awake periods compared to the MS group. Levels of 8-isoprostane were higher in the MS group than in the healthy group (Figure 1). Table 2 shows the association between 8-isoprostane and each nutritional parameter and lifestyle variable. The BMI, WC, and WHtR z-scores and body fat percentage were positively associated with 8-isoprostane levels, suggesting the biomarker was positively related to adiposity. Similarly, blood glucose, total cholesterol, LDL-c, EE, categorization in the MS group, and refined cereal intake were positively associated with 8-isoprostane; in contrast, sleep duration and fish intake showed a negative association with 8-isoprostane (Table 2). Total cholesterol and the z-scores for BMI, WC, and WHtR acted as collinear variables in the dataset under analysis.
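To make the winsorizing, standardization, and regression steps of the analysis concrete, the following minimal sketch runs them on synthetic data. The data, effect sizes, and variable choices are invented for illustration and are not the study dataset or its results.

```python
# Minimal sketch of the statistical pipeline described above
# (98% winsorising, z-scoring, then a multiple linear regression).
# Synthetic data are used; this is not the study dataset.
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
n = 132
ldl = rng.normal(100, 25, n)                 # predictor 1 (mg/dL), synthetic
refined_cereal = rng.normal(2.0, 0.8, n)     # predictor 2 (servings/day), synthetic
isoprostane = 900 + 2.5 * ldl + 60 * refined_cereal + rng.normal(0, 150, n)

# 1) 98% winsorising (1% trimmed at each tail) to limit outlier influence
ldl_w = np.asarray(winsorize(ldl, limits=(0.01, 0.01)))
cereal_w = np.asarray(winsorize(refined_cereal, limits=(0.01, 0.01)))
iso_w = np.asarray(winsorize(isoprostane, limits=(0.01, 0.01)))

# 2) z-score standardisation
ldl_z, cereal_z, iso_z = (stats.zscore(v) for v in (ldl_w, cereal_w, iso_w))

# 3) multiple linear regression via ordinary least squares
X = np.column_stack([np.ones(n), ldl_z, cereal_z])
beta, *_ = np.linalg.lstsq(X, iso_z, rcond=None)
print("standardised betas (intercept, LDL-c, refined cereals):", beta.round(3))
```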
Discussion Three principal observations were made in the course of the present study.First, 8-isoprostane was associated with an unhealthy metabolic status in the adolescent cohort, being positively related with adiposity (high z-scores for BMI, WHtR, and WC, and high body fat percentage), LDL-c, and MS.Second, an association between EE and 8-isoprostane was found, which to our knowledge has not been previously reported in adolescents.Finally, our results suggest that diet can significantly influence 8-isoprostane levels, which were positively associated with refined cereal intake and negatively associated with fish intake.Consistent with our results, higher 8-isoprostane levels have been observed in obese children and adolescents compared to those with normal weight (35)(36)(37).This biomarker has also been positively correlated with measures of fatness, as well as WC, WHtR, and body fat (38)(39)(40).The association we found between oxidative stress, manifested by increased levels of 8-isoprostane, and MS in an adolescent population is also supported by previous studies in children (38,41,42) and adolescents (35), which report higher levels of 8-isoprostane in those with symptoms of MS.Moreover, overweight children with MS showed higher 8-isoprostane levels than overweight children without metabolic risk factors (42); it is possible that risk factors such as hyperglycemia, hypertriglyceridemia, presents in MS contribute to the presence of oxidative stress (10).Adolescents with a high BMI were observed to have higher levels of 8-isoprostane if they were insulin-resistant as opposed to insulin-sensitive, but no difference was observed with a low BMI and resistance/sensitivity to insulin (37), suggesting that both, adiposity and risk factors, could conduce to oxidative stress (10, 43).Therefore, it is possible that oxidative stress worsens with obesity, especially when coupled with MS risk factors.In the present study, consistent with the symptoms of MS, LDL-c was positively associated with 8-isoprostane; similar results have been reported in children with excess weight (38), children with diabetes mellitus type 1 (44), and adolescents with insulin resistance (45). Although a relationship between high EE and MS has been described in adults (4,46,47), it remains poorly researched in adolescents.The most closely related study was carried out with adolescents diagnosed with type 1 diabetes, in whom higher EE values were associated with higher levels of HbA1, total cholesterol and LDL-c, which are risk factor of MS (48).On the other hand, an association between EE and obesity, which is the principal characteristic of MS, has been observed in adults (49)(50)(51)(52)(53)(54)(55), but has been scarcely studied in adolescents (56). Since sleeping and physical activity are very important factors in the health of adolescents, we evaluated these variables and consistently with others results, we observed an association between 8-isoprostane and sleep duration (57) and between MS and sleep duration (58), not with physical activity.However, in the multiple linear regression analysis, physical activity or sleep duration were not significant factors in 8-isoprostane level. 
As reported in the aforementioned studies, EE is linked with obesity.The association between obesity and 8-isoprostane could therefore explain the relationship found in the present study between EE and 8-isoprostane.The EE questionnaire evaluates the individual's response to food consumption, focusing on the emotions of anxiety, loneliness, and depression.Previous research has shown a positive association between 8-isoprostane and anxiety as well as depression (59-63), which is in agreement with the results obtained here.A possible explanation of these relationships could be the effects of oxidative stress on the nervous system.The brain has a high rate of oxygen consumption and is rich in lipids, which contributes to the susceptibility of its cells to oxidative stress (64).The resulting inflammatory processes (65) alter the function of serotonin and dopamine, leading to symptoms of anxiety and depression (66-68), both of which are components of the items in the EE assessment (69,70).Consequently, the directionality of the relationship between oxidative stress and EE is not yet clear.The scope of the present study is limited to demonstrating that an association exists between 8-isoprostane and EE; future work could shed more light on this relationship.Diet is reported to modulate oxidative stress (15).Accordingly, a research work found a reduction in urinary isoprostanes and other oxidative stress biomarkers in MS patients who consumed a Mediterranean diet (71).In the present study, analysis of the eating habits of the adolescent participants revealed a positive association between 8-isoprostane and refined cereal intake.The quality of ingested carbohydrates is known to affect metabolic risk factors (72-75), which is consistent with our findings.A cross-over study in adults reported significantly higher levels of 8-isoprostane in consumers of a refined wheat diet compared to a wheat aleurone diet (76).However, other studies did not find different levels of 8-isoprostane between the consumers of ground flaxseed or wheat bran (77), whole-grain or refined-grain products (78), and whole or refined grain foods (79). 
The inverse association between the oxidative stress biomarker 8-isoprostane and fish intake found in the present study is in accordance with prior reports of an inverse association between oxidative stress and fish intake (71, 80, 81) or supplementation with fish oil (80,82,83) or eicosapentaenoic acid or docosahexaenoic acid (81,84,85). Conversely, other studies have failed to find any significant associations between oxidative stress and the consumption of fish, including oil supplements (86-88). On the other hand, review articles show an inverse association between fish consumption and the prevalence of MS (89) and heart failure (90,91), which is supported by our results. A strength of this study is that, to the best of our knowledge, a significant association between 8-isoprostane and EE, refined cereal intake, and fish intake has not been previously demonstrated in adolescents. Additionally, the study takes a holistic approach in which anthropometric and biochemical factors, emotional management, and eating habits are analyzed. Other strong points include the multicenter design and the use of a standardized protocol, which reduces information bias. The limitations of the study include the small size and cross-sectional design of the study population. Also, the participants sometimes did not wear the accelerometer while practicing water activities or a sport requiring its removal (e.g., judo, basketball). Finally, blood samples were not obtained, to avoid invasive methods given the age of the cohort, so we were unable to analyze 8-isoprostane in plasma or inflammatory or neurotransmitter biomarkers, which would have provided greater insight into oxidative stress. The results of the present study may contribute to the development of educational programs focused on the establishment of healthier lifestyles in early life stages. The findings also indicate that more research is needed to understand the interaction between food choices, emotion management, and oxidative stress status in adolescents with good or poor metabolic health. In conclusion, a significant positive association between 8-isoprostane and EE, refined cereal intake, indicators of adiposity (BMI z-score, WHtR z-score, body fat percentage), and MS, and a negative association between 8-isoprostane and fish intake, has been found in adolescents. This shows that dietary patterns such as the Mediterranean diet could help to prevent cardiometabolic diseases. Data availability statement The datasets presented in this article are not readily available because there are restrictions on the availability of the data for the SI! Program study, due to signed consent agreements around data sharing, which only allow access to external researchers for studies following project purposes. Requests to access the datasets should be directed to the Steering Committee: gsantos@fundacionshe.org, rodrigo.fernandez@cnic.es, juanmiguel.fernandez@cnic.es, lamuela@ub.edu.
FIGURE 1 Difference in 8-isoprostane levels between the metabolic groups.
FIGURE 2 Multiple linear regression model for 8-isoprostane associations. Adjusted for sex and energy intake. Numerical variables are expressed as z-scores. *Statistically significant difference (p ≤ 0.05).
TABLE 1 Description and comparison of the metabolic groups. Numerical variables were standardized prior to the statistical analysis. †p-values from chi-square (%), ANOVA (mean ± SD), and Kruskal-Wallis (median and range) tests. Significant differences (p ≤ 0.05) among groups after Bonferroni post-hoc correction: *, among all groups; A, between healthy and at-risk groups; B, between healthy and metabolic syndrome groups; C, between at-risk and metabolic syndrome groups. 8-Isoprostane is expressed as pg of 8-isoprostane/mg of creatinine in urine. BMI, body mass index; WC, waist circumference; WHtR, waist-to-height ratio; HDL-c, high-density lipoprotein cholesterol; LDL-c, low-density lipoprotein cholesterol; SBP, systolic blood pressure; DBP, diastolic blood pressure; MVPA, moderate and vigorous physical activity. Intakes are expressed in % EI (percentage of energy intake), g/d (grams per day), s/d (servings per day), or s/w (servings per week).
TABLE 2 Association of individual variables with 8-isoprostane. Numerical variables were standardized prior to the statistical analysis. p-values and β are from the simple linear regression. BMI, body mass index; WC, waist circumference; WHtR, waist-to-height ratio; HDL-c, high-density lipoprotein cholesterol; LDL-c, low-density lipoprotein cholesterol; SBP, systolic blood pressure; DBP, diastolic blood pressure; MVPA, moderate and vigorous physical activity. Intakes are expressed in % EI (percentage of energy intake), g/d (grams per day), s/d (servings per day), or s/w (servings per week). *Intercept (β) for the reference categories: no emotional eating (β = −0.246) for emotional eating; positive mood (β = 0.022) for mood; healthy (β = −0.351) for metabolic groups.
Advanced Optical Wavefront Technologies to Improve Patient Quality of Vision and Meet Clinical Requests Adaptive optics (AO) is employed for the continuous measurement and correction of ocular aberrations. Human eye refractive errors (lower-order aberrations such as myopia and astigmatism) are corrected with contact lenses and excimer laser surgery. Under twilight vision conditions, when the pupil of the human eye dilates to 5–7 mm in diameter, higher-order aberrations affect the visual acuity. The combined use of wavefront (WF) technology and AO systems allows the pre-operative evaluation of refractive surgical procedures to compensate for the higher-order optical aberrations of the human eye, guiding the surgeon in choosing the procedure parameters. Here, we report a brief history of AO, starting from the description of the Shack–Hartmann method, which allowed the first in vivo measurement of the eye’s wave aberration, the wavefront sensing technologies (WSTs), and their principles. Then, the limitations of the ocular wavefront ascribed to the IOL polymeric materials and design, as well as future perspectives on improving patient vision quality and meeting clinical requests, are described. Introduction In the second half of the 20th century, ophthalmologists started to investigate how to reduce the dependence on spectacles by means of refractive surgery [1]. These eye disorders were treated by using an excimer laser to modify the shape of the cornea and therefore its refractive state. Despite the success of the refractive surgery, patients complained about glare, halos, and starburst in both day and night vision [2,3]. The clinical data showed that higher-order aberrations (HOAs) were induced by laser refractive surgery [2]. Hence, it was necessary to expand a new area of research, known today as "wavefront technology", which aimed to measure and reduce the induction of these unwanted aberrations [4]. Wavefront technology is largely applied in astronomy to correct aberrations in the reflecting mirrors of telescopes to obtain images with higher quality, as in the case of the Hubble Space Telescope and NASA's James Webb Space Telescope, which has recently been providing very high-quality images of space [5,6]. Nowadays, an analogous approach is used for the wavefront-guided refractive surgery: an aberrometer was introduced in the procedure to collect the eye's wavefront errors in order to guide the excimer laser on a customized profile [2]. The knowledge and the tools obtained for this purpose had a great impact on other aspects of ophthalmology such as corrective devices (both contact and intraocular lenses) and in the evaluation of the progress of eye diseases [4,7]. On increasing the radial order, we move towards terms corresponding to higherorder aberrations, which are characterized by a more complex shape and terminology. The description of the effect of the eye's optical properties on the image quality can be performed by approaches involving image-plane metrics [17]. Among them are the point spread function (PSF), representing the image quality for point objects, and the optical transfer function (OTF), which is used for grating objects. As reported below in Equations (2) and (3), the PSF is the squared modulus of the Fourier transform of the pupil function P(x,y), wherein A(x,y) describes the amplitude distribution and W (x,y) the wavefront deformation on the considered pupil [18,19]. 
The OTF can be obtained as the inverse Fourier transform of the point spread function (Equation (4)) [19]:

P(x, y) = A(x, y) e^(i(2π/λ)W(x, y)) (2)

PSF(x, y) = |FT[P(x, y)]|² (3)

OTF(ξ, η) = FT⁻¹[PSF(x, y)] (4)

Image-plane metrics describe the wavefront error in the plane of the retina. Furthermore, the aberrations can affect the image of a grating by reducing the contrast or translating the image sideways to create a spatial phase shift. The changes with the spatial frequency of the image contrast and phase shift are, respectively, described by the modulation transfer function (MTF) and the phase transfer function (PTF), which are expressed, respectively, by the real and imaginary contributions of the OTF (referred to as the OTF modulus and phase; see Equations (5) and (6)):

MTF(ξ, η) = Re[OTF(ξ, η)] (5)

PTF(ξ, η) = Im[OTF(ξ, η)] (6)

A quantitative insight is given in terms of the root mean square (RMS) of the wavefront deformation, defined as the square root of the wavefront variance. The RMS is equal to zero in an ideal case and assumes a positive value in an aberrated wavefront [15,16]. The wavefront data are decomposed in a linear sum of terms thanks to Zernike polynomials [13], revealing the total root mean square (RMS) error. The combination of Zernike independent functions is suitable for representing complex surfaces in terms of polar coordinates (r, θ) [4], as reported below:

W(r, θ) = Σ_{n,m} C_n^m Z_n^m(r, θ)

The coefficient C_n^m is proportional to the weight of a specific Zernike aberration present in the system. The subscript n is known as the radial order and assumes only positive values, whereas the superscript m is the angular frequency and can be either positive or negative [4,13]. Figure 2 shows the so-called Zernike pyramid, which is useful in describing the ordering system for Zernike polynomials. Commonly, when analyzing aberrometry data for normal and abnormal eyes, the 0th radial order coefficient (called a piston) and the 1st radial order coefficients (called a 'tip' and a 'tilt') are usually ignored because they refer only to a phase shift or to an image displacement, respectively, and not to its quality. Instead, the 2nd order terms are related to defocus and astigmatism optical aberrations, whereas some of the 3rd and 4th order terms are related to the coma aberrations (Z_3^-1, vertical coma; Z_3^1, horizontal coma) and to the primary spherical aberration (Z_4^0), respectively. In this context, despite the mentioned limitations, wavefront aberrometry has been applied in clinical ophthalmology to treat eye diseases and, in particular, for designing wavefront-guided refractive surgery. However, there is still space for improvement, especially in the field of wavefront sensors to accurately measure higher-order aberrations.
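As an illustration of Equations (2)-(6), the short numerical sketch below builds a circular pupil with a defocus-like wavefront error and derives the PSF, OTF, MTF, and PTF with discrete Fourier transforms. The grid size, wavelength, and aberration amplitude are arbitrary choices, not values from this work, and the sign convention for the MTF/PTF follows the text (real and imaginary parts of the OTF).

```python
# Numerical illustration of Eqs. (2)-(6): pupil function -> PSF -> OTF -> MTF/PTF.
# Grid size, wavelength and the defocus amplitude are arbitrary choices.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

A = (R2 <= 1.0).astype(float)                 # amplitude A(x, y): uniform circular pupil
wavelength = 0.55e-6                          # 550 nm illumination (assumed)
W = 0.25e-6 * (2.0 * R2 - 1.0) * A            # defocus-like wavefront error W(x, y), metres

P = A * np.exp(1j * 2 * np.pi / wavelength * W)        # Eq. (2): pupil function
psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2       # Eq. (3): squared modulus of FT
psf /= psf.sum()                                       # normalise total energy
otf = np.fft.ifft2(np.fft.ifftshift(psf))              # Eq. (4): inverse FT of the PSF
otf /= otf.flat[0]                                     # normalise so OTF(0, 0) = 1
mtf = np.real(otf)                                     # Eq. (5): real part
ptf = np.imag(otf)                                     # Eq. (6): imaginary part

print("peak of the normalised PSF:", float(psf.max()))
print("MTF at zero frequency:", float(mtf[0, 0]))
```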
Wavefront Sensors for Ophthalmological Applications: Physical Principle and Practice Wavefront sensors can be defined as aberrometers, revealing light wave distortion after it passes via the eye's optical system. On the market, various wavefront sensing devices employing different technologies can be found [20][21][22]. The most widely used wavefront sensors, including the Shack-Hartmann sensor, the pyramidal prism, and the Tscherning aberrometer, are reviewed in the following, and their advantages and drawbacks are briefly summarized in Table 1. Some technical details found in the literature, such as dynamic range and sensitivity, are reported in Table 2 and compared with the Shack-Hartmann wavefront sensor. Generally, wavefront sensors can be classified in two groups: outgoing and ingoing. The former covers the techniques in which the light source is set on the retina, and the wavefront coming out from the eye is studied. An example of an outgoing sensor is the Shack-Hartmann. Conversely, the ingoing aberrometers are focused on the alterations present in the wavefront after it went through the eye, e.g., the Tscherning aberrometer and the ray-tracing system [23]. Another difference between the several wavefront sensors is in the target. Some sensors, such as the Shack-Hartmann or pyramid sensors, measure the first derivative of the wavefront phase (WF), while other kinds of sensors, such as the curvature ones, aim to measure the second derivative of the WF phase.
Table 1. Description, advantages, and drawbacks of the most widely used wavefront sensors.
Shack-Hartmann WS: detection of spot displacements thanks to a lenslet array and a reference grid. Advantage: flexibility and adaptability to different measurement systems [24].
Pyramid Sensor: a pyramid prism divides the incoming light into four different spots on a CCD surface; their differences provide information about the WF gradients. Drawback: spurious reflections; necessity of another device for modulation [27].
Curvature Sensor: two detectors are symmetrically placed with respect to the focal plane; their difference in intensity provides information about the second derivative of the WF. Advantage: compared to the S-H, higher dynamic range and lower cost [15]. Drawback: long measurement time and less accurate for higher-order aberrations [15].
Optical Differentiation WS: a system of lenses and a mask is used to obtain the WF phase slope thanks to Fourier transform properties.
Diffuser WS: a thin diffuser is set close to the detector and its memory effect is used to retrieve WF displacements.
Shearing Interferometry: the interference pattern between the incoming wavefront and its displaced replica is used to measure the wavefront phase.
Tscherning Aberrometer: a collimated laser beam illuminates a mask with a regular matrix of pinholes, forming a bundle of thin parallel rays; the deviations of all spots from their ideal regular positions are associated with the optical aberrations, computed in the form of Zernike polynomials up to the 8th order. Advantage: fast measurement and high accuracy. Drawback: not patient friendly, because it requires more time and effort to obtain a treatable image [32].
Shack-Hartmann Sensor The Shack-Hartmann (S-H) sensor [37,38] is the most used wavefront sensor in astronomy and ophthalmology [39]. Originally, it was developed for military purposes to fulfil the need for improved images from satellites [40]. As depicted in Figure 3, the sensor is made of a lenslet array creating spots from the incident light, whose spatial displacements from a reference grid, registered on a CCD camera, are a direct measure of the wavefront tilts [15,41]. Specifically, the distance of each spot from its ideal position is measured and related to the local distortions in the pupil due to the optics of the eye. Its main drawbacks are the cost and the limited dynamic range due to the used lenslet array. Some studies were published on the possibility of expanding the S-H sensor's dynamic range [11]. Shinto et al. proposed an adaptive spot search method based on a dual microlens array and confirmed the dynamic range expansion for defocus, astigmatism, and coma [42]. More recently, Akondi and Dubra presented an algorithm to improve the lenslet image location in the cases of defocus and astigmatism [43]. The individual spot displacement allows the computation of the local slope of the wavefront over each lenslet aperture; as a consequence, the S-H sensor does not take into account the quality of the individual spots formed by the lenslet array, and it is particularly inaccurate for highly aberrated eyes.
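The displacement-to-slope relation just described can be sketched numerically: each spot's centroid shift, divided by the lenslet focal length, gives the mean wavefront slope over that sub-aperture. The focal length, pixel pitch, and shift values below are illustrative assumptions, not values from the text; the discussion of the sensor's accuracy limits continues after the sketch.

```python
# Sketch of the basic Shack-Hartmann relation: the centroid displacement of
# each lenslet spot, divided by the lenslet focal length, gives the local
# wavefront slope over that sub-aperture. Numbers are illustrative.
import numpy as np

f_lenslet = 5.0e-3        # lenslet focal length: 5 mm (assumed)
pixel_pitch = 5.0e-6      # detector pixel size: 5 um (assumed)

# measured spot displacements in pixels (dx, dy) for a 3x3 lenslet patch
spot_shift_px = np.array([[[ 1.2, -0.4], [ 0.8,  0.1], [ 0.3,  0.5]],
                          [[ 1.0,  0.0], [ 0.5,  0.2], [ 0.1,  0.6]],
                          [[ 0.9,  0.3], [ 0.4,  0.4], [-0.1,  0.8]]])

slopes = spot_shift_px * pixel_pitch / f_lenslet   # local dW/dx, dW/dy (radians)
print("slope map (x-component, radians):")
print(slopes[..., 0])
```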
In fact, if the wavefront shape within a single lenslet varies significantly, the spot pattern formed by that lenslet can be blurred, thus reducing the maximum wavefront slope that can be measured reliably. The blurring of the lenslet focal spot of the Hartmann-Shack sensor can be partly neglected by taking into account that the center of such blurred focal distribution obeys the laws of geometric optics, analogously with the case of the classical and quantum description of a particle [44][45][46]. A further limitation of the S-H sensor is due to the lenslet spacing (number of lenslets across the pupil) and the lenslet array focal length. Note that the majority of the higher-order aberrations are typically included in Zernike modes up to 8th order Zernike coefficients, corresponding to 42 coefficients in total (see Figure 2), indicating that at least 42 lenslets are needed to measure higher-order aberrations (HOAs). The distance within a lenslet's subaperture (corresponding to one-half of the lenslet's diameter) is the maximum displacement that each spot can perform on the used CCD camera. To overcome these limitations, different approaches have been used. The projection of a tight and well-defined spot onto the eye retina (achieved by restricting the illuminating beam diameter) is one of the approaches adopted to analyze eyes with highly aberrated corneal optics. This requires a larger CCD camera to capture the spot array pattern. S-H-based sensors are still widely employed, as is the laser ray-tracing (R-T) technique. The latter, developed in 1997, consists of a thin-diameter beam of light, which is projected onto the retina sequentially, and the distance to the retinal reference position is used to calculate the specific aberrations of the eye. Ultimately, the main difference between the S-H and the R-T sensors is the methodology used to acquire the spot image. In the laser R-T technique, the incident beam is scanned sequentially over the entrance pupil to measure light going into the eye, while the S-H sensor measures light coming out of the eye. In this case, a parallel process is necessary to acquire multiple spots over the exit pupil. The sequential acquisition of wavefront aberrations has the advantage of avoiding the possibility of 'overlapping' optical phenomena, whereas simultaneous acquisition measurements need short acquisition times to achieve a high accuracy in assessing wavefront error [48]. Recently, Wu et al. proposed a modification of the S-H sensor by replacing the lenslet array with a spatial light modulator (SLM) in order to provide a multi-megapixel resolution [49]. By combining a CMOS sensor with a phase-retrieval algorithm, they obtained a higher spatial resolution (one order of magnitude) than that in the current noninterferometric WF sensors [49]. Foucault Knife-Edge and Optical Differentiation Wavefront Sensor (ODWS) The Foucault knife-edge test and the linear amplitude filter are techniques of spatial filtering. The spatial filtering techniques operate on an image, taking into consideration the intensity values in a suitable neighborhood of each pixel. Linear filtering is one of the most powerful image-enhancement methods. It is a process in which part of the signal frequency spectrum is modified by the transfer function of the filter. In general, the filters under consideration are linear and shift-invariant; so, the output images are characterized by the convolution sum between the input image and the filter impulse response. 
The Foucault knife-edge test was described in 1858 by French physicist Léon Foucault as a way to measure the conic shapes of optical mirrors. In the Foucault knife-edge test, a spherical surface, a point source, and a knife edge are used to evaluate the possible transversal aberrations. Specifically, a knife edge is placed near the focus and passed through the image of a point or slit source. The shadow, observed by the eye or on a screen, gives information about the aberration content. A perfect lens will have one image point that darkens almost instantaneously when the knife edge passes through the image. These shadow patterns are based on geometrical analysis, while diffraction will blur out the edges. The variations of these shadow patterns give information about the spherical surface, enabling the user to precisely determine the position of the focal point of the curved mirror [50][51][52].
The optical differentiation wavefront sensor (ODWFS) resembles the Foucault knife-edge test principles. In fact, the working principle of the optical differentiation wavefront sensor (ODWFS) [53] consists in the insertion of a linear amplitude filter in a focal plane filtering setup. In this way, a continuous Foucault knife-edge test is carried out instead of the normal discrete knife-edge test. Furthermore, its dynamic range is very high, but its sensitivity is low. As shown in Figure 4, the setup is a telescopic system made of a first achromatic lens (L1), which performs the Fourier transform of the input; a mask with variable transmittance, used as filter (OF); a second achromatic lens, performing the inverse Fourier transform of the product of the previous elements; and a CCD surface for the photometric detection [53,54]. Thanks to the differentiation property of the Fourier transform, the detected intensity is directly linked to the WF phase derivative [53]. The more general implementation of an optical differentiation wavefront sensor (ODWS), based on several wavefront gradients obtained by amplitude modulation in a coherent filtering setup, was pioneered by Bortz [55]. It requires a spatially varying transmission filter in the far field of the source under test. So, the radiant energy received by the surface per unit area (fluence), measured in the image plane of the pupil, is related to the wavefront slope in the direction of the transmission gradient (see the optical setup in Figure 4, where a typical 4f spatial filtering system is shown).
The spatially varying transmission filter is set in the Fourier plane: the combination of the propagation in a thin lens of focal length f and an additional propagation by a distance f allows the field at the input pupil to be inverse Fourier transformed to the detection plane (i.e., to the far field). We outline that, without the filter, the fluence F0(x,y) measured in that plane is identical to the input fluence, after taking into account an obvious spatial inversion. The ODWS performance was evaluated using the twelve Zernike polynomials of radial order, defining the test wavefronts over the circular input pupil [56]. It emerges that the main advantages of this sensor are the high resolution, the possibility to use it with a polychromatic source, and the large dynamic range [54]. Conversely, a great amount of energy is lost due to the absorption of the mask; so, it will impact the signal-to-noise ratio (SNR) [53,57]. Oti et al. compared the SNR between the S-H sensor and the ODWS. They observed that, even in adverse conditions, the ODWS shows a better SNR than the Shack-Hartmann for high resolution sensing [53]. Furthermore, compared to most interferometric techniques, the ODWS does not have a strong coherence requirement, e.g., it can operate with non-monochromatic sources. Despite these advantages, the ODWS is not widely used due to the practical difficulty of manufacturing components with well-controlled transmission profiles.
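A minimal simulation of the 4f optical-differentiation idea is sketched below, under simplifying assumptions: the far field of a test wavefront is multiplied by a linear amplitude ramp and transformed back, and the change in pupil-image fluence encodes the wavefront slope along the ramp direction. All parameters (grid, wavefront shape, ramp) are arbitrary illustrative choices.

```python
# Minimal 4f optical-differentiation sketch: the far field is multiplied by a
# linear amplitude ramp; the fluence in the pupil image then encodes the
# wavefront slope along the ramp direction. All parameters are arbitrary.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)

W = 0.3 * X**2 * pupil                           # test wavefront (waves, arbitrary)
field = pupil * np.exp(1j * 2 * np.pi * W)

far_field = np.fft.fftshift(np.fft.fft2(field))  # lens L1: Fourier plane
fx = np.fft.fftshift(np.fft.fftfreq(N))
ramp = np.clip(0.5 + fx[None, :] / fx.max() * 0.5, 0.0, 1.0)   # linear transmission in x

filtered = np.fft.ifft2(np.fft.ifftshift(far_field * ramp))    # lens L2: pupil image
fluence = np.abs(filtered)**2

# Without the filter the fluence equals the input fluence; the normalised
# change is, to first order, related to dW/dx inside the pupil.
print("mean fluence change inside the pupil:",
      float((fluence - pupil)[pupil > 0].mean()))
```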
Pyramid Sensor Since the first implementation in 1997, adaptive optics (AO) systems for ophthalmic applications have always relied on S-H sensors to perform wavefront sensing. While this choice was obviously successful in most cases, one could also think of alternative wavefront sensing approaches to achieve this task with possibly higher efficiency and greater flexibility. The application of the pyramid sensor (PS) to ocular wavefront measurements is a valid alternative thanks to its flexibility in measuring a broad range of ocular aberrations. Similarly to the Foucault knife-edge test [50], in the pyramidal wavefront sensors (PS), the aberration-induced inhomogeneities are sensed by placing in the focal plane a four-facet pyramid refractive element with its tip aligned to the optical axis (see Figure 5a). The wavefront gradients along the two orthogonal directions are retrieved from the intensity distribution among the four pupil images [15]. In this way, pupil sampling and sensing sensitivity can be adjusted separately. Zemax OpticStudio software (ZEMAX LCC, Kirkland, WA, USA) was used to analyze the common optical aberrations and the contribution to image degradation across the full x-y field of view. This provides important information for correcting the considered aberration. Figure 5b reports the image of the source conjugated with the pyramid position and the corresponding simulated CCD image for an emmetropic eye. Similarly, Figure 5c,d report the images of the source projected in front of and beyond the pyramid WFS and the corresponding simulated CCD images for myopic and hyperopic eyes, respectively.
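The text does not spell out the estimator used to retrieve the two gradient maps, so the sketch below uses the commonly quoted normalised-difference form computed from the four pupil-image intensities; the toy intensity maps are random stand-ins for real frames.

```python
# Commonly used pyramid-sensor slope estimate from the four pupil images
# I1..I4 (one per pyramid facet), shown on toy data. The text does not give
# the formula explicitly; this is the standard normalised-difference form.
import numpy as np

rng = np.random.default_rng(1)
shape = (64, 64)
I1, I2, I3, I4 = (rng.uniform(0.8, 1.2, shape) for _ in range(4))

total = I1 + I2 + I3 + I4
Sx = ((I1 + I4) - (I2 + I3)) / total    # signal related to the slope along x
Sy = ((I1 + I2) - (I3 + I4)) / total    # signal related to the slope along y

print("x-slope signal: mean %.4f, std %.4f" % (Sx.mean(), Sx.std()))
```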
Numerical simulations comparing the S-H and the PS suggest that the latter may operate with a higher sensitivity in closed-loop conditions [33]. We remember that the sampling parameters of a Shack-Hartmann sensor are fixed and depend on the components' design [59]. Instead, the pyramid sensor proposed by Ragazzoni [27,59] overcomes this difficulty and allows the adjustment of the sampling parameters on the basis of the sample. The main advantages of this type of sensor are the great adaptability to different orders of aberration and the easy modification of the dynamic range [15,27,60]. In addition, the Shack-Hartmann sensors must employ some methods to average out the speckle caused by the roughness of the retina; these methods are not necessary when using the pyramid sensor, whose main disadvantage is due to the spurious reflections which can be detected from the anterior ocular surfaces [27]. Curvature and Phase Diversity Wavefront Sensors In 1988, Roddier proposed a new method in the wavefront analysis, namely the curvature sensor (CS). The principle is based on the reconstruction of the second derivative of the wavefront. As shown in Figure 6, two detectors are set at a certain distance l from the focal plane. The distance l is directly proportional to the spatial resolution and inversely proportional to the sensitivity [41]. In some cases, a vibrating mirror at the lens focus provides the modulation of the sampled positions, and the wavefront sensor synchronously measures the modulated signal. The difference in light intensity distributions between the two planes is used to evaluate the local WF aberrations [41]. Several algorithms, such as the Green function and the Gureyev-Nugent algorithm, can be used [15]. In 2006, Díaz-Doutón et al. adapted a CS for an ocular aberration measurement for the first time and obtained a similar performance to that of the S-H sensor [61]. Similarly, Torti et al. [62] investigated the feasibility of using a curvature sensor in the ophthalmic field and evidenced that, as compared to S-H sensor, the curvature WF sensor was not limited anymore by the features of the lenslet array, showing a larger dynamic range [15,62]. However, it is fundamental to find a good trade-off, which requires a prolonged time of computing. A large defocus is needed to measure the wavefront with higher resolution, thus reducing the sensitivity of the sensor [15].
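The two-plane measurement just described is usually reduced to a normalised intensity difference before reconstruction; the text does not spell the formula out, so the sketch below shows this commonly used signal-forming step on toy data. Inside the pupil, this signal approximates the local wavefront curvature (with an additional edge term in Roddier's scheme).

```python
# Curvature-sensor signal: two intensity frames recorded symmetrically about
# focus are combined as a normalised difference; inside the pupil this
# approximates the local wavefront curvature. Toy data only.
import numpy as np

rng = np.random.default_rng(3)
I_before = rng.uniform(0.9, 1.1, (64, 64))   # intra-focal frame (toy)
I_after = rng.uniform(0.9, 1.1, (64, 64))    # extra-focal frame (toy)

signal = (I_before - I_after) / (I_before + I_after)
print("curvature signal: mean %.4f, std %.4f" % (signal.mean(), signal.std()))
```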
Curvature sensors are similar to the phase diversity wavefront sensor (PDWS). The PDWS simultaneously records two images, one in the focal plane and another, known as the "diverse image", in a defocused plane. As with all the other curvature sensors, both images are taken in out-of-focus planes [63], and the wavefront is connected to intensity via propagation physics. The main useful features of the PDWS as exploited in ophthalmic applications are: (1) it provides a real image of the pupil; (2) it accommodates variability in iris location, size, and shape, but it may be critical to resolve the phase on speckled beams; (3) it allows equally spaced sample planes with equal magnification of all images; and (4) it simplifies the sensor alignment, calibration, and data processing. Finally, the dynamic range and wavefront sensitivity are controlled by sample plane spacing and camera digitization bit depth; unlike the SHWS, they are not coupled to the spatial resolution. Ultimately, the PDWS works like the diffractive IOL multi-plane imaging, allowing the easy analysis of complex optical systems. Diffuser Wavefront Sensor Many efforts have been made over time to find low-cost alternatives to common wavefront sensors. The possibility to use a thin diffuser and its memory effect is promising. The principle is based on the correspondence between a tip/tilt in an incoming wavefront and the corresponding local shift in the detected pattern [30]. The diffuser is set close to the camera and the wavefront is reconstructed numerically by a specific algorithm [30]. Berto et al. proposed the use of the known "Demon Algorithm" [64], which has been optimized to perform the non-rigid registration of bio-medical images [30].
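As a crude stand-in for the registration step cited above, the sketch below recovers a known shift applied to a random pattern by FFT cross-correlation; in a real diffuser sensor this would be done locally on sub-windows to map tip/tilt across the pupil. The pattern and the shift are synthetic.

```python
# Toy illustration of the diffuser-sensor principle: a local wavefront tilt
# shifts the recorded pattern, and the shift can be recovered by
# cross-correlation (a crude stand-in for the registration algorithm).
import numpy as np

rng = np.random.default_rng(2)
reference = rng.random((128, 128))                 # recorded diffuser pattern
shift = (3, -5)                                    # pattern shift caused by a tilt (pixels)
moved = np.roll(reference, shift, axis=(0, 1))     # tilted-wavefront measurement

# FFT-based cross-correlation; the peak location gives the shift
xcorr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(reference))).real
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
est = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
print("recovered shift (rows, cols):", est)        # expect (3, -5)
```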
The comparison between the two sensors is clear by looking at Figure 7a and recalling some geometrical concepts. In the case of the SHWFS, the angle α defined by the incoming wavefront determines the spot displacement on the CCD, which in turn also depends on the lenslet pitch (ρ_SH) and focal length (f_SH). The pixel size ∆x corresponds to the lowest detectable spot deviation, whereas ρ_SH/2 limits its maximum detectable value. Note that the deviation of each spot for a given wavefront tilt scales with f_SH, since tan(α) ≈ α [66]. Finally, these geometrical constraints, together with other properties such as the signal-to-noise ratio and the spot-tracking algorithm, define the SHWFS dynamic range and sensitivity with respect to the angular wavefront tilt (α_max and α_min). Considering the DWFS (Figure 7b), McKay et al. used non-periodic lenslet arrays in which the diffuser pitch (ρ_D), evaluated to be 338 ± 21 µm, corresponds to the mean distance between the sharp caustic intensity bands, and the diffuser focal length (f_D), empirically chosen to be 5.15 mm, corresponds to the distance from the diffuser to the sensor [24]. By using trial lenses with spherical power in the range of [−24 D, +24 D], located close to the model eye lens, it has been possible to evaluate the dynamic range of both types of sensors.
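As an illustration of how these geometrical constraints translate into numbers, the short sketch below evaluates the small-angle relations α_min ≈ ∆x/f and α_max ≈ ρ/(2f) for a hypothetical Shack-Hartmann geometry and, purely for comparison, plugs in the diffuser pitch and distance quoted above; note that in practice the memory-effect tracking of the DWFS caustics lets it exceed this simple per-subaperture bound.

```python
import numpy as np

def tilt_range(pitch, focal_length, pixel_size):
    """Smallest and largest detectable wavefront tilt (radians), small-angle limit."""
    alpha_min = pixel_size / focal_length        # one-pixel spot shift
    alpha_max = (pitch / 2) / focal_length       # spot stays within its own sub-aperture
    return alpha_min, alpha_max

# Illustrative Shack-Hartmann values (hypothetical lenslet array and camera).
sh = tilt_range(pitch=150e-6, focal_length=5e-3, pixel_size=5e-6)
# Diffuser values quoted in the text: pitch ~338 um, diffuser-to-sensor distance 5.15 mm.
dw = tilt_range(pitch=338e-6, focal_length=5.15e-3, pixel_size=5e-6)

for name, (a_min, a_max) in [("SHWFS", sh), ("DWFS (naive bound)", dw)]:
    print(f"{name}: alpha_min = {np.degrees(a_min)*3600:.1f} arcsec, "
          f"alpha_max = {np.degrees(a_max)*3600:.0f} arcsec, "
          f"range = {a_max / a_min:.0f}x")
```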
Figure 8 reports the measurements of the spherical equivalent power (M) for both the SHWFS (blue symbols) and the DWFS (red symbols) under three different illuminating sources: a laser diode (LD, Figure 8a); an LD with a laser speckle reducer (LD+LSR, Figure 8b); and a light-emitting diode (LED, Figure 8c). Although the limited power of the LED (marked by an asterisk) did not allow precise measurements, the DWFS showed a larger dynamic range than that of the SHWFS for all the illuminating sources. This is also testified to by the dashed vertical lines shown in Figure 8, which correspond to the predicted dynamic range [24].
Shearing Interferometry
A Mach-Zehnder interferometer combined with an optically addressed spatial light modulator (OASLM) has been adopted as a novel adaptive wavefront correction system [67]. In this system, the output fringe intensity from the interferometric element is fed back optically to the OASLM, which is placed in one arm of the interferometer. In this way, a real-time correction of aberrated wavefronts, without electronic calculations, was obtained through a reliable reconstruction of the eye's wavefront aberration (WA) achieved by the interferometric element. Shearing interferometry is the most widely used technique for optical testing. The interference is recorded between the incoming wavefront and its displaced replica [15]. No reference wave is necessary, and depending on the modification applied, three different methods can be obtained: "lateral shear", if the input is shifted; "radial shear", if it is magnified; and "rotational shear", if it is rotated [15]. For all these shearing interferometric sensors, the phase information collected is proportional to the gradient of the test wavefront in the direction of the shear. Different optical setups can be used to obtain the shearing, e.g., wedge plates, polarizing prisms, gratings, or diffractive optical elements (DOEs) [15]. The phase of the incoming wavefront is reconstructed by an iterative method known as "phase-shifting". The object (e.g., the retina) is illuminated with a single beam of coherent light. In the simplest case, a grating is placed in front of the object to provide two sheared images of it. The object is then imaged onto a CCD array sensor. A shearing device in the imaging system results in two superimposed images: the relative separation, or shearing distance, is normally chosen to be a small fraction of the field of view. Therefore, any pixel of the sensor receives light from two points on the object surface, and the phase change at that pixel then depends directly on the relative displacement of the two points. As shown in Figure 9, the sheared wavefronts can be generated by a grating mounted on a translation stage. The interference pattern is generated in the overlap area (shown in grey) of the two sheared wavefronts. S_x and S_y are the amounts of shear in the x and y directions, respectively; r is the shear vector, and q is the shear angle.
The main disadvantage of shearing interferometry is its limited dynamic range [31]. To overcome this limitation, 'multiple shearing interferometry' was recently adopted; its principle relies on the generation of several replicas of the wavefront, which are evaluated using a conventional grating. Such interferometers are able to detect phase distortions of several tens of waves but also of very small fractions of a wave (λ/100). At the same time, the sensitivity and dynamics can be continuously adapted to the analyzed aberrations [68].
Talbot Moiré Technology
The Talbot interferometer belongs to the class of lateral shearing interferometers. Talbot Moiré technology can be applied to detect wavefront tilts by gratings [69], generating Moiré fringes which replicate themselves at a certain distance, known as the Talbot distance ∆z, where the CCD surface is set (see Figure 10). The Talbot distance is given by the relation ∆z = 2d²/λ [70], where d is the period of the grating and λ is the light wavelength. The Talbot Moiré technology uses: (1) the Talbot image of a two-dimensional grating as a wavefront sensor and (2) the local shift of the Talbot image to calculate the tilt of the wavefront. By estimating the phases of the fundamental spatial frequency between the grating and a local patch of the Talbot image, the shift of the Talbot image can be determined [15]. The Talbot Moiré sensor is constructed with two gratings, in which the Moiré fringes are generated by superimposing the Fourier image of the first grating on the second. The two gratings have the same period. If the phase object is placed in front of the first grating, the light deflected by the object yields shifted Fourier images, and the resultant Moiré fringes show the deflection mapping [71]. The distortion of the fringe pattern reflects the local tilt of the wavefront. The diffraction patterns can be observed at specific periodic distances from the grating (called Talbot images) [72]. Sekine et al. [69] used a two-dimensional grating for sensing the optical wavefront, with the CCD placed in the plane of the first-order Talbot image to maximize the contrast of the grating image. Overall, the common advantage of Talbot interferometry is its relatively simple and inexpensive design when compared to other opto-electronic systems, as well as its accuracy and high spatial resolution. Furthermore, the dynamic range is larger than that of the Shack-Hartmann sensor [15,73]. The disadvantages of this sensor technology are its sensitivity to vibration, the changes in the polarization of the beam coming back out of the eye, and the complex reconstruction of the phase error. All these factors strongly limit the widespread application of this technology to human eyes.
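Assuming the standard self-imaging relation ∆z = 2d²/λ used above (the equation itself had to be reconstructed from the surrounding definitions, so treat it as indicative), the Talbot distance for a few illustrative grating periods under HeNe illumination can be evaluated as follows.

```python
wavelength = 633e-9                    # m, illustrative HeNe line
for d in (50e-6, 100e-6, 200e-6):      # illustrative grating periods (m)
    dz = 2 * d**2 / wavelength         # Talbot self-imaging distance
    print(f"d = {d*1e6:.0f} um  ->  Talbot distance = {dz*1e3:.1f} mm")
```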
Tscherning Aberrometer, Ray-Tracing System, and Dynamic Skiascopy
As described in a previous section, H-S aberrometers are fairly user-friendly and offer very high resolution, reproducibility, and accuracy, as well as quick measurement and analysis of ocular aberrations. However, the H-S approach is inadequate for reconstructing the wavefronts of patients with highly aberrated corneas. The same limitation characterizes the Tscherning aberrometers. The latter are fast-measuring and highly accurate, but they are less patient-friendly because they require more time and effort to obtain a treatable image. A scheme of the Tscherning aberrometer is shown in Figure 11. The Tscherning aberrometer uses a laser beam (patients are generally disturbed by the green 532 nm line used as a source) and projects a grid onto the target; any distortion from the reference grid is reported in the aberration map [26]. Another type of aberrometry is ray tracing, which works on a principle similar to that of the Tscherning aberrometer. The main difference between them is that the ray-tracing system scans the retina sequentially instead of simultaneously. Each point is therefore processed separately and sequentially, with the advantage of reducing the risk of intersecting light rays, which enables more highly aberrated eyes to be imaged. However, the ray-tracing technique is limited by the resolution of the aberroscope [15]. An unexpanded laser beam is scanned so that it enters the eye sequentially through different pupil locations. One marginal ray (dotted line in Figure 12) and the principal ray (solid line) are shown.
Each retinal image (A, B) is projected onto a CCD camera. The displacement of the image with respect to a reference is proportional to the local derivative of the wave aberration [48,74]. An interesting system uses the skiascopic ocular wavefront-sensing device (also named the retinoscopy technique), which is a time-dependent method (rather than a position-dependent approach) for studying optical aberrations (mainly, the refractive error of the eye). In this case, the measurement of the time gap between the reflected light beams, thanks to a rotating array of detectors, is directly linked to the wavefront errors. The series of rapidly rotating sensors allows the collection of more than 1400 retinoscopic data points in a short period of time [75]. Further information is reported in the next sections, which describe some of the ophthalmological imaging methods and their applications.
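Because both the Shack-Hartmann and the ray-tracing approaches reduce to the same relation (the measured displacement is proportional to the local derivative of the wave aberration), a minimal modal reconstruction can be sketched as a least-squares fit of the measured slopes to the analytic gradients of a few low-order modes. The monomial basis, noise level, and coefficients below are illustrative and not taken from any of the cited systems.

```python
import numpy as np

# Sample pupil positions (normalized coordinates), illustrative 8x8 grid.
n = 8
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
x, y = xs.ravel(), ys.ravel()

def basis_gradients(x, y):
    """Gradients of simple modes: tilt_x, tilt_y, defocus, astig_0, astig_45."""
    gx = np.column_stack([np.ones_like(x), np.zeros_like(x), 2 * x,  2 * x, y])
    gy = np.column_stack([np.zeros_like(y), np.ones_like(y), 2 * y, -2 * y, x])
    return gx, gy

# Synthetic "measured" slopes from known coefficients plus a little noise.
c_true = np.array([0.1, -0.05, 0.3, 0.08, -0.12])
gx, gy = basis_gradients(x, y)
slopes = np.concatenate([gx @ c_true, gy @ c_true])
slopes += 1e-3 * np.random.default_rng(0).standard_normal(slopes.size)

# Least-squares inversion of the slope measurements (modal reconstruction).
A = np.vstack([gx, gy])
c_hat, *_ = np.linalg.lstsq(A, slopes, rcond=None)
print("true:", c_true)
print("fit :", np.round(c_hat, 3))
```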
IOLs Wavefront Aberrations Experimental Setups
Some optical setups are used for measuring the wavefront aberrations of the lens, and specifically of multifocal intraocular lenses (IOLs). Multifocal IOLs can be classified into two types: diffractive and refractive. Refractive IOLs have two or more curvatures forming refractive zones, whereas diffractive IOLs create more than one retinal image through light diffraction. The scheme of one of the simplest setups is shown in Figure 13. The system consists of a collimated 532 nm diode laser beam, a beam expander, a transparent cell (filled with 0.9% normal saline solution) in which the IOL is submerged, a collimating lens, and a Shack-Hartmann wavefront sensor. An XYZ translation stage is attached to the wet cell to align the IOL with the optical axis of the wavefront sensor.
Some sophisticated test benches [77] have been designed for lens optical characterization, including the possibility of testing the lens under off-axis conditions as well as in the presence of decentration and/or tilt, in agreement with ISO 11979-2:2014 [78]. This aspect is particularly relevant for characterizing the intraocular lens (IOL) under conditions close to those of a real human eye. To this end, artificial corneas with different amounts of spherical aberration (SA) are generally used. The main parts of the setup shown in Figure 14 are (1) the illumination sources (a white lamp and four types of LED at different wavelengths in the 459-637 nm range) to study the chromatic dispersion, which is a relevant parameter in multifocal lenses; (2) a USAF test chart for image quality assessment; (3) a collimator to analyze the IOLs according to the ISO standard (the object has to be at infinity); (4) pinholes with different diameters to check the lens optical performance; (5) the model eye with the artificial cornea; (6) the wet cell where the IOL is immersed (in some cases, a water bath containing 0.01% fluorescein solution was used to visualize the propagation of light rays illuminated by a monochromatic green laser light (532 nm) [79]); and (7) the image and wavefront analysis (a 10× microscope and a Hartmann-Shack sensor). Once the experimental data are acquired, the lens optical imaging quality is assessed by using common metrics, such as the modulation transfer function (MTF), the point spread function (PSF), and/or the Strehl ratio (SR). A further analysis is that of the lens's wavefront through an expansion in Zernike polynomials. This bench tests the ability of an optical system to reproduce an infinitesimally thin cross-slit image. The cross-sectional intensity profile of the reproduced image is then converted into MTF values via the Fourier transform of the line spread function. A similar setup has been used to estimate the energy distribution between the CCD-collected images as a function of pupil diameter. The authors found that, for large pupils, the energy efficiency of the distance image is strongly affected by the level of SA, although aspheric IOLs perform slightly better than their counterparts with a spherical design. For small pupils, there are no differences between spherical and aspheric IOLs [80].
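The MTF-from-line-spread-function computation mentioned above can be sketched in a few lines: the LSF is normalized, Fourier transformed, and its magnitude read off at the spatial frequency of interest. The Gaussian LSF and the sampling step below are hypothetical placeholders, not data from the cited benches.

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF as the normalized magnitude of the Fourier transform of the line spread function."""
    lsf = np.asarray(lsf, dtype=float)
    otf = np.fft.rfft(lsf / lsf.sum())           # normalize so that MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=dx)      # spatial frequencies (cycles per unit of dx)
    return freqs, np.abs(otf)

# Illustrative Gaussian line spread function sampled at 1 um steps (dx in mm).
dx = 1e-3
xpos = (np.arange(512) - 256) * dx
lsf = np.exp(-0.5 * (xpos / 0.01) ** 2)          # ~10 um (1-sigma) blur, arbitrary
freqs, mtf = mtf_from_lsf(lsf, dx)
idx = np.argmin(np.abs(freqs - 50.0))            # 50 lp/mm, as used for the TF-MTF curves later on
print(f"MTF at {freqs[idx]:.0f} lp/mm: {mtf[idx]:.3f}")
```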
Slightly more complicated optical bench setups have been proposed to characterize the optical features of advanced intraocular lenses such as diffractive IOLs. For instance, bifocal diffractive IOLs were obtained by combining two lenses: (1) a carrier lens which determines the power for the far vision and (2) a diffractive profile providing the addition needed to correct the near vision. This approach has been widely used in the correction of presbyopia or when cataract surgery is performed [81]. Figure 15 shows the optical setup used to measure the 1st and 0th order diffractive efficiencies, described in detail in Ref. [82] and briefly summarized below. A spatially filtered and collimated HeNe laser (633 nm) beam is used to obtain a smooth and flat wavefront. The optical bench must be vertical (left panel in Figure 15), as the lens floats in its cell. The 0th order remains collimated behind the diffractive lens and is brought to focus by an additional convergent lens of 100 mm focal length (right panel in Figure 15), which has a high Strehl ratio (98%); it is placed at the focus of the −1st order to reduce its contribution to the 0th order efficiency measurement. For each focus, the energy is integrated through a pinhole whose diameter is equal to that of the first ring of the Airy pattern, defined by the relation d = 2.44 λ f_ob/D (where λ = 633 nm is the wavelength of the excitation beam, D = 3 mm is the diameter of the stop, and f_ob is the focal length associated with the diffracted order). In the absence of aberration, the first ring of an Airy pattern contains 84% of the total energy; thus, a correction factor must be taken into account to calculate the diffractive efficiency.
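A short numeric sketch of the pinhole sizing and of the 84% encircled-energy correction described above follows; the focal lengths of the diffracted orders are illustrative values, not those of Ref. [82], and the measured energy fraction is a hypothetical placeholder.

```python
wavelength = 633e-9          # m, HeNe line as in the setup described above
D = 3e-3                     # m, diameter of the stop
for f_ob in (0.05, 0.10):    # illustrative focal lengths of the diffracted orders (m)
    d_airy = 2.44 * wavelength * f_ob / D       # diameter of the first Airy ring
    print(f"f_ob = {f_ob*1e3:.0f} mm -> pinhole diameter = {d_airy*1e6:.1f} um")

# The first Airy ring contains ~84% of the total energy for an unaberrated beam, so the
# energy collected through the pinhole is divided by 0.84 before quoting the efficiency.
measured_fraction = 0.50     # hypothetical fraction of the incident energy through the pinhole
print(f"corrected diffractive efficiency: {measured_fraction / 0.84:.2f}")
```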
Generally, the adaptive optics IOL metrology system is characterized by three main sections: a model eye, an imaging arm, and the adaptive optics (see Figure 16 and Ref. [83]). In detail:
The model eye: it consists of a wet cell in conjunction with an artificial cornea modelled by an aspheric doublet, as recommended by ISO 11979-2:2014 [78]. The air space between the artificial cornea and the wet cell was set to 4.0 mm, so that the ratio of the entrance pupil diameter to the beam size at the IOL was in accordance with that found in the Gullstrand model eye. The intraocular lens alignment was validated with a pupil camera, and the pupil size is accurately controlled with an artificial pupil located in a relayed pupil plane.
The imaging arm: images of a resolution target consisting of a tumbling-letter acuity chart with lines corresponding to 20/40, 20/30, 20/25, 20/20, and 20/15 were captured through focus. The letter chart was displayed by a computer projector in white light placed at the retinal plane. The model eye's retinal plane was magnified by a microscope objective onto a 5-megapixel charge-coupled device to improve the pixel sampling.
The adaptive optics system: it is incorporated into the optical bench to induce arbitrary corneal aberration profiles (LOAs and HOAs) onto the pupil plane of the model eye in real time. Finally, a large-stroke deformable mirror and a custom-made Shack-Hartmann wavefront sensor were used to verify the aberration induction of the deformable mirror.
The Strehl ratio S is a suitable figure of merit, defined as the normalized peak intensity of the PSF of the lens, S = I_real(0,0)/I_ideal(0,0) = |∫∫ e^{ikψ(x,y)} dx dy|², where I_real(0,0) and I_ideal(0,0) denote the intensities at the center of the real point image and of the ideal point spread function (PSF) without aberrations, respectively [19].
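The Strehl ratio defined above can be evaluated numerically from a residual phase map by comparing the on-axis PSF intensity with and without aberrations. The sketch below uses an arbitrary circular pupil and an illustrative spherical-aberration-like phase, and it also prints the extended Maréchal estimate for comparison.

```python
import numpy as np

def strehl_ratio(phase, pupil):
    """Strehl ratio from a residual phase map (radians) over a binary pupil mask."""
    field = pupil * np.exp(1j * phase)
    return np.abs(np.sum(field)) ** 2 / np.sum(pupil) ** 2   # same integral for the ideal pupil

# Circular pupil with a small spherical-aberration-like phase (illustrative only).
n = 256
y, x = (np.mgrid[0:n, 0:n] - n / 2) / (n / 2)
r2 = x**2 + y**2
pupil = (r2 <= 1.0).astype(float)
phase = 0.3 * (6 * r2**2 - 6 * r2 + 1)       # ~Zernike spherical term, radians
S = strehl_ratio(phase, pupil)
print(f"Strehl ratio: {S:.3f}")

# Marechal approximation for comparison: S ~ exp(-sigma^2), sigma = rms residual phase (rad).
vals = phase[pupil > 0]
sigma2 = np.mean(vals**2) - np.mean(vals) ** 2
print(f"Marechal estimate: {np.exp(-sigma2):.3f}")
```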
We outline that, together with the optical design of lenses with the proper shape and optimized wavefront sensors, the physical properties of the lens materials must also be optimized [84,85]. In fact, IOL material compositions, their design, and the application of polymer coatings cause significant changes in WF aberrations. Suitable optical materials must therefore be adopted to make polymeric IOLs and contact lenses. Among them, the most widely adopted are polymethyl methacrylate (PMMA), hydroxyethyl methacrylate (HEMA), silicone, hydrophilic acrylic, hydrophobic acrylic, hydrophilic-hydrophobic copolymers, and hydrogels, which have a high refractive index and excellent mechanical properties (see Table 3) that are useful in reducing the higher-order aberrations [86][87][88][89].
Adaptive Optics
The adaptive optics (AO) setup, first used in astronomy, is composed of a wavefront sensor, a deformable mirror, and a control system, strictly connected in a closed loop [90]. The input wavefront signal is analyzed by the control system, which continuously adjusts the needed correction through the deformable mirror, the surface of which is modified by tunable actuators [91]. Babcock introduced the idea of adaptive optics for the first time in 1953, with the aim of compensating astronomical observations [92]. Subsequently, Smirnov proposed applying the same idea to compensate for ocular aberrations [93]. Currently, the assets provided by adaptive optics are adopted in vision science, where researchers have focused on retinal imaging and on testing visual function [94,95]. In 1989, Dreher et al. presented the first adaptive ophthalmological optical system, based on a deformable mirror conjugated with the human eye to correct astigmatism [96]. A decade later, Williams et al. achieved the reduction of the Zernike aberrations up to the fourth order, with minor wavefront errors for defocus, astigmatism, coma, and spherical aberration. As shown in Figure 17, two subsystems were embedded for measuring contrast sensitivity and for performing retinal imaging. Subsequently, they upgraded the system by increasing the number of actuators in order to correct higher-order aberrations [95,97]. Nevertheless, scientists are interested not only in studying visual function and its errors but also in pathological retinal tissue [98]. Over time, AO retinal imaging has been integrated into clinical use, and recently a resolution of 2 µm was achieved [98]. For example, Roorda et al. [99] successfully incorporated AO into scanning laser ophthalmoscopes (SLOs). Recent technological advancements enable AO systems without a wavefront sensor, as in sensorless AO (SAO), and without a wavefront corrector, as in computational AO (CAO). Figure 18 shows the different categories of AO systems, as extensively described in Ref. [98]. Briefly, in the sensorless setup, the properties of the image are analyzed to adjust the needed correction, whereas in the computational system a digital filter is required for the compensation [98].
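The closed-loop arrangement of wavefront sensor, deformable mirror, and controller described at the beginning of this subsection can be caricatured with a simple integrator loop acting on a calibrated interaction matrix; the matrices, gain, and "aberration" below are random, illustrative stand-ins rather than any real system's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_act, n_meas = 12, 24
# Interaction (influence) matrix: wavefront-sensor response to each actuator (assumed calibrated).
D = rng.standard_normal((n_meas, n_act))
R = np.linalg.pinv(D)                       # least-squares reconstructor
true_aberration = rng.standard_normal(n_meas)   # static aberration seen by the sensor

gain, commands = 0.4, np.zeros(n_act)
for k in range(20):
    slopes = true_aberration + D @ commands     # residual measured by the wavefront sensor
    commands -= gain * (R @ slopes)             # integrator update applied to the deformable mirror
    if k % 5 == 0:
        print(f"iter {k:2d}: residual rms = {np.std(slopes):.3e}")
```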
As will be more evident in the following paragraphs, AO retinal imaging is playing an important role in monitoring both the progression and the treatment of retinal degenerations. AO retinal imaging will continue to be used to investigate diabetes and glaucoma. AO imaging has the potential to improve our understanding and perhaps the monitoring of cerebrovascular and neurodegenerative changes occurring in the retina [100]. Finally, the ability to routinely image cones, rods, and retinal pigment epithelium (RPE) cells will be an important factor in evaluating progression in macular degeneration as well as the impact of therapeutic interventions. The experimental setups previously discussed are designed to study aberrations in monocular vision. Nevertheless, this approach lacks the possibility of studying the interaction provided by binocular vision. For this reason, many efforts have been made to develop simultaneous binocular AO systems. In 2009, Fernández et al. proposed a visual simulator that was able to manipulate the aberrations independently in each eye [101].
Their system does not need duplicated components, such as a second wavefront sensor and wavefront corrector, to carry out the simultaneous measurements. In this case, liquid crystal on silicon (LCOS) is used as a wavefront corrector, which modulates the needed correction by modifying the refractive index of the liquid crystal. Binocular infrared optometers, instead, allow the simultaneous measurement of steady-state accommodation in both eyes, suggesting a significant correlation between the defocus terms in the right and left eyes of the same subject. Chin et al. [102], using a binocular Shack-Hartmann wavefront sensor, measured the ocular wavefront aberrations concurrently in both eyes of six subjects at a sampling rate of 20.5 Hz. More details about the experimental setup, shown in Figure 19, are reported in Ref. [102]. The data analysis procedure follows three main steps: (a) wavefront reconstruction; (b) removal of blink artefacts; and (c) coherence function analysis. In this way, a dynamic correlation between the ocular wavefront aberrations of the two eyes was obtained with a binocular Shack-Hartmann wavefront sensor. Specifically, the coherence function analysis shows that the interocular correlation between the aberrations depends on the subject, the Zernike mode, and the frequency, and that phase consistency dominates the coherence values.
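The coherence-function step of the binocular analysis can be illustrated with synthetic defocus traces for the two eyes sampled at the 20.5 Hz rate quoted above; the shared-drift-plus-noise model and the Welch parameters are assumptions made only for this sketch, not the processing of Ref. [102].

```python
import numpy as np
from scipy.signal import coherence

fs = 20.5                          # Hz, sampling rate quoted for the binocular measurements
t = np.arange(0, 60, 1 / fs)       # one minute of data, illustrative
rng = np.random.default_rng(2)

# Hypothetical defocus time series for the two eyes: a shared slow drift plus independent noise.
shared = np.cumsum(rng.standard_normal(t.size)) * 0.01
left = shared + 0.05 * rng.standard_normal(t.size)
right = shared + 0.05 * rng.standard_normal(t.size)

f, Cxy = coherence(left, right, fs=fs, nperseg=128)
print("mean coherence below 1 Hz:", np.round(Cxy[f < 1.0].mean(), 2))
print("mean coherence above 5 Hz:", np.round(Cxy[f > 5.0].mean(), 2))
```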
Intraocular Lens Design for Wavefront-Shaping Extended Range-of-Vision
Nowadays, wavefront technology is strictly connected to the development of new cutting-edge models of intraocular lenses (IOLs). The appropriate use of induced aberrations in extended-depth-of-focus (EDOF) IOLs has shown the advantage of enabling near vision and providing spectacle independence. An example is the MINI WELL (SIFI SpA, Catania, Italy), a non-diffractive EDOF IOL, where positive and negative aberrations are induced in the first two concentric sections, whereas the last ring is monofocal (see Figure 20A) [103,104]. This specific design enables the extension of the depth of focus and the achievement of a continuous focus range, with a good quality of vision provided between 4 m and 50 cm [104]. Starting from this innovative application, an optical system denominated WELL Fusion was developed to fully correct presbyopia. A second intraocular lens, called Mini WELL PROXA, was designed to work synergically with Mini WELL and to extend vision up to 33 cm [105]. The optical design of Mini WELL PROXA entails four annular zones where positive and negative aberrations are alternately introduced, as well as an external monofocal ring (see Figure 20B) [105]. The binocular system WELL Fusion involves the combined implantation of the two IOLs described above, and it exploits a patented wavefront-engineered technology to provide a good and continuous quality of vision between 4 m and 33 cm [105]. Over the past decade, the interest in EDOF IOLs based on wavefront technology has increased, and other devices have been developed [106,107]. Figure 21 reports the theoretical Through-Focus Modulation Transfer Function (TF-MTF) curves for Mini WELL and Mini WELL PROXA [105]. OpticStudio software (Zemax, LLC, Kirkland, WA, USA) was used to simulate the behaviour of both lenses at a spatial frequency of 50 lp/mm, considering an Arizona model eye and a 3 mm aperture. As can be seen from the figure, the modulation transfer function is quite similar in the far vision region for both lenses, whereas it provides a complementary response in the intermediate and near vision. As a matter of fact, the optics were designed to work jointly and to reach a full presbyopia correction by closing the gap in the near vision up to 33 cm. The optical quality of MINI WELL was tested and compared with other IOLs on the market. Domínguez-Vicent et al. compared MINI WELL with TECNIS Symfony (Johnson & Johnson Surgical Vision Inc., Santa Ana, CA, USA) in terms of optical quality metrics, such as the modulation transfer function (MTF) and the through-focus MTF (TF-MTF). Both IOLs provide an extended depth of focus but with different optical designs: TECNIS Symfony exploits an achromatic diffractive platform, whereas MINI WELL introduces spherical aberrations on a non-diffractive surface [108]. The study carried out by Domínguez-Vicent et al.
demonstrated that MINI WELL is more defocus-tolerant at intermediate and near distances than TECNIS Symfony, in both photopic and scotopic conditions. This experimental result is consistent with the clinical outcomes reported by Nowik et al. in their retrospective observational study [109]. Nowik et al. compared MINI WELL with TECNIS Symfony from the clinical point of view; they found that MINI WELL provides a larger depth of focus than TECNIS Symfony, and the difference was statistically significant. Moreover, MINI WELL recorded a lower incidence of dysphotopsia, thanks to its non-diffractive optics, and a higher percentage of spectacle independence at both close and intermediate distances. MINI WELL and TECNIS Symfony were also compared by Camps et al. in terms of their "in vitro" aberrometric profiles [76]. Camps et al. used an experimental setup, including a Shack-Hartmann wavefront sensor, to obtain Zernike polynomials from the third to the sixth order. As expected, MINI WELL generated positive and negative spherical aberrations. Camps et al. found that TECNIS Symfony generated some negative spherical aberration (−0.12 µm) to compensate for the positive primary spherical aberration which is normally present in the cornea.
Refractive Surgery
Refractive surgery exploits laser ablation to modify the shape of the cornea and, consequently, the refraction it provides. In clinical practice, refractive surgery is a routine treatment aimed at correcting vision impairments such as myopia, hyperopia, or astigmatism. Different surgical techniques can be used to compensate for refractive errors. Nowadays, laser in situ keratomileusis (LASIK) and photorefractive keratectomy (PRK) are among the most widely used treatments [110]. Unfortunately, traditional refractive surgery can increase higher-order aberrations, especially spherical aberrations [15]. Wavefront-guided refractive surgery avoids this side effect thanks to a prior wavefront analysis: the intended ablation pattern is customized and based on the aberration map of the patient's eye. A limitation of this technology is that precise alignment with the eye is critical for a good outcome. Moreover, the success of the surgery strongly depends on the healing process, so the result is somewhat unpredictable. Nevertheless, the risk of induced aberrations is lower than with traditional refractive surgery, and many studies have reported improved contrast sensitivity and a reduction in halos and glare [23,111]. Figure 22 shows a real case of a patient who underwent a wavefront-guided PRK treatment [112]. The comparison between the preoperative and postoperative topography, together with the statistical analysis, demonstrated a significant decrease in aberrations.
The introduction of customized refractive surgery was made possible by the development of specific technology, taking into account the main limitations due to the dimension of the laser beam and thus the precision of spot placement. Small irregularities in the cornea are generally treated using flying-spot technology characterized by smaller beams (0.5 to 1.0 mm), allowing better and more accurate results in custom ablations to correct irregular astigmatism. However, spot sizes of less than 1 mm are required to adequately correct up to the fourth-order terms [113]. In some cases, devices with a variable spot size (e.g., VISX S4; Abbott Medical Optics Inc., Santa Ana, CA, USA) are adopted, as well as devices allowing the overlapping of the spots to obtain a smooth surface. In this case, high-speed eye-tracking systems are implemented because of the smaller spot size and also because of the risk of individual pulse decentration and misplacement compared to broad-beam lasers. Centration needs to be accurate, as a minimal misalignment can induce a completely different aberration pattern. In addition, the scanning-spot frequency must not exceed the rate followed by the tracking system. Finally, it is worth mentioning that other treatments take into consideration the impact of aberrations in refractive surgery, such as the wavefront-optimized profile and the custom Q-factor profile. The wavefront-optimized profile is based on an aspheric profile designed by Mrochen et al. [114] in order to compensate for the aberrations induced by conventional refractive surgery. In fact, the loss in ablation efficacy, due to the angle of incidence of the excimer laser pulses in the midperiphery, can lead to a decrease in the intended ablation depth and, in turn, an increase in spherical aberration [113]. The custom Q-factor profile aims to improve the visual outcome through the manipulation of the corneal asphericity. Manns et al. [115] suggested that a minimum of spherical aberration would be obtained at a target Q-factor of approximately −0.4 [113]. It remains to be seen whether all these treatments are totally beneficial for visual performance. Technology such as adaptive optics might be a useful tool to reach a higher level of customization.
For example, preoperative patient simulations, with different combinations of aberrations, might help in determining the specific amount and Zernike mode of aberration to target with the treatment [113].
WFS Combined with Ophthalmic Technologies
One of the causes of blindness is the dysfunction of the blood-retinal barrier, typically observed in people affected by diabetic retinopathy [116], whose study requires a high optical resolution (6.5 µm in diameter [117]) to visualize single capillaries and blood cells, which, in the imperfect optics of the mammalian anterior eye, induce aberrations that blur the microscopic retinal features. Even though AO ophthalmoscopy has enabled diffraction-limited imaging of the retina by measuring and correcting the higher- and lower-order aberrations of the eye, single blood cells still cannot be easily observed. The movement of the blood cells limits the acquisition, which can be made using high-frame-rate cameras [118]. Furthermore, fast cameras require even more light, which could damage the eye tissue. To overcome this drawback, Guevara-Torres et al. [119] developed a scanning imaging system allowing the collection of two-dimensional raster images at a rate of 25 frames per second, with 1D fast scanning operating at 15.45 kHz. This setup was composed of five pairs of afocal telescopes that relayed coaligned beams for imaging and wavefront sensing. Laser sources at 843 nm or 904 nm were used. In the return path, light is reflected into high-sensitivity photomultiplier tubes (Figure 23) and, in real time, the eye aberrations were measured with a Shack-Hartmann wavefront sensor and corrected with a deformable mirror.
Dynamic and static wavefront aberrations influence retinal OCT image quality across a wide and limited field of view (FOV). Actually, optical coherence tomography angiography (OCTA) has become an increasingly important tool for diagnosing retinal parafoveal microvasculature and vein occlusion. In particular, adaptive optics with closed-loop feedback, wherein a wavefront sensor detects and a deformable mirror compensates optical aberrations, has been considered as a potential solution [120]. Polans et al. proposed a compact OCTA system integrated with wavefront sensorless adaptive optics (WSAO). The wide-field OCTA system covers a 70° field of view, ultimately allowing the correction of peripheral aberrations within 2 s to a level that was sufficient for the enhanced visualization of microvasculatures and microaneurysms in diabetic patients. Recently, some researchers have worked to optimize the image processing approach which is useful for generating retinal perfusion maps adapted to image sequences obtained with AO-corrected ophthalmoscopes [121,122]. However, in the contrast maps some artifacts are present, which implies an uncertainty as to whether a movement observed between two frames is due to physiological reasons or due to scan distortion [123]. Moreover, some other drawbacks should still be overcome, such as a small field of view, an uneven contrast in the capillaries, or a limitation concerning the direction and plane of the vessels whose blood flow can be analyzed. Salas et al. [124] developed a computational approach, relying on a spatio-temporal filtering of the image sequence, which is useful for isolating blood flow from noise in low-contrast sequences. Applying this computational approach, angiography with an adaptive optics flood illumination ophthalmoscope (AO-FIO) using NIR light, in both bright-field and dark-field modalities, has been carried out [124]. Figure 24 reports a scheme of the AO flood illumination ophthalmoscope, arranged in two parts: (1) wavefront (WF) sensing and control and (2) illumination and detection. The first is composed of a reference source (Ref Source), a wavefront sensor (WFS) (microlens array, relay optics, and WFS camera), a WFS beacon source, and a deformable mirror (DM). An additional calibration source can be inserted in place of the eye to calibrate the adaptive optics loop. The illumination and detection subsystem is composed of the retinal imaging camera and the corresponding wide-field imaging source.
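Spatio-temporal filtering of this kind generally exploits the fact that, in a registered AO image sequence, moving blood cells change a pixel's intensity over time while static tissue does not. The following is a minimal motion-contrast sketch under that assumption (temporal high-pass followed by a per-pixel standard deviation); it illustrates the general idea and is not the specific filter of Salas et al. [124].

# Toy motion-contrast map from a registered image stack of shape (frames, H, W).
# Assumes frames are already co-registered and photometrically stable.
import numpy as np

def perfusion_map(stack, window=5):
    stack = stack.astype(float)
    # temporal high-pass: subtract a running mean to suppress static structure
    kernel = np.ones(window) / window
    baseline = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, stack)
    dynamic = stack - baseline
    # per-pixel temporal standard deviation: high where blood cells move
    contrast = dynamic.std(axis=0)
    return contrast / (stack.mean(axis=0) + 1e-9)  # normalise by mean brightness

rng = np.random.default_rng(0)
frames = rng.normal(100.0, 1.0, size=(50, 64, 64))
frames[:, 30:34, :] += 20 * rng.standard_normal((50, 4, 64))  # a fluctuating "vessel"
print(perfusion_map(frames)[32, 32] > perfusion_map(frames)[10, 10])  # True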
As outlined by Piñero et al. [125], the consistency of the refractive measurements is not dependent on the magnitude of the refractive error, with the same precision ability for moderate to high myopia and for hyperopia. In the last few years, teams of researchers have adopted the Visionix VX120 (Luneau Technologies SAS, Pont-de-l'Arche, France), a multidiagnostic platform providing consistent measurements of refraction, keratometry, central corneal thickness (CCT), and iridocorneal angle (IA) in normal healthy eyes. This noninvasive multi-diagnostic platform, which offers a high level of intra- and inter-session repeatability, combines refraction (Hartmann-Shack-based autorefractometer), simulated keratometry (based on Placido disk videokeratography), non-invasive stationary Scheimpflug-based pachymetry, and Hartmann-Shack wavefront aberrometry (see Ref. [126]). So, a complete exam of the anterior segment of the eye (cataracts, refractive error screening, glaucoma screening and monitoring, adaptation of rigid and scleral contact lenses, keratoconus stage classification and monitoring, and complete readings for keratometry and night vision) could be made. Recently, François Hénault et al. [127] proposed a crossed-sine wavefront sensor which is useful for simultaneously achieving a high spatial resolution at the pupil of the tested optics and an absolute measurement accuracy comparable to that attained by laser interferometers. This is obtained using a linear gradient transmission filter (GTF), located at the image plane of the tested optical system, the mini-lens array, and a detector array, thus allowing the acquisition of four pupil images simultaneously. The authors also carried out numerical simulations in order to assess the performance of the crossed-sine WFS in terms of measurement accuracy. The accuracy of the crossed-sine WFS is better than λ/100 RMS, which is significantly higher than that offered by commercial WFS (typically λ/25 RMS). Furthermore, the crossed-sine WFS offers the advantage of being quasi-achromatic and able to work on slightly extended illumination objects, thus allowing a vast choice of natural or artificial sources.
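To put these figures on a common scale, RMS accuracy quoted as a fraction of λ can be converted to nanometres once a working wavelength is assumed; the values below use λ = 632.8 nm (a typical He-Ne test wavelength, chosen here purely for illustration):

\sigma_{\lambda/25} = \frac{632.8\ \mathrm{nm}}{25} \approx 25.3\ \mathrm{nm}, \qquad \sigma_{\lambda/100} = \frac{632.8\ \mathrm{nm}}{100} \approx 6.3\ \mathrm{nm},

so the quoted crossed-sine accuracy corresponds to roughly a four-fold reduction in residual RMS wavefront error compared with a typical commercial sensor.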
For future technological applications, we recall the paper of Pelzman et al. [128]. Generally, a multi-lens setup and several images are necessary to measure the wavefront using the sensors previously described. For example, the pyramid sensor requires a lenslet array as well as a mechanical vibrating crystal, and the Shack-Hartmann sensor uses an array of micrometer-scale lenslets to convert the wavefront information of an incoming beam into a two-dimensional intensity map made out of focused spots. Thus, an optimization of the optical alignment of the lenslet array and a focal plane array are fundamental steps to carry out. On the other hand, an ultrahigh spatial resolution is the peak demand today in wavefront detection. It is well known that the excited SP waves in subwavelength structures still carry the wavefront information of the incident wave, according to the Huygens-Fresnel principle [129]. Starting from this principle and using a concentric-ring-based aperture array fabricated in an Au film, Pelzman et al. have developed a device showing wavefront-dependent focusing of the surface plasmon (SP) waves. In addition, the demonstrated approach does not require complicated 3D integration or optical alignment, and thus, it has great potential for revolutionizing the existing wavefront sensing technologies. Figure 25 shows the confocal configuration used to measure the shift in the focal spot. The shape of the incident wavefront is easily controlled through defocusing the excitation beam while maintaining the imaging objective in focus. Specifically, by intentionally defocusing the excitation beam, the shape of the incident wavefront was converted from convex to concave. The inset of Figure 25 shows how the SP waves excited (by a diode-pumped 532-nm laser) on the surface interact with the fluorescent dye molecules embedded in the PMMA layer. Then, the emitted fluorescence signal from the interaction of the suspended R6G molecules with the SP waves was collected through the imaging microscope objective. A long-pass optical filter with an edge wavelength of 550 nm was used to block the optical signal from the 532 nm excitation line. Some details are reported in Ref. [128]. Recently, an innovative approach was based on the use of artificial metamaterials, known as metasurfaces, which can impart a phase shift on transmitted or reflected light, allowing for unconventional beam shaping over subwavelength distances [130,131]. Recalling once again the principle of Huygens-Fresnel, a physical implementation of Huygens' fictitious sources can be realized by engineering crossed electric and magnetic dipoles, thus providing full transmission with an arbitrary 2π phase and, in turn, allowing extreme control and manipulation of light [132-135]. In detail, the atomic array, producing diffraction-limited focusing of light with very short wavelength-scale focal lengths, was simulated by coherently superposing the induced electric and magnetic dipoles. In this way, a quantum nanophotonic Huygens surface of atoms was engineered, obtaining full 2π phase control over the transmission with close to zero reflection. In view of the diffraction-limited focusing, atomic arrays offer advantages over plasmonic or dielectric platforms (i.e., the absence of absorptive loss and fabrication inhomogeneities and a great flexibility to operate at the quantum limit) [136]. A representative atomic Huygens surface with a strong magnetic response at optical frequencies is shown in Figure 26, as reported in Ref. [136]. The atomic array consists of a 2D square lattice in the yz plane. Each site consists of a square unit cell of four atoms, forming an atomic bilayer.
In Figure 26b,c, a scheme is also reported indicating how a uniform polarization on each atom leads to an effective electric dipole moment d from the unit cell, while an azimuthal polarization leads to a net zero electric dipole moment and to a perpendicular magnetic dipole moment m. Wavefront Sensing Technology to Empower Clinical Ophthalmic Surgery Application of Multifocal IOLs: Future Developments Wavefront technology has the potential to help us truly assess and understand how and what the patient really sees. With this more comprehensive understanding of the patient's aberrations comes an increased capacity and responsibility to correct them. Furthermore, wavefront sensing technology empowers the surgeon to ensure that the IOL implanted is the one that will achieve the refractive outcomes which are unique to each patient's visual needs. As cataract surgery has evolved into lens-based refractive surgery, expectations for refractive outcomes continue to increase with a wide variety of options to correct refractive error. As already mentioned in the previous sections, in the clinical setting wavefront systems are generally used in combination with corneal topographers to evaluate the aberrations of the patient's eye in the preparation for LASIK treatment or the implantation of IOLs in the pre-operative and post-operative follow-up phases. Today, new keratorefractive techniques such as small incision lenticule extraction (SMILE) avoid corneal flap creation and use a single laser device, while advances in surface ablation techniques have seen a resurgence in popularity. Presbyopic treatment options have also expanded to include new ablation profiles, intracorneal implants, and phakic intraocular implants. For all these approaches, a pre-operative evaluation of refractive patients is strongly necessary.
Recently, this evaluation has been carried out by using machine learning and artificial intelligence [137], in which multiple diagnostic tools receive information about the eye and guide the surgeon regarding the lens or the best corneal refractive surgery method to perform on a specific patient to adequately correct the refractive error, improving the quality of the retinal image to beyond normal levels. Figure 27 shows a summary of the most widely used refractive surgery techniques. For example, conventional LASIK is useful to correct lower-order aberrations, such as defocus and astigmatism, but it is not adequate for patients with other distortions, such as halos, glare, and impaired night vision. This is because, with conventional LASIK, we are unable to see the true complexity and the interrelationship of the aberrations. In fact, we can see different aberrations independently, but we have no complete map of their relationship with one another. A further complicating factor is that the amount of higher-order aberrations the population experiences is not at all related to the level of myopia. In other words, patients with -1 D can have just as many higher-order aberrations as those with -8 D. This means that refractive surgery that addresses only the sphere and the cylinder may not improve a patient's overall vision. On the other hand, to date, custom ablation allows us to avoid increasing spherical aberration, thereby significantly improving halos at night. Studies have found that patients treated with custom ablation experience improvements in glare, halo, night driving, blurred vision, and fluctuation of vision. Recently, Alcon launched the Optiwave Refractive Analysis (ORA) system (Alcon Laboratories, Inc., Fort Worth, TX, USA), which optimizes intraoperative wavefront data to calculate IOL power and helps with IOL selection. It exploits Talbot Moiré interferometry to provide accurate real-time information during surgery [138]. It also includes analytical tools to evaluate results compared to an aggregate global database [139]. Moreover, although the expansion of the optometric scope of care may have drifted the profession away from the traditional roots of physiological optics and towards the treating and managing of ocular disease, non-surgical wavefront correction provides evidence that once again refractive error is an appealing and central part. Most importantly, patients will benefit from better visual quality with the least invasive solution.
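Intraoperative aberrometry such as ORA ultimately feeds a measured ocular refraction into an IOL power calculation; the vendor's actual algorithm is proprietary, so the sketch below only illustrates the classical thin-lens vergence formula for an emmetropic target, and the axial length, corneal power and assumed effective lens position (ELP) are hypothetical example values.

# Textbook thin-lens vergence estimate of IOL power for emmetropia (illustrative only;
# this is not the ORA algorithm, and the numbers below are made-up example inputs).
N_AQUEOUS = 1.336  # refractive index used for both aqueous and vitreous here

def iol_power_emmetropia(axial_length_mm, corneal_power_D, elp_mm):
    al = axial_length_mm / 1000.0   # metres
    elp = elp_mm / 1000.0
    k = corneal_power_D
    # vergence needed just behind the IOL plane to focus on the retina,
    # minus the vergence arriving at that plane from the cornea
    return N_AQUEOUS / (al - elp) - (N_AQUEOUS * k) / (N_AQUEOUS - elp * k)

print(round(iol_power_emmetropia(23.5, 43.5, 5.25), 1))  # roughly 20.7 D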
In fact, with wavefront analysis we can really see the whole problem and treat it as such and begin to understand that not everyone's visual map is the same. Here, we outline that wavefront-guided devices do not stop with custom ablation. Wavefront devices have come a long way since the original bulky prototypes first used, and now researchers are experimenting with numerous exciting prospects [140][141][142]. Now that some of the devices are so small, the possibilities are virtually limitless. One idea that is currently in development is wavefront-guided contact lenses, which could be customized to the individual's eye using digital information. Another possibility is to adjust IOLs digitally inside the eye with a wavefront device. Together with the wavefront technologies, particular attention should be paid to intraocular lenses (IOLs). As reported in Section 5.2, IOLs represent the most advanced solution for cataract refractive surgery. The most advanced IOLs for this purpose are the EDOF (extended depth of focus) lenses that present an optical plate with a continuous series of focuses to ensure a continuum correction from far to near in the case of presbyopia and, in case of astigmatism, to provide compensation for the corneal abnormal curvature. This field is still the subject of analysis and prototyping. As the IOL trend is oriented towards lenses of increasing complexity, it is necessary to have wavefront analysis instrumentation that is, in turn, able to follow the complexity of the lens to allow its validation, compliance with the optical design, quality control, and consistency with production batches. This need forces the development trend towards the wavefront in AO technology. Conclusions The selection of the most adequate AO wavefront sensing detectors is essential to analyze the optical retinal imaging modalities and the IOL/contact lenses performance. Nowadays, to compensate for the light aberrations, adaptive optics (AO), a technology initially developed in astronomy, is largely utilized. In this review, we first reported on an overview of the main wavefront sensors planned to be a part of the many instruments that are currently under development for AO applications, and we described their advantages and limitations. In the second part of this review, we outlined selected applications of the IOL and AO systems and the issues that have to be solved to approach the high performance of the optical systems as well as the high degree of process control that is required in AO applications. Finally, the directions for further investigations are reported with regard to the potential of the new materials, whose physical properties are particularly interesting for creating new designs and optimizing the performance of IOLs and AO systems. To this end, further studies closely combining the features of wavefront science with the application demands of the various functional materials are still necessary.
2022-12-07T16:17:44.396Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "18a8353b24eb7307e1503591a2c074752b2ebd87", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/14/23/5321/pdf?version=1670238003", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b056f51b9426b9304cd338753c6ebb2e9a6e3479", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
3537842
pes2o/s2orc
v3-fos-license
Towards Flexible, Small-Domain Surface Generation: Combining Data-Driven and Grammatical Approaches As dialog systems are getting more and more ubiquitous, there is an increasing number of application domains for natural language generation, and generation objectives are getting more diverse (e.g., generating information-ally dense vs. less complex utterances, as a function of target user and usage situation). Flexible generation is difficult and labour-intensive with traditional template-based generation systems, while fully data-driven approaches may lead to less grammatical output, particularly if the measures used for generation objectives are correlated with measures of grammaticality. We here explore the combination of a data-driven approach with two very simple automatic grammar induction methods, basing its implementation on OpenCCG. Introduction & Related Work As language-operated interactive devices become increasingly ubiquitous, there is an increasing need for not only generating utterances that are comprehensible and convey the intended meaning, but language that is adaptive to different users as well as situations (see also (Dethlefs, 2014) for an overview). Adaptation can happen at different levels, concerning content as well as the formulation of generated sentences. We here focus only on sentence formulation with the goal of being able to automatically generate a large variety of different realisations of a given semantic representation. Our study explores the combination of a data-driven approach (Mairesse et al., 2010) with a grammar-based approach using OpenCCG (White et al., 2007). The use of templates is a common and well-performing approach to natural language generation. Usually, either the generation process consists of selecting appropriate fillers for manually-built patterns, or the semantic specification constrains the allowable surface constructions so strongly that it effectively constitutes a form of template as well. While such approaches do guarantee grammaticality when templates (or grammars, respectively) are well-designed, the amount of formulation variation that can be generated based on templates is either very low, or requires a huge manual effort in template creation. One relevant objective in adapting to a user and a situation is utterance complexity. (Demberg et al., 2011) show that a dialog system that generates more concise (but also more complex) utterances is preferred in a setting where the user can fully concentrate on the interaction, while a system that generates less complex utterances is preferred in a dual tasking setting while the user has to steer a car (in a simulator) at the same time. But how do we know which utterance is a "complex" one? We can draw on psycholinguistic models of human sentence processing difficulty, such as dependency locality theory (measuring dependency lengths within the sentence; longer dependencies are more difficult), information density (measuring surprisal -the amount of information conveyed in a certain time unit; a higher rate of information per time unit is more difficult) or words-per-concept (how many words are used to convey a concept). In this paper, we focus on the measure of information density, which uses the information-theoretic measure of surprisal (Hale, 2001;Levy, 2008), as well as the ratio of concepts per words. Our aim is to flexibly generate utterances that differ in information density, producing high-density and low-density formulations for the same underlying semantic representation. 
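Surprisal-based information density can be made concrete with a few lines of code. The sketch below assumes a toy conditional probability table in place of a real n-gram model (the actual work would use a trained model, e.g. a Kneser-Ney smoothed trigram as described later); it simply turns per-word probabilities into surprisal in bits and averages them over the utterance.

# Toy surprisal / information-density calculation over a single utterance.
# probs maps each word to P(word | context) as a stand-in for a trained n-gram model.
import math

def surprisals(words, probs):
    return [-math.log2(probs[w]) for w in words]          # bits per word

def information_density(words, probs):
    s = surprisals(words, probs)
    return sum(s) / len(s)                                 # mean bits per word

utterance = ["der", "Film", "läuft", "um", "20", "Uhr"]
probs = {"der": 0.20, "Film": 0.05, "läuft": 0.02, "um": 0.10, "20": 0.01, "Uhr": 0.30}
print([round(x, 2) for x in surprisals(utterance, probs)])
print(round(information_density(utterance, probs), 2))    # higher = denser utterance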
We evaluate different parametrisations of our approach by evaluating how many different high vs. low density utterances can be generated. We additionally present judgments from human evaluators rating both grammaticality and meaningfulness. We collect a small corpus of utterances from the target domain and have them annotated by naive participants with a very shallow notion of semantics, inspired by (Mairesse et al., 2010). We then parse the sentences and automatically create typed templates. During generation, these typed templates are then combined into new unseen sentences, covering also previously unobserved semantic combinations. Generation flexibility in this approach depends entirely on the crowd-sourced domain corpus. Our approach is related to (DeVault et al., 2008), who automatically induce a tree-adjoininggrammar for the Doctor Perez domain. Our system is realised using OpenCCG. Currently, we disallow cross composition and type raising and thus employ Categorial Grammar as the underlying model. Data Our data consists of 247 German-language utterances informing about movie screenings. Each utterance may inform about the aspects: movie title, director, actor, genre, screen date and time, cinema, ticket price, and the screened version. They were collected from native speakers of German via crowd-sourcing. For this, we generated random semantic requests and elicited realisations for them from native speakers. The obtained surfaces were then annotated by different persons. Annotation follows (Mairesse et al., 2010)'s semantic stack scheme with a slight modification: instead of allowing multiple instances of one semantic value stack, we explicitly mark alternatives as shown. We focus on the 117 unique requests with only positive ("inform") stacks, disregarding negative ("reject") data for now. We have a total of 158 sentences realising these 117 entries. We use 75% of our data as training set and 25% as development set. As test set, we construct 200 additional requests for which we do not elicit example sentences. All sets contain roughly equal amounts of each semantic aspect. Our Approach Based on the annotated sentences, our goal is to automatically populate a lexicon of multi-word units such that these units express a specific attribute from our domain and can be combined with other lexicon entries into a grammatically correct structure. We cannot solely rely on shallow language models for grammaticality (as Mairesse does) as the language model scores may be correlated with output from other objective measures. Specifically, one of our measures of grammatical complexity, surprisal, is often estimated based on n-gram models. Hence, when seeking to optimise for short utterances with high surprisal, we might end up only selecting highly ungrammatical utterances. To avoid issues, we decided to explore whether the data-driven approach can be combined with a grammar-based approach. We automatically parse all training data with a dependency parser (we use the dependency parser from the mate-tools toolkit, based on (Bohnet, 2010)), and build a categorial grammar based on these parses. The resulting automaticallylearned domain-specific lexicon can then be used for generation with OpenCCG. Our approach can hence be thought of as a very naive way of grammar induction. The dependency parse gives us information about heads and their dependents, which allows us to construct categorial grammar categories. However, we do not know from the automatic parse which dependents are arguments vs. 
modifiers. We here explore two simple approaches: In the all-arguments (arg) style, we build a CG type that produces exactly the encountered configuration of immediate dominance and linear precedence. This means that we assume all dependents to be arguments of their governing head. We arbitrarily choose to consume the arguments on the right of heads first, followed by those on the left. In the all-modifiers (mod) style, we treat all dependents D as modifiers of their head H. Thus, from each head-dependent pair we construct a CG type that modifies H's type (H\H or H/H, depending on which side of the head the dependent occurs on). For both flavours, we use part-of-speech tags as basic types. For now, we forego any additional constraints. Clearly this means that our grammars overgenerate. Our goal here in this paper is to explore the extent to which we are able to generate a large amount of linguistic variants and the extent to which these are considered "good" by human comprehenders. The modifier-only approach is less constrained than the argument-only variant, which should lead to more variety and lower grammaticality. Request Semantics In our approach, each word is considered to be either semantically informative or semantically void. It is semantically informative if it is a word or placeholder for a certain information type. For instance, "ACTOR" is the placeholder for an actor's name, and the noun "Originalversion" indicates that a movie is shown in its original version. All other words are considered to be semantically void and called padding. In this setting, a request specifies only the semantic stacks to be conveyed plus the amount of padding to use. Note that using more padding biases the generation process towards more verbose formulations. Additionally we assign a special semantic representation ("VERB") to verb types. This is done to focus the search on full sentences instead of accepting arbitrarily complex noun phrases as complete answers to requests. Sub-Tree Merging As our requests are structure-agnostic, the search space always contains all words potentially usable for a request irrespective of compatibility with each other. In order to alleviate the arising problem of search space size, we merge words that often co-occur into larger entries in the lexicon. We do this as follows: adjacent heads and dependents are merged if they do not both contain semantic information. As an example, a semantically informative adjective (such as "untertitelte"="subtitled") cannot merge with a noun head if the latter contains semantic information itself (as "Abenteuerfilm"="adventure movie" does). However, if the head is semantically void (such as "Film"="movie"), the two words are combined into one lexicon entry "untertitelte Film" with the semantic assignment "version=subtitled". This reduces the search space and speeds up search greatly. We implement two slightly different versions of this. In the first, verbs are exempt from merging. In the second, verbs may be merged with padding words, resulting in longer "VERB" chunks. One may expect this to result in slightly increased grammaticality.
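The two induction styles and the merge rule can be made concrete with a small sketch. The snippet below is an illustrative reimplementation, not the authors' code: it assumes a dependency parse given as (index, POS, head index, semantics) tuples, builds arg-style categories by consuming right dependents first and then left dependents, builds mod-style categories as X/X or X\X over the head's POS, and merges an adjacent head and dependent when at most one of them carries a semantic stack.

# Illustrative sketch of the two naive CG induction styles and the merge rule.
# Token: (index, pos, head_index, semantics or None); head_index -1 marks the root.
def arg_category(tokens, i):
    pos = {t[0]: t[1] for t in tokens}
    rights = [t[0] for t in tokens if t[2] == i and t[0] > i]
    lefts = [t[0] for t in tokens if t[2] == i and t[0] < i]
    cat = pos[i]
    for j in sorted(rights):                 # consume right arguments first
        cat = f"({cat}/{pos[j]})"
    for j in sorted(lefts, reverse=True):    # then left arguments
        cat = f"({cat}\\{pos[j]})"
    return cat

def mod_category(tokens, i):
    pos = {t[0]: t[1] for t in tokens}
    head = next(t[2] for t in tokens if t[0] == i)
    if head < 0:
        return pos[i]                        # root keeps its basic type
    h = pos[head]
    return f"({h}\\{h})" if i > head else f"({h}/{h})"

def merge_adjacent(tokens):
    """Return an adjacent head/dependent pair that may form one lexicon entry."""
    for a, b in zip(tokens, tokens[1:]):
        dependency = a[2] == b[0] or b[2] == a[0]
        if dependency and (a[3] is None or b[3] is None):
            return a, b
    return None

# "untertitelte Film": informative adjective + semantically void noun head
toy = [(0, "ADJA", 1, "version=subtitled"), (1, "NN", -1, None)]
print(arg_category(toy, 1), mod_category(toy, 0), merge_adjacent(toy))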
Experimental Setup We build four grammars from data: two argument-only (A1, A2) and two modifier-only grammars (M1, M2). In A1 and M1, verbs are exempt from merging, in A2 and M2, verbs are merged with surrounding padding words as described in 3.2. Manually-Constructed Grammar Additionally, we construct a small grammar G manually. In G, we also do not make use of type raising nor cross composition, but we employ features to enforce both congruence on linguistic features and thematic fit between e.g. verbs and nouns (e.g. only a price or a movie may be assigned a cost, but not a director). G models the most common structures used in the original data and contains most of the vocabulary used therein. Search Timeout We determine the search timeout to use in each generation request from the development set. Figure 2 shows achieved development set coverages in dependency of OpenCCG search timeouts. In this experiment, we use a padding of 5, as this is the maximal padding encountered in data. Our search is thus calibrated on the most complex utterance(s) in the data. After roughly three hours, most of the grammars have achieved saturation satisfactorily well. We set the timeout to three and a half hours for our main experiments. Language Model for Perplexity Evaluation We train a simple Kneser-Ney smoothed trigram on our training data, which we use in order to pre-select candidates for further evaluation. Main Generation Experiment After training and timeout selection, we automatically generated 200 semantic requests, each consisting of 2 to 8 semantic stacks, and generated realisations for each semantic request by each of our grammars. We do this six times in total, varying the number of padding semantics P between 0 and 5. We then select one short and one long sentence per semantic request from each grammar's output. We pick the sentence with the lowest language model perplexity from the 25% longest and 25% shortest sentences, respectively, selecting 1540 sentences. Table 1 below shows the number of test semantics that each grammar is able to produce results for, grouped by the padding they contain (cf. 4.4). Every other row indicates cumulative coverages, i.e., the number of covered semantics when using up to that many padding words, giving an impression of the coverage increments when using more padding words. The argument-only grammars achieve highest overall coverage, while the manual grammar achieves the worst coverage. In the arg grammars, using more padding deteriorates coverage. This is likely due to search space size increasing. The mod grammars fail to piece together short sentences. Grammar Evaluation In Table 2 we report language model perplexities (PP), parse scores from Stanford Parser, percentages of selected sentences parseable by the German Grammar HPSG, and average human ratings (1=worst, 5=best) of grammaticality and meaningfulness. Table 2: Average values rounded to two decimal points. "S": avg. sentence surprisal. "PCFG": mean PCFG parse score. "HP": fraction parseable with HPSG. Annotators agreed exactly in 44%, and differed by no more than 1 in 75.8% of cases. PCFG scores are inconclusive. G performs best except for in perplexity, which we believe is due to G overrepresenting unusual formulations as well as the fact that correct use of long-range dependencies leads to local increases in perplexity when the trigram horizon fails to adequately capture the dependency. G has consistently high output quality as evidenced by its small standard deviation of human ratings. The modifier-only grammars consistently perform worst. Both their fraction of HPSG-parseable sentences and human-perceived grammaticality are very low. The argument-only grammars perform fairly well, but do not quite reach up to the manually-written grammar.
Their high standard deviation points towards a mix of high-quality and low-quality outputs. Notet that higher HPSG parseability does not necessarily imply higher human ratings. We believe this is due to correct, but confusing or unnatural stacking of attributions. Information Density Variation We plot the distributions of trigram perplexity at sentence level and those of the concepts-per-words ID measure. On both metrics, G is the most variable grammar. We positively note that A1 and A2's CPW range is comparable to that of G. The mod grammars construct more verbose, less informative formulations as evidenced by their lower CPW mean. Perplexity-wise, the arg grammars and mod grammars are very similar. The mod grammars have slightly higher mean perplexities, which -as the CPW plot evidences -does not necessarily indicate a lower ID variability. Rather, we believe this to be a simple reflection of lower local coherence which also diminishes the mod grammars' human ratings. G's extreme perplexity range can be explained by a tendency to overrepresent unlikely formulations. Given the human ratings of the grammars, we interpret the discrepancy between the arg grammars and G to point to a slightly narrower range of correct formulations in A1 and A2. Conclusion & Future Work We have presented a simple, effective approach to grammar-based generation using Categorial Grammar as underlying formalism. The argument grammars in particular are able to reproduce the hand-written grammar's range of output variability well while achieving drastically better coverage. Further work should concentrate on search efficiency, improving the quality of output, and further broadening the coverage of the induced grammars. The first point might be addressed by applying search heuristics which e.g. include the compatibility of elements with each other. We expect coverage, correctness, and variability to greatly benefit from constructing both argument and modifier types within the same grammar.
2015-09-18T23:22:04.000Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "906797099724185ca3104a7553ad71f0b3bf05c1", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/W15-4718.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "906797099724185ca3104a7553ad71f0b3bf05c1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
90555328
pes2o/s2orc
v3-fos-license
The antioxidant activitives of mango peel among different cultivars In this paper, the contents of total phenol and total flavonoid of 8 mango cultivars were determined. Their antioxidant abilities were also evaluated by 1,1-diphenyl-2-pireyhydrazyl (DPPH) radical scavenging, trolox equivalent antioxidant capacity (TEAC) and ferric reducing antioxidant power (FRAP). Correlations between total phenol, total flavonoid and FRAP as well as TEAC were also analyzed. Results showed that mango peels were rich in natural antioxidant compounds the antioxidant abilities were different among different cultivars. The correlations between total phenol, total flavonoid and FRAP indicated phenolics represent a major part of antioxidant capacity in mango peels. This was also useful in the utilization of mango processing waste. Introduction Mango (Mangifera indica L.) is a widely planted and consumed tropical fruit throughout the world. Besides the fresh fruit, mango can also be processed into juices, nectars, concentrates, jams, jelly powders and so on. It has been reported that fruits and vegetables contain many antioxidant compounds, such as phenolic compounds, carotenoids, anthocyanins and so on (Naczk and Shahidi, 2006). Among the different parts, fruit peels are rich in polyphenolic compounds, flavonoids, ascorbic acid, and this makes them valuable in making antioxidants. There has been increasing focus of attention of many researchers searching for potent antioxidants from mango. Different parts of mango, including stem bark, leaves, pulp and peel have been investigated. Results showed that those parts possessed various biomedical activities, including antioxidative, free radical scavenging, anti-inflammatory, and anticancer ( Peel is a major by-product of mango processing, and they are discarded as waste. It has been reported that peel was a good source of phytochemicals, such as polyphenols, carotenoids, and it exhibited good antioxidant properties (Ajila et al 2007;Kim et al, 2010). It is commonly considered that the concentration and composition of phenolics is affected by genetic, agronomic and environmental factors (Tomá s-Barberá n and Espí n, 2001). However, the antioxidant activities regarding differences among different mango cultivars have been rarely reported. In this paper, the contents of total phenol and total flavonoid of 8 mango cultivars were determined. Their antioxidant abilities were also evaluated by 1,1-diphenyl-2-pireyhydrazyl (DPPH) radical scavenging, trolox equivalent antioxidant capacity (TEAC) and ferric reducing antioxidant power (FRAP). Correlations between total phenol, total flavonoid and FRAP as well as TEAC were also analyzed. The results showed that mango peels were rich in natural antioxidant compounds the antioxidant abilities were different among different cultivars, and this was useful in the utilization of mango processing waste. Sample extraction 1 g mango peel of different cultivars was weighed and refluxed with 30 ml of 70% methanol at 60 °C for 2 h under magnetic stirring. The filtrate was separated by centrifugation, and the extraction was repeated for 3 times. All the filtrate was collected and concentrated under reduced pressure at 40 o C with a final volume of 30 ml and the solution used in the following evaluation and detection. Determination of total phenolic content and total flavonoid in the extracts The total phenol content (TPC) was determined using the FC assay described before with some modifications (Du et al., 2014). 
Typically, 0.025 ml of the extract of each cultivar was introduced into test tubes, followed by the addition of 2.0 ml of FC reagent (diluted 10 times with water in advance) and 5.975 ml of water. The solutions were allowed to stand 5 min at room temperature before the addition of 2 ml of sodium carbonate solution (7.5% w/v). After reacting in the dark for 30 min at room temperature, the absorbance of the solutions was measured at 760 nm on a UV-vis spectrophotometer (Shimadzu UV-2700, Japan). The calibration curve was prepared using a standard solution of gallic acid. The results were expressed as milligram gallic acid equivalents (GAE)/g (fresh weight, FW). Total flavonoid content was determined based on the method described by Kim et al (Kim et al., 2003). One milliliter of extract solution of each cultivar was mixed with 0.3 ml of 5% NaNO 2 and 4 ml of distilled water. Then 0.3 ml of Al(NO 3 ) 3 was added to the mixture followed by adding 2 ml of 1 M NaOH. The solution was immediately diluted to 10 ml using distilled water. The absorbance of the solution was measured at 506 nm and the total flavonoid content was calculated by using a calibration curve of rutin standard and expressed as mg rutin equivalent (QR Equiv)/g FW. DPPH radical scavenging ability The free radical scavenging activity of the extracts was determined by measuring the decrease in absorbance of DPPH solution at 517 nm in the presence of the extracts, by the method proposed by Liyana-Pathirana et al (2010) with minor changes. A 0.5 mM solution was prepared by dissolving DPPH in methanol. For the evaluation of free radical scavenging activity, 3 ml of DPPH was added into 0.5 ml of the extracts with different concentrations. The mixture was then allowed to stand at room temperature for 30 min in the dark before the absorbance at 517 nm was read. The control was prepared as above without extract. The antioxidant activity could be expressed by the equation: DPPH scavenging activity = (A 0 - A s )/A 0 ×100%, where A 0 and A s were the absorbance at 517 nm of the control and sample solution, respectively. Trolox equivalent antioxidant capacity (TEAC) and ferric reducing antioxidant power (FRAP) The FRAP assay was determined according to the method reported (Benzie et al., 1996). For the determination of FRAP, the extracts (10 μL) were mixed with 1 ml distilled water and 1.8 ml of the FRAP solution. Then the mixture was reacted at 37 °C for 10 min. The absorbance of the reaction solution was recorded at 593 nm. Trolox standard solution was used to perform the calibration curves and the results were expressed as μM trolox/g (fresh weight, FW). TEAC was calculated according to the ABTS scavenging ability of mango peel extract. This was performed by the procedure described by Re et al (1999). For the scavenging of ABTS, 50 μl of different extracts were added to 4 mL of the solution above. Methanol was used as control. After reaction for 10 min, the absorbance was measured at 734 nm. The free radical scavenging capability was calculated by the equation: ABTS scavenging activity = (Ac-As)/Ac×100%, where Ac and As were the absorbance at 734 nm of the control and sample solution, respectively. Trolox standard solution was used to perform the calibration curves and the results were expressed as μM trolox/g (FW).
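Each of these assays reduces to the same two computational steps: fit a linear calibration curve against a standard (gallic acid, rutin or Trolox) and convert sample absorbances either through that curve or through the control-relative scavenging ratio. The sketch below illustrates those steps with invented absorbance values; it is not the authors' data processing script.

# Minimal assay arithmetic: linear calibration fit + control-relative scavenging (%).
# All numbers below are invented placeholders for illustration.
import numpy as np

def calibration(concentrations, absorbances):
    slope, intercept = np.polyfit(concentrations, absorbances, 1)
    return lambda a: (a - intercept) / slope      # absorbance -> concentration

def scavenging_percent(a_control, a_sample):
    return (a_control - a_sample) / a_control * 100.0

# gallic acid standards (mg/mL) vs. absorbance at 760 nm
gallic = calibration(np.array([0.02, 0.04, 0.06, 0.08, 0.10]),
                     np.array([0.21, 0.40, 0.62, 0.80, 1.01]))
print(round(float(gallic(0.55)), 3))              # sample concentration read off the curve
print(round(scavenging_percent(0.86, 0.31), 1))   # DPPH scavenging, about 64.0 %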
Contents of total phenol and total flavonoid The contents of total phenol and total flavonoid in each cultivar are summarized in Table 1. It can be concluded from the table that the total phenol and total flavonoid ranged from 1.68-13.28 and 0.33-6.08 mg/g, respectively. The cultivar 'Chenpixiang' possessed the highest while 'Jinhuang' possessed the lowest contents of total phenol and total flavonoid. This indicated that the bioactive compounds in mango peels varied greatly among different mango cultivars. Similar results have been reported in litchi pericarps (Wang et al, 2011). Table 1 Total phenol, total flavonoid, FRAP and TEAC of different mango peel extracts Antioxidant activities of mango peel extracts Due to its operating simplicity, DPPH is one of the most popular methods employed for the evaluation of antioxidant ability, especially in plant extracts. DPPH is a kind of stable organic radical. In the radical form, the molecule of DPPH has an absorbance at 517 nm, which will disappear after the acceptance of an electron or hydrogen radical from an antioxidant in the solution to become a stable diamagnetic molecule (Matthäus, 2002). The DPPH scavenging ability of the peel extracts of different mango cultivars is given in Figure 1. Different from the results of total phenol and total flavonoid, there was no obvious difference among those values. This may be because the bioactive compounds which could scavenge DPPH might be almost the same in different mango cultivars. The values of FRAP and TEAC are also given in Table 1. It can be seen that FRAP of mango peel extracts of different cultivars ranged from 16 to 122 μM/g, and the order was the same as that of total phenol and total flavonoid. This was because FRAP represented the total antioxidant activity of plants, and it was only connected to the total bioactive compounds. The order of TEAC was not totally in accordance with that of FRAP, and this may be ascribed to the same reason as for the DPPH scavenging ability. Correlations Correlations were made in order to determine the contribution of total phenol and total flavonoid to the antioxidant activities of mango peel. The correlations between total phenol and FRAP as well as TEAC are shown in Figure 2A. It can be seen that the correlation between total phenol and FRAP was almost linear (r 2 =0.92). The trend was the same for the correlation between total flavonoid and FRAP (r 2 =0.94). This indicated that phenols and flavonoids represent a major part of the antioxidant capacity in mango peel. However, the correlation coefficients between total phenol, total flavonoid and TEAC were lower than those for FRAP (r 2 =0.69, 0.84, respectively), and this may be due to the fact that TEAC was calculated from the ABTS scavenging ability. Different from FRAP, the ABTS scavenging ability could not represent the total antioxidant capacity of mango peels.
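The reported r 2 values are ordinary coefficients of determination from least-squares fits of the antioxidant measure against the phenolic or flavonoid content; the snippet below shows that computation on invented numbers (eight fictitious cultivars), purely to make the procedure explicit.

# Coefficient of determination between total phenol content and FRAP
# for a set of cultivars (values below are invented for illustration).
import numpy as np

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

total_phenol = np.array([13.3, 10.1, 8.7, 7.2, 5.9, 4.4, 2.9, 1.7])   # mg GAE/g
frap = np.array([122.0, 101.0, 88.0, 70.0, 55.0, 41.0, 27.0, 16.0])   # uM Trolox/g
print(round(r_squared(total_phenol, frap), 3))   # close to 1 for a near-linear relation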
Conclusions The contents of total phenol and total flavonoid of mango peels of 8 different cultivars, namely 'Chenpixiang', 'Boluoxiang', 'Qiumango', 'Zaoshumango', 'Hong kaite', 'Hongmango 6', 'Sijimango 1' and 'Jinhuang', were determined and compared in this research. Their antioxidant abilities were also evaluated by DPPH radical scavenging and FRAP. Results showed that the orders of total phenol and total flavonoid were both 'Chenpixiang' > 'Boluoxiang' > 'Qiumango' > 'Zaoshumango' > 'Hong kaite' > 'Hongmango 6' > 'Sijimango 1' > 'Jinhuang'. The order of the FRAP values was the same as that of total phenol and total flavonoid, and the highest and lowest values were 122 and 16 μM/g, respectively. The correlations between total phenol, total flavonoid and FRAP were almost linear, for the same reason given above. There was no obvious difference among the DPPH radical scavenging abilities of the different cultivars, and the order of TEAC was not totally in accordance with that of FRAP. The reason for this may be that DPPH and TEAC only represented the radical scavenging abilities; besides, their correlations with the phenolic contents were not linear. All this indicated that the antioxidant activities mainly came from total phenols and total flavonoids. The research showed that mango peels were rich in natural antioxidant compounds, that the antioxidant abilities differed among cultivars, and that phenols represent a major part of the antioxidant capacity in mango peels. The results are also helpful in the utilization of mango processing waste.
2019-04-02T13:06:44.982Z
2017-04-01T00:00:00.000
{ "year": 2017, "sha1": "bb45ee43ef406106c7236e047475b09bb5446358", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/61/1/012065", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a49ef9cd2ac935482660263b7a4f531d5c35a2cc", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
19671479
pes2o/s2orc
v3-fos-license
Enhancement of dielectric permittivity by incorporating PDMS-PEG multiblock copolymers in silicone elastomers A silicone elastomer from PDMS-PEG multiblock copolymer has been prepared by use of silylation reactions for both copolymer preparation and crosslinking. The dielectric and mechanical properties of the silicone elastomers were carefully investigated, and the morphology of the elastomers was examined by SEM. The developed silicone elastomers were too conductive to be utilised as dielectric elastomers, but it was shown that when the above silicone elastomers were mixed with a commercial silicone elastomer, the resulting elastomer had very favourable properties for dielectric elastomers due to a significantly increased dielectric permittivity. The conductivity also remained low due to the resulting discontinuity in PEG within the silicone matrix. Introduction Dielectric elastomers (DEs) have been studied extensively with respect to finding both new and better elastomer candidates and novel applications. [1][2][3][4] DEs are elastomers which exhibit a change in size or shape when stimulated by an external electric field. They are also known as "compliant capacitors," with actuation occurring when electrostatic stress exceeds elastic stress. 5 Such properties have enabled DEs to play a significant role in applications as actuators, sensors and generators. Dielectric elastomers with high relative permittivity possess high electrical energy in the form of charge separation, due to polarisation. In an unactuated state, the elastomer can withstand a given electrical field, the so-called electrical "break-down strength," 6 but above this electrical field the DE will short-circuit. Another common failure associated with DEs is electromechanical instability (EMI), which arises during actuation when attractive forces between the two electrodes become dominant and locally exceed a certain threshold value that cannot be balanced by the material's resistance to compression. 7,8 This phenomenon, which is also known as "electromechanical breakdown," can usually be eliminated by prestretching the elastomer, since prestretching has a combined effect of hardening the silicone elastomer, decreasing film thickness and increasing electrical breakdown strength. 9,10 Polydimethylsiloxanes (PDMS), as one promising type of dielectric elastomer, exhibit large ultimate extension. [11][12][13][14] Despite this large deformation, the drawback of PDMS is its low permittivity, related to a net dipole moment (µ) of only 0.6-0.9 D. 15 On the positive side, PDMS is known to have very low conductivity. 16 In contrast, polyethyleneglycols (PEG) show high permittivity as a result of a dipole moment of 3.91 D, 17 yet they are incapable of actuating, as they are highly conductive. 18 Combining PDMS and PEG as a block copolymer presents the possibility of substantially improving properties such as high permittivity and non-conductivity, whereby PEG enhances permittivity and PDMS facilitates actuation through its non-conductive nature and inherent softness.
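As background to the actuation and electromechanical-instability discussion above, the electrostatic (Maxwell) stress exerted by the compliant electrodes and the resulting small-signal thickness strain are commonly estimated as

p = \varepsilon_0 \varepsilon_r E^{2} = \varepsilon_0 \varepsilon_r \left(\frac{V}{d}\right)^{2}, \qquad s_z \approx -\frac{p}{Y} = -\frac{\varepsilon_0 \varepsilon_r V^{2}}{Y d^{2}},

which is the standard textbook relation rather than anything specific to the elastomers prepared here. Raising the relative permittivity ε r at constant Young's modulus Y therefore increases the achievable strain in direct proportion, which is the motivation for blending a high-permittivity PEG phase into the PDMS network.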
The synthesis of the PDMS-PEG multiblock copolymer utilised herein is based on hydrosilylation, as shown in Fig. 1. An astonishing feature of block copolymers is the variety of morphologies due to self-assembly in bulk or in solution. 19,20 In principle, a diblock copolymer, which is the simplest block copolymer, assembles into different morphologies, such as sphere (S), cylinder (C), gyroid (G) and lamellar (L). 19,21 These morphologies can be achieved when two immiscible, covalently-bonded polymers microphase separate. 22 These morphologies can be changed by varying the volume fraction of one constituent in the diblock copolymer. For triblock copolymers, the morphologies are more complex, mainly due to the sequence order of three distinct polymers, e.g. ABC, ACB, BAC and BCA, which introduces further degrees of freedom and thus allows for the assembly of nearly 30 different morphologies. 19 The similarity shared by the block copolymers is that they have four common equilibrium morphologies (S, C, G and L). 21 Here, elastomers are prepared by means of phase separating PDMS-PEG multiblock copolymers, whereby the copolymers' blocks are expected to segregate to form well-defined structures, depending on the chain lengths of the two constituents. Subsequently the phase-separated copolymers are cross-linked via silylation into elastomers. Synthesis of the PDMS-PEG prepolymer The procedure used to synthesise PDMS-PEG multiblock copolymer was amended from that employed by Klasner et al. 23 and Jukarainen et al. 24 All glassware was thoroughly cleaned and dried at a temperature of 200°C. The characterisations of M n of DMS-H21, DMS-H11, DMS-H03, SIH6117.0 and PEG-DE were performed using 1 H-NMR to obtain precise M n values for the stoichiometry calculations. The theoretical numbers of PDMS and PEG repeating blocks in the multiblock copolymer were calculated from a target molecular weight of 30 kg/mol, whereby the numbers of blocks of PDMS and PEG were X and (X+1), respectively, with M n,PDMS and M n,PEG being the molecular weights of PDMS and PEG. The stoichiometric ratio for preparing the multiblock copolymers (r1) was calculated from the functionalities f PEG-DE and f H-PDMS of PEG-DE and H-PDMS, respectively. 25 Both polymers in this case were difunctional (f=2), and telechelic vinyl groups on the resulting copolymer were targeted. Dry toluene (prepared by molecular sieving) was added into the flask at 30 wt% of the total mass of H-PDMS and PEG-DE. The initial concentration of the platinum catalyst was 3,120 parts per million (ppm). From this solution, the amount of catalyst solution was determined, in order to obtain a final concentration of 30 ppm in the reaction mixture, by assuming the density of the mixture was 1 g/cm 3 . The reaction occurred at 60°C with mild stirring and in the presence of nitrogen gas to eliminate air inside the flask. The duration of the hydrosilylation reaction depended on the chain length of H-PDMS and ranged from 2 to 6 hours. The disappearance of the Si-H bond signal at 4.70 ppm was checked by 1 H-NMR, to ensure that all hydrides in the PDMS had been fully consumed during the reaction; refer to ESI for NMR spectra in Figs. S1.a, S1.b, S1.c and S1.d. The final solution was viscous and appeared light bronze in colour. Any remaining solvent (toluene) was removed with a rotary evaporator for a couple of hours. The product was purified by cold methanol precipitation, in order to remove excess PEG-DE, and washing was repeated at least five times. Methanol from the precipitation process was excluded by using a rotary evaporator for a few hours and then placing the mixture in a vacuum for a day.
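The explicit expressions behind X and r1 did not survive into this text. A plausible reconstruction, stated here only as an assumption consistent with the surrounding description (X PDMS blocks, X+1 PEG blocks, a target molar mass of 30 kg/mol, and a hydride-to-allyl imbalance that leaves vinyl end groups), is

M_{\mathrm{target}} = X\,M_{n,\mathrm{PDMS}} + (X+1)\,M_{n,\mathrm{PEG}} \;\Rightarrow\; X = \frac{M_{\mathrm{target}} - M_{n,\mathrm{PEG}}}{M_{n,\mathrm{PDMS}} + M_{n,\mathrm{PEG}}}, \qquad r_1 = \frac{f_{\mathrm{H\text{-}PDMS}}\,[\mathrm{H\text{-}PDMS}]}{f_{\mathrm{PEG\text{-}DE}}\,[\mathrm{PEG\text{-}DE}]} < 1,

with the small molar-mass contribution of the coupling units neglected.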
Experimental setup for the PDMS-PEG block copolymer To distinguish the PDMS-PEG multiblock copolymer samples from different PDMS volume fractions, they were named based on four different repeating unit numbers in the constituent polymer, as listed in Table 1. Asymmetrical morphologies in the PDMS-PEG multiblock copolymer were obtained by varying PDMS chain lengths (m=3,7,14,81) while sustaining the equivalent PEG chain length (n=4), which in turn produced PDMS3-PEG, PDMS7-PEG, PDMS14-PEG and PDMS81-PEG, respectively. Hence PDMS81-PEG constituted the highest volume fraction of PDMS in the block copolymer (0.94), whereas the lowest volume fraction produced in this study was 0.45 (belonging to PDMS3-PEG). Cross-linking Four samples of PDMS-PEG multiblock copolymers and 16 samples of BPB were prepared. The stoichiometric ratio for the cross-linking (r2) was calculated from f HMS and f BCP, the functionalities of the HMS-501 cross-linker (9-functional) and the PDMS-PEG block copolymer (2-functional), respectively, with [...] indicating the initial concentration. 26,27 The values of r2 were calculated based on the mass of PDMS-PEG prepolymers added into the blends. The inhibitor (SIT7900) and the platinum catalyst were added to the blends at 1 wt% and 30 ppm, respectively. Those blends which consisted of PDMS-PEG prepolymer, namely MJK4/13, SIT7900 and 30 ppm platinum catalyst, were speed-mixed vigorously at 3,000 rpm for 2 minutes. Cross-linker (HMS-501) was added, and the resulting mixture was additionally speed-mixed at 1,500 rpm for 2 minutes. The cross-linked films were cured at a temperature of 60°C overnight and then subsequently postcured at 110°C for 2 hours. Characterisations The NMR equipment utilised in this instance was the Bruker 300 MHz NMR. The number of scannings per sample was 128. The sample was prepared by diluting 50 mg of the sample in 0.5 mL of deuterated chloroform (CDCl 3 ). Static contact angles, created by using the sessile drop, needle-in method, were taken at a room temperature of 23°C using Dataphysics OCA20. The contact angle was measured by dropping 6 µL of deionised water onto the PDMS-PEG multiblock copolymer and BPB films. Measurements for each contact angle were taken for 65 seconds, and the contact angles were analysed every 5 seconds, in order to obtain contact angle versus time profiles. Linear viscoelasticity (LVE) properties, i.e. storage and loss moduli, were characterised at room temperature using TA Instruments' ARES-G2. The geometry of the parallel plate was 25 mm. The axial force, strain and frequency ranges were 5 N, 2% and 100-0.01 Hz, respectively. The Young's modulus can be determined as Y = 2(1 + ν)G ≈ 3G, since Poisson's ratio (ν) is close to 0.5, due to the incompressibility of silicones. Dielectric permittivity, loss permittivity and conductivity were measured over a frequency range of 10 6 to 10 -1 Hz using a broadband dielectric spectrometer from Novocontrol Technologies GmbH & Co. KG, Germany. The electrode diameter was 20 mm. The breakdown tests were carried out on an in-house-built device based on international standards (IEC 60243-1 (1998) and IEC 60243-2 (2001)). 28 Samples with a film thickness less than 100 µm were used, as breakdown strength depends greatly on sample thickness.
10 The film was slid between the two spherical electrodes (radius of 20 mm), and breakdown was measured at the point of contact, with a stepwise increasing voltage applied (50 to 100V/step) at a rate of 0.5-1 steps/s. 29 Each sample was measured up to 12 times, and the average of these values was then taken as the breakdown strength. The SEM model, FEI Inspect S, used to characterise nanoscale images, performed energy-dispersive X-ray and wavelength dispersive measurements. The accelerating voltage and resolution were 200 V -30 kV and 50 nm at 30 kV, respectively, while the imaging modes used high and low vacuums. The number average molecular weight (M n ) determinations for PDMS-PEG multiblock copolymers were performed on an SEC instrument consisting of a Viscotek GPCmax VE-2001 instrument equipped with a Viscotek TriSEC Model 302 triple detector using two PLgel mixed-D columns from Polymer Laboratories. Samples were run in tetrahydrofuran (THF) at 30°C and at a rate of 1 mL/min. Molar mass characteristics were calculated using polydimethylsiloxane standards. PDMS-PEG multiblock copolymer The PDMS-PEG block copolymer samples with different PDMS chain-lengths were characterised by means of sizeexclusion chromatography (SEC), while the cross-linked samples were analysed by means of dielectric spectroscopy and rheology. Results for the average number of molecular weights obtained from SEC, shown in Table 2, indicate that synthesised PDMS-PEG multiblock copolymers possess lower M n than targeted. The relative permittivity of the multiblock copolymers is shown in Fig. 2. Relative permittivity for the copolymer with the least PEG (PDMS81-PEG) is constant at all frequencies, with a slight increase at low frequencies. This behaviour is similar to that of the reference elastomer (MJK), but the PDMS81-PEG multiblock copolymer has three-fold higher relative permittivity. For samples with higher PEG content, significant relaxation takes place at low frequencies, leading to increased permittivity (as seen in Fig. 3), while dielectric loss also increases very abruptly when decreasing the frequency. This behaviour indicates conductive nature of the elastomers. In Fig. 4 the conductivity of the copolymers is shown. It is obvious that they are all conductive, due to the display of a plateau in conductivity at low frequencies. The block copolymers have conductivities of the order of 10 2 to 10 5 higher than those of the reference elastomer (MJK). The rheological properties of the cross-linked copolymers are shown in Fig. 5. The PDMS14-PEG and PDMS81-PEG samples show the behaviour of very soft networks with low storage moduli compared to silicone elastomers, and they also demonstrate significant relaxation at low frequencies, which further indicates the inherent softness. In contrast, the PDMS3-PEG and PDMS7-PEG samples possess PEG-like properties with high storage moduli and low losses. Furthermore, their shear modulus is higher than that of the reinforced commercial silicone elastomer. Therefore, it is clear that an increase of PEG constituents in a PDMS-PEG multiblock copolymer reinforces the network comparable with the effect of silica fillers. It is noteworthy that PDMS81-PEG and PDMS14-PEG closely resemble each other despite PDMS81-PEG being significantly shorter than PDMS14-PEG (see Table 2), and thus PDMS81-PEG should provide significantly higher cross-link density and thus higher G. 
However, this effect cannot be seen simply because the increased content of PEG in PDMS14-PEG has an identical cross-linking effect. Binary polymer block copolymer and silicone elastomer blends Due to the conductivity of PDMS-PEG multiblock copolymers, they were further blended and cross-linked into a commercial PDMS elastomer (MJK). Incorporating the block copolymers into a silicone network as a binary polymer blend (BPB) can facilitate the creation of PEG spheres, as illustrated in Fig. 6. The blends consist of PDMS-PEG multiblock copolymers at loadings of 5, 10, 15 and 20 wt% and are denoted as MJK/PDMSi, where i=81,14,7,3. When increasing PEG fractions, unfavourable and discontinuous morphologies may be formed. Dielectric properties of the binary polymer blends The relative dielectric permittivity and loss permittivity of the polymer blends are shown in Figs. 7 and 8. Relative permittivities are significantly improved compared to the reference elastomer (MJK), and loss permittivities are substantially lower than those of the pure copolymers -as hypothesised. Refer to ESI Fig. S2-4 for data for all samples. In general, the storage permittivity of MJK/PDMS7 increases as the wt% of the PDMS7-PEG multiblock copolymer increases in line with loadings from 5 to 20 wt%. Incorporating 20 wt% of PDMS7-PEG in a PDMS network yields the highest relative permittivity (5.2), which is an increase of 60% compared to the relative permittivity of MJK (3.5). The small increase in relative permittivity at low frequencies for MJK/PDMS7, with 5 and 10 wt%, is due to electrode polarisation effects occurring during the measurement process. However, this can be corrected by applying silicone grease between the sample and the electrode. 30 The dynamic dipole orientation of polymer molecules resulting from polarisation are observed for MJK/PDMS7 at 15 and 20 wt%, as Debyerelaxation peaks occur at frequencies of 10 0 -10 3 Hz. One essential finding from the dielectric characterisation is that none of the polymer blends is conductive. To further analyse the optimum polymer blend, selection based on the sample which gives the lowest dielectric loss factor is carried out. Polymer blends of MJK/PDMS3, MJK/PDMS14 and MJK/PDMS81 possess electrical loss factors in the ranges of 0.5-0.9, 0.25-0.75 and 0.06-1.25, respectively, in the investigated frequency regime. MJK/PDMS7 is the most promising blend, due to a low dielectric loss factor of 0.05-0.125 (Fig. 8). The behaviour of MJK/PDMS7 non-conductivity with different copolymer loadings is very promising, since no plateau regions are observed at low frequencies (Fig. 9). This implies that a blending method applied properly causes the successful formation of a discontinuous phase for PEG that creates non-conductive behaviour of the developed polymer in the PDMS elastomer and PDMS7-PEG blends at loadings of 5, 10, 15 and 20 wt%. The conductivity of MJK/PDMS7 is consistent with respect to the MJK elastomer, which is nonconductive, as shown in Fig. 9. The low dielectric loss factor and non-conductivity of MJK/PDMS7 for all investigated copolymer loadings indicates that the composites consist of PEG in discontinuous phases. Rheological properties of BPB To evaluate the effect of blending on mechanical properties, elastomers from MJK/PDMS7 with a 5-20 wt% copolymer were rheologically characterised, as shown in Fig. 10. The storage modulus of MJK/PDMS7 with 20 wt% is relatively close to the storage modulus of silicone elastomer (MJK). 
In contrast, MJK/PDMS7 with 5 and 10 wt% is softer than the PDMS elastomer, with storage moduli roughly one-fold and three-fold lower than that of MJK (7×10^5 Pa). The blend of MJK/PDMS7 with 15 wt% is the stiffest, with G' = 8×10^5 Pa. Another important feature observed in Fig. 10 is the appearance of small relaxation peaks in the loss moduli at 15 and 20 wt%, which is due to the transient nature of the semi-crystalline PEG phases acting as reinforcing domains. All elastomers are, however, well cross-linked and very elastic, and they are therefore suitable as soft dielectric elastomers.

Dielectric breakdown (E_BD) strength
Electrical breakdown and the influence of different PDMS7-PEG block copolymer loadings in MJK/PDMS7 on the Weibull parameters were investigated. The Weibull fits can be seen in Fig. 11. The Weibull β-parameter (the slope of the dashed line in Fig. 11) decreases with increasing MJK/PDMS7 loading, although it increases again at 20 wt%. The y-axis in Fig. 11 was determined from:

y = ln(−ln(1 − F))

where F is the cumulative probability of breakdown. Averaged and fitted electrical breakdown data for all the samples are presented in Table 3. MJK/PDMS7 with 5 wt% has the highest dielectric breakdown strength (103 V/µm), with a standard deviation of ± 4 V/µm when averaged over the 12 measurements. All samples have almost identical Weibull η parameters and corresponding breakdown strengths. Adding conductive particles usually destabilises an elastomer with respect to electrical breakdown, 31 but in the composites investigated herein the conductive PEG clearly stabilises the elastomers, as the β parameters of the composites are significantly larger, and thus the materials will be more electrically stable. This may be due to the charge-trapping effect of PEG. 10 The trapping effect probably decreases with increased loading, and thus there is an optimum composition at which the electrical stabilisation is highest. The softest sample (5 wt%) furthermore shows a very steep Weibull fit, and therefore the effect cannot be attributed to an increased Young's modulus, as shown in Vudayagiri et al. 28

Figure of merit (F_OM)
One method of evaluating the actuation performance of an elastomer is a figure of merit for dielectric elastomer actuators, F_OM(DEA), derived by Sommer-Larsen and Larsen 32:

F_OM(DEA) = 3 ε_r ε_0 E_BD^2 / Y

where E_BD is the electrical breakdown strength, ε_0 is the vacuum permittivity (8.85×10^-12 F/m), ε_r is the relative permittivity and Y is the Young's modulus. The F_OM(DEA) of the MJK/PDMS7 samples was determined relative to the absolute value of the F_OM(DEA) of Elastosil RT625 (1.86×10^-24 %), as reported by Vudayagiri et al. 28 The normalised F_OM(DEA) was calculated as:

F_OM(normalised) = F_OM(DEA, sample) / F_OM(DEA, Elastosil RT625)

The calculated figures of merit are shown in Table 4. The composite with 5 wt% has the highest normalised F_OM(DEA) value at 17, i.e. 17 times greater actuation than the reference elastomer. This composition is the best-performing elastomer amongst those investigated, due to its combination of high electrical breakdown strength, low Young's modulus and relatively high dielectric permittivity.

Contact angles of BPB
The wettability of the MJK/PDMS7 polymer blends was evaluated by static contact angle measurements. The PDMS-PEG multiblock copolymer is known to behave as an amphiphilic, dynamic polymer chain. Since MJK/PDMS7 consists of PDMS7-PEG block copolymer in a PDMS matrix, its wettability likewise leans toward amphiphilic behaviour. In Fig.
12, the contact angles of MJK/PDMS7 for different wt% (5, 10, 15 and 20) decline steeply for the first 20s and are followed by a slight decrease until they are almost stable at the end of the time period. This indicates that the block copolymer in the polymer blends orients its polymer chains in order to achieve the lowest possible surface energy, since the copolymer comprises blocks of both hydrophobic PDMS and hydrophilic PEG. When the developed elastomer is exposed to air, the surface is controlled by the hydrophobic PDMS from the block copolymer and the matrix, but upon contact with water the chains re-orient and the PDMS blocks migrate back into the bulk material and are replaced by the more hydrophilic PEG blocks at the surface. 23 This behaviour is confirmed by the contact angle measurement, where the rearrangement of the polymer chains accounts for the change in contact angle over time when a droplet of deionised water is dropped onto the top surface of the sample. Thus, classing the wettability of MJK/PDMS7 as amphiphilic is the result of incorporating the PDMS7-PEG multiblock copolymer in the network, since PEGs are well-known for their hydrophilic properties. SEM analysis In order to verify the hypothesised structure of the composites, the prepared films were investigated by SEM; the microscope pictures are shown in Fig. 13. For MJK/PDMS7 with 5 wt% copolymer loading, a rough surface is obtained. There are no visible PEG domains observed, and the composite appears homogeneous on the microscale. When the loading of the block copolymer increases from 10 to 20 wt%, the microspherical domains become visible and the number of microspheres increases in line with an increased concentration of PEG. The domains were analysed using Image Processing and Analysis software (ImageJ). The domain sizes of visible spherical domains for MJK/PDMS7 at 10, 15 and 20 wt% are 1.3 ± 0.2 µm, 1.3 ± 0.2 µm and 1.6 ± 0.2 µm, respectively. The observation of spherical domains is coherent with the samples from Liu et al, 33 who observed pores on composite samples of PDMS and PEG etched with ethanol. 33 The obtained morphologies indicate that the methodology of blending polymers creates the good dispersion of multiblock copolymers in a silicone network where the spherical domain size seems independent on concentration, as the chain length of the PEG was not a variable in this study. Since the composite with the lowest concentration of PEG possesses different morphology, and at the same time possesses the best overall properties for actuation and lifetime, it may be argued that the introduction of additional surfaces into the system is unfavourable, especially as these surfaces may increase permittivity but they also destabilise the elastomer. Conclusion A new composite elastomer, which has high relative and low permittivity, was successfully created from a binary system of polymer blends consisting of conducting PDMS7-PEG multiblock copolymer and non-conducting PDMS elastomer (MJK). The desired morphology (discontinuous phase of the block copolymer and continuous phase of PDMS) was successfully created in the blends, thereby indicating the development of non-conductive behaviour in the elastomer. Low copolymer loading is favourable, since it creates a homogeneous elastomer on the micro-scale which in turn facilitates a more electrically stable elastomer. 
Even though the PDMS7-PEG multiblock copolymer is conductive and has high loss permittivity, a good composite elastomer can be developed by incorporating the block copolymer into a silicone network at different wt% and by employing a proper mixing technique. The dielectric breakdown strengths of the cross-linked MJK/PDMS7 polymer blends were relatively high, with values on the order of 100 V/µm. Finally, by integrating all the characterised parameters, i.e. Young's modulus, breakdown strength and relative permittivity, figures of merit for dielectric elastomer actuation were determined for the various MJK/PDMS7 blends, and it was concluded that by incorporating low concentrations of PEG, actuation could be improved 17-fold, along with an extension of the lifetime of the dielectric elastomer.
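As a supplement to the breakdown and figure-of-merit analysis above, the following sketch shows how the Weibull plotting coordinates and the normalised F_OM(DEA) can be computed. The breakdown fields, permittivities and moduli below are hypothetical stand-ins rather than the measured data, and the median-rank estimator is only one common choice for the cumulative failure probability.

```python
import math

# Hypothetical breakdown fields (V/um) for one composite; placeholders, not measured data.
e_bd = sorted([92, 96, 99, 101, 103, 105, 107, 110])
n = len(e_bd)

# Weibull plotting positions: x = ln(E_BD), y = ln(-ln(1 - F)),
# with F estimated by the median-rank approximation.
points = [(math.log(e), math.log(-math.log(1 - (i - 0.3) / (n + 0.4))))
          for i, e in enumerate(e_bd, start=1)]

# Least-squares slope of the Weibull plot estimates the shape parameter (beta).
xm = sum(x for x, _ in points) / n
ym = sum(y for _, y in points) / n
beta = (sum((x - xm) * (y - ym) for x, y in points)
        / sum((x - xm) ** 2 for x, _ in points))

# Figure of merit for dielectric elastomer actuation,
# F_OM(DEA) = 3 * eps_r * eps_0 * E_BD^2 / Y  (E_BD converted to V/m, Y in Pa).
EPS_0 = 8.85e-12  # vacuum permittivity, F/m
def f_om(eps_r, e_bd_v_per_um, young_pa):
    return 3 * eps_r * EPS_0 * (e_bd_v_per_um * 1e6) ** 2 / young_pa

sample = f_om(eps_r=4.0, e_bd_v_per_um=103, young_pa=9e5)       # placeholder composite
reference = f_om(eps_r=2.8, e_bd_v_per_um=80, young_pa=1.1e6)   # hypothetical reference
print(f"Weibull beta ~ {beta:.1f}, normalised F_OM ~ {sample / reference:.1f}")
```

A steeper Weibull slope (larger beta) corresponds to a narrower spread of breakdown values, i.e. a more electrically stable material, which is the behaviour reported for the composites above.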
Trichotillomania in an Autism Spectrum Disorder: Single Case Intervention
This work addresses trichotillomania and dermatillomania, both impulse-control symptoms, in an 11-year-old girl attending the sixth grade of a private elementary school, with a diagnosis of level 1 autism spectrum disorder and comorbid Attention Deficit Hyperactivity Disorder (ADHD). The objective was to reduce the frequency of hair pulling and skin picking in the school context through an intervention based on cognitive-behavioral techniques. As part of the case methodology, an exhaustive evaluation of the patient was performed using classroom observation records taken before and during the intervention period, in order to systematize the whole process. The intervention techniques used were token economy and self-instruction. The results show a progressive improvement of the symptoms, reflected in a decrease in the frequency of the recorded behaviors. In spite of the limited time available for the baseline phase and for intervening on these kinds of behaviors, it was possible to get to know the girl well and establish a bond with her despite her condition, which is reflected in a better adaptation to her school context. This work seeks to encourage further research on this disorder: some information on etiological factors exists but is still insufficient, and information on these disorders and their possible comorbidities is useful for continuing to advance treatment in this area.
Introduction
Autism Spectrum Disorders (ASD) are neurodevelopmental disorders characterized by persistent deficits in communication and social interaction across multiple contexts, associated with restricted and repetitive patterns of behavior, interests or activities (American Psychiatric Association, 2014), as well as with impaired executive functioning (Pérez-Pichardo, Ruz-Sahrur, Barrera-Morales, & Moo-Estrella, 2018). It is estimated that one in every 160 children in the world population has this condition (OMS, 2013); in Mexico, a prevalence of 0.87% has been reported (Fombonne et al., 2016). To date there are no biological markers for ASD, so the diagnosis is based fundamentally on its clinical manifestations. The latest edition of the Diagnostic and Statistical Manual of Mental Disorders, DSM-5 (APA, 2014), organizes the diagnosis of ASD into two domains. On one side it groups social and communicative limitations as a set of difficulties, and on the other, restricted and repetitive patterns of behavior, interests and/or activities, establishing levels of severity. Difficulties in communication and social interaction, together with a pattern of singular behaviors and activities, constitute the main symptoms for the diagnosis of ASD. However, clinical practice reveals an ASD profile in which it is not the core symptoms but the associated symptoms that actually prompt professional consultation, or concern on the part of teachers and parents, owing to their interference with the children's daily life.
What's more, its common to find comorbidities of other psychiatric disorders such as Attention Deficit Disorder, Obsessive Compulsive Disorder, and medical affections such as respiratory and ear infections, food allergies, allergic rhinitis, atopic dermatitis, type I diabetes, asthma, gastrointestinal disorders, sleep disorders, migraines, seizures, and muscular dystrophy (Treating Autism, ESPA Research & Autism Treatment Plus, 2014) and those that have to do with impulses control such as tricotillomania and dermmatillomania. The therm "trichotillomania" (TTM) comes from the greek "trichos" (hair) "tylo" (pull) and manía (impulse), it was originally proposed in 1889 and later incorporated on the catalogue of pychiatric disorders, belonging to the Obsessive-Compulsive Disorders Group (APA, 2014) and it is characterized as we see in the chart 1, for the objective hair loss, due to the repeated incapacity of resisting the impulses to eliminate the hair, the eyebrows, the eyelashes or the body hair (OMS, 1992;Allevato, 2007). It is estimated that amid the 1% and the 4% of population suffers this disorder and commonly starts between the 5 and 13 years, being equally frequent in men and women, however, at the adult age is even ten times more common in women than in men (APA 2014;Torales y Di Martino-Ortiz, 2016). Accordingly to the American Psychiatric Association (2014) the individual with TTM, can offer or not a conscious resistance to this impulse and the realization itself can be premeditated and planned or not. B. Sensation of increasing tension immediately before hair pulling or when trying to resist the practice of this behavior. C. Well-being, gratification or release when hair pulling occurs. D. The disturbance is not better explained by the presence of another mental disorder and is not due to a general medical condition (eg, dermatologic disease). E. The disturbance causes clinically significant distress or impairment of the individual's social, occupational, or other important areas of activity. The circumstances that trigger stress increase the pulling the hair behavior, even though in relaxing and distracting states the behavior is also observed (Sarmiento, Guillén, & Sánchez, 2016). The dermatillomania was presented in a minor frequency than the trichotillomania and it was characterized as is shown in the chart 2 by episodes where the skin was scratched, and some wounds were made then the scabs were ripped and the cycle would start again according to the DSM-5 dermatillomania is part of the Obsessive-compulsive disorders and the related disorders. The Excoriation Disorder (ED), also called psychogenic excoriation, dermatillomania or Skin pinching Syndrome, makes its appearance in medical literature in 1875. It was the British dermatologist Erasmus Wilson, who coined the term "neurotical escoriation" in which it was described as behaviors in neurotic patients that self-infringed repeated an excessive wounds, extremely hard to control. Table 2. Criteria for Excoriation Disorder (ED) according to DSM-V A. Damaging the skin recurrently until provoking skin injuries. B. Repeated attempts to diminish or cease of scratching the skin. C. Scratching the skin causes discomfort clinically significatively or deterioration in social, work or other important areas of functioning. D. The skin damage cannot be attributed to physiological effects of a substance (e.g., Cocaine) or other medical condition (e.g., scabies ). E. 
The fact of scratching one's skin cannot be explained better by the symptoms of another mental disorder (eg. Delusions, tactile hallucinations in a psychotic disorder, attempts to improve a defect or perceived imperfections as in the TDC, stereotype as in the disorder of stereotypical movements or the disorder for stereotypical movements or damaging oneself in a non-suicidal self-lesions. With respect to the used inverventions, cognitive behavioral therapy has shown to be effective, as well as the training on in Jacobson's progressive muscle relaxation, behaviral awareness, behavioral awareness, the performance of incompatible behaviors, cognitive restructuring, self-instruction, self- Sarmiento, Guillén, & Sánchez, 2016). According to this model, before starting the treatment it is necessary to carry out a detailed assessment defining the behaviours in observable terms and stablish registry systems. Generally, the treatments are limited in time and have a determined number of sessions. The efficacy is related mainly with the used technics and not so much with the bond with the therapist, but when the cognitive approach is combined this becomes relevant, a bond is understood as a collaboration relation between patient and therapist. First a definition of the problem is stablished and with an agreed work contract, clear objectives are stablished and those will guide the process. The cognitive behavioral model centers its interest in the nature of the cognition and the usage of behavioral techniques that promote behavior changes. The objective is to acknowledge through the presentation on this case how can it be treated in school and how the symptomatology can be reduced throughout an intervention based in behavioral-cognitive techniques until the point of reducing the frequency of tearing the hair and the skin imperfections and continue to work other key aspects of the core disorders which is ASD. Participant It is one 11 years old girl who courses sixth grade in an upper middle class private school in one of the states in the southeast of Mexico, she is the younger sister of two. She enjoys play videogames, and her favorite characters are the cats, whom are also her favorite theme for conversation, talk about them and specially hers are very nice. Her main difficulties are related with the socialization and school performance, we will call her CG, she has been diagnosed as a high-functioning autism spectrum disorder whose symptomatology focuses on deficits in social interaction, as well as executive functions. The girl does not have monitoring staff within the school and resists most of the time the support of teachers and school psychologists, as well as to follow the curricular adjustments that have been proposed to improve her performance in school, however, sometimes she agrees through negotiations to participate in classroom activities, as long as, the reinforcer is attractive to her at that time. The girl presents difficulty in the process of sustained attention for long periods of time; however, she manages to have a good performance when taking the exams since she accomplishes to finish even before some classmates she even gets good grades. The group in which she works consists of 10 girls and 8 boys from 10 to 11 years old. The group teacher uses a traditional teaching method, they have good group control and are perceived as an authority figure in the group and even by CG. 
It is the group teacher who derives the girl to the psychology department, due to the fact that it's been a month since she has increased the frequency of the tearing the hair and skin parts which causing bleeding wounds that become scabs which she rips and starts again, those behaviors where happening sporadically and rarely occurred at her classroom, but now its happening at school and as far as it is known also at home, the teacher is the one who reports and ask for intervention from the psychology department. Objective The objective of this intervention is diminish the frequency of tearing the hair and ripping the scabs at least in a 50 % through an intervention based in cognitive-behavioral techniques. Instruments To gather the data the following instruments where used: The interview is a essential tool at getting information. Questions are formulated to the parents about how the problem behavior started, the frequency of the behavior, emotional state (before, during and after tearing her hair and skin imperfections), the objective of this is to identify the relations between the behavior, the antecedents and the consequences as a prerequisite of the treatment, another aspect to find out is, if there has been a previous strategy to reduce or stop the behavior, if the family was already doing something about it and if they were willing to give support to the intervention, among other subjects. Behavior Rating Records A frequency observation registry (Sattler & Hoge, 2008) was carried out for "tearing the hair or skin imperfections", for which a scale 0 to 4 was designed with which the behavior during the school work the was observed and it was graded in the following manner: 0 in absence of the undesired behavior; 1 if the behavior occurred once or three times and 2 if it happened from four to six times, 3 if she presented seven to nine times and 4 if she had ten or more time during the observation period. In this case the registries were taken in one hour time intervals so that there could be a sample of most of the time in which the girl remained at school. Table 3. Scale of Trichotillomania Frequency Week 1 Procedure The group teacher asked the intervention with CG because she was worried that she was ripping her hair and skin imperfections, that was the main reason why it was decided to enter the classroom to carry out a five-phase program: 1) Adaptation. In which the researcher observed the behavior, familiarize with the environment and the students would adapt to her presence until the children behavior wasn't altered, it took approximately two weeks. 2) Diagnose of base line. Where the behaviors to be studied were operationally defined, the most convenient register was chosen, the contingencies which surrounded the behaviors were analyzed including the parents' interview to complete this information and it was determined that the evaluation design to be used would be the changing of the criteria, this phase lasted four weeks. In addition, frequency of hair and skin imperfections pulling were counted. 3) Intervention. Starting from the contingences analyze, it was designed the intervention which began to apply from the fifth week for the observation registers and from the adaptation and line base phases, application of the intervention program continued for six weeks until the goal was reached. 
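As an aside to the observation records described above, the 0-4 frequency scale amounts to a simple mapping from the hourly count of hair-pulling or skin-picking events. The sketch below only restates that coding scheme; the hourly counts used are hypothetical.

```python
# Coding scheme for the hourly observation records (0-4 scale):
# 0 = no occurrences, 1 = 1-3, 2 = 4-6, 3 = 7-9, 4 = 10 or more.
def frequency_score(count: int) -> int:
    if count <= 0:
        return 0
    if count <= 3:
        return 1
    if count <= 6:
        return 2
    if count <= 9:
        return 3
    return 4

# Hypothetical hourly counts across a 4-hour observation window (7-11 am).
hourly_counts = [5, 2, 0, 8]
scores = [frequency_score(c) for c in hourly_counts]
print(scores)       # [2, 1, 0, 3]
print(sum(scores))  # one possible daily summary for following the weekly trend
```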
To clarify before the intervention started the informed consent of the parents and others was obtained and it was proposed as objective empathizing with CG, to stablish a strong therapeutic alliance considering her previous history of therapeutic failure and her denial of being assisted by some monitor. The first encounters where were dedicated to talk about topics of her interest and what she did not like, and she became increasingly willing to work and to pay attention to what was said to her. 4) Maintenance. Lasted two weeks looking for the achievements made during the intervention remained reducing the instigators and the reinforcement, in such a manner that the application of the program changed from a continuous to an intermittent program of variable ratio. 5) Fading. In this last stage, the educational center was visited for fewer periods of time and the goal was that the girl could get used to the absence of the researcher and the intervention program itself, this stage was carried out in two weeks. Regarding to the factors that were detected that kept the behaviors in the beginning the learning of inadequate associations (she learned those responses in nervous and boredom situations by tearing her hair) and with time the behavior became generalized, and converted into a habit, producing those behaviors not just because anxiety, but also to feel pleasure, at first she did it occasionally and the frecuency increased when they called her attention, she continued when she went to the bathroom or none was watching her according to the reports of her parents, at school she sat down and reclined her head in the table as if she was asleep, and in the mean time she continued ripping her hair or the scabs, it didn't matter if other people would express disgust when she started to bleed, the teacher used to ask her to quit the behavior or didn't noticed since she was paying attention to the rest of the class, it is worth mentioning that CG had plenty of leisure time and that she refused to do class work when requested and remained laying on the desk often covering her face and continued tearing her hair or skin. Methodology After the diagnose and considering the context in which the behavior was presented it was designed an intervention program adjusted to the necessities of CG. As an intervention technique it was used the token economy labeled "punctuation board" (see Figure 1), it was used in order to keep track of the count of the frequencies of unwanted behaviors (trichotillomania and dermatilomania). In this instrument a symbol is assigned for each unwanted behavior, in the vertical lines the presence or absence of trichotillomania was registered, and a triangle was used with the presence or absence of dermatilomania. A symbol on the left side means presence and one on the right side means absence. At the end the period of intervention of 4 hours (from 7 am to 11 am), the absence points where subtracted from the presence and if the result was minor to 0, it was agreed with the student that she was going to receive a positive reinforcer designed especially for her, as described below. Figure 1. 
Scoreboard for Desired and Unwanted Behaviors As reinforcement it was assigned a space for manual activity adapted to CG's interests, which consisted in developing of a small feline model made mainly of pompoms and wood to which she had access as long as she could accumulate the required points of the day which she could obtain once she didn't rip her hair or skin imperfection in the determined time that was increasing once the behavior stabilized, following the changing criteria evaluation design until the goal was reached. Awareness Sensibilization or awareness consisted in helping CG to focus on the circumstances where it was more likely to rip her hair or hurt her skin. The fact of showing the registry of her behaviors in the punctuations board is an important part of achieving this awareness so she can realize the number of times that she produces those unwanted behaviors. Self-instruction. To carry out this strategy of structuring the environment, a board was made with a self-instruction template (see Figure 2) to reduce the stress of class activities by making CG aware of how she could face them step by step and foresee situations that triggered her anxiety. Results To stablish the functional analysis of the behavior, some interviews with the parents were performed, the first was during the diagnose phase and later another one was performed during the intervention. In the first interview with the parents, reported that when the girl ripped her hair they asked her not to do it and they took her by her hands, when this ended she went to the bathroom and she stayed for a while, they realized that she was continuing with those behaviors. She was taking citalopram for anxiety and was changed with concerta. Ripping her hair was an activity performed during leisure time, however, she did stop some activities so she could continue the behavior. In the interview with her parents after the intervention was performed during the following stage and they reported that at home the unwanted behavior were no longer presented and that she was not going to the confine herself at the bathroom so she could continue with ripping her hair and scabs, during leisure time from time to time, she still ripped some scabs, but she doesn't have as many, since she is not scratching often as before. When the intervention started the following work hypothesis was formulated, one cognitive behavioral intervention can reduce the trichotillomania and dermatillomania. During the observation of the effects of the intervention and in the posterior moments, it can be observed a diminish in the tearing the hair frequency before and after the intervention. The study of the possible differences between the records in the frequencies obtained in each of the pre and post hair tearing events, indicates that during the pre record, a higher mean score was obtained (M = 2.54, SD = .672) compared to the post registry (M = .413, SD = .309), the difference being statistically significant (t = 9.66, p <.001) and the effect size very large (Cohen's d = 4.06). As we can see in the graphic of the Figure 3, Trichotillomania and dermatillomania behaviors decreased as the intervention and the criteria change was being applied, the behavior was restricting itself every day more. No significative differences were found between the first week to the fifth, but there were from the sixth week which corresponds to the second intervention week, when it was applied a more strict criteria to get the reinforcement. 
There as a big effect on the difference registries pre and the fifth week of the intervention (started since the previous stage) (Cohen d =0.78) and a great effect since the sixth registry week (Cohen d >4.30) (see Figure 3). Discussion The trichotillomania and the excoriation or dermatillomania are complex disorders that have received little attention in the field of research. The results of this study show the efficacy of one intervention based on cognitive and cognitive behavioral techniques for the treatment of those symptoms in a 11 years old girl whose core disorder is ASD since the improvement of the symptoms not only were presented during the intervention phase but also in the months after since the case was followed during the whole school year and there was no remission of the symptoms. During the first weeks of the intervention, it was important to work in alliance with the monitor and CG, since she had difficulties on working with new people, it should be noted that this aspect is essential for the success of any intervention program, as well as enough time was taken to understand the functional analysis of the behavior, stablishing clearly the triggering events of this and the following consequences that somehow were reinforcing trichotillomania and dermatillomania. It is worth mentioning that the administration of psychiatric medications was of great help in the management of this treatment, however, the school monitoring and the adjustments to certain tasks that caused tension complemented. It is of great importance parental support so that the behavior does not appear appear again since the family situation of CG is that the parents are not very consistent with the agreements that are made with them and do not follow the recommendations for a long time, which could lead to relapses, as a consequence of this, so it is recommended that the family attend to a therapeutic or psychological support process to know how to act in case of relapses and in general to understand how to handle certain situations with GC since she is currently an adolescent and has an ASD condition which they (the parents) often don't know how to handle. Verbal or visual anticipation of certain event in children with ASD reduces anxiety in a great manner also it helps them to accommodate to the new situation and showing them more willing to perform, which is what was tried to do with the self-instruction boards (Salvadó et al., 2012). It is necessary to remember than the behaviors and manifestations of the people with ASD are symptoms of a complex cognitive structure and that is the reason to know it and it can be achieve throughout a literature revision of the techniques such as the interview or the observation to stablish a functional analysis of the behavior and understand what is triggering, what stops it and what is reinforcing, besides knowing to understand the disorder from the inside in order to make an accurate intervention and adjusted to the personal needs. From the intervention's results, can be concluded that the used techniques have been used effectively to the treatment of the student which confirms what previous studies to the present one have been proved regarding the effectivity of this approach and the evidence on how can it be used in children with ASD with high levels of functioning. 
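For the pre/post comparison reported in the Results, the effect size can be recomputed from the published means and standard deviations. The sketch below uses the pooled-standard-deviation form of Cohen's d as one common convention; the original analysis may have used a slightly different variant, but this form closely reproduces the reported value of about 4.06.

```python
import math

# Means and standard deviations of the hair-pulling frequency records
# reported in the Results (pre vs. post intervention).
m_pre, sd_pre = 2.54, 0.672
m_post, sd_post = 0.413, 0.309

# Cohen's d with an equal-weight pooled standard deviation.
sd_pooled = math.sqrt((sd_pre**2 + sd_post**2) / 2)
d = (m_pre - m_post) / sd_pooled
print(f"Cohen's d = {d:.2f}")  # about 4.1, a very large effect
```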
A limitation of the present work is that treatment of children with this condition, Autism Spectrum Disorder, requires consistency, structure and patience, and effectiveness is sometimes only seen in the long term; the strategies and techniques that have proven effective should therefore also be applied by whoever is in charge of the group and of the work with these children at school, in this case the classroom teacher, who at times had trouble dividing her time between CG and the rest of the students. Secondly, the need for regular curricular adjustments to reduce the girl's anxiety is also highlighted, since one of her anxiety triggers is being assigned tasks that are not in accordance with her abilities; the girl then ignores the task, becomes demotivated and has longer idle periods, which are conducive to her starting to pull out her hair and pick at the scabs on her skin.
Ultrasound Computed Tomography Magnetic Resonance Imaging Axillary Metastasis in the Abdominal Dermatofibrosarcoma Protuberans with the Fibrosarcomatous Changes : A Case Report Dermatofibrosarcoma protuberans (DFSP) is a rare low-grade malignant sarcoma that most commonly affects the dermis and the underlying subcutaneous soft tissues on the trunk and the proximal extremities (1-4). The incidence of DFSP is 0.8-4.2 cases per million persons per year and accounts for 2-6% of all soft tissue sarcomas (1-4). DFSP predominantly arises in young to middle-age males and is locally aggressive; and has a tendency of high-rate local recurrences (1, 4). As such, the primary treatment for DFSP is the complete surgical excision with the negative surgical margin (2-4). DFSP shows a low risk of metastasis accounting between 0.5% to 5%, which is always seen after the repeated local recurrences and almost always arises in the regional lymph nodes and the lungs (1-4). However, DFSP with fibrosarcomatous changes (DFSP-FS), a rare variant of DFSP, is considered to have a more aggressive clinical behavior resulting in a higher metastatic potential (5-8), although such is still debated (9). Here, we present the radiologic findings of a rare case of axillary metastasis, focusing on the sonographic features of the metastasis mimicking accessory breast cancer in the abdominal DFSP-FS without local recurrences. To the best of our knowledge, this is the first report of the distant soft-tissue metastasis, other than in the lungs, in DFSP without local recurrences. INTRODUCTION Dermatofibrosarcoma protuberans (DFSP) is a rare low-grade malignant sarcoma that most commonly affects the dermis and the underlying subcutaneous soft tissues on the trunk and the proximal extremities (1)(2)(3)(4).The incidence of DFSP is 0.8-4.2cases per million persons per year and accounts for 2-6% of all soft tissue sarcomas (1)(2)(3)(4).DFSP predominantly arises in young to middle-age males and is locally aggressive; and has a tendency of high-rate local recurrences (1,4).As such, the primary treatment for DFSP is the complete surgical excision with the negative surgical margin (2)(3)(4).DFSP shows a low risk of metastasis accounting between 0.5% to 5%, which is always seen after the repeated local recurrences and almost always arises in the regional lymph nodes and the lungs (1)(2)(3)(4).However, DFSP with fibrosarcomatous changes (DFSP-FS), a rare variant of DFSP, is considered to have a more aggressive clinical behavior resulting in a higher metastatic potential (5)(6)(7)(8), although such is still debated (9). Here, we present the radiologic findings of a rare case of axillary metastasis, focusing on the sonographic features of the me-tastasis mimicking accessory breast cancer in the abdominal DFSP-FS without local recurrences.To the best of our knowledge, this is the first report of the distant soft-tissue metastasis, other than in the lungs, in DFSP without local recurrences. CASE REPORT A 37-year-old woman was referred to our radiology department for the axillary sonography, who had undergone the wide excision for DFSP of the anterior abdomen 15 months ago. 
There was no evidence of distant metastasis on the previous chest and abdominal CT scans. She underwent the routine postoperative follow-up study, including chest computed tomography (CT) (Sensation 16; Siemens Medical Solutions, Erlangen, Germany). A round, solid nodule 2 cm in size, with mild homogeneous enhancement, was incidentally observed in the left axilla on the chest CT scans (Fig. 1). Pulmonary metastasis was absent, and no evidence of local recurrence was found on the CT scans. On physical examination, there was no evidence of local recurrence of DFSP in the abdomen. There was focal bulging with a skin color change in the left axilla (Fig. 2). Bilateral breast and axillary ultrasound (US) (HDI 5000 or 3000; Philips-Advanced Technology Laboratories, Bothell, WA, USA) was performed. On US, an irregular, hypoechoic mass with a hyperechoic rim, sized 2.1 × 1.9 cm, was seen in the left axillary subcutaneous fat layer (Fig. 3). There were no accompanying accessory breast tissues, abnormal lymph nodes or abnormal breast masses on the axillary and whole-breast US. The most likely diagnosis was accessory breast cancer arising in the axilla. However, considering the clinical history of DFSP, a lymph node or soft-tissue metastasis from DFSP was included in the differential diagnosis. Subsequent mammography also revealed a round, hyperdense nodular density in the left axilla without other remarkable findings (Fig. 4). MRI (3.0 T, Signa Excite HDx; GE Healthcare, Milwaukee, WI, USA) demonstrated a solid tumor invading the skin and pectoralis major muscle at the left anterior axillary region. The mass showed isointense signal on the T1-weighted images, central high signal intensity with peripheral low signal intensity on the T2-weighted images, and peripheral enhancement on the post-contrast fat-suppressed T1-weighted images (Fig. 5). A US-guided 14-gauge gun biopsy was performed, and the histopathologic features showed sarcomatous changes in most of the core specimens. The patient was referred to the general surgery department.
DISCUSSION
Although DFSP-FS is considered to have a higher risk of metastasis, there are currently no published reports of distant metastasis other than to the lungs. Our case was exceptional and unique, as the distant metastasis of DFSP-FS occurred in the axillary soft tissue, and notably without local recurrence of the primary DFSP. When DFSP is diagnosed, an extensive staging workup is not routinely indicated, but assessment of the regional lymph nodes and of pulmonary metastases is needed preoperatively. Postoperative follow-up chest CT is indicated in patients with prolonged, locally advanced or recurrent DFSP, or in patients with DFSP-FS. The imaging features of DFSP have been reported only scarcely, in a few articles (10-13). Imaging studies are not routinely performed for DFSP, because the diagnosis is made on the clinical appearance and superficial biopsy. However, US is a simple imaging tool to apply to superficial lesions (12). According to a few case reports of DFSP of the breast (11), the masses were also oval, circumscribed, and hypoechoic or of mixed echogenicity on US. The primary DFSP in our case was concordant with the sonographic findings of these previous studies. However, our axillary lesion exhibited the sonographic features of an irregular, poorly defined hypoechoic mass with a hyperechoic rim involving only the subcutaneous fat tissue, which was initially differentiated from axillary accessory breast cancer.
For this axillary mass, the preoperative MR imaging demonstrated a solid tumor invading the skin and the pectoralis major muscle at the left anterior axillary region.It was a non-specific, axillary soft tissue tumor invading the skin and muscles on the MR imaging, which corresponded with the previous studies of MR imaging (13).Torreggiani et al. (13) reported that MR imaging allowed for the accurate preoperative assessment and aided in the diagnosis of atypical or difficult cases of DFSP. CT is not indicated except in the suspected underlying bone involvement.In this case, the chest CT was done for the evaluation of pulmonary metastases, and the axillary mass was incidentally discovered.The mass was non-specific, round, isodense and solid nodule with mild homogenous enhancement on the CT scans. Immunohistochemically, CD34 is one of the most useful stains to differentiate DFSP from other soft tissue tumors (1,2).The sensitivity of CD34 staining in DFSP ranges from 84 to 100%, whereas DFSP-FS is CD34 positive in about half of the cases (8). In this case, the histologic findings showed significant areas of the high-grade sarcomatous changes in both the primary and the metastatic lesions of DFSP.The CD34 was negative or weakly positive in both. DFSP should be treated with a wide surgical excision, and the Fig. 1 . Fig. 1.A postoperative routine chest CT axial image revealed a 2 cm sized round and solid nodule with mild homogenous enhancement in the left axilla (arrow).Pulmonary metastasis was absent and no locally recurred evidences were found on chest and abdominal CT scans. Fig. 3 . Fig. 3.A 2.1 × 1.9 cm sized irregular and hypoechoic mass with a hyperechoic rim (arrows) was seen at the left axillary subcutaneous fat layer on ultrasound. Fig. 2 . Fig. 2. Physical examination showed postoperative scar in the left abdominal skin and focal bulding with a blue skin color change in the left axilla (arrow). Fig. 4 . Fig. 4. Mammography showed a round and hyperdense nodular density in the left axilla (arrow) without other remarkable features. Fig. 5 . Fig. 5.A post-contrast fat-suppressed T1 weighted MR axial image demonstrated a peripherally enhancing solid tumor with internal necrosis invading the skin and the pectoralis major muscle at the left anterior axillary region (arrows).
Apoptotic response of malignant rhabdoid tumor cells Background Malignant rhabdoid tumors (MRTs) are extremely aggressive and resist current radio- and chemotherapic treatments. To gain insight into the dysfunctions of MRT cells, the apoptotic response of a model cell line, MON, was analyzed after exposure to several genotoxic and non-genotoxic agents employed separately or in association. Results Fluorescence microscopy of chromatin morphology and electrophoretic analysis of internucleosomal DNA fragmentation revealed that MON cells were, comparatively to HeLa cells, resistant to apoptosis after treatment with etoposide, cisplatin (CisPt) or X-rays, but underwent some degree of apoptosis after ultraviolet (UV) C irradiation. Concomitant treatment of MON cells with X-rays or vinblastine and the phosphatidylinositol 3-kinase (PI3-K) inhibitor wortmannin resulted in synergistic induction of apoptosis. Western blot analysis showed that the p53 protein was upregulated in MON cells after exposure to all the different agents tested, singly or in combination. In treated cells, the p53 downstream effectors p21WAF1/CIP1, Mdm2 and Bax were induced with some inconsistency with regard to the accumulation of p53. Poly ADP-ribose polymerase (PARP) cleavage, indicative of ongoing apoptosis, occurred in UVC-irradiated cells and, especially, in cells treated with combinations of X-rays or vinblastine with wortmannin. However, there was moderate or no PARP cleavage in cells treated with CisPt, X-rays, vinblastine or wortmannin singly or with the combinations X-rays plus CisPt or vinblastine and CisPt plus vinblastine or wortmannin. The synergistic effect on the induction of apoptosis exerted by some agent combinations corresponded with synergy in respect of MON cell growth inhibition. Conclusion These results suggest abnormalities in the p53 pathway and apoptosis control in MRT cells. The Ras/PI3-K/AKT signaling pathway might also be deregulated in these cells by generating an excess of survival factors. These dysfunctions might contribute to the resistance of MRTs to current antineoplastic treatments and could warrant consideration in the search of new therapeutic approaches. Background MRTs occur during early childhood in soft tissues, especially kidney and the central nervous system [1]. Prognosis is poor because of the high cellular proliferation rate, propensity to metastasis and marked resistance to current radio-and chemo-therapeutic interventions [2][3][4][5]. According to cytogenetic and molecular analyses, MRTs are generally caused by biallelic alterations of the hSNF5/ INI1 gene [6]. This gene encodes a member of the chromatin-remodeling SWI/SNF multiprotein complexes that activate or repress transcription of target genes [7,8]. SWI/ SNF activity is also required for Rb-dependent transcriptional repression and subsequent inhibition of proliferation [9]. Moreover, the hSNF5/INI1 protein directly co-operates with several important cellular factors: c-Myc [10], Gadd34 [11] and ALL-1 [12]. Overexpression of c-Myc, a nuclear phosphoprotein that regulates DNA replication and cell division, is a consistent characteristic of rhabdoid cells [13][14][15][16]. IGF-II, IGF-IR and IGF-IIR, which promote cell proliferation and DNA synthesis through an autocrine mechanism, are also constitutively expressed in several MRT cells lines [17]. High levels and unusual distribution of p53 protein have been observed, suggesting some abnormalities in p53 status, but there is little Mdm2 mRNA expression [16]. 
However, p53 protein and the downstream effectors, p21 WAF1/ CIP1 and Mdm2 were up-regulated by DNA-damaging drugs and the p53 pathway was considered to be functional [18]. There appear to be no rearrangements or amplifications in Myc, Ras, Erb B-2 and p53 genes in these cells [19]. Transfection experiments have shown that when hSNF5/ INI1 protein is re-introduced into cells derived from MRTs, it inhibits the entry into S-phase [20], prevents cell proliferation, causes flat cell formation, and directly represses cyclin D1 gene [21]. Moreover, hSNF5/INI1 overexpression induced apoptosis in two of the three cell lines tested [22]. The aim of the present study was to gain further insight into the dysfunctions of MRT cells. A model cell line, MON, was evaluated in terms of its responses to the genotoxic and non-genotoxic stresses induced by physical and chemical agents with different modes of action, employed singly or in association. The treatments provoked different kinds of DNA (and protein) damage (single and double strand breaks, oxidation, alkylation, crosslinks etc.), or interfered with cellular signaling functions. Results showed (a) that MON cells may have impaired control of apoptosis and (b) that apoptosis can be strongly activated by inhibition of the PI3-K pathway under particular stress conditions. Results Apoptosis in response to different genotoxic and non-genotoxic stresses was assessed by monitoring the appearance of typical nuclear morphological changes and internucleosomal DNA cleavage, and by investigating some steps in the apoptotic pathway at the molecular level. Apoptotic response In Fig. 1 the responses of MON and HeLa cells are compared. As appraised by morphological criteria such as chromatin condensation and fragmentation, the rhabdoid cells were largely refractory to the induction of apop-tosisafter exposure to etoposide (up to 40 µM, for 2 h), CisPt (up to 40 µM, for 2 h) and X-rays (up to 10 Gy). However, they showed some degree of apoptosis following UVC irradiation (20 J/m 2 ). All these agents damaged DNA, directly or indirectly. Etoposide complexes with topoisomerase II and DNA to enhance double-strand and single-strand cleavage; CisPt forms adducts with the DNA dinucleotide d(pGpG) inducing intrastrand and interstrand crosslinks; X-rays produce single-and doublestrand breaks and base modifications; UVC radiation mainly induces pyrimidine-pyrimidine dimers and 6-4 photoproducts. In contrast, HeLa cells showed significant or high levels of apoptosis in response to all treatments, even when lower drug concentrations or doses were used. It is worth noting that the doubling times of MON and HeLa cells were similar (around 24 h in the conditions used). Very few cells in either line showed a normal nuclear morphology associated with a staining by propidium iodide (1 µg/ml) after the various treatments, indicating that almost no necrosis had occurred. Electropherograms confirmed that MON cells did not enter into apoptosis after exposure to CisPt (10 µM, for 2 h), etoposide (10 µM, for 2 h) or X-rays (5 Gy) over a three-day period, but they did after UVC radiation (10-20 J/m 2 ). In contrast, HeLa cells underwent DNA fragmentation typical of apoptosis after all the different treatments. Representative gels are shown in Fig. 2A and 2B. These various results indicate that although MON cells can become apoptotic, they are much more resistant than HeLa cells to the induction of apoptosis by a variety of agents. 
Cancers often show perturbation of signaling pathways and over-expression of survival signals. Experiments were therefore performed to determine whether wortmannin, a PI3-K inhibitor, affected the apoptotic response of MON cells to drug and radiation treatments. Cells were irradiated with different doses of X-rays or treated with several concentrations of vinblastine (an anticancer drug that inhibits microtubule assembly by binding tubulin) or wortmannin. These agents were employed separately or together. Fig. 3A confirms that X-irradiation up to 8 Gy did not induce any significant apoptosis in MON cells and shows that wortmannin induced moderate apoptosis at the concentrations used (up to 4 µM). However, when the two agents were applied together, the resulting degree of apoptosis was much greater than the addition of effects produced by X-rays or wortmannin singly. A similar synergistic effect was produced when vinblastine and wortmannin were combined (Fig. 3B). Vinblastine alone at 8 nM induced a moderate apoptotic response in MON cells. These findings suggest that inhibition of PI3-K may increase the susceptibility of MON cells to potential apoptotic stresses. Induction of apoptosis assessed by fluorescence microscopy in HeLa and MON cells exposed for 2 h to CisPt and etoposide or irradiated with X-rays and UVC Figure 1 Induction of apoptosis assessed by fluorescence microscopy in HeLa and MON cells exposed for 2 h to CisPt and etoposide or irradiated with X-rays and UVC. Cells, cultured and treated in Petriperm dishes, were stained with Hoechst 33342 and visualized with an inverted epifluorescence microscope at different time intervals after the treatments. Cells with fragmented, marginated chromatin were defined as apoptotic. Quantitative analyses were performed by counting at least 1000 cells for each data point. Analysis of internucleosomal DNA fragmentation Figure 2 Analysis of internucleosomal DNA fragmentation. A: Agarose gel electrophoresis of low molecular weight DNA extracted from HeLa and MON cells exposed for 2 h to 10 µM CisPt (P) or 10 µM etoposide (E) or irradiated with 5 Gy of X-rays (X) and harvested on consecutive days thereafter. C refers to untreated controls. B: Agarose gel electrophoresis of low molecular weight DNA extracted from HeLa and MON cells exposed to 5, 10 or 20 J/m 2 UVC or not irradiated (0) and harvested on consecutive days thereafter. In A and B, low molecular weight DNA was extracted from 2.10 6 cells at each time and analyzed by electrophoresis through a 2% agarose gel. Sizes of DNA molecular weight standards are indicated. Gels shown are representative of three separate experiments. Western blot analysis Western blot analysis (Fig. 4) showed that p53, a key regulator of cell cycle check points and apoptosis after DNA damage, was abundant in MON rhabdoid cells compared to HeLa cells. Moreover, it accumulated more rapidly and to a greater extent in the former than in the latter cells after UVC irradiation and CisPt treatment. The accumulation of p53 induced in MON cells by treatment with 40 µM CisPt for 2 h was of the same order than that produced by 20 J/m 2 of UVC. However, the cleavage of PARP, one of the substrate of cysteines proteases activated during apoptosis, occurred in UVC-damaged MON cells but not fol-lowing CisPt treatment. In HeLa cells the cleavage of PARP was observed after both UVC and CisPt damage. In Fig. 
western blot analysis was done after challenging MON cells with individual agents (X-rays, CisPt, vinblastine, wortmannin) or with different combinations of them. On the basis of the results in Fig. 3, the analysis was performed after 24 h of continuous treatment. p53 was considerably upregulated by X-rays (4 Gy) and, as already shown, by CisPt (4 µM), but also by the non-genotoxic agent vinblastine (8 nM). Wortmannin induced p53 protein only moderately at the concentrations used (2-4 µM). Combination of X-rays with wortmannin, CisPt or vinblastine, or of vinblastine with CisPt, resulted in higher accumulation of p53 than that provoked by any agent deployed separately. CisPt plus wortmannin, and vinblastine plus wortmannin, induced p53 to similar or somewhat lower levels than those observed after exposure to CisPt or vinblastine alone. The downstream effectors p21 WAF1/CIP1 and Mdm2 were expressed in treated cells to an extent roughly correlating with the amount of p53, except after treatment with X-rays plus wortmannin or with vinblastine plus wortmannin, which resulted in an induction of p21 WAF1/CIP1 and of Mdm2 much lower than that of p53. The pro-apoptotic protein Bax, also normally under the control of p53, was moderately induced by wortmannin alone and by X-rays plus CisPt or vinblastine, but was not significantly changed by vinblastine or CisPt alone or by most of the other combinations. After X-rays alone, the amount of Bax was noticeably decreased. Its level was partially restored when X-rays were combined with wortmannin. The anti-apoptotic protein Bcl-2 showed a moderate decrease following all treatments. PARP cleavage, indicative of ongoing apoptosis, was absent or very low in cells treated with X-rays, CisPt, or wortmannin alone. Vinblastine employed alone (8 nM) provoked moderate PARP cleavage. The combinations X-rays plus CisPt, X-rays plus vinblastine, CisPt plus wortmannin, and CisPt plus vinblastine had little or no effect on PARP integrity. In contrast, PARP cleavage was dramatically increased in cells exposed to vinblastine plus wortmannin and X-rays plus wortmannin. These results confirm that MON cells (a) may be resistant to induction of apoptosis by genotoxic and stress signals, and (b) show some disconnection between the stabilization of p53 protein and the expression of some of its downstream effectors on the one hand, and the cleavage of PARP (i.e. apoptosis induction) on the other. This suggests that these cells have dysfunctions in the p53 pathway and in apoptosis control.

Figure 4. Western blot analysis of p53 accumulation and PARP cleavage in HeLa and MON cells treated with 40 µM CisPt for 2 h or irradiated with 20 J/m2 UVC and incubated for different time intervals before sample processing. These times (hours) are noted above the lanes. C refers to untreated controls. Lysates from 5 × 10^5 cells were subjected to SDS-PAGE (8%), blotted and incubated with a monoclonal antibody to p53 or PARP. The data are representative of two (PARP) to five (p53) independent experiments.

Figure 5. Western blot analysis of relevant proteins involved in stress response and apoptosis in MON cells irradiated with 4 Gy of X-rays (X) or exposed to 2 (W2) or 4 µM wortmannin (W4) or 8 nM vinblastine, singly or in combination with the other agents.
Western blot analysis, like the morphological evaluation, thus indicates that the PI3-K inhibitor wortmannin can sensitize MON cells to some genotoxic and non-genotoxic treatments.

Cell growth inhibition assay
Finally, in order to establish whether the apoptosis-related effects of the different treatments could be relevant to MON cell proliferation, a cell growth inhibition assay was performed. The effects of the agents, separately or in combination, were evaluated after 72 h of treatment. The dose-effect relationships showed that the IC50 values were about 7.6 nM, 1.6 µM and 6.3 µM for vinblastine, CisPt and wortmannin, respectively. Also, 2.5 Gy of X-rays inhibited the proliferation of MON cells by 50%. The correlation coefficients (r values) were 0.94 or greater, indicating conformity of the data to the median-effect principle and good reproducibility. For combinations of pairs of agents, the combination index (CI) equation was employed to determine synergistic (i.e. greater than additive) and antagonistic (i.e. less than additive) effects. Table 1 shows that the combinations X-rays plus wortmannin and vinblastine plus wortmannin inhibited MON cell growth synergistically over a fairly wide range. These same combinations had synergistic effects on the induction of apoptosis (see above). In contrast, the combinations X-rays plus CisPt or vinblastine, and CisPt plus wortmannin or vinblastine, which were unable to trigger apoptosis as judged by PARP cleavage, had marked antagonistic effects on the inhibition of growth of MON cells. This was despite their damaging effects, indicated by p53 accumulation (Fig. 5).

Discussion
MRTs combine aggressiveness and resistance to therapeutic treatments and have a very discouraging prognosis. In a search for factors responsible for the MRT phenotype, some aspects of the apoptotic pathway were investigated in MON rhabdoid cells. Normally, ATM and ATR, activated in response to DNA damage or stress, phosphorylate p53 and block Mdm2, which targets p53 for destruction. This results in increased levels and conformational changes of p53 and subsequent suppression of proliferation by cell cycle arrest or apoptosis [reviewed in [23]]. The choice between these outcomes depends on the extent of damage, as well as on environmental and intrinsic cellular factors. In the present study, comparison of MON cells to HeLa cells revealed a particularly low susceptibility of MON cells to apoptosis following treatments with several DNA-damaging agents. Nevertheless, p53 was abundant and was upregulated after treatment with all genotoxic and non-genotoxic agents examined, singly and in various combinations, whereas PARP cleavage did not occur systematically. The lack of correlation between p53 and cell death is illustrated by the fact that treatment of cells with X-rays plus wortmannin, which leads to a massive destruction of PARP, resulted in an induction of p53 lower than that produced by X-rays plus CisPt, after which no PARP cleavage was observed. These observations indicate that rhabdoid cells may have dysfunctions in the p53 pathway and in the control of apoptosis, at least after some types of damage. In fact, the transcription-stimulating activity of p53, eliciting the expression of responsive genes such as p21 WAF1/CIP1 and Mdm2, appeared to be perturbed in treated MON cells.
That was the case after treatment of cells with X-rays plus wortmannin and, especially, with vinblastine plus wortmannin, which resulted in a lower induction of p21 WAF1/CIP1 and Mdm2 than expected on the basis of p53 accumulation. These treatments were able to drive cells into apoptosis. Since p21 WAF1/CIP1 blocks cell cycle progression through its negative activity on various cyclin-dependent kinases, it could be hypothesized that the relatively reduced induction, or the diminution, of p21 WAF1/CIP1 antagonizes the establishment of a secure G1 arrest and thus facilitates apoptosis in these treated cells. Anomalies in the control of apoptosis in MON cells are probably also illustrated by the lack of positive correlation between an increase in the Bax/Bcl-2 ratio and cell death. Members of the Bcl-2 family normally interact to regulate programmed cell death. Pro-apoptotic members of this family include Bax, Bad, etc.; Bcl-2 and other proteins act as apoptotic inhibitors [reviewed in [24]]. In MON cells, the Bax/Bcl-2 balance did not appear to be a determining factor for the apoptotic outcome since, although this ratio was increased by PARP-cleavage-stimulating treatments (X-rays or vinblastine plus wortmannin), it was equally enhanced in cells exposed to wortmannin alone, X-rays plus CisPt, or CisPt plus vinblastine, after which no PARP cleavage occurred. The results also showed that the apoptotic outcome in MON cells can be triggered by treatments combining the PI3-K inhibitor wortmannin with another agent. Among the Ras effector signaling pathways, the PI3-K/AKT pathway facilitates G1 to S phase progression and plays a major role in protecting cells from apoptosis by inhibiting BAD and thus cytochrome C release, inactivating caspase-9 and -3, and targeting p53 for destruction [25]. In MON cells the apoptotic function of p53 is largely abrogated, possibly because the Ras signaling pathway, perhaps stimulated by autocrine growth factors, generates an antiapoptotic state that results in resistance to antineoplastic treatments. Wortmannin, by inhibiting PI3-K, may decrease the expression of survival factors such as AKT. This is not sufficient to provoke apoptosis (wortmannin alone has little effect at the concentrations tested). However, apoptosis is triggered in the presence of other damage or stress signals such as X-rays (unable by themselves to induce significant apoptosis in MON cells) or vinblastine (moderately effective in provoking apoptosis at the concentration used). Nevertheless, inhibition of the PI3-K/AKT signaling pathway does not systematically result in increased susceptibility to apoptosis after all types of damage, as shown by the treatment combining wortmannin and CisPt. The damage dependence of the wortmannin effect has no clear explanation. It suggests a complex crosstalk between survival factors, the nature of the damage and apoptotic signals in MRT cells. Interestingly, the synergistic effects on apoptosis of some agent associations, such as wortmannin plus X-rays or vinblastine, are reflected in the inhibition of MON cell growth. The considerable antagonistic effects of several other combinations of CisPt, X-rays, vinblastine and wortmannin are also worth noting. The establishment of synergistic and antagonistic interactions between agents may be relevant for future clinical choices of therapeutic strategies.
Conclusions
In conclusion, these results obtained on a model cell line suggest that perturbations of the p53 pathway and a reduced apoptotic response in rhabdoid tumor cells might contribute to the resistance of MRTs to antineoplastic treatments. The Ras/PI3-K/AKT signaling pathway seems to be involved in the dysfunctions induced in these cells by the mutation in the hSNF5/INI1 gene and probably results in increased cell survival factors. Some combinations that are synergistic or antagonistic towards the inhibition of cell growth and the induction of apoptosis have been identified, and it is hoped that this approach can provide some suggestions for the rational design of new therapeutic protocols.

Chemicals
cis-Platinum(II) diamine dichloride, etoposide, vinblastine and wortmannin were purchased from Sigma-Aldrich, St Quentin Fallavier, France. Stock solutions were made in the appropriate solvents and stored in aliquots at -20°C. Further dilutions were made in culture medium immediately before use.

Irradiations
X-rays were delivered by a Philips MG 325 (Philips Industrial X-Ray, Hamburg, Germany) operated at 260 kV and 13 mA with a 0.5 mm Cu and 1 mm Al filter. The dose rate was 1 Gy/min. UVC: a Philips germicidal tube with maximal emission at 254 nm was used. Cells were irradiated after removal of the culture medium from the plates. The dose rate was 0.5 J/m2/s. Doses given in this paper are incident doses.

Cell culture
The MRT cell line MON [genetically characterized in ref. [6]], generously supplied by O. Delattre (Institut Curie, Paris), and HeLa cells were cultured as monolayers in RPMI-1640 supplemented with 10% fetal calf serum and 20 µg/ml gentallin in a humidified 5% CO2 atmosphere at 37°C. The cultures tested negative for mycoplasma species.

Cell growth inhibition assay
The physical and chemical agents were evaluated for their effects separately and in pairs. After counting with a cell counter (Beckman Coulter Inc., Palo Alto, CA, USA), 2.5 × 10^3 rhabdoid cells were seeded in 100 µl of medium in 96-well microtitre plates and incubated for 24 h at 37°C prior to adding the drugs or irradiating. The final volume of medium, with or without drugs, was 200 µl per well and incubation was continued for three days, which ensured logarithmic growth of control cells throughout the experiment. The effects of the treatments were determined in terms of growth inhibition by measuring the cellular protein content according to the method of Skehan et al. [26]. Cells were fixed with trichloroacetic acid and then stained for 30 min with 0.4% sulforhodamine B in 1% acetic acid. Unbound dye was removed by acetic acid washes, and the protein-bound dye was extracted with Tris base (pH 10) before determining its absorbance at 540 nm in a Victor 2 96-well microplate reader (PerkinElmer Life Sciences, Boston, MA, USA).

Median-effect principle for dose-effect analysis
The multiple drug effect analysis of Chou and Talalay [27] was used to calculate combined drug effects. Dose-effect curves for each agent and for combinations, in multiply diluted concentrations or scaled irradiation doses, were plotted using the median-effect equation fa/fu = (D/Dm)^m, in which D is the dose, Dm is the dose required for 50% effect, fa is the fraction affected by dose D, fu is the unaffected fraction and m is a coefficient of the sigmoidicity of the dose-effect curve. The conformity of the data to the median-effect principle was determined by the linear correlation coefficient r.
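As an illustration of this dose-effect analysis, the sketch below fits the median-effect equation to a hypothetical single-agent dose-response series (the doses and effect fractions are invented for illustration, not taken from the study) and also defines the combination index given in the next paragraph; the study itself performed these calculations with the CalcuSyn software.

```python
import numpy as np

# Hypothetical single-agent dose-response data (doses in µM, fraction of
# growth inhibited relative to untreated controls); not the study's values.
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
fa = np.array([0.18, 0.33, 0.55, 0.74, 0.88])   # fraction affected
fu = 1.0 - fa                                    # unaffected fraction

# Median-effect equation: fa/fu = (D/Dm)^m, linear after a log transform:
# log(fa/fu) = m*log(D) - m*log(Dm).
x = np.log10(doses)
y = np.log10(fa / fu)
m, intercept = np.polyfit(x, y, 1)               # slope m, intercept = -m*log10(Dm)
Dm = 10 ** (-intercept / m)                      # dose producing 50% effect
r = np.corrcoef(x, y)[0, 1]                      # conformity to the median-effect principle

print(f"m = {m:.2f}, Dm = {Dm:.2f} µM, r = {r:.3f}")

def dose_for_effect(fa_target, Dm, m):
    """Dose of a single agent needed for a given effect level, from the fit."""
    return Dm * (fa_target / (1.0 - fa_target)) ** (1.0 / m)

def combination_index(d1, d2, Dx1, Dx2):
    """Combination index for two mutually non-exclusive agents (defined in the
    next paragraph): CI < 1 synergism, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / Dx1 + d2 / Dx2 + (d1 * d2) / (Dx1 * Dx2)
```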
The combination index (CI) equation for mutually non-exclusive drugs, CI = (D)1/(Dx)1 + (D)2/(Dx)2 + (D)1(D)2/[(Dx)1(Dx)2], in which (D)1 and (D)2 are the doses of drug 1 and drug 2 that in combination inhibit x%, and (Dx)1 and (Dx)2 are the doses of drug 1 and drug 2 alone, respectively, that inhibit x%, was employed for measuring synergism and antagonism. CI < 1, = 1, and > 1 indicates synergism, an additive effect and antagonism, respectively. Quantitations by computerized analysis [28] were done using the Calcusyn software (Biosoft, Cambridge, UK).

Determination of apoptosis
Apoptosis was evaluated by both DNA fluorescence microscopy of nuclear changes, as described elsewhere [29], and DNA fragmentation analysis. For the latter, low molecular weight DNA was extracted according to Hermann et al. [30] with minor modifications. At specific times after treatment, non-adherent and adherent cells were combined, rinsed with phosphate-buffered saline (PBS), counted and pelleted. Aliquots of 2 × 10^6 cells were lysed in 1% Nonidet P-40 (NP40) in 20 mM EDTA, 50 mM Tris-HCl (pH 7.5). After centrifugation for 5 min at 1,600 × g, the supernatant was collected and the extraction repeated on the pellet. The supernatants were combined, adjusted to 1% SDS and then treated for 2 h at 56°C with RNAse A (final concentration, 1 µg/µl). Proteinase K was added (final concentration, 1 µg/µl), and the mixture was incubated for 2 h at 37°C. Ammonium acetate was then added (final concentration, 1 M), and the DNA was precipitated by the addition of 2.5 volumes of ethanol and an overnight incubation at -20°C. After centrifugation (14,000 × g for 20 min at 4°C), the pellet was washed with 70% ethanol, dissolved in 30 µl of gel-loading buffer and separated by electrophoresis through a 2% agarose gel using 40 mM Tris-acetate, 2 mM EDTA as running buffer. DNA was visualized by ethidium bromide staining (1 µg/ml) and photographed under UV illumination.

Western blot analysis
Whole-cell protein extracts were made from HeLa and MON cells irradiated with UVC or X-rays or treated with the different compounds. At specific times after treatment, non-adherent and adherent cells were collected, combined, rinsed with PBS, counted and pelleted. Aliquots of 5 × 10^5 cells were lysed in RIPA buffer (50 mM Tris-HCl pH 7.5, 150 mM NaCl, 1% NP40, 0.5% sodium deoxycholate, 0.1% SDS) containing 1 mM phenylmethylsulfonyl fluoride, 1 µg/ml aprotinin, 1 µg/ml pepstatin, 1 µg/ml leupeptin and 50 mM sodium fluoride for 30 min at 4°C and then boiled for 5 min. Alternatively, for PARP analysis, cell pellets were resuspended in a defined volume of reducing loading buffer (62.5 mM Tris-HCl pH 6.8, 6 M urea, 10% glycerol, 2% SDS, 0.003% bromophenol blue, 5% 2-mercaptoethanol (freshly added)), sonicated on ice with a cup horn tip to break DNA, and incubated for 15 min at 65°C. Proteins were separated by SDS-polyacrylamide gel electrophoresis (PAGE) and transferred onto polyvinylidene fluoride membranes. Ponceau red staining was performed to check that closely comparable amounts of protein had been loaded and transferred in each lane. Immunoblots were incubated with the indicated primary antibodies for 2 h at room temperature. Antibodies against Mdm2 (SMP14, mouse monoclonal), Bcl-2 (100, mouse monoclonal), Bax (P-19, goat polyclonal and B-9, mouse monoclonal), p21 WAF1/CIP1 (C-19, goat polyclonal and F-5, mouse monoclonal) and p53 (DO-1, mouse monoclonal) were from Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA.
The C2-10 mouse anti-PARP monoclonal antibody was from PharMingen International (Becton Dickinson, Grenoble, France). After washing, the blots were incubated with the appropriate secondary antibodies conjugated to horseradish peroxidase for 1 h at room temperature, washed again and developed with an enhanced chemiluminescence kit (Amersham Pharmacia Biotech, Little Chalfont, UK) according to the manufacturer's instructions.
Particulate Matter, an Intrauterine Toxin Affecting Foetal Development and Beyond Air pollution is the 9th cause of the overall disease burden globally. The solid component in the polluted air, particulate matters (PMs) with a diameter of 2.5 μm or smaller (PM2.5) possess a significant health risk to several organ systems. PM2.5 has also been shown to cross the blood–placental barrier and circulate in foetal blood. Therefore, it is considered an intrauterine environmental toxin. Exposure to PM2.5 during the perinatal period, when the foetus is particularly susceptible to developmental defects, has been shown to reduce birth weight and cause preterm birth, with an increase in adult disease susceptibility in the offspring. However, few studies have thoroughly studied the health outcome of foetuses due to intrauterine exposure and the underlying mechanisms. This perspective summarises currently available evidence, which suggests that intrauterine exposure to PM2.5 promotes oxidative stress and inflammation in a similar manner as occurs in response to direct PM exposure. Oxidative stress and inflammation are likely to be the common mechanisms underlying the dysfunction of multiple systems, offering potential targets for preventative strategies in pregnant mothers for an optimal foetal outcome. Particulate Matter (PM)-An Intrauterine Toxin Embryonic and foetal development is sensitive to the in utero environment, e.g., maternal stress, poor nutrition and environmental toxins [1][2][3][4]. A poor intrauterine environment is notably correlated with low birth weight in the offspring. The Barker hypothesis links events in foetal development, such as intrauterine growth restriction, to the increased susceptibility to develop future adult diseases [5][6][7]. In recent years, the importance of intrauterine environmental factors has been increasingly recognised in the postnatal susceptibility to non-communicable illnesses, including respiratory disorders, metabolic disorders, cardiovascular diseases and chronic kidney disease [1,8]. Apart from the abovementioned well-accepted factors causing foetal underdevelopment, air pollution has also been increasingly recognised as a major intrauterine toxin [9,10]. The World Health Organisation (WHO) has raised the alarms regarding the gravity of poor air quality on human health in the global setting, based on the studies suggesting the detrimental health effects of direct exposure to PMs derived from fossil fuel, biomass burning and traffic [11]. The burden of PM on health is unevenly distributed. Pregnant women and their unborn infants are among the vulnerable groups that can be significantly affected by the poor air quality in which they live [12]. The attention to the adverse health outcome due to intrauterine PM exposure was illustrated after the 2008 Beijing Olympic Games, when the air quality was improved during that short period which allowed the comparison of the birth outcome between those with and without in utero exposure to heavy air pollution [13]. PMs with a diameter of 2.5 µm or less (PM 2.5 ) are of particularly high risk to human health, including that of the growing foetus [14][15][16][17], even more than the gas component in the polluted air [16]. As such, countries with relatively clean air, e.g., Australia, are still at risk of PM derived from traffic-related air pollution [18]. Those living within 50 to 500 m of main roads are at higher risk of chronic low-level PM exposure and the associated adverse health effects [19][20][21][22][23]. 
For example, it has been found that living less than 200 m from a major road, meaning exposure to traffic-related air pollution, causes an increased risk of developing asthma and low lung function in children [20,[24][25][26]. The small size gives such PMs the advantage of accessing the bloodstream in the alveoli and passing blood organ barriers, including the blood-placental barrier [9]. As such, PM 2.5 can potentially circulate in foetal blood, although the foetal level compared with the maternal level is currently unclear. The toxicity of PM is due to the complex composition, depending on the source [18]. The common substances carried by PM found in urban and industrial areas include sulphates, carbon, polycyclic aromatic hydrocarbons, biological compounds and metals [11,16,27]. Even in countries and areas with relatively good air quality, extreme weather conditions due to the change in the climate can significantly increase PM mass concentration within a short period of time, such as sand storms and bush fires [28,29]. A recent Nature paper has suggested that the oxidative potential of PM may be the driver of its adverse health effects [30]. In fact, PM contains high levels of free radicals and oxidants, such as reactive oxygen species (ROS) (e.g., oxygen and hydroxyl radicals and other reactive forms of O 2 such as superoxide anion and hydrogen peroxide) [30,31]. Several PM components also generate ROS, including transition metals, polycyclic aromatic hydrocarbons and volatile organic compounds [32,33]. PM sourced from non-exhaust traffic emission contain many transition metals (i.e., manganese, vanadium, copper and iron) that have redox properties with the potential to induce intracellular ROS production, which then activates inflammatory cells to produce more ROS [34,35]. This has been associated with higher oxidative stress and toxicity compared to other sources [34,36]. Oxidative stress may be involved in all PM-induced disorders in multiple organ systems, including the lung, cardiovascular system and liver, which activate the endogenous redox system [37][38][39][40]. While the general public is generally conscious about outdoor pollutions, indoor air pollution is another frequent location of PM exposure that can affect the residents' health. Indoor PMs can come from outdoor with similar chemical composition and size as environmental PMs [41]. Household generated PM can also be due to daily activities, such as cooking, biomass burning and cleaning [42][43][44]. Bisphenol A (BPA), commonly used in plastic products, has been found in 95% of indoor dust samples [45,46]. Although its inhalation is less than ingestion, BPA may interfere with lipid metabolism and inflammatory responses to increase the risk of atherosclerosis [46,47]. While it is well accepted that high ambient PM levels correlate with the mortality rate, it is increasingly recognised that long-term exposure to even low level of PM (quite often considered as "safe level") increases the risk of disorders in vital organ systems, including the heart, the lung and the brain [48,49]. Although not widely studied, PMs are now considered an in utero environmental toxin [9,50] and therefore of interest to this perspective paper. Here, we summarised the currently available evidence from a limited number of publications to raise the awareness of the needs for more comprehensive research into this currently understudied yet important health topic. 
Disrupted Foetal Development
As an intrauterine toxin and a strong oxidant, maternal exposure to PM during pregnancy is associated with birth complications and long-term health consequences in the offspring, including abnormal organogenesis, premature and preterm birth, being small for gestational age, impaired newborn lung and immune function, and an increased risk of brain developmental and cognitive disorders after birth [9,[50][51][52][53]. However, this topic is still understudied, considering that there is no evidence of a safe exposure threshold for any of the air pollutants [14]. The ability of PM 2.5 to cross the blood-placental barrier suggests that PM 2.5 can circulate in foetal blood [9]. Therefore, it can be naturally postulated that PM may directly induce oxidative stress and inflammatory responses in the growing foetus and affect foetal development [9,50]. This theory has been supported by studies of umbilical blood in newborns with prenatal PM exposure, in which reduced levels of the endogenous antioxidant Superoxide Dismutase 2, together with oxidative DNA damage, were consistently found in mother-baby pairs [54,55]. In vitro studies using embryonic cells or trophoblast cells have discovered dose-dependent toxicities of PMs on cell cycle and viability [50,56,57]. PM exposure affects several pathways, including heightened oxidative stress, inflammatory response and endoplasmic reticulum stress, resulting in ROS-JNK/ERK-apoptosis and G0/G1 arrest pathways [50,56,57]. These studies have shed light on what can happen to the growing foetus if the mother lives in polluted air during pregnancy. The cellular powerhouse, the mitochondrion, is sensitive to oxidative stress-induced damage; however, mitochondrial function and integrity were not affected by PM exposure in an in vitro study [57]. Interestingly, changes in mitochondrial DNA copy number and methylation have been found in the cord blood of babies born to mothers exposed to PM during pregnancy [50]. This may be inherited from the mothers, rather than caused by in utero PM exposure. In addition, in utero exposure to fine ambient PM correlates with heightened placental oxidative stress and inflammatory responses, with decreased placental mass and reduced expression of genes responsible for placental angiogenesis [50,53]. This may impair nutrient delivery to the foetus, leading to intrauterine underdevelopment [58]. As the developing foetus is highly vulnerable to in utero environmental changes, in addition to low birth weight, intrauterine PM exposure can also result in miscarriage and preterm birth [58]. In line with Barker's hypothesis, low birth weight can lead to adaptive catch-up growth after birth, which increases the risk of obesity. It is therefore not surprising to observe fast weight gain in mice with pre-conceptional exposure to high levels of PM. Xu and colleagues demonstrated that females born to animals exposed to PM 2.5 only during preconception seem to be protected, and that only males in the 1st generation (F1) experience intrauterine underdevelopment and catch-up growth after birth [59]. The same study suggested that this transgenerational transmission may be driven by the effect of PM on mitochondrial DNA in eggs, as exposure to PM 2.5 only during gestation did not have the same effect as pre-conceptional exposure; moreover, only the daughters in the 1st generation pass the adverse effect on to the 2nd generation [59].
However, the situation seems more complex in humans, where mothers are normally exposed to PM during both the pre-conceptional and gestational periods. In humans, only girls show this predicted trend, whereas boys with intrauterine exposure to a higher level of PM remain underweight in childhood [60]. This may suggest that there is an additive effect between pre-conceptional and gestational exposures, or even postnatal exposure, as babies normally live in the same environment as their mothers. Whether the effect persists into adulthood is unclear for now. The discrepancy between the few animal and human data available to date may be related to the unphysiologically high doses of PM used in animal studies, or to the effects of daily activities and weather changes on variations in PM exposure levels in humans.

Risk of Future Respiratory and Metabolic Disorders
Maternal PM 2.5 exposure has been found to cause foetal inflammation and oxidative stress, which influence organ development and therefore increase the offspring's susceptibility to non-communicable diseases in adulthood, as foetal development is a critical window that influences adult disease susceptibility [9,58]. In utero PM 2.5 exposure has been shown to cause mitochondrial damage due to the mother inhaling oxidants, leading to increased oxidative stress in the intrauterine environment, which can then cause dysregulation of the foetal immune system and interruption of the genetic duplication process, causing adverse birth and foetal health outcomes [9,61]. Thus, maternal PM 2.5 exposure in humans has been linked to increased risks of childhood asthma in the offspring [9]. Associations between direct PM exposure and the development of insulin resistance, abnormal cholesterol/triglyceride levels and obesity have been reported [62,63]. To date, there have been very few studies investigating the impact of intrauterine PM exposure on the risk of future metabolic disorders. The very first study was conducted on hamsters in 1982 and showed that PM 2.5 was able to cross the blood-placental barrier, therefore reaching the foetus in utero. It also showed that maternal PM 2.5 caused a decrease in mitotic activity in the foetal liver [64]. The liver is a key metabolic organ with several roles, including acting as a hub metabolically connecting various tissues and thus governing and maintaining body energy metabolism and metabolic homeostasis [65,66]. As this study on foetal hepatic development was conducted nearly 40 years ago, the information is dated and limited. More recently, a study in 2019 discovered that prenatal and postnatal (4 weeks) PM 2.5 exposure increased lipogenesis and worsened fatty acid oxidation differentially in mice consuming chow and high-fat diets [67]. Moreover, another study using continued PM exposure throughout development in mice showed transcriptomic changes in the liver in adulthood [68]. A preprint posted on bioRxiv showed that maternal exposure to PM 2.5 increased DNA methylation in pancreatic islets, associated with a reduced blood insulin level and hyperglycaemia, an effect lasting for two generations [69]. Although no other work has yet supported these discoveries, the above evidence suggests that foetal programming of metabolic disorders can be induced by intrauterine PM exposure. This needs to be confirmed by future studies.

Influence on Neurocognitive Function
Exposure to environmental toxins in utero can interrupt brain development [70].
PM may interfere with the formation of brain structures and cause failure in cell proliferation and an inability to modulate neurotransmission due to dysregulated pruning (loss of synapses) [71][72][73]. School-aged children who were exposed to high PM levels during their foetal life presented a thinner cortex in both hemispheres of the brain, particularly the precuneus region in the right hemisphere, which correlates with impaired inhibitory control [51]. In rats, maternal PM exposure led to decreased levels of IL-18 and vascular endothelial growth factor (VEGF), which correlated with increased anxiety later in life [74]. These findings emphasise the links between intrauterine PM exposure and neurocognitive impairment [74]. Studies also suggest that prenatal exposure to traffic PM 2.5 may cause social behavioural changes by promoting pro-apoptotic pathways in the cerebral cortex during brain development [75]. PM may directly target the immune system by triggering glial cells, e.g., microglia, oligodendrocytes and astrocytes [71][72][73]. Microglia are resident innate immune cells within the central nervous system that respond to stimuli (cell stress, tissue damage, pathogens, etc.) and serve an active role in inflammation [76,77]. Although direct evidence is lacking, intrauterine PM exposure may induce an inflammatory response in the same brain regions as direct PM exposure (e.g., the frontal cortex, substantia nigra, vagus nerve and olfactory bulb) [78,79]. Elevated inflammation in the brain is also associated with blood-brain barrier leakage, leading to increased iron deposition in the brain and microbleeds [80]. Microbleeds are often associated with impaired cognitive function, which may be responsible for the heightened risk of dementia due to direct PM exposure [81,82]. However, whether intrauterine PM exposure can lead to early-onset neurodegeneration and an increased risk of dementia and other neurological conditions is unclear; this can be the focus of future epidemiological studies.

Disturbance on Body Fluid Homeostasis
Chronic exposure to PM has been associated with reduced kidney function [83][84][85][86][87]. The adverse impact of PMs on sodium excretion (natriuresis) and diuresis further increases the risk of hypertension in such individuals [88,89]. An animal study suggests that in utero PM exposure can reduce renal dopamine D1 receptor function, which further leads to increased blood pressure driven by increased ROS production [89]. However, there is no literature to date on the impact of intrauterine PM exposure on early kidney development and later susceptibility to renal dysfunction and chronic kidney disease (CKD) in adulthood. A human study suggests that individuals who are exposed to PM from polluted air during foetal development have low birth weight, and individuals with foetal underdevelopment have a 70% increased risk of CKD [90]. This is most likely driven by epigenetic modifications, which change DNA-encoded gene expression without affecting the original nucleotide sequence [91]. DNA methylation is the most widely studied epigenetic modification, with numerous studies linking its role to the development of CKD due to in utero environmental influences such as maternal cigarette smoking [2,92]. However, whether intrauterine PM exposure can program the risk of CKD via epigenetic modification is yet to be determined. Similar to PM, chemicals in cigarette smoke are also intrauterine toxins [2].
Intrauterine exposure can induce oxidative stress and inflammatory responses, which is linked to mitochondrial DNA damage, impaired mitochondrial function and structure and increased global DNA methylation in adult kidneys [2]. As a result, hallmarks of CKD have been found in these mice, including increased renal fibrosis and proteinuria [2,8,93]. Whether in utero PM exposure also induces CKD in adulthood through similar mechanisms is unclear. This requires future studies to close the knowledge gap. A Temporary But Plausible Solution Epidemiological studies have suggested that reducing PM exposure or the level of air pollution can reduce the risk of a variety of health problems [94]. Premature deaths could also be reduced by lowering air pollution to the WHO standard [14]. A study in India shows that life expectancy would increase by 1.7 years if PM levels are below that associated with adverse health outcomes [95]. This reduction in PM concentration is achievable through local and national governments establishing multisectoral policies in sectors such as transport, energy, agriculture, waste management and urban planning [11,96,97]. However, this goal is not easy to achieve. This largely depends on the willingness of the individual government to change their carbon emission policy and the influence of the surrounding countries. However, the health risks need to be addressed now. The responses to prenatal PM exposure are comparable to cigarette smoke exposure, another common intrauterine oxidant/toxin. Both lead to oxidative stress and intrauterine underdevelopment [58,98]. Some of the long-term outcomes are also similar between these two stimuli [5,99,100], suggesting common pathological mechanisms and perhaps shared preventative solutions. We have shown that maternal supplements with either global antioxidant (e.g., L-carnitine) or mitochondria-targeted antioxidant (e.g., MitoQ) can ameliorate the detrimental impact of intrauterine cigarette smoke exposure caused foetal underdevelopment and risks of non-communicable disorders in multiple organ systems [92,[101][102][103][104][105]. These benefits include the endocrine system that can lead to diabetes, the liver that can lead to dyslipidemia and liver steatosis, the brain that can lead to motor and cognitive dysfunction, the kidney that can lead to CKD and the lung that can lead to fibrosis and asthma [92,[101][102][103][104][105]. Such effects are perhaps not restricted to suppressing oxidative stress in the growing foetal, as maternal vitamin C supplement during pregnancy has been shown to interrupt unwanted epigenetic modifications that lead to adverse health outcomes after birth due to intrauterine toxin exposure [106,107]. It is not clear whether administration of global or mitochondrial specific antioxidants during the gestation or early postnatal period can ameliorate adverse effects due to maternal PM exposure. Future studies addressing these issues warrant further investigations. Perspective Foetal development determines future health outcomes, in accordance with Barker's hypothesis [6,7,9,[19][20][21][22][23]. Therefore, any impact that in utero PM 2.5 exposure has on the foetus may be carried into adulthood, despite the currently limited number of studies on the effect of in utero exposure to PM 2.5 on the foetus in this regard. In addition, as the general public is not aware of the danger of low PM levels in places where air quality is considered good (e.g., Australia), they will not actively avoid it. 
Therefore, more epidemiological studies are needed to raise the awareness of both the general public and policy-makers for urban planning. Furthermore, although research on the adverse impact of in utero exposure to tobacco cigarette smoke has suggested the second and third trimesters as the critical window for foetal underdevelopment [108], which trimester is more important for in utero PM exposure is still unclear. Investigating this research question can be challenging in humans, as moving house or changing working environment during pregnancy is not a common choice among most pregnant women. Perhaps only animal experiments can help identify the critical window during foetal development and use pharmacological approaches to establish the involvement of oxidative stress and inflammation in the toxicity due to in utero PM exposure. In addition, in humans, newborns are likely to live in the same polluted environment as their mothers, and thus postnatal development can be directly influenced by PM inhaled by their fragile lungs. Therefore, it is often difficult to separate the effects of in utero exposure from those of direct early-life inhalation. There is some understanding of the respiratory and neurological effects of maternal PM exposure, whereas the impacts on the liver, kidney and cardiovascular system are understudied. We have summarised the potential mechanisms in Figure 1 based on the published evidence. More studies are needed to examine how intrauterine PM exposure can interrupt normal organ development, by adopting more physiologically relevant doses of PM. In addition, there is no safe limit for PM exposure. Future studies should also focus on the scenario of chronic low-level PM exposure in those with direct or in utero exposure. Earlier investigations have established a sex bias in disease pathophysiology, with females less likely to develop certain diseases than males [109]. The conventional explanation is the anti-inflammatory effect of estrogen [110]. However, this is not always applicable to the sex differences in foetal and early developmental disorders before puberty. Nevertheless, the sex difference in the impact of intrauterine PM exposure has not been well studied, and it may hold the key to developing a proper preventative strategy.

Conclusions
Limiting pollution to reduce foetal and lifelong exposure to PM is clearly the goal for achieving optimal health outcomes. Maternal and early intervention to prevent chronic disease holds promise as a short-term solution; however, the effect of PM exposure during gestation on foetal health outcomes should be studied systematically. Additional studies are required to confirm whether oxidative stress is indeed the main mediator of disease development due to in utero PM exposure and to identify the optimal foetal window for interventions and preventative measures.
Evaluating the Inclusion Level of Medium Chain Fatty Acids to Reduce the Risk of Porcine Epidemic Diarrhea Virus in Complete Feed and Spray-Dried Animal Plasma
Research has confirmed that chemical treatments, such as medium chain fatty acids (MCFA) and commercial formaldehyde, can be effective at reducing the risk of porcine epidemic diarrhea virus (PEDV) cross-contamination in feed. However, the efficacy of MCFA levels below 2% inclusion is unknown. The objective of this experiment was to evaluate whether a 1% inclusion of MCFA is as effective at PEDV mitigation as a 2% inclusion or formaldehyde in swine feed and spray-dried animal plasma (SDAP). Treatments were arranged in a 4 × 2 × 7 plus 2 factorial with 4 chemical treatments: 1) PEDV positive with no chemical treatment, 2) 0.325% commercial formaldehyde, 3) 1% MCFA, and 4) 2% MCFA. The 2 matrices were: 1) complete swine diet and 2) SDAP; with 7 analysis days: 0, 1, 3, 7, 14, 21, and 42 post inoculation; and 1 treatment each of PEDV-negative untreated feed and plasma. Matrices were first chemically treated, then inoculated with PEDV, and stored at room temperature until being analyzed by RT-qPCR. The analyzed values represent the threshold cycle (CT), where a higher CT value represents less detectable RNA. All main effects and interactions were significant (P < 0.009). Feed treated with MCFA, regardless of inclusion level, had fewer (P < 0.05) detectable viral particles than feed treated with formaldehyde. However, the SDAP treated with either 1% or 2% MCFA had similar (P > 0.05) concentrations of detectable PEDV RNA to the untreated SDAP, while the SDAP treated with formaldehyde had fewer detectable viral particles (P < 0.05). The complete feed had a lower (P < 0.05) quantity of PEDV RNA than SDAP (CT of 39.5 vs. 35.0 for feed vs. SDAP, respectively). Analysis day also decreased (P < 0.05) the quantity of detectable viral particles from d 0 to 42 (CT of 33.2 vs. 44.0, respectively). In summary, time, formaldehyde, and MCFA all appear to enhance RNA degradation of PEDV in swine feed and ingredients; however, their effectiveness varies within matrix. The 1% inclusion level of MCFA was as effective as 2% in complete feed, but neither was effective at reducing the magnitude of PEDV RNA in SDAP.
Summary
Research has confirmed that chemical treatments, such as medium chain fatty acids (MCFA) and commercial formaldehyde, can be effective at reducing the risk of porcine epidemic diarrhea virus (PEDV) cross-contamination in feed. However, the efficacy of MCFA levels below 2% inclusion is unknown. The objective of this experiment was to evaluate whether a 1% inclusion of MCFA is as effective at PEDV mitigation as a 2% inclusion or formaldehyde in swine feed and spray-dried animal plasma (SDAP). Treatments were arranged in a 4 × 2 × 7 plus 2 factorial with 4 chemical treatments: 1) PEDV positive with no chemical treatment, 2) 0.325% commercial formaldehyde, 3) 1% MCFA, and 4) 2% MCFA. The 2 matrices were: 1) complete swine diet and 2) SDAP; with 7 analysis days: 0, 1, 3, 7, 14, 21, and 42 post inoculation; and 1 treatment each of PEDV-negative untreated feed and plasma. Matrices were first chemically treated, then inoculated with PEDV, and stored at room temperature until being analyzed by RT-qPCR. The analyzed values represent the threshold cycle (CT), where a higher CT value represents less detectable RNA. All main effects and interactions were significant (P < 0.009). Feed treated with MCFA, regardless of inclusion level, had fewer (P < 0.05) detectable viral particles than feed treated with formaldehyde. However, the SDAP treated with either 1% or 2% MCFA had similar (P > 0.05) concentrations of detectable PEDV RNA to the untreated SDAP, while the SDAP treated with formaldehyde had fewer detectable viral particles (P < 0.05). The complete feed had a lower (P < 0.05) quantity of PEDV RNA than SDAP (39.5 vs. 35.0 for feed vs. SDAP, respectively). Analysis day also decreased (P < 0.05) the quantity of detectable viral particles from d 0 to 42 (33.2 vs. 44.0, respectively). In summary, time, formaldehyde, and MCFA all appear to enhance RNA degradation of PEDV in swine feed and ingredients; however, their effectiveness varies within matrix. The 1% inclusion level of MCFA was as effective as 2% in complete feed, but neither was effective at reducing the magnitude of PEDV RNA in SDAP.

Introduction
Porcine Epidemic Diarrhea Virus (PEDV) is an enveloped, single-stranded, positive-sense RNA virus that was first identified in the United States in May 2013. Epidemiological and controlled experiments have shown that complete feed or feed components can be one of many possible vectors of transmission of PEDV. 5 Because of the potential viral spread by feed and ingredients, reduction techniques such as chemical treatments have been used to combat the virus. Many chemical treatments have been used to mitigate the virus, but formaldehyde and medium chain fatty acids (MCFA) appear to provide the greatest reduction of the virus within feed and ingredients. Formaldehyde has been shown to be effective at the approved rate of addition (37% formaldehyde used in animal feed at a rate of 5.4 lb per ton), and MCFA at 2% wt/wt in the feed or ingredient. 6,7,8 However, the efficacy of MCFA levels below 2% inclusion is unknown. Therefore, the objective of this experiment was to evaluate whether a 1% inclusion of MCFA is as effective at PEDV mitigation as a 2% inclusion or formaldehyde in swine feed and spray-dried animal plasma (SDAP).

Procedures
In order to evaluate the use of chemical treatments on PEDV survival, a corn-soybean meal-based swine diet manufactured at the Kansas State University O.H. Kruse Feed Technology Innovation Center in Manhattan, Kansas, and spray-dried animal plasma were utilized. The feed matrices were first chemically treated before inoculation with PEDV in order to mimic post-processing contamination.
In order to treat the complete feed and plasma, all treatments were added on a wt/wt basis and mixed using a lab-scale paddle mixer. The Sal CURB and MCFA treatments were aerosolized into the mixer using an air-atomizing nozzle in order to reduce the droplet size of the liquid treatments. All treatments were mixed for a 5-minute wet mix time to ensure a uniform and complete mix. Once the mixing was complete, a total of 22.5 g of product was collected from different locations within the mixer and added to the respective 250 mL HDPE, square, wide-mouth bottle based on day and replication. In order to reduce the potential for treatment-to-treatment cross-contamination, the mixer was cleaned with soap and water between treatments. Once the treatments were added to their respective bottles, they were allowed to sit at room temperature until inoculation.

Inoculation
The feed was inoculated using an appropriately sized pipet to allow even distribution of the virus within the feed and plasma. For the inoculation, 2.5 mL of diluted viral inoculum was placed in each 250 mL bottle containing 22.5 g of each feed treatment, resulting in each bottle containing a PEDV concentration of 10^4 TCID50/g of feed. The bottles were then thoroughly shaken to ensure equal dispersion of the virus within each bottle. The samples were then stored at ambient temperature until aliquoted for analysis of PEDV viral RNA at 0, 1, 3, 7, 14, 21, and 42 days post treatment via qRT-PCR. For each sample day, 100 mL of chilled PBS was placed in each 250 mL bottle containing 22.5 g of inoculated feed. Samples were then shaken to mix thoroughly and chilled at 4°C overnight. Feed matrix supernatants, including two PCR samples and a bioassay sample, were then collected and stored at -80°C until the end of the trial.

Bioassay
The Iowa State University Institutional Animal Care and Use Committee reviewed and approved the pig bioassay protocol. A total of 60 crossbred, 10-d-old pigs of mixed sex were sourced from a single commercial, crossbred farrow-to-wean herd with no prior exposure to PEDV. Additionally, all pigs were confirmed negative for PEDV, porcine deltacoronavirus (PDCoV) and transmissible gastroenteritis virus (TGEV) based on fecal swabs. To further confirm PEDV-negative status, collected blood serum was analyzed for PEDV antibodies by an indirect fluorescent antibody (IFA) assay and for TGEV antibodies by ELISA, both conducted at the Iowa State University Veterinary Diagnostic Laboratory (ISU-VDL). Pigs were allowed 2 d of adjustment to the new pens before the bioassay began. A total of 20 rooms (60 pigs) were assigned to treatment groups, with 2 negative control rooms and 18 challenge rooms. During bioassays, rectal swabs were collected on d -2, 0, 2, 4, 6, and 7 days post inoculation (dpi) from all pigs and tested for PEDV RNA by qRT-PCR. Following humane euthanasia at 7 dpi, small intestine, cecum, and colon samples were collected at necropsy along with an aliquot of cecal contents. One section of formalin-fixed proximal, middle, and distal jejunum and ileum was collected per pig for histopathology. 9

Statistical Analysis
Data for the main effects of day, treatment, and feed matrix, and all associated interactions, were analyzed as a completely randomized design using PROC GLIMMIX in SAS (SAS Institute, Inc., Cary, NC). Results for treatment criteria were considered significant at P ≤ 0.05 and marginally significant from P > 0.05 to P ≤ 0.10.
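As a quick back-of-envelope check of the inoculation arithmetic described in the Inoculation subsection above (2.5 mL of 10^5 TCID50/mL inoculum added to 22.5 g of treated matrix), a short calculation is sketched below; whether the inoculum mass is counted in the denominator is our assumption, not something stated in the report.

```python
# Stated values from the Procedures section.
inoculum_volume_ml = 2.5   # mL of diluted inoculum per bottle
inoculum_titer = 1e5       # TCID50 per mL
feed_mass_g = 22.5         # g of treated feed or plasma per bottle

total_tcid50 = inoculum_volume_ml * inoculum_titer                     # 2.5e5 TCID50 per bottle
dose_per_g_feed = total_tcid50 / feed_mass_g                           # ~1.1e4 TCID50/g (feed mass only)
dose_per_g_total = total_tcid50 / (feed_mass_g + inoculum_volume_ml)   # 1.0e4 TCID50/g if inoculum mass is included

print(f"{dose_per_g_feed:.2e} TCID50/g (feed only), {dose_per_g_total:.2e} TCID50/g (feed + inoculum)")
# Both values are consistent with the stated target of ~10^4 TCID50/g.
```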
Interactions are presented graphically and provide the more relevant results regarding the effects of specific chemical mitigants in the complete diet and spray-dried animal plasma over time. The PEDV CT in the untreated control of the complete diet increased in a linear fashion from d 0 to 42 (Figure 1). The chemical treatments all produced a greater decrease in detectable PEDV RNA at each analysis day than the untreated control. In the complete swine diet, the MCFA treatments, regardless of concentration, were the most effective overall. The PEDV CT in the untreated control of the spray-dried animal plasma showed the same trend for both MCFA treatments (Figure 2). However, the commercial formaldehyde product was highly successful at mitigating PEDV according to qRT-PCR in spray-dried animal plasma compared to the MCFA treatments.

Bioassay Results
The bioassay provided a more in-depth look at each of the chemical treatments, identifying which treatments led to no infection in the animals. In the complete feed, the only treatment that led to PEDV-positive pigs was the day 0 PEDV-positive feed with no chemical treatment (Table 4). However, in the spray-dried animal plasma, Sal CURB was the only treatment that led to a negative bioassay on d 3 (Table 5). On d 21, the Sal CURB, 1% MCFA, and PEDV-positive untreated control all led to negative bioassays, with the 2% MCFA treatment producing a positive bioassay 4 days post inoculation (Table 5). In summary, time, Sal CURB, and MCFA enhance the RNA degradation of PEDV in swine feed and ingredients, but their effectiveness varies within matrix. Notably, at 1% inclusion, MCFA was as successful at mitigating PEDV in the complete swine diet as the commercially available formaldehyde product.

Table footnotes: 1 TCID50/mL PEDV was diluted to 10^5 TCID50/mL. Each treatment was inoculated with the 10^5 TCID50/mL PEDV, resulting in 10^4 TCID50/g PEDV-inoculated feed matrix. Three feed samples per day and treatment were collected and diluted in PBS. The supernatant from each sample was then collected for pig bioassay and administered once via oral gavage on d 0 to each of three pigs per treatment (10 mL per pig); thus, each value represents the mean of 3 pigs per treatment. Pigs were inoculated at d 12 of age. 2 Day post inoculation. 3 A cycle threshold (Ct) of >45 was considered negative for the presence of PEDV RNA. 4 A (-) signals a negative pig and a (+) a positive pig in the bioassay; each day post inoculation within each treatment has three symbols, one for each of the three pigs in that treatment.

Figure 1. Influence of chemical treatment on RT-PCR detection of PEDV in post-treatment PEDV-inoculated complete swine diet stored at room temperature. Data were analyzed by PCR, with each data point representing N = 3. The higher the CT value, the lower the quantity of PEDV RNA detected.

Table 1. Main effect of treatment on detection of PEDV by qRT-PCR.
Table 3. Main effect of day post inoculation on detection of PEDV by qRT-PCR.
Table 5. Effects of medium chain fatty acids and formaldehyde treatment of spray-dried porcine plasma on porcine epidemic diarrhea virus (PEDV) detection from plasma, pig fecal swabs and cecum contents (PEDV N-gene real-time PCR, cycle threshold, Ct).
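Because all of these results are reported as cycle-threshold (CT) values, it may help to note how CT differences translate into approximate fold-differences in detectable RNA. The sketch below assumes near-ideal qRT-PCR efficiency (roughly a doubling of template per cycle); this is a simplifying assumption for illustration, not part of the report's analysis.

```python
def fold_less_rna(ct_high, ct_low):
    """Approximate fold-reduction in detectable template for a higher CT
    relative to a lower CT, assuming ~2x amplification per cycle."""
    return 2 ** (ct_high - ct_low)

# Example: mean CT of 39.5 (complete feed) vs 35.0 (SDAP) corresponds to
# roughly a 20-fold lower quantity of detectable PEDV RNA in the feed.
print(round(fold_less_rna(39.5, 35.0), 1))   # ~22.6
```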
Loggerhead sea turtle (Caretta caretta) diving changes with productivity, behavioral mode, and sea surface temperature
The relationship between dive behavior and oceanographic conditions is not well understood for marine predators, especially sea turtles. We tagged loggerhead turtles (Caretta caretta) with satellite-linked depth loggers in the Gulf of Mexico, where there is a minimal amount of dive data for this species. We tested for associations between four measurements of dive behavior (total daily dive frequency, frequency of dives to the bottom, frequency of long dives and time-at-depth) and both oceanographic conditions (sea surface temperature [SST], net primary productivity [NPP]) and behavioral mode (inter-nesting, migration, or foraging). From 2011-2013 we obtained 26 tracks from 25 adult female loggerheads tagged after nesting in the Gulf of Mexico. All turtles remained in the Gulf of Mexico and spent about 10% of their time at the surface (10% during inter-nesting, 14% during migration, 9% during foraging). Mean total dive frequency was 41.9 times per day. Most dives were ≤ 25 m and between 30-40 min. During inter-nesting and foraging, turtles dived to the bottom 95% of days. SST was an important explanatory variable for all dive patterns; higher SST was associated with more dives per day, more long dives and more dives to the seafloor. Increases in NPP were associated with more long dives and more dives to the bottom, while lower NPP resulted in an increased frequency of overall diving. Longer dives occurred more frequently during migration, and a higher proportion of dives reached the seafloor during foraging when SST and NPP were higher. Our study stresses the importance of the interplay between SST and foraging resources for influencing dive behavior.

Introduction
How an animal moves across spatial and temporal scales is fundamental to its ecology. These movements can influence its ability to survive and have impacts on population and ecosystem dynamics and ultimately evolution [1]. Determining the factors that drive animal movement behavior is an important part of movement ecology frameworks [1]. Sea turtle diving is a particularly interesting movement behavior, as sea turtles demonstrate the longest reported breath-hold dives of all marine animals [2] and typically spend more than 90% of their time underwater [3][4][5]. Describing sea turtle dive behavior is therefore a vital part of understanding sea turtle ecology and life histories. The relationship between marine predator behavior and oceanographic conditions is not well understood [6]. However, some studies demonstrate that ocean temperature and foraging resources can interact and are important determinants of diving behavior. For example, elephant seals (Mirounga leonina) in the Southern Ocean dived deeper to forage and spent less time at those depths with increased water temperature [7], whereas Atlantic bluefin tuna (Thunnus thynnus) in the Mediterranean Sea dived deeper when biological productivity was high [8], and the depths reached by tuna in the North Atlantic were correlated with thermocline depth [9]. For sea turtles specifically, relationships between diving and interacting oceanographic conditions such as sea surface temperature (SST) and foraging resources have rarely been tested, although a few studies point to the potential importance of these factors.
For olive ridleys (Lepidochelys olivacea) in the Guiana basin, foraging depth increased with a deeper thermocline and foraging time increased with lower temperatures [10], and leatherbacks (Dermochelys coriacea) in the Northwest Atlantic showed shorter, shallower dives in cooler, productive shelf habitat [11]. While these species vary in the temperatures and depths appropriate for foraging, based on their physiological requirements, these studies all show the importance of the interplay of ocean temperature and prey resources with diving. A 2014 review on sea turtle dive behavior provides a summary of the fundamental knowledge gained on this important aspect of sea turtle life history [2]. From this review, we know that most sea turtle dive data have been collected on nesting females in neritic inter-nesting habitats, followed by juveniles in neritic developmental habitats. The deepest diving sea turtle is the leatherback, which can reach 1250 m depth, while the record for the longest dive goes to loggerheads (Caretta caretta) in the Mediterranean at more than 10 h. For many hard-shelled sea turtles, depths visited on average (i.e. outside of overwintering) range from 2-54 m; for leatherbacks this ranges up to 150 m. The effect of temperature on sea turtles has been explored thoroughly, and temperature has been shown to influence turtle metabolic rates, circulation and other physiological factors. Therefore, dive behavior is presumed to shift based on needs for thermoregulation and in response to seasonal changes (longer dives with lower temperatures), although across species and regions the relationship between temperature and diving has differed and was only investigated in 12 of 70 studies reviewed. The review also describes that some turtles change dive behavior based on whether they are transiting. For example, turtles tend to use shallow waters during transit, with occasional deep dives possibly for resting or foraging en route, with the exception of the leatherback, which showed longer and deeper dives during transit. Importantly, dive behavior differed based on habitat type and geography. With all this variability, and an assumption that prey distribution is an important driver for sea turtle dive behavior, studies of dive characteristics alongside real-time prey vertical distribution are listed as an important objective [2]; such studies can be challenging to undertake in the open ocean. Loggerhead sea turtles (Caretta caretta) that nest on Gulf of Mexico (GoM) beaches represent three distinct population segments (DPSs [12]) within the threatened Northwest Atlantic population. The Northern Gulf of Mexico (NGoM) nesting group comprises one of the smallest DPSs, estimated at 323-634 adult females [13]. Loggerheads that nest in the NGoM appear to rarely leave the GoM [14,15]. Although these individuals rely heavily on the GoM throughout their lives, very few dive data are available for turtles in this DPS. Few studies have reported dive data for any loggerheads using GoM waters, except to describe basic depths reached (5 females [16]), dive durations (4 unsexed turtles [3]), time spent at the surface versus bottom (10 females [17]), or to show dive behavior during an extreme weather event (2 inter-nesting females [18]). While these studies contribute to our understanding of loggerhead dive behavior in this region, only one study covered turtles in this DPS [17].
In addition, those studies did not investigate both the depths and durations of dives across inter-nesting, migration and foraging, or relate these to the environmental conditions of sea surface temperature and net primary productivity. Using 26 tracks from loggerhead turtles that nested on NGoM beaches tagged with satellite-linked depth loggers, we provide new dive data for a region in which few dive data exist for this species and investigate whether and how oceanic conditions influence loggerhead dive behavior. Specifically, we assess the environmental variables of sea surface temperature (SST) and net primary productivity (NPP). Using state-space modeling, we also classify the behavioral mode of each turtle (i.e. inter-nesting, migration, or foraging) along its track to assess how dive behavior and its relation to oceanographic conditions change across the three major life stages of adult turtles. Loggerhead turtle tagging and dive data collection We tagged turtles on NGoM beaches at Gulf Shores, Alabama (GS) and St. Joseph Peninsula, Florida (SJP), which are located in the eastern (SJP) and western (GS) ranges of loggerhead turtle nesting in the NGoM ( [12]; Fig 1). Turtle capture and tagging followed methods identical to those in previous studies [19] and established protocols [20], and were approved under Institutional Animal Care Protocol United States Geological Survey-Southeast Ecological Science Center-Institutional Animal Care and Use Committee-2011-05. We attached platform transmitter terminals (PTTs) with slow-curing epoxy (two-part Superbond epoxy) and used two types of Wildlife Computers tags: SPLASH10-309A (n = 10; 7.6 cm long x 5.6 cm wide x 3.2 cm high, mass 125 g in air) and SPLASH10-238A-AF (n = 16; 10.5 cm long x 5.6 cm wide x 3.0 cm high, mass 213 g in air). Tags were set to collect dive data and transmit through the Argos satellite system 24 hours per day all year, except tags with PTT identification numbers beginning with "129" (n = 10), which collected data every day but transmitted daily from May-October and every third day from November-April. Location filtering and behavioral modes Location data were retrieved using the Satellite Tracking and Analysis Tool (STAT; [21]) available on www.seaturtle.org. We used switching state-space modeling (SSM) output from previous analyses [14], which included all received locations except those with Location Class "Z", which are classified as invalid. Briefly, the SSM takes irregularly timed locations received from satellites and fits a two-state switching correlated random walk to model transitions between behavioral states as well as the "true" unobserved locations at equal time intervals [22], in this case every eight hours. Behavioral mode output was binary, defined based on whether the turtle demonstrated area-restricted searching (i.e. 'foraging and/or inter-nesting') or not (transiting, i.e. 'non-migration transit and/or migration'). We defined points as 'inter-nesting' if they were before migration and 'foraging' if after. Because some turtles had multiple SSM transit periods, we considered two factors in order to discriminate migrations from non-migration transit: beach encounters of nesting turtles and graphs of the displacement distance from tagging locations [23].
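The displacement-distance graphs referred to here can be produced with a simple great-circle calculation. The sketch below is only an illustration of that step, assuming a data frame of filtered Argos positions with placeholder column names (id, date, lon, lat) and the geosphere package; it is not the authors' code.

```r
# Minimal sketch: displacement distance from the tagging location over time,
# used to help separate migration from non-migratory transit.
# Assumes 'locs' is a data.frame of filtered Argos positions with columns
# id, date (POSIXct), lon, lat (these names are placeholders).
library(geosphere)

plot_displacement <- function(locs) {
  locs <- locs[order(locs$date), ]
  origin <- c(locs$lon[1], locs$lat[1])            # tagging location = first position
  # Great-circle distance (km) from the tagging location to every subsequent fix
  locs$disp_km <- distHaversine(origin, cbind(locs$lon, locs$lat)) / 1000
  plot(locs$date, locs$disp_km, type = "l",
       xlab = "Date", ylab = "Displacement from tagging location (km)")
  invisible(locs)
}

# Example: one displacement graph per turtle
# invisible(lapply(split(locs_all, locs_all$id), plot_displacement))
```

A sustained, directed rise in displacement in such a graph is what distinguishes a true migration from short non-migratory transits near the nesting or foraging areas.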
If a nesting event (ground-truthed during nesting surveys) fell within the SSM-defined migration period, we classified the locations before the nest as 'transit within the inter-nesting period' (see [24]) and locations from the nesting date to the arrival at the foraging grounds as migration. We sub-divided 'transit and migration' into three categories: non-migratory transits within the nesting season, migration from the nesting grounds to foraging areas, and non-migratory transit within the foraging area (see S1). For a few turtles, we had to make decisions beyond the above steps to determine behavioral mode. Turtle 106358 did not have a successful SSM run; locations for this turtle became very erratic and were received infrequently after August 13, 2011, after which high-quality locations occurred on the Cuban shoreline, with subsequent locations moving inland. For our analysis, we only used locations before this date. We visually assessed the track to assign behavioral mode; the turtle moved multiple times between GS and SJP beaches and visited the shore often, indicating that the entire period was within an inter-nesting period, with points making obvious directed movement classified as transits during inter-nesting. Additionally, for three turtles (129513, 106337, 119946), data stopped transmitting in August or early September of the tagging year (Aug 12, Aug 31 and Sep 4, respectively), and their final area-restricted search areas were close to shore. The nesting season for this sub-population can occur through early September [25], making it unclear whether these final areas should be considered inter-nesting or foraging grounds. To determine this, we used the average start of migration for the other turtles in this study (i.e. average end of nesting season: July 21st). Each turtle's final period began either after this or only 4 days before (turtle 119946), and none had high-quality locations on land during this time, so we classified all three as foraging periods. This is also consistent with the peak in migration timing (July 22-Aug 9) found for loggerhead turtles from this area (including some turtles from this study and others [14]). With the beginning and end dates for different behavioral modes determined, we used original, filtered satellite locations from within those time periods for further spatial analysis. To filter the Argos locations, we removed points on land, those requiring speeds >5 kph, points with location class Z (those that failed Argos plausibility tests), and erroneous locations (defined as those outside the GoM). Because dive data do not come with a spatial location, we created points representing mean daily locations from filtered locations to associate dive data with corresponding behavioral modes and environmental data. Variables and data analysis Environmental variables. To understand the importance of temperature and forage resources to loggerhead turtle diving, we used average monthly SST data and average monthly NPP data (as a proxy for food availability) in generalized linear mixed models (see below). We extracted SST and NPP data at mean daily locations and assigned a daily value to the dive data. SST data were retrieved from NASA's Ocean Color Web (https://oceancolor.gsfc.nasa.gov/, accessed on 8/5/2016 and 10/12/2016). NPP data were obtained from Oregon State University's Ocean Productivity site (https://www.science.oregonstate.edu/ocean.productivity/, accessed on 8/5/2016 and 10/17/2016).
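As an illustration of this extraction step, the sketch below pairs mean daily locations with values from monthly SST and NPP grids. The file names, the use of the raster package, and the column names are assumptions made for the example rather than the authors' workflow.

```r
# Minimal sketch: extract monthly SST and NPP at mean daily turtle locations.
# Assumes monthly grids have been downloaded locally as GeoTIFFs named like
# "sst_2011_07.tif" / "npp_2011_07.tif", and that 'daily' is a data.frame with
# columns id, date (Date class), lon, lat. All names are placeholders.
library(raster)

extract_env <- function(daily, env_dir = "env_rasters") {
  ym <- format(daily$date, "%Y_%m")                  # year_month key per record
  out <- daily
  out$sst <- NA_real_
  out$npp <- NA_real_
  for (key in unique(ym)) {
    idx <- which(ym == key)
    sst_r <- raster(file.path(env_dir, paste0("sst_", key, ".tif")))
    npp_r <- raster(file.path(env_dir, paste0("npp_", key, ".tif")))
    xy <- cbind(daily$lon[idx], daily$lat[idx])
    out$sst[idx] <- extract(sst_r, xy)                # grid value at each location
    out$npp[idx] <- extract(npp_r, xy)
  }
  out
}
```

Each dive-summary day then inherits the SST and NPP of its mean daily location, which is how a single daily environmental value can be assigned to the dive data used in the models below.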
Describing dives. Dive frequency may indicate behaviors such as resting or foraging, with longer dives at night indicating resting and shorter dives during the day indicating foraging [26,27]. For each turtle and behavioral mode, we summed the number of dives per day for depth bins as well as duration bins; the number of dives in depth bins was used for analyses of daily dive frequency (see the modeling section below). Upper limits for "routine" diving (as opposed to maximum capabilities) in loggerhead turtles have been reported at 22 m depth and 30 min in duration [4]. Therefore, we defined "long" dives as those > 30 min and "deep" dives as those >25 m (our available bin boundaries nearest this depth limit were 20-25 m and 25-30 m). We also summarized the proportion of time turtles spent in different Time-at-Depth (TAD) bins. Values in TAD bins represent the percent of time a turtle spent in a depth bin for the summary period (e.g. 24 hr). Using R [28], we averaged the values per bin across the entire tracking period for each turtle (using only full summary periods; any partial summary periods were removed). Then, we averaged these mean percentages across all turtles to get overall means and means per behavioral mode. Additionally, each mean daily location was paired spatially with a bathymetry value and temporally with dive information. Any dives within the bin containing that bathymetry value (or deeper) were classified as dives to the bottom. We only evaluated depths up to 50 m because the bin sizes beyond 50 m (50-100 m and >100 m bins) were much larger than the other 5 m bin sizes. For bathymetry, we used the NOAA National Geophysical Data Center ETOPO1, 1 arc-minute global relief model of Earth's surface (http://www.ngdc.noaa.gov/mgg/geodas/geodas.html; accessed 26 January 2012). Finally, loggerhead turtles in the Mediterranean have shown a shift in dive behavior during winter, including much longer dives [29,30]. We examined possible overwintering behavior by describing diving during the winter (21 December-21 March). For depth bins, duration bins and TAD bins, we used R [28] to create boxplots of the proportion of dives in each bin across all turtles. Modeling dive behavior. To determine which factors influenced dive behavior, we took a model selection approach using four measurements of behavior: (1) total daily dive frequency, (2) frequency of dives to the bottom, (3) frequency of long dives and (4) Time-at-Depth focused on the time at surface. We chose three independent variables: NPP, SST, and mode (inter-nesting, foraging, migration). For all models, we combined inter-nesting dives and transit-during-inter-nesting dives (and defined both as inter-nesting) because turtles were in similar spatial locations, dives occurred over similar time periods (the nesting season) and they showed similar depth patterns (maximum of the individual median maximum depths: 27.1 m for inter-nesting and 32.6 m for inter-nesting transits). For each of the four dive measurements, we tested 15 models, including a null (intercept-only) model and one- to three-variable models with various interaction terms between SST, NPP and MODE. We used a generalized linear mixed model with a Poisson log-link function, with 'individual turtle' as a random effect because observations for each turtle were repeated. We selected the best-fitting model among the 15 models for each dependent variable (four dive measurements) using the pseudo Akaike Information Criterion corrected for small sample sizes (AICc).
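To make this modeling setup concrete, the sketch below shows one way to fit and rank Poisson mixed models with a turtle-level random intercept. The lme4 and MuMIn packages, the column names, and the subset of candidate structures are assumptions for illustration, not necessarily the packages or formulas the authors used.

```r
# Minimal sketch: Poisson GLMMs with a turtle-level random intercept, ranked by AICc.
# 'days' is assumed to be a data.frame with one row per turtle-day and columns
# id, n_dives (total dives that day), sst, npp and mode; these names are
# placeholders, and only 4 of the 15 candidate model structures are shown.
library(lme4)
library(MuMIn)

cands <- list(
  null     = n_dives ~ 1 + (1 | id),
  sst_only = n_dives ~ sst + (1 | id),
  additive = n_dives ~ sst + npp + mode + (1 | id),
  sst_mode = n_dives ~ sst * mode + npp + (1 | id)
)

fits <- lapply(cands, glmer, data = days, family = poisson(link = "log"))

# AICc table with delta-AICc and Akaike weights (relative model likelihoods)
aic_tab <- data.frame(model = names(fits), AICc = sapply(fits, AICc))
aic_tab$dAICc  <- aic_tab$AICc - min(aic_tab$AICc)
aic_tab$weight <- exp(-0.5 * aic_tab$dAICc) / sum(exp(-0.5 * aic_tab$dAICc))
aic_tab[order(aic_tab$AICc), ]
```

The same structure would be refit for each of the four response variables, with models within 2 AICc units of the top model treated as equivalent.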
Pseudo AICc was used because we did not estimate the model parameters using ordinary least squares. We considered models in which ΔAICc < 2 to be equivalent to the best-fitting model [31]. We also calculated the pseudo AICc weight, which represents the relative likelihood of each model. Turtles and behavioral modes From 2011 to 2013, we tagged and successfully received dive data for 25 adult female nesting loggerhead turtles for a total of 26 tracks (n = 19 from GS, n = 7 from SJP). Two tracks (119944 and 106345) were from the same turtle tracked in consecutive years (2011 and 2012 at GS), and two turtles shared the same tag (119952a and 119952b in 2012) but over different time periods. From here on, 119944 and 106345 are treated as separate individuals, although 'individual' was accounted for in all models. Turtles ranged in size from 87.3-104.3 cm CCL (mean ± SD = 95.9 ± 4.6 cm; n = 26) and were tracked from 23-404 days (mean ± SD = 122.7 ± 89.2 days; n = 26; S1 Table). Tracking produced a total of 1753 mean daily locations (Fig 1). Across all possible 3165 tracking days, daily summary periods for dive data were received for 2061 (dive depth, 62%), 1987 (dive duration, 60%) and 1992 (time-at-depth, 60%) days. Location and dive data were coincident on 1227 days. The total number of days in each mode ranged from 4 to 1810 and the total mean daily locations ranged from 4 to 966 (S2 Table). All turtles remained in the GoM; inter-nesting and foraging locations were within the neritic zone (0 to -200 m bathymetry) ranging from Texas to southwestern Florida, and migration locations were within and outside the neritic zone ranging from Texas to the Yucatan Peninsula and northern Cuba (Fig 1). Out of 26 tracks, eight had dive data in the southern GoM (n = 1 for inter-nesting, n = 7 for migration and n = 5 for foraging). Dive parameters by individual are listed in S3 Table. Tracking details and home range analyses for some of these turtles can be found in previous papers [14,19]. Environmental variables For all 1227 days/locations we were able to derive bathymetry and SST information, but NPP data were only available for 1003 locations. We filtered out 17 locations with SST values of 45.0˚C because these locations were very close to shore and we considered the temperatures unrealistic; satellites may have been picking up terrestrial heat signals. Dive frequency Most dives were shallow (less than or equal to 25 m; Fig 2A). The overall median max depth for behavioral modes ranged from 10. MODE, SST and NPP were the best explanatory variables for dive frequency, minimizing AICc (S4 Table). This model carried essentially all of the relative likelihood (AICc weight = 1.00) among the tested models. Based on this model, dive frequency was lower during the foraging period compared to the migration period (Table 1; t = -2.83, p < 0.01), whereas dive frequency was higher during the inter-nesting period (t = 6.4, p < 0.01). Higher SST was associated with a larger number of dives (Table 1; t = 17.09, p < 0.01), as was lower NPP (t = -8.09, p < 0.01; Fig 3). While the rates of change vary depending on NPP and SST values, with NPP held at the average value, for each increase in SST by 1˚C (from 29 to 30˚C) there was an increase of 2.3, 2.7, and 2.2 dives per day for migration, inter-nesting and foraging, respectively (S5 Table).
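Per-degree changes of this kind can be read off a fitted model by predicting on the response scale at two SST values while holding NPP at its mean. The short sketch below illustrates the idea; the fitted object name (fit_best) and column names are placeholders, and the calculation is only an approximation of how such marginal effects might be derived, not the authors' exact procedure.

```r
# Minimal sketch: change in predicted dives per day for a 1 degree C increase in
# SST (29 -> 30 degrees C), with NPP held at its average, for each behavioral mode.
# 'fit_best' is assumed to be a glmer fit of n_dives ~ sst + npp + mode + (1 | id).
newdat <- expand.grid(sst  = c(29, 30),
                      npp  = mean(days$npp, na.rm = TRUE),
                      mode = c("migration", "inter-nesting", "foraging"))

# Population-level predictions (random effects excluded) on the response scale
newdat$pred <- predict(fit_best, newdata = newdat, re.form = NA, type = "response")

# Difference between 30 and 29 degrees C within each mode
with(newdat, tapply(pred, mode, diff))
```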
When we examined dive frequency only during foraging periods, MODE, SST and NPP (with an interaction term for MODE and SST) were still the best explanatory variables for the frequency of dives to the bottom, which presumably represents foraging behavior (S4 Table). The tagged turtles tended to dive to the bottom more frequently under higher SST (foraging: t = 10.29, p < 0.01; inter-nesting: t = 6.55, p < 0.01) and higher NPP (t = 17.42, p < 0.01; Fig 3). The rates of change vary depending on NPP and SST, but with NPP held at the average value, for each increase in SST by 1˚C (from 29 to 30˚C) there was an increase of 0.56 and 1.35 bottom dives per day for inter-nesting and foraging, respectively (S5 Table). Frequency of long dives During all modes, long dives (> 30 min) and short dives occurred at about the same rate. The frequency of dives was highest in the 40 min bin (30-40 min) for all three modes, but there was also a peak in the >60 min bin for foraging mode (Fig 4A). For two turtles tracked over winter (21 December-21 March; turtles 119944 and 106361), we found that about half of their winter dives occurred in the >60 min bin (Fig 4B). The frequency of long dives was best explained by the three-variable model, which included MODE, SST, and NPP (S4 Table). Compared to the migration periods, turtles tended to dive for long periods less frequently during inter-nesting (t = -2.21, p = 0.03) and foraging (t = -5.42, p < 0.01) periods. The frequency of long dives significantly increased when both SST (t = 4.77, p < 0.01) and NPP (t = 8.4, p < 0.01) were higher (Table 1; Fig 3). As before, the rates of change vary depending on NPP and SST values; with NPP held at the average value, for each increase in SST by 1˚C (from 29 to 30˚C) there was an increase of 0.30, 0.29, and 0.30 long dives per day for migration, inter-nesting and foraging, respectively (S5 Table). Time-at-depth Turtles spent most of their time between the surface and 30 m. Across all modes, turtles spent 10% of their time at the surface (0-1 m; by mode: 10% in inter-nesting, 14% in migration, 9% in foraging). Time in the deepest bins was rare, with only 2% of time in the 100-150 m bin and less than 1% in the >150 m bin (Fig 2B). Across modes, turtles spent the highest proportion of time between 5 and 20 m during inter-nesting and migration, and between 20 and 40 m for foraging (Fig 2B). TAD showed a deviation during winter from the rest of foraging behavior. Instead of time being spent about equally among the three bins covering 10-40 m, in winter 26% (106361) to 68% (119944) of the time was spent between 30-40 m, with much less time in the 10-30 m categories. The results indicated that none of the tested variables sufficiently explained the time-at-surface TAD variable; the null model minimized AICc for all TAD variables (S4 Table). Discussion Understanding how sea turtles use three-dimensional space and what factors are important to the expression of dive behavior is key to understanding sea turtle life histories. Our dataset and comprehensive look at dive behavior for 26 adult female loggerhead turtle satellite tracks within the northern GoM significantly add to what is known for loggerhead diving in this region. This is the first loggerhead dive study in the GoM to assess what oceanographic factors drive dive behavior across different behavioral modes and one of the first to provide detailed dive information for this DPS (see [17] for surface/bottom time for 5 females).
We found that loggerhead dive behavior varied with changing SST, NPP and behavioral mode. Dive behavior The deepest dive for our tagged turtles in the GoM was to 160 m, which falls within previously reported maximum dive depths for loggerhead turtles of >340 m, from off the coast of Japan [32]. While informative, maximum depths do not represent behavior for all individuals and/or for all times; therefore, it is also important to consider median or mean dive behavior. In her review of sea turtle diving, Hochscheid found that mean dive depths for loggerheads ranged from 5.2 to 54 m [2]. The proportion of dives for our tagged turtles, considering all modes, peaked in the 5 and 15 m bins, and turtles spent little time deeper than 50 m, which is consistent with these reported mean values. Hochscheid also reported maximum dive durations for loggerheads from 4.8 to 614.4 min and mean or median dive durations from 2.3 to 341 min [2]. GoM loggerhead dive durations were also within these ranges (most dives in the 30-40 min bin across modes). Overall, our turtles spent about 80% of their time between 0-30 m over their tracking periods. This is consistent with other studies looking at the proportion of time spent at depth. Two turtles from Japan spent most of their time between 0-25 m (0% and 35% of time at > 25 m depth [33]); six of seven loggerhead turtles from the Mediterranean (46-75 cm CCL) spent most of their time between 0-30 m [29]; and two loggerheads from the North Pacific (61-83 cm SCL) spent around 40% of their time within 1 m of the surface and almost no time deeper than 100 m [34]. For loggerhead turtles in the northern GoM, most dive behavior is conducted within the first 30 m of the water column. Foraging behavior has been described as consisting of a higher frequency of dives during the day with shorter durations as compared to night dives [26,35]. Also, it would be expected that loggerheads rest at 20 m or shallower based on lung-regulated neutral buoyancy [36,37]. We were unable to separate day and night diving because of tag settings, and so could not distinguish resting from foraging dives. Whether these turtles are choosing foraging and resting locations based on factors like distance to shore during inter-nesting or prey distribution during foraging, or whether bathymetry itself plays an important role in how they choose a location for each mode, is unclear. Using fine-scale data loggers that provide dive profiles or accelerometer data, instead of binned data, could help determine what behaviors turtles are engaged in at depth, such as resting or foraging. Two turtles tracked over winter showed an increase in longer dives (>60 min) to depths between 30-40 m. Maximum dive durations from other studies indicate that loggerhead turtles can spend long periods under water, and the longest recorded duration (614.4 minutes) was for a loggerhead in the Mediterranean during the winter [38]. Sea turtles exhibit metabolic depression at lower temperatures, thereby slowing use of oxygen reserves and allowing turtles to remain aerobic during long dives [39,40]. Although it has been suggested that sea turtles in some locations remain dormant (i.e. hibernate) at temperatures below 10˚C [41,42], recent studies suggest an alternative: that turtles undertake long dives paired with infrequent surfacing events during winter [2,43].
Unfortunately, the tag settings for our GoM data topped out at a bin value of > 60 min, so it is unclear whether these turtles stayed under for hundreds of minutes at any time; however, other loggerheads in the GoM have stayed under for > 4 hr during winter [44]. Future studies with more turtles and a wider range of dive duration bins, or dive profile data instead of binned data, could help determine the extent to which loggerheads in the GoM display over-wintering behavior; profiles provide more detailed information for individual dives (e.g. [45]). Additionally, change-point analysis incorporating dive data with location data to determine behavioral mode has recently helped predict a shift into wintering behavior for loggerheads in the Mediterranean [30]. SST, NPP, and behavioral mode Of 70 sea turtle dive studies reviewed, only 12 addressed the relationship between temperature and diving [2], despite the obvious importance of temperature in sea turtle physiology. For loggerheads in particular, studies have generally found longer dives in colder temperatures [29,32,46] and deeper dives in warmer temperatures [5,32]. We found that SST was an important factor for predicting the frequency of (1) diving, (2) diving to the seafloor, and (3) long dives. Overall, higher SST was associated with more frequent diving. Increasing temperatures could affect turtles both directly and indirectly. For example, it is possible that because turtles are ectotherms, they are simply more active in warmer waters, or they use differing water temperatures for thermoregulation [2]. However, in all models, NPP was also an important predictor for dive behavior, indicating that SST alone does not drive the dive behavior of sea turtles. While the idea that SST can indirectly affect sea turtle dive behavior through prey distribution is not new [2], our results explicitly showed that NPP was an important predictor for adult loggerhead dive behavior. As adult loggerhead turtles primarily eat benthic invertebrates [47], we used NPP as a proxy for food availability. Most primary production in the southwestern GoM eventually becomes detritus, and benthic invertebrates play a large role in consuming it, consequently making it available for higher trophic levels [48]. We found that higher NPP was associated with an increase in the frequency of long dives and the frequency of bottom dives. With more NPP (and presumably more food available), this could reflect foraging behavior in which turtles spend longer periods foraging for prey on the seafloor. Lower NPP was associated with more dives per day overall, which could be a result of more searching in areas with less productivity. Due to possible shifts in dive behavior over time, comparisons between specific behavioral modes may be more informative than broad comparisons across entire tracking periods, and our results support this, as we found significant differences in diving across modes. Notably, for the frequency of bottom dives, the top model included an interaction between behavioral mode and SST, indicating that changes in diving with SST are ultimately dependent on the behavior a turtle is primarily engaged in, such as transiting, foraging and/or resting. Long dives occurred less often during inter-nesting or foraging as compared to migration, which indicates that turtles dived for > 30 min more often during migration, perhaps while resting or swimming.
Occasional very deep dives during transit phases for juvenile loggerheads were found in the Indian Ocean [27], with speculation that this could be due to prey searching, predator avoidance or navigation. We found similar behavior, as the majority of dives deeper than 45 m took place during migration. More long dives during migration may be a result of an increase in much deeper dives. However, this result may also be due to a lack of observations during winter at foraging grounds, when longer dives would be expected and were indeed observed for the two turtles tracked into winter. Conservation and management GoM loggerhead turtles spent around 10% of their time at the surface (up to 1 m). To complement nest numbers, visual population counts of turtles at in-water sites are important for estimating the true number of turtles, including males, which are not counted on the beach. Counting sea turtles visually, either from ships or during aerial surveys, is an important component of National Marine Fisheries Service (NMFS) population assessment goals for sea turtles (www.st.nmfs.noaa.gov). This dive information can be used for aerial correction factors for NMFS as they 'correct' for a turtle's time below the surface during their aerial surveys [49,50]. Visual population estimates rely on accurate knowledge of the proportion of time that turtles spend in the top portion of the water column. The current Gulf of Mexico Marine Assessment Program for Protected Species (GoMMAPPS; https://www.boem.gov/GOMMAPPS/) will utilize time-at-surface information from these and other turtles to correct aerial survey point counts of turtles. Dive data are also important for deciphering how and where turtles may interact with anthropogenic factors, which may pose multiple threats to turtles in the GoM [51]. Commercial fishing occurs throughout the GoM, especially in neritic areas where loggerhead turtles take up residence [51]. For example, loggerhead bycatch in the GoM bottom longline fisheries has been a concern, and studies show loggerheads overlap with these fishing areas, which present the threat of hooking or entanglement on the seafloor [16]. Knowing which depths turtles use most in each season or behavioral mode can help inform policy decisions for managers looking to reduce bycatch and conserve turtle populations. In this study we found that during inter-nesting and foraging modes, turtles dived to the bottom almost every day. The information we present here on the depths used by loggerheads during inter-nesting, migration and foraging modes should therefore be useful for local management decisions where loggerheads and bottom longline fisheries overlap. Conclusion We found that SST and NPP were important for all the dive behaviors we measured. The degree to which changes in SST and NPP will affect the biology of sea turtles will depend on many factors beyond the number of additional dives, such as the depth and duration of additional dives (i.e. the energetic cost of each dive), the success in capturing prey, the type and density of prey consumed, and the water temperature both at the surface and at depth. Ultimately, the physiological cost of each foraging dive will be a balance between energy expended and energy consumed in the form of prey. The changes in dive behavior we found across SST and NPP values may reflect shifting strategies that keep net energy intake relatively constant.
However, we were limited in our ability to decipher what behavior turtles were engaged in at depth (e.g., resting or foraging), and had no knowledge of prey consumed, so we were unable to determine whether changes in diving represent shifting strategies or an increased physiological cost. The behavioral mode behind each dive must also be considered. For example, during inter-nesting, loggerhead turtles may remain at certain depths to save energy for egg maturation [52], and so energy trade-offs during this time may have little to do with prey availability and more to do with temperature. To understand costs associated with different behaviors and additional dives, accelerometer data on sea turtle energetics during dives [53] could be linked to oceanic conditions. Individual turtle variability likely also plays an important role in diving behavior, which may partially explain why SST, NPP and behavioral mode did not successfully explain time spent at the surface. Surfacing behavior can vary depending on turtle activity; for example, green turtles (Chelonia mydas) took fewer surface breaths after foraging than after resting, indicating that they modified their surfacing behavior based on the goal of the dive [54], and loggerheads may alter surface times to absorb solar radiation or recover from long dives [5]. Deeper dives may also show variability by individual. From video obtained through remotely operated vehicles, some loggerheads (2 of 73 filmed) in the U.S. Mid-Atlantic were found to feed on pelagic gelatinous prey before diving to the sea bottom, while others only fed at the benthos [55]. Stable isotope studies on 15 loggerheads sampled on the east coast of Florida show that individual loggerheads specialize; however, considered together, the group was generalist [56]. The variation in prey choices across adults, and also across loggerhead life stages [57], would affect both the energetic gains received from prey and the costs of capturing it, in terms of diving depths and durations as well as the difficulty of capture. Even without knowledge of the specific energetic costs of these additional dives to the turtles, establishing a link between loggerhead diving and both SST and NPP may have important ramifications. Prey availability is important for turtles but difficult to measure. Here we show that SST and NPP in the environment, both of which can be more easily measured than prey abundance, may be important for sea turtle biology. Further, how SST and NPP interact is of interest: global increases in SST are linked to declining ocean productivity [58], which may affect prey distribution and abundance and could ultimately cause turtles to show more searching and diving. Having baseline information on diving in relation to temperature and productivity could be helpful for detecting changes in sea turtle dive behavior as ocean surface temperatures are predicted to rise in the future [59]. Supporting information. Fig. After this inter-nesting period, it then began transiting (migration; red triangles) to foraging grounds (area-restricted search; purple squares) where it remained from 18 August to at least 15 October. It is possible that this turtle stopped to nest again, and this went undetected; the turtle visited the FL coast twice on its journey to its foraging grounds, however we were unable to confirm any nesting activity with high-quality locations or sightings on the beach. Table. The change in dive metric by behavioral mode in relation to sea surface temperature (SST) and net primary productivity (NPP).
The change is calculated for both the coldest SST and average SST values recorded (over a 1˚C change) and the lowest and average values of NPP (over a 500 mg change). Dive metrics include the frequency of all dives, the frequency of bottom dives and the frequency of long dives. For SST, NPP was held constant at the average value (1,763 mg C m-2 day-1). For NPP plots, SST was held constant at the average value (29˚C). In the table "mg C" is measured per square meter per day. Acknowledgments. ...Stephens, D. Tafoya, and J. Vinci. We thank the USFWS interns from Bon Secour NWR. Research activities were permitted under permits FLFWC-MTP 094, Bon Secour National Wildlife Refuge Special Use Permit 12-006S (issued to KMH), Federal U.S. Fish and Wildlife Permit TE206903-1 (issued to J. Phillips). We acknowledge the use of the satellite-tracking and analysis tool (STAT) and telemetry data generated as part of the Deepwater Horizon Natural Resource Damage Assessment (publicly available from www.seaturtle.org). Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.