Hypothermia Presenting in Wernicke Encephalopathy: A Case Report Wernicke encephalopathy (WE) is a neurologic disorder characterized by clinical symptoms such as nystagmus, ataxia, and mental confusion. Hypothermia in patients with WE is a rare complication, and its pathogenic mechanism and therapy are yet to be ascertained. Herein, we present a case of a 61-year-old man who was diagnosed with WE 3 months earlier. We investigated the cause of hypothermia (35.0°C) that occurred after an enema (bowel emptying). Brain magnetic resonance imaging revealed mammillary body and hypothalamus atrophy. In the autonomic function test, the sympathetic skin response (SSR) test did not evoke SSR latencies on either hand. In addition, abnormal orthostatic hypotension was observed. Laxative and stool softener medication were administered, and his diet was modified, which led to an improvement in constipation after 2 weeks. Moreover, there was no recurrence of the hypothermic episode. This is the first reported case of late-onset hypothermia secondary to WE. INTRODUCTION After oral administration of thiamine 300 mg/day, symptoms showed some improvement, but memory disturbance persisted. The patient was admitted to the rehabilitation unit. At admission, he showed severe constipation, with defecation 1 or 2 times a week. We prescribed laxative and stool softener medicine, but his constipation did not improve. Hence, a glycerin enema was conducted. After the enema, the patient experienced a chilling and sweating sensation for 1-2 minutes, and his body temperature dropped to 35.0°C. During the hypothermic episode, blood pressure, pulse rate, and respiration rate were 110/70 mmHg, 76/min, and 20/min, respectively, and the blood sugar level was in the normal range. Hypothermia showed improvement through passive rewarming (blanket covering) and active rewarming (intravenous heated fluid). To investigate the cause of hypothermia, we reviewed his medical history, which showed no hypothermia-related medication. His thiamine level and thyroid function test were normal. In addition, there were no infectious signs. Brain magnetic resonance imaging (MRI) was performed. An axial fluid-attenuated inversion recovery T2-weighted image showed diffuse 3rd ventricle enlargement with hypothalamus atrophy (Fig. 1). A sagittal T1-weighted image showed atrophy in the mammillary body and hypothalamus, the known thermoregulatory center (Fig. 2). In the autonomic function test, the sympathetic skin response (SSR) was obtained with a Sierra Wave EMG system (Cadwell Laboratories Inc., Kennewick, WA, USA); the SSR test did not evoke SSR latencies on either hand. Abnormal orthostatic hypotension was observed, with a systolic blood pressure drop of up to 39 mmHg caused by sudden rising from the recumbent position (Fig. 3). Despite medication for severe constipation, his symptoms did not improve. In order to prevent a second hypothermic episode, we performed a warm (36.0°C) glycerin enema. Room temperature was maintained at 24.0°C. Nevertheless, a second hypothermic episode occurred. We added more laxative and stool softener medicine and modified his diet to include more fiber. Consequently, his constipation improved after 2 weeks, and there was no recurrence of the hypothermic episode. DISCUSSION Hypothermia is a rare complication in WE [3][4][5]. In a study by Victor et al. [3], only 2 cases (0.8%) showed symptoms of hypothermia among 245 patients with WE. Philip and Smith [6] reported 3 cases of hypothermia among WE patients, and in a study by Harper et al. [4], hypothermia occurred in 5 cases (approximately 4%) of 131 patients with WE. While all the above studies showed hypothermia at an early stage of WE, hyperthermia could appear in the late stages of WE [2]. However, there are no previous reports on late-onset hypothermia at 3 months after the treatment for WE. The mechanism of hypothermia in WE remains unclear. It might result from dysfunction of the posterior hypothalamus, which is the center of temperature regulation [2]. The hypothalamus is responsible for thermoregulation in humans, and the autonomic nervous system (ANS) and endocrine system are responsible for the initial and late stages of thermoregulation, respectively [7]. Dysfunction of the ANS has been verified in cases of WE [5], and ANS dysfunction could be a cause of hypothermia. Jung et al. [8] reported impaired ANS responses in patients with chronic alcoholism compared to a non-alcoholic group, based on ANS tests (SSR, heart rate variability). Likewise, our case also showed an abnormal SSR response and orthostatic hypotension. The hypothermic episode appeared after glycerin enema treatment for severe constipation. We checked the room temperature, medical history, and other laboratory findings, such as the thyroid function test and infection markers, to exclude the possibility of hypothermia due to other causes. Brain MRI in our patient indicated hypothalamus and mammillary body atrophy. Based on these evaluations, the cause of hypothermia was abnormal function of the ANS and atrophy of the hypothalamus, which is responsible for thermoregulation. Hypothermia is associated with high mortality rates in WE. Ackerman reported seven cases of WE with hypothermia [9]. In the acute state of WE, four patients were treated with thiamine, of whom three survived and returned to normal body temperature. Three of the seven cases were not treated with thiamine, and all died. Contrary to the previous case reports, despite adequate thiamine treatment in the acute state and maintenance of oral thiamine in the chronic state, we observed a late-onset hypothermic episode. Thus, further study is needed to determine the cause or mechanism of hypothermia in chronic WE patients. This case report has several limitations. First, we could not measure the core temperature. It was difficult to measure the rectal temperature due to the occurrence of hypothermia post-enema; therefore, the axillary temperature was measured instead. Second, among the autonomic function tests, heart rate variability was not measured due to the deterioration of the patient's cognitive function, and the deep breathing and Valsalva maneuver tests were unsuccessful. In conclusion, to our best knowledge, this is the first reported case of late-onset hypothermia secondary to WE. In the case of a hypothermic episode in the chronic state of WE, evaluation with autonomic function tests or a brain imaging study to detect lesions of the hypothalamus is recommended. If other causes of hypothermia, including endocrine problems, medication, a cold environment, or infection, are ruled out, minimizing hypothermia-inducing factors and active clinical management are required.
Satisfaction Levels in Doctors About Workplace Environment During the COVID-19 Pandemic: Experiences from Tertiary Hospitals in Dhaka, Bangladesh Background: The satisfaction level of doctors regarding the workplace environment reflects both the psychological and the physical environment. One of the many challenges during the COVID-19 pandemic was adapting to a steadily changing working environment and developing a proper one. Objectives: The objective of this study was to evaluate the level of satisfaction of doctors regarding the workplace environment. Methods: A cross-sectional study was conducted from January to December 2020. A total of 217 conveniently selected doctors working at selected tertiary hospitals in Dhaka city were interviewed using a pretested, structured questionnaire. The Minnesota Satisfaction Questionnaire (MSQ) was used to assess the level of job satisfaction on a 5-point Likert scale consisting of 20 items. The percentile score was used to categorize the respondents as highly satisfied (75 and above), averagely satisfied (26 to 74), and dissatisfied (below 25). Bivariate and multivariate analyses were performed. Results: Among the 217 respondents, the total mean MSQ score regarding job satisfaction was 3.62±0.23. About two-thirds of the respondents (63.1%) reported an average level of satisfaction. More than two-thirds of respondents (69.6%) expressed high satisfaction regarding the physical work environment, while the majority of respondents (93.1%) expressed high satisfaction with the psychosocial work environment. However, no significant association was found between the outcome and input variables (p>0.05). Conclusion: The study findings showed that satisfaction regarding the psychological environment was higher among the respondents than that regarding physical working conditions. Evidence-based measures should be adopted in hospitals to achieve the optimum level of satisfaction among doctors during pandemics. Introduction A poor work environment in hospitals may hamper work performance and promote pessimistic attitudes towards patients and colleagues. Job satisfaction has proven to be an important factor influencing productivity and is of great interest to healthcare organizations [1]. To survive in the medical market, it is important to provide patients with quality service. This is successfully achieved when a country can ensure the satisfaction of its healthcare workers [2]. During the COVID-19 pandemic, a tremendous increase in workload, strict protocols for maintaining social distancing, a lack of availability of PPE and adequate equipment, and a diminished sense of fulfillment all led to a high level of dissatisfaction and impacted job satisfaction. In the health market, the satisfaction of doctors influences their performance. Job satisfaction can be measured by a favorable working environment, workload, payment, and job security [3]. The COVID-19 pandemic has significantly influenced the health system worldwide. The working life of doctors continues to change significantly and will change further in the next few years [4]. In the health environment, doctors' satisfaction is directly related to quality of service and patient satisfaction. Patients are primarily affected by their interactions with doctors. If doctors are not satisfied, the results can be tragic [5].
Job satisfaction is also important to the future recruitment of new doctors and the retention of existing doctors, in addition to the productivity and quality of the services provided by the doctors, who are an essential and integral component of our medical care system [6]. To meet the standard quality of care, doctors need a working environment that allows them to work freely, without problems that may restrain them from performing to their full potential [7]. Doctors can provide high-quality healthcare services to patients when they are respected internally and satisfied with their work environment [8]. This study aimed to assess how satisfied doctors were with their work environment to deliver quality services during the COVID-19 pandemic. Materials And Methods A cross-sectional study was conducted from January 1 to December 31, 2020, to assess the satisfaction level of doctors regarding their work environment in selected tertiary hospitals (Dhaka Medical College and Hospital (DMC), Sir Salimullah Medical College and Hospital (SSMC), and Shaheed Suhrawardy Medical College and Hospital (ShMCH)) during the COVID-19 pandemic. A non-probability convenience sampling technique was used to collect data from 217 respondents (doctors of the selected tertiary hospitals). The sample size was calculated using the formula n = z²pq/d²; the calculated sample size was 198, and to minimize the effect of dropout we increased it by 10%, giving a final sample size of 217. The data collection was carried out using a pretested structured questionnaire through face-to-face interviews after obtaining informed written consent from each participant. The questionnaire was composed to collect data regarding the physical environment, psychological environment, personal satisfaction, and job security, along with the socio-demographic information of the respondents. The Minnesota Satisfaction Questionnaire (MSQ) was used to assess the level of job satisfaction. The questionnaire included 20 items, each referring to a reinforcement in the work environment. Five response alternatives were presented for each item: very dissatisfied, dissatisfied, neither satisfied nor dissatisfied, satisfied, and very satisfied. Previous research yielded excellent coefficient alpha values (ranging from 0.85 to 0.91) that have made the MSQ scale a well-known and stable instrument over time. The percentile score was used to categorize the respondents as highly satisfied (75 and above), averagely satisfied (26 to 74), and dissatisfied (below 25). After completion of data collection, to maintain consistency, the data were checked and edited manually and verified for any omission, error, or irrelevance before tabulation. Data were coded, entered, and analyzed on a laptop using SPSS Statistics version 23 (IBM Corp., Armonk, NY, USA). Bivariate and multivariate analyses were performed to identify any association between the outcome and input variables. The findings of the study are presented by frequency and percentage in tables and graphs. The means and standard deviations for continuous variables and frequency distributions for categorical variables were used to describe the characteristics of the total sample. The privacy and confidentiality of each participant were strictly maintained. Ethical approval was obtained from the Institutional Review Board (IRB) of the National Institute of Preventive and Social Medicine (NIPSOM), Dhaka, Bangladesh (approval no. NIPSOM/IRB/2020/1225).
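To make the sampling arithmetic and the percentile categorization concrete, here is a minimal sketch in Python. The paper reports a calculated size of 198 inflated by 10% to 217, but does not state the z, p, and d values it used, so the inputs below are illustrative assumptions rather than the authors' reported values:

```python
import math

def cochran_sample_size(z: float, p: float, d: float) -> int:
    """Cochran's formula n = z^2 * p * q / d^2, rounded up."""
    q = 1.0 - p
    return math.ceil(z ** 2 * p * q / d ** 2)

def satisfaction_category(percentile_score: float) -> str:
    """Categorization used in the paper's percentile scoring."""
    if percentile_score >= 75:
        return "highly satisfied"
    if percentile_score >= 26:
        return "averagely satisfied"
    return "dissatisfied"

# Illustrative inputs (z for 95% confidence; p and d are assumptions):
n = cochran_sample_size(z=1.96, p=0.5, d=0.07)
n_with_dropout = math.ceil(n * 1.10)  # 10% inflation for anticipated dropout
print(n, n_with_dropout, satisfaction_category(63.1))
```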
Results This study's findings showed that half of the respondents (50%) were from the age group 31 to 35 years, and the mean (±SD) age was 33.2 (±4.2) years. About three-fifths of the respondents (58.1%) were male. Most of the respondents were Muslims (85.3%) and married (84.3%). The participants had different educational qualifications; among them, 17.5% had only an MBBS degree, while the others had additional degrees (Table 1). About half of the respondents (47.5%) had working experience of six to 10 years (Figure 1). FIGURE 1: Working experience of the respondents in years (n=217) Our findings showed that mean MSQ scores were similar across the different work experience groups. The respondents with the highest mean (±SD) score, 3.68 (±0.14), had working experience of 16 to 20 years. To detect statistical significance, an ANOVA test was done; the differences were statistically non-significant (p>0.05) (Table 2). According to the study findings, the highest mean (±SD) MSQ item score, 4.68 (±0.62), was obtained for activity. The lowest mean (±SD) was 2.19 (±1.0), for working conditions. The general mean (±SD) MSQ score was 3.62 (±0.23) (Table 3). Around two-fifths (41.9%) of the respondents worked for eight hours (Figure 2). FIGURE 2: Working hours of the respondents (n=217) The mean MSQ scores were also compared across the three hospitals; the highest mean (±SD), 3.65 (±0.21), was found in SSMC (Table 4). About two-thirds of the respondents (63.1%) reported an average level of satisfaction. More than two-thirds of respondents (69.6%) expressed high satisfaction regarding the physical work environment, while the majority of respondents (93.1%) expressed high satisfaction with the psychosocial work environment (Table 5). Around 79.7%, 79.3%, 59.4%, and 58.6% of the respondents reported being satisfied with the rest room, the noisy/overcrowded working place, the diagnostic equipment, and cleanliness, respectively. Satisfaction regarding the availability of PPE was 24.9% (Figure 3). FIGURE 3: Satisfaction regarding workplace facility (n=217) The majority of respondents (81.6%) were satisfied with job retention. One-third of the respondents (33.1%) were satisfied with their salary, and 10.1% of the respondents were satisfied with the health insurance facility (Figure 4). FIGURE 4: Personal satisfaction of the respondents (n=217) In response to a question regarding security services for doctors in hospitals, 34 … (TABLE 7: Association between age and health insurance for doctors). The mean MSQ score regarding job satisfaction is almost similar in the different age groups. Among the respondents, the highest mean ± SD score, 3.66 ± 0.23, is found within the age group 36 to 40; the lowest, 3.61 ± 0.24, within the age group up to 30 and, similarly, in the 31 to 35 age group; respondents above 40 years of age scored 3.65 ± 0.19. The total mean MSQ score regarding job satisfaction across the age groups is 3.62 ± 0.23. To see the impact of different age groups on job satisfaction, a one-way ANOVA test was done. The differences were statistically non-significant (p=0.686) (Table 8). The mean MSQ scores of males and females are almost similar: in the male group, the mean ± SD is 3.61 ± 0.24, and in the female group it is 3.62 ± 0.23. To see the impact of gender variation on job satisfaction, an unpaired t-test was done, and the result was found to be non-significant (p=0.860) (Table 9).
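To illustrate the two significance tests used throughout these tables, here is a minimal sketch with scipy: a one-way ANOVA across experience groups and an unpaired t-test by gender. The respondent-level data are not published, so the arrays below are synthetic, drawn to roughly mimic the reported group means and SDs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-respondent MSQ scores by experience group (cf. Table 2):
groups = {
    "0-5y":   rng.normal(3.63, 0.23, 40),
    "6-10y":  rng.normal(3.59, 0.24, 103),
    "11-15y": rng.normal(3.67, 0.23, 40),
    "16-20y": rng.normal(3.68, 0.14, 20),
}
f_stat, p_anova = stats.f_oneway(*groups.values())

male = rng.normal(3.61, 0.24, 126)    # ~58.1% of 217 respondents
female = rng.normal(3.62, 0.23, 91)
t_stat, p_ttest = stats.ttest_ind(male, female, equal_var=True)

print(f"ANOVA:  F={f_stat:.2f}, p={p_anova:.3f}")
print(f"t-test: t={t_stat:.2f}, p={p_ttest:.3f}")
```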
The mean MSQ scores for the different marital statuses are almost similar: the mean ± SD in the married and unmarried groups is 3.62 ± 0.24 and 3.61 ± 0.21, respectively. To find out the statistical significance, an unpaired t-test was done; the differences are statistically non-significant (p=0.852, non-significant as p>0.05) (Table 10). The mean MSQ score regarding job satisfaction in the different work experience groups was also found to be almost similar. The respondents with the highest mean ± SD, 3.68 ± 0.14, have working experience of 16 to 20 years. The second highest mean ± SD, 3.67 ± 0.23, was for the group with 11 to 15 years of work experience. The group with work experience of up to five years has a mean ± SD of 3.63 ± 0.23. Next, the respondents with a mean ± SD of 3.59 ± 0.24 were those with working experience of six to 10 years. The respondents with the lowest mean ± SD, 3.47 ± 0.20, have work experience above 20 years. To detect the statistical significance, an ANOVA test was done. The differences were statistically non-significant (p=0.322). Discussion This cross-sectional descriptive study focused on evaluating the satisfaction of doctors regarding the working environment in selected tertiary hospitals. The study showed that half of the respondents (50%) were in the age group of 31 to 35 years. Another study mentioned that among the healthcare professionals of combined military hospitals in Bangladesh, more than two-fifths (41.1%) were in the age group of 26 to 35 years [9]. About three-fifths of the respondents (58.1%) were male. Most of the respondents (85.3%) were Muslims, and this is a reflection of the religious culture of Bangladesh. The study also showed that a maximum of 84.3% of respondents were married. Age, gender, and marital status have often been studied as the underlying variables of job satisfaction. According to this study, no significant association was found between job satisfaction and socio-demographic profile (p>0.05). Earlier research has found a significant relationship between job satisfaction and age, gender, and marital status, while other studies have discarded any such relationship [10]. In this study, 47.5% of respondents had work experience ranging from six to 10 years. The respondents with the highest mean (±SD), 3.68 (±0.14), had working experience of 16 to 20 years, though no significant association was found between working experience and job satisfaction (p>0.05). A study by Altuntaş reported lower job satisfaction in research assistants, assistant professors, and instructors with less than 10 years of work experience and in instructors working on their PhD theses or doing contract work [11]. According to our study findings, two-fifths (41.9%) of the respondents worked for eight hours, more than one-third (34.1%) worked for 10 hours, and about a quarter (24%) of the doctors worked for 12 hours. In another study, Suozzo argued that irrespective of the number of working years and work experience, working in excess of 60 hours per week (even once) as a faculty member reduces mean job satisfaction [12]. Nikic et al. conducted a survey at the Clinical Center Nis in Serbia, which showed that most health workers found their job stimulating and interesting, but that they worked very hard [13]. Increased workload may result in a higher level of dissatisfaction.
When questioned about their satisfaction with the extra workload brought on by the COVID-19 crisis, two-fifths (40.1%) of respondents said they were dissatisfied, and more than one-tenth (12.9%) of the respondents said they were highly dissatisfied. However, no significant association was found between working hours and job satisfaction (p>0.05). A satisfaction score was also calculated for each of the 20 items on the MSQ scale. The highest mean (±SD), 4.68 (±0.62), was for activity, and the lowest mean (±SD), 2.19 (±1.0), was for working conditions. The general mean (±SD) was 3.62 (±0.23). Another study showed that the average MSQ score of all surveyed doctors was 3.11±0.87; the three highest-scored items for doctors were the way company policies were put into practice, the working conditions, and praise for doing a good job [8]. Comparing the mean MSQ scores in the three hospitals (DMC, SSMC, ShMCH), the highest mean ± SD, 3.65 ± 0.21, was found in SSMC. DMC scored the second-highest mean of 3.61 ± 0.26, and ShMCH scored the lowest mean of 3.60 ± 0.23. About two-thirds of the respondents (63.1%) reported an average level of satisfaction regarding general working conditions. Out of 217 respondents in this study, less than one-third (30.4%) had high satisfaction, and more than two-thirds (69.6%) had average satisfaction regarding the physical working environment. The study showed that with regard to the psychosocial environment, the majority (93.1%) of doctors reported high levels of satisfaction, while 6.9% reported average levels of satisfaction. Another study conducted during the COVID-19 pandemic showed similar findings [14]. Evidence shows that multiple workplace-related factors affect the effectiveness of the people who work there. These can include the working space, water supply, electricity supply, ventilation system, cleanliness facility, infection control facility, availability of PPE, restroom condition, etc. An organization's overall productivity and the quality of individual work influence job satisfaction [15]. This study found that satisfaction regarding cleanliness, infection control, and working space was 58.6%, 49.3%, and 37.3%, respectively. When asked about the facilities available at the workplace, nearly three-fifths of the doctors (59.4%) felt that their workplace was poorly equipped and had scope for improvement. In this study, three-quarters (75.1%) of the doctors did not receive PPE from the hospital. To protect themselves and their patients from the transmission of germs and infectious diseases, PPE is essential in any pandemic event. Job design seeks to integrate means by which job characteristics can be changed, like workload, work variety, and workplace supervisory support, which will lead to enhanced worker satisfaction and hence increased performance [16]. In our study, two-thirds of the participants (66.9%) were not satisfied with their salary in relation to their workload. Doctors were expecting a handsome salary during the COVID situation due to the huge workload. In our study, regarding satisfaction with the workload, more than half of the respondents (53%) were dissatisfied.
Another study showed that a positive working environment, low fringe benefits, and poor salaries were the main factors behind the job satisfaction of health workers working in public-sector health organizations, while socio-demographic characteristics were found to have no significant relationship with their job satisfaction [17]. According to Abdullah et al., salary and benefits were key factors in employee satisfaction and turnover [18]. In our study, satisfaction regarding job retention, the right position, promotion, and health insurance was 81.6%, 63.6%, 46.5%, and 10.1%, respectively. Another study showed that the right position and promotion opportunities, benefits, health insurance, and rewards reflect job satisfaction among public healthcare workers in Pakistan [19]. Doctors were frontline fighters during COVID-19, and their participation in the fight against this pandemic was extraordinary. During the pandemic, doctors assumed a considerable risk in providing patient care, yet they were not provided with health insurance. The information gathered was challenging to obtain, since some of the participants were hesitant to share their genuine opinions. This study may serve as a basis for future studies in different hospitals with a wider scope. Conclusions The COVID-19 pandemic was a major public health threat. The role of doctors was crucial, and they served as frontline soldiers. Unsatisfactory working conditions restrict doctors from rendering their full capabilities. To fulfill the guidelines of care during COVID-19 or any pandemic situation, doctors need a workplace that permits them to work enthusiastically. This study revealed that about two-thirds of the respondents reported an average level of satisfaction. Satisfaction regarding the psychological environment was higher among the respondents than that regarding physical working conditions. To ensure that doctors are as satisfied as possible during pandemics and to provide quality care to patients, hospitals must use evidence-based interventions. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Institutional Review Board (IRB) of the National Institute of Preventive and Social Medicine (NIPSOM) issued approval NIPSOM/IRB/2020/1225. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Dollar Funding and the Lending Behavior of Global Banks A large share of dollar-denominated lending is done by non-U.S. banks, particularly European banks. We present a model in which such banks cut dollar lending more than euro lending in response to a shock to their credit quality. Because these banks rely on wholesale dollar funding, while raising more of their euro funding through insured retail deposits, the shock leads to a greater withdrawal of dollar funding. Banks can borrow in euros and swap into dollars to make up for the dollar shortfall, but this may lead to violations of covered interest parity (CIP) when there is limited capital to take the other side of the swap trade. In this case, synthetic dollar borrowing also becomes expensive, which causes cuts in dollar lending. We test the model in the context of the Eurozone sovereign crisis, which escalated in the second half of 2011 and resulted in U.S. money-market funds sharply reducing their exposure to European banks in the year that followed. During this period dollar lending by Eurozone banks fell relative to their euro lending, and firms that were more reliant on Eurozone banks before the Eurozone crisis had a more difficult time borrowing. INTRODUCTION A striking fact about international financial markets is that a large share of dollar-denominated intermediation is performed by non-U.S. banks. This point is illustrated in Figure I. Drawing on data from the Bank for International Settlements (BIS), the figure shows that both the dollar assets and the dollar liabilities of foreign banking entities have grown rapidly in the last two decades, and currently are on the order of $10 trillion, which puts them roughly on a par with U.S. banks (see also Shin, 2012). A significant part of this activity by foreign banks represents loans to customers located outside of the United States. However, foreign banks also play a major role in domestic U.S. markets. As we discuss in more detail below, European banks alone accounted for approximately 28% of the U.S. syndicated loan market over the period 2005-2007. [FIGURE I] The large footprint of global banks in dollar markets raises a number of questions. Some of these have to do with the dollar's role as a favored currency for transactions by non-U.S. residents and firms-e.g., why is it that a Brazilian manufacturer might prefer to borrow in dollars as opposed to reals? Others have to do with understanding the comparative advantage of foreign banks in lending to U.S. firms-e.g., why might an American manufacturer end up borrowing from, say, Credit Agricole as opposed to JPMorgan Chase? In this paper, we take the presence of global banks in dollar loan markets as given, and focus on its consequences for cyclical variation in credit supply across countries. In particular, we ask how shocks to the ability of a foreign bank to raise dollar funding affect its lending behavior, both in the U.S. and in its home market. This question is especially important in light of the observation that many foreign banks operate in the U.S. with a largely "wholesale" funding model. In other words, rather than relying in part on sticky insured deposits-as do domestic U.S. banks-foreign banks raise the majority of their short-term dollar financing from uninsured institutional sources, such as commercial paper purchased by U.S. money-market funds. 1 This makes the cost and availability of such dollar funding highly sensitive to changing perceptions of a bank's creditworthiness.
To understand how such shocks might affect lending activity, we build a simple model, which can be described as follows. Imagine a global bank based in France that lends in euros to European firms, and in dollars to American firms. To finance the euro-denominated lending, it funds itself by issuing insured euro deposits to its local retail deposit base. By contrast, to finance the dollar-denominated lending, it funds itself by issuing uninsured commercial paper to a set of U.S. money-market funds. Initially, the bank is viewed as having near-zero credit risk, so its lack of insurance in the U.S. market does not have an impact on its dollar funding costs. Now suppose that there is an adverse shock to the bank's perceived creditworthiness. Given the wholesale nature of its dollar liabilities (i.e., the lack of insurance), this leads to a spike in its dollar funding costs, as the money-market funds seek to cut their exposure to the bank. At the same time, the cost to the bank of funding in euros is unchanged, given the deposit insurance in that market. Said differently, as the bank becomes increasingly risky, the advantage of funding in euros relative to dollars goes up, since the former enjoys an increasingly valuable subsidy from the deposit insurance fund. So we might expect the bank to shift its funding away from the U.S. commercial paper market and back towards the European deposit market. But does this have any implications for the geographic distribution of its lending? At first glance, one might think that there would be none-i.e., a version of a capital-structure irrelevance proposition would hold. After all, if it wants to maintain the volume of its dollar-based U.S. lending, the bank can always tap its insured deposit base to raise more euros, use the proceeds to buy dollars, make the same dollar loans as before, and hedge out the foreign exchange (FX) risk using the forward market, by buying euros on a forward basis. 2 This logic is correct, so long as FX forward prices are pinned down by the usual covered-interest-parity (CIP) relationship. In this case, a shock of the sort described above alters the funding mix of the global bank, but leaves its lending behavior entirely unchanged. However, if the induced funding realignment is big enough, we demonstrate that it begins to put pressure on the CIP relationship. In other words, a large surge in the demand by the global bank for FX forwards, combined with limited capacity on the part of arbitrageurs, endogenously leads to a CIP violation such that synthetic dollar funding-composed of euro-based borrowing plus a currency swap-also becomes more expensive. Indeed, in an interior equilibrium with a high level of swap activity, synthetic dollar funding and direct dollar funding wind up being equally costly to the bank, and both more expensive than direct euro borrowing. Once this is the case, implications for the geographic pattern of lending follow immediately. Given the increased cost of dollar funding, the bank is forced to cut back on its supply of dollar loans, but does not face the same pressure to shrink its euro-denominated loan supply. So the key conclusion from the model is that, in the presence of limited arbitrage and an endogenous CIP violation, an adverse shock to the global bank's perceived creditworthiness leads to a drop in its dollar-denominated lending relative to its euro-denominated lending. 2. We are implicitly assuming that the bank is prohibited from taking on naked exchange-rate exposure, i.e., from borrowing in euros and lending in dollars without a hedge. We discuss this assumption in more detail below.
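To make the irrelevance argument concrete, here is a minimal numeric sketch of why synthetic and direct dollar funding cost the same when CIP holds. All rates and prices are made-up illustrative values, not figures from the paper:

```python
# Synthetic dollar funding under covered interest parity (CIP).
# With equal riskless rates and spot = 1, CIP pins the forward at 1
# and the two funding routes cost exactly the same.
r_usd, r_eur = 0.02, 0.02   # riskless rates (assumed equal, as in the model)
spot = 1.0                  # dollars per euro
forward_cip = spot * (1 + r_usd) / (1 + r_eur)  # CIP forward price

borrow_eur = 100.0
dollars_today = borrow_eur * spot            # swap euros into dollars at spot
eur_owed_at_maturity = borrow_eur * (1 + r_eur)
# Buy euros forward to repay the euro loan; dollar cost at maturity:
synthetic_dollar_cost = eur_owed_at_maturity * forward_cip
direct_dollar_cost = dollars_today * (1 + r_usd)
print(synthetic_dollar_cost, direct_dollar_cost)  # identical when CIP holds
```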
We then go on to test the model's implications. To do so, we focus on events that unfolded from May 2011 to June 2012, a period that captures well the sort of shock to global-bank creditworthiness envisioned in our model. During this period, the credit quality of a number of large Eurozone banks began to be a source of concern, with Moody's putting the French banks BNP Paribas, Credit Agricole and Societe Generale on notice for possible downgrades on June 15, 2011. In the face of these concerns, U.S. prime money-market funds sharply reduced their investments in Eurozone banks. Chernenko and Sunderam (2014) document that the total money-fund holdings of Eurozone bank paper declined by 37%, from $453 billion to $287 billion, between May and August of 2011. Fitch reports further declines through June 2012. 3 Starting in the second half of 2012, the Eurozone situation began to stabilize and money-fund holdings of Eurozone bank instruments started to rebound. Coincident with the contraction in dollar funding, there was a pronounced disruption in the dollar-euro CIP relationship, in the direction predicted by our theory. The "euro basis"-i.e., the deviation in the forward price of euros in terms of dollars, and hence in the cost of synthetic dollar borrowing-rose from negative 16 bps in April 2011 to a high of 73 bps in August, and continued to climb until reaching a peak of 96 bps in December 2011. Using loan-level data on international syndicated lending activity from Thomson Reuters' DealScan, we first show that during the period of dollar funding strain from May 2011 to June 2012, dollar lending by Eurozone banks fell relative to their euro lending, a pattern that differs sharply from that observed among U.S. banks. Next, as a control against possible confounding demand-side shocks, we construct a panel that allows us to incorporate borrower fixed effects. Using this approach, we find that during the period of dollar funding strain (the "shock" period), a syndicate formed to make a dollar-denominated loan to a given firm was less likely to be comprised of Eurozone banks than was a syndicate formed to make a loan to the same firm outside of the shock period. Thus our results cannot be explained by appealing to the idea that Eurozone and U.S. banks lend to different customers with different demand behavior. This shift away from dollar lending by Eurozone banks could in principle have been offset by increased lending by U.S. banks, in which case the loan-supply shock would have had no real effects on corporate borrowers. However, we show that this type of substitution was at best incomplete: firms that before the 2011 shock had borrowed in dollars from syndicates comprised largely of Eurozone banks were less likely to receive any loans at all once these banks faced dollar funding problems. And those borrowers in this group that did receive loans paid higher interest rates. These findings provide support for the view that lending relationships are important in the syndicated loan market, and that when those relationships break down there can be real consequences-echoing recent work by Chodorow-Reich (2014). Finally, in an effort to further isolate the mechanism in our model, we exploit the fact that Eurozone banks differ in the extent of their reliance on money-market funds. 3. "U.S. Money Fund Exposure and European Banks: Euro Zone Diverging," Fitch Ratings, January 26, 2012.
We document that, during the period of dollar funding strain, the tendency to cut back on dollar lending is more pronounced for the most money-fund-reliant Eurozone banks, as compared to their less money-fund-reliant counterparts. The bottom line of our analysis can be summarized as follows: Given limited arbitrage in FX forward markets, the wholesale dollar funding model typically employed by foreign banks-whereby they rely heavily on short-term uninsured sources of dollar finance-exposes their mix of lending activity to changes in perceived creditworthiness. In particular, adverse shocks to creditworthiness lead them to curtail their supply of dollar loans, relative to their supply of loans in their domestic currency. It is worth emphasizing that this is quite a different mechanism than the more familiar capital-crunch channel (as in Peek and Rosengren, 1997, 2000), according to which a global bank hit with a negative shock to its capital base might be expected to cut back on lending across the board, regardless of the currency in which the lending takes place. This paper fits into a large literature that studies how financing frictions shape bank lending behavior. A subset of this research focuses, as we do, on multinational banks and the role they play in transmitting various kinds of shocks across borders. In addition to the important early contributions by Peek and Rosengren (1997, 2000), recent research includes Acharya and Schnabl (2010), Chava and Purnanandam (2011), Schnabl (2012), and Cetorelli and Goldberg (2011, 2012a, 2012b). Our empirical results are closely related to those of Acharya, Afonso and Kovner (2013), and Correa, Sapriza and Zlate (2012). The former investigates the differential response of U.S. and foreign banks to the funding pressures created by the 2007 collapse of the asset-backed commercial paper market, and the latter focuses on the same 2011 European shock that we do. Particularly noteworthy are a pair of recent papers by Giannetti and Laeven (2012a, 2012b). These papers document a generalized "flight home" effect, whereby in periods of financial stress, global banks tend to reduce their lending share abroad relative to their lending share in their home-country markets. Although we focus on the currency-rather than the country-in which global banks lend, the two effects are likely related, and the mechanism that we propose may help to explain this general phenomenon. We discuss this connection in more detail below. There is also a smaller literature that analyzes the CIP violations that have cropped up intermittently since the onset of the financial crisis. These include Baba, Packer, and Nagano (2008), Coffey, Hrung, and Sarkar (2009), Griffoli and Ranaldo (2011), and Levich (2012). These papers discuss the frictions that prevent arbitrage from eliminating a CIP deviation once it emerges, but have less to say about what determines the direction and magnitude of the deviation in the first place. By contrast, in our model the CIP violation is an equilibrium outcome, and we show how it depends not only on the capital of arbitrageurs, but also on global banks' funding opportunities across dollar and non-dollar markets, and on the marginal product of their lending in each currency. That is, we connect CIP violations to the real side of the economy. The remainder of the paper is organized as follows. Section II presents the model.
Section III discusses our data sources and provides background information on the three critical components of our analysis: the role of Eurozone banks in syndicated lending in the U.S.; the dependence of Eurozone banks on dollar financing from U.S. money-market funds, along with the decline in money fund assets in the second half of 2011; and the violation of covered interest parity during that same period. Section IV describes our main empirical tests, which examine the impact of the money-fund shock on loan supply by Eurozone banks. Section V concludes. II. THE MODEL II.A. Basic Assumptions Our model considers a global bank B that has lending opportunities in both the U.S. and Europe. If the bank lends $L_{\$}$ in dollars, it earns an expected gross return of $g(L_{\$})$, where $g(\cdot)$ is a concave function; if it lends $L_{€}$ in euros, it earns an expected gross return of $h(L_{€})$, where again $h(\cdot)$ is a concave function. To keep the notation simple, we assume that riskless rates in the U.S. and Europe are both equal to $r$, and that the spot dollar/euro exchange rate is equal to one. The bank faces an overall capital constraint on lending, such that aggregate lending is capped by $L_{\$} + L_{€} \le K$. This constraint, which we assume binds in equilibrium, can be thought of as reflecting the combination of a regulatory capital regime, along with frictional costs to the bank of raising external equity finance (Myers and Majluf, 1984). We further assume that if the bank wishes to lend in dollars, it must effectively fund in dollars, and analogously for euro lending-i.e., it cannot take on any unhedged FX risk. We take this restriction as exogenous here, but it could easily be endogenized by appealing to the real-world fact that if a bank were to take on FX risk in this way, it would face an additional regulatory capital charge. 4 If the shadow value of the regulatory capital constraint is high enough, it will be optimal for the bank to conserve its scarce capital by avoiding any FX exposure. The bank has a probability p of default. We assume that if the bank defaults, all of its loans in both the U.S. and Europe turn out to be worthless, and it has no resources to pay any of its debts. Note therefore that if the bank earns an expected gross return of $g(L_{\$})$ on its dollar lending, it must be that the return accrues entirely in the non-default state. So it is more precise to say the bank earns a gross return of $g(L_{\$})/(1-p)$ with probability $(1-p)$, and zero otherwise. The same applies to its returns on euro lending. If the bank borrows from European depositors and it defaults, these depositors are made whole by the government. Hence the rate that the bank pays on European borrowing, $r_{€}$, is the riskless rate: $r_{€} = r$. Said differently, there is a government subsidy associated with European-sourced euro borrowing, and this subsidy is an increasing function of the default probability p. However, in order to attract incremental deposits the bank has to pay an adjustment cost that is convex in the amount of deposits above some threshold, $X > 0$, so that borrowing $D_{€}$ euros costs $(1+r)D_{€} + \frac{c}{2}\max(D_{€} - X,\, 0)^2$ for some convexity parameter $c$. This assumption is meant to capture the idea that the bank cannot immediately expand its retail deposit base beyond some pre-existing baseline scale (given by X) at no cost. 4. Under the current regulatory framework, increased exposure to FX risk is costly to the bank. This treatment of FX risk dates back to Basel I. In a study conducted in the context of the Basel I discussion, FX risk was identified as one of the fundamental risks: "There are many activities of banks which involve risk-taking, but there are few in which a bank may so quickly incur large losses as in foreign exchange transactions." (www.bis.org/publ/bcbs00e.htm.)
Rather, to expand it has to invest in advertising, promotions, and branches, and the more it expands in the short run the greater are the marginal costs of adding deposits. While depositors in Europe are insured, if the bank borrows in the U.S. market, its creditors are only partially insured. Specifically, we assume that in expectation, U.S.-based lenders to the bank are only bailed out on a fraction (1 - α) of their losses in the default state. As a result, the rate that the bank pays on U.S. borrowing, $r_{\$}$, is approximated by $r_{\$} = r + \alpha p$. A literal interpretation of the parameter α is that it reflects the fraction of the bank's dollar financing that comes from, say, uninsured commercial paper, as opposed to deposits that are either explicitly insured, or that benefit from some perception of implicit insurance. A less literal interpretation, but one that motivates our empirical work below, is that even among different providers of uninsured finance, some may be structurally "flightier" than others, and hence more sensitive to changes in bank creditworthiness. Money-market funds would seem to fit this description, given the run-like incentives created by their policy of allowing investors to redeem shares at a fixed value. We rely on this idea when we construct bank-level measures of α, associating higher values of α with those banks that raise more of their short-term funding from money-market funds. Note that the funding costs in dollars, unlike euros, are linear in the amount borrowed. This assumption is meant to reflect the idea that dollar borrowing is mainly in the wholesale market via institutions like money market funds and thus can more easily be expanded in a short period of time than European retail deposits. If $X$ is large enough that the bank never hits the convex part of its euro borrowing cost, then it would want to raise all its funding in Europe and enter into an FX swap to cover the dollar-denominated portion of its lending. This is because the marginal cost of borrowing is just $(1+r)$ in Europe, whereas it is $(1+r+\alpha p)$ in the U.S. For more moderate values of X the bank would equate the marginal cost of borrowing across the two locations, such that $\alpha p = c(D_{€} - X)$. However, this assumes that the FX swap market is frictionless (i.e., CIP holds), an assumption we drop in the next section. But in this case of a frictionless swap market, the bank's funding and lending decisions decouple from one another. Funding is done in the mix of currencies that minimizes funding costs, while lending activity in the two countries is pinned down by equating the marginal product of dollar lending to the marginal product of euro lending. And swap activity fills in the gap, by converting funding in one currency into the other as necessary. II.B. Limited Arbitrage and Deviations from CIP Frictions in the swap market can generate deviations from CIP and complicate the bank's borrowing and lending decisions. As we demonstrate, these deviations arise when the bank's swap counterparties have limited capital, and are required to use this capital to post margin in their swap transactions. As a benchmark, note that with interest rates being equal in the two countries, and with the spot exchange rate normalized to one, a simplified version of the CIP relationship-which would always hold with capital-unconstrained parties on both sides of the trade-is that the forward exchange rate must be equal to one as well.
In other words, denoting the dollar/euro forward rate for a transaction in a frictionless world by $F^{*}$, we have that $F^{*} = 1$. Now consider the case where the counterparty is a capital-constrained arbitrageur. Let $F$ be the forward price paid by the bank in this case. To pin down this price, we make two further assumptions. First, the arbitrageur has to set aside a haircut H when it enters the swap transaction; this can be thought of as the initial margin required as collateral for its position. 5 To keep things simple, we follow Garleanu and Pedersen (2011) and assume that this haircut is proportional to the size S of the swap position. So the haircut is given by $H = \gamma S$. Second, when the arbitrageur sets aside H for swap trading, he has to take it away from another productive activity-e.g., lending, or another arbitrage trade. This other productive activity has a net return given by f(I), where I is the amount invested. The arbitrageur has wealth of W, so his budget constraint is that $I = W - H$, or $I = W - \gamma S$. It follows that in an interior optimum where the arbitrageur is doing both activities, an equilibrium condition is that the expected excess return per unit earned on doing the swap, denoted Δ, must satisfy: (1) $\Delta = \gamma f'(W - \gamma S)$. A convenient simple case is where $f(I) = \theta \log(I) - I$, in which case we have that $\Delta = \gamma\!\left(\theta/(W - \gamma S) - 1\right)$. To simplify even further, we assume Δ is zero when there is no net demand for swaps, but as soon as there is net demand for swaps Δ becomes positive. This amounts to saying that $W = \theta$, i.e., that the arbitrageur has just enough wealth W to take advantage of all positive-NPV investment opportunities in his outside option project f(I), with nothing left over. With this restriction, (1) reduces to $\Delta = \gamma^{2}S/(W - \gamma S)$. The forward price paid by the bank is now given by: (2) $F = F^{*} + \Delta = 1 + \Delta$. We can now see the fundamental tension facing the bank. As its creditworthiness declines-i.e., as p goes up-it would like to increasingly fund its dollar lending with synthetic dollar borrowing, that is, by borrowing in euros and pairing this with an FX swap. However, as the magnitude of its swap position S grows, this puts increasing strain on the capital of the arbitrageurs who must take the other side of the trade, and hence creates a CIP deviation in which synthetic dollar borrowing becomes increasingly expensive-as reflected in the higher forward price that the bank must pay to buy back euros with dollars when its dollar loans mature at time 1. 5. We do not explicitly analyze the collateral posted by the bank, as opposed to by the arbitrageur. Instead, we just assume that the bank never defaults on its obligations under the swap contract, even if it does default on its short-term debt obligations. However, none of our main results are changed if there is a risk of default on the swap by the bank. This is because what matters for the bank in deciding how much swap activity to do is the premium it pays relative to the default-risk-adjusted actuarial value. Since this premium is a function of the arbitrageur's collateral constraint, and not the bank's, we focus on the former for the sake of clarity.
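A small sketch of the equilibrium basis just derived, under the log outside option and the assumption $W = \theta$: the function below implements $\Delta = \gamma^{2}S/(W - \gamma S)$ and the resulting forward $F = 1 + \Delta$. The parameter values are arbitrary illustrations:

```python
import numpy as np

def cip_basis(S: float, gamma: float, W: float) -> float:
    """Equilibrium CIP deviation Delta = gamma^2 * S / (W - gamma * S),
    valid while the arbitrageur's wealth constraint W > gamma * S holds."""
    assert W > gamma * S, "haircut exhausts arbitrageur wealth"
    return gamma ** 2 * S / (W - gamma * S)

gamma, W = 0.1, 1.0                 # illustrative haircut rate and wealth
for S in np.linspace(0.0, 8.0, 5):  # growing swap demand from the bank
    delta = cip_basis(S, gamma, W)
    print(f"S={S:4.1f}  Delta={delta:.4f}  forward F={1 + delta:.4f}")
```

As the loop shows, the basis (and hence the forward price) rises convexly in swap demand as arbitrageur capital is absorbed by haircuts.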
II.C. The Bank's Optimization Problem We are now ready to write down the bank's optimization problem. The bank's dollar-based lending is denoted by $L_{\$}$, and the amount of euro borrowing that it swaps into dollars is denoted by S. This implies that its total dollar borrowing, $D_{\$}$, is equal to $L_{\$} - S$, and its euro borrowing, $D_{€}$, is $L_{€} + S$. The bank's optimization problem is to choose $\{L_{\$}, L_{€}, S\}$ to maximize: (3) $g(L_{\$}) + h(L_{€}) - (1 + r + \alpha p)(L_{\$} - S) - (1 + r)(L_{€} + S) - \frac{c}{2}\max(L_{€} + S - X,\, 0)^{2} - \Delta S$, subject to the capital constraint that $K - L_{\$} - L_{€} \ge 0$. Here we are assuming that the parameters are such that the swap facilitates extra euro borrowing to fund dollar lending, not the other way around, i.e., $S \ge 0$. Below we will discuss the conditions under which this is the case. Rearranging terms, the objective function (3) can be rewritten as follows: (3′) $g(L_{\$}) - (1 + r + \alpha p)L_{\$} + h(L_{€}) - (1 + r)L_{€} - \frac{c}{2}\max(L_{€} + S - X,\, 0)^{2} + (\alpha p - \Delta)S$. The first four terms in (3′) represent the returns to lending in each currency net of the associated linear funding costs; the remaining terms capture the convex cost of expanding euro deposits and the net saving from replacing direct dollar borrowing with swapped euro borrowing. The bank takes the frictional cost of the swap, Δ, as given, even though in equilibrium Δ depends on S. That is, the bank is a price-taker in the swap market. This can be motivated by thinking of the bank that we are studying as a representative bank. In other words, one can imagine that there are many identical banks, of total measure one, just like the one whose optimization problem we have written down. Moreover, as we discuss below, the model is easily extended to the case where there is some heterogeneity across banks with respect to the parameter α. The first-order conditions for an interior maximum in $L_{\$}$, $L_{€}$, and S, respectively, can be written as: (4) $g'(L_{\$}) = 1 + r + \alpha p + \lambda$; (5) $h'(L_{€}) = 1 + r + c(L_{€} + S - X) + \lambda$; (6) $\alpha p = \Delta + c(L_{€} + S - X)$. Here $\lambda$ is the Lagrange multiplier on the capital constraint that $K - L_{\$} - L_{€} \ge 0$, which we assume is binding. Lending in both currencies will be strictly positive under the usual regularity assumptions on the g and h functions. Thus, (4) and (5) together imply that the bank equates the marginal benefits of lending in the two currencies net of funding costs: (7) $g'(L_{\$}) - (1 + r + \alpha p) = h'(L_{€}) - (1 + r) - c(L_{€} + S - X)$. Equation (6), the first-order condition for an interior optimum in S, says that if the bank does any swaps at all, it sets the marginal cost of borrowing in dollars equal to the marginal cost of borrowing in euros and converting them into dollars. In an interior swap equilibrium, (6) can be used to rewrite (7) as: (8) $g'(L_{\$}) - h'(L_{€}) = \Delta$. That is, the marginal return on lending in dollars exceeds that on lending in euros by a wedge that is exactly equal to the equilibrium CIP basis Δ. Let us first begin by considering what happens in a "normal" pre-crisis period, when the probability p of default by the Eurozone bank is zero. In this case, the bank can borrow all it wants to in the U.S. at the riskless rate, so as equation (6) makes clear, there is never any benefit to having a positive swap volume S. It is possible, however, that for certain parameters, $L_{€}$ could exceed $X$, so that the bank faces increasing costs at the margin for raising retail euro deposits. If so, it could be cheaper to fund euro lending by borrowing in dollars and converting into euros, which would correspond to a negative value of S. To eliminate this uninteresting case, we assume that in normal times the pre-existing euro deposit base is equal to euro lending, so that when $p = 0$, $X = L_{€}$. What we have in mind here is that the pre-crisis period represents a steady-state interval during which the bank has had the time to adjust its core deposits in its home country to match its loan balances. Indeed, Figure I shows that banks almost never have dollar liabilities in excess of dollar assets, which is consistent with this assumption. Next, consider what happens at the onset of a "crisis," by which we mean a period when p rises above zero. The intuition behind the proposition, which is proven in the appendix, is straightforward. When p increases above zero, equation (6) tells us that the bank will react both by borrowing more in euros and by increasing the volume of its swap activity, thereby driving the CIP basis Δ upward. As equation (8) shows, this increase in the CIP basis represents a wedge in the relative cost of obtaining dollar funding versus euro funding, so at the margin the bank now allocates more of its fixed capital base to euro lending.
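The comparative statics can be verified numerically. The sketch below solves the first-order conditions (4)-(6), together with the binding capital constraint, for rising p; the log lending technologies and all parameter values are our own illustrative assumptions, not the paper's. It reproduces the qualitative result: S and Δ rise, and dollar lending falls relative to euro lending.

```python
# Numeric sketch of the model's comparative statics in p, assuming
# g(L) = h(L) = a*log(L) and a quadratic euro-deposit adjustment cost
# (c/2)*(D_eur - X)^2 above the threshold X.
import numpy as np
from scipy.optimize import fsolve

a, r, alpha = 1.1, 0.02, 1.0   # lending productivity, riskless rate, uninsured share
K, X = 2.0, 1.0                # bank capital; baseline deposit base (X = L_eur at p = 0)
c, gamma, W = 0.5, 0.1, 1.0    # deposit adjustment cost, swap haircut, arbitrageur wealth

def basis(S):
    """Equilibrium CIP deviation, Delta = gamma^2 * S / (W - gamma * S)."""
    return gamma ** 2 * S / (W - gamma * S)

def focs(v, p):
    """First-order conditions (4)-(6) with the capital constraint imposed."""
    L_usd, S, lam = v
    L_eur = K - L_usd                    # binding constraint L_$ + L_eur = K
    dc = c * max(L_eur + S - X, 0.0)     # marginal euro-deposit cost
    return [a / L_usd - (1 + r + alpha * p + lam),  # (4): dollar lending
            a / L_eur - (1 + r + dc + lam),         # (5): euro lending
            alpha * p - (basis(S) + dc)]            # (6): swap volume

for p in (0.005, 0.02, 0.05):
    L_usd, S, lam = fsolve(focs, x0=[1.0, 0.01, 0.08], args=(p,))
    print(f"p={p:.3f}  L$={L_usd:.4f}  L_eur={K - L_usd:.4f}  "
          f"S={S:.4f}  Delta={basis(S):.5f}")
```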
In our empirical work, we test the above comparative statics, using the money-market fund run on European banks in the second half of 2011 as a proxy for an aggregate shock to the value of p for all European banks. Moreover, in addition to focusing on this time-series variation in p, we also consider a set of cross-sectional tests. At first glance, the model might appear unsuited to making cross-sectional predictions, since it is effectively a model of a single representative bank, or more accurately, of many identical banks of total measure one, since the bank we have been analyzing is assumed to be a price-taker in the swap market. However, the model is easily extended to incorporate some heterogeneity across banks. Suppose we have two banks i and j that are otherwise similar, but with $\alpha_{i} > \alpha_{j}$, say because bank i is more reliant on money-market funds than is bank j. Looking at equation (6), we can see that when p rises above zero, both banks may become active in the swap market simultaneously, taking as given the common CIP basis of Δ, but that bank i will shift more of its funding to the euro market, thereby bearing a higher marginal cost of attracting retail depositors in that market. 6 And bank i will also, per equation (4), cut its dollar lending by more. Thus the model implies that the impact on dollar lending of a jump in p should be more pronounced for more money-fund-dependent banks. We test this additional implication of the model as well in what follows. 6. The only other modification that needs to be made when we introduce heterogeneity is to recognize that the expression for the CIP basis is now more properly written as $\Delta = \gamma^{2}\sum_{i} S_{i}/(W - \gamma \sum_{i} S_{i})$. That is, it depends on the sum $\sum_{i} S_{i}$ of swap demands across the banks in the population.
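Before turning to the data, here is a hypothetical sketch of the kind of loan-level, borrower-fixed-effects specification that the tests described above call for. The variable names and toy data are ours, not the paper's actual specification:

```python
# Hypothetical loan-level regression: whether a loan's lead lender is a
# Eurozone bank, explained by a dollar-loan dummy interacted with the
# May 2011-June 2012 shock period, with borrower fixed effects absorbing
# demand. Toy data; a negative interaction is built in for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "borrower_id": rng.integers(0, 100, n),
    "dollar_loan": rng.integers(0, 2, n),
    "shock_period": rng.integers(0, 2, n),
})
df["eurozone_lead"] = (
    0.3 - 0.1 * df["dollar_loan"] * df["shock_period"] + rng.normal(0, 0.1, n)
)

fit = smf.ols(
    "eurozone_lead ~ dollar_loan * shock_period + C(borrower_id)", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["borrower_id"]})
print(fit.params["dollar_loan:shock_period"])  # the coefficient of interest
```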
Over 13% of U.S. syndicated loans were originated by non-Eurozone European banks, mainly those located in the U.K. and Switzerland. These banks also do a lot of Eurozone lending and raise some of their deposits in euros.8 Given this euro deposit financing, there is a case for including these banks in our analysis, but we take the more conservative approach of reporting the results only for Eurozone banks. However, our results are robust to including European banks outside the Eurozone. Overall, 43% of Eurozone bank lending is in dollars. Given that most of their retail deposits are in euros, this creates a currency mismatch between their assets and retail deposits. The same is not true of U.S. banks, which do 89% of their syndicated lending in dollars.

III.B. Eurozone Bank Reliance on U.S. Money-Market Funds and the Run in 2011

In May 2011, financial markets became increasingly concerned about the exposure of European banks to Greek sovereign debt, amidst growing worries about the country's solvency. Leading banks in France, Germany and Belgium were identified as having several billion euros of Greek sovereign bonds on their books.9 In response, investors began withdrawing money from U.S. prime money-market funds (MMFs), which, according to the SEC, had about one quarter of their assets invested in paper issued by Eurozone banks. The withdrawals were greater from those funds that had more exposure to Eurozone banks (Chernenko and Sunderam, 2014). This in turn led MMFs to reduce their holdings of instruments issued by Eurozone banks. As illustrated in Figure II, between May 2011 and June 2012, U.S. MMFs reduced their exposure to Eurozone banks from 31% to 8% of their total assets. French banks, which were top lenders to U.S. firms, on average lost over 75% of their funding from U.S. MMFs (see Table III). However, after June 2012, as the crisis in the Eurozone began to stabilize, the MMF holdings of instruments issued by Eurozone banks began to rebound.

The MMF withdrawal was an important shock to the ability of Eurozone banks to fund themselves in dollars. To measure the size of this shock for particular banks, we calculate the share of a bank's short-term funding that comes from U.S. MMFs as of the end of April 2011. This calculation is based on MMF security-level holdings compiled by Crane Data LLC from data provided by fund sponsors. These data cover roughly 85% of the universe of MMF holdings, with some smaller funds missing from the sample. To compute the extent to which a Eurozone bank relied on MMFs for funding, we take the sum of MMF holdings of the bank's certificates of deposit (CDs), commercial paper (CP), asset-backed CP, repurchase agreements, and other short-term bank notes and deposits, and scale this by the sum of the bank's deposits and short-term debt. Data on a bank's short-term liabilities are taken from Capital IQ and are measured as of the end of 2010. We should emphasize that we are not scaling by banks' short-term dollar funding, as that information is not available. Thus, our measure does not capture, and may greatly understate, the extent to which a bank relies on U.S. MMFs for its dollar funding specifically. Ideally, we would also want to distinguish between insured and uninsured dollar funding.

9. E.g., see "Investors Count Cost to Banks of Greek Default," Financial Times, May 10, 2011, or "EU Banks' Risks from Greece Default Exceed Their Direct Exposures," Moody's Investors Service, May 15, 2011.
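As a concrete illustration, the sketch below computes this reliance measure on a toy data set. The frames and column names are hypothetical stand-ins for the Crane Data security-level holdings and the Capital IQ balance-sheet items.

```python
# Sketch of the MMF-reliance measure described above, on made-up data.
import pandas as pd

holdings = pd.DataFrame({
    "bank":   ["A", "A", "B", "B"],
    "type":   ["CD", "repo", "CP", "ABCP"],
    "amount": [4.0, 2.0, 1.5, 0.5],           # MMF holdings, $bn (invented)
})
bank_financials = pd.DataFrame({
    "bank":            ["A", "B"],
    "deposits":        [60.0, 40.0],           # end-2010 levels, $bn (invented)
    "short_term_debt": [40.0, 20.0],
})

SHORT_TERM_TYPES = {"CD", "CP", "ABCP", "repo", "other_notes"}
mmf = (holdings[holdings["type"].isin(SHORT_TERM_TYPES)]
       .groupby("bank")["amount"].sum().rename("mmf_funding"))

df = bank_financials.set_index("bank").join(mmf).fillna(0.0)
df["MMFSHARE"] = df["mmf_funding"] / (df["deposits"] + df["short_term_debt"])
print(df[["MMFSHARE"]])
```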
But there is very limited information on insured deposits, and almost none on insured deposits by currency. However, it is likely that the insured dollar deposits of Eurozone banks are limited. As can be seen, MMFs were an important source of short-term funding for these banks. For Deutsche Bank, the fifth-largest lender in the U.S., with 4.5% of syndicated origination volume, 7.7% of its total short-term funding came from U.S. MMFs. The French banks (Societe Generale, Credit Agricole, BNP Paribas and Natixis) on average received 5% of their total short-term funding from U.S. MMFs.

[FIGURE II & TABLE III]

As noted above, these calculations understate the significance of MMFs as a source of dollar funding because they normalize by all short-term funding, including non-dollar deposits. While no systematic data are reported on funding currencies, information provided by Credit Agricole in a presentation to analysts can give a better sense of the dollar funding share of U.S. MMFs.10 The bank reported that in June 2011, 44% of its short-term debt was in dollars. Based on data we have on Credit Agricole's short-term debt and MMF funding in April 2011, this implies that approximately 30% of the bank's short-term dollar funding came from U.S. MMFs. Clearly, this implies a very meaningful reliance on the money-fund sector.

III.C. Breakdown of Covered Interest Parity in 2011

Foreign exchange swaps are the primary means through which global banks manage the currency mismatch between their assets and liabilities (e.g., Fender and McGuire, 2010). A swap contract enables a bank to exchange local currency for U.S. dollars at the current exchange rate, while agreeing to reverse the transaction, i.e., exchange U.S. dollars back to local currency, at the forward exchange rate. The typical maturity of an FX swap is three months, but as an over-the-counter instrument its maturity can be extended to several years. Counterparties typically post collateral, which is adjusted depending on movements in currencies. In the absence of market frictions, the cost of an FX swap is pinned down by the difference in interest rates in the two currencies that are being swapped. Specifically, covered interest rate parity (CIP) implies that the differential in interest rates between two countries should be equal to the differential between the forward and spot exchange rates (a calculation sketched below). Given this seemingly riskless arbitrage, significant CIP deviations have historically been rare (Taylor, 1987; Akram, Rime, and Sarno, 2008). In the second half of 2011, however, the cost of swapping euros into dollars rose sharply, reaching levels last seen in the 2008 crisis,11 while the currency mismatch on Eurozone banks' balance sheets widened as their dollar funding fell away. Interestingly, this increase in currency mismatch is of the same rough magnitude as the decline in MMF holdings of Eurozone bank paper. And it is precisely this change in currency mismatch that Eurozone banks would presumably be seeking to hedge, thereby creating pressure on the CIP arbitrage relationship, and on arbitrageur capital positions more generally.12

11. "Euro-Dollar Basis Swap Cost at 2008 Crisis Levels," Wall Street Journal, November 16, 2011.
12. Buraschi, Menguturk and Sener (2015) connect the divergence in the rates on euro- and dollar-denominated sovereign bonds for several emerging countries to frictions in banks' abilities to fund in foreign currency.

IV. LENDING BEHAVIOR FOLLOWING THE SHOCK TO MONEY-MARKET FUNDS

In this section, we examine bank lending behavior around the MMF shock. We first show that Eurozone banks reduced their dollar-denominated loans relative to euro-denominated loans.
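Before proceeding, the CIP calculation referenced in Section III.C can be made concrete with a short sketch. The dollars-per-euro quote convention and all numbers below are invented illustrations, not market data from the sample period.

```python
# Sketch of a CIP-basis calculation for a three-month euro/dollar swap.
def cip_basis(spot, forward, r_usd, r_eur, tenor_years=0.25):
    """Synthetic dollar rate from euro funding minus the direct dollar rate,
    annualized. Positive values mean synthetic dollar funding is dearer."""
    # Borrow 1 euro, convert to `spot` dollars, buy (1 + r_eur*T) euros forward.
    synthetic_growth = (1.0 + r_eur * tenor_years) * forward / spot
    direct_growth = 1.0 + r_usd * tenor_years
    return (synthetic_growth - direct_growth) / tenor_years   # per annum

# Example: a forward above its CIP-consistent level yields a positive basis.
print(f"{cip_basis(spot=1.35, forward=1.3530, r_usd=0.005, r_eur=0.012):.4%}")
```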
Next, we document that this led to a reduction in the net supply of dollar credit to operating firms; i.e., the inward shift in loan supply by Eurozone banks was not fully offset by other lenders stepping into the breach. Finally, we show that those Eurozone banks that were most MMF-dependent reduced their lending by more than other Eurozone banks.

IV.A. Direct Effects of MMF Shock on Bank Lending

To examine the behavior of Eurozone banks around the MMF shock we construct a panel data set of bank-month observations from 2005-2013. We begin by focusing on the effect of the MMF shock on DOLLAR LOAN SHARE, the ratio of a bank's dollar-denominated loans to the sum of its dollar- and euro-denominated loans (excluding all other currencies). The exact specification is explained in the captions to each table. The first column of Table IV reports a specification with bank fixed effects, since there is likely to be variation across banks in the extent to which they lend in dollars and euros. As expected, the coefficient on SHOCK is negative and statistically significant. (Standard errors are calculated to allow for correlation of the error term across observations within a month.13) The coefficient in column (1) implies that Eurozone banks reduce their dollar loan share by 3.5 percentage points during the shock period relative to their pre- and post-shock period averages. Given that the dollar loan share has a sample mean of 17.7%, this effect is fairly sizeable. Over the shock period, the 3.5 percentage-point change would have translated into a reduction of roughly $82 billion in the origination of dollar syndicated loans.14

Column (2) repeats the exercise using the number of individual loans made in each currency instead of aggregate dollar and euro values to compute the loan share. The results are very similar. In calculating the dollar loan share, we convert the value of euro-denominated loans into dollars, using the spot exchange rate at loan issuance, so these quantities can be meaningfully compared to one another. This raises the concern that an appreciation of the euro, as happened in the shock period, could mechanically lead to a decline in our DOLLAR LOAN SHARE variable, even if the nominal volume of loan issuance in each currency was unchanged. This mechanical effect could then potentially bias our inferences. To control for this possibility, throughout the analysis we include the spot exchange rate as a control (the FX_t term in the table notes).15

13. Any alternative clustering method strengthens our result.
14. This $82 billion figure can be compared to our estimate that Eurozone banks lost approximately $370 billion in dollar funding from U.S. money funds over the course of the shock period. What accounts for the difference in these numbers? First, as our model emphasizes, Eurozone banks presumably made up for a significant fraction of the lost dollar funding by turning to their domestic deposit bases. And second, it is important to bear in mind that the syndicated lending that we capture with our data is only a fraction of their total dollar lending, so that the total effect on dollar credit supply is likely somewhat larger than $82 billion.
The leading alternative explanation for the drop in the dollar lending share of Eurozone banks is that these banks experienced not a funding shock but rather a decline in dollar loan demand relative to euro loan demand. On its face, this alternative hypothesis is somewhat hard to motivate given that the source of the negative shock in the first place was the Eurozone. So, if anything, one would think that there would be more of a decline in the demand for euro-denominated loans. Nevertheless, we explore this alternative hypothesis in a number of ways. First, in columns (3) and (4) we repeat the analysis of columns (1) and (2), but restrict the sample only to loans made in the European market.16 As can be seen, the estimated coefficients on SHOCK are very similar. This helps allay the concern that the results are picking up a relative shift in loan demand across European and American borrowers.

Next, in columns (5) through (8), we redo everything with U.S. banks included in the sample, and ask whether, as might be expected from a demand-side story, the decline in the dollar loan share is also observed in these banks, which did not suffer from the same funding shock as the Eurozone banks. This is effectively a difference-in-difference specification. Specifically, we add to the sample seven U.S. banks that are active in syndicated lending in the Eurozone. The key coefficient of interest is now that on the variable EUROBANK*SHOCK, which is an interaction between the SHOCK dummy and a Eurozone bank dummy. Our funding-shock hypothesis implies that we should expect to see a negative coefficient. And indeed, the coefficient is negative and statistically significant. Moreover, the near-zero coefficient on the raw SHOCK term implies that there is no change in the currency composition of lending by U.S. banks in the shock period. In other words, the effect is specific to the Eurozone banks, consistent with our hypothesis.

15. Why might a stronger euro lead to an increase in the share of dollar lending by Eurozone banks? One hypothesis is that if a Eurozone bank holds predominantly euro-denominated assets, then an increase in the value of the euro strengthens its economic capital relative to that of its U.S. counterparts. This in turn enables it to gain market share in those dollar-based loan markets where it is most likely to be in direct competition with U.S. banks. By contrast, in euro-based loan markets, where its competitors are more likely to be other European banks, a movement in the exchange rate confers less of an advantage. The net result is an increase in the share of activity the European bank does in dollar markets. This logic is similar to that of Froot and Stein (1991).
16. As before, we only look at euro- and dollar-denominated loans.

[TABLE IV]

To more comprehensively control for potential demand-side confounds, in Table V we examine lending behavior at the loan level rather than at the bank level. We take advantage of the fact that we observe multiple instances in which the same firm taps the syndicated loan market, both before and after the shock. Thus we can ask whether, when a given firm gets a dollar loan during the shock period, it is less likely to get it from a syndicate that includes one or more Eurozone lenders, as compared to the same firm borrowing outside of the shock period. We now run regressions for the period 2000-2013; we look at a longer sample period so there are more repeated transactions per borrower. The unit of observation is a loan and the dependent variable is EUROBANK SHARE, the fraction of banks in the loan syndicate that are from the Eurozone. Importantly, we include firm fixed effects in the regressions. In the first two columns of Table V we consider only dollar loans, and the key variable of interest is SHOCK. As predicted, the coefficient on this variable is negative and statistically significant.
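A stylized version of this loan-level specification (EUROBANK SHARE regressed on SHOCK with firm fixed effects and month-clustered standard errors) can be sketched on simulated data. Every magnitude below is invented, and the built-in negative effect is only there so the example has something to estimate.

```python
# Stylized Table V, columns (1)-(2): loan-level regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, loans_per_firm = 200, 4
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), loans_per_firm),
    "month": rng.integers(0, 48, n_firms * loans_per_firm),
})
df["shock"] = df["month"].between(24, 37).astype(float)   # stand-in shock window
df["loan_size"] = rng.lognormal(0.0, 1.0, len(df))
firm_effect = rng.normal(0.15, 0.05, n_firms)[df["firm"]]
df["eurobank_share"] = (firm_effect - 0.02 * df["shock"]   # invented "true" effect
                        + rng.normal(0, 0.05, len(df))).clip(0, 1)

# Firm fixed effects enter via C(firm); standard errors clustered by month.
res = smf.ols("eurobank_share ~ shock + np.log(loan_size) + C(firm)", data=df)\
         .fit(cov_type="cluster", cov_kwds={"groups": df["month"]})
print(res.params["shock"], res.bse["shock"])
```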
In other words, in the wake of the dollar funding shock, Eurozone banks are less likely to appear in dollar-denominated lending syndicates than otherwise, holding fixed the identity of the borrower. Figure IV provides a graphical presentation of this finding. Here we simply regress EUROBANK SHARE on a sequence of monthly dummies and on firm fixed effects, and then plot the resulting time series of coefficients on the monthly dummies. As can be seen from the figure, the average monthly dummy is lower in the shock period than in the two surrounding non-shock periods, and the difference is statistically significant at the 1% level.

In columns (3) through (5) of Table V, we broaden the sample to include euro-denominated loans as well, and ask whether the effect documented in columns (1) and (2) is restricted only to those syndicates involving dollar-denominated loans, as the theory suggests it should be. As can be seen, the estimates uniformly bear out this hypothesis.

[TABLE V & FIGURE IV]

While the results in Table V appear to be clear evidence that Eurozone banks reduced their loan supply during the shock period, the real effects of such a contraction would be minimal if other, non-Eurozone banks were able to step in and increase their supply of loans. However, this potential equilibrating response could be muted to the extent that corporate borrowers have the sort of information-intensive relationships with their existing Eurozone lenders that make shifting to another lender difficult (e.g., Sharpe, 1990; Rajan, 1992). Indeed, consistent with the imperfect substitution of new lenders for existing relationship lenders, Chodorow-Reich (2014) shows that during the 2007-09 financial crisis, firms that had previously borrowed from banks that were hit harder by the crisis had a more difficult time borrowing during the crisis, and those that were able to borrow did so on less favorable terms.

Thus, in the spirit of Chodorow-Reich (2014), we examine whether a firm is less likely to receive a new dollar-denominated loan during the shock period if it was more reliant on Eurozone banks for its dollar borrowing prior to the shock. Table VI presents this analysis. The sample includes all U.S. and European borrowers that received a dollar-denominated loan in the pre-shock period, and we ask how a given firm's likelihood of obtaining another dollar-denominated loan, from any source, in the shock period depends on PAST EUROBANK SHARE, defined as the fraction of banks in its most recent pre-shock syndicate that are from the Eurozone. As can be seen, the effect of PAST EUROBANK SHARE is negative and statistically significant. A one standard-deviation change in this variable leads to a roughly 2 percentage-point drop in the probability of getting a dollar loan over the shock period, which can be compared to an unconditional probability of getting a loan of 13.8%. As Table VI details, this result is robust to inclusion of industry fixed effects, controls for the year of the last loan origination (column (2)), controls for a variety of loan characteristics including the interest-rate spread on the prior loan, the loan amount, maturity and loan type (column (3)), and an alternative definition of PAST EUROBANK SHARE (column (4)).

In Table VII, we examine the intensive, rather than extensive, margin of loan supply. That is, we ask whether firms that had previously borrowed from Eurozone lenders are more likely to face higher rates if they do manage to borrow again during the shock period.
Thus, for those borrowers that obtain a new dollar loan during the shock period, we compute the change in the interest rate spread charged over the London Interbank Offered Rate (LIBOR) as compared to their last pre-shock loan. We then regress this change in spread on PAST EUROBANK SHARE. Loan facilities for the shock period are matched to the pre-shock facilities of the same type, so as to make interest rate spreads as comparable as possible.17 Standard errors are clustered by borrower.

The results in Table VII indicate that borrowers that relied more heavily on Eurozone banks for their dollar borrowing before the shock end up paying a substantially higher interest rate on their loans during the shock period. The increase in the spread is approximately 40 basis points, which is quite sizable when compared to the pre-shock sample mean spread over LIBOR of 216 basis points. Moreover, this number may well understate the true effect because of a sample selection bias: we know from Table VI that some previously Eurobank-dependent firms are shut out of the market during the shock period, and it seems plausible to think that only the more creditworthy ones are able to obtain loans at all, and therefore make it into the sample analyzed in Table VII.

17. We divide loan types into three categories: a revolving line, a bank term loan, and an institutional term loan. We exclude second-lien facilities from the sample.

Taken together, the results in Tables VI and VII suggest that the contraction in dollar loan supply coming from the Eurozone banks during the shock period was not fully offset by other lenders, and hence represented a meaningful inward shift in the overall availability of credit to their customers. In other words, it appears that the shock to the Eurozone banks is likely to have translated into real effects.

IV.B. Cross-Sectional Effects on Bank Lending

Finally, the model makes the cross-sectional prediction that those Eurozone banks that are the most money-fund-dependent will cut their dollar lending (relative to their euro lending) by more in response to the MMF shock. We operationalize this prediction by measuring money-fund dependence as the fraction of short-term funding that comes from U.S. money funds, as reported in Table III and discussed above. Recall that this measure, MMFSHARE, normalizes by all short-term debt, both dollar- and euro-denominated. In Table VIII we use the same specification as Table IV, column (2). The dependent variable is still DOLLAR LOAN SHARE, the fraction of a bank's loans that are in dollars, but we separate Eurozone banks into those that, as of April 2011, had over 4% of their short-term funding coming from U.S. MMFs (the most MMF-exposed banks) and those that had less than 4% of their short-term funding coming from U.S. MMFs (the least MMF-exposed banks).

[TABLE VIII]

The first column of Table VIII shows that the coefficient on SHOCK for the most MMF-exposed Eurozone banks is negative and statistically significant. By contrast, the second column shows that the effect of SHOCK for the least MMF-exposed banks is much smaller and is not statistically significant. An F-test reveals that the coefficients from the two groups are statistically different from each other at the 5% level. One should probably not over-interpret this finding, given the small number of banks in each of the groups, but it is at least directionally consistent with what one would expect based on our model.
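The two-group comparison in Table VIII can equivalently be cast as a single pooled regression in which SHOCK is interacted with a high-exposure dummy; the t-test on the interaction then plays the role of the F-test just described. A sketch on simulated data follows; only the 4% cutoff comes from the text, everything else is invented.

```python
# Pooled-regression version of the Table VIII comparison, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_banks, n_months = 20, 60
df = pd.DataFrame({
    "bank": np.repeat(np.arange(n_banks), n_months),
    "month": np.tile(np.arange(n_months), n_banks),
})
mmfshare = rng.uniform(0.0, 0.10, n_banks)                # invented exposures
df["high_mmf"] = (mmfshare[df["bank"]] > 0.04).astype(float)
df["shock"] = df["month"].between(30, 43).astype(float)
df["dollar_loan_share"] = (0.18 - 0.035 * df["shock"] * df["high_mmf"]
                           + rng.normal(0, 0.03, len(df)))

# high_mmf itself is absorbed by the bank fixed effects, so only its
# interaction with shock enters the formula.
res = smf.ols("dollar_loan_share ~ shock + shock:high_mmf + C(bank)", data=df)\
         .fit(cov_type="cluster", cov_kwds={"groups": df["month"]})
print(res.params["shock:high_mmf"], res.pvalues["shock:high_mmf"])
```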
V. CONCLUSIONS

In this paper, we have shown that one of the consequences of the European sovereign debt crisis was that Eurozone banks cut their dollar-denominated lending. This is not surprising in itself; one would expect these banks to cut lending in the face of capital and liquidity constraints stemming from losses on their portfolios of sovereign bonds. More interestingly, however, we show that Eurozone banks shifted the composition of their dollar and euro lending, cutting their dollar lending by more, despite the fact that European economies were more immediately threatened by the debt crisis.

We argue that this phenomenon reflects two features of the markets in which European banks fund themselves. First, European banks rely on uninsured and relatively flighty wholesale dollar funding sources to finance their dollar lending, whereas a good deal of their euro lending is financed with stickier euro deposits. Second, frictions in the foreign exchange swap market limit the extent to which Eurozone banks can effectively use euro deposits to fund their dollar lending. As swap demand from Eurozone banks rises, there is only limited arbitrage capital available to take the other side of the trade, which increases the cost of engaging in this synthetic dollar borrowing. Thus Eurozone banks adjust to strains in wholesale dollar funding markets by borrowing more in euros, but also by cutting back their dollar lending relative to euro lending. This, in turn, adversely affects the dollar borrowers of Eurozone banks; they are less likely to raise additional funding, and when they do so, the funding is on less attractive terms.

The perspective on global banks developed in this paper highlights the fragilities associated with a particular foreign-bank business model, which involves banks funding large volumes of activity outside their home currency with uninsured short-term sources of wholesale finance. A somewhat different lens on these fragilities is provided by Giannetti and Laeven (2012a, 2012b), who document a generalized "flight home" effect in bank lending behavior during periods of financial stress. Specifically, global banks tend to reduce their lending share abroad relative to their lending share in their home countries when either: (i) there is a banking crisis in their home country; or (ii) credit spreads in the domestic interbank market go up. Although our theory has led us to focus on measuring lending activity by currency, rather than by country, we suspect that there is some overlap in the two phenomena, and that the mechanism we have proposed here may be helpful in understanding these broad-based flight-home effects.

Stretching somewhat further, it is well known that in addition to cross-border lending, global trade flows also tend to contract during periods of economic and financial stress, with the Great Recession providing a particularly stark example. Amiti and Weinstein (2011) argue that this effect reflects the importance of bank lending in trade finance; in other words, when banks are constrained, they have difficulty lending to exporters, which in turn leads to a reduction in international trade. In their empirical tests, Amiti and Weinstein (2011) point to bank capital (which they proxy for with banks' stock-market valuations) as the relevant metric of bank health.
While bank capital is surely important, our work adds a potential further nuance to their story: to the extent that trade finance is particularly reliant on global banks that can lend outside of their domestic currencies, it may be especially vulnerable, perhaps more so than other types of purely domestic activity that are also bank-dependent, not only to changes in bank capital positions, but also to shocks to banks' ability to raise uninsured wholesale funding outside of their domestic currency.18

In addition to these positive implications, our framework may be helpful in thinking about a number of policy issues related to global banks. For example, in February of 2014, the Federal Reserve finalized a rule imposing enhanced prudential standards on foreign banking organizations.19 One part of the rule requires the U.S. branches and agencies of foreign banks to hold a prescribed stock of liquid assets, in part as a buffer against potential funding outflows. This sort of regulation would seem to be well motivated by the results in this paper.

Our theoretical framework also sheds light on the Fed's provision of dollar swap lines to the European Central Bank and other central banks during the period of stress in dollar funding markets. One way to think of these swap lines is that they were a device to alleviate the frictions associated with limited arbitrage and the accompanying CIP violations. By making dollars available to the ECB, which could then on-lend these dollars to Eurozone banks, the burden on the currency swap market as a device for generating synthetic dollar funding was presumably reduced.

18. To take a concrete example: consider an Italian auto manufacturer that exports cars to the U.S. In so doing, it may incur dollar-denominated receivables that it needs to finance, because it ships the cars and only receives payment in dollars, say, 60 days later. If its closest relationship is with an Italian bank, it might count on the Italian bank for the dollar trade finance. And then, in a period of financial stress, even if the Italian bank has plenty of equity capital, it might be less able to provide dollar loans to the auto manufacturer if, per the mechanism we describe, it is having trouble raising dollar funding.
19. The Federal Register notice describing the rule is available at http://www.federalreserve.gov/newsevents/press/bcreg/20140218a.htm

From an ex ante perspective, this same logic also suggests that there may be some underappreciated tradeoffs associated with the dollar's status not only as a global reserve currency, but also as a global funding currency. The fact that so many non-U.S. operating firms do some of their business in dollars, and often rely on non-U.S. banks to accommodate their credit needs, may imply that lending terms in all dollar-denominated credit markets, including those facing purely domestic U.S. borrowers, are more exposed than they otherwise would be to shocks emanating from abroad. One consequence of this exposure is that the Federal Reserve may find itself in a position of having to intervene to manage these disruptions, e.g., via the provision of swap lines, in a broader range of circumstances than it would if the dollar were not so prominent as a global funding currency. Indeed, the Fed's enhanced prudential standards on foreign banking firms can be thought of in part as an attempt to reduce this ex ante reliance on its own balance sheet.
HARVARD UNIVERSITY AND NBER
HARVARD UNIVERSITY AND NBER
HARVARD UNIVERSITY AND NBER

APPENDIX

Proof of Proposition 1: First, note that for all p > 0, equation (6) implies that S > 0. This fact, along with our assumption that the euro deposit base equals euro lending, allows us to re-write the two first-order conditions, for euro lending and for S, as (A.1) and (A.2). Implicitly differentiating (A.1) and (A.2) with respect to p, and solving these simultaneous equations, yields the comparative statics. Given the binding capital constraint, it follows that euro lending is increasing in p. Also, because S is increasing in p, it follows from (1) that Δ, the deviation from CIP, is also increasing in p.

To derive the comparative statics with respect to arbitrage capital, W, one can do similar calculations. An increase in arbitrage capital means that a given volume of swaps generates less of a deviation from CIP. Thus, for p > 0, banks will want to borrow more in euros and swap into dollars to fund dollar lending. This leads to an increase in dollar lending, less euro lending, and more swap activity in equilibrium.

FIGURE II
Money-Market-Fund Exposure to European Banks
The figure shows the fraction of money-market-fund assets invested in liabilities of European and Eurozone banks. Data are from Fitch Ratings, "U.S. Money Fund Exposure and European Banks," February 4, 2014. The data are monthly starting in February 2011, semiannual before that.

FIGURE IV
The plotted series corresponds to the coefficients (β_t) on the monthly dummies of a specification of the form EUROBANK SHARE_jt = D_j + Σ_t β_t D_t + X_j, where the D_j are borrower fixed effects, the D_t are month fixed effects, and X_j is loan size (see also the notes to Table V). The sample includes U.S. dollar-denominated loans issued in Europe and the U.S. The highlighted area corresponds to May 2011 through June 2012. The difference between the average β_t for this period and the rest of the sample (0.1004 − 0.1147 = −0.0143) is equivalent to the coefficient on SHOCK in specification (2) in Table V. However, the magnitudes are not exactly the same because the panel is unbalanced.

Notes. Loan amount is prorated based on the number of the lead banks ("Lead") or based on the total number of syndicate participants ("All lenders"). A lead bank is identified based on whether the lender is designated as "Lead Arranger" or "Agent" in the league tables as reported in DealScan.

Notes. This table reports money-market-fund (MMF) reliance for the 11 Eurozone banks that were among the top fifty lenders in the U.S. syndicated loan market between 2005 and 2007. We also include MMF data for those European banks outside the Eurozone that were among the top 50 lenders in the U.S. MMF reliance equals MMF holdings as of April 2011 divided by (Deposits + Short-Term Debt) as of the end of 2010. Change in MMF reliance is compiled from multiple Fitch Ratings reports on U.S. money-market fund exposure to European banks. Fitch reports highlight banks with the largest use of MMFs by country. Some of the banks or even entire countries are dropped from the coverage when their use of MMFs becomes very small; those cases are indicated by an asterisk.

Notes. The dependent variable is the fraction of loans originated by bank i in month t that is denominated in U.S. dollars (S_it). The sample includes all loans originated between 2005 and 2013 that are denominated in U.S. dollars or euros; all other currencies are excluded from the sample. In specifications (1) through (4) we look at Eurozone banks only; specifications (5) through (8) look at U.S. and Eurozone banks.
Specifications (3) and (4) look at lending in the European market only; the rest of the specifications look at lending in the U.S. and European markets. Specifications (1) through (4) correspond to: S_it = D_i + β SHOCK + FX_t, where D_i is an originating bank fixed effect, SHOCK is a dummy variable equal to 1 for the May 2011-June 2012 period and 0 otherwise, and FX_t is the spot exchange rate control. Specification (8) includes month fixed effects and corresponds to: S_it = D_i + D_t + β EUROBANK*SHOCK, where EUROBANK is a dummy variable equal to 1 if the bank's headquarters are located in the Eurozone and 0 otherwise. Standard errors, reported in brackets, are clustered by month. The average of the dependent variable (S_it) is 16.5% for Eurozone banks and 89.3% for U.S. banks; thus the high R-squared in specifications (5) through (8) is due to the increased explanatory power of bank fixed effects. Significance at the 1%, 5%, and 10% levels is indicated by ***, **, and *, respectively.

Notes. Each observation used for the analysis reported in this table is a separate loan. The dependent variable is EUROBANK SHARE, a variable between 0 and 1 equal to the fraction of lead banks on the loan headquartered in the Eurozone. Specifications (1) and (2) include only U.S. dollar-denominated loans and correspond to: EUROBANK SHARE_jt = D_j + β SHOCK + X_j. The rest of the specifications include both U.S. dollar-denominated and euro-denominated loans. In particular, specification (4) corresponds to: EUROBANK SHARE_jt = D_j + D_t + DOLLAR LOAN_jt + β DOLLAR LOAN_jt*SHOCK + X_j, where the D_j are borrower fixed effects. That is, the coefficient of interest, β, is identified off repeated loans to the same borrower. The D_t are month fixed effects. DOLLAR LOAN is a dummy for the loan being denominated in U.S. dollars. SHOCK is a dummy variable equal to 1 for the May 2011-June 2012 period and 0 otherwise. X_j is loan size; we include it as a control because the number of lead lenders depends on loan size. We use loans issued over the 2000-2013 period to ensure that there are enough repeated loans in our sample. In specification (5), we replace borrower fixed effects with 2-digit Standard Industrial Classification (SIC) code fixed effects. Standard errors, reported in brackets, are clustered by month. Significance at the 1%, 5%, and 10% levels is indicated by ***, **, and *, respectively.
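For concreteness, a minimal simulated implementation of the bank-month specification S_it = D_i + β SHOCK + FX_t from the Table IV notes might look as follows. The shock window mimics May 2011-June 2012 in a 2005-2013 monthly panel; the exchange-rate path and all magnitudes are invented.

```python
# Sketch of the bank-month specification from the Table IV notes, on
# simulated data with an exchange-rate control and month-clustered errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_banks, n_months = 15, 108                    # 2005-2013 monthly panel
df = pd.DataFrame({
    "bank": np.repeat(np.arange(n_banks), n_months),
    "month": np.tile(np.arange(n_months), n_banks),
})
fx = 1.30 + 0.05 * np.sin(np.arange(n_months) / 12.0)   # invented $/EUR path
df["fx"] = fx[df["month"]]
df["shock"] = df["month"].between(76, 89).astype(float)  # May 2011-June 2012
df["dollar_loan_share"] = (0.17 - 0.035 * df["shock"] + 0.1 * (df["fx"] - 1.30)
                           + rng.normal(0, 0.04, len(df)))

res = smf.ols("dollar_loan_share ~ shock + fx + C(bank)", data=df)\
         .fit(cov_type="cluster", cov_kwds={"groups": df["month"]})
print(f"SHOCK coefficient: {res.params['shock']:.4f} (se {res.bse['shock']:.4f})")
```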
Internal Carotid Artery Blister-Like Aneurysm Caused by Aspergillus: A Case Report

Summary

Background. Blister-like aneurysm of the supraclinoid internal carotid artery (ICA) is a well-documented cause of subarachnoid hemorrhage. Generally, this type of aneurysm is associated with various conditions such as hypertension, arteriosclerosis, and ICA dissection. Although Aspergillus is the most common organism causing intracranial fungal aneurysm formation, there has been no report of a blister-like aneurysm caused by Aspergillus infection.

Case Report. An 83-year-old man received corticosteroid pulse therapy followed by oral steroid therapy for an inflammatory pseudotumor of the clivus. Two months later, the patient was transported to an emergency department due to diffuse subarachnoid hemorrhage, classified as Fisher group 4. A subsequent 3D computed tomography angiogram revealed a blister-like aneurysm at the superior wall of the left ICA. Six days later, the patient died of subarachnoid hemorrhage caused by rerupture of the left ICA aneurysm. Autopsy revealed proliferation of Aspergillus hyphae in the wall of the aneurysm. Notably, this change was present more densely in the inner membrane than in the outer one. Thus, it was considered that Aspergillus hyphae caused infectious aneurysm formation in the left ICA via hematogenous seeding rather than direct invasion.

Conclusions. The blister-like aneurysm is a rare but important cause of subarachnoid hemorrhage. This case report documents another cause of blister-like aneurysms, namely an infectious aneurysm associated with Aspergillus infection.

Background

Blister-like aneurysms of the supraclinoid internal carotid artery (ICA), which were initially reported by Sundt and Murphey, represent a rare but well-documented cause of subarachnoid hemorrhage [1,2]. A blood blister-like aneurysm refers to a small hemispherical bulge from the arterial wall, and has been reported to arise at non-branching sites, mainly from the dorsomedial wall of the ICA [1,3-6]. Frequently, these lesions are associated with hypertension [3,5], arteriosclerosis [5,6], and ICA dissection [4,5]. These aneurysms are difficult to detect, and their surgical treatment is challenging, with high morbidity and mortality rates [4,7,8].

Mycotic aneurysms involving the ICA are also rare. Infectious aneurysms involving the cerebral vasculature represent only 2% to 5% of all intracranial aneurysms [9]. Most infection-associated carotid aneurysms are caused by bacterial pathogens related to infectious endocarditis, in contrast to the rare occurrence of fungus-related carotid aneurysms [9,10]. In this article, we report a case of a blister-like aneurysm in the superior wall of the left ICA caused by infectious vasculitis due to aspergillosis in a patient treated with steroid therapy.

Case Report

An 83-year-old man presented with headache, facial diplegia and right retrobulbar neuritis. An initial computed tomography (CT) scan of the paranasal sinuses (Figure 1A) revealed left maxillary sinusitis with a calcified mass and slight bone destruction, suggestive of a maxillary sinus aspergilloma. On contrast-enhanced CT (Figure 1B), only mild atherosclerotic change, including calcifications and mild arterial wall irregularities without aneurysm formation, was found in his intracranial arteries.
In addition to these findings, magnetic resonance imaging (MRI) of the head revealed diffuse abnormal signal intensity and enhancement involving the clivus and surrounding soft tissue, including muscles and subcutaneous fat, without bone destruction. Blood tests showed mild anemia (hemoglobin, 10.1 g/dL) and an elevated level of C-reactive protein (5.5 mg/dL; normal range <0.3 mg/dL), and serum Aspergillus antigens were positive. White blood cell count and other biochemical tests, including β-D-glucan, were almost normal. In addition to endoscopic open surgery of the left maxillary sinus, a transnasal-transsphenoidal endoscopic biopsy of the clivus was performed with the suspicion of central skull base osteomyelitis. However, histological examinations of the clivus bone tissue specimen, including Grocott's methenamine silver staining and Gram staining, revealed no microorganisms. In the absence of any pathological and biochemical evidence suggestive of infectious disease, he was diagnosed with a non-infectious inflammatory pseudotumor of the clivus. As a result, he received a single course of corticosteroid pulse therapy (500 mg methylprednisolone) followed by oral steroid therapy (50 mg prednisolone). After the medication, the patient was discharged with partial remission of his symptoms and abnormal MRI findings.

Two months later, the patient was transported to an emergency department due to sudden onset of symptoms, including a severe headache and consciousness disturbance, corresponding to Hunt-Hess grade 4. Initial CT images (Figure 2A) revealed diffuse subarachnoid hemorrhage, classified as Fisher group 4, and subsequent 3D-CTA (Figure 2B) revealed a blister-like aneurysm in the superior wall of the left ICA. The aneurysm was depicted as a wide-necked shallow outpouching of the superior and lateral walls of the supraclinoid left ICA. In addition to the aneurysm formation, stenosis of the C4 portion of the left ICA was visualized. No other aneurysms were depicted in his intracranial arteries. Considering his advanced age and neurological condition, conservative rather than radical treatment (including surgical and endovascular treatment) was chosen as the therapeutic strategy. Six days later, the patient died of subarachnoid hemorrhage due to rerupture of the left ICA aneurysm.

The brain, lung, heart and abdominal organs such as the liver, spleen, and kidney were examined at autopsy, whereas the paranasal sinuses were not observed in detail. A rupture of the aneurysm in the superior wall of the left ICA was confirmed macroscopically (Figures 3A, 3B), and atherosclerosis was found in the left ICA microscopically. Microscopic findings of the aneurysm were pathologically characterized by destruction of the internal elastic lamina and media, and numerous infiltrating inflammatory cells in the aneurysmal wall. Grocott's staining revealed septate hyphae with acute-angle branching, morphologically consistent with Aspergillus species, located in the inflamed areas of the arterial and aneurysmal walls (Figure 4). Additionally, since the serum Aspergillus antigens had been positive in the preceding days, the final pathological diagnosis was an aneurysm caused by Aspergillus infection. Notably, hyphae of Aspergillus existed more densely in the inner membrane than in the outer one. Considering the distribution of inflammatory changes and Aspergillus hyphae, it was reasonable to suppose that hematogenous seeding rather than direct local invasion caused the infectious aneurysm formation in the left ICA.
However, evidence of Aspergillus infection was not found in any organs other than the left ICA.

Discussion

We reported a rare case of blister-like aneurysm caused by Aspergillus infection in a patient treated with steroid therapy. Blister-like aneurysms are thin-walled, broad-based aneurysms which lack identifiable necks and are known to be among the most difficult lesions to treat [1,3-5,7,8]. Ruptured blister-like aneurysms of the supraclinoid ICA present with subarachnoid hemorrhage and are estimated to represent 0.9-9.4% of ruptured intracranial aneurysms [4,8]. The most frequent location for blister-like aneurysms is the anteromedial wall [5,8]. Compared with saccular aneurysms, these lesions tend to have a more precipitous course, with rapid enlargement and frequent rebleeding [1,4-7].

The imaging appearance of supraclinoid ICA blister-like aneurysms has been described in detail elsewhere [1,4-6,8,11]. Morphologically, they appear as wide-necked, shallow outpouchings of non-branching sites of the supraclinoid ICA. Because of their broad-based, shallow profiles, these aneurysms often represent a diagnostic challenge on both 3D-CTA and conventional angiography. In addition to comprehension of the morphology and distribution of these lesions, both a meticulous angiographic technique and a high index of suspicion are often required to make the precise diagnosis. Pathologically, these lesions have been reported to reveal focal wall defects covered with thin fibrous tissue and adventitia, lacking the usual collagen layer, suggestive of pseudoaneurysm formation [5,7,8].

Dissection of the ICA has been proposed as a major causative factor for blood blister-like aneurysms [4,5,7,8]. Traditionally, arterial dissection is regarded as a disorder between the internal elastic lamina and the media [12]. Interestingly, a recent study has reported that dissections of the cervical internal carotid and vertebral arteries affect primarily the outer arterial layers [13]. However, the pathological findings in the present case, including destruction of the internal elastic lamina and media, and hyphae in the inflamed areas of the aneurysmal wall, were different from those of arterial dissections. These findings were consistent with infectious vasculitis rather than a dissection. Moreover, we found atherosclerosis in the left ICA, but did not find other evidence that a dissection of intracranial vessels was present before the Aspergillus infection. Although it has already been reported that various causal factors, including hemodynamic stress and atherosclerosis other than dissections, are also important in the formation of a blister-like aneurysm [3], to our knowledge, there is no report of a blister-like aneurysm caused by Aspergillus infection.

Infectious aneurysms involving the cerebral vasculature are uncommon lesions, believed to represent only 2% to 5% of all intracranial aneurysms [9]. Intracranial fungal aneurysms are estimated to represent 14% of all carotid infectious aneurysms [14]. Although less common, fungal aneurysms currently occur with increasing frequency with the widespread use of immunosuppressive agents and steroids [9,14]. Aspergillus is a ubiquitous fungus that commonly causes disease in debilitated or immunocompromised patients, and is the most common organism causing intracranial fungal aneurysm formation [14]. In addition to immune deficiency, the general risk factors are the wider administration of antibiotics, corticosteroids and immunosuppressants [10].
Inhalation of airborne spores is the usual mechanism of infection and allows the organism to enter the bronchopulmonary system or paranasal sinuses. The routes of entry into the central nervous system have been reported to be: 1) circulatory propagation from other organs such as the lung; 2) direct invasion from adjacent skull regions such as the middle ear cavity, paranasal sinus, and orbit; and 3) brain surgery, lumbar puncture, and blood transfusion [9,15]. It is well known that Aspergillus infection causes infectious vasculitis [15]. A previous report mentioned that infectious vasculitis follows three courses: 1) formation of thrombus, causing hemorrhagic infarction and brain abscess; 2) sudden massive hemorrhage; and 3) formation of an aneurysm [15].

The tendency of Aspergillus hyphae toward intramural growth results in a configuration and location of fungal aneurysms different from those of the more common bacterial infectious aneurysms [9]. Fungal aneurysms tend to be fusiform in shape and involve longer, more proximal segments of the intracranial vessels, whereas bacterial infectious lesions tend to be distally located multiple spherical aneurysms with relatively small diameters (ranging from 2 to 5 mm) [9,14,16]. These morphologic differences could enable precise diagnosis.

Intracranial arterial dissections reveal various imaging appearances, including arterial stenosis, intramural hematoma and aneurysm formation [17]. Clinicians do not have difficulty in diagnosing arterial dissections with typical imaging appearances such as a hyperintense intramural hematoma and an intimal flap. However, in contrast to these findings, it can be difficult to differentiate between infectious vasculitis and arterial dissection in the case of small pseudoaneurysm formation. In particular, the anatomical characteristics of the superior wall of the ICA C2 portion, where hemodynamic stress is high due to the superior and lateral directions of its curve, enable various pathological conditions to form pseudoaneurysms similar to the "blister-like aneurysm" [5]. Thus, we recommend that clinicians raise the possibility of infectious vasculitis in the diagnosis of blister-like aneurysms if the patient has risk factors such as previous fungal infection and steroid therapy, as in our case.

Conclusions

We could not determine where the Aspergillus hyphae originated. Autopsy did not reveal aspergillosis in any other organs. Fungal infection of the sinus cavity might therefore have been present, and we might have noticed a small infection had we examined the paranasal sinuses under a microscope at autopsy. However, we did not consider the possibility of an infectious aneurysm at the time of autopsy, because the maxillary sinus aspergilloma had already been removed at the onset of subarachnoid hemorrhage. Thus, the paranasal sinuses were not adequately evaluated at autopsy. The 3D-CTA revealed no aneurysm in the patient's intracranial arteries before steroid pulse therapy. The immunosuppressive effects of steroid therapy are considered to have facilitated the propagation of the Aspergillus infection and caused hematogenous spread to the superior wall of the left ICA (a portion where fungal infectious aneurysms tend to occur, and where fragility due to atherosclerosis might have contributed to the destruction of the arterial wall).
Feasible Ratio Determination of the Thermal Oil Equipment Waste-Heat Recovery

The applicability of recovering the thermal energy of thermal oil equipment flue gases relates to heating the amount of air required for fuel burning, and to technical and economic feasibility. The objective of this paper is to define the optimal value of the regenerative-system air heating temperature and to study the impact of this temperature on the efficiency of the heat power unit. The problem is solved by an algorithm for calculating the optimal air heating temperature. The methods of mathematical modelling of complex heat exchange, numerical solution of the optimization problem, non-linear programming, and feasibility study were used in the research. The results of a study of the influence of the heating temperature of the air required for fuel combustion on the technical and economic efficiency of thermal oil equipment are presented.

Introduction

In the oil and gas industry, thermal oil equipment, which uses diathermic oil as a heat transfer fluid and fuel burned in the combustion chamber as an energy carrier, serves as the high-temperature boiler unit of heat supply systems. The uniqueness of this method is that a high heating temperature of the heat transfer fluid (up to +350 °C) is obtained at low pressure (up to 5 bar) [1]. This reduces the equipment cost and increases operating reliability. The use of diathermic oil in boilers has a number of advantages compared to hot water boilers: heating to high temperatures at low pressures; no water treatment plant is required; freedom from metal erosion and corrosion; and the possibility of operating in fully automatic mode. Fuel combustion in the thermal oil equipment chamber is enhanced by complete combustion of the fuel at minimum excess air, together with a feasible degree of waste-heat recovery.

Problem statement

The main task for the equipment under study is the efficient recovery of the thermal energy of the flue gases (at a feasible heat-recovery ratio) with minimum investment. Solving this problem reduces the costs of fuel and energy resources. Using the heat of the exhaust gases leaving heat processing units to heat air in regenerative devices is the main measure for improving their efficiency [2-6]. However, this increases the construction costs of the devices; it is therefore appropriate to determine the optimal temperature for heating the air used for fuel combustion.

Theory

The thermal oil unit Lavart 500 DMH produced by CJSC "OmZIT" (figure 1) [7] comprises the heat regenerator: 1 is the burner, 2 is the housing of the thermal oil unit, 3 is the outer coil, 4 is the inner coil, 5 is the housing insulation, 6 is the slot joining the flue-gas discharge pipe, 7 is the explosive valve, and 8 is the inspection window. The heat exchange surfaces form a concentric spiral consisting of rings made of several seamless alloy tubes. The first ring-shaped spiral forms the basis of the boiler furnace space. The second ring-shaped spiral forms a flow channel for the flue gases. The coils are in a leak-tight chamber forming the thermal oil equipment body. The maximum power of the thermal oil equipment is 0.5 MW when burning natural gas with a low heating value of 35.9 MJ/m³; the fuel consumption is 0.016 m³/s, the efficiency is 86.4%, the air supply temperature for gas combustion is 50 °C, and the flue gas temperature is 337 °C.
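As an order-of-magnitude check on the recovery potential, the flue-gas heat available for air preheating can be estimated from the figures above. The specific flue-gas volume, its heat capacity, and the assumed stack temperature below are typical values chosen for illustration, not data from the paper.

```python
# Rough estimate of the flue-gas heat recoverable for air preheating.
# Only the 0.016 m3/s fuel rate, the 337 C flue-gas temperature and the
# 0.5 MW rating come from the text; the rest are assumed typical values.
B = 0.016          # fuel consumption, m3/s (from the unit's specification)
V_g = 10.6         # flue gas per m3 of natural gas, m3/m3 (assumed)
cp_g = 1.42e3      # flue-gas volumetric heat capacity, J/(m3*K) (assumed)
t_g_in, t_g_out = 337.0, 150.0   # regenerator inlet / assumed stack temperature, C

Q_recoverable = B * V_g * cp_g * (t_g_in - t_g_out)   # W
print(f"Recoverable heat: {Q_recoverable/1e3:.1f} kW "
      f"({Q_recoverable/0.5e6:.1%} of the 0.5 MW rating)")
```

With these assumptions the recoverable duty comes out near 45 kW, roughly 9% of the unit's rating, which is consistent in magnitude with the efficiency gains reported later in the paper.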
The heat regenerator is installed downstream of the thermal oil unit, on the exhaust pipe, to heat the air (figure 2), where:
δ is the wall thickness of the regenerator heat exchanger, m;
d_r is the regenerator inner diameter, m;
l_r is the length of the regenerator main heat-exchange section, m;
d_p is the inner diameter of the discharge pipe, m;
t_a1, t_a2 are the air temperatures at the inlet and outlet of the regenerator, °C;
t_g1, t_g2 are the gas temperatures at the inlet and outlet of the regenerator, °C.

The optimum combustion-air temperature can be defined from the condition of minimum capital investments and operating costs for flue-gas heat recovery and fuel combustion [8], I = A_f·B + C_r·F → min, where:
I is the total annual cost of the regenerator and burnt fuel, RUB/year;
t_a is the air temperature in the thermal oil heating unit burner, °C;
A_f, B are the annual fuel cost coefficient and the fuel consumption, (RUB/m³)·(s/year) and m³/s;
C_r is the cost of 1 m² of the regenerator heating surface, RUB/(m²·year);
F is the regenerator heating surface, m²;
C_f is the cost of 1 m³ of the natural fuel, RUB/m³;
τ is the operating time of the thermal oil equipment over a year, s/year;
Q_usf is the useful thermal energy output by the thermal oil equipment, W;
Q_br is the conductive heat loss through the unit brickwork into the environment, W;
Q_rad is the radiation heat loss through the open inspection window, W;
Q_ac is the heat lost to accumulation in the brickwork when bringing the thermal oil equipment out of the cold condition, W;
Q_l is the low heating value of natural gas, J/m³;
R_2 is the loss rate due to mechanical incompleteness of combustion;
S_f1, t_f1 are the average heat capacity and the temperature of the fuel, J/(m³·K) and °C;
V_g is the flue-gas amount per unit of fuel, m³/m³;
t_g, H_g are the temperature and heat capacity of the thermal oil unit flue gases, °C and J/(m³·K);
R_1 is the heat of unburned CO in the flue gases, J/m³;
V_a is the air amount required for combustion of a unit of fuel, m³/m³;
H_a2 is the air heat capacity at the regenerator outlet, J/(m³·K);
δt_a is the air temperature drop on the way from the regenerator to the unit burner due to heat loss into the environment, °C;
I_r is the capital investment for the construction of 1 m² of the regenerator heating surface, RUB/m²;
D_r is the investment discount rate, 1/year;
O is the depreciation rate, 1/year;
N_d.u. is the power required for maintenance of 1 m² of the regenerator heating surface, W/m²;
Z is the safety factor on the supply, consumption, and power capacity of the exhaust and draught units;
C_d.u. is the cost of the exhaust and draught units, RUB/W;
C_e is the electricity cost, RUB/(W·s);
η_a is the coefficient accounting for the regenerator air losses;
H_a1 is the air heat capacity at the regenerator inlet, J/(m³·K);
K is the heat transfer coefficient, W/(m²·K);
ε_Δt is the correction factor for the complex heat-exchange scheme;
υ is the average temperature difference.
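A minimal numerical sketch of the optimal-temperature search, using the I = A_f·B + C_r·F structure above with a strongly simplified heat balance, might look as follows. Every numerical value and the linear efficiency model are assumptions for illustration, so the printed optimum need not match the value reported in the results below.

```python
# Minimal sketch of the optimal air-preheat temperature search.
# Cost structure I = fuel cost + regenerator cost; all values assumed.
import numpy as np
from scipy.optimize import minimize_scalar

Q_usf = 0.43e6        # useful heat output, W (approx. 0.5 MW * 86.4%)
Q_l   = 35.9e6        # lower heating value, J/m3 (from the text)
V_a, V_g = 9.5, 10.6  # air / flue gas per m3 of fuel, m3/m3 (assumed)
cp_a, cp_g = 1.30e3, 1.42e3   # volumetric heat capacities, J/(m3*K) (assumed)
t_a1, t_g1 = 20.0, 337.0      # air inlet / gas inlet temperatures, C
K_ht = 25.0           # regenerator heat-transfer coefficient, W/(m2*K) (assumed)
tau = 6.0e6           # operating time, s/year (assumed)
C_f = 6.0             # fuel cost, RUB/m3 (assumed)
C_r = 3000.0          # annualized cost of 1 m2 of heating surface, RUB/(m2*yr) (assumed)

def total_cost(t_a):
    eta = 0.864 + 0.0004 * (t_a - 50.0)       # efficiency gain from preheat (assumed)
    B = Q_usf / (eta * Q_l)                   # fuel consumption, m3/s
    Q_reg = B * V_a * cp_a * (t_a - t_a1)     # duty to heat the combustion air, W
    t_g2 = t_g1 - Q_reg / (B * V_g * cp_g)    # gas outlet temperature, C
    dT1, dT2 = t_g1 - t_a, t_g2 - t_a1        # counterflow terminal differences
    lmtd = (dT1 - dT2) / np.log(dT1 / dT2) if abs(dT1 - dT2) > 1e-6 else dT1
    F = Q_reg / (K_ht * lmtd)                 # required heating surface, m2
    return C_f * B * tau + C_r * F            # RUB/year

res = minimize_scalar(total_cost, bounds=(30.0, 300.0), method="bounded")
print(f"Optimal air-heating temperature ~ {res.x:.0f} C")
```

The tradeoff is the one the paper describes: preheating saves fuel at a roughly constant marginal rate, while the required heating surface grows quickly as the air temperature approaches the gas temperature.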
The heat balance of the thermal oil equipment is described by differential equation (1), where D and G are coefficients depending on the ratio (t_g2 − t_a1)/(t_g1 − t_a2); H_g1, H_g2 are the gas heat capacities at the regenerator inlet and outlet, J/(m³·K); η_g is the coefficient accounting for the heat losses through the regenerator enclosing walls into the environment; δt_g is the exhaust-gas temperature decrease on the way to the regenerator; and Θ is the coefficient accounting for the dilution of the exhaust gases with air on the way to the regenerator.

Experimental results

The numerical study of the air heating temperature t_a revealed that, as t_a increases: the total capital investments and operating costs of the regenerator and burned fuel are reduced (figure 3); the fuel consumption decreases (figure 4); the regenerator heating surface increases (figure 4); and the thermal oil equipment efficiency η improves (figure 5, which shows the dependence of η on t_a).

Results discussion

The results (at the calculated optimum air heating temperature t_a.opt = 226 °C) showed the following: during thermal oil equipment operation, the fuel consumption and the discounted costs are reduced by up to 15% and 10%, respectively, thereby increasing the efficiency by up to 10%. The optimum air heating temperature is not high, which makes it possible to use inexpensive materials in manufacturing and operating the heat regenerator.

Conclusions

Heating the air in the regenerator to the optimal value calculated by the proposed algorithm increases the technical and economic effect of the thermal oil equipment operation, taking into account the minimization of investments and operating costs. The presented algorithm determines the optimal air heating temperature considering the operating conditions of the thermal oil equipment, the cost of the regenerator and its design, and the type of fuel burned. The results of the numerical studies verify the feasibility of the algorithm for designing boiler units in the oil and gas industry.
ACSL6 Is Associated with the Number of Cigarettes Smoked and Its Expression Is Altered by Chronic Nicotine Exposure

Individuals with schizophrenia tend to be heavy smokers and are at high risk for tobacco dependence. However, the nature of the comorbidity is not entirely clear. We previously reported evidence for association of schizophrenia with SNPs and SNP haplotypes in a region of chromosome 5q containing the SPEC2, PDZ-GEF2 and ACSL6 genes. In this study, analysis of the control subjects of the Molecular Genetics of Schizophrenia (MGS) sample showed a similar pattern of association with the number of cigarettes smoked per day (numCIG) for the same region. To further test whether this locus is associated with tobacco smoking as measured by numCIG and FTND, we conducted replication and meta-analysis in 12 independent samples (n > 16,000) for two markers in ACSL6 reported in our previous schizophrenia study. In the meta-analysis of the replication samples, we found that rs667437 and rs477086 were significantly associated with numCIG (p = 0.00038 and 0.00136 respectively) but not with FTND scores. We then used in vitro and in vivo techniques to test whether nicotine exposure influences the expression of ACSL6 in brain. Primary cortical culture studies showed that chronic (5-day) exposure to nicotine stimulated ACSL6 mRNA expression. Fourteen days of nicotine administration via osmotic mini pump also increased ACSL6 protein levels in the prefrontal cortex and hippocampus of mice. These increases were suppressed by injection of the nicotinic receptor antagonist mecamylamine, suggesting that elevated expression of ACSL6 requires nicotinic receptor activation. These findings suggest that variations in the ACSL6 gene may contribute to the quantity of cigarettes smoked. The independent associations of this locus with schizophrenia and with numCIG in non-schizophrenic subjects suggest that this locus may be a common liability to both conditions.

Introduction

Smoking is an important public health problem, and smoking-related diseases take a heavy toll on society. Many studies have shown that smoking is addictive and that both genetic and environmental factors influence smoking behaviors. In recent years, with the application of the genome-wide association study (GWAS) approach, genetic studies of smoking have made significant progress. One example is the identification of the CHRNA5/CHRNA3/CHRNB4 locus as a risk factor for smoking quantity [1-3]. However, this and other established loci explain only a small proportion of the heritability observed for smoking behaviors. Many more genes remain to be identified for their effects on smoking behaviors.

Smoking is highly prevalent among individuals with a variety of psychiatric disorders [4,5]. Among them, the comorbidity with schizophrenia is particularly high: smoking prevalence in those with a schizophrenia diagnosis is 3-4 times higher than in the general population, and most individuals with schizophrenia smoke more heavily than smokers in the population at large [6]. Several hypotheses have been proposed to explain these phenomena, including a self-medication theory [7,8]. It is possible that schizophrenic patients smoke to enhance cognition and to inhibit side-effects of neuroleptic drugs, or that schizophrenia carries an inherited vulnerability to heavy smoking behavior. In a previous study, we found evidence that the SPEC2/PDZ-GEF2/ACSL6 locus is associated with schizophrenia [9].
In the present study, in our analyses of smoking phenotypes for the control subjects of the Molecular Genetics of Schizophrenia (MGS) sample, we found substantial association of the acyl-CoA synthetase long chain 6 (ACSL6) gene with a phenotype based on a categorized number of cigarettes smoked per day (numCIG). To verify whether this locus is associated with numCIG, we initiated replication studies using 12 other independent samples.

It is unknown how ACSL6 variants might affect tobacco smoking, and there are no probes available to assess levels of ACSL6 in the brains of humans. Because nicotine is a major psychoactive ingredient in tobacco that is self-administered in both humans and rodents [10], we used rodent models to determine whether chronic nicotine exposure alters ACSL6 expression in brain areas involved in addictive behaviors and cognition. We conducted in vitro and in vivo studies, measuring the effects of chronic nicotine exposure on ACSL6 mRNA expression in rat primary cortical culture and assaying levels of ACSL6 protein in mouse prefrontal cortex (PFC), hippocampus (HIP), ventral tegmental area (VTA) and nucleus accumbens (NAC) following chronic nicotine exposure in the presence or absence of a nicotinic receptor antagonist, mecamylamine. In this article, we report our findings from these experiments.

ACSL6 association with numCIG in the MGS controls

In our analysis of the interval covering the SPEC2, PDZ-GEF2 and ACSL6 genes (about 800 kb), where we reported a long-range haplotype association with schizophrenia [9], 69 of the 145 markers have a p value ≤ 0.05 for numCIG, far in excess of chance expectations. In contrast, only 11 markers reach nominal significance for FTND (Figure 1). The lowest p value (8×10⁻⁵) is observed at rs6870930 for numCIG (beta = −0.156, r² = 0.0096), which is located in the PDZ-GEF2 gene. Of the 24 markers typed in our previous schizophrenia study [9], rs667437 and rs477086 in the ACSL6 gene are amongst the markers with the lowest p values for numCIG (p = 0.0005; see Figure 1 and Table 1). However, the association with FTND is not significant. For both markers, the alleles significantly associated with a greater number of daily cigarettes smoked (allele G for rs667437 and allele C for rs477086) reside on the risk haplotypes associated with schizophrenia in our previous study.

Nicotine stimulates ACSL6 mRNA expression in rat cortical culture

To follow up the finding that ACSL6 is associated with the number of cigarettes smoked, we next asked whether nicotine in tobacco might affect expression of ACSL6 in the central nervous system. We used animal models to determine whether nicotine affects the expression of ACSL6 in comparative rodent brain areas thought to regulate nicotine/tobacco use and involved in schizophrenia. We first conducted chronic nicotine stimulation experiments in primary rat cortical cultures using 10 and 100 μM nicotine, and measured the mRNA expression of ACSL6 by real-time quantitative PCR. There is a main effect of nicotine exposure on mRNA expression of ACSL6 (F(2,6) = 78.844, p < 0.001; Figure 3). A post hoc t-test reveals that on day 5, the mRNA expression in the nicotine-treated samples is significantly higher than that in the saline controls, independent of the concentration of nicotine (t(8) = −3.579, p < 0.01). The differences between the two doses are marginal (t(8) = 2.293, p = 0.051 for day 3 and t(8) = 2.278, p = 0.052 for day 5). These results suggest that nicotine stimulates ACSL6 mRNA expression in cortical cells.
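The main-effect test reported above is a one-way ANOVA across the exposure conditions. For illustration, a minimal sketch of that kind of test follows; the expression values are invented for demonstration and are not the study's data.

# Illustrative one-way ANOVA across nicotine conditions, analogous in form
# to the main-effect test reported above (F(2,6)). Values are made up.
from scipy.stats import f_oneway

saline = [1.00, 0.95, 1.05]     # relative ACSL6 expression, vehicle
nic_10 = [1.60, 1.72, 1.55]     # relative ACSL6 expression, 10 uM nicotine
nic_100 = [1.80, 1.95, 1.70]    # relative ACSL6 expression, 100 uM nicotine

f_stat, p_value = f_oneway(saline, nic_10, nic_100)
print(f"F(2, 6) = {f_stat:.2f}, p = {p_value:.4f}")  # df: 3 groups, 9 samples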
Rat GAPDH and TATA box binding protein (TBP) primers were used as internal controls. Results from GAPDH and TBP did not differ significantly; therefore, only the TBP data are shown (Figure 3).

Nicotine increases ACSL6 protein levels in mouse prefrontal cortex (PFC) and hippocampus (HIP)

We further tested whether nicotine alters ACSL6 protein expression following in vivo nicotine exposure. We chronically administered 36 mg/kg/day nicotine via mini-pumps for 14 days, a dosing regimen that has been shown to result in dependence-like behavior in mice [11,12]. On day 15, mice received a challenge s.c. injection of 1 mg/kg mecamylamine or saline vehicle 30 min prior to brain harvest. Four brain areas (PFC, HIP, NAC and VTA) were dissected and processed for Western blotting to quantify the expression levels of ACSL6 protein. The results are shown in Figure 4. There is a significant interaction of nicotine pretreatment with mecamylamine challenge in the PFC (Figure 4A; F(1,11) = 16.420, p = 0.004) and HIP (Figure 4B; F(1,27) = 8.290, p = 0.008), but not in the NAC (Figure 4C) or VTA (Figure 4D) after chronic nicotine treatment. Post hoc t-tests reveal that animals receiving chronic nicotine show a significant increase in ACSL6 protein expression in the PFC (t(4) = 4.872, p = 0.008) and HIP (t(16) = 4.583, p < 0.001) of NIC-SAL compared to SAL-SAL control animals. This was not observed following a withdrawal-precipitating dose of the non-selective nicotinic antagonist, mecamylamine. Mecamylamine treatment alone does not change the expression of ACSL6 protein in the brain regions tested (F's < 1.0 in SAL-SAL compared to SAL-MEC mice), but it does reverse nicotine-associated increases in ACSL6. Mice receiving an injection of mecamylamine following 14 days of chronic nicotine treatment (NIC-MEC) produce significantly less ACSL6 protein than saline-injected controls (NIC-SAL) in both the PFC (t(4) = 5.38, p = 0.006) and HIP (t(12) = 3.221, p = 0.007). Together these data suggest that continued activation of nicotinic acetylcholine receptors is necessary for the increases of ACSL6 protein in these brain regions.

Discussion

In our analyses of the numCIG and FTND phenotypes in the MGS control subjects, we found that multiple markers in the broad region containing the SPEC2, PDZ-GEF2 and ACSL6 genes showed substantial association with the number of cigarettes smoked. While the association signals in these genes did not reach genome-wide significance, the widespread association signals in the region are consistent with the patterns we observed in our previous schizophrenia studies. Given the high rate of smoking in schizophrenic patients, we sought to test whether this locus is independently associated with tobacco smoking as measured by numCIG and FTND scores in non-schizophrenic subjects. Of the many markers typed in this region in the MGS control sample, two overlapped with our previous study [9]: rs667437 and rs477086. The association signals for numCIG from these two markers are among the most significant SNPs identified and are representative of the region. Therefore, we initiated replication of these two ACSL6 markers in a large population twin sample selected from our twin studies (VCU subjects). In our VCU subjects, both markers, rs667437 and rs477086, reached nominal significance, and the risk alleles are the same as those observed in the MGS control subjects. Encouraged by these results, we recruited participation by investigators with independent samples.
Using standard meta-analyses, we demonstrate that both markers are significantly associated with the numCIG phenotype, and the FTND phenotype shows a trend in the same direction. As these studies were performed with non-schizophrenic subjects, the associations observed are independent of schizophrenia diagnosis. The observation that the same alleles in ACSL6 are associated with schizophrenia and with numCIG in non-schizophrenic subjects raises an interesting question as to whether there is a common mechanism underlying these associations. To answer this question, we need a sample with both schizophrenia and cigarette smoking phenotypes. With such a sample, we could estimate how much of the association with schizophrenia can be accounted for by cigarette smoking, and evaluate the relationship between these two separate associations.

ACSL6 encodes a key enzyme that activates polyunsaturated long-chain fatty acids and is involved in lipid metabolism. Its preferred substrates include arachidonic acid, eicosapentaenoic acid and docosahexaenoic acid [13]. Arachidonic acid is a component of arachidonoyl phosphatidylcholine and arachidonic acid-containing inositol phospholipids, which are the major sources of N-arachidonoyl ethanolamine (anandamide) and 2-arachidonoylglycerol (2-AG) [14]. Anandamide and 2-AG are the major endocannabinoids that bind to the cannabinoid receptor 1, a receptor associated with tobacco smoking and other addictive drugs in humans [15-18] and shown to promote nicotine reward in animal model studies [19-21]. We have reported that the cannabinoid receptor 1 target is associated with tobacco smoking and nicotine dependence [18]. Others have provided evidence that the endogenous cannabinoid system plays a major role in drug reward and addiction, including nicotine [22-28]. Intriguingly, it has been proposed that the endocannabinoid system is involved in the etiology of schizophrenia [29], and cannabis use is considered a risk factor for schizophrenia as well [30]. While the exact role of ACSL6 in schizophrenia and tobacco addiction is unclear at this time, these data warrant further investigation into the potential interactions of ACSL6 and the cannabinoid system.

Since nicotine is the major psychoactive ingredient in tobacco and ACSL6 genotype was associated with the number of cigarettes smoked, we sought to characterize whether nicotine affects expression of ACSL6 in brain areas that regulate cognition, attention, and addiction behaviors. Using primary cell culture of rat cortex, we show that nicotine stimulates ACSL6 mRNA expression after 5 days, but not 3 days, of nicotine exposure. These data suggest that repeated nicotine exposure is necessary for nicotine-associated changes in ACSL6. In rodents, humans and non-human primates, the cortex is thought to regulate executive functions that support working memory, cognition and inhibitory control [31-34]. The PFC communicates with the HIP, which is important for declarative memory and for the integration of higher brain signals with information projected from sensory systems. This circuitry is compromised in those with schizophrenia, who are thought to smoke in part to enhance cognition [33]. Utilizing a mouse model of nicotine exposure that we have previously shown to produce nicotine dependence-like behavior as measured by tolerance and withdrawal [11,35], we find that chronic systemic exposure to nicotine increases ACSL6 protein levels in the PFC and HIP of mice.
It is interesting that nicotine does not affect the expression of ACSL6 in the VTA and NAC, areas of the brain that regulate addiction and reward behavior [36-38], but does in the PFC and HIP, which have inputs to the VTA and NAC and are also thought to contribute to tobacco addiction [39-43]. Interestingly, the ACSL6 gene appeared in a list of amphetamine addiction genes from an independent GWAS [44], suggesting that ACSL6 may be involved in addictive behaviors to drugs other than nicotine. The observation that nicotine-associated increases in ACSL6 are inhibited by the nicotinic receptor antagonist mecamylamine suggests that sustained increases of ACSL6 protein require activation of nicotinic receptors. It is not clear whether this effect is due to changes in transcription or protein degradation, and since mecamylamine blocks all nicotinic receptors in brain, further studies should ask which receptor subtypes mediate these effects. Although the mechanism of these findings has yet to be determined, our finding is consistent with several studies that have implicated nicotinic receptors in the etiology of schizophrenia [45-47] and in the quantity of daily cigarettes smoked [1-3,48]. It is not clear whether nicotine-associated increases in ACSL6 expression might serve to improve cognition and/or enhance neuroplasticity associated with nicotine dependence in individuals with schizophrenia. Many studies have shown that smoking can improve cognitive function, memory and attention in both normal controls and schizophrenia patients [49-52], and the effects of cannabinoids on cognitive function and memory [53,54] are thought to be mediated in part by the cholinergic system [55,56]. These findings also warrant future studies to determine whether alterations of ACSL6 in brain might support self-medication, cigarette craving, withdrawal, or primary or conditioned reinforcing effects of cigarettes.

In our previous fine-mapping study of schizophrenia, we found that haplotypes spanning the SPEC2, PDZ-GEF2 and ACSL6 genes were associated with the disease [9]. In the current study, we found that this same genomic interval showed a similar pattern of association with numCIG in the MGS controls, an association that we confirmed in a meta-analysis of 12 independent samples of non-schizophrenic subjects. Since the ACSL6 gene is in high LD with other genes, i.e. SPEC2 and PDZ-GEF2, we cannot exclude the possibility that the observed association signals may reflect the activity of other genes in this region, but our rodent studies suggest that nicotine in tobacco increases expression of ACSL6 message and protein. Given the heavy tobacco use of individuals diagnosed with schizophrenia, our results raise interesting questions as to whether schizophrenic smokers may ingest tobacco to regulate ACSL6. In separate studies, several other genes have been associated with both nicotine dependence [18,20,57] and schizophrenia [58-64]. The SPEC2, PDZ-GEF2 and ACSL6 region may be another locus that contributes to shared susceptibility to schizophrenia and tobacco addiction. Taken together, these studies provide convergent evidence that tobacco use and schizophrenia may share some common underlying mechanisms and that common vulnerability genes such as ACSL6 are regulated by nicotinic receptors.

Human studies

This study was conducted according to the principles expressed in the Declaration of Helsinki. For all human studies, all participants provided written informed consent.
The study protocol, forms, and procedures were approved by Institutional Review Boards/Ethics Committees at Virginia Commonwealth University (VCU subjects), Yale University School of Medicine (the Yale/UConn subjects), University of Virginia (MSTF study) and all the other participating Institutional Review Boards.

MGS controls. The control subjects for the MGS were a population sample selected for the GWAS of schizophrenia [65]. Subjects were sampled proportionally from 25 major population areas. All subjects completed a short online self-report clinical assessment after giving informed consent through an online procedure and prior to venipuncture being arranged. The self-report screen focused on common psychiatric disorders including substance use problems, along with age, sex, height, weight, and the ethnic background of their grandparents. Specific to smoking phenotypes, the subjects were ascertained with the full FTND questionnaire. The information corresponding to the FTND was extracted from the phenotype dataset, and FTND scores were obtained from the participants' answers to the questionnaires. Subjects who had not smoked a whole cigarette (a "No" answer to the question, "Have you ever smoked a whole cigarette?") were excluded from FTND score calculation. To be consistent with the ascertainment of our VCU subjects, only subjects who answered "Yes" to the question "did you smoke cigarettes on a daily basis" and who smoked ≥5 cigarettes per day were included in the analyses of FTND. The numCIG phenotype was constructed using the raw number of cigarettes smoked per day at the time when the subjects smoked most heavily in their lifetime. Based on the raw data distribution, we divided the subjects into 4 categories using these cut-offs: 0 for those who smoked 1 cigarette per day; 1 for those who smoked 2-14 cigarettes per day; 2 for those who smoked 15-25 cigarettes per day; and 3 for those who smoked 26 cigarettes per day or more. For the numCIG phenotype, all subjects reporting smoking were included in the analyses. The MGS controls included both European Americans (MGS_EA) and African Americans (MGS_AA). For this study, there were 1,342 and 1,843 European American (EA) subjects having FTND and numCIG phenotypes respectively, and 454 and 639 AA subjects having FTND and numCIG phenotypes, respectively. The distributions of these phenotypes are shown in Figure S1; both are approximately normal.

The SAGE subjects. The Study of Addiction: Gene and Environment (SAGE) is part of the Gene Environment Association Studies initiative (GENEVA) funded by the National Human Genome Research Institute, aimed at understanding the impact of genes and environments on substance dependence and addiction. The SAGE sample consisted of 3 subsamples: the Collaborative Study on the Genetics of Alcoholism (COGA) [66], the Family Study of Cocaine Dependence (FSCD) [67], and the Collaborative Genetic Study of Nicotine Dependence (COGEND) [1]. Although FTND scores were available for all subjects, the raw number of cigarettes smoked per day was not. Instead, subjects were assigned to one of four categories based on the FTND questionnaire item that groups individuals by the number of cigarettes smoked per day (0 if they smoked 0-10 cigarettes; 1 if they smoked 11-20 cigarettes; 2 if they smoked 21-30 cigarettes; and 3 if they smoked more than 30 cigarettes per day).
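As a concrete illustration of the MGS numCIG categorization described above, the sketch below bins raw cigarettes-per-day counts with the stated cut-offs. The column name and the data are hypothetical, not taken from the study.

# Hypothetical sketch of the numCIG categorization described above.
# 'cigs_per_day' is an assumed column holding the raw number of cigarettes
# smoked per day at the period of heaviest smoking.
import pandas as pd

df = pd.DataFrame({"cigs_per_day": [1, 5, 14, 15, 25, 26, 40]})

# MGS cut-offs: 1 -> 0, 2-14 -> 1, 15-25 -> 2, >= 26 -> 3.
df["numCIG"] = pd.cut(
    df["cigs_per_day"],
    bins=[0, 1, 14, 25, float("inf")],
    labels=[0, 1, 2, 3],
    include_lowest=True,
).astype(int)
print(df)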
The SAGE sample included 2,125 subjects with self-reported European ancestry (SAGE_EA) and 904 subjects with self-reported African ancestry. For the FSCD subsample, only one subject from each family was used.

The Lung cancer and smoking study. This is a GWAS to investigate the genetic determinants of lung cancer risk. This study is also part of the GENEVA initiative and consisted of two samples. The first is the Environment and Genetics in Lung Cancer Etiology Study (EAGLE) [68], a population-based, biologically intensive, case-control study from the Lombardy region of Italy including ~2,000 newly diagnosed lung cancer cases and ~2,000 age-, gender- and region-matched controls. This study is also referred to as the Smoking and Lung Disease (SLD) study. The second is the Prostate, Lung, Colon and Ovary (PLCO) Cancer Screening Trial [69,70], from which ~850 lung cancer cases and ~850 controls matched on age and gender were used. The subjects were ascertained for a variety of smoking phenotypes, including smoking status, persistent smoking, quit attempts, and the Fagerström questionnaire. PLCO participants were all EA, and EAGLE included subjects from Italy who were all of European ancestry. EAGLE is a case-control study and includes 3,937 phenotyped subjects; PLCO is a screening trial with a cohort design and includes 1,651 phenotyped subjects.

The VCU subjects. The subjects from Virginia Commonwealth University were selected from the Mid-Atlantic Twin Registry. In this study, we selected regular smokers (defined as those who smoked at least 7 cigarettes per week for a month or more) from our population twin studies [71-73]. Tobacco smoking and nicotine dependence were assessed by the Fagerström Tolerance Questionnaire (FTQ) and/or FTND [74,75]. All regular smokers with DNA samples were included except when self-reported ancestry was not Caucasian. One of the co-twins was selected randomly for inclusion. The final sample included 2,138 individuals: 1,438 males and 700 females. All subjects were aged 18 to 65 at the time of FTND ascertainment, and all self-reported being of European ancestry.

Mid-South Tobacco Family (MSTF) study. The subjects of the MSTF study, of either AA or EA origin, were recruited primarily during 1999-2004 from the Mid-South states of Tennessee, Mississippi, and Arkansas. Proband smokers were required to be at least 21 years old, to have smoked for at least 5 years, and to have consumed at least 20 cigarettes per day for the last 12 months. Siblings and biological parents of a proband smoker were recruited whenever possible, regardless of their smoking status. The study includes 2,037 participants in 602 nuclear families, with 671 subjects in 200 EA families and 1,366 subjects in 402 AA families. The degree of nicotine dependence of each smoker was ascertained by the three most commonly used measures: Smoking Quantity (SQ; defined as the number of cigarettes smoked per day), the Heaviness of Smoking Index (HSI; 0-6 scale), and the FTND score (0-10 scale). All three measures have been used consistently in previous reports on nicotine dependence in this sample [57,76-78]. In this study, only the FTND scores and numCIG phenotypes were used.

The Yale/UConn subjects. The subjects involved in the Yale/UConn study were both families and unrelated individuals.
The family samples were recruited from several clinical sites (principally from Yale University School of Medicine and the University of Connecticut Health Center, but also from McLean Hospital and the Medical University of South Carolina) through siblings meeting DSM-IV criteria for cocaine dependence or opioid dependence. The sample of unrelated individuals was ascertained as cases affected by cocaine, opioid, or alcohol dependence and screened controls. The Yale/UConn family sample included 2,129 AAs (including 132 self-reported Hispanics) and 1,706 EAs (including 310 self-reported Hispanics) [79]. Of these, 1,858 subjects from 893 families had genotype and phenotype data. The Yale/UConn case-control sample included 1,912 AAs (including 76 Hispanics) and 1,476 EAs (including 176 Hispanics).

Marker selection, genotyping and genotype imputation. Since our initial goal was to test whether markers associated with schizophrenia were also associated with smoking and nicotine dependence phenotypes, we compared the markers from our schizophrenia study with those included in the Affymetrix 6.0 chipset (the MGS samples were first genotyped with this chipset) in the PDZ-GEF2/ACSL6 region. We found that only 2 of the markers (rs667437 and rs477086) used in our previous schizophrenia study were in the 6.0 chipset. Although the association of these two markers with schizophrenia was not among the strongest observed in the schizophrenia study, they were among the best results in our initial association analyses of the MGS sample. Therefore, only these 2 markers were selected for the present study. For the GWAS datasets (MGS, SAGE and Lung Cancer), DNA preparation and genotyping have been reported previously. The MGS subjects were typed using an Affymetrix platform (SNP 6.0), and the SAGE and Lung Cancer subjects were typed using an Illumina platform (Human1M-Duo). For these datasets, we accepted the quality-filtering procedures of each individual study and used the genotypes directly after download. The VCU, Yale/UConn and MSTF subjects were genotyped by each group using the TaqMan method. For the 2 SNPs used in this study, the assays were designed and synthesized by Applied Biosystems (Foster City, CA, USA), and standardized procedures recommended by the manufacturer were used. The Illumina marker set included only one of the two SNPs used in this study: rs477086. The genotypes for rs667437 were imputed using the fastPHASE program [80], with the MGS_EA or MGS_AA as a reference panel for the EA and AA samples respectively. Imputations were also conducted with HapMap subjects as references, and the imputed genotypes were almost identical to those obtained using MGS samples as references.

Association and meta-analysis. Association analyses were performed for each sample with the PLINK software package [81]. We used two phenotypes, FTND score and numCIG, where numCIG was a categorical phenotype; both were treated as quantitative traits in linear regression. For population samples, we used linear regression with sex, age and ethnicity (Hispanic) as covariates. For family samples, we used the QFAM module and the within-family statistics (MSTF, Yale_EA_Fam and Yale_AA_Fam samples). For GWAS datasets, we adopted the principal components used in the original studies. Meta-analyses were performed with the GWAMA software package [82] for the replication samples only. Summary statistics (beta, standard error and sample size) were extracted from the individual analyses and used in the meta-analyses, as illustrated in the sketch below.
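A minimal sketch of an inverse-variance fixed-effect meta-analysis over per-study summary statistics follows. It illustrates the statistic that such packages compute; it is not the GWAMA implementation, and the numbers are invented.

# Minimal sketch of an inverse-variance fixed-effect meta-analysis over
# per-study summary statistics (beta, se). Illustrative numbers only.
import math

def fixed_effect_meta(betas, ses):
    weights = [1.0 / se ** 2 for se in ses]      # inverse-variance weights
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))       # two-sided normal p-value
    return beta, se, p

betas = [-0.16, -0.09, -0.12]   # hypothetical per-study effect sizes
ses = [0.05, 0.07, 0.06]        # hypothetical standard errors
print(fixed_effect_meta(betas, ses))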
The package can perform both fixed-effect and random-effect meta-analyses. In our analyses, since the heterogeneity tests (both Cochran's Q statistic and I²) were non-significant, we reported the results from the fixed-effect analyses. Meta-analysis results were plotted with the R package rmeta.

Animal studies

The experimental protocol was approved by the Institutional Animal Care and Use Committee at Virginia Commonwealth University (University Animal Welfare Assurance Number A3281-01), and all animals were treated according to the Guidelines for the Care and Use of Laboratory Animals, as set forth by the National Institutes of Health. Animals were maintained on a 12-hour light/dark cycle in a temperature- (21 °C) and humidity-controlled vivarium with ad libitum access to food and water. Experiments were performed during the light cycle.

Rat in vitro expression study. Sprague-Dawley timed-pregnant rats were obtained from Zivic Laboratories (Allison Park, PA, USA). On postnatal day 1, the litter was transferred together to the laboratory where brain harvests took place. Mixed neuronal plus glial cultures were prepared as described previously [83,84]. Briefly, the cortex of 4-5 pups was dissected in a sterile saline solution (137 mM NaCl, 5.3 mM KCl, 0.17 mM Na₂HPO₄·7H₂O, 0.22 mM KH₂PO₄ and 0.0012 g/L Phenol Red) under a laminar flow hood, then transferred into a sterile-filtered 0.1% porcine trypsin dissection solution, minced and incubated at room temperature for 10 min. Brain sections were rinsed twice in plating medium (DMEM, 10% FBS, 1% L-glutamine and 1% penicillin/streptomycin) to stop the trypsin reaction, triturated with a cotton-plugged glass pipette, spun for 7 min at 1,000 RPM with medium replaced, and then poured through a 70 μm nylon cell filter (BD Falcon, Bedford, MA, USA) for plating. 1.7×10⁶ cells were seeded into each well of a 6-well plate. Cells were cultured in a 5% CO₂ incubator at 37 °C. Three days after plating, and every 3 days thereafter, 1 mL of medium was removed from each well and replaced with 1 mL of fresh mixed growth medium containing minimal essential medium, 10 mM glucose, penicillin 100 U/mL, streptomycin 100 μg/mL, and 5% horse serum. On day 14 in vitro, the medium was removed and 2.0 mL of medium was quickly replaced to ensure a consistent stimulating volume. Cells treated with 0 μM, 10 μM, or 100 μM nicotine ((-)-nicotine hydrogen tartrate salt, dissolved in 0.9% sodium chloride) were cultured for another 3 or 5 days. At each time point, the medium was removed, wells were washed twice with PBS, and cells were harvested in 200 μl of PBS. Cells from three wells were pooled into an Eppendorf tube. Total RNA was extracted from cells using TRIzol Reagent (Invitrogen, Eugene, OR, USA) and quantified with the Quant-iT RNA Assay Kit (Invitrogen, Eugene, OR, USA). cDNA was synthesized using 2 μg of total RNA and 50 ng of random hexamers according to the first-strand cDNA synthesis protocol provided with SuperScript III RNase H− Reverse Transcriptase (Invitrogen, Eugene, OR, USA). For real-time PCR, samples were analyzed in triplicate 20 μl reactions including 25 ng of cDNA, 250 nM primer, 1× PCR buffer, 2 mM MgCl₂, 0.08 mM dNTPs, 0.01 U/μl Taq polymerase (Invitrogen, Eugene, OR, USA) and 1/10× SYBR Green (Sigma, St. Louis, MO, USA). Primers were designed to the coding sequence using Primer3 (v. 0.4.0). PCR reactions using rat TATA box binding protein (TBP) and GAPDH primers served as internal controls.
The primer sequences for ACSL6, TBP and GAPDH are as follows: ACSL6 forward, 5′-TTTCACGAGCGGTACAACAG-3′; ACSL6 reverse, 5′-GTGTACATCCGCACAAGTGG-3′; TBP forward, 5′-TATAATCCCAAGCGGTTTGC-3′; TBP reverse, 5′-CAGCCTTATGGGGAACTTCA-3′; GAPDH forward, 5′-AAGGGCTCATGACCACAGTC-3′; GAPDH reverse, 5′-CAACGGATACATTGGGGGTA-3′. PCR was conducted and the expression level of each reaction was determined by the CT value. The results from three replicated assays were averaged to produce a single mean CT value for each treatment condition. The relative expression level between ACSL6 and TBP or GAPDH for each condition was calculated by the 2^−ΔCT method, where ΔCT = CT(ACSL6) − CT(TBP or GAPDH) [85].

Mouse in vivo expression study. Male 129SvJ mice were purchased from Jackson Laboratories (Bar Harbor, ME) and were 8-10 weeks of age at the start of the studies. Mice were anesthetized with sodium pentobarbital (45 mg/kg by intraperitoneal injection) and implanted subcutaneously (s.c.) with Alzet osmotic mini pumps (model 2004, Durect Corporation, Cupertino, CA, USA) filled with (-)-nicotine (NIC) or saline (SAL) solution, as described in Jackson et al. [11]. The concentration of nicotine was adjusted according to animal weight and mini-pump flow rate so that mice were infused with 36 mg/kg/day for 14 days. The dose and duration of nicotine exposure were chosen based on previous behavioral studies [11,12] showing that significant tolerance and nicotine withdrawal signs are produced in mice after this treatment regimen. On the morning of day 15, chronic nicotine- and saline-infused mice were injected s.c. with 1.0 or 2.0 mg/kg of a non-selective nicotinic receptor antagonist, mecamylamine (MEC), or saline vehicle (SAL), followed by a 30 min waiting period. There was no significant difference in ACSL6 protein levels between the 1 and 2 mg/kg doses of mecamylamine used across cohorts of animals, so these data were combined to create 4 experimental treatment groups (SAL+SAL, SAL+MEC, NIC+SAL, NIC+MEC). Mice were then euthanized by cervical dislocation. Brains were sectioned and placed immediately in cold extraction buffer for dissection of PFC, HIP, NAC and VTA. Brain tissues were dissected and homogenized as described previously [86]. Protein concentrations were determined using the DC protein assay (Bio-Rad Laboratories, Hercules, CA, USA), and 30 μg of protein were mixed with 6× blue gel loading dye (New England Biolabs, Ipswich, MA, USA) and heated for 5 minutes at 95 °C. Samples were then separated by SDS-polyacrylamide gel electrophoresis on a 10% Tris-HCl gel and subjected to immunoblotting with anti-ACSL6 (1:500, goat; sc-48005, Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA) or anti-GAPDH (1:50,000, mouse; Advanced ImmunoChemical Inc., Long Beach, CA, USA) primary antibodies overnight at 4 °C. Blots were rinsed in TBST and then incubated in secondary antibodies (1:2,000 anti-goat or 1:50,000 anti-mouse, Vector Laboratories, Inc., Burlingame, CA, USA) for 1 hour at room temperature, followed by three 10-min TBST washes. Specific bands were detected by enhanced chemiluminescence (GE Healthcare Bio-Sciences, Piscataway, NJ, USA), exposed to X-ray film, and quantified using ImageJ software (Rasband WS, National Institutes of Health, Bethesda, MD; http://rsb.info.nih.gov/ij/, 1997-2006). ACSL6 protein levels were normalized against GAPDH as a loading control and against vehicle-treated control subjects to enable comparisons across blots.
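As a small worked example of the 2^−ΔCT calculation above, the sketch below computes target expression relative to an internal control; the Ct values are invented for illustration.

# Worked example of the 2^-dCt relative-expression method described above.
# Triplicate Ct values are averaged to one mean Ct per condition; the
# numbers below are illustrative, not the study's data.
def relative_expression(ct_target, ct_reference):
    # 2^-(Ct_target - Ct_reference): expression of the target (ACSL6)
    # relative to the internal control (TBP or GAPDH).
    return 2.0 ** (-(ct_target - ct_reference))

ct_acsl6 = sum([26.1, 26.3, 26.2]) / 3
ct_tbp = sum([24.0, 24.1, 23.9]) / 3
print(f"ACSL6 relative to TBP: {relative_expression(ct_acsl6, ct_tbp):.3f}")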
Real-time backpack detection in visual surveillance based on vertical contour analysis

In this paper, we propose a method to detect backpacks and sling bags that pedestrians are carrying in visual surveillance. Real-time analysis of the passive recordings provided by surveillance cameras can yield valuable information for a variety of applications. We detect backpacks without assuming protrusions and without any training data. We propose an algorithm that finds candidate vertical lines inside or near a human body contour using fuzzy rules. To improve detection accuracy, the area of the carried object is compared across the next several frames, which greatly improves the detection result. As a result, we successfully detect carried objects including backpacks and sling bags. We ran several experiments on the VIRAT ground video dataset, and the results show that our method is promising.

Introduction

Classification of specific objects in crowded areas is a demanding application. In recent years, terrorist activity has increased in crowded areas such as stadiums and halls. Fig. 1 shows the convicts of the 2013 Boston marathon bombing, whose attributes could be described as 'carrying backpack', 'wearing hat' and 'dark clothes'. Governments and intelligence bureaus are focusing on public security. Surveillance cameras are mainstream, and such static video streams can be used as an observation source; object classification, detection and tracking on the stream can provide important and valuable information. Detecting a carried object is challenging when its size is too small or its color is similar to the clothes. Because people can carry many kinds of objects in different ways, recognition is complicated. Moreover, the carrying location is variable: a sling bag can be carried on the shoulder or held by the arms, and a backpack can be carried on both shoulders or only one, which makes it difficult to define the geometrical characteristics of a backpack. In this research, we aim to detect carried objects such as backpacks and sling bags in a real-time surveillance video stream.

Related works

Different methods of carried-object detection have been proposed. One of them (1) defines the geometrical properties of a backpack and a sling-bag strap with the Hough transform and an edge-detection method; the geometrical properties are calculated separately, and the contour is approximated with a polygon whose accuracy is proportional to the contour perimeter. A re-implementation of (2) uses a Markov random field with a map of prior probabilities for carried objects and a spatial-continuity assumption, from which a segmentation of the carried object is obtained as a MAP solution (3). A motion-based recognition approach has also been investigated (4): after pedestrians are detected in video frames, they are classified as walking or object-carrying based on spatiotemporal analysis of the obtained binary silhouettes; to accomplish this, temporal correspondence-free analysis is performed on binary frames exploiting the periodic motion of pedestrians. Moreover, a neural network trained on negative and positive datasets has been investigated (5): features are extracted from silhouettes and the network is trained on a set of images. Human carrying status has also been examined (6) using general tensor discriminant analysis, which coherently incorporates a Gabor-based human gait appearance model.
Proposed Method

As mentioned in the introduction, our system is designed to track and classify backpacks in street surveillance. It consists of different modules that communicate with each other. The operation flow of our method is shown in Fig. 2.

Pedestrian detection

In recent years, notable progress has been made in detecting and tracking humans in the computer vision field. Since we use a surveillance camera stream, we implement background subtraction. Erosion and dilation are applied to the foreground image, and the grayscale image is histogram-equalized.

Fig. 2. System low-level architecture

The contours of each blob are grouped to mark out an approximating rectangle. In addition, a Gaussian probability is calculated on the aspect-ratio feature. In crowded areas, a blob can be enormous when people are close to each other; therefore, we extract human features using Local Binary Patterns (LBP) (7) and Histograms of Oriented Gradients (HOG) (8). To improve the detection result, we exclude people who appear far from the camera; in the calculation, an approximation based on a normal distribution is used to define the minimum possible person size.

Bag detection

Capturing the connection of an object with human posture is a challenging task: the carried object may occupy a small part of the body, or may not be comparable in height with the human. In this paper, we consider a solution for detecting bags and backpacks from surveillance cameras. Integration of object features and the pedestrian contour is achieved based on foreground subtraction. In several kinds of experiments, we have observed that carried objects create visible vertical lines in or near the human body contour. Depending on the length of these lines and their distance from the human body, a vote is cast to identify the carried item (Fig. 3). A Sobel operator is applied along the X direction to find all vertical edges. The next operation finds vertical lines in the binary image by intersecting (bitwise) the foreground mask with the Sobel X result (a code sketch of this step is given at the end of this section). All vertical lines are represented by four-element vectors (x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the end points of each detected line segment. We then compute the points of these lines located furthest from the contour. If one of the points lies about 10 pixels or more from the contour (a parameter that should be tuned to the video size), or is too close to the bottom part of the body, the line is excluded, because it is usually a pedestrian's leg. Depending on the human movement, the appearance of the bag can change. This method might miss some types of carried objects, such as roller bags or luggage; since compactness is one of the features, an object can still be voted for if it forms a straight contour with the fist. Therefore, detected bag results must be compared with the next frames to improve the result.

Experimental result

We performed several experiments with the VIRAT ground video dataset (9). Its design is more advanced than that of conventional datasets, because its realism and natural-scene footage make it an excellent benchmark; the data are collected from scenes of natural human motion with unconstrained backgrounds. In our experiment, some scenes were captured in a university campus area, where most people carry backpacks or sling bags. Each video was recorded at 23 fps and 1280×720 pixel resolution with a 679 kbps bitrate.
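The following is a minimal sketch of the vertical-line step referenced above, written with OpenCV in Python rather than the authors' C++ implementation. All thresholds and kernel sizes are illustrative assumptions, and the fuzzy voting rules of the paper are not reproduced.

# Sketch: vertical-line candidates from Sobel-X edges restricted to the
# moving foreground, as described above. Thresholds are illustrative.
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2()

def vertical_segments(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = backsub.apply(frame)                                # foreground mask
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadows
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Sobel along X highlights vertical edges.
    sobelx = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
    _, edges = cv2.threshold(sobelx, 60, 255, cv2.THRESH_BINARY)

    # Keep only vertical edges inside the moving foreground (bitwise AND).
    edges = cv2.bitwise_and(edges, fg)

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=25, maxLineGap=5)
    segments = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if abs(int(x1) - int(x2)) <= 3:   # keep near-vertical segments
                segments.append((x1, y1, x2, y2))
    return segments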
The system was implemented in C++ and runs on an Intel i7 3.4 GHz machine. Depending on video quality and pedestrian count, bag detection in each frame takes 38 ms - 70 ms on average. Comparing bag regions with the following three frames takes around 3.2 s - 5.3 s on average, while a two-frame comparison takes 1.1 s - 1.3 s.

Table 1 shows our experimental results. All pedestrians in the VIRAT dataset were counted manually, and detection was then run with differently adjusted comparison durations; in each run, both pedestrians and bags were counted automatically. Analysis with a single frame is fast, but its misdetection rate is high. With a two-frame comparison, misdetection remains high, but the number of undetected bags is reduced slightly. When a sequence of three frames is used, detection of some bags increases and misdetection decreases slightly. Comparing more frames than this increases the number of undetected bags and delays system operation. The experimental results show that comparing a bag frame with the following two frames is efficient in terms of fewer misdetections. The approximate success ratio of carried-object detection is 80.5%, while single-frame detection achieves a 53% success ratio. We also investigated the false detections, several of which are shown in Fig. 8.

Conclusion

We have investigated real-time intelligent surveillance-camera video analytics for carried-object detection. The system achieves an 80.5% success ratio, and it can be further fine-tuned for each view angle and distance of the pedestrian path. Another advantage of the method is that detection is fast enough for real-time computation. Our method can be applied in public spaces such as stadiums, hall entrances and metro stations to provide more accurate information for public security. It is common to rely on a dataset of multiple images featuring people carrying bags, but our method does not have such a constraint and can therefore detect any carried object. Future work will focus on optimization and on the computational speed of the core algorithms.

Fig. 1. A combination of security camera images of the Boston marathon bombing convicts carrying backpacks.

Fig. 3. Backpack rear and frontal detection based on strap and rear-appearance verticals.

Fig. 5. The video player, a bag carrier, and region-of-interest frames.

Fig. 7. Undetected-bag examples. Fig. 8. Misdetection results. Most undetected bags are caused by color contrast, as shown in Fig. 7(a) and (b), where contrast with the background, or the color of the clothes itself, causes difficulties in visual detection. In some rare cases, as in Fig. 7(c), a pedestrian's long hair hides the backpack: long hair lying over the shoulder covers the bag strap or the bag, making it impossible to vote for a bag; sometimes it is hard to discriminate even for a human. In Fig. 7(d), the pedestrian occludes the strap with one hand; if the bag strap is too short or is held by hand, there is not sufficient evidence to vote for a sling bag. Fig. 8 shows some examples of misdetection: in Fig. 8(a) and (b), pedestrians holding hands create a straight vertical line within the human contour; in Fig. 8(c), the pedestrian is wearing clothes with long vertical textures.

Table 1. Detection results for different comparison durations.
Transcranial magnetic stimulation and transcranial direct current stimulation appear to be safe neuromodulatory techniques useful in the treatment of anxiety disorders and other neuropsychiatric disorders

ABSTRACT

Transcranial magnetic stimulation (TMS) has recently been investigated as a possible adjuvant treatment for many neuropsychiatric disorders, and has already been approved for the treatment of drug-resistant depression in the United States and in Brazil, among other countries. Although its use in other neuropsychiatric disorders is still largely experimental, many physicians have been using it as an off-label add-on therapy for various disorders. More recently, another technique, transcranial direct current stimulation (tDCS), has also become available as a much cheaper and portable alternative to TMS, although its mechanisms of action are different from those of TMS. The use of off-label therapeutic TMS or tDCS tends to occur in the setting of diseases that are notoriously resistant to other treatment modalities. Here we discuss the case of anxiety disorders, namely panic and post-traumatic stress disorders, highlighting the uncertainties and potential problems and benefits of the clinical use of these neuromodulatory techniques at the current stage of knowledge.

These techniques have also been studied in neurological conditions such as Parkinson's disease 4 and epilepsy 5. In the last two decades, research on using these tools to treat conditions such as depression 6, mania 7, obsessive-compulsive disorder 8 and post-traumatic stress disorder 9 began to appear.

The use of electric currents for treating psychiatric disorders began in the 18th century with the development of the voltaic pile. However, it was only in the 1960s and 1970s that the noninvasive method of cerebral stimulation known as brain polarization, similar to modern tDCS, was shown to provide improvements in mood and alertness in healthy volunteers, in addition to treating depression 10. Later, this method was abandoned, possibly owing to the advance of psychopharmacology and to the social stigma of electroconvulsive therapy, which hindered the development and emergence of new noninvasive forms of cerebral stimulation.

Despite the controversies involving this early noninvasive method of cerebral stimulation, tDCS began to be used again as a putative neuromodulatory tool in the present century, notably with the studies of Priori 10 in Italy and of Nitsche and Paulus 11 in Germany. Both groups demonstrated that the transcranial induction of a low-intensity direct current, through electrodes placed on the head, increased (anodal stimulation) or decreased (cathodal stimulation) cortical excitability during the period of stimulation, suggesting that tDCS probably produces its effects through changes in the resting membrane potential 12, thus modifying the threshold for firing of action potentials, or even through synaptic mechanisms.
As a neuromodulatory technique, its physical and physiological principles require less complex equipment than TMS: only two electrodes are necessary, one cathode and one anode, which, arranged in different positions, create a flow of low-intensity (1 or 2 mA) direct electric current that covers a specific region of the cerebral cortex, modulating it in accordance with the polarity. The electric current, in turn, flows from one electrode to the other through the scalp and the cortex. By comparison, while TMS can generate strong currents capable of depolarizing the neuron until it reaches the threshold for firing action potentials, tDCS changes cortical activity through weak electric currents, producing changes in the resting membrane potential and consequently in brain activity 13.

One fact that stands out in tDCS is the duration of its physiological effects. The technique is able to decrease or increase cortical excitability for hours after stimulation 11, probably by inducing long-term depression or long-term potentiation in the treated neurons and synapses 14. Because of these long-term changes in cortical excitability, it is commonly applied daily, for 20 to 30 minutes 15, ensuring efficacy and safety. Taking into account its prolonged effects upon cortical excitability, the low intensity of the currents used to modulate brain activity, and the fact that it also allows sham stimulation in experimental protocols 16, tDCS has been widely accepted not only as an off-label treatment but also as a research tool, instead of high-cost equipment such as magnetic stimulators. Furthermore, the technique appears to be quite safe, and there is no reason to date to suspect tDCS to be detrimental to health 17, since it presents a low rate of side effects when used in accordance with the standard procedures recommended by recent clinical studies for the treatment of psychiatric disorders. The only side effects associated with tDCS have been redness of the skin or mild superficial electrolytic burns 18. However, owing to the still short follow-up time of treated subjects, longer-term side effects, if any, are unknown at this time.

APPLICATIONS OF TMS AND TDCS IN PSYCHIATRY

While in the past it was necessary to surgically manipulate the brain in order to modulate its activity in a nonpharmacological way, noninvasive stimulation provides something new: through TMS and tDCS, it is possible to adjust cerebral activity, and apparently even mental processes, with less risk than is inherent in manipulation through neurostimulator implant surgery or the use of drugs 19.

Due to the increasing incidence of mood and anxiety disorders in the global population, researchers have sought increasingly effective, safe and noninvasive investigative and therapeutic techniques, with a lower incidence of adverse effects, for these disorders. According to the World Health Organization, depression and anxiety are among the most prevalent diseases in society, as per the tenth edition of the International Classification of Diseases (ICD-10), and are considered "common mental disorders" due to their high level of comorbidity and their similar therapeutic approaches. Research in the clinical field is needed in order to discover more effective and less invasive new treatments, given that not all patients respond to psychopharmacological or psychotherapeutic interventions 20.
NONINVASIVE TRANSCRANIAL STIMULATION FOR MAJOR DEPRESSIVE DISORDER

Transcranial magnetic stimulation has received FDA approval for the treatment of major depressive disorder 21. Notably, a third of major depressive disorder patients are treatment-resistant, defined by the lack of an adequate response of the symptoms after two or three antidepressant treatments 22. Due to the high prevalence of resistance to treatment and failed antidepressant response, the National Institute of Mental Health developed the Sequenced Treatment Alternatives to Relieve Depression Trial, a systematic protocol for treating depression. According to this protocol, cumulative response and remission rates after two unsuccessful antidepressant treatments are 73% and 47% respectively 23. Based on these data, numerous repetitive TMS (rTMS) studies have been performed on the treatment of major depressive disorder and have shown positive results, suggesting that low-frequency stimulation (1 Hz) of the right dorsolateral prefrontal cortex, or high-frequency stimulation over the left dorsolateral prefrontal cortex, both have antidepressant effects 24.

Double-blind studies have been conducted using anodal tDCS over the left dorsolateral prefrontal cortex, with low intensities between 1 and 2 mA, over 10 days or more, and have shown positive results in reducing depressive symptoms 25,26. In the study by Fregni et al. 25, the results indicated a 69% improvement in symptoms of depression after just five sessions of tDCS over 1.5 weeks, compared to a 30% improvement in the control group, measured on the Hamilton Depression Rating Scale. Similar results were found by Boggio et al. 26, with an improvement of 40.5% in the group that received tDCS compared to 10.4% in the control group after two weeks of treatment. In view of these promising results of tDCS as an antidepressant treatment, future replication of these studies is warranted.

EFFECTS OF TRANSCRANIAL STIMULATION UPON ANXIETY DISORDERS

In addition to depression, the effects of transcranial stimulation on anxiety disorders and related disorders have also been investigated. However, before discussing some of these studies, it should be noted that, in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V, 2014), published by the American Psychiatric Association, some disorders classically recognized by the international community as anxiety disorders have been registered separately, in their own chapters, with separate nosologies. Among these are obsessive-compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and acute stress. This new and controversial classification has been the target of criticism, but it is not the purpose of this review to analyze its pros and cons. Thus, for the description and analysis of some of the main studies investigating the effects of transcranial stimulation on OCD and PTSD, before and after the publication of the DSM-V, a purely descriptive analysis of their clinical results is adopted, without considering whether or not they are anxiety disorders according to this version of the DSM.
NEUROMODULATION AND OBSESSIVE-COMPULSIVE DISORDER

Obsessive-compulsive disorder is characterized by the presence of obsessions and/or compulsions, where obsessions are recurrent and persistent thoughts, impulses or images that are intrusive and unwanted, while compulsions are repetitive behaviors or mental acts that an individual feels compelled to perform in response to an obsession or according to a set of rules that must be rigidly applied. Because it is a chronic disease characterized by obsessions and compulsions, OCD causes discomfort to the patient and his/her family 27 and has been ranked as the fourth most common psychiatric disorder, affecting approximately 1-3% of the global population 28. It is also known that the disease is associated with dysfunction in the frontostriatal circuit, including the dorsolateral prefrontal cortex, orbitofrontal cortex and medial prefrontal cortex, as well as the supplementary motor area, supplemental gyrus, anterior cingulate gyrus and basal ganglia 29. Interestingly, only 40-60% of patients respond to pharmacological treatment and cognitive behavioral therapy 30, justifying the use of new techniques in order to reduce treatment resistance in these patients. Further research is still needed to provide a better understanding of the neural circuits involved in this disorder, since aspects of its etiology and pathophysiology remain unknown. Recent studies with rTMS emphasize that inhibitory, low-frequency (1 Hz) application over the supplementary motor area improves symptoms and increases the motor threshold 31 and intracortical inhibition measured by the paired-pulse technique 32, the latter considered to depend on GABAergic mechanisms. These results corroborate the findings of Gomes et al. 33 who, in a recent randomized, double-blind study of OCD patients, showed that those who received rTMS over the supplementary motor area for two weeks had a 35% improvement in OCD symptoms (Y-BOCS scale). With regard to symptoms of anxiety, there was a reduction of 19.6% in the active stimulation group compared to 9.5% in the group undergoing control stimulation, according to the anxiety scale (Hamilton Rating Scale for Anxiety - HRAS-14), after two weeks of stimulation. As to symptoms of depression, no significant difference was found between the active and control groups on the Hamilton Scale for Depression (HRSD-24). In recent literature reviews, Berlim et al. 34 and Senço et al. 35 also observed that response rates among patients who received low-frequency rTMS over the orbitofrontal cortex or supplementary motor area vary between 13% and 35%, making these areas promising targets for reducing the symptoms of OCD, although studies are still incipient, as pointed out by some authors 14.
NEUROMODULATION AND POST-TRAUMATIC STRESS DISORDER

In addition to OCD, a few studies have also investigated the effects of transcranial stimulation on PTSD. This is a psychiatric condition that occurs in people who have witnessed events involving actual or threatened death or serious injury, or a threat to the physical safety of themselves or others 36, and is characterized by three symptom clusters: re-experience, avoidance and hyperarousal, resulting in social or occupational dysfunction. The symptoms must be present for at least a month, and may last for years. It is also a condition in which exposure to a threat against life results in a set of intrusive memories, where the individual re-experiences events associated with the stress 37. Epidemiological studies estimate that 7.8% of the US population suffer from PTSD at some point in their lives 38, suggesting a psychosocial and economic loss exceeding US$ 3 billion in lost productivity to the US every year 36.

Pharmacological studies have shown that reduced glutamatergic neurotransmission, through AMPA receptor blockade, results in anxiolytic effects. In PTSD patients, this effect results in reduced flashbacks and nightmares, typical symptoms of the disorder. Regarding the hypothalamic-pituitary-adrenal axis, PTSD differs from other anxiety disorders in showing hypocortisolemia. It is not known exactly why this reduction occurs, but it has been found that when cortisol levels are lower in PTSD, the severity of symptoms is greater. However, although many drugs have demonstrated therapeutic benefit in humans with PTSD 39 and many of these drugs have been shown to be capable of preventing anxious behavior and cognitive impairment in stressed rats 40, not all patients respond to pharmacological treatment options for PTSD. For this reason, non-pharmacological treatments and noninvasive methods such as rTMS and tDCS have been tried 41.

A few studies have shown that administration of low-frequency rTMS over the right dorsolateral prefrontal cortex is beneficial for improving the symptoms of PTSD 9, by decreasing cortical excitability. Garcia-Toro et al. 42 demonstrated that refractory PTSD patients showed an improvement of clinical symptoms after 10 sessions of low-frequency rTMS (1 Hz) applied to the right dorsolateral prefrontal cortex. In a complementary manner, Osuch et al. 43 demonstrated that patients who had 20 sessions of low-frequency rTMS (1 Hz) over the right dorsolateral prefrontal cortex together with PTSD exposure therapy showed a reduction in the symptomatic and psychophysiological effects of PTSD, as assessed by the Hamilton Depression Rating Scale, the Impact of Events Scale and the Clinician-Administered PTSD Scale. Curiously, a group of researchers has been conducting experiments using high-frequency rTMS (10 Hz) instead of low-frequency rTMS in patients with PTSD, and has also reported positive results 9. Such experiments, however, still need to be replicated by other researchers, given variable and conflicting results.
Correlation with neuroimaging studies

Neuroimaging studies on rTMS and PTSD have shown increased oxygenation in the right prefrontal cortex when participants are exposed to experiences that remind them of the traumatic events, suggesting that right-sided overactivity in PTSD is associated with the role of the right hemisphere in anxiety and other adverse emotional experiences 44. Thus, low-frequency rTMS (1 Hz) might decrease activity in the cortical areas of the right hemisphere, which in turn might improve the abnormalities and reduce the cerebral functional asymmetries associated with PTSD 43. Hyperactivation of the amygdala and the rostral region of the cingulate cortex is also observed in these patients, and the higher the activation of these regions, the greater the severity of the symptoms of PTSD.

These findings therefore suggest that rTMS over the right dorsolateral prefrontal cortex is well suited to become an effective tool in the treatment of PTSD; moreover, there is evidence that rTMS produces anxiolytic action in humans 45. Berlim et al. 46 reviewed several randomized controlled studies conducted between 1995 and 2013 and found that studies on PTSD patients in which rTMS was applied over the right dorsolateral prefrontal cortex pointed to significant differences from patients undergoing sham rTMS. In addition, there were differences in the symptoms of anxiety and depression among PTSD patients before and after rTMS. Additionally, Karsen et al. 47 conducted a review of published reports including a total of 132 patients to evaluate the efficacy of rTMS in the treatment of PTSD. Based on the review, it was found that the variables most often studied were: a) treatment of the right or left cerebral hemisphere; b) stimulation frequency (0.3, 1, 5, 10 or 20 Hz); c) anatomical location; d) number of stimulation pulses; e) combination of rTMS with exposure. All studies stimulated prefrontal regions.

Neuromodulation for PTSD and the right-left dilemma

Boggio et al. 48 demonstrated that high-frequency rTMS (20 Hz) of either the right or left cerebral hemisphere can be effective in reducing the symptoms of PTSD. In contrast, some authors have observed that low-frequency rTMS (1 Hz) over the right dorsolateral prefrontal cortex 43 seems to be more effective. Again, however, Cohen et al. 9, upon comparing low-frequency rTMS (1 Hz) and high-frequency rTMS (10 Hz) over the right dorsolateral prefrontal cortex, concluded that high-frequency rTMS produced greater improvement of PTSD symptoms than low-frequency rTMS (29.3% and 10.4%, respectively). Furthermore, Rosenberg et al. 49 compared the effects of low-frequency and high-frequency rTMS (1 Hz and 5 Hz, respectively) and found low-frequency rTMS over the left dorsolateral prefrontal cortex to be efficient in reducing depressive symptoms, but not PTSD symptoms.
Although studies have demonstrated effects on cortical activation for both high-frequency and low-frequency stimulation, and treatment of both the right and left dorsolateral prefrontal cortices has been shown to reduce symptoms of PTSD, it is not yet understood how stimulation of the dorsolateral prefrontal cortex affects this neural circuitry. Regarding the mechanism of action of rTMS in PTSD, it is known that: a) stimulating the right dorsolateral prefrontal cortex at high frequency activates the hypothalamic-pituitary-adrenal axis, inhibiting excessive autonomic response and suppressing activity of the amygdala 9; b) stimulating the right dorsolateral prefrontal cortex at low frequency inhibits the right hemisphere, reducing the hyperactivity of the right prefrontal cortex in patients with PTSD. Thus, both low-frequency and high-frequency rTMS, when applied to the right side, are potentially well suited to reduce the symptoms of PTSD.

Limitations of PTSD and PTSD-depression studies

Regarding the limitations of PTSD studies, Karsen et al. 47 suggest that, although there are studies in which both frequencies contribute to a decrease in the symptoms of PTSD, one frequency may be better than the other, and further research is therefore needed in this area. Furthermore, the small number of subjects in these analyses limits any generalization of the findings, which increases the need for further studies with larger samples and improved stimulation parameters 50; moreover, the mechanism of action of rTMS in anxiety disorders needs to be further clarified, by combining TMS with neuroimaging 51.

In addition to treatment with rTMS, PTSD is also treated through exposure to objects and events that induce anxiety and memories of aversive episodes 52, but in a controlled environment so that patients know that they are not in danger. Finally, the frequent coexistence of anxiety and mood disorders justifies the concomitant study, for example, of panic disorder (PD) and major depressive disorder with rTMS. Many researchers have been interested in both psychiatric disorders, since they lead to a decrease in functionality, resulting in increased morbidity and suicide rates 53.

NEUROMODULATION AND PANIC DISORDER

Although less numerous, studies have also been published on the effects of transcranial stimulation in patients with PD. For example, low-frequency rTMS (1 Hz) over the right dorsolateral prefrontal cortex seems to lessen the symptoms of panic in these patients 32. There seems to be an activation asymmetry, such that, in PD patients, the right hemisphere appears to be more active than the left in the dorsolateral prefrontal cortex area 54.

Right-sided low-frequency rTMS and PD

To test the clinical effects of low-frequency rTMS (1 Hz) on the right dorsolateral prefrontal cortex of patients with PD and depression who are resistant or intolerant to medication, Zwanzger et al. 55 observed that after two weeks of rTMS treatment there was significant improvement in symptoms of anxiety and depression, corroborating the findings of Mantovani et al. 56 Later, Mantovani et al.
32 demonstrated that, after four weeks of low-frequency rTMS stimulation of the right dorsolateral prefrontal cortex, patients who received real treatment fared better than those who received only sham stimulation. It can therefore be concluded that low-frequency rTMS over the right dorsolateral prefrontal cortex improves symptoms of major depression and anxiety disorders 55. Low-frequency rTMS (1 Hz) over the right dorsolateral prefrontal cortex decreased symptoms of panic according to Li et al. 57, although the difference between patients who received rTMS and sham rTMS, measured by the Panic Disorder Severity Scale, was not significant.

Left-sided high-frequency rTMS stimulation and PD

In contrast, Pallanti and Bernardi 50 concluded that high-frequency rTMS over the right dorsolateral prefrontal cortex reduced symptoms in anxiety disorders and had positive results in patients with PTSD and PD, while the case study by Sakkas et al. 58, in which high-frequency rTMS (10-20 Hz) was administered to the left dorsolateral prefrontal cortex in a patient with PD who was resistant to pharmacological treatment, also showed an improvement in symptoms. These findings are in keeping with other studies that compared high-frequency rTMS over the left dorsolateral prefrontal cortex with the effects of antidepressants. Additionally, Dresler et al. 59 demonstrated improvements in the symptoms of patients diagnosed with PD as a result of three weeks of high-frequency rTMS over the left dorsolateral prefrontal cortex, confirming the study by Guaiana et al. 60, which found an improvement in the symptoms of anxiety on the Panic Disorder Severity Scale.

A limitation of the rTMS method in anxiety disorders is that its impact is on the superficial cortical layers, and it is not possible to directly stimulate more distant cortical and subcortical areas that are relevant to the pathogenesis of anxiety disorders. Thus, further studies are recommended to determine the role of tDCS in the treatment of anxiety disorders, since it is a less focal form of stimulation and may influence deeper neuronal circuits.

In conclusion, both TMS and tDCS appear to be safe and useful neuromodulatory techniques with potential application in the treatment of anxiety disorders, as well as a number of other neuropsychiatric disorders. However, as is well illustrated by this review of anxiety disorders, larger clinical trials are needed if consensus is to be reached regarding indications, optimal treatment protocols and the clinical relevance of these non-pharmacological interventions. Moreover, caution should be exercised to avoid abusive use of these powerful neuromodulatory techniques, due to the uncertainties about their exact mechanism of action and possible long-term side effects.
Instance Transfer Subject-Dependent Strategy for Motor Imagery Signal Classification Using Deep Convolutional Neural Networks

In the process of brain-computer interface (BCI), variations across sessions/subjects result in differences in the properties of the potentials of the brain. This issue may lead to variations in the feature distribution of electroencephalogram (EEG) signals across subjects, which greatly reduces the generalization ability of a classifier. Although the subject-dependent (SD) strategy provides a promising way to solve the problem of personalized classification, it cannot achieve the expected performance due to the limited amount of data, especially for a deep neural network (DNN) classification model. Herein, we propose an instance transfer subject-dependent (ITSD) framework combined with a convolutional neural network (CNN) to improve the classification accuracy of the model during motor imagery (MI) tasks. The proposed framework consists of the following steps. Firstly, an instance transfer learning method based on the perceptive Hash algorithm is proposed to measure the similarity of spectrogram EEG signals between different subjects. Then, we develop a CNN to decode these signals after instance transfer learning. Next, the performance of classification under different training strategies (subject-independent- (SI-) CNN, SD-CNN, and ITSD-CNN) is compared. To verify the effectiveness of the algorithm, we evaluate it on the dataset of BCI competition IV-2b. Experiments show that the instance transfer learning can achieve positive instance transfer using a CNN classification model. Among the three different training strategies, the average classification accuracy of ITSD-CNN reaches 94.7% ± 2.6% and obtains an obvious improvement compared with the contrast models (p < 0.01). Compared with other methods proposed in previous research, the ITSD-CNN framework outperforms the state-of-the-art classification methods with a mean kappa value of 0.664.

Introduction

A brain-computer interface (BCI) is a communication method between a user and a computer that does not rely on the normal neural pathways of the brain and muscles. Motor imagery (MI), one of the paradigms of BCI, is a way of thinking that imitates motor intention without real motor output; that is, the brain imagines the entire movement without contracting the muscles [1]. Research has shown that motor imagery can produce the same changes in sensorimotor rhythms as real movement [2]. This phenomenon causes energy increases or decreases in specific frequency bands of the EEG, which are called event-related desynchronization (ERD) and event-related synchronization (ERS) [3]. The differences in ERD/ERS are commonly used to decode mental intentions, control robots, and conduct rehabilitation training for stroke patients [4]. During this process, accurate decoding of MI is the essential factor that determines the effectiveness and quality of rehabilitation. However, due to differences in physiological structure and physiological condition across subjects/trials, there are obvious variations in the feature distribution of EEG signals. In particular, as a spontaneous potential activity, the MI signal is extremely weak and is always accompanied by nonlinearity and nonstationarity. This poses a major challenge for MI decoding models. With the development of machine learning (ML) and deep learning (DL) technology, more classification models are being widely used for EEG decoding [5].
During the training stage of a classification model, the strategy can be divided into two approaches: subject-dependent (SD) and subject-independent (SI). As shown in Figure 1, the SD strategy is aimed at training a subject-specific model using the subject's own data. In contrast, the SI strategy utilizes data from other subjects to train a generalized decoding model for a new subject [6]. One of the main assumptions of ML and DL is that training data and test data belong to the same feature space and are subject to the same probability distribution, but this assumption is often violated in the field of EEG signal processing. In other words, the SI strategy cannot satisfy the requirements of accuracy and generalization due to individual differences across subjects/sessions. The SD strategy provides an effective way to address this issue; however, it requires long calibration sessions to collect large amounts of high-quality EEG data. All these restrictions greatly limit the application of BCI in practice.

One effective approach to solve this problem is instance transfer learning (ITL) [7], which combines the advantages of the SI and SD training strategies, i.e., training a personalized classification model with enough data. The definition of transfer learning is as follows: given a source domain $D_S$ with learning task $T_S$ and a target domain $D_T$ with learning task $T_T$, transfer learning is aimed at improving the performance of the target predictive function $f_T(\cdot)$ using knowledge from $D_S$. ITL is one of the typical transfer learning methods; it transfers instance knowledge by reweighting the data from $D_S$ to improve the generalization ability of $f_T(\cdot)$. The essence of ITL is not to change the feature space or the properties of the signals in the MI task, but to find the optimal transfer weighting coefficients for the source data by similarity measurement [8-10]. Each transfer weighting coefficient is then applied to the corresponding data from $D_S$. As shown in Figure 2, $w_i^{S_k/T}$ represents the transfer weighting coefficient for the $D_S$ data, where $k$ is the serial number of the subject and $i$ is the $i$-th trial. During the training stage, the weighted data from $D_S$ are combined with the $D_T$ data to train the classifier. On this basis, we can utilize similar EEG data from other subjects or sessions to help reduce system calibration time and train a decoding model for the target subject [11]. For example, Azab et al. proposed a weighted transfer learning method for improving MI task performance, in which Kullback-Leibler divergence is used to measure the similarity between the two feature spaces of the signals. According to the similarity results, weight coefficients are assigned to the source data to mitigate the small-sample problem in classification model training [12]. A Jensen-Shannon ratio method is used to measure the similarity between target data and source data in the work of Giles et al. [13]. Based on this method, they propose a target subject identification framework based on rule adaptation transfer learning, which can reduce the calibration time of an online BCI system by reusing the data with the highest similarity between $D_S$ and $D_T$. However, due to the obvious individual differences across subjects, direct instance transfer may bring negative transfer effects. In addition, traditional measurements do not concentrate on the specific features of EEG data, which affects the performance of the transfer learning.
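To make the weighting scheme concrete, the sketch below assembles a training set that mixes target-subject trials (weight 1) with source-subject trials carrying their transfer weights $w_i^{S_k/T}$. This is a minimal illustration in Python/NumPy; the function and variable names are ours, not the paper's.

```python
import numpy as np

def build_weighted_training_set(source_sets, source_weights, target_X, target_y):
    """Combine weighted source-subject trials with target-subject trials.

    source_sets    : list of (X_k, y_k) arrays, one per source subject k
    source_weights : list of per-trial weight vectors, w_i^{S_k/T}
    Returns stacked data, labels, and per-instance weights for training.
    """
    X_parts, y_parts, w_parts = [target_X], [target_y], [np.ones(len(target_y))]
    for (X_k, y_k), w_k in zip(source_sets, source_weights):
        X_parts.append(X_k)
        y_parts.append(y_k)
        w_parts.append(w_k)          # source trials keep their transfer weights
    return (np.concatenate(X_parts),
            np.concatenate(y_parts),
            np.concatenate(w_parts))
```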
Especially for motor imagery, traditional time-series signals cannot effectively reflect the features of motor intention, but the energy features of the signal represent the feature distribution well. Therefore, choosing transferable instances and assigning transfer weights reasonably is the core research problem of instance transfer learning [14]. In the computer vision field, content-based image retrieval (CBIR) is an important research topic [15]. The goal of CBIR is to find images from the source domain that belong to the same category. An MI spectrogram image contains abundant frequency and energy information, which makes it suitable for extracting features of motor intention. Therefore, we assume that CBIR technology may implement effective data matching across subjects and achieve effective instance transfer from $D_S$ to $D_T$. The perceptive Hash (p-Hash) algorithm is one of the typical CBIR methods; it judges the similarity between different images by transforming them into perceptual hash codes and measuring the distance between those codes [16].

With the development of deep neural network (DNN) technology, EEG decoding based on DNNs has attracted wide attention. Due to their excellent fitting ability and automatic feature extraction, DNNs achieve outstanding results in EEG classification. In Reference [17], a convolutional neural network (CNN) and a variational autoencoder (VAE) were used for a two-class MI classification task. The CNN utilized multiple hidden layers to extract the features, and the VAE was used for feature classification. The CNN-VAE method achieved a 3% improvement in classification accuracy over the best methods in the referenced literature. Lu et al. [18] proposed a novel method based on restricted Boltzmann machines (RBMs) for EEG classification. Fast Fourier transform (FFT) and wavelet packet decomposition (WPD) were used to extract the frequency-domain features of the signals, which served as inputs to the network. Three RBMs were stacked with an additional output layer to train the classification network. The authors verified on public datasets that the classification performance of this network was better than that of state-of-the-art methods (p < 0.01). In a recent study [19], the researchers compared the classification performance of a CNN and a long short-term memory (LSTM) network for classifying the time-frequency domain signals of MI. The authors evaluated the adaptability of the different network structures to individual differences and showed that the CNN provided better performance for detecting differences across subjects, with a classification rate significantly higher than that of the LSTM. In summary, CNNs show satisfactory classification performance in MI-BCI tasks compared with traditional machine learning methods and other networks. However, the limited amount of data hinders the practical application of DNNs. Especially for the SD training strategy, it is difficult to collect enough high-quality EEG data from a single subject. Therefore, we propose a novel instance transfer learning method based on p-Hash to improve the utilization efficiency of data and build a CNN for MI classification. Based on the problems mentioned above, we propose a novel instance transfer learning strategy combined with a CNN for subject-dependent MI classification.
The main contributions of this paper are as follows:

(1) To address the issue of individual differences across subjects/sessions in MI classification, we creatively apply the methodology of content-based image retrieval to EEG classification. Based on this, we propose a novel instance transfer learning (ITL) strategy using the p-Hash algorithm, which is aimed at calculating the transfer weight coefficients between trials from different subjects/sessions.

(2) There are two main limitations of the subject-dependent and subject-independent training strategies: small-scale datasets and large differences in signals across subjects. Therefore, we apply instance transfer learning to optimize the traditional training strategies. Similarity measurement in feature space is performed to calculate the transfer weight coefficients across subjects/sessions, which is aimed at exploring the correlation between different trials. We then expand the training set for the target subject through weighted instance transfer.

(3) To improve the classification performance in MI-BCI tasks, we combine a CNN with the transfer learning strategy under the SD training strategy (ITSD-CNN) to classify MI signals. Experiments show that ITSD-CNN outperforms state-of-the-art methods.

The ITSD-CNN pipeline consists of the following steps. Firstly, we preprocess the raw MI-EEG signals and adopt the short-time Fourier transform (STFT) to transform the raw MI signals into 2-D spectrogram signals. Then, an ITL method based on the perceptive Hash algorithm is proposed to measure the similarity of MI signals between $D_S$ and $D_T$. Next, we build a convolutional neural network to classify the MI data after transfer learning. The BCI competition IV-2b dataset is used to verify the effectiveness of this framework. Our results show that the proposed approach can significantly improve classification performance. Meanwhile, ITSD provides a new training strategy to optimize the performance of SD training.

The rest of the paper is organized as follows. Section 2 explains the materials and methods for ITSD and the CNN. Section 3 presents the experimental results and analysis. The discussion is given in Section 4, and Section 5 concludes the paper.

Description of Datasets. In this paper, we utilize BCI competition IV dataset 2b [20]. This dataset was provided by the BCI Research Institute in Berlin and contains two parts: the standard set and the evaluation set. Nine subjects participated in the experiment, and three channels (C3, Cz, and C4) were used to record EEG at a 250 Hz sampling rate. Each subject was required to imagine the movement of the left or right hand according to the cue, and all of them underwent five experimental sessions. The experimental process is shown in Figure 3.

The potential activity of MI causes variations of energy in the contralateral and ipsilateral cortex, which are recorded by C3, Cz, C4, and the surrounding channels [3]. However, this phenomenon is not clearly reflected in the time domain. To describe the features in a better form, we transform the time-series signals into spectrogram signals after filtering. As shown in Figure 4, the three channels are converted into two-dimensional form and mosaicked into a single image by vertical stacking. The variations along the X-axis and Y-axis represent the trends over time and frequency, respectively, and the depth of color indicates the energy feature.
For one trial, we chose the data from 3 to 7 s (the period of imagery) and set the window size of the STFT to 256. After transformation, all spectrogram images were resized to 64 × 64 for convenience and consistency in the subsequent calculations.

Instance Transfer Learning Based on the Perceptive Hash Algorithm. The spectrogram signal of MI vividly reflects the feature variations, especially the energy of each frequency band. The perceptive hash (p-Hash) algorithm obtains the information to which the human and machine vision systems are most sensitive by means of the discrete cosine transform (DCT) [16]. This transformation concentrates the energy of the image matrix and effectively removes redundant and irrelevant components. Under a specific EEG task, the feature distribution of signals across subjects may differ, but the form of the features is consistent. Therefore, we assume that changes between different MI modes can be effectively perceived and recognized by p-Hash. This paper uses p-Hash to measure the similarity of spectrogram data across subjects. The obtained similarity is then transformed into the ITL coefficient, which is input into the classifier together with the corresponding data.

The implementation of ITL based on p-Hash is as follows. Firstly, some notation should be explained. We define $D_T$ as the target subject's data and $D_S$ as the other subjects' data. Let $G_n^i = \{g_t^i\}_{t=1}^{n} \in D_S^{l \times l}$ be a set of single-trial EEGs represented by spectrograms from $D_S$, and $Q_n^i = \{q_t^i\}_{t=1}^{n} \in D_T^{l \times l}$ the corresponding set for the target subject, where $t$ is the number of EEG trials, $l$ is the dimension of the square matrix, and $i$ represents the $i$-th subject. $\bar{q} = (1/n)\sum_{t=1}^{n} q_t^i$ represents the average spectrogram of the current target subject. Before calculation, the spectrogram images from $D_S$ and $D_T$ are separately resized to 64 × 64 and converted to grayscale. Then, the discrete cosine transform is utilized to compress an $N \times N$ image $s(x,y)$:

$$C(u,v) = \alpha(u)\,\alpha(v)\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} s(x,y)\cos\frac{(2x+1)u\pi}{2N}\cos\frac{(2y+1)v\pi}{2N},$$

where $\alpha(u)$ and $\alpha(v)$ are the DCT normalization coefficients, and $G_{u,v}$ and $Q_{u,v}$ denote the transformed source and target spectrograms, respectively. The energy of the image after the DCT is mainly concentrated in the low-frequency part [21]. Therefore, the 8 × 8 matrix located in the top-left corner is extracted for the subsequent steps. In addition, the mean value of the DCT coefficients is set as the threshold against which each coefficient is compared. By this threshold rule, the two-dimensional matrix is compressed into a one-dimensional matrix $H^i$:

$$h_i = \begin{cases} 1, & b_i \geq m, \\ 0, & b_i < m, \end{cases} \qquad i = 0, 1, \cdots, N-1,$$

where $h_i$ is the bit of the perceptual hash at position $i$, $m$ is the mean value of the DCT coefficients, and $b_i$ is the $i$-th DCT coefficient of the array. The obtained $1 \times n$ matrix $H^i$ is the perceptual hash code [22]. Finally, we calculate the Hamming distance $d_H(H_T, H_S)$ between the perceptual hash codes from $D_T$ and $D_S$ and use it to derive the ITL weight coefficient from the source domain to the target domain. For each trial from a source subject, the weight $w_t^{S_i/T}$ can then be calculated; the calculation process of the transfer weight is shown in Figure 5, and a minimal sketch of the whole pipeline follows.
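The sketch below converts one trial to a stacked 64 × 64 spectrogram, hashes it with the DCT-based p-Hash, and compares it with the target subject's average spectrogram via Hamming distance. It assumes SciPy and OpenCV; the paper does not publish code, the exact distance-to-weight mapping appears only in its Figure 5, and the mapping $w = 1 - d_H/64$ used here is our assumption.

```python
import numpy as np
import cv2
from scipy.signal import stft
from scipy.fftpack import dct

FS = 250  # sampling rate of BCI competition IV-2b

def trial_to_spectrogram(trial):
    """trial: (3, n_samples) raw EEG (C3, Cz, C4) for one MI trial.
    Returns a 64x64 grayscale image stacking the 3 channel spectrograms."""
    imagery = trial[:, 3 * FS:7 * FS]              # 3-7 s imagery period
    panels = [np.abs(stft(ch, fs=FS, nperseg=256)[2]) for ch in imagery]
    stacked = np.vstack(panels)                    # vertical mosaic (cf. Fig. 4)
    img = cv2.resize(stacked, (64, 64))
    rng = np.ptp(img) + 1e-12                      # scale to 0-255 grayscale
    return (255 * (img - img.min()) / rng).astype(np.uint8)

def phash(img64):
    """64-bit perceptual hash: 2-D DCT, top-left 8x8 block, mean threshold."""
    coeffs = dct(dct(img64.astype(float), axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    block = coeffs[:8, :8]                         # low-frequency energy
    return (block >= block.mean()).flatten()       # h_i = 1 if b_i >= m

def transfer_weight(source_img, target_mean_img):
    """Hamming distance between hash codes, mapped to a weight in [0, 1]."""
    d_h = np.count_nonzero(phash(source_img) != phash(target_mean_img))
    return 1.0 - d_h / 64.0                        # assumed mapping (cf. Fig. 5)
```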
Convolutional Neural Network. Research shows that CNNs have obvious advantages in processing MI signals [23]. A CNN is a multilayer neural network composed of a sequence of convolution, pooling, and fully connected layers. The convolution layers extract features of the input image at different levels depending on the kernel size, while the pooling layers reduce the complexity of the model by subsampling. With the increase in the number of layers, more advanced features can be extracted. The fully connected layers transform the output matrix of the last layer into an n-dimensional vector (n is the number of classes), and backpropagation is utilized to decrease the classification error. In a convolution layer, the input image is convolved with a spatial filter to form a feature map; the output is expressed as

$$X_j^l = f\Big(\sum_i X_i^{l-1} * W_{ij}^l + b_j^l\Big).$$

This formula describes the $j$-th feature map in layer $l$: $X_j^l$ is calculated from the previous feature maps $X_i^{l-1}$ convolved with the kernels $W_{ij}^l$ and added to the bias parameter $b_j^l$; finally, the mapping is completed by the ReLU function $f(\cdot)$. Pooling layers are sandwiched between successive convolution layers to compress the amount of data and parameters and to reduce overfitting; max pooling is chosen in this work. The network output is normalized into a probability distribution by the softmax function

$$y_i = \frac{e^{x_i}}{\sum_j e^{x_j}},$$

where $x_i$ is the $i$-th input to the output layer and $y_i$ represents the output probability distribution. The gradient for backpropagation is calculated according to the cross-entropy loss function, and the stochastic gradient descent (SGD) update rule with a learning rate of 1e−4 is used to improve the speed of network training:

$$W_k \leftarrow W_k - \mu\frac{\partial E}{\partial W_k}, \qquad b_k \leftarrow b_k - \mu\frac{\partial E}{\partial b_k},$$

where $\mu$ is the learning rate, $W_k$ represents the weight matrix for kernel $k$, $b_k$ represents the bias value, and $E$ represents the difference between the desired output and the real output.

There are eight layers in the proposed network (Figure 6). The first layer is the input layer; the second layer is a convolutional layer with kernel size 3 × 3; the next layer is a max pooling layer with kernel size 2 × 2; and the following two layers have the same kernel sizes and functions. Two fully connected layers, consisting of 10 and 2 neurons, respectively, compute the predicted labels. The stochastic gradient descent with momentum (SGDM) optimizer is used for optimization with learning rate = 1e−4, momentum = 0.9, and decay = 1e−6. We set the minibatch size to 50 and the maximum number of epochs to 6. To reduce computation time and prevent overfitting, we adopt the dropout operation. The proposed CNN model is summarized in Table 1.

To evaluate the effectiveness of instance transfer learning, three training strategies are compared. In the subject-dependent method (Figure 7), a total of 720 trials for one subject are divided into training data and test data using 10-fold cross-validation. A generalized model is trained using data from other subjects ($D_S$) in the subject-independent training stage (Figure 8). In the ITSD method, weighted data from $D_S$ together with target data form the training set, and data from $D_T$ are used to test model performance; the data partition is shown in Figure 9. To present the sizes of the training and test sets more clearly, we briefly summarize the number of trials for the three training strategies in Table 2. A minimal implementation sketch of the network and the weighted training step is given below.
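The following PyTorch sketch mirrors the architecture and hyperparameters listed above (two 3 × 3 convolution blocks with 2 × 2 max pooling, fully connected layers of 10 and 2 neurons, dropout, cross-entropy loss, SGD with momentum 0.9 and learning rate 1e−4). The convolution filter counts (16 and 32) are our assumptions, since they are given only in the paper's Table 1, and we read "decay = 1e−6" as L2 weight decay; the per-instance weights implement the ITSD weighting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MICNN(nn.Module):
    """Eight-layer CNN for 1x64x64 MI spectrograms (two-class output)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(),   # 64 -> 62
            nn.MaxPool2d(2),                              # 62 -> 31
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),  # 31 -> 29
            nn.MaxPool2d(2),                              # 29 -> 14
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                 # dropout against overfitting
            nn.Linear(32 * 14 * 14, 10), nn.ReLU(),
            nn.Linear(10, 2),                # left- vs. right-hand MI
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def train_step(model, opt, x, y, w):
    """One minibatch update; w holds the per-trial transfer weights
    (1.0 for target-subject trials, w_t^{S_i/T} for source trials)."""
    loss = (F.cross_entropy(model(x), y, reduction='none') * w).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

model = MICNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-4,
                      momentum=0.9, weight_decay=1e-6)
# minibatch size 50 and a maximum of 6 epochs, as reported in the text
```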
Experimental Results and Analysis

Performance of the Proposed Framework. In this paper, we use BCI competition IV dataset 2b to verify the proposed methods. During the training stage for each subject in ITSD, we supplement the dataset with data from other subjects through instance transfer (Figure 9). After mixing the source data and target data, we adjust the number of instances to keep the classes balanced during training. To evaluate the performance of the different training strategies, we compared their classification accuracies.

As depicted in Table 3, the SD training strategy performs better than SI with the CNN classifier, even though SI obtains more training data. This indicates that MI-EEG from different subjects shows obvious feature differences under the same label. The average classification accuracy of ITSD-CNN is superior to that of SD-CNN, with a 14.1% improvement. It is worth noting that subjects 2 and 3 adapt particularly well to the model through efficient data transfer, which greatly improves their classification accuracy. To verify the significance of the results, analysis of variance was performed. As shown in Figure 10, there is no significant difference between SD-CNN and SI-CNN, while the ITSD-CNN strategy shows satisfactory convergence and higher accuracy than the other two methods (p < 0.01). Observation of the training process shows that the weakness of small samples directly influences the results of network training. Effective data transfer can increase the number of samples, improving the generalization of the network and preventing underfitting. Moreover, this method can validly reduce the influence of individual differences on the classifier.

Comparisons with Previous Research. Numerous methods have been proposed for MI classification using BCI competition IV dataset 2b. In this section, we further compare our method with commonly used strategies using the mean kappa value as the metric. Based on the analysis of Table 4 and Figure 11, we can observe that ITSD-CNN outperforms the baseline and the state-of-the-art methods. It is worth noting that the hybrid frameworks based on CNNs obtain ideal results among these methods. This indicates that the CNN has strong robustness and high accuracy in MI classification. In addition, instance transfer effectively improves the classification performance of the CNN using the same model and parameters.

Discussion

Compared with traditional methods, the application of deep learning to EEG classification has successfully improved performance [28]. However, some limitations still hinder its application in practice. The feature distribution of EEG always shows differences in the same mental task across subjects/sessions, which may cause overfitting during network training. Transfer learning turns out to be instrumental for cross-subject/session classification performance. It can be used to initialize a BCI for a naive subject using knowledge transferred from other subjects. At the same time, this strategy may help a classifier to learn global features from all subjects without falling into local optima. Therefore, transfer learning combines the advantages of the SI and SD strategies and outperforms them both; it is valuable to explore this strategy further in future studies.

Another limitation is the small-scale samples available for classifier training. Strict requirements on the quality and collection of EEG data make it difficult to obtain large datasets in practice. The performance of DNN-based EEG decoding is directly related to the amount of training data. Data augmentation is a promising way to address this issue. As discussed in References [29,30], artificially generated data can be used to train classification models, and such augmentation methods have proved efficient in EEG decoding. The addition of generated datasets improves the complexity and robustness of models. Traditional augmentation methods involve geometric transformation and model generation, which require a long time to prepare and to select suitable generated data, taking up considerable computing resources in the BCI system.
Therefore, data augmentation from an available database may provide a feasible alternative. As proposed in this study, instance transfer learning can easily obtain data from other subjects and adaptively assign weights to the transferred data, which maximizes the utilization of data across subjects. Although the training process of this method is similar to the subject-dependent strategy, i.e., it requires recomputation for each new participant, the low-cost calculation does not burden the operation of the BCI system. In later research, we will explore the details of variability across subjects and achieve more effective transfer learning.

Conclusion

In this paper, we propose a novel instance transfer learning method with a deep neural network for the subject-dependent classification of motor imagery in BCI systems. In this work, we first transform the raw data into spectrogram images by STFT. Then, instance transfer learning based on the perceptive Hash algorithm is utilized to measure the similarity between the data of the source domain and the target domain. Next, we convert the similarity into a transfer weight coefficient to realize single-trial data transfer between different subjects. Finally, a convolutional neural network is built to verify the performance of the proposed methods, and several other methods are adopted to evaluate the results. Experiments verify that instance transfer learning by the perceptive Hash algorithm can effectively provide data augmentation under the subject-dependent training strategy and improve the performance of the classifier, which demonstrates the superior performance and promising potential of the proposed novel training strategy. Meanwhile, the proposed method provides a solution for the weakness of small samples in deep neural networks.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Health literacy among adults in Yazd, Iran

The purpose of this survey was to assess health literacy levels and determine the relationship between health literacy and demographic variables and socio-economic status among adults in the Yazd district. Three hundred and eighty adults, 18 years and older, were randomly selected and assessed with the Test of Functional Health Literacy in Adults (TOFHLA) instrument, in two sections of reading comprehension and numeracy. A second instrument was used to detect the relationship between the demographic variables and socio-economic status and the level of health literacy of the subjects. The mean health literacy score of the participants was 73.33 ± 1.29. Fifty-four percent of the individuals had adequate health literacy and the rest had limited health literacy. The mean score of functional health literacy differed significantly by socio-economic status (P < 0.05) and years of schooling (P = 0.00). On the basis of linear regression, the years of schooling (B = 0.28, P < 0.01) and marital status (B = 3.08, P < 0.05) were two predictors of health literacy.

INTRODUCTION

Health literacy is the key component of pursuing health and wellbeing in modern society. It is linked to literacy and entails people's knowledge, motivation, and competence to access, understand, appraise, and apply health information in order to make judgments and take decisions related to healthcare in everyday life, such as disease prevention and health promotion, to maintain or improve their quality of life during life's course. [1] It has also been defined as the capacity 'to obtain, process, and understand basic health information and services needed to make appropriate health decisions'. Significantly, health literacy is necessary not just to expand individual skills, but also to interact with people and their environment for a better life. [2] Overall, the level of functional health literacy in Iran is low, [3] but recent surveys in some provinces of Iran suggest that this trend has improved. [4] According to the 2025 development vision document of the Islamic Republic of Iran, 'at the end of 2025, every Iranian person must be health literate and must complete his/her own health literacy, by means of educational instruments.' [5,6] Therefore, monitoring and surveying the level of health literacy is a key component of the health policy in Iran. The purpose of this study was therefore to investigate the state of health literacy and the relationship between socio-economic and socio-demographic factors and the health literacy of the people of Yazd, Iran.

MATERIALS AND METHODS

This was a cross-sectional study of people living in the Yazd province, conducted between January 2013 and June 2013. Yazd is one of the 31 provinces of Iran and is located in the center of the country. The eligibility criteria for this research were being 18 years old or above, having basic literacy for reading and writing, and residing in the urban district of Yazd. During the process of data collection, those who did not cooperate were excluded from the research.
Considering the 95% confidence interval and the standard error of measurement, 380 people were chosen for this research. The sample was selected by cluster sampling of people living in the nine regional municipalities of Yazd. The long version of the TOFHLA test was used to assess the functional health literacy (FHL) of the subjects. The original version of the TOFHLA instrument was translated into Persian by Tehrani and colleagues using the standard method. [6] In addition to the TOFHLA test, the demographic and socio-economic characteristics of the participants were collected, including sex, age, marital status, level of education, and socio-economic status. The long version of the TOFHLA is scored on a scale of 0-100 across two sections, reading comprehension and numeracy, and takes about 22 minutes to complete. The cut-off points for the health literacy scale were set at 59 and 74: subjects with a score of 0-59 were considered as having inadequate functional health literacy, subjects with a score of 60-74 as having marginal functional health literacy, and subjects with a score of 75-100 as having adequate functional health literacy.

The statistical comparisons in this survey were based on analysis of variance (ANOVA) and regression statistics. ANOVA was used for comparing the means, and regression statistics were used to determine the association between the socio-demographic characteristics and the level of health literacy among the participants.

RESULTS

The total number of participants was 380, comprising 186 (48.9%) males and 194 (51.1%) females. The mean age of the subjects was 35.65 ± 1.24 years, ranging between 18 and 73 years. The mean score of the participants on the TOFHLA test was 73.33 ± 12.93. The mean scores of the subjects in the reading and numeracy sections were 39.66 ± 7.06 and 33.67 ± 8.81, respectively. The mean score of women in the numeracy section was 33.27 ± 8.53, compared with 34.09 for men. Table 1 shows the distribution of the TOFHLA mean scores of the participants according to the demographic and socio-economic variables. Both the t-test and ANOVA showed that the two variables of socio-economic status (P < 0.001) and years of schooling (P = 0.00) had significant associations with the TOFHLA scores. There were no differences between the TOFHLA scores and sex (P = 0.28), marital status (P = 0.10), or age (P = 0.53). According to the findings of this research, 224 (59%) of the participants had an adequate health literacy level, and the prevalence of limited health literacy was 41%. The highest level of health literacy was seen in males, in married individuals, in those in the age range of 31-40 years, and in those with a good socio-economic status. The years of schooling and socio-economic status were two variables that had positive and significant correlations with functional health literacy [Table 2]; the former had the highest correlation with health literacy (P = 0.000, r = 0.247). A multiple linear regression model was used to predict the TOFHLA scores. Using the enter method, the years of schooling and marital status were the two predictors of functional health literacy, wherein the TOFHLA score increased with increasing years of schooling and with a change of marital status from single to married [Table 3].
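As a small worked example of the scoring and prediction rules above, the sketch below maps a TOFHLA score to the study's three categories and evaluates the fitted regression reported in the next paragraph (Table 3); coding marital status as 1 = married, 0 = single is our assumption.

```python
def tofhla_category(score):
    """Map a 0-100 TOFHLA score to the categories used in this study."""
    if score <= 59:
        return "inadequate"
    if score <= 74:
        return "marginal"
    return "adequate"

def predicted_tofhla(years_of_schooling, married):
    """Fitted linear model (coefficients from Table 3, reported below)."""
    return 63.45 + 2.44 * years_of_schooling + 3.08 * (1 if married else 0)

# e.g. a married respondent with 5 years of schooling:
# 63.45 + 12.20 + 3.08 = 78.73 -> "adequate"
print(tofhla_category(predicted_tofhla(5, True)))
```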
Health literacy score = 63.45 + 2.44 × (years of schooling) + 3.08 × (marital status)

DISCUSSION

The findings of the national survey of Iranians' health literacy suggested that, overall, most Iranians have limited health literacy. [3] According to Tehrani's study, there is no significant difference between the level of health literacy and the sex of participants, which supports the results of our study. That survey showed that the unadjusted score of women was lower than that of men; interestingly, however, the women's health literacy score would be higher than the men's if the years of education were adjusted. This finding implied that, overall, if the participants had been at the same level of education, the health literacy score of women would be higher than that of men. The combination of these results with previous findings, which showed that the level of education among women is rapidly growing, [5] indicates that in the future the health literacy of women will probably become higher than that of men. This was consistent with our study.

Pursuant to the findings of this research, the prevalence of adequate health literacy in the urban district was estimated to be 59%, which was comparable to the study by Jovic-Vranes [7] on this subject. The findings of this research showed a significant association between FHL and the years of schooling as well as socio-economic variables, which supports the results of Tehrani and colleagues, [3] Ghanbari and colleagues, [8] and Nekoei Moghadam and colleagues [4] in Iran, and of studies in other countries. [2,6,8-13] In this survey, the mean score of functional health literacy of the participants rose with increasing years of schooling: 35%, 59%, and 75% of participants with low, medium, and high education, respectively, had adequate health literacy. Therefore, educational interventions to promote the health literacy of individuals help them to make decisions about their own health situation, as well as that of their families and the community they live in. Yet there is no complete correlation between the years of schooling and health literacy, as 25% of the participants with high education had limited health literacy. Therefore, assessing health literacy according to the educational certificate or the years of schooling is incorrect, which confirms the findings of previous studies on this subject. [14] According to Nutbeam's assessment, any strategy to promote general literacy has a positive effect on health literacy. [15] It seems that the high rate of general literacy in Yazd is one of the factors behind the acceptable rate of functional health literacy: it is estimated that over 90% of the individuals in Yazd have basic literacy, which includes the ability to read and write. [16] In contrast, as in most of the research in this field, [6,9,11] the results of this survey did not indicate any significant correlation between age and health literacy. Although this survey did not show a significant bivariate association between marital status and functional health literacy, this variable and the years of schooling were the two predictors of health literacy in the regression model.

In conclusion, the results of this assessment may help policy makers to make decisions about future programs, especially educational programs. This research has some limitations: it is a cross-sectional study and the sample is restricted to urban people; therefore, the findings of this study should be generalized cautiously.

Financial support and sponsorship

Shahid Sadoughi University of Medical Sciences.

Conflicts of interest

There are no conflicts of interest.
Long-term exposure to high-altitude chronic hypoxia during gestation induces neonatal pulmonary hypertension at sea level

Long-term exposure to high-altitude chronic hypoxia during gestation induces neonatal pulmonary hypertension at sea level. Am J Physiol Regul Integr Comp Physiol 299: 2010. We determined whether postnatal pulmonary hypertension induced by 70% of pregnancy at high altitude (HA) persists once the offspring return to sea level, and investigated the pulmonary vascular mechanisms operating under these circumstances. Pregnant ewes were divided into two groups: conception, pregnancy, and delivery at low altitude (580 m, LLL) and conception at low altitude, pregnancy at high altitude (3,600 m) from 30% of gestation until delivery, and return to lowland (LHL). Postnatal pulmonary hypertension induced by 70% of pregnancy at HA promotes cardiopulmonary remodeling that persists at sea level.

Pulmonary hypertension in the postnatal period is associated with high mortality, and children who survive may have decreased postnatal growth and devastating neurological, respiratory, and cardiac complications that often persist into childhood (9, 44). One condition that may lead to elevations in pulmonary arterial pressure in the postnatal period is sustained fetal hypoxia (1, 44). In humans and animals, a common form of sustained fetal hypoxia is pregnancy at high altitude (16, 23). Pulmonary hypertension in the postnatal period due to this condition is an important problem, since currently nearly 140 million people reside at over 2,500 meters above sea level and are permanently exposed to chronic hypoxic conditions (34, 41). Sustained or partial exposure of pregnant women to high altitude, whether permanently resident at high altitude or native to low altitude, is therefore a current problem.

Several mediators act upon the pulmonary vasculature, triggering alterations in vascular tone and structure. A potent vasoconstrictor is endothelin-1 (ET-1), which acts via the ETA receptor to stimulate both contraction and remodeling of the pulmonary vascular bed. ET-1 therefore plays an important role in the regulation of pulmonary vascular resistance (3, 4, 26). Interestingly, it has been reported that ET-1 function is increased in neonatal pulmonary hypertension (3). Nitric oxide (NO) is another important modulator of the pulmonary circulation, and its vasodilator actions are mediated via several mechanisms, including the activation and opening of Ca2+-activated K+ channels (BKCa) and the balance between the synthesis of cGMP through the activation of soluble guanylate cyclase (sGC) and its degradation by the isoenzyme phosphodiesterase 5 (PDE5) (2, 42). Impairment of NO-dependent dilatation has also been closely related to pulmonary hypertension in the postnatal period (1, 44). In addition, the endogenous gas carbon monoxide (CO) is a dilator in the pulmonary vascular bed, and it protects against pulmonary vascular remodeling (31, 37, 48). In newborn llamas, augmented pulmonary CO, rather than pulmonary NO, helps to prevent pulmonary hypertension in the newborn period at high altitude (25).

Using ovine pregnancy at high altitude as an experimental model, we have previously reported that pregnancy and delivery at high altitude yield offspring with pulmonary hypertension, coupled with increased constrictor reactivity of isolated pulmonary vessels despite enhanced pulmonary NO function (23, 24, 25). In those studies, the in vivo and in vitro measurements were performed at high altitude.
It remains unknown whether long-term exposure of the pregnancy to high altitude results in altered pulmonary vascular function and anatomy in offspring, even following return to sea level. Therefore, this study tested the hypothesis that long-term exposure of the pregnancy to high altitude results in postnatal pulmonary hypertension even following return to sea level, and that this is associated with cardiopulmonary remodeling and alterations in pulmonary vascular function. We used an integrative approach at the whole-animal, isolated-organ, and molecular levels to determine the effects of 70% of gestation at high altitude on: 1) in vivo pulmonary arterial pressure under basal and acute hypoxic conditions, both before and after NO blockade; 2) the reactivity of isolated small pulmonary arteries to KCl, ET-1, and to sodium nitroprusside (SNP) before and after treatment with the sGC inhibitor 1H-[1,2,4]oxadiazolo[4,3-a]quinoxalin-1-one (ODQ); 3) the mRNA and protein expression of endothelial NO synthase (eNOS), and the protein expression of sGC, BKCa, PDE5, and heme oxygenase-1 (HO-1), and CO production in the postnatal lung; and 4) the morphology of small pulmonary arteries and the weight ratios of the heart and lungs. All studies were performed at sea level in lambs within the first two weeks of postnatal life.

Animals

Twenty-eight pregnant ewes (Ovis aries) were divided into the following two groups: conception, pregnancy, and delivery at lowland (Santiago, 580 m; LLL, n = 14) and conception at lowland, pregnancy at high altitude (Putre, 3,600 m) from 30% of gestation until delivery, and return to lowland (LHL, n = 14). Mothers and newborns were housed in an open yard with access to food and water ad libitum.

Surgical Preparation and In Vivo Experiments

A subgroup of the lambs was surgically prepared between 3 and 8 days of age for in vivo experimentation (LLL, n = 8; LHL, n = 4). In brief, the animals were anesthetized with ketamine (10 mg/kg im) and diazepam (0.1-0.5 mg/kg im) with additional local infiltration of 2% lidocaine. Polyvinyl catheters were placed in the descending aorta and inferior vena cava, and a Swan-Ganz catheter was placed in the pulmonary artery, as previously described in detail (23). Following 3-4 days of postsurgical recovery, the animals were subjected to a 3-h experimental protocol consisting of 1 h of normoxia, 1 h of hypoxia, and 1 h of recovery. Acute isocapnic hypoxia was induced via a transparent, loosely tied polyethylene bag placed over the animal's head, into which a known mixture of air, N2, and CO2 (~10% O2 and 2-3% CO2 in N2) was passed at a rate of 20 l/min. Acute hypoxia was induced on separate days during vehicle infusion (0.9% NaCl) and during NO blockade [NG-nitro-L-arginine methyl ester (L-NAME), 20 mg/kg bolus plus 0.5 mg·kg⁻¹·min⁻¹ in 0.9% NaCl infusion] in random order. Infusions started 15 min before hypoxia and ran continuously until the end of the hypoxemic challenge. Arterial blood samples were taken during each experimental protocol to determine arterial pH, PO2, PCO2, hemoglobin concentration ([Hb]), and percentage saturation of hemoglobin (SaO2) [IL-Synthesis 25 (Instrumentation Laboratories, Lexington, MA); measurements corrected to 39°C]. Pulmonary and systemic arterial pressures and heart rate were recorded continually via a data acquisition system (Powerlab/8SP System and Chart v4.1.2 Software; ADInstruments, New South Wales, Australia) connected to a computer.
Cardiac output was determined at set intervals by the thermodilution method, by injection of 3 ml of chilled (0°C) 0.9% NaCl into the pulmonary artery through the Swan-Ganz catheter connected to a cardiac output computer (COM-2 model; Baxter, Irvine, CA). Pulmonary vascular resistance was calculated as described previously (23). The production of CO by the pulmonary circulation was calculated as cardiac output multiplied by the difference in the concentration of CO between the aorta and the pulmonary artery (percent carboxyhemoglobin; IL-Synthesis 25) (25).

Ex Vivo and In Vitro Experiments

The remaining uninstrumented lambs (LLL, n = 6; LHL, n = 5) underwent euthanasia with an overdose of sodium thiopentone (100 mg/kg iv) and were studied ex vivo.

Wire myography. The left lung was removed by dissection and immediately immersed in cold saline. Fourth-generation pulmonary arteries (counting from the pulmonary artery trunk; id LLL: 410 ± 20 μm and LHL: 365 ± 22 μm) were dissected from the caudal lobule of the left lung. Isolated arteries were mounted in a wire myograph, maintained at 37°C, and aerated with 95% O2-5% CO2. Concentration-response curves (CRCs) were analyzed in terms of sensitivity and maximal or minimal responses by fitting the experimental data to the Boltzmann equation (Prism 5.0; GraphPad Software, La Jolla, CA). Contractile responses were expressed in terms of tension (N/m), and contraction or relaxation responses as a percentage of the increase or reduction of the 125 mM K+-induced contraction or tension (N/m). Sensitivity was calculated as pD2, where pD2 = −log[EC50], with EC50 being the concentration at which 50% of the maximal response was obtained (22). CRCs were constructed for KCl, ET-1, and the NO donor SNP following precontraction with 125 mM K+. CRCs to SNP were repeated following blockade of sGC by incubation with ODQ (10⁻⁵ M) for 10 min.

Heart and lung biometry. The neonatal heart was obtained, and the free wall of the right ventricle, the left ventricle, and the septum were dissected. The ratio of the weight of the right ventricle to that of the left ventricle plus septum was calculated (20). In addition, the lungs were removed and weighed, and the lung weight-to-body weight ratio was calculated.

Histology. We isolated and perfused the right lung with 4% paraformaldehyde. Excised lungs were fixed in 4% paraformaldehyde for 24 h at 4°C and embedded in paraffin; van Gieson staining was performed on 10-μm sections. At least four arteries (100-200 μm diameter) per lung were chosen, and an average of four measurements from each artery was recorded. Images of parenchymal arterioles were acquired using a workstation (Olympus BX51 trinocular microscope plus QImaging GO3 digital camera) linked to Image Pro software 6.3, and vascular areas were calculated using an image analysis program. The wall-to-vessel area ratio was calculated and expressed as a percentage, as previously described (24, 33). Briefly, the percent wall thickness was calculated as follows: wall thickness (%) = (external area − internal area)/external area × 100, where the external and internal areas are the areas bounded by the external and internal elastic laminae, respectively. In addition, the area of vascular smooth muscle was calculated as follows: muscle area (%) = (external muscle area − internal area)/external muscle area × 100, where the external muscle area and the internal area are the external and internal boundaries of the tunica media, respectively.
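The calculations described above reduce to simple arithmetic; a brief sketch follows (variable names and units are ours, for illustration only):

```python
import numpy as np

def pulmonary_co_production(cardiac_output, cohb_aorta, cohb_pa):
    """Lung CO output: cardiac output x aorto-pulmonary COHb difference."""
    return cardiac_output * (cohb_aorta - cohb_pa)

def pD2(ec50_molar):
    """Sensitivity of a concentration-response curve: pD2 = -log10(EC50)."""
    return -np.log10(ec50_molar)

def percent_wall_thickness(external_area, internal_area):
    """Wall thickness (%) = (external - internal) / external x 100."""
    return (external_area - internal_area) / external_area * 100.0

# e.g. an EC50 of 1e-7 M corresponds to pD2 = 7.0
```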
Groups were compared by two-way ANOVA and the post hoc Newman-Keuls test, or by Student's t-test for unpaired data, as appropriate. We used the Fisher exact test to compare survival between groups. For all comparisons, differences were considered statistically significant when P < 0.05 (18).

RESULTS

Survival and Weight

In marked contrast to sea level pregnancies (LLL), with 100% survival, pregnancies after 70% exposure to high altitude (LHL) had increased mortality, with 21% abortions and 14% stillbirths (P < 0.05). No lambs died after birth in either group. Surviving LHL lambs were much lighter than LLL lambs (3.8 ± 0.3 kg, n = 9 vs. 7.0 ± 0.4 kg, n = 14, P < 0.001; weights at the time of experimentation, between 6 and 11 days of age).

In Vivo Experiments

Because of differences in survival, eight LLL and four LHL lambs were studied in vivo. Values for basal pHa, PaO2, PaCO2, SaO2, and [Hb] were similar in both groups of lambs (Table 1). During acute hypoxia on a background of saline infusion, a similar fall in PaO2 and SaO2 occurred in both groups of lambs, without any alteration to PaCO2 from baseline (Table 1). During recovery, all variables returned toward basal values in both groups. However, values for PaCO2 were significantly depressed from baseline in LLL lambs (Table 1). Treatment with L-NAME had no significant effect on arterial blood gas and acid-base status under either basal or acute hypoxic conditions (Table 1). Basal values for pulmonary arterial pressure, pulmonary vascular resistance, and cardiac output were significantly greater in LHL than LLL lambs (Fig. 1 and Table 1). Basal values for heart rate were similar between the groups (Table 1). Basal systemic arterial pressure was similar in LHL and LLL lambs (87 ± 3 vs. 82 ± 1 mmHg, respectively). During acute hypoxia on a background of saline infusion, pulmonary arterial pressure, pulmonary vascular resistance, cardiac output, and heart rate increased significantly in both groups of lambs. However, in LHL relative to LLL lambs, values for pulmonary arterial pressure and cardiac output reached significantly greater values during the acute hypoxic challenge (Fig. 1 and Table 1). No changes in systemic arterial pressure were seen in either of the experimental groups during hypoxia. During recovery, pulmonary arterial pressure and cardiac output remained significantly elevated from baseline, but heart rate and pulmonary vascular resistance returned toward basal values in LHL lambs. In contrast, all variables returned toward basal values in LLL lambs (Fig. 1 and Table 1). Treatment of the lambs with L-NAME during the basal period led to an increase in pulmonary arterial pressure and pulmonary vascular resistance and a decrease in heart rate and cardiac output in both groups of lambs. Although the fall in heart rate and cardiac output was similar between the groups, the increment in pulmonary arterial pressure and in pulmonary vascular resistance was significantly greater in LHL than in LLL lambs (Fig. 1 and Table 1). Treatment with L-NAME did not affect the magnitude of the pulmonary hemodynamic and blood gas responses to acute hypoxia in either group of lambs, with the exception that values for pulmonary arterial pressure reached greater values in LHL than in LLL lambs (Fig. 1 and Table 1). From the time of surgery up until the time of study, there was no evidence of a patent ductus arteriosus.
At the time of dissection, after the last study, the ductus arteriosus was examined, and no lumen was visible in any of the studied animals. Additional evidence for closure of the ductus arteriosus is provided by the similarity of oxygen saturation and PO2 in samples obtained from the ascending and descending aorta (data not shown). This is further supported by the fact that systemic arterial pressure was always higher than pulmonary arterial pressure in all animals. The presence of a left-to-right shunt is unlikely, because there were no differences in pulmonary arterial pulse pressure between LHL and LLL lambs, either during basal conditions or during hypoxia.

[Table 1 legend] Values are means ± SE for arterial pH (pHa), partial pressure of oxygen (PaO2), partial pressure of carbon dioxide (PaCO2), saturation of hemoglobin with oxygen (SaO2), hemoglobin concentration ([Hb]), heart rate, cardiac output, and pulmonary vascular resistance (PVR). LLL: conception, pregnancy, and delivery at low altitude (580 m); LHL: pregnancy at high altitude (3,600 m) from 30% of gestation until delivery, and return to lowland; L-NAME: NG-nitro-L-arginine methyl ester. Blood samples and cardiovascular variables were taken, measured, and calculated during the preinfusion baseline (Basal), during infusion with saline or L-NAME (Basal + I), during acute hypoxia (Hypoxia + I), and during recovery. Significant differences (P < 0.05): vs. basal (*); LHL vs. LLL (†); L-NAME vs. 0.9% NaCl (‡).

Western Blot

The expression of eNOS mRNA and protein in lung tissue was significantly greater in LHL than in LLL lambs (Figs. 4 and 5). This was associated with significantly greater protein expression of pulmonary BKCa and PDE5, but not sGC, in LHL than in LLL lambs (Fig. 5). Furthermore, protein expression of pulmonary HO-1 and the production of CO by the pulmonary circulation were both diminished in LHL compared with LLL lambs (Fig. 6).

Heart and Lung Biometry

The ratio of the weight of the right ventricle to the left ventricle plus septum was augmented in the LHL compared with the LLL group (0.370 ± 0.017 vs. 0.328 ± 0.009, P < 0.01). The ratio of lung weight to body weight was similar in LHL and LLL lambs (0.0200 ± 0.0009 vs. 0.0180 ± 0.0005, not significant).

Histology

Morphometric analysis of the pulmonary vasculature revealed no significant difference in vascular wall thickness between LLL and LHL lambs (48.79 ± 3.59 vs. 55.03 ± 4.35%, respectively; P = 0.31; n = 5 for each group). However, there was a significant increase in the area of vascular smooth muscle in LHL compared with LLL lambs (29.28 ± 2.96 vs. 21.09 ± 1.73%; P < 0.05; Fig. 7).

DISCUSSION

These studies show that 70% exposure to high-altitude chronic hypoxia during gestation yields postnatal lambs with basal pulmonary hypertension and an increased pulmonary vascular response to an episode of acute hypoxia, even following return to sea level. These findings persist despite evidence of enhanced pulmonary NO function obtained through in vivo, isolated-organ, and molecular approaches. Furthermore, the results show a decrease in pulmonary CO function and an increase in vascular reactivity to constrictors, associated with cardiopulmonary remodeling processes. Combined, the data support the hypothesis tested and provide a mechanistic explanation for the persistence at sea level of neonatal pulmonary hypertension induced by high-altitude pregnancy.
A striking difference between the groups of lambs in the present study was the much greater mortality and the pronounced growth restriction in lambs born from pregnancies after prolonged exposure to high altitude. Pregnancy at high altitude induces maternofetal hypobaric hypoxia, and we have previously reported lower maternal and fetal arterial PO2 in a separate cohort of animals exposed to the same altitude during the whole pregnancy (12). A similar effect on fetal growth restriction and mortality during development at high altitude has been reported in highland human populations (16, 30, 34, 38) and in chick embryos following highland incubation (17, 43). Malnutrition during early gestation in high-altitude cattle also resulted in a higher incidence of elevated pulmonary arterial pressure and right ventricular hypertrophy compared with controls when measured in the offspring at 15 mo. This was associated with differential gene expression in the right ventricle, but the resulting interaction between undernutrition and high-altitude hypoxia is unclear (21). In our study, both groups of lambs received the same nutrition, so the observed changes in pulmonary arterial pressure and growth restriction appear to be independent of nutrition. Accordingly, the effects of developmental hypoxia at high altitude on fetal growth restriction and mortality have been shown to be independent of maternal nutritional status and of highland hypobaria in other species, since fetal growth restriction persists in ewes undergoing pregnancy at high altitude with food intake values similar to those of sea level pregnancies (23, 39). These effects have also been shown in the chick embryo, where incubation at high altitude of sea level eggs with oxygen supplementation completely prevented the high altitude-induced fetal growth restriction and mortality (17). The present study extends these findings and reports that 70% rather than 100% exposure to high altitude during fetal development can also have dramatic effects on the maintenance of pregnancy, on fetal growth, and on fetal mortality (abortion and stillbirth). In contrast, we did not observe neonatal mortality.

[Caption fragment, isolated-vessel CRC figure] ...and ET-1 (B). Maximal response (Emax) and sensitivity (EC50 or pD2) were calculated (see text). Significant differences (P < 0.05): LHL vs. LLL for Emax (†); LHL vs. LLL for sensitivity (‡). Brackets denote concentration.

[Fig. 1 caption] Pulmonary arterial pressure (PAP) during the in vivo acute hypoxia protocol in the two groups of lambs: conception, pregnancy, and delivery at low altitude (580 m; LLL) and conception at low altitude, pregnancy at high altitude (3,600 m) from 30% of gestation until delivery, and return to lowland (LHL). Acute hypoxia was induced following a background infusion (gray bar) with 0.9% NaCl (A) or with the nitric oxide synthase (NOS) blocker NG-nitro-L-arginine methyl ester (L-NAME; B). Values are means ± SE, calculated every minute during the experimental protocol. Significant differences (P < 0.05): vs. preinfusion baseline (*); LHL vs. LLL (†); L-NAME vs. 0.9% NaCl (‡).

Lambs born from pregnancies after 70% exposure to high altitude had a greater basal cardiac output, pulmonary vascular resistance, and pulmonary arterial pressure, even when their PaO2 had recovered to normoxic levels, and also showed a greater pulmonary pressor response to L-NAME and to acute hypoxia.
The greater basal cardiac output in the highland group is independent of differences in basal heart rate, suggesting a greater resting stroke volume in lambs from pregnancies after long exposure to high altitude. The differences in basal pulmonary arterial pressure and vascular resistance between the groups may be explained, in part, by the larger area of vascular smooth muscle, the greater maximal constrictor response of the pulmonary vessels to KCl, and the increased sensitivity to ET-1 in the LHL lambs. ET-1 is induced by chronic hypoxia and is a potent pulmonary vasoconstrictor and a mitogen, leading to smooth muscle cell proliferation (3). Previous studies have correlated an increased vascular response with greater smooth muscle cell remodeling (29, 45), conditions that were both observed in LHL lambs in our study. Moreover, it has been suggested that the longer the exposure to high altitude, the greater the vascular smooth muscle remodeling (29, 40, 45). Dissociation between changes in vascular wall area and in wall thickness is a common finding with established explanations. Elegant studies by Baumbach and Heistad (5, 6) and by Mulvany (35, 36) have made it clear that an increase in the ratio of the vascular wall to lumen may be achieved in at least two very different ways. For instance, the ratio may be increased by a reduction in luminal diameter without a change in medial volume; there is thus rearrangement of the same volume of vessel wall around a smaller-diameter lumen, which is now termed inward eutrophic vascular remodeling. Conversely, an increase in the vascular wall-to-lumen ratio may be achieved by an increase in wall material, with or without a change in lumen diameter, which has been termed outward hypertrophic vascular growth. An increase in wall material with an increase in lumen diameter is what is occurring in the LHL vessels. Interestingly, the main driving forces that promote this type of vascular remodeling are increased flow and pressure (36), both of which are present in the pulmonary bed of LHL lambs. The present study also reports an increase in right ventricular mass in LHL neonates. This is a common finding in humans and animals that have suffered pulmonary arterial hypertension (1, 15, 46, 47). Other components contributing to basal pulmonary hypertension in lambs from pregnancies after prolonged exposure to high altitude may include alterations in the tonic balance between dilator and constrictor influences on the pulmonary vascular bed. For instance, we have previously reported reduced synthesis of dilators, such as CO, in highland lambs (25). In this set of experiments, it was also found that LHL lambs had an important decrease in the production of CO by the pulmonary circulation, concordant with the reduced HO-1 protein expression. Interestingly, a study in fetal lambs showed them to be unresponsive to CO (19). However, this was performed in ventilated (hypoxic, <10% FiO2) fetuses rather than in normally oxygenated postnatal lambs. Lambs native to high altitude do not increase HO as llamas do, which suggests that they are insensitive to endogenous CO, although they may be responsive to induced CO production (19). CO is a dilator, acting via activation of sGC (13, 27, 32) and via hyperpolarization of the vascular smooth muscle secondary to activation of BKCa channels (7, 10, 50). CO can also diminish the vasoconstrictor responses to phenylephrine and 20-hydroxyeicosatetraenoic acid while reducing the synthesis and release of ET (28, 51).
The diminished production of CO by the pulmonary circulation determined in this study may therefore play a role in the maintenance of persistent pulmonary hypertension of the newborn at sea level. In addition, chronic developmental hypoxia is known to result in lung hypoplasia and immaturity, pulmonary edema, and altered endothelial function (2, 20, 39, 46). Alterations in the synthesis and function of vasoconstrictors such as ET-1, as reported in this paper, as well as thromboxane, IGF, serotonin, and leukotriene C4/D4, have also been implicated in the pulmonary hypertensive phenotype during chronic hypoxia (29, 45). In the present study, the greater pulmonary hypertension under basal and stimulated conditions in lambs from pregnancies after 70% exposure to high altitude occurred despite evidence of enhanced NO-dependent dilator function in the pulmonary vascular bed. The greater pressor response to treatment with L-NAME, the increased expression of eNOS mRNA and protein, and the enhanced isolated-vessel dilator response to SNP all strongly support enhanced NO function in the pulmonary vasculature of lambs from pregnancies after long-term exposure to high altitude. PDE is an enzyme that breaks down cGMP and thus halts the NO vasodilator cascade (42). In this study, LHL lambs also showed greater pulmonary protein expression of PDE5, findings similar to those reported in hypertensive lambs and lambs native to high altitude (22, 24). Although a greater protein expression of pulmonary PDE5 may itself favor constriction in the pulmonary vascular bed, it is likely that the increased expression of PDE5 occurs to match the other components of the enhanced NO cascade; it is therefore likely not a cause but a consequence of the pulmonary hypertension in lambs from pregnancies exposed to high altitude. In the present study, blockade of sGC with ODQ completely prevented the pulmonary dilator response to the NO donor SNP in control lambs, but not in lambs from pregnancies after prolonged exposure to high altitude. In the latter group, the dilator response to SNP persisted, albeit at a reduced level. This suggests that long-term exposure to high altitude during pregnancy may trigger an enhancement of NO dilatation pathways in addition to the activation of sGC in vascular smooth muscle. One possibility is a direct action of NO on the activation of K+ channels, as has already been described for the BKCa channel (8). Accordingly, in the present study, LHL lambs showed significantly greater pulmonary BKCa protein expression. What is important to highlight is that, despite evidence of enhanced pulmonary NO function via at least two different signaling cascades, this adaptive response is insufficient to offset pulmonary hypertension and vascular remodeling in lambs, even following return to sea level.

In conclusion, postnatal pulmonary hypertension induced by long-term exposure of the pregnancy to high altitude persists at sea level, despite enhanced pulmonary NO function. This condition is associated with a decrease in the production of pulmonary CO, coupled with an increase in vascular reactivity to constrictors and associated cardiopulmonary remodeling processes.

Perspectives and Significance

During acute episodes of hypoxia, the pulmonary vascular bed undergoes constriction to match the reduced oxygenation with reduced perfusion.
During sustained hypoxia, this initial homeostatic response becomes maladaptive, triggering sustained increases in pulmonary vascular resistance and leading to the establishment of pulmonary hypertension. Our studies show that this maladaptive pulmonary constrictor response to hypoxia can be triggered in the newborn lamb following pregnancy at high altitude, both when the measurements are performed at high altitude (23-25) and even following return to sea level. Sustained pulmonary hypertension and remodeling of the pulmonary vasculature suggest possible persistence of this maladaptive response into adulthood. The implications of these findings are relevant not only to women of reproductive age native to sea level countries who are considering travel or work at high altitude, but also to the developmental programming of pulmonary hypertension in adulthood by prenatal hypoxia (14, 34).
Polarization of chirality

It has been long recognized that the spatial polarization of the electronic clouds in molecules, and the spatial arrangements of atoms into chiral molecular structures, play crucial roles in physics, chemistry and biology. However, these two fundamental concepts - chirality and polarization - have remained unrelated so far. This work connects them by introducing and exploring the concept of polarization of chirality. We show that, like charge, chirality, or handedness, can be polarized, and that such polarization leads to fundamental consequences, demonstrated here using light. First, we analyze how chirality dipoles and higher-order chirality multipoles manifest in experimental observables. Next, we show how to create chirality-polarized optical fields of alternating handedness in space. Despite being achiral, these racemic space-time light structures interact differently with chiral matter of opposite handedness, and the chirality dipole of light controls and quantifies the strength of the enantio-sensitive response. Using nonlinear interactions, we can make a medium of randomly oriented chiral molecules emit light to the left, or to the right, depending on the molecular handedness and on the chirality dipole of light. The chiral dichroism in emission direction reaches its highest possible value of 200%. Our work opens the field of chirality-polarization shaping of light and new opportunities for efficient chiral discrimination and control of chiral and chirality-polarized light and matter on ultrafast time scales.

Chirality, or handedness, is a ubiquitous property of light and matter characterized by an unusual type of symmetry - mirror symmetry. Mirror reflection transforms a chiral object into its opposite counterpart, with our left and right hands being a typical example. These mirror twins are called enantiomers, and symmetry dictates that they must behave identically unless interacting with another chiral object. Handedness changes sign upon reflection: two enantiomers are characterized by handedness of opposite sign. In this sense, handedness is similar to charge: just as a collection of charges can be positive, negative or neutral, a collection of chiral objects can have positive, negative, or zero total handedness (the latter called racemic mixtures). Chirality is of tremendous importance in nature and distinguishing molecular enantiomers is vital, which has stimulated a major recent research effort [4-20]. Just as important is the role of spatial polarization of electronic clouds in molecules, from controlling how fatty acids dissolve in water to giving specificity to the biological activity of enzymes [21]. Polarization of charge characterizes overall neutral but non-uniform spatial charge distributions and their interaction with other charges. Following our parallel between the distribution of charge and the distribution of handedness (see Fig. 1),
one would expect polarization of chirality to play an important role in the interaction between extended chiral objects displaying a position-dependent chirality. Is this expectation correct? Can we engineer such structures? How effective is their interaction with chiral media and how can it be characterized? Can polarization of chirality be detected in an experiment? Here we give positive answers to these fundamental questions.

Let us begin with the simple case depicted in Fig. 1a: a one-dimensional arrangement of alternating positive and negative charges ±q. When the particles are uniformly distributed, the medium is not polarized. It becomes polarized as we modify their positions, creating dipoles. However, the definition of the unit cell in Fig. 1a, together with the polarization of the unit cell, is not unique - a well-known problem encountered in solid state physics [22]. Therefore, the concept of a chirality dipole should rely on specific observables. In this context, consider the interaction of chiral matter with chiral light. Traditionally, chiro-optical effects have relied on the interaction with both the electric and magnetic field components of a circularly polarized light wave. However, the magnetic interaction occurs beyond the electric-dipole approximation and is often very weak. This limitation can be bypassed in a particularly straightforward manner by using a pair of non-collinear laser beams. These allow us to engineer propagating optical fields with three orthogonal polarization components [23-26] and create synthetic chiral light [25]. Such light is chiral already in the electric-dipole approximation: the electric field of light draws a chiral Lissajous figure at every point in space [25]. The chiro-optical response to this light does not rely on magnetic interactions and is orders of magnitude stronger. While synthetic chiral light is chiral at every point in space, its handedness can be distributed in space as desired. Thus, it is ideally suited to model distributed handedness and test the concept of polarization of chirality.

The enantio-sensitive optical response to chiral light originates from the interference of two contributions to the light-induced polarization $P_\omega = P_\omega^{ACH} + P_\omega^{CH}$ at a given frequency ω, one of them not sensitive to chirality ($P_\omega^{ACH}$), and the other unique to chiral matter ($P_\omega^{CH}$), which is out of phase in media of opposite handedness. The microscopic intensity $|P_\omega|^2$ is sensitive to the interference of $P_\omega^{CH}(r)$ and $P_\omega^{ACH}(r)$ at the same point r through the term $P_\omega^{ACH*}(r)\cdot P_\omega^{CH}(r) + \mathrm{c.c.}$ This interference encodes the distributed handedness of light or matter (or both). That is, the near-field response records the distributed handedness locally. In contrast, the far-field signal provides access to the interference of the chiral and achiral contributions from the whole interaction region and is sensitive to spatial correlations of $P_\omega^{ACH}(r)$ and $P_\omega^{CH}(r+r')$. The far-field signal maps the distribution of handedness onto observables such as the enantio-sensitive direction of light emission and the enantio-sensitive shape of the emission pattern on the screen. These observables are simply the different moments of the enantio-sensitive component of the intensity distribution in reciprocal space, which is proportional to the real part of Eq. (1). (The subscript ω indicates that we consider the far-field signal centered at frequency ω with bandwidth Δω ≪ ω, hence Δk ≪ k; we omit this subscript below for brevity.)
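Schematically, and only as an illustrative sketch (not the paper's exact Eqs. 1-3; the labels below are ours), the moment hierarchy described here can be written as:

```latex
% Illustrative sketch of the moment hierarchy (labels are ours, not the paper's):
I_{ES} \;\propto\; \operatorname{Re}\!\int \tilde{G}(\mathbf{k})\,\mathrm{d}\mathbf{k}
\quad \text{(enantio-sensitive contribution to total intensity)}
\\[4pt]
\langle\mathbf{k}\rangle_{ES} \;\propto\; \operatorname{Re}\!\int \mathbf{k}\,\tilde{G}(\mathbf{k})\,\mathrm{d}\mathbf{k}
\quad \text{(enantio-sensitive mean emission direction)}
\\[4pt]
M^{(n)}_{ES} \;\propto\; \operatorname{Re}\!\int k_{i_1}\cdots k_{i_n}\,\tilde{G}(\mathbf{k})\,\mathrm{d}\mathbf{k}
\quad \text{(higher moments: shape of the far-field pattern)}
```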
For example, the enantio-sensitive contribution to the total intensity and the average direction of emission are given by the zeroth and first moments of the distribution, respectively (Eq. 2), while the enantio-sensitive shape of the emission pattern on the screen is given by the higher moments (Eq. 3). Eqs. (2-3) describe the multipoles of the enantio-sensitive intensity distribution in k-space. The different moments of $\tilde{G}(k)$ reflect the fact that handedness can have complex distributions both in coordinate and reciprocal space. If the handedness of matter is distributed uniformly, then $\tilde{G}(k)$ reflects the distributed handedness of light $\tilde{h}(k)$: $\tilde{G}(k) \propto \tilde{h}(k)$. The enantio-sensitive contribution to light intensity, the direction of light emission, and the shape of the light spot on a screen will then encode the zeroth, first, and higher-order moments of $\tilde{h}(k)$. The enantio-sensitive contribution to the total intensity is only non-zero if light's handedness is non-zero on average, $h_0 \neq 0$. But even if $h_0 = 0$, enantio-sensitive effects remain. For example, if light is racemic, $h_0 = 0$, but chirality-polarized, i.e. $\tilde{h} \neq 0$, we will see enantio-sensitive deflection in the nonlinear optical response. In general, the distributed handedness of racemic objects manifests itself in an entire array of tensorial enantio-sensitive observables. Their measurement requires acquisition of N-dimensional data sets, where N is the rank of the corresponding tensor. We now illustrate this general analysis with a specific example and demonstrate that racemic, chirality-polarized light can be used to discriminate chiral molecules with extremely high efficiency.

Chirality-polarized light can be created using two beams propagating in the xy plane, at small angles α = ±5° with respect to the y axis, as shown in Fig. 2a. Both contain a fundamental field, linearly polarized in the plane of propagation, and a weak second-harmonic component polarized orthogonal to this plane. In the overlap region, the total electric field is elliptically polarized in the xy plane at frequency ω, and it has a weak, linearly polarized 2ω frequency component along z, where the two-color phase delay $\phi = (\phi_1+\phi_2)/2$, which controls the field's handedness, is determined by the two-colour phase delays in the individual beams, $\phi_1$ and $\phi_2$. The spatial modulation of the three orthogonal polarization components, $F_x$, $F_y$ and $F_z$, is presented in Fig. 2b. The electric field vector, at a given point in space, draws a chiral Lissajous figure in F-space. Fig. 2c shows that the field's transverse spin $S_{2\omega} \propto F_\omega^* \times F_\omega \propto F_x F_y \hat{z}$ and $F_{2\omega} = F_z \hat{z}$ change sign at different positions. As a result, the sign of their product $S_{2\omega}\cdot F_{2\omega}$, which determines the sign of light's handedness, changes periodically in space. This spatial distribution of the field's handedness in the near field is recorded in the non-linear response of the medium. In the lowest order of non-linearity, the strength of the local enantio-sensitive response is controlled by the chiral correlation function $h^{(5)}$ [25], presented in Fig. 2d. We see a periodic structure of "dimers" of alternating handedness - the structure envisioned in Fig. 1b. The overall light field has mirror symmetry with respect to the yz plane (up to a global time shift) and is achiral. However, its handedness is polarized, with the x-component of the dipole of chirality (Eq. 4) given by Eq. (6), where $\phi_1$ and $\phi_2$ are the two-colour phase delays in each of the two beams (see Fig. 2). The difference $\phi_2 - \phi_1$ controls the amplitude of $\tilde{h}$, which maximizes for $\phi_1 = \phi_2$.
The phase of $\tilde{h}$ is controlled by $\phi_1 + \phi_2$. This gives us control over the polarization of the field's handedness: we set $\phi_1 = \phi_2$ to maximize its strength, and then vary $\phi_1$, $\phi_2$ synchronously to control the orientation of $\tilde{h}$ in the complex plane. Note that the polarization of the field's handedness in position space, evident in Fig. 2d, leads to a non-zero value of $\int h^{(5)}(x)\,x\,dx$, which is proportional to $\tilde{h}_x$ for this definition of the unit cell.

We now analyze the interaction of the chirality-polarized light field depicted in Fig. 2 with chiral matter. Fig. 3 shows the nonlinear response of left- and right-handed randomly oriented fenchone molecules driven by the field in Fig. 2. The fundamental wavelength of the incident field is 1300 nm with intensity 7.5 × 10^12 W·cm^-2 in each beam; the second-harmonic intensity is 100 times weaker, and we calculate the response polarized along z. Panels a,b show the intensity and phase of harmonic 12 (for other harmonics, see Supplementary Information). The response of opposite molecular enantiomers is antisymmetric with respect to x; the effect of exchanging the molecular enantiomer is equivalent to reversing the polarization of chirality of the field, which can be done by shifting the two-colour phase delay in both beams (see Eq. 6). The single-molecule response, at a given point in space, is enantio-sensitive in intensity (Fig. 3a) because the driving field is locally chiral. However, the overall light field is achiral, and thus the total intensity signal, obtained after integration over x, is identical in left- and right-handed molecules. Still, the direction of polarization of the field's handedness is imprinted in the phase of the nonlinear response, which depends strongly on the molecular handedness (Fig. 3b). In the far field, the total intensity is the same for left- and right-handed molecules, as in the near field (Fig. 3a). However, the direction of emission is extremely enantio-sensitive: while the left-handed molecules emit harmonics to the left (towards negative x), the right-handed molecules emit harmonics to the right (positive x). We control the enantio-sensitive direction of emission by controlling the polarization of the field's handedness in our setup (see Eq. 6). Chiral dichroism resolved in the emission angle reaches the ultimate efficiency limit of CD = 200% (Fig. 4b). We find giant enantio-sensitivity in the direction of emission of all even harmonics; see Supplementary Information. We define the left-right asymmetry in the harmonic emission as $A_{LR} = 2(I_L - I_R)/(I_L + I_R)$, where $I_L$ and $I_R$ are the intensities of harmonic emission to the left (β < 0) and to the right (β > 0), respectively. This angle-integrated quantity reaches very high values for all harmonic numbers, as shown in Fig. 4c. The direction of harmonic emission is correlated with the enantiomeric excess of the medium, $ee = (C_R - C_S)/(C_R + C_S)$, with $C_R$ and $C_S$ being the concentrations of the right- and left-handed molecules; see Fig. 4d. The expectation value of the emission angle is then given by Eq. (7), where $\tilde{I}_{aR}$, $\tilde{I}_\beta^a$ and $\tilde{I}_\beta^R$ are angle-integrated quantities that do not depend on ee. Eq. (7) allows us to quantify small values of enantiomeric excess in macroscopic mixtures. Polarization of chirality is a powerful concept which allows one to engineer highly efficient chiral interactions of racemic objects. It opens several opportunities.
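As a toy numerical illustration of these ideas (a minimal sketch: the "dimerized" handedness profile below is a generic model inspired by Figs. 1b and 2d, not the paper's actual field, and all parameter values are hypothetical):

```python
import numpy as np

# Toy 1-D "unit cell" of a racemic, chirality-polarized handedness profile h(x):
# one positive and one negative lobe of equal weight (net handedness ~ zero),
# arranged so that the cell carries a nonzero chirality dipole.
x = np.linspace(-1.0, 1.0, 4001)   # position within one cell (arbitrary units)
dx = x[1] - x[0]
width = 0.15                        # hypothetical lobe width
a = 0.4                             # hypothetical lobe positions at -a and +a
h = np.exp(-((x + a) / width) ** 2) - np.exp(-((x - a) / width) ** 2)

h0 = np.sum(h) * dx                 # zeroth moment: net handedness ~ 0 (racemic)
dipole = np.sum(x * h) * dx         # first moment: chirality dipole of this cell
# As the text notes, the dipole of a periodic pattern depends on the choice of
# unit cell, which is why observable far-field quantities are used instead.
print(f"h0 = {h0:+.3e}, dipole = {dipole:+.3e}")

# Left-right asymmetry of harmonic emission as defined in the text:
# A_LR = 2 (I_L - I_R) / (I_L + I_R), bounded by +/-2, i.e. +/-200%.
def a_lr(i_left, i_right):
    return 2.0 * (i_left - i_right) / (i_left + i_right)

print(f"A_LR = {a_lr(1.0, 0.0):+.1f}")  # all emission to the left -> +2.0
```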
The first of these opportunities stems from the flexibility of chiral polarization shaping of synthetic chiral light, where the spatial dependence of the field's transverse spin $S^{(n)}_{\omega_n}(r)$ and of the electric field component $F_{\omega_n}(r)$ parallel to it can be controlled separately. Here one can take full advantage of modern light-shaping techniques, including polarization shaping in space and time [27-29]. In contrast, in standard circularly polarized light this opportunity is limited, since its electric and magnetic components are locked to each other. A non-zero dipole of chirality is present in any locally chiral field [25, 26] in which the field's transverse spin $S^{(n)}_{\omega_n}(r)$ and $F_{\omega_n}(r)$ have opposite parity.

The second opportunity is to use chirality-polarized light to imprint polarization of chirality onto racemic matter, creating "chiral polaritons". The phase relationship between the imparted angular momentum and the polarization waves induced in the medium by chirality-polarized light will define the medium's chiral polarization, its coherence length, and the strength of its interaction with other chiral structures.

The third opportunity is to exploit giant enantio-sensitivity not only to create chirality-polarized matter, but also to control it on ultrafast time scales. Chirality-polarized light may allow us to identify racemic aggregates of chiral matter that exhibit complex chirality patterns in space, and to quantify their degree of polarization of chirality. Finally, polarization of chirality can also be used for efficient separation of opposite enantiomers in racemic mixtures, by extending the proposal of Ref. 30.

[Fig. 2 caption fragment] ... $F^{(2)}_{2\omega}/F^{(2)}_\omega = \sin(\alpha)$; 2α = 10° is the angle between the beams; the focal diameter is 200 nm. c, Normalized 2ω-field amplitude ($F_z$) and transverse spin ∝ $F_x F_y$. d, Local handedness of the light field, characterized by its fifth-order chiral correlation function $h^{(5)}$. The colours encode the phase of $h^{(5)}$ and thus the field's handedness, which is controlled by the relative phase φ (see Eq. 5); purple: arg{h
Innovative strategies for the reception of asylum seekers and refugees in European cities: multi-level governance, multi-sector urban networks and local engagement

Cities are taking a prominent role in solving global challenges, with a 'new localism' inviting a reorientation of power from nation-states downwards, outwards and globally. This special issue explores this phenomenon by extending the existing analyses of multi-level governance and the 'local turn' to the underexplored area of asylum seeker and refugee reception in European cities. The special issue draws on research in European cities where new strategies were piloted, especially in the wake of 'the refugee crisis' from 2015, consolidating the 'local turn' evident in immigration and integration policy-making. The collection is in two parts: the first part explores innovation in the local governance of asylum seeker reception. Here, case studies demonstrate how cities responded by forging new alliances both vertically and (especially) horizontally, in networks within and between cities. The second part explores innovation in practice, analysing novel initiatives premised on local engagement and the inclusivity of newcomers within the social fabric of the city. This editorial paper draws out the wider lessons of these efforts from a comparative exploration of attempts to rethink asylum seeker and refugee reception at the local level.

(historically) transit countries in the Mediterranean region and those with more established histories of immigration in North-Western Europe. The contributions invite reflections on the extent to which local innovation is possible within broader frameworks of regional, national and European policy. Asylum seeker reception is an example of a complex global issue that manifests itself locally. Such issues drive what Katz and Nowak (2018) describe as the 'new localism', referring to radical shifts in the location of power: downwards from nation-states to cities, horizontally from national governments to networks, and globally to transnational networks (Marks and Hooghe 2004; Barber 2013). Pannizon and van Riemsdijk (2019) suggest that the European refugee 'crisis' heralded a reorganization of power constellations between local, regional, national and supranational governance. Migration policy-making at the European level saw shifts both in the interactions between the supranational and intergovernmental realms and in intergovernmental cooperation on migration issues (ibid.; Slominski and Trauner 2018). Yet it also instigated a shift of power downwards and horizontally, especially as local municipalities were left to manage the consequences of incapacity at the EU and national level to respond quickly and adequately to the urgency of asylum seeker reception (Caponio 2019; Agustín and Jørgensen 2019). Migration research had already noted this downward shift through attention to the 'local turn' in migration policy, referring to the trend of cities taking a more prominent role in the broader framework of multi-level governance (MLG) of migration (Caponio and Borkert 2010; Scholten 2013; Dekker et al. 2015; Scholten and Penninx 2016; Zapata-Barrero et al. 2017). As Garcés-Mascareñas and Gebhardt (2020, p. 3) note, 'cities become particularly relevant whenever they go beyond the role of simple passive implementers and actively interact with national or regional policies'.
Recognizing the critical role of cities is important, too, to temper the 'methodological nationalism' that privileges the nation-state in migration research; more understanding is required of migrants' relationships with the cities in which they live, including their active role in city-making (Çağlar and Glick Schiller 2018). While MLG is well explored in relation to broader migration integration processes, as well as in relation to undocumented migrants (Spencer 2018), this special issue adds to that corpus of knowledge by examining how the local turn manifests in relation to asylum seeker reception, complementing the collection by Glorius and Doomernik (2020) on this theme. Paying attention to city-level innovation should not suggest that this is necessarily a common practice. Indeed, evidence points to widespread convergence in reception approaches across European cities, reflecting local governments' limited role in decision-making as relevant policies are shaped at the national and supranational level (Kreichauf 2018; Glorius and Doomernik 2020). Some evidence even suggests that national governments are also aiming to regain power over local integration policies and to exert greater control and influence at the local level (Campomori and Ambrosini 2020; Emilsson 2015). However, the case studies point to an important counter-trend in which local actors assume the role of innovators and policy entrepreneurs, sometimes developing policies completely apart from national policy, a process described as 'decoupling' (Scholten and Penninx 2016). The innovations often build on civic and political traditions which, through the contributions of individuals, voluntary and faith groups, media and police, cultivate an 'urban citizenship' as an antidote to national policies (Hintjens and Pouri 2014; see Oomen 2019a). This is evident in cities developing their own responses in wider policy domains, such as environmental or housing policies (e.g. Barcelona's curbing of the tourist rental market or London's Ultra Low Emission Zone aiming to improve air quality), as well as those more specifically in the migration and integration sector. These include examples such as New York's IDNYC programme, which provides identification documents for all city residents, including irregular migrants; Dutch cities' development of a 'Bed, Bath and Bread' provision for irregular migrants in contravention of national policy frameworks; and Liverpool's decision to allow people access to homeless shelters regardless of immigration status. To understand this specific type of urban innovation in the field of asylum seeker and refugee reception, we adopt Sørensen and Torfing's (2011, p. 8) widely accepted definition of collaborative public innovation as 'an intentional proactive process that involves the generation, practical adoption and spread of new and creative ideas, which aim to produce a qualitative change in a specific context'. This collection considers how cities - as contexts, and referring to the multi-sector urban networks of local actors therein - were able to seize a moment of opportunity to disrupt and innovate asylum policies and practices. In presenting some of these innovations, our aim is not to provide an overview of practices, since there are many other emerging developments which we cannot include. Rather, the collection aims to examine case studies of cities in depth, to understand the reasons for and mechanisms involved in their development, as well as to understand their dynamics and emerging outcomes.
To do so, the collection considers innovation in this field in two ways. First, the special issue explores case studies of innovations in governance: the emerging organizational arrangements, alliances and dynamics of sharing, or contesting, power around the issue of asylum seeker reception. Innovation in this sphere occurs where networks of different actors meet in new configurations to fill gaps and change local practice, including through collections of local, national, regional and international partners in horizontal, sub- and transnational networks. These actors can disrupt or even partially exit from national state-run hierarchies, become part of multiscalar, regional and global networks, and assert the power of subnational jurisdictions (Sassen 2012; see Oomen 2019b). Funding from the European Union and other sources, such as independent philanthropy, can be an incentive for new networks to form and for their legitimacy to grow, as some of the case studies show; these networks may be radical, combative, or unfold in more limited and constrained ways. The second form of innovation refers to innovation in local practices: the material arrangements, initiatives and schemes put in place locally to receive asylum seekers into the city. As much as a local turn, the examples indicate 'a turn to the local', as city authorities engage specific local actors and neighbourhoods to co-create and try out new and different ways of 'doing' rather than 'giving' reception. Poppelaars and Scholten (2008) point out that local levels of governance tend to prioritize pragmatic approaches to integration over the ideological drivers of national approaches to the issue. The cases included here indicate this is true, yet they also highlight a growing confidence at the local level in developing responses and building alliances founded on different values and goals (Katz and Nowak 2018; see also Oomen 2019a). While it cannot be assumed that local policy formation is always inclusionary (see Ambrosini 2013), and there can be a confirmation bias towards inclusionary cities in this field (Caponio 2019), these types of initiatives nevertheless often show more concern for humanitarian logics. They embrace values of diversity, prioritize inclusivity and claim responsibilities towards the new urban inhabitants as 'newcomers' and 'urban citizens', rather than through the more stigmatized and ideologically loaded notions of asylum seekers or refugees (Oomen 2019a). According to Agustín and Jørgensen (2019), the many new solidarity movements emerging also indicate a new kind of 'cosmopolitanism from below'. New frames are developing at the local level, which draw on the different roles and competencies of local governance in focusing on such notions of inclusive community rather than an exclusionary frame (Broadhead 2020). Urban initiatives also demonstrate a parallel concern for existing residents, recognizing the impacts of asylum seeking on community cohesion. Local community objections may be founded on a sense of economic competition, cultural distance and security concerns (Zorlu 2017). Yet recognizing these concerns can inform the design of reception models, and can also inform sensitive communication around reception. Several initiatives in this collection did so by reframing a prevailing local notion of refugees as a 'burden' and sought to bridge the social distance between existing inhabitants and newcomers (Zill et al. 2020; Oliver et al. 2020).
This embraces a notion of cities as 'strategic frontier zones': sites in which alternative, more inclusive intercultural norms of migrant reception can be forged (Sassen 2012; Zapata-Barrero 2019). Attention to the urban should not overlook that, especially through dispersal policies, reception also often occurs in rural areas and smaller towns, and innovation can occur there too (Whyte et al. 2019; Bock 2018; Caponio 2019). Assuming that urban areas hold a monopoly on innovation risks replicating and reinforcing divides (Jennings and Stoker 2016). Nevertheless, scholars acknowledge the increasing urban articulation of global issues (Sassen 2012), recognising that there is often more capacity, more opportunity for mobilization and more political will in large cities to bring about change (Caponio 2019). As Sassen (2012) poses, while cities can be spaces of exclusion and sites of operation for global corporate capital, they are also hybrid spaces that possess the capacity for inhabitants to develop alternative norms, forge new identities and overcome divisions and borders. The special issue examines how experiments in asylum seeker reception aim to harness this transformative potential of cities. It critically examines how they fare over time, especially when they chafe against nationally managed and centralised systems of asylum reception. The particular questions posed are summarized as follows:

1. What forms of alternative and innovative responses to asylum seeker and refugee reception can be seen emerging at the local level in European cities?

2. What are the dynamics of multi-level and multi-sector governance in asylum seeker reception? In particular, what role do horizontal alliances and other sub- or transnational networks have, and how do these interact with national governments?

3. What can be learned from the empirical study of specific local innovations for asylum seeker reception founded on principles of inclusivity? What is their potential for addressing problems in reception and forging understanding between asylum seekers and existing city inhabitants?

In the following discussion, the argument is first made for why innovation is emerging, and why, therefore, a collection such as this is significant and timely. This is followed by an assessment of the individual contributions to the Special Issue, as related to the questions posed above.

Rationale: the need for and potentials of innovation in asylum seeker reception

Asylum seeker reception poses a major challenge to multi-level migration governance. As a complex, or 'wicked', global policy issue, like other areas of immigrant reception and integration, it is an intractable policy problem that defies a clear solution, with different views at different levels on how to approach the issue, or indeed on what the issue is (Boswell 2008; Scholten 2013; Geuijen et al. 2017). Certainly, migration is a field where actors' views at national and local level can be highly divergent (Spencer 2018; Ambrosini 2013; Scholten 2013; Scholten and Penninx 2016). This disconnect stems from the fact that asylum is a competence managed by national governments, where cities are subject to top-down decision-making and the imposition of centralised policies. Localities, by contrast, have a major role in the practical management of asylum seeker reception, with only limited input into decision-making (Glorius and Doomernik 2020). There are also divergent values, logics and objectives for asylum seeker reception.
At the national level, the goal is primarily to guard the integrity of national borders, an objective governed through laws, rules and obligations (Barber 2013). As Darling (2014, p. 77) explains, a 'classificatory impulse of the "domopolitics" of asylum' drives the development of a regime to secure, keep order and maintain an image of the nation. This is achieved through largely administrative and rational processes of categorising, accommodating, dispersing and detaining asylum seekers to determine those 'worthy' or 'bogus' in placing a claim on state resources (Darling 2014). While status determination occurs, reception is provided at the most basic level, at the lowest cost. This both discourages people from arriving and staying, and simultaneously displays a government's 'toughness' on immigration to its public (Mayblin and James 2019). A different set of concerns manifests at the local level, where authorities are tasked with practically managing the effects of sudden migratory flows on the social fabric of the city. Hostility and public protests about the local provision of accommodation for asylum seekers can threaten the social order in the city, especially when reception facilities are located in marginalized neighbourhoods for reasons of cost and efficiency (Bock 2018; Oliver et al. 2020). Fears are exacerbated, too, by reception occurring in 'camp-like' structures across European cities, which share socio-spatial characteristics that limit connections between asylum seekers and the localities in which they are placed (Kreichauf 2018). Nationally imposed asylum conditions forbid asylum seekers from participating in local economies, and some use curfews to restrict movement. Such social and spatial isolation can help depoliticize the issue, by 'distanc[ing] those seeking asylum from the political and the public gaze' (Darling 2014, p. 77). Practical concerns are also met with moral concerns for the treatment of asylum seekers at the local level. Although the EU's Common European Asylum System (CEAS), through the Directive on Reception Conditions, sets minimal standards for how asylum seekers are treated while their application is being processed, Glorius and Doomernik (2020) point out that there is, in practice, a great deal of variation in provision. Austerity measures in public spending, coupled with welfare chauvinism, have meant that reception conditions have been deteriorating. Critics of the CEAS argue that rather than safeguarding standards, it acts more like a 'race to the bottom', in which standards in countries already maintaining good reception conditions could deteriorate, while countries with poor conditions could opt out anyway (Mayblin 2016). As Mayblin and James (2019, p. 377) note, support for asylum seekers is not electorally popular, so at the national level there is even a 'political disincentive to ensure support levels are adequate'. This deterioration in standards is evident across a range of European countries. Italy's former populist government, led by the League and the Five Star Movement, in 2018 installed a more restrictive system of asylum seeker reception, delivering emergency reception of just 'bed and bread' and removing funding for other services such as legal help and language courses (Campomori and Ambrosini 2020). In the UK, the outsourcing of asylum to centralized private providers has worsened conditions of reception and further limited local authority control (Darling 2016).
Under the last contracting round, the government watchdog sought to recover funds of £7m from providers because of poor performance, with evidence to the review focusing on 'poor property standards', including 'frequent references to defects, damp, dirt and vermin' (Independent Chief Inspector of Borders and Immigration (ICIBI) 2018, p. 14). Such minimal standards create problems for community cohesion, and local authorities must also mitigate the longer-term consequences of asylum reception, including the losses in skills and wellbeing generated during often protracted asylum processes (ibid.). Refugees experience deterioration in skills and networks through periods of inactivity, and ultimately lower rates of labour market participation and higher unemployment than other migrants (Bakker et al. 2014; Kone et al. 2019). They also experience downward mobility into low-skilled, low-status, low-paid and insecure jobs (Bloch 2008). Existing difficulties in converting human capital are worsened by the loss and deterioration of skills while waiting and being unable to work, as well as by the inhibited possibilities for expanding asylum seekers' already limited social capital within reception facilities (De Vroome and van Tubergen 2010). Reception has been shown to have damaging health and psychological effects, compounding previous trauma and aggravating asylum seekers' mental health problems as they experience their lives in limbo and 'on hold' because of the forced inactivity (Li et al. 2016; Miller and Rasmussen 2017). Many local governments are recognising that enabling earlier-stage labour market participation and social integration would likely pay dividends in reducing the risks of creating an excluded urban population (see Bakker et al. 2014). Such contradictory concerns of national and local levels of government can sow the seeds of contestation in governance arrangements and fuel a desire for different solutions locally. The introduction now turns to consider alternatives, focusing on six case studies, which overall provide insight into innovative responses to asylum seeker and refugee reception emerging in and across European cities (question 1).

Case studies in local innovation

Innovation in governance arrangements: network-building, conflict and cooperation

The first three contributions to the collection particularly explore the different governance arrangements emerging at the local level (question 2). Scholten and Penninx (2016) identified several types of migration governance arrangement between national and local levels, including (1) top-down, or 'centralist'; (2) bottom-up, or 'localist'; (3) dehierarchised, or 'multi-level'; and (4) decoupled. These case studies point to an increasing development of bottom-up and decoupled governance in this field, especially through alliances developed within and between cities. The first two articles explore multi-level governance from the perspective of city authorities in Spain (Barcelona) and Italy (multiple cities). Both countries are first points of entry for asylum seekers in Europe, and responses have developed rapidly. Until the current decade, they perceived themselves mainly as countries of transit rather than reception, given that, until then, the numbers of asylum seekers officially recorded were few and arrangements little developed. The contribution by Garcés-Mascareñas and Gebhardt (2020) focuses on how, during recent years, Barcelona has nevertheless modelled itself as 'a city of refuge'.
They show how policy entrepreneurs are developing bottom-up governance and a 'municipalist' philosophy in the face of a heavily centralized, but increasingly dysfunctional, national approach to asylum seeker reception. Key to this has been heavy investment of the city's own resources into reception policies, as well as political arguments for change, drawing strength from coalitions made with other cities of refuge nationally and with 'Solidarity Cities' across Europe. These alliances have translated into attempts to influence cities' rights to secure future funding, including European funds managed at the national level (Asylum and Migration Funds) and those related to the European Urban Agenda. Arguably this has led to a 'wider shift in favour of cities' in the next generation of integration funding (Garcés-Mascareñas and Gebhardt 2020). Francesca Campomori and Maurizio Ambrosini's contribution (2020) explores dynamics across regions and cities in Italy, showing how mobilization for alternative approaches is occurring in the face of 'a renewed national turn' in Italy. The contribution shows the growing significance of horizontal multi-sector alliances in this policy field, whereby asylum seeking is a process managed not solely by national and local political authorities and NGOs. Instead, it now involves a much wider group of actors, including migrants themselves, pro-immigrant actors and social movements, and xenophobic movements too. They argue that most studies of MLG in relation to migration pay less attention to this horizontal dimension, favouring attention to the multi-level, or vertical, aspects of MLG. In addressing this deficit, they describe local policies of reception as 'a playing field' where a much broader range of actors, both state and non-state, come together. This is confirmed within other accounts later in the collection (e.g. Oliver et al. 2020), where innovation comes from multi-sector networks including public-private partnerships, involving actors from local businesses and social enterprises. These networks can also include educational institutions, both as providers of services and as partners in research, evaluation and knowledge exchange (see also Broadhead 2020). Both contributions also expose the dynamics and understandings between different actors, both across different levels of governance and within multi-sector alliances. As Campomori and Ambrosini (2020) assert, crucial here is a recognition of MLG less as a 'negotiated order' than as a process marked by conflict. Campomori and Ambrosini favour the concept of the 'battleground' as a more appropriate metaphor to characterise the (often neglected) interplay between vertical and horizontal spaces. Friction and tension are also present in the palpable and growing anger which fuels confidence among local coalitions to resist implementation or remedy the effects of national systems, as well as in the increasing demand at the local level for funds to support those actions (see also Garcés-Mascareñas and Gebhardt 2020). Campomori and Ambrosini helpfully analyse how a range of dynamics can characterize the interplay between public powers and civil society organizations. These stances include: closure to civil society activism; tolerance; immigrant activism versus anti-immigrant mobilization; and cooperation. Nevertheless, within multi-sector alliances, while conflict may be a defining characteristic, we see that other forms of cooperation emerge where a shared perspective on this topic transcends other important differences.
Thus in Spain, Garcés-Mascareñas and Gebhardt show how cities of different political colours were able to unite behind the same cause. City-to-city cooperation is particularly prominent in the third case study in the collection: Jacqueline Broadhead's (2020) analysis of an emerging UK city network. This provides an example of how, even under the constrained conditions of highly centralised national policy on asylum and resettlement and privatised asylum seeker accommodation, city governments have collaborated in order to develop more assertive leadership. Broadhead examines a newly emerging city network informed by knowledge exchange from the University of Oxford and a transnational learning exchange. This fledgling network helped to bring cities together, inspiring and informing local practice. Broadhead's analysis identifies some of the key strategies for inclusion adopted at city level. These include, first, the reframing of the migration issue towards a 'newcomer' frame. Here, asylum seekers are responded to less as a category requiring treatment and more as part of a broader group of people arriving in a city. Second is the development of city-branding, whereby cities develop place-based narratives of inclusion. Key to embedding a broader notion of reception is strategic leadership, exemplifying the importance of key local political representatives as influencers who champion alternatives and resist the national model. In Broadhead's case study, we see the recent introduction of a new Deputy Mayor responsible for social integration in London. In Barcelona, it is Mayor Ada Colau who called for a network of welcoming cities to avoid the 'war on life' that was playing out in asylum seeker reception. Finally, several of the cases underline how city-to-city networks are proving an important force in mobilization (Caponio 2019; see also Oomen 2019b). Often they developed with a function to exchange ideas and learn from each other's experiences of local integration policies (e.g. the 'Cities for Local Integration Policies' (CLIP) network, which began in 2006 as an initiative of policy makers to network 25 European cities). Yet these alliances have increasingly developed into numerous and sometimes large regional and transnational cooperative networks, such as Eurocities, the Intercultural Cities networks, the Global Parliament of Mayors and the recently founded Mayors Migration Council (Oomen 2019b). The alliances draw benefits from exchanging information, sharing experiences and developing strategies collaboratively (Barber 2013; Oomen 2019b; Garcés-Mascareñas and Gebhardt 2020). As Oomen (2019b) shows, however, there is often great power in 'teaming up', not only to share experiences but also to develop alternative narratives. Collective mobilisation is more powerful in contesting nationally restrictive arguments and can also help influence the global legal framework, as 150 mayors of cities were able to do in contributing a city perspective to the 2018 Global Compact on Refugees and Migrants (ibid.).

Innovations in practices: a turn to the local for social inclusion

Moving now to the second master-theme of the collection, we reflect on practical experiments undertaken at city level. All provided architectures, mechanisms and activities to generate social connections and provide a more inclusive form of reception for asylum seekers and refugees.
We begin with Mahieu and Van Caudenberg's (2020) analysis of an urban programme in which young unaccompanied refugees cohabited with young local citizens in small-scale collective housing units in Antwerp. Their article considers the opportunities this provided for social support and mutual learning, reflecting on the wider role that intercultural living can play in newcomer integration. This is followed by Oliver, Geuijen and Dekker's (2020) account of the Utrecht Refugee Launchpad (URLP), a co-housing project that attempted to facilitate social contact between a group of young, local tenants and asylum seekers housed in the same complex. We end the collection with Zill, Spierings and Van Liempt's (2020) Lefebvrian exploration of an alternative space of asylum seeker reception emerging from civil society organizations and third sector activism: the Grandhotel Cosmopolis in Augsburg, Germany. Part hotel, part asylum seeker centre, its café, restaurant and artistic space provided sites of encounter between local residents, tourists and asylum seekers, while its playful imaginaries of the space of asylum reception as a 'grandhotel' evoke images of encounter and intercultural exchange. All three examples show how local consensus is an important aspect of innovative projects. While existing scholarship has identified policy making as taking a 'local turn', these case studies demonstrate a 'turn to the local'. By this we mean that the innovations were founded on a pragmatic recognition that reception occurs in real neighbourhoods with real people. Not only is power located downwards, but so too is responsibility for newcomers' experiences within the city as part of its urban citizenry. For example, in Utrecht, the local government officials responsible for URLP, or 'Plan Einstein' as the initiative was colloquially known, referred to asylum seekers living in the city as 'ours' (fieldnotes from study by Oliver, Dekker and Geuijen 2019). By 'turning to the local', innovations strategically used pre-existing and newly emerging local horizontal networks, especially between local government and civil society. Equally, however, two of the initiatives were also able to seek support vertically, by exploiting a rare opportunity of European funding being directly channelled to cities through the Urban Innovative Actions scheme (https://www.uia-initiative.eu/en). Such funding allows cities to bypass national governments and pursue their own agendas (see also Garcés-Mascareñas and Gebhardt 2020). These finances were invaluable not only for providing adequate resources to fully support the investment, but also for lending legitimacy, locally and nationally, to the experiments. Ironically, such funding may even provide a politically acceptable means for national governments to permit cities like Utrecht to experiment locally in ways that would be more difficult on a national scale. However, as some of the examples show, there can be implications of projects being framed as 'experimental' and 'new'. Innovation, by its very nature, is unlikely to get everything right at first and rather implies learning through experimentation. This sits somewhat at odds with the high-risk and public nature of the ventures examined in this collection, whereby the innovations attracted high public interest. Yet as Zill et al. (2020) show, a positive media spin on these 'model projects' can take on a life of its own.
While local media reporting presented the Grandhotel Cosmopolis as experimental, the media construction at the national level framed it in 'utopian' terms. Their analysis shows that this reduced the innovation to a unique and 'modern fairytale' rather than any real, plausible alternative, or site of potential critique of the status quo nationally. The media depictions were also far removed from the reality of those living nearby, where reactions were informed by direct experience. The strongest contribution of these three articles is, however, to the final question of our special issue (question 3). Through detailed empirical study, they shed light on cities' attempts to forge greater intercultural understanding between asylum seekers and locals, based on an assumption that proximate living would create meaningful encounters. This was developed in different ways across the innovations. In Antwerp, the concept of 'organized befriending' was embedded into communal living, where refugees were matched with young buddies through a carefully considered process managed by professionals. In Utrecht, the approach adopted was co-housing and shared space for joint use by asylum seekers and tenants, alongside a wider co-learning educational programme open to both asylum seekers and locals. In Augsburg, the Grandhotel Cosmopolis project's ambition for more open asylum reception stretched out to the wider community, as the reception space was designed as an open space for spontaneous encounter. It also sought to engage with both the imagined geographies of asylum and lived experiences, recognising that asylum is understood in mental images and 'representational space', as well as through physical engagement in 'lived spaces'. In the final part of this editorial introduction, we consider to what extent the noble ideas behind these practical experiments lived up to their transformative potential in dismantling categories of 'us' and 'them'. All three contributions offer some evidence that intervening in this type of local connection can provoke some kind of meaningful encounter. They confirm academic research which shows the positive benefits of social contact and encounter for enhancing intercultural understanding (e.g. Allport's (1954) contact hypothesis). Mahieu and Van Caudenberg (2020) show that befriending enabled instrumental social support, crucially by lowering the threshold for refugees to ask for help and providing opportunities for informal learning in situations of daily living. The simple notion of just 'having someone around' offered them a welcome distraction from their past and present challenges. In Utrecht, asylum seekers similarly valued the opportunity to just be around other Dutch people, with tenants offering a glimpse into the realities of a regular life in Utrecht. Zill et al. (2020) show that the space of asylum accommodation is physically and socially produced, created not only through media representations but also through direct experience of asylum seeker spaces. Nevertheless, all the contributions show that developing intercultural encounters and shifting imagined representations of this group is not easy, and the assumptions and expectations embedded within such innovations should be carefully considered. In Antwerp, contact was developed in a top-down manner, with professionals selecting and matching buddies to work on a one-to-one basis with refugees in gender-mixed pairs.
Mahieu and Van Caudenberg (2020) show that the expectations of asylum seekers and buddies sometimes differed and that intimate co-housing arrangements sometimes created misunderstandings along cultural and gendered dividing lines. In Utrecht, social contact fluctuated over the course of the project and encounters varied in ease and intensity, at times becoming awkward and difficult. Oliver et al. (2020) show that it was influenced heavily by conditions and contexts outside the project. Contact worked best when numbers were fewer, ratios were equal (around 40:40), people shared similar characteristics (of age and education) and there was enough time for relationships to develop. However, these conditions were not easily met within the constraints of national asylum management, under which people were moved regularly, often far from the city, directly contravening local policy. Similarly, in Augsburg, we see that despite being able to 'walk in', local people's reactions were still heavily influenced by external, mediatized constructions of asylum seekers as either criminals or victims. Being close had only a limited effect on attitudes (Zill et al. 2020). Developing intercultural encounters and shifting imagined representations were also affected by spatial arrangements. In all three cities, new reception facilities were quickly adapted to meet housing needs: in Antwerp, the collective housing was provided in a range of apartments and houses, and at a site built for the project that included 16 two-bedroom modular units; in Utrecht, the site was a refurbished office building; in Augsburg, the Grandhotel was a renovated former home for the elderly. All accounts demonstrated that the physical locations and material conditions of asylum reception matter. In Antwerp, the opportunity to live together provided opportunities for pairs to exchange small gestures of help and to share household items or lend furniture. However, these could be loaded exchanges, due to material inequalities between the groups. In Utrecht, tenants and asylum seekers lived close by, but separately, and notions of co-housing disguised the multiple inequalities within. Environmental factors, from large-scale delays in the asylum seeker centre being ready for habitation to very small physical changes, like the locking of a shared entrance, had major repercussions on contact and atmosphere. In Augsburg, the Grandhotel was developed particularly with an eye to the affectual element of familiarization, where passers-by were encouraged to just walk into a physically open, semi-public space. Nevertheless, the contribution shows that entering an asylum centre was still a difficult threshold for some nearby residents to overcome (see also Oliver et al. 2019 on neighbourhood reactions to the new centre).

Concluding discussion

Using six empirical studies of emerging multi-sector alliances and reconfigurations of multi-level governance, the special issue invites reflection on the extent to which innovation is possible in asylum seeker and refugee reception in European cities. Innovation requires the creation, adoption and spread of new ideas, and the instigation of qualitative change (Sørensen and Torfing 2011). What do these examples tell us about the scope for incremental (or indeed more radical) change in the field of asylum seeker reception? The collection exemplifies how, at the local level, a strong desire to develop more adequate approaches to asylum seeker and refugee reception can be frustrated.
The policy space available for cities to innovate in this arena remains limited, given constraints of capacity, (legal) competence and funding. There are risks too, both perceived and actual, in striking out beyond national government policies in a highly charged policy area. However, there are also risks in not acting, and cities have emerged as a frontline in identifying new approaches to reception. These approaches speak to wider city issues of inclusion, cohesion and place-making and place-shaping, and move beyond narrower framings of reception that pertain only to national centralised systems for asylum reception based primarily on legal and governmental frameworks. Key in shifting some of these dynamics are multi-sector alliances between local governments, NGOs and a range of non-state actors, as well as city networks. These facilitate conditions for innovation, both practically (through funding opportunities and the sharing of best practice between cities) and conceptually, by providing cities with the opportunity to define their own policy framing and rationale for action. Change might be rather incremental and indirect, for example through improving conditions that might lead to further opportunities to innovate, as shown in the case study of Barcelona (Garcés-Mascareñas and Gebhardt 2020). Yet as Broadhead (2020) shows, shaping new policy frames is still an important move in creating better conditions and policy space for further action. Crucially, however, some of the contributions also show that challenges to the status quo are not easily or cooperatively achieved in this field. Conflict might more readily describe interactions in both multi-level and multi-sector arrangements as new arrangements are struck and competencies fought over (Campomori and Ambrosini 2020; Garcés-Mascareñas and Gebhardt 2020). The contributions on the practical experiments by Mahieu and Van Caudenberg (2020), Oliver et al. (2020) and Zill et al. (2020) show that innovations rely on building local alliances and a 'turn to the local'. Not only is this achieved through bringing together coalitions of actors with their own particular expertise, but also by engaging communities in the sites and localities where asylum seekers are placed. Zill et al. (2020) point out the risk that contact initiatives are presented as some distanced 'utopia', whereas all the examples show that transforming relations at the local level is a complex and concrete process. Despite the attractiveness of the idea, bringing asylum seekers together with locals does not automatically lead to harmony and disruptions to categories of 'us' and 'them'. The contributions show that careful attention needs to be paid to the dynamics and in-built inequalities between groups, the influence of external conditions (including the effects of national asylum regimes) and the effects of physical environments. The scholarship on these innovations points to the value and importance of learning from such initiatives. Yet the studies also implicitly indicate difficulties in ensuring traction and capitalising on the lessons learned. This may be because such initiatives are funded only temporarily, and even those that endure, such as the Grandhotel Cosmopolis, are still at risk through their representation in national media as 'special' and unique; as Zill et al. (2020, this issue) argue, these utopian framings dilute their potential for large-scale and serious critique of national approaches.
We end, therefore, with a call for a continued critical focus of research in charting the potential within this rapidly shifting territory. Methods to assess the outcomes of such innovations include evaluative approaches which recognise the complexity of attempting change in this field, and which adequately acknowledge, as a key part of the story, the wide-ranging influences and contexts that inform such interventions. Sharing learning from innovations in governance and practice is vital, showing that alternatives to more typical national approaches are possible. We simultaneously urge researchers to remain critical. Innovation in asylum seeker reception seems a worthy and important pursuit, but we must avoid overly celebratory and descriptive accounts in which innovation is unilaterally regarded as leading to better experiences and outcomes for refugees and asylum seekers. A focus on innovation cannot assume that the local level is somehow more progressive, and its innovations automatically good (Agustín and Jørgensen 2019). This special issue instead seeks to offer a thorough and critical exploration of how alternative policies and practices of asylum seeker reception have emerged at the local level in Europe, and how they are experienced by those whom they engage.
Evaluating our ability to predict the structural disruption of RNA by SNPs

The structure of RiboNucleic Acid (RNA) has the potential to be altered by a Single Nucleotide Polymorphism (SNP). Disease-associated SNPs mapping to non-coding regions of the genome that are transcribed into RNA can potentially affect cellular regulation (and cause disease) by altering the structure of the transcript. We performed a large-scale meta-analysis of Selective 2'-Hydroxyl Acylation analyzed by Primer Extension (SHAPE) data, which probes the structure of RNA. We found that several single point mutations exist that significantly disrupt RNA secondary structure in the five transcripts we analyzed. Thus, every RNA that is transcribed has the potential to be a "RiboSNitch," where a SNP causes a large conformational change that alters regulatory function. Predicting the SNPs that will have the largest effect on RNA structure remains a contemporary computational challenge. We therefore benchmarked the most popular RNA structure prediction algorithms for their ability to identify mutations that maximally affect structure. We also evaluated metrics for rank ordering the extent of the structural change. Although no single algorithm/metric combination dramatically outperformed the others, small differences in AUC (Area Under the Curve) values reveal that certain approaches do provide better agreement with experiment. The experimental data we analyzed nonetheless show that multiple single point mutations exist in all RNA transcripts that significantly disrupt structure, in agreement with the predictions.

Background

RNA (Ribonucleic Acid) is a ubiquitous messenger of genetic information in the cell and plays a central role in the regulation of molecular processes [1-5]. Unlike DNA, RNA is generally single stranded and has a high propensity to fold into functionally important structures [6-10]. These structures can be significantly disrupted by mutations, including Single Nucleotide Polymorphisms (SNPs) [11,12]. Genome-Wide Association Studies (GWAS) regularly identify disease-associated SNPs in non-coding regions of the genome. Disease-associated SNPs do not necessarily directly reveal the molecular cause of the disease and require further analysis [11,13-15]. A majority of the genome is transcribed into RNA [16,17]; as a result, a majority of genetic mutations will also be transferred to the transcriptome. From a structural perspective, we distinguish two broad classes of RNA: highly structured RNAs (e.g. the ribosome, tRNAs, self-splicing introns, RNase P) and RNAs that potentially adopt multiple conformations (e.g. mRNAs and non-coding RNAs) [3,4,18]. Structured RNAs are under significant evolutionary pressure to adopt a single, functional conformation [19]. However, mRNAs and non-coding RNAs have not necessarily evolved to adopt a single conformation but rather adopt an ensemble of conformations [20-23]. We have recently found specific disease-associated mutations that alter the ensemble partitioning of an mRNA, affecting gene regulation and thus causing disease [24]. Thus, structure is likely an important functional feature even in RNAs traditionally thought of as "unstructured." Algorithms to evaluate the structural and functional consequences of mutations on proteins (e.g. PolyPhen and SIFT) are commonly used to assess the potential deleterious effects of mutations [25-27].
In addition, several groups are actively developing web servers to compute the potential deleterious effects of SNPs on RNA structure and function [28,29]. The structural basis for deleterious mutations to a structured protein is rationalized through an understanding of protein folding. For example, replacing a hydrophobic residue in the hydrophobic core of a protein with a polar or charged amino acid will likely cause the protein to misfold [26,27]. In RNA, however, the physico-chemical properties of the four nucleotides are not as diverse as those of the amino acids. Furthermore, RNA does not fold through the formation of a hydrophobic core [4]. Instead, the structure is a complex network of base-pairing and stacking interactions [3,8]. To observe a large conformational change in an RNA, the mutation must not only disrupt an existing base-pair but also favor a completely alternative base-pairing network. The functional consequences of structure disruption depend on whether the affected region is involved in important regulatory interactions. In certain cases, small local changes in the RNA structure may have functional consequences [15,30]. In this manuscript we are interested in identifying the mutations that globally affect RNA structure and are thus likely to have significant functional consequences. We initially interrogate high-throughput SHAPE chemical mapping of multiple non-coding RNAs and associated single point mutations [31,32]. We aim to determine whether single point mutations, as in proteins, can significantly alter the structure of the RNA. We then evaluate the performance of multiple RNA structure prediction algorithms to determine the optimal strategy for identifying the mutations that disrupt RNA structure. As GWAS continue to focus more on non-coding regions of the genome, it will become increasingly important to have accurate algorithms for assessing the potential deleterious consequences of SNPs on the transcriptome.

Results and discussion

Single mutations disrupt RNA structure

To better understand the potential effects of SNPs on a large RNA, we consider the Boltzmann-sampled suboptimal ensemble of the Vibrio vulnificus Adenine Riboswitch (Figure 1A) [33,34]. Projecting these structures onto the first two principal components of their structural space, as described previously [24], reveals four major clusters (Figure 1A). The Adenine Riboswitch is so named because the aptamer domain (highlighted in light magenta in Figure 1A) binds Adenine. It is one of the few Riboswitches that activates gene expression upon ligand binding [35-37]. The "on" and "off" conformations of the Riboswitch are present in the Boltzmann ensemble of the WT sequence (Figure 1A, green and magenta clusters, respectively). This is consistent with recent models that suggest that Adenine riboswitching is kinetically controlled at the transcriptional level [35]. Moreover, two other conformations (cyan and red clusters, Figure 1A) are not highly populated in the WT ensemble. If we repeat the Boltzmann sampling procedure for a sequence containing the C77G mutation (Figure 1B), we see a drastic shift in the ensemble favoring the cyan and red conformations. A majority of mutations, however, are like the U39A mutation and have very little effect on the suboptimal ensemble (Figure 1C).
To experimentally validate the predictions made by suboptimal sampling in Figures 1A-C, we queried the SNRNASM (Single Nucleotide Resolution Nucleic Acid Structure Mapping) archive as well as the RNA Mapping Database (RMDB, http://rmdb.stanford.edu) for chemical mapping data of the Adenine Riboswitch [38]. We found SHAPE chemical mapping data for the WT, C77G and U39A transcripts under identical solution conditions (10 mM MgCl2 and 100 mM KCl). These data provide single nucleotide resolution measurements of base-pairing in the Riboswitch [39]. A high normalized SHAPE reactivity indicates high flexibility and thus a low probability of base-pairing, while low reactivity indicates a high likelihood of base-pairing [40,41]. The data in Figure 1D therefore experimentally validate the predictions made in Figures 1A-C. We see that the C77G trace (red) is significantly different from the black (WT) and blue (U39A) traces, consistent with a large shift in the predominant structures in the ensemble. The significant increase in SHAPE reactivity in residues 32-43 and 62-68 is consistent with the hairpin structure represented by the cyan cluster. We compute the experimental Structure Disruption Coefficient (eSDC) to evaluate the effect of a SNP on the RNA structure as described in the Methods (Equation 1). This value measures the disruptive effect of a SNP on an RNA: the higher it is, the greater the structural disruption. In this case it is 2.0 for C77G and 0.1 for U39A. Furthermore, we can use the multiple repeats of the experiments to evaluate the statistical significance (p-value) of these eSDC values, i.e. the probability that we would obtain the value due to noise in the data. For C77G, the difference is statistically significant (p-value < 0.001), while for U39A it is not (p-value > 0.5).

Systematic eSDC analysis of five non-coding RNAs

The SNRNASM and RMDB databases contain 470 SHAPE data sets of RNA sequences with single and/or double point mutations relative to WT RNA for five non-coding RNAs under similar monovalent and divalent salt concentrations. We therefore computed eSDC values for these 470 mutations and summarize the results in Figure 2A. In all cases we computed eSDC values relative to the WT sequence to identify single or double mutations that significantly disrupt RNA structure. The results of our analyses are plotted in Figure 2A and reveal that in all cases certain mutations (e.g. U22G.A196G in FTL, U113A in the Glycine Riboswitch) significantly disrupt RNA structure. However, a majority of mutations (e.g. U39A and U32A in the Adenine and Glycine Riboswitches) have very small effects on structure. We plotted representative SHAPE data for structurally disruptive (red) and non-disrupting mutations for the Glycine Riboswitch and P4P6 intron in Figures 2B and C, respectively. To evaluate the significance of the structural disruption, we computed the "within" distribution for multiple repeats (6-fold) of the FTL UTR RNA SHAPE data and plot the resulting histogram to the right of Figure 2A. This allows us to determine the expected eSDC values due to the noise in the experimental data, and to evaluate the p-value for any given eSDC. Clearly, single point mutations exist that significantly disrupt RNA structure; however, a majority of mutations result in no measurable effect. The FTL UTR data set is particularly interesting, as this RNA is a "RiboSNitch," i.e. an RNA in which specific SNPs can alter structure and cause disease [24,42].

Figure 1 Structural analysis of the Adenine Riboswitch, a bacterial regulatory RNA that binds Adenine and controls gene expression [35,37]. The RNA adopts two major conformations: the "on" state (Adenine bound) forms three stem loops (P1, P2 and P3), while in the "off" state the site of translation initiation (3' end of the UTR, near the start codon) is structured, effectively disrupting translation initiation. A.) Boltzmann suboptimal sampling of the ensemble of possible RNA conformations (as predicted by sFold) projected onto the first two principal components of structure space, as determined by a Manhattan distance metric evaluation of the ensemble. Each dot in the diagram is one alternative structure. Representative structures adorn the diagram, and the aptamer domain of the Riboswitch is highlighted in light magenta. The Riboswitch is predicted to adopt four structures, characterized by green, purple, cyan and red dots. The "on" and "off" states of the Riboswitch correspond to the green and magenta clusters, respectively. B.) Boltzmann sampling of the structural ensemble for the C77G-containing sequence, which indicates a significant shift in partitioning towards the cyan and red conformations. C.) Boltzmann sampling for the U39A mutation, which is predicted to have no effect on the partitioning compared to WT. D.) Experimental validation, using SHAPE chemistry, of the predictions made in A-C, showing that the C77G mutation disrupts the structure of the RNA in a manner consistent with an increase in the population of the cyan cluster.

Figure 2 Comprehensive analysis of mutation-induced structure disruption in five non-coding RNAs. A.) eSDC (experimental Structure Disruption Coefficient) for 470 single or double mutants relative to the RNA's WT sequence. The eSDC is computed as one minus the Pearson correlation coefficient of the SHAPE profile (mutant to WT), multiplied by the square root of the length of the RNA. We see that most mutations have small eSDC values, indicating that they do not significantly disrupt structure. The five RNAs studied are the human FTL 5' UTR (FTL), the V. vulnificus Adenine Riboswitch (Adenine RS), the V. cholerae Glycine Riboswitch (Glycine RS, with and without Glycine (G) bound), the cyclic di-GMP Riboswitch (bis-(3'-5')-cyclic dimeric guanosine monophosphate Riboswitch, with and without cyclic-di-guanosine-monophosphate (CDM)) and the P4P6 domain of the L-21 Tetrahymena thermophila group I intron [5,34,35,63]. All data were collected under near-physiological solution conditions, i.e. 10 mM MgCl2 and 100 mM monovalent salt. For FTL, hyperferritinemia-associated mutations are indicated in magenta. The eSDC values for ± ligand for the three Riboswitches are indicated with a green horizontal line and represent a "biological" threshold above which a structure change is likely to have a functional consequence. The histogram to the right represents a pairwise "within" eSDC calculation for 6-fold repeats of the SHAPE experiments on the FTL UTR RNA, used to evaluate the reproducibility and significance (p-value) of eSDC values. B.) SHAPE profiles for the WT, U32A and U113A (black, blue, and red, respectively) Glycine Riboswitch in the presence of Glycine, showing that the U113A mutation very significantly disrupts structure. C.) SHAPE profiles for WT, C128G and C65G (black, blue, and red, respectively) P4P6 group I intron transcripts, showing that C65G globally affects the structure of the RNA.
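To make the metric concrete, here is a minimal sketch of the eSDC calculation just described (and detailed in the Methods: reactivities capped at one, Pearson correlation, √n scaling). The function and variable names are our own illustrative choices, not the authors' code:

```python
# Minimal eSDC sketch; inputs are equal-length lists of normalized
# SHAPE reactivities for the WT and mutant transcripts.
import numpy as np

def esdc(shape_wt, shape_mut):
    """eSDC = (1 - pCC) * sqrt(n), where pCC is the Pearson correlation
    between the two (capped) SHAPE profiles and n is the RNA length."""
    x = np.minimum(np.asarray(shape_wt, dtype=float), 1.0)  # cap at 1
    y = np.minimum(np.asarray(shape_mut, dtype=float), 1.0)
    pcc = np.corrcoef(x, y)[0, 1]
    return (1.0 - pcc) * np.sqrt(len(x))
```

On this scale, completely uncorrelated profiles for a 100-nt RNA would give an eSDC of about 10, while identical profiles give 0; the C77G and U39A values of 2.0 and 0.1 quoted above sit between these extremes.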
In this case, FTL is associated with Hyperferritinemia Cataract Syndrome, a rare genetic disorder that is characterized by early onset cataracts due to excess ferritin in the lens [43,44]. We indicate the disease-associated SNPs as magenta text in Figure 2A. All the disease-associated SNPs alter the structure of the RNA significantly (p-value < 0.001). Three of the RNAs tested in Figure 2A are Riboswitches and undergo a conformational change if ligand is present. We can therefore compute an eSDC value for SHAPE traces in the presence and absence of ligand. We indicate these eSDC values with a green horizontal line in Figure 2A. This result is important because the structural change caused by ligand binding to a Riboswitch is sufficient to regulate gene expression [37,45,46]. Thus the Riboswitch ligand eSDC value (green line, Figure 2A) represents a "biological" threshold above which a structure change is likely to affect function. A particularly important result of this analysis is the identification of multiple SNPs with much larger eSDC values compared to ligand binding in the Riboswitches. Thus, it is likely that a majority of these SNPs will have important functional consequences.

Performance of RNA structure prediction algorithms for RiboSNitch detection

We chose to benchmark the four software packages illustrated in Figure 3 [23,47-49], as they each have various options to evaluate the ensemble of suboptimal structures. The precise UNIX commands we used to generate the predictions are also indicated in Figure 3. It should also be noted that all of these programs are designed to predict the best secondary structure and, with the exception of RNAmutants, are not necessarily optimized for identifying the mutation that most disrupts RNA structure. We aim to use RNA structure prediction programs to predict the eSDC values determined from the SHAPE data (Figure 4A). Figure 4 illustrates the four metrics applied to the ensemble of structures from each algorithm and used to generate pSDC values (predicted Structure Disruption Coefficients, Equation 4, Methods). This metric is analogous to the eSDC, as it allows us to rank order SNPs according to their predicted disruption of RNA structure. All structure prediction programs we tested can compute a Minimum Free Energy (MFE) structure. We represent this as a vector of ones and zeroes, and compute the correlation coefficient between the WT and mutant structures (Figure 4B); a sketch of this binary-vector metric follows below. Many structure prediction algorithms can also compute the probability of base-pairing (which is more analogous to SHAPE reactivity) by summing the rows or columns of the predicted partition function matrix (Figure 4C) [48,50]. We computed the Z Centroid (Figure 4D) of the partition function as well [51]. Finally, for the algorithms that sample suboptimal structures, we can cluster the resulting ensemble and determine the centroid structure of the most populated cluster (Figure 4E) [23,51]. We found that, in general, pSDC values are larger than eSDC values. We are most interested in the different algorithms' (Figure 3) and metrics' (Figure 4) ability to rank and identify the mutations that maximally disrupt structure. To evaluate each algorithm's performance, we generated Receiver Operator Characteristic (ROC) curves based on the ranking of the 470 mutant RNAs' eSDC values (Figure 2A) compared with those ranked by pSDC.
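By way of illustration, a sketch of the binary-vector MFE metric (Figure 4B) is given here, assuming predicted structures are available in dot-bracket notation; the unpaired-equals-one encoding mirrors the SHAPE convention, and the names and toy structures are our assumptions rather than the authors' code:

```python
# Hypothetical pSDC sketch for the MFE binary-vector metric (Figure 4B).
import numpy as np

def psdc_from_mfe(wt_db, mut_db):
    """pSDC = (1 - predCC) * sqrt(n); each MFE structure is encoded as a
    0/1 vector with 1 = unpaired ('.') and 0 = paired ('(' or ')')."""
    wt = np.array([1.0 if c == "." else 0.0 for c in wt_db])
    mut = np.array([1.0 if c == "." else 0.0 for c in mut_db])
    pred_cc = np.corrcoef(wt, mut)[0, 1]
    return (1.0 - pred_cc) * np.sqrt(len(wt))

# Toy 20-nt example (illustrative structures only):
print(psdc_from_mfe("((((....))))........", "((((....))))((....))"))
```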
Figure 5A plots three representative ROC curves and illustrates that algorithm/SDC metric combinations vary in their predictive performance. The AUC (Area Under the Curve) values reported in Figure 5B suggest that the highest performing algorithm is RNAsubopt using a Z centroid metric (AUC 0.64). The "partition function" for RNAsubopt was obtained by computing the pair probabilities for the first 10,000 suboptimal structures. The AUC values reported in Figure 5B reveal that most algorithm/metric combinations perform similarly and are within the standard error of 0.03 when the experimental data are bootstrapped. eSDC values and SHAPE data for all mutants analyzed are provided as tables in the additional files. Additional Files 1-8 correspond to FTL 199, FTL 226, Adenine RS, Glycine RS NoGlyc, Glycine RS wGlyc, GMP RS wCDM, GMP RS NoCDM, and P4P6, respectively.

Conclusions

RNA is a ubiquitous regulatory molecule in the cell and there is growing evidence that structure is a central component of its function [52,53]. The Riboswitches studied in this manuscript are one of many examples where RNA structure change regulates bacterial metabolism [46,54,55]. In the case of the FTL 5' UTR, disease-associated SNPs disrupt structure and deregulate ferritin levels in the eye, resulting in early onset cataracts [24]. The T. thermophila group I intron (P4P6) must fold into its correct three-dimensional structure to self-catalyze its splicing reaction [8,56]. In these examples, structure change is central to the RNA's function in the cell. The data we present in Figures 1 and 2 reveal the extent to which a single point mutation can disrupt RNA structure. Our systematic analysis of 470 mutations on five RNAs reveals that large-scale SNP-induced structure change is common in RNA and can potentially contribute to disease [24]. Interestingly, all RNA secondary structure prediction algorithms predict that a small subset of mutations will have a large effect on secondary structure. The data we present in Figures 1 and 2 cover a relatively comprehensive set of mutations in each RNA, but are nonetheless limited to five functional molecules. As such, establishing the generalizability of these results will require the analysis of larger experimental data sets as they become available [38]. The mechanism for this change is best illustrated in Figure 1, where we see how a single mutation (in this case C77G) can completely alter the thermodynamic folding landscape of the RNA, favoring an alternative conformation. The data we present in Figure 2 suggest that the thermodynamic models used to predict RNA structure are sound, as we find mutations experimentally in all RNAs studied that disrupt structure. All RNA structure prediction algorithms predict that certain mutations will significantly disrupt structure. In addition, a recent study of common SNPs in the human genome revealed that these affect local RNA structure [57]. An important result in our analysis of the Riboswitch SHAPE data is the comparison of the eSDC values for mutations relative to ligand-induced conformational change (see green lines, Figure 2A).

Figure 3 Schematic representation of the four software packages we benchmarked for their ability to predict which mutations in an RNA affect structure most significantly. We chose these packages as they all perform some form of sub-optimal sampling, illustrated with "cartoon" energy landscapes. We also include the precise UNIX commands used to make the predictions.
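The exact commands appear only in the authors' Figure 3, which this text does not reproduce. Purely as an illustration of the kind of invocation involved (our assumption, not the authors' verified command line), ViennaRNA's RNAsubopt can draw a stochastic, Boltzmann-weighted sample of suboptimal structures, here wrapped in Python:

```python
# Hedged sketch: wraps ViennaRNA's RNAsubopt stochastic backtracking
# (`-p N`) to sample N structures from the Boltzmann ensemble.
# Requires the ViennaRNA package to be installed and on PATH.
import subprocess

def sample_structures(sequence, n_samples=10000):
    result = subprocess.run(
        ["RNAsubopt", "-p", str(n_samples)],
        input=sequence + "\n",
        capture_output=True, text=True, check=True,
    )
    lines = result.stdout.strip().splitlines()
    # The first output line echoes the sequence; the rest are sampled
    # dot-bracket structures, one per line.
    return [ln.split()[0] for ln in lines[1:]]
```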
For all three Riboswitches, multiple mutations exist that result in far larger structural changes (as measured by our SDC metric) than ligand binding. This is highly relevant, as ligand-binding-induced structure change can completely turn on (or off) gene expression translationally and/or transcriptionally [45]. Thus the mutations above the green lines in Figure 2A have even greater potential to regulate cellular function. This means any functional RNA has the potential to be a "RiboSNitch," as mutations exist that can significantly disrupt its structure.

Figure 4 Schematic representation of metrics used to compute pSDC (predicted Structure Disruption Coefficients) based on RNA structure predictions for WT (black) and mutant (magenta). The data here are for the WT and the hyperferritinemia cataract syndrome-associated U22G mutant of the FTL 5' UTR. A.) SHAPE experimental data for the WT and U22G mutant UTRs revealing a significant effect of the U22G mutation on the structure of the RNA. An eSDC value of 2.3 is computed for these data. B.) sFold Minimum Free Energy (MFE) probability of base-pairing for the WT (black) and U22G-containing (magenta) sequences, where one corresponds to not base-paired and zero to paired. We see that the program correctly predicts changes in the 40-60 range as measured by SHAPE. C.) Probability of base-pairing computed as the sum of the rows or columns of the partition function [64]. In this case the partition function is computed using sFold Boltzmann suboptimal sampling and computing the observed frequency of base-pairing [51]. D.) Z Centroid simplification of the partition function, with the probability of pairing computed by summing the rows or columns [51]. E.) Probability of pairing assessed as the cluster centroid structure of the most populated cluster of suboptimal structures, in this case using sFold and k-means clustering as previously described [51].

The data we present in Figure 2 are ideal for benchmarking RNA structure prediction algorithms. The analysis we carried out in this manuscript is different from previous secondary structure prediction benchmarks, because we are specifically interested in identifying mutations that globally disrupt a given secondary structure. We developed metrics based on RNA secondary structure prediction algorithms analogous to our eSDC calculations. We can use such an analogy, since SHAPE data correlate with base-pair probability. The SDC metrics are purposefully global, and we did not evaluate algorithms for their ability to predict the specific local changes in structure, but rather whether they predict that a specific mutation will disrupt structure relative to others. Our reasoning for this approach is that, for the analysis of disease-associated SNPs, we are most interested in identifying the most structurally deleterious mutations. Although RNA structure prediction algorithms correctly predicted that all RNAs are disrupted by certain mutations, it is clear that predicting exactly which mutation will alter structure remains very challenging. Although there is some variation in the relative performance of the different algorithm and metric combinations we tested, the AUC values reported in Figure 5B remain relatively low. This result is not necessarily surprising, as none of the RNA structure prediction algorithms (other than RNAmutants) have been optimized to predict which mutations disrupt structure.
In fact, an algorithm's sensitivity to point mutations is often viewed as a weakness, favoring methods that are less sensitive to mutation. However, the experimental data clearly show that SNPs can profoundly change an RNA's folding landscape. The attempts to constantly refine algorithms so as to have them always converge on a single "correct" RNA structure may not improve their ability to identify RiboSNitches. Although only anecdotal, mFold's good performance in our benchmark (AUC 0.62, Figure 5B) may indicate that simpler energy functions, which tend to predict more alternative structures, may ultimately perform better for identifying RiboSNitches. Indeed, RNAStructure's relatively low performance in our benchmark is surprising, since it has the most sophisticated and accurate energy function and is most accurate in structure prediction [48,50]. Improvements in our ability to predict RiboSNitches will likely require a better understanding of the suboptimal ensemble and how mutations affect it, in addition to improved energy functions. With the growing number of sequencing efforts revealing ever more single nucleotide variants in the non-coding regions of the genome, accurate algorithms predicting the structural consequences of these mutations are likely to play an important role in genomic interpretation.

Data collection and analysis

The SHAPE chemical data used in our analysis were downloaded in ISATAB format from the SNRNASM (Single Nucleotide Resolution Nucleic Acid Structure Mapping) and RMDB web sites (http://snrnasm.bio.unc.edu and http://rmdb.stanford.edu). The SNRNASM standard was developed to share the results of high-resolution, high-throughput nucleic acid structure mapping data [58]. We identified RNAs that were probed using SHAPE chemical mapping under standard conditions (10 mM MgCl2 and 100 mM NaCl), and where significant mutational information was available. Only RNAs that were at most two SNPs (or mutations) away from a reference (WT) sequence were considered. The data were normalized as previously described [59] and, for the two Riboswitch and P4P6 data sets, manually re-aligned to correct for frameshift errors arising from the automated analysis of the data using the HiTRACE software [42]. eSDC values were computed as described by Equation 1:

eSDC = (1 - pCC) × √n    (1)

where pCC is the WT/mutant Pearson correlation coefficient and n is the length of the RNA. The eSDC quantitatively evaluates the effect of a mutation on RNA structure. Prior to the calculation of the eSDC, normalized SHAPE values were capped at one in order to increase the metric's ability to reflect changes in structure identified by differences in the peaking pattern rather than minor differences in peak intensity. Significance testing for structure disruption was adjusted using a Bonferroni correction.

PCA analysis of the ensemble of structures and clustering

Principal components were calculated (as described previously) from a total of 10,000 sampled structures generated equally from a WT sequence and mutants of interest [24]. The principal components were generated from the binary representation of these 10,000 structures. These structures were then projected onto the first two principal components and subjected to the k-means clustering algorithm to reveal distinct clusters [60]. The centroid structure of each cluster was identified from the k-means clustering algorithm and then drawn using R2R [61]. Individual mutant structures were then generated (as discussed in Fig. 3) and projected onto the first two principal components.
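The projection-and-clustering step lends itself to a compact sketch. The following assumes each sampled structure has already been encoded as a binary vector (e.g. the flattened upper triangle of its base-pair matrix); the library choices and the cluster count of four are our illustrative assumptions, not the authors' pipeline:

```python
# Minimal PCA + k-means sketch for the structural-ensemble analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def project_and_cluster(binary_structures, n_clusters=4):
    """binary_structures: (n_structures, n_features) 0/1 array.
    Returns 2-D PCA coordinates and a cluster label per structure."""
    X = np.asarray(binary_structures, dtype=float)
    coords = PCA(n_components=2).fit_transform(X)   # first two PCs
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
    return coords, labels
```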
Each structure projection is colored according to its cluster.

Computation of the partition functions from sampled structures and calculation of the Z centroid

Partition functions were generated for each ensemble of structures. Each structure is first transformed to matrix form as described in [51]. This is accomplished by creating an N×N matrix, where N is the length of the sequence, and placing a 1 at positions i,j and j,i if nucleotides i and j are paired and a 0 if they are not paired. When all the matrices representing the structures are summed together and then divided by the total number of structures, the resulting matrix is the partition matrix. This matrix contains the probability of nucleotide i being paired to j. The Z centroid is defined as the structure containing every base pair whose probability of pairing is greater than 50%.

ROC analysis of prediction performance

Each of the program/metric combinations was evaluated using a Receiver Operator Characteristic (ROC) analysis [62]. The ROC analysis was carried out by calculating the true positive rate (i.e. sensitivity):

TPR = TP / (TP + FN)    (2)

and the false positive rate (i.e. 1 - specificity):

FPR = FP / (FP + TN)    (3)

Analogously to the eSDC calculation, we compute a pSDC (predicted Structure Disruption Coefficient) by computing the Pearson correlation coefficient (predCC) between WT and mutant for each RNA structure prediction algorithm:

pSDC = (1 - predCC) × √n    (4)

This value is analogous to the eSDC in that it allows us to rank order the disruptive effect of mutations on RNA. To determine ROC values, the mutations were listed from highest to lowest according to their eSDC value. The top 50% of eSDC values were considered to disrupt the structure, while the lowest 50% preserved structure. A second list was generated using the same mutants but using the pSDC values instead. A true positive was defined as having a pSDC value above a cutoff and experimentally disrupting the structure, while a true negative was defined as having a pSDC below a cutoff and experimentally preserving the structure. A false positive or false negative is recorded when the predictions conflict with the experimental results. The pSDC cutoff was defined by stepping through the pSDC ranks. The resulting true positive rates and false positive rates were then used to generate an ROC curve. The area under the curve was calculated for each ROC using the trapezoidal method. This process was bootstrapped for each program/metric 5000 times using 20 randomly selected mutants from each set. Because each of the RNA data sets has a differing number of mutants, the bootstrapping was done by sampling 20 mutants from each of the data sets besides FTL, in order to correct for any bias that might arise from one program/metric favoring one data set over another. This results in the ROC being run on 145 mutants at a time, not the full 470. The average area under the curve was calculated, with the standard deviation between runs providing the error. The closer the area under the curve is to one, the better the predictive power of a given program/metric. Precise WT sequences, corresponding mutations (SNPs), eSDC values and normalized SHAPE data are provided as separate Excel spreadsheets in the additional files. These data should facilitate further benchmarking efforts for novel algorithms to predict RNA structure change.
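The partition-matrix and Z-centroid bookkeeping described above is small enough to sketch directly. Here each sampled structure is assumed to be given as a list of 0-indexed (i, j) base pairs; the names are ours, not the authors' code:

```python
# Minimal sketch of the partition matrix and Z centroid.
import numpy as np

def partition_matrix(structures, n):
    """Average the binary N x N base-pair matrices of the sampled
    structures to estimate the pairing probability of every (i, j)."""
    P = np.zeros((n, n))
    for pairs in structures:
        for i, j in pairs:
            P[i, j] += 1.0
            P[j, i] += 1.0
    return P / len(structures)

def z_centroid(P):
    """Keep only the base pairs whose estimated pairing probability
    exceeds 50%, as in the definition above."""
    i_idx, j_idx = np.where(np.triu(P) > 0.5)
    return list(zip(i_idx.tolist(), j_idx.tolist()))
```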
TIPE2 May Target the Nrf2/HO-1 Pathway to Inhibit M1 Macrophage–Related Neutrophilic Inflammation in Asthma

Purpose Although recent studies have highlighted the link between TIPE2 and asthma airway inflammation, its roles and molecular mechanisms in different asthma inflammatory phenotypes remain largely unknown. We evaluated the sputum TIPE2 expression level and its correlation with different asthma phenotypes. Additionally, we explored the roles and mechanism of TIPE2 in M1 polarization of macrophages.

Methods A total of 102 asthma patients who underwent sputum induction were enrolled to evaluate the expression level of TIPE2 and its association with different asthma phenotypes. To explore the roles and mechanism of TIPE2 in M1 polarization of macrophages, THP-1 monocytes stimulated with phorbol-12-myristate-13-acetate were used as a model of undifferentiated (M0) macrophages, and M0 macrophages were treated with lipopolysaccharide to induce M1 macrophages.

Results The sputum TIPE2 level was significantly lower in patients with neutrophilic asthma (NA) and higher in patients with eosinophilic asthma (EA) compared with patients with paucigranulocytic asthma. The levels of IL-1β, TNF-α and IL-6 were highest in NA compared with the other groups. TIPE2 levels in sputum negatively correlated with IL-1β and TNF-α levels but positively correlated with IL-4, IL-5, IL-13, and IL-10 levels (P < 0.05). In vitro, TIPE2 enhanced Nrf2/HO-1 pathway activation in macrophages and inhibited LPS-induced M1 macrophage differentiation and related cytokine release. Further analysis showed that the Nrf2 inhibitor ML385 weakened TIPE2-induced activation of the Nrf2/HO-1 pathway, as well as TIPE2-induced suppression of M1 polarization of macrophages and inflammatory cytokine secretion.

Conclusions TIPE2 expression was highly down-regulated in NA and was negatively correlated with inflammatory factors (IL-1β and TNF-α). Aberrant expression of TIPE2 may target the Nrf2/HO-1 pathway to inhibit M1 macrophage–related neutrophilic inflammation in asthma.

INTRODUCTION

Asthma is a complex and heterogeneous disease characterized by chronic airway inflammation, airway hyperresponsiveness (AHR), and reversible airflow limitation (1). The inflammatory response in asthma is driven by recruitment of Th2/Th1 lymphocytes, neutrophils, eosinophils, and macrophages to the lung and is associated with M1/M2 polarization of macrophages (2,3). Asthma is divided into four major inflammatory phenotypes based on the percentages of eosinophils and neutrophils in induced sputum of patients: eosinophilic asthma (EA), neutrophilic asthma (NA), mixed granulocytic asthma (MA), and paucigranulocytic asthma (PA) (4-6). Eosinophilic inflammation is the most common type of asthmatic airway inflammation (7). The main molecular mechanism of airway inflammation in EA is type 2 inflammation, which is driven by T helper (Th) 2 responses and mediated by interleukin (IL)-4, IL-5, and IL-13 (8). Corticosteroids and biological therapies (e.g., anti-IL-4Rα, anti-IL-5/5Rα, and anti-IgE antibodies) are indicated for asthma treatment in the guidelines of the Global Initiative for Asthma (GINA) (9). However, in contrast to EA patients, some patients with NA exhibit neutrophil-dominant airway inflammation that can be driven by M1 macrophages or Th1/Th17 lymphocytes (3,10,11). Thus, patients with NA respond poorly to corticosteroids and biological therapies (12,13) and commonly progress to severe or refractory asthma (14).
Therefore, further research on the possible mechanisms and biomarkers of non-EA phenotypes is necessary to identify effective treatment strategies for these patients. Tumor necrosis factor-α-induced protein 8 (TNFAIP8, also known as TIPE)-like 2 (TIPE2), a member of the TNFAIP8 family, maintains immune homeostasis and regulates inflammation (15-17). TIPE2 gene deletion in mice led to the spontaneous development of lethal inflammation of multiple organs, including the lung (16). TIPE2 regulates macrophage polarization and Th cell-mediated immune responses in vivo, as well as neutrophil and eosinophil activity and exudation (18-20). TIPE2 promoted the immune-suppressive effects of Tregs by increasing Foxp3 expression, thereby attenuating airway inflammation and airway hyperresponsiveness in an asthma mouse model (21). One study showed that TIPE2 levels in peripheral blood mononuclear cells (PBMCs) of asthma patients were lower than levels in healthy individuals and negatively correlated with eosinophil, IL-4, and IgE levels (22). However, another report found that TIPE2 expression in the polyps of asthma patients with eosinophilic chronic rhinosinusitis with nasal polyps (Eos CRSwNP) was significantly increased compared with expression in patients with non-asthmatic Eos CRSwNP and was positively correlated with eosinophils and local eosinophilic inflammation (23). Therefore, TIPE2 may be related to airway inflammation in asthma. However, the role of TIPE2 in asthma is complex and remains unclear. Nuclear factor E2-related factor 2 (Nrf2) is an important transcription factor that promotes the expression of the cytoprotective gene heme oxygenase-1 (HO-1), thereby reducing the damage caused by oxidative stress and inflammation in the body (24). HO-1 is an antioxidant enzyme expressed by macrophages (25). Induced expression of HO-1 significantly inhibited lipopolysaccharide (LPS)-induced M1 macrophage polarization and related proinflammatory cytokine gene expression (26). In addition, activation of the Nrf2 pathway significantly reduced Th1 and Th17 cytokine expression (27) and attenuated LPS-induced neutrophil recruitment (28). Whether TIPE2 exerts anti-inflammatory activity via Nrf2/HO-1 is unclear. The heterogeneity of asthmatic airway inflammation and the variability of asthma phenotypes may contribute to the disparate roles reported for TIPE2 in asthma. Here, we sought to determine the expression pattern of TIPE2 in different asthma phenotypes and its relationship with neutrophilic and eosinophilic inflammation. In addition, we further investigated the possible cellular and molecular signaling of TIPE2 in NA. We speculate that TIPE2 may inhibit the M1 polarization of macrophages by activating the Nrf2/HO-1 pathway, thereby alleviating airway neutrophilic inflammation in asthma.

Study Subjects

This cross-sectional study included 113 patients with asthma recruited from the Department of Respiratory Medicine of Jilin University Second Hospital. Asthma was diagnosed in accordance with the 2012 GINA criteria (29). Subjects underwent fractional exhaled nitric oxide (FeNO) tests, pulmonary function tests, and sputum induction. The Asthma Quality of Life Questionnaire (AQLQ) was used to assess patient quality of life, and the 6-item Asthma Control Questionnaire (ACQ6) and Asthma Control Test (ACT) were used to assess asthma control. All examinations were conducted on the same day.
Ethical approval was received from the Ethics Committee of the Second Hospital of Jilin University (approval number: 2016-34).

Sputum Induction
Sputum induction was carried out according to previously described procedures (30,31). Briefly, participants underwent 15 min of sputum induction with 4.5% hypertonic saline. Dithiothreitol was used to disperse the sputum cells, the cells were resuspended in 1× phosphate-buffered saline, and the total cell count was determined. Cytospins were prepared and stained by May-Grünwald-Giemsa. Cell counts of inflammatory cells (eosinophils, neutrophils, macrophages, epithelial cells, and lymphocytes) were calculated.

Sputum Inflammatory Phenotype Classification
According to previously published criteria, the granulocyte cutoff values were 3% for sputum eosinophils (5) and 61% for sputum neutrophils (6). The EA phenotype was defined as sputum eosinophil counts ≥3%; the NA phenotype was defined as neutrophil counts ≥61%; the MA phenotype was defined as eosinophil counts ≥3% and neutrophil counts ≥61%; and the PA phenotype was defined as eosinophil counts <3% and neutrophil counts <61%.

Cell Culture
The human monocyte leukemia cell line (THP-1) was obtained from the Cell Bank of the Type Culture Collection of the Chinese Academy of Sciences. The cells were cultured in RPMI Medium 1640 basic (1X) supplemented with 10% FBS and 1% penicillin/streptomycin solution (Thermo Fisher Scientific, Inc., USA) under standard conditions (37°C and 5% CO2 in a humidified incubator). THP-1 cells were cultured with phorbol-12-myristate-13-acetate (PMA, 20 ng/ml; Cat. #M4647, AbMole BioScience) for 48 h to transform into undifferentiated (M0) macrophages. After culture without PMA for 24 h, the M0 macrophages were treated with lipopolysaccharide (LPS, 1 µg/ml; Cat. #L2880, Sigma-Aldrich) for 24 h to induce M1 macrophages.

Cell Transfection
To overexpress or silence the TIPE2 gene in THP-1, cells were transduced by recombinant lentivirus carrying a TIPE2 overexpression vector (TIPE2-OE) or small hairpin RNA (shRNA) targeting the TIPE2 gene (sh-TIPE2) for 6 h, followed by incubation under standard growth conditions. At 72 h after transfection, overexpression or depletion of TIPE2 was confirmed by qRT-PCR and western blotting. The production and packaging of the human TIPE2 overexpression vector and the viral vector was conducted by Beijing Mijia Biotech Company. The human TIPE2 sequence was derived from the NCBI database (NM_024575.5) and the TIPE2 shRNA sequence was CCATGACGGCACTTAGCTTTG.

Real-Time Quantitative PCR (RT-qPCR)
Total RNA was isolated using the Cell Total RNA Isolation Kit (FOREGENE, Chengdu, China) according to the manufacturer's protocols, and RNA was reverse transcribed using the StarScript II First-strand cDNA Synthesis Mix (GenStar, Beijing, China). RT-qPCR was carried out with the CFX96 Touch™ Real-Time qPCR Detection System (BIO-RAD, USA). Results were analyzed using the 2^−ΔΔCt method. GAPDH mRNA was used as an internal control. The primer sequences are provided in Table 1.

Subcellular Fractionation
The nuclear and cytoplasmic protein extraction was performed using the nuclear extraction kit from BestBio (Cat. #BB-3115, Shanghai, China) according to manufacturer's instructions.

ELISA
Cell culture supernatants were collected for cytokine assay by ELISA (Elabscience, Wuhan, China), performed according to the manufacturer's instructions. Levels of IL-6, IL-1β and TNF-α were detected.
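Two computational steps in the Methods (the phenotype cut-offs and the 2^−ΔΔCt calculation) can be made concrete with a short Python sketch. This is an illustration only: it assumes the 3%/61% cut-offs and the GAPDH control described above, and the function names and example values are ours, not the authors'.

```python
def classify_phenotype(eos_pct: float, neu_pct: float) -> str:
    """Assign a sputum inflammatory phenotype from granulocyte percentages,
    using the cut-offs cited in the Methods (3% eosinophils, 61% neutrophils)."""
    eos_high = eos_pct >= 3.0
    neu_high = neu_pct >= 61.0
    if eos_high and neu_high:
        return "MA"   # mixed granulocytic asthma
    if eos_high:
        return "EA"   # eosinophilic asthma
    if neu_high:
        return "NA"   # neutrophilic asthma
    return "PA"       # paucigranulocytic asthma

def fold_change_ddct(ct_target_sample: float, ct_gapdh_sample: float,
                     ct_target_control: float, ct_gapdh_control: float) -> float:
    """Relative expression by the 2^-ddCt method with GAPDH as internal control."""
    dct_sample = ct_target_sample - ct_gapdh_sample
    dct_control = ct_target_control - ct_gapdh_control
    return 2.0 ** -(dct_sample - dct_control)

# Eosinophils 4.2% and neutrophils 70% meet both cut-offs -> "MA".
print(classify_phenotype(4.2, 70.0))
# A target gene whose dCt is 2 cycles higher in the treated sample -> 0.25-fold.
print(fold_change_ddct(26.0, 18.0, 24.0, 18.0))
```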
Statistical Analysis
IBM SPSS Statistics, version 25.0, was used for statistical analysis. Normally distributed data were expressed as the mean ± standard deviation (SD), and a comparison of two groups was performed using Student's t-test. Comparison of the means among three or more groups was performed using one-way analysis of variance (ANOVA) followed by a post-hoc test (least significant difference [LSD] or Tamhane's T2). Non-normally distributed data were summarized as the median (Q1, Q3), and comparisons between subgroups were performed using the Kruskal-Wallis test accompanied by Bonferroni correction or the Mann-Whitney U test. For categorical variables, the data were expressed as percentages, and the chi-square test was used for analysis. Spearman correlation coefficients were used to measure the association between clinical parameters of the patients. p<0.05 was considered statistically significant.

RESULTS

Demographic and Clinical Characteristics of Asthma Patients and Healthy Volunteers
This study included 113 patients with asthma and 22 healthy volunteers (Table 2). No significant differences in patient age, sex, body mass index (BMI), percentage of smokers, and smoking index were detected between asthma patients and healthy volunteers (p > 0.05). Compared with the healthy group, the asthma patients had significantly reduced lung function (FEV1/FVC, FEV1% predicted) (p < 0.001). The sputum eosinophil percentage was higher in asthma patients compared with healthy volunteers and the sputum macrophage percentage was lower (p < 0.05).

Clinical Characteristics and Inflammatory Phenotypes in Asthma
Sputum cell counts from 102 asthma patients were available, and the patients were classified into asthma inflammatory phenotypes as follows: 48 (47.1%) patients had PA, 31 (30.4%) had EA, 12 (11.8%) had NA, and 11 (10.8%) had MA (Table 3). There were no significant differences in the average age, sex, BMI, and percentage of smokers among the four groups (p > 0.05). Patients with EA had a significantly higher FeNO value and blood eosinophil count compared with the NA and PA groups and a significantly lower blood neutrophil count (p < 0.05).

TIPE2 Expression in Asthma Inflammatory Phenotypes
We detected no significant difference in sputum TIPE2 levels between asthma patients and healthy individuals (p > 0.05) (Table 2). However, the sputum TIPE2 level in patients with NA was significantly lower compared with levels in patients with PA but significantly higher in patients with EA compared with levels in patients with PA (p < 0.05) (Figure 1A). We next performed TIPE2 immunofluorescence analysis on sputum inflammatory cells from all asthma phenotypes (Figure 1B). TIPE2 expression was lower in sputum neutrophils of all phenotypes compared with sputum eosinophils. Macrophages in NA showed lower levels of TIPE2 staining compared with macrophages in PA, whereas macrophages in EA showed intense expression of TIPE2. All immune cells in NA, including neutrophils, macrophages, and eosinophils, showed lower levels of TIPE2 staining compared with cells in EA (Figure 1B). Therefore, sputum TIPE2 expression was downregulated in neutrophils and NA but was up-regulated in eosinophils and EA. Interestingly, TIPE2 levels in the sputum of patients with MA were significantly higher than those in NA and PA (p < 0.05) but were not significantly different from those in EA (p > 0.05), which may be caused by the different ratio of sputum neutrophils and eosinophils. The expression pattern of TIPE2 in asthma phenotypes may depend on the heterogeneity of airway inflammation.
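As a brief aside before the cell-culture results, the test-selection rule in the Statistical Analysis subsection can be written compactly. The sketch below is a hedged illustration using SciPy rather than SPSS; the Shapiro-Wilk screen is our assumption, since the paper does not state how normality was judged, and the toy data are not the study's measurements.

```python
import numpy as np
from scipy import stats

def compare_two_groups(x, y, alpha: float = 0.05):
    """Student's t-test when both samples pass a normality screen
    (Shapiro-Wilk, an assumed choice), otherwise the Mann-Whitney U test."""
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(x, y)
    return "Mann-Whitney U", stats.mannwhitneyu(x, y)

rng = np.random.default_rng(0)
tipe2_na = rng.normal(4.0, 1.0, 12)   # toy sputum TIPE2 levels, NA group (n=12)
tipe2_pa = rng.normal(5.5, 1.0, 48)   # toy sputum TIPE2 levels, PA group (n=48)
test_name, result = compare_two_groups(tipe2_na, tipe2_pa)
print(test_name, round(result.pvalue, 4))
```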
TIPE2 Impedes M1 Macrophage Differentiation
To explore the role of TIPE2 in the M1 polarization of macrophages, THP-1 monocytes stimulated with PMA were used as the model of undifferentiated (M0) macrophages. M0 macrophages were then treated with LPS to induce M1 macrophages. M0 and M1 macrophages were initially identified by morphology under the optical microscope (Figure 3). Further analysis by flow cytometry showed a significant increase in CD11b+ cells (7.9% vs. 92%) following PMA stimulation of THP-1 (Figure 4). After M0 macrophages were further stimulated with LPS, CD80+ cells increased significantly (13.2% vs. 72.9%), while the number of CD11b+ cells exhibited a nonsignificant change (92% vs. 91%) (Figure 4). These results support the differentiation of THP-1 cells into M0 and M1 macrophages. M1 macrophages and M1 cytokines (IL-6, IL-1β, and TNF-α) are involved in neutrophilic airway inflammation in asthma (2, 3). Our results showed that TIPE2 mRNA and protein expression was decreased in M1 macrophages compared with M0 macrophages (Figures 5A, B). THP-1 monocytes were then transduced with lentivirus expressing TIPE2 shRNA, and monocytes with TIPE2 shRNA exhibited decreased expression levels of TIPE2 (Figure 5B). The expression levels of M1 genes and cytokines in M0 macrophages were measured by RT-qPCR and ELISA following treatment with LPS. Under M1 macrophage-inducing conditions, TIPE2-silenced macrophages exhibited increased expression levels of M1 genes (CD11c and inducible nitric oxide synthase [iNOS] genes) compared with control macrophages (Figure 5C). The expression of IL-1β, IL-6, and TNF-α was also significantly upregulated in TIPE2-silenced M1 macrophages (Figure 5C). In addition, compared with control M1 macrophages, M1 macrophages with TIPE2 gene silencing secreted significantly more IL-1β, IL-6, and TNF-α into the supernatant (Figure 5E). Lentivirus was then used to overexpress TIPE2 in macrophages (Figure 5B). Compared with the control group, TIPE2 overexpression in macrophages inhibited M1 gene expression and inflammatory cytokine secretion (Figures 5D, E). Thus, TIPE2 impeded LPS-induced M1 macrophage differentiation and early inflammation.

TIPE2 Promotes Nrf2/HO-1 Signaling Pathway Activation
Activation of the Nrf2/HO-1 pathway restrains the M1 polarization of macrophages and exerts anti-inflammatory effects (26). We next explored whether TIPE2 regulates Nrf2/HO-1 pathway activation. RT-qPCR analysis demonstrated that macrophages with TIPE2 overexpression showed a significant increase in the mRNA expression of Nrf2 and HO-1 after stimulation with LPS (Figure 6B). Relative Nrf2 and HO-1 protein levels were also significantly increased in TIPE2-overexpressing macrophages (Figure 6D). In addition, TIPE2 overexpression induced significantly increased protein levels of nuclear Nrf2 in M1 macrophages (Figure 6E). Immunocytochemistry analysis showed that M1 macrophages with TIPE2 overexpression showed more Nrf2 staining, especially nuclear Nrf2 staining (Figure 6F). However, silencing TIPE2 resulted in reduced expression levels of Nrf2, HO-1, and nuclear Nrf2 in LPS-induced M1 macrophages (Figures 6A, C, E, F). These data suggested that TIPE2 enhanced activation of the Nrf2/HO-1 pathway during M1 polarization.

TIPE2 Inhibits M1 Macrophage-Related Inflammation by Enhancing Nrf2/HO-1 Activation
ML385 is a specific Nrf2 inhibitor and inhibits activation of the Nrf2 pathway (32).
To assess whether TIPE2 inhibits the M1 polarization of macrophages by targeting the Nrf2/HO-1 pathway, we treated macrophages with ML385 (5 nmol) for 48 h to block Nrf2 pathway activation. ML385 treatment decreased the up-regulated expression of HO-1 and Nrf2 mRNA induced by overexpression of TIPE2 in M1 macrophages (Figure 7A). ML385 treatment also reduced the increase in Nrf2 and HO-1 protein levels induced by overexpression of TIPE2 (Figure 7C). Moreover, M1 macrophages with ML385 treatment exhibited less Nrf2 staining compared with control M1 macrophages, but overexpressing TIPE2 weakened the decrease in Nrf2 expression induced by ML385 treatment in M1 macrophages (Figure 7B). Under M1 polarization conditions, TIPE2-overexpressing M1 macrophages had higher expression levels of Nrf2 and HO-1 after ML385 treatment (Figures 5A, C). ML385 treatment attenuated the down-regulated expression of M1 genes (CD11c and iNOS genes) induced by TIPE2 overexpression in macrophages following LPS stimulation (Figure 7D). Moreover, ML385 treatment significantly diminished the decrease in IL-1β, IL-6, and TNF-α mRNA levels induced by TIPE2 overexpression in M1 macrophages (Figure 7D). Upon treatment with ML385, TIPE2-overexpressing M1 macrophages had lower expression levels of CD11c, iNOS, IL-1β, IL-6, and TNF-α compared to control M1 macrophages (Figure 7D). Together, these data indicated that blocking Nrf2 weakened the anti-inflammatory role of TIPE2 in M1 macrophage polarization. Therefore, these findings suggest that TIPE2 inhibits M1 macrophage differentiation and related inflammation by enhancing Nrf2/HO-1 pathway activation.

DISCUSSION
A previous study found that TIPE2 expression in PBMCs of asthmatic children was decreased (22). Another study, however, found that TIPE2 levels were higher in the polyps of patients with asthmatic Eos CRSwNP compared with non-asthmatic Eos CRSwNP (23). In contrast, our data showed that the TIPE2 level in sputum was comparable between asthma patients and healthy individuals. However, an inverse expression pattern of TIPE2 in the sputum of NA and EA was observed. Interestingly, sputum TIPE2 levels in MA were significantly higher than those in NA but were not notably different from those in EA, which may be caused by the different ratio of sputum neutrophils and eosinophils. The heterogeneity of asthma airway inflammation or differences in asthma phenotypes could account for the disparities in TIPE2 levels in asthma patients. Significantly dissimilar infiltration of inflammatory cells and cytokines has been observed in different inflammatory phenotypes of asthma (34-36). TIPE2 in the sputum of asthma patients positively correlated with pro-eosinophilic inflammation cytokines but negatively correlated with pro-neutrophilic inflammation cytokines. Abnormal expression of TIPE2 may play a regulatory role in neutrophilic and eosinophilic inflammation in asthma. Sputum TIPE2 levels in EA patients were significantly higher compared with levels in PA patients. TIPE2 expression in sputum neutrophils, macrophages, and eosinophils in EA patients was higher than levels in NA patients. In sputum eosinophils of all phenotypes, TIPE2 was expressed at higher levels compared with that in sputum macrophages. A previous study showed that upregulated expression of TIPE2 in the polyps of Eos CRSwNP patients was positively related to eosinophil numbers, thereby aggravating local eosinophilic inflammation and disease severity (23).
However, another study found that TIPE2 may enhance the immune-suppressive activity of Tregs, thereby attenuating eosinophil accumulation in the airway and reducing the severity of asthma (21). These studies exhibited a contradictory role of TIPE2 in eosinophilic airway inflammation in asthma (22,23), which could be attributed to the heterogeneity of asthma inflammatory phenotypes. Thus, we assessed the correlation between sputum TIPE2 levels and cytokines associated with eosinophilic and neutrophilic inflammation in asthma. In this study, sputum TIPE2 levels were significantly increased in patients with EA and positively correlated with sputum IL-4, IL-5, and IL-13, which promote eosinophilic inflammation in asthma (2). In addition, the sputum TIPE2 levels were positively correlated with sputum eosinophils, blood eosinophils, and FeNO value, suggesting a positive association between TIPE2 and eosinophilic inflammation. However, TIPE2 was also positively correlated with the anti-inflammatory cytokine IL-10. More research is needed to explore the role of TIPE2 in eosinophilic inflammation in asthma. TIPE2 was negatively correlated with neutrophilic inflammation, consistent with the inhibitory role of TIPE2 in M1 inflammation. In this study, sputum TIPE2 levels were significantly decreased in NA patients and negatively correlated with sputum IL-1β and TNF-α. In sputum neutrophils of all phenotypes, TIPE2 was expressed at lower levels compared with that in sputum macrophages. Additionally, macrophages in NA patients showed lower TIPE2 expression compared with PA patients, while macrophages in EA patients had higher TIPE2 expression. This result is consistent with the down-regulated expression of TIPE2 in M1 macrophages, which can release neutrophilic inflammatory factors and aggravate neutrophilic inflammation in asthma (3). TIPE2 was shown to inhibit macrophage apoptosis and M1 phenotype polarization in vitro by decreasing the levels of monocyte chemoattractant protein (MCP)-1, TNF-α, IL-6, IL-1β, and IL-12 (19,37,38). TIPE2 may thus inhibit neutrophilic inflammation in asthma. Classical (M1) polarization of macrophages and related cytokines play a predominant role in airway inflammation in non-allergic asthma (11). Nrf2, a major transcription factor, promotes the expression of the cytoprotective gene HO-1, thereby inhibiting M1 macrophage polarization and reducing the expression of inflammatory cytokines (26).
We found that TIPE2 enhanced activation of the Nrf2/HO-1 pathway in macrophages and down-regulated the expression of M1 genes and M1 inflammatory cytokines stimulated by LPS. M1 polarization of macrophages and related cytokines play a predominant role in neutrophilic inflammation (38). In vivo, TIPE2 inhibited lipopolysaccharide-induced pulmonary cell apoptosis and neutrophil infiltration, thereby weakening lung inflammation and structural injury in mice (17). Exogenous TIPE2 treatment decreased the levels of serum IL-1β, IL-6, and TNF-α in mice with acute lung injury, inhibiting lung inflammation (39). Furthermore, neutrophils were increased significantly in the cornea of TIPE2−/− mice with keratitis after Pseudomonas aeruginosa infection (20). Additionally, Nrf2 pathway activation notably suppressed neutrophil recruitment to sensitized skin in a mouse hypersensitivity model (40). More importantly, in the ovalbumin-induced asthma model, Nrf2 deficiency led to significantly increased levels of neutrophils and eosinophils in the bronchoalveolar lavage fluid and lung tissues of mice, and aggravated oxidative stress, airway inflammation, and AHR (41,42). In short, TIPE2 inhibits these pro-inflammatory cytokines, which can recruit neutrophils to the lung and exacerbate airway inflammation in NA (43). TIPE2 may target Nrf2/HO-1 pathway activation to inhibit M1 macrophage inflammation, thereby mitigating airway neutrophilic inflammation in asthma. Together, our results demonstrate an inhibitory role of TIPE2 on M1 inflammation through TIPE2 targeting Nrf2/HO-1 pathway activation and indicate that alterations in TIPE2 expression may lead to an interchange between the asthma inflammatory subtypes.

In conclusion, this study revealed that TIPE2 expression was highly down-regulated in NA and was negatively correlated with inflammatory factors (IL-1β and TNF-α). In vitro analyses showed that TIPE2 impeded LPS-induced M1 macrophage differentiation and related inflammation by targeting activation of the Nrf2/HO-1 pathway. Aberrant expression of TIPE2 may target the Nrf2/HO-1 pathway to inhibit neutrophilic inflammation in asthma. Additional research will be required to elucidate the precise function of TIPE2 in each of these distinct asthma phenotypes.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the Second Hospital of Jilin University (approval number: 2016-34). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
BS drafted the manuscript. YH, WL, HD and MX reviewed and critically revised the manuscript for important intellectual content. PG provided substantial contributions to the study conception and design and confirmed final approval of the version to be published. All authors contributed to the article and approved the submitted version.
Communications Through Contemporary Tools of Information and Communication Technology: Cross-sectional Study Evaluating Health Among Separated Family Members Background The number of single-living workers separated from their spouses and families has been increasing due to the need to create a balance between life and work. Workers are assigned everywhere in globalized workplaces while also caring for their family members in the context of Japan’s aging society. At the same time, the mental and health status of persons living separately from their families is a matter of concern. The development of interpersonal communication means using information and communications technology (ICT) tools and the internet is remarkable, enabling simultaneous 2-way communication across distances and national borders. The easy accessibility to simultaneous communication is expected to improve the psychosocial status of isolated family members. Objective This study aims to clarify the health benefits of ICT by using a psychosocial health assessment, the characteristics of ICT tools, and the frequency of communication among the workers and their families who live separately. Methods This was a cross-sectional study planned and conducted in Japan. Study participants, including adults who live separately from other family members or have separately living family members due to work, were recruited to answer a web response survey about ICT usage status, health status, and life and society evaluation. This study recruited 73 participants divided into 2 groups by their communication tools and frequencies, and their separated life, health, and psychosocial status were statistically compared. Results Among the 73 study participants, 15 were categorized in the high communication–skilled (HCS) group that used both types of ICT tools to communicate frequently: “live,” such as video chat and voice call, and “nonlive,” such as SMS text message service and email. A simple comparison between the HCS and reference groups showed significant differences in the cohesion with the neighborhood (P=.03), perceived social position (P=.01), and happiness (P<.001); however, there were no significant differences in the health (psychological distress, P=.08; self-rated health, P=.07), lifestyle (drinking, P>.99; current smoking, P=.37), and dyadic trust in family members living separately (P=.80). Further, in a multivariate regression analysis adjusted for confounding factors, such as educational history, age, gender, and job status, poor subjective health showed a prevalence odds ratio of less than 1 (OR 0.17, 95% CI 0.03-1.02). The HCS group showed significant positive relationships in the cohesion score with the neighborhood (P=.01; β=2.40, 95% CI 0.56-4.24), perceived social position (P=.03; β=1.17, 95% CI 0.11-2.23), and happiness score (P=.002; β=1.46, 95% CI 0.58-2.34) in the same multivariate regression models. Conclusions This study suggested that people who frequently communicate with separated family members by taking advantage of various ICT tools can maintain a better mental state and better social relations among those who live alone and are separated from their families. Introduction In Japanese working households, couples may have a period of living separately due to raising children or caring for parents at the opportunity of a job transfer of a family member. Most Japanese workers are employees, accounting for almost 89% of the workforce [1]. 
Given that Japanese company organizations adopt a membership-based employment system [2], many employees follow their company's transfer orders until they reach retirement age under lifetime employment [3]. Because Japanese companies have domestic and overseas branches and departments, employees usually experience several different workplaces every few years unless they change companies. Employees belong to the company's membership system; even if the company orders them to change workplaces, it is rare for them to leave the company because of that system. Therefore, if there are family circumstances, such as a child attending a competitive elite school or parents needing care, when workers are ordered to transfer, only the worker leaves and moves to a different workplace alone. In Japan, where the birthrate is declining and the population is aging, children's education is an important concern for families [4]. At the same time, caring for aged parents is often managed by a few family members with limited manpower [5]. In addition, as the working-age population is relatively small, companies have to allocate a small number of employees to various places and suitable positions. The human resource departments of Japanese companies usually transfer employees from one job or branch to another, rather than hiring new workers, to fill necessary jobs and branches. Under the membership employment system, hiring a worker means approving that person as a member; such an approval is consequential and serious, and human resource departments therefore hesitate to make hiring decisions lightly. From the high economic growth era until today, many Japanese company organizations appear not to have changed the membership employment system [2].

The number of single-living workers separated from their spouses and families is increasing. Although it is difficult to determine the accurate number, it has been partially estimated by the government, for example, through the Comprehensive Survey of Living Conditions and the National Census. For instance, available information counts married men living in single-person households as single job-transferred workers based on these national statistics [6]. According to this information, there were 750,000 single-living married men in 2015 [6], equivalent to 2.4% of households with couples. In addition, since 1997, the Ministry of Internal Affairs and Communications has reported the number of female workers living alone despite having spouses in the Employment Status Survey. This number has also increased, from 0.5% of female workers in 1997 to 1.2% in 2017 [7]. Thus, the percentage of single job-transferred workers among all households and workers is not high for either men or women, but it has steadily increased over the past 30 years.

The mental and health status of solitary members living apart from other family members has become a concern as the number of single job-transferred workers increases. Several studies on the health (in particular, mental health centered on psychological stress) of single job-transferred workers and their families were published between the 1980s and 2000s.
In some studies, the health status of single job-transferred workers did not necessarily deteriorate, because the age at the time of transfer, personal qualities (whether or not the transfer is viewed positively), and the significance of the transfer (whether or not it involves promotion) affected the situation. Therefore, the worsening effect of a single job transfer on mental health did not necessarily occur in all cases [8,9]. In addition, a survey was conducted on couples in which one partner was assigned to work alone. An unpredictable life, such as being unable to make a life plan due to an unknown assignment period, was a stress factor, although this depended on the spouses' age and the length of the assignment [8]. Recently, physical health has also been evaluated for single job-transferred workers compared with workers living with their families. With respect to lifestyle habits, smoking and the frequency of drinking were higher among those assigned to work alone, and many of them did not eat breakfast [10,11]. Studies have concluded that these behaviors were due to stress. One study also reported several symptoms, such as headaches, gastrointestinal disorders, and colds, among single job-transferred workers [10]. In addition, they had a higher prevalence of mental stress, such as irritation, anxiety, and depressive mood, than workers living with family [11]. Comparing the results of health checkups of single job-transferred workers with those of workers living with their families, cholesterol levels and other values were worse for the former [11]. Workers living alone and separated from their families tend to consult with their families about health problems rather than with their close colleagues and medical staff at the workplace [10]. This suggests that communication with their families is important for maintaining their health even when they are physically isolated from them. However, few studies have focused on the communication of single job-transferred workers. Therefore, it is meaningful to evaluate the state of communication as a factor that leads to a better state of life and health for an isolated person living away from the family.

The means of communication available to families living separately have increased dramatically since the 1990s, when the official number was first estimated. The development of interpersonal communication using information and communications technology (ICT) tools is remarkable. In particular, free call services using the internet enable simultaneous 2-way communication across distances and national borders. When video functions are added, they enable visual communication and the sharing of information through nonverbal transactions. Smartphone messenger apps, such as WhatsApp, Facebook Messenger, and WeChat, are popular, and each has more than 1 billion active users [12]; additionally, these apps have regional characteristics. In East Asia, LINE is popular in Japan, Kakao Talk is famous in South Korea, and WeChat is used the most in China [13]. These internet-based free calling apps have hundreds of millions of users worldwide and serve as communication tools. Against this background of developing ICT tools, the means of communication in Japan have also changed. The number of general users of free calling apps, a recent method, is increasing rapidly [14].
According to the Ministry of Internal Affairs and Communications' Communication Usage Trend Survey, the number and duration of calls via conventional telephones have decreased in recent years. By contrast, the proportion of email and social networking service (SNS) users has grown to more than half. It is reported that 3.8 billion people, not only in Japan but worldwide, are using some kind of SNS tool today [15]. With respect to communication that builds interpersonal relationships, the association between social support and health status has long been known [16]. The number of people one talks to daily has been used to assess the effects of social support. The greater the number of individuals with whom one communicates, including those met face-to-face and those contacted by email, telephone, and SNS, the higher one's self-reported health [17]. In addition, sharing life-related information among family members through various means increases the family's well-being [18]. In the case of spouses, increased sharing of information reduces the incidence of mental disorders in families living apart and improves their mental health [19]. It also positively affects family relationships based on affection and growth [20]. Regarding the level of information sharing, these studies used the frequency of communication between families via internet-based applications (eg, Viber, Imo, and Facebook) and the number of messages sent by instant messaging as indices. The types of ICT tools used for individual communication have been influenced by socioeconomic factors [18,21]. Nowadays, these tools are regarded as a means of maintaining and promoting health. Effective use of the internet is observed in older adults of higher socioeconomic status and in those who report fewer depressive and anxiety symptoms [22]. In addition, increased health information-seeking behavior has been observed among internet users who are not at the lowest socioeconomic level [23]. In Japan, trials have been conducted of simultaneous photograph sharing [24] and of telemonitoring systems that track a television's operating state [25] to ensure a feeling of security and safety among family members living separately. However, these are case reports of experiments among a few families, and psychological health has not been examined. In Japan, where the number of single job-transferred workers is increasing, the relationship between the physical and mental health of separated couples and their families and the usage and frequency of the communication tools widely employed today remains to be examined fully. Currently, while numerous people benefit from ICT to obtain psychological support from remote family and friends, the health of those who live away from their families has not been fully evaluated in this context. Therefore, this study investigated the association between communication via ICT and its emotional advantages among couples and families who temporarily choose to live separately because of work. This study aimed to clarify the health benefits of ICT by conducting a psychosocial health assessment and by evaluating the characteristics of ICT tools and the frequency of communication.

Study Design
This was a cross-sectional study planned and conducted in Japan. Study participants were adults who live separately from families or have separately living family members due to work. Recruitment was conducted for 5 months from November 2019 to March 2020.
The survey asked about ICT usage status, health status, and life and society. All answers were collected by the web response system and analyzed statistically.

Participants
The researchers approached their acquaintances to introduce the survey to their family, friends, and colleagues. Most acquaintances were also researchers, specialists, and businesspeople working in universities, research institutes, and companies with many branches located both domestically and abroad. They had experienced living alone remotely from their family and were expected to know people temporarily separated for work reasons or who had such families. Acceptable reasons for living separately included work and family care, but not divorce, family troubles, or other reasons. University students were excluded because, although many of them lived alone apart from their families, most were not expected to be responsible for the lives of the rest of their family members. Participants were recruited by the snowball sampling method. First, by meeting directly, sending an email, or calling over the telephone, the researchers asked acquaintances to be the first introducers. The researchers then asked them to send a recruitment message to the mailing list of the Young Scientist Group of the Japan Epidemiological Association. If a potential participant agreed to be an introducer, the researchers asked him/her to introduce 1-3 acquaintances who met the inclusion criteria and to send an email with the survey site link, a token key, and the research explanatory material. If a reader of the mailing list wanted to participate in the survey and it was confirmed that he/she met the criteria of living separately, the researchers directly sent a similar email with a link, a token, and the research explanatory material. If the first applicants accepted the research conditions and participation in the survey, they anonymously started answering the questionnaire using the token key received. Once they finished answering the questions, a new token key and a message appeared, asking them to send the token key to the family member living separately to allow that person to participate in the survey. Finally, 73 participants gave valid responses, including 12 separated family pairs in which both members completed the survey.

Survey Variables
The survey was conducted using a free and open-source online statistical survey web app, LimeSurvey [26], and responses were collected using an anonymous self-completed survey form. Separated family pairs were identified by the tokens distributed with the survey guide. Regarding ICT usage status, the following items were presented to capture the types of communication tools often used: "phone calls by cellphone and telephone," "online free calls," "video chat using the Internet," "short text message on cellphones," "group text chats like LINE, FB messenger, etc.," and "e-mail," as well as the frequency of communication with each tool from the following options: "never use," "once a month," "few times a month," "once a week," "two or three times a week," "four or five times a week," and "almost every day." The detailed questionnaire is presented in Multimedia Appendix 1. In addition to the basic attributes, the researchers asked about the participants' separated living status, such as their relationship status, period of separation, time and cost required to meet, and frequency of meeting.
The outcome indicators for evaluating the communications were mental health, assessed with the K6, a 6-item screening scale for psychological distress [27]; subjective self-rated health [28]; and lifestyle habits, such as drinking and smoking. Moreover, this study investigated trust with the family [29]; subjective happiness level [30]; perceived social position, as assessed by a social stratification ladder called the Cantril Ladder [31]; and cohesion with neighborhoods [32]. These validated indicators were selected to assess psychosocial health status.

Statistical Analysis
First, a simple tabulation was presented of the basic attributes and separation status of the research participants. Furthermore, the communication tools were divided into 2 groups: "live," which included voice and video options, such as phone call, internet free call, and video call; and "nonlive," which included sending texts via email, group text chats such as LINE, and SMS text message. Communication frequency was calculated as the average score for each combination of the most frequently used tool and the next most frequently used tool. Participants with high frequency scores whose first and second choices included both live and nonlive tools were classified into the high communication-skilled (HCS) group and compared with the remaining participants, treated as the reference group. For statistical comparison, the Student t test or the Wilcoxon rank-sum test was used for continuous variables depending on the observed data distributions. Similarly, the chi-square test or Fisher exact test was used for categorical variables. After making a simple comparison, the effects of HCS were estimated in a multivariate regression model adjusting for educational history, age, gender, and having a job or not, which were considered potential confounders. For the statistical estimation of the effects of HCS on psychological health-related outcomes, logistic regression was used for dependent variables with a binary value of 0 or 1, and ordinary least-squares linear regression was used for continuous dependent variables. The significance level (P value) of all statistics was 5%, and analysis was performed using Stata version 16 (StataCorp, Inc.).

Ethics Approval
The Kyushu University Medical District Institutional Review Board for Clinical Research approved the protocol of this verification study in 2018 (approval no. 30-335). The main study has also been approved for implementation.

Results
Among the study participants who agreed to join the survey, 73 completed the questionnaire, covering 61 separated family pairs consisting of couples or parent-child relationships; in 12 of these pairs, both members completed the survey. Their basic demographic characteristics and the status of separate living are summarized in Table 1. Mobility data on returning home to meet the family differed according to distance, and these were summarized separately for domestic and international travel. A total of 53 pairs lived separately in the same country, 7 pairs were separated internationally, and 1 pair did not provide their residence status. Figure 1 shows the types of ICT tools used in family communication and the communication frequency among family members. ICT tools were divided into 2 types of systems: live and nonlive. Among the live-type systems, phone calls were the option study participants used most frequently for family communication. Among nonlive-type systems, most used email every day.
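Before the group comparisons below, two short sketches make the analysis pipeline just described concrete. Both are illustrations under stated assumptions rather than the authors' actual code. In the first, the numeric frequency scores (roughly times per month) and the qualifying threshold are our guesses; the paper reports combined scores such as 16.3 but does not publish the underlying scale values.

```python
# Live tools carry voice/video; nonlive tools carry text.
LIVE = {"phone call", "internet free call", "video call"}
NONLIVE = {"SMS", "group chat", "email"}

# Assumed times-per-month values for the seven frequency options
# (the questionnaire labels are from the paper; the numbers are illustrative).
FREQ_SCORE = {
    "never use": 0, "once a month": 1, "few times a month": 2.5,
    "once a week": 4.3, "two or three times a week": 10,
    "four or five times a week": 18, "almost every day": 30,
}

def is_hcs(usage: dict[str, str], threshold: float = 10.0) -> bool:
    """HCS: the two most frequently used tools span both the live and
    nonlive categories and are used often (the threshold is illustrative)."""
    ranked = sorted(usage, key=lambda tool: FREQ_SCORE[usage[tool]], reverse=True)
    top = ranked[:2]
    spans_both = any(t in LIVE for t in top) and any(t in NONLIVE for t in top)
    frequent = all(FREQ_SCORE[usage[t]] >= threshold for t in top)
    return spans_both and frequent

participant = {"phone call": "four or five times a week",
               "SMS": "two or three times a week",
               "email": "once a month"}
print(is_hcs(participant))  # True: live + nonlive, both used frequently
```

In the second, the model formulas mirror the described analysis (logistic regression for binary outcomes, ordinary least squares for continuous ones, each adjusted for education, age, gender, and job status), but statsmodels stands in for the authors' Stata code, and the data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the survey (column names are ours, not the authors').
rng = np.random.default_rng(1)
n = 73
df = pd.DataFrame({
    "hcs": rng.integers(0, 2, n),        # 1 = high communication-skilled
    "graduate": rng.integers(0, 2, n),
    "age": rng.integers(25, 60, n),
    "female": rng.integers(0, 2, n),
    "job": rng.integers(0, 2, n),
})
df["poor_health"] = rng.integers(0, 2, n)   # binary outcome
df["happiness"] = rng.normal(7, 2, n)       # continuous outcome

covariates = "hcs + graduate + age + female + job"

# Prevalence odds ratio for poor self-rated health: exponentiated coefficient.
logit = smf.logit(f"poor_health ~ {covariates}", data=df).fit(disp=0)
print("OR for HCS:", np.exp(logit.params["hcs"]))

# Adjusted beta (and 95% CI) for the happiness score.
ols = smf.ols(f"happiness ~ {covariates}", data=df).fit()
print("beta for HCS:", ols.params["hcs"], ols.conf_int().loc["hcs"].tolist())
```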
The study participants were summarized for each combination of the most frequently used and the second most frequently used tool for family communication. The average value of the scores of communication frequencies for each combination and the number of participants are shown in Table 2. The tool combinations that included both live and nonlive systems and had the highest average frequency scores were (1) phone calls and SMS text messages (scores 16.3 + 6.4; n=9), (2) video calls and group messages (scores 15.5 + 4.6; n=6), and (3) video calls and SMS text messages (score 14; n=1). The 15 people included in these combinations were chosen as the HCS group. We then compared the health and psychosocial states of the HCS group with those of the rest of the participants, who were considered the reference group.

Tables 3 and 4 list and compare the attributes of the HCS and reference groups. The HCS group included more graduates (15/16, 94%) than the reference group (29/57, 51%). However, there were no statistically significant differences in other attributes, such as age (P=.37), gender ratio (P=.94), being engaged in a job (P=.58), or living status (P=.70). In addition, there was no difference in the status of separation between the HCS and reference groups in the frequency of actual meetings with the separated family, the travel time, and the cost.

One's communication methods and evaluation of life may be influenced by educational history. Therefore, a multivariate regression analysis adjusted for confounding factors, such as educational history, age, gender, and job status, was performed (Tables 6 and 7). As a result, poor subjective health showed a prevalence odds ratio of less than 1 (OR 0.17, 95% CI 0.03-1.02), but psychological distress (P=.09), lifestyle (drinking, P=.60; current smoking, P=.36), and dyadic trust with a family partner (P=.93) showed no significant differences from the reference group, as in the simple comparison. In the HCS group, the cohesion score with the neighborhood (β=2.40, 95% CI 0.56-4.24), perceived social position (β=1.17, 95% CI 0.11-2.23), and happiness level (β=1.46, 95% CI 0.58-2.34) were all higher.

Principal Findings
In this study, people who frequently communicated using ICT took more advantage of its characteristics even if they were separated from their families, experienced more social cohesion with their neighborhood, and exhibited a higher perceived social position and a higher level of happiness in their life than people who did not communicate frequently using various ICTs. There was no relationship between the variety and frequency of communication methods and the sense of trust in the other family member living separately. In addition, as long as they communicated with each other with sufficient frequency by taking advantage of ICTs, persons separated from their families maintained good health, although this was not statistically significant. Separation, such as a single job transfer without family, causes great inconvenience in life and can worsen health. Therefore, single job-transferred workers have reported worse health than workers who move with family at job transfer [10,11]. However, this study qualifies these findings. Our results show that a separated person who communicates more using various ICT tools may have a useful skill in maintaining good mental health.
The study design differed from previous studies, and people who were not separated were not evaluated as a reference group in this study. However, good psychosocial effects were achieved by taking advantage of ICT tools and communicating with sufficient frequency, which can be useful for mitigating the inconvenience and stress in the lives of single job-transferred workers in the future. Ideally, to identify the effective intervention methods necessary to improve the quality of life of separated families, additional studies are needed to evaluate the relationship between the detailed state of communication and the sociopsychological health status of both separated and non-separated individuals. It would be meaningful to evaluate the psychological health status of separated families in light of the current infrastructure, as ICT tools are now highly developed and travel has become increasingly flexible. In Japan, family members often separate temporarily for job reasons; single job transfers were introduced by many Japanese companies during the period of high economic growth starting in the 1960s and 1970s and have been widely accepted among families. According to Japan's public employment statistics, a survey on the number of households having a single job-transferred worker began in 1981. Since then, several evaluations have been conducted, focusing on the stress status of family members, the roles of parents, and the development of children [8]. Recent studies have also revealed that single-living workers separated from their families have poor lifestyles, poor psychological conditions, and poor medical examination results [10,11]. However, new communication devices, such as smartphones and tablets, which were introduced after 2009 and have developed continuously, are expected to significantly enhance the effects of internet-based communication. Using a smartphone involves few of the technical difficulties, such as the need for additional devices and systems, that existed when internet communication was first introduced. The communication cost is relatively low because the existing internet network can be used. In addition to text, it is possible to communicate a large amount of information, such as images and videos, so it has become possible to evaluate psychological health conditions in relation to both the quality and quantity of communication methods. In Japan, the ownership rates of different information and telecommunications devices indicate that smartphones are the most common, accounting for up to 83.4% in 2019 [14]. Previous studies have already pointed out that the psychological state of single job-transferred workers may be improved by improving communication. It has been observed that when single-living employees have a health problem, more than half of them consult with a family member living far away rather than with colleagues or medical staff closer to them [10]. Thus, there is a possibility that sociopsychological indicators can be kept high by mastering and using diversified means of communication. As ICT is expected to continue to develop in the future, our results can be read as an evaluation of the current transitional situation. In any case, as long as people learn and use ICT tools appropriately, the range of communication means will expand, eventually providing greater benefit.
The relationship between cause and effect was not evaluated; for example, whether sociopsychological utility, such as happiness level, increased because of sufficient communication, or whether communication became active because separated individuals had an insufficient connection with their surroundings and a suitable level of perceived social position. A longitudinal research design could be adopted in the near future to monitor changes in life evaluation and in physical and mental health status before and after the start of separation, or during separation and after the end of the separation period, together with the quality and quantity of communication. In addition, because only a few participants (n=73) were evaluated in this study, the participants' characteristics may be biased. Looking at the attributes, it is possible that highly educated individuals with high socioeconomic status were overrepresented, so caution should be exercised when generalizing the results. Compared with employees living with family members, many single job-transferred employees held executive and managerial posts [11]. The annual incomes of these single job-transferred employees were usually higher than the average annual income of Japanese employees [8]. Therefore, participants in this study possibly had higher education and socioeconomic status than people in households without single job-transferred members. In other words, highly educated people were likely to have family members living separately, so the results may apply to many households whose members live separately. Moreover, this study used subjective health evaluations and did not use objective indicators, such as health checkup measurements, as in past studies. The absence of statistically significant differences in the health indicators considered in this study might be attributable to the small number of participants and the resulting low statistical power. As we did not recruit households with single job-transferred workers from a specific company or group, the diversity of evaluation targets was one of the strengths of the study when generalizing the findings. However, the small number of study participants threatened the reliability of the research evaluation.

Conclusions
Despite the aforesaid research limitations, those who live alone apart from their families and frequently communicate with separated family members by taking advantage of various ICT tools can maintain a better mental state and better social relations. This study suggested that cohesion with the surroundings, subjective social position, and happiness level are higher among those who communicate using various tools and more frequently than among those who communicate less. It is expected that there will be increasing opportunities to go to a new location, workplace, or school alone, away from the family, in contemporary society, given ICT development and increasing mobility. Therefore, it would be useful in future studies to evaluate how to communicate better to maintain a good mental state, using both technical aspects and frequency indicators.
Reported Self-control is not Meaningfully Associated with Inhibition-related Executive Function: A Bayesian Analysis

Self-control is assessed using a remarkable array of measures. In a series of five data-sets (overall N = 2,641) and a mini meta-analysis, we explored the association between canonical operationalisations of self-control: the Self-Control Scale and two measures of inhibition-related executive functioning (the Stroop and Flanker paradigms). Overall, Bayesian correlational analyses suggested little-to-no relationship between self-reported self-control and performance on the Stroop and Flanker tasks. The Bayesian meta-analytical summary of all five data-sets further favoured a null relationship between both types of measurement. These results suggest that the field's most widely used measure of self-reported self-control is uncorrelated with two of the most widely adopted executive functioning measures of self-control. Consequently, theoretical and practical conclusions drawn using one measure (e.g., the Self-Control Scale) cannot be generalised to findings using the other (e.g., the Stroop task). The lack of empirical correlation between measures of self-control does not invalidate either measure, but instead suggests that treatments of the construct of self-control need to pay greater attention to convergent validity among the many measures used to operationalize self-control.

Classically defined as the overriding of unwanted impulses (Baumeister, 2014; Roberts, Lejuez, Krueger, Richards, & Hill, 2014), self-control is among the most celebrated facets of higher cognition. Impulse control enjoys such theoretic prominence that inhibition has been proposed to underlie 80-90% of self-regulation (Baumeister, 2014; Baumeister, Heatherton, & Tice, 1994). Also remarkable is the array of measures used to assess self-control, from introspective self-report questionnaires to reaction-timed tests of executive functioning. While such measures are undoubtedly diverse, what seems to unite them is the idea that they each tap some ability to override unwanted dominant impulses (Baumeister et al., 2014; Baumeister, 2014; Hofmann, Schmeichel, & Baddeley, 2012; Inzlicht, Schmeichel, & Macrae, 2014). That all these measures relate to a common construct called self-control has often been assumed by face-validity (e.g., "I am good at resisting temptations"; Stroop performance requires inhibiting word-reading), but seldom investigated empirically (cf., Duckworth & Kern, 2011). Yet, the importance of understanding the nature of self-control measures cannot be overstated. Poor self-regulation has been identified as "the major social pathology of the present time" (Baumeister et al., 1994, pp. 3; see also Mischel, Shoda, & Rodriguez, 1989), with low reported self-control predicting poorer health, finances, and wellbeing, and higher rates of mortality and criminal convictions (Moffitt et al., 2011; Tangney, Baumeister, & Boone, 2004). Therefore, understanding operationalisations of the construct of self-control is critical. Most people have a lay conceptualisation of what it feels like to resist temptation and to exert self-control. However, are common empirical measures, each of which putatively assesses the ability to resist impulses in its own right, statistically related to each other? Do people who self-report high levels of self-control on questionnaire measures also show improved performance on laboratory tests of self-control?
Here, we addressed these questions by examining the relationship between self-report measurements of self-control and performance measures of inhibition-related executive functioning through five novel data-sets and a meta-analytical summary (N = 2,641).

The most widely used self-report measure of the construct, the Self-Control Scale (Tangney, Baumeister, & Boone, 2004), defines self-control as the capacity to "override or change one's inner responses, as well as interrupt undesired behavioural tendencies (such as impulses) and refrain from acting on them" (pp. 274, emphasis added). Consistent with prominent theories (Baumeister et al., 1994), considerable emphasis was placed on inhibition in the development of the Self-Control Scale. Several items in the Self-Control Scale include content that is face-valid in relation to inhibition (e.g., "I am good at resisting temptation"; "I refuse things that are bad for me, even if they are fun"). Beyond inhibition, however, the Self-Control Scale assesses multiple outcomes and processes that are more globally reflective of self-discipline and goal-directed actions (e.g., "I have trouble concentrating"; "I tend to be disorganised"; "I say inappropriate things"; "I am lazy"; all reverse-scored). Initial validation work demonstrated that higher Self-Control Scale scores were associated with better psychological adjustment, reduced problematic food and alcohol consumption, increased relationship satisfaction, and more adaptive emotional responses (Tangney et al., 2004). These relationships were largely confirmed in a subsequent meta-analysis of 102 studies (de Ridder, Lensvelt-Mulders, Finkenauer, Stok, & Baumeister, 2012). In short, self-control, as operationalised by the Self-Control Scale, predicts the good life.

Likely reflecting the wide scope of the Self-Control Scale, this measure is strongly correlated with other broad individual differences including conscientiousness (John & Srivastava, 1999; Roberts, Jackson, Fayard, Edmonds, & Meints, 2009) and grit (Duckworth et al., 2007). Conscientiousness encompasses needs for achievement, organization, restraint, and rule following (Goldberg, 1990; John & Srivastava, 1999; Roberts, Jackson, Fayard, Edmonds, & Meints, 2009), and reliably predicts health, wealth, and well-being across the lifespan (Bogg & Roberts, 2004; Kern & Friedman, 2008). Similarly, grit is defined as perseverance and passion for long-term goals, even in the face of short-term setbacks and obstacles (Duckworth et al., 2007; Von Culin, Tsukayama, & Duckworth, 2014). Relative to self-control, conscientiousness and grit were not developed with a central emphasis on inhibition. However, given the strong correlations among grit, conscientiousness, and the Self-Control Scale (cf., Roberts et al., 2014), our analyses also included a "self-discipline" composite that combined conscientiousness, grit, and the Self-Control Scale into a single measure.

Impulse control and executive functioning
In addition to self-report measurement, the ability to override impulses is commonly measured using executive functioning tasks such as the Stroop, Flanker, and Go-Nogo paradigms (Miyake et al., 2001). Bearing a close resemblance to definitions of self-control (Baumeister et al., 1994; Hofmann et al., 2012), executive functions reflect a domain-general range of processes that allow individuals to flexibly regulate attention and behaviour in a goal-directed manner (Banich, 2009; Miyake et al., 2000). Assessed using a range of tasks, executive functions demonstrate both unity and diversity.
While a common factor appears to unite diverse measures of executive functioning, the processes of inhibition, updating, and switching have also been identified as dissociable subcomponents (Miyake et al., 2000). More recent analyses have suggested, however, that the inhibition component is particularly strongly related to the shared variance among executive functions (Miyake & Friedman, 2012). This latter finding is consistent with the idea that inhibition-related processes play a central role in governing flexible goal-directed actions. In the present studies, we used the Stroop and flanker tasks as performance measures of inhibition-related executive functioning. In the Stroop task, people identify the physical colour of a word while ignoring the lexical meaning of the word, which may be either compatible (the word "blue" in blue ink) or incompatible (the word "red" written in blue ink). Participants in the flanker task must respond to a central target letter (e.g., "S" or "H") while surrounding flanker letters prime either the correct response on compatible trials (e.g., "SSSSS"), or the incorrect response on incompatible trials (e.g., "HHSHH"). In both cases control is required on incompatible trials to overcome the dominant but unwanted response tendency primed by the flankers or by automatic word-reading processes. The Stroop and flanker tasks were originally conceived to investigate cognitive interference, and were presented in within-subjects experimental designs (Eriksen & Eriksen, 1974; Stroop, 1935). However, these paradigms have been widely adopted as self-control measures in social and personality psychology (Allom et al., 2016; Gailliot et al., 2007; Inzlicht & Gutsell, 2007; Molden et al., 2012). Despite their experimental origins, performance measures of inhibition-related executive functioning are frequently used in cross-sectional designs to assess individual differences in inhibition and impulsivity (Davidson, Amso, Anderson, & Diamond, 2006; Hall, 2012; Nigg, 2001; Snyder, 2012). Individual differences in executive functions have been related to multiple real-world outcomes thought to characterise good self-control, such as relationship fidelity (Pronk, Karremans, & Wigboldus, 2011), treatment-compliance among drug users (Streeter et al., 2008), and not snacking on unhealthy foods (e.g., Allan, Johnston, & Campbell, 2010). Thus, while tasks like the Stroop are often used to assess the influence of short-term, state effects on self-control (e.g., Inzlicht & Gutsell, 2007; Molden et al., 2012), it is variation in these tasks as an individual difference that is of interest in the current work. Whenever self-control on the Stroop task or flanker task is mentioned in this manuscript as a correlate of other measures (e.g., the Self-Control Scale), we are referring to self-control as an individual difference rather than a momentary, state effect. One controversy regarding conflict control tasks is the specific role of inhibition in successful task performance. For example, interference from incompatible trials could be overcome either by inhibiting the inappropriate motor response to the irrelevant stimulus dimension (e.g., inhibiting the flanker letters; overriding word-reading processes in the Stroop task) or by focusing attention on the task-relevant dimension (e.g., the central letter in the flanker stimulus, or the physical colour of the Stroop target; Cohen, Dunbar, & McClelland, 1990; Egner & Hirsch, 2005).
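To make this individual-differences use of the tasks concrete, the following minimal Python sketch computes conflict effects as difference scores from hypothetical trial-level data; the column names and values are illustrative assumptions, not data from the studies reported here. Larger scores indicate poorer inhibition-related control.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, per participant.
trials = pd.DataFrame({
    "subject":   [1, 1, 1, 1, 2, 2, 2, 2],
    "condition": ["compatible", "incompatible"] * 4,
    "rt_ms":     [520, 610, 540, 650, 480, 530, 500, 560],
    "error":     [0, 0, 0, 1, 0, 0, 0, 0],
})

# Conflict (Stroop/flanker) effect per participant:
# mean on incompatible trials minus mean on compatible trials.
means = trials.groupby(["subject", "condition"])[["rt_ms", "error"]].mean().unstack()
rt_effect = means[("rt_ms", "incompatible")] - means[("rt_ms", "compatible")]
error_effect = means[("error", "incompatible")] - means[("error", "compatible")]

print(rt_effect)     # e.g., subject 1: 100 ms; subject 2: 55 ms
print(error_effect)  # difference in error rates per participant
```

These per-participant difference scores are then correlated with questionnaire scores, which is the analysis pursued in the studies below.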
No clear consensus has emerged on whether these executive functioning tasks rely on selective attention, inhibition, or both. Most important for present concerns, however, is that these tasks more broadly assess the ability to control the influence of inappropriate impulses on performance in a goal-directed manner. Hereafter, these processes will be referred to as inhibition-related executive functions, while acknowledging that the exact mechanisms underlying successful performance on these tasks require clarification. The convergence between self-reported self-control and performance measures of self-control The heterogeneity of tools available to investigate self-control might suggest that a single latent process underlies impulse control across contexts and modalities, from reaction time performance at a millisecond resolution to global introspections about one's ability to self-regulate. This consilient view of self-control assessment is hindered, however, by the modest correlations among measures of self-control. A recent meta-analysis (Duckworth & Kern, 2011) reported small but statistically significant convergence among measures of self-control (i.e., self-report, informant report, choice tasks, and inhibition-related executive functioning, r = .27). This meta-analysis further revealed that while the expected correlations between self-reported self-control and inhibition-related executive functions were present, the effect sizes of these associations were particularly small in magnitude (r = .10). Multiple factors might contribute to low convergence among measures of self-control. As mentioned previously, the Self-Control Scale is not solely focused on inhibition, but incorporates a wide range of characteristics and behaviours that are indicative of self-discipline. The breadth of the Self-Control Scale might mean that it assesses a broad spectrum of characteristics, with this content only overlapping to a small extent with the inhibitory processes that are assessed in tests of inhibition-related executive functioning. It is also true that inhibition-related executive functions focus more narrowly on a constrained range of cognitive processes that allow people to overcome interference that is specific to a single, rather abstract, goal (e.g., "name colours", "identify the central target letter"). As such, the relatively narrow scope of the Stroop and flanker tasks might mean that inhibition-related executive functions can only correlate very slightly with the broader definition of inhibition used in the Self-Control Scale, simply because these executive functioning measures capture significantly less content. It is noteworthy that this logic still anticipates a detectable (albeit small) relationship between the broader trait measure and the performance measure. The reliability paradox is another factor that might contribute to the relatively low correlations (Hedge, Powell, & Sumner, 2017). This paradox refers to a phenomenon whereby behavioural tasks only become established in the experimental literature if they show little between-subject variability. Consequently, low between-subject variability might limit the extent to which behavioural measures of self-control (e.g., Stroop, flanker) can correlate with other individual differences, such as the Self-Control Scale. Again, this reliability paradox might explain why correlations between self-reported self-control and inhibition-related executive functions would be rather small.
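The mechanics of this attenuation can be made explicit with Spearman's classic correction. The reliabilities in the worked example below are illustrative assumptions, chosen only to resemble the difference-score reliabilities reported later for the Stroop task, not estimates from any specific study.

```latex
% Spearman's attenuation formula: the observed correlation is the true
% correlation scaled by the square roots of the two reliabilities.
r_{\mathrm{obs}} = r_{\mathrm{true}} \sqrt{r_{xx'}\, r_{yy'}}
% Worked example with assumed values: a true correlation of .20, a
% difference-score reliability of .50, and a scale reliability of .85:
r_{\mathrm{obs}} = .20 \sqrt{.50 \times .85} \approx .20 \times .65 \approx .13
% Rearranged, the same formula "disattenuates" an observed correlation:
r_{\mathrm{true}} = \frac{r_{\mathrm{obs}}}{\sqrt{r_{xx'}\, r_{yy'}}}
```

The rearranged form is the correction applied to the meta-analytic correlations in the General Discussion below.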
Finally, common biases in academic publishing might mean that prior meta-analytical effect sizes overestimate the strength of the relationship between scale measures of self-control and performance measures, like the Stroop or flanker tasks. The modest correlations between scale and performance measures reported in prior meta-analyses (i.e., Duckworth & Kern, 2011) might overestimate the underlying effect sizes because they did not correct for publication bias (i.e., the general tendency for significant results to be published more often than null results; Rothstein, Sutton, & Borenstein, 2005). The current work In a series of 5 data-sets and a meta-analysis, we asked if the Self-Control Scale (Tangney et al., 2004), likely the most common self-control questionnaire, is correlated with the overriding processes commonly identified by the Stroop and flanker tasks. When formulating our research question, it is important to consider the smallest effect size that would still be of practical and theoretical interest. A correlation of at least moderate strength (e.g., r ≥ .4) might be expected if moderate-to-strong convergence is anticipated between inhibitory executive functions and trait self-control. However, effect sizes in this range seem particularly unlikely given prior meta-analytical results (Duckworth & Kern, 2011). As such, we anticipated that any correlation between scale and inhibition-related executive functioning measures of self-control would be small (e.g., rs between .10 and .25). It should further be noted that the smallest effect size of theoretical significance differs depending on the proposed application; however, it seems that any effect size approaching zero (<.10) would be small enough to be of little practical or theoretical significance. Our work builds on prior investigations in two key ways. First, working against potential file-drawer problems, we present every study that we are aware of from our laboratory (i.e., the Toronto Laboratory for Social Neuroscience) that includes the Stroop or flanker task and the Self-Control Scale. Second, because existing studies exclusively employed null-hypothesis significance testing, they could only support convergence among measures, not its absence. Instead, we used Bayesian methods to obtain a Bayes factor comparing a null and non-null hypothesis, and computed the posterior distribution of the correlation given the alternative model. The Bayes factor gives us the evidence for or against the hypothesis of no correlation, and the posterior tells us how big any non-zero correlation is likely to be (see Etz & Vandekerckhove, in press). Study 1 The Stroop task is a common laboratory operationalisation of self-control (e.g., Gailliot et al., 2007; Inzlicht & Gutsell, 2007; Molden et al., 2012). In study 1, we used the Stroop task to test convergence between reported self-control and inhibition-related executive functioning. Method Participants We decided a priori to collect data from at least 200 participants. This sample size exceeds that of similar prior investigations (Allom et al., 2016; Edmonds et al., 2009), and meets rule-of-thumb guidelines for sample size in social-personality psychology (Fraley & Vazire, 2014). 224 undergraduate students from the University of Toronto Scarborough participated for course credit (81 females; mean age = 18.8, SD = 2.5 years). Seven participants were excluded from these analyses either because of software malfunction or because they responded incorrectly to Likert-type questions (responding outside the range of the scale).
For each study in this manuscript, ethical approval was obtained from the Research Ethics Board at the University of Toronto, Scarborough before data collection started. In each study informed consent was provided by each participant before taking part. Scales Self-control was assessed using the 13-item Self-Control Scale (Tangney et al., 2004). We additionally measured two other self-regulatory individual differences: conscientiousness and grit. Conscientiousness was assessed using the 9-item conscientiousness subscale of the 44-item Big Five Inventory (John & Srivastava, 1999). Grit was assessed using the 12-item grit scale (Duckworth et al., 2007). Please see Table 1 for descriptive and reliability statistics for all scales. Although theoretical distinctions exist between grit, conscientiousness, and the Self-Control Scale (cf., Duckworth & Gross, 2014), these self-regulatory traits tend to correlate highly with each other and with the Self-Control Scale (rs ~ .70-.80; Roberts et al., 2013). Consequently, we created a composite self-discipline measure aggregating across the Self-Control Scale, conscientiousness, and grit. Similar composite scores have previously demonstrated higher utility in predicting the ability to select between competing impulses than single measures (Duckworth & Seligman, 2006). Thus, by improving signal-to-noise ratios, this composite scale might show a particularly reliable relationship with inhibition-related executive functioning. Scales were computerized and completed by participants immediately after the Stroop task. Stroop paradigm The Stroop task started with 10 practice trials of 6 object-words (chair, house, lamp, spoon, table, and window) presented in red or blue. The left arrow-key was pressed to identify blue words, and the right arrow-key for red words. The main Stroop task started after these practice trials. Stimuli were the words "RED" and "BLUE" presented in either red or blue to create compatible (e.g., "RED" in red font) and incompatible (e.g., "BLUE" in red font) targets. Trials commenced with a fixation cross (250 ms), followed by a target stimulus until response (min: 150 ms; max: 1500 ms), followed by a blank screen (550 ms).1 Self-paced breaks occurred between blocks where participants reported their subjective experience (not analysed here). We created mean scores for each dependent measure (Stroop effects [i.e., RT/error-rates on incompatible trials minus RT/error-rates on compatible trials], accuracy and reaction times on compatible trials and incompatible trials) separately for each experimental block to assess the internal consistency of the Stroop task. Consequently, Cronbach's α was calculated using 9 different values (i.e., one from each experimental block) for each measure in the experiment (e.g., the Stroop effect in RT). The resulting internal consistency estimates suggested that reliabilities were good-to-excellent for most measures (compatible mean RT, α = .95; incompatible mean RT, α = .93; compatible error rates, α = .83; incompatible error rates, α = .83), but low for the Stroop effect in mean reaction time (α = .52) and error rates (α = .46; see also Wöstmann et al., 2013). Given these reliabilities, it might be suggested that the subsequently reported null correlations were a result of low measurement reliability, rather than a low association between reported self-control and inhibition-related executive function.
However, it should be noted that similar small-to-null correlations were observed when Bayesian correlations were conducted between reported self-control/self-discipline and non-difference executive functioning scores (i.e., mean RT and error-rates for incompatible and compatible trials) across all studies in this manuscript (see supplemental materials). Analysis strategy Bayesian Pearson's correlations were computed to quantify the association between the Stroop effect and self-report scores. The primary benefit of adopting this Bayesian approach is that in addition to providing evidence in favour of an alternative hypothesis (e.g., a negative correlation between the Self-Control Scale and the Stroop effect) and estimating the size of the correlation, Bayes factors can also be used to provide evidence in favour of a null relationship (Wagenmakers, Verhagen, & Ly, 2015). Bayes factors can be interpreted "as the degree to which the data sway our belief from one to the other hypothesis" (Etz & Vandekerckhove, 2016, p. 4). For example, in the case where we initially have no preferred hypothesis, so that we assign 50% prior probability to each of the null and alternative hypotheses, a Bayes factor of 3 in favour of the null brings the probability of the null hypothesis to 75% (a Bayes factor of 10 brings the probability of the null hypothesis to 91%). Results and Discussion Correlations among self-regulatory constructs As anticipated, grit, conscientiousness, and Self-Control Scale scores were strongly correlated (all rs > .598, see Table 1). A self-discipline composite measure was created by z-scoring grit, the Self-Control Scale, and conscientiousness and summing these standardised values. Bayesian correlations We next tested correlations between the Stroop effect and the self-report measures. Difference values were used because they are indicative of overriding processes while controlling for base rate performance.2 As increasing values on each self-report scale should be associated with reduced Stroop effects (i.e., higher control), we set a prior on the correlation that is skewed such that most of its mass (78%) falls on negative correlation values (see Figure 1).3 Here, a Bayes factor in favour of the alternative hypothesis (BF01 < 1) would support correlations among performance and self-report measures. In contrast, a Bayes factor favouring the null (BF01 > 1) would suggest no relationship between questionnaire and behavioural measures. We report posterior medians and 95% posterior (credible) intervals in brackets for each correlation, which tell us the most likely range for the value of the correlation if it is in fact non-zero. Stroop effect in reaction time. The data provide evidence that the Stroop effect in reaction times was not correlated with Self-Control Scale scores (r = -.012 [-.143, .119], BF01 = 7.73) or the self-discipline composite measure (r = -.027 [-.158, .105], BF01 = 7.26), see Figure 2. Moreover, the posterior intervals suggest that any correlation between reaction time performance and these scales would likely be small. Stroop effect in choice error rates. The data provide some evidence that the Stroop effect in error rates was not associated with scores on the Self-Control Scale (r = -.067 [-.196, .065], BF01 = 4.81) or the self-discipline composite measure (r = -.102 [-.231, .029], BF01 = 2.44), see Figure 2. The posterior intervals suggest that any correlation between error rates and either of these scales would most likely be negative and small.4
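The arithmetic behind the 75% and 91% figures above is just the odds form of Bayes' rule; a minimal sketch, assuming equal prior probabilities for the two hypotheses as in the text:

```python
# Posterior probability of the null from a Bayes factor BF01,
# assuming prior probabilities P(H0) = P(H1) = 0.5 as in the text.
def posterior_prob_null(bf01, prior_h0=0.5):
    prior_odds = prior_h0 / (1.0 - prior_h0)
    posterior_odds = bf01 * prior_odds  # posterior odds = BF01 * prior odds
    return posterior_odds / (1.0 + posterior_odds)

print(posterior_prob_null(3))   # 0.75  -> the 75% quoted above
print(posterior_prob_null(10))  # ~0.91 -> the 91% quoted above
```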
Studies 2a-c The following three studies replicated the above findings using large online samples of participants and another common test of inhibition-related executive functions, the flanker paradigm (cf., Eriksen & Eriksen, 1974; Miyake et al., 2001). Because these studies share an almost identical protocol we present them together, noting any methodological divergence. Method Participants Study 2a. Eight hundred and fifty-six participants (397 females; mean age = 35.23, SD = 11.11 years) were recruited from Mechanical Turk (MTurk) in October 2015. MTurk is a crowd-sourced marketplace in which workers are remunerated for completing online tasks. While data from MTurk tend to be noisier than lab data, this platform facilitates the recruitment of larger samples in service of statistical power (Crump, McDonnell, & Gureckis, 2013). All recruited MTurk workers were located in the USA and had completed a minimum of 5 HITs with a >90% success rate. Participants were compensated $3 USD, and sessions lasted ~25 minutes. The study consisted of an arrow version of the flanker task, followed by a battery of self-report questionnaires. These data were initially collected as a training sample for a machine learning project. However, we only analysed variables pertinent to the hypotheses addressed in Study 1: the flanker data and reported self-control, conscientiousness, and grit. Two hundred and nine participants were excluded for making >40% errors on the flanker task or having missing data from the flanker task (e.g., because of software malfunction).5 While this exclusion rate is high (>24%), the 40% error criterion ensures that responding is above chance. One further participant was excluded for a very fast mean reaction time (<100 ms). The final sample included 647 participants, resulting in 80% power to find a correlation of .11 when α = .05, and >99% power to find a correlation of .20. Study 2b. 1,621 participants (1227 females, mean age = 39.20, SD = 14.22) were recruited by the online survey and market research company Tellwut (www.tellwut.com). As with the previous sample, all participants were recruited from the USA, paid $3 USD for taking part, and completed the same study protocol as in Study 2a. Upon inspection of the scatterplots one participant was also removed whose flanker effect was >4 SDs below the mean. Study 2c. 1,769 participants (932 females, mean age = 40.21, SD = 12.53) were recruited from the online panel company Cint (www.cint.com). These participants were also based in the USA, were paid $3 USD for participation, and completed a protocol that was identical to studies 2a and 2b. The same exclusion criteria applied to studies 2a and 2b identified 888 participants for exclusion, resulting in a final sample of 881 individuals. Scales In each study, grit and conscientiousness were assessed as in Study 1, while we used the extended 36-item version of the Self-Control Scale (cf., Tangney et al., 2004). See Table 2 for descriptive statistics. Conscientiousness was assessed in study 2c using the relevant subscale from the IPIP-NEO-120 (Johnson, 2014). Arrow flanker task All three studies used an identical arrow flanker task. Flanker stimuli consisted of five arrowheads (i.e., compatible: <<<<<, >>>>>; incompatible: <<><<, >><>>). Participants responded to the central arrow using the left or right arrow-key on their keypad.
Here, control is required on incompatible trials to override the misinformation primed by the flankers. The main experiment consisted of 100 flanker trials (50 compatible, 50 incompatible) preceded by 10 practice trials. Each trial started with a fixation cross for 1000 ms, followed by the target until response (max. 1000 ms). Our statistical approach followed that used in Study 1. Due to the brevity of the online flanker task in these studies, we were unable to obtain measures of internal consistency comparable to those obtained for the Stroop task in Study 1. However, previous reports have suggested that the reliability of the flanker task is similar to that which we obtained in Study 1 (Wöstmann et al., 2013). Scale correlations and reliability All three studies supported strong positive associations among self-reported grit, self-control, and conscientiousness, all rs > .698, all ps < .001, see Table 2. A self-discipline composite measure was created as in Study 1. Flanker effects These brief online experiments revealed robust flanker effects in all three data-sets: responses were slower and more error-prone on incompatible than compatible trials (see Table 3). Bayesian correlations To evaluate the correlations from studies 2a, 2b, and 2c we employed the following strategy: for each analysis in study 2a, we used the posterior from the corresponding test in study 1 as the prior. Then for each analysis in study 2b, we used the posterior from 2a as the prior, and repeated this again for 2c. This Bayesian updating yields sequential Bayes factors (BF0r; see Ly, Etz, Marsman, & Wagenmakers, 2017), and these can be interpreted as the additional evidence gained from each study beyond the evidence we had before. Study 3 Study 3 employed an in-lab version of the flanker task, conducted as part of a battery of baseline measures for a longitudinal goal-striving study that was ongoing at the time of writing this manuscript. The flanker task always came first in a series of 3 tasks that were undertaken by participants while electroencephalography (EEG) was recorded. The other two tasks in this series were a time-estimation paradigm and a passive picture viewing task that were not relevant to the current research questions. After these tasks the participants answered a battery of questionnaires that included assessments of conscientiousness, grit, and trait self-control. The following analyses report only on the executive functioning and self-report measures included in studies 1 and 2a-c. Method Participants Two hundred and twenty-one participants took part in the study (61.5% females, mean age = 20.2, SD = 5.6), and were largely recruited through the undergraduate pool at the University of Toronto Scarborough, while a smaller number of participants were recruited through community advertisements. The session lasted approximately two hours, including setting up the EEG apparatus, completing the computerised tasks, and the subsequent questionnaires. Forty-three participants were excluded following the same exclusion criteria that were applied in Studies 2a-c. Scales Self-control and grit were assessed in the same manner as in Study 1 (see Table 4); however, the Self-Control Scale items were answered on 7-point rather than 5-point Likert scales. Grit was assessed with the 8-item short grit scale (Duckworth & Quinn, 2009; α = .76). The questionnaires were answered by participants after the computerised tasks.
Arrow-flanker task As in studies 2a-c, participants performed an arrow version of the flanker task in a dimly lit room. Here, we only note deviations from the protocol used in these previous studies. Trials commenced with the presentation of a fixation cross (250 ms) that was followed by a flanker target stimulus until response (min: 100 ms, max: 1000 ms), followed by a blank screen for 600-1000 ms before the start of the next trial. Participants performed a total of 420 trials. Participants were given self-paced breaks after blocks of 60 trials, and were instructed to respond as quickly and accurately as possible. Results and Discussion As with all previous studies, strong positive correlations were observed between conscientiousness, trait self-control, and grit, all rs > .676, ps < .001. Bayesian correlations Following the strategy of Bayesian updating from the previous studies, the posterior from Study 2c was used as the prior for study 3 to yield a further sequential Bayes factor. We can interpret each posterior from study 3 as the result of a Bayesian fixed-effects meta-analysis because they contain all the information accumulated across all the studies. Furthermore, a fixed-effects meta-analytic Bayes factor is given by the product of all individual studies' sequential Bayes factors. For mean reaction times, the data from Study 3 provided a little more evidence in support of a null relationship between the Self-Control Scale and the flanker effect, r = -.039 [-.077, 0], BF0r = 1.8, with the overall evidence favouring no association (fixed-effects meta-analysis BF01 = 3.81). A similar result was obtained for the flanker effect in reaction time for the self-discipline composite, r = -.024 [-.062, .015], BF0r = 1.12, with the overall evidence again favouring no association (fixed-effects meta-analysis BF01 = 13.1). The data from the flanker effect in error rates provided a little more evidence favouring a small association with the Self-Control Scale, r = -.060 [-.098, -.022], BF0r = .848, with the overall evidence favouring a small association (fixed-effects meta-analysis BF01 = .239). The association between error rates and the self-discipline composite showed a similar pattern in this study, r = -.046 [-.085, -.008], BF0r = .763, but the overall evidence was essentially equivocal (fixed-effects meta-analysis BF01 = 1.67).7 Bayesian Random Effects Meta-Analysis We found converging lines of evidence for zero-to-small correlations between inhibition-related executive functions and reported self-control across five data-sets. We next conducted a Bayesian random-effects meta-analysis to estimate the overall size of the correlations while accounting for potential heterogeneity of results across the data-sets. To this end, four separate meta-analyses (reaction time and choice-error rates separately for the Self-Control Scale and self-discipline composite) were conducted using the Bayesian statistical software Stan (Carpenter et al., 2016; Stan Development Team, 2017), and we computed meta-analytic Bayes factors using bridge sampling (Gronau, Singmann, & Wagenmakers, 2017). These random-effects meta-analyses were instantiated as hierarchical models where the individual studies can be related to each other through shared population-level parameters; see the supplementary materials for details. The results of each random-effects meta-analysis are summarized in Figure 5.
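The logic of these sequential Bayes factors, and the identity between their product and the overall fixed-effects Bayes factor, can be illustrated with a minimal grid approximation. The prior, the Fisher-z likelihood, and the (r, N) pairs below are illustrative assumptions, not the authors' actual analyses (which used a skewed prior and dedicated software).

```python
import numpy as np
from scipy.stats import norm

# Grid over the correlation rho, with an assumed prior (normalized on the grid).
rho = np.linspace(-0.99, 0.99, 1999)
prior = norm.pdf(np.arctanh(rho), loc=0.0, scale=0.5)
prior /= prior.sum()

# Hypothetical (observed r, N) pairs standing in for a series of studies.
studies = [(-0.03, 224), (-0.05, 647), (-0.02, 881)]

def lik(r_obs, n, rho_vals):
    # Fisher-z approximation: atanh(r_obs) ~ Normal(atanh(rho), 1/sqrt(n - 3))
    return norm.pdf(np.arctanh(r_obs), loc=np.arctanh(rho_vals),
                    scale=1.0 / np.sqrt(n - 3))

seq_bf, current = [], prior.copy()
for r_obs, n in studies:
    l = lik(r_obs, n, rho)
    marginal = (l * current).sum()                 # p(data | H1, earlier data)
    seq_bf.append(lik(r_obs, n, np.array([0.0]))[0] / marginal)  # BF_0r
    current = l * current / (l * current).sum()    # posterior becomes next prior

# The product of the sequential BFs telescopes into the overall fixed-effects BF_01.
print(seq_bf, np.prod(seq_bf))
```

Because each study's marginal likelihood conditions on all earlier data, the product recovers the Bayes factor that all data would yield under the initial prior; the random-effects version described next additionally draws each study's rho from a population distribution with heterogeneity parameter tau.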
The posterior distribution for the meta-analytic correlation (rho) between the Self-Control Scale and conflict effects in reaction time suggests any association that might exist is likely negative and small (r = -.028 [-.181, .109]), and the Bayes factor favours the null model (BF01 = 8.34). Similarly, the posterior for the meta-analytic correlation between the self-discipline composite score and conflict effects in reaction time was small (r = -.021 [-.161, .093]), and again the Bayes factor favoured the null model (BF01 = 10.33). A similar pattern of results holds for the meta-analytic correlation between the conflict effects on error rates and scores on the Self-Control Scale (r = -.059 [-.190, .056]), with a Bayes factor slightly favouring the null model (BF01 = 3.93). Similar results were found for the self-discipline composite measure (r = -.051 [-.183, .057], BF01 = 4.87). General Discussion Combining 5 data-sets with over 2,600 participants, we found a consistent pattern of a small-to-zero relationship between self-reported self-control and two performance measures of inhibition-related executive functioning (the Stroop and flanker tasks). Most individual studies were consistent with no association between self-reported self-control and inhibition-related executive functions, with only study 2c tending to support a small negative relationship. Further Bayesian meta-analyses, both fixed and random effects, suggested little to no relationship between reported self-control and conflict effects in reaction time and choice error rates. This conclusion is supported both by Bayes factors supporting a null association and by the corresponding small posterior estimates. What do these results mean for the science of self-control? Both questionnaire and inhibition-related performance measures are established and widely accepted measures of self-control (de Ridder et al., 2012; Duckworth & Kern, 2011; Hofmann et al., 2012; Molden et al., 2012; Inzlicht & Gutsell, 2007). Indeed, it was once estimated that inhibition underlies 80%-90% of self-regulation (Baumeister et al., 1994; Baumeister, 2014). Given that inhibition is evoked as a mechanism underlying executive functions (Miyake & Friedman, 2012) and self-reported self-control (Tangney et al., 2004), it might be expected that associations among these self-control measures should be sizeable and robust. However, the current results suggest that questionnaire measures of self-control and canonical performance measures of inhibition-related executive function are largely unrelated to each other. Importantly, we do not claim that our results invalidate one measure or the other. Our results suggest that the Stroop and flanker tasks do not reflect the broader individual difference construct that is reflected in self-report scales, and, equally, that scores on the Self-Control Scale are not analogous to the processes assessed by the Stroop and flanker tasks. Our findings are consistent with previous studies that reported non-significant correlations between inhibition-related executive functions and self-report measures of impulsivity (Eisenberg et al., 2018; Nęcka, Gruszka, Orzechowski, Nowak, & Wójcik, 2018; Stahl et al., 2013) or conscientiousness (Fleming, Heintzelman, & Bartholow, 2016).
We further extend these frequentist analyses by providing Bayesian support for a null relationship between these measures. The strongest interpretations of these findings are: a) that theoretical and practical conclusions drawn using one measure (e.g., the Self-Control Scale) cannot be generalised to findings using the other (e.g., the Stroop task); and b) that there is little-to-no relationship among these measures that are both commonly identified as operationalisations of the psychological construct of self-control. Executive functioning paradigms such as the Stroop and flanker tasks are designed specifically to assess control over pre-potent impulses (cf., Botvinick et al., 2001; Miyake et al., 2001). Evidence that these tasks assess the ability to overcome inappropriate impulses has been suggested by both behavioural and psychophysiological investigations (Kopp et al., 1996; Verleger et al., 2009). [Figure 5: Forest plots depicting the results of the random-effects meta-analyses as a function of self-report measure (Self-Control Scale, Self-Discipline Composite) and the difference scores on the executive functioning tasks for both reaction time and choice error rates. Error bars depict 95% confidence intervals. BF01 values in sub-titles show evidence favouring the null for the meta-analytical (rho) effect across all four data-sets. Tau is a measure of heterogeneity of effect size, and was low for each of our meta-analyses.] Similarly, the concept of inhibiting unwanted impulses was central to the development of the Self-Control Scale (Tangney et al., 2004), and many of the items in this scale assess inhibition-like content (e.g., "I am good at resisting temptation"). Despite these logical similarities, it is reasonable to conclude from our results that self-report measures of control and laboratory tests of inhibition-related executive functions assess different underlying processes. These findings should be of great concern to psychological scientists interested in self-control: despite theoretical suggestions to the contrary (Hofmann et al., 2012), our results suggest that the field's most widely used trait measure of self-control is uncorrelated with two of the field's most commonly used executive functioning measures of self-control. We should be clear that our results do not undermine the validity of the Self-Control Scale (or other self-report measures like it) as a predictor of real-world outcomes. Scale measures of self-control are consistently related to multiple indices of wellbeing (de Ridder et al., 2012; Moffitt et al., 2011). In fact, initial validation work focused on associations between the Self-Control Scale and relevant outcomes (e.g., less binge eating, alcohol abuse, better relationships, and good psychological adjustment), rather than exploring associations with other established self-control measures (Tangney et al., 2004). Instead, it appears that the Self-Control Scale and performance measures of inhibition-related executive functioning might be largely non-overlapping, despite these tasks both being framed as assessments of the ability to override impulses. The current results are illustrative of a general conceptual and definitional ambiguity that may hinder the empirical validation of self-control as a psychological construct. Self-control is typically defined as the ability to override unwanted impulses (Baumeister et al., 2014), and this ability is assessed using multiple measures.
This heterogeneity of assessment is particularly challenging for construct validity as there currently exists no gold-standard criterion measure against which the validity of other self-control measures can be assessed. If I hypothesize that measure A (e.g., handgrip strength, the Stroop task, or a new self-report scale) is a measure of self-control, there is no agreed benchmark assessment of self-control against which I can correlate handgrip strength. Following this logic, it is impossible to conclude from our results whether the observed lack of significant correlations points to validity issues with any one of our measures, or whether there are broader problems with the construct space of self-control (cf., Cronbach & Meehl, 1955). Conclusions complementary to our own were drawn in recent analyses in which task/performance-based measures of self-regulation (e.g., the go/no-go task, delay discounting, and 35 other behavioural tasks) predicted other task-based measures, but were not associated with 27 self-report-based measures of self-regulation (and vice versa; Eisenberg et al., 2018). Together with our own results, these studies indicate that a so-called jingle fallacy has emerged in self-control research, where two types of task (i.e., behavioural and self-report) that are commonly identified as operationalisations of one psychological construct (i.e., self-control) bear little-to-no empirical relationship with each other. Limitations and future directions The current studies should be considered in light of some important limitations and questions for future research. First, we focused on a relatively constrained range of self-report and performance measures that reflect canonical measures of self-reported self-control and inhibition-related executive functions. However, self-control and self-discipline can be measured both by longer- and shorter-form scales (Goldberg, 1992; Gosling, Rentfrow, & Swann, 2003; Duckworth & Quinn, 2009), and also by observer reports (Jackson et al., 2010; Moffitt et al., 2011). Similarly, a wide range of reaction-timed tasks are available to measure inhibition-related executive functions (e.g., the antisaccade task, the stop-signal task). Given this diversity of measures, it is clear that ongoing research should test the generalisability of our findings to other measures in order to further explore the structure of self-control. As mentioned when introducing our measures, there also exists a unity and diversity among established measures of executive functioning (Miyake et al., 2000; Miyake & Friedman, 2012). Future research could explore associations among self-report measures of self-control and other aspects of executive functioning, such as updating and task switching. One recent investigation indicated that the personality dimension of conscientiousness was associated with shifting, but not with the inhibition of prepotent responses or working memory updating (Fleming, Heintzelman, & Bartholow, 2016). These findings suggest that rather than reflecting the ability to overcome impulses, conscientiousness might be more closely associated with control processes that allow people to flexibly respond to changing contexts and environments. Similar patterns might be expected with reported self-control given the high degree of empirical and conceptual overlap between self-control and conscientiousness (Roberts et al., 2014).
Such a finding would be consistent with theoretical perspectives in which self-control involves adaptively managing priorities between activities and goals (Inzlicht et al., 2014). Related to the diversity of self-control measures, the exact role of an inhibitory mechanism (vs. selective attention/attentional control) has been questioned in regard to many behavioural measures of inhibition-related executive functioning (Egner & Hirsch, 2005). Furthermore, while some analyses have suggested that a common latent factor unites measures of inhibitory executive functioning (Miyake et al., 2000; Miyake & Friedman, 2012), other research indicates a lack of convergence between these measures (Egner, 2008). As each of our studies assesses the link between reported self-control and one executive functioning task, we are not able to assess links between introspective reports of the ability to control impulses and any latent executive functioning factor that is common to the Stroop and flanker tasks. Future work should explore this possibility. Another limitation is the previously mentioned reliability paradox (Hedge et al., 2017): robust cognitive tasks do not produce reliable individual differences, making their use as trait-level correlational tools problematic. Behavioural tasks only become well established when between-subject variability is low; however, low between-subject variability hurts reliability for individual differences and deflates correlations. One potential solution, albeit controversial, is to disattenuate correlations undermined by low reliabilities (Muchinsky, 1996). When we disattenuate the meta-analytic correlations, which range from r = .03-.06, they increase somewhat to r = .05-.08. Though slightly increased, this disattenuation suggests that the correlations between reported self-control and inhibition-related executive function are not low because of poor reliabilities, but because these measures are actually uncorrelated, with less than 0.7% shared variance. Furthermore, it should be noted that while non-difference scores from the Stroop task (e.g., mean reaction time on incompatible trials) demonstrated good reliability, Bayesian correlations supported a null relationship with reported self-control (see online supplemental materials). Together with the disattenuated correlations, these results suggest that the current findings are unlikely to be a direct result of poor reliability in the executive functioning tasks. Conclusion The current findings are consistent with a null relationship between performance measures of inhibition-related executive functioning (the Stroop and flanker tasks), the Self-Control Scale, and related measures of self-discipline (grit, conscientiousness). Our results highlight empirical and conceptual problems with self-control as a psychological construct, where widely used and established measures of self-control are largely unrelated to each other. Data Accessibility Statement The data and scripts are available on our OSF page (https://osf.io/8etus/). Notes 1 In addition to the classic Stroop effect, we manipulated the proportion of congruent to incongruent trials in the Stroop task (cf., Logan & Zbrodoff, 1979). However, this manipulation was not modelled as it is not central to the present work. The task comprised 576 trials divided equally into 9 blocks. Blocks were divided into groups of three that varied in the ratio of compatible to incompatible trials (75% compatible/25% incompatible; 50% compatible/50% incompatible; 25% compatible/75% incompatible).
Condition order was counterbalanced between participants, while the three blocks with equal proportions were always presented together. Proportion congruency conditions were collapsed for all analyses. It should be noted that neither the Self-Control Scale nor the self-discipline composite measure predicted the Stroop effect in the majority of conditions (all rs < -.144). There was a small negative correlation between the Stroop effect in error rates during the majority-compatible condition and the self-discipline composite (r = -.167, p = .044). However, this correlation should be interpreted with some caution given both the high p-value and the considerable number of comparisons undertaken in this analysis. 2 We replicated our Bayesian analyses using the non-difference scores (i.e., RT and error-rates on compatible and incompatible trials), and these are presented in the online supplementary materials. The results for the non-difference scores were mixed. In reaction times, we tended to see small positive correlations between reported self-control and performance on both compatible and incompatible trials, whereas smaller negative correlations were observed for error rates on the same trial types. This pattern of results is more consistent with reported self-control correlating with a slight speed-accuracy trade-off (slower overall RT and reduced error-rates), rather than an increase in inhibition-related executive functioning. 3 A sensitivity analysis using other reasonable choices of priors revealed no concerns that affect the conclusions of the present analyses. 4 One previous factor-analytical investigation suggested that the brief Self-Control Scale can be further divided into subscales of items reflecting initiatory and inhibitory self-control (de Ridder, de Boer, Lugtig, Bakker, & van Hooft, 2011). Bayesian correlations conducted in JASP supported a null relationship between the inhibition-related items and the Stroop effect in RT (r = .022, BF01 = 14.93) and error-rates (r = .077, BF01 = 23.86). 5 All results were identical when no exclusions were applied (see supplemental materials, https://osf.io/jws4x/). 6 Studies 2b and 2c had high rates of exclusion. While we cannot tell exactly why exclusion rates were so high in these studies, we think that the poor data quality likely arose from the use of online survey companies in which participant panels likely had less experience with behavioural tasks than MTurk participants. It is important to note, however, that our overall conclusions were already supported by the results of studies 1 and 2a without including studies 2b-c. However, we opted to include the later studies (after removing poorly performing participants) for the sake of transparent reporting. 7 Bayesian correlations conducted in JASP supported a null relationship between the inhibition-related items of the brief Self-Control Scale (cf., de Ridder et al., 2011) and the Stroop effect in RT (r = -.109, BF01 = 2.041) and error-rates (r = -.052, BF01 = 5.581). Funding Information Alexander Etz received funding from the National Science Foundation Graduate Research Fellowship Program #DGE-1321846, and grant #1534472 from the National
2018-09-29T02:54:40.054Z
2018-11-20T00:00:00.000
{ "year": 2018, "sha1": "06d3963f6f45b2802521c6246f54b364bbcb5067", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1525/collabra.134", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7a535010ab5da46428bb7ed8016095c9698a102d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
16859349
pes2o/s2orc
v3-fos-license
Cementation of Glass-Ceramic Posterior Restorations: A Systematic Review Aim. The aim of this comprehensive review is to systematically organize the current knowledge regarding the cementation of glass-ceramic materials and restorations, with an additional focus on the benefits of Immediate Dentin Sealing (IDS). Materials and Methods. An extensive literature search concerning the cementation of single-unit glass-ceramic posterior restorations was conducted in the databases of MEDLINE (Pubmed), CENTRAL (Cochrane Central Register of Controlled Trials), and EMBASE. To be considered for inclusion, in vitro and in vivo studies should compare different cementation regimes involving a "glass-ceramic/cement/human tooth" complex. Results and Conclusions. 88 studies were included in total. The in vitro data were organized according to the following topics: (micro)shear and (micro)tensile bond strength, fracture strength, and marginal gap and integrity. For in vivo studies survival and quality of survival were considered. In vitro studies showed that adhesive systems (3-step, etch-and-rinse) result in the best (micro)shear bond strength values compared to self-adhesive and self-etch systems when luting glass-ceramic substrates to human dentin. The highest fracture strength is obtained with adhesive cements in particular. No marked clinical preference for one specific procedure could be demonstrated on the basis of the reviewed literature. The possible merits of IDS are most convincingly illustrated by the favorable microtensile bond strengths. No clinical studies regarding IDS were found. Introduction Bonded glass-ceramic restorations have gained popularity, particularly after new materials, bonding systems, cements, and cementation techniques became available in recent years. Nowadays, various ceramics are available for posterior restorations, classified as either oxide-ceramics or glass-ceramics. Glass-ceramics are of special interest in this review because their silica content and micromechanical interlocking structure allow adhesive cementation to enamel and dentin. Consequently, glass-ceramic restorations can withstand tensile forces without cement failure, even if the preparation of the tooth is nonretentive. Since the surface treatment of feldspathic porcelain became available in 1983 [1], new materials have evolved into high-strength and esthetic glass-ceramics such as lithium disilicate. This higher strength compared to earlier glass-ceramics is achieved through a different firing process [2]. Contemporary glass-ceramic fixed dental crowns possess good optical and mechanical properties, thus mimicking natural teeth to a large extent [3][4][5]. To ensure proper attachment of an indirect restoration, basically two aspects have to be taken into consideration: conditioning of the ceramic material and conditioning of the tooth substrate followed by cementation. The most commonly used conditioning method for the glass-ceramic surface these days is the application of hydrofluoric acid followed by silanisation, as reviewed by Tian et al. [6]. Cements are considered necessary to obtain durable retention of the restoration and a good marginal seal, as well as to maintain the original color and marginal outline. The first dental luting agents were water-based cements like zinc phosphate and glass ionomer cements. With the introduction of resin cements, properties like solubility and adhesion improved, thereby allowing a minimally invasive preparation design [7].
Contemporary resin cements vary in properties like viscosity, whether or not they need light curing, and whether they are adhesive, self-etching, or self-adhesive. However, these cements require some kind of conditioning procedure of the tooth substrate and the indirect restoration. In addition, sealing of the dentin tubules with a filled adhesive resin directly after tooth preparation and prior to (digital or analogue) impression taking is presumed to result in improved bond strength, less gap formation, decreased bacterial leakage, and reduced dentin sensitivity [8]. This procedure may be highly clinically relevant and was first tested in vitro by Pashley et al. [9] and described in 1996 as the dual application of dentin bonding agents [10]. Later Magne referred to it as "Immediate Dentin Sealing" (IDS) [8]. Compared to luting with water-based cements, adhesive cementation is more difficult and time-consuming, and moisture control is more important. A clinical study showed a tendency to higher fracture rates among posterior crowns compared to anterior crowns, and indirect bonded restorations in molars revealed higher failure rates than premolar crowns [11]. Hence cementation of glass-ceramics in the posterior region appears clinically the most challenging and is thus of clinical relevance for further investigation. There is little homogeneity between studies in terms of materials, test method, and analysis. For in vitro studies four types of testing are predominantly applied: (micro)shear bond strength, (micro)tensile bond strength, fracture strength, and marginal gap. The outcomes of these studies are of importance as they could predict the long-term results of indirect restorations. A shear bond strength test evaluates the degree to which two attached specimens resist shear. A true shear test is difficult to perform because one of the specimens is always fixed to the test device. Instead, a microshear bond strength test is preferable, in which a cross-sectional area of 1 mm² is generally used for greater uniformity of stress distribution. This test results in more adhesive failures at the bonding interface instead of cohesive failures in the substrate, which is considered to be more realistic [6]. A tensile bond strength test is performed perpendicular to the bonded interface and is therefore generally adopted as the most valid bond strength test at this moment [12]. However, it is hard to control the alignment of the specimen, and nonuniform stress distribution across the bonding surface occurs. With a microtensile test the small size of the specimen leads to a more favorable stress distribution and to bond failures that lie closer to their ultimate strengths [13]. Fracture loading, fracture resistance, load-to-failure, breaking strength, and fracture strength are considered synonymous terms. They are used to indicate the stress at which a specimen fails by occlusal loading, and, in the following, the term "fracture strength" will be adopted. In general, restored teeth are progressively loaded occlusally until fracture by means of a stainless steel ball. Fracture strength and fracture type are the most common outcome parameters. The marginal gap reflects the quality of marginal adaptation and is commonly studied by means of microleakage experiments (e.g., with dye penetration or silver staining and/or by scanning electron microscopy (SEM)), either with or without thermocycling and with or without loading in a chewing simulator.
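For orientation, the bond strengths produced by the shear and tensile tests described above are simply failure loads normalized by the bonded area. The following minimal sketch uses hypothetical specimen dimensions, not values taken from any included study.

```python
import math

# Bond strength in MPa from a recorded failure load and bonded area.
# 1 MPa = 1 N/mm^2, so newtons divided by square millimetres gives MPa.
def bond_strength_mpa(failure_load_n, bonded_area_mm2):
    return failure_load_n / bonded_area_mm2

# Micro-specimens often use a ~1 mm^2 cross-section, so the failure load
# in newtons is numerically close to the bond strength in MPa.
print(bond_strength_mpa(18.0, 1.0))              # 18.0 MPa

# A conventional shear specimen, e.g. a resin cylinder 3 mm in diameter:
area = math.pi * (3.0 / 2.0) ** 2                # ~7.07 mm^2
print(round(bond_strength_mpa(120.0, area), 1))  # ~17.0 MPa
```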
With conventional nonadhesive restorations the size of the marginal gap is considered of paramount importance for the (quality of) survival of the restoration and should be as small as possible. The size of the marginal gap may not be as critical when using materials that can be luted adhesively to the tooth substrate, such as glass-ceramics. There appears to be a plethora of materials, cements, bonding systems, and cementation techniques for luting glass-ceramics to posterior teeth. The aim of this systematic review is to focus on cements and organize the current knowledge and the manner in which cements are used for the cementation of glass-ceramic materials and restorations, with an additional focus on the benefits of IDS. EMBASE. "dental ceramics"/exp OR ceramic*:ab,ti AND ("cementation"/exp OR "tooth cement"/exp OR cementation*:ab,ti OR "immediate dentin sealing":ab,ti OR luting:ab,ti OR lute:ab,ti OR "dental adhesives":ab,ti OR "resin coating":ab,ti) NOT (veneer*:ti OR posts*:ti OR implant*:ti OR zirconi*:ti OR alumin*:ti) NOT ("case report"/exp OR "review"/exp) AND [english]/lim. Run data search: January 1, 2015 (806 results). Study Selection. Titles and abstracts of the identified publications were screened by one of the authors. Full-text documents were obtained for all articles meeting the inclusion criteria. Additional hand searching was performed by following up on the reference lists from included articles. Full-text analysis to decide on inclusion/exclusion was subsequently performed by two reviewers, and Cohen's Kappa was used as the measure of agreement. Disagreements were resolved by manner of discussion. Methodological quality regarding the risk of bias in selected articles was assessed by one of the authors according to the criteria as set by the Cochrane Collaboration (Tables 1, 2, 3, 4, and 5). In case of multiple clinical studies in which the same restorations were analyzed at different time intervals, leading to different publications, the study with the longest follow-up was selected for definitive analysis. Inclusion Criteria. Only articles about glass-ceramic materials were considered. Clinically, the focus was on single-unit posterior restorations. Included studies should compare different cementation regimes and involve a "glass-ceramic/cement/human tooth" complex. Studies regarding the benefits of IDS attracted special attention. Descriptive studies (e.g., technical notes), systematic reviews, case reports, or studies with fewer than ten patients were excluded (Figure 1). Descriptions such as "selective double-bond technique," "resin coating technique," or "adhesive resin liner" were considered synonymous with IDS. Data Extraction. The included studies were divided into in vitro and in vivo studies. For in vitro studies the data were organized according to the following topics: (micro)shear and (micro)tensile bond strength, fracture strength, and finally marginal gap and integrity; for in vivo studies, survival and quality of survival were considered. [Figure 1: Excluded articles based on specific criteria: not a "glass-ceramic/cement/human tooth" complex / not a single restoration; not cementation as examined variable / results not specified for each cement; not intended outcome measure; systematic review / descriptive study or letter; anterior tooth or tooth number not specified; same research population / study retracted; no full text available in library.] Results The searches of MEDLINE (Pubmed), CENTRAL (Cochrane Central Register of Controlled Trials), and EMBASE resulted in 3008 publications.
After exclusion of double publications, 2117 publications remained for title and abstract analysis. 1121 articles were hereafter included for full-text analysis. Only a limited additional number of publications were found after checking the references of the included studies. Application of the specified exclusion criteria resulted in 88 publications that could be included in the review. The exclusion criteria are described in Figure 1. Interobserver agreement (Cohen's Kappa) regarding final inclusion or exclusion of studies that were proposed after full-text analysis was 0.80 (IBM SPSS 22), which is generally considered to be a strong level of agreement [14]. Initial disagreements were generally caused by ambiguities in the study design or the characterization of materials used. The included studies were assessed for their risk of bias according to the Cochrane library (Tables 1, 2, 3, 4, and 5). Assessment of allocation concealment and blinding of participants, personnel, and outcome assessors for included in vitro studies proved difficult and hardly ever applicable. Sequence generation and incomplete outcome data for in vitro studies are in most cases not explained but merely named. An assessment of "unclear" on incomplete outcome data generally implies that no missing data were reported. Most studies in this review did not report sequence generation; for in vitro studies the relevance of this can be a subject of debate. For in vivo studies sequence generation, allocation concealment, and blinding were often assessed as "unclear," because studies often did not describe these procedures. Overall the included studies had a low risk of bias. More specifically, a low risk of bias was assessed for shear bond strength studies, tensile strength studies, and marginal gap studies. An unclear risk of bias was assessed for fracture strength studies and in vivo studies. Because of their great variety it is important to divide contemporary resin cements into subgroups regarding their curing type, their viscosity, and whether they are either adhesive (with a 3-step adhesive), self-etching (with a 2-step or 1-step adhesive), or self-adhesive. This terminology is not used consistently in the literature. An overview is presented in Figure 2. Cements that are named in this study will be specified as one of these three types, which usually depends on the adhesive used. Cement and adhesive system brand names, manufacturers, city, and countries of origin are presented in Table 6. Generally, different cement brands, cement types, or cementation techniques were compared in the included studies (e.g., water-based cements, among which are zinc phosphate and glass ionomer cements, versus resin cements (e.g., Clearfil Esthetic cement)) in combination with several brands of glass-ceramic restorations. An overview of contemporary resin cements is presented in Figure 2. In only one study were different groups of luting agents used, and the authors concluded that zinc phosphate cement and glass ionomer cements produced the lowest shear bond strengths, whereas the highest shear bond strengths were found with two self-etching cements (Panavia F2.0 and Multilink) and one self-adhesive resin cement (RelyX Unicem) [15]. In Vitro Several studies (n = 7, [16][17][18][19][20][21][22]) compared different resin cements in a shear bond strength test. Adhesive cements produced significantly higher shear bond strength values to dentin [16,17]. When comparing self-adhesive cements with self-etching cements, the self-etching cements showed the highest bond strengths to dentin [18].
The included studies were assessed for their risk of bias according to the Cochrane library (Tables 1, 2, 3, 4, and 5). Assessment of allocation concealment and of blinding of participants, personnel, and outcome assessors proved difficult and hardly ever applicable for the included in vitro studies. Sequence generation and incomplete outcome data for in vitro studies are in most cases merely named rather than explained; an assessment of "unclear" on incomplete outcome data generally implies that no missing data were reported. Most studies in this review did not report sequence generation; for in vitro studies the relevance of this can be a subject of debate. For in vivo studies, sequence generation, allocation concealment, and blinding were often assessed as "unclear" because studies frequently did not describe these procedures. Overall, the included studies had a low risk of bias: a low risk of bias was assessed for shear bond strength, tensile strength, and marginal gap studies, whereas an unclear risk of bias was assessed for fracture strength studies and in vivo studies.

Because of their great variety, it is important to divide contemporary resin cements into subgroups according to their curing type, their viscosity, and whether they are adhesive (with a 3-step adhesive), self-etching (with a 2-step or 1-step adhesive), or self-adhesive. This terminology is not used consistently in the literature. An overview is presented in Figure 2. Cements named in this study are specified as one of these three types, which usually depends on the adhesive used. Cement and adhesive system brand names, manufacturers, cities, and countries of origin are presented in Table 6. Generally, the included studies compared different cement brands, cement types, or cementation techniques (e.g., water-based cements, among which are zinc phosphate cements, versus resin cements (e.g., Clearfil Esthetic cement)) in combination with several brands of glass-ceramic restorations. In only one study were different groups of luting agents used; the authors concluded that zinc phosphate cement and glass ionomer cements produced the lowest shear bond strengths, whereas the highest shear bond strengths were found with two self-etching cements (Panavia F2.0 and Multilink) and one self-adhesive resin cement (RelyX Unicem) [15].

In Vitro Studies. (Micro)shear Bond Strength (n = 7 studies). Several studies [16-22] compared different resin cements in a shear bond strength test. Adhesive cements produced significantly higher shear bond strength values to dentin [16,17]. When self-adhesive cements were compared with self-etching cements, the self-etching cements showed the highest bond strengths to dentin [18]. To enamel, a self-etching cement (Variolink II/Excite DSC) produced better results than another self-etching cement (Clearfil Esthetic cement/ED Primer II) [19]. When different self-etch resin cements were compared, Duo-Link showed the highest bond strength, followed by Variolink II (with Excite DSC), while Nexus 2 showed the lowest [20]. To dentin and enamel, the adhesive cement Variolink II and the self-etch cement Panavia F2.0 showed the highest shear bond strengths, with Variolink II reaching the highest values [21]. In another study a similar conclusion was reached, but with no difference between Panavia F2.0 and Variolink II [22]. Others, using a push-out test, concluded that an adhesive cement (Variolink II/Syntac) did not perform better than three self-adhesive cements [23]. To enamel, three different self-etching resin cements with different setting modes (dual-cure, light-cure, and flow) were compared in a microshear bond strength test; no significant differences were seen [24]. Four studies [25-28] focused specifically on the presumed benefits of IDS compared to Delayed Dentin Sealing (DDS). In two studies different dentin adhesives acted as the IDS, and the authors concluded that IDS did not alter the retentive strength of adhesively luted ceramic restorations with either of the tested bonding systems [25,26]. Two other studies concluded that IDS using Clearfil SE Bond resulted in improved shear bond strength compared to DDS [27,28]. The application of fluoride- or triclosan-based desensitizing agents prior to adhesive cementation did not influence the shear bond strength [29], nor did laser-etching of the dentin compared to a self-etch (Clearfil Esthetic) and an etch-and-rinse cementation procedure (Variolink II) [30]. Application of a silane coupling agent to the ceramic surface after etching with hydrofluoric acid increases the shear bond strength [31]. In summary, some evidence supports the use of adhesive cements, with respect to shear bond strength, over self-adhesive and self-etch systems when luting all-ceramic materials to human dentin. There is little evidence to support the assumption that IDS improves the shear bond strength; the positive findings concerned Clearfil SE Bond.

(Micro)tensile Bond Strength (n = 15 studies). Fifteen articles investigating the effect of different cements on glass-ceramic restorative materials with a (micro)tensile bond strength test could be included; their risk of bias is overviewed in Table 2. Studies comparing different resin cements either reached opposite conclusions about which cement, self-etching or self-adhesive, resulted in the highest tensile bond strength [33-35] or obtained similar results for each cement, be it adhesive, self-etching, or self-adhesive [36]. Values were still worse than those obtained using adhesive luting agents ([37] (personal communication) and [38]). In another study this was contradicted, because the self-etching cement performed better than the adhesive cement [39]. When a less commonly used self-etching adhesive system (Super-Bond C&B) was used, a higher tensile bond strength was obtained compared to two other self-etching cements [40]. It was hypothesized that the tensile bond strength depends not so much on the type of adhesive approach as on the chemical composition and viscosity of the cement used.
Interestingly, the use of a self-etch adhesive combined with a restorative composite (Clearfil SE Bond with Clearfil APX) yielded higher tensile bond strengths to dentin than dedicated self-adhesive, self-etch, and adhesive cements [39], but no such difference was found when the same material (Clearfil APX) was used with another bonding system (Linerbond 2V) [41]. Overall, autocure leads to a lower microtensile bond strength than dual-cure cement modes [42,43]. Precuring of the adhesive layer increased tensile bond strengths [43]. As before, tensile bond strengths were also higher for enamel than for dentin, for example in a study by Habekost et al. [44]. The effect of IDS on microtensile bond strength was tested in two studies: an IDS layer (one or two resin coatings) applied directly after preparation yielded higher values than applying it just prior to cementation or not at all; no temporary restorations were made [45,46]. In summary, no one particular cement or adhesive system, be it self-etching, self-adhesive, or adhesive, showed overall superior results with respect to (micro)tensile bond strength. IDS improved microtensile bond strength in both included studies.

Fracture Strength (n = 15 studies). Fifteen studies could be identified that met the inclusion criteria; their risk of bias is overviewed in Table 3. Seven studies [47-53] examined the effect of different cement groups, such as zinc phosphate, glass ionomer, or resin cements. Regardless of the preparation type, specimens with adhesively cemented crowns were stronger upon occlusal loading than those with conventionally cemented crowns [47]. Several other researchers came to a similar conclusion: zinc phosphate cements were associated with the lowest fracture loads [48], and adhesive cements increased the fracture load significantly compared to glass ionomer and zinc phosphate cement [49,50]. When two self-adhesive cements were compared with an adhesive cement and a glass ionomer cement, the self-adhesive cement (RelyX Unicem) revealed the highest fracture strength [51]. In one study the authors concluded that the cement type had no statistically significant effect on fracture resistance within the ceramic system [52], and in another study no differences in fracture strength were found between glass ionomer, zinc phosphate, and composite resin cements [53]. Seven further studies [44,54-59] examined the performance of different resin cements. Different variations of dentin bonding agents and resin luting materials were tested ((1) Mirage ABC and Mirage FLC; (2) Metabond; (3) All-Bond 2 and Duo-Link; (4) Scotchbond Multipurpose and the 3M indirect porcelain bonding kit; (5) Mirage ABC and the 3M indirect porcelain bonding kit) with Mirage porcelain crowns luted to premolars. The last two groups produced higher fracture strengths than the other three, suggesting that the 3M indirect porcelain bonding kit was of significant influence [54]. In a study comparing two different dual-cure resin cements it was unclear which adhesive system was used with each cement, so the cements cannot be classified as adhesive, self-etching, or self-adhesive; the authors hypothesize that cements with a higher flexural modulus exhibit higher fracture resistance for the ceramic/tooth assembly [55]. Others also suggest that the modulus of elasticity or the preparation design may be of larger influence than the adhesiveness of resin cements [44,56].
In one study the authors concluded that the cement type had a significant effect on fatigue resistance, in favor of the self-etching Panavia F2.0 [57], but other authors concluded that Panavia F performed the poorest compared to other dual-cured resin cements [58]. When a dual-cure cement (RelyX ARC) was compared with a light-cure cement (RelyX Veneer), no significant differences in loads at failure were seen between the tested cement groups [59]. One study described the effect of the thickness of IDS materials (Clearfil SE Bond and Protect Liner F) on the fracture strength of IPS Empress II crowns cemented with Panavia F: the film thickness formed by Clearfil SE Bond and Protect Liner F increased the fracture load of the IPS Empress II crowns [60]. In summary, with respect to the in vitro fracture strength of adhesively cemented posterior specimens, teeth restored with an indirect glass-ceramic restoration exhibit higher fracture strength when adhesive cements are used. The literature is inconclusive about the preferred type of resin cement; the modulus of elasticity is considered more important than the type of resin cement. No data were found in the literature on fracture strength using contemporary glass-ceramics such as lithium disilicate, so extrapolation of these findings to current materials and cementation protocols should be done only with great reservation. Little evidence supports the use of IDS for increasing the fracture load [60].

Marginal Gap and Marginal Integrity (n = 26 studies). Twenty-six studies could be identified that met the inclusion criteria; their risk of bias is overviewed in Table 4. The effect of different viscosities was given special attention by several authors. The in vitro studies focusing on marginal gap and marginal integrity are too numerous to allow for individual discussion; therefore, the relevant findings evolving from these studies are outlined below. A consistent finding is that the least microleakage and the best marginal adaptation are obtained when using a resin cement [50,61-64]. These cements are also the least affected by artificial ageing. A glass ionomer cement exhibited a considerable drop in marginal adaptation after thermocycling, a finding that seems relevant to clinical practice [51]. Four studies [65-68] focused on the effect of resin cements with different viscosities on marginal adaptation when luting a glass-ceramic restoration. The degree of viscosity was generally referred to as "high" (e.g., Variolink Ultra; Microfil Pontic C; Cerec Duo Cement; Spectrum-TPH) or "low" (e.g., Variolink II; Nexus-high), without further physical description of the terms "high" or "low." Both the initial size of the gap and the viscous properties of the luting agent were found to influence the final marginal (and also internal) gap width and the marginal integrity. For relatively small discrepancies between the outline of the preparation and the margin of the restoration, low- and high-viscosity cements result in similar interface widths after cementation [65]. A highly viscous cement is recommended for restorations with a larger luting space [66,67]. Even luting spaces greater than 100 μm can be partially compensated by a resin cement; in such cases highly viscous, filled composite cements are recommended when considering the quality of postcementation marginal integrity [68]. In a study involving the cementation of partial crowns, preparation design had no influence on the size of the marginal gap [63].
Five studies [46,75,80-82] investigated the potential benefit of IDS on the marginal gap. A temporary restoration was provided in only one of the studies [80]. In two studies the flowable composite extended to the cervical margin [75,81], whereas in the other studies contamination of the margin with resin material was avoided [80,82], which seems a relevant difference when looking at marginal adaptation. In most studies, less microleakage was seen when applying IDS compared to no IDS [75,80-82]. However, one study found that IDS made little difference in reducing microleakage at the dentin interface and even increased it at the enamel interface [46]. In summary, adhesive resin cements showed the least microleakage and are the least affected by artificial aging. With a large marginal gap a highly viscous cement is recommended; when the gap is smaller, there is no advantage, but also no disadvantage, to using a highly viscous cement ("small" and "large" are not further specified). Compared to enamel, there was generally more microleakage in dentin. There was little proof that with etch-and-rinse systems a higher percentage of gap-free margins could be obtained in enamel compared to dentin; with self-etching and self-adhesive systems, equivalent or even more gap-free margins were reached in dentin. IDS was generally considered of merit in reducing microleakage.

In Vivo Studies (n = 20 studies). Twenty clinical studies on glass-ceramic restorations compared different cementation protocols, but protocols and materials were seldom similar among studies. Their risk of bias is overviewed in Table 5. Clinical performance is described as survival or success, often with additional qualitative measures such as the USPHS criteria (United States Public Health Service criteria) and the CDA criteria (California Dental Association criteria). Mirage fired feldspathic restorations were luted with either a dual-cure composite (Mirage) or a glass ionomer luting cement (Fuji I), resulting in 2% and 15% lost or fractured restorations, respectively, after a maximum observation period of 3 years. The predominant complication was adhesive bond failure at the cement-porcelain interface [83], as also concluded by others [84]. Clinically, good marginal adaptation and marginal seal, and consequently little marginal discoloration, as well as good wear resistance, were observed according to the USPHS criteria; no difference was seen between the cementation procedures. Marginal breakdown of this type of restoration cemented with glass ionomer was also seen in a different study [85]. In another, similar study, restorations could be evaluated after 6 years, with 12% and 26% failures, respectively; the difference was already obvious at the 3-year recall [86]. In contrast to the former study, a deterioration of qualitative parameters was seen during the initial 3 years when judged according to the USPHS criteria for marginal adaptation and surface roughness for the dual-cure cement group, and even more so for the glass ionomer group. The use of a light-cured (Mirage) instead of a dual-cured adhesive cement (Mirage FLC) presumably caused incomplete curing of the cement because of insufficient penetration of the light through the inlays, with a concomitant reduction in fracture strength [87]. This insufficient penetration was associated with 80% versus 20% fracture of the Mirage restorations after a mean observation period of just over one year, especially in thin restorations (<2 mm).
These restorations were so thin because a lining cement (Dycal or a glass ionomer) was used in the case of deep preparations. A similar protocol to protect the vital pulp was adopted in the study by van Dijken et al. [86], which should be kept in mind when extrapolating the results to other situations or to current cementation protocols. In another split-mouth study, Cerec (Vita Mark II) inlays were cemented with either a dual-cured (Vita Cerec Duo Cement, Vita) or a chemically cured resin cement (Cavex Clearfil F2) and evaluated according to the criteria of the California Dental Association. Twenty-three percent of the restorations were replaced within a 10-year period, all from the dual-cured resin cement group. Possibly, the self-curing capacity of the dual-cured resin cement was insufficient to achieve adequate hardening to withstand the stresses and strains that can arise in posterior regions. Although no differences in qualitative parameters were reported between baseline and the 10-year evaluation, acceptable scores for marginal discoloration after 10 years were seen less frequently in the dual-cured than in the chemically cured cement group (58% versus 78%) [88]. Klink and colleagues also used Vitablocs Mark II full crowns, partial crowns, and inlays, luted with either Variolink II or RelyX Unicem. According to the CDA criteria, inlays and partial crowns performed well; the prevalence of complications or failure was highest for crowns. They concluded that success was related to patient factors and restoration type, not to the luting protocol [89]. Others also found that the resin cement type had no influence on success using the same ceramic material [90]; it is noteworthy that the margins were entirely in enamel. In a study by Gemalmaz and colleagues, two adhesive cements (Variolink Ultra and Enforce) and a glass ionomer cement (Geristore) were used to lute Ducere LFC ceramic inlays, resulting in 13%, 13%, and 33% failures, respectively, after a little more than 2 years. Margins were evaluated by SEM on gypsum models. Deterioration of marginal adaptation, the rate of submargination, and marginal discoloration of the surviving restorations luted with the glass ionomer cement were markedly inferior to those luted with the other two cements, with the restorations cemented with Variolink Ultra performing best [91]. In a prospective dual-center study, the clinical behavior of adhesively luted pressed glass-ceramic restorations (Cergogold) was evaluated using two cementation regimens (personal communication): one group of restorations was luted with Definite Multibond primer with the corresponding adhesive and Definite cement, and the other with Syntac Classic (3-step) with Variolink Ultra cement. Survival rates were 93% and 95%, respectively, after 4 years, with the first group exhibiting more hypersensitivity shortly after cementation of the restoration (27% versus 0%). Hence both luting protocols provided similar results when compared according to the USPHS criteria and by SEM [92]. A similar conclusion was reached in a different study by the same group involving other patients after 4 years of clinical service [93]: two operators luted Cergogold inlays in 39 patients using the same protocols as previously described, and considerable interoperator differences were observed with respect to the annual failure rate (0.6% versus 6.2%).
Lithium disilicate restorations were cemented with either a commercially available self-etching dual-curing cement (control, Multilink Automix) or a self-adhesive dual-curing "experimental" cement originating from the same company. Both cements yielded qualitatively similar results after 2 years of function, as assessed by the modified USPHS criteria, and all restorations functioned for 2 years without crown fracture or surface chipping. The undisclosed nature of the experimental cement leaves little room for practical comparison or interpretation, and the publication did not mention the type of restoration provided (full, circumferential, or partial) [94]. For this restoration type, inlays luted with a resin-modified glass ionomer cement (Fuji Plus F) or a self-cured resin composite cement (Panavia 21) yielded similar results after 5 years [95]. IPS Empress (leucite-reinforced glass-ceramic) restorations were cemented with different adhesive approaches and can function successfully for 15 years [96]. Others also saw good long-term results but described a significant amount of deterioration of marginal adaptation in the long run, even though modern adhesive procedures were used; overall failure rates of this type of restoration were in the order of 8-10% after 10 years [97-99]. A classic etch-and-rinse approach (Syntac Classic/Variolink II) produced better marginal integrity when cementing leucite-reinforced glass-ceramic inlays than a contemporary self-adhesive resin cement (RelyX Unicem) after 2 years in function [100]. Another author favored dual-cure cements on the basis of 12-year results [101], whereas the viscosity of the cement (low versus high) had no influence on success in a large prospective study after 10 years [102]. In conclusion, most of the included, rather heterogeneous clinical studies involve relatively old, no longer available restoration types or systems. The use of lining cements in several older protocols challenges external validity. Cementation protocols involving glass ionomer cements generally (but not always) result in more fracture and loss of restorations, as well as poorer qualitative performance of the surviving restorations, compared to protocols involving adhesive resin cements. Studies comparing cementation protocols for more contemporary restorative materials (lithium disilicate) are rare and involve self-etching, self-adhesive, or adhesive procedures; none of these cementation protocols can be considered clearly superior in clinical performance on the basis of the reviewed literature. There is limited evidence that light-cured resin cements perform worse than dual-cured cements, whereas solely chemically cured resin cements perform best. Results obtained with technically challenging adhesive cementation procedures may be operator-dependent. Marginal deterioration is frequently reported, even when adhesive cements are used. No clinical studies evaluating the potential benefits of IDS protocols were identified.

Discussion. This review aims to organize the knowledge regarding the cementation of glass-ceramic restorations, particularly posterior single-unit ones, with special emphasis on the possible merits of IDS. The topic is of interest to the clinician because of the growing number of all-ceramic restorations being placed. They substitute for metal and metal-ceramic crowns and are attractive because they are relatively cheap, in light of the current gold price and their manufacturing cost, and because of their superior esthetics.
In early years, glass-ceramics were cemented with conventional cements, such as glass ionomers, which have limited adhesive properties. This is reflected in the results, as demonstrated in this review, and consequently challenges the external validity of extrapolating data from those studies to contemporary, strengthened glass-ceramics (leucite-reinforced glass-ceramic and lithium disilicate). By removing superficial glass content through etching, glass-ceramics can be cemented adhesively, which allows nonretentive preparation forms that preserve sound tooth tissue. This may help avoid endodontic complications. Bonding to dentin has traditionally been considered more challenging than bonding to enamel. IDS may provide better results with respect to bonding capacity, and it is possibly also friendlier to the pulp. Over 3000 studies were initially identified for this review, but many were discarded, predominantly because they did not compare different cementation protocols or did not evaluate a "glass-ceramic/cement/human tooth" complex. The selection of articles in the English language only may have introduced some bias. The in vitro and in vivo studies that were included proved highly heterogeneous. Consequently, they do not allow meta-analysis or meaningful grouping, because of different test methods (e.g., tooth and substrate preparation; dimension and geometry of the restoration or tested ceramic; tooth number; storage conditions; artificial aging/thermocycling or not; cyclic loading or not; cementation protocols (e.g., a single or a double adhesive layer); testing machines; standardization of the test method; crosshead speed of the testing device and the size of the steel ball during instrumentation; the use of a "stress breaker" such as a rubber dam; film thickness of luting cements; or (lack of) definition of outcome parameters, particularly the mode of failure). It was decided to include studies only if they compared cements or cementation procedures, thus correcting for the heterogeneity to some extent. It was often complicated to categorize the cementation procedures as "adhesive," "self-etching," or "self-adhesive" because of the chosen bonding agents and the confusing way in which they were applied and described. With respect to the application of IDS, the terminology and the clinical application of this procedure differ across the literature. The present authors regard IDS as a procedure in which a resin layer is applied immediately after preparation, followed by impression taking and the provision of a temporary restoration in combination with a temporary cement. Eventually this restoration is replaced by a glass-ceramic one, which is luted to the reactivated IDS layer and the uncovered tooth structure by means of a resin cement. In the current review, when no temporary restoration was provided in an evaluated study, the procedure is referred to as a "resin coating," which is fundamentally different. The manner in which such an intermediate layer is applied and conditioned is also expected to be of influence and often differed among the included studies. Nevertheless, and possibly as a result of the rather rigorous inclusion and exclusion criteria, the included studies are generally considered of good methodological quality as evaluated by the Cochrane Collaboration's risk-of-bias tool. In vitro studies identify some differences in outcome resulting from the tested protocols or variables.
These differences are generally not reflected in the cruder clinical outcome measures, such as survival of a restoration, reported in in vivo studies. Therefore it is tentatively suggested that, when luting modern glass-ceramics to posterior teeth, the adhesive protocols that are most operator- and patient-friendly may be preferred.

Conclusion. Bearing in mind the shortcomings and limitations of this review as described above, the following conclusions are drawn. From in vitro studies it can be concluded that adhesive systems (3-step, etch-and-rinse) show the best (micro)shear bond strength values compared to self-adhesive and self-etch systems when luting to human dentin. For (micro)tensile strength values or evaluation of the marginal gap, no such preference can be identified on the basis of the reviewed literature. The highest fracture strength is obtained using adhesive cements rather than water-based cements such as glass ionomer. Clinical studies comparing cementation protocols for contemporary restorative glass-ceramic materials (lithium disilicate) are rare and involve self-etching, self-adhesive, and adhesive procedures. No marked clinical preference for one specific procedure could be demonstrated on the basis of the reviewed literature. Few studies focus on the possible merits of IDS. The benefits are most convincingly illustrated by the favorable microtensile bond strengths when compared to negative or positive controls in vitro. No clinical trials have been performed, and no deleterious clinical consequences, be they objective or subjective, have been reported.
Coupling of mouse olfactory bulb projection neurons to fluctuating odour pulses

Odours are transported by turbulent air currents, creating complex temporal fluctuations in odour concentration that provide a potentially informative stimulus dimension. Recently, we have shown that mice are able to discriminate odour stimuli based on their temporal structure, indicating that information contained in the temporal structure of odour plumes can be extracted by the mouse olfactory system. Here, using in vivo extra- and intracellular electrophysiological recordings, we show that mitral and tufted cells (M/TCs) of the male C57BL/6 mouse olfactory bulb can encode the dominant temporal frequencies present in odour stimuli up to at least 20 Hz. A substantial population of cell-odour pairs showed significant coupling of their subthreshold membrane potential with the odour stimulus at both 2Hz (29/70) and the supra-sniff frequency 20Hz (24/70). Furthermore, M/TCs show differential coupling of their membrane potential to odour concentration fluctuations, with tufted cells coupling more strongly for the 20Hz stimulation. Frequency coupling was always observed to be invariant to odour identity, and M/TCs that coupled well to a mixture also coupled to at least one of the components of the mixture. Interestingly, pharmacological blocking of the inhibitory circuitry strongly modulated the frequency coupling of cell-odour pairs at both 2Hz (10/15) and 20Hz (9/15). These results provide insight into how both cellular and circuit properties contribute to the encoding of temporal odour features in the mouse olfactory bulb.

Introduction-Temporal structure has long been considered an integral part of sensory stimuli, notably in vision (Buracas et al., 1998; Kauffmann et al., 2014; Kauffmann et al., 2015; Borghuis et al., 2019; Chou et al., 2019; Wang et al., 2019) and audition (Nelken et al., 1999; Theunissen and Elie, 2014; VanRullen et al., 2014; Deneux et al., 2016). Odours in natural environments are transported by turbulent air streams, resulting in complex spatiotemporal odour distributions and rapid concentration fluctuations (Shraiman and Siggia, 2000; Celani et al., 2014; Pannunzi and Nowotny, 2019; Crimaldi et al., 2021; Marin et al., 2021). The neuronal circuitry of the olfactory system, particularly in invertebrates, has been shown to support the encoding of temporal structure present in odour stimuli (Hendrichs et al., 1994; Vickers and Baker, 1994; Vickers et al., 2001; Szyszka et al., 2014; Huston et al., 2015; Pannunzi and Nowotny, 2019). Temporal features of odour stimuli, such as differences in stimulus onset, were shown to be detectable at a behavioural level by bees (Szyszka et al., 2012; Sehdev et al., 2019). In mammals, recent reports have indicated that the neural circuitry of the early olfactory system readily sustains temporally modulated and precise action potential discharge (Cury and Uchida, 2010; Shusterman et al., 2011; Gupta et al., 2015) and is able to relay information about optogenetic stimuli with ~10 millisecond precision (Li et al., 2014; Rebello et al., 2014). Furthermore, we have recently shown that mice can utilise information in the temporal structure of odour stimuli at frequencies as high as 40 Hz to guide behavioural decisions. The olfactory bulb (OB) is the first stage of olfactory processing in the mammalian brain.
Olfactory sensory neurons (OSNs) in the nasal epithelium convert volatile chemical signals into electrical activity, forming the input to the OB. Air flow through the nasal cavity, transport of odours through the mucus, and the multi-step biochemical signal transduction together result in slow odour responses in OSNs (Sicard, 1986; Reisert and Matthews, 2001), creating the general notion that mammalian olfaction has a limited temporal bandwidth. While OSN activity indeed reflects a low-pass-filtered version of the incoming odour signal (Verhagen et al., 2007), information about different frequency components can still be present in OSN population activity (Nagel and Wilson, 2011). Moreover, circuit mechanisms in other brain regions and species have been shown to boost high-frequency content and sharpen stimulus representation (Tramo et al., 2002; Atallah and Scanziani, 2009; Nagel et al., 2015; O'Sullivan et al., 2019). Given the intricate circuitry present in the OB, where multiple types of interneurons process incoming signals (Aungst et al., 2003; Fukunaga et al., 2012; Kato et al., 2013; Miyamichi et al., 2013; Fukunaga et al., 2014; Banerjee et al., 2015; Burton, 2017), we decided to investigate whether the OB circuitry plays a role in representing and processing the temporal features of odour stimuli. Here, we show that mitral and tufted (M/T) cells, the OB output neurons, respond to odours temporally modulated at frequencies of 2-20 Hz in a frequency-dependent manner. Using whole-cell recordings, we show that subthreshold M/T cell activity in vivo can follow odour frequencies at both the sniff and supra-sniff range for monomolecular odours and odour mixtures. We observe that, while putative tufted (pTC) and mitral cells (pMC) show similar frequency-coupling capacity at 2Hz, tufted cells have a higher propensity to follow odour frequencies at 20Hz. Pharmacologically "clamping" GABA receptors (Fukunaga et al., 2012), we show that local inhibition in the OB strongly modulates the frequency coupling of M/T cells.

Surgery-A head implant was positioned such that its most anterior point rested approximately 0.5 mm posterior to the bregma line. Dental cement (Paladur, Heraeus Kulzer; Simplex Rapid Liquid, Associated Dental Products Ltd.) was then applied around the edges of the implant to ensure firm adhesion to the skull. A craniotomy over the right olfactory bulb (approximately 2 mm diameter) was made with a dental drill (Success 40, Osada) and then immersed in ACSF (NaCl (125 mM), KCl (5 mM), HEPES (10 mM), pH adjusted to 7.4 with NaOH, MgSO4·7H2O (2 mM), CaCl2·2H2O (2 mM), glucose (10 mM)) before removing the skull with forceps. The dura was then peeled off using a bent 30G needle tip. Following surgery, mice were transferred to a custom head-fixation apparatus with a heatpad (RS Components) connected to a DC temperature controller (FHC) for recording. The animals were maintained at 37 ± 0.5 °C.

Unit Recording-A Neuronexus poly3 probe was positioned above the OB craniotomy. An Ag/AgCl reference coil was immersed in a well of dental cement constructed around the craniotomy. The reference wire was connected to both the ground and the reference of the amplifier board (RHD2132, Intan), which was connected (Omnetics) to a head-stage adapter (A32-OM32, Neuronexus). The probe, after being zeroed at the OB surface, was advanced vertically into the dorsal OB at <4 μm/s. This was continued until the deepest channels showed a decrease in their recorded spikes, indicating the end of the dorsal mitral cell layer.
This was largely in the range of 400-600 μm from the brain surface. The signal from the probe was fed into an OpenEphys acquisition board (https://open-ephys.org/acquisitionsystem/eux9baf6a5s8tid06hk1mw5aafjdz1) and streamed through the accompanying GUI software (https://open-ephys.org/gui). The data were acquired at 30 kHz and displayed both in raw format and in a band-pass-filtered (300 Hz-6 kHz) format; the band-passed format was used primarily to visualise spikes across channels during the recording. The temporal structure of the odour stimulation was created using the VHS valves, while blank valves helped keep the air flow constant throughout the stimulation period (Fig 1B). The start of a stimulation was always triggered on the start of inhalation, which was continuously monitored online using a flow sensor (AWM2000, Honeywell). A minimum inter-trial interval of 8 s was used for all experiments. The onset pulse was passed to the OpenEphys acquisition board so that the trial trigger was recorded simultaneously with the neural data. A total of 800 trials were presented during the experiment, consisting of 32 repeats of 5 different frequencies for 4 odours and 1 blank. Each trial lasted 2 seconds and was separated by a minimum of 8 seconds between the offset of one trial and the onset of the following trial.

Whole-cell Recording-After zeroing the pipette tip position at the OB surface, we advanced the tip to a depth of ~200 μm from the surface. Next, we stepped at 2 μm/s to hunt for a cell in a similar manner as described before (Margrie et al., 2002; Margrie and Schaefer, 2003; Jordan, 2021). Upon a successful hit, we released the positive pressure to achieve a gigaseal, and gentle suction then helped achieve the whole-cell configuration. We swiftly shifted to current-clamp mode to start a recording. Series resistance was compensated and monitored continuously during recording; neurons showing series resistance >25 MΩ were discarded from further analysis. The vertical depths of recorded neurons reported (e.g., Fig 4B & Fig 6E-F) are vertical distances from the brain surface. Respiration was recorded using a mass flow sensor (A3100, Honeywell) and digitized at 10 kHz. The GABA-A clamping experiments were performed as described before (Fukunaga et al., 2012). Briefly, muscimol and gabazine (Tocris Biosciences) were dissolved in ACSF to achieve final concentrations of 2 mM (muscimol) and 0.4 mM (gabazine). In a subset of experiments, this solution was superfused after ~10 minutes of recording under control conditions.

Odour stimulation (whole-cell recording)-Odours were presented as mixtures of monomolecular odorants mixed in a 1:1 ratio, which was then diluted in mineral oil in a 3:7 ratio. Odour A (ethyl butyrate + 2-hexanone) and odour B (isopentyl acetate + eucalyptol) were used for the in vivo patch-clamp experiments. Odour presentations were triggered on the onset of inhalation of the mouse, as described for the unit recordings. The temporal structure of the odour pulses and the triggering of the blank valves were as in the unit recording experiments described above. A minimum inter-trial interval of 8 s was used for all experiments.

Analysis. Fidelity-Fidelity was defined here as the peak-to-trough value of each square pulse normalised to the peak-to-baseline value. A fidelity of 1 therefore indicates that the odour signal fully returns to the baseline value between subsequent pulses, while a fidelity of ~0 for the flow indicates an almost continuous square pulse of air flow devoid of temporal structure.
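To make the definition concrete, the fidelity of a recorded pulse train could be computed roughly as follows. This is a minimal sketch assuming a PID trace `pid` (a NumPy array) with known pulse on/off sample indices; it is illustrative, not the authors' analysis code.

```python
import numpy as np

def pulse_fidelity(pid, onsets, offsets, baseline):
    """Peak-to-trough of each square pulse normalised to peak-to-baseline.

    A value of 1 means the signal fully returns to baseline between
    pulses; ~0 means an essentially continuous pulse.
    """
    fidelities = []
    for on, off, nxt in zip(onsets, offsets, onsets[1:]):
        peak = pid[on:off].max()        # pulse peak
        trough = pid[off:nxt].min()     # lowest point before the next pulse
        fidelities.append((peak - trough) / (peak - baseline))
    return np.array(fidelities)

# Hypothetical usage with a synthetic 2 Hz square pulse train at 1 kHz:
fs = 1000
t = np.arange(0, 2, 1 / fs)
pid = 0.05 + 0.95 * (np.sin(2 * np.pi * 2 * t) > 0)
sig = (pid > 0.5).astype(int)
onsets = np.flatnonzero(np.diff(sig, prepend=0) == 1)
offsets = np.flatnonzero(np.diff(sig, prepend=0) == -1)
print(pulse_fidelity(pid, onsets, offsets, baseline=0.05))  # ~1.0 per pulse
```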
Odour-Respiration Convolution-Photo-Ionization Device (PID) traces of the odour stimuli were recorded from the same position the mice occupied during the unit recording experiments. Each frequency was presented 4 times, with two different odours (ethyl butyrate and isoamyl acetate) presented randomly with 10 s inter-trial intervals, replicating the actual odour presentation to the animals. The average signal from the 4 repetitions was used for the odour-sniff convolution outlined below. Convolution step: first, the respiration signal was high-pass filtered, flipped, and median-subtracted (such that inhalation was now positive and exhalation negative). All values below 0 (and therefore any linked to exhalation) were set to zero. Using a peak-finding function, the times and heights of the inhalation peaks were recorded. The respiration trace was deemed to be in the inhalation phase when the signal had reached 5% of the total inhalation peak value, and was deemed to have reached the end of inhalation when this threshold was crossed again after the peak. The PID odour signal was resampled using scipy.signal.resample to the same sampling frequency as the respiration signal (from 10 kHz up to 30 kHz). The inhalation-only signal was convolved with the PID signal of the stimuli; the resulting odour signal was therefore modulated by the flow rate at any given time point. This convolved signal was then summed over the same time windows as in the binary convolution. The convolutions were repeated for all presentations during the experiments using the same averaged PID signals. To compare the odour signal between all frequencies, we applied Mann-Whitney U tests to the distributions of the total odour calculated. The significance values for these tests were subjected to a Bonferroni correction to account for multiple comparisons.

Single Unit Responses-Unit responses to blank stimuli were first subtracted from the responses to odour stimuli at the same frequency. These subtracted responses were then averaged across repetitions of the same frequency to produce averaged, blank-subtracted cell responses (one per frequency). These responses were then z-scored so that they had an average response of 0 and a standard deviation of 1, with each unit represented by an associated five-value z-scored response vector. These z-scored responses were clustered using the scipy.cluster.hierarchy.linkage function, which groups units together by the distances between responses in the 5-dimensional space they occupy. The output was optimally ordered, ensuring that the global distances between neighboring cells were minimized.
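The z-scoring and optimally ordered clustering step could be sketched as follows, assuming a `responses` array of shape (n_units, 5) holding the blank-subtracted responses; the random data here are placeholders, not the recorded units.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
responses = rng.normal(size=(97, 5))  # placeholder: one row per unit, one column per frequency

# z-score each unit's five-value response vector (mean 0, SD 1 per unit)
z = (responses - responses.mean(axis=1, keepdims=True)) \
    / responses.std(axis=1, keepdims=True)

# hierarchical clustering on distances in the 5-dimensional response space,
# with optimal leaf ordering so that neighboring units are maximally similar
Z = linkage(z, method="average", optimal_ordering=True)
order = leaves_list(Z)  # unit ordering, e.g. for displaying a response heatmap
print(order[:10])
```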
pTC vs pMC classification-We classified our recordings (both unit and whole-cell) into putative tufted cells and putative mitral cells based on the methodology in Fukunaga et al. (2012). We detected the exhalation peaks of every sniff cycle in the baseline period of a given recording, which were then segregated into single sniffs with the corresponding spike-clipped membrane potential (whole-cell) or spikes (unit recording). The membrane potential was then averaged over all sniff cycles, while an average spike probability vector was created from the unit recording. Next, the sniff cycle and the thus obtained membrane potential or spike probability were converted into a phase plot, with phase 0 indicating peak exhalation. The preferred phase of the resulting membrane depolarisation or spiking probability was then detected and considered 'the phase of respiration coupling' for the given cell/unit. Next, we classified cells as pTC if the phase of respiration coupling was in the range of 0-160° and as pMC if it was in the range of 190-350°. Cells whose phase fell in neither of these ranges were not assigned to either class.

Spike sorting-Kilosort2 (https://github.com/MouseLand/Kilosort2) was used to spike-sort detected events into 'clusters'. Clusters were then manually curated using phy2 (https://github.com/cortex-lab/phy) and assigned a 'good', 'mua', or 'noise' label depending on whether they were considered to be made up of neural spikes (good and multi-unit activity) or false detections (noise). The clusters made up of spikes were further divided into good or mua depending on whether they were thought to contain spikes from a well-isolated single unit or not. A 'good' unit is characterised by a well-defined rest period in its auto-correlogram, a characteristic spike waveform, and a stable firing rate and spike amplitude (although these can both vary throughout the recording) (Fig 2D-F). Only good clusters were used for further analysis.

Linear Classifier-97 clusters across 6 animals were grouped together and used for classification. The windows used to bin spikes varied both in size and in start time relative to odour onset. Window starts spanned 0 s to 3.99 s from odour onset, and window sizes ranged from 10 ms to 4 s. No classifier considered spikes from more than 4 s after odour onset; therefore, a classifier with a 500 ms window could start between 0 s and 3.5 s, and one with a 1 s window could start up to 3 s from odour onset. This full range of starts and widths was used to determine both the time after odour onset at which frequencies could be distinguished and the time window required. The data sets were always of dimensions 160 x 97, where 160 is the number of trials and 97 the number of clusters. The unit responses were scaled to have a mean response of 0 and an SD of 1. The data were split into a training (80%) and a test set (20%). The training set was used to train a linear Support Vector Machine (SVM) with a low regularisation parameter, which translates to fewer restrictions on the weightings assigned to each cluster by the classifier. Once the classifiers had been trained, they were tested on the remaining 20% of trials, which were picked at random. This training and testing were repeated 1000 times with a random selection of testing trials each time. Classifiers were then trained on the same data but with their labels shuffled. To test how accuracy varied with the number of clusters, random subsets of clusters were selected and used to train and test classifiers. Classifiers were then trained and tested on all one-to-one combinations of trial types from the experimental data set; in this case a classifier was trained on all but two trials, one from each of the two trial types present in the training data, and chance was 50%. Finally, a series of classifiers was trained on all frequencies across all odours, with a single trial of every type withheld for the test set. As there were 20 trial types in total (5 frequencies with 4 odours), chance was 4%. Single Cell Classifier: to test the accuracy of single cells in separating 2Hz from 20Hz, a series of classifiers was trained and tested, by the method outlined above, on single-cell responses to the 2Hz and 20Hz stimuli. These classifications were repeated 1000 times with different splits in the StratifiedKFold shuffle.
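In outline, both the population and single-cell decoding could follow a pattern like the sketch below (scikit-learn), using stratified 80/20 splits. The spike-count matrix here is synthetic, and the exact regularisation value is an assumption, since the text only specifies a "low regularisation parameter" (i.e., little regularisation, which corresponds to a large C in scikit-learn's parameterisation).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.poisson(5.0, size=(160, 97)).astype(float)  # trials x clusters, synthetic counts
y = np.repeat([2, 5, 10, 15, 20], 32)               # stimulus-frequency label per trial

accuracies = []
skf = StratifiedKFold(n_splits=5, shuffle=True)     # stratified 80/20 train/test splits
for train_idx, test_idx in skf.split(X, y):
    scaler = StandardScaler().fit(X[train_idx])     # scale responses to mean 0, SD 1
    clf = SVC(kernel="linear", C=100.0)             # little regularisation (large C)
    clf.fit(scaler.transform(X[train_idx]), y[train_idx])
    accuracies.append(clf.score(scaler.transform(X[test_idx]), y[test_idx]))

# With random data this hovers around the 5-class chance level of 0.2
print(f"mean accuracy: {np.mean(accuracies):.2f}")
```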
The cells were then ordered from the lowest average classifier accuracy (across the four odours) to the highest.

Change in membrane potential-The raw recordings were spike-clipped using a custom script in Spike2 (Cambridge Electronic Design, UK) and stored as MATLAB (MathWorks, USA) readable files for further analysis. All recordings were baseline-subtracted to rule out the effect of sniff-related background membrane potential oscillations, as described previously (Abraham et al., 2010). Briefly, stretches of the baseline period were collated after matching the sniff phase to that during the actual odour presentation. The membrane potential associated with these baseline periods was averaged to create a generic baseline trace for every cell, which was then subtracted from all traces recorded during the odour-stimulation period to create a baseline-subtracted trace. For calculating the average change in membrane potential for 2 and 20Hz, we averaged the membrane potential in a 2 s period before odour onset (Vm_base) and the membrane potential in the first 500 ms (~2 sniffs) after odour onset (Vm_odour500). In short:

average change in membrane potential = Vm_odour500 - Vm_base

Change in spike frequency-Action potentials were counted in the raw data and converted into spike frequency in bins of 50 ms; bar plots of the spike frequency yielded the PSTH plots in Fig 4E. Further, we calculated the average spike frequency in the 2 s before odour onset (FR_base) and in the 500 ms after odour onset (FR_odour500). In short:

average change in spike frequency = FR_odour500 - FR_base

Frequency-coupling coefficient estimation-Baseline-subtracted membrane potential traces for every odour and frequency were collected (Fig 5A, middle). PID traces recorded for the 2Hz and 20Hz odour stimulation were averaged over 10 different trials (Fig 5A, top). Next, we cross-correlated the PID trace with each individual baseline-subtracted trace, repeating this for all trials for a given odour and frequency, and selected the peak correlation of each trial (CCodour_2Hz or CCodour_20Hz). We repeated the same procedure for the control blank stimulus, which was also delivered at 2 and 20Hz, to obtain CCblank_2Hz or CCblank_20Hz. Next, we normalised CCodour by CCblank for the respective frequencies and averaged over all trials to obtain a frequency-coupling coefficient (CpC) for a given cell-odour pair. In short, for a given cell-odour pair:

CpC_2Hz = CCodour_2Hz / CCblank_2Hz and CpC_20Hz = CCodour_20Hz / CCblank_20Hz

Baseline control CpC-For every recorded cell, we isolated the baseline periods of all trials. These were baseline-subtracted as described before. Next, we cross-correlated each of these baseline traces with the 2Hz and 20Hz PID signals to obtain the peak cross-correlation value. The CpC values for all the baseline traces of a given cell were then calculated in the same manner as described above. This set of baseline CpCs at 2Hz and 20Hz was then used to determine statistical difference from the odour CpCs at 2Hz and 20Hz.
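Condensed into code, the CpC computation might look like the sketch below. The inputs (`vm_trials` and `blank_trials` as lists of baseline-subtracted traces, `pid` as the trial-averaged PID template at one frequency) are hypothetical, and whether the blank normalisation is applied per trial or to trial averages is paraphrased here.

```python
import numpy as np
from scipy.signal import correlate

def peak_cc(trace, template):
    """Peak of the normalised cross-correlation between one baseline-
    subtracted Vm trace and the averaged PID template."""
    a = (trace - trace.mean()) / (trace.std() * len(trace))
    b = (template - template.mean()) / template.std()
    return correlate(a, b, mode="full").max()

def coupling_coefficient(vm_trials, blank_trials, pid):
    """CpC for one cell-odour pair at one stimulation frequency."""
    cc_odour = np.mean([peak_cc(t, pid) for t in vm_trials])
    cc_blank = np.mean([peak_cc(t, pid) for t in blank_trials])
    return cc_odour / cc_blank  # CpC > 1: coupling stronger than to the blank
```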
Statistics-When only two groups were compared, a Student's t test (paired or unpaired) was used, with Bonferroni correction in the case of multiple comparisons. When more than two groups were compared, we used one-way ANOVA. Bars and scatter plots are presented as mean ± SD of the population. The box plots show the median (midline) together with the 25th percentile (top edge) and the 75th percentile (bottom edge), while the minimum and maximum values are represented by the whiskers. The violin plot in Fig 1C represents the distribution of all data points, with the midline representing the median value.

Results. M/T cells differentially respond to different frequencies in odour stimuli-We have previously shown that mice can behaviourally distinguish temporal structure in odours at frequencies up to 40 Hz. Breathing in awake mice is highly variable, versus almost metronomic in anaesthetized animals. Thus, in order to precisely probe the effect of temporal structure in odour stimuli on M/T cell activity, we recorded neural activity in anaesthetized mice, linking odour stimulation to the rhythmic breathing. We recorded extracellular spiking activity using Neuronexus silicon probes (97 units, 6 mice) from the dorsal OB while presenting 4 different odours (ethyl butyrate, 2-hexanone, amyl acetate, and eucalyptol) at 5 different frequencies (2, 5, 10, 15 and 20Hz; Fig 1A-B) using a high-speed odour delivery device we recently developed. As M/T cells can respond to changes in air pressure due to the mechanosensitivity of OSNs (Grosmaitre et al., 2007), we offset changes in flow by presenting an odourless air stream from an additional valve whose temporal structure was anti-correlated with that of the odour valve. This resulted in an approximately constant air flow profile throughout the odour presentation (Fig 1A-B). The temporal odour delivery device (tODD) allowed for reliable odour pulse presentation (Fig 1B) with similar net volumes of odour for all frequencies (P = 0.3142, Welch's t-test) (Fig 1C). To control for responses to residual flow changes, we included 'blank trials', i.e. trials identical to the odour trials in temporal structure, except that both valves were connected to vials filled with mineral oil. Respiration was continuously monitored using a flow sensor placed in close proximity to the nostril contralateral to the recording hemisphere. The respiration frequency was 2.8 ± 0.5 Hz (mean ± SD, n = 6 animals) (Fig 1E). To minimise sniff-cycle-related variability, we triggered odour stimulation at the onset of inhalation. Further, we estimated the amount of odour that the animals might be inhaling at each frequency. To do so, we convolved the inhalation phase of every sniff cycle during the odour presentation with the recorded PID trace (Fig 1F). We then compared the convolved value for every sniff over the entire duration of a given odour presentation across all odour frequencies (Fig 1G&I). Next, we performed pairwise statistical comparisons between all frequency combinations for a given sniff (Fig 1H&J). We observed that the odour integral during the first sniff varied marginally, albeit significantly, between the slow frequencies (2Hz & 5Hz) and the fast frequencies (10-20 Hz) (Fig 1H), while from the second sniff onwards there was no significant difference (Fig 1J). Further, we did not find any statistical difference between the slow 2Hz and 5Hz frequencies, or among the fast 10Hz, 15Hz and 20Hz frequencies. A typical recording session yielded recordings from multiple clusters at depths of 300-500 μm from the OB surface.
The recorded clusters were classified as 'good' (well-isolated clusters), 'MUA' (multi-unit activity; clusters containing spikes of physiological origin but from numerous cells), or 'noise' (clusters containing spikes of non-physiological origin, e.g. electrical interference or movement artefacts) based on their autocorrelograms (Fig 2D-E, left) and waveforms (Fig 2D-E, 2F). The average baseline firing rate of the recorded units was 11 ± 9 Hz (mean ± SD) (Fig 2K). Importantly, a subset of units displayed visibly different spiking profiles in response to odour stimuli of different frequencies (Fig 2G-H). Comparing the activity of units between the 2Hz and 20Hz stimuli, we observed that a substantial number of units showed a significant difference in their activity in the first 500 ms after odour onset relative to their response to the blank stimuli. This was true for all 4 odours tested (ethyl butyrate: 22/97; 2-hexanone: 13/97; amyl acetate: 28/97; eucalyptol: 15/97; blank: 9/97; P<0.01, Mann-Whitney U test) (Fig 2I). However, we did not find any obvious pattern among units responding to a given frequency across all the odours (Fig 2L). To examine the population-level response to these different stimuli, we constructed response vectors by computing the cumulative spike count for all 97 recorded units in the first 500 ms after odour onset and subtracting from it the spike counts of a blank trial. We trained linear classifiers on 80% of the data and tested on 20% to examine whether the spiking activity obtained for different stimulus frequencies was linearly separable (Fig 3A). We observed that, when we used a 500 ms rolling window with temporal steps of 10 ms, a classifier could achieve peak accuracies (ethyl butyrate: 0.55 ± 0.08; 2-hexanone: 0.51 ± 0.08; amyl acetate: 0.53 ± 0.08; eucalyptol: 0.45 ± 0.08) notably higher than those obtained by training on shuffled data (0.2 ± 0.07) (Fig 3B). In addition, we trained classifiers on random subsets of unit responses, binned in a 500 ms window from odour onset. Classifier accuracies increased with increasing numbers of units, with peak accuracies found for classifiers that had the full set of units available (ethyl butyrate: 0.41 ± 0.05; 2-hexanone: 0.35 ± 0.05; amyl acetate: 0.46 ± 0.05; eucalyptol: 0.39 ± 0.05) compared to shuffled data (0.2 ± 0.05) (Fig 3C). We then trained a series of classifiers to distinguish pairs selected from all possible combinations of odour frequencies and identities. We found that classifiers could readily distinguish responses to different stimulus frequencies well above chance (>0.5) for a given odour, and performed even better when comparing responses for trials across different odours (Fig 3D). Next, we withheld one trial of each stimulus type and trained a classifier with all the remaining trials. We tested the classifier on the withheld trials to see both how well it could distinguish trials and to explore the structure of the false classifications (Fig 3E). For a given odour, the predictability of the stimulus frequency was well above chance (>0.04); furthermore, when comparing across the different odours, the predictability was almost perfect (Fig 3E). Next, we trained classifiers based on the response of every unit to all four odours at the five different frequencies. Each classifier had access to a single unit's response to a single odour.
The average accuracies obtained from all odours for a given unit were sorted and plotted (Fig 3F). We observed that both pTCs and pMCs appeared across the range of accuracy values, indicating that the responses of both cell populations were required to achieve the overall final accuracy values for the population. One should note, however, that the number of pTCs recorded was much larger than the number of pMCs. Overall, this suggests that OB neurons can encode temporal structure present in odour stimuli at frequencies of at least up to 20Hz in their spiking pattern. Importantly, this indicates that such information can be read out by downstream structures simply by summing activity over populations of M/TCs at relatively low temporal resolution (i.e. spike counts over 500 ms).

M/T cells follow odour stimulus both at 2 and 20Hz-To better understand the mechanism that gives rise to frequency-dependent M/T cell spiking responses, and to gain insight into their subthreshold basis, we performed whole-cell recordings from M/T cells (Fig 4A). To increase the probability of finding a responsive cell-odour pair, and given the time limitation and lower yield of stable whole-cell recordings, we employed odour mixtures as stimuli (A: ethyl butyrate and 2-hexanone; B: amyl acetate and eucalyptol) and presented odours at only two frequencies, 2Hz and 20Hz (Fig 4E). As with the unit recordings, stimuli were triggered at the onset of inhalation and blank trials were included. The animals had an average respiration frequency of 2.9 ± 1.3 Hz (mean ± SD, n = 25 mice) (Fig 1D). We recorded from 42 neurons in 25 mice at depths of 180-450 μm from the surface of the olfactory bulb (Fig 4B). The neurons showed resting membrane potentials (RMP) ranging from -38 mV to -60 mV (Fig 4C) and input resistances of 45-280 MΩ (Fig 4D). These values are congruent with previous findings, indicating that our recordings were largely from M/T cells (Margrie et al., 2001; Margrie et al., 2002; Margrie and Schaefer, 2003; Fukunaga et al., 2012; Jordan et al., 2018b; Jordan et al., 2018a). In response to 2 Hz odour presentation, 25/70 cell-odour pairs showed significant changes in action potential discharge compared to the blank stimulus at the same frequency in the first 500 ms after odour onset. Comparing the average change in action potential firing frequency from baseline in the first 500 ms after odour onset between the 2Hz and 20Hz responses (Fig 4F-H), we observed that for 15/70 cell-odour pairs the responses differed significantly between the two conditions (Fig 4H; P<0.05, two-tailed unpaired t-test). For 6/15 cell-odour pairs, responses were significantly larger in the 2 Hz case. This is consistent with our findings from the unit recordings (Figs 2-3). Interestingly, a larger number of cell-odour pairs (27/70) showed significant differences between the subthreshold responses to the two stimuli (Fig 4I&K; P<0.05, two-tailed unpaired t-test) than between the suprathreshold responses (18/27 cell-odour pairs showed significantly larger ΔVoltage responses for the 2Hz stimulus). To quantify the coupling of the membrane potential to the frequency of odour stimulation, we estimated the peak correlation coefficient for the odour period (CC_odour) and compared it with that for the blank condition (CC_blank) (Fig 5H-I). We observed that 20/70 (Fig 5H) and 16/70 (Fig 5I) cell-odour pairs (2Hz and 20Hz, respectively) showed a significant difference between CC_odour and CC_blank.
To quantify the coupling of the membrane potential to the frequency of odour stimulation, we estimated the peak correlation coefficient for the odour period (CC_odour) and compared it with that for the blank condition (CC_blank) (Fig 5H-I). We observed that 20/70 (Fig 5H) and 16/70 (Fig 5I) cell-odour pairs (for 2Hz and 20Hz, respectively) showed a significant difference between CC_odour and CC_blank. Further, only 1/20 cell-odour pairs among those showing a significant change in the 2Hz case showed a CC_blank higher than the CC_odour; this might be due to residual respiration coupling in this particular case. To avoid residual respiration-driven membrane potential coupling contaminating the stimulus-driven coupling, we computed a coupling coefficient index (CpC, Fig 5A-F) for each individual cell (n = 42 cells, 70 cell-odour pairs, 25 mice; see Methods for details). In brief, to calculate CpC the membrane potential was first baseline-subtracted to minimise sniff-related membrane potential oscillations (Abraham et al., 2010) (Fig 5G). For a given cell-odour pair, CpC was then obtained by normalising the peak cross-correlation value for the odour response by that for the mineral-oil response (Fig 5C-F), resulting in a CpC value > 0. A high CpC value indicates a cell with strong cell-odour frequency coupling relative to baseline. Further, a CpC > 1 suggests that a cell's response to the odour-frequency pair is stronger than that to a blank trial and is not due to a response to a potential residual, purely mechanical stimulus. A subset of the recorded cell-odour pairs showed CpC > 1, suggesting possible coupling to both 2Hz (35/70) (Fig 5J) and 20Hz (25/70) (Fig 5N). To assess the statistical significance of this coupling measure, we estimated the CpC of all the baseline periods for a given cell and compared these with the values obtained for the odour period across all trials (Fig 5K&O). Comparing the CpC between the odour and baseline periods, we observed that a substantial number of M/T cell-odour pairs indeed coupled significantly to both 2Hz (29/70 cell-odour pairs) (Fig 5L) and 20Hz (24/70 cell-odour pairs) (Fig 5P) (P<0.05, Two-tailed unpaired t-test; all of these were cell-odour pairs where we had observed significant subthreshold odour-evoked responses as defined above). However, 2/29 and 1/24 (2Hz and 20Hz, respectively) of the significantly coupled cell-odour pairs showed a decrease in CpC for the odour stimulus compared to the baseline condition. For a subset of recorded cells, we presented a third stimulus, a continuous odour (with no temporal structure), and estimated the CpC as before. As expected, the CpCs obtained from the 2Hz and 20Hz stimuli were significantly higher than those from the constant odour stimulus for a substantial portion of the recorded M/T cells (2Hz: 17/32 cell-odour pairs; 20Hz: 13/26 cell-odour pairs; P<0.01, Two-tailed paired t-test; Fig 5M&Q).

Depth of recording correlated with CpC

We next asked whether the CpC was related to the intrinsic properties of the recorded cells. Neither the input resistance (Fig 6A-B) nor the resting membrane potential (RMP, Fig 6C-D) of a cell was correlated with its CpC. CpC and depth showed different correlations for 2Hz and 20Hz: for the 2Hz cases we found no correlation between depth and CpC (Fig 6E), while for the 20Hz cases CpC decreased with depth (Fig 6F) (P = 0.03 for odour A and P = 0.019 for odour B; Two-tailed test). This suggests that tufted cells, which are located more superficially than mitral cells, might couple more strongly to high-frequency (20Hz) odour stimuli. Overall, however, CpC was only weakly dependent on intrinsic cellular properties.
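The CpC computation described above reduces to two cross-correlations and a ratio. The following is a minimal sketch on synthetic traces; the exact baseline-subtraction procedure, window choices, and normalisation in the paper may differ, and the function names are ours.

```python
import numpy as np

def peak_xcorr(vm, stim):
    """Peak of the normalised cross-correlation between a membrane-potential
    trace and the stimulus waveform (both sampled at the same rate)."""
    vm = vm - vm.mean()
    stim = stim - stim.mean()
    cc = np.correlate(vm, stim, mode="full")
    norm = np.sqrt((vm ** 2).sum() * (stim ** 2).sum())
    return np.abs(cc).max() / norm

def cpc(vm_odour, vm_blank, stim):
    """Coupling coefficient index: peak cross-correlation for the odour
    response normalised by that for the blank (mineral-oil) response."""
    return peak_xcorr(vm_odour, stim) / peak_xcorr(vm_blank, stim)

# Synthetic demo: a 20 Hz square-wave stimulus, 2 s at 1 kHz.
fs, f = 1000, 20
t = np.arange(0, 2, 1 / fs)
stim = (np.sin(2 * np.pi * f * t) > 0).astype(float)
rng = np.random.default_rng(2)
vm_odour = 2.0 * stim + rng.normal(0, 1, t.size)   # stimulus-coupled cell
vm_blank = rng.normal(0, 1, t.size)                # no stimulus-locked signal
print("CpC =", cpc(vm_odour, vm_blank, stim))      # expect >> 1
```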
Putative tufted cells show higher CpC than putative mitral cells

Spontaneous oscillation of the membrane potential has been observed to be a reliable predictor of projection neuron type in the OB (Fukunaga et al., 2012; Jordan et al., 2018b; Ackels et al., 2020). We classified the recorded neurons into 23 putative mitral cells (pMC) and 17 putative tufted cells (pTC) based on the phase locking of the spontaneous membrane potential to the respiration cycle of the mouse (Fig 7A-B); two of the cells could not be resolved. While overall similar, pTCs tended to couple marginally more strongly to the odour stimuli than pMCs, reaching significance for the 20Hz case (Fig 6F). However, we did not observe any significant difference in the lag of the peak correlation point between pTCs and pMCs (P = 0.6297 for 2Hz; P = 0.6634 for 20Hz; unpaired t-test) (Fig 7G-H).

CpC for odour mixtures can be linearly predicted from that of the individual constituents

As described above, we presented two different odour stimuli at different frequencies (odour A and odour B). Comparing the CpCs for odour A with those for odour B, we noticed that they were tightly linked: M/T cells coupling well to odour A also coupled well to odour B, whereas M/T cells coupling poorly to one also coupled weakly to the other (Fig 8A-B). This was the case both for 2Hz (Fig 8C) and for 20Hz (Fig 8D), suggesting that the frequency coupling of a cell is independent of the odour presented. To further corroborate that CpC is an odour-independent parameter, we probed M/T cells with a mixture of the two odours. If frequency coupling is indeed odour-independent, a cell's response to odour mixtures should be predictable from its CpC for the individual odours. To assess this, we first categorized recordings based on the direction of the average change in membrane potential in the first 500ms after odour onset, and classified responses into three types: excitatory-inhibitory (Ex-In), excitatory-excitatory (Ex-Ex), and inhibitory-inhibitory (In-In) (Fig 9A-B). Notably, we did not observe any significant difference between the CpCs of the different response types, neither for 2Hz (P = 0.54, One-way ANOVA; Fig 9C) nor for 20Hz (P = 0.15, One-way ANOVA; Fig 9D). Next, we predicted the CpC of a given cell to an odour mixture based on the cell's CpCs for the constituent odours. As expected from the observation that CpC is largely cell-intrinsic and odour-independent, the CpC for a given cell-mixture pair could be reliably predicted from the cell-constituent pairs, both for 2Hz (P = 1.3 × 10^-9, Two-tailed test) (Fig 9E) and for 20Hz (P = 1.81 × 10^-6, Two-tailed test) (Fig 9F). Notably, CpC was not correlated with the strength of the odour response for a given cell-odour pair (data not shown).
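A prediction analysis of this kind can be sketched as follows. The predictor used here (the mean of the two constituent CpCs, fed to a simple linear regression) is our assumption; the paper states only that the mixture CpC is linearly predictable from the constituents, and the values below are synthetic.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(3)

# Hypothetical per-cell CpC values for the two constituent odours and
# their mixture (synthetic stand-ins for the recorded values).
n_cells = 30
cpc_A = rng.uniform(0.5, 2.5, n_cells)
cpc_B = cpc_A + rng.normal(0, 0.15, n_cells)        # A and B tightly linked
cpc_mix = 0.5 * (cpc_A + cpc_B) + rng.normal(0, 0.1, n_cells)

# If frequency coupling is cell-intrinsic, the mean constituent CpC should
# linearly predict the mixture CpC.
res = linregress(0.5 * (cpc_A + cpc_B), cpc_mix)
print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, p = {res.pvalue:.1e}")
```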
Influence of inhibition on CpC

Since our observations indicated that CpC is cell-intrinsic, independent of the odour presented (Figs 8-9), and only weakly dependent on intrinsic cellular properties (Fig 6), we next asked whether a cell's CpC is shaped by circuit properties. OB inhibitory circuits are known to shape M/T cell activity and odour responses (Yokoi et al., 1995; Fukunaga et al., 2012; Fukunaga et al., 2014; Burton, 2017). To assess the role that circuit-level inhibition may play in setting cellular CpC, we recorded from M/T cells as outlined previously and then washed in a titrated mixture of 0.4mM gabazine and 2mM muscimol to produce "GABA_A clamping" (Fukunaga et al., 2012), blocking synaptic inhibition while providing sufficient unspecific background inhibition to avoid epileptic discharge. Following a short period of change in membrane potential, the recorded neurons returned to approximately their original RMP within a few minutes (Fig 10A). The input resistance (Fig 10E) and RMP (Fig 10H) did not change significantly in any of the neurons recorded. Under baseline conditions, before GABA_A clamping, the estimated CpC was significant compared to the baseline control for most of the recorded cell-odour pairs (13/15 for 2Hz and 14/15 for 20Hz; P<0.01, Two-tailed paired t-test). Post drug perfusion (with the GABA_A clamp) all recorded cells were significantly coupled (15/15 for both 2 and 20Hz; P<0.01, Two-tailed paired t-test) (Fig 10C-D). Furthermore, we noticed a significant change in CpC from baseline for most of the cell-odour pairs post drug treatment (10/15 for 2Hz and 9/15 for 20Hz; P<0.01, Two-tailed paired t-test) (Fig 10F-G). Interestingly, this shift in CpC post GABA_A clamping occurred in both directions: in the 2Hz case, 5/15 cell-odour pairs showed a significant decrease and 5/15 a significant increase in their CpC values (Fig 10F); in the 20Hz case, 2/15 cell-odour pairs showed a decrease while 7/15 showed an increase (Fig 10G). Furthermore, cell-odour pairs that showed a significant increase in CpC post GABA_A clamp largely had initial CpC < 1 (2Hz: 0.947 ± 0.076; 20Hz: 0.98 ± 0.11; mean ± SD), while cell-odour pairs showing a decrease had initial CpC > 1 (2Hz: 2.31 ± 0.39; 20Hz: 1.3 ± 0.07; mean ± SD). Overall, our results indicate that the local inhibitory circuitry contributes significantly to determining a cell's CpC.

Discussion

Mammalian olfaction research has largely used odour identity (chemical structure) and intensity as modulators of odour responses, despite the presence of rich temporal structure in natural odour landscapes. Here we have shown that M/T cells in the OB in vivo can encode frequencies in odour stimuli as high as 20Hz. Furthermore, whole-cell recordings indicated that a subset of M/T cells significantly couple to the frequency of odour stimuli both in the sniff range and in the sub-sniff range. Importantly, while heterogeneous between cells, the strength of coupling is largely independent of the odour applied. We have demonstrated that odour-frequency coupling capacity is similar between the pMC and pTC populations at 2Hz, while at 20Hz pTCs couple somewhat more strongly than pMCs. Finally, while coupling capacity is variable between cells yet largely independent of specific intrinsic properties, we observed that inhibitory circuits strongly modulate a cell's frequency coupling capacity. Overall, we show that the OB has the capacity to encode high-frequency temporal patterns present in olfactory stimuli; M/T cells vary in their propensity to follow temporally structured stimuli, and this propensity depends on the local circuitry.
In mammals, respiration ensures a low-frequency, periodic sampling of olfactory stimuli, which in turn is the main source of theta activity in the early olfactory areas (Macrides and Chorover, 1972; Margrie and Schaefer, 2003; Schaefer et al., 2006). This causes rhythmic activity in M/T cells even in the absence of odour stimuli (Grosmaitre et al., 2007; Connelly et al., 2015; Diaz-Quesada et al., 2018). Furthermore, the concentration of natural odour stimuli fluctuates in time (Riffell et al., 2008; Martinez and Moraud, 2013; Celani et al., 2014; Pannunzi and Nowotny, 2019; Ackels et al., 2021), providing an extra layer of temporal information in the input signal to the OB. Altogether this creates temporally complex input signals for OSNs that are thought to carry information about the odour source (Hopfield, 1991; Vergassola et al., 2007; Celani et al., 2014). While signal transduction at the olfactory receptor neuron level is relatively slow (Ghatpande and Reisert, 2011), simulations have shown that OSN convergence onto the OB can help sustain high-frequency information, similar to encoding in the auditory system (Carr, 1993). Furthermore, correlated and anti-correlated odour stimuli were shown to be faithfully represented in the OB and to result in distinct behavioural responses for frequencies up to 40Hz. Together with the aforementioned physiological (Cury and Uchida, 2010; Shusterman et al., 2011) and behavioural experiments using precise optogenetic stimuli (Li et al., 2014; Rebello et al., 2014), this suggests the presence of cellular and/or network mechanisms supporting the encoding of high-frequency natural odour stimuli. Previous reports have varied inhalation frequency via tracheotomy and shown that M/T cells follow the respiratory rhythm at frequencies up to 5Hz (Diaz-Quesada et al., 2018; Short and Wachowiak, 2019; Eiting and Wachowiak, 2020). Here we have shown that in naturally breathing mice, M/T cells in the OB can follow odour fluctuations well exceeding the respiration rate, at frequencies of 20Hz, and that local circuit inhibition plays an important role in this encoding.

Functional implications

Natural odours are carried by turbulent plumes of wind or water, generating filamentous fluctuations of odour concentration (Celani et al., 2014) that can in principle contain information about the nature and location of odour sources (Fackrell and Robins, 1982; Hopfield, 1991; Moore and Atema, 1991; Mylne and Mason, 1991; Vergassola et al., 2007; Schmuker et al., 2016; Ackels et al., 2021; Crimaldi et al., 2021; Marin et al., 2021). Our observations indicate that M/T cells reliably encode information about odour fluctuations at frequencies from 2-20Hz (Figs 2-3 & 5). Consistent with these results from unit recordings, a subset of the neurons recorded intracellularly showed significantly different spiking and subthreshold membrane potential activity when presented with 2Hz versus 20Hz fluctuating odour stimuli (Fig 5K&O). Furthermore, our read-out parameter, CpC, provides an estimate of how strongly a given neuron can directly couple to a specific odour frequency. Both types of projection neurons coupled equally to odours presented at 2Hz (Fig 7C), while tufted cells showed somewhat higher coupling than mitral cells at 20Hz (Fig 6F&7D). It is possible that further physiological analysis with different temporally structured stimuli might reveal distinct subtypes of projection neurons.
Recent reports have, for example, identified a subset of mitral/tufted cells specifically responsive to changes in concentration (Parabucki et al., 2019). The emerging molecular subtype division of projection neurons (Zeppilli et al., 2020) might reveal distinct groups of projection neurons that follow different temporal patterns of odour presentation. Considering the heterogeneity of projection targets, this might indicate that different postsynaptic regions receive differentially filtered information. Tufted cells, which in our hands showed stronger frequency coupling at 20Hz, for example preferentially project to the anterior olfactory nucleus and anterior piriform cortex (Scott et al., 1980; Schneider and Scott, 1983; Nagayama et al., 2010). Further studies will be required to identify the exact downstream structures involved in frequency-specific odour signal computation. The fact that blocking inhibition altered the frequency coupling capacity (Fig 10) indicates that the inhibitory circuitry of the OB plays a strong role in shaping the encoding of temporal features. This is consistent with a previous finding in Drosophila projection neurons, where blocking presynaptic inhibition altered the response kinetics for temporally dynamic odour stimuli (Nagel et al., 2015). Additionally, we observed that post GABA_A clamping some cells displayed a decrease in CpC while others showed an increase (Fig 10F-G). Further studies are required to pinpoint the precise inhibitory pathways that modulate frequency coupling in M/T cells on a cell-by-cell basis, and possibly to identify subpopulations of MCs and TCs that encode specific temporal features.

Limitations of the study

All the recordings presented here were performed in anaesthetized mice, benefitting from the stability of respiration in this state. Previous reports suggested that behavioural state affects mitral cell firing properties (Rinberg et al., 2006; Kato et al., 2012). However, studies have also suggested that M/T cell firing rates in awake conditions do not change but rather become redistributed over the breathing cycle (Gschwend et al., 2012). Whole-cell recordings from M/T cells indicate that membrane properties are largely similar between the two states, consistent with recent unit recording results (Bolding et al., 2020). It will nevertheless be important to repeat these experiments in awake mice to probe whether M/T cells show the same frequency coupling behaviour as found under anaesthesia. Secondly, owing to the time limitation of reliable whole-cell recordings, we could probe only two frequencies. The extracellular unit recordings partially alleviated this limitation by investigating additional intermediate frequencies: linear classifiers performed at accuracies well above chance level (Fig 3), suggesting that M/T cells can encode several frequencies up to 20Hz. It is therefore likely that the frequency coupling capacity of subthreshold activity spans this range as well.

Temporally structured stimuli and pathophysiology

In addition to better replicating naturalistic stimuli, temporally patterned sensory stimuli have been found to be advantageous in treating disease.
While direct electrical (and more recently optical) rhythmic deep brain stimulation is recognised as a possible treatment for a variety of neurodegenerative diseases (Benabid et al., 1987; Laxton et al., 2010; Zhang et al., 2019), only quite recently has temporally modulated sensory stimulation been employed in a similar manner. For example, 40Hz visual and/or auditory stimulation has been found to help alleviate the amyloid burden in medial prefrontal cortex (Martorell et al., 2019). These reports suggest that temporally structured sensory stimuli could be used as a tool to treat AD patients. A recent report indicating that humans can use temporal olfactory cues suggests this may be possible for olfaction as well (Perl et al., 2020). Therefore, temporally structured odour stimuli could offer an additional route for the treatment of neurodegenerative disorders. Overall, in this study we report that M/T cells in the mouse olfactory bulb can encode temporal structure in odour stimuli and that their membrane potentials can follow frequencies of at least up to 20Hz. The extent of coupling is independent of the odours presented, varies between cells, and is shaped by inhibition in the olfactory bulb.

Significance Statement

Odours in the natural environment have a strong temporal structure which can be extracted and used by mice in their behaviour. Here, using in vivo extra- and intracellular electrophysiological techniques, we show that projection neurons in the olfactory bulb can encode and couple to the dominant frequency present in an odour stimulus. Furthermore, frequency coupling differed between mitral and tufted cells, was odour-invariant, and was strongly modulated by local inhibitory circuits. In summary, this study provides insight into how both cellular and circuit properties modulate the encoding of odour temporal features in the mouse olfactory bulb.

Figure legend fragments. Fig 3E: linear fractional classifications for a set of classifiers trained to distinguish all trial types from one another; the y axis gives a trial's true label and the x axis the label assigned by the classifier; chance is 0.04. Fig 3F: single-unit responses to all four odours were used to train classifiers to distinguish responses to 2Hz and 20Hz; each classifier only had access to a single unit's responses to a single odour; accuracies were averaged across the four odours and used to sort the units; each vertical line connects the accuracies of classifiers trained on the same unit but on different odours; each odour is represented by a different symbol; lines and symbols are coloured by each unit's putative cell classification, cyan (pTC), magenta (pMC), or gray (unknown). Fig 4H: hollow circles represent cell-odour pairs with a statistically insignificant difference between the 2Hz and 20Hz trials, while solid markers (15/70) represent pairs with a significant difference (P<0.05, Two-tailed unpaired t-test); each marker represents a cell-odour pair and error bars give the SD over all trials. Fig 4I-J: histograms of the change in average Vm in the first 500ms relative to baseline for 2Hz and 20Hz stimuli. Fig 4K: average change in Vm for 2Hz vs. 20Hz, with markers and error bars as in (H); 27/70 cell-odour pairs showed a significant difference between the 2Hz and 20Hz trials (P<0.05, Two-tailed unpaired t-test).
Common Fixed Point Theorems for Contractive Mappings of Integral Type in G-Metric Spaces and Applications

Two common fixed point theorems for weakly compatible mappings satisfying contractive conditions of integral type in G-metric spaces are demonstrated. The results obtained in this paper generalize and differ from a few results in the literature, and they are used to prove the existence and uniqueness of common bounded and continuous solutions for certain functional equations and nonlinear Volterra integral equations. A nontrivial example is included.

Introduction

The Banach fixed point theorem, first presented by Banach in 1922, is a foundational result in fixed point theory. Because of its importance in proving the existence of solutions of functional equations, nonlinear Volterra integral equations, and nonlinear integro-differential equations, this result has been extended in many different directions (see, e.g., the works cited below and the references therein). In particular, Rhoades [12] and Branciari [4] generalized the Banach fixed point theorem and gave the following fixed point theorems, respectively.

Theorem 1 (see [12]). Let f be a mapping from a complete metric space (X, d) into itself satisfying a contractive condition determined by some φ ∈ Φ4. Then f has a unique fixed point in X.

Theorem 2 (see [4]). Let (X, d) be a complete metric space and f : X → X be a mapping satisfying

∫_0^{d(fx,fy)} φ(t) dt ≤ c ∫_0^{d(x,y)} φ(t) dt for all x, y ∈ X,

where φ ∈ Φ1 and c ∈ [0, 1) is a constant. Then f has a unique fixed point a ∈ X such that lim_{n→∞} f^n x = a for each x ∈ X.

In 2013, Gupta and Mani [21] obtained the existence and uniqueness of a fixed point for contractive mappings of integral type in complete metric spaces by using iterative approximations. In 2007, Kumar et al. [6] proved a common fixed point theorem for a pair of compatible mappings satisfying a contractive inequality of integral type, which improves Theorem 2.

Theorem 3 (see [6]). Let (X, d) be a complete metric space and f, g : X → X be compatible mappings such that g is continuous, f(X) ⊆ g(X), and

∫_0^{d(fx,fy)} φ(t) dt ≤ c ∫_0^{d(gx,gy)} φ(t) dt for all x, y ∈ X,

where φ ∈ Φ1 and c ∈ [0, 1) is a constant. Then f and g have a unique common fixed point in X.

In 2006, Mustafa and Sims [9] introduced a new concept of generalized metric space, called a G-metric space. Since then, a great deal of research has been carried out on generalized contractive conditions for various classes of mappings in G-metric spaces [1-3, 5, 10, 11, 13, 15, 19, 20]. In 2018, Gupta et al. [19] proved some fixed point theorems for functions satisfying a ϕ-contraction and the mixed g-monotone property in G-metric spaces. In 2015, Gupta and Deep [20] gave a few common fixed point theorems using the property E.A. in the setting of G-metric and fuzzy metric spaces, under a set of three conditions on the self-mappings. In 2011, Aydi [1] proved a fixed point theorem for mappings satisfying a (ψ, ϕ)-weakly contractive condition in G-metric spaces.

Theorem 4 (see [1]). Let (X, G) be a complete G-metric space and f be a mapping from X into itself satisfying

ψ(G(fx, fy, fz)) ≤ ψ(G(x, y, z)) − ϕ(G(x, y, z)) for all x, y, z ∈ X,

where ψ, ϕ ∈ Φ2. Then f has a unique fixed point u ∈ X, and f is G-continuous at u.

In 2012, Aydi [2] obtained the following common fixed point theorem for a pair of mappings involving a contractive condition of integral type in G-metric spaces.

Theorem 5 (see [2]). Let (X, G) be a G-metric space and f, g be two mappings from X into itself such that

∫_0^{G(fx,fy,fz)} φ(t) dt ≤ α ∫_0^{G(gx,gy,gz)} φ(t) dt for all x, y, z ∈ X,

where φ ∈ Φ1 and α ∈ [0, 1) is a constant. If f(X) ⊆ g(X) and g(X) is a complete subset of X, then f and g have a unique point of coincidence in X. Moreover, if f and g are weakly compatible, then f and g have a unique common fixed point.
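As a quick consistency check (our addition, not part of the source): with the constant kernel φ(t) ≡ 1, the integral-type condition of Theorem 2 collapses to the classical Banach contraction, so Branciari's theorem genuinely extends Banach's.

```latex
% With \varphi(t) \equiv 1, Branciari's condition in Theorem 2 reads
\[
  \int_0^{d(fx,fy)} 1 \, dt \;=\; d(fx,fy)
  \;\le\; c \int_0^{d(x,y)} 1 \, dt \;=\; c\, d(x,y),
\]
% i.e. exactly the Banach contraction with constant $c \in [0,1)$.
% Non-constant kernels $\varphi$ admit mappings that are not Banach
% contractions, which is what makes the integral-type condition strictly
% more general.
```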
The objective of this paper is twofold: to introduce two new classes of contractive mappings of integral type in the setting of G-metric spaces, and to prove the existence and uniqueness of points of coincidence and common fixed points for these mappings. Our results extend Theorem 5, differ from Theorem 4, and are used to show the solvability of the functional equations arising in dynamic programming and of nonlinear Volterra integral equations. A nontrivial example is given.

(1) A point x ∈ X is said to be a fixed point of T if Tx = x. (2) A point x ∈ X is said to be a coincidence point of S and T if Tx = Sx, and w = Sx = Tx is said to be a point of coincidence of S and T. (3) A point x ∈ X is said to be a common fixed point of S and T if x = Tx = Sx.

Definition 15. A pair of self-mappings f and g of a G-metric space (X, G) is said to be weakly compatible if for any x ∈ X, the equality fx = gx implies fgx = gfx.

Lemma 16 (see [14]). Let X be a nonempty set and f, g : X → X be weakly compatible mappings. If f and g have a unique point of coincidence w ∈ X, then w is the unique common fixed point of f and g.

Main Results

We now study the existence and uniqueness of points of coincidence and common fixed points for the contractive mappings (12) and (51) below in G-metric spaces, respectively.

Theorem 19. Let (X, G) be a G-metric space and f, g : X → X be two mappings satisfying (12), where (φ, ψ, ϕ) ∈ Φ1 × Φ2 × Φ3. If f(X) ⊆ g(X) and g(X) is a complete subset of X, then f and g have a unique point of coincidence in X. Furthermore, if f and g are weakly compatible mappings, then f and g have a unique common fixed point in X.

Proof. In light of (G1), (G3), (G5), and (13)-(17), we first bound the sequence of distances. Now we assert that G_n ≤ G_{n−1} for all n ∈ ℕ. Suppose that there exists some n_0 ∈ ℕ satisfying G_{n_0} > G_{n_0−1}; it then follows from (12) that we arrive at a contradiction. Therefore G_n ≤ G_{n−1} for all n ∈ ℕ. It is apparent that the sequence {G_n}_{n∈ℕ_0} is nonincreasing and bounded, which implies that there exists r ≥ 0 with lim_{n→∞} G_n = r. Now we demonstrate that r = 0. Suppose that r > 0. On account of (12), (20), and (21), (φ, ψ, ϕ) ∈ Φ1 × Φ2 × Φ3, and Lemma 17, we deduce an inequality which is impossible; thus r = 0, that is, lim_{n→∞} G_n = 0. It follows from (G3), (G4), and (23) that (24) holds, which yields (25). Next, we verify that {fx_n}_{n∈ℕ_0} is a G-Cauchy sequence. Suppose that {fx_n}_{n∈ℕ_0} is not a G-Cauchy sequence. It follows from Lemma 10 that there exist a constant ε > 0 and two subsequences {fx_{m(k)}}_{k∈ℕ} and {fx_{n(k)}}_{k∈ℕ} of {fx_n}_{n∈ℕ_0} such that n(k) is minimal in the required sense. By means of (G3)-(G5) and Lemma 13, we deduce (27)-(32). Letting k → ∞ in (27)-(32) and using (23) and (25), we obtain a contradiction; hence {fx_n}_{n∈ℕ_0} is G-Cauchy. Since g(X) is complete, there exists w ∈ g(X) to which the sequence converges. In light of Lemma 8 and w ∈ g(X), there exists a ∈ X satisfying ga = w and (46). Next, we prove that ga = fa. Suppose that ga ≠ fa. In view of (12), (13), (25), (46), (φ, ψ, ϕ) ∈ Φ1 × Φ2 × Φ3, and Lemmas 12 and 17, we obtain an inequality which is absurd. Consequently w = ga = fa; that is, w is a point of coincidence of f and g. Lastly, we certify that f and g have a unique point of coincidence in X. Assume that there exists b ∈ X with fb = gb ≠ fa. In terms of (13), (G2), and Lemma 13, we obtain (49). According to (12), (49), (φ, ψ, ϕ) ∈ Φ1 × Φ2 × Φ3, and Lemma 18, we arrive at a contradiction. Therefore, f and g have a unique point of coincidence in X. Moreover, if f and g are weakly compatible mappings, then by Lemma 16, f and g have a unique common fixed point in X. This completes the proof.
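Weak compatibility (Definition 15) is strictly weaker than commutativity but still a genuine restriction. A small illustration of our own (not from the source):

```latex
% Take $X = \mathbb{R}$, $f(x) = x/2$, $g(x) = x^2$. The coincidence points
% solve $x/2 = x^2$, i.e. $x \in \{0, 1/2\}$. At $x = 0$ we have
% $fg(0) = 0 = gf(0)$, but at $x = 1/2$,
\[
  fg(\tfrac{1}{2}) = f(\tfrac{1}{4}) = \tfrac{1}{8}
  \;\neq\;
  \tfrac{1}{16} = g(\tfrac{1}{4}) = gf(\tfrac{1}{2}),
\]
% so this pair is \emph{not} weakly compatible. In contrast, $f(x) = x^2$
% and $g(x) = x^4$ commute everywhere ($fg = gf = x^8$) and hence are
% weakly compatible.
```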
Similar to the argument of Theorem 19, we derive the following result and omit its proof.

Theorem 20. Let (X, G) be a G-metric space and f, g : X → X be two mappings satisfying (51), where (φ, ψ, ϕ) ∈ Φ1 × Φ2 × Φ3. If f(X) ⊆ g(X) and g(X) is a complete subset of X, then f and g have a unique point of coincidence in X. Furthermore, if f and g are weakly compatible mappings, then f and g have a unique common fixed point in X.

Remark 21. In the case ψ(t) = t and ϕ(t) = (1 − λ)t for all t ∈ ℝ_+, where λ ∈ (0, 1) is a constant, Theorems 19 and 20 reduce to results which include Theorem 5 as a special case. The following example shows that Theorems 19 and 20 generalize Theorem 5 substantially and differ from Theorem 4. It is obvious that
Provable Tensor Factorization with Missing Data

We study the problem of low-rank tensor factorization in the presence of missing data. We ask the following question: how many sampled entries do we need to efficiently and exactly reconstruct a tensor with a low-rank orthogonal decomposition? We propose a novel alternating minimization based method which iteratively refines estimates of the singular vectors. We show that under certain standard assumptions, our method can recover a three-mode $n\times n\times n$ dimensional rank-$r$ tensor exactly from $O(n^{3/2} r^5 \log^4 n)$ randomly sampled entries. In the process of proving this result, we solve two challenging sub-problems for tensors with missing data. First, in the process of analyzing the initialization step, we prove a generalization of a celebrated result by Szemerédi et al. on the spectrum of random graphs. Next, we prove global convergence of alternating minimization with a good initialization. Simulations suggest that the dependence of the sample size on the dimensionality $n$ is indeed tight.

Introduction

Several real-world applications routinely encounter multi-way data whose structure can be modeled by low-rank tensors. Moreover, in several settings many of the entries of the tensor are missing, which motivated us to study the problem of low-rank tensor factorization with missing entries. For example, when recording electrical activity of the brain, the electroencephalography (EEG) signal can be represented as a three-way array (temporal, spectral, and spatial axes). Oftentimes signals are lost due to mechanical failure or loose connections. Given the numerous motivating applications, several methods have been proposed for this tensor completion problem. However, with the exception of 2-way tensors (i.e., matrices), the existing methods for higher-order tensors do not have theoretical guarantees and typically suffer from the curse of local minima. In general, finding a factorization of a tensor is an NP-hard problem, even when all the entries are available. However, it was recently discovered that by restricting attention to a sub-class of tensors, such as low-CP-rank orthogonal tensors [1] or low-CP-rank incoherent tensors [2], one can efficiently find a provably approximate factorization. In particular, exact recovery of the factorization is possible for a tensor with a low-rank orthogonal CP decomposition [1]. We ask the question of recovering such a CP decomposition when only a small number of entries are revealed, and show that exact reconstruction is possible even when we do not observe any entry in most of the fibers.

The tensor T has the following form:

T = Σ_{ℓ∈[r]} σ_ℓ (u_ℓ ⊗ u_ℓ ⊗ u_ℓ), (1)

with r ≪ n, u_ℓ ∈ ℝ^n with ‖u_ℓ‖ = 1, and the u_ℓ's orthogonal to each other. We let U ∈ ℝ^{n×r} be the tall orthogonal matrix whose ℓ-th column is u_ℓ, so that U_i ⊥ U_j for i ≠ j. We use ⊗ to denote the standard outer product, so that the (i, j, k)-th element of T is given by T_{ijk} = Σ_a σ_a U_{ia} U_{ja} U_{ka}. We further assume that the u_ℓ's are unstructured, which is formalized by the notion of incoherence commonly assumed in matrix completion problems. The incoherence of a symmetric tensor with an orthogonal decomposition is

μ(T) = √n · max_{i∈[n], ℓ∈[r]} |U_{iℓ}|, (2)

where [n] = {1, ..., n} is the set of the first n integers. Tensor completion becomes increasingly difficult for tensors with larger μ(T), because the 'mass' of the tensor can be concentrated on a few entries that might not be revealed. Out of the n^3 entries of T, a subset Ω ⊆ [n] × [n] × [n] is revealed. We use P_Ω(·) to denote the projection of a tensor onto the revealed set, such that P_Ω(T)_{ijk} = T_{ijk} if (i, j, k) ∈ Ω and P_Ω(T)_{ijk} = 0 otherwise. We want to recover T exactly from the given entries P_Ω(T). We assume that each (i, j, k) with i ≤ j ≤ k is included in Ω with a fixed probability p (since T is symmetric, we include all permutations of (i, j, k)). This is equivalent to fixing the total number of samples |Ω| and selecting Ω uniformly at random over all binom(n^3, |Ω|) choices. The goal is to ensure exact recovery with high probability and with |Ω| sub-linear in the total number of entries n^3.

Notations. For a tensor T ∈ ℝ^{n×n×n} and a matrix U ∈ ℝ^{n×m}, we define a linear mapping T[U, U, U] ∈ ℝ^{m×m×m} such that T[U, U, U]_{ijk} = Σ_{a,b,c} T_{abc} U_{ai} U_{bj} U_{ck}. The operator norm of a tensor is ‖T‖_2 = max_{‖x‖=‖y‖=‖z‖=1} T[x, y, z]. The Frobenius norm of a tensor is ‖T‖_F = (Σ_{i,j,k} T_{ijk}^2)^{1/2}. The Euclidean norm of a vector is ‖u‖_2 = (Σ_i u_i^2)^{1/2}. We use C, C′ to denote positive numerical constants whose actual value may change from line to line.
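For concreteness, the generative model and the sampling operator P_Ω can be sketched in a few lines of NumPy. This is our illustration, not code from the paper; in particular, the symmetric sampling over i ≤ j ≤ k is simplified here to independent Bernoulli sampling of all entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 50, 3, 0.1

# Orthonormal factors U (n x r) via QR of a Gaussian matrix.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
sigma = np.sort(rng.uniform(1.0, 2.0, r))[::-1]

# T = sum_l sigma_l * (u_l (x) u_l (x) u_l): rank-r orthogonal CP model (1).
T = np.einsum("l,il,jl,kl->ijk", sigma, U, U, U)

# Incoherence mu(T) = sqrt(n) * max_{i,l} |U_il|  (definition (2)).
mu = np.sqrt(n) * np.abs(U).max()
print("incoherence mu(T):", round(mu, 2))

# P_Omega: keep each entry independently with probability p.
mask = rng.random((n, n, n)) < p
P_Omega_T = np.where(mask, T, 0.0)
print("fraction of entries revealed:", round(mask.mean(), 3))
```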
Algorithm

Ideally, one would like to minimize the rank of a tensor that explains all the sampled entries:

minimize rank(T̂) subject to T̂_{ijk} = T_{ijk} for all (i, j, k) ∈ Ω. (3)

However, even computing the rank of a tensor is NP-hard in general, where the rank is defined as the minimum r for which a CP decomposition exists [3]. Instead, we fix the rank of T̂ by explicitly modeling T̂ as T̂ = Σ_{ℓ∈[r]} σ_ℓ (u_ℓ ⊗ u_ℓ ⊗ u_ℓ), and solve the following problem:

minimize ‖P_Ω(T) − P_Ω(T̂)‖_F. (4)

Recently, [4, 5] showed that an alternating minimization technique can recover a matrix with missing entries exactly. We generalize and modify the algorithm to the case of higher-order tensors and study it rigorously for tensor completion. However, due to the special structure of higher-order tensors, both our algorithm and our analysis are significantly different from the matrix case (see Section 2.2 for more details). To perform the minimization, we repeat an outer loop that obtains refined estimates for all r components. In the inner loop, we loop over each component and solve for u_q while fixing the others {u_ℓ}_{ℓ≠q}. More precisely, we set T̂ = u_q^{t+1} ⊗ u_q ⊗ u_q + Σ_{ℓ≠q} σ_ℓ (u_ℓ ⊗ u_ℓ ⊗ u_ℓ) in (4) and then find the optimal u_q^{t+1} by minimizing the least squares objective given by (4). That is, each inner iteration is a simple least squares problem over the known entries; hence it can be implemented efficiently and is embarrassingly parallel.

Algorithm 1 (Alternating Minimization for Tensor Completion). Partition Ω uniformly at random into r·τ subsets Ω_{t,q}. Initialize {u_ℓ^0}_{ℓ∈[r]} by the robust tensor power method applied to the observed tensor, followed by thresholding. For t = 0, ..., τ − 1 and q = 1, ..., r, set u_q^{t+1} to the minimizer of the least squares objective (4) over the fresh sample set Ω_{t,q}, keeping the other components fixed.

The main novelty of our approach is that we refine all r components iteratively, as opposed to the sequential deflation technique used by the existing methods for tensor decomposition (of fully observed tensors). In sequential deflation methods, the components {u_1, u_2, ..., u_r} are estimated sequentially, and the estimate of, say, u_2 is not used to refine u_1. In contrast, our algorithm iterates over all r estimates in the inner loop, so as to obtain refined estimates for all u_i's in the outer loop. We believe that such a technique could also be applied to improve the error bounds of (fully observed) tensor decomposition methods. As our method directly solves a non-convex problem, it can easily get stuck in local minima. The key reason our approach can overcome the curse of local minima is that we start from a provably good initial point which is only a small distance away from the optimum. To obtain such an initial estimate, we compute a low-rank approximation of the observed tensor using the Robust Tensor Power Method (RTPM) [1]. RTPM is a generalization of the widely used power method for computing the leading singular vectors of a matrix, and it can approximate the largest singular vectors up to the spectral norm of the "error" tensor. Hence, the challenge is to show that the error tensor has small spectral norm (see Theorem 2.1). We perform a thresholding step similar to [4] (see Lemma A.4) after the RTPM step to ensure that the estimates we obtain are incoherent, which is critical for our analysis. Our analysis requires the sampled entries Ω to be independent of the current iterates u_i, which in general does not hold, as the u_i's are computed using Ω. To avoid this issue, we divide the given samples Ω randomly into r·τ equal parts, where τ is the number of outer loops (see Algorithm 1).
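Each inner least squares problem has a simple closed form, because the objective separates over the coordinates of the unknown vector. The sketch below is our own rendering of one such update; the names are ours, and a practical implementation would touch only the observed entries rather than dense n^3 arrays.

```python
import numpy as np

def inner_als_step(P_T, mask, u_q, fixed):
    """One inner update of the alternating minimization (a sketch).

    Minimises sum over observed (i, j, k) of
        (T_ijk - v_i * u_q[j] * u_q[k] - R_ijk)^2
    over the vector v, where R is the contribution of the fixed components
    [(sigma_l, u_l) for l != q]. The problem separates over i, giving a
    per-coordinate closed form.
    """
    R = sum(s * np.einsum("i,j,k->ijk", u, u, u) for s, u in fixed)
    W = np.outer(u_q, u_q)                       # weights u_q[j] * u_q[k]
    num = np.einsum("ijk,jk->i", mask * (P_T - R), W)
    den = np.einsum("ijk,jk->i", mask.astype(float), W ** 2)
    v = num / np.maximum(den, 1e-12)             # guard unobserved rows
    sigma_q = np.linalg.norm(v)                  # absorb the scale
    return sigma_q, v / sigma_q
```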
RTPM is a generalization of the widely used power method for computing leading singular vectors of a matrix and can approximate the largest singular vectors up to the spectral norm of the "error" tensor. Hence, the challenge is to show that the error tensor has small spectral norm (see Theorem 2.1). We perform a thresholding step similar to [4] (see Lemma A.4) after the RTPM step to ensure that the estimates we get are incoherent, which is critical for our analysis. Our analysis requires the sampled entries Ω to be independent of the current iterates u i , ∀i, which in general is not possible as u i 's are computed using Ω. To avoid this issue, we divide the given samples (Ω) into equal r · τ parts randomly where τ is the number of outer loops (see Algorithm 1). Main Result Theorem 1.1. Consider any rank-r symmetric tensor T ∈ R n×n×n with an orthogonal CP decomposition in (1) satisfying µ-incoherence as defined in (2). For any positive ε > 0, there exists a positive numerical constant C such that if entries are revealed with probability p ≥ C µ 6 r 5 σ 4 max (log n) 4 log(r T F /ε) σ 4 min n 3/2 , then the following holds with probability at least 1 − n −5 log 2 (4 √ r T F /ε): • the problem (3) has a unique optimal solution; and Note that the above result can be generalized to k-mode tensors in a straightforward manner, where exact recovery is guaranteed if, p ≥ C . However, for simplicity of notation and to emphasize key points of our proof we present our proof for 3-mode tensors only in Section 2.3. We provide a proof of Theorem 1.1 in Section 2. For an incoherent, well-conditioned, and low-rank tensor with µ = O(1) and σ min = Θ(σ max ), alternating minimization requires O(r 5 n 3/2 (log n) 4 ) samples to get within an arbitrarily small normalized error. This is a vanishing fraction of the total number of entries n 3 . Each step in the alternating minimization requires O(r|Ω|) operations, hence the alternating minimization only requires O(r|Ω| log(r T F /ε)) operations. The initialization step requires O(r c |Ω|) operations for some positive constant numerical c. When r n, the computational complexity scales linearly in the sample size up to a logarithmic factor. A fiber in a third order tensor is an n-dimensional vector defined by fixing two of the axes and indexing over remaining one axis. The above theorem implies that among n 2 fibers of the form {T [I, e j , e k ]} j,k∈[n] , it is sufficient to have only O(n 3/2 (log n) 4 ) fibers with any samples. Most of the fibers are not sampled at all and, perhaps surprisingly, our approach can still recover the original low-rank tensor. This should be compared to the matrix completion setting where all fibers are required to have at least one sample. However, unlike matrices, the fundamental limit of higher order tensor completion is not known. Building on the percolation of Erdös-Renýi graphs and the coupon-collectors problem, it is known that matrix completion has multiple rank-r solutions when the sample size is less than Cµrn log n [6], hence exact recovery is impossible. But, such arguments do not generalize directly to higher order; see Section 2.5 for more discussion. Interestingly, simulations in Section 1.3 suggests that for r = O( √ n), the sample complexity scales as (r 1/2 n 3/2 log n). That is, assuming the sample complexity provided by simulations is correct, our result achieves optimal dependence on n (up to log factors). However, the dependency on r is sub-optimal (see Section 2.5 for a discussion). 
Empirical Results

Theorem 1.1 guarantees exact recovery when p ≥ Cr^5(log n)^4/n^{3/2}. Numerical experiments show that the average recovery rate converges to a universal curve over α, where p* = α r^{1/2} ln n/((1 − ρ)n^{3/2}) (Figure 1). Our bound is thus tight in its dependence on n up to a poly-logarithmic factor, but loose in its dependence on the rank r. Further, the method is able to recover the original tensor exactly even when the factors are not strictly orthogonal. We generate orthogonal matrices U = [u_1, ..., u_r] ∈ ℝ^{n×r} uniformly at random, with n = 50 and r = 3 unless specified otherwise. For a rank-r tensor T = Σ_{i=1}^r u_i ⊗ u_i ⊗ u_i, we randomly reveal each entry with probability p. A tensor is declared exactly recovered if the normalized root mean squared error, RMSE = ‖T̂ − T‖_F/‖T‖_F, is less than 10^{-7}. Varying n and r, we plot the recovery rate averaged over 100 instances as a function of α. The number of degrees of freedom in representing a symmetric rank-r tensor is rn − r^2; hence for large r the number of samples must scale at least linearly in r, and the current dependence p* = O(√r) can only hold for r = O(n). For factors that are not strictly orthogonal, the algorithm remains robust. A more robust approach for finding the initial guess could improve the performance significantly, especially for non-orthogonal tensors.

Related Work

Tensor decomposition and completion: The CP model proposed in [7, 8, 9] is a multidimensional generalization of the singular value decomposition of matrices. Computing the CP decomposition involves two steps: first, apply a whitening operator to the tensor to obtain a lower-dimensional tensor with an orthogonal CP decomposition (such a whitening operator exists only when r ≤ n); then, apply known power-method techniques for exact orthogonal CP decomposition [1]. We use this algorithm, as well as its analysis, for the initialization step of our algorithm. For motivation and examples of orthogonal CP models we refer to [10, 1]. Recently, many heuristics for tensor completion have been developed, such as weighted least squares [11], Gauss-Newton [12], alternating least squares [13, 14], and trace norm minimization [15]. However, to the best of our knowledge, there is no tensor completion method with provable guarantees. In a different context, [16] showed that minimizing a weighted trace norm of the flattened tensor provides exact recovery using O(rn^{3/2}) samples, but there each observation is a dense random projection of the tensor, as opposed to a single entry, which is the case in the tensor completion problem.

Relation to matrix completion: Matrix completion has been studied extensively in the last decade, since the seminal paper by Candès and Recht [17]. Since then, several provable approaches have been developed, such as nuclear norm minimization [17], OptSpace [18], and alternating minimization [4]. However, several aspects of tensor factorization make it challenging to adopt matrix completion algorithms and analysis techniques directly. First, there is no natural convex surrogate of the tensor rank function, and developing such a surrogate is in fact a topic of active research [19, 16]. Next, even when all entries are revealed, tensor decomposition methods such as simultaneous power iteration are known to get stuck at local extrema, making it challenging to apply matrix decomposition methods directly. Third, for the initialization step, the best low-rank approximation of a matrix is unique and finding it is trivial; for tensors, however, finding the best low-rank approximation is notoriously difficult.
On the other hand, some aspects of tensor decomposition make it possible to prove stronger results. Matrix completion aims to recover only the underlying matrix, since the factors are not uniquely defined due to invariance under rotations. For orthogonal CP models, however, we can hope to recover the individual singular vectors u_i exactly. In fact, Theorem 1.1 shows that our method indeed recovers the individual singular vectors exactly.

Spectral analysis of tensors and hypergraphs: Theorem 2.1 and Lemma 2.2 should be compared with the copious line of work on the spectral analysis of matrices [20, 18], an important motivation of which is the development of fast algorithms for low-rank matrix approximation. We prove an analogous guarantee for higher-order tensors and provide a fast algorithm for low-rank tensor approximation. Theorem 2.1 is also a generalization of the celebrated results of Friedman-Kahn-Szemerédi [21] and Feige-Ofek [22] on the second eigenvalue of random graphs: we provide an upper bound on the largest second eigenvalue of a random hypergraph, where each edge includes three nodes and each of the n^3 possible edges is selected with probability p.

Analysis of the Alternating Minimization Algorithm

In this section, we provide a proof of Theorem 1.1 together with proof sketches of the main technical theorems; formal proofs of the technical theorems and lemmas are provided in the appendix. There are two key components: (a) the analysis of the initialization step (Section 2.1), and (b) the convergence of alternating minimization given a sufficiently accurate initialization (Section 2.2). We use these two analyses to prove Theorem 1.1 in Section 2.3.

Initialization Analysis

We first show that (1/p)P_Ω(T) is close to T in spectral norm, and use this to bound the error of the robust power method applied directly to P_Ω(T). The normalization by 1/p compensates for the fact that many entries are missing. For a proof of this theorem, we refer to Appendix A.

Theorem 2.1 (Initialization). For p = α/n^{3/2} with α ≥ log n, there exists a positive constant C > 0 such that, with probability at least 1 − n^{-5},

(1/(T_max n^{3/2} p)) ‖P_Ω(T) − p T‖_2 ≤ C (log n)^2/√α. (5)

Notice that T_max is the maximum entry of the tensor T, and the factor 1/(T_max n^{3/2} p) corresponds to normalization by the worst-case operator norm of pT, since ‖pT‖_2 ≤ T_max n^{3/2} p, with the maximum achieved by T = T_max (1 ⊗ 1 ⊗ 1). The theorem guarantees that O(n^{3/2}(log n)^2) samples are sufficient to ensure that we get an arbitrarily small error; a formal proof is provided in the appendix. Together with the analysis of the robust tensor power method [1, Theorem 5.1], the next error bound follows by directly substituting (5) and using the fact that for incoherent tensors T_max ≤ σ_max µ(T)^3 r/n^{3/2}. Notice that the estimates can be computed efficiently, requiring only O(log r + log log α) iterations, each iteration requiring O(αn^{3/2}) operations; this is close to the time required to read the |Ω| ≈ αn^{3/2} samples. One caveat is that we need to run the robust power method poly(r log n) times, each time with a fresh random initialization.

Lemma 2.2. For a µ-incoherent tensor with orthogonal decomposition T = Σ_{ℓ=1}^r σ*_ℓ (u*_ℓ ⊗ u*_ℓ ⊗ u*_ℓ) ∈ ℝ^{n×n×n}, there exist positive numerical constants C, C′ such that when α ≥ C (σ_max/σ_min)^2 r^5 µ^6 (log n)^4, running C′(log r + log log α) iterations of the robust tensor power method applied to P_Ω(T) achieves estimates within the initialization accuracy required by the convergence analysis below (Theorem 2.3).
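The text describes RTPM as a generalization of the matrix power method; its core update is u ← T[I, u, u]/‖T[I, u, u]‖. The sketch below shows only this basic iteration, applied to the rescaled observed tensor; the robustness ingredients of [1] (multiple random restarts and deflation) are deliberately omitted here.

```python
import numpy as np

def tensor_power_iteration(T, n_iters=100, seed=0):
    """Basic tensor power update u <- T[I, u, u] / ||T[I, u, u]||, the core
    of the robust tensor power method (restarts/deflation omitted)."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iters):
        v = np.einsum("ijk,j,k->i", T, u, u)      # T[I, u, u]
        u = v / np.linalg.norm(v)
    sigma = np.einsum("ijk,i,j,k->", T, u, u, u)  # Rayleigh-quotient analogue
    return sigma, u

# For initialization one would run this on (1/p) * P_Omega_T.
```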
Alternating Minimization Analysis

We now provide a convergence analysis for the alternating minimization part of Algorithm 1 for recovering a rank-r tensor T. Our analysis assumes that ‖u_i − u*_i‖_2 ≤ c σ_min/(r σ_max) for all i, where c is a small constant (depending on r and the condition number of T). This assumption can be satisfied using our initialization analysis, provided Ω is large enough. At a high level, our analysis shows that each step of Algorithm 1 ensures geometric decay of a distance function (specified below) which is "similar" to max_i ‖u_i − u*_i‖_2. Writing u_q^{t+1} = u*_q + d_q^{t+1}, we define a distance function d_∞({u_i}, {u*_i}) over the component errors. The next theorem shows that this distance function decreases geometrically with the number of iterations of Algorithm 1; a proof is provided in Appendix B.4. For the estimate T̂^t at the t-th iteration, the fit error ‖P_Ω(T − T̂^t)‖_F/‖P_Ω(T)‖_F closely tracks the normalized root mean squared error ‖T − T̂^t‖_F/‖T‖_F, suggesting that it serves as a good stopping criterion. Note that the number of samples we require depends on the number of iterations τ; but thanks to linear convergence, the sample complexity increases only by a factor of log(1/ε), where ε is the desired accuracy.

Difference from matrix AltMin: Here we highlight the differences between our analysis and the analysis of alternating minimization for matrix completion (matrix AltMin) [4, 5]. In the matrix case, the singular vectors u*_i need not be unique; hence the analysis is required to guarantee decay of a subspace distance dist(U, U*), typically a principal-angle-based subspace distance. In contrast, orthonormal u*_i's uniquely define the tensor, and hence one can obtain distance bounds on the individual components. On the other hand, an iteration of matrix AltMin updates all the vectors u_i, 1 ≤ i ≤ r, at once, where r is the rank of the current iterate, and hence need not consider the error in the estimates of the fixed components U_{[r]∖q} = {u_ℓ : ℓ ≠ q}; handling these errors is a key challenge in the analysis of Algorithm 1 and requires a careful decomposition of, and bounds on, the error terms.

To ensure a sufficiently incoherent initial iterate, we perform the thresholding proposed in [4]: we threshold all elements of u_i^0 (obtained from the RTPM, see Step 3 of Algorithm 1) that are larger (in magnitude) than µ/√n, replacing them by sign(·)·µ/√n, and then re-normalize to obtain ū_i. Using Lemma A.4, this procedure ensures that the obtained initial estimates satisfy the two criteria required by Theorem 2.3. With this initialization, Theorem 2.3 tells us that O(log_2(4√r‖T‖_F/ε)) iterations (each requiring O(r|Ω|) operations) are sufficient to achieve the desired component accuracy for all q ∈ [r], with probability at least 1 − n^{-7} log_2(4√r‖T‖_F/ε). The desired bound follows from the next lemma with the choice ε̃ = ε/(4√r‖T‖_F); for a proof we refer to Appendix B.6.
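The thresholding (clipping) step described above is straightforward; a minimal sketch, following the rule stated in the text:

```python
import numpy as np

def threshold_to_incoherent(u, mu):
    """Clip the entries of a unit vector at mu/sqrt(n) in magnitude
    (i.e. replace them by sign(u_j) * mu/sqrt(n)) and re-normalize,
    as in the post-RTPM thresholding step (cf. Lemma A.4)."""
    cap = mu / np.sqrt(u.size)
    v = np.clip(u, -cap, cap)
    return v / np.linalg.norm(v)
```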
Fundamental limit and random hypergraphs

For matrices, it is known that exact matrix completion is impossible if the underlying graph is disconnected. A refinement of this analysis for Erdős–Rényi graphs provides a lower bound for matrix completion: when the sample size is less than Cµrn log n, no algorithm can recover the original matrix [6]. For tensor completion and random hypergraphs, however, no such simple connection exists, and it is not known how the properties of the hypergraph relate to recovery. In this spirit, rank-one third-order tensor completion has been studied in the specific context of MAX-3LIN problems. Consider a series of linear equations over n binary variables x = [x_1 ... x_n] ∈ {±1}^n. An instance of a 3LIN problem consists of a set of linear equations over GF(2), each involving exactly three variables, e.g. x_i ⊕ x_j ⊕ x_k = b_{ijk}. We use −1 to denote true (i.e., 1 in GF(2)) and +1 to denote false (i.e., 0 in GF(2)); the exclusive-or operation ⊕ then becomes integer multiplication. The MAX-3LIN problem is to find a solution x that satisfies as many of the equations as possible. This is an NP-hard problem in general, and hence random instances of the problem with a planted solution have been studied [23]. Algorithm 1 provides a provable guarantee for MAX-3LIN with random equations: notice that the corresponding tensor has incoherence one and rank one, which implies exact reconstruction for p ≥ C(log n)^4/n^{3/2}. This significantly improves over the message-passing approach to MAX-3LIN in [23], which is guaranteed to find the planted solution for p ≥ C(log log n)^2/(n log n). It was suggested that a new notion of connectivity, called propagation connectivity, is a sufficient condition for the solution of a random MAX-3LIN problem with a planted solution to be unique [23, Proposition 2]. Precisely, it is claimed that if the hypergraph corresponding to an instance of MAX-3LIN is propagation connected, then the optimal solution for MAX-3LIN is unique and there is an efficient algorithm that finds it. However, the example in (6) is propagation connected yet has no unique solution: several assignments satisfy all the equations, and the corresponding tensor T = x ⊗ x ⊗ x is not uniquely defined either. This proves that propagation connectivity is not a sufficient condition for uniqueness of the MAX-3LIN solution.

Open Problems and Future Directions

Tensor completion for non-orthogonal decompositions. Numerical simulations suggest that non-orthogonal CP models can be recovered exactly (without the usual whitening step). It would be interesting to analyze our algorithm under a non-orthogonal CP model. We note, however, that even for fully observed tensors, exact factorization is known only for orthonormal tensors. Given that our method guarantees not only completion but also factorization of the tensor (which is essential for large-scale applications), it is natural that it requires a similar condition.

Optimal dependence on r. The numerical results suggest a threshold sample size scaling as √r. This is surprising, since the number of degrees of freedom in describing a CP model scales linearly in r, which implies that the √r scaling can hold only for r = O(√n). In comparison, for matrix completion the threshold is known to scale as r. It would be important to understand why this change in the dependence on r occurs for higher-order tensors, and to identify how the scaling depends on k for k-th order tensor completion.
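As a numerical illustration of the rank-one MAX-3LIN setting discussed above, the snippet below plants a solution x ∈ {±1}^n, reveals a fraction of the entries of T = x ⊗ x ⊗ x, and recovers x with a masked power-iteration heuristic. This heuristic is our simplified stand-in for Algorithm 1 in the rank-one special case, with no convergence guarantee implied.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 40, 0.5

# Planted instance in the +-1 encoding: observed entries are the
# right-hand sides x_i * x_j * x_k of the sampled equations.
x = rng.choice([-1.0, 1.0], size=n)
T = np.einsum("i,j,k->ijk", x, x, x)        # rank one, incoherence one
mask = rng.random((n, n, n)) < p
P_T = np.where(mask, T, 0.0)

# Masked power-iteration heuristic for the leading rank-one component.
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
for _ in range(50):
    u = np.einsum("ijk,j,k->i", P_T, u, u)
    u /= np.linalg.norm(u)

x_hat = np.sign(u)
x_hat *= np.sign(x_hat @ x)                 # guard against a global sign flip
print("planted solution recovered:", np.array_equal(x_hat, x))
```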
A Proof of Theorem 2.1 for Initialization Analysis

We prove the following bound on the spectrum of random tensors:

‖P_Ω(T) − p T‖_2 ≤ C T_max (log n)^2 (p (n_1 n_2 n_3)^{1/2})^{1/2}.

Here we prove the theorem in the general case where T is not symmetric and may even have different dimensions n_1, n_2, and n_3. Inspired by [21, 18], our strategy is as follows: (1) reduce to x, y, and z belonging to discretized sets S̄_{n_1}, S̄_{n_2}, and S̄_{n_3}; (2) bound the contribution of the light triples using concentration of measure; (3) bound the contribution of the heavy triples using the discrepancy property of a random tripartite hypergraph. Define a discretization of the n-dimensional ball as S̄_n ≡ {x ∈ (∆/√n)ℤ^n : ‖x‖ ≤ 1}, so that S̄_n ⊆ S_n ≡ {x ∈ ℝ^n : ‖x‖ ≤ 1}. Later we will set ∆ to be a small enough constant. It is therefore enough to show that the bound holds for all discretized vectors x, y, and z. One caveat is that such a probabilistic bound must hold with probability sufficiently close to one, so that we can apply the union bound over all discretized choices of x, y, and z. The following lemma bounds the number of such choices.

Lemma A.2 ([18]). The size of the discretized set is bounded by |S̄_n| ≤ (10/∆)^n.

A naive approach to upper bounding (P_Ω(T) − pT)[x, y, z] would be to consider it as a random variable and apply concentration inequalities directly. However, this naive approach fails, since x, y, and z can contain entries that are much larger than their typical value of O(1/√n). We therefore separate the analysis into two contributions, applying concentration inequalities to bound the contribution of the light triples and using the graph topology of the random sampling to bound the contribution of the heavy triples. Define the light triples as

L ≡ {(i, j, k) : |T_{ijk} x_i y_j z_k| ≤ T_max (ℓ/(n_1 n_2 n_3))^{1/2}},

and the heavy triples as its complement L̄. Later we will set the appropriate value ℓ = Θ(p√(n_1 n_2 n_3)). We can then write the two contributions to (P_Ω(T) − pT)[x, y, z] separately, over L and over L̄. We will prove that both contributions are upper bounded by C T_max (log n)^2 (p(n_1 n_2 n_3)^{1/2})^{1/2} with some positive constant C, for all x ∈ S̄_{n_1}, y ∈ S̄_{n_2}, and z ∈ S̄_{n_3}. The bound on the light triples follows from Chernoff-type concentration inequalities; the bound on the heavy triples follows from the discrepancy property of random hypergraphs, which implies that there cannot be too many triples with large contributions. Theorem 2.1 then follows from Lemma A.1 with an appropriate choice of ∆ = Θ(1).

We first show that the mean of the light contribution Z is bounded, using the fact that heavy triples satisfy |T_{ijk} x_i y_j z_k| ≥ T_max/(n_1 n_2 n_3). We next show concentration of Z around its mean. Let λ = √(n_1 n_2 n_3)/(2 T_max √ℓ), so that |λ T_{ijk} x_i y_j z_k| ≤ 1/2 for all (i, j, k) ∈ L; then e^{λ T_{ijk} x_i y_j z_k} ≤ 1 + λT_{ijk}x_iy_jz_k + (λT_{ijk}x_iy_jz_k)^2. Since p = ℓ/√(n_1 n_2 n_3), this proves that the contribution of the light triples is bounded by C′ T_max √ℓ with high probability. Note that in the regime p = ℓ/n^2, the contribution of the light triples is smaller still, while the contribution of the heavy triples remains Ω(1) relative to it, dominating the light triples by a factor of √n; this is the reason for the choice p = Θ(ℓ/n^{1.5}).

A.2 Bounding the contribution of heavy triples

The contribution of the heavy triples is bounded by a quantity that depends only on the sampling pattern. In the following, we show that this quantity is upper bounded by C T_max √ℓ (log n)^2 for some positive numerical constant C > 0, with probability larger than 1 − n^{-5}. We consider a tripartite hypergraph G = (V_1 ∪ V_2 ∪ V_3, E): given a sampling of the entries of a tensor, the edges of G mark the positions of the sampled entries. The proof is a generalization of similar proofs for matrices in [21, 22, 18] and is based on two properties of the hypergraph G. Define the degree of a node as the number of edges containing it, deg_1(i) ≡ |{(i, j, k) ∈ E}|, and similarly define deg_2(j) and deg_3(k). Define the degree of a pair of nodes as the number of edges containing both, deg_12(i, j) ≡ |{(i, j, k) ∈ E}|, and similarly define deg_13(i, k) and deg_23(j, k).

1. Bounded degree property.
1. Bounded degree property. A hypergraph G satisfies the bounded degree property if the node and pair degrees defined above are upper bounded by a constant multiple of their expected values, for some positive numerical constant ξ_0 > 0 (independent of n_1, n_2, n_3, and p), where p = |E|/(n_1 n_2 n_3).

2. Discrepancy property. A hypergraph G satisfies the discrepancy property if for any subsets of nodes A_1 ⊆ V_1, A_2 ⊆ V_2, A_3 ⊆ V_3, at least one of the following is true: either e(A_1, A_2, A_3) is at most a constant multiple of ē(A_1, A_2, A_3), or e(A_1, A_2, A_3) is suitably small relative to the sizes of the subsets, for some positive numerical constants ξ_1, ξ_2 > 0 (independent of n_1, n_2, n_3, and p). Here, e(A_1, A_2, A_3) denotes the number of edges between the three subsets A_1, A_2, and A_3, and ē(A_1, A_2, A_3) ≡ p |A_1| |A_2| |A_3| denotes the average number of edges between the three subsets.

We first prove that if the sampling pattern is defined by a graph G which satisfies both the bounded degree and discrepancy properties, then the contribution of heavy triples is O(√ε). Notice that this is a deterministic statement that holds for all graphs with the above properties. We then finish the proof by showing that the random sampling satisfies both the bounded degree and discrepancy properties with probability at least 1 − n^{−5}. We bin the coordinates of x, y, and z according to their magnitudes and denote the size of each bin by a_u; note that since the vectors have norm at most one, the bin sizes are controlled. The contributions from the various combinations of bins (u, v, w) utilize various subsets of our assumptions. We prove that in each case the contribution is O(√ε (log n)^2), as follows. For ε ≥ log n, this proves that the contribution of the heavy triples is O(√ε (log n)^2).

We are left to prove that the bounded degree and the bounded discrepancy properties hold for a random tripartite hypergraph G = (V_1 ∪ V_2 ∪ V_3, E), where each edge is selected with probability p. Precisely, let n = max{|V_1|, |V_2|, |V_3|}; then the following lemma provides a bound on the degree and discrepancy with high probability.

Lemma A.3. For any δ ∈ [0, 1/e] and p ≥ (1/n^2) log n, there exist numerical constants C, C' > 0 such that a random tripartite hypergraph satisfies the bounded degree property for all i ∈ V_1, j ∈ V_2, and k ∈ V_3, and the bounded discrepancy property for all subsets A_1 ⊆ V_1, A_2 ⊆ V_2, and A_3 ⊆ V_3, in the sense that at least one of the discrepancy conditions holds.

A.3 Proof of the bounded degree and discrepancy properties in Lemma A.3

We first prove that the bounded degree properties of (11) hold with probability at least 1 − δ. Applying a standard concentration inequality, e.g. Bernstein's inequality, for n sufficiently large, and taking the union bound over all choices of i, j, and k, the degrees deg_1(i), deg_2(j), and deg_3(k) are uniformly bounded with probability at least 1 − δ/2. Applying the union bound over all choices of (i, j), (i, k), and (j, k), we get that the pair-degree bound also holds uniformly with probability at least 1 − δ/2.

Let us assume, without loss of generality, that a_1 ≤ a_2 ≤ a_3, where a_m = |A_m|. We divide the analysis into two cases depending on the size of the smallest subset. When at least two of the subsets are large, i.e. a_2 = Ω(n) and a_3 = Ω(n), then by the bounded degree property we can prove that (12) holds. However, when a_1 and a_2 are small, e.g. O(1), then the first discrepancy condition no longer holds, and we need a different technique to show concentration.

Case 1. When a_1 ≥ n_1/e. We use the following Chernoff-type bound on a sum of indicator variables deviating from its mean: P(e(A_1, A_2, A_3) ≥ t ē) ≤ exp(−ē t ln(t)/3), where we denote ē(A_1, A_2, A_3) by ē; this holds for t ≥ 4. For the bound to hold with probability at least 1 − δ after a union bound over all choices of the subsets, we require the number of such choices times the tail probability to be at most δ. Equivalently, a_1 ln(en_1/a_1) + a_2 ln(en_2/a_2) + a_3 ln(en_3/a_3) + ln(n_1 n_2 n_3/δ) ≤ ē t ln(t)/3.
In the regime of parameters where t ≤ 4, we instead have e(A_1, A_2, A_3) ≤ 4 ē(A_1, A_2, A_3) with probability at least 1 − δ, so the bounded discrepancy condition, in particular the first one, holds.

A.4 Proof of Thresholding

Lemma A.4. Let u_ℓ, 1 ≤ ℓ ≤ r, be such that ||u_ℓ − u*_ℓ||_2 ≤ α, where α < 1/4. Also, let u*_ℓ, 1 ≤ ℓ ≤ r, be µ-incoherent unit vectors. Now define û_ℓ by truncating the entries of u_ℓ that exceed the incoherence level in magnitude, and let ũ_ℓ = û_ℓ/||û_ℓ||_2. Then ||ũ_ℓ − u*_ℓ||_2 ≤ 3α for all 1 ≤ ℓ ≤ r, and each ũ_ℓ is 2µ-incoherent.

B Rank-2 Analysis of Alternating Minimization

In this section, we provide a convergence analysis of Algorithm 1 for the special case of a rank-2 orthonormal tensor T with equal singular values, i.e. T = u*_1 ⊗ u*_1 ⊗ u*_1 + u*_2 ⊗ u*_2 ⊗ u*_2 with u*_1, u*_2 ∈ R^n. The purpose of this example is to highlight the proof ideas, and we fix σ_1, σ_2 to both be one at each step of Algorithm 1 for simplicity. The following theorem proves the desired linear convergence. Let [u^t_1, u^t_2] denote the current estimate at the t-th iteration of Algorithm 1. For brevity, we drop the superscript indexing time and let [u_1, u_2] denote [u^t_1, u^t_2] whenever it is clear from the context.

Theorem B.1. If u_1 and u_2 are 2µ-incoherent, then there exists a positive constant C such that for p ≥ C µ^3 log^2 n / n^{1.5} the claimed contraction holds (w.p. ≥ 1 − log(1/ε)/n^8).

Proof. We claim that, with probability at least 1 − 1/n^8, the error ||u^{t+1}_i − u*_i|| contracts by a constant factor for both i ∈ {1, 2}. This proves the desired bound. Incoherence of [u^{t+1}_1, u^{t+1}_2] follows from Lemma B.2. Without loss of generality, we only prove the claim for i = 1. Recall that u^{t+1}_1 is the solution of the least squares problem in Step 11 of Algorithm 1, and can be written as in (16). Note that the update can be written in vector form, where B, C, F, G are all diagonal matrices whose entries are given in (18). Let u^{t+1}_1 be decomposed into the desired component and error terms as in (17). We separate the analysis for each of the error terms. Using Lemma B.3, we bound the first term. Setting p ≥ C µ^3 log^2 n/(γ^2 n^{3/2}) for a γ to be chosen appropriately later, and using Lemma B.7 and Lemma B.5, we obtain a bound on the second term (w.p. ≥ 1 − 2/n^9). Similarly, using Lemma B.4 and p ≥ C µ^3 log^2 n/(γ^2 n^{3/2}), we obtain a bound on the third term (w.p. ≥ 1 − 1/n^9). To upper bound the total error, it follows from (20), (21), and (22) that the error terms combine (w.p. ≥ 1 − 10/n^9). Setting γ ≤ 1/200, and for d_∞([u_1, u_2], [u*_1, u*_2]) ≤ 1/200 as per our assumption, this proves the desired bound.

B.2 Technical lemmas for rank-two analysis

The next lemma shows that all our estimates are 2µ-incoherent, which in turn allows us to bound the error in the above proof effectively. Note that the incoherence of the updates does not increase beyond a global constant (2µ).

Lemma B.2. Let u^{t+1}_1 be obtained by the update (16) and let ũ^{t+1}_1 = u^{t+1}_1/||u^{t+1}_1||. Under the hypotheses of Theorem B.1, ũ^{t+1}_1 is 2µ-incoherent with probability at least 1 − 1/n^9.

Proof. Using (16) and the definitions of B, C, F, G given in (18), we obtain a chain of inequalities in which the second inequality follows from bounds on B_ii, C_ii, F_ii, and G_ii obtained using Lemma B.5 and the distance bound. Next, we bound the first error term in (17).

B.4 Proof of Theorem 2.3 and general rank-r analysis of alternating minimization

Proof. We prove the theorem by showing, for all q, that the error of the q-th factor contracts. The update for u^{t+1}_q is given by the corresponding least squares solution. It can be written as a vector update, where B_ℓ, C_ℓ, F_ℓ, G_ℓ are all diagonal matrices such that F_ℓ(i, i) = Σ_{j,k} δ_{ijk} u_q(j) u_q(k) u*_ℓ(j) u*_ℓ(k), and G_ℓ(i, i) = Σ_{j,k} δ_{ijk} u_q(j) u_q(k) u_ℓ(j) u_ℓ(k). The second part of the theorem follows directly from Lemma B.9.

Proof. Using (31) and the definitions of B, C, F, G given in (33), we obtain a chain of inequalities in which the second inequality follows from bounds on B_ii, C_ii, F_ii, and G_ii obtained using Lemma B.6 and the distance bound d_∞([u_1, u_2], [u*_1, u*_2]).
The lemma now follows using (44) and the bound on |σ^{t+1}_q − σ*_q| given by (43). Similarly, applying the Cauchy-Schwarz inequality, we get the corresponding cross-term bound for a ≠ b. It follows that
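To make the structure of these least squares updates concrete, the following minimal sketch runs alternating minimization for a rank-one tensor with missing entries, together with an entrywise clipping step in the spirit of Lemma A.4. The names are ours, the random initialization is a crude stand-in for the spectral initialization step, and the code illustrates the diagonal normal equations behind the B, C, F, G bookkeeping rather than reproducing Algorithm 1 verbatim.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 0.3
u_star = rng.standard_normal(n)
u_star /= np.linalg.norm(u_star)                   # planted unit factor
mu = np.sqrt(n) * np.abs(u_star).max()             # its incoherence parameter
T = np.einsum('i,j,k->ijk', u_star, u_star, u_star)
delta = (rng.random((n, n, n)) < p).astype(float)  # observation pattern delta_ijk

def ls_update(v, w):
    """Least squares update for the first factor with the other two fixed:
    u(i) = sum_jk delta_ijk T_ijk v(j) w(k) / sum_jk delta_ijk v(j)^2 w(k)^2,
    i.e. the diagonal normal equations behind the B, C, F, G matrices."""
    num = np.einsum('ijk,ijk,j,k->i', delta, T, v, w)
    den = np.einsum('ijk,j,k->i', delta, v**2, w**2)
    return num / np.maximum(den, 1e-12)

def clip_incoherent(u, mu_level):
    """Truncate entries above the incoherence level, then renormalize
    (the thresholding of Lemma A.4, in spirit)."""
    bound = mu_level / np.sqrt(len(u))
    u = np.clip(u, -bound, bound)
    return u / np.linalg.norm(u)

u = clip_incoherent(rng.standard_normal(n), mu)    # stand-in for spectral init
for _ in range(20):
    u = clip_incoherent(ls_update(u, u), 2 * mu)

print("alignment |<u, u*>| =", abs(u @ u_star))    # should approach 1 here
```

The clipping keeps every iterate 2µ-incoherent, which is exactly the role the thresholding lemma plays in the convergence proof above.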
2014-06-11T05:51:54.000Z
2014-06-11T00:00:00.000
{ "year": 2014, "sha1": "336097e41c452c53b6f58cd63de6941d093238da", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "336097e41c452c53b6f58cd63de6941d093238da", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
55040554
pes2o/s2orc
v3-fos-license
Green Transformational Leadership and Green Performance: The Mediating Role of Green Mindfulness and Green Self-efficacy

Green transformational leadership is deemed a crucial element in increasing the green performance of organizations. The purpose of this study is to examine the impact of green transformational leadership on green performance by considering the mediating effects of green self-efficacy and green mindfulness. The study is descriptive and quantitative in nature. A survey questionnaire method was used, and data were collected from 200 respondents through a simple random sampling technique. After applying the required tests in AMOS and SPSS, the findings revealed that green transformational leadership has a significant and positive influence on green performance. Moreover, the results also showed that green mindfulness and green self-efficacy significantly and partially mediate the relationship between green transformational leadership and green performance. Implications, suggestions, and limitations for further research are also included at the end of this article.

INTRODUCTION

Nowadays, environmentalism has become prominent because of disastrous environmental pollution and global warming; therefore, firms are trying to create green, innovative environments (Chen, 2011) [7]. Green innovation is now a powerful competitive tool because customers are concerned about the environment and prefer environmentally friendly products in the market (Chen and Chang, 2012) [9]. Organizations should adopt green innovation for various strategic reasons as well as to satisfy the environmental needs of the market (Chen, 2008 [8]; Sheu, 2014) [29]. Green transformational leadership is crucial for the improvement of innovation (Elkins and Keller, 2003). Earlier studies paid little attention to the effect of green transformational leadership on green performance, and also to the effects of green self-efficacy and green mindfulness on that relationship. Previous researchers relied on survey methods that yielded only cross-sectional data, so they were unable to examine dynamic changes in green transformational leadership (GTL), green mindfulness (GM), green self-efficacy (GSE), and green performance (GP) at various levels. Where GTL is lacking in an organization, its GP cannot be increased, which contributes to disastrous environmental pollution and global warming in society. Because the effects of GM and GSE on the GTL-GP relationship have received little attention, green performance has not improved. In a global framework, this study is relevant well beyond the manufacturing sector, including banking, hospitals, education, and NGOs. In Pakistan, green transformational leadership is very important because environmentalism is increasing; unfortunately, work on this topic there is limited, even though the topic has great relevance, so we carried out this study in Pakistan. The purpose of the current study is to evaluate the impact of green transformational leadership on green performance in the presence of green self-efficacy and green mindfulness.

Research Objectives

 To investigate the relationship between green transformational leadership and green performance.
 To examine the mediating role of green mindfulness in the relationship between green transformational leadership and green performance.
 To determine the mediating role of green self-efficacy in the relationship between green transformational leadership and green performance.
Green Transformational Leadership and Green Performance

Green transformational leadership can be defined as a style of leadership in which leaders encourage their colleagues to attain environmental goals and motivate them to perform beyond expected environmental performance. Green performance, in turn, may be defined as the performance of the hardware and software involved in green process and product innovation, encompassing modern technologies for pollution prevention, energy saving, waste recycling, and corporate environmental management (Chen et al., 2006) [11]. Previous studies have done little work on the effect of green transformational leadership on green performance, so we developed a framework to cover this gap.

H1: Green transformational leadership has a positive and significant effect on green performance.

Green Transformational Leadership and Green Mindfulness

Bass (2000) [5] proposed that transformational leaders motivate their followers to perform beyond immediate self-interest through intellectual stimulation, inspirational motivation, individualized consideration, and charisma. Transformational leadership also helps in introducing new ideas by delivering inspiration, intellectual stimulation, and vision (Mumford, 2000) [26]. According to Arendt (2009) [1], transformational leaders can enhance their followers' sense of importance through inspirational motivation. In addition, inspirational motivation may help followers deliberate on and identify the meaning and satisfaction of their jobs (Chen, Chang et al., 2014). Transformational leadership can encourage followers to act beyond the daily tasks of the job by giving them a stimulating vision (Bono & Judge, 2003) [6]. A motivating vision does not just express a magnificent future; it also helps individuals perform their existing jobs (Arendt, 2009) [1]. Moreover, transformational leadership helps individuals see their position as workers in the organization from a more attentive and elevated perspective (Vogus & Sutcliffe, 2012) [32]. Therefore, transformational leadership has a positive impact on mindfulness (Chen, Chang et al., 2014). Previous studies have paid little attention to the effect of green leadership on green mindfulness, so we examine this effect in our study.

H2: Green transformational leadership has a positive and significant effect on green mindfulness.

Mediating Role of Green Mindfulness

Previous studies have found that the intentional elements of mindfulness have a positive relationship with job performance (Dane, 2011) [13]. Mindfulness has an effective influence on motivation and, in turn, on improved performance (Herndon, 2008). Mindfulness can help reduce the probability of turnover because it delivers better attention, which can improve performance and develop work-related understanding (Vogus & Sutcliffe, 2012) [32]. Employees become fully involved in their work, and this involvement is favorable for innovative performance, when they see their jobs within a greater and more significant framework (Friedman & Forster, 2001) [16]. According to Davis and Davis (2011), mindfulness improves employees' communication skills and enables them to strengthen their decision-making and problem-solving abilities, so mindfulness is helpful for innovative performance. Therefore, mindfulness should positively affect innovative performance (Chen, Chang et al., 2014). Prior literature has paid little attention to green mindfulness, so we discuss it in our study to cover this gap.
According to Langer (1997) [25], mindfulness reflects an essential human propensity for responsiveness, and it also plays an effective role in countering the negative effects of bandwagon phenomena (Chen, Chang et al., 2014). Mindfulness can be considered an effective tool for an organization to reduce ambiguous circumstances of great volatility that would otherwise have terrible consequences (Weick & Roberts, 1993) [35]. Mindfulness is a state of dynamic wakefulness in which new information is honestly conveyed, empowering members to take part in continuous creation, modification, and learning (Langer, 1997) [25]. Organizations build systematic processes for developing mindfulness in order to work in dynamic, ambiguous, and unpredictable situations; moreover, mindfulness plays a vital role in sustaining the capacity to cope with change and crises (Weick et al., 1999) [36]. Previous studies have described a positive effect of mindfulness on innovative thinking and learning (Langer, 1997). According to Kirkpatrick (1996), the reason is that mindful behavior is very beneficial for socially relevant transactions by creating an atmosphere of open-mindedness, flexibility, and engagement, so mindfulness significantly affects the performance of the organization (Chen, Chang et al., 2014). Earlier studies have largely ignored green mindfulness, so we describe it in our study to overcome this gap.

H3: Green mindfulness significantly mediates the relationship between green transformational leadership and green performance.

Green Transformational Leadership and Green Self-Efficacy

Yukl (1990) [37] explained that transformational leaders articulate a vision clearly and effectively, express confidence and optimism, specify the means of achieving the vision, actively communicate values and ideas to their followers, and empower their followers to attain goals (Chen, Chang et al., 2014). Moreover, transformational leadership provides followers with a satisfactory position and sound guidance, helping them believe that they have the ability to face challenges and encouraging them to participate actively and successfully in work-related tasks (Bass, 1990). Sosik et al. (1998) [30] further noted that transformational leadership can encourage followers and raise their disposition to act beyond expectations. Transformational leadership positively affects followers' self-efficacy by emphasizing optimistic observation, recognizing exceptional skills, and expecting outstanding performance. According to Howell and Higgins (1990) [20], transformational leaders can raise self-efficacy through their experience and abilities. Transformational leaders can also increase the self-efficacy of followers by giving immediate, confident feedback (Bandura, 2000). Gist and Mitchell (1992) [17] concluded that transformational leaders can improve their followers' self-efficacy. By setting attainable goals, connecting employees' performance to results, and clarifying principles, transformational leadership may exert a positive effect on self-efficacy (Chen, Chang et al., 2014). Previous studies have ignored the effect of green transformational leadership on green self-efficacy, so we address this in our study to cover the gap.

H4: Green transformational leadership has a positive and significant effect on green self-efficacy.
Mediating Role of Green Self-Efficacy

Self-efficacy is linked with several behavioral outcomes, such as negotiation and persistence (Bandura, 1997; Schunk, 1995) [28]. Several significant occupational consequences and aspects of job performance are predicted by self-efficacy. According to Bandura (1993), self-efficacy has a positive impact on positive feeling and thinking, goal setting, and self-regulation (Zimmerman & Bandura, 1994 [38]; Chen, Chang et al., 2014). Employees who regard themselves as highly efficacious are stimulated to exert ample effort, which helps produce outstanding outcomes (Stajkovic & Luthans, 1998) [31]. Earlier studies explained that self-efficacy is very important for improvement in performance (Gist & Mitchell, 1992) [17]. When individuals consider themselves to have a high level of self-efficacy, they create new ideas for making new products with great confidence in their abilities (Hmieleski & Baron, 2008) [19]. Hsiao et al. (2011) [21] argued that a strong sense of self-efficacy can lead to more creative behavior. Thus, self-efficacy and innovative performance have a positive relationship (Mumford et al., 2002). Previous studies have largely ignored the effect of self-efficacy in this context, so we cover this gap in our study. Self-efficacy refers to people's self-judgment of their ability to execute a certain task. Self-efficacy is the self-confidence of people in achieving job objectives by believing in their abilities (Bandura, 1993). Social cognitive theory explains that a greater level of self-efficacy boosts the confidence of employees, so that they show better performance, demonstrate greater persistence in the face of failures, and remain job-focused (Stajkovic & Luthans, 1998) [31].

Sampling and Data

This study is descriptive and quantitative in nature. A questionnaire survey method was adopted to collect data.

Measures

All the measures and instruments in this study were adopted from previously validated and reliable scales. Questionnaire items were evaluated using a 5-point Likert scale, with 5 representing "strongly agree," 4 representing "agree," 3 representing "neutral," 2 representing "disagree," and 1 representing "strongly disagree." Green transformational leadership was measured with a 6-item scale (Chen & Chang, 2013). Green mindfulness was measured using the 6-item scale developed by Chen et al. (2014). Green self-efficacy was measured by adopting the 6-item scale developed by Chen et al. (2014). Green performance was measured by adopting the 8-item scale of Chen et al. (2006).

Demographics

Most respondents (47%) were in the age group of 26-45. Following this, 43%, 9.0%, and 0.5% were in the age groups of up to 25, 46-55, and 56+, respectively. Furthermore, 38%, 48%, and 14% of respondents held Bachelor's, Master's, and PhD degrees, respectively. Moreover, 26%, 66%, and 8% of respondents were working as contractual employees, permanent employees, and others, respectively. Similarly, 25% of respondents had up to 1 year of service, and 48.5%, 19.5%, and 9% of respondents had 2-5 years, 5-10 years, and more than 10 years of service, respectively, at the visited industries.
Table 2: Psychometric Analysis

Convergent validity exists in this model: as the table above shows, the composite reliability and AVE values exceed their thresholds of 0.8 and 0.5, respectively. Discriminant validity relies on the square roots of the AVE values, which need to be greater than the correlation values. In the table, the correlation values are lower than the square roots of the AVE values; therefore, the model also demonstrates discriminant validity.

Table 3: Fit Indices for CFA & SEM

The results in Table 3 show the fitness of the model obtained from both the CFA and SEM analyses. First is the goodness-of-fit index (GFI), which is based on the variance-covariance matrix; as its value is greater than 0.90, it indicates a good fit for the model. The AGFI is the adjusted GFI; its value is greater than 0.8, which is comparatively good and shows that the model fits well. The comparative fit index (CFI) demonstrates more precise values; its value of more than 0.90 indicates that the model is close to a complete fit. The root mean square error of approximation (RMSEA) has a value lower than 0.10, which also represents good model fit. The NFI, PGFI, and PNFI likewise show that, in statistical terms, the model of the current study fits well, because their values are in the acceptable range.

The results demonstrate a positive and significant relationship between green transformational leadership and green performance (R^2 = 49%, p < .05). The analysis also indicates that 49% of the variance in green performance is explained by green transformational leadership. The p-value for the beta coefficient of green transformational leadership is .000, which is significant at the 5% level of significance; this means that the beta value of 2.58 is statistically significant.

CONCLUSION AND DISCUSSION

The main objective of the current study was to examine the connection between green transformational leadership and green performance through the mediating roles of green self-efficacy and green mindfulness. The results show that green transformational leadership has a significant impact on green self-efficacy (H4). These findings are supported by previous research showing a significant impact of green transformational leadership on green self-efficacy (Yukl, 1990; Kirkpatrick, 1996; Chang et al., 2014). Similarly, H7 revealed that green self-efficacy significantly and positively mediates the relationship between green transformational leadership and green performance, and this relationship was confirmed by the results. This finding is supported by previous studies showing the positive and significant role of green self-efficacy in the relationship between green transformational leadership and green performance (Bandura, 1997).

Practical Implications

The practical implication of this study is that if organizations want to raise their green performance, they should adopt the concepts of green transformational leadership, green mindfulness, and green self-efficacy in their innovation development processes, so that the company's turnover will increase. Other organizations can also use this study to raise employees' mindfulness as well as their performance; by doing so, the employee turnover ratio will decrease and performance will increase. This research study is not without limitations. One limitation concerns the geographic area covered by the study.
Moreover, we used a Likert-type scale in the study, which created problems because responses to the questionnaire depended on the scale format. The questionnaire survey method used here yielded cross-sectional data; future researchers could focus on longitudinal studies. Some respondents gave extreme responses, while others answered the questions very cautiously. Regarding future research, this study could be applied in other sectors, such as the banking, education, and health sectors, rather than only the industrial (manufacturing) sector. This study was conducted in Pakistan, so future research could be conducted in other countries.
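To make the reported validity checks concrete, the following minimal sketch (with hypothetical loadings, not the study's data) shows how composite reliability, AVE, and the Fornell-Larcker discriminant-validity comparison discussed above are typically computed:

```python
import numpy as np

# Hypothetical standardized factor loadings for one construct (illustrative only).
loadings = np.array([0.78, 0.81, 0.74, 0.79, 0.76, 0.80])
errors = 1 - loadings**2                       # item error variances

# Composite reliability: (sum of loadings)^2 / ((sum)^2 + sum of error variances).
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

# Average variance extracted: mean of the squared loadings.
ave = np.mean(loadings**2)

print(f"CR  = {cr:.2f}  (convergent validity supported if above 0.8)")
print(f"AVE = {ave:.2f} (convergent validity supported if above 0.5)")

# Fornell-Larcker criterion: the square root of each construct's AVE should
# exceed its correlations with the other constructs (hypothetical value here).
corr_with_other = 0.55
print("discriminant validity:", np.sqrt(ave) > corr_with_other)
```

This is the arithmetic behind the Table 2 checks: CR and AVE summarize how much variance the items share with their construct, and the square root of AVE versus the inter-construct correlations operationalizes discriminant validity.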
2019-05-11T13:05:02.830Z
2017-08-04T00:00:00.000
{ "year": 2017, "sha1": "4afe3419a1060037e84c9ac6e0e8119c2a0d9bdc", "oa_license": null, "oa_url": "https://doi.org/10.17722/ijme.v9i2.346", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "34b930872213e389deb58259fbb1a3fdc8d5978c", "s2fieldsofstudy": [ "Environmental Science", "Business" ], "extfieldsofstudy": [ "Psychology" ] }
233235792
pes2o/s2orc
v3-fos-license
Bone Aging, Cellular Senescence, and Osteoporosis

ABSTRACT Changes in aging bone that lead to osteoporosis are mediated at multiple levels, including hormonal alterations, skeletal unloading, and accumulation of senescent cells. This pathological interplay is superimposed upon medical conditions, potentially bone-wasting medications, modifiable and unmodifiable personal risk factors, and genetic predisposition that accelerate bone loss with aging. In this study, the focus is on bone homeostasis and its dysregulation with aging. The major physiological changes with aging in bone and the role of cellular senescence in contributing to age-related osteoporosis are summarized. The aspects of bone aging are reviewed including remodeling deficits, uncoupling phenomena, inducers of cellular senescence related to bone aging, roles of the senescence-associated secretory phenotype, radiation-induced bone loss as a model for bone aging, and the accumulation of senescent cells in the bone microenvironment as a predominant mechanism for age-related osteoporosis. The study also addresses the rationale and potential for therapeutic interventions based on the clearance of senescent cells or suppression of the senescence-associated secretory phenotype. © 2021 The Authors. JBMR Plus published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research.

Introduction

As part of the skeleton, bone tissue functions to support locomotion, hematopoiesis, glucose metabolism, interactions with the renal and reproductive systems, and reservoirs for phosphorus and calcium, as well as protection for internal organs. (1) Bone is made up of extracellular matrix proteins, inorganic mineral in the form of hydroxyapatite, and many resident cell types. The formation of bone during normal development and accrual after physiological growth plate closure mainly occurs in the first two decades of life in healthy individuals, after which BMD plateaus and is followed by age-related bone loss. (2) Osteoporosis in women typically occurs during the postmenopausal period, and in both women and men it is caused by age-related changes. (3,4)

Bone Remodeling and Aging

The deposition of bone matrix, its mineralization, and remodeling are regulated by several cell types. One such cell type, the osteoclast, is a large multinucleate cell that resorbs bone and differentiates from bone marrow monocyte/macrophage precursors. Osteoclast differentiation is promoted by osteoblasts, osteocytes, and activated T lymphocytes (5)(6)(7) via the secretion of receptor activator of nuclear factor kappa-B (RANK) ligand (RANKL), which binds to the RANK receptor on the osteoclast surface. (8,9) RANK receptor expression is increased by the binding of macrophage colony-stimulating factor (M-CSF), secreted by osteoblasts and bone marrow stromal cells, (10) to the colony-stimulating factor-1 receptor, also known as c-FMS, on the osteoclast surface. Mature osteoclasts resorb bone by secreting acid and proteolytic enzymes that dissolve the bone matrix. Osteoclast differentiation and activity can be inhibited by osteoprotegerin (OPG), a decoy receptor that binds RANKL and is secreted by osteoblasts, osteocytes, B lymphocytes, and the liver. (6,11) In contrast, osteoblasts promote new bone formation. Osteoblasts differentiate from bone mesenchymal stem cells (BMSCs) in response to WNT signaling.
WNT signaling accomplishes this by stabilizing β-catenin, which directs the transcription of genes involved in osteoblast differentiation, such as Runt-related transcription factor 2 (Runx2) and osterix (Osx), (12) while adipogenesis and CCAAT-enhancer binding protein α are inhibited. (13) WNT signaling in osteoblasts can be inhibited by sclerostin and Dickkopf-1 (DKK1). (14) Mature osteoblasts secrete osteocalcin, alkaline phosphatase, and type I collagen, the predominant component of the matrix, (15) after which mineralization occurs with calcium phosphate in the form of hydroxyapatite. Osteocytes are terminally differentiated osteoblasts that are found embedded in the mineralized matrix, are long-lived, and are important for sensing mechanical load. (16) An adult skeleton has an estimated 42 × 10^9 osteocytes occupying 215 m^2 of lacunar-canalicular surface area, (17) allowing osteocytes to release a significant amount of calcium and signaling molecules as a way to maintain mineral homeostasis and direct bone remodeling. Osteocytes secrete many signaling molecules that both positively and negatively affect bone remodeling. (18) They are a major source of RANKL, promoting osteoclastogenesis, (19) as well as of sclerostin and DKK1, which antagonize WNT signaling and inhibit osteoblast bone formation. (14) The bone marrow holds many cell types, such as those of hematopoietic lineages including myeloid (osteoclast precursor), lymphoid, and erythroid cells, as well as marrow stromal cells and BMSCs that can differentiate into osteoblasts, adipocytes, and chondrocytes. (20) Bone is not a static structure but is metabolically active in maintaining mineral homeostasis; it is constantly remodeling in response to many influences, including mechanical loading and hormonal and immunologic pressures. Normal bone remodeling occurs throughout the skeleton and is a balance between resorption of existing bone (old, weak, or damaged) and new bone deposition. (21) It occurs in a sequential manner: (i) activation, when osteoclasts are recruited to damaged or otherwise incompetent bone; (ii) resorption, when mature osteoclasts resorb bone; (iii) reversal, when osteoclasts die and osteoblast progenitors are recruited; and (iv) formation, when mature osteoblasts deposit new bone. (22,23) Many secreted factors from osteoclasts and osteoblasts have been identified as important for bone remodeling. (18,24) In addition, membrane-bound mediators such as Ephrin family proteins are important in signaling cascades activated by direct cell-cell contact between osteoclasts and osteoblasts. (18) Matrix-associated proteins also play a role in bone remodeling, linking bone resorption and formation. Transforming growth factor β1 (TGF-β1) and insulin-like growth factor type I (IGF-1) reside in the bone matrix in their inactive forms and upon osteoclast resorption are activated to promote mesenchymal cell differentiation to form mature osteoblasts. (25)(26)(27)(28) Orchestration of bone remodeling occurs through the direction and interplay of these cells and the secreted factors present. In contrast, there are other situations in which bone resorption and formation are not sequential, and this is termed bone modeling. Bone modeling occurs during growth and periosteal expansion, when bone is deposited, and bone can be lost in the case of some pharmacologic interventions and pathological skeletal disorders like inflammatory bone loss. (29) Recently, microRNAs (miRNAs) have been shown to play an important role in bone remodeling.
miRNAs are evolutionarily conserved noncoding small RNAs that regulate gene expression posttranscriptionally by binding to the 3′-untranslated region of mRNA to block translation or promote degradation. (30) Regulation of biological processes by miRNA is complicated, as each miRNA (of the thousands that have been identified) can target many transcripts and functionally overlap with other miRNAs. (31) To add to the complexity, miRNAs work not only inside the cell but can be transported via exosomes to surrounding cells (32) and at a distance through the bloodstream and other bodily fluids. (33) In an example of miRNA control, a study in osteoblasts identified 11 miRNAs that directly inhibit Runx2 protein production, although at different levels and with most able to inhibit osteoblast differentiation in a reversible manner. (34) Osteoclast formation has likewise been shown to be influenced by miRNAs. miR-26a is upregulated late in osteoclastogenesis, with ectopic expression attenuating osteoclast and actin-ring formation, as well as bone resorption, by inhibiting the expression of connective tissue growth factor (CTGF)/CCN family 2; this could be prevented with the addition of recombinant CTGF. (35) Similarly, osteocytes are regulated by miRNAs, and their miRNA-containing exosomes can influence the functioning of bone cells. (30) Interestingly, in an example of cell-to-cell communication, miR-214-3p can be secreted by osteoclasts and then taken up by osteoblasts. The miR-214-3p can then inhibit osteoblast activity in vitro and bone formation in vivo, and it is elevated in the serum of elderly women who show reduced bone formation with fractures. (36) The study of miRNAs is progressing rapidly, but given the complexity of possible regulatory networks, much remains to be uncovered. In humans, trabecular bone loss begins in the third decade of life in both men and women and substantially increases in the perimenopausal period. (37,38) This process is directly affected by decreases in estrogen and testosterone levels, leading to bone loss and potentially osteoporosis. (37) Other hormones also affect bone homeostasis, including PTH and corticosteroids. (39,40) For example, secondary hyperparathyroidism caused by low vitamin D levels and decreased renal calcium absorption with aging is a common phenomenon. (41) The changes in trabecular architecture include decreases in trabecular number, which are greater in women than in men, (38) decreased trabecular thickness, which is greater in men than women, (42) and a loss of connectivity. After menopause in women and with sex steroid deficiency in men, cortical bone loss with increases in porosity has been documented. (43) Structurally, aging also affects the lacunar-canalicular system, with human histomorphometric studies showing a decrease in lacunar density, with a loss ranging from 15% to 30%. (44,45) The shrinkage of the canalicular network with age (46,47) affects intercellular communication and the responses of bone formation to skeletal loading and exercise. Connexin-43 is a protein that maintains the canalicular network; the loss of connexin-43 with aging promotes osteocyte cell death, the appearance of empty lacunae, the recruitment of osteoclasts, and alterations in bone material properties. (48) Furthermore, loss of osteocyte lacunar density in human cortical bone triggers the accumulation of microcracks, a sign of bone deterioration contributing to osteoporosis.
(49) In the bone marrow cavity, marrow adipose tissue increases with age, causing the development of a yellow fatty marrow. (50) A strong correlation has been shown between decreased BMD and increased marrow adiposity, (51) with increases in marrow adiposity associated with fractures and osteoporosis. (52) In an animal BMSC-transplantation model, BMSCs transplanted from young donors into old recipients underwent lineage switching from an osteogenic to an adipogenic fate, strongly suggesting that microenvironmental changes with aging play an important role in red marrow conversion. (53) As with other causes of osteoporosis, bone loss with aging is a process in which the balance of bone remodeling is tipped in favor of bone resorption over bone formation (22,54); however, unlike postmenopausal bone loss, age-related bone loss predominantly reflects decrements in bone formation. Osteoblast differentiation and proliferation are promoted by several transcription factors, as discussed previously. In aging, there is a shift from promoting BMSC osteoblastogenesis to favoring adipogenesis, caused by decreases in the expression of Runx2 and Osx and an increase in the expression of peroxisome proliferator-activated receptor-γ. (55) WNT signaling is also reported to decrease with age, further reducing osteoblast numbers. (56,57) In general, there is a decrease in physical activity with age, leading to mechanical unloading and bone loss. Osteocytes are the primary mechanosensing cell type in bone. In a study of long-term immobilized patients, there were increased levels of sclerostin in their plasma. (58) Osteocytes are known secretors of sclerostin, which negatively impacts WNT signaling and decreases osteoblast number and activity. Studies in older women have shown an age-dependent decrease in the number of osteocytes and an increase in empty lacunae, (44,59) potentially leading to a decrease in the sensing of mechanical stimulation. Important factors limiting the abundance of osteoblasts, osteocytes, and their progenitor cells are the mutually exclusive pathways of cellular senescence (60) and apoptosis in these cell types, prominently found in aging bone. (61) Stromal cell/osteoblast-mediated increases in osteoclast differentiation and resorption activity are important potentiators of bone loss. BMSCs from older adults show an age-dependent increase in expression of RANKL and M-CSF and a decrease in OPG expression. (62,63) Similarly, osteoblasts from older adults show an age-dependent increase in pro-osteoclastic cytokines like IL-6 and a decrease in OPG expression. (64) Further promoting the imbalance toward resorption is the decrease in osteoblast number and activity seen with aging.

Cellular Senescence

The hallmark of cellular senescence is irreversible growth arrest while maintaining cell viability. (65) This was first described by Leonard Hayflick, who noted that human embryonic fibroblasts have a finite lifespan in vitro. Further investigations showed that telomeres shorten with each cell division, (66,67) forming the basis of the telomere theory of cellular aging, with telomere shortening providing the molecular clock. Since those early discoveries, many inducers of cellular senescence have been identified, including telomere dysfunction/uncapping, DNA damage, chromatin alterations, reactive oxygen species (ROS), and oncogenes, among others.
(68) Senescent cells are characterized by a stable cell cycle arrest with accompanying morphologic and functional changes, continued metabolic activity, (69) and resistance to apoptosis through senescent cell antiapoptotic pathways (SCAPs). (70,71) Cell cycle arrest in senescent cells is largely the result of two signaling pathways: ATM/p53/p21 CIP1 and p16 INK4a /retinoblastoma. Senescence-inducing stimuli like DNA damage cause kinases such as ataxia telangiectasia mutated (ATM) to phosphorylate and stabilize p53, leading to increases in expression of the cyclin-dependent kinase inhibitor p21 CIP1, which promotes cell cycle arrest. (72,73) ERK and MAPK signaling upregulates p16 INK4a inhibition of CDK4 and CDK6, leading to hypophosphorylation of retinoblastoma and blocking cell cycle progression from G1 to S. (74,75) p16 INK4a expression increases in aging cells and tissues and has become an important biomarker for aging studies. (76) p21 CIP1 is thought to be critical for establishing senescence, whereas p16 INK4a may maintain the phenotype. (73) The roles of p16 INK4a and p21 CIP1 in establishing and maintaining senescence are both cell-type specific and influenced by the inducing stimuli. (77) The cellular processes that lead to senescence include not only cell cycle arrest but also inappropriate growth-promotion pathways (78) such as mTOR. (79,80) It is important to note that senescence is not restricted to mitotic (proliferating) cells but also occurs in mostly nondividing postmitotic cells, such as osteocytes. (60) Adult cells in various tissues respond differently to damage and can be prone to either senescence or apoptosis. These choices can provide an appropriate mechanism for renewal of the tissue, such as epithelial cell apoptosis (81) or senescence of stromal cells. (82) However, most tissue repair occurs through stem cell proliferation and differentiation. These stem cells also undergo senescence with physiologic aging, limiting their proliferation and differentiation capability, (83) as well as their responsiveness to external signals. (84) The decline in function of differentiated cells and their stem cell pools is an important driver of age-related pathologies.

Inducers and biomarkers of cellular senescence in bone

The discovery of pathways that induce senescence has led to the identification of biomarkers for the detection of senescent cells, which will continue to expand and increase in specificity as tools for further investigation into aging processes.

Telomere dysfunction

Telomeres are repeated DNA sequences found at the ends of chromosomes, which are protected by a cap of proteins and maintained by telomerase. The process of replicative cell division causes loss of telomere length, which increases with age. (85) This leads to the activation of the DNA damage repair (DDR) system, which recognizes the end of the telomere as a double-strand break (86) and further activates p53/p21 CIP1 and p16 INK4a to cause withdrawal of the cell from the cell cycle and promote senescence. (87) These observations led to the idea that telomere dysfunction-induced foci (TIFs), representing the colocalization of DDR proteins at the ends of telomeres, can determine cell proliferation and differentiation capabilities. TIF detection has become a useful marker for senescent cells and has been used in studies of various bone cell types, including BMSCs and osteocytes. (60,88) Interestingly, postmitotic osteocytes, like muscle cells and neurons, may form shortened telomeres in a replication-independent manner.
(89) Werner syndrome (WS) and dyskeratosis congenita are two genetic diseases with features of premature aging that exhibit shortened telomeres and premature osteoporosis. A model of accelerated aging in mice targeting the WS helicase and telomerase recapitulates the low bone mass and age-related osteoporosis seen in affected individuals and further shows shortened osteoblast lifespan and impaired differentiation without impaired osteoclast differentiation. (88,90) This would indicate that replicative aging of osteoblast precursors plays a pivotal role in senile osteoporosis. Further knockout studies in mice that eliminate only telomerase reverse transcriptase showed a decrease in bone mass with reduced osteoblast differentiation but increased osteoclastogenesis. (91) The signaling pathway mediating this effect was shown to involve increased p53 expression with cell cycle arrest, apoptosis, and a decrease in Runx2 expression. (88) As further confirmation of these studies, overexpression of telomerase in BMSCs maintains their osteogenic differentiation capabilities in vitro while allowing increased bone formation in vivo. (92)(93)(94)(95) In a study in which circulating leukocytes were isolated from over 2000 women, it was found that telomere length was significantly correlated with BMD, and shorter telomeres were found in women with clinical osteoporosis. (96) This was directly contradicted by another study reporting that leukocyte telomere length was not associated with BMD in aged men and women. (97) Further studies are required to understand how telomere length or dysfunctional telomeres could be used as potential diagnostic markers to predict age-related osteoporosis.

DNA damage

DNA damage occurs through exposure to external factors such as ultraviolet and ionizing radiation, as well as endogenous factors like ROS and metabolic byproducts. The cellular DDR uses DNA-damage checkpoints to control cell cycle progression. As part of DDR signaling, p53 is stabilized, promoting cell cycle arrest through p21 CIP1 (72) and secondarily through p16 INK4a. (60,98) In the case where damage cannot be repaired, p53 induces cellular senescence. (99) Also upregulated by the DDR is the zinc finger transcription factor GATA4, which stimulates NF-κB and senescence-associated secretory phenotype (SASP) production, further reinforcing the senescent phenotype. (100) A mouse study deleting an endonuclease (excision repair cross-complementation group 1-xeroderma pigmentosum group F) important for the DDR, whose deficiency in a human progeroid syndrome causes osteopenia, osteoporosis, and abnormal skeletal development, showed decreases in osteoprogenitor cells with an increase in DNA damage, as evidenced by increases in γH2AX foci (phosphorylation of the Ser-139 residue of the histone variant H2AX), senescence, and the SASP in BMSCs and osteoblasts. (101) In another study, it was found that old mice had significantly increased markers of DNA damage, such as γH2AX foci, and of senescence, including G1 cell cycle arrest, phosphorylation of p53, and increased levels of p21 Cip1, as well as an increase in GATA4 and activation of NF-κB-stimulated SASP secretion. (102) These studies implicate a role for DNA damage-induced senescence in limiting the number of osteoprogenitors and osteoblasts with aging, and for an increase in SASP-facilitated osteoclastogenesis.
Chromatin alterations

The study of epigenetic changes with aging and their influence on gene transcription, proliferation, and DNA damage is a complex field entering an exciting era of discovery. The loss of heterochromatin with aging has been documented from Caenorhabditis elegans to humans, including in important, normally heterochromatic areas such as telomere ends and pericentromeric regions. (103) Interestingly, in senescent cells there are also new regions of heterochromatin, termed senescence-associated heterochromatic foci (SAHF). (104,105) These regions are transcriptionally inactive and may function to assist in the cell cycle arrest that occurs in senescence. (104,106,107) SAHF may not be a consistent marker, as it is not found in all senescent cells, such as those isolated from patients with Hutchinson-Gilford progeria syndrome. (108,109) Another marker, found in all senescent cells examined to date, is senescence-associated distension of satellites (SADS), which detects unraveled pericentromeric satellite heterochromatin. It is not exclusive to either the p16 Ink4a or the p21 Cip1 pathway, (110,111) in contrast to SAHF, which is linked to p16-pathway activation. (104) Changes in heterochromatin detected with aging are due in large part to nucleosome remodeling, which is determined by the histone complement and its modifications, termed marks. There is an overall loss of histones with aging and a change in the abundance of the histone variants present and their locations, as well as in modifications such as methylation, acetylation, and phosphorylation. (103) With aging in general, there is an increase in activating histone marks (like H3K4me3) and a decrease in repressive marks (like H3K9me3), (112) but the picture is much more complex, because differences have been noted even between cells from the same donor, as well as between cells from monozygotic twins. (113) Several studies in BMSCs have investigated the role of heterochromatin dysfunction and its promotion of senescence. WS is a progeria caused by mutation of the WRN gene, which encodes a highly conserved helicase important for many functions related to DNA repair and telomere maintenance. (114,115) WRN−/− BMSCs, when passaged, show premature loss of replicative capacity and telomere length, senescence with DNA damage, and a significant decrease in the repressive H3K9me3 mark important for the maintenance of heterochromatin. (116) Coimmunoprecipitation assays show an association between WRN, SUV39H1 (a methyltransferase for H3K9me3), HP1α, and LAP2β, with the latter two proteins helping to associate the heterochromatin with the nuclear envelope. Knockdown of SUV39H1 or HP1α in BMSCs led to decreased H3K9me3 expression and senescence, linking heterochromatin destabilization with senescence. (116) In a study done in mice, A-type lamins were shown to interact with the methyltransferase Suv39h1, depletion of which reduces H3K9me3 levels, restores DNA repair capacity, and delays senescence in progeroid cells. Furthermore, in Zmpste24−/− mice, a lamin A-deficient model of progeria, loss of Suv39h1 delays weight loss, increases BMD, and was shown to extend lifespan by 60%. (117) Corroborating the mouse data, human BMSCs derived from the dental pulp of aged patients had reduced levels of SUV39H1, H3K9me3, and the RecQ DNA helicase WRN. (116) Other investigations have shown that polycomb group (PcG) proteins are important for binding to target genes, recruiting the EZH2 methyltransferase, methylating target histones (like H3K27me3), and binding to condensed nucleosomes.
(118) In addition, EZH2 methyltransferase is associated with the regulation of osteogenesis and bone development. (119)(120)(121)(122) As a counterbalance, JMJD3 demethylase, whose expression is increased in response to oncogenes or stress, demethylates H3K27me3, decreasing PcG complex binding and leading to increased expression of p16 INK4A and senescence. (123,124) These studies highlight the impact that changes in heterochromatin regions can have on the promotion of senescence, which in turn leads to a decrease in bone marrow progenitor cells capable of renewal with aging. Globally, DNA methylation declines with aging. For example, Alu hypomethylation was associated with advancing age and reduced BMD. (125) Additionally, in BMSCs there is a decrease in the expression of the DNA methyltransferases DNMT-1 and DNMT-3B with aging, and their inhibition induces senescence. (126,127) The loss of DNMTs increases the expression of p16 INK4A and p21 CIP1 but decreases the expression of PcG proteins. (126) In contrast, there is an increase in methylation of promoter-associated CpG islands near genes important for differentiation, cell-type-specific functions, and transcription factors and their modulators. (112,128) Studies of methylation at CpG sites in BMSCs showed similar changes in methylation in in vitro and in vivo samples, indicating that hypermethylated sites corresponded to homeobox family genes important for differentiation and to the expression of Runx2 and DLX5, important transcription factors for osteogenic differentiation. (129) Progress in mapping DNA methylation patterns in specific tissues is building the "epigenetic clock," with the potential to create a powerful database for aging studies. In a study of older women with underlying osteoporosis and osteoarthritis, in which bone samples from the femoral head were analyzed, the overall methylation levels at CpG loci were well correlated between the two age-associated comorbidities. (130) However, careful characterization and genomewide methylation profiling of bone samples identified unique methylation sites linked to cell differentiation and osteogenesis in the osteoporotic and osteoarthritic populations. (131) In a study where DNA methylation was used as a predictor of osteoporosis in blood samples from patients of different ages, no such correlation was achieved, suggesting that peripheral blood is not a good source from which to make predictions about changes in bone with aging. (132) In another study of postmenopausal osteoporosis, the chromatin-modifying enzymes HAT1, KAT5, HDAC6, MBD1, and DNMT3A were all downregulated, with a direct correlation between the abundance of HAT1, HDAC6, and MBD1 and bone quality. (133) A comparative study with age-related osteoporosis is still elusive, leaving open the question of whether the changes in these epigenetic enzymes during postmenopausal osteoporosis were mainly caused by the loss of estrogen.

Reactive oxygen species

ROS are beneficial at low levels, as they promote osteoclast differentiation and resorption of bone, and they are usually in a delicate balance with antioxidants. (134) However, when levels of ROS increase above beneficial levels, as with aging, they cause cell death in osteoblasts and osteocytes and the destruction of bone by promoting osteoclast differentiation and activity. (135) Human studies show that increases in ROS and decreases in antioxidant levels correlate with increases in osteoclast activity and decreases in osteoblast activity as a function of age and osteoporosis.
(134,136,137) Osteoclasts have the largest number of mitochondria of any cell type because of the energy and acid production necessary for resorption. These mitochondria and NADPH oxidases are important sources of ROS. In a mouse study of osteoclasts expressing a mitochondria-targeted catalase, there were increases in bone mass and decreases in osteoclast formation and survival, and this even protected female mice from ovariectomy-induced bone loss, highlighting the role of mitochondrially produced ROS. (138) Mitochondrial defects accumulate with age both morphologically and functionally, including decreases in biogenesis, mitochondrial dysfunction, and bioenergetic failure. All of these defects help drive senescence through the interplay of mitochondrial dysfunction/ROS production with the DDR and aberrant signaling through telomere shortening/replicative senescence pathways, with the ultimate effect of disrupting bone cell, and especially stem cell, function. (139)

microRNAs

miRNAs, as previously introduced, play an important role in bone remodeling; changes in their abundance with age can greatly affect and potentiate senescence. Their importance in regulating senescence was shown when ablation of the RNase III family endoribonuclease Dicer, essential for the maturation of most miRNAs, caused senescence. (140) miRNA-195, which directly targets the 3′-untranslated region of telomerase reverse transcriptase (TERT), increases with age. (141) Furthermore, overexpression of miR-195 induced BMSC senescence, whereas knockdown of miR-195 increased TERT expression and promoted telomere relengthening. (141) Another example of miRNA modulation of the senescence machinery comes from studies of DNA methyltransferase-1 repression of the RANKL promoter. It was found that TNFα could lift this repression through upregulation of miR-140-3p and miR-126. (142) Two recent studies have shown that miRNAs sit at the critical switching point between BMSC osteogenesis and adipogenesis/senescence. In one, increases in miR-188 downregulated histone deacetylase 9 and the RPTOR-independent companion of MTOR complex, causing bone loss and fat accumulation (143); in the other, miR-363-3p targeted TNF receptor-associated factor 3 with a similar effect. (144) Other roles for miRNAs in senescence induction include regulation of IGF-1 signaling, ROS, and stem cell exhaustion. (145) miRNA levels are now being evaluated as biomarkers for diseases such as osteoporosis. (146,147) One report found a direct correlation between miR-29b-3p and improved bone formation rate/bone surface, (148) with declines in miR-29b-3p, among others, predicting changes in bone fragility, fracture rates, and bone turnover in postmenopausal women. (149) Retinoic acid receptor-related orphan receptor β (Rorbeta, Rorb) is a gene that is highly upregulated in aged mouse and human bone samples but is significantly downregulated during the progression of osteogenesis; it is itself downregulated by direct binding of miR-219a-5p, thus making miR-219a-5p another potential marker for future predictions of age-related osteoporosis. (150) Additional biomarkers of senescence include senescence-associated β-galactosidase (SA-βgal) (151) and senescence-associated α-fucosidase, (152) lysosomal enzymes that correlate with, but do not have a causative effect on, senescence.
In addition, cellular senescence induces changes in nuclear morphology, with a progression from the several compact nucleoli of proliferating cells to one enlarged nucleolus in senescent cells stuck at the G1/S stage of cell cycle progression. (108,153,154) There is also diminished ribosome biogenesis and accumulation of rRNA precursors with senescence. (108) Ribosomal protein L29 may be a related biomarker of senescent cells. (108) Single markers of senescence are limited and may not be accurate under certain conditions, (155-157) but a combination of several markers has been shown to be reliable for detecting cellular senescence. (74)

Senescence-associated secretory phenotype

Senescent cells are metabolically active and secrete signaling factors (cytokines, chemokines, and growth factors), proteases (matrix metalloproteases and serine proteases), extracellular matrix proteins, nonprotein components (reactive oxygen and nitrogen species), and extracellular vesicles containing miRNA. (158) Collectively, these are referred to as the SASP. (159) Recent studies indicate that components of the SASP may vary depending on whether they are derived from cells early or late after the onset of senescence. (160-162) Multiple inducers of senescence, the timing of senescence induction, and differing cell types lead to unique SASP profiles that can affect neighboring cells as well as those at a distance. The SASP has both beneficial and negative impacts on biological processes. The beneficial aspects include wound healing, (163) embryogenesis, (164-166) and tumor suppression by reinforcing growth arrest. (167,168) After damage in a young organism, senescent cells secrete SASP factors, which can alert the immune system for their clearance and promote tissue regeneration through increased stemness in surrounding cells followed by proliferation and differentiation. (169,170) The temporary presence of senescent cells in tissues and their benefits shift as aging progresses. Senescent cells accumulate with aging, leading to chronic production of SASP components and inflammation. (171) Chronically, the SASP can inhibit proliferation and promote senescence in nearby cells. (167,172,173) This can lead to a prolonged dedifferentiated state that blocks tissue regeneration after damage. (174) A persistent SASP can also be protumorigenic in committed precancerous cells, promoting their proliferation, survival, and metastasis. (175) The increase in senescent cells and their SASP with aging leads to dysfunctional tissue and disease states.

Targeting cellular senescence in bone

Recently, studies were undertaken to identify senescent cells and their SASP in the bone microenvironment. (60) Cells were isolated from the bone of young (6-month-old) and old (24-month-old) mice, and magnetic-activated cell sorting was used to enrich in vivo cell types. The data indicated that p16 Ink4a mRNA expression increased with aging in myeloid cells, B and T cells, osteoblast progenitors, osteoblasts, and osteocytes. (60) There was an accumulation of senescent osteocytes in older mice, identified by increased SADS and TIF staining. In this study, SASP mRNA profiles of 36 previously identified SASP factors were investigated. Few changes in SASP factors were seen with aging in osteoblast progenitors, osteoblasts, and B and T cells, but in myeloid cells and osteocytes more than 23 of the 36 SASP factors significantly increased. (60)
Consistent with the data obtained from the mouse model, small needle bone biopsies from young and older postmenopausal women showed increases with aging in p16 INK4A and p21 CIP1 , as well as in 12 of the 36 SASP factors investigated. (60) The increase with aging in senescent mouse osteocytes and SASP has been confirmed by other investigators and extended to show increased levels of RANKL in associated age-dependent cortical bone loss. (176) In studies of mouse osteoprogenitors, it was found that their numbers decline more than 50% with aging and are positive for markers of DNA damage, increased p21 Cip1 , and elevated expression of SASP genes. (102) Interestingly, elimination of osteoclasts in mice has no effect on bone loss with aging. (177) To evaluate the role of senescent cell accumulation and the SASP in bone aging, several approaches have been taken to eliminate the senescent cell burden in mice. These include a genetic model with the INK-ATTAC transgene containing an FKBP-caspase-8 fusion protein (which is lethal to p16+ cells upon administration of the drug AP20187) and a drug combination of dasatinib and quercetin (D + Q) that clears senescent cells. (178) Mice were treated starting at 20 months of age, when bone loss was documented. Both approaches showed partial elimination of senescent osteocytes with concomitantly higher bone mass, strength, and microarchitecture in comparison with controls. Furthermore, these approaches suppressed bone resorption and maintained osteoblast numbers and bone formation in trabecular and cortical bone. (178) The JAK inhibitor (JAKi) ruxolitinib suppresses components of the SASP, particularly IL-6, IL-8, and PAI-1, (179,180) which are known to be important for osteoclastogenesis and bone resorption. (178) JAKi treatment of 22-month-old mice for 2 months yielded results similar to those of the INK-ATTAC transgene strategy and senolytic treatments, with lower osteoclast numbers and no significant differences in osteoblast numbers. (178) In vitro studies further confirmed that senescent cell-conditioned media promote osteoclastogenesis. (178) Although senescent osteoprogenitors, osteoblasts, and osteocytes have direct consequences on bone architecture, the role of other senescent populations (such as myeloid-lineage cells) in aging bone is still to be determined. The identification of senescent osteocytes and osteoblasts in vivo and the rescue of age-related bone loss by the elimination of senescent cells indicate a pivotal role for cellular senescence and the SASP in bone loss and osteoporosis with aging (Fig. 1). Another model of genetic clearance of p16+ cells (i.e., p16-3MR mice) failed to show any recovery of the aging bone phenotype, and the elimination of senescent osteoclast progenitors did not affect the endocortical osteoclast number or age-associated bone loss. (177) However, unlike studies in INK-ATTAC mice, (178) those performed in the p16-3MR model failed to clear senescent osteocytes but did clear senescent osteoclast progenitors. (177) Moreover, elimination of senescent osteoclast progenitors did not significantly change the bone architecture of aged p16-3MR mice, suggesting that senescent osteocytes are key drivers of age-related osteoporosis. Taken together, these studies indicate that cellular senescence is a key mechanism that contributes to age-related osteoporosis. In addition, oxidative stress and DNA damage linked to ROS are now being associated with various other pathways of age-related osteoporosis. (181)
For example, vitamin D insufficiency is associated with senescence-associated age-related osteoporosis, (182) and recent studies suggest that vitamin D protects against age-related osteoporosis via a VDR-EZH2-p16 axis. (183)

Radiation-induced bone loss: cellular senescence and similarities to age-related osteoporosis

Ionizing radiation (IR) is present all around us, and though levels of IR in the atmosphere are submaximal, ultraviolet light from the sun imparts significant radiation that produces DNA damage by direct and/or indirect mechanisms such as the generation of free radicals and ROS. Therapeutic IR used for cancer treatment targets both normal and cancer cells to induce apoptosis, autophagy, or senescence. It is still unclear whether there is a threshold of IR that induces a different cell fate. Although the cell has mechanisms to counter small levels of IR using antioxidants, catalase, superoxide dismutase, glutathione peroxidase, and reduced glutathione, an excess dose of IR may overcome these defense mechanisms to induce DNA damage. (184) IR-induced single-strand breaks and double-strand breaks (DSBs) in the DNA produce a DDR. The DDR induces a growth arrest, and failure to repair damaged DNA leads to a permanent state of growth arrest consistent with cellular senescence, with the expression of p21 CIP1 and p16 INK4A as well as the SASP. It has been noted that cells that expressed the senescence marker p16 Ink4a in the absence of a DDR did not have a SASP. (185) Thus, the cellular state, including the DDR, and not just the expression of markers of senescence, drives the senescent phenotype. There are six major DNA repair pathways: direct reversal by O6-methylguanine methyltransferase, base excision repair, nucleotide excision repair, mismatch repair, and DNA DSB repair, which comprises homologous recombination (HR) and nonhomologous end-joining (NHEJ). Although IR-induced cellular senescence is known to be triggered by the NHEJ pathway initiated by a DSB, the indirect effects of IR, which may also lead to DNA damage, are repaired by HR. The DSB response, mediated by ATM, leads to phosphorylation of the DDR mediator H2AX. Phosphorylated H2AX (γH2AX) then initiates the recruitment of other proteins of the DDR. The number of γH2AX foci equates to DSB sites in a nucleus, and although IR-induced γH2AX foci seem to resolve over time, the ability of cells to resolve these lesions depends on the DDR components Ku70 and DNA-PKcs, which are both key players in the NHEJ mechanism. Both Ku70 and DNA-PKcs are high-turnover proteins, and their levels following IR determine the fate of DSB repair. Stabilization of these proteins, either by activation of the cAMP-WNT pathway or by suppression of proteasome-based degradation, allows irradiated osteoprogenitors, osteoblasts, and osteocytes to survive IR-induced DNA damage and cellular apoptosis, and ultimately leads to bone accrual in rodent studies. (186-188) However, the major outcome of IR-induced damage to bone cells that sustain unresolved DNA damage is cellular senescence. Although cellular senescence was originally proposed as an explanation for the limited replicative potential of a cell, IR leads to stress-induced premature senescence (SIPS). Different forms of IR, such as X-rays or γ-rays, lead to SIPS. (184)
Induction of SIPS by IR may vary by dose: a low dose may cause DNA damage through oxidative damage without generating a senescence phenotype, whereas high-dose IR causes DNA damage through DSBs, leading to sustained DNA damage sites, often in the telomere, and then to senescence. Low-dose IR-induced DNA damage in cells is mainly repaired, and osteoprogenitors such as BMSCs respond to low-dose IR by undergoing growth arrest, (189) with DNA damage lesions not passed to the daughter BMSCs. (190) Apart from these reports, most studies suggest that BMSCs are largely resistant to IR, based in part on the abundance of nuclear lamina proteins. Lamin A is a major component of the nuclear lamina, and the ability of cells to resist DNA damage from IR has been attributed to lamin A content. BMSCs are known to have one of the highest lamin A levels of any cell type, providing stiffness to the cells (191) and potentially resistance to SIPS. In contrast, several lines of in vitro evidence have suggested that BMSCs do not lose their ability to proliferate and form colonies in response to low- and high-dose IR (188,189) but have reduced osteogenic potential. (192,193) In vivo evidence that BMSCs undergo IR-induced senescence is limited. In highly senescent environments, such as high-dose IR and aging, BMSCs tend to differentiate into adipocytes, (188) a phenomenon commonly observed in skeletal regions where patients receive radiotherapy for cancer treatment. (194) Previous studies showed that sustained DNA damage to osteoblasts and osteocytes several weeks post-IR greatly impaired bone formation. (187,188) However, there were stark differences between the cells that showed DNA damage and the cells that entered the programmed cell death pathway, with apoptotic cells accounting for only a small fraction of the cells that showed DNA lesions. (188) This gap in understanding was recently bridged by studies showing that bone cells undergo senescence in response to IR. Cellular senescence was assessed by gene expression analysis of the senescence markers p21 Waf1/Cip1 and p16 Ink4a , as well as by SA-βgal staining and DSB assessment at telomere sites (TIFs). (195)

Fig. 1. Overview of aging bone and the cellular processes regulating senescence and the SASP. (A) Normal bone formation involves activation of BMSCs to differentiate into osteoblasts, which fill in resorption pits created by osteoclasts. The release of matrix-embedded factors by resorption in turn activates osteoblast proliferation, maturation, and differentiation, creating a coordinated homeostatic process known as "coupling." Aging induces dysfunctional homoeostasis, leading primarily to reduced bone formation and mineralization but also to increased resorption by osteoclasts owing to increased osteoclast precursors. However, the overall osteoclast numbers on the bone surface decline with aging. BMSCs have a reduced capacity to form osteoblasts and preferentially differentiate into adipocytes. Osteocytic cell death and empty lacunae promote the loss of the canalicular network. Senescent bone cells and their proinflammatory secretome (i.e., SASP) have been shown to play key roles in propagating the alterations seen in the aging bone. Targeted killing of senescent cells can be achieved by drugs called senolytics, and inhibition of the SASP by senomodulators (or senomorphics).
(B) Taking senescent osteoblasts as an example of a generalized senescent cell, several subcellular processes may become drivers of cellular senescence. Shortened or damaged telomeres or other DNA damage followed by the DDR activates cell cycle inhibitory pathways regulated by p16 INK4a or p21 CIP1 , which then suppress cyclins and cyclin-dependent kinases such as cyclin D, CDK4/6, cyclin E, and CDK2. Together with the activation of the DDR, several pathways regulated by GATA4, mTOR, or JAK/STAT proteins activate the NF-κB-based transcriptional activation of proinflammatory SASP proteins, which in turn are secreted by the cell to induce deleterious autocrine, paracrine, or endocrine responses. Mitochondrial dysfunction, proteostasis, epigenetic modifications to the chromatin, miRNA-based dysregulation of genes, and changes in the nuclear lamina are other known characteristics of a senescent cell. BMSCs, bone mesenchymal stem cells; BS, bone surface; DDR, DNA damage repair; HSCs, hematopoietic stem cells; miRNAs, microRNAs; OC, osteoclast; RB, retinoblastoma gene product; SASP, senescence-associated secretory phenotype.

Interestingly, p21 peak expression was predominantly higher than p16 Ink4a expression on day 1 post-IR, whereas p16 Ink4a expression peaked around day 42 post-IR. The gene expression data were confirmed by histological analysis of senescent cells using SA-βgal staining on day 28 and by TIF assay on day 42 post-IR. (195) Senescent bone-lining cells, osteocytes, and bone marrow cells together indicated an overall senescent environment, accompanied by a SASP profile. Pharmacologic clearance of senescent cells using senolytic drugs relieved the senescent cell burden and SASP, and ultimately alleviated IR-related bone loss as well. Further studies are required to understand the roles of different cell types and cellular pathways that contribute to the IR-induced suppression of bone formation. For example, it is still unclear whether the vasculature in bone undergoes the same degree of IR-induced cellular senescence as other bone cells, and whether that directly or indirectly contributes to bone loss.

Sex steroids and skeletal health

Several studies have attempted to understand the role of gonadal sex steroids in regulating skeletal health. Current understanding indicates that estrogen may play a role in the bone health of both postmenopausal women (196) and aging men. (197,198) Although the role of testosterone in regulating BMD in men has not been consistent between studies, (197,198) some studies found a direct correlation between declining testosterone levels and loss of BMD, (199,200) and treatment with testosterone has shown improvement in trabecular BMD in the spine. (201) However, androgens may indirectly regulate skeletal maintenance through aromatization into estrogen and binding to the estrogen receptor. (202) In evaluating the role of estrogen deficiency in aging and osteoporosis, independent roles for estrogen function in age-associated bone loss were identified that did not overlap with cellular senescence-related bone loss. (203) In these studies, short-term estrogen treatment in human subjects failed to affect markers of senescence or the SASP. (203) Preclinical genetic studies with clearance of p16-positive senescent cells in mice alleviated age-related osteoporosis (178) but failed to alleviate ovariectomy-induced bone loss. (203) This clearly indicated that estrogen deficiency-related bone loss is independent of senescence-related bone loss.
However, further studies are needed to better ascertain the relationships between gonadal sex hormone deficiencies and cellular senescence pathways, especially in clinical studies.

Pharmacologic targeting of senescent cells in the treatment of osteoporosis

Preclinical studies indicate a possible role for senotherapeutic agents in the treatment of age-related osteoporosis. (178) Such agents either target senescent cell apoptotic pathways and result in senescent cell clearance (i.e., senolytics) or reduce the SASP while leaving senescent cells intact (i.e., senomodulators). Because of the accumulation of senescent cells in other types of osteoporosis, such as radiotherapy-induced bone loss, senotherapeutic approaches may also be effective in these clinical scenarios. Because there is also some evidence for a senescent cell burden in mechanisms of bone loss related to diabetes mellitus, skeletal unloading, glucocorticoid use, and perhaps androgen insufficiency, (204) the therapeutic umbrella for senolytic and senomorphic drugs in osteoporosis may be further expanded. Although both senolytics and senomodulators may have similar beneficial effects in preclinical animal models of bone loss, pharmacodynamic and toxicity considerations in humans may dictate their future use as therapy for osteoporosis. Senolytic agents can likely be given once, or on a few consecutive days, every 4-6 weeks, based on estimates of senescent cell turnover and thus the time to reaccumulation of senescent cells. (205,206) Senomodulators would need to be given more frequently, probably daily, increasing the likelihood of adverse drug events compared with senolytics. Drug resistance would not be an important consideration because senescent cells are nondividing. Also, immunological senescent cell clearance, which normally occurs in youth and thereafter diminishes, (207-210) may be improved with senotherapeutics and contribute to their overall beneficial effects. Compared with current pharmacologic treatment options for osteoporosis, senolytics offer the possibility of long-term treatment. It is still unclear to what extent, if any, postmenopausal osteoporosis may be mediated or amplified by senescent cells in the bone microenvironment or at distant locations. Preclinical studies in ovariectomized mice suggest that acute estrogen deprivation does not promote accumulation of senescent cells in bone. (203) In support of this observation, short-term administration of estrogen in postmenopausal women does not reduce markers of senescence in bone biopsies. (203) Despite these findings, treatment of postmenopausal women with senolytics may still improve bone parameters, such as bone turnover markers and BMD, and this is currently being evaluated for the senolytic cocktail D + Q and for fisetin (ClinicalTrials.gov identifier: NCT04313634, Targeting Cellular Senescence With Senolytics to Improve Skeletal Health in Older Humans). Ovariectomized mice are limited as models of postmenopausal osteoporosis because the typical study paradigm involves estrogen depletion in young animals and only for short periods. In the setting of possible cellular senescence-based contributions to postmenopausal bone loss (and consideration of the use of senotherapeutic agents), both estrogen deficiency and aging must be considered for cumulative or synergistic effects.
Another area for future therapeutic intervention in age-related osteoporosis is combination therapy with senotherapeutic agents and existing approved drugs for osteoporosis. Although senotherapeutic interventions in preclinical studies suggest that their primary effect is on bone formation (with secondary effects on inhibition of bone resorption), complementary and/or additive effects of current anabolic agents or antiresorptive drugs may hold future promise. In fact, at least one medication used for the treatment of bone loss (zoledronate) conveys a mortality benefit independent of preventing fracture, as well as an extension of lifespan in an animal model of accelerated aging. (211-214) Also, zoledronate was able to extend the lifespan of human mesenchymal stem cells by protecting against the accumulation of DNA damage, an important mechanism of cell aging, and preserving their ability to proliferate and differentiate in vitro. (215) Thus, zoledronate itself may have senotherapeutic properties.

Conclusions

Substantial alterations in bone architecture occur with aging, including decreases in trabecular thickness and number, cortical bone loss and porosity, and an increase in marrow adiposity. These changes reflect imbalances in bone remodeling, favoring a net loss of bone caused predominantly by increased osteoclast activity in postmenopausal women, as well as by both poor bone formation and increased osteoclast activity in older men and women. Cellular senescence and apoptosis of osteoblasts and osteocytes account for much of the aging phenotype in bone, although these appear to be independent of estrogen-mediated effects. There is growing evidence to suggest that cellular senescence in bone can be triggered by ROS, DNA damage, telomere dysfunction, and heterochromatin changes, depending on the cell type. miRNAs modulate critical switching points, such as those between osteogenesis and adipogenesis, and aspects of the senescence program. The SASP likely mediates local and even distant deleterious effects of senescent cells, especially those of myeloid cells and osteocytes. Radiation-induced bone loss provides an accelerated bone aging model that recapitulates many aspects of age-related bone loss. With both physiological and premature bone aging, genetic and pharmacologic approaches to clearing senescent cells prevent, delay, or ameliorate osteoporosis in mouse models. Senolytic compounds are currently being evaluated in interventional clinical trials.
Impact of ABCC2 1249G>A and −24C>T Polymorphisms on Lacosamide Efficacy and Plasma Concentrations in Uygur Pediatric Patients With Epilepsy in China

Purpose: We aimed to evaluate the effect of the ABCC2 1249G>A (rs2273697) and −24C>T (rs717620) polymorphisms on lacosamide (LCM) plasma concentrations and the efficacy of LCM in Uygur pediatric patients with epilepsy.

Methods: We analyzed 231 pediatric patients with epilepsy, among whom 166 were considered to be LCM responsive. For drug assays, 2-3 mL of venous blood was collected from each patient just before the morning LCM dose was administered (approximately 12 hours after the evening dose; steady-state LCM concentrations). The samples remaining after routine therapeutic drug monitoring were used for genotyping analysis. The chi-square test and Fisher exact test were used for comparative analysis of the allelic and genotypic distributions of ABCC2 polymorphisms between the LCM-resistant and LCM-responsive groups. The Student t test or Mann-Whitney U test was conducted to analyze differences in plasma LCM concentration among pediatric patients with epilepsy with different genotypes.

Results: Patients with the ABCC2 1249G>A GA genotype (0.7 ± 0.3 mcg/mL per mg/kg) and AA genotype (0.5 ± 0.3 mcg/mL per mg/kg) showed significantly (P < 0.001) lower LCM concentration-to-dose (CD) ratios than patients with the GG genotype (1.0 ± 0.4 mcg/mL per mg/kg). Moreover, patients with the ABCC2 −24C>T CT genotype (0.6 ± 0.2 mcg/mL per mg/kg) and TT genotype (0.6 ± 0.3 mcg/mL per mg/kg) presented significantly (P < 0.001) lower LCM CD ratios than patients with the CC genotype (1.1 ± 0.4 mcg/mL per mg/kg).

Conclusions: The ABCC2 1249G>A (rs2273697) and ABCC2 −24C>T (rs717620) polymorphisms can affect plasma LCM concentrations and treatment efficacy in a population of Uygur pediatric patients with epilepsy, causing these patients to become resistant to LCM. In clinical practice, ABCC2 polymorphisms should be identified before LCM treatment, and the dosage should then be adjusted for pediatric patients with epilepsy accordingly.

BACKGROUND

Lacosamide (LCM) is a novel antiseizure medication that plays a unique anticonvulsant role by selectively inhibiting the activation of voltage-gated sodium channels. 1 In August and October 2008, LCM was granted approval in Europe and the United States, respectively, for the treatment of partial-onset seizures with or without secondary generalization in adults, adolescents, and children aged 4 years and older with epilepsy. 2,3 Multiple studies have shown that LCM is associated with favorable short-term and long-term efficacy, tolerability, and safety in the treatment of epileptic patients. 4-6 LCM was granted approval in China in 2018. A report of 72 children with focal epilepsy (mean age, 7.2 years; range, 0.9-14 years) undergoing treatment at the People's Hospital of Xinjiang Uygur Autonomous Region in Uygur, China, has been published; 50 (69%) of the children responded to LCM therapy, defined as a reduction of more than 50% in the frequency of seizures. 7 However, our routine drug monitoring studies showed an increase in the number of children with epilepsy who developed drug resistance. Resistance is defined as failure of treatment with LCM as monotherapy or in combination with other antiseizure medications for at least 12 months at the maximal tolerated doses, with persistence of epileptic seizures. Multidrug transporters play a major role in the mechanism of drug resistance.
Abnormally expressed multidrug transporters are known to prevent antiseizure medications (ASMs) from entering brain tissue, thereby reducing the concentration of ASMs at specific targets in the brain and resulting in resistance to epilepsy drugs. 8 P-glycoprotein (P-gp) is the first and most widely studied multidrug transporter. 9 P-gp can pump ASMs back into the blood using energy provided by ATP hydrolysis; it reduces drug concentrations at the epileptic foci, thereby preventing ASMs from suppressing the epileptic discharge of nerve cells. 10 P-gp is encoded by the multidrug resistance gene. 11 Mutations in the multidrug resistance gene that encodes P-gp can affect the efflux function of endothelial cell transporters in the blood-brain barrier. This, in turn, affects the pharmacokinetic parameters and blood concentrations of ASMs, further impairing their efficacy and toxicity as well as inducing drug resistance. 12 ABCC2 (also known as multidrug resistance protein 2, MRP2) is an ATP-binding cassette transporter that is largely responsible for the active efflux of many drugs. Single nucleotide polymorphisms (SNPs) of ABCC2 can influence the expression and function of the resultant proteins and are likely associated with drug resistance among patients with epilepsy. 13 The most studied SNPs in this gene are 1249G>A (rs2273697) and −24C>T (rs717620). 14 Grewal et al 15 revealed that altered functionality of ABCB1 and ABCC2 can affect the disposition and bioavailability of ASMs, thereby interfering with antiseizure medication therapy. Xue et al suggested that the ABCC2 rs717620 polymorphism is associated with resistance to ASMs among Uygur patients with epilepsy. 16 In addition, Chen et al 17 discovered that, in a subgroup with generalized seizures, ABCC2 rs3740066 CC carriers had a higher frequency of valproic acid resistance than TC + TT carriers (P < 0.05). However, there has been no comprehensive investigation into the relationship between SNPs in ABCC2 and plasma LCM concentration. Therefore, this study was conducted to evaluate the association of the 1249G>A and −24C>T genotypes of ABCC2, as well as their haplotypic and diplotypic combinations, with the plasma concentration and efficacy of LCM in Uygur pediatric patients with epilepsy. The purpose of this study was to provide a valuable tool for predicting the clinical efficacy of LCM before treatment and to contribute to the individualized use of ASMs among Uygur pediatric patients with epilepsy.

Subjects

A total of 296 pediatric patients with epilepsy who received LCM treatment at the People's Hospital of Xinjiang Uygur Autonomous Region, China, in 2018-2021 were included in this study. Sixty-five pediatric patients with epilepsy were excluded owing to incomplete data. Overall, 231 pediatric patients with epilepsy who underwent LCM treatment were included. All pediatric patients met the criteria for a diagnosis of epilepsy, as issued by the International League Against Epilepsy in 2017. 18 This study was granted approval by the Ethics Committee of People's Hospital of Xinjiang Uygur Autonomous Region, Xinjiang, China (Ethical Approval Number: KY2019120614). The parents of all patients signed an informed consent form. All subjects were regularly treated with LCM tablets, in accordance with the study protocol. Seizure frequencies at 3, 6, and 12 months after the initiation of LCM therapy were recorded and compared with the baseline monthly frequency.
Participants were considered responsive if treatment with LCM at the maximum tolerated dose for at least 3 months, either as monotherapy or in combination with other ASMs, led to a reduction in the frequency of seizures of ≥50%. 19 In addition, participants were considered resistant if treatment with LCM at the maximum tolerated doses for at least 12 months, either as monotherapy or combined with other ASMs, failed to reduce the frequency of seizures by ≥50% and epileptic seizures persisted. Pediatric patients were divided into either the responsive or the resistant group according to their seizure frequency.

Therapeutic Drug Monitoring of LCM

All pediatric patients were administered LCM tablets twice a day. The initial dose of LCM was 2 mg/kg daily, and this dose was increased once every week. The target dose was 5-20 mg/kg daily for 3-4 weeks, and the target range of LCM serum concentrations was 2.5-15 mcg/mL. Once the maintenance dose was reached, the first blood sample was collected. If the LCM serum concentration did not reach the target range, blood sampling was repeated after a week of dose adjustment. For drug concentration assays, 2-3 mL of venous blood was obtained from each patient just before the morning LCM dose was administered (approximately 12 hours after the evening dose; trough concentration). This study used a single blood sample at a single point in time. Plasma LCM concentrations were measured in only 1 blood sample per patient at the last follow-up. EDTA anticoagulant tubes containing the blood samples were immediately centrifuged at 4000×g (24°C) for 5 minutes, and the resulting plasma was transferred into a clean tube and stored at −80°C. The samples left over after routine therapeutic drug monitoring were used for genotyping analysis. The extraction process was initiated by adding 300 μL of organic deproteinization solution (Abbott Laboratories, Shanghai, China) to 100 μL of plasma sample. Next, the solution was vortexed for 2 minutes and then centrifuged at 12,100×g for 10 minutes. The upper organic layer was then collected and placed into a clean glass tube. Vials were placed within a thermostatic autosampler (10°C), and 0.5-mL aliquots were used for the ultra-performance liquid chromatography (UHPLC) assay. Plasma LCM concentrations were quantified using a validated UHPLC method. 20 All LCM samples were evaluated at the individualized research laboratory of the Institute of Clinical Research of our hospital. All samples were analyzed in batches, once a week on average. The chromatography column was a Waters ACQUITY UPLC BEH C18 (2.1 × 100 mm, 1.7 μm; Shanghai, China). The mobile phase was ammonium dihydrogen phosphate solution (10 mmol/L)-methanol (55:45, vol/vol; pH adjusted to 4.0 with phosphoric acid). The flow rate was 0.2 mL/min. The injection volume was 2 μL, and the detection wavelength was 210 nm. The peak area of LCM in serum samples was linear within the concentration range of 0.5-40 mcg/mL (y = 0.0494·C_LCM + 0.0222, r² ≥ 0.9997); back-calculation of a concentration from a measured peak area is sketched below. The lower limit of quantification (LLOQ) of LCM was determined to be 0.25 mcg/mL. After successive dilutions of the lowest calibration standard, the low quality control (LQC) of LCM was set to 0.5 mcg/mL. The intraday and interday precision values, as measured by relative SD values, were between 1.4% and 4.5%. The recovery ranged from 96.6% to 106.2%.
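As an illustration of the reported calibration line, the sketch below back-calculates a trough concentration from a measured peak area by inverting y = 0.0494·C_LCM + 0.0222. This is a minimal sketch: the function name and the example peak-area value are hypothetical, and only the slope, intercept, linear range, and LLOQ come from the validation above.

```python
# Minimal sketch: back-calculate an LCM concentration (mcg/mL) from a
# UHPLC peak area using the reported calibration line
#     y = 0.0494 * C_LCM + 0.0222   (linear over 0.5-40 mcg/mL)
# The function name and example peak area are hypothetical.

SLOPE, INTERCEPT = 0.0494, 0.0222
LINEAR_RANGE = (0.5, 40.0)   # mcg/mL, from the validation above
LLOQ = 0.25                  # mcg/mL

def lcm_concentration(peak_area: float) -> float:
    """Invert the calibration line to get a concentration in mcg/mL."""
    conc = (peak_area - INTERCEPT) / SLOPE
    if conc < LLOQ:
        raise ValueError(f"Below LLOQ ({LLOQ} mcg/mL); report as below quantification.")
    if not (LINEAR_RANGE[0] <= conc <= LINEAR_RANGE[1]):
        raise ValueError("Outside the validated linear range; dilute and re-assay.")
    return conc

print(round(lcm_concentration(0.3334), 2))  # -> 6.3 mcg/mL (hypothetical area)
```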
All plasma samples were stable for up to 3 hours at ambient temperature (24 hours at 4°C and 30 days at −80°C) and after 6 successive freeze-thaw cycles (24 hours per cycle).

Genotyping

The ABCC2 polymorphisms of all pediatric patients were analyzed in the research laboratory of the Institute of Clinical Pharmacy of our hospital. All samples were analyzed in 3 batches by professionals with PCR work certificates. Genomic DNA was extracted from whole blood using a standard method (http://www.qiagen.com). Two SNPs in the ABCC2 gene (1249G>A and −24C>T) were genotyped using a PCR assay with Big Dye (Applied Sanger Sequencing Technologies, Hangzhou, China), followed by restriction fragment length polymorphism (RFLP) analysis. PCR amplification was performed in a 25-μL reaction mixture consisting of 2.5 μL 10X PCR buffer, 1 μL dNTPs (10 mM), 1 μL forward primer (10 μM), 1 μL reverse primer (10 μM), 0.2 μL Taq enzyme (5 U/μL), 1 μL gDNA, and ddH2O to 25 μL. The cycling conditions used to evaluate the 2 SNPs (rs2273697 and rs717620) were as follows: initial denaturation at 95°C for 5 minutes, followed by 30 cycles of denaturation, annealing, and extension at 94°C for 30 seconds, 63°C for 30 seconds, and 72°C for 60 seconds, respectively. A final extension step was performed at 72°C for 10 minutes. Figure 1 presents the results of gel electrophoresis and DNA sequencing analysis for each genotype.

Statistical Analysis

Statistical analyses were performed using SPSS (version 4.0.100.1124; Chicago, IL). A P value of <0.05 was considered statistically significant. Linkage disequilibrium analysis and haplotype construction were conducted with the SHEsis online software 21 and used, together with the Hardy-Weinberg equilibrium, to determine whether the research subjects were representative of the entire group. The chi-square (χ²) test was performed to compare the allelic and genotypic distributions of ABCC2 gene polymorphisms (1249G>A and −24C>T) between the drug-resistant and drug-responsive groups. The combined effects of SNPs and nongenetic factors were further assessed using a multivariate regression model. To evaluate differences in plasma LCM concentrations among pediatric patients with epilepsy with different genotypes, the Student t test or analysis of variance (ANOVA) was performed for comparisons of continuous data, and the Mann-Whitney U test was used for comparisons of nonparametric continuous data.

Characteristics of Pediatric Patients

The clinical and demographic characteristics of all enrolled patients are summarized in Table 2. In total, 231 Uygur pediatric patients with epilepsy (aged 4-14 years) were included in this study. The mean age at the initiation of LCM therapy was 7.7 ± 4.0 years. Overall, 60% of the patients were male (n = 139). The LCM maintenance dose ranged from 25 to 400 mg/d. After normalization by weight and daily dose, the plasma LCM concentration was determined to be 0.9 ± 0.4 mcg/mL per mg/kg. At the last follow-up (at 3, 6, or 12 months after the initiation of LCM therapy), 52 pediatric patients were on LCM monotherapy, 113 patients received valproic acid combination therapy, 69 patients received levetiracetam combination therapy, 65 patients received oxcarbazepine combination therapy, 29 patients received lamotrigine combination therapy, 3 patients received topiramate combination therapy, and 2 patients received clonazepam combination therapy. The patients were divided into 2 groups: the LCM-responsive group (n = 166, 72%) and the LCM-resistant group (n = 65, 28%).
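To make the genotype-distribution comparison from the Statistical Analysis concrete, here is a minimal sketch of the χ² test applied to the responsive/resistant split just described, using SciPy. The 2×3 genotype counts are hypothetical placeholders, not the study's Table 3 values.

```python
# Minimal sketch of the chi-square comparison of ABCC2 genotype
# distributions between LCM-responsive and LCM-resistant groups.
# The counts below are hypothetical placeholders, NOT the study data.
from scipy.stats import chi2_contingency

#            GG   GA   AA     (ABCC2 1249G>A genotypes)
responsive = [80,  64,  22]   # n = 166 (hypothetical split)
resistant  = [18,  27,  20]   # n = 65  (hypothetical split)

chi2, p, dof, expected = chi2_contingency([responsive, resistant])
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
if p < 0.05:
    print("Genotype distribution differs between groups (P < 0.05).")
```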
Multivariate analysis of factors that affect LCM response revealed no significant differences in age, sex, body mass index, or LCM dose between the responsive and resistant groups (P > 0.05). However, we observed significant differences in the types of seizures, plasma LCM concentration, concentration-to-dose (CD) ratio, and concomitant ASMs between the responsive and resistant groups (P < 0.05) (Table 2). To further analyze the effect of each factor on the response of patients to LCM, the data on each factor were compared between the drug-resistant and drug-responsive groups using the chi-square (χ²) test or Fisher exact test. Furthermore, the Student t test or the Mann-Whitney U test was conducted for analysis of quantitative variables. We observed significant differences in the type of seizure (focal onset and a combination of generalized and focal onset), plasma LCM concentrations, CD ratio, and use of monotherapy, 2 concomitant ASMs, or 3 concomitant ASMs between the drug-responsive and drug-resistant groups (P < 0.05; Table 2).

Genotype and Allele Frequencies of ABCC2

All the ABCC2 polymorphisms studied were found to follow the Hardy-Weinberg equilibrium in LCM-responsive and LCM-resistant patients (P > 0.05), indicating that the research subjects were representative of the entire group. Interestingly, the proportion of patients with the ABCC2 1249G>A (rs2273697) AA genotype in the drug-resistant group was significantly higher than that in the drug-responsive group (P < 0.05; Table 3). In addition, the proportion of patients with the ABCC2 −24C>T (rs717620) TT genotype in the drug-resistant group was significantly higher than that in the drug-responsive group (P < 0.05; Table 3). Furthermore, the proportions of patients carrying the ABCC2 1249G>A (rs2273697) A allele and the ABCC2 −24C>T (rs717620) T allele in the drug-resistant group were significantly higher than those in the drug-responsive group (P < 0.05; Table 3).

Haplotype and Diplotype Frequencies

The frequencies of 2-marker haplotypes are presented in Table 3. We estimated 4 possible haplotypes, which were then compared between the LCM-responsive and LCM-resistant groups. All haplotypes were represented at a frequency higher than 10%. The G-C haplotype carrier frequency was significantly higher in the LCM-responsive group than in the LCM-resistant group (P < 0.05, OR = 1.897, 95% CI = 1.042-3.457). We also conducted diplotype analysis of the 1249G>A and −24C>T polymorphisms. Our results suggested that the frequency of GG-CC diplotype carriers was significantly higher in the LCM-responsive group than in the LCM-resistant group (P < 0.001; OR = 3.608, 95% CI = 1.799-7.233; Table 3). However, the AA-TT diplotype carrier frequency was significantly lower in the LCM-responsive group than in the LCM-resistant group (P < 0.05, OR = 0.256, 95% CI = 0.081-0.807).

Relationships Between Genetic Polymorphisms and Therapeutic Efficacy

To investigate the effects of genetic polymorphisms on LCM responsiveness, patients were subdivided into different groups according to the SNP. Both ABCC2 1249G>A (rs2273697) and ABCC2 −24C>T (rs717620) exhibited a significant correlation with response to LCM treatment.

Associations Between ABCC2 Polymorphisms and Plasma LCM Concentration

The mean LCM dose of the patients at the last follow-up (at 3, 6, or 12 months after the start of LCM therapy) was 7.8 ± 2.1 mg/(kg·d) and 7.7 ± 2.8 mg/(kg·d) in the LCM-responsive and LCM-resistant groups, respectively.
In addition, the mean plasma LCM concentration in the LCM-responsive group (6.3 ± 2.8 mcg/mL) was significantly higher than that in the LCM-resistant group (5.3 ± 2.4 mcg/mL). The mean LCM CD ratio of the LCM-responsive group (0.9 ± 0.4 mcg/mL per mg/kg) was also significantly higher than that of the LCM-resistant group (0.7 ± 0.3 mcg/mL per mg/kg).

DISCUSSION

LCM has extensive interindividual pharmacokinetic variability, partly owing to genetic polymorphisms in drug transporters. 22 Our study comprehensively evaluated the relationship between the ABCC2 transporter and the plasma concentration and clinical efficacy of LCM in Uygur pediatric patients with epilepsy who received LCM treatment. The findings of this study indicate that the ABCC2 1249G>A (rs2273697) AA genotype and ABCC2 −24C>T (rs717620) TT genotype are significantly correlated with lower plasma LCM concentrations and LCM CD ratios than the GG genotype and CC genotype (P < 0.001) in LCM-resistant patients. To the best of our knowledge, this is the first study to investigate the relationship between genetic polymorphisms in drug transporters and plasma LCM concentrations. First, we analyzed the demographic characteristics of pediatric patients in the LCM-responsive and LCM-resistant groups. In the LCM-resistant group, the proportion of combined generalized and focal onset seizures was significantly higher than that of other seizure types (P < 0.05). Furthermore, in the LCM-responsive group, the proportion of focal onset seizures was significantly higher than that of any other seizure type (P < 0.05). The reason may be the higher number of pediatric patients with combined generalized and focal onset seizures in the drug-resistant group. Perrenoud et al 23 observed no differences in plasma LCM levels between responders (median 10.4 mcg/mL) and nonresponders (median 9.5 mcg/mL; P = 0.36), even after adjusting for additional predictors of clinical outcome. However, our study revealed that the mean plasma LCM concentration and LCM CD ratio in the LCM-resistant group were significantly lower than those in the LCM-responsive group (P < 0.05). The results of this study thus differed from those of Perrenoud et al. 23 Previous studies have reported ABCB1, SCN1A, SCN2A, ATP1A2, and ATP1A3 as genetic factors associated with drug resistance in epileptic patients. 24-27 However, only a few studies have evaluated the relationship between ABCC2 polymorphisms and drug resistance in patients with epilepsy. Several studies have shown that the relationship between ABCC2 polymorphisms and drug resistance is not consistent across different geographical regions and countries. 15-17,28,29 Grewal et al 15 revealed that alterations in the functionality of ABCC2 can affect the disposition and bioavailability of drugs, which interferes with ASM therapy. A meta-analysis by Grover et al indirectly suggests a possible role of the ABCC2 transporter at the blood-brain barrier in altering patient response to ASMs. 28 Moreover, Ufer et al 29 suggested a higher risk of ASM failure in ABCC2 −24C>T allele carriers. However, Seo et al, 28 Kim et al, 29 and Kim et al 30 found no association between ABCC2 polymorphisms and resistance to ASMs (including valproic acid, sultiame, phenobarbital, carbamazepine, oxcarbazepine, and lamotrigine) in young patients with epilepsy and adults with drug-refractory epilepsy.
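Both our results and the prior reports above turn on the CD ratio (trough concentration divided by the weight-normalized daily dose) and on nonparametric group comparisons. Below is a minimal SciPy sketch of that workflow; all patient values are invented for illustration and are not study data.

```python
# Minimal sketch: compute LCM concentration-to-dose (CD) ratios and
# compare two genotype groups with the Mann-Whitney U test.
# All values below are invented for illustration; they are not study data.
from scipy.stats import mannwhitneyu

def cd_ratio(trough_mcg_per_ml: float, dose_mg_per_day: float, weight_kg: float) -> float:
    """CD ratio in mcg/mL per mg/kg: trough / (daily dose normalized by weight)."""
    return trough_mcg_per_ml / (dose_mg_per_day / weight_kg)

# Hypothetical patients: (trough, daily dose, weight)
gg_group = [cd_ratio(*p) for p in [(8.1, 200, 25), (7.4, 150, 22), (9.0, 250, 30)]]
tt_group = [cd_ratio(*p) for p in [(4.2, 200, 24), (5.0, 250, 26), (3.9, 150, 20)]]

u, p = mannwhitneyu(gg_group, tt_group, alternative="two-sided")
print(f"U = {u}, P = {p:.3f}")  # P < 0.05 would suggest genotype-linked CD ratios
```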
Our current results suggested that ABCC2 1249G>A (rs2273697) and ABCC2 −24C>T (rs717620) were significantly related to LCM treatment response. The AA genotype of ABCC2 1249G>A (rs2273697) was more frequent among patients who were LCM resistant. In addition, the number of carriers of the ABCC2 −24C>T (rs717620) CT and TT genotypes was higher among LCM-resistant patients than among LCM-responsive patients. Moreover, the number of carriers of the ABCC2 1249G>A (rs2273697) GG genotype and ABCC2 −24C>T (rs717620) CC genotype was higher in the LCM-responsive group than in the LCM-resistant group (P < 0.05). Our results are consistent with the findings of previous studies by Grewal et al, 15 Grover et al, 28 and Ufer et al. 29 However, they were inconsistent with the results published by Seo et al, 30 Kim et al, 31 and Hilger et al. 32 These discrepancies are likely attributable to ethnic differences in the frequencies of ABCC2 genotypes and haplotypes. In addition, modern DNA studies have shown that the Uygur population carries 2.6% Western Eurasian-specific haplogroups, whereas the Han population does not. 33 Moreover, the Uyghur population is relatively isolated and has low mobility, strong genetic polymorphism, and a complex genetic structure. Therefore, there may be differences in the transport/metabolism pathways of ASMs between the Uygur and Han populations. These results emphasize the need to investigate the functional significance of ABCC2 polymorphisms across different ethnic groups. Several studies have indicated that ABCC2 polymorphisms may be related to ASM concentrations and responsiveness to ASMs (such as valproic acid and oxcarbazepine). 34-36 Chen et al discovered that the valproic acid concentration in patients with the ABCC2 rs2273697 AA genotype was significantly higher than that in patients with the GA + GG genotypes (P = 0.000). This result suggests that ABCC2 polymorphisms can affect valproic acid concentration and, consequently, treatment outcome in patients with epilepsy receiving valproic acid monotherapy. 34 Yang et al reported that the ABCC2 rs2273697 polymorphism was significantly related to oxcarbazepine plasma concentration in the whole patient cohort and in patients stratified by age (P = 0.033). 35 However, Shen et al 36 reported that polymorphisms of ABCC2 rs2273697 were not associated with the concentrations and therapeutic efficacy of oxcarbazepine. No reports are available on the relationship of ABCC2 polymorphisms with the plasma concentrations and efficacy of LCM in pediatric patients with epilepsy. This study is the first to evaluate the effect of ABCC2 polymorphisms on the plasma concentrations and efficacy of LCM in Uygur pediatric patients with epilepsy. Our current study results show that the ABCC2 1249G>A and ABCC2 −24C>T (rs717620) polymorphisms have a significant effect on plasma LCM concentration. Significantly lower plasma LCM concentrations were observed in ABCC2 1249G>A (rs2273697) AA genotype and ABCC2 −24C>T (rs717620) TT genotype carriers than in GG genotype carriers (P < 0.001). Thus, the findings of this study suggest that the ABCC2 1249G>A (rs2273697) AA genotype and ABCC2 −24C>T (rs717620) TT genotype can decrease P-gp activity, increase the gastrointestinal absorption of LCM, and ultimately decrease plasma LCM concentration. Our study, however, had some limitations. First, pediatric patients present with limitations such as poor compliance with drug dosing.
Second, considering that there are multiple factors that can affect LCM pharmacokinetics and treatment outcomes, there is always a possibility of confounders, including other SNPs. Third, regional differences, the evaluation of clinical efficacy, the judgment of epileptic drug resistance, and patients' evaluation of their own condition also limited this study. Finally, the sample size was small. Therefore, the relationship between ABCC2 polymorphisms and plasma LCM concentrations in patients with epilepsy needs to be verified using ambidirectional methods and a larger sample size.

CONCLUSIONS

The results of this study indicate that the ABCC2 1249G>A (rs2273697) and ABCC2 −24C>T (rs717620) polymorphisms affect the plasma concentrations and treatment efficacy of LCM in pediatric patients with epilepsy, leading to drug resistance. In clinical practice, ABCC2 polymorphisms should be identified before the start of LCM therapy. Plasma LCM concentrations should be monitored, and the dose should be readjusted for pediatric patients with epilepsy accordingly.
IntereStyle: Encoding an Interest Region for Robust StyleGAN Inversion

Recently, manipulation of real-world images has advanced substantially along with the development of Generative Adversarial Networks (GANs) and corresponding encoders, which embed real-world images into the latent space. However, designing encoders for GANs still remains a challenging task due to the trade-off between distortion and perception. In this paper, we point out that the existing encoders try to lower the distortion not only on the interest region, e.g., the human facial region, but also on the uninterest region, e.g., background patterns and obstacles. However, most uninterest regions in real-world images are out-of-distribution (OOD) and infeasible to reconstruct ideally with generative models. Moreover, we empirically find that an uninterest region overlapping the interest region can mangle the original features of the interest region; e.g., a microphone overlapping a facial region is inverted into a white beard. As a result, lowering the distortion of the whole image while maintaining perceptual quality is very challenging. To overcome this trade-off, we propose a simple yet effective encoder training scheme, coined IntereStyle, which facilitates encoding by focusing on the interest region. IntereStyle steers the encoder to disentangle the encodings of the interest and uninterest regions. To this end, we iteratively filter the information of the uninterest region to regulate its negative impact. We demonstrate that IntereStyle achieves both lower distortion and higher perceptual quality compared to the existing state-of-the-art encoders. Especially, our model robustly conserves features of the original images, yielding robust image editing and style mixing results. We will release our code with the pre-trained model after the review.

Introduction

Recently, as Generative Adversarial Networks (GANs) [15] have been remarkably developed, real-world image editing through latent manipulation has become prevalent [27,28,33,30,32,25]. Especially, the strong disentanglement property of the StyleGAN [18,19] latent space, i.e., W, enables scrupulous image editing [32,22], which can change only desirable features, e.g., facial expression, while maintaining the others, e.g., identity and hairstyle. For editing an image precisely with StyleGAN, it is required to obtain a suitable style latent, from which StyleGAN can reconstruct an image that has low distortion, high perceptual quality, and editability, without deforming the features of the original image.

Fig. 1: Encoding of IntereStyle. Original images, interest region, inversion, the difference between the original images and their inversions, and the editing results (smile and Mohawk). The magnitude of the difference between the original and inversion images is colored in red. Our model successfully minimizes distortion on the interest region, even without the interest region mask at inference. Moreover, even with low distortion, our model shows high editability.

Though StyleGAN is generally known to construct images with high perceptual quality, the original style space W is not expressive enough to represent every real-world image with low distortion. Consequently, a vast majority of recent StyleGAN encoders, including optimization-based methods, embed images into the W+ space [1,24,3]. W uses the identical style vector for every layer in StyleGAN, obtained by the mapping function.
On the other hand, the W+ space provides a different style vector per layer and can even provide an arbitrary style vector in R^512. However, as the distribution of such style latents is far from W, reconstructed images show low perceptual quality and editability [30,25]. Consequently, lowering the distortion while keeping the other factors is still challenging.

In this paper, we claim that directly training to lower distortion on the entire region of the image is undesirable. In most cases, images contain regions that cannot be generated due to the inherent limitations of generators. Figure 1 shows clear examples of real-world images in the facial domain, which contain regions that are infeasible to generate, e.g., hats, accessories, and noisy backgrounds. Therefore, an encoder needs to concentrate on the generable region for inversion while ignoring the un-generable region (e.g., the non-facial region for StyleGAN trained with FFHQ). This strategy helps the latents inverted from the generable region to be close to W, which show high perceptual quality and editability, as shown in Figure 1.

Fig. 2: Lowering distortion on the uninterest region. Inversion results of pSp [24], Restyle [3], and ours. An overlapped obstacle (i.e., hand) on the facial region precludes clean inversion. Firstly, pSp shows high distortion on the eyes and generates unrealistic facial shapes in the obstacle region. Restyle tries to reconstruct the obstacle region, but the reconstructed image shows artifacts on the nose and chin. On the contrary, our model shows the lowest distortion among the existing models, while maintaining high perceptual quality.

Another observation is that an attempt to reconstruct a region which is not generable induces severe distortion even on the other, generable regions. For example, in Figure 2, a hand overlapping the facial region is not generable by GAN encoders. Restyle [3], which shows the lowest distortion among all encoder-based inversion models until now, tries to lower distortion on the hand too, which rather causes catastrophic distortions on the nose and chin. In light of these observations, it is important to distinguish the region to be reconstructed elaborately from the rest. Here we define the term interest region, where the model focuses on precise reconstruction with low distortion and high perceptual quality. Practically, in most cases, the interest region is aligned with the generable region of the image. For example, in facial image generation, the main interests are the face and hair parts of the output images, which are easier to generate than backgrounds. By focusing on the interest region, we can reduce distortion without any additional task, such as an attempt to encode latents excessively far from W [8].

Contributions

We propose a simple yet effective method for training a StyleGAN encoder, coined IntereStyle, which steers the encoder to invert given images by focusing on the interest region. In particular, we introduce two novel training schemes for the StyleGAN encoder: (a) Interest Disentanglement (InD) and (b) Uninterest Filter (UnF). First, InD precludes the style latent, which includes the information on the uninterest region, from distorting the inversion result of the interest region. Second, UnF filters the information of the uninterest region, which prevents our model from redundantly attending to the uninterest region.
UnF boosts the effect of InD by forcing the model not to focus on the uninterest region overly. In addition, we propose a very simple yet effective scheme for determining the interest region, required only at the training stage. We demonstrate that IntereStyle, combined with the iterative refinement [3], effectively reduces the distortion in the interest region of the CelebA-HQ test dataset. To the best of our knowledge, IntereStyle achieves the lowest distortion among the existing state-of-the-art encoder-based StyleGAN inversion models without generator tuning. Moreover, we qualitatively show that our model robustly preserves features of the original images even with overlapped obstacles, while other baselines fail to. Lastly, we show experimental results for image editing via InterFaceGAN [28] and StyleCLIP [22], and style mixing [18] results, where our model shows remarkably robust outputs when input images contain significant noise, e.g., obstacles on the face.

Related Work

GAN Inversion

GAN inversion aims to transform given real-world images into latent vectors from which a pre-trained GAN model can reconstruct the original image. In the early stages of GAN inversion, the majority of models relied partially [36,4,5,35] or entirely [11,23,1,2,16,10,12,26] on optimization steps per image. Though the optimization-based models show high inversion quality, they must perform numerous optimization steps for every input image [20], which is extremely time-consuming. Thus, training encoders that map images into the latent space has become prevalent for inverting images in real-time domains [29,35,24,30,3,31]. However, regardless of the encoding method, the existing state-of-the-art GAN inversion models focus on the whole region of images [3,24,30,2], including both interest and uninterest regions. We propose that focusing mainly on the interest region during GAN inversion improves the perceptual quality and editability of inverted images.

GAN Inversion Trade-Off

Desirable GAN inversion should consider both the distortion and the perceptual quality of inverted images [6,30,25]. However, due to the trade-off between the two, maintaining low distortion while enhancing perceptual quality remains a challenging task [30,25]. Especially in StyleGAN, an inverted image from a latent far from the W distribution achieves lower distortion [1,2,24] but shows lower perceptual quality [30] than an image from the W distribution. Moreover, a latent far from the W distribution shows lower editability [30,25], which makes editing the inverted image harder. Here, the variance among the latents for all layers can be used as an indicator of distance from W, where W shows zero variance due to the identical latent per layer. As shown in Figure 3, the existing StyleGAN inversion models that achieve low distortion but suffer from low perceptual quality, e.g., pSp [24] and Restyle [3], show relatively high variance among the latents for all layers of StyleGAN. In particular, Figure 3b shows that Restyle gradually increases the variance of the latents as the iterative refinement progresses. In the case of e4e [30], it encodes images into latents close to W but with high distortion. In contrast to the existing methods, our model focuses on lowering distortion in the interest region, i.e., hair and face.
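To make the distance indicator above concrete, the following minimal sketch computes the variance across the layer-wise latents of a W+ code; a pure W code (the same 512-d vector repeated for each layer) gives exactly zero. The 18 × 512 shape follows the common StyleGAN2 convention, which is an assumption here, not a detail given in the text.

```python
# Minimal sketch: variance across layer-wise latents of a W+ code as a
# rough indicator of its distance from the W space (a W code repeated
# per layer has zero variance). The 18 x 512 shape is an assumption
# following the common StyleGAN2 convention.
import torch

def layerwise_variance(w_plus: torch.Tensor) -> torch.Tensor:
    """w_plus: (num_layers, 512). Returns mean per-dimension variance across layers."""
    return w_plus.var(dim=0, unbiased=False).mean()

w = torch.randn(512)
w_code = w.expand(18, 512)                          # W: identical vector per layer
w_plus_code = w_code + 0.1 * torch.randn(18, 512)   # perturbed W+ code

print(layerwise_variance(w_code).item())       # 0.0
print(layerwise_variance(w_plus_code).item())  # > 0, grows as layers diverge
```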
Method

In this section, we propose a simple yet effective StyleGAN encoder training scheme named IntereStyle. We first introduce our notation and the model architecture. Then, we introduce how to determine the interest region in input images. Next, we propose two novel methods, Interest Disentanglement and Uninterest Filter, in Section 3.3 and Section 3.4, respectively. Finally, we describe the whole training scheme of IntereStyle in Section 3.5.

Notation and Architecture

Our architecture is shown in Figure 4. It is based on the pSp [24] model, combined with iterative refinement [3,31]. At the i-th iteration of the iterative refinement, our encoder E receives a latent calculated at the previous step, w_{i-1}, together with a pair of images. The pair consists of (ŷ_{i-1}, I_i), where ŷ_{i-1} is the decoded result of w_{i-1} via the generator G, i.e., ŷ_{i-1} = G(w_{i-1}), and I_i is the ground-truth image I pre-processed by our proposed method in Section 3.4. E aims to encode the difference between ŷ_{i-1} and I_i into a latent, ∆_i. Consequently, G can yield an image ŷ_i that is more similar to I_i than ŷ_{i-1} by decoding the latent w_i = w_{i-1} + ∆_i. Our model iteratively refines the latent for a total of N iterations. Finally, we utilize a loss function L_image for training, consisting of the weighted sum of L_2, L_LPIPS [34], and L_ID [24]. We explain the details of each loss in Appendix A.

Fig. 4: Overall structure of IntereStyle. IntereStyle, trained with a total of N iterations, receives N images, i.e., I_1, I_2, ..., I_N. I_i is the original image passed through a low-pass filter with radius r_i, which steers the model to focus on coarse features at the early iterations; I_i gets clearer as i grows. The encoder aims to embed the difference of the two images I_i and ŷ_{i-1} into the latent space as ∆_i. After the N-th iteration is finished, we apply Interest Disentanglement: we multiply ŷ_N with I_mask to wipe out the uninterest region, and yield ∆_b by encoding the difference between I and I · I_mask. By adding ∆_b to the obtained latent w_N, we yield the output ŷ. At the inference stage, we yield the final output ŷ_N without applying Interest Disentanglement.

Interest Region

To guide the model to focus on the interest region, we first need to label the interest region. The interest region can be designated arbitrarily according to the intended use of the inversion model. For instance, the facial and hair regions can be set as the interest region in the facial domain, and the whole body in the animal domain. To label this interest region, off-the-shelf segmentation networks are used, as described in Section 4. However, directly using the segmentation masks from these networks causes distortion of facial boundaries in the generated image. To accommodate the boundary information, we use a dilated segmentation mask containing the interest region, as shown in Figure 5.

Fig. 5: We compare the interest region obtained by the raw mask and the dilated mask. With the raw mask, the interest region excludes the information related to the facial boundary, which causes distortion of the facial shape. Consequently, we dilate the mask to force the interest region to include the facial boundary information, as shown in I · Dilated Mask.

Without dilation, the loss term on the interest region cannot penalize an inverted face that overflows into the uninterest region. Consequently, we dilate the mask to penalize overflowed reconstruction at the interest region boundary. Though a small part of the uninterest region is included in the interest region through the dilation, we empirically find that our model still precisely generates the interest region without any boundary distortion. We visually show the effect of mask dilation in the ablation study in Section 4.1.
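A minimal sketch of the dilation step, assuming a binary interest mask and using stride-1 max pooling as the dilation operator (the kernel size below is an illustrative choice, not a value from the paper):

```python
import torch
import torch.nn.functional as F

def dilate_mask(mask: torch.Tensor, kernel_size: int = 15) -> torch.Tensor:
    """Dilate a binary interest mask so it also covers the region boundary.

    mask: (1, 1, H, W) tensor with values in {0, 1}.
    Max pooling with stride 1 and 'same' padding grows the foreground
    by roughly kernel_size // 2 pixels in every direction.
    """
    pad = kernel_size // 2
    return F.max_pool2d(mask, kernel_size, stride=1, padding=pad)

# Usage: wipe out the uninterest region while keeping boundary context.
mask = torch.zeros(1, 1, 256, 256)
mask[..., 64:192, 64:192] = 1.0            # toy "face" region
image = torch.rand(1, 3, 256, 256)
masked_image = image * dilate_mask(mask)   # I · I_mask in the paper's notation
```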
Interest Disentanglement

To enforce the model to invert the interest region into the latent space precisely, we should train the model to concentrate on the interest region regardless of the uninterest region. However, due to the spatially-agnostic nature of Adaptive Instance Normalization (AdaIN) [21], inverted style latents that consider the uninterest region may deteriorate the inversion quality of the interest region. To prevent the encoded style of the uninterest region from deforming the inverted result of the interest region, the inversion of each region should be disentangled. Since E ideally encodes the difference between the input pair of images, a latent ∆ obtained by encoding a pair of images that differ only in the uninterest region should contain no information about the interest region. In other words, the decoded results from the latents w and w + ∆ should be the same on the interest region. Motivated by the above, we propose a simple method named Interest Disentanglement (InD) to disentangle the inversions of the interest and uninterest regions. We construct the pair of input images for InD as follows: the original image I, and the same I multiplied with the interest region mask. Then, as shown in Figure 4, we obtain a pair of images that differ only in the uninterest region, and the corresponding latent from E, ∆_b. Ideally, the information in ∆_b is solely related to the uninterest region, which implies that w_N generates the interest region robustly even when ∆_b is added. Consequently, we define the InD loss as

L_InD := L_image(ŷ_N · I_mask, ŷ · I_mask),

where ŷ is the inversion result from the latent w = w_N + ∆_b. We apply Interest Disentanglement only at the training stage, which enables inference without the interest region mask. We empirically find that, after training, IntereStyle focuses on the interest region without any prior mask given.
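The sketch below outlines one plausible implementation of the InD step, assuming encoder, generator, and loss callables (E, G, l_image) with the interfaces described above; it is an illustration of the scheme, not the authors' code:

```python
def ind_loss(E, G, l_image, I, I_mask, w_N, y_N):
    """Interest Disentanglement: a latent encoded from a pair that differs
    only in the uninterest region must not change the interest region of
    the decoded image.

    E(target, reference, w): encoder returning a residual latent delta
                             (interface assumed for this sketch).
    G(w): generator decoding a latent into an image.
    l_image(a, b): weighted sum of the L2, LPIPS, and ID losses.
    All images and masks are (B, C, H, W) tensors.
    """
    masked = I * I_mask              # wipe out the uninterest region
    delta_b = E(I, masked, w_N)      # ideally encodes only uninterest info
    y_hat = G(w_N + delta_b)         # decoded result of w = w_N + delta_b
    # Penalize any change of the interest region caused by delta_b.
    return l_image(y_N * I_mask, y_hat * I_mask)
```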
Uninterest Filter

At the early steps of iterative refinement, E focuses on reducing the distortion of the uninterest region [3]. Due to the spatially-agnostic nature of AdaIN, we claim that excessively focusing on the uninterest region hinders the inversion of the interest region. We propose a method named Uninterest Filter (UnF) to make E concentrate on the interest region consistently at every iteration. UnF eliminates details of the uninterest region, which are inherently infeasible to reconstruct; thus, E can reduce its redundant attention on the uninterest region in pursuit of low distortion. In detail, UnF eases the calculation of ∆_i by blurring the uninterest region of I at each iteration with a low-pass Gaussian filter of radius r, LPF_r. As shown in Figure 4, UnF gradually reduces the radius of the Gaussian filter as iterations progress, for the following two reasons. First, the redundant attention on the uninterest region is most severe at the early stage of the iterations [3]; consequently, we should blur the image heavily at the early iterations. Second, excessive blur results in a severe texture difference between the interest and uninterest regions. We note that E is implicitly trained to encode the difference over the whole region, and can be biased toward producing blurred regions when blurred images are consistently given. For a realistic generation, the input at the N-th iteration, I_N, is deblurred, i.e., identical to I. We calculate the input image at the i-th iteration as

I_i := I · I_mask + LPF_{r_i}(I) · (1 − I_mask),

where the radius r_i decreases as i grows, with r_N = 0 so that I_N = I.

Training IntereStyle

At the training stage, we jointly train the model with the off-the-shelf encoder training loss [24], L_image, and L_InD. However, in contrast to Restyle [3], which back-propagates N times per batch, we back-propagate only once after the N-th iteration is over; thus, ours shows a relatively faster training speed compared to Restyle. Our final training loss is defined as

L_total := L_image(I · I_mask, ŷ_N · I_mask) + λ L_InD.

Our proposed methods, InD and UnF, are synergistic during training: while InD disentangles the inversion of the uninterest region, UnF forces the model to look at the interest region. Though applying L_image to images multiplied with I_mask inherently drives E to focus on the interest region, InD is essential for robust training. Without L_InD, we find that E implicitly encodes information of the uninterest region into ∆_i, which affects the inversion of the interest region through AdaIN.
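Putting the pieces together, a condensed training step might look like the sketch below; the blur operator, blur schedule, and callable interfaces are illustrative assumptions layered on the description above, not the released implementation:

```python
import torch.nn.functional as F

def blur(img, r):
    """Cheap stand-in for a Gaussian low-pass filter LPF_r (r == 0 -> identity)."""
    if r == 0:
        return img
    k = 2 * r + 1
    return F.avg_pool2d(img, k, stride=1, padding=r, count_include_pad=False)

def interestyle_step(E, G, l_image, I, I_mask, w_avg, radii, lam=1.0):
    """One IntereStyle training step with N = len(radii) refinement
    iterations; radii is a decreasing schedule with radii[-1] == 0."""
    w = w_avg                                    # start from the average latent
    y = G(w)
    for r in radii:                              # Uninterest Filter (UnF)
        I_i = I * I_mask + blur(I, r) * (1 - I_mask)
        w = w + E(I_i, y, w)                     # residual refinement
        y = G(w)
    # Interest Disentanglement (InD) after the last iteration.
    delta_b = E(I, I * I_mask, w)
    y_b = G(w + delta_b)
    loss = l_image(I * I_mask, y * I_mask) + lam * l_image(y * I_mask, y_b * I_mask)
    return loss                                  # back-propagate once per batch
```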
Experiments

In this section, we first briefly introduce our datasets and baselines; the implementation details are described in Appendix A. Then, we compare the inversion results with baselines and ablation scenarios, both qualitatively and quantitatively. Next, we compare the image manipulation of our model together with the baselines. Finally, we look into the iterative scheme of our model alongside Restyle. Though we mainly show results in the facial domain, we note that our method shows remarkable results in various domains; we briefly show experimental results in the animal domain in Figure 6, and plenty of additional results in Appendix D.

Datasets. For the facial domain, we trained the encoder using the FFHQ dataset, consisting of 70,000 human facial images. For validation, we used the CelebA-HQ test set, consisting of 2,800 human facial images. We did not add or change any alignment or augmentation procedures compared to the existing encoder training methods [24,3,30], for a fair comparison. To generate the interest and uninterest region masks, we used the pre-trained Graphonomy [14] model. For the animal domain, we used the AFHQ wild dataset [9] for training and validation, which consists of 4,730 and 500 images, respectively. We used the pre-trained DEtection TRansformer (DETR [7]) to obtain the interest region.

Baselines. We compared our model with several well-known StyleGAN encoders: IDGI [35], pSp [24], e4e [30], and Restyle [3]. Moreover, for the qualitative comparison of inversion, we additionally compared with optimization-based models [1,2], which are well known for their outstanding performance. For the baseline models, we used the publicly available pre-trained weights for evaluation. Please refer to Appendix C for more detailed information on each baseline.

Inversion Evaluation

Qualitative Evaluation. Figure 6 shows the inverted images of IntereStyle and two StyleGAN inversion baselines, Image2StyleGAN [1] and Restyle [3]. In this figure, we show the entire inversion results along with cropped images, which correspond to the areas where the interest and uninterest regions overlap. Without our robust inversion schemes, mere attempts to lower distortion over the entire region often caused severe artifacts or feature deformations. Indeed, the baselines produced artifacts that severely lower the perceptual quality. In addition, they mangled features of the original images in some cases. For instance, Restyle-pSp turned the microphone into a white beard, which does not exist in the original image. The optimization-based inversion mitigated artifacts relative to the other baselines, but still suffered from them; moreover, it required more than 200 times the inference time of IntereStyle. In contrast, IntereStyle showed the most robust outputs compared to all baselines, best preserving the features of the original images without artifacts.

Fig. 6: Qualitative comparison of various StyleGAN inversion methods. IntereStyle effectively disentangled the uninterest regions (e.g., mic, letter, fingers, and wood) from the interest region, which enabled robust handling of artifacts. However, the baseline models suffered from artifacts, which significantly deformed the features of the original images. Best viewed in zoom-in.

Table 1: Quantitative comparison. We calculated each result by multiplying the mask, i.e., the interest and facial masks, for an exact comparison of the inversion quality of each region. IntereStyle showed the lowest L_2 and LPIPS on both the interest and facial regions among the state-of-the-art StyleGAN inversion models. Moreover, IntereStyle showed the best ID similarity among the baselines.

In Figure 7, we qualitatively show the effectiveness of the mask dilation. Without dilation, the model could not precisely reconstruct the original boundary of the interest region, denoted as the red region. In contrast, with mask dilation, our model minimized the boundary deformation.

Quantitative Evaluation. We used the L_2 and LPIPS [34] losses, and measured ID similarity [24] utilizing the pre-trained CurricularFace [17], which shows state-of-the-art performance in facial recognition. We measured quality on the interest region, i.e., the facial and hair regions in this paper, which need to be inverted precisely. To this end, we multiplied the interest region mask into the calculation of ID similarity to preclude the facial recognition model from being affected by the non-facial region. Since facial recognition performance also depends on features of the non-facial region [13], inverted images are prone to be identified as faces similar to the originals merely due to the resemblance of the non-facial regions. To compare the models solely on the facial region, we should wipe out the uninterest region. As shown in Table 1, IntereStyle showed low distortion on both the interest and facial regions while preserving identity well. We conclude that focusing on the interest region is indeed helpful for robust inversion.

Table 2 shows the ablation study, sequentially applying each component of our method to measure its effect on model performance. InD reduced the negative effect of the uninterest region, which indeed lowered distortion compared to naïvely applying L_image on the interest region. UnF lowered LPIPS by forcing the model to preserve features of the interest region. Please refer to Appendix D for more detailed results of each ablation model.

Fig. 8: Our model showed robust inversion results together with consistently high editability, while the baselines failed in various cases. First of all, pSp and e4e failed to invert robustly, ignoring makeup and the detailed appearance of each image. Restyle-pSp and Restyle-e4e were both vulnerable to overlapping obstacles: in the right image of (a), Restyle-pSp generated severe artifacts, while Restyle-e4e distorted the shape of the mouth significantly. Moreover, the Restyle-based models showed poor editability; in (b), Restyle-pSp and Restyle-e4e failed to change the hairstyle in the Mohawk case.

Fig. 9: Style mixing results. We interpolated latents from source A to source B. We took styles corresponding to either coarse (4^2-8^2), middle (16^2-32^2), or fine (64^2-1024^2) resolutions from source B and took the rest from source A. IntereStyle showed robust results on the interpolation, even with obstacles on the original source images, while Restyle-pSp suffered from severe artifacts.

Editing via Latent Manipulation

GAN inversion is deeply related to image manipulation in the latent space. In this section, we compare the quality of edited images produced by various StyleGAN inversion models [24,30,3], manipulated via the InterFaceGAN [28] and StyleCLIP [22] methods and via style mixing [18,19]. Figure 8 shows the results of editing real-world images via InterFaceGAN and StyleCLIP, together with the inversion results. We changed three attributes for each method: smile, age, and pose for InterFaceGAN, and "smile", "lipstick", and "Mohawk hairstyle" for StyleCLIP. Our model showed high inversion and perceptual quality consistently across various editing scenarios, even with strong makeup or obstacles. However, pSp and e4e missed important features of the images, such as makeup or eye shapes; moreover, pSp produced artifacts in several editing scenarios. Restyle-pSp and Restyle-e4e failed to robustly handle obstacles: on the right side of Figure 8a, Restyle-pSp produced severe artifacts around the mouth, while Restyle-e4e totally changed its shape. Moreover, the Restyle-based models showed low editability in specific cases, such as "Mohawk".

Owing to the superior disentanglement of the StyleGAN latent space [28], we can separately manipulate the coarse and fine features of images. Following the settings of the StyleGAN [18] experiment, we took styles corresponding to either coarse, middle, or fine spatial resolutions from the latent of source B, and took the others from the latent of source A. Moreover, we mixed more than one hard case, e.g., obstacles on faces and extreme poses, to evaluate the robustness of our model. As shown in Figure 9, our model showed higher perceptual quality on the interpolated images compared to Restyle-pSp, which produced images with texture shift (i.e., cartoon-like texture), distorted facial shapes, and undesirable artifacts during style mixing. In contrast, our model generated stable facial outputs. Additional qualitative results are shown in Appendix D.2.
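For reference, style mixing over W+ latents amounts to a per-layer selection between two codes; the sketch below follows the coarse/middle/fine split quoted above for an 18-layer, 1024x1024 StyleGAN, where the split-to-index mapping is our assumption:

```python
import numpy as np

# Layer index ranges per resolution band: two layers per resolution,
# 4^2 -> layers 0-1, 8^2 -> 2-3, ..., 1024^2 -> 16-17.
BANDS = {"coarse": list(range(0, 4)),    # 4^2 - 8^2
         "middle": list(range(4, 8)),    # 16^2 - 32^2
         "fine":   list(range(8, 18))}   # 64^2 - 1024^2

def style_mix(w_a: np.ndarray, w_b: np.ndarray, band: str) -> np.ndarray:
    """Take the chosen band of layers from source B, the rest from source A.

    w_a, w_b: W+ codes of shape (18, 512) inverted from two images.
    """
    mixed = w_a.copy()
    mixed[BANDS[band]] = w_b[BANDS[band]]
    return mixed  # decode with G(mixed) to render the mixed image

# e.g., transfer only coarse structure (pose, face shape) from source B:
w_a, w_b = np.random.randn(18, 512), np.random.randn(18, 512)
w_mix = style_mix(w_a, w_b, "coarse")
```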
Iterative Refinement of IntereStyle

We compare the progress of iterative refinement between Restyle [3] and IntereStyle in Figure 10. Restyle reconstructed most of the coarse features within a few steps, while its latent variance increased consistently as the iterations progressed. In other words, the reduction in distortion is marginal, even though Restyle focuses on it excessively. Consequently, the latent from Restyle is located far from W, which yields an image with low perceptual quality. In contrast, IntereStyle concentrated on the interest region, which can be generated without a broad excursion from W. Consequently, IntereStyle effectively reduced distortion on the interest region over the iterations while maintaining high perceptual quality.

Conclusions

For StyleGAN inversion, focusing on the interest region is essential but as yet under-explored. We found that excessive attention on the uninterest region causes a drop in perceptual quality and high distortion on the interest region. We proposed a simple yet effective StyleGAN encoder training scheme, coined IntereStyle, composed of Interest Disentanglement and Uninterest Filter. We demonstrated that IntereStyle shows both low distortion and high perceptual inversion quality, and robustly enables various latent manipulations for image editing. We look forward to our work being widely used in future research and in industry, wherever a delicate inversion of the interest region is needed for image editing.
2022-09-23T06:42:50.619Z
2022-09-22T00:00:00.000
{ "year": 2022, "sha1": "6bf7f27ad3d4bea67797835ed09f9469749e4052", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "9212f7b3fcc03bd4ff3d8babee502ed26194024a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
201349114
pes2o/s2orc
v3-fos-license
Examining Online Health Sciences Graduate Programs in Canada

Approximately one in 10 employed Canadians worked in health care and social services in 2016. Health professionals perceive life-long learning as an important element of professional life and value flexibility in their continuing education activities. Online learning is ideally suited to meet this need for flexible health sciences continuing education. The present study sought to identify and characterize online graduate programs in health sciences offered by Canadian universities. All Canadian (non-technical) university websites were hand searched for online graduate programs in health and related fields. Each identified program was characterized by 10 features: province, university, flexibility (i.e., fully online or blended), subject area, curriculum (e.g., coursework, thesis or project, practicum), duration and timing options (i.e., full-time, part-time), admission requirements, class size and acceptance rates, and employment outcomes. The search identified 171 Canadian university online graduate programs in health and related fields. Across Canada, the greatest numbers of programs are offered in Ontario and British Columbia. Most programs are master's and graduate certificate programs, with graduate diploma and PhD programs being less common. While the majority of programs require an undergraduate degree for admission, some programs base entry requirements on previous work experience. Most programs offer a blended learning experience, with fewer being fully online. The most common content areas include nursing, public health, occupational health, and occupational therapy. These findings highlight opportunities to advance fully online, health continuing education in novel subject areas.
Introduction

E-learning, as defined by the Canadian Council on Learning, involves the development of knowledge and skills through the use of technology (Canadian Council on Learning [CCL], 2009). Technology can support engagement with content through online learning activities and tools, and promote interaction among individuals in distance education (Abrami et al., 2006). Many higher education institutions are adopting e-learning as a means of providing accessible and flexible educational opportunities to meet the learning needs of students in the 21st century. Indeed, e-learning has become a critical cornerstone in higher education advancement. The number of Canadian adults between the ages of 25 and 64 holding university degrees continues to rise (Statistics Canada, 2013), and post-secondary institutions have reported steady growth in online enrolments since 2015 (Martel, 2015; Bates et al., 2017; Donovan et al., 2018). In 2016-2017, 17% of all Canadian post-secondary students were taking at least one online course for credit, and 65% of those same post-secondary institutions anticipated modest (1-10%) to fast (over 10%) growth of their online enrolments over the next year (Donovan et al., 2018). Catering to the growing student demographic of part-time, mature, and working professionals, online education offers convenient, flexible, student-centered educational opportunities (Innes, Mackay, & McCabe, 2006). For 57% of Canadian institutions, online learning was rated very important for expanding continuing and professional education programs (Donovan et al., 2018). Moreover, online education allows universities to increase student access, be more economically competitive by attracting students from outside the traditional service area, improve educational attainment, and provide pedagogical improvements (Abrami et al., 2006; Donovan et al., 2018).

Data from multiple domains provide strong evidence that health education is an area of current and future demand, not only in Canada, but worldwide. As the Canadian population ages, there has been a rise in life expectancy accompanied by chronic conditions such as arthritis, diabetes, and cardiovascular disease (Public Health Agency of Canada, 2016). This demographic change is increasing the demand on healthcare systems, highlighting the need to expand the number of health professionals who possess the competencies and skills required to (i) adapt to the rapidly evolving health care sectors, and (ii) contribute to the complex problem-solving that is required by the health changes of today and tomorrow. E-learning has been found to be an appropriate and effective method for learning health-related content and can be used to meet this growing need for working health professionals (Moore & Hart, 2004; Shenk, Moore, & Davis, 2004; Wernet, Olliges, & Delicath, 2000). Currently, few studies have investigated online learning opportunities in the health sciences in Canada. This may be attributed to the devolved and distributed structure of the higher education system (Contact North, 2016). Highlighting this gap in the literature presents a time-sensitive and valuable opportunity to further our understanding of online education opportunities in Canada. To our knowledge, no published studies have evaluated the current landscape of online graduate education in the health sciences offered by Canadian universities.
Consequently, this research aimed to identify and characterize current online postgraduate programs in health and related fields offered by Canadian universities. The study identified existing program availability and opportunities for further development in novel areas of concentration.

Methods

Canadian university websites were manually searched between January 2017 and October 2017 for fully online or blended graduate programs in a health or health-related field. College-level institutions and polytechnic universities were excluded from this study in order to focus on university-based programs. All data were exclusively collected from the university websites; universities were not contacted for further information or clarification about their online programs. Programs were included in the data analysis if they met the following inclusion criteria: (i) online format, (ii) graduate-level program (e.g., post-baccalaureate certificates, diplomas, master's, and PhD), and (iii) in a health or health-related field. To meet the online inclusion criterion, the majority of the program had to be available in an online or blended format. A program was considered graduate-level if a post-secondary degree or equivalent credential was required for admission. A program was defined as a health or health-related program if the program's stated intent was to provide education related to health. Health, as defined in the Constitution of the World Health Organization, is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity (World Health Organization, 1948).

A list of all Canadian universities was created (see Appendix) and the corresponding university websites were searched by three independent researchers, including two bilingual speakers. To identify online health programs meeting the inclusion criteria, a thorough search process was undertaken using the Google search engine, university website search features, and direct access to relevant departmental web pages. This preliminary search yielded 192 programs, which were entered into an Excel database. To validate the accuracy of the program findings, all university websites were reviewed again by an independent researcher. A cross-comparison of research findings was conducted, along with a consolidated team analysis to review any discrepancies in inclusion. A total of 171 programs met the inclusion criteria following the final phase of data collection. The data were analyzed iteratively using content analysis. Data were categorized in the database according to: province, university, program name, program type, subject area, learning format, program format, experiential learning, program flexibility, academic admissions, work- or volunteer-related admissions criteria, class size, acceptance rates, and job outcomes. Codes were inductively created from recurring patterns in the data, as well as defined and categorized to assist in thematic analysis (Table 1).

Table 1 (excerpt). Coding scheme:
- Class size: coded based on availability of information on class size. No: information on class size not provided. Yes: information on class size provided.
- Acceptance rates: coded based on availability of information on acceptance rates. No: information on acceptance rates not provided. Yes: information on acceptance rates provided.
- Job outcomes: coded based on availability of information on potential employment outcomes. No: potential career outcomes not provided. Yes: specific career outcomes provided. Vague: vague or very general careers in health care noted.
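As a rough illustration of the tallying step that follows this coding (the file name and column names below are hypothetical, not from the study), counts such as those reported in the Results can be derived from the program database like so:

```python
import pandas as pd

# Hypothetical CSV export of the study's Excel program database.
df = pd.read_csv("online_health_programs.csv")

# Tally programs by province and by program type, as in the Results.
print(df["province"].value_counts())
print(df["program_type"].value_counts())

# Cross-tabulate learning format (fully online vs. blended) by program type.
print(pd.crosstab(df["program_type"], df["learning_format"]))
```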
Results

The website search identified 171 online graduate programs in a health or health-related field offered across 44 Canadian universities (Table 2). The programs were offered across Canada, in British Columbia (n=35), Alberta (n=26), Saskatchewan (n=9), Manitoba (n=1), Ontario (n=50), Quebec (n=26), New Brunswick (n=6), Nova Scotia (n=10), and Newfoundland (n=8). There were greater numbers of programs available in some provinces, particularly Ontario and British Columbia, likely in accordance with the higher concentration of universities in these provinces. No programs were identified within the Yukon Territory, Northwest Territories, Nunavut, or Prince Edward Island.

Among the 171 programs identified, there was a variety of graduate-level credentials, certifications, and degree opportunities in the health field. The results identified 47 certificate, 21 diploma, 76 master's, and four doctoral online health programs. Three combined degree programs, including a graduate diploma and master's degree, and dual-master's degrees, were also identified. Some programs (n=20) did not report the type of graduate credential, or did not classify the program as a certificate, diploma, master's, or doctoral degree (i.e., microprogram).

The most common content areas offered by the online programs included nursing, public health, and occupational health or physical therapy. This finding was consistent across the certificate, diploma, master's, and doctoral program types, with some variance in subject area frequency and availability. Among the certificate programs, there was a higher prevalence of nursing (n=13), occupational health and safety (n=5), public health (n=4), and mental health (n=4) programs. The diploma programs included varied subject areas, with a higher proportion of nursing (n=4) and health information (n=3) programs. Of the four PhD programs identified, three specialized in nursing. Finally, there was a higher frequency of nursing (n=21), public health (n=9), social work (n=7), counselling (n=5), occupational therapy (n=4), and clinical science (n=4) master's programs. Less common were programs in the following subject areas: addiction, anesthesia, clinical epidemiology, food science/safety, oncology, palliative care, nutrition, health and social services, rehabilitation science, dementia, polysomnography, health leadership/management, health education, pediatric psychosocial care, gerontology, child psychology, eHealth, and medical radiation.

The programs identified were delivered fully online, or in a combination of distance and on-campus face-to-face learning experiences, referred to as blended. A total of 76 programs were fully online, with a higher proportion of certificate (n=37) and diploma (n=10) programs, compared to master's (n=14) and doctoral (n=0) degrees. The majority of programs used a blended learning format (n=92), with mandatory on-campus institutes, courses, residencies, workshops, practicums, and other in-class delivery methods. Three programs offered both blended and fully online learning opportunities, dependent on student preference. Many of the programs (n=132) included flexible program structures, with part-time and self-selected paces. By program type, many of the certificate (n=37), diploma (n=17), master's (n=57), and doctoral (n=2) programs included flexible formats and durations.
In congruence with the flexible format structure, a significant portion of the programs offered experiential learning opportunities (n=111). These included internships, practicums, residencies, clinical practice, research projects, placements, workshops, labs, and fieldwork. Most of the master's (n=67) and doctoral (n=4) programs offered an experiential education component, whereas certificate (n=19) and diploma (n=10) programs were less likely to offer hands-on learning experiences. Some of the master's (n=28) and all of the doctoral (n=4) programs offered a thesis or dissertation option.

Most of the programs (n=89) required an undergraduate degree or equivalent for admission into the program. Equivalent qualifications included a college degree, undergraduate-level courses, a certificate, or a diploma. Some programs (n=70) required an undergraduate degree plus additional qualifications, degrees, or academic experience. For example, a post-secondary education degree or diploma, in addition to registration by an accredited government body (i.e., as a Registered Nurse in Canada) or a graduate-level degree, was required for admission. Many of the program admission requirements (n=70) included previous work or volunteer experience, which ranged in duration and relevance to the program area. Finally, few program websites (n=14) offered information about acceptance rates and class sizes. While some program websites provided information about employment opportunities and career outcomes, including a list of specific career options or opportunities for advancement in the field, this content tended to be ambiguous or largely undefined for the majority of programs. For example, one Master of Public Health webpage described career opportunities with the following statement: "Career in public health practice."

Discussion

The present study identified 171 online programs in health or a health-related field offered by Canadian universities. Certificate and master's programs are the most prevalent online health credentials, with fewer online educational opportunities at the diploma or doctoral level. The majority of programs focus on specific disciplines or professions, including nursing, public health, and occupational health or therapy; fewer online programs take an interprofessional perspective. Many programs offer an experiential learning component, particularly those at the master's and doctoral levels. Less than half of the programs identified were offered fully online, with the remaining programs requiring students to participate in a mandatory on-campus component, which was clearly indicated on the program websites. Thus, there appears to be an opportunity to develop additional, fully online graduate programs in health sciences, particularly at the master's and doctoral level, incorporating interprofessional learning and practice within the program pedagogy.

Limitations

While procedures were put in place to improve the overall quality of the collected data, there are some limitations to this study. The search strategy used to collect data could have missed programs at universities, as the websites of non-health departments, such as education and psychology, were not searched. Some websites were difficult to navigate and information was often not optimally presented, or information was implied rather than explicitly stated.
Lastly, since these data reflect only information available to the researchers within the 10-month period over which they were collected, and due to the evolving nature of online and program information, the present findings could quickly become outdated. Despite these limitations, the present findings contribute to our understanding of the current state of e-learning across Canadian universities in the field of health.

While most university websites provided program overviews, admission requirements, application processes and deadlines, and course information, many program website layouts were difficult to navigate and some information could not be retrieved. A limited number of websites provided statistical information regarding acceptance rates and class sizes for prospective students. Highlighting such pertinent information with greater transparency is one avenue for change. In addition, employment opportunities associated with the programs were often ambiguous and largely undefined. Program websites should be designed in a comprehensive, accessible manner to attract and inform prospective students. Along with standard program information, websites should offer data and supporting information pertaining to program admissions and employment after graduation.

Future Program Development and Research

As indicated by the number of applications Canadian universities receive for their graduate programs in the health sciences field, there is no shortage of student interest in pursuing a health-related career. This study suggests that online academic programs are readily available to a vast population of students. As this educational format continues to gain popularity, institutions must continue updating their websites to meet the needs of the student population. This includes providing relevant, up-to-date information that is presented to interested students in a logical, user-friendly format, allowing for efficient navigation. Today, university students and employed professionals alike place a high value on flexibility of time and place in their continued educational endeavors. Therefore, the need to provide additional fully online programs that contain experiential learning opportunities is of great importance.

Conclusions

This research aimed to identify current online graduate programs in the health sciences offered by Canadian universities. As this research suggests, there is a critical and continual need for online graduate programs to be structured in a format that allows an optimal level of accessibility and flexibility for the student population. While this type of education is increasing among Canadian institutions, the findings suggest that this program configuration is particularly lacking at the master's and doctoral level. Additional fully online postgraduate programs that align with the personal demands of potential students, such as ongoing work and family commitments, are needed.
2019-08-23T16:21:53.704Z
2019-03-20T00:00:00.000
{ "year": 2019, "sha1": "3c869361667355b3c9a869b230290c481a6dc971", "oa_license": "CCBY", "oa_url": "http://www.irrodl.org/index.php/irrodl/article/download/4007/5152", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3882bdcdba4c23fba3fe306c3fa3f6e32f3ad2fd", "s2fieldsofstudy": [ "Education", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
178661680
pes2o/s2orc
v3-fos-license
Design of SMART Car

The purpose of this research is to try to make use of every possible power source to get a car moving with the minimum running cost and minimum damage to the environment.

Our Vision: We are planning to make a car that opens the doors for a new generation of cars.
Our Mission: Using previous experience to create a new engineering icon.
Our scope of work: Energy used to move the car; Reliability; Features.

What is meant by Smart?

Smart means that the car is smart enough to understand ongoing events and to keep up with our daily lives. S.M.A.R.T. also stands for:
- Sound: we need the design of the car to be free of errors, so that we have a sound design.
- Material: using green materials is also taken into consideration in our smart car.
- Achievable: making sure that this smart car can actually be built is also part of our project.
- Realistic: the real world makes us search for what society needs; we cannot exceed a specific cost or require a very difficult means of production.
- Tolerance: we also have to take into consideration the tolerance of the calculations; we cannot assume that the world is ideal.

The Design

According to previous experience in this field, we would go for a car weight of 860 kg (including the battery and the motor). As the simulation in SolidWorks® shows, we have a frontal area of 1.7 m².

Energy used to move the car

In line with our mission, we looked for a previous car so that we could start where others finished rather than starting over. For the energy source, we looked for the best solution that would provide:
1. A sustainable source
2. Relatively cheap energy
3. Reliability

We are also looking for new resources that would fulfill these conditions. A lot of wasted energy could be recovered, as we can do some energy harvesting in the interior design as well as the exterior one. The energy would be divided into two sections: inside the car and outside the car. The energy harvesting would also depend on the main source of energy: we can recover much energy from the heat loss if we are using an I.C.E. (Internal Combustion Engine), or we can use regenerators if we are using a B.E.V. (Battery Electric Vehicle). Later in this report we will choose which kind of car to develop.

Reliability

You can't ask a customer who just bought a car not to use it daily, or ask him to wait a couple of hours to make sure that the car is ready to use. It is very important to make sure that the car is ready at any time, no matter whether it is a sunny day or a hard raining day. Maintenance is also a very important factor. Most car users would consider their car "in good shape" for at least 5 years of normal usage. We have to make sure that our design can be used for 5 years without needing to replace any important parts.

Features

Nowadays, customers are looking for comfort; if you have a very good car without any features, you can't have a successful car in the market. We see the Chinese experience in cars: with a very low price you are now able to have a car full of features, like a parking sensor and automatic transmission. Maybe these are very simple things, but they are very important to the everyday customer. It should also be considered from the beginning how to calculate the power needed for these features. Remarkably, we found that we can even generate energy from these features. We can't depend only on our low price, on the car being green, or on the low running cost. We have to do it the smart® way.

What do we want from our car?
We are looking for a car without any running cost. Maybe it's a dream, but we have to think big; then we can achieve it later. We will start from a previous experience and then modify it so that it has no running cost at all, or a very low running cost. We are also looking for a green car; a low-emission, environmentally friendly car would be very acceptable nowadays.

How can we achieve our purpose and the market purpose at the same time?

We all have to admit that the main motive for any product is the customer's desire, but there are four other impacting factors that have (almost) the same effect on any product. Science has a big effect on the product; technology factors can change a lot nowadays. No one could have ever imagined that we would need a mobile phone. No one could have thought that a "smart" phone would one day be a need. It is all about science: now that we have microprocessors, you can have a small computer in what we used to call a phone, or a smart phone as we say. And this is our gate to a new, smart, environmentally friendly car. We have to take into consideration what we should offer for customer satisfaction without giving up the new concept we are looking for.

About the market

When you look at the numbers, it is easy to know how to sell a car. You could even ask yourself: when I am going to buy a car, what do I think about? Your answer would be:
1. The cost (fixed and running)
2. The look
3. The features
4. Previous experiences
Of course you can't make sure that everyone is happy, but you have to take the popular demands into consideration.

The Investment Climate

It is a great thing to know that a lot of brands nowadays are looking for green projects; we can see that Toyota® and Nissan® are already heading for this new market. Electric and hybrid cars are already doing a great job worldwide. Capital is controlled not only by customer needs or near-term profit; smart business people look for long-term profit, and only sustainable products have the ability to deliver that. Politics can also do magic. It is obvious that most conventional finite resources are located in the Middle East area. Capital doesn't like wars; getting away from this area of the world is now possible, and investors want to make sure that they don't have to go back there.

Aesthetics and internal design combination

The second factor is the look; we have to agree that it has a great effect (of course the price is the most important factor). The designers can do great work to produce a good-looking product, but our job is to make sure that this "magic" can be done. To make this clear, let's take a journey to Korea. Two years ago, Hyundai® had a vision: "New thinking, new possibilities". Back then, Hyundai was just a normal automotive company making economic cars and some heavy-weight industrial products. The company CEO decided to change the way it worked. He asked for whole new designs, not only from the designers but also together with the engineers.
The look of the Hyundai was acceptable enough to sell the car; see Fig. 1, the Hyundai Coupe. It is good-looking enough to achieve good sales numbers, but the new vision did not mean just selling. Now the journey can start: the designers went wild by themselves, then later met the engineers to get back to reality, as most of their sweet dreams ran into the aerodynamics laws and drag power requirements. The designers then went back to do their homework, and later the teams met for the final concept drawing (Fig. 2). The concept was then turned into outline dimensions, went back to the designers to make sure these dimensions would not ruin the look (Fig. 3), and then went back to the engineers to build a 3D model (Fig. 4); and there it is, the body shape. Now, talking about our product, we have to combine the look and the engineering concepts. The effect of this is great: we could see the change that happened in the Hyundai brand during the last two years from applying some aerodynamics and exterior design principles.

Introduction

An electric vehicle (EV), also referred to as an electric drive vehicle, uses one or more electric motors or traction motors for propulsion. Three main types of electric vehicles exist: those that are directly powered from an external power source, those that are powered by stored electricity originally from an external power source, and those that are powered by an on-board electrical generator, such as an internal combustion engine (a hybrid electric vehicle) or a hydrogen fuel cell.

History

Electric motive power started with a small model car operated by a miniature electric motor, built by Thomas Davenport in 1835. In 1838, a Scotsman named Robert Davidson built an electric locomotive that attained a speed of four miles per hour (6 km/h). In England, a patent was granted in 1840 for the use of rails as conductors of electric current, and similar American patents were issued to Lilley and Colten in 1847. Between 1832 and 1839, Robert Anderson of Scotland invented the first crude electric carriage, powered by non-rechargeable primary cells.

By the 20th century, electric cars and rail transport were commonplace, with commercial electric automobiles holding the majority of the market. Over time their general-purpose commercial use was reduced to specialist roles, such as platform trucks, forklift trucks, ambulances, tow tractors, and urban delivery vehicles, such as the iconic British milk float; for most of the 20th century, the UK was the world's largest user of electric road vehicles. Electrified trains were used for coal transport, as the motors did not use precious oxygen in the mines. Switzerland's lack of natural fossil resources forced the rapid electrification of its rail network. One of the earliest rechargeable batteries, the nickel-iron battery, was favored by Edison for use in electric cars.

EVs were among the earliest automobiles, and before the preeminence of light, powerful internal combustion engines, electric automobiles held many vehicle land speed and distance records in the early 1900s. They were produced by Baker Electric, Columbia Electric, Detroit Electric, and others, and at one point in history out-sold gasoline-powered vehicles. In fact, in 1900, 28 percent of the cars on the road in the USA were electric. EVs were so popular that even President Woodrow Wilson and his secret service agents toured Washington DC in their Milburn Electrics, which covered 60-70 miles per charge.
In the 1930s, National City Lines, a partnership of General Motors, Firestone, and Standard Oil of California, purchased many electric tram networks across the country to dismantle them and replace them with GM buses. The partnership was convicted of conspiring to monopolize the sale of equipment and supplies to its subsidiary companies, but was acquitted of conspiring to monopolize the provision of transportation services. Electric tram line technologies could be used to recharge BEVs and PHEVs on the highway while the user drives, providing virtually unrestricted driving range. The technology is old and well established.

Table 2 gives the full description and explanation for EV, Pure-EV, PHEV, E-REV, and Hybrid:
- EV (Electric Vehicle): in short, any vehicle that can be plugged in.
- Pure-EV / Pure-Electric Car (Pure-Electric Vehicle): a vehicle powered solely by a plug-in battery.
- PHEV (Plug-in Hybrid Electric Vehicle): a vehicle with a plug-in battery and an internal combustion engine (ICE). Typical PHEVs will have a pure-electric range of over 10 miles. After the pure-electric range is utilized, the vehicle reverts to the benefits of full hybrid capability (utilizing both battery power and ICE) without range compromise.
- E-REV (Extended-Range Electric Vehicle; alternative descriptions: Range Extended Electric Vehicle (RE-EV), series hybrid): a vehicle powered by a battery with an ICE-powered generator on board. E-REVs are like pure-EVs but with a shorter battery range of around 40 miles. Range is extended by an on-board generator providing many additional miles of mobility. With an E-REV the vehicle is still always electrically driven.
- Hybrid (alternative descriptions: Hybrid Electric Vehicle (HEV), normal hybrid, parallel hybrid, standard hybrid): a vehicle powered by either or both a battery and an ICE. The power source is selected automatically by the vehicle, depending on speed, engine load, and battery charge level. This battery cannot be plugged in; charge is maintained by regenerative braking supplemented by ICE-generated power. A number of fuels can power hybrid ICEs, including petrol, diesel, compressed natural gas, liquid petroleum gas, and other alternative fuels.

Electric vehicle performance

The term 'electric vehicle' (EV) refers to any vehicle powered, in part or in full, by a battery that can be directly plugged into the mains. Performance will depend on the type of EV. All pure-electric cars qualifying for the Plug-In Car Grant must be able to travel at least 70 miles on a single charge, and many are capable of 100 miles. Plug-in hybrid cars qualifying for the Plug-In Car Grant must be able to travel in excess of 10 miles on battery power, although many are able to travel further, before reverting to the benefits of full hybrid capability (utilizing both battery power and ICE) without range compromise. Extended-range electric cars qualifying for the Plug-In Car Grant must meet the requirements relating to plug-in hybrids, but are typically able to travel in excess of 40 miles on battery power, with hundreds of miles of additional range via the on-board generator. The average individual journey length is about 13 kilometers and the average total daily distance travelled is 40 kilometers. These distances can be comfortably achieved using pure-electric cars, and many journeys can be made with plug-in hybrid or extended-range electric cars using only battery power.

Vehicle experience, range, speed, suitability

3.5.1. What are EVs like to drive?

EVs are easy and fun to drive.
Smooth, swift acceleration and light handling make the driving experience very enjoyable. Also, electric motors are very quiet, which means the driver is in a quiet, calm environment. Finally, similar to automatic cars, there is no gearbox in a pure-EV, which is particularly useful in built-up areas or heavy traffic. Electric cars require the same driving license as traditional cars, and pure-electric cars can be driven on an automatic-only driving license.

What are the benefits of EVs?

Electricity is one of a number of options with great potential as an alternative to oil. It can be produced from sustainable sources, it can be readily supplied, and it produces no emissions at the point of use. This means EVs can offer significant environmental benefits when used as urban commuter transport. Here are some of the benefits of EVs when operating solely on battery power:
- no emissions at the point of use
- a quiet driving experience
- fun to drive
- easy-to-use infrastructure
- practical and easy to drive, particularly in urban stop-start traffic
- home charging is convenient and avoids queuing at petrol stations

3.5.3. What is the top speed and acceleration of an EV?

Electric vehicle specifications indicate that EVs are able to achieve similar speeds to their ICE counterparts during everyday driving. All qualifying EVs must be capable of reaching speeds of 60 mph or more. Some pure-electric cars can reach speeds up to 125 mph where permitted. Power is delivered by the electric motor as soon as the vehicle begins to move, which gives smooth and swift acceleration.

Does an EV have adequate range for all my needs?

Range depends on the type of EV and how it is driven. Currently, most pure-electric cars offer a range of up to 100 miles and are ideal for short to medium length journeys. If you are likely to be regularly driving short- to medium-range journeys as well as trips over 100 miles, then an E-REV, PHEV, or alternative-fuel/low-carbon ICE may be more suitable. The average individual journey length is 8.6 miles and the average total daily distance travelled is 25 miles. In Europe, more than 80% of Europeans drive less than 63 miles in a typical day. This shows that a significant number of journeys could easily be made using an EV.

Will EVs suit everyone?

Not all vehicles in the market are suitable for all drivers, but EVs match the transport needs of a great proportion of the population very well. The intended use will determine what type of EV is most suitable. Manufacturers are introducing more car models, which will satisfy the demand for vehicles of different size and capacity. While the majority of EVs on the market are likely to be city-sized vehicles, research is also being carried out on cars in the super-luxury market. Until recently, pure-electric cars have been used mainly in commercial and urban environments.

Charging

3.6.1. How long does it take to charge an EV?

How long it takes to charge an EV depends on the type of vehicle, how depleted the battery is, and the type of charge point used. Typically, pure-electric cars using standard charging will take between six and eight hours to charge fully, and can be 'opportunity charged' whenever possible to keep the battery topped up. Pure-EVs capable of using rapid charge points could be fully charged in around 30 minutes and can be 'topped up' in around 20 minutes, depending on the type of charge point and the available power. PHEVs take approximately one and a half hours to charge from a standard electricity supply.
E-REVs take approximately four hours to charge from a standard electricity supply. PHEVs and E-REVs require less time to charge, as their batteries are smaller.

Why does standard charging take this long?

Charging a battery is not the same process as replacing fuel in a tank. Current battery technology means that it takes longer to charge an EV than it would to refuel a conventional car with petrol or diesel. However, if you have access to off-street parking at home, the process of charging is potentially very simple: you just plug in your EV when you get home and leave it to charge.

Batteries

3.7.1. How long will the battery last in my EV?

Battery manufacturers usually consider the end of life for a battery to be when its capacity drops to 80% of its rated capacity. This means that if your original battery has a range of 100 miles on a full charge, after eight to 10 years (depending on how much the vehicle has been driven) this may have reduced to 80 miles. However, batteries can still deliver usable power below 80% charge capacity, although this will produce a shorter range. Whether you want to exchange the battery at that stage for a newer one will partly depend on your driving habits. A number of vehicle manufacturers have designed the battery to last the lifetime of the car.

What is the cost of a replacement battery?

That depends on the size and type of the battery, which are determined partly by the vehicle. Batteries are relatively expensive at the moment, but it is likely that prices will come down as technology improves and volumes increase.

Servicing, repair and breakdown

3.8.1. Where will I be able to get an EV repaired or serviced?

Manufacturers will ensure that service technicians are provided with detailed service instructions and training, just as they do for ICE vehicles. In addition, industry training programs are being developed to ensure dealers, technicians, manufacturing staff, emergency services, and breakdown assistance staff can become qualified to handle EVs.

3.8.2. What will it cost to service a pure-EV?

There are fewer moving parts in a pure-EV, which should reduce servicing costs and downtime. When the pure-EV does require servicing, it will be similar to an ICE service: although the power train is different, many of the service actions for pure-EVs are similar to those for ICEs.

What warranty can I expect?

The warranty of an EV will be in line with current warranties on ICE vehicles. All manufacturers who utilize the Plug-In Car Grant must offer a minimum three-year battery warranty on the car as standard, as well as an option for the consumer to purchase a further two-year warranty extension.

Do EVs work in cold weather?

Yes. As with any newly developed vehicle, manufacturers have carried out extensive testing in extreme weather conditions. In addition, there have been a number of 'real life' trials of EVs since 2009. During February 2010, everyday users drove their EVs in the worst winter weather conditions for 30 years. The range of EVs may be affected by cold weather; the use of heating and other items is likely to increase the load on the vehicle system and reduce the range, particularly of pure-EVs. Control systems can be used in EVs to minimize the amount of energy used by additional items, such as air conditioning and heating. Finally, it is worth knowing that EVs don't need a warm-up period like many conventional ICE vehicles in the winter.
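As a back-of-the-envelope check on these charging times (the battery capacities and charger powers below are illustrative assumptions, not figures from this report), charge time scales roughly as capacity divided by charger power:

```python
def charge_hours(battery_kwh: float, charger_kw: float, efficiency: float = 0.9) -> float:
    """Rough full-charge time in hours, ignoring tapering near 100%."""
    return battery_kwh / (charger_kw * efficiency)

# A ~24 kWh pure-EV pack on a 3.7 kW (16 A, 230 V) home supply:
print(round(charge_hours(24, 3.7), 1))   # ~7.2 h, within the 6-8 h quoted
# The same pack on a 50 kW rapid charger:
print(round(charge_hours(24, 50) * 60))  # ~32 min, near the ~30 min quoted
```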
The selection of the car
In this section we choose the car on which we will make the improvements. From the previous section we knew that the selection would be a hybrid car, as it allows us ample room for development. From the hybrid cars we selected the Ultra-Commuter, which leads to the next question, "Why the Ultra-Commuter?"; but before that we must be introduced to "What is the Ultra-Commuter?"

The University of Queensland Ultra-Commuter project is the demonstration of an ultra-lightweight, low drag, energy efficient and low polluting electric commuter vehicle equipped with a 2.2 m² onboard solar array. A key goal of the project is to make the vehicle predominantly self-sufficient from solar power for normal driving purposes, so that it does not require charging or refueling from off-board sources. This paper examines the technical feasibility of the solar-powered commuter vehicle concept as it applies to the Ultra-Commuter project. A parametric description of a solar-powered commuter vehicle is presented. Real solar insolation data is then used to predict the solar driving range for the Ultra-Commuter, and this is compared to typical urban usage patterns for commuter vehicles in Queensland. A comparative analysis of annual greenhouse gas emissions from the vehicle is also presented. The results show that the Ultra-Commuter's on-board solar array can provide substantial supplementation of the energy required for normal driving, powering 87% of annual travel needs for an average Queensland passenger vehicle. The vehicle also has excellent potential to reduce annual greenhouse gas emissions from the private transport sector, achieving a 97% reduction in CO2 emissions when compared to the average Queensland passenger vehicle. Lastly, the vehicle battery pack provides tolerance to consecutive days of poor weather without resorting to grid charging, giving uninterrupted functionality to the user. These results hold great promise for the technical feasibility of the solar-powered commuter vehicle concept and the Ultra-Commuter project.

Why the Ultra-Commuter
The Ultra-Commuter was selected for the following reasons:
1. It is based on academic research, so information could be collected easily.
2. It achieved one of the best results in drag force tests.
3. There is large scope for improvements.
4. It looks good and can fill the market needs.
5. The cost is reasonable: according to the study, its price could be under $20,000.

What are the technical specifications of the Ultra-Commuter
Figure 3 shows the appearance of the car.

Improvement Ideas
The Ultra-Commuter uses 2.7 L of gas every 100 km. As mentioned in the introduction, we aim to reduce this 2.7 L, or even eliminate it, by relying on renewable energy. In this section we present some of the ideas that might help us do so. These ideas will be followed by calculations to make sure that the improvements are applicable and will compensate for the fuel usage:
1. Better solar cell efficiency
2. Enlarging the power bank
3. Lowering the speed (using motors with lower power consumption)
4. Using wind energy
5. Lighter weight
6. Using the surrounding environment (piezo material)
7. Expected problems

Better solar cell efficiency
As mentioned in Section 3, the Ultra-Commuter has a 2.2 m² solar cell array with an efficiency of 12.6%. Using better solar cells might raise the efficiency to 15.6%.
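To gauge what that efficiency gain is worth, a minimal sketch in Python, using the 6.7 kWh/day/m² average insolation cited later in the paper for Egypt (losses from wiring, temperature and panel orientation are ignored):

```python
AREA_M2 = 2.2          # on-board array area from the paper
INSOLATION = 6.7       # kWh/day/m^2, Egyptian average cited in Section 6

def daily_yield_kwh(efficiency: float) -> float:
    """Energy captured per day by the roof array, ignoring system losses."""
    return AREA_M2 * INSOLATION * efficiency

old = daily_yield_kwh(0.126)   # original cells
new = daily_yield_kwh(0.156)   # proposed mono-crystalline cells
print(f"old: {old:.2f} kWh/day, new: {new:.2f} kWh/day, gain: {new - old:.2f} kWh/day")
```

Under these assumptions the better cells add roughly 0.44 kWh per day of captured energy.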
The type of the new solar cells
We would recommend using mono-crystalline cells because:
1. Mono-crystalline solar panels have the highest efficiency rates, since they are made out of the highest-grade silicon.
2. Mono-crystalline silicon solar panels are space-efficient. Since these panels yield the highest power outputs, they also require the least amount of space compared to any other type. Mono-crystalline solar panels produce up to four times the amount of electricity of thin-film solar panels.
3. Mono-crystalline solar panels live the longest. Most solar panel manufacturers put a 25-year warranty on their mono-crystalline panels.
4. They tend to perform better than similarly rated polycrystalline solar panels in low-light conditions.

Maintenance
The way to raise the efficiency of the solar cell is to keep it clean and in a good position. In our application we can add to the scheduled (daily) maintenance that the vehicle owner should keep the array clean. One of the best solutions is also to add a sun-tracking system so that the array is always positioned at the best angle facing the sun. Good maintenance preserves the array efficiency; even simple maintenance has a big impact.

Egypt conditions
Allah has given Egypt very good resources, and one of them is the sun. The Ultra-Commuter was launched in Europe, and all of its published figures were calculated according to the conditions there; the difference in solar radiation would make a great impact on fuel usage.

Enlarging the battery bank
The Ultra-Commuter already has a good battery bank, but removing the fuel storage and the internal combustion engine would allow us to enlarge it. The Ultra-Commuter already carries 75 kg of Li-ion batteries (100 cells). We recommend using the freed weight to add new batteries that might help extend the range. The calculation section will give a better view.

Lowering the power consumption
The Ultra-Commuter uses two motors of 75 kW each, as mentioned before. The Ultra-Commuter has some specifications that might not be needed in a car moving around Cairo: it has too much torque and can reach very high speed, which is good but not needed here. It is also noticeable that the car has no transmission: there is no gearbox, as the motors are powerful enough to move the lightweight car. In the next section we will calculate whether we can use another motor that consumes less power.

Using wind energy
One of the most important factors that can be an obstacle while designing a car is air resistance. Using the wind in our favor might be a good point, as it might help generate the needed power. The aerodynamics of the car might help by providing some cavities to which we can add wind turbines to generate some electrical power.

Lighter weight
Weight is a very important factor; the mass of the car is around 600 kg (more details are provided in Section 3). So while choosing the best improvement option, the weight factor should be considered one of the important factors; also, removing all unnecessary weight makes it easier to move the car and consume less power. Weight might be saved by removing the 120 kg internal combustion engine and about 36 liters of fuel.
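As a first-order check before the detailed calculations, the extra battery capacity available from the freed mass can be estimated. A minimal sketch; the gasoline density and Li-ion specific energy are our assumptions, not figures from this paper:

```python
ENGINE_MASS_KG = 120      # removed internal combustion engine (from the paper)
FUEL_L = 36               # removed fuel volume (from the paper)
FUEL_DENSITY = 0.75       # kg/L, assumed gasoline density
SPECIFIC_ENERGY = 0.15    # kWh/kg, assumed Li-ion pack specific energy

freed_kg = ENGINE_MASS_KG + FUEL_L * FUEL_DENSITY
extra_kwh = freed_kg * SPECIFIC_ENERGY
print(f"freed mass: {freed_kg:.0f} kg -> roughly {extra_kwh:.0f} kWh of extra battery")
```

Under these assumptions, about 147 kg is freed, worth roughly 22 kWh of additional pack capacity at unchanged total vehicle mass.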
Detailed calculations will be provided in the next section.

Using the surroundings
One of the ideas was using the surrounding environment, for example piezo material, or putting some solar cells in the back lights to make use of other cars' lights. The piezo material might also be placed under the seat or under the gas pedal to provide some electrical power. This was considered to give very little power, but we thought about it. More details will be provided in the next section.

Stability of the car
The improvements should be equally distributed in the body of the car, as the stability of the car might otherwise be affected.

Aerodynamics
Some of the suggested improvements require adding movable parts, such as moving the array to track the sun.

Safety of the car
Some of the improvements, such as changes to the weight of the car, might affect its safety. The safety of the car must be considered.

Solutions
Strict calculations should be provided, and simulation of the improvements should be available. We used SolidWorks® software for the aerodynamics, stability and weight calculations, and Fluent® software for the calculations of the solar cells.

Validity and simulations
This section is named validity and simulation because it is about making sure that the improvements are valid and applicable; the simulation is also very important to us. We know that the car already uses 2.7 L of fuel every 100 km, and all of the calculations are made at an average speed of 50 km/h without repeated stops. In this section we will not calculate all the power consumption; we will just calculate the power that compensates for the 2.7 L of fuel.

The motor
The Ultra-Commuter uses two DC motors of 75 kW each, with a torque of 500 N·m each and a maximum speed of 1500 rpm each. The Ultra-Commuter is capable of 150 km/h, but the needed speed isn't 150 km/h; we just need an average speed of 50 km/h, so we recommend changing the motor. We need a new motor that can provide the same torque. We might sacrifice revolutions per minute, which technically means the maximum speed the car can achieve, as we are designing the car to be economical, not for sports use. We might also use the same approach as gas cars (one motor to move the two wheels); we could reduce the power and the weight, which would save a lot, but re-designing the car would require changing its shape to remove the cavities for the two motors and have one cavity for one motor. That would change the aerodynamics, which is outside the scope of this research. So we reduce each motor to 50 kW with the same torque and lower rpm. The mass difference is Δm = m_old motor − m_new motor = 4 kg. This amount is negligible, so we keep the mass at its old value.

The solar cells
As mentioned before, the usage of solar cells can be improved in several ways.

6.4.1. Solar tracking
One way to get higher efficiency from the solar cells is to track the sun. Figure 3 shows this: the tracking mechanism moves the solar panel to face the rays of the sun at a particular angle. A solar tracker can also be defined as a device that orients various payloads toward the sun; payloads can be photovoltaic panels, reflectors, lenses or other optical devices. But since this is a moving car, we need to make sure that tracking won't increase the air resistance of the car, and that it won't affect the stability of the car.
That is verified through the simulation. As the aerodynamics analysis shows, the resistance at the top is very high, so moving the panel would generate massive resistance for the car; even though it would increase the efficiency, it would require far more power to move the car. The drawing was made with SolidWorks® software.

Egypt conditions and the efficiency of the solar cells
The car was designed primarily for Europe and America, but there are differences in radiation rates between Egypt (Africa) and Europe. Egypt receives more solar radiation, which might allow us to save more energy; however, since the car's published test figures reflect only the solar energy gained there, we calculate the energy generated here and the energy generated there, and the difference between the two values gives the power saved by running the car in Egypt rather than in Europe. First of all, we should know the solar radiation both in Egypt and in Europe. From the Egyptian solar radiation map, the solar radiation averages 6.7 kWh/day/m². The array output is then P = I × A × η, where P is the power provided by the array, I the solar insolation, A the array area and η the cell efficiency.

Figure 6. Aerodynamic test result

Wind energy
By reviewing Figure 4 (the aerodynamics analysis), we find that the best spots to place the wind turbines are positions 1 and 2.

6.5.1. Choosing the wind turbine
As we have already removed the internal combustion engine, we have some margin to add weight, but we are quite limited by the space afforded by the design.

6.5.2. The aerodynamics analysis and how the drag power is affected
The SolidWorks® simulation shows that we have about 150 cm of free space, as shown in Fig. 5.5. We can fit two standard wind turbines, each of which can provide 1 kWh per day. This wind turbine has a safety lock that shuts it down above a certain velocity, and it only starts working above a minimum velocity (which helps avoid requiring extra drag force for the car to reach the same velocity). The SolidWorks® simulation showed that adding the two objects (modelled as generic bodies, not as wind turbines) in the front and rear cavities would increase the Cd to 0.33. That means we would need an extra 0.5 kWh from the motor to retain the same velocity (note that a standard wind turbine generates 1 kWh per day as its net value). With two turbines generating 2 kWh per day and the added drag costing 0.5 kWh, the net gain is about 1.5 kWh per day. This is still below the original mass of the car. The new battery pack will provide a peak power of 60 kW, but it still needs to be recharged before taking off.

Final remarks
From the mathematical calculations and the simulation software, we can say that the reliability of our calculations is about 40%; most of our results were taken at intermediate conditions. Facing a cloudy day, or a day without any wind, would pose a great problem for the driver. It is obvious that conditions deeply affect any renewable power resource. That leads us to three solutions, which we can compare:

6.7.1. Keeping the I.C.E.
No matter what happens, an internal combustion engine with a volume of 800 cc will never be a bad choice. It has some advantages:
- Very high reliability
- Average noise level
- Not very high fixed cost

6.7.2. Replacing the I.C.E. with a generator
Since we have electric motors, what about using a generator to supply power to them? By consulting some electrical engineers, and according to the free mass we have,
we can have a generator with a peak power of 70 kW that can provide 30 kW with only 0.8 L of fuel. The advantages:
- Good reliability
- Low emissions
- Low running cost

6.7.3. Adding more battery bank and providing recharging stations
The parking lot might let us use solar power, both by providing a cool place to park the car and by generating electrical power that could recharge the car while it is parked. The advantages:
- Zero running cost
- No noise at all
- Totally green

From the comparison above we can see that we can't get everything; we have to prioritize what we need, and according to the purpose of this research we need:
1. A renewable and green source of energy
2. Good reliability
3. Not very high cost

So we select the second choice, which can provide a green and renewable source of energy.

Conclusions
- We can lose some speed and performance in order to protect our environment.
- A semi-green car can give us the benefit of green energy with moderate running costs.
- Third World countries should work to benefit from their natural resources in order to improve their economies.
- Renewable energy will never run out. Other sources of energy are finite and will someday be depleted.
Redescription, molecular characterisation and Wolbachia endosymbionts of Mansonella (Tupainema) dunni (Mullin & Orihel, 1972) (Spirurida: Onchocercidae) from the common treeshrew Tupaia glis Diard & Duvaucel (Mammalia: Scandentia) in Peninsular Malaysia

The genus Mansonella Faust, 1929 includes 29 species, mainly parasites of platyrrhine monkeys in South America and anthropoid apes in Africa. In Malaysia, Mansonella (Tupainema) dunni (Mullin & Orihel, 1972) was described from the common treeshrew Tupaia glis Diard & Duvaucel (Scandentia). In a recent classification of the genus Mansonella, seven subgenera were proposed, with M. (Tup.) dunni as a monotypic species in the subgenus Tupainema. In this study, we collected new material of M. (Tup.) dunni from common treeshrews in Peninsular Malaysia and redescribed the morphological features of this species. We found that M. (Tup.) dunni differs from M. (Cutifilaria) perforata Uni et al., 2004 from sika deer Cervus nippon (Cetartiodactyla) in Japan, with regards to morphological features and predilection sites in their respective hosts. Based on multi-locus sequence analyses, we examined the molecular phylogeny of M. (Tup.) dunni and its Wolbachia genotype. Species of the genus Mansonella grouped monophyletically in clade ONC5 and M. (Tup.) dunni was placed in the most derived position within this genus. Mansonella (Tup.) dunni was closely related to M. (M.) ozzardi (Manson, 1897) from humans in Central and South America, and most distant from M. (C.) perforata. The calculated p-distances between the cox1 gene sequences for M. (Tup.) dunni and its congeners were 13.09% for M. (M.) ozzardi and 15.6–16.15% for M. (C.) perforata. The molecular phylogeny of Mansonella spp. thus corroborates their morphological differences. We determined that M. (Tup.) dunni harbours Wolbachia endosymbionts of the supergroup F genotype, in keeping with all other Mansonella species screened to date.
In this study, we examined the morphological features of M. (Tup.) dunni from common treeshrews in Peninsular Malaysia in detail and compared them to those of other subgenera within the genus. Based on multi-locus sequence analyses of seven genes (12S rDNA, cox1, rbp1, hsp70, myoHC, 18S rDNA and 28S rDNA), we determined its phylogenetic position within Mansonella and in relation to other members of the Onchocercidae. In addition, we identified the Wolbachia endosymbiont genotype of M. (Tup.) dunni using molecular analyses and compared it with those of other filariae. Adult filariae were collected from the subcutaneous connective tissues of the common treeshrews under a stereomicroscope and used for subsequent morphological and molecular analyses; thick blood smears were also examined.

Morphological methods
Isolated worms were fixed in 70% ethanol and temporarily mounted in lactophenol solution (R & M Chemicals, Essex, UK) for morphological examination under a compound microscope equipped with differential interference contrast. For each worm, we recorded body length and width, distance between the anterior extremity and nerve-ring, distance between the anterior extremity and vulva, and length of oesophagus, spicules and tail. We also recorded the length and width of microfilariae taken from the uteri of fixed adult females. Measurements of a representative female (J3 from Johor) and male (J4 from Johor) of M. (Tup.) dunni are presented first, followed by the range, including the representative specimens, in parentheses, and the mean in brackets. Measurements are in micrometres unless otherwise stated. The mid-region of a fixed female was embedded in paraffin, and sections were stained with haematoxylin and eosin (HE). Thick blood smears were stained with 3% Giemsa solution (pH 7.4) and examined for microfilariae under a compound microscope.

Additional molecular analyses
We determined partial sequences of the mitochondrial cox1 and 12S rRNA genes of four females (ID nos. M1 and M2 from Johor; G3 and G4 from Selangor). DNA extraction, PCR amplification and sequencing were performed as described previously (Casiraghi et al., 2001; Agatsuma et al., 2005; Uni et al., 2017). We also cloned the PCR products of the ITS1 region into pGEM-T vectors and determined the sequences of the recombinant plasmid (Saijuntha et al., 2018). Phylogenetic trees of the nucleotide sequences of the cox1 and 12S rRNA genes and the ITS1 region of M. (Tup.) dunni were constructed using the maximum-likelihood method in MEGA7 with 1000 bootstrap replicates (Kumar et al., 2016). Sequence data were deposited in the GenBank database. The lengths of the sequence datasets used for the analyses were as follows: cox1, 393 bp; 12S rDNA, 304 bp; and ITS1, 866 bp.

Genetic distances between filarial species
Cox1 sequences generated during the present study and those from the literature were analysed. First, the cox1 sequence divergence was estimated by the number of base differences per site between two sequences (uncorrected p-distance) using MEGA7 (Kumar et al., 2016). Subsequently, pairwise comparisons between the selected 17 cox1 sequences were processed, with each sequence representing a species.
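For readers who wish to reproduce the distance calculation outside MEGA7, a minimal sketch of the uncorrected p-distance in plain Python (the sequence strings below are placeholders, not the study's cox1 data):

```python
def p_distance(seq1: str, seq2: str) -> float:
    """Uncorrected p-distance: proportion of differing sites between two
    aligned sequences, skipping gap or ambiguous positions."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    valid = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    diffs = sum(a != b for a, b in valid)
    return diffs / len(valid)

# Hypothetical aligned cox1 fragments, for illustration only
print(p_distance("ATGACCGTA", "ATGACTGTC"))  # -> 0.222...
```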
Phylogenetic analyses of Wolbachia endosymbionts
Sequences generated during the present study and previously published from draft/complete genomes were aligned for each gene using SATe-II (Liu et al., 2012) and subsequently concatenated. The complete dataset comprised 53 Wolbachia genotypes and had a length of 2480 bp. For each gene, the best-fitting substitution model was determined using the corrected version of the Akaike Information Criterion (AICc) in jModelTest analyses (v2.1.10) (Guindon and Gascuel, 2003). HKY+I was the best fit for ftsZ; HKY+I+Γ for 16S rDNA; K81uf+Γ for dnaA; TPM3uf for coxA; and TPM3uf+Γ for fbpA and gatB. The phylogenetic relationships of the Wolbachia genotypes based on the concatenation of these six genes were inferred by the maximum-likelihood method using RAxML-NG (Kozlov et al., 2019). The dataset was partitioned to implement the best-fitting substitution model for each gene. The program was executed by generating 10 random start trees and using 1000 bootstrap replicates.

Mansonella (Tupainema) dunni (Mullin & Orihel, 1972) Eberhard & Orihel, 1984

3.1.1. Taxonomic summary
Voucher material deposited: One female (MNHN-IN-110YT: UG2 from Ulu Gombak, Selangor) and one male (MNHN-IN-111YT: UG4 from Ulu Gombak, Selangor) were deposited in the Museum National d'Histoire Naturelle (MNHN), Paris, France. Additional specimens (7 females, MdF-1-7; 3 males, MdM-1-3) were deposited in the Museum of Zoology, Institute of Biological Sciences, Universiti Malaya.

Redescription of Mansonella (Tupainema) dunni
Site in host: Adult worms occurred in the subcutaneous connective tissues of the neck and abdomen and microfilariae circulated in the blood of the common treeshrews.

Morphological features of present specimens
General [Figs. 1 and 2]. Pre-oesophageal cuticular ring absent. Four external labial papillae and four cephalic papillae arranged in laterally elongated rectangle. Amphids lateral, approximately on level of external labial papillae (Figs. 1A and 2A). Anterior end slightly dilated, narrowing to conspicuous hemispherical prominence at apex in both sexes. Oesophagus not divided into muscular and glandular portions. Annular body swellings with coelomocytes in anterior half of body in both sexes. Caudal end of female with two lateral lappets and two terminal cones. In male, caudal papillae arranged in two ventrolateral groups near cloaca. Spicules unequal and dissimilar. Microfilaria without sheath.

Prevalence and intensity of filariae
Adults of M. (Tup.) dunni were found in 10 out of 45 (22.2%) common treeshrews collected from five research areas in Peninsular Malaysia. Microfilariae of M. (Tup.) dunni were present in the blood of 13 of these animals (28.9%). Specimens of M. (Tup.) dunni (79 adults, of which 13 were males) were obtained from the subcutaneous connective tissues of the neck and abdomen below the axillae. Intensity of infection ranged from 1 to 20, with a mean of 7 worms per infected host.

Remarks
We compared the morphological features of the present specimens with descriptions of M. (Tup.) dunni
by previous authors (Mullin and Orihel, 1972; Eberhard and Orihel, 1984; Bain et al., 2015) and noticed some differences concerning the position of the vulva and features of the area rugosa. In the original description by Mullin and Orihel (1972), the vulva was stated to be located near the base of the oesophagus, but the metrics of the holotype were not recorded. However, the range and mean of the distance of the vulva from the anterior extremity were shorter than the length of the oesophagus. Subsequently, Eberhard and Orihel (1984) gave the position of the vulva of M. (Tup.) dunni as located at or posterior to the base of the oesophagus. In contrast, in the key to the subgenera of Mansonella presented by Bain et al. (2015), Tupainema is placed in the group of subgenera in which the vulva is located anterior to or in the region of the oesophago-intestinal junction. More specifically, the diagnosis for Tupainema gives the position of the vulva as at the level of or just posterior to the oesophago-intestinal junction (Bain et al., 2015). In the present study, the vulva was situated posterior to the junction in many of the longer females, but anterior to it in the shorter ones. We therefore suggest defining the subgenus Tupainema as possessing a vulva that is predominantly located posterior to the oesophago-intestinal junction.

In the subgenus Cutifilaria, which comprises the two species M. (C.) perforata and M. (C.) wenki, the vulva is situated markedly posterior to the oesophago-intestinal junction (Bain and Schulz-Key, 1974; Uni et al., 2004), and in the subgenus Pseudolitomosa at the junction (Yamaguti, 1941). In the remaining subgenera Mansonella, Esslingeria and Tetrapetalonema the vulva is situated at the mid-level of the oesophagus (Eberhard and Orihel, 1984; Bain et al., 2015).

In the present specimens, the area rugosa is composed of a single row of tiny cuticular bosses (Fig. 2D). Bain et al. (2015) described the area rugosa in M. (Tup.) dunni as composed of very short longitudinal cuticular crests and used these cuticular projections to differentiate subgenera in the key: pointed cuticular rugosities in Cutifilaria vs short longitudinal crests in Mansonella, Esslingeria, Pseudolitomosa, Tetrapetalonema and Tupainema; males of Filyamagutia are unknown. However, we found these projections to be very small and almost too subtle to examine in detail. Consequently, we suggest that the key to the subgenera of Mansonella be modified to reflect that the subgenus Cutifilaria possesses an area rugosa with transverse bands composed of irregularly disposed cuticular bosses, whereas the remaining subgenera, including the subgenus Tupainema, are characterized by an area rugosa with transverse bands composed of a single row of cuticular bosses or short longitudinal crests.

Generally speaking, the area rugosa is a variable feature in males of the Onchocercinae. It has been recorded in species of Mansonella and Cercopithifilaria, but not Onchocerca (Bain et al., 2015; Uni et al., 2001, 2004, 2007, 2020). Bain and Chabaud (1988) suggested that body swellings and the area rugosa assist males in holding the female during copulation.

Wolbachia phylogeny
Mansonella (Tup.) dunni harboured endosymbionts of the Wolbachia supergroup F genotype (Fig. 4). This is in keeping with other congeners that have been screened for Wolbachia to date. The Wolbachia genotype of M. (Tup.) dunni was closely related to that of M. (M.) ozzardi, but distant from that of M. (C.) perforata.
The supergroup F genotype includes endosymbionts of both filarial species and arthropods: the filariae being represented by species of Mansonella (Onchocercinae) and M. hiepei (Splendidofilariinae).

Discussion
According to Eberhard and Orihel (1984), the morphological features of Mansonella spp., including the absence of a pre-oesophageal cuticular ring, the weakly developed oesophagus, spicule morphology, absence of a gubernaculum, arrangement of pericloacal papillae and unsheathed microfilariae, suggest that Mansonella is a highly evolved genus within the Onchocercidae. Indeed, in the present molecular phylogeny, Mansonella spp. were positioned in the most derived group in clade ONC5 (Fig. 3). Comparing the six congeners in five subgenera that were available for our molecular analyses, we found certain morphological affinities between M. (Tup.) dunni and M. (M.) ozzardi. In addition to the generic characteristics mentioned above, the two species share the presence of annular body swellings, a female tail with two lateral lappets and divided axial points, and an area rugosa with transverse bands of very short cuticular crests in M. (M.) ozzardi and pointed bosses in M. (Tup.) dunni (Orihel and Eberhard, 1982; Eberhard and Orihel, 1984; Bain et al., 2015). The microfilariae of M. (Tup.) dunni and M. (M.) ozzardi are remarkably similar in that the tip of the tail is without a nucleus or nuclei. In contrast, species of the subgenera Tetrapetalonema, Esslingeria and Cutifilaria have microfilariae with nuclei extending to the tip of the tail (Eberhard and Orihel, 1984; Uni et al., 2004; Bain et al., 2015).

Fig. 4. Phylogenetic tree of Wolbachia endosymbionts based on six markers using maximum-likelihood inference. Analysis based on concatenation of 16S rDNA, ftsZ, dnaA, coxA, fbpA and gatB. The dataset was partitioned to implement the best-fitting substitution model for each gene. The robustness of nodes was assessed with 1000 bootstrap replicates. The Wolbachia supergroups (A-F, H and J) were identified according to Lefoulon et al. (2016) and Lo et al. (2002). The red triangle indicates the sequence generated in this study. The scale-bar indicates the distance in substitutions per nucleotide. Abbreviation: Wb, Wolbachia.

The predilection sites for adults and microfilariae are also similar in both species (Orihel and Eberhard, 1982; Eberhard and Orihel, 1984). However, the arrangement of the papillae on the cephalic plate and the position of the vulva differs between the two species. In addition, the male tail of M. (M.) ozzardi bears a cuticular flap; that of M. (Tup.) dunni has two subterminal lappets.

Mansonella (C.) perforata, on the other hand, differs from M. (Tup.) dunni in the following features: markedly posterior position of the vulva, absence of body swellings, male tail without lateral lappets, structure of the right spicule, transverse bands of the area rugosa consisting of irregularly arranged pointed rugosities and presence of nuclei at the bifid tail end of microfilariae (Uni et al., 2004; Bain et al., 2015). In addition, the predilection site (dermis) for both adults and microfilariae of M. (C.) perforata differs from that of M. (Tup.) dunni (subcutaneous tissues for adults and blood for microfilariae) (Uni et al., 2004). The morphological distinctness of M. (C.) perforata is corroborated by its basal position in the phylogenetic tree (Fig. 3), when compared to its congeners.
The presence or absence of a pre-oesophageal cuticular ring and the division of the oesophagus appear to be morphological features that are generally related to the molecular phylogeny of ONC5 (Fig. 3). In the species of Mansonella, a pre-oesophageal ring is absent and the oesophagus is poorly developed and undivided. In the next group, both M. hiepei and R. andersoni have no pre-oesophageal ring; the oesophagus is clearly divided in the former and indistinctly divided in the latter species (Lankester and Snider, 1982; Hering-Hagenbeck et al., 2000). In C. pavlovskyi, the pre-oesophageal ring is present and the oesophagus is undivided (Bartlett and Anderson, 1980); in its sister taxon A. alessandroi, the pre-oesophageal ring is also present, but the oesophagus is divided (see figure 1 of Bain et al., 1981). In the group of Foleyella and Pelecitus, the presence or absence of the pre-oesophageal ring and the division of the oesophagus varies with the species (Bartlett, 1986; Bartlett and Greiner, 1986). Finally, in Brugia spp., Wuchereria spp. and M. sofiani, both the pre-oesophageal ring and division of the oesophagus are present (Uni et al., 2017).

The genetic distance between sequences of the cox1 gene of M. (Tup.) dunni and other congeners was 13.09% for M. (M.) ozzardi and 15.6-16.15% for M. (C.) perforata. According to Ferri et al. (2009), filariae can be considered different species if the genetic distance based on the cox1 sequences is greater than 4.8%. In Onchocerca spp., cox1 interspecific distances are higher than 4.5% and intraspecific distances are lower than 2% (Lefoulon et al., 2017). Therefore, we consider the distance (0.42%) between the two specimens (M1 from Johor and G4 from Selangor) of M. (Tup.) dunni collected from different states of Malaysia intraspecific. It is noteworthy that the interspecific distances between Mansonella spp. were rather large in comparison with the generic distance (9.06%) between B. malayi and W. bancrofti, both placed in the first group of clade ONC5. Mansonella is a large genus with 29 species, while Brugia and Wuchereria are small genera with 11 and two species, respectively (Bain et al., 2014). Brugia spp. have close affinities to W. bancrofti, based on the morphological characteristics of both adults and infective larvae, their development, and transmission (Anderson, 2000). In the molecular phylogenetic analyses of filariae and their Wolbachia endosymbionts, Brugia species are also closely associated with W. bancrofti (Casiraghi et al., 2001). Ramesh et al. (2012) estimated that Brugia and Wuchereria diverged some 675,000 years ago; a relatively recent split in the superfamily Filarioidea according to McNulty et al. (2013). We speculate that Mansonella has a complex evolutionary history and attribute its large genetic divergence to its worldwide geographical distribution and broad host spectrum (11 families in five orders of Mammalia).

Recently, Poux et al. (2006) suggested a scenario for the arrival of primates and caviomorphs in South America by a trans-Atlantic migration from Africa at the end of the Eocene (< 45 Mya), followed by the radiation of extant platyrrhines during the Early Miocene (> 16 Mya). According to Bain (2002), M. (E.) perstans was introduced into South America by human migration from Africa. In contrast, M. (M.)
ozzardi originated from host switching of parasites of carnivores or sciurids in North America. The most ancestral form of the Mansonella lineage likely existed in the Asiatic region, and its hosts migrated towards Africa through the Arabian Peninsula. On the other hand, some of the hosts in Asia migrated towards North America via the Bering Strait (Bain, 2002). In North America, M. (M.) llewellyni (Price, 1962) was described from the raccoon Procyon lotor (L.) (Carnivora: Procyonidae) and M. (M.) interstitium (Price, 1962) from the gray squirrel Sciurus carolinensis Gmelin (Rodentia: Sciuridae) (Price, 1962). In Asia, M. (P.) musasabi, M. (F.) akitensis and M. (C.) perforata were found in the giant flying squirrel, the black bear and the sika deer, respectively, emphasizing the heterogeneous host spectrum and wide geographical distribution of Mansonella spp.

Considering the origin of treeshrews, the common ancestor diverged into Scandentia, Dermoptera and Primates during the Cretaceous (90 Mya), and the genus Tupaia Raffles, 1821 arose at the end of the Miocene (10 Mya) (Janečka et al., 2007). Phylogenetically, treeshrews (Scandentia) are considered more closely related to Primates than to Rodentia and Lagomorpha (Springer et al., 2004). Roberts et al. (2011) suggested that Miocene tectonic events, volcanism and geographical instability drove treeshrew diversification in Southeast Asia. Therefore, we speculate that treeshrews acquired an ancestral form of Mansonella through host switching of Mansonella forms in sciurids, carnivores or ruminants distributed in the Holarctic Region. Moreover, host specificity was not as strong during the Pleistocene (< 2.58 Mya) as originally claimed (Krueger et al., 2007). We further posit that host switching was facilitated by haematophagous arthropods serving as vectors for these filariae.

To date, no investigation concerning possible vectors of M. (Tup.) dunni has been carried out, although 62 species of Simulium and 108 species of Culicoides have been reported from Peninsular Malaysia (Wirth and Hubert, 1989; Takaoka et al., 2018). Tidwell et al. (1980) established the S. sanguineum group as one of the main vectors of M. (M.) ozzardi in the Mitú region in Colombia. Orihel et al. (1981) harvested infective larvae of M. (M.) ozzardi from C. furens (Poey) collected in Haiti as well as from Simulium sp. (sanguineum group) collected in the Colombian Amazon and experimentally obtained adult worms from patas monkeys Erythrocebus patas (Schreber) (Primates: Cercopithecidae). In North America, Yates et al. (1982) obtained larval stages of M. (M.) llewellyni in the thoracic muscles of C. hollensis (Melander & Brues) fed on the blood of a raccoon infected with the filarial parasites.

Interestingly, M. (Tup.) dunni from common treeshrews held the most derived position among its congeners in our molecular analyses. Ultimately, finding the ancestral forms of the Mansonella lineage will necessitate molecular analysis of other related filariae, such as S. sunci from the Asian musk shrew Suncus murinus (L.) (Soricomorpha) in the Indomalayan realm (Hutterer, 2005; Morales-Hojas, 2009; Bain et al., 2015).

Regarding the phylogenetic relationships between Wolbachia supergroups and host species of the Onchocercidae, an ancestral absence of Wolbachia, horizontal acquisitions, secondary losses, and local coevolution with host filariae are all scenarios that have been suggested to date (Bain et al., 2008; Ferri et al., 2011; Lefoulon et al., 2012, 2016; Uni et al., 2020). In this study, M. (Tup.) dunni harboured Wolbachia of the supergroup F genotype. Lefoulon et al.
(2016) suggested an absence of global coevolution between filarial worms and their Wolbachia endosymbionts; while strong coevolution is found in the relationships between the supergroups C and J and their filarial hosts, weak coevolution is seen in the relationships between the supergroups D and F and their filarial hosts. In our study, the Wolbachia genotype of M. (Tup.) dunni was closely related to that of M. (M.) ozzardi, but distant from that of M. (C.) perforata (Fig. 4), reflecting the phylogenetic relationships of these congeners.

Conclusions
We redescribed the morphological features of M. (Tup.) dunni obtained from common treeshrews in Malaysia. Based on this, we suggested modifications to the key for the subgenera of Mansonella proposed by Bain et al. (2015) concerning the position of the vulva and the composition of the transverse bands of the area rugosa. Morphological analysis revealed that M. (Tup.) dunni shares certain morphological features with M. (M.) ozzardi but differs distinctly from M. (C.) perforata. Molecular analyses indicated that species of Mansonella constitute a monophyletic group in clade ONC5 of the Onchocercidae, and M. (Tup.) dunni is one of the most recently derived filariae. Mansonella (Tup.) dunni formed a sister clade to M. (M.) ozzardi from humans and was most distant from M. (C.) perforata from sika deer in the newly generated phylogenetic tree. Hence, we consider that the molecular phylogeny of Mansonella species corroborates their morphological differences. Wolbachia endosymbionts of the supergroup F genotype were detected in M. (Tup.) dunni. The Wolbachia genotype of M. (Tup.) dunni was closely related to that of M. (M.) ozzardi, but distant from that of M. (C.) perforata.

Fig. 3. Onchocercid clades based on partitioned concatenated datasets of 12S rDNA, cox1, rbp1, hsp70, myoHC, 18S rDNA and 28S rDNA sequences using maximum-likelihood inference. The total length of datasets is approximately 3695 bp. Fifty-seven onchocercid sequences (representing 55 species) were analysed. Filaria latala and Protospirura muricola were used as the outgroup. The topology was inferred using 1000 bootstrap replications. The onchocercid subfamilies are indicated by colour: blue for Onchocercinae, dark green for Dirofilariinae, purple for Splendidofilariinae, pale green for Setariinae, yellow for Waltonellinae, orange for Icosiellinae and red for Oswaldofilariinae. The red triangles indicate the sequences generated in this study. The scale-bar below the diagram indicates the number of inferred changes along each branch. Abbreviations: MNHN, sequences were analysed at the Museum National d'Histoire Naturelle, Paris. GEN, sequence data were obtained from GenBank.
THE INFLUENCE OF DISCIPLINE AND WORK MOTIVATION ON EMPLOYEE PERFORMANCE

This research focuses on PT Sejahtera PrimaPersada South Jakarta because performance is directly related to the abilities and skills of all human resources, which are the central brain of the company and help achieve the company's main goals. There are several performance problems at PT Sejahtera PrimaPersada South Jakarta, ranging from poor work results, which have fallen below standards, to workers' lack of job knowledge, which results in ineffective and inefficient working time and very low-quality work. Moreover, employees show too little initiative towards problems arising in their duties.

Based on performance evaluation data at PT Sejahtera PrimaPersada, South Jakarta, employee performance is still significantly below the expected standards: the highest work results were in 2016 at 90%, and the lowest in 2020 at 75%. According to these data, work results decreased each year, with an overall average of 84%. The highest initiative was in 2016, at 80%, and the lowest was in 2022, at 76%. From these data, initiative in 2020 decreased from previous years, with an overall average of 78%. The highest timeliness was in 2016 at 85%, and the lowest in 2020 at 75%. According to these data, timeliness in 2020 decreased from previous years, with an overall average of 82%. According to Mahsun (2016), performance is a description of the level of achievement of an activity, program, or policy in realizing an organization's goals, objectives, mission, and vision, as stated in an organization's strategic planning. An organization or company must have standards of behaviour to be followed within the company, whether written or not, and to encourage employees to comply and thereby create good performance, discipline and work motivation are necessary.

Apart from performance factors, another factor that influences employee performance is work discipline. At PT Sejahtera PrimaPersada, South Jakarta, the main problem is the lack of discipline among employees: the number of people who are late, absent without explanation, sick or on leave increases yearly. Many employees still do not comply with the work regulations and guidelines made by the company. Employees are unable to use their time effectively, time which should be used as well as possible for work; there is a lack of responsibility at work, and much work is neglected.
Based on the company's internal table, the employee attendance data show that many employees are still absent due to illness, permission or negligence. In 2016, there were 34 absences; in 2017, this increased to 37; in 2018, to 40; in 2019, to 48; and in 2020, the number of absences increased further to 67, an increase over the previous four years. This is caused by employees' lack of awareness of the need to be disciplined and follow applicable regulations. The attendance data from 2016 to 2020 show that the number of employees absent due to illness, permission, and negligence continues to increase. This can certainly reduce the performance of PT Sejahtera PrimaPersada South Jakarta employees. This should be a focus for PT Sejahtera PrimaPersada South Jakarta in order to increase the value of employee work discipline, because if this is allowed to continue, employee morale will decline and the performance of PT Sejahtera PrimaPersada South Jakarta employees will decline in the future. Rivai (2015) stated that work discipline is a tool used by company management to communicate with employees so that they are willing to change their behaviour, and to increase awareness and a person's willingness to comply with all regulations and social norms that apply in a company. Apart from work discipline, work motivation is another factor that influences employee performance. The failure of a company to achieve its targets threatens to reduce work productivity due to a lack of employee motivation in carrying out their duties. PT Sejahtera PrimaPersada South Jakarta itself still has many shortcomings in motivating employees: one is the lack of encouragement and enthusiasm from superiors towards their subordinates, so that employees find it challenging to develop expertise and skills; there is also a lack of willingness among employees to carry out tasks, so that work overload often occurs.

Based on the results of the pre-survey regarding work motivation, it can be concluded that 26 employees have low motivation to achieve; likewise, motivation to affiliate is low, at only 19 employees, and motivation to rule at only 11 employees. The attributes of work motivation can consciously or unconsciously influence the level of employee work motivation. If employees lack motivation when carrying out their work, it can be detrimental to the company by reducing the performance of PT Sejahtera PrimaPersada South Jakarta employees.

According to Hasibuan (2017), motivation is about encouraging subordinates' enthusiasm for work so they are willing to work hard, giving all their abilities and skills to realize the company's goals. This needs serious attention from the leadership, because there are still problems in terms of employee performance: many employees arrive late, some are absent without explanation, and many do not comply with the work regulations and guidelines made by the company. Considering these two factors, discipline and work motivation are essential aspects in generating employee performance. They will create a conducive work climate that synergizes with increasing employee enthusiasm for work to achieve organizational goals, especially at PT Sejahtera PrimaPersada South Jakarta.
RESEARCH METHODS
This research is associative quantitative research, which asks about the relationship between two or more variables (Sugiyono, 2013). The research was carried out at PT Sejahtera PrimaPersada South Jakarta, Jalan Warung Buncit Raya No. 301 Mampang Prapatan, Duren Tiga Village, Pancoran District, South Jakarta City, over 6 months, from January 2022 to June 2022.

In this research, the population is all employees of PT Sejahtera PrimaPersada South Jakarta, totaling 56 people. The sampling method used is the saturated sample method, a technique in which all population members are used as samples. Of the entire population, the sample used in this research was 56 employees. Data collection used a Likert-scale questionnaire, while data processing used multiple linear regression analysis with SPSS version 26.00.

RESEARCH RESULTS AND DISCUSSION
Data Description
Based on the survey data, most employees are male, at 67.9% or 38 employees, while fewer are female, at 32.1% or 18 employees. Furthermore, the largest group of respondents were those aged 18-30 years, with 25 respondents or 44.6%, followed by those aged 31-40 years, with 18 respondents or 32.1%; the smallest group was aged over 40 years, with 13 people or 23.2%. Based on the three tables above, it can be seen that the rcount value for each variable's validity is greater than the rtable value (0.2632), which means the data processed using the SPSS computer program are declared valid. Based on the Cronbach's Alpha test data, all variable items were declared reliable because the Cronbach's Alpha values obtained were > 0.60; thus all variables are reliable.

Classic assumption tests
Normality test
Table 3. Normality Test Results
Based on the Kolmogorov normality test table, the Asymp. Sig. value is 0.200 > 0.05, so it can be concluded that the data meet the assumption of normal distribution.

Multicollinearity test
Multicollinearity test results can be seen in the following table. Based on the results of the multicollinearity test in the table above, all independent variables have a Tolerance value > 0.1 and a VIF value < 10 (0.628 > 0.1 and 1.593 < 10). Thus, it can be concluded that none of the independent variables in this study exhibit multicollinearity.

Heteroscedasticity test
The results of the heteroscedasticity test can be seen in the following image:
Figure 1. Heteroscedasticity Test Results
Based on the image above, the dots in the regression scatterplot spread in no clear pattern above and below zero on the Y axis, so this regression model has no heteroscedasticity problem.

Autocorrelation test
Autocorrelation test results can be seen in the following table. Based on the results of the autocorrelation test above, the Durbin-Watson value is 1.967. This value is compared using a sample size of 56 (n), with two independent variables and one dependent variable. From the Durbin-Watson table, the dL value is 1.4954 and dU is 1.6430. The autocorrelation criterion is therefore: dU < DW < 4 − dU, i.e. 1.6430 < 1.967 < 2.357. It can be concluded that there is no autocorrelation.
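The Durbin-Watson decision rule applied above is mechanical enough to script. A minimal sketch in Python, plugging in the paper's own n = 56, k = 2 bounds (the function name is ours):

```python
def no_autocorrelation(dw: float, du: float) -> bool:
    """Durbin-Watson rule of thumb: no autocorrelation when dU < DW < 4 - dU."""
    return du < dw < 4 - du

# Values from the paper: DW = 1.967, dU = 1.6430 (n = 56, k = 2; dL = 1.4954)
print(no_autocorrelation(1.967, 1.6430))  # True -> no autocorrelation
```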
Multiple Linear Regression Analysis
Multiple linear regression test results can be seen in the following table. The interpretation of the regression equation is:
a. The constant value shows that if discipline and work motivation are 0 (zero), employee performance will be 31.925.
b. The discipline coefficient has a positive value of 0.292, indicating that every 1-point increase in discipline is followed by an increase in employee performance of 0.292. The Sig value of 0.017 is smaller than 0.05 (0.017 < 0.05), indicating that work discipline has a significant effect on employee performance.
c. The work motivation coefficient has a positive value of 0.026, indicating that every 1-point increase in work motivation is followed by an increase in employee performance of 0.026. The Sig value of 0.820 is greater than 0.05 (0.820 > 0.05), indicating that work motivation has no significant effect on employee performance.

Based on the table above, Fcount > Ftable, or 4.298 > 3.17. This is also reinforced by the probability significance value of 0.019 < 0.050. Thus Ho is rejected and Ha is accepted. It can be concluded that Discipline (X1) and Work Motivation (X2) simultaneously have a significant effect on Employee Performance (Y) at PT Sejahtera PrimaPersada South Jakarta.

The coefficient of determination test results can be seen in the following table: the R² value is 0.462. This shows that discipline and work motivation influence employee performance by 46.2%, while the remaining 53.8% is influenced by other factors outside this research.
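The fitted model from the table can be written as a small prediction helper; a sketch using only the coefficients reported above (the function name and example scores are ours):

```python
def predicted_performance(discipline: float, motivation: float) -> float:
    """Fitted regression from the table: Y = 31.925 + 0.292*X1 + 0.026*X2.
    Only the discipline term (Sig 0.017 < 0.05) is statistically significant."""
    return 31.925 + 0.292 * discipline + 0.026 * motivation

# Example: hypothetical questionnaire scores of 40 for both predictors
print(predicted_performance(40, 40))  # -> 44.645
```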
DISCUSSION
Discipline significantly affects the performance of PT Sejahtera PrimaPersada South Jakarta employees. This is in line with research conducted by Rasminto et al. (2020), Hasibuan & Silvya (2019), Tanjung & Manalu (2019), Larasati & Suhermi (2021), and Harahap & Tirtayasa (2020), who found that discipline has a significant effect on employee performance. Consistency in employee performance is a solid foundation for the success of company operations. Thus, employees who uphold discipline tend to carry out their daily tasks consistently, arrive on time, comply with work deadlines, and strictly follow company procedures. With this consistency, companies can count on stable and reliable results, providing a solid foundation for long-term growth and sustainability. By prioritizing discipline, employees maintain high standards of personal performance and encourage their coworkers to follow the same example. This reduces disruptions and delays in work processes, speeds up workflow, and increases overall productivity; these efficiencies are a vital foundation for a company's progress in a competitive market. Punctuality in completing tasks is an essential aspect of the discipline that employees apply. Disciplined employees will ensure they complete their tasks according to the set deadlines. This is important in managing projects and planning business strategies, as delays can disrupt schedules and affect the team's overall performance. Apart from these direct benefits, discipline also creates a positive work culture in the workplace; when discipline is appreciated and encouraged, employees feel more involved and committed to their work and the company's overall goals.

Work motivation does not significantly affect the performance of PT Sejahtera PrimaPersada South Jakarta employees, so the second hypothesis is rejected. This aligns with research conducted by Adha et al. (2019), who found that work motivation does not significantly affect employee performance. Although motivation is considered a catalyst for productivity, reality shows that responses to motivation vary among individuals, creating challenges in establishing a consistent relationship between motivation and performance. Furthermore, differences in personality and individual characteristics complicate linking motivation to performance. Every employee has different preferences, values, and needs, so a motivation strategy that works for one individual may not have the same impact on another. Apart from that, external factors also play an essential role in determining employee performance. Even though an employee may have a high level of motivation, external obstacles such as unsupportive company policies, unstable economic conditions, or lack of direction from management can limit their ability to achieve their full potential. Variability in working conditions can also reduce the effect of motivation on employee performance. Lack of recognition for achievements, lack of support from superiors, or interpersonal conflict within the team can undermine acquired motivation, resulting in less-than-optimal performance despite high motivation levels. Lastly, the importance of skills and abilities in achieving desired results should not be overlooked. High motivation may not overcome a lack of the skills or knowledge necessary to complete a task effectively: even though employees are motivated to achieve goals, these deficiencies may still prevent improved performance.

Discipline and work motivation simultaneously significantly affect the performance of PT Sejahtera PrimaPersada Jakarta employees. The results of simultaneous hypothesis testing (F test) obtained an Fcount value of 4.298 and a significance value of 0.019, while the Ftable value at the chosen confidence level was 3.17. The third hypothesis is accepted because Fcount > Ftable, or 4.298 > 3.17, and the significance value is < 0.05. The R Square value is 0.462; this shows that discipline and work motivation influence employee performance by 46.2%, while other factors outside this research influence the remaining 53.8%. This aligns with research conducted by Hasibuan and Silvya (2019), who found that discipline and work motivation significantly affect employee performance.

CONCLUSION
Based on the research results, it can be concluded that discipline significantly affects the performance of PT Sejahtera PrimaPersada South Jakarta employees. This finding is consistent with several previous studies showing that discipline is essential in achieving stable and reliable performance. Furthermore, work motivation has little effect on the performance of PT Sejahtera PrimaPersada South Jakarta employees. This is in line with research findings that responses to motivation vary between individuals, as well as with the complexity of external and internal factors that influence employee performance.
Discipline and work motivation simultaneously have a significant effect on the performance of PT Sejahtera PrimaPersada South Jakarta employees. The F-test results show that these two factors together significantly impact employee performance, with an R Square value of 46.2%, indicating that almost half of the variation in employee performance can be explained by discipline and work motivation. Thus, management should continue to encourage a culture of discipline and motivate employees to improve their performance effectively. Although other factors outside of this research still influence employee performance, understanding the role of discipline and work motivation can be a strong foundation in efforts to increase productivity and organizational success.
Table 1. Summary of validity test results
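As a minimal illustration of the regression equation reported in the results above, the fitted model can be written out and evaluated directly; the coefficients are taken from the reported table, while the example score values are hypothetical:

```python
# Fitted multiple linear regression reported above:
#   Performance = 31.925 + 0.292 * Discipline + 0.026 * Motivation
def predicted_performance(discipline: float, motivation: float) -> float:
    """Evaluate the fitted regression equation for one employee."""
    return 31.925 + 0.292 * discipline + 0.026 * motivation

# Hypothetical example: raising the discipline score by 1 point raises
# predicted performance by exactly the discipline coefficient, 0.292.
base = predicted_performance(discipline=40, motivation=40)
plus_one = predicted_performance(discipline=41, motivation=40)
print(round(plus_one - base, 3))  # 0.292
```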
2024-02-27T16:26:31.537Z
2024-02-10T00:00:00.000
{ "year": 2024, "sha1": "ca66e9b79d4af881eb96235c692507c22459d854", "oa_license": "CCBYSA", "oa_url": "https://journal.admi.or.id/index.php/IJML/article/download/1220/1388", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "990975ec716e6560a11c7bf0cfcb5bf7ad5a0491", "s2fieldsofstudy": [ "Business", "Psychology" ], "extfieldsofstudy": [] }
63756375
pes2o/s2orc
v3-fos-license
Adaptive Modulation with Customised Core Processor
Abstract
Objectives: To develop automatic modulation detection. Methods/Statistical Analysis: A single system with one antenna transmits and receives multiple signals. Autonomous modulation techniques avoid multipath fading, delay and bandwidth limitation. The transmitter selects the modulation from a group based on the receiver location, and the selected modulation gives maximum accuracy. Findings: This approach avoids the size limitation because of the parallel relay terminal structure. This research work focuses on different sets of modulation schemes in a virtual logic channel, implemented in low-power hardware that can also be customized. Application/Improvements: The proposed work uses a large number of low-power hardware nodes in a more secure manner.
Introduction
The objectives of the work include, but are not limited to: multiple modulation and demodulation with switching ability under any condition; choosing a best-fit modulation based on efficiency while maintaining the BER and SNR values; filtering the channel noise and automatically identifying the corresponding demodulator at the receiver; and ensuring good security for the system without scalability issues. The demonstrated modulation and demodulation schemes include [1,2]:
• ASK
• CPFSK
• D8PSK
• QAM8
• QASK
• DQPSK
• SUNDE
• QAM 16
• QAM 64
• QAM 256
Communication Model
The communication between transmitter and receiver is shown along with the channel in Figure 1.
Modulation Detection
In this work, automatic modulation detection extracts seven parameters (features based on amplitude, frequency and phase) for identification of different modulation techniques, namely ASK2, ASK4, FSK2, FSK4, PSK2, PSK4, QAM16 and QAM64. All the parameters are calculated during the real-time process; based on the statistics of the signal condition, the parameters are selected cautiously. The parameters selected are [4,5]: absPhase2
The classification method mentioned above is based on threshold values calculated for the various modulation techniques. Based on the flow chart shown in Figure 4, the type of modulation transmitted to the receiver is identified [6].
Hardware Implementation
The implementation uses two hardware nodes (ARM9-and-above-based architecture with "GNU Radio" Linux drivers installed) [7]:
• a transmitter board consisting of all the modulation blocks
• a receiver board consisting of all the demodulation blocks
Transmitter Block
The transmitter board consists of a carrier generator, various modulation blocks, a modulation selection switch, a noise generator and a DAC converter, as shown in Figure 5. The user selects one of the modulations using the "modulation selection switch". Based on the selected modulation scheme, the necessary software blocks are activated: the data block modulates the data, the carrier block generates the carrier, and the noise block generates and controls the noise.
The modulated wave is converted to an analog signal using the on-board DAC and transmitted using the antenna [8,9].
Receiver Block
The receiver board consists of a carrier generator, various demodulation blocks, a noise generator and an ADC converter, as shown in Figure 6. The modulated signal is sampled through the analog-to-digital converter at a high sampling rate. The digitized signal is then noise-filtered by the noise-filtering software block. Based on the detected modulation scheme, the applicable software blocks are applied to the data samples to recover the demodulated data [10].
Case (i): 16 QAM Implementation Details
16 QAM, commonly used in radio networks and microwave digital radios, offers four values for 'I' and four values for 'Q', yielding 16 possible states, as shown in Figure 7. 16 QAM sends four bits per symbol. The signal can transition from any state to any other state [11]. 16 QAM is more spectrally efficient than BPSK, QPSK, OQPSK, and π/4-DQPSK. In the QAM transmitter and receiver paths, nonlinearity and noise may cause one symbol to be interpreted as another, producing errors [12]. This approach decreases the inter-symbol interference problems, so it gives better results compared with other modulation techniques. The 16 QAM implementation follows the flow chart in Figure 8: it imports the GNU Radio blocks, selects the default values (provided by the GNU blocks) and performs the modulation/demodulation for the 16 QAM process.
Case (ii): PSK Implementation
Phase shift keying (PSK) is a digital modulation technique that encodes data by changing the phase of a carrier reference signal. The PSK implementation blocks are shown in Figure 9. Each PSK symbol has a unique pattern, and each unique pattern carries an equal number of bits from the encoder output. The phase represented by a particular symbol is recovered from its unique pattern. The PSK implementation works well, but the process can be improved by moving to DPSK [15,16]. DPSK simplifies the process by omitting one section that is not needed in the demodulator, because the reference signal and the unique bit pattern of each symbol represent the exact phase of the received signal. In Figure 10, the imported GNU blocks generate the Gray-code constellation for different M-PSK values, convert the code and perform the demodulation for PSK [17].
Results and Discussion
In this work, the results were obtained by correlating the Linux software with the hardware; among the provided modulation choices, QAM256 was selected. The output shows the constellation with 256-arity and the start and end of the modulation status, along with the status of the port number (Figure 10). The QAM code detection output is shown in Figure 11.
Conclusion
In this work, multiple modulation techniques are used to automatically identify an unknown modulation from a set of candidate modulations. This technique was practically implemented and tested for ten modulation types with hardware implementation on an FL2440 ARM9 embedded core. The hardware implementation consumes low power. Additionally, the modulation identification technique is implemented with a scheduler that optimizes performance based on the priority of a node and the availability of the modulation and channel. Error detection and correction at the receiver were also implemented using the ARM9 core.
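To make the 16-QAM mapping described in Case (i) concrete (four I levels × four Q levels = 16 states, four bits per symbol), here is a minimal Python sketch. The Gray-coded level assignment is illustrative, not taken from the paper's GNU Radio configuration:

```python
import itertools

# Gray-coded amplitude levels: two bits select one of four I (or Q) values.
# This particular mapping is illustrative; real deployments choose the
# constellation to match the receiver's slicer.
GRAY_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def qam16_symbol(bits):
    """Map four bits to one 16-QAM constellation point (I + jQ)."""
    b0, b1, b2, b3 = bits
    i = GRAY_LEVELS[(b0, b1)]   # first two bits select the I level
    q = GRAY_LEVELS[(b2, b3)]   # last two bits select the Q level
    return complex(i, q)

# All 16 possible 4-bit words map to 16 distinct constellation states.
constellation = {bits: qam16_symbol(bits)
                 for bits in itertools.product((0, 1), repeat=4)}
assert len(set(constellation.values())) == 16
```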
2019-02-16T14:30:41.699Z
2016-09-29T00:00:00.000
{ "year": 2016, "sha1": "a4c8d29ce0e88c6d1bc415b4860e18977d308d7e", "oa_license": null, "oa_url": "https://doi.org/10.17485/ijst/2016/v9i35/101797", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "bc6840a7660da23b6b33e3bb1e1ac859aaa286c0", "s2fieldsofstudy": [ "Computer Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
17571869
pes2o/s2orc
v3-fos-license
Isolated Medial Rectus Nuclear Palsy as a Rare Presentation of Midbrain Infarction. BACKGROUND Diplopia is a common subjective complaint that can be the first manifestation of a serious pathology. Here, we report a rare case of midbrain infarction involving the lateral subnucleus of the oculomotor nuclear complex presenting as diplopia, with no other stroke manifestations. CASE REPORT An 83-year-old right-handed white man with past medical history of diabetes mellitus, hypertension, dyslipidemia, and coronary artery disease presented to the emergency department (ED) with diplopia and unsteadiness. Two days prior to admission, the patient woke up with constant horizontal diplopia and unsteadiness, which limited his daily activities and led to a fall at home. He denied any weakness, clumsiness, nausea, vomiting, photophobia, fever, or chills. Ocular exam showed a disconjugate gaze at rest, weakness of the left medial rectus muscle, impaired convergence test, and bilateral 3-mm reactive pupils. The diplopia resolved by closing either eye. The remaining extraocular muscles and other cranial nerves were normal. There was no nystagmus, ptosis, or visual field deficit. Sensation, muscle tone, and strength were normal in all extremities. Magnetic resonance imaging (MRI) of the brain revealed a tiny focus of restricted diffusion in the left posterior lateral midbrain. CONCLUSIONS A thorough history and physical examination is essential to diagnose and manage diplopia. Isolated extraocular palsy is usually thought to be caused by orbital lesions or muscular diseases. Here, we report a case of midbrain infarction manifested as isolated medial rectus palsy. Background Diplopia is a common subjective complaint that can be the first manifestation of a serious pathology. Isolated medial rectus palsy as the presenting manifestation of midbrain infarction is rare, particularly when no other stroke manifestations are identified. Here, we report a case of midbrain infarction that presented as isolated medial rectus palsy, and outline the clinical approach taken to identifying the etiology of diplopia in this patient. Case Report An 83-year-old right-handed white man presented to the ED with diplopia and unsteadiness for 2 days. Two days prior to admission, the patient woke up with constant horizontal diplopia, where single objects appeared as double (side-by-side images). He also had unsteadiness, which limited his daily activities and led to a fall at home. He denied any weakness or clumsiness, numbness or tingling, slurred speech, nausea, vomiting, photophobia, fever, or chills. His past medical history was significant for diabetes mellitus, hypertension, dyslipidemia, and coronary artery disease. He denied smoking, drinking alcohol, or using illicit drugs. On physical exam, he was afebrile, with respiratory rate of 16 breaths/min, blood pressure of 170/80 mmHg, heart rate of 60 bpm, and oxygen saturation of 98% on room air. An ocular exam showed a disconjugate gaze at rest, weakness of the left medial rectus muscle, impaired convergence test, and bilateral 3-mm reactive pupils ( Figure 1; Video 1). The diplopia resolved by closing either eye. The remaining 5 extraocular muscles (superior rectus, inferior rectus, lateral rectus, superior oblique, and inferior oblique) and other cranial nerves were normal. There was no nystagmus, ptosis, or visual field deficit. Sensation, muscle tone, and strength were normal in all extremities. 
At this point, our differential diagnoses were ischemic stroke involving the lateral subnuclei of the oculomotor nucleus, posterior communicating artery aneurysm, cavernous sinus aneurysm, internuclear ophthalmoplegia (INO), refractive error, or idiopathic oculomotor nerve palsy. A CT scan of the head showed a small old left putamen lacunar infarct. MRI of the brain revealed a tiny focus of restricted diffusion in the left posterior lateral midbrain and findings suggestive of advanced amyloid angiopathy (Figures 2, 3). The patient was started on aspirin 81 mg/day and was advised to avoid other anti-platelet agents or anti-coagulants due to the high risk of bleeding from cerebral amyloid angiopathy.
Discussion
Diplopia is a common subjective complaint encountered by health care providers. Identifying the etiology of diplopia can be a challenge because of the long list of differential diagnoses. Thus, a thorough history and physical examination is essential to diagnose and manage diplopia. Our patient had binocular diplopia, defined as "diplopia (that) resolves when the affected eye is occluded", which indicates misalignment of the visual axes as the cause of the diplopia. In contrast, monocular diplopia, defined as "diplopia (that) persists when the affected eye is occluded", is usually caused by ophthalmological pathology, such as refractive error [1]. The binocular diplopia in our patient limited the differential diagnosis to impaired neural control or function of the extraocular muscles. Horizontal diplopia further shortened the differential diagnosis list to impairment of the medial rectus, the lateral rectus, or both. The restricted adduction of the left eye (Video 1) suggests dysfunction of the left medial rectus muscle, which is one of the extraocular muscles supplied by the lateral subnuclei of the oculomotor nuclear complex (CN III). The lateral subnucleus of the oculomotor nuclear complex is composed of 3 subnuclei, which, from dorsal to ventral, supply the inferior rectus, inferior oblique, and medial rectus muscles, respectively [2]. A typical presentation of complete oculomotor nerve palsy includes a "down and out" position of the affected eye, ipsilateral ptosis, and a fixed dilated pupil [3]. However, partial oculomotor nerve palsy is more commonly encountered in clinical practice than complete oculomotor nerve palsy. Pupil-sparing oculomotor nerve palsy is usually caused by vascular microinfarction, but this is true only when there is complete ptosis of the eye [1]. The complete pupil-sparing in our patient made a compressive lesion, such as a posterior communicating artery aneurysm, less likely. Another differential diagnosis in our case was INO. However, the absence of nystagmus in the contralateral abducting eye and the impaired convergence test in our patient favored the diagnosis of isolated medial rectus palsy over INO [1]. In addition to the physical exam findings, the advanced age of our patient and his history of hypertension, diabetes, dyslipidemia, and coronary artery disease supported the diagnosis of midbrain infarction involving the lateral subnuclei of the oculomotor nuclear complex. This diagnosis was confirmed by the MRI, which also revealed an incidental finding of advanced amyloid angiopathy (Figures 2, 3). Isolated unilateral extraocular palsy is usually thought to be caused by orbital lesions or muscular diseases.
In this report, we presented a rare case of midbrain infarction involving the lateral subnuclei of the oculomotor nuclear complex presenting as isolated medial rectus palsy. Isolated superior and inferior rectus palsies have been reported after midbrain infarction [4,5]. In fact, a similar case of left medial rectus palsy, but with left partial ptosis, was reported by Rabadi as a consequence of midbrain infarction [6].
Conclusions
Unilateral isolated medial rectus nuclear palsy can be the only manifestation of midbrain infarction. A systematic clinical approach with an appropriate history and physical examination is essential to elucidate the etiology of diplopia and to avoid missing a serious underlying diagnosis, such as a cerebrovascular accident.
2017-08-16T07:39:25.230Z
2015-10-08T00:00:00.000
{ "year": 2015, "sha1": "710842d2b296f604eb4428b488ac22aedc0a7d10", "oa_license": null, "oa_url": "https://europepmc.org/articles/pmc4603610?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "710842d2b296f604eb4428b488ac22aedc0a7d10", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264989754
pes2o/s2orc
v3-fos-license
Mapping Ecological Infrastructure in a Cross-Border Regional Context
Facing the decline of biodiversity worldwide, the conservation of the remaining natural and semi-natural areas is fundamental. To do so, the concept of green infrastructure has gained attention recently. This case study presents the method developed to identify the green infrastructure in a cross-border, urbanized territory between Switzerland and France in the area of influence of the city of Geneva. The first part of the methodology consists of calculating and mapping the inputs aggregated in four pillars: (i) the distribution of habitats as well as the predicted distribution of hundreds of plant and animal species, (ii) the supply of five ecosystem services, (iii) the functional connectivity for three animal species and the light pollution and (iv) five indices of landscape structure. These inputs are then used to run a prioritization model to identify the areas with the highest ecological interest according to these weighted inputs. The cross-border situation of this case study had impacts on the way the input data were gathered and weighted and on the way the output was created to consider the expectations of the three main local authorities involved, without creating any legal obligations on the implementation of the green infrastructure. As a positive sign of the usefulness of these results, the resulting maps were immediately transferred to the land use planners in charge of developing ambitious visions of the "Grand Genève" territory for 2050, in alignment with 10 objectives of ecological transition as recently agreed and signed by local authorities. The method presented in this article is flexible and includes a broad description of biodiversity, supporting a reliable network of areas with high ecological values for conservation purposes and human well-being.
Introduction
Global biodiversity is under a major crisis at every level, and its genetic, species and ecosystem diversity is declining rapidly [1,2]. The destruction of natural habitats for agriculture and/or urbanization is the main cause of its decline and is directly linked to our way of occupying terrestrial and marine surfaces as well as to our consumption patterns [3-7]. Biological diversity decline might ultimately alter ecosystem functions such as productivity, stability and resilience, jeopardizing our food and water security as well as our socio-economic well-being [8-17]. Thus, the conservation of the remaining natural and semi-natural areas is fundamental, especially in urbanized environments where urbanization represents an additional pressure.
Green infrastructure (GI) is defined as a network of (semi-)natural areas allowing structural and functional connectivity of the landscape where biodiversity and ecosystem services are concentrated. The concept of GI fits perfectly with the modern view of nature conservation that emphasizes the cohabitation of people and nature with sustainable and resilient interactions [18]. This new paradigm encompasses the common health of human societies and natural systems, highlighting our direct dependence upon ecosystems as described in the "One Health" concept [19,20]. GI is usually described as an interconnected network of (semi-)natural areas designed to deliver a wide range of ecological, social and economic benefits [21-23], although several definitions have been used [24,25]. It is usually made up of large areas concentrating biological diversity and ecosystem service supply, linked with corridors allowing structural and functional connectivity [22,26]. GI is highly relevant in urban areas because it gives an ecological value to each element of the territory, focusing on the multifunctionality of the landscape. GI also integrates nature-based solutions to mitigate the effects of global changes [26-28]. It is promoted at the European scale but also at the federal and cantonal scales in Switzerland [29-31]. In France, GI contributes directly to the policy on green and blue networks (http://www.trameverteetbleue.fr (accessed on 1 July 2023)). Internationally, GI fits Target 3 of the urgent actions that need to be taken over the decade to 2030 from the 15th Conference of the Parties to the Convention on Biological Diversity, which proclaims to "Ensure and enable that by 2030 at least 30 per cent of terrestrial areas of particular importance for biodiversity and ecosystem functions and services, are effectively conserved and managed through ecologically representative, well-connected and equitably governed systems of protected areas and other effective area-based conservation measures, while ensuring that any sustainable use is fully consistent with conservation outcomes." [32].
There is no consensus on the methodology nor on the inputs that should be used to identify and map a GI [24]. This has led to confusion, where the term "green infrastructure" was used in very divergent ways while several concepts and terms were referring to the same idea (e.g., ecological network, green corridors, green prints, etc.) [25]. For example, in highly urbanized environments, GI is often used as a greening method or as architectural elements such as green walls or green roofs [33]. In other situations, GI is restricted to areas supporting ecosystem supply only or to protected lands [34-39]. More details of how GI is used in the scientific literature, as well as the methods employed to identify it and the associated limits, are available in Honeck et al. (2020) [26]. This literature review identified a methodological gap in the identification of GI, where most of the articles do not consider all aspects of biodiversity conservation and GI's definition.
The methodology employed here is based on the "three pillars" approach that has already been applied in Geneva, Switzerland [40]. This approach allows the consideration of all aspects of biodiversity conservation and respects the initial definition of GI [26]. The method is adapted here to aggregate the inputs into four pillars, the third and the fourth being initially grouped: (1) the diversity and distribution of species and habitats, using species distribution models and a land use-land cover (LULC) map of the territory, (2) the supply of ecosystem services, (3) the functional connectivity and (4) the structural connectivity of the landscape. A spatial prioritization tool is used to select the network of areas with the highest ecological interest. The novelty of this article is the presentation of an application of the theoretical approach developed in Honeck et al. (2020) [26] on a cross-border territory, emphasizing the various methods used to calculate and integrate 2437 inputs for the identification of the GI. This exhaustive work can be used as a baseline for any territory aiming at mapping its own GI.
This article is focused on the prioritization of the GI network covering 30% of the territory of a regional cross-border agglomeration between France and Switzerland named "Greater Geneva". This region is located in the European Alps and in the area of economic influence of the city of Geneva. In January 2023, the elected representatives of Greater Geneva signed the Charter for Greater Geneva in Transition with the desire to make the ecological transition the backbone of cross-border cooperation, recognizing that the erosion of life, the depletion of natural resources and climate degradation are our greatest threats [41]. The Charter sets out 10 strategic commitments to respect both the social floor and the ecological ceiling. The GI is fully in line with Objective 3 on Biodiversity of the Charter, which aims at stopping the loss of natural habitats by 2050. It is also expected to have a positive impact on all other objectives. This particular transboundary setting generated several issues that were addressed by the research questions presented below.
(i) What are the difficulties of gathering input data across borders? (ii) What is the distribution of prioritization value across the studied areas? (iii) What is the best 30 percent of the territory? (iv) How can we accommodate the desire of each administrative entity to identify its own best 30%? (v) What share of the identified GI is already protected? (vi) What are the difficulties in establishing GIs across borders?
Study Area
Greater Geneva is a cross-border territory between Switzerland and France of approximately 2000 km² located around the city of Geneva. In its strict limits, it integrates three administrative entities grouped in two Swiss cantons ("Genève" and "Vaud", with the District of Nyon) and two French Departments ("Ain" and "Haute-Savoie") with the "Pôle Métropolitain du Genevois Français". This peculiar territory induces difficulties in compiling data because the taxonomy, the methods and the data availability vary from one administrative entity to another. However, the territory has a biogeographic consistency and is delimited by mountain ranges, the Alps in the south-west and the Jura in the north-east, which justifies the GI assessment at this scale beyond borders. The region is particularly dynamic, and the population is growing rapidly due to the attractivity of the city of Geneva. The territory is dominated by urbanized areas and crops in the lowlands and by forests and pastures in the mountainous areas.
Method for Mapping the Ecological Infrastructure
According to the definition of GI, the inputs used to identify and map it have to consider several aspects of biodiversity and ecosystem service conservation [22,26,42]. For clarity, they were grouped into four main pillars: (1) the diversity pillar, which includes the assessment of species and habitat distributions based on models and the aggregation of available LULC data; (2) the ecosystem service pillar, which aims at mapping the supply of five regulating ecosystem services; (3) the connectivity pillar, which includes maps of the functional connection for three animal species and light pollution; and (4) structural indices of the landscape based on the LULC categories. Once the inputs have been prepared at a spatial resolution of 25 m, they are included in a spatial prioritization tool set to classify every pixel of the territory according to its relative importance for biodiversity and ecosystem service conservation [42,43] (Figure 1). The method and the theoretical background used here were developed and explained exhaustively in two papers [26,40]. The inputs were selected according to the available data for describing the four pillars, their collinearity and their ecological meaning. The weight attributed to each of them was discussed in the working team and with the main stakeholders. The following sections explain the methods used to calculate and map the inputs.
Pillar 1: Species and Habitat Distribution
The assessment of the distributions of many species of plants and animals allows the identification of important areas for the conservation of specific richness. Furthermore, the inclusion of all distributions into the spatial prioritization process allows the selection of areas that are of the highest importance for rare species. The distribution of natural habitats also plays an important role in nature conservation by providing food and shelter to animals and plants, but also by the maintenance of their ecological functions.
Natural Habitats
The distribution of habitats and, more globally, the LULC information are highly important for spatial planning at the regional scale but also for species distribution models (SDMs) [44]. The LULC map was created based on the compilation of the French and Swiss geomatic information sources, respectively named "Institut national de l'information géographique et forestière" (IGN, https://www.ign.fr/institut) and the "Topographic Land Model" from SwissTopo (TLM3D, https://www.swisstopo.admin.ch/fr/geodata/landscape/tlm3d.html) (accessed on 1 July 2023). The data available across the study area were heterogeneous in typology and geometry, so both sources of information were used to homogenize the various maps into one. The geometry of the IGN map was extracted to divide the territory by administrative parcels that were transformed into polygons. Then, the habitat maps were transformed into five-meter points and added to the polygons, where the most represented habitat was selected for each parcel. The dense urban environments, as well as the roads, railways, highways, rivers and running water, were then added. Finally, the diffuse urban environment class was created based on the presence of vegetation in the urban classes using NDVI information. More details of the method are presented in Figure 2.
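A toy illustration of the majority rule used above to assign one habitat per parcel; the parcel IDs and habitat labels are hypothetical (the real workflow runs on GIS layers of polygons and five-meter points):

```python
from collections import Counter

# Each five-meter sample point carries the parcel it falls in and the
# habitat read from the rasterized habitat maps (hypothetical values).
sample_points = [
    ("parcel_A", "meadow"), ("parcel_A", "meadow"), ("parcel_A", "forest"),
    ("parcel_B", "wetland"), ("parcel_B", "wetland"), ("parcel_B", "crops"),
]

# Majority vote: the most represented habitat is selected for each parcel.
votes = {}
for parcel, habitat in sample_points:
    votes.setdefault(parcel, Counter())[habitat] += 1
parcel_habitat = {p: c.most_common(1)[0][0] for p, c in votes.items()}
print(parcel_habitat)  # {'parcel_A': 'meadow', 'parcel_B': 'wetland'}
```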
The categories that were used in the prioritization process represent 19 (semi-)natural habitats, with the urban ones being excluded from this analysis. Each selected category was extracted as a unique input, ensuring the selection of at least a part of each (semi-)natural habitat in the final GI network by the prioritization process.
Species Distribution Modeling
SDM allows the creation of a covering map of habitat suitability based on the georeferenced observations of species' individuals and predictive variables [45-47]. Several methods exist and have been used extensively in conservation [48-51]. Species' occurrences were compiled from French and Swiss botanical conservatories and monitoring programs. Only observations between 2000 and 2020 and with a precision below 25 m were kept. To better conserve endangered species, we compiled the red list statuses from the different entities and selected the most threatened status. This ensures that the threats species are facing are not under-evaluated. At the end of the process, 585 species of animals and 1816 plants were selected. Predictive variables were selected based on their collinearity, ecological meaning and modeling performance in the study area and are presented in Table 1 [52]. The resolution of these variables is 25 m. The chosen modeling algorithm was MaxEnt (version 3.4.1) [53,54] because it is widely used in SDM [55] and known to perform well, especially with presence-only data [56,57]. The models were run using the "Dismo" [58], "ENMeval" [59] and "sdm" [60] packages in R [61]. The default settings were kept except for the beta multiplier, which was set to 2.00 to avoid over-fitting [62,63]. For each model, the occurrences were randomly split, with 75% used for calibration and 25% for evaluating the model's performance, 10 times in a row. Ten thousand background points were randomly created for each model. Then, for each species, a final model was calibrated with all available occurrences to map habitat suitability with all the information. More details about the modeling method and data selection process can be found in Sanguet et al. (2022) [52].
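The repeated 75/25 split evaluation described above can be sketched as follows. This is a rough Python stand-in that substitutes scikit-learn's logistic regression for MaxEnt (which the authors ran via R packages), with entirely synthetic presence and background data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 presence points and 10,000 background points,
# each described by a few environmental predictor values.
X_presence = rng.normal(loc=1.0, size=(200, 5))
X_background = rng.normal(loc=0.0, size=(10_000, 5))
X = np.vstack([X_presence, X_background])
y = np.concatenate([np.ones(200), np.zeros(10_000)])

# Repeat the random 75% calibration / 25% evaluation split 10 times.
aucs = []
for seed in range(10):
    X_cal, X_eval, y_cal, y_eval = train_test_split(
        X, y, train_size=0.75, stratify=y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_cal, y_cal)
    aucs.append(roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]))
print(f"mean AUC over 10 splits: {np.mean(aucs):.3f}")

# A final model calibrated on all occurrences would then be used to map
# habitat suitability across the 25 m grid.
final_model = LogisticRegression(max_iter=1000).fit(X, y)
```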
Pillar 2: Ecosystem Service Supply
The preservation of ecosystem services of regulation and support to biodiversity contributes to conserving the good functioning of ecosystems. Furthermore, the preservation of ecosystem service supply, as part of nature's contributions to people and nature-based solutions, helps mitigate the detrimental effects of climate change or the consequences of extreme meteorological events. Other types of ecosystem services, such as resource production or cultural services, were not included in this work because they might induce the selection of areas with low quality or detrimental effects on biodiversity. Five ecosystem services were modeled and mapped using InVEST (version 3.12.0): the suitable areas for pollinators, the atmospheric carbon storage, the nutrient delivery ratio, the sediment delivery ratio and the leaf area index.
Suitable Areas for Pollinators
Pollinators are highly important for crop pollination and, as a consequence, for our food provision. Identifying their most suitable habitats to be integrated into the final GI network participates in maintaining the populations' health and pollinators' availability. The index calculated here represents a potential abundance of pollinators for each pixel of the resulting map based on their ecology, considering the quality and attractivity of the habitats for feeding and nesting habits. The "Crop Pollination" program was used in the software InVEST. Two tables are needed to run the model. The guild table contains the characteristics of 20 wild bee species, while the biophysical table associates habitats with wild species' traits and habits. Both tables were created based on a literature review and local expert knowledge to adjust the values to the local context (Tables A1 and A2 in Appendix A). The optional farm map was not used. The model produces one map for each season (winter excluded), which were added to map the total suitability of the landscape.
Atmospheric Carbon Storage
Carbon dioxide (CO₂) is a greenhouse gas massively released into the atmosphere by human activities and is the main cause of the observed global warming. The preservation of natural habitats known to store carbon avoids their destruction and thus the release of the stored carbon dioxide into the atmosphere. Preserving forest growth compensates for a part of our emissions and participates in sequestrating carbon into organic matter and the soil. The mapping of this ecosystem service uses a biophysical table linking each habitat category to its carbon storage capacity. The table, named "Carbon pools" in the "Carbon Storage and Sequestration" program of InVEST, was adapted from the available data in InVEST's documentation and is available in Appendix A (Table A3). Only the storage was measured, not the sequestration.
Nutrient Delivery Ratio
The "Nutrient Delivery Ratio" (NDR) program of InVEST calculates the flow of nutrients into the rivers and other water bodies or their retention in the soil's upper layers. Excessive nutrient accumulation in water could impact aquatic ecosystem composition and functioning. The NDR program models the landscape's load and retention of nitrogen and phosphorus based on a biophysical table linking the LULC categories to their nutrient retention and load abilities, but also on the digital elevation model, a nutrient runoff proxy such as the annual precipitation, and the distribution of watersheds. The values of the biophysical table were adapted to the local context from the literature [66-68] as well as from the available data in the InVEST documentation [69] and are available in Appendix A (Table A4). The Borselli K parameter was set to 2, the subsurface critical length to 200 m and the subsurface maximum retention efficiency to 0.8; after several tests, the threshold flow accumulation was set to 140. This last parameter adjusts the modeling of the location of temporary rivers based on the digital elevation model. The value was selected after several tests to better fit known permanent and temporary rivers in the territory. Several results were produced, and the effective retention map was kept. It represents the relative capacity of each pixel to retain nutrients.
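As a schematic of how a biophysical table drives a per-pixel ecosystem service map such as carbon storage, the sketch below maps LULC category codes to stored carbon; the per-habitat values are placeholders, not the ones in Table A3:

```python
import numpy as np

# Placeholder "carbon pools": total carbon per pixel for each LULC code.
CARBON_PER_PIXEL = {1: 120.0,  # closed forest
                    2: 40.0,   # meadow
                    3: 5.0}    # dense urban

# Hypothetical 3x3 LULC raster of category codes.
lulc = np.array([[1, 1, 2],
                 [2, 2, 3],
                 [1, 3, 3]])

# Map each pixel's category to its stored carbon, then total the landscape.
carbon = np.vectorize(CARBON_PER_PIXEL.get)(lulc)
print(carbon.sum())  # total carbon stored across the toy landscape
```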
Sediment Delivery Ratio
The "Sediment Delivery Ratio" program in InVEST models the flow of sediments and thus the erosion of the landscape. Erosion might induce a higher risk of landslides and a loss of organic matter in the soil. The preservation of areas reducing the risk of erosion is a nature-based solution and allows the mitigation and avoidance of natural hazards. The data and settings required for this model are relatively similar to the NDR model. The biophysical table was adapted from the existing data in the InVEST documentation [69] and the literature [70,71] and is available in Appendix A (Table A4). The values link each LULC category with its ability to reduce the loss of sediment and its management by humans to reduce erosion. The erosivity and erodibility maps were downloaded from the European Soil Data Centre [72] and projected onto the territory at 25 m resolution. The settings used were the following: threshold flow accumulation = 140, Borselli K parameter = 2, maximum SDR value = 0.8, Borselli IC0 parameter = 0.5, maximum L value = 122. The results are composed of several maps, and the avoided sediment export was selected. It gives a value to each pixel according to its ability to avoid sediment export.
Leaf Area Index
Vegetation cover reduces the temperature and filters the air. Because it mitigates the effects of climate change and regulates the micro-climate, it is especially interesting to preserve green spaces in urban environments. This ability can be mapped by the leaf area index based on remote sensing images of the territory. The normalized difference vegetation index (NDVI) allows mapping of the greenness of a landscape and has been largely used to classify vegetation types [73,74]. The average maximum value of the NDVI in the territory was calculated based on the remote sensing images from Landsat-5, Landsat-7 and Landsat-8 compiled in the Swiss Datacube [52,73-75]. Then, the following formula was applied to the raster to calculate the leaf area index [76]:
LAI = 0.57 · e^(2.33 · NDVI) (1)
Pillar 3: Functional Connectivity
Functional connectivity ensures spatial connections between habitats and maintains species movements, which are especially important for their resilience against climate change [77] and for gene flow. According to their characteristics, shape, surface or fragmentation level, natural habitats' quality and functions vary [78]. Functional connectivity corresponds to the relative ease of mobility in the landscape for a species and depends on the intrinsic characteristics of the species and the landscape [79,80]. Indeed, the same landscape could be used very differently from one animal to another. Thus, a well-connected territory should allow various kinds of species movements, such as daily movement, large-scale migration and dispersion, ultimately permitting gene flow across populations. Hence, the functional connectivity was studied by the identification of both the corridors and the areas constraining species' movements for three animal species, as well as by the mapping of light pollution, an essential factor for nocturnal species' movements.
Combined Connectivity and Corridors
Three species with various spatial behaviors and using different habitat types were selected for these analyses: Cervus elaphus L. (red deer), Capreolus capreolus L. (roe deer) and Lepus europaeus P. (brown hare).
Two maps representing the global connectivity of the landscape as well as the constraining areas were modeled for each species. They are both based on two main inputs: the species reservoirs (or core areas) and a resistance matrix. Potential reservoirs of wild populations were calculated using species' resistance and connectivity maps of the Greater Geneva region [81,82]. These maps were created using species' habitat requirements and expert knowledge based on the LULC information of the territory. Ecological barriers to species' movements, such as buildings and fenced highways, are taken into account, which means the resulting model excludes portions of the territory that are not used by or accessible to the species.
The first map represents the energetic cost of crossing the habitats located on the animal's path and thus the probability for it to move across the landscape. Cumulative costs were calculated using a workflow in GIS software between the species' reservoirs, using a matrix allocating a resistance value to each LULC category of the territory. This method corresponds to the surface generalization of least-cost path models [82]; a minimal sketch of this least-cost logic is given after this pillar's overview below. Hence, suitable habitats close to the reservoirs of the considered species' population are easier to cross because of the low energetic cost, while unsuitable habitats are more energy intensive.
The second connectivity map emphasizes the areas constrained by urban occupation. In other words, it represents how few or many alternative ways are available for wildlife to move from one point to another. CircuitScape (v0.1.0) software was used to compute this map based on circuit theory [81-83]. The preservation of these constrained corridors ensures the connectivity of the landscape, even for the urbanized areas.
Light Pollution
The spatial organization of a landscape can be used differently depending on the animal. The spatial behaviors of nocturnal species do not only depend on the landscape's structure but also on the artificial light of urban areas. Indeed, they need dark spaces to carry out their movements as well as other activities. Thus, the identification of light pollution allows the preservation of areas that are especially shaded and dark during the night and the identification of areas highly polluted by light. To do so, urban areas are transformed into light-emitting spaces and the model is adjusted according to the altitude and the presence of forests or water bodies. The map was modeled by F. Tapissier in 2016 [84], before the current restrictions in the use of electricity and urban lighting. Many villages and urban areas now drastically reduce their nocturnal light, and the current map might over-represent current light pollution in the study area.
Pillar 4: Landscape Structure
Landscape structure, or structural connectivity, corresponds to the spatial arrangement of its LULC categories. The distribution and physical organization of the (semi-)natural habitats were assessed in the territory in order to identify areas with a high interest for conservation, based on five indices: the fragmentation (or the continuity) of natural areas, the soil permeability, the naturality of habitats, the diversity of (semi-)natural habitats and the identification of core natural areas.
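As referenced above, the least-cost logic behind the cumulative-cost connectivity maps can be sketched with a small Dijkstra search over a resistance grid. The resistance values here are hypothetical, not the matrices used for the three species:

```python
import heapq

# Hypothetical resistance grid: low values = suitable habitat,
# high values = costly to cross (e.g., roads, dense urban areas).
RES = [
    [1, 1, 5, 9, 1],
    [1, 2, 5, 9, 1],
    [1, 1, 1, 9, 1],
    [9, 9, 1, 1, 1],
]

def cumulative_cost(grid, source):
    """Dijkstra: minimal cumulative crossing cost from a reservoir cell
    to every other cell (4-neighbour moves)."""
    rows, cols = len(grid), len(grid[0])
    cost = {source: grid[source[0]][source[1]]}
    heap = [(cost[source], source)]
    while heap:
        c, (r, k) = heapq.heappop(heap)
        if c > cost.get((r, k), float("inf")):
            continue  # stale heap entry
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols:
                nc = c + grid[nr][nk]
                if nc < cost.get((nr, nk), float("inf")):
                    cost[(nr, nk)] = nc
                    heapq.heappush(heap, (nc, (nr, nk)))
    return cost

# Energetic cost of moving from the reservoir at (0, 0) to every cell.
costs = cumulative_cost(RES, (0, 0))
print(costs[(3, 4)])  # cheapest route detours around the high-resistance cells
```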
Fragmentation
Natural habitats are considered fragmented when their distribution is discontinuous and patches are separated by ecological barriers, mostly anthropic land cover types and transportation networks [85]. Fragmented habitats have a reduced availability for species, especially in an urban context. Indeed, sound, odors or light might prevent certain species from living at the margins of their natural habitat if it is surrounded by human-made infrastructures [86]. Furthermore, connected habitats favor species' movements and migrations. One method to model and map habitat fragmentation is to calculate the MESH size, which corresponds to the probability of two randomly picked pixels belonging to the same habitat patch [87]. To do so, the LULC map of the territory was transformed into two binary values, 1 for anthropic ecological barriers and 0 for (semi-)natural habitats. This raster was then used as input in the software Fragstat (v4.2) [88], and the "Effective mesh size (MESH)" program in the "Aggregation" window of the "Class metrics" category was selected. The moving window sampling strategy was used with a round radius of 200 m and a maximum of 50% border with no data. The resulting map was then modified to show the continuity of the natural habitats, which is the exact opposite of the fragmentation.
Soil Permeability
In an urban context, the environment is mostly impermeable, preventing water from being absorbed into the soil, which increases the risk of flooding. This impermeability is mostly due to the use of concrete. Thus, saving permeable habitats is a nature-based solution to mitigate the effects of extreme weather events, maintain ecological functions linked to the water cycle and favor soil biodiversity. To map the permeability of the study area, the categories "highways", "roads" and "dense urban areas" were considered impermeable, while the other categories were permeable. This permeability layer favors the conservation of natural habitats in opposition to highly anthropic LULC categories.
Naturality
Naturality corresponds to the ecological quality of a habitat. A very anthropic LULC category would have a low naturality, while a highly diverse, well-managed habitat with low disturbance would have a high naturality. This index allows ranking of the habitats according to their intrinsic quality in the study area. Using experts' knowledge, all LULC categories were assigned a value between 1, corresponding to a very low naturality for urban areas, and 5, for the most interesting habitats. Then, a spatial focal statistic was applied to the raster using a 200 m radius to smooth the values and avoid class boundary effects. The map was exported at 25 m resolution.
Diversity of Natural Habitats
A high diversity of natural habitats favors a high species richness in a territory, especially when these habitats are equally distributed. Thus, the Shannon index [89] was used to calculate the diversity of natural habitats in the study area. To do so, the LULC categories were aggregated into seven classes based on their similarity, without considering dense urban areas and transportation. The classes are the following: meadows, lightly urban areas, natural cliffs and rocks, disturbed vegetation, forests, agriculture and wetlands. The model was run using Fragstat (v4.2), selecting "Shannon's diversity index (SHDI)" in the "Diversity" window of the "Landscape metrics" class. The moving window sampling strategy was used with a round radius of 200 m and a maximum of 50% border/no data.
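A minimal numerical sketch of the Shannon diversity computation applied within each moving window; the window contents below are hypothetical:

```python
import math
from collections import Counter

def shannon_diversity(cells):
    """Shannon diversity H' = -sum(p_i * ln p_i) over the class
    proportions found inside one moving window."""
    counts = Counter(cells)
    n = len(cells)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical window contents drawn from the seven aggregated classes.
window = ["forest"] * 10 + ["meadow"] * 6 + ["wetland"] * 2 + ["crops"] * 2
print(round(shannon_diversity(window), 3))  # higher = more diverse window
```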
Core Areas
This input is complementary to the diversity of natural habitats because it identifies patches of habitat that are large enough to have a central core area free from the influence of neighboring habitats (edge effect). Indeed, some species need large areas of the same habitat to thrive. However, the influence of the neighboring habitats varies depending on their naturality and intrinsic characteristics. For example, an anthropic LULC category has a strong influence that can penetrate deeper into the natural habitat in the form of olfactive, chemical or light pollution. On the other hand, natural habitats have a lower influence between themselves, and their characteristics have a lower level of penetration. A core area then corresponds to an area free from any edge effect.
The modeling of the distribution of core areas necessitates two tables. The first one aggregates the habitats into categories based on their similarity. The second is a penetration matrix linking each category with each other through a distance of influence. The distance must be written using the raster's metric system and be a multiple of its resolution. Here, the resolution of the raster is 25 m, so only multiples of 25 are accepted in the penetration matrix. The values used were based on experts' knowledge and calibration tests. Then, core areas were mapped with Fragstat (v4.2) using "Core Area Median" in the "Core Area" window of the "Landscape Metrics" class. The moving window sampling strategy was used with a round radius of 200 m and a maximum of 50% border/no data.
Spatial Conservation Prioritization
Spatial conservation prioritization allows identification and mapping of the optimal compromise between the layers used as inputs, considering their allocated weights and other settings. We used the additive benefit function of Zonation 5 [43] to prioritize each pixel of the study area. The process starts by ranking all the pixels of the study area according to their ecological interest, then iteratively removes cells with the smallest marginal loss in terms of conservation value [43]. The additive benefit function prioritizes areas where many inputs have a high ecological value, thus selecting pixels with high richness over those with rare features. The resulting map is a raster in which pixels' ranks range between 0, for low conservation value, and 1, for the relative highest ecological interest. Inputs should follow the same logic in their pixel values: the most interesting areas that should be conserved have a high value, while low-quality areas have a low value.
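A heavily simplified sketch of the additive-benefit ranking idea follows; this is not Zonation itself, which also handles connectivity penalties and iterative cell removal, and all layers and weights here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical input layers on a tiny 4x4 grid, plus their weights.
layers = rng.random((3, 4, 4))       # values already scaled to [0, 1]
weights = np.array([2.0, 1.0, 1.0])  # e.g., species layer weighted higher

# Additive benefit: weighted sum of feature values per cell.
benefit = np.tensordot(weights, layers, axes=1)

# Rank cells from lowest to highest value; rescale ranks to [0, 1] so that
# 1 marks the relatively most important cells for conservation.
order = np.argsort(benefit, axis=None)
rank = np.empty(benefit.size)
rank[order] = np.linspace(0, 1, benefit.size)
priority = rank.reshape(benefit.shape)

# The "best 30%" of this toy grid would then be: priority >= 0.7.
print((priority >= 0.7).sum(), "of", priority.size, "cells selected")
```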
The previously mentioned inputs were classified into four classes corresponding to the four pillars and attributed a weight depending on their quality and capacity to identify highly relevant areas for conservation. In total, 2437 inputs were used in the prioritization process, mostly species habitat suitability maps. The details of the inputs and their associated weights are found in Table 2. The weights were established empirically by several trials that were discussed among the authors and with the stakeholder group. The study area is composed of three administrative entities: the French territory named "Pôle Metropolitain", the Canton of Geneva and the District of Nyon in the Canton of Vaud. We compared the output of an analysis covering the entire region with an analysis combining the results of three subregions. According to the stakeholders, the final GI network should cover 30% of each of these three regions, even though their size and ecological quality differ. To do so, the prioritization was made over the whole territory, but the identification of the best 30% was made on the separate entities. As the Canton of Geneva already had a GI identified in a previous work with the same methodology, it was integrated into the final results to preserve the coherence with the previous analysis.
Identification of the Habitats
The LULC map presented in Figure 3 consists of a raster of 25 m resolution and 27 categories. It represents broad natural and semi-natural habitats as well as various land uses and urban environments. Having access to a homogenous, precise and well-covering LULC map is fundamental for the analysis carried out in this work, but also for having a common baseline between stakeholders and the various administrative entities, especially in a cross-border territory.
Maps Used as Inputs to the Final Prioritization
The inputs used in the prioritization process represent interesting results by themselves because some areas might not be selected in the final GI while representing a high interest for one of the inputs. The 2437 maps calculated and modeled were aggregated into 41 maps, available for consultation by local authorities and planners at the following link: https://sitv-qual.geneve.ch/portal/apps/webappviewer/index.html?id=f5865a0162d64efb8434644372c022c6 (accessed on 1 July 2023).
All the input variables and the outputs of the analysis are available on the Yareta system for scientific archives (Supplementary Material S1).
Biodiversity Diagnosis and GI
The main output of Zonation is the biodiversity diagnosis of the study area and is presented in Figure 4. This map represents the ranking of all pixels according to their ecological interest based on the inputs. The mountainous regions located in the north-eastern and south-western parts of the territory seem of particular interest for conservation, which was expected because the anthropic pressure is lower in these areas compared to the lowlands, located at the center of the map. Urban centers are highly visible, especially the agglomeration of Geneva located in the middle of the study area, where the conservation value of the cells is low. Forested areas in the lowlands also seem of high interest.
From this covering map, the top 30% of pixels from the French administrative entity as well as from the Swiss District of Nyon were independently extracted and then merged together with the GI of the Canton of Geneva identified in a previous work. This methodology, presented in Figure 5, allows identification of the GI for each administrative entity as well as for the whole territory, to fit all political agendas.
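The per-entity extraction of the best 30% described above can be sketched as follows; the priority grid and the entity mask are synthetic stand-ins for the Zonation output and the administrative boundaries:

```python
import numpy as np

rng = np.random.default_rng(2)

priority = rng.random((100, 100))        # output rank of each 25 m pixel
entity = rng.integers(1, 4, (100, 100))  # 1, 2, 3 = administrative entities

gi = np.zeros_like(priority, dtype=bool)
for ent in (1, 2, 3):
    mask = entity == ent
    # Threshold at the 70th percentile *within* the entity, so each
    # entity contributes its own best 30% to the merged GI network.
    threshold = np.quantile(priority[mask], 0.70)
    gi |= mask & (priority >= threshold)

# Each entity now has ~30% of its pixels in the GI.
for ent in (1, 2, 3):
    share = gi[entity == ent].mean()
    print(f"entity {ent}: {share:.1%} of territory in GI")
```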
The GI identified is visible in Figure 6. It is mostly made up of large, well-connected patches with many smaller isolated patches orbiting around the main core areas. Most of the GI is covered by closed forests (35%) and natural meadows (25%), while cultivated and diffuse urban areas both cover 16% of the GI, wetlands only 5% and open forests 3%. Closed forests thus represent more than one-third of the GI, which is not surprising because they cover a similar portion of the whole study area, meaning that they are not over-represented. However, natural meadows only cover 14% of the study area, which means that they are over-represented in the GI, as are the wetlands, which only cover 1.5% of the study area. These habitats are thus highly important for biodiversity conservation in the region. On the other hand, cultivated areas are under-represented because they cover more than one-fifth of the study area. Another way of verifying which habitat is especially important for conservation is to look at the proportions of these habitats integrated into the GI. As previously mentioned, slightly more than one-quarter of all closed forests is integrated in the GI, while this proportion rises to 53%, 97% and even 100% of all natural meadows, open forests and wetlands, respectively. This is a clear signal that these habitats are crucial elements of a functional, effective and reliable network of conservation.
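The over- and under-representation reasoning above can be made explicit as a ratio between a habitat's share of the GI and its share of the whole study area; the values below simply echo the figures reported in the text:

```python
# Shares echoing the proportions reported above (GI vs. whole study area).
share_in_gi = {"closed_forest": 0.35, "natural_meadow": 0.25, "wetland": 0.05}
share_in_area = {"closed_forest": 0.35, "natural_meadow": 0.14, "wetland": 0.015}

for habitat in share_in_gi:
    ratio = share_in_gi[habitat] / share_in_area[habitat]
    status = "over-represented" if ratio > 1 else "not over-represented"
    print(f"{habitat}: representation ratio {ratio:.2f} ({status})")
```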
Figure 6 also shows the proportion of the GI that is already protected. This result demonstrates that there is still a lot of effort to be made to preserve the areas with a high biodiversity value in the region for the benefit of future generations, in a context of steady economic and population growth. As the best identified 30% of the territory will probably not obtain a legal status of protection, other effective area-based conservation measures (OECM) should be considered for their preservation, as proposed by the IUCN [90].

The GI was initially identified by selecting the best 30% of the main Zonation output pixels over the whole study area, but the GI cover over the three territorial entities was disequilibrated because the main natural areas are found in the mountains located on the French side. This initial version would represent the best possible areas to conserve over the whole study area, as no modification is brought to the result of the prioritization process. However, with the methodology presented here, each entity has its own 30% GI cover in its territory. This is more equitable for the administrative entities and valuable for spatial conservation planning in order to reach the objective of dedicating 30% of each territory to nature's conservation. The aggregated version of the GI in the study area redistributed cells mainly from the patches in the mountainous areas to the more urban and cultivated areas in the lowlands, as presented in Figure 7.
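How much the per-entity aggregation changes the network relative to the initial whole-area optimum can be quantified by the spatial overlap of the two versions, discussed next. A minimal sketch (hypothetical variable names, and assuming the overlap is expressed as the intersection relative to each version's area; the paper's exact metric is not reproduced here):

import numpy as np

def overlap_shares(initial, aggregated):
    # initial, aggregated: boolean rasters of the two GI versions
    inter = np.logical_and(initial, aggregated).sum()
    return inter / initial.sum(), inter / aggregated.sum()

# Synthetic demo: start from one 30% network and perturb a few percent of cells
rng = np.random.default_rng(2)
initial = rng.random((200, 200)) < 0.30
aggregated = initial ^ (rng.random((200, 200)) < 0.04)
print(overlap_shares(initial, aggregated))  # a value of ~0.88 would match the 88.1% reported below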
Interestingly, most of the pixels that were removed from the initial version are either isolated or located at the margins of the main patches. Thus, the GI distribution of the aggregated version does not fundamentally change compared to the first version. This result means that the main patches identified in the initial "optimal" version are still included in the aggregated version. Thus, the conservation value of the aggregated GI network is still similar to the initial version, which is verified by the high spatial overlap of 88.1% between the two versions. In the new version, many areas (in red) appear in the two Swiss entities while only a few (in blue) seem to be deleted. This is only an impression because the surfaces are similar, but the lost surfaces in blue are mostly made of isolated pixels while the gained areas in red are aggregated in patches. The overlap between the two versions (in yellow) is high.

Selection of Inputs

The inputs used in this methodology were selected based on their complementarity, to avoid collinearity as much as possible, their representativity of the natural processes, and their ease of being reproduced and explained to the authorities in charge of implementing the results. The final selection is the result of the separate assessment of each pillar in order to test the most reliable inputs for the local context and with the data available [26,40,52].
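A simple way to screen candidate inputs for the collinearity mentioned above is a pairwise Pearson correlation over the raster cells. The sketch below uses illustrative layer names and synthetic data; the |r| > 0.7 threshold is a common rule of thumb, not a value from the paper:

import numpy as np

def correlation_matrix(layers):
    # layers: dict of equally shaped numpy rasters
    names = list(layers)
    stack = np.vstack([layers[n].ravel() for n in names])
    return names, np.corrcoef(stack)

rng = np.random.default_rng(1)
base = rng.random((50, 50))
layers = {"habitat_quality": base,
          "carbon_storage": 0.8 * base + 0.2 * rng.random((50, 50)),
          "connectivity": rng.random((50, 50))}
names, corr = correlation_matrix(layers)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.7:   # illustrative threshold
            print(f"high collinearity: {names[i]} vs {names[j]} (r = {corr[i, j]:.2f})")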
The distribution of natural habitats is key information for the modeling of the majority of inputs. Having access to a high-resolution LULC map with many detailed categories is fundamental for this work, and its quality determines the accuracy of the resulting GI. Identifying many natural habitats allows consideration of each one of them in the prioritization process and thus ensures their representativity and conservation in the GI. However, for most of the inputs and species distribution models, eight categories are sufficient [52]. All species of plants and animals for which enough precise occurrences were available were modeled, with no distinction for their native status. However, a higher weight was given to species considered vulnerable, endangered or critically endangered in one of the red lists of the three territories. The inclusion of as many species as possible in the prioritization process creates a more reliable GI network by integrating all available distributions.

The selection of the ecosystem services was mainly based on their modeling availability in the software InVEST, coupled with the perspective of conserving biodiversity. Many other ecosystem services were available but were linked with the production of resources and energy, or with the cultural value of the landscape, which might have deleterious outcomes for nature's conservation. Although conserving highly biologically diverse areas usually has a positive influence on the preservation of qualitative and quantitative ecosystem service supplies, the opposite is not true [42,91]. The modeling of ecosystem services depends highly on the values given to the settings and to the biophysical tables, as well as on the quality of the input maps. Specific values for the LULC categories used in the biophysical tables are mostly impossible to find in the literature, especially when they have to be adjusted to the local context. Furthermore, many settings require relative values, which depend on the characteristics of the inputs and of the territory. Thus, expert knowledge is highly valuable in this type of work: experts should examine the inputs with care and verify the credibility of the resulting maps to iteratively calibrate the settings and tables.

The functional connectivity of the landscape for the species Cervus elaphus L. was calculated with the help of GPS trackers placed on several individuals [81,92]. The data collected made it possible to define a resistance matrix based on the species' habitat preferences, and thus represent a fundamental input for modeling the global connectivity and constrained zones. This has not been carried out for the two other species, whose inputs were based on expert knowledge [82]. It would be highly interesting to use GPS trackers for more individuals and species, but this method is expensive and not always applicable, especially for small animals. However, to better understand the functional connectivity of the territory, more animals could be studied, such as amphibians, reptiles or small mammals, although the results for large mammals might serve as an umbrella for other species. Even though small animals' movements do not reach the full extent of the study area, assessing their connectivity might also result in identifying corridors at a finer scale, resulting in a better representativeness of animal connectivity in the prioritization process. This assessment, however, is more challenging because of the lack of fine-scale data and its computational requirements.
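The resistance-matrix idea used for functional connectivity can be sketched as a simple mapping from LULC categories to movement costs. The eight category codes and cost values below are illustrative assumptions, not the calibrated values derived from the GPS data:

import numpy as np

# illustrative cost table: low = permeable habitat, high = near-barrier
RESISTANCE = {1: 1,    # closed forest
              2: 2,    # open forest
              3: 5,    # natural meadow
              4: 10,   # cultivated
              5: 50,   # diffuse urban
              6: 100,  # dense urban
              7: 30,   # water
              8: 3}    # wetland

def resistance_surface(lulc):
    out = np.full(lulc.shape, np.nan)
    for code, cost in RESISTANCE.items():
        out[lulc == code] = cost
    return out

lulc = np.array([[1, 1, 3], [4, 5, 3], [4, 6, 8]])
print(resistance_surface(lulc))

Least-cost-path or circuit-theory tools can then be run on such a surface to derive corridors and constrained zones.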
Prioritization

The prioritization process was carried out over the whole territory, but the best 30% was selected for each administrative entity. One limit of this approach could be the loss of connection at the edges of the three territorial entities, because the methodology does not consider that one patch should be entirely selected if its area is located on both sides of a border. This would be an important loss for the global structural and functional connectivity of the network, as borders have no impact on species movements. However, this problem was not observed here, which could be explained by the fact that most of the borders are in the urbanized lowlands. In these areas, natural habitats with high ecological values are not so common and, thus, are easily identified by the methodology. This implies that the cross-border patches of natural habitats are selected on both sides of the border. Another reason that could explain this pattern is that the habitats located in the highlands and the lowlands are different. For example, deciduous forests are mainly found in the lowlands and coniferous forests in the highlands. Thus, the prioritization process would preferentially select patches of the same habitat, especially if it is not found anywhere else in the territory. This means that having access to a detailed LULC map including many natural habitats might prevent the final network from being too heavily impacted by the spatial limits of the analysis.

The GI identified here covers the most interesting areas for nature's conservation over the whole study area, considering the local context and administrative entities. However, this does not imply that areas located outside of the GI should not be considered for protection as well. This result is an overview of an optimal protection network to conserve all aspects of the biodiversity and should be seen as a common aim for the territory. At a smaller scale, the conservation of natural habitats is fundamental because they could host important functions and diversity at this scale that are not represented, or that are repetitive, at the regional scale of the study area. Nevertheless, identifying and protecting a network of natural habitats is only part of the solution to halt biodiversity loss and should be complemented, for example, by lowering our impacts outside protected areas. Indeed, it is preferable to conserve, protect and restore more rather than not enough.

Perspectives

The proposed GI is still theoretical, and its effective implementation raises questions, especially regarding the legislation and the inclusion of private lands. There are already many protected areas with various appellations and legal bases in this cross-border area. Integrating conservation areas into the GI is an interesting idea, but the selection of which types of conservation areas are integrated or not should be discussed among the stakeholders, as some of them do not have a strict legal basis. However, the integration of such large patches of protected areas might change the final distribution of the GI through the prioritization process, because most protected lands are made up of only a few natural habitats. Another solution would be to use the GI as informative data for spatial and urban planning to avoid the destruction of areas of high ecological interest, but the effective positive impact of the GI on nature's protection might then be lower.
The weighting used in the prioritization process was selected according to the value and interest attributed to the inputs by a group of experts. However, a more inclusive approach might be interesting, using, for example, the "best-worst method" based on the multiple-criteria decision-making process [93]. This method allows sorting of the inputs from the best to the worst one and vice versa based on their conservation value. At the end of the process, a weight is automatically calculated and can be attributed to each input and each pillar. This approach is interesting but should be used with caution. Indeed, being able to rank the interest given to maps representing ecological processes is a complicated task and necessitates knowledge about the study area, its ecology and ecosystems, as well as about the method used to calculate the inputs, which contributes to evaluating their quality. Thus, the inclusion of the stakeholders is highly important, and they should be part of the process and decisions to model and map the inputs in order to fully understand their intrinsic meaning.

Conclusions

Most of the complex analyses presented in this work rely on the cross-border map of natural habitats that was first created for the Greater Geneva region. Without this common input, the rest of the analyses would have been much more difficult. All the environmental data (topography, water and climate, species, etc.) by nature cross borders. Most of the difficulties faced during this work are linked to the cross-border situation of the study area, which means that a lot of work is still needed to share common data, classifications and methods at a larger scale to allow such analyses. However, this article proves that innovative insights, positive for the regional biodiversity, can emerge when the scientific community and the stakeholders from the different administrative entities work together. Indeed, the resulting maps from this article were immediately transferred to the land use planners in charge of developing ambitious visions of the "Greater Geneva" territory for 2050, in alignment with 10 objectives of ecological transition as recently agreed and signed by local authorities [41].

This article has shown that the territory of "Greater Geneva" could be classified from its best to its worst pixel, in terms of ecological value, using a prioritization process based on many inputs grouped into four main pillars. This approach is very useful to identify the best areas that should be considered as hotspots of biodiversity and ecosystem services, but also to identify coldspots that can be good candidates for potential urban developments or for ecological restoration. The results have shown that selecting the best 30% of each administrative entity of the study area was better accepted by stakeholders and did not fundamentally change the quality of the GI. The share of the identified GI that is already under protection is relatively low, demonstrating that much effort would be needed to reach the international target of 30% by 2030. However, the use of OECMs as advocated by the IUCN could be a good solution to provide a status to the newly identified areas.

The GI identified in this work is the result of several years of research on the pillars and inputs on the same territory. The study area is well prospected and working together with naturalists, experts and the authorities in charge of the ecological conservation of the
Figure 1.Workflow of the method used to calculate the green infrastructure. Figure 1 . Figure 1.Workflow of the method used to calculate the green infrastructure. Figure 2 . Figure 2. Details on the methodology to create the cross-border LULC map of the study area.Figure 2. Details on the methodology to create the cross-border LULC map of the study area. Figure 2 . Figure 2. Details on the methodology to create the cross-border LULC map of the study area.Figure 2. Details on the methodology to create the cross-border LULC map of the study area. Figure 3 . Figure 3. LULC map of Greater Geneva made from a compilation of local LULC information. Figure 3 . Figure 3. LULC map of Greater Geneva made from a compilation of local LULC information. 26 Figure 4 . Figure 4. Main Zonation output ranking of the pixels of the study area according to their conservation value.A high score corresponds to a high conservation value. Figure 4 . Figure 4. Main Zonation output ranking of the pixels of the study area according to their conservation value.A high score corresponds to a high conservation value. Figure 4 . Figure 4. Main Zonation output ranking of the pixels of the study area according to their conservation value.A high score corresponds to a high conservation value. Figure 5 . Figure 5. Overall methodology to identify the best 30% of the territory and of each administrative entity. Figure 5 . Figure 5. Overall methodology to identify the best 30% of the territory and of each administrative entity. Figure 6 . Figure 6.The green infrastructure of the territory representing the best 30% of the main Zonation output for each administrative entity.The light green shows the areas integrated into the green infrastructure that is already protected. Figure 6 . Figure 6.The green infrastructure of the territory representing the best 30% of the main Zonation output for each administrative entity.The light green shows the areas integrated into the green infrastructure that is already protected. Figure 7 . Figure 7. Differences and overlap between the initial version and the aggregated version.In the new version, many areas (in red) appear in the two Swiss entities while only a few (in blue) seem to be deleted.This is only an impression because the surfaces are similar but lost surfaces in blue are mostly made of isolated pixels while the gained areas in red are aggregated in patches.The overlap between the two versions (in yellow) is high. Figure 7 . Figure 7. Differences and overlap between the initial version and the aggregated version.In the new version, many areas (in red) appear in the two Swiss entities while only a few (in blue) seem to be deleted.This is only an impression because the surfaces are similar but lost surfaces in blue are mostly made of isolated pixels while the gained areas in red are aggregated in patches.The overlap between the two versions (in yellow) is high. Table 1 . Predictive variables used in the SDM. Table 2 . Summary of all the inputs used in the prioritization process in Zonation 5 and their associated weight. Table A2 . Guild table used for the InVEST pollination model. Table A3 . Biophysical table used for the InVEST carbon storage model. Table A4 . Biophysical tables used for the InVEST sediment and nutrient delivery models (SDR and NDR).
2023-11-04T15:15:51.306Z
2023-11-02T00:00:00.000
{ "year": 2023, "sha1": "652edcdd11ef7fb902731264b3ba516d66bd653f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-445X/12/11/2010/pdf?version=1698921756", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "545f096edd9a638f1fbef45be2464afa9e171d28", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [] }
207832545
pes2o/s2orc
v3-fos-license
Asymmetrical Flux Density Distribution in Stator Teeth of Surface Permanent Magnet Machines

This work shows in detail the flux density behaviour in the stator teeth of a synchronous machine. A 3-phase Surface Permanent Magnet (SPM) motor is considered. These motors are widely employed in applications where high efficiency and power densities are required. This paper aims to analytically demonstrate the asymmetrical distribution of the stator teeth flux density. It is shown that this phenomenon depends on the number of slots per pole and per phase in the machine. Finally, a comparison with Finite Element results is given to validate the effectiveness of the proposed model.

I. INTRODUCTION

For any machine design, the computation of the iron losses is a very important aspect for improving the machine efficiency and thermal management [1]. This is strictly related to the estimation of the flux density distribution and its behaviour with respect to different harmonic components. Several works in the literature propose techniques to minimise the iron losses, both analytically and via finite element optimisation. In [2], a method for reducing the harmonics due to the permanent magnets is proposed. Therefore, flux density analysis is essential to estimate and minimise the iron losses at the design stage [3]-[5]. Other works [6] implement complex subdomain models to predict the flux density within the stator core, always considering symmetrical behaviour [7]. In this paper, the flux density distribution in different stator teeth has been analysed in detail, considering a distributed winding with a single layer and full pitch. Such a winding configuration is often chosen in fault-tolerant electrical machines [8], [9]. First, an analytical solution has been implemented in Section II to identify possible asymmetries in different parts of the stator core. This has been compared with Finite Element results, presented in Section III, for validation. Finally, a summary of the paper outcomes is offered in the conclusions.

II. ANALYTICAL MODEL

The model proposed in this paper is based on the assumption of radial flux density in the air-gap, no slotting effect and linear materials [10]. Under these hypotheses, the magnetic field in the air-gap can be evaluated for each angular position (ϑ) in terms of a Fourier series, (1), where H̄_ρ (H̄_ρ = H̄_{S,ρ} + H̄_{PM,ρ}) is the ρ-th Fourier series component of the spatial distribution of the magnetic field in the air-gap. The ρ-th harmonic contribution produced by the stator currents, H̄_{S,ρ}, is defined in (2), where N is the number of turns per phase, q the slots per pole and per phase, δ the air-gap thickness and K_{aρ} is the winding factor for the ρ-th field harmonic. ī_S^ρ is the ρ-th current space vector, defined in (3) by the Clarke transformation of the currents of the three-phase winding (U-V-W) considered, where j is the imaginary unit (j² = −1). Because the zero-sequence current is null, due to the star connection of the three-phase winding, it is possible to write the relationships among the space vectors given in (4). Finally, the rotor magnets' contribution to the ρ-th harmonic of the field in the air-gap, H̄_{PM,ρ}, can be expressed with good approximation as in (5), where B_r is the remanent flux density, δ_{PM} the magnet thickness, μ_{PM} the magnet permeability, Δ_{PM} the magnet angular width, and θ_r the rotor position in electrical radians. Table I summarises the main machine parameters.
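To make the space-vector definition in (3) concrete, the following sketch applies the standard amplitude-invariant Clarke transformation (the 2/3 scaling is the common convention; the paper's exact normalisation is not reproduced here) and checks that a balanced, star-connected three-phase set has a null zero-sequence component:

import cmath, math

def clarke(i_u, i_v, i_w):
    a = cmath.exp(1j * 2 * math.pi / 3)      # 120-degree rotation operator
    return (2.0 / 3.0) * (i_u + a * i_v + a * a * i_w)

Ipk, wt = 683.0, 0.4                          # peak current quoted in the text; arbitrary instant
i_u = Ipk * math.cos(wt)
i_v = Ipk * math.cos(wt - 2 * math.pi / 3)
i_w = Ipk * math.cos(wt - 4 * math.pi / 3)
print(abs(clarke(i_u, i_v, i_w)))             # = Ipk with this scaling
print(i_u + i_v + i_w)                        # ~0: null zero-sequence current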
The air-gap flux density for the considered SPM machine, when the rotor is aligned with the magnetic axis of phase U (θ_r = 0), is shown in Fig. 1. The figure highlights the two components of the flux: the one produced by the magnets, and the flux generated by the stator currents when the machine is fed with its rated current (ī_S^ρ = j·i_q·e^{jϑ_r}, with i_q = 683 A_pk). The total flux density is obtained as the sum of the two components.

In order to evaluate the flux density in each stator tooth, the proposed model considers that all the flux under the slot pitch crosses the air-gap and flows through the same tooth. Therefore, the analytical evaluation of the tooth flux is provided for each T-th tooth by (6), with ϑ_T = 2πp(T−1)/N_slots the angular position of the centre of the T-th tooth (N_slots is the overall slot number of the stator), 2πp/N_slots the angular pitch between two neighbouring slots (or teeth), L the active length of the machine, and R_g the middle radius of the air-gap. Substituting (1), with (2) and (5), in (6), it is possible to define the T-th tooth flux in terms of a Fourier series, (7), with the coefficients given in (8). Under the assumptions of the model, the flux density related to one tooth can be evaluated as in (9). Eq. (9) can be rewritten in terms of a Fourier series, with the stator contribution (11) and the rotor contribution (12), K_{PMρ}·e^{jρϑ_r}·e^{−jρϑ_T} = K_{PMρ}·e^{jρ(ϑ_r−ϑ_T)}, where K_{Sρ} and K_{PMρ} are constants depending on the harmonic order.

At steady-state operation, with an angular speed of the rotor ω (ϑ_r = ωt), the flux density in each T-th tooth varies in time according to (14). From the first term of (14) it is possible to note that the magnets generate in each tooth infinite time harmonics (at angular frequency ρω). These present the same amplitude in each tooth, and they are shifted with a fixed time delay dt = ϑ_T/ω. Therefore, the behaviour of the teeth is completely symmetrical in time. Instead, the flux density contribution produced by the currents in each tooth is composed of a sum of sinusoidal terms, all at the same angular frequency (ω). In this case, every component presents a time delay, from one tooth to the next, which depends on the field harmonic order (space harmonic): dt = ±ρϑ_T/ω. Therefore, the resulting flux density component is sinusoidal at angular frequency ω, but it presents a different magnitude and phase depending on the considered tooth. It is worth noting that under the assumption of a sinusoidally distributed winding (i.e., a sinusoidal flux distribution of the armature field in the air-gap), the latter phenomenon does not appear, and (14) can be simplified as in (15). This highlights that the asymmetrical distribution of flux density in the stator teeth is due to the higher-order harmonics of the armature field.

In Fig. 2 the expected waveforms of the flux density produced by the magnets and by the stator currents are shown, respectively, for three consecutive teeth when the machine has q > 1 (in this case q = 3). The corresponding Fourier spectrum is given in Fig. 3 in terms of time harmonics with respect to the fundamental, with the current and magnet contributions. Whereas the contribution of each permanent magnet spatial field harmonic generates a different time harmonic, the ones produced by the stator result in flux density components at the fundamental frequency. The analytical model allows decoupling of the effect of each stator harmonic contribution generated at the air-gap, by (11) and (13), as shown in Fig. 4.
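The tooth-to-tooth phase shifts described above can be checked numerically. The sketch below (anticipating the simplification formalised in (16)) applies the shift ±ρϑ_T to the usual armature harmonic orders ρ = 6k ± 1, using the electrical tooth pitch ϑ_T = π(T−1)/(3q); the sign convention pairing forward harmonics (ρ = 6k+1) with −ρϑ_T and backward harmonics (ρ = 6k−1) with +ρϑ_T is an assumption consistent with the text:

import math

def tooth_phase_shifts(q, teeth=3, harmonics=(1, 5, 7, 11, 13)):
    for T in range(1, teeth + 1):
        theta_T = math.pi * (T - 1) / (3 * q)          # electrical tooth pitch angle
        row = []
        for rho in harmonics:
            sign = -1 if rho % 6 == 1 else 1           # forward vs backward harmonic
            shift = math.degrees((sign * rho * theta_T) % (2 * math.pi))
            row.append(f"rho={rho}: {shift:5.1f} deg")
        print(f"q={q}, tooth {T}: " + ", ".join(row))

tooth_phase_shifts(q=1)   # every harmonic lands on the same shift per tooth
tooth_phase_shifts(q=3)   # shifts spread out with rho -> asymmetrical teeth

For q = 1 all harmonics of a given tooth share one phase shift, so the per-tooth resultants differ only by a common delay; for q = 3 the shifts disagree between harmonics, and the resultant magnitude and phase change from tooth to tooth.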
The same analyses are carried out for a machine with one slot per pole and per phase (q = 1): in Fig. 5 the flux density waveforms produced by the currents and the magnets are shown, respectively, Fig. 6 displays the corresponding Fourier spectra, and Fig. 7 shows the stator spatial harmonic contributions. As expected from the analytical model, the only case with the harmonic contributions equally shifted among the teeth is q = 1. The angular phase shift of the flux density in one tooth caused by the ρ-th space harmonic (ρϑ_T) can be rewritten considering that ρ is equal to 3n ± 1, with n an even number (n = 2k), and N_slots = 6pq. Therefore, it is possible to rewrite the angular phase shift as

ρϑ_T = (3n ± 1)·2πp(T−1)/(6pq) = 2kπ(T−1)/q ± π(T−1)/(3q). (16)

It can be seen that the first term in (16) is a multiple of 2π only when q = 1, while the second term does not depend on the considered space harmonic. It follows that q = 1 is the only solution for which the phase shift is not affected by the harmonic order.

III. FEA VALIDATION AND RESULTS

In order to validate the analytical model, the obtained results have been compared with the flux density values determined by means of FE simulations. The comparison is carried out considering only the radial component of the flux density at the centre of each tooth. This is a good approximation for the analysis of asymmetries among different teeth, neglecting local effects. To compare the FE results with the analytical model, linear materials have been considered [11]. As a case study, two SPM machines, with q = 3 and q = 1 (parameters listed in Table I), have been considered. The two machine geometries are shown in Figs. 8 and 11, respectively. Firstly, the 36-slot machine (with q = 3) is analysed. Fig. 9 shows the comparison between FE and analytical results for the flux density distribution in all 36 stator teeth. The magnet and current contributions (presented at the top and bottom, respectively), when the rotor is at its initial position (ϑ_r = 0), are considered separately. Fig. 12 displays the same results obtained for the machine with 12 teeth (q = 1). To compare the behaviour for all rotor positions, Fig. 10 and Fig. 13 show the distribution of flux density in three consecutive teeth for one mechanical revolution of the rotor (ϑ_r = 0–360°), for q = 3 and q = 1, respectively. The presented results validate the qualitative behaviour expected from the analytical model, i.e., the presence of asymmetries in the flux density distribution among the stator teeth when q > 1.

IV. CONCLUSION

In this work, a detailed investigation of the flux density distribution in the teeth of an SPM machine has been carried out via analytical and FE analyses. The results highlight that in a three-phase machine with a distributed winding the flux density distribution is not symmetrical when the number of slots per pole and per phase is higher than one (q > 1). This aspect can be taken into account in the early stage of the machine design, when simplified analytical models are used to predict the initial geometrical parameters of the electrical machine. In particular, the presented model can be used to predict the effects of these parameters on the asymmetrical distribution of iron losses and saturation, which are related to the teeth flux density. The behaviour expected from the simplified analytical model has been validated against FE results for two examples of SPM machines with different numbers of slots per pole and per phase: q = 3 and q = 1.
It is found that the flux density in different teeth is asymmetrical for a machine with q = 3, while it is symmetrical for a machine with q = 1.

Fig. 11. SPM machine geometry with q = 1.
Fig. 12. Flux density for each tooth when the rotor is aligned with phase A (ϑ_r = 0) and the stator currents are controlled to generate the rated torque, for the machine with q = 1.
Fig. 13. Flux density comparison for three consecutive teeth between FEA and the analytical approach with q = 1.
2019-11-03T14:15:49.686Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "aad01e343e0e234117419d346960e7e006f52b7e", "oa_license": "CCBY", "oa_url": "https://nottingham-repository.worktribe.com/preview/4090981/Asymmetrical%20flux%20density%20distribution%20in%20stator%20teeth%20of%20surface%20permanent%20magnet%20machines.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "505ae6e54924a0ef8d5ce30e21decaa5d6aaff59", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
244657407
pes2o/s2orc
v3-fos-license
Analysis and Design Based on the Operation Mode of Power Electronic Transformer in Smart Grid

Power electronic transformers (PET) are key energy conversion equipment in the operation of modern smart grids; their main function is to convert AC voltage to AC voltage while also providing DC ports. This article studies a three-stage power electronic transformer based on a three-phase uncontrolled rectifier, a full-bridge isolated DC-DC converter and a three-phase inverter. The operation mechanism and actual working process of the three parts of the PET are analyzed, and the transformer is simulated and analyzed on the Matlab/Simulink simulation platform. The rectifier converts the AC voltage on the grid side into a rippled DC voltage; the DC-DC converter transforms the obtained DC voltage, providing access for the DC ports; the inverter converts the resulting DC voltage into AC voltage through unipolar modulation and connects to the grid. The experimental results show that the PET constructed in this way can operate safely and stably, has good voltage conversion and electrical isolation functions, and can be connected to DC loads. The method effectively verifies the accuracy and validity of the PET topology, which has a wide range of application prospects in smart grids. In addition to the functions of traditional transformers, PETs can also control current and voltage and improve power quality [7]. In today's era, new energy power generation has become a new trend, and the world is advocating energy saving, environmental protection and green power generation. PETs can also connect new energy generation systems to the power system and feed the generated electrical energy into the grid [8]. The study in [9] proposed a three-level PET whose topology uses a three-phase, three-level structure. In this type of transformer, the voltage that each switching component must withstand is only one-half of that in a two-level circuit, which increases the range of admissible input voltage levels and is better suited to high-voltage power distribution. At present, the application of PETs in power systems covers two aspects: distributed grid connection and improvement of power quality [10].

2. Rectifier

PETs are mainly divided into three parts, an AC-DC rectifier, a DC-DC converter and a DC-AC inverter, in order to realize AC-DC-AC voltage conversion [11]. The rectifier is the input stage of the entire PET. Since the input during grid operation is a three-phase voltage, a three-phase three-wire structure is adopted, and the rectifier internally adopts an H-bridge full-bridge structure. The bridge arm connected to each phase voltage is composed of two diodes in series, and a capacitor and a resistor are connected at the output of the three bridge arms [12]. The topological structure of the rectifier is shown in Figure 1: each phase of the three-phase supply feeds one bridge arm, with an upper and a lower diode per arm; C is the filter capacitor for the output DC voltage, and R is the equivalent load. The goal is to output 1000 V direct current; the input three-phase voltage is 730 V, C is 0.0033 F, and R is 10 Ω.
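As a rough sanity check of the 730 V to ~1000 V sizing (not from the paper), the ideal six-pulse diode bridge gives an average output of Vdc = (3·sqrt(2)/π)·V_LL; treating the stated 730 V as the line-to-line RMS value is an assumption, since the paper does not say whether it is a phase or line quantity:

import math

V_LL = 730.0                                     # assumed line-to-line RMS input
V_dc = 3 * math.sqrt(2) / math.pi * V_LL
print(f"ideal average DC output: {V_dc:.0f} V")  # ~986 V

The ~986 V ideal average is consistent with the 1000 V target once the capacitor filtering toward the envelope peaks (sqrt(2)·730 ≈ 1032 V) and the simulation tuning are taken into account.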
The rectifier operation is governed by the conduction characteristics of the diodes: the upper diode of the bridge arm connected to the phase with the highest instantaneous voltage conducts, and the lower diode of the bridge arm connected to the phase with the lowest instantaneous voltage conducts, thereby converting the input three-phase sinusoidal AC voltage into a DC voltage. The voltage ripple obtained in this way is relatively large, and an inductor connected in parallel with the bridge arms needs to be added for filtering. The capacitor smooths the pulsation of the DC voltage, thereby ensuring that the conversion yields a stable DC voltage. In order to obtain an output voltage of 1000 V, experiments showed that the input voltage needs to be adjusted to 730 V. The filtered voltage waveform is shown in Figure 2.

3. DC-DC Converter

As the second transmission stage of the PET, the DC-DC converter adopts a full-bridge isolated topology, with an H-bridge structure on each side [13]. Its topological structure is shown in Figure 3: the two H-bridges, built from MOSFET switches, exchange power through the auxiliary inductance L, which carries the inductor-side current; each bridge produces its own output voltage; buffer capacitors sit on the supply side, and R is the equivalent load. The input supply voltage is 1000 V, both buffer capacitors are 0.0033 F, L is 0.001 H, and R is 10 Ω. The equivalent circuit diagram of the full-bridge isolated DC-DC converter is shown in Figure 4. The two bridge voltage magnitudes are both positive, so the calculation shows that the sign of the active power depends on sin θ. When θ is between 0 and 180 degrees, the power is positive, and it reaches its maximum when θ equals 90 degrees. From the characteristic curve of the sine function, it can be concluded that when the two sources are connected, the source with the leading phase discharges (delivers power) and the source with the lagging phase is charged (absorbs power). The input signal of the converter is a square-wave signal, and the phase difference is controlled by the square-wave inverter. On the output side of the DC-DC converter, a filter capacitor also needs to be added. The voltage output waveform of the DC side is shown in Figure 5.
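The sin θ dependence described above can be illustrated with the fundamental-component model of phase-shift power transfer, P = V1·V2·sin θ/(ωL) (a textbook approximation, not the paper's exact derivation); the voltages and inductance follow the text, while the 1 kHz switching frequency is an illustrative assumption:

import math

V1 = V2 = 1000.0            # H-bridge fundamental voltage magnitudes (from the text)
L, f = 0.001, 1000.0        # auxiliary inductance from the text; frequency assumed
w = 2 * math.pi * f

def active_power(theta_deg):
    return V1 * V2 / (w * L) * math.sin(math.radians(theta_deg))

for th in (0, 30, 90, 150, 180):
    print(f"theta = {th:3d} deg -> P = {active_power(th) / 1e3:7.1f} kW")
# P > 0 for 0 < theta < 180 deg and peaks at theta = 90 deg,
# with the leading bridge supplying power to the lagging one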
4. Inverter

The inverter is the last stage of the PET. The function of this part is to convert the DC voltage into an AC voltage that can be used in the grid system. The input of the inverter is a DC voltage that must be output as an AC voltage, which requires a suitable input signal. A sine wave (the modulation wave) and a triangle wave (the carrier wave) are input and their amplitudes compared: when the sine wave amplitude is larger, the output is high level 1, and when the triangle wave amplitude is larger, the output is low level 0. This is Pulse Width Modulation (PWM). PWM adjusts the switches in the inverter circuit so that the multiple pulses obtained at the circuit output have the same amplitude, making the equivalent voltage of the pulse train trace the waveform of a sine wave, which is used in place of generating a sine wave directly. Since the PET is to be connected to the power grid, the three-phase output voltages must conform to the phase-difference rule of 360/N degrees. As there are three phases, N is 3, the phase difference is 120 degrees, and the three input signals are obtained by offsetting the modulation wave by 120 and 240 degrees.

The modulation mode of the inverter can be unipolar or bipolar. The inverter is connected through an H-bridge, with two switches on each bridge arm. In bipolar modulation, the two bridge arms share the same input signal: when the upper switch of one bridge arm is turned on, the lower switch of the other bridge arm is turned on, so the waveform obtained has both positive and negative polarity within each half cycle. In unipolar modulation, each bridge arm receives its own input signal, and the conduction of the switches on each bridge arm is independent of the other arms; the waveform obtained has only a single polarity in each half cycle [14]. Assuming a fundamental frequency of 50 Hz and a carrier frequency of 1000 Hz, both modulation methods were implemented, and the "Powergui" module of the Matlab/Simulink simulation platform was used to perform a Fast Fourier Transform Analysis (FFTA) on the two methods. The results are shown in Figure 6 (FFTA diagrams of bipolar and unipolar modulation). THD represents the proportion of harmonic distortion; a large value means large harmonics. The comparison shows that unipolar modulation performs better than bipolar modulation.

Since the output voltage needs to match the grid, a three-phase inverter is used in the PET to convert the constant DC voltage into an AC voltage after filtering [15]. The topology of the inverter, with its three-phase outputs, is shown in Figure 7. The inductances are 0.005 H, the resistances are all 10 Ω, and the DC input is 1000 V. In the three-phase inverter, each phase is connected through an H-bridge arm composed of two switches in series. Each bridge arm is connected to an equivalent load resistor and a filter inductance; the inductance and resistance are connected in parallel and act as a voltage divider. From the formula X = ωL = 2πfL, the reactance of the inductor is proportional to the inductance. If L increases, the inductor's share of the voltage increases, the generated harmonics become smaller, and the output voltage curve becomes smoother. However, an increase in the inductor's voltage share causes the output voltage to decrease, reducing the voltage amplitude across the equivalent resistance. The output voltage must also meet the voltage requirements for grid operation, so the inductance should be adjusted appropriately. At high frequencies the inductive reactance is larger than the load resistance, so the carrier frequency plays the major role; at low frequencies the result is the opposite. Therefore, the choice of inductance must meet two conditions, 2πf_c·L >> R at the carrier frequency f_c and 2πf_1·L << R at the fundamental frequency f_1, which simplify to R/(2πf_c) << L << R/(2πf_1). A suitable inductance can be obtained by taking an intermediate value within this range. After being processed by the inverter, the output AC voltage waveform is shown in Figure 8. The overall simulation system diagram of the PET is shown in Figure 9.
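Plugging the simulation values into the inductance window derived above confirms the 5 mH choice (a quick check, not from the paper):

import math

R, f1, fc = 10.0, 50.0, 1000.0        # load, fundamental and carrier frequencies
L_min = R / (2 * math.pi * fc)        # from 2*pi*fc*L >> R
L_max = R / (2 * math.pi * f1)        # from 2*pi*f1*L << R
print(f"{L_min * 1e3:.1f} mH << L << {L_max * 1e3:.1f} mH")  # 1.6 mH << L << 31.8 mH
# the 5 mH (0.005 H) inductance used in the simulation sits inside this window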
5. Conclusion

In order to build a PET with DC ports, this paper builds a three-stage PET based on a three-phase uncontrolled rectifier, a full-bridge isolated DC-DC converter and a unipolar-modulated three-phase inverter, and builds the simulation system of this transformer on the Matlab/Simulink simulation platform for simulation analysis and verification. Theoretical analysis and experimental results show that:
1. The three-phase uncontrolled rectifier can convert the AC voltage on the grid side into a DC voltage, and the resulting DC voltage has a small ripple, which makes it convenient to connect the full-bridge isolated DC-DC converter.
2. The full-bridge isolated DC-DC converter is mainly used for voltage conversion and electrical isolation. During operation, the power conversion and the charging and discharging functions are realized by controlling the phase difference of the voltages across the DC-DC converter.
3. The output voltage of the DC-DC converter is used as the input voltage of the three-phase inverter; the unipolar modulation method is adopted to control the inverter, which gives the AC voltage at the output a smaller ripple, and the obtained voltage is suitable for connection to the power grid.
4. When designing the inverter, it is necessary to add a filter inductance. The selection of the inductance is related to the frequencies of the fundamental wave and the carrier wave, and its value is usually within the range R/(2πf_c) << L << R/(2πf_1).
2021-11-26T20:06:36.726Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "8b9f0f2c404ee3f92a5571862aff57fa594a55c2", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/2108/1/012073", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8b9f0f2c404ee3f92a5571862aff57fa594a55c2", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
220470327
pes2o/s2orc
v3-fos-license
Steroids and Alzheimer's Disease: Changes Associated with Pathology and Therapeutic Potential

Alzheimer's disease (AD) is a multifactorial age-related neurodegenerative disease that today has no effective treatment to prevent or slow its progression. Neuroactive steroids, including neurosteroids and sex steroids, have attracted attention as potentially suitable candidates to alleviate AD pathology. Accumulating evidence shows that they exhibit pleiotropic neuroprotective properties that are relevant for AD. This review focuses on the relationship between selected neuroactive steroids and the main aspects of AD, pointing out contributions and gaps with reference to sex differences. We take into account the regulation of brain steroid concentrations associated with human AD pathology. Consideration is given to preclinical studies in AD models providing current knowledge on the neuroprotection offered by neuroactive (neuro)steroids against major AD pathogenic factors, such as amyloid-β (Aβ) and tau pathology, mitochondrial impairment, neuroinflammation, impaired neurogenesis and memory loss. Stimulating endogenous steroid production opens a new steroid-based strategy to potentially overcome AD pathology. This article is part of a Special Issue entitled Steroids and the Nervous System.

Introduction

Alzheimer's disease (AD) is the most common dementia of the elderly and remains one of today's biggest public health challenges. The main pathological features observed in the AD brain are loss of synapses at early stages of the disease, senile plaques of amyloid-β (Aβ) peptides, and neurofibrillary tangles (NFTs) containing hyper- and abnormally phosphorylated tau proteins, leading to progressive neuronal loss (cortical atrophy). One form of the disease is familial AD, a very rare, purely autosomal dominant disease with early onset before 65 years, which is caused by mutations in the amyloid precursor protein (APP), presenilin-1 (PS1) or presenilin-2 genes, all connected to Aβ accumulation. The other form is sporadic AD, or late-onset AD, which accounts for the majority of AD cases and is caused by environmental factors and genetic predisposition [1]. Soluble Aβ oligomers impact synaptic plasticity early in the pathological process [2,3]. Progressive synaptic dysfunction then impairs episodic/recent memory, followed by a slow decline in other cognitive abilities, accompanied by neuropsychiatric symptoms including apathy, anxiety and depression [4,5]. A large body of evidence suggests that there is a long prodromal infraclinical phase during which pathological changes begin decades before plaques and tangles are formed [6]. Besides aging, which is considered the greatest risk factor for AD, the apolipoprotein E (ApoE) ε4 genotype and sex are critical AD risk factors [7,8]. Several neurobiological pathways also contribute to neurodegeneration and cognitive impairment, including mitochondrial dysfunction, oxidative stress and neuroinflammation, which are pertinent targets of neuroactive steroids. At present, no medication exists for AD despite intensive research on neuropathology, symptoms and mechanisms. Symptomatic treatments have had either little or no effect, so new strategies mostly aim at reducing the overall burden within the AD brain. Among them, the exogenous administration of neuroactive steroids or the modulation of their endogenous production can potentially provide therapeutic benefits, particularly in the preclinical stage before the neurodegenerative disease process is established.
Endogenous neuroactive steroids include steroids that are synthesized de novo in the central nervous system (CNS) as neurosteroids, hormonal steroids generated by endocrine glands and transported into the brain from the circulation, and steroids synthesized in the brain from gonadal steroids. Significant alterations in their concentrations and metabolism are observed in blood, brain and cerebrospinal fluid (CSF) samples of AD patients. In addition, neuroactive steroids, independent of their central or peripheral origin, regulate a wide range of key physiological processes in the CNS by regulating gene expression, neuronal excitability and signaling. These data have provided the basis for their neuroprotective, neuroregenerative and neuropsychopharmacological effects that may be pertinent for AD treatment [9][10][11]. The aim of this review is to summarize the current research on the levels of steroids and biosynthetic enzymes in the AD brain and their relationship with critical pathogenic factors in various AD models, including human neuroblastoma cell lines, rats and transgenic mice developing Aβ or tau pathology. Emphasis is on steroid specificity and, when possible, on sex differences. The focus is on key neuroactive steroids, namely the neurosteroids pregnenolone (PREG), progesterone (PROG), allopregnanolone and dehydroepiandrosterone (DHEA), the sulfated steroids pregnenolone sulfate (PREGS) and dehydroepiandrosterone sulfate (DHEAS), as well as the sex steroids testosterone and 17β-estradiol (E2).

Steroidogenesis in the Human Brain

Since human brain steroidogenesis can differ from that of laboratory animals, investigating steroid and enzyme levels also requires samples from AD patients and age- and gender-matched nondemented controls. Moreover, blood steroid levels do not necessarily adequately reflect brain steroid concentrations and enzyme activity. All steroids are synthesized from cholesterol, either made de novo from acetate in the endoplasmic reticulum or imported from density lipoproteins derived from dietary sources. Steroidogenic pathways are well characterized in rodent brains [12]. The human brain also has the capacity to synthesize steroids, and evidence supports the presence of key steroidogenic enzyme mRNAs, proteins or activities in brain samples. The biosynthesis and metabolism of neurosteroids and sex steroids in the human brain are depicted in Figure 1. The cytochrome P450 side-chain cleavage (P450scc), encoded by the Cyp11A1 gene, is involved in the initial and rate-limiting step, inside the mitochondrial matrix, that converts cholesterol to pregnenolone (PREG), the precursor of all steroids [13]. P450scc mRNA was found in several areas, i.e., the temporal and frontal neocortex, hippocampus, corpus callosum, thalamus, caudate nucleus and amygdala [14,15]. The corresponding protein was detected in the cerebellar white matter by immunochemistry [16]. The 3β-hydroxysteroid dehydrogenase-∆5→∆4 isomerase (3β-HSD) is a membrane-bound enzyme located in the mitochondria and endoplasmic reticulum. It catalyzes the production of PROG from PREG and of androstenedione from DHEA. In humans, type 1 and type 2 3β-HSD isoforms have been characterized. Both 3β-HSD isoform mRNAs were present in the corpus callosum, hippocampus and amygdala, with type 2 at higher levels than type 1 [15]. PROG can act in its native form and/or after transformation into active metabolites.
In the CNS, the 5α-reductase converts PROG to 5α-dihydroprogesterone (5α-DHP), which is further reduced by the 3α-hydroxysteroid dehydrogenase (3α-HSD) to the potent γ-aminobutyric acid type A (GABA-A) receptor-acting neurosteroid 3α,5α-tetrahydroprogesterone (3α,5α-THP), also named allopregnanolone. Two forms of 5α-reductase exist, with type 1 being the major isoform, widely distributed in the human brain. Type 1 5α-reductase mRNA and activity are located in the temporal neocortex, subcortical white matter and hippocampus [14,17]. The 3α-HSD belongs to the aldo-keto reductase (AKR) superfamily, with the AKR1C1, AKR1C2 and AKR1C3 isoenzymes expressed in the human brain [18]. AKR1C2 (type 3 3α-HSD) metabolizes 5α-DHP to allopregnanolone, while AKR1C1 metabolizes allopregnanolone to its 20α-reduced metabolite, thus reducing neurosteroid concentrations in the brain [19,20]. AKR1C3 (type 2 3α-HSD) mRNA was characterized in the temporal neocortex and subcortical white matter [17]. Interestingly, mRNA expression of 3α-HSD was much higher than that of 5α-reductase, suggesting that 5α-reduction is the rate-limiting step in the synthesis of allopregnanolone. The neurosteroids PREG and PROG can also be converted by the cytochrome P450c17. This 17α-hydroxylase/17,20-lyase (CYP17) converts the two steroids into DHEA and androstenedione, respectively. The level of P450c17 mRNA was higher in the corpus callosum and amygdala than in the hippocampus and cerebellum [15]. With respect to sulfated steroids, human 3β-hydroxysteroid sulfotransferases (3β-HSTs) are present in the human brain, and SULT2B1 is selective for the sulfonation of 3β-hydroxysteroids such as PREG and DHEA [21,22]. Only SULT2B1b (not SULT2B1a) transcripts were detected throughout the human brain regions, with the frontal/temporal lobes and thalamus expressing the highest levels [22]. The 17β-hydroxysteroid dehydrogenases (17β-HSD) and aromatase are essential in the final steps of neurosteroidogenesis. The 17β-HSD is involved in the interconversion of androstenedione to the strongly androgenic testosterone, and of DHEA to 17β-androstenediol. The expression of mRNAs encoding the type 1, 3, 4, 5, 7, 8 and 10 isoforms of 17β-HSD was detected in the human temporal lobe [14,23]. In vitro, 17β-HSD activity promoted the synthesis of testosterone in the cerebral cortex and subcortical white matter [24] and of 17β-androstenediol in several human brain regions, including the hippocampus, amygdala, striatum and cerebellum [25]. 17β-HSD10, known to be involved in the inactivation of many endogenous steroids, was found to be highly expressed in the human hippocampus, hypothalamus and amygdala [26]. Finally, the cytochrome P450 aromatase, encoded by the Cyp19 gene, is another key enzyme, responsible for the aromatization of testosterone to E2. Aromatase mRNA expression was detected in the frontal cortex, hippocampus, subcortical white matter, thalamus and hypothalamus [14,[27][28][29]. Aromatase activity was demonstrated in human frontal and temporal brain regions [30,31], as well as in the hippocampus, pons, thalamus and hypothalamus [28,32]. Notably, 17β-HSD plays a critical role in the bidirectional reactions between E2 and the weak estrogenic compound estrone (E1), and the 17β-HSD type 1 catalyzes the reduction of E1 to E2 [33]. To date, no sex-specific differences have been identified for the enzymes cited above, except for P450scc, whose mRNA levels were higher in adult women than in men, particularly in the temporal lobe, frontal lobe and hippocampus [34,35].
The steroid biosynthetic pathway and steroid enzyme locations in the human brain are depicted in Figure 1.

Changes in Neurosteroid and Biosynthetic Enzyme Levels

Increasing evidence suggests that dysregulation of endogenous steroid concentrations and their biosynthetic enzymes plays a role in neurological diseases, including AD [36,37]. It is to be noted that brain steroid levels reflect not only hormonal production and metabolism from endocrine glands, but also local neurosteroidogenesis. There are very limited or no recent data addressing brain steroid levels in AD or control subjects. Therefore, we choose to cite the few initial studies that described substantial changes in neurosteroid levels in postmortem brain samples of AD patients as compared to cognitively intact nondemented subjects. Those investigations used reliable methodologies, coupling solid-phase extraction, purification by high-performance liquid chromatography and identification by gas chromatography-mass spectrometry (GC-MS), instead of radioimmunoassay. Indeed, the latter represents an important limitation in analyzing low concentrations of steroids in brain tissue samples, due mainly to the lack of specificity and availability of antibodies [38]. Numerous neurosteroids have been quantified in postmortem human brain specimens. Substantial changes in their levels in the AD brain (Figure 2A) suggest that disequilibria in neurosteroid pathways have a role in AD pathogenesis. The regulation of steroid levels by Aβ burden in vitro and in vivo is described below (Figure 2B).

Figure 1. Neurosteroid and sex steroid biosynthetic pathway in the human brain. (A) The main steps of neuroactive steroid synthesis are shown. The initial and limiting step involves P450scc activity catalyzing the metabolism of cholesterol to pregnenolone (PREG), the precursor of all neurosteroids and sex steroids. Then, PREG is transformed to progesterone (PROG) and its 5α-reduced and 5α,3α-reduced metabolites (5α-DHP and allopregnanolone). These steps involve 3β-HSD and the 5α-reductase/3α-HSD complex enzymes. PREG and PROG are precursors of DHEA and androstenedione, respectively. PREGS and DHEAS are produced from PREG and DHEA by sulfotransferases and can in turn be desulfated by sulfatases.
The so-called sex steroids are testosterone, produced from androstenedione by 17β-HSD activity, and E2, from testosterone aromatization. (B) Location of steroidogenic enzymes in the human brain regions. Enzyme mRNA, protein and activity are present, except for P450scc, for which only mRNA expression was detected. P450scc: P450 side chain cleavage; 3β-HSD: 3β-hydroxysteroid dehydrogenase-∆5 (∆4 isomerase); 3α-HSD: 3α-hydroxysteroid dehydrogenase; 3β-HST: 3β-hydroxysteroid sulfotransferase; 17β-HSD: 17β-hydroxysteroid dehydrogenase; 5α-DHP: 5α-dihydroprogesterone; 3α,5α-THP: 3α,5α-tetrahydroprogesterone.

PREGS and DHEAS were found to be significantly lower in aged AD patients than in age-matched nondemented controls, especially in the striatum, cerebellum and hypothalamus, and negatively correlated with high levels of cortical Aβ and phosphorylated tau proteins [39]. These reduced PREGS and DHEAS levels suggested that the 3β-HST enzyme involved in their biosynthesis was reduced. However, Calan et al. 2016 [40] showed that toxic doses of Aβ significantly increased PREGS cellular levels in a time-dependent manner in cultured SH-SY5Y cells as compared to control cells (Figure 2A). This enhanced steroid production was interpreted as a result of self-defense, probably to overcome the harmful effects of Aβ. The levels of free neurosteroids are also regulated in AD brain regions. Decreased PREG and DHEA concentrations were noticed in several brain areas of aged AD patients, albeit not significantly different from controls. Similar steroid levels were found between regions in the AD group, including the frontal cortex, striatum, amygdala and hippocampus [39]. In the studies by Marx et al. 2006 and Naylor et al. 2010 [41,42], high DHEA and PREG concentrations were observed in the prefrontal and temporal cortices of AD patients and tended to be positively correlated with Braak and Braak stage [41]. Similarly, in SH-SY5Y neuronal cells, treatment with Aβ peptides (Aβ25-35, Aβ1-40 or Aβ1-42) at toxic doses significantly enhanced PREG levels [40] (Figure 2B). PREG levels were higher in the group treated with Aβ25-35 than in the two others and proportionate to cell cholesterol content. The authors suggested that the effect of Aβ on PREG levels might be a result of self-defense. PREG concentration also significantly increased in the hippocampus on days 7 and 12 following bilateral injection of Aβ25-35 into the rat CA1 region [43].
Lower PROG concentrations were quantified in several brain regions (frontal cortex, striatum, hypothalamus and hippocampus) of AD patients than in controls, although the difference between the groups was not significant, possibly due to the low number of patients [39]. In rats, PROG levels were significantly reduced in the hippocampus and prefrontal cortex following prolonged bilateral administration of Aβ25-35 into the CA1 region [43]. Similarly, the synthesis of PROG from PREG was reduced in neuronal cell cultures under Aβ burden conditions [44,45] (Figure 2B). Allopregnanolone was found to be significantly decreased in the prefrontal and temporal cortices of AD patients [41,42]. In contrast to PROG, allopregnanolone content was unchanged in the hippocampus of Aβ-treated rats, and its production was unaltered in SH-SY5Y cells displaying Aβ pathology [43-45]. These data suggest that Aβ may target only specific steroidogenic enzymes, decreasing 3β-HSD activity to reduce PROG without affecting the 3α-HSD involved in allopregnanolone synthesis. Thus, steroid changes in the AD brain and those under Aβ burden conditions appear inconsistent. The direct effects of Aβ peptides on steroid enzyme expression and activity need further evaluation in several in vivo models of Aβ pathology. The mechanism by which allopregnanolone decreases in the AD brain remains unclear. He et al. 2005 [46] demonstrated that the human brain 17β-HSD type 10 can catalyze allopregnanolone oxidation to yield 5α-DHP. High levels of 17β-HSD10 were quantified in activated astrocytes of several brain regions with AD pathology, including the hippocampus, hypothalamus and amygdala [46]. The upregulation of 17β-HSD activity may then lead to locally reduced allopregnanolone levels in the AD brain. In fact, discrepancies were observed in 17β-HSD type 10 levels, which were found either upregulated in the late stages of the AD brain [26] or unchanged [47]. This mitochondrial enzyme may also interact with soluble or aggregated Aβ [48,49]. In addition, it is unclear whether allopregnanolone reduction in the AD brain is related to Aβ accumulation per se. The decrease in allopregnanolone levels in the AD brain was found to be inversely correlated with Braak and Braak stage, reflecting neuropathological disease severity [41,42]. Thus, the allopregnanolone content in the AD brain may rather have relevance to tau pathology, since Braak and Braak staging focuses on NFTs. Interestingly, Luchetti et al. 2011 [47] detected high levels of 3α-HSD type 3 mRNA in cortical astrocytes starting from Braak stage 3. Although an enhancement in allopregnanolone synthesis was not demonstrated, it may be seen as a rescue mechanism early in the disease process aimed at promoting brain protection. The relationship between pathological tau and neuroactive steroids remains elusive. Only one investigation stated that tau with the pathogenic mutation P301L, associated with frontotemporal dementia (FTD), had no impact on neurosteroid synthesis: neither the production of PROG nor that of allopregnanolone was modified in SH-SY5Y cells stably transfected with mutant P301L tau and incubated with PREG, as compared to native cells [44]. Further studies are required to explore in depth the relationship between pathological forms of tau (P301S mutant, oligomers or fibrillary tau) and neurosteroid concentrations.
Changes in Sex Steroids and Biosynthetic Enzyme Levels

Changes in sex steroid levels are also relevant to AD. The age-related decreases in brain levels of testosterone in men and of E2 in women during menopause have been associated with a greater risk of developing AD [37,50]. Brain testosterone levels were found to be lower in men with AD than in control subjects. Rosario et al. 2011 [50] reported that testosterone was regulated according to age and disease stage (Figure 3A). It was reduced only in the brains of men aged 60-79 years diagnosed with AD at mild stages, but not in those over 80, suggesting that testosterone loss in the brain occurs early in the disease process. Indeed, brain testosterone was inversely correlated with soluble Aβ levels [50]. Interestingly, type 1 17β-HSD mRNA progressively increased in the AD prefrontal cortex, starting from Braak stages 3-4, suggesting an early increase in testosterone synthesis in relation with tau pathology that may culminate in Braak stages 5-6 [47]. In male 3xTg-AD mice, an age-related increase in brain testosterone was seen in the hippocampus (Figure 3B), associated with the expression of the early tau pathologic conformational marker Alz50 and extra-neuronal Aβ deposition, with no change in the androgen receptor level [51]. In AD women, the regulation of brain E2 is also age-dependent (Figure 3A). Women with AD aged 80 years and older exhibited significantly lower brain E2 than age-matched nondemented controls [50,52] (Figure 3A). Surprisingly, aromatase expression increased from mild to moderate stages, particularly in the prefrontal cortex and hippocampus [47,53]. This aromatase increase is even higher in the later stages and occurred in both astrocytes [50] and neurons [53]. Again, these results point out the importance of the disease stage in the evaluation of steroid and enzyme levels in the AD brain. The enhancement of testosterone levels in the late stage of AD pathology, as well as the upregulation of aromatase and 17β-HSD type 1 (which can also convert estrone to E2) to enhance E2 synthesis, could be viewed as part of a compensatory neuroprotective mechanism. Consistent with this idea, brain injury in rodents rapidly upregulated aromatase enzyme expression in glial cells at the injury site, suggesting that increased E2 levels may afford protection to injured neurons [54,55]. An upregulation of E2 synthesis was observed in SH-SY5Y neuronal cells under Aβ burden [44,45] (Figure 3B). E2 significantly increased in the prefrontal cortex and in the hippocampus after bilateral infusion of Aβ into the male rat hippocampus [43] (Figure 3B). Surprisingly, in aged 3xTg-AD mice, the increase in brain testosterone was not associated with a concomitant increase in brain E2; rather, brain E2 did not change with age in either males or females [51].
Sex Difference in Neuroactive Steroid Levels

Sex differences have been noted in AD but remain debated. Most studies suggest that women have a greater frequency and lifetime risk of AD than men. There are also mixed opinions concerning prevalence and incidence rates, and disease course [56-59]. Sex differences in the occurrence and distribution of Aβ plaques in the brain or in CSF Aβ concentrations are unknown. Evidence of any impact of sex on brain tau hyperphosphorylation and NFTs or on CSF tau levels is not established [60]. Sex differences in neuroactive steroid levels may be important to consider in AD, yet they are poorly explored. The study by Corbo et al. 2014 [61] showed a direct influence of CYP17 genotypes on AD susceptibility and age of onset, mainly in men. The study by Rosario et al. 2011 [50] revealed sex-specific brain levels of testosterone and E2 in AD patients (Figure 3A). In postmortem AD brain samples, reduced testosterone and E2 levels were noted in women older than 80 years, while only testosterone decreased in men aged 60-79 years. An earlier report from the same group indicated that E2 brain levels did not decrease with age in men and were unaffected in the AD brain at any stage of the pathology [62]. Furthermore, polymorphisms of the aromatase gene CYP19A1 appeared to be associated with AD risk exclusively in women and not in men [63,64]. Therefore, brain E2 and aromatase levels do not seem to be linked to AD status in men. It is interesting to note that sex-specific differences in aromatase were demonstrated in transgenic mice that express Aβ pathology early (Figure 3B) [53]. The expression of aromatase mRNA and protein in the hippocampus was similar in male 5xFAD mice (which express human APP and PSEN1 transgenes with a total of five mutations) and controls, whereas it was significantly lower in 5xFAD females [53]. Therefore, the contribution of E2 to the sex-specific effect seen in AD may primarily be related to Aβ pathology among AD etiologies.
The regulation of E2 production and related enzymes (aromatase, 17β-HSD) in the brains of AD women and men deserves further investigation that takes into account age, sex, and disease state. Direct access to the human brain remains challenging as compared to CSF. Changes in CSF neuroactive steroids might reflect changes in the brain and be a good alternative for better understanding their role in AD. Changes in CSF neuroactive steroids were previously observed in humans with no brain disorders [65-67], but data in AD are scarce. A lower CSF E2 level was found in AD female patients compared to nondemented ones [68], and this corroborates the lower brain E2 in AD women as compared to controls [50,52].

Steroids and Genetics of Late-Onset AD

The only strong and well-established genetic risk factor for the development of late-onset AD is the inheritance of the APOE-ε4 allele (for review [69]). ApoE is a multifunctional protein that binds to the low-density lipoprotein receptor family and therefore plays a central role in maintaining cholesterol/lipid homeostasis in the brain [70]. Recent studies provide evidence of a connection between abnormal cholesterol metabolism by ApoE4 and AD pathology. Impaired efflux of cholesterol in APOE-ε4 neurons contributes to its intracellular accumulation and Aβ increase [71]. Since cholesterol is the precursor of all steroids, associations between APOE4 and cholesterol-derived steroids could be considered. However, only a few reports have investigated this issue. A significant decrease in allopregnanolone levels was observed in the temporal cortex of patients positive for the APOE-ε4 allele compared to patients not carrying it [42]. Sex differences in the risk of AD are also modified by APOE genotypes. The APOE-ε4 risk of developing AD was thought to be greater in women than in male carriers [72,73]. In fact, a greater risk of late-onset AD was evident in APOE-ε4 homozygote females, while an increased risk of early-onset AD was evident in APOE-ε4 homozygote males [8,72]. The increased APOE-related risk in women seems to be associated with tau pathology [72]. Why the APOE-ε4 gene confers higher risk in women is unclear. Whether E2 levels in APOE-ε4-carrying women are directly or indirectly linked to AD risk and severity is also uncertain. In men, E2 was lower in APOE-ε4 carriers with AD than in controls, and testosterone was lower only in AD men without APOE-ε4 [74]. The roles of endogenous steroid levels as factors relevant to APOE-ε4 carriers at risk of AD remain to be fully characterized.

Protective Effects of Neuroactive Steroids on AD-Like Neuropathology

Endogenous neuroactive (neuro)steroids are among the most potent modulators of CNS functions. The changes in their levels in the AD brain suggest that they may be key modulators of AD-like neuropathology. They can target several important landmarks of AD pathology via a variety of mechanisms, including prevention of apoptosis, oxidative stress, mitochondrial dysfunction and synaptic loss, and regulation of intracellular survival signaling pathways.

Amyloid-β Pathology

The Aβ protein is a crucial initiator that triggers the pathological events leading to AD through accumulation and aggregation within the CNS. It is a small protein composed of 39-43 amino acids generated by sequential cleavage of the human amyloid precursor protein by β- and γ-secretases. Its two major forms are Aβ1-40 and Aβ1-42, with the latter being more prone to aggregate in AD patients.
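To make the residue-range nomenclature concrete, the short sketch below slices the canonical human Aβ1-42 sequence (a well-established sequence quoted here for illustration; it is not reproduced from the source) to recover the Aβ1-40 and Aβ25-35 species discussed in this section.

```python
# Canonical human Abeta(1-42) sequence (UniProt P05067, residues 672-713 of APP770).
ABETA_1_42 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA"

def abeta_fragment(start, end):
    """Return Abeta(start-end) using the 1-based residue numbering
    conventional for the peptide (inclusive on both ends)."""
    return ABETA_1_42[start - 1:end]

assert len(ABETA_1_42) == 42
print(abeta_fragment(1, 40))   # Abeta(1-40): lacks the two C-terminal residues (Ile-Ala)
print(abeta_fragment(25, 35))  # Abeta(25-35): GSNKGAIIGLM
```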
The Aβ25-35 fragment is a biologically active C-terminal region of Aβ. Aggregated Aβ peptides have harmful properties, but evidence implicates soluble oligomeric Aβ as the primary noxious form. Aβ toxicity to neurons promotes a myriad of detrimental cellular events associated with neuronal death including, for instance, pore formation, oxidative stress, lipid peroxidation, mitochondrial dysfunction, neuroinflammation, loss of synapses and disruption of the cytoskeleton, among others (for reviews [2,75]). All Aβ forms are convenient tools for in vitro and in vivo experimental models of Aβ pathology associated with AD. Several steroids have illustrated their capacity to protect cells from death induced by these peptides, and the way they achieve their neuroprotective effects has been elucidated in most cases. We may note that, for a given steroid, results may appear controversial depending on the Aβ form, cell type, dose and duration of steroid treatment used. Inversely, Aβ peptides can affect steroid levels in neuronal cells.

Effects of Neurosteroids on Aβ Toxicity

The effects of neurosteroids on Aβ toxicity are summarized in Figure 4. Regarding PREG and PREGS, very few studies have reported their regulatory effects on Aβ-induced neuronal toxicity. PREG protected mouse hippocampal (HT-22) and rat pheochromocytoma (PC-12) cell lines in a dose-dependent and significant manner against Aβ25-35-induced cell death [76,77], but the molecular target(s) for its action awaits identification. Interestingly, PREGS treatment differentially regulated neuronal cell survival in in vitro Aβ-induced AD models. One study indicated that it did not have any effect on the Aβ25-35-induced decrease in PC12 cell viability [77]. More recently, we observed that it significantly and dose-dependently counteracted the reduced cell viability induced by Aβ25-35 in rat B104 neuroblastoma cells by preventing the cells from entering late apoptosis [78]. DHEAS was also capable of significantly attenuating Aβ25-35-induced toxicity, by preventing the cells from entering both late apoptosis and necrosis [78] (Figure 4A). DHEA neuroprotection was observed against neurite growth impairment and loss of newborn neurons caused by Aβ25-35 infusion in the mouse dentate gyrus. This effect involved a sigma1 receptor-dependent activation of PI3K-Akt-mTOR signaling pathways that play a role in the regulation of apoptosis and cell growth [79] (Figure 4B). PROG has also been shown to be an effective neuroprotectant in AD models (Figure 4). Several studies stated that PROG exerts neuroprotective effects against Aβ25-35-induced toxicity in vitro and in vivo (Figure 4A,B). PROG significantly and dose-dependently improved neuronal survival in primary cultured rat cortical neurons treated with Aβ25-35 [80].
This effect implicated a decrease in the upregulation of the apoptotic marker Bax/Bcl2 ratio, which signals the loss of mitochondrial membrane potential and downstream caspase-3 activation. In addition, the mitochondrial PROG receptor membrane component 1 was activated and the c-Jun N-terminal kinase pathway inactivated (Figure 4A). The classic PROG receptor was also partly involved. In vivo, PROG treatment reduced the decrease in hippocampal cell number induced by Aβ25-35 injection into the rat hippocampal CA1 region [43] (Figure 4B). The delivery procedure had a differential impact on the effect of PROG on intraneuronal Aβ accumulation. Indeed, in ovariectomized triple transgenic AD mice 3xTg-AD (bearing the human APPSWE, TauP301L and PS1M146V genes linked to AD and FTD), cyclic PROG delivery significantly attenuated Aβ accumulation in different brain regions (Figure 4C), whereas continuous PROG treatment for three months was devoid of any action [81]. These findings highlighted the importance of timing and duration of steroid administration. Additional studies are needed to clarify the mechanisms underlying different PROG treatment outcomes. One possible mechanism underlying the positive effect of PROG may involve enhancement of Aβ clearance factors [82]. The neuroprotective effects of allopregnanolone against Aβ pathology have also been demonstrated. Allopregnanolone prevented the neurotoxicity resulting from Aβ1-42 exposure in SH-SY5Y cells and primary cortical neurons via suppression of the extracellular signal-regulated kinase phosphorylation induced by Aβ, independently of GABAA receptor activity [83] (Figure 4A). Discrepancies were noted in the in vivo effects of allopregnanolone treatment on Aβ levels across AD transgenic mouse models, depending on the duration, frequency and time window of treatment. For instance, short chronic allopregnanolone treatment in two models of autosomal dominant AD resulted in increased soluble Aβ levels in the brain of female APPswe/PS1 mice but not of female APPswe/Arc mice (Figure 4C) [84,85]. This difference might be explained by the distinct contributions of the mutated PS1 and Arc genes in regulating Aβ production. Whether these data have any relevance to human late-onset AD remains open. In male 3xTg-AD mice, an allopregnanolone regimen of once per week for six months at an early stage of pathology progression showed the highest efficacy on Aβ reduction, as compared to three times per week for three months or once per month [86]. Therefore, the therapeutic time window is important for steroid efficacy, and intra- rather than extra-neuronal Aβ seems to be a critical target for allopregnanolone benefits.

Effects of Sex Steroids on Aβ Toxicity

Androgens may contribute to slowing down AD. Gonadectomy increased Aβ levels, and androgens exerted a substantial inhibition of Aβ accumulation in male AD mouse models [87,88]. Testosterone treatment of gonadectomized 3xTg-AD mice prevented the increase of Aβ accumulation in several brain regions through direct activation of the androgen receptor [88,89]. E2 treatment was partially effective in reducing Aβ in these mice. Both testosterone and E2 reduced tau hyperphosphorylation [89], suggesting a role of testosterone via aromatization (Figure 5A). The role of E2 in reducing Aβ accumulation and associated toxic events has been investigated [90]. In ovariectomized 3xTg-AD mice, E2 treatment entirely prevented Aβ accumulation in specific brain regions [91] (Figure 5A).
The E2 protective effect was partially attenuated by PROG, suggesting that these steroids act in part via a common mechanism that has yet to be clarified. E2 also prevented Aβ-induced apoptosis in rat cerebellar granule cells [92] (Figure 5B). The protective effects of E2 against Aβ involved the prevention of Bax/Bcl-2 ratio up-regulation, subsequent mitochondrial cytochrome c release and caspase 3 activation [92]. They could also be associated with an increased expression of the insulin-degrading enzyme involved in Aβ clearance [82], as well as enhanced Aβ proteases and somatostatinergic systems [93]. Intriguingly, E2 actions occurred independently of classical estrogen receptor mechanisms. Of note, previous studies indicate that brain E2 deficiency accelerated Aβ deposition in APP23 mice (overexpressing human APP751 with the familial Swedish AD double mutation KM670/671NL) [52]. This report highlighted the potential protective role of local E2 levels in the female brain as compared to chronic ovarian hormone deprivation.
Tau Pathology

Research on tau protein revealed that it undergoes several changes in the AD brain, mainly hyperphosphorylation, truncation, aggregation, seeding and spreading. Tau oligomers, fibrils and the aggregates constitutive of NFTs exert neurotoxicity in AD. Hyperphosphorylated tau (PHF-tau) is well described and closely connected with neurodegeneration and cognitive dysfunction in AD (for review [94]). In contrast to Aβ, the relationship between tau and steroids is still in its infancy. We previously demonstrated that hyperphosphorylated tau levels (recognized by the AT8 antibody) were negatively correlated with DHEAS concentrations in the hypothalamus of aged AD patients as compared with nondemented ones [39]. In patients with FTD, low amounts of PROG in the serum significantly correlated with low disease severity [95], but the link with brain PROG was lacking. Interestingly, some studies reported the effects of neuroactive steroids on physiological and pathological tau levels. For instance, PROG treatment significantly decreased total tau in the human neuroblastoma SK-N-MC cell line [95]. In ovariectomized 3xTg-AD mice, both cyclic and continuous PROG treatment (25.0 pellet s.c. for three months) significantly attenuated hyperphosphorylated tau levels (recognized by the AT8 antibody) in the hippocampus and cortex, as compared to control mice [81,91]. The way in which PROG exerts its protective effect on tau hyperphosphorylation is unknown. Testosterone treatment (10 mg pellet s.c., continuous delivery for two months) reduced tau hyperphosphorylation (AT8) in the CA1 hippocampus of gonadectomized male 3xTg-AD mice to levels lower than observed in controls, independently of androgen receptor activation [89]. The influence of E2 on AD-like tau pathology has been described, but discrepancies have been noted depending on the in vitro and in vivo models. E2 treatment sustained tau hyperphosphorylation induced by protein kinase A activation in human embryonic kidney HEK293 cells stably expressing tau441 [96]. Similarly, it enhanced tau phosphorylation at several epitopes (Ser396 and Ser262, but not Ser202/Thr205 (AT8)) through adenosine monophosphate protein kinase activation in human SH-SY5Y neuroblastoma cells [97]. In contrast, it attenuated tau hyperphosphorylation at multiple sites (S396/404, Thr231, Thr205, S199/202) induced by transient GSK3β overexpression in mouse N2A neuroblastoma cells [98]. In ovariectomized 3xTg-AD mice, individual E2 treatment was ineffective on tau phosphorylation, whereas combined E2 and PROG treatment reduced phospho-tau (AT8) levels [81]. Whether sex steroids regulate other pathological forms of tau, such as tau oligomers, deserves further investigation. Recently, the impact of sex on tau pathology was reported in the P301L model of FTD, with female mice displaying significantly higher total tau and phospho-tau levels (AT8- and AT100 (ptau-T212/S214)-positive neurons) in the cerebral cortex and hippocampus than males [99]. It is likely that the severity of sex-related tauopathy depends partly on local sex steroid levels, but this remains to be shown. One earlier exploratory study in women with FTD aged around 70 years indicated a high correlation between estrogen use and the development of FTD, suggesting that estrogen replacement therapy may be contraindicated in women with early FTD symptoms [100].
Mitochondrial Impairment

Mitochondria are highly dynamic organelles essential for the bioenergetic state of the cell. Mitochondrial dysfunction results in decreased energy production, weakened respiratory function, altered calcium homeostasis, oxidative stress or neuroinflammation, leading to neuronal death. Mitochondrial defects are features of both sporadic and familial AD. It is known that Aβ and hyperphosphorylated tau negatively impact mitochondrial function. Alternatively, mitochondrial dysfunction can generate Aβ accumulation or be independent of Aβ (for review [101]). Consequences of mitochondrial dysfunction, particularly oxidative stress, can influence brain steroid levels and steroidogenic pathways and therefore may have implications in AD. Ferrous sulfate treatment, which catalyzes oxidative neuronal damage, caused a significant increase in DHEA levels in the hippocampus and frontal cortex of AD patients [102]. The authors indicated that DHEA production may come from an oxygenated metabolite of PREG or cholesterol. It was unclear whether this endogenous mechanism was able to rescue neurons from death. Low DHEAS in the aged rat brain was found to be associated with reduced antioxidant glutathione [103]. This result may have implications in AD pathology, since a reduction in brain glutathione content is a prominent feature of the disease [104]. Upregulation of glutathione or DHEAS levels is a potential strategy to consider for slowing down AD progression [105]. In addition, sex steroid loss and sex differences are linked to mitochondrial dysfunction. Impaired mitochondrial function was found in female 3xTg-AD brains at a late stage (nine months and onwards) of AD-like disease, but at an early stage (around one month) in males [106]. Furthermore, chronic ovarian hormone deprivation by ovariectomy in 3xTg-AD mice exacerbated brain bioenergetic deficits [107]. Multiple approaches have been undertaken to target mitochondrial abnormalities for developing neuroprotective strategies. Neurosteroids have been shown to display varied antioxidant effects against oxidative stress-induced cell death caused by Aβ (Table 1). DHEA treatment of isolated mouse brain mitochondria attenuated the decrease in mitochondrial respiration and the increase in reactive oxygen species (ROS) production induced by Aβ peptide [108]. Allopregnanolone significantly diminished the intracellular production of ROS and enhanced the superoxide dismutase (SOD) activity involved in Aβ-induced PC12 cell death [109]. Notably, we previously demonstrated the antioxidant neuroprotective effects of the synthetic enantiomers of PREGS and DHEAS, which prevented Aβ25-35-induced lipid peroxidation in the mouse hippocampus [78]. Whether PROG regulates AD-induced mitochondrial dysfunction is still unknown, but its benefits in normal brain mitochondria include enhanced functional efficiency and increased metabolic rates [111]. Data on the effects of PREG, DHEA and their sulfated derivatives on mitochondrial deficits are currently lacking. A complex relationship exists between sex steroids and mitochondrial function that depends on sex, aging or disease (for review [112]). In AD, the role of E2 in the maintenance and function of mitochondria is ambiguous [113]. A selection of studies demonstrated the neuroprotective effects of E2 against AD-related mitochondrial injury.
E2 pretreatment (10 nM, 24 h) protected against heavy metal (cobalt and mercury)-induced oxidative stress (loss of glutathione) and the increase in Aβ1-40 secretion in SH-SY5Y neuroblastoma cells [114]. E2 pretreatment (100 nM and 1 µM, 2 h) preserved mitochondrial membrane potential and intracellular Ca2+ homeostasis, attenuated ATP depletion, and reduced the mitochondrial calcium overload induced by H2O2 (150 µM) toxicity in human SH-SY5Y neuroblastoma cells [115]. E2 pretreatment protected against Aβ1-42-induced generation of ROS in choroid plexus explants and cultured choroid plexus epithelial cells through an E2 receptor-dependent reduction of Aβ internalization and uptake [110] (Table 1). Notably, E2 exacerbation of oxidative stress-induced cell death was recently unveiled in C6 glial cells and N27 neuronal cells and explained by the time window, i.e., deleterious steroid post-treatment vs. beneficial steroid pretreatment [116]. E2 treatment of ovariectomized 3xTg-AD mice prevented the decrease in mitochondrial respiration, the increase in oxidative stress and the subsequent Aβ accumulation in the brain induced by ovariectomy [107] (Table 1). Of interest are the few studies attesting to a sex difference in oxidative stress markers in AD patients. For instance, the activities of superoxide dismutase and glutathione peroxidase, higher in postmortem AD brain than in controls, were more strongly upregulated in women than in men [117,118].

Neuroinflammation

Neuroinflammation was historically considered a secondary event in neurodegenerative diseases. Accumulating evidence now suggests that it is a key mechanism in AD initiation and progression. Abnormalities of Aβ and tau cause activation of microglia and astrocytes, and trigger the innate immune system by releasing proinflammatory mediators (for reviews [119,120]). Recent studies have revealed different types of pathological microglia associated with AD and key regulators of microglial pathogenicity, such as the triggering receptor expressed on myeloid cells 2 [121,122]. Therefore, glial activation, with its associated inflammatory mediators and regulators, is an important target for therapeutic approaches in AD research. Few studies have addressed the role of neurosteroids in preventing neuroinflammation in AD, with an impact on the Aβ-induced increase in cytokine secretion and microglial activation. PROG blocked Aβ25-35-mediated upregulation of TNFα and interleukin-1 (IL-1) and concomitantly increased cell survival in the rat hippocampus [43]. PROG suppressed inflammatory responses induced by oligomeric Aβ in primary astrocyte cultures, including IL-1β and TNF-α production. The reduction of endoplasmic reticulum stress markers (PERK/eIF2α) triggered by Aβ was believed to be part of the PROG anti-inflammatory action [123]. Strong interactions between sex steroids and chronic inflammation were evidenced in AD (for review [124]). Physiological concentrations of E2 and testosterone displayed anti-inflammatory activities in several AD models (Table 2). In APP23 transgenic mice (overexpressing human APP751 with the familial Swedish AD double mutation), ovariectomy led to more Aβ plaques associated with reactive microglia, paralleling disease progression. Chronic E2 administration delayed this process by acting directly on resident microglia, helping them to remove Aβ [125]. In a primary culture of microglia derived from human cortex, E2 treatment enhanced uptake of Aβ [126].
The E2 anti-inflammatory effect was likely mediated via a decrease in Aβ-induced NF-κB activation, which is critical for the induction of inflammatory response genes in activated BV-2 microglial cells [125,127]. Testosterone and its 5α-reduced metabolite 5α-dihydrotestosterone promoted microglial phagocytosis and clearance of Aβ and inhibited proinflammatory cytokine expression in Aβ-activated murine microglial cell cultures (Table 2). Androgen administration also reduced Aβ1-42-induced IL-1β expression and neuronal death in the murine hippocampus. These anti-inflammatory effects were mediated by the suppression of Aβ-induced NF-κB and p38 mitogen-activated protein kinase activation [128]. In addition, 5α-dihydrotestosterone promoted Aβ uptake by microglia through increasing formyl peptide receptor 2 (FPR2) and enhanced Aβ clearance by increasing the levels of the Aβ-degrading enzyme endothelin-converting enzyme 1c (Table 2) [128].

Table 2 (excerpt): testosterone/5α-dihydrotestosterone — ↓ Aβ-induced proinflammatory cytokine IL-1β [128]; ↑ microglial Aβ uptake through upregulating FPR2; ↑ microglial clearance of Aβ through upregulating ECE-1c [128]. (↓ decrease, ↑ increase vs. Aβ condition.)

Thus, the anti-inflammatory activities of E2 and testosterone contribute to their protective effects against Aβ-activated microglia-induced neurotoxicity, independently of their respective classical receptors. Direct regulation of resident microglia by steroids, inhibiting the chronic inflammation associated with AD, may be a valuable therapeutic strategy for AD. Only limited studies have focused on tau pathology and neuroinflammation (for review [129]). Tau-mediated neuroinflammation and disease were evidenced in AD patients and in models of pure tauopathy. For instance, positron emission tomography (PET) analysis of AD patient brains showed a direct positive correlation between tau aggregation and microglial activation in the parahippocampus at the early stage of the disease [130]. Activated microglia/infiltrating macrophages as well as induction and overproduction of inflammatory mediators (IL-1β and cyclooxygenase-2) were detected in the hippocampus and cortex of P301S transgenic mice and in the brain of a patient with FTD associated with the P301S mutation [131,132]. In fact, distinct microglial responses were observed depending on the type of tau pathology and tau phosphorylation state, and it is even believed that reducing microglia number does not change tau pathological lesions [133-135]. In this context, the role of neuroactive steroids in tau-mediated neuroinflammation awaits further experimentation.

Neurogenesis, Synaptic Failure and Memory Loss

Neuronal loss in AD leads to progressive brain atrophy. Neurogenesis, which allows the endogenous formation of newly born neurons in the adult brain, is known to be less efficient during aging and in neurodegenerative diseases. Reduced neurogenesis in AD is associated with a lack of maturation and functional integration of newborn neurons in late stages of the disease [136]. Therefore, stimulating endogenous neurogenesis could be a therapeutic target for early intervention in AD. Neurosteroid modulation may benefit the course of the disease by increasing the survival of adult-born neurons, contributing to memory improvement. PREGS significantly reduced the impairment of neurite growth as well as of the survival and maturation of hippocampal dentate gyrus newborn neurons of APPswe/PS1dE9 mice [137]. Several studies from Brinton's laboratory
demonstrated the efficacy of allopregnanolone in promoting neurogenesis in the hippocampal subgranular zone of 3xTg-AD mice [86]. The regenerative action of allopregnanolone involved GABAA receptor activation inducing chloride ion efflux from neural progenitor cells, together with upregulation of cell cycle genes that promote mitosis and downregulation of those that repress cell division [138]. PREGS and DHEAS neurotrophic effects were observed in cultured rat neuroblastoma cells, with a significant reduction of the Aβ25-35-induced decrease in neurite growth [78]. Sex steroids are also important players in regulating adult hippocampal neurogenesis. There is ample evidence that E2 and testosterone enhanced hippocampal dentate gyrus neuronal renewal in young adult female and male rodents, resulting in increased memory function (for reviews [139,140]). By contrast, the literature on their impact on neurogenesis in AD models is sparse. One study indicated that chronic E2 administration significantly increased hippocampal neurogenesis in ovariectomized mice injected with Aβ1-42 into the brain. This E2 protective effect, occurring during the early, not late, stage of the Aβ pathological process, alleviated memory loss [141]. The effects of androgens on neurogenesis in AD models are yet to be discovered. Synaptic impairment in the neocortex and hippocampus is an early pathological feature of AD that correlates strongly with cognitive decline. Multiple studies support that pathological Aβ and tau affect the integrity of synaptic function, individually and in interaction, through several complex mechanisms that lead to impaired synaptic plasticity, neurotransmitter receptor dysfunction and memory loss (for reviews [3,142]). Thus, preventing synaptic dysfunction may be an attractive approach to delay AD progression and symptoms. Under physiological conditions, neuroactive steroids play a significant role in the integrity of synapses and the modulation of the synaptic plasticity underlying learning and memory. PREGS is a well-known modulator of glutamatergic excitatory synaptic transmission and plasticity. It sustains presynaptic glutamate release, potentiates glutamatergic post-synaptic N-methyl-D-aspartate receptor (NMDA-R) function, induces its trafficking and enhances long-term potentiation [143,144]. Allopregnanolone is a potent allosteric modulator of GABAA receptor activity, thereby exerting control over neuronal excitability. Recently, it was shown to increase mature excitatory synapse dendritic spines of cultured mature hippocampal neurons [145]. E2 regulated excitatory synaptic transmission in hippocampal neurons via estrogen receptor activation and by altering the synaptic distribution of NMDA-Rs in the prefrontal cortex [146,147]. Testosterone also enhanced the genesis of spines through androgen receptor activation [148,149]. Surprisingly, the modulation of synaptic function and plasticity by steroids in AD is almost unexplored. One report highlighted the beneficial effects of E2 on early Aβ25-35-induced synaptic dysfunction in organotypic hippocampal cultures [150]. Testosterone improved the oligomeric Aβ-induced presynaptic failure in primary cultures of hippocampal neurons [151]. The impact of sex on postsynaptic protein levels was shown in P301L mice, with a more severe synaptopathy observed in females than males [99]. The potential protective effects of neuroactive (neuro)steroids on synaptic dysfunction associated with Aβ and tau pathology need to be widely explored.
Tremendous efforts have been devoted to characterizing the progressive impairment of learning and memory in the course of AD. Early stages of AD are marked by salient deficits in episodic and working memory as well as novelty processing impairment [4,152]. Although animal AD models do not fully recapitulate human AD cognitive deficits, they are a very useful tool for assessing memory functioning and pharmacological intervention in behavioral tasks [153]. Evidence demonstrates that neuroactive steroids differentially regulate learning and memory performance in rodents. The memory deficits induced by acute central administration of Aβ25-35 in young adult male mice were attenuated by PREGS and DHEAS in the spontaneous alternation, passive avoidance and Morris water maze tests [78,154] (Table 3). The protective effects of PREGS and DHEAS against Aβ25-35-induced amnesia involved the activation of sigma1 and α7 nicotinic acetylcholine receptors [154,155]. Interestingly, the synthetic enantiomeric analogues of PREGS and DHEAS also behaved as antiamnesic molecules against Aβ25-35 in young adult mice [78]. Acute PROG treatment was devoid of activity per se in this model but prevented the antiamnesic effect of natural PREGS and DHEA through sigma1 receptor modulation [154]. By contrast, chronic PROG administration reversed the spatial/hippocampal memory deficits of mid-aged mutant female APPswe/PSEN1∆E9 mice tested in the object placement and water maze tasks. This improvement involved increased PROG metabolism towards allopregnanolone in the cerebral cortex, but not in the hippocampus, of transgenic mice [156]. Curiously, chronic allopregnanolone in the same APPswe/PS1 mouse model caused memory impairment in males, not females, with an accelerated disease progression [84,85]. In mid-aged, but not aged, 3xTg-AD mice, a single allopregnanolone injection remarkably restored hippocampal associative learning, which depended on the survival of newly generated neurons in the dentate gyrus [157]. The underlying mechanisms of the age- and sex-dependent action of allopregnanolone on AD-related memory impairment need further research.

Table 3. Effects of neuroactive steroids on impaired neurogenesis and memory in AD models.

The influence of sex steroids on memory function in humans and rodents has been comprehensively reviewed in the literature. However, a great diversity of outcomes has been obtained (i.e., improvement, decrease or no effect) depending on the disease stage, subject gender, study design, mode of delivery, type of memory evaluated and steroid dosage, among others. This is true for the androgenic neuroactive steroids DHEA, DHEAS and testosterone, as well as for estrogen (for reviews [158-160]).

Modulation of Endogenous Neuroactive Steroid Production for Protection in AD

Restoration of steroid homeostasis could be achieved by supplementation of neuroactive steroids with a proper dosing and treatment regimen. Nowadays, an increasingly attractive and innovative strategy is to restore altered endogenous levels of protective steroids by promoting neurosteroidogenesis. In this context, targeting the 18 kDa translocator protein (TSPO) is becoming a valuable strategy [161]. TSPO is a high-affinity cholesterol-binding protein involved in intramitochondrial cholesterol transport and steroid biosynthesis [162].
It is predominantly expressed in steroid-synthesizing organs, including the brain, and TSPO ligand treatment increases the concentrations of several steroids. For instance, a pharmacodynamic study recently conducted by our group revealed that TSPO activation by etifoxine stimulated steroidogenesis in the male rat brain, preferentially towards the synthesis of PREG, PROG and allopregnanolone [163], suggesting a differential regulation of neurosteroid production by this TSPO agonist. Moreover, TSPO is upregulated in reactive glial cells during CNS pathologies, including AD [164,165]. It is involved in the control of many fundamental functions, including mitochondrial respiration and permeability, energy metabolism, and cell proliferation and differentiation. Several TSPO ligands have proven their efficacy as neuroprotective, anti-inflammatory and regenerating molecules in experimental AD models [166]. The TSPO ligand Ro5-4864 (4′-chlorodiazepam) was neuroprotective against Aβ1-40-induced neurotoxicity in SH-SY5Y neuroblastoma cells. It reduced the Aβ-induced apoptotic Bax upregulation and the downregulation of survivin, a member of the inhibitor of apoptosis protein family [167]. In human SH-SY5Y neuroblastoma cells transfected with wild-type APP, treatment (10 nM, 24 h) with two reference TSPO ligands, XBD173 and SSR-180,575, and two new imidazoquinazolinone compounds exerted neuroprotective effects. All compounds were able to improve mitochondrial respiration and decrease ROS and Aβ levels while enhancing PREG synthesis [168]. In SH-SY5Y cells expressing the pathological tau-P301L, and therefore abnormal tau hyperphosphorylation, treatment with the same TSPO ligands (10-100 nM, 24 h) increased mitochondrial bioenergetics (ATP levels and mitochondrial membrane potential) [169]. They also improved the production of PREG at 20 µM after 2 h [169]. Furthermore, TSPO ligand efficacy was proven in in vivo AD models. Ro5-4864 (3 mg/kg i.p. once weekly for three months) in young and aged male 3xTg-AD mice improved memory loss (performance in the spontaneous alternation test) while attenuating hippocampal Aβ accumulation and gliosis [170]. Steroid synthesis occurred in the brain following Ro5-4864 injection (3 mg/kg i.p. once weekly, beginning two weeks after surgery and continuing for four weeks) in gonadectomized 3xTg-AD mice. In young adults, PROG and testosterone levels significantly decreased in the brain, whereas those of PREG and allopregnanolone were not significantly affected [170]. In aged 3xTg-AD mice, Ro5-4864 did not significantly alter brain levels of testosterone; however, brain levels of PROG and allopregnanolone decreased and brain PREG was unchanged [170]. PK11195 treatment (3 mg/kg i.p. once weekly for five weeks) in aged female 3xTg-AD mice improved memory (performance in the spontaneous alternation test) while reducing both soluble and aggregated Aβ [171].

Concluding Remarks

Protective strategies that increase neural functioning as well as attenuate multiple aspects of AD-related neuropathology are more than welcome for treating AD. Preclinical evidence indicates that neurosteroids and sex steroids can promote neuronal survival, neurogenesis and memory function by limiting apoptosis, oxidative stress, mitochondrial failure and microglial activation. In some cases, the beneficial effects of steroids depend on sex and on the stage of the neuropathological process. TSPO ligands also confer protection against AD-related pathology, and they are worthy of continued research in the context of AD treatment.
Studies still need improvement, addressing questions regarding models that fully represent AD, the therapeutic time window of intervention, and solutions for reproducing molecule efficacy and safety. In addition, our knowledge of neurosteroids, steroid enzymes and metabolism in the AD brain and CSF during the course of the disease is far from complete, and larger studies using standardized, validated protocols are required. Revitalizing research on steroids in AD with novel concepts close to the human condition may help to obtain better and successfully translated benefits in patients with AD. So far, the translation of basic research on steroids in AD has not been fruitful. Gaps are yet to be filled regarding appropriate steroid formulation, dosing and regimen, alone or in combination, administration routes and bioavailability. Also, because of sex differences in AD pathology and outcomes, there is an urgent need for new sex-specific, even personalized, steroid-based therapies. As AD pathogenesis starts decades before symptoms appear, the question remains as to the earliest possible ethical intervention.

Author Contributions: The author confirms she is the sole contributor of this work and approved it for publication. The author has read and agreed to the published version of the manuscript.

Funding: The publication fees of this review were covered by Inserm funds.
Experimental and Theoretical Biological Probing of Schiff Bases as Esterase Inhibitors: Structural, Spectral and Molecular Insights

The present study was designed to evaluate the in vitro and in silico potential of the Schiff bases (Z)-4-ethoxy-N-((5-nitrothiophen-2-yl)methylene)benzenamine (1) and (Z)-2,4-diiodo-6-((2-methyl-3-nitrophenylimino)methyl)phenol (2). These Schiff bases were synthesized according to a reported method using ethanol as a solvent, and each reaction was monitored by TLC until completion. The structures of both compounds were elucidated using spectroscopic techniques such as UV-Vis, FTIR, 1H NMR and 13C NMR. The molecular structures were determined using single-crystal XRD, which revealed that compounds 1 and 2 were monoclinic and triclinic, respectively. Hirshfeld surface analysis (HS) and 2D fingerprint plots were used to determine the intermolecular interactions along with the contact contributions in the crystalline molecules. The structures of both compounds were optimized through the hybrid functional method B3LYP using the 6-31G(d,p) basis set, and various structural parameters were studied. The experimental and theoretical parameters (bond angles and bond lengths) of the compounds were compared with each other and are in close agreement. The in vitro esterase potential of the synthesized compounds was checked using a spectrophotometric model, while in silico molecular docking studies were performed with AutoDock against two enzymes of the esterase family. The docking studies and the in vitro assessment predicted that such molecules could be used as enzyme inhibitors against the tested enzymes: acetylcholine esterase (AChE) and butyrylcholine esterase (BChE).

Introduction

Schiff bases (SB) are synthesized by the reaction of aldehydes/ketones and amines in a suitable medium and have shown many valuable pharmacological applications, including the inhibition of acetylcholine esterase and butyrylcholine esterase, the enzymes implicated in Alzheimer's disease [1-3]. SBs also behave as ligands that coordinate to metals through the imine nitrogen [4]. Characteristically, geometrical cavity restrictions can be provided by SBs, which are important for host-guest cooperation and lipophilicity modulation [5]. These exceptional properties offer stability, selectivity, and sensitivity to the SB. Chemists are still trying to formulate various types of SB equipped with diverse structural features for key applications, and well-designed and effective SB ligands are regarded as "privileged ligands" [6]. SBs are extensively employed in catalysis, separation phenomena, biochemistry, material science, decarboxylation, and enzymatic aldolization [7,8]. Heterocyclic cores form one of the dominant organic classes, containing at least one heteroatom in the ring [8]. The general structures of heterocyclic and carbocyclic organic compounds exhibit similarities, while the presence of the heteroatoms imparts particular chemical and physical properties to heterocyclic compounds [8,9]. Many heterocycles contain six- and five-membered rings in their nucleus, and these compounds mostly have one to three heteroatoms [10]. It is recognized that, owing to their compact structure, many heterocyclic compounds acquire distinctive properties, such as anti-wear, anti-corrosion, and anti-oxidant traits [11]. Heterocycles play a fundamental role in the cells of living organisms [10].
These compounds have wide-ranging applications in different fields, as they are potent members of the classes of compounds used as veterinary products, pharmaceuticals, and agrochemicals. Heterocyclic compounds also find applications as dyestuffs, sanitizers, antioxidants, developers, copolymers, and corrosion inhibitors [12]. In drug discovery, molecular docking has emerged as an increasingly significant tool. A docking study can be used to represent the interaction between a protein and a small ligand molecule at the atomic level [13]. This approach allows researchers to identify how small molecules connect at the binding site of a target protein and illustrates their fundamental biochemical pathways. The tool further characterizes the "best-fit" ligand orientation that binds to a specific protein molecule and can be used to deduce the intermolecular structure of a complex organized among two or more molecules [14]. Owing to its large number of medicinal applications, ligand-protein interaction has become a fascinating subject. A ligand behaves like a small molecule that interacts at a specific site of a protein, and various binding modes with considerably different mutual conformations are possible. Prior knowledge of the binding site may significantly enhance the efficiency of a docking study [15]. In this work, we have synthesized two novel SBs in continuation of our previous work. The structures of the compounds were determined by SCXRD; computational studies were done with Gaussian software; and the biological potentials were assessed using in vitro and in silico approaches.

Spectroscopic Studies of the Synthesized Compounds (1-2)
This study is an extension of work already reported by our research group [1,3,16]. In this project, we synthesized two new SBs through the condensation of 5-nitro-2-thiophenecarboxaldehyde with p-phenetidine and 2-(trifluoromethyl)aniline under reflux conditions [1,16]. The progress of the reaction was monitored by TLC, and after completion of the reaction, the compounds were crystallized from ethanol by slow evaporation at room temperature. The structures of the targeted compounds were elucidated using spectroscopic techniques and X-ray diffraction. Compound 1 showed two absorption bands at 286 nm and 416 nm, while compound 2 displayed bands at 230 nm, 235 nm, 286 nm, and 372 nm in UV-Vis spectroscopy (Figures S1 and S2). In the UV-Vis spectra, bands below 300 nm were attributed to π-π* transitions, and bands above 300 nm were assigned to n-π* transitions [17]. In the FTIR spectra, various informative bands appeared in the range of 650-4000 cm−1. An absorption band in the range of 1690-1590 cm−1 is characteristic of the C=N group in a Schiff base and is considered a confirmatory indicator of SB formation [18,19]. In the FTIR spectra of compounds 1 and 2, strong bands appeared at 1609 cm−1 and 1518 cm−1, respectively, and were ascribed to the azomethine linkage. The appearance of these bands in 1 and 2 gave a preliminary indication of the targeted products. Furthermore, due to the presence of an iodine (I) substituent in compound 2, the C=N band was shifted to 1495 cm−1 as compared to 1. The IR bands at 1531 cm−1 and 1595 cm−1 were due to C=C (aryl) in 1 and 2, respectively (Figures S3 and S4).
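The azomethine band discussed above is the spectroscopic signature of the condensation step itself. For orientation, the generic Schiff base condensation (a textbook scheme, not reproduced from this paper) can be written as:

$$\mathrm{R{-}CHO} \;+\; \mathrm{H_2N{-}R'} \;\longrightarrow\; \mathrm{R{-}CH{=}N{-}R'} \;+\; \mathrm{H_2O}$$

where R is the (hetero)aryl fragment of the aldehyde and R' is the aryl fragment of the amine; loss of water drives the equilibrium toward the imine under reflux.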
NMR spectra were recorded in DMSO-d6 using a Bruker Avance III 400 MHz NMR spectrometer. The disappearance of NH2 and the appearance of new singlet peaks in the 1H-NMR at δ 8.90 and 13.81, assigned to HC=N, confirmed the synthesis of the targeted compounds 1 and 2, respectively [20]. The doublet peaks at δ 8.17 and 7.65 with a coupling constant of 4.3 Hz were due to the protons of the thiophene moiety in compound 1. The signals for Ar-H in 1 and 2 were observed in the ranges δ 7.46-6.84 and 7.27-8.38, respectively (Figures S5 and S6). The signals of the methylene and methyl groups in 1 appeared at 4.06 and 1.33 ppm, respectively. The 13C NMR of 1 and 2 showed eleven and fourteen signals from 158.14 to 14.63 ppm and from 162.66 to 14.25 ppm, respectively. All these peaks supported the formation of 1 and 2.

Frontier Molecular Orbitals
The optimized structures of the compounds under study are presented in Figures 1 and 2. The bond lengths and bond angles of the synthesized compounds were compared between experimental (XRD) data and theoretical (DFT) results. The comparison showed that the two studies (experimental and theoretical) are in close agreement, as shown in Tables S9-S13. Furthermore, the relationship between the DFT and XRD calculations was evaluated on the basis of a correlation coefficient (R2). The R2 values are 0.9886 and 0.9679 for bond length and bond angle, respectively, for compound 1, while for 2 they are 0.9922 and 0.8098 for bond length and angle, respectively, as shown in Figures S11-S14. These results demonstrate that the structural parameters from the two studies are very close to each other, as the correlation coefficients approach 1.0. The slight difference in the bond angles of compound 1 may be due to the gas-phase and solid-state nature of the DFT and XRD calculations, respectively.
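The R2 agreement between XRD and DFT geometries reported above can be reproduced with a few lines of code. A minimal sketch, assuming hypothetical arrays of matched bond lengths (the actual values live in Tables S9-S13):

```python
import numpy as np

def r_squared(experimental, calculated):
    """Coefficient of determination for a linear fit of DFT vs. XRD parameters."""
    experimental = np.asarray(experimental, dtype=float)
    calculated = np.asarray(calculated, dtype=float)
    # Least-squares line through the (XRD, DFT) pairs, then residuals about it
    fit = np.poly1d(np.polyfit(experimental, calculated, 1))
    residuals = calculated - fit(experimental)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((calculated - calculated.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical matched bond lengths (angstrom), for illustration only
xrd_lengths = [1.281, 1.354, 1.472, 1.391, 1.208]
dft_lengths = [1.285, 1.349, 1.470, 1.395, 1.213]
print(f"R^2 = {r_squared(xrd_lengths, dft_lengths):.4f}")
```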
The DFT and XRD results were further compared by superimposing the structures of each compound (Figure S14a,b). The root mean square deviation (RMSD) was calculated after superimposing the geometries of the compounds; the RMSD was 0.2427 and 0.8398 for compounds 1 and 2, respectively. The DFT study was also used to draw the HOMO and LUMO orbitals of both compounds along with the energy gaps between different orbitals. The energy difference between the HOMO and LUMO of compound 1 is 3.901 eV, while that between HOMO−1 and LUMO+1 is 7.662 eV, as shown in Figure 3. The HOMO-LUMO energy difference of 2 is 3.315 eV, and similarly, that between HOMO−1 and LUMO+1 is 4.288 eV (Figure 4). The energy difference between the HOMO and LUMO of both compounds is large enough to stabilize the molecules [25]. The chemical reactivity of a molecule depends on the HOMO-LUMO energy gap, and molecules may be classified as hard or soft on this basis: a molecule with a small energy gap is denoted soft, and one with a large energy gap hard. A molecule with a small energy gap has a strong tendency to show good biological activity because it requires little energy for excitation. Furthermore, compounds in which the LUMO is more stabilized also display excellent activity in a biological system.
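The global reactivity parameters reported later in this work follow from these orbital energies. For reference, a standard Koopmans-type formulation (not spelled out in the paper itself), with ionization potential I and electron affinity A estimated from the frontier orbital energies, is:

$$I \approx -E_{\mathrm{HOMO}}, \qquad A \approx -E_{\mathrm{LUMO}}$$

$$\chi = \frac{I + A}{2}, \qquad \mu = -\chi, \qquad \eta = \frac{I - A}{2}, \qquad S = \frac{1}{2\eta}, \qquad \omega = \frac{\mu^{2}}{2\eta}$$

where χ is the electronegativity, μ the chemical potential, η the chemical hardness, S the softness, and ω the electrophilicity index.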
In our study, compound 2 has a small energy gap, which is also due to the presence of iodine (electronegative) atoms on the ring, and it accordingly displayed higher activity than compound 1.

Natural Bond Orbitals
The natural bond orbitals (NBO) of the synthesized compounds were computed with Gaussian software. NBOs are used to identify the individual bonds in the molecules and the energies linked with bond-pair/lone-pair electrons, which helps in describing the interactions of atoms. With the use of NBOs, we can predict the donor and acceptor behavior of atoms in a molecule. During such interactions, electrons lose occupancy from a bonding orbital and shift to an empty orbital. In compound 1, S1-C17 behaved as a donor and C28 acted as an acceptor with an energy of 0.51 kcal/mol, while O2-C20 to C24 had an energy of 0.59 kcal/mol. The energy associated with N3 to C7 is 2.60 kcal/mol, and that of O4-N5 to C23 is 0.63 kcal/mol. The NBOs of compound 2 are listed in Table S14; the highest energy is associated with I1-C19 to C12 (1.77 kcal/mol), while the least energy (0.52 kcal/mol) is for N4-C18 to O6. Similarly, the energy is 11.87 kcal/mol when N4-O5 behaves as a donor and O6 behaves as an acceptor.

Density of State
The density of states (DOS) of the synthesized compounds was computed from the optimized files of the targeted compounds using GaussSum software. It is an important strategy for determining the different states or energy levels in the molecules for the excitation of electrons from the ground state. It is clear from the DOS spectra shown in Figure S15 that the energy differences between HOMO−1 and LUMO+1 varied between the two compounds. The energy differences between HOMO and LUMO and between HOMO−1 and LUMO+1 for compound 1 are greater than those of compound 2. From the FMOs of compound 1, the energy gap for HOMO/LUMO is 3.901 eV, while that for HOMO−1/LUMO+1 is 7.662 eV, and the overall energy difference is 3.761 eV. The energy differences between HOMO/LUMO and HOMO−1/LUMO+1 are 3.315 eV and 4.288 eV, respectively, for compound 2. The net difference in energy among the FMOs is 0.973 eV.

Global Reactivity Parameters
The global reactivity parameters of the synthesized compounds were calculated from the output files with Gaussian software. These parameters are important for understanding the reactivity as well as the stability of the compounds. Compound 1 has higher electronegativity than compound 2, which is due to the presence of -OH in the molecule. The ionization potential and electron affinity of compound 1 are 8.294 and 6.1168, respectively (Table S15). The chemical hardness and chemical potential are favorable for the kinetic stability of the compounds under study.

Hirshfeld Surface Analysis
Hirshfeld surface analysis (HS) is mostly used to examine the intermolecular interactions that stabilize crystalline compounds. The di (inside) and de (outside) distances describe the distances from the Hirshfeld surface to the nuclei in the context of van der Waals radii. The HS is mapped with white, red, and blue colors contingent upon the distances relative to the total radii [16,26]. The Hirshfeld surfaces of the compounds were created using a high surface resolution with 3D dnorm surfaces mapped over a dnorm range of −0.55 to 1.0 Å, a shape-index range of −0.10 to 1.0 Å, and a curvedness range of −0.40 to 4.0 Å.
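The dnorm quantity used for this mapping has a standard definition, given here for completeness (the paper cites it only as "a standard equation"):

$$d_{\mathrm{norm}} = \frac{d_i - r_i^{\mathrm{vdW}}}{r_i^{\mathrm{vdW}}} + \frac{d_e - r_e^{\mathrm{vdW}}}{r_e^{\mathrm{vdW}}}$$

where $r_i^{\mathrm{vdW}}$ and $r_e^{\mathrm{vdW}}$ are the van der Waals radii of the atoms interior and exterior to the surface; negative values (red) mark contacts shorter than the van der Waals separation, and positive values (blue) mark longer ones.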
The surfaces were rendered non-transparent to show a picture of the molecules around which they were calculated. For the identification of close contacts, the dnorm surface was used, and its value ranged from negative to positive. Negative values represent contacts shorter, and positive values contacts longer, than rvdW (the van der Waals radii). Red areas mark closer contacts with a negative dnorm value; blue spots on the surfaces indicate longer contacts and a positive dnorm value; and white areas on the surfaces correspond to contacts at exactly the van der Waals separation, with a dnorm value of zero (Figure 5).

In Vitro and In Silico Enzyme Inhibition
The esterase family belongs to the hydrolases and participates actively in Alzheimer's disease. The in vitro potential of the synthesized compounds is presented in Table 2. Compound 2 was more active against AChE as well as against BChE. Docking approaches frequently rely on an energy-based scoring function, which helps to identify the most favorable conformation of a ligand when binding to the target [15]. The general hypothesis is that the lower the energy score, the better the binding of a protein-ligand complex compared with those of higher energy [27]. Therefore, it is necessary to identify the ligand-binding modes with the lowest energy values [28]. The docking studies of the synthesized compounds were carried out using standard parameters. From the docking study, it was determined that SBs can be used as enzyme inhibitors, as they exhibited remarkable docking scores as well as binding energies (Table 2). The results presented in the table show that compound 2 exhibited a higher dock score than compound 1 against AChE, while against BChE a similar pattern was shown by both compounds. Compound 1 interacted with AChE through π-π and π-alkyl interactions with Trp84. Moreover, π-π interactions were also exhibited by compound 1 with Phe331, while with Phe288 it showed a strong hydrogen bonding interaction due to an oxygen atom of the nitro group. Furthermore, compound 1 also exhibited interactions with Ile287, His440, and Glu199, as shown in Figure 6. On the active site of BChE, compound 1 interacted with Trp82 via π-π interactions with both the phenyl and the thiophene ring of compound 1. Compound 1 also showed interactions with BChE and may have inhibited BChE through hydrogen bonding interactions (Figure S18).
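To make the docking setup concrete before turning to compound 2, below is an illustrative run in the style of AutoDock Vina. The study itself used AutoDock with standard parameters; every file name and grid-box value here is a hypothetical placeholder, not a value taken from the paper:

```python
import subprocess

# Illustrative Vina-style docking run; all paths and box values below are
# hypothetical placeholders (the paper used AutoDock, not Vina).
params = {
    "receptor": "1EVE_prepared.pdbqt",  # AChE, PDB 1EVE, after preparation
    "ligand": "compound1.pdbqt",
    "center_x": 2.0, "center_y": 65.0, "center_z": 68.0,  # grid center (assumed)
    "size_x": 20, "size_y": 20, "size_z": 20,             # grid edges (assumed)
    "exhaustiveness": 8,
    "out": "compound1_docked.pdbqt",
}
cmd = ["vina"] + [f"--{key}={value}" for key, value in params.items()]
subprocess.run(cmd, check=True)  # poses are then inspected in a viewer
```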
Compound 2 also showed a good docking score as well as binding energy with AChE (docking score: −12.8735) and BChE (docking score: −11.1704). The phenyl ring of compound 2 exhibited π-π interactions with Trp84 and Trp334, while the nitro group on the aniline showed hydrogen bond interactions with Gly117, His440, Gly118, Glu199, and Tyr130. Similarly, the hydroxyl group on the phenyl ring showed a hydrogen bond interaction with Asp72, as shown in Figure S19. The iodine atoms of molecule 2 interacted with Phe330 and Tyr70 of AChE (Figure S19). Compound 2 demonstrated firm binding with different amino acid residues located on the active site of BChE, namely Trp82, Glu197, Gly116, Gly115, and Tyr332. The compound showed hydrogen bond interactions with Glu197, Tyr128, Gly115, and Gly116. In addition to the hydrogen bonding interactions, compound 2 also interacted with Tyr332, viz., strong π-alkyl and π-π interactions between the phenyl rings of Schiff base 2 and the phenyl ring of Trp82 (Figure S20).

Chemicals and Instruments
The 5-nitro-2-thiophenecarboxaldehyde, p-phenetidine, 2-(trifluoromethyl)aniline, ethanol, and all other solvents and reagents used in this study were purchased from Merck (Germany). UV-Vis absorption spectra were obtained at room temperature in the region of 200 to 900 nm for 1 × 10−3 M solutions of the samples in ethanol, using a Thermo Evolution Array UV-Vis spectrophotometer.
A PerkinElmer Spectrum Two FT-IR spectrophotometer equipped with an ATR module was used to record the IR spectra of the prepared compounds in the region of 4000-400 cm−1. NMR spectra were recorded in DMSO-d6 using a Bruker Avance III 400 MHz NMR spectrometer.

Crystal Structure Determination
X-ray single-crystal diffraction data were collected at 296 K for 1 and 2 on a STOE IPDS 2 diffractometer [29] equipped with a graphite monochromator using Mo-Kα radiation (λ = 0.71073 Å). The structures were solved by direct methods using the SHELXT program [30] and refined on F2 by full-matrix least squares using the SHELXL program [31]. Unit cell refinement using all observed reflections and data reduction were performed using X-AREA. All non-hydrogen atoms were refined anisotropically, and the hydrogen atoms were included in geometric positions. The final difference Fourier maps showed no peaks of chemical significance. The molecular geometry calculations and drawings were performed with WinGX [32].

Computational Studies
Optimization and the DFT study were conducted with Gaussian 09 using the hybrid functional B3LYP method with the 6-31G(d,p) basis set [33,34]. The results of the DFT studies in terms of frontier molecular orbitals (FMO), natural bond orbitals (NBO), density of states (DOS), and global reactivity parameters were viewed in GaussView 5.0. For optimization purposes, the input files were taken from the crystal structure of the respective molecule to achieve the best match with the experimental data [1,16,35].

Hirshfeld Surfaces Analysis
A Hirshfeld surface is an outer contour of the space a molecule or an atom occupies in a crystalline environment. The Hirshfeld surfaces along with 2D fingerprint plots were calculated using the CrystalExplorer 17.5 program with built-in TONTO [35]. The normalized contact distance dnorm, based on both de and di, was calculated using the standard equation given above [1,36].

In Vitro and In Silico Assessments Involving Esterases
In vitro assessment was carried out using a spectrophotometric method [37]. First, 100 µL of the tested compound (5 mg/mL) was added to a test tube, followed by the addition of 100 µL of enzyme (AChE and BChE separately); the contents were mixed and incubated for 10 min at 37 °C.
After 10 min of incubation, 0.5 mL of buffer (50 mM), 50 µL of AChI, and DTNB were added. The test tubes were incubated at 37 °C for 30 min, and after incubation, absorbance was measured at 410 nm using a UV-Vis spectrophotometer. PMSF was used as a positive control (standard), while DMSO was used as a negative control during the activity. The IC50 values were calculated using various concentrations (100-10 µL) of the tested compounds. The docking studies of the title compounds were carried out with AutoDock, a freely available docking program [1,37]. The crystal structures of AChE (acetylcholinesterase) and BChE (butyrylcholinesterase), with PDB codes 1EVE and 1P0I, respectively, were selected for docking. The docking results, interactions, and surface analyses were viewed in graphical form using Discovery Studio Visualizer.

Conclusions
Organic chemists are continuously synthesizing new bioactive molecules for the betterment of human health. SBs have immense potential as active ingredients for treating many human ailments. In this work, we synthesized two novel Schiff bases comprising heterocyclic moieties, and structural elucidation was carried out using spectroscopic techniques. The assigned crystalline structure of each compound was further studied by Hirshfeld surface analysis. The 2D and 3D plots of compounds 1 and 2 revealed crystalline stability due to various interactions. The crystal structures were further optimized with Gaussian software, and the calculations were compared with the XRD results, which demonstrated that the bond angles and bond lengths are in close agreement. With the use of Gaussian software, various structural parameters of the synthesized compounds were computed, such as electronegativity, electron affinity, chemical potential, and chemical hardness, which showed that these molecules have remarkable kinetic stability. The compounds were evaluated using in vitro and in silico models against AChE and BChE. Compound 2 demonstrated better results than 1, as it showed 76.15 ± 1.2% inhibition against AChE and 72.32 ± 0.9% against BChE. The theoretical and experimental results revealed that both compounds have good potential to inhibit these enzymes and may be used as enzyme inhibitors to treat Alzheimer's disease.
The effect of intrauterine hypoxia on testicular reproductive function
Goal — to assess the effect of antenatal hypoxia of various origins on the morphology and reproductive function of the testes of newborn and mature rats in an experiment. Material and Methods — 15 white outbred female rats aged 4 to 10 months with a weight of 200±30 g were used in the experiments. The laboratory animals were randomly divided into 2 experimental groups and 1 control group, 5 females each. The first group underwent normobaric hypoxia throughout pregnancy (21 days); hypoxia modeling was conducted according to the method of N.N. Karkishchenko (2010). The second group underwent hemic hypoxia during the second and third weeks of pregnancy, according to the method of L.M. Sosedova (2012). The third (control) group was not exposed to any intervention throughout pregnancy. Results — in the testicles of newborn rats of the experimental groups, a decrease in tubule diameter, an increase in stroma area, and the development of interstitial edema were observed. In the hemic hypoxia group, a significant decrease in the number of Leydig cells was noted. In the testicular tissues of mature rats that had undergone antenatal hypoxia, a decrease in tubule diameter, a significant decrease in the spermatogenesis index, and a decrease in the number of spermatogonia were noted. The damage to the spermatogenic epithelium in the experimental groups of newborn and mature rats was confirmed by marked expression of the apoptosis marker (Bax) and weak expression of the proliferation marker (Ki-67) and the receptor of fibroblast growth factor (FGFR). Conclusion — in animals with chronic hypoxia of various origins, there is inhibition of spermatogenesis and impairment of the spermatogenic function of the testicular seminiferous tubules.

Introduction
Infertility is a severe pathology that affects about 70 million people worldwide. The World Health Organization (WHO) estimates that 9% of couples worldwide struggle with fertility problems, with male infertility accounting for 50% [1]. The endocrine system of the fetus, including the testes, is formed in utero from 8-9 weeks of the gestational period and is exposed to various damaging factors. Morphogenesis of the endocrine system of the fetus determines the adaptive reactions of the body in postnatal life, and its features can underlie the pathogenesis of endocrine diseases of the adult, including the pathogenesis of male infertility. Fetal hypoxia is one of the most common complications of pregnancy and childbirth [2]. However, at present, the role of hypoxia in the development of the reproductive system of the fetus remains controversial [2,3]. It is difficult to carry out a comparative analysis over time on clinical material; therefore, an attempt was made to model variants of intrauterine hypoxia and to study their effect on the reproductive system experimentally. The goal of the study was to assess the effect of antenatal hypoxia of various origins on the morphology and reproductive function of the testes of newborn and mature rats in an experiment.

Study design
An experimental study of the effect of chronic hypoxia on the development of the testes in the offspring of rats during the neonatal and pubertal periods was carried out. In a previously published work [4], we investigated the effect of normobaric hypoxia on the development of the gonads in rat fetuses.
In the present study, we expanded the scope of work and studied the effect of normobaric and hemic hypoxia on testicular development in the offspring of rats during the neonatal period, as well as in mature rats. The development of the experimental research model was based on the Manual on experimental (preclinical) study of new pharmacological substances [5] and "The guide to laboratory animals and alternative models in biomedical researches" [6]. Work with laboratory animals was conducted in accordance with the Geneva Convention on the "International Principles of Biomedical Research using Animals" (1985) and the Helsinki Declaration (2000) on the humane treatment of animals. The experiment was performed on 15 healthy white female rats weighing 180-260 g, obtained from the vivarium of the Saratov State Medical University n.a. V.I. Razumovsky. Animals received a standard diet once a day, with free access to water. The animals were randomly divided into 2 experimental groups and 1 control group, 5 females each. The first group underwent normobaric hypoxia throughout pregnancy (21 days). Hypoxia modeling was carried out according to the method of N.N. Karkishchenko (2010) [6]: the animals of the experimental group were placed under a glass cover and stayed there until symptoms of hypoxia (frequent intermittent breathing, passive state) became visible. This procedure was conducted daily until the onset of labor. The second group underwent hemic hypoxia during the second and third weeks of pregnancy, according to the method of L.M. Sosedova (2012) [7]: from the 10th to the 19th day of pregnancy, the rats were injected daily with a sodium nitrite solution at a dose of 50 mg/kg. The third (control) group was not exposed to any intervention throughout pregnancy. Pregnancy dating began when spermatozoa were found in the vaginal smears of the rats. After delivery, there were 91 pups in total: 27 in the first group (11 of them males), 29 in the second (10 males), and 35 in the control group (18 males). To study the effect of chronic intrauterine hypoxia in the long-term period, 5 males were selected from each group. All individuals were kept under standard vivarium conditions until maturity (90 days). At the onset of puberty, the male rats were removed from the experiment by decapitation. The remaining 55 newborn pups were withdrawn from the experiment by cervical dislocation.

Morphological study
The testicles of the rats were fixed in buffered neutral 10% formalin, dehydrated in a series of alcohols of ascending concentration, and embedded in paraffin. Sections of the testicles 4 to 5 μm thick were placed on glass slides and dewaxed according to the conventional standard procedure. Sections were stained with hematoxylin and eosin and used for immunohistochemical (IHC) staining. In ten fields of view of each case, the following parameters were counted in the testicles: the number of tubules, the number of cells in the tubules, the number of vessels in the stroma, the number of Leydig cells, the diameter of the tubules, and the areas of the parenchyma and stroma. The following characteristics were determined in the testicular tissues of mature rats: tubule diameter, number of spermatogonia, and the spermatogenesis index, which is the ratio of the number of layers of spermatogenic cells found in each convoluted seminiferous tubule to the number of counted tubules.
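As a concrete reading of that definition, here is a minimal sketch of the morphometric bookkeeping; the per-tubule layer counts below are hypothetical, since the study reports only the resulting medians and indices:

```python
from statistics import median

def spermatogenesis_index(layers_per_tubule):
    """Total number of spermatogenic cell layers across all scored convoluted
    seminiferous tubules divided by the number of tubules, i.e. the mean
    number of layers per tubule."""
    return sum(layers_per_tubule) / len(layers_per_tubule)

# Hypothetical layer counts for ten tubules in one field of view
layers = [4, 4, 4, 3, 4, 4, 4, 4, 3, 4]
print(f"spermatogenesis index = {spermatogenesis_index(layers):.1f}")
print(f"median layers per tubule = {median(layers)}")
```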
Morphometric analysis of histological preparations was performed using a Microvizor of medical transmitted light µVizo-101 (LOMO).

Immunohistochemical (IHC) method
After dewaxing and rehydration of the paraffin sections, an IHC study was performed according to the immunohistochemical staining protocol. The REVEAL-Biotin-Free Polyvalent DAB kit (Spring Bioscience, USA) was used as the visualization and detection system, and diaminobenzidine (DAB) was used as the chromogen. The following antibodies from Abcam (UK) were used in the work: Monoclonal Rabbit Anti-Ki67 (1:100 dilution), Monoclonal Mouse anti-Bax antibody (1:50 dilution), and Polyclonal Antibody to Receptor of fibroblast growth factor 2 (1:10 dilution).
[Table 1 footnotes: * p value for comparison of the control group and normobaric hypoxia; ** p value for comparison of the control group and hemic hypoxia; *** p value for comparison of normobaric and hemic hypoxia.]
Evaluation of the immunohistochemical reactions was based on a visual assessment of the staining intensity and counting of the number of immunopositive cells.

Statistical analysis
Statistical analysis of the results was performed using the IBM SPSS Statistics 24.0 and Microsoft Office Excel 2007 software packages. When the sample sets of the studied quantities were checked for normality of distribution by the Kolmogorov-Smirnov method, it was found that the distribution of the studied parameters differed from normal; therefore, non-parametric statistical methods were used for comparative analysis, with calculation of the median, interquartile range, and significance levels of differences between groups according to the Mann-Whitney and Fisher criteria. Differences were considered statistically significant at p < 0.05.

Morphological characteristics of the testicles in newborn rats
On light microscopy of testicular tissue of the control group, a thick connective tissue capsule was seen externally. The convoluted seminiferous tubules had an oval or rounded shape. Cells in the tubules were located randomly, and no lumen was present. Soft fibrous connective tissue was located between the tubules (Figure 1A). On microscopy of testicular tissue of the 1st experimental group (normobaric hypoxia), the morphological picture did not differ significantly. Individual spermatogenic epithelial cells were vacuolated. Some sparseness of the tubular apparatus due to fibrosis and peritubular edema was noted (Figure 1B). In testicular tissue of the 2nd experimental group (hemic hypoxia), the tubules were reduced, marked stromal edema was noted, and the basement membrane of individual tubules was fragmented and indistinct. Some fields of view were represented only by the stromal component; tubules were absent (Figure 1C). Morphometric indicators are shown in Table 1. According to the results of the non-parametric Mann-Whitney test, significant differences were revealed between the 1st and 3rd groups in the following parameters: tubule area, stroma area, and number of tubules in the field of view. When hemic hypoxia was compared with the control, significant differences were revealed in tubule diameter, tubule area, stroma area, number of tubules, number of vessels, and number of Leydig cells.
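The non-parametric workflow described in the Statistical analysis subsection can be sketched in a few lines; the two samples below are hypothetical stand-ins for a morphometric parameter measured in two groups (note that Shapiro-Wilk is shown as the usual small-sample normality check, whereas the study used Kolmogorov-Smirnov):

```python
from scipy import stats

# Hypothetical tubule diameters (mm) in control and hemic hypoxia groups
control = [0.22, 0.23, 0.24, 0.21, 0.23, 0.25, 0.22, 0.24]
hypoxia = [0.16, 0.17, 0.15, 0.16, 0.17, 0.15, 0.16, 0.17]

# Normality check on each group
normal = all(stats.shapiro(group).pvalue > 0.05 for group in (control, hypoxia))

# Non-parametric comparison, as in the paper
u, p = stats.mannwhitneyu(control, hypoxia, alternative="two-sided")
print(f"normality plausible: {normal}; Mann-Whitney U = {u}, p = {p:.4f}")
```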
An immunohistochemical study in the control group showed pronounced expression of the proliferation marker (Ki-67), weak expression of the apoptosis marker (Bax), and moderate expression of the receptor of fibroblast growth factor (FGFR). In both experimental groups, weak expression of the proliferation marker was observed, along with moderate expression of the apoptosis marker, while expression of the receptor of fibroblast growth factor was weak or absent. According to the results of the non-parametric Fisher test, significant differences between the groups were observed in the expression of the proliferation marker (Ki67), the apoptosis marker (Bax), and the receptor of fibroblast growth factor (FGFR). Data on the percentage of expressing cells in the groups and the significant differences are presented in Table 2.
[Table 2 footnotes: * φ value for the control group and normobaric hypoxia; ** φ value for the control group and hemic hypoxia; *** φ value for normobaric and hemic hypoxia.]

Morphological characteristics of the testicles in mature rats
In the control group, the testicle was surrounded by a dense tunica albuginea consisting of collagen and elastic fibers, between which fibroblasts and fibrocytes were located. The convoluted seminiferous tubules had round, oval, and polygonal shapes (Figure 2A). The diameter of the tubules ranged from 0.21 to 0.25 mm (median 0.23 mm). In the lumen of the tubules, on the basement membrane, there were sustentocytes and spermatogenic cells at various stages of differentiation. Almost all of the seminiferous tubules contained cells of the germinal epithelium at all stages of maturation, including spermatocytes. The number of spermatogonia in one tubule ranged from 53 to 69 (median 56 per tubule). The spermatogenesis index was 4 (Table 3). In the first experimental group (normobaric hypoxia), foci of destruction of the basement membrane, interstitial edema, degenerative changes in the spermatogenic epithelium, and karyopyknosis were noted. The diameter of the tubules fluctuated slightly, with a median of 0.21 mm. The number of spermatogonia varied from 48 to 63 (median 47 per tubule), and the spermatogenesis index was 3 units. In the second experimental group (hemic hypoxia), an increase in pathological changes was noted: the diameter of the convoluted tubules was reduced, and some spermatogonia showed dystrophic changes in the form of vacuolization of the cytoplasm. In some tubules, inhibition of spermatogenesis was observed (Figure 2C). On morphometric analysis, the diameter of the tubules ranged from 0.151 to 0.173 mm (median 0.159 mm), the number of spermatogonia ranged from 38 to 51 (median 47), and the spermatogenesis index was 3 (Table 3). According to the results of the non-parametric Mann-Whitney test, significant differences were found between both experimental groups and the control in all parameters. When the experimental groups were compared with each other, significant differences were observed in the number of spermatogonia and the diameter of the tubules (p ≤ 0.05). An immunohistochemical study of testicular tissue of mature rats of the control group showed pronounced expression of the proliferation marker (Ki67) in spermatogonia, weak expression of the apoptosis marker (Bax) in individual spermatogonia, and moderate expression of the receptor of fibroblast growth factor in spermatozoa (Figure 3).
In the experimental groups, the expression of the proliferation marker was weak, and in the hemic hypoxia group, no expression was observed in the epithelial cells of individual tubules. Expression of the apoptosis marker (Bax) differed between the experimental groups. In the group with experimental normobaric hypoxia, the apoptosis marker (Bax) was weakly to moderately expressed in the epithelial cells of individual tubules. In the group with hemic hypoxia, expression was nuclear and pronounced and was observed in all spermatogenic epithelial cells (Figure 3). The expression of the receptor of fibroblast growth factor was weak in both experimental groups (Table 4). According to the results of the non-parametric Fisher test, significant differences between the experimental and control groups were observed in the expression of the proliferation marker (Ki67), the apoptosis marker (Bax), and the receptor of fibroblast growth factor (FGFR). When the experimental groups were compared with each other, significant differences were observed in the expression of the apoptosis marker (Bax).

Discussion
In our study, in the testicular tissue of newborn rat pups, we observed a decrease in the diameter of the tubules and in the area of the parenchyma, with an increase in the area of the stroma and the development of interstitial edema, caused by hypoxia of various origins. The number of Leydig cells in the testicular tissues of newborn rat pups decreased only in the hemic hypoxia group. Our findings are consistent with the studies of a number of authors, in which dystrophic and necrobiotic changes in the spermatogenic tubular epithelium were noted, as well as sclerotic changes and a decrease in the number of Leydig cells under the influence of hypoxia [3,8]. In the testicular tissues of mature rats that had undergone antenatal hypoxia, a decrease in the diameter of the tubules, a significant decrease in the spermatogenesis index, and a decrease in the number of spermatogonia were observed, which fully agrees with the results of studies by other authors on the effect of oxidative stress of various etiologies on the development of testicular dysfunction [3,8,9]. It is known that fibroblast growth factor receptors are involved in the processes of angiogenesis, embryogenesis, and tissue regeneration. A study by Lucía Saucedo et al. (2015) reports a positive effect of fibroblast growth factor receptors on sperm motility [10]. In our work, we observed damage to the spermatogenic epithelium in the experimental groups of newborn and mature males, which was confirmed by pronounced expression of the apoptosis marker (Bax) and weak expression of the proliferation marker (Ki67) and the receptor of fibroblast growth factor (FGFR). In a previous work [4], in an immunohistochemical study we noted a decrease in the proliferative potential and an increase in the expression of the apoptosis marker in gonocytes, Sertoli cells, and Leydig cells in the offspring of rats with normobaric hypoxia. In the present study, we observed this trend in adulthood as well. Hemic hypoxia caused effects similar to those caused by normobaric hypoxia, but the degree of damage to the spermatogenic epithelium was higher.

Conclusion
Based on the foregoing, it can be concluded that in animals with chronic hypoxia of various origins, inhibition of spermatogenesis takes place, which is reliably confirmed both by generally accepted morphometric methods and by specific immunohistochemical research methods.
After antenatal hypoxia, generative dysfunction is detected in the testes of mature individuals, as evidenced by decreased expression of the proliferation marker Ki67 and the receptor of fibroblast growth factor and increased expression of the apoptosis marker Bax.
Thoracic fluid content (TFC) using electrical cardiometry versus lung ultrasound in the diagnosis of transient tachypnea of newborn
This study aimed to evaluate TFC by EC versus lung ultrasound (LUS) findings for diagnosing and follow-up of TTN in late preterm and term neonates. This prospective observational study was conducted on 80 neonates with gestational age ≥ 34 weeks. The TTN group included 40 neonates diagnosed with TTN, and the no lung disease (NLD) group included 40 neonates without respiratory distress. LUS and EC were performed within the first 24 h of life and repeated after 72 h. There was a statistically significant increase in TFC in the TTN group on D1 (48.48 ± 4.86 kOhm−1) compared to the NLD group (32.95 ± 4.59 kOhm−1), and then a significant decrease in TFC on D3 (34.90 ± 4.42 kOhm−1) compared to D1 in the TTN group. There was a significant positive correlation of both TFC and LUS with Downes' score, TTN score, and duration of oxygen therapy in the TTN group. Conclusion: Both LUS and TFC by EC provide good bedside tools that could help to diagnose and monitor TTN. TFC showed a good correlation with LUS score and degree of respiratory distress.
What is Known:
• Transient tachypnea of the newborn (TTN) is the most common cause of respiratory distress in newborns.
• TTN is a diagnosis of exclusion; no specific clinical parameter or biomarker has been identified for TTN.
What is New:
• Thoracic fluid content (TFC) by electrical cardiometry is a new parameter to evaluate lung fluid volume; it could help to diagnose and monitor TTN and correlates with the lung ultrasound score.

Introduction
Transient tachypnea of the newborn (TTN) is the most common cause of respiratory distress in term newborns. TTN results from impaired clearance of fetal lung fluid after birth. It can lead to admission to the neonatal intensive care unit (NICU), need for respiratory support, unnecessary antibiotic usage, and prolonged hospital stays [1,2]. Usually, this condition resolves over 24-72 h. TTN is a diagnosis of exclusion; it is primarily diagnosed based on medical history and typical clinical presentation, and chest X-ray may show a radiopaque line in the horizontal fissure of the right lung, fluid infiltrates throughout the alveoli, and hyperinflated lungs [3]. Supportive treatment might be sufficient, but non-invasive respiratory support may be required to reduce respiratory distress, reduce the work of breathing, improve clearance of lung liquid, and shorten the duration of tachypnea [4]. Lung ultrasound (LUS) has been used in the diagnosis of many types of neonatal and pediatric lung diseases, including TTN, respiratory distress syndrome (RDS), meconium aspiration syndrome (MAS), and pneumonia. LUS is an accurate, non-invasive, and reliable tool for diagnosing TTN and is valuable for its early and differential diagnosis. The most common ultrasonographic features of TTN are the double lung point (DLP), interstitial syndromes or white lungs, and pleural line abnormalities [5].
Electrical cardiometry (EC) is a safe, accurate, and reproducible technique for hemodynamic assessment in children and infants [6] and has been validated for use in neonates [7,8]. EC measures alterations in thoracic resistance or impedance using skin electrodes, by sending a low-amplitude, high-frequency electrical current through the thorax. EC is able to isolate the changes in impedance created by the circulation, partly due to the change in orientation of the erythrocytes during the cardiac cycle [9]. A new parameter is now available: thoracic fluid content (TFC). It is an indicator of total fluid volume, representing thoracic intravascular, extravascular, and intrapleural fluid content. A larger TFC indicates a higher total thoracic fluid volume and is an indirect measure of lung congestion and/or hypervolemia. TFC has shown good correlation with extravascular lung water [10,11]. Bioimpedance is currently the only tool to evaluate TFC continuously and noninvasively at the bedside. TFC is measured as the baseline resistance (bioimpedance) to the passage of a small electrical current through all chest tissues, including skeletal muscle, cardiac muscle, lung, chest wall, subcutaneous fat, bone, and fluid [12]. EC is a bedside tool giving trends of hemodynamic parameters, with nomograms providing normative data to help distinguish between normal and pathological conditions [13]. Given the ongoing need for more specific markers of TTN, we conducted the current study to evaluate TFC by EC versus LUS findings in the diagnosis and follow-up of TTN in late preterm and full-term neonates.

Methods
This was a prospective observational study conducted at the NICU of Tanta University Hospitals. The study was approved by the local ethics committee of the Faculty of Medicine, Tanta University (No. 34987/10/21). The study was registered at www.ClinicalTrials.gov with ID NCT05538780. Written parental consent was obtained before enrollment. Eighty newborns with gestational age ≥ 34 weeks (calculated from the first day of the last menstrual period and using the new Ballard score [14]) were included in the study during the period from January 2022 to December 2022. Neonates with gestational age less than 34 weeks, major congenital abnormalities, congenital heart diseases, neonatal sepsis, neonatal pneumonia, or causes of respiratory distress other than TTN were excluded. All enrolled neonates were divided into two groups: the TTN group included 40 neonates diagnosed with TTN who met the criteria detailed in the Montreux definition for the diagnosis of TTN [15], and the no lung disease (NLD) group included 40 term neonates with no respiratory distress signs, normal medical history, normal chest clinical examination, and spontaneous breathing in room air with no need for supplemental oxygen [16]. The study was conducted in accordance with the Helsinki Declaration. The manuscript was prepared following the STROBE guidelines [17]. All neonates were subjected to full history taking and thorough clinical examination, including Downes' scoring [18] and the TTN clinical score (to assess the degree of respiratory distress in neonates with TTN) [19], as well as routine laboratory investigations including complete blood picture, C-reactive protein, liver and renal function tests, random blood sugar, and blood gases.
Lung ultrasound (LUS)
LUS was performed on neonates diagnosed with TTN within the first 24 h of life and repeated after 72 h, and was also performed on the NLD group within the first 24 h of life, by a single neonatologist using a Siemens Acuson X300 ultrasound machine (Siemens Health Care GmbH, Erlangen, Germany) with a 13-5 MHz linear transducer, while the baby was quiet. The LUS score was measured by dividing each lung into 3 areas (upper anterior, lower anterior, and lateral). For each lung area, a 0- to 3-point score was given (total score ranging from 0 to 18). The LUS score encompassed signs typical of TTN [20]. The transducer was placed perpendicular to the ribs and moved from the midline to the lateral side. An LUS score of 0 indicates an A-pattern (defined by the presence of lung sliding and horizontal A-lines, or fewer than 3 vertical B-lines); 1, a B-pattern (defined as the presence of ≥ 3 well-spaced B-lines); 2, a severe B-pattern (defined as the presence of crowded and coalescent B-lines with or without consolidations limited to the subpleural space); and 3, extended consolidations, which are characterized by tissue echogenicity with static or dynamic air bronchograms [21].

Electrical cardiometry (EC)
Fluid status was expressed as thoracic fluid content (TFC), corrected flow time (FTC), and stroke volume variation (SVV) and was measured in all 40 neonates diagnosed with TTN within the first 24 h of life and repeated after 72 h; it was also measured in the NLD group within the first 24 h of life, by a single neonatologist using an EC device, ICON (Osypka Medical GmbH, Berlin, Germany), while the baby was quiet. After skin disinfection with alcohol, four skin electrodes were placed on the forehead, at the neck below the left ear, at the left midaxillary line, and on the left thigh, and values were recorded for 30 s [22,23]. TFC is derived from the thoracic electrical base impedance (1/base impedance), and its usefulness is in evaluating either pulmonary fluid overload or soft tissue edema. SVV is defined as the percentage of change between the maximal and minimal stroke volumes over a period of 30 s. FTC is a preload indicator; it is the systole time divided by the square root of the cardiac cycle time [22].

Statistical analysis
The sample size calculation was performed using G*Power 3.1.9.2 (Universität Kiel, Germany). The sample size was calculated based on a sensitivity of LUS for determining TTN of 75% and a sensitivity of TFC expected to range between 90 and 100% according to a previous study [24], with the following considerations: 0.05 α error, 80% power of the study, and an allocation ratio of 1:1. Two cases were added to each group to overcome dropout. Therefore, 40 patients were allocated to each group.
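As a concrete illustration of the scores and EC-derived indices defined above, here is a minimal sketch; all numeric inputs are hypothetical (the ICON device computes these internally), and the mean-based denominator in the SVV formula is an assumption, since the text specifies only a percentage change between maximal and minimal stroke volumes:

```python
import math

def lus_score(area_scores):
    """Sum of 0-3 scores over the six lung areas (three per lung); range 0-18."""
    assert len(area_scores) == 6 and all(0 <= s <= 3 for s in area_scores)
    return sum(area_scores)

def tfc_from_impedance(z0_ohm):
    """TFC as the reciprocal of thoracic base impedance, expressed in 1/kOhm."""
    return 1000.0 / z0_ohm

def svv_percent(stroke_volumes):
    """Percent variation between max and min stroke volume over the window."""
    sv_max, sv_min = max(stroke_volumes), min(stroke_volumes)
    return 100.0 * (sv_max - sv_min) / ((sv_max + sv_min) / 2.0)  # mean assumed

def ftc(systole_time_s, cycle_time_s):
    """Corrected flow time: systole time over the square root of cycle time."""
    return systole_time_s / math.sqrt(cycle_time_s)

print(lus_score([2, 1, 2, 1, 2, 1]))           # e.g. 9
print(round(tfc_from_impedance(22.0), 1))       # e.g. 45.5 (1/kOhm)
print(round(svv_percent([4.8, 5.2, 5.0]), 1))   # hypothetical SV values in mL
print(round(ftc(0.20, 0.40), 3))
```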
Statistical analysis was performed with the Statistical Package for the Social Sciences version 20.0 (SPSS Inc., Chicago, IL, USA). Continuous data are presented as mean ± standard deviation, or as median (interquartile range 25-75) for non-normally distributed variables, whereas discrete data are given as absolute values and percentages. Group means of continuous variables were compared with Student's t-test or the Mann-Whitney U test, as appropriate. Continuous variables were compared between day 3 and day 1 using the paired samples t-test or Wilcoxon's rank sum test, as appropriate. Categorical variables were compared with the chi-squared test or Fisher's exact test, as appropriate. Correlations were assessed using either Pearson's correlation test or Spearman's rank test according to the distribution pattern of the variable. A p value of < 0.05 was considered statistically significant, with a confidence interval of 95%.

Results
The characteristics of both groups are shown in Table 1. The rate of cesarean section (CS) was significantly higher in the TTN group. The Downes' score was 5.15 ± 0.62 and the TTN score was 5.70 ± 0.88 in TTN patients. All the cases of TTN received oxygen support in the form of a high-flow nasal cannula, and the duration of oxygen therapy was 3.43 ± 0.90 days. The EC parameters (TFC, FTC, and SVV) and LUS score of the studied groups showed a statistically significant increase in TFC and LUS score in the TTN group on D1 compared to the NLD group. There was also a statistically significant decrease in TFC on D3 compared to D1 in the TTN group. On the other hand, there was no statistically significant difference between the TTN group on D3 and the NLD group. FTC and SVV showed no statistically significant difference between the TTN group on D1 and D3, between the TTN group on D1 and the NLD group, or between the TTN group on D3 and the NLD group (Fig. 1).
[Fig. 1: EC parameters (TFC, FTC, and SVV) and LUS score of the studied groups.]
There was a significant positive correlation between LUS score and TFC in the TTN group on D1 and D3. There was also a significant positive correlation of both TFC and LUS score with Downes' score, TTN score, and duration of oxygen therapy in the TTN group on D1 and D3, while there was no significant correlation of either with Apgar at 1 min, Apgar at 5 min, weight, FTC, or SVV in the TTN group on D1 and D3.

Discussion
TTN is a common cause of respiratory distress in newborns caused by retained fetal lung fluid and consists of a period of rapid breathing that usually resolves within 24-72 h [25]. LUS is a non-invasive tool that is increasingly used in the diagnosis of TTN. It allows visualization of the neonate's lungs and real-time assessment of conditions like TTN. It also helps exclude other lung diseases and guides further management [26]. Clinical evaluation of cardiac status is important to assess sequelae. EC can be used to noninvasively measure the extravascular lung water index in TTN. By tracking changes in thoracic electrical bioimpedance during the cardiac cycle, lung edema and the decrease in lung water content over time as the condition resolves can be continuously monitored. This technique may allow quantification of disease severity and treatment response without radiation exposure [22].
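The correlation analyses reported above (Pearson versus Spearman chosen by distribution) can be sketched as follows, with hypothetical paired TFC and LUS values standing in for the study data:

```python
from scipy import stats

# Hypothetical paired day-1 measurements in TTN neonates
tfc = [46.1, 49.3, 51.0, 44.8, 48.2, 50.5, 47.7, 45.9]  # 1/kOhm
lus = [8, 10, 11, 7, 9, 11, 9, 8]                       # 0-18 score

# Choose the test according to the distribution pattern, as in the paper
if all(stats.shapiro(values).pvalue > 0.05 for values in (tfc, lus)):
    r, p = stats.pearsonr(tfc, lus)
    label = "Pearson r"
else:
    r, p = stats.spearmanr(tfc, lus)
    label = "Spearman rho"
print(f"{label} = {r:.2f}, p = {p:.4f}")
```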
In the current study, there was no significant difference between the two groups as regards gestational age, gender, Apgar scores at 1 and 5 min, anthropometric measurements, or antenatal risk factors, while, as regards mode of delivery, there was a statistically significant increase in CS deliveries in the TTN group. This is in agreement with Derbent et al. [27], who showed that the proportion of CS in the TTN group was significantly higher. Another retrospective study, by Kasap et al. [28], was carried out on 95 newborns with TTN and showed that 79% of the patients were delivered by CS.

TFC was significantly higher in the TTN group on D1 compared to the NLD group. Furthermore, there was a statistically significant decrease in TFC on D3 compared to D1 in the TTN group, but no statistically significant difference between the TTN group on D3 and the NLD group. This could be explained by the resorption of lung fluids over time in TTN patients.

In agreement with our findings, Bassiouny et al. [29] revealed that TFC within the first 6 h was high; moreover, a TFC at 24 h of ≤ 24 mL/kg and a TFC drop rate at 24 h of > 12% were statistically significant discriminators of TTN from non-TTN.

TFC was also positively correlated with Downes' and TTN scores in the TTN group. These results agree with Paviotti et al. [12], who found that TFC independently correlates with the presence of respiratory distress at birth and at 24 h of life in late preterm and term newborns.

LUS findings demonstrated a double lung point (DLP) in only 31 cases, pleural line abnormalities were found in all the cases, while white lung was present in 6 newborns with TTN. These results agree with Raimondi et al. [24], who concluded that a pleural line with no consolidation is a consistent finding in TTN and that the presence of a DLP is not essential for the LUS diagnosis of TTN.

As regards LUS, in our present study there was a significant increase in the LUS score in the TTN group on D1 compared to the NLD group, and a significant decrease in the LUS score on D3 compared to D1 in the TTN group. The difference between the LUS scores on D1 and D3 in the TTN group can be explained by the LUS score decreasing progressively over time with the resolution of TTN. Our results agree with Pezza et al. [30], who showed that the lung aeration score improved over time in TTN patients. This also agrees with Li et al. [31], who found that LUS scores decreased significantly from day 1 to day 2; they also found that the TTN group exhibited significantly higher LUS scores than the control group. Yoon et al. [22] demonstrated that the LUS score for the prediction of TTN had a sensitivity of 67% and a specificity of 97%. This is also in agreement with the meta-analysis by He et al. [32], which evaluated the diagnostic value of LUS for detecting TTN.

Our data showed a significant positive correlation between LUS score and both Downes' and TTN scores in the studied cases on D1 and D3. This can be explained by the fact that the LUS score correlated with the severity of respiratory distress, which was clinically assessed with Downes' and clinical TTN scores during the TTN course. This observation is in agreement with the study by Raimondi et al. [24], who found a significant correlation between LUS and Silverman scores; both scores decreased progressively over time.

In our present study, there was a significant positive correlation between LUS score and duration of oxygen therapy. This agrees with Gunes et al. [33], who showed a positive correlation between LUS and oxygen exposure. Also, Li et al.
[31] observed a moderate correlation between the LUS score and the respiratory severity score (RSS), which indicates that the LUS score reflects the clinical respiratory severity of neonates diagnosed with TTN.

We also observed a significant positive correlation between LUS and TFC on the first and third days in TTN cases. This is in agreement with Yoon et al. [22], who found that TFC correlated well with ultrasound in the estimation of extravascular lung fluid. As regards FTC and SVV, there was no statistically significant difference between the TTN group and the NLD group.

To the best of our knowledge, there are few studies in the current literature evaluating EC in the diagnosis of TTN. Moreover, this is the first study to compare EC versus LUS to diagnose and follow up newborns with TTN.

The current study has some limitations: it was a single-center study; the sample size was relatively small; studies on the reproducibility of EC in neonates are lacking; the time points of evaluation could vary within the first 24 h of age and on the 3rd day; and neonates with other respiratory disorders, such as RDS, were not included.

Conclusions

Both LUS and TFC by EC provide good bedside tools that could help to diagnose and monitor TTN. TFC showed a good correlation with the LUS score and the degree of respiratory distress. EC has been proposed as a non-invasive, safe, simple, and non-operator-dependent real-time monitor for TTN.

Table 1 Neonate characteristics of the studied groups. Abbreviations: TTN, transient tachypnea of the newborn; NLD, no lung disease; DM, diabetes mellitus; HTN, hypertension; PROM, premature rupture of membranes; UTI, urinary tract infection
2024-03-17T06:17:49.252Z
2024-03-15T00:00:00.000
{ "year": 2024, "sha1": "8c06979708be8348d6b99f1d0580385ffef95f2b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00431-024-05507-5.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b5e8d6d6aeb24e134f37e7e10e26839fef9844cf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
44194572
pes2o/s2orc
v3-fos-license
Application of Monte Carlo simulation for estimation of uncertainty of four-point roundness measurements of rolls

Large-scale rotors in the paper and steel industry are called rolls. Rolls are reground at regular intervals and roundness measurements are made throughout the machining process. Measurement systems for roundness and diameter variation of large rolls (diameter < 2000 mm) are available on the market, and generally use two to four sensors and a roundness measurement algorithm. These methods are intended to separate the roundness of the rotor from its movement. The hybrid four-point method has improved accuracy, even for harmonic component amplitudes. For reliable measurement results, every measurement should be traceable with an estimation of measurement uncertainty. In this paper, the Monte Carlo method is used for uncertainty evaluation of the harmonic components of the measured roundness profile under typical industrial conditions. According to the evaluation, the standard uncertainties for the harmonic amplitudes with the hybrid method are below 0.5 μm for the even harmonics and from 1.5 μm to 2.5 μm for the odd harmonics, when the standard uncertainty for the four probes is 0.3 μm each. The standard uncertainty for roundness deviation is 3.3 μm.

Introduction

Roundness is defined by ISO 12181-1 [1] and ISO 12181-2 [2] as a geometrical property of a cross-section of a piece intended to be round. Roundness is an important feature of all rotating machines where smooth rotation of the rotors or even surface quality and even thickness of the end product are needed, such as paper machines, steel strip or sheet production, printing machines, engines, generators etc. In length metrology, diameter is often measured as a two-point measurement that is affected by out-of-roundness of the part. Measurements of roundness profiles are also useful when a specific harmonic component is critical or important, e.g. for vibration excitation. In laboratories, roundness measuring machines can measure deviation from roundness using a single sensor, as a high-accuracy bearing assembly ensures that there is only a small rotational error in the radial direction [3][4][5]. In paper mills, roundness measurements are usually carried out with the roll placed on a lathe or grinding machine as shown in Fig. 1. Heavy rolls rotate with their own bearings or are supported by sliding pads. With these measurement setups it is difficult to avoid a rotational error of the roll's centreline; thus one- or two-point measurement methods cannot properly separate this rotational error from the geometry of the workpiece; hence the usage of multi-point measurement devices in the paper industry [6]. Most of these devices are based on the Ozono method, where the roundness is calculated from weighted sensor signals in a given configuration around the rotor [7]. In the steel industry the roundness tolerances of the rolls are not as tight as in the paper industry, thus a two-point measurement device is used, which is well suited for diameter variation profile measurement. Generally, in steel strip and paper production the diameter and the diameter variation profiles are more important than the roundness [8][9][10][11]. The reliability of the measurement is naturally important for machined workpieces in production. Competitive production needs reliable information about the geometry of the workpiece or some specific dimension or feature of the workpiece, e.g. the roundness profile.
In modern machine tools for large-scale rotors, i.e. in paper or steel mills, the reliability of the on-site measurement device is also important for the error compensation of the roll grinder or lathe. The control systems of the machine tools use the geometry information provided by the measurement device for error compensation; thus the measured geometry must be accurate for the compensation to be correct [8,10,11]. Uncertainty of a measurement can be evaluated using the "GUM" method, which uses a linear Taylor expansion of the measurement model with sensitivity coefficients [12]. If the measurement model is simple, this method is straightforward and used extensively. However, once the measurement model becomes complex, as with the measurement of rolls, the sensitivity coefficients are difficult to evaluate. In 2008, "Supplements to the GUM" were published describing the use of the Monte Carlo method for uncertainty evaluation [13]. Using the Monte Carlo method the measurement is simulated using input quantities which are random but follow probability density functions relevant to each uncertainty contribution to the measurement [14][15][16]. Its strength is that non-linearity in the measurement model is not a problem. In this paper, the principle of the four-point method is described first. The application of the Monte Carlo method for an uncertainty evaluation is presented next, and finally the simulation results are reported and discussed.

Roundness and Fourier series

The roundness profile is typically presented in polar coordinates, but for analytical purposes a more relevant presentation is the use of Fourier series terms. For roundness profile characterization only terms with n ≥ 2 are significant, because the term n = 0 denotes the offset of the signal, i.e. the DC value, and the term n = 1 stands for the eccentricity of the roundness profile. Therefore, our results include only the terms n ≥ 2. One of the most common Fourier analyses is done with the fast Fourier transform (FFT) algorithm developed originally by Cooley & Tukey [17]. The inverse FFT algorithm can be used to compose the original measurement signal in the time domain from these complex numbers. Filtering of some unwanted frequencies or components is straightforward: the complex number representing the unwanted frequency or component is set to zero before the inverse FFT, an example of which is shown by Mosier-Boss et al. [18]. In the analysed measurement signals of our research, the FFT algorithm is used both for identifying certain harmonic components and for filtering purposes.

Four-point roll roundness measurement

The studied four-point roundness measurement method is a combination of the two-point method and the Ozono three-point method. Both are briefly discussed here.

Two-point method

The two-point method uses only two probes (Fig. 2A). In some applications, one of the probes can be replaced with a fixed point (Fig. 2B). Practical implementations of this type of device include modified roll callipers (see Fig. 3). These devices can also be used for absolute diameter measurements, if the distance between the probe and the follower or the distance between the two probes is known. Otherwise only the variation in the diameter can be measured when measuring a rotating object. This method measures the diameter profile or the diameter variation profile. In principle, the only difference between the two is that in the variation profile, the average or minimum diameter value has been subtracted.
The diameter variation measurement is commonly used for large roll grinding machines. There, the measured profile is inaccurately called the "roundness profile", although a two-point measuring method cannot measure the true roundness profile because it suffers from harmonic filtration. Using this type of diameter-measuring device one cannot measure odd lobe shapes like triangular, 5-lobe, 7-lobe etc. geometries, because the method is unable to separate the form error of the cross-section from the error motion of the rotating axis [8,19,20]. Calculation of the measured diameter variation profile of a workpiece with the two-point method is straightforward, and includes only addition or subtraction depending on the orientation of the probes. If the values of the probes increase in the direction of the increasing diameter, the measured diameter variation d is

$d(\theta_k) = s_1(\theta_k) + s_4(\theta_k)$, (1)

where $s_1$ and $s_4$ are the measured sensor signals (see Fig. 3). The variation for the radius r is

$r(\theta_k) = (s_1(\theta_k) + s_4(\theta_k))/2$. (2)

The harmonic amplitudes $D_n$ are then calculated by Fourier transform of the profile, where n = 2, 4, ..., N/2.

Three-point Ozono roundness measurement method

One of the first numerical methods in the literature for assessing roundness is that by Ozono [7]. The method is complex, thus only the basic principles are presented here. The roundness profile is determined by measuring the run-out signals $s_1(\theta)$, $s_2(\theta)$ and $s_3(\theta)$ at three different angles denoted by $\theta_1$, $\theta_2$ and $\theta_3$. In practice, the first angle is set as $\theta_1 = 0$ as shown in Fig. 4. A roundness profile function $m(\theta_k)$ is introduced and denoted as

$m(\theta_k) = s_1(\theta_k) + w_2 s_2(\theta_k) + w_3 s_3(\theta_k)$, (4)

where k = 0, 1, 2, ..., N-1, and where N is the number of samples per revolution. The idea is to eliminate the centre point motion by using Eq. (4) with appropriate weighting factors $w_2$ and $w_3$. The weighting factors $w_2$ and $w_3$ are derived from the conditions

$\cos\theta_1 + w_2 \cos\theta_2 + w_3 \cos\theta_3 = 0$ and $\sin\theta_1 + w_2 \sin\theta_2 + w_3 \sin\theta_3 = 0$. (5)

Kato et al. [21] have developed a numerical method to optimize the measuring angles, resulting in the optimized angle set given as Eq. (6). As a function of the observation angles, the weighting factors $w_2$ and $w_3$ can be expressed as

$w_2 = -\frac{\sin(\theta_3 - \theta_1)}{\sin(\theta_3 - \theta_2)}$ and $w_3 = \frac{\sin(\theta_2 - \theta_1)}{\sin(\theta_3 - \theta_2)}$. (7)

Previous studies show that the sensitivity of the algorithm is at its best, with no major harmonic suppression, when the number of lobes per revolution is below 35 [22]. The harmonic amplitudes $E_n$ are then calculated by Fourier transform of the roundness profile $m(\theta_k)$.

Hybrid four-point roundness measurement

One of the multi-point methods commonly used in the roll geometry measurement devices of roll grinding machines is the hybrid four-point method. The method behind the hybrid four-point measurement device is based on the three-point Ozono method [7], combined with the two-point measurement method. As mentioned above, the two-point measurement method (using only sensors S1 and S4 in Fig. 5) suffers from harmonic filtration, making it unsuitable for the measurement of odd-numbered harmonic lobes of a roundness profile [9,19], but the even-numbered harmonic lobes are measured accurately. The hybrid four-point method, presented originally by Väänänen [23], uses the Ozono method to measure the odd-numbered harmonic lobes and combines the result with the even-numbered lobes measured with the two-point method. Because of this, the hybrid four-point method should ensure an overall better accuracy compared with the Ozono or two-point method alone. The harmonic amplitudes of the hybrid four-point method can be expressed as

$G_n = D_n$ for even $n$ and $G_n = E_n$ for odd $n$,

i.e. $G_n$ takes the even amplitudes from the two-point method and the odd amplitudes from the Ozono method. Fig. 6 illustrates the analytic principle of the method.
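To illustrate how the hybrid combination works numerically, the sketch below derives the Ozono weights from the conditions in Eq. (5), forms the two profiles, and merges their harmonic amplitudes as in the definition of $G_n$. This is our own minimal Python sketch, not the authors' implementation; the probe angles and signal arrays are assumed inputs, and the per-harmonic sensitivity correction of the Ozono method is omitted for brevity.

```python
import numpy as np

def ozono_weights(t1, t2, t3):
    """Weights w2, w3 solving the centre-motion elimination conditions
    cos(t1) + w2*cos(t2) + w3*cos(t3) = 0 and
    sin(t1) + w2*sin(t2) + w3*sin(t3) = 0 (angles in radians)."""
    w2 = -np.sin(t3 - t1) / np.sin(t3 - t2)
    w3 = np.sin(t2 - t1) / np.sin(t3 - t2)
    return w2, w3

def hybrid_harmonics(s1, s2, s3, s4, t1, t2, t3, n_max=30):
    """s1..s4: run-out signals sampled at N equally spaced rotation angles,
    with probe S4 opposite S1 for the two-point pair. Returns the hybrid
    amplitudes G_n for n = 2..n_max."""
    N = len(s1)
    w2, w3 = ozono_weights(t1, t2, t3)
    m = s1 + w2 * s2 + w3 * s3            # Ozono profile, Eq. (4)
    r = (s1 + s4) / 2.0                   # two-point radius variation, Eq. (2)
    E = 2.0 * np.abs(np.fft.rfft(m)) / N  # odd harmonics taken from here
    D = 2.0 * np.abs(np.fft.rfft(r)) / N  # even harmonics taken from here
    return np.array([D[n] if n % 2 == 0 else E[n] for n in range(2, n_max + 1)])
```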
The idea of combining harmonics from different sets of measurements has also been used in the calibration of roundness standards [24].

Measurement devices

There are several versions of the measurement device (see Figs. 1 and 7). All of them have four probes attached either directly to a frame or to four radially adjustable rods on a frame. The rods are used to bring the probes into the measurement position if rotors with different diameters are measured. Our simulations were based on the adjustable rod setup as shown in Fig. 7, which creates an additional source of error (rod alignment error, see Fig. 12).

Measurement frame

The frame of the measurement device is made of carbon fibre due to its low thermal expansion coefficient and lightness. There are two frame types, the first consisting of one piece with all of the rods and probes on the same frame (Fig. 7 right), the second comprising two parts (Fig. 7 left). The two-part frame may introduce an additional error source, but this is outside the scope of this paper.

Probes

There are several alternatives for probes. Commonly used displacement probes are length gauges working internally with photoelectric scanning of a grating and a plunger with a ball touching the roll. For the chosen length gauge (Heidenhain MT 12) the measurement error was verified to be within ±0.2 μm when calibrated against a laser interferometer at the Finnish national metrology institute (VTT MIKES). Different versions with different measurement heads exist.

Calibration disc

For testing and calibration of roundness measurement devices, discs with different roundness properties are used. An example is shown in Fig. 8, with a disc diameter of around 500 mm. When calibrating a four-point device, the disc is typically attached to a shaft on roller bearings and the shaft is rotated. In a previous work [9] a calibration disc with a roundness profile containing 2-30 undulations per revolution (UPR) was designed and manufactured (Fig. 9). The roundness deviation of the profile was minimized by optimizing the phase angles of the individual harmonics. In the previous work [9] the calibration disc was measured with four laser probes and the measurement result was averaged over 100 rotations. The same four-point method was used as in this paper. The roundness profile of the calibration disc was manufactured by grinding. Due to the limitations of the grinding accuracy, the harmonic amplitudes of the ground profile differ from the nominal 10 μm by several micrometres. The development and use of the disc and two other similar discs with different profiles were presented elsewhere [25], as were the preliminary results [26], but two roundness measurements from the disc with the profile shown in Fig. 9 are discussed briefly here. One measurement was performed with a laboratory four-point test device, comprised mostly of the same parts as a commercial device but with a self-made frame. During this measurement the disc was rotated on roller bearings. The difference between the roundness plots in Fig. 10 is caused in part by differences in filtering. The RONt values were 109.0 μm measured by the reference device and 106.5 μm for the four-point device. For the harmonic amplitudes the differences were less than 1 μm (Fig. 11).

Probability distributions

In general, uncertainty evaluation or uncertainty budgets have been used to identify predominant uncertainty sources. In a typical "classic GUM" approach, all uncertainty components are collected into one table together with sensitivity coefficients.
As noted elsewhere [27], there is no counterpart to equivalent sensitivity coefficients in the Monte Carlo method. However, it is possible to run the Monte Carlo simulation with one uncertainty source at a time while holding the other input quantities fixed at their best estimates [27]. A 'non-linear' sensitivity coefficient can be defined from the results [27]. The profile used in the simulation is the same as in Fig. 9. The algorithm doing the calculations for the four-point method (Eq. (10)) was acquired as an executable program, which takes measured data as an input file and calculates the harmonic amplitudes as a result of roundness. The principle of the Monte Carlo simulation is to generate synthetic data representing a roundness measurement, distorted with suitable distributions for the error contributions.

Next, the uncertainty evaluation inputs are presented, starting with the angular positions of the rods. It is assumed that these positions differ from the nominal angular values with a standard uncertainty of 0.5° (Fig. 12). A standard uncertainty of 0.3 μm for the scale error of the length probes is assumed based on calibrations and experience. The temperature of the instrument is assumed to be 20 °C, with a standard uncertainty of 0.5 °C. One measurement typically takes 10-30 s to complete. The effect of temperature is taken as a linear expansion of the whole measurement device during the measurements. Far more complex temperature effects may occur in different industrial environments, e.g. when measuring hot or warm workpieces. Modelling of these should be based on real temperature measurements and different scenarios, and is outside the scope of this paper. For the length probes, an alignment error with a standard uncertainty of one degree is assumed. The resulting cosine error for an effective length of 1 mm is about half of the scale error of the probes and can be omitted, as preliminary analysis showed that the scale error itself is also of minor significance. In all roundness measurements of rolls there is also some movement of the centreline of the roll. In the simulation it is assumed that the centre will move with an amplitude of 10 μm at a frequency twice the rotational frequency. The probability distribution function is arcsine, i.e. U-shaped, representing the cyclic variation from −10 μm to 10 μm. The significant error sources with their probability density functions are shown in Table 1. As there are four rods with probes, the number of separately simulated quantities is ten.

A script, written in Python and using the SciPy mathematical package, generates input data files representing simulated measurement data containing the desired PDFs and calls the executable analysis program for the four-point method. A test run with no error contributions from the probes etc. showed that the algorithm works well. The Monte Carlo simulation with the PDFs of Table 1 is done with a large number of test runs, and from the results the mean and standard deviations of the outputted harmonic components are calculated. To evaluate the sensitivity for each uncertainty source, simulations are done with one error source at a time for alignment, probe error and temperature change. The standard deviation was set to 1.0° for alignment, 1.0 μm for scale error and 1.0 °C for temperature change. These results are relative to the selected value and serve to illustrate the virtual sensitivity as discussed earlier.
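The generator script itself is not published, so the following Python sketch only illustrates its principle under the PDFs of Table 1. The probe layout, the stand-in profile and the thermal sensitivity coefficient k_temp are our assumptions; a real run would write the four signals to an input file for the analysis executable.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1024                                             # samples per revolution
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
nominal = np.deg2rad([0.0, 38.0, 67.0, 180.0])       # assumed probe layout

def ideal_profile(t):
    # stand-in for the 2-30 UPR calibration-disc profile of Fig. 9
    return sum(10.0 * np.cos(n * t + 0.7 * n) for n in range(2, 31)) / 29.0

def one_run():
    """Generate one set of four distorted run-out signals [um]."""
    angles = nominal + rng.normal(0.0, np.deg2rad(0.5), 4)  # rod alignment
    phase = rng.uniform(0.0, 2 * np.pi)
    ex = 10.0 * np.cos(2 * theta + phase)  # centre motion, 2x rotation freq;
    ey = 10.0 * np.sin(2 * theta + phase)  # projections are arcsine-distributed
    dT = rng.normal(0.0, 0.5)              # temperature deviation from 20 degC
    k_temp = 0.5                           # assumed frame expansion [um/degC]
    signals = []
    for a in angles:
        s = ideal_profile(theta + a)             # workpiece geometry
        s += ex * np.cos(a) + ey * np.sin(a)     # centre-point movement
        s += rng.normal(0.0, 0.3, N)             # probe error, sigma 0.3 um
        s += k_temp * dT                         # common thermal drift
        signals.append(s)
    return np.array(signals)                     # 4 x N input for the analysis
```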
Results and discussion

The output from the Monte Carlo simulations with 10 000 runs is shown in Figs. 13-17, where the different standard uncertainties are shown as error bars. Each simulation with 10 000 runs took half an hour on a Windows PC with an Intel i7 processor. Fig. 13 shows the output from a simulation run with centre point movement as the only source of error. This simulation demonstrates the method in conditions where the values of the PDFs of the measurement instrument are set to zero. The result shows that only some harmonics are affected by the centreline movement used in the simulation. This is a limitation of the four-point method. The maximum deviation of the amplitude was less than 0.05 μm.

Table 1 Selected significant error sources. The notation of the PDFs follows specific guidelines [13].

The result shown in Fig. 14 is from a simulation run with all the error sources listed in Table 1. This represents the measurement uncertainty of the method under assumed typical measurement conditions in industry. The uncertainties of the odd harmonic amplitudes are generally higher than those of the even harmonics. This is a feature of the hybrid measurement method, where odd harmonics are calculated with the Ozono method and even harmonics with the two-point method. The deviation was also analysed from the simulated roundness. Fig. 14 also shows the average roundness curve, and from the results the standard uncertainty for the roundness deviation was evaluated at 3.3 μm when filtered with a Gaussian filter with a cut-off of UPR 30.

The sensitivity results with the thermal error source show a similar increase in the uncertainties of the odd harmonic amplitudes (Fig. 15). The reason is also the same: different calculation methods for odd and even harmonics. The sensitivity results with the four probe error sources show very small uncertainties for all of the probes (Fig. 16). These simulations were run with one probe error value (1 μm) at a time. It seems that the hybrid four-point roundness measurement method is robust and not sensitive to probing error. However, with the Monte Carlo uncertainty evaluation there is a risk that the errors are averaged out too much. This happens if the error source in reality is not as random as assumed with a normal distribution and zero expectation value [28]. Some experiments are required before the low sensitivity obtained for the probing error can be finally concluded. These studies would include autocorrelation of points measured with one probe and cross-correlation of data between several probes. The probes are insensitive to small alignment errors (cosine error type), because their magnitude is negligible compared to other error sources.

The simulations for rod alignment error sensitivity produced clear uncertainties in the odd harmonic amplitudes for the rods S1 to S3. S1 and S4 have very small uncertainties for the even harmonic amplitudes. The results are shown in Fig. 17. The hybrid four-point roundness measurement method is similar to the Ozono method and sensitive to errors in the S2 and S3 run-out signals, which is natural because the Ozono method forms part of the roundness calculation.

Fig. 14. Output from the Monte Carlo simulation where the standard uncertainties are shown as error bars for the harmonics (left) and the average of the simulated results as a polar plot (right). This simulation was run with all the error sources specified in Table 1.
Temperature variation affects the measurement result noticeably, but this is mitigated by the short measurement time, and there is normally no need to measure hot or warm workpieces since their geometry changes with temperature.

Conclusions

Knowledge of measurement uncertainty is a fundamental requirement arising from practical problems, scientific issues and quality systems. Measurement of rolls in an industrial environment using a four-point measurement device is an example of a measurement with large economic impact where knowledge of measurement uncertainty has been weak or non-existent. To our knowledge this paper, which presents the evaluation of this measurement uncertainty, is the first of its kind. The influence of several uncertainty components is analysed and discussed. The results are characteristic of the hybrid four-point method, although unstable temperature conditions or the presence of vibrations may make additional uncertainty contributions in a very rough industrial environment. With the present assumptions, the four-point hybrid algorithm works well. This is in conformance with the good experience from industrial use. We also conclude that the predominant uncertainty contribution for a four-point measurement instrument is the positioning of the rods of probes S2 and S3. According to our evaluation, the standard uncertainties for harmonic amplitudes with the hybrid method are below 0.5 μm for even harmonics, and from 1.5 μm to 2.5 μm for odd harmonics. The uncertainties of the odd harmonic amplitudes are generally higher than those of the even harmonics. The evaluated uncertainties are in line with measurements using a calibration disc. A further result is insensitivity to probe error. However, future research is needed to investigate the statistical properties of randomness in probe error, and to refine the error modelling, before final conclusions can be drawn regarding robustness to probe error. Also, the uncertainty of the phase of the harmonic components should be investigated further.
2017-08-19T11:26:50.838Z
2017-04-01T00:00:00.000
{ "year": 2017, "sha1": "2ada9f79a43beaaf68a6baf2141d1a8ba2bebd43", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.precisioneng.2016.12.001", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "acfcde60c92c8b1d7b88b22224de4ea8d574a2aa", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science" ] }
257954483
pes2o/s2orc
v3-fos-license
Genomic prediction of rice mesocotyl length indicative of direct seeding suitability using a half-sib hybrid population

Direct seeding has been widely adopted as an economical and labor-saving technique in rice production, though problems such as a low seedling emergence rate, emergence irregularity and poor lodging resistance remain. These problems are currently partially overcome by increasing the seeding rate; however, this is not acceptable for hybrid rice due to the high seed cost. Improving direct seeding by breeding is seen as the ultimate solution to these problems. For hybrid breeding, identifying superior hybrids among a massive number of hybrids from crossings between male and female parental populations by phenotypic evaluation is tedious and costly. In contrast, genomic selection/prediction (GS/GP) can efficiently detect superior hybrids capitalizing on genomic data, which holds great potential in plant hybrid breeding. In this study, we utilized 402 rice inbred varieties and 401 hybrids to investigate the effectiveness of GS on rice mesocotyl length, a representative indicative trait of direct seeding suitability. Several GP methods and training set designs were studied to seek the optimal scenario of hybrid prediction. It was shown that using half-sib hybrids as the training set, with the phenotypes of all parental lines fitted as a covariate, could optimally predict mesocotyl length. Partitioning the molecular markers into trait-associated and -unassociated groups, based on a genome-wide association study using all parental lines and hybrids, could further improve the prediction accuracy. This study indicates that GS could be an effective and efficient method for hybrid breeding for rice direct seeding.

Introduction

Rice, as an essential food crop, feeds more than half of the world's human population. To meet this huge demand, modern and advanced agricultural technologies have been used to improve rice production. Mechanized direct seeding can conspicuously improve planting efficiency and has been widely adopted in rice production [1,2]. However, direct seeding in rice also faces some difficulties, such as a low emergence rate, irregular emergence and easy lodging of seedlings [3]. Increasing the seeding rate might resolve these problems for inbred lines, yet it is not an option for hybrids due to the high seed cost. Considering the advantage of exploiting heterosis from hybrids in rice breeding, e.g., Jumin et al. [4] reported that F1 hybrids had an approximately 20% higher grain yield than inbred lines, developing hybrid varieties suited for direct seeding is of great significance and has now become the focus of many rice breeding programs. Several traits that are indicative of the ease of direct seeding have been identified. One representative is mesocotyl length, as a long mesocotyl can markedly improve the emergence rate, early vigor and lodging tolerance [5]. However, modern varieties developed for the well-irrigated ecosystem with regular transplanting normally have a short mesocotyl (≤1.0 cm) [6]. Therefore, it is crucial to breed hybrid varieties with a long mesocotyl for direct seeding. For hybrid development, identifying excellent hybrid combinations is pivotal. Since the number of producible hybrids far exceeds the number of parental lines, selecting the exceptional combinations to be produced and tested is difficult for breeders. Accurately predicting hybrid performance, so that only promising combinations are field-tested, has long been a research hotspot.
Mid-parent performance, general and specific combining abilities, and genetic distance between parental lines estimated using traits or markers have been tested, but they are of limited usefulness depending on the traits and parental populations [7][8][9][10]. Currently, genomic selection (GS) is widely used to predict hybrid performance in various crops. In GS, the performances of untested genotyped plant individuals are predicted based on the genomic relationship between them and a well-composed training set with both phenotypic and genotypic data. Riedelsheimer et al. [11] test-crossed 285 maize inbred lines with two maize varieties to obtain 570 hybrids. Through cross-validation within the hybrids, it was found that the prediction accuracies of seven traits ranged from 0.72 to 0.81, with heritabilities varying from 0.82 to 0.98. Xu et al. [12] predicted the yield of all possible 21,945 hybrid progenies using 278 hybrids generated from random crossings of 210 recombinant inbred lines. If the top 10 hybrid combinations were selected for hybrid breeding, the yield would be increased by 16%. The inclusion of non-additive effects, i.e., dominance and epistatic effects, in genomic prediction brought no benefit in real data but showed usefulness in simulation when non-additive effects were simulated [12]. Therefore, it is potentially profitable to accommodate the non-additive effects even though the additive effects are predominant [12]. In addition to genomic information, other omics information is also able to assist genomic prediction. Xu et al. [13] reported that combining the parental phenotypes with other predictors can significantly improve the predictability of yield-related traits in rice. Fu et al. [14] used four methods, including multiple linear regression, PLS, SVM and transcriptome distance, to predict the phenotype of maize hybrids and found that the prediction based on transcriptome distance was the most accurate. Xu et al. [15] used the metabolic data of 210 inbred lines to predict the yield of their hybrids and found that the prediction ability was almost twice that of genomic markers. Westhues et al. [16] revealed the advantages of combining transcriptome data with genomic data measured for parents for the prediction of untested hybrids. In the prediction of hybrid rice, Wang et al. [17] compared the predictability of combinations of multi-omics data, including genomic, transcriptome and metabolome data, and eight GS methods, finding that the GBLUP approach integrating genomic and metabolome data performed best overall. The abovementioned studies have shown that GS holds the potential to effectively predict yield and yield-related traits in hybrid rice, but no study has yet investigated the potential of GS on hybrid rice mesocotyl length, which is indicative of direct seeding suitability. In this study, we measured the mesocotyl length of 402 rice inbred lines, including the famous male sterile line Taifeng A, and their 401 hybrid progenies produced by test-crossing the 401 lines with Taifeng A as the female parent. We examined several genomic prediction scenarios including mid-parental value prediction, marker-assisted selection (MAS) and genome-wide association study (GWAS). Our major aim is to find the optimal hybrid rice prediction scenario with the highest prediction accuracy for mesocotyl length, to disclose the potential of using genomic selection to accelerate the breeding of hybrid rice suited to direct seeding.
Rice materials

The 402 rice varieties used in this study are mainly from South Asia, Southeast Asia and South China, conserved in the International Rice Research Institute. The specific variety information is given in S1 Table. The 401 F1 hybrids were produced by test-crossing 401 rice varieties (as male parents) with the widely used male sterile line Taifeng A (as female parent).

Phenotypic data

A randomized complete block design with three replicates was used to lay out the test for mesocotyl length measurement of all parental lines and hybrids. In order to minimize the impact of the environment on the phenotypic performances of a hybrid and its parent, each hybrid was planted next to its male parent. A total of 15 plump seeds per variety were sown at a depth of 6 cm in each block. Plastic cavity trays with 50 holes were used for sowing. The hole depth, upper diameter and bottom diameter of the tray were 9.5 cm, 4.5 cm, and 2.1 cm, respectively. After sowing, each plastic cavity tray was placed in a corresponding plastic pallet with the bottom covered by nutrient soil at a depth of 3 cm, and then all the pallets were transported into a large-volume oven for culture at 30°C in the dark. The soil in the trays and pallets was kept moist until the seeds germinated and emerged. The emergence rate was recorded every day until that of all varieties reached 100%. After that, all seedlings were taken out of the holes, washed with clean water, and 10 uniformly grown seedlings per variety were randomly selected and photographed. The mesocotyl length measurement was performed using ImageJ (https://imagej.en.softonic.com/). The phenotypic values were adjusted to derive the best linear unbiased estimates (BLUE) using the model y = Xb + Zu + e, where y is the vector of observed phenotypic values of mesocotyl length for all lines and hybrids, b is the block effect, u is the genetic effect, X and Z are the design matrices for b and u, and $e \sim N(0, I\sigma_e^2)$ is the random residual, where I is the identity matrix and $\sigma_e^2$ is the residual variance component. Both b and u were regarded as fixed effects. The phenotypic adjustment was implemented in R [18] using the package sommer [19].

A total of 7,882,841 bi-allelic SNPs were identified for the 402 lines. Quality control for the SNPs followed the criteria that 1) SNPs with a minor allele frequency (MAF) less than 0.05 were removed; 2) SNPs with a genotyping call rate less than 90% were removed; 3) SNPs with a heterozygote rate of more than 10% were excluded. As a result, 196,640 high-quality SNPs were retained. Genotype imputation was implemented to impute the missing genotypic profiles of the 196,640 high-quality SNPs using the IMPUTE2 software [21]. The heterozygotes were all arbitrarily set to missing values and imputed. Once imputation was done, a quality control for linkage disequilibrium (LD) between SNPs was applied to keep independent SNPs. The software PLINK [22] was used with the window size, shifting step, and r² threshold set to 50 SNPs, 5 SNPs, and 0.1, respectively. Finally, 10,547 independent SNPs were available for the 402 lines. The genotypic data of the 402 lines are provided in S2 Table. The genotypes of the hybrids were deduced from the genotypes of their parents. Specifically, for a particular SNP, the two types of homozygotes in the parental lines were numerically coded as 0 or 2, indicating the number of copies of the alternative allele. The profile of a hybrid was the mean value of the genotypes of its parents, i.e., 0, 1 or 2.
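The deduction of hybrid genotypes from the parental 0/2 codes amounts to averaging the two parental rows; the minimal Python sketch below (variable names are ours, for illustration only) shows the resulting 0/1/2 hybrid codes.

```python
import numpy as np

def hybrid_genotypes(female, males):
    """female: (n_snps,) codes of the common female parent (Taifeng A);
    males: (n_hybrids, n_snps) codes of the male parents.
    Returns the hybrid profiles, i.e. the parental means in {0, 1, 2}."""
    return (female[None, :] + males) / 2.0

female = np.array([0, 2, 2, 0])
males = np.array([[2, 2, 0, 0],
                  [0, 0, 2, 2]])
print(hybrid_genotypes(female, males))
# [[1. 2. 1. 0.]
#  [0. 1. 2. 1.]]
```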
To investigate the population structure underlying the lines, a cluster analysis based on the SNP genotypic data was performed.

Mid-parental value prediction

The mid-parental values of the hybrids in the test sets of each cross-validation scenario (details can be found in section 2.8) were used as the phenotypically predicted genetic values of the hybrids.

Marker-assisted selection

The marker-assisted selection includes two steps. In the first step, GWAS was performed on the training set of each cross-validation scenario using a mixed linear model:

$y_r = 1_r\mu + \textstyle\sum_k PC_k q_k + X_{rj} b_j + Z_r g_r + e$,

where $y_r$ is an r-dimensional vector of adjusted phenotypic values of mesocotyl length, r is the number of genotypes in the training set, $1_r$ is an r-dimensional vector of ones, μ is the intercept, $PC_k$ is the k-th principal component vector derived from the genomic data with effect $q_k$, $b_j$ is the additive genetic effect of the j-th SNP, $X_{rj}$ is an r-dimensional vector containing the genotypic profiles of the j-th SNP, $g_r$ is an r-dimensional vector of additive genetic effects of the genotypes following $g_r \sim N(0, A_r\sigma_{a_r}^2)$, $A_r$ is an r×r additive genomic relationship matrix estimated following Yang et al. [23], $\sigma_{a_r}^2$ is the corresponding variance component, $Z_r$ is the design matrix of $g_r$, and e is the vector of random residuals following $e \sim N(0, I\sigma_e^2)$, where I is the identity matrix and $\sigma_e^2$ is the residual variance component. The thresholds for filtering significant SNPs ranged from 5×10⁻⁵ to 0.01. Once GWAS was done, for each significance threshold a linear model using the identified trait-associated SNPs (TA-SNPs) was fitted as

$y_r = 1_r\mu + \textstyle\sum_{j=1}^{m} X_j b_j + e$

in cross-validation scenarios 1-5 (details can be found in section 2.8), where m is the number of TA-SNPs. The estimated effects of the TA-SNPs, $\hat{b}_j$, were accordingly derived. The effective number of TA-SNPs was calculated following Jiang et al. [24]. Briefly, a principal component analysis was performed using the genotypic profiles of all the TA-SNPs; the number of principal components in total explaining 95% of the variation is the effective number of TA-SNPs. In the second step, the phenotypic values of the hybrids in the test set, $\hat{y}_s$, were predicted using the formula

$\hat{y}_s = 1_s\hat{\mu} + \textstyle\sum_{j=1}^{m} X_{sj}\hat{b}_j$,

where $1_s$ is an s-dimensional vector of ones, s is the number of hybrids in the test set, $\hat{\mu}$ and $\hat{b}_j$ are respectively the estimated values of the intercept and the effect of the j-th TA-SNP from the linear model in the first step, and $X_{sj}$ is an s-dimensional vector of genotypic profiles of the j-th SNP in the test set. The GWAS analyses were implemented in the GCTA software [25] using the option "--mlma". The calibration and prediction linear models were fitted in R [18].
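The two MAS steps can be condensed into a few lines once the GWAS p-values are available (e.g., from the GCTA --mlma output). The sketch below is illustrative rather than the study's code: it selects the TA-SNPs under a threshold, fits ordinary least squares on the training hybrids, predicts the test hybrids, and computes the effective number of TA-SNPs as the number of PCs explaining 95% of their variation.

```python
import numpy as np

def mas_predict(X_train, y_train, X_test, pvals, threshold=0.005):
    """Two-step MAS: select TA-SNPs by p-value, fit OLS, predict."""
    ta = np.where(pvals < threshold)[0]                  # TA-SNP indices
    A = np.column_stack([np.ones(len(y_train)), X_train[:, ta]])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)   # intercept + effects
    B = np.column_stack([np.ones(X_test.shape[0]), X_test[:, ta]])
    return B @ beta

def effective_n_snps(X_ta, var_explained=0.95):
    """Number of PCs of the TA-SNP genotypes explaining 95% variation."""
    Xc = X_ta - X_ta.mean(axis=0)
    ev = np.linalg.svd(Xc, compute_uv=False) ** 2
    frac = np.cumsum(ev) / ev.sum()
    return int(np.searchsorted(frac, var_explained) + 1)
```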
Genomic prediction methods

Two BLUP models, GBLUP and EGBLUP, and two Bayesian approaches, BayesB and BayesR, were used in genomic prediction.

The two BLUP models can be uniformly formulated as $y = 1_n\mu + Zg + \varepsilon$ in cross-validation scenarios 1-5, and as $y_h = 1_h\mu_h + W\beta + Z_h g_h + \varepsilon_h$ in the cross-validation scenario incorporating the mid-parental value as a covariate (details can be found in section 2.8), where y is an n-dimensional vector of adjusted phenotypic values of mesocotyl length, n is the number of genotypes in both training and test sets, $y_h$ is an h-dimensional vector of adjusted phenotypic values of mesocotyl length for all hybrids, h is the number of all hybrids, $1_n$ and $1_h$ are n- and h-dimensional vectors of ones, μ and $\mu_h$ are the intercepts, W is an h-dimensional covariate vector of mid-parental values, β is the covariate effect, g and $g_h$ are the n- and h-dimensional vectors of genetic effects of the genotyped individuals and genotyped hybrids, Z and $Z_h$ are the design matrices for g and $g_h$, and ε and $\varepsilon_h$ are the random residuals following $\varepsilon \sim N(0, I_n\sigma_\varepsilon^2)$ and $\varepsilon_h \sim N(0, I_h\sigma_{\varepsilon_h}^2)$, where $I_n$ and $I_h$ are identity matrices and $\sigma_\varepsilon^2$ and $\sigma_{\varepsilon_h}^2$ are the residual variance components. For GBLUP, the genetic effects g and $g_h$ are the additive effects, g = a and $g_h = a_h$, with $a \sim N(0, A\sigma_a^2)$ and $a_h \sim N(0, A_h\sigma_{a_h}^2)$, where a and $a_h$ are n- and h-dimensional vectors of additive genetic effects of the genotyped individuals and genotyped hybrids, $\sigma_a^2$ and $\sigma_{a_h}^2$ are the corresponding variance components, and A and $A_h$ are the additive genomic relationship matrices [26]. For EGBLUP, the genetic effects g and $g_h$ contain both additive and additive-by-additive epistatic effects, assuming g = a + p and $g_h = a_h + p_h$, with $p \sim N(0, (A \circ A)\sigma_p^2)$ and $p_h \sim N(0, (A_h \circ A_h)\sigma_{p_h}^2)$, where ∘ denotes the Hadamard product, p and $p_h$ are n- and h-dimensional vectors of epistatic genetic effects of the genotyped individuals and genotyped hybrids, and $\sigma_p^2$ and $\sigma_{p_h}^2$ are the corresponding variance components. The genomic heritability was calculated based on the GBLUP model, where the variance components were estimated separately in the populations of lines and hybrids.

The two Bayesian approaches can be uniformly formulated as $y_r = 1_r\mu + X_r\gamma + e$ in cross-validation scenarios 1-5, with the mid-parental covariate term $W\beta$ added in the covariate scenario, where γ is the vector of marker effects. In BayesB, the prior distribution of a marker effect is assumed to be a mixture of a t distribution with a fixed probability π and a point mass at zero with probability 1-π [27]. In BayesR, the marker effect is assumed to follow a mixture of four normal distributions with zero mean and varied variances; the sum of the proportions of the normal distributions, π = (π₁, π₂, π₃, π₄), is constrained to unity [28]. The phenotypic values of the hybrids in the test set, $\hat{y}_s$, were predicted using the formula $\hat{y}_s = 1_s\hat{\mu} + X_s\hat{\gamma}$.

In the different cross-validation scenarios, theoretically the calibration models using hybrids (scenarios 2-5 and the scenario incorporating the mid-parental value as a covariate) could predict both additive and dominance genetic effects and separate them. However, as only one female was used in our study, deductively, the additive genotypic profiles of the hybrids were completely collinear with their dominance genotypic profiles. Therefore, the additive and dominance genetic effects could not in fact be partitioned. Despite this, the collinearity caused the dominance effect to be compounded with the additive effect, so the genetic merit of the dominance effect was still involved and utilized in the scenarios using hybrids (scenarios 2-5). The BLUP models were implemented in the R [18] package BGLR [29]. The Bayesian approaches were fitted using the GCTB software [30]. The number of iterations, burn-in, and thinning of all models were set to 30,000, 5,000, and 5, respectively.
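For intuition, once the variance components are fixed, GBLUP predictions have a closed form; the study estimated the components with BGLR, so the sketch below, with an assumed variance ratio lam = sigma_e^2/sigma_a^2, is illustrative only. An EGBLUP variant would add a second kernel, the Hadamard product G∘G, for the epistatic term.

```python
import numpy as np

def vanraden_G(X):
    """Additive genomic relationship matrix from (n, m) genotype codes 0/1/2."""
    p = X.mean(axis=0) / 2.0                 # allele frequencies
    Z = X - 2.0 * p                          # centred genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup_predict(G, y, train, test, lam=1.0):
    """BLUP of test genetic values: G_ts (G_tt + lam*I)^-1 (y_t - mu)."""
    mu = y[train].mean()
    K = G[np.ix_(train, train)] + lam * np.eye(len(train))
    alpha = np.linalg.solve(K, y[train] - mu)
    return mu + G[np.ix_(test, train)] @ alpha
```

With lam treated as known, this reduces to kernel ridge regression; in practice lam would be replaced by the REML or Gibbs estimates of the variance components.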
Cross-validation scenarios

The 401 hybrids were stochastically and evenly divided into five folds. One fold formed the test set. Five scenarios for composing the training set were considered, as follows (a code sketch of the fold partitioning and these compositions is given at the end of this subsection):

Scenario 1) reference hybrids' parents: the parental lines of the hybrids not in the test set formed the training set;

Scenario 2) reference hybrids: the other four folds of hybrids besides the test set constituted the training set;

Scenario 3) reference hybrids and their parents: the other four folds of hybrids besides the test set and their parents collectively comprised the training set;

Scenario 4) reference hybrids and all lines: the other four folds of hybrids besides the test set and all lines combined formed the training set;

Scenario 5) reference hybrids and parents of the test set: the other four folds of hybrids besides the test set and the parental lines of the test set collectively comprised the training set.

Another scenario, using the mid-parental values of all hybrids as a covariate in the prediction models, was considered. Its training set was constituted by the four folds of hybrids besides the test set. In this scenario, the phenotypic data of all parents and reference hybrids were taken advantage of, which contains comparable reference information to scenario 5. Each cross-validation scenario was repeated 20 times, yielding in total 100 random partitionings of training and test sets for each scenario. The prediction accuracy of GS and MAS was evaluated by combining the five test sets in each repeat of cross-validation. Specifically, the genomically predicted genetic values of the five test sets in each repeat of cross-validation were combined, and the Pearson correlation coefficient between the combined predicted values and the corresponding adjusted phenotypic values was calculated to measure the genomic prediction accuracy. Thus, 20 prediction accuracies from the 20 repeats of cross-validation are shown for each training set composition scenario. In the mid-parental value prediction scenario, there was no model training and the predicted genetic values of the hybrids in the test set were always the mid-parental values of their parents. Therefore, when the five hybrid test sets in each repeat of cross-validation were combined to measure the prediction accuracy, the predicted values in the combination were invariable regardless of the sampling of training and test sets; that is, they were the mid-parental values of the total hybrid population. Due to this, there is just one prediction accuracy value for the mid-parental value prediction scenario. The different training set composition scenarios are illustrated in S1 Fig. All prediction accuracies of GS and MAS were z-transformed for statistical testing and analysis of variance (ANOVA).

To investigate the impact of training set size on genomic prediction accuracy, 5% to 80% of the reference hybrids in each training set composition scenario were randomly sampled to establish training set subsets of different sizes. The sampling of training set subsets was repeated 20 times for each sampling ratio, yielding in total 2000 (20×100) calibrations and predictions in marker-assisted selection and genomic prediction.
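One repeat of the five-fold partitioning and the five training-set compositions can be sketched as follows. This is an illustration with our own index conventions: parent_of maps each hybrid to the index of its male parent among the 402 lines, and the shared female parent is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

n_hyb = 401
folds = np.array_split(rng.permutation(n_hyb), 5)   # five random folds

def training_set(scenario, test_fold, parent_of, all_lines):
    """Return (hybrid indices, line indices) forming the training set."""
    test = set(folds[test_fold].tolist())
    ref_hyb = [i for i in range(n_hyb) if i not in test]
    if scenario == 1:                    # parents of the reference hybrids
        return [], sorted({parent_of[i] for i in ref_hyb})
    if scenario == 2:                    # reference hybrids only
        return ref_hyb, []
    if scenario == 3:                    # reference hybrids + their parents
        return ref_hyb, sorted({parent_of[i] for i in ref_hyb})
    if scenario == 4:                    # reference hybrids + all lines
        return ref_hyb, sorted(all_lines)
    if scenario == 5:                    # reference hybrids + test-set parents
        return ref_hyb, sorted({parent_of[i] for i in test})
    raise ValueError("scenario must be 1-5")
```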
Classification of SNPs in genomic prediction

The SNP markers were classified into two groups by GWAS. One group consisted of the TA-SNPs identified in GWAS and the other group comprised the remaining genome-wide SNPs. GWAS was implemented using all lines and using the total population including all lines and hybrids, respectively. The GWAS model was identical to that used in MAS. The threshold of significance determining the TA-SNPs was decided by the best-performing MAS model with the overall highest prediction accuracy. The GBLUP model was used to validate the effectiveness of classifying markers in genomic prediction. The group of TA-SNPs was fitted as a fixed or a random effect in the model, respectively. As a fixed effect, considering that the number of TA-SNPs would be large, a principal component analysis was utilized; the principal components accounting for 95% of the variation were used in place of the TA-SNPs in the model. When the TA-SNPs were fitted as a random effect, two separate kernels, composed of the TA-SNPs and of the remaining genome-wide SNPs respectively, were fitted in the GBLUP model. The GBLUP model was implemented in the R package BGLR [29] with 30,000 iterations, a burn-in of 5,000, and a thinning of 5.

Phenotypic analysis statistics and population diversity

Results of the phenotypic analysis are summarized in Table 1 and shown in detail in S3 Table. The BLUE of mesocotyl length ranged from -0.14 to 5.87 for parental lines and from -0.1 to 5.61 for hybrids. Heterosis analysis indicated that among the 401 hybrids, 41% showed high-parent heterosis, 19% showed mid-parent heterosis, 26% showed low-parent heterosis and the remaining 14% showed hybrid inferiority (S4 Table and …).

The genetic diversity of the parental lines was overall high, as indicated by the wide range of genetic similarities between parental lines (Fig 1).

Predictability of marker-assisted selection

The prediction accuracies of MAS in different training set composition scenarios are shown in Fig 2 and S5 Table. Using mid-parental values to predict the performances of hybrids (mid-parental value prediction) resulted in an accuracy of 0.59, which was used as a reference for the other prediction scenarios and is marked with a red dashed line in Fig 2. When the training population contained only the parental lines of the reference hybrids for cross-validation (scenario 1), which assumed no data on hybrids is available, the prediction accuracy ranged from 0 to 0.39, increasing to its maximum as the significance threshold (P value) increased to 0.001; this was clearly lower than the result of mid-parental value prediction. The effect of the P value on MAS prediction was significant. When only the reference hybrids were used in the training set (scenario 2), which is the typical scheme commonly applied in other GS studies of rice hybrid prediction, the prediction accuracy ranged from 0 to 0.48, increasing to its maximum as the P value increased to 0.0025; this was lower than the result of mid-parental value prediction. Surprisingly, when the P value was 0.0075 or 0.01, the prediction accuracy dramatically dropped to less than 0.15. This might indicate that the increase in the number of markers at a liberal P value brings more noise than signal into the multiple linear regression models we applied. When the reference hybrids and their parental lines were used in the training set (scenario 3), which modelled the situation where a phenotypic test was conducted for some of the hybrids and their parental lines, the prediction accuracy varied from 0.01 to 0.51, with the maximum value achieved at a P value of 0.001; this was lower than the result of mid-parental value prediction.
When the training set contained the reference hybrids and their parental lines, plus the parental lines of the untested hybrids (scenario 4), which modelled the situation where the parental lines of untested hybrids have been tested, the prediction accuracy ranged from 0.39 to 0.61, increasing to its maximum as the P value increased to 0.005. When the P value was 0.0025, 0.005 or 0.0075, the prediction accuracies in scenario 4 were significantly higher than the result of mid-parental value prediction. When the training set contained the reference hybrids and the parental lines of the test set (scenario 5), which assumed the paternal lines of the reference hybrids are not helpful to the prediction of the test hybrids due to genetic distance, the prediction accuracy ranged from 0.18 to 0.61, increasing to its maximum as the P value increased to 0.005. When the P value was 0.005 or 0.0075, the prediction accuracy in scenario 5 was also significantly higher than the result of mid-parental value prediction. Scenario 4 achieved a higher prediction accuracy over a wider range of P values and was thus the best scenario.

Two-factor variance analysis showed that the mean variance of the scenario was over threefold higher than that of the P value, indicating that the training set composition had a much greater impact on prediction accuracy than the P value (S6 Table). The interaction between the scenario and the P value was also significant but relatively less important (S6 Table).

Fig 2. Prediction accuracies of mesocotyl length using marker-assisted selection with different significance thresholds (P value) for selecting trait-associated SNP markers, based on different training set composition scenarios. The red dashed line indicates the prediction accuracy using mid-parental values. The numbers in parentheses indicate the number of effective trait-associated SNPs (TA-SNPs). The asterisks indicate that the prediction accuracies based on the respective training set composition scenarios were significantly higher (p < 0.05, t-test) than the prediction accuracy using mid-parental values. All prediction accuracies were z-transformed for statistical testing.

Although there was no single best P value for all training set compositions, 0.0025 was a better choice when all compositions were considered (Fig 2).

Predictability of genomic prediction

The average prediction accuracies obtained using the different GP models are shown in Fig 3. With the GBLUP model, the prediction accuracies of scenarios 1 to 5 were 0.54, 0.63, 0.6, 0.63 and 0.67, respectively, which were all significantly higher than the result of mid-parental value prediction except for scenario 1. The prediction accuracy of scenario 2 was significantly higher than that of scenario 1, indicating that reference hybrids as the training set performed better than the parents of the reference hybrids. The prediction accuracy of scenario 3 was significantly higher than that of scenario 1 but significantly lower than that of scenario 2, implying that combining the parents of the reference hybrids with the reference hybrids as the training set was better than only using the parents but inferior to using the reference hybrids alone. However, the prediction accuracy of scenario 4, which was the best-performing group in MAS, was significantly higher than that of scenario 3 and equal to that of scenario 2, demonstrating that integrating all parents into a training set consisting of reference hybrids would only marginally improve the predictability.
The prediction accuracy of scenario 5 was the highest among all scenarios, which indicated that integrating the parents of the test set into a training set constituted by reference hybrids would significantly improve the predictability in GS. Comparing the different genomic prediction models, their prediction accuracies were quite similar in each scenario except for scenario 4, in which the EGBLUP method performed conspicuously better than the other approaches (Fig 3).

Fig 3. The asterisks indicate that the genomic prediction accuracies based on the respective training set composition scenarios were significantly higher (p < 0.05, t-test) than the prediction accuracy using mid-parental values. Different letters above the bars indicate that the genomic prediction accuracies of the scenarios within a specific model were significantly different (p < 0.05, t-test). All prediction accuracies were z-transformed for statistical testing.

In the variance analysis, the mean variance of the scenario was over 30-fold higher than that of the prediction model (S7 Table), indicating that the training set composition had a much greater effect on prediction accuracy than the prediction model. The interaction between the scenario and the prediction model was also significant but relatively less important (S7 Table).

Using parental performance as a covariate in genomic prediction

Previous studies have concluded that incorporating parent information into the model can improve the prediction accuracy [13]. We therefore also incorporated the mid-parental value as a covariate into the model. Surprisingly, the prediction accuracy was markedly and significantly higher than that of scenario 2 (using only reference hybrids as the training set), demonstrating the great advantage of integrating the mid-parental value as a covariate into the model (Fig 4).

Different training set sizes with subsets of reference hybrids

Next, we investigated the effect of training set size on genomic prediction under different training set compositions. For scenarios 2 to 5, n hybrids out of the 321 reference hybrids in the training set of cross-validation were randomly selected to form the reference hybrid subsets, which, together with the lines defined in scenarios 2 to 5 respectively, constituted training sets of different sizes, where n ranged from 5% to 100%. In scenario 1, the parents of the sampled reference hybrids formed the training set. The results are given in Fig 5. For all methods, the prediction accuracies in scenarios 1 to 5 all grew as the sampling rate of reference hybrids n increased from 5% to 100%, and reached a plateau when n reached 40%. The specific sample sizes are shown in S8 Table. The increasing trend of prediction accuracies in all scenarios was similar for the different prediction methods, except for scenario 2, in which the two Bayesian methods displayed no apparent improvement when n increased from 5% to 10%. The prediction accuracies of the two BLUP methods were significantly higher than those of the two Bayesian approaches in scenario 2 when n < 20%, but were similar when n ≥ 20%. Overall, the BLUP methods outperformed the Bayesian approaches when the training set was small (n < 20%) and the male parents of the test hybrids were not used, i.e., scenarios 1-3.

GBLUP separately fitting trait-associated and -unassociated markers

Since genome-wide markers can be used to identify markers associated with the trait, i.e., TA-SNPs, via GWAS, it might be better if TA-SNPs and trait-unassociated markers were fitted separately in GS.
GBLUP separately fitting trait-associated and -unassociated markers

Since genome-wide markers can be used to identify trait-associated markers (TA-SNPs) via GWAS, it might be better to fit the TA-SNPs and the trait-unassociated markers separately in GS. The TA-SNPs can be fitted either as a fixed effect or as a random effect (see Methods for details). We first performed GWAS using all parental lines and selected TA-SNPs at the P-value threshold that gave the overall highest prediction accuracies in MAS, i.e., < 0.005. The results are shown in Fig 6A: the prediction accuracies of scenarios 1 to 5 with the TA-SNPs incorporated in GBLUP as a fixed effect were 0.566, 0.639, 0.613, 0.616 and 0.655, respectively, while with the TA-SNPs incorporated as a random effect they were 0.573, 0.665, 0.628, 0.625 and 0.666, all significantly higher than with the TA-SNPs used as a fixed effect. Under both treatments, scenario 5 gave the highest accuracy (0.655 and 0.666, respectively). Next, we performed GWAS using all parental lines and hybrids and again selected TA-SNPs at P < 0.005 (Fig 6B): the accuracies of scenarios 1 to 5 with the TA-SNPs as a fixed effect were 0.622, 0.705, 0.678, 0.671 and 0.691, respectively, while with the TA-SNPs as a random effect they were 0.616, 0.703, 0.669, 0.664 and 0.687. Interestingly, the accuracies of all scenarios with the TA-SNPs as a fixed effect were now higher than with the TA-SNPs as a random effect, with significant differences for scenarios 1, 3 and 4. Under both treatments, scenario 2 gave the highest accuracy (0.705 and 0.703, respectively). We also compared these accuracies with those obtained by the undifferentiated use of all SNPs in GBLUP. When the TA-SNPs came from GWAS on the lines only, fitting them as a fixed effect significantly increased the accuracies of scenarios 1 to 3 but significantly reduced those of scenarios 4 and 5 relative to the undifferentiated use of all SNPs (Fig 6A). Likewise, fitting them as a random effect significantly increased the accuracies of scenarios 1 to 3, significantly reduced that of scenario 4, and made no difference for scenario 5 (Fig 6A). In contrast, when the TA-SNPs came from GWAS on all lines and hybrids, the accuracies of scenarios 1 to 5 were significantly higher than those with the undifferentiated use of all SNPs, regardless of whether the TA-SNPs were fitted as a fixed or a random effect (Fig 6B). In summary, the best modelling choice was to fit the TA-SNPs from GWAS on all lines and hybrids as a fixed effect in GBLUP under scenario 2.
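The two treatments of TA-SNPs compared above can be sketched as follows. This is a hedged illustration with our own function names and placeholder variance ratios, not the exact model fitted in the study.

```python
import numpy as np

def _grm(M):
    Mc = M - M.mean(axis=0)
    return Mc @ Mc.T / M.shape[1]

def gblup_ta_snps(M_ta, M_rest, y, lam_ta=1.0, lam_rest=1.0, as_fixed=False):
    """Fit trait-associated SNPs (M_ta) either as fixed covariates or as a
    second random kernel next to the background kernel built from M_rest.
    Returns fitted genetic values on the training individuals; the
    variance ratios lam_* stand in for REML estimates."""
    n = len(y)
    G_rest = _grm(M_rest)
    if as_fixed:
        X = np.column_stack([np.ones(n), M_ta])  # TA-SNP genotypes as fixed effects
        Vinv = np.linalg.inv(G_rest + lam_rest * np.eye(n))
        beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
        return X @ beta + G_rest @ (Vinv @ (y - X @ beta))
    G_ta = _grm(M_ta)
    C = G_ta / lam_ta + G_rest / lam_rest        # combined genetic covariance
    Vinv = np.linalg.inv(C + np.eye(n))
    ones = np.ones(n)
    mu = ones @ Vinv @ y / (ones @ Vinv @ ones)  # GLS intercept
    return mu + C @ (Vinv @ (y - mu))
```

The fixed-effect branch assumes a modest number of TA-SNPs relative to the number of individuals, so that the fixed-effect system remains well determined.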
Discussion

This study demonstrated the potential of GS for breeding hybrid rice varieties suited to direct seeding, capitalizing on mesocotyl length as an indicative trait. Our analyses were based on 401 hybrid combinations produced by test-crossing 401 sequenced rice varieties from Southeast Asia, Guangdong and South Asia with the sequenced variety Taifeng A, which underpinned the prediction of mesocotyl length in hybrid rice. The inbred lines used as male parents are the ancestral parents of many elite varieties and are genetically diverse. Taifeng A is a female sterile line with excellent agronomic characters and is widely used in developing hybrid varieties. The results of our study therefore have high practical value.

Fig 6. Genomic prediction accuracies of mesocotyl length based on different training set composition scenarios using GBLUP with the SNP markers partitioned into trait-associated and -unassociated sets via genome-wide association study (GWAS) based on all lines (A) and on all lines and hybrids (B). The trait-associated SNPs (TA-SNPs) were used either as fixed covariates (fixed effect) or as an independent random kernel (random effect) in the GBLUP model. Undifferentiated use of SNPs means using all SNPs in the GBLUP model. Different letters above the bars indicate genomic prediction accuracies achieved by the different treatments of SNPs that differ significantly (p < 0.05, t-test) after Fisher's z-transformation.

Relatedness driving the prediction accuracy in MAS

Previous studies have demonstrated that the relatedness between the training and test sets affects prediction accuracy in MAS [31,32]; our study validates this finding in rice. The training set compositions of scenarios 4 and 5 included the parents of the test hybrids, and their prediction accuracies were remarkably higher than those of the other scenarios, especially when the P-value threshold for declaring significant SNPs was relatively liberal, which can introduce redundant predictors and impede the predictability of the MAS model (Fig 2). The relatively high relatedness between the training and test sets in scenarios 4 and 5 compensated for this nuisance. Since relatedness is described by SNP genotypes and its reliable estimation requires a certain number of markers, when the P-value threshold was strict and only a few significant SNPs were available, the impact of relatedness was negligible and prediction accuracy was driven by the number of available significant SNPs (Fig 2). The impact of relatedness can also be inspected in scenarios 1-3: the prediction accuracies in scenario 2 were conspicuously higher than those in scenarios 1 and 3 when the P-value thresholds were liberal (>= 0.0025), which yielded comparable numbers of significant SNPs across the three scenarios. Theoretically, the relatedness between the training and test sets in scenario 2 is higher than in scenarios 1 and 3, because the male parents of the reference hybrids are genetically distant from the test hybrids, so including them in the training set impairs relatedness.

GS is superior to MAS in rice mesocotyl length prediction

In the absence of genotypic data, hybrid performance can be predicted from mid-parental values. We used the mid-parental value of the hybrids as a reference for the genomics-enabled predictions.
Interestingly, the mid-parental prediction accuracy was overall comparable to that of MAS but significantly lower than that of GS, indicating that GS is an efficient genomics-enabled approach for mesocotyl length breeding in hybrid rice. The conspicuous advantage of scenarios 2-5 over scenario 1 can be attributed to the accommodation of dominance effects in addition to additive effects, and also to the relatedness exploited, because the male parents of the reference hybrids are genetically distant from the test hybrids. The superiority of scenario 5 over scenario 4 substantiates the importance of relatedness (Fig 3). The advantage of EGBLUP over the other genomic prediction approaches in scenarios 3 and 4 indicates that using more inbred lines helps to capture epistatic effects (Fig 3).

Incorporating parental performance as covariates improves prediction accuracy

Compared with genomic prediction based solely on genomic data, including parental phenotypes in the model can significantly improve prediction accuracy [13]; our study underpins this finding (Fig 4). Notably, the amount of reference information in training set composition scenario 4 was identical to that in the scenario using mid-parental values as phenomic data in the model, yet the prediction accuracies of the former were significantly lower than those of the latter (Figs 3 and 4), indicating that using mid-parental values as phenomic data in GS is a more efficient way to exploit parental information. Xu et al. [13] noted that using parental phenotypes as a covariate (predictor) in the model may intrinsically capture environmental effects and genotype-by-environment interactions. Breeders can exploit this because the phenotypes of parental lines are often available prior to crossing, so no additional expenditure is needed.

Separately modelling the trait-associated and -unassociated markers significantly improved the genomic prediction accuracy

Previous studies have demonstrated that predictability can be significantly enhanced by integrating associated markers into the GS model [33-35], and we found similar results here. The prediction accuracies for all scenarios with associated markers identified by GWAS on all parental lines and hybrids, whether fitted as a fixed or a random effect, were significantly higher than those obtained by undiscriminatingly using all SNPs (Fig 6B). By comparison, the advantage of using the SNPs distinguishingly in genomic prediction diminished when the GWAS was run on the lines only (Fig 6A). This can be explained by the fact that enlarging the GWAS population increases the power to identify trait-associated markers, thereby enhancing genomic predictability. Comparing the effectiveness of TA-SNPs as fixed versus random effects: when GWAS was conducted on all parental lines and hybrids, fitting the TA-SNPs as a fixed effect in the genomic prediction model was overall better than fitting them as a random effect, but the order reversed when GWAS used only the parental lines (Fig 6). This can be attributed to the GWAS on all lines and hybrids being more powerful and identifying more reliable TA-SNPs. Because a fixed effect in a linear model generally exerts a stronger influence than a random effect, a more reliable identification of trait-marker associations in GWAS justifies the fixed-effect treatment.
If the GWAS is less powerful, the more conservative choice of treating the trait-associated markers as random effects is more appropriate. Overall, prior to implementing GS, we suggest using GWAS to identify trait-associated markers and modelling the trait-associated and -unassociated markers separately in the GS model.

Conclusion

Based on a population of 402 rice lines and their 401 hybrid combinations, we demonstrated that using half-sib hybrids as the training set, together with the mid-parental phenotypic values of all hybrids fitted as a covariate in the genomic models, achieves an optimal prediction of mesocotyl length, a trait indicative of the ease of direct seeding in rice. Including approximately 60 hybrids (20% of the total) in the training set suffices to obtain a prediction accuracy comparable to using all hybrids. Dividing the SNPs into trait-associated and -unassociated groups via GWAS on the entire population can further improve prediction accuracy. In practice, we suggest first running GWAS on all observations to separate the trait-associated and -unassociated markers, and then using the phenotyped hybrids as the training set to predict the untested hybrids, with the parents' phenotypes fitted as a covariate in GP models that accommodate the trait-associated and -unassociated markers separately.

Formal analysis: Liang Chen, Sang He.
In-Context Learning through the Bayesian Prism

In-context learning is one of the surprising and useful features of large language models. How it works is an active area of research. Recently, stylized meta-learning-like setups have been devised that train these models on sequences of input-output pairs (x, f(x)) from a function class using the language modeling loss and observe generalization to unseen functions from the same class. One of the main discoveries in this line of research has been that for several problems, such as linear regression, trained transformers learn algorithms for learning functions in context. However, the inductive biases of these models that produce this behavior are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. It has been shown that high-capacity transformers mimic the Bayesian predictor for linear regression. In this paper, we show empirical evidence of transformers exhibiting the behavior of this ideal learner across different linear and non-linear function classes. We also extend the previous setups to the multitask setting and verify that transformers can do in-context learning there as well, and that the Bayesian perspective sheds light on this setting too. Finally, via the example of learning Fourier series, we study the inductive bias of in-context learning. We find that in-context learning may or may not exhibit simplicity bias, depending on the pretraining data distribution.

Introduction

In-context learning (ICL) is one of the ingredients behind the astounding performance of large language models (LLMs) Brown et al. [2020], Workshop [2023], Touvron et al. [2023]. Unlike traditional learning, ICL is the ability to learn new functions f, without weight updates, during inference from input-output examples (x, f(x)); in other words, learning happens in context. For instance, given the prompt up -> down, low -> high, small ->, a pretrained LLM will likely produce the output big: it infers that the function in the two examples maps an input to its antonym and applies it to the new input. This behavior extends to more sophisticated and novel functions unlikely to have been seen during training, and has been the subject of intense study, e.g., Min et al. [2022b], Webson and Pavlick [2022], Min et al. [2022a], Liu et al. [2023], Dong et al. [2023]. Apart from its applications in NLP, ICL can more broadly be viewed as a method for meta-learning Schmidhuber [1987], Thrun and Pratt [2012], Hospedales et al. [2022], where the model learns to learn a class of functions. Theoretical understanding of ICL is an active area of research. Since the real-world datasets used for LLM training are difficult to model theoretically and are very large, ICL has also been studied in stylized setups Xie et al. [2022], Chan et al. [2022b], Garg et al. [2022], Wang et al. [2023], Hahn and Goyal [2023], which probe different facets of ICL. In this paper, we focus on the framework of Garg et al. [2022], which is closely related to meta-learning. Unlike in NLP, where training is done on documents with a next-token prediction task, here the training and test data look similar: the training data consists of inputs of the form ((x_1, f(x_1)), ..., (x_k, f(x_k)), x_{k+1}) with output f(x_{k+1}), where the x_i ∈ R^d are chosen i.i.d. from a distribution and f : R^d → R is a function from a family of functions, for example, linear functions or shallow neural networks.
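For concreteness, here is a minimal sketch of how such training prompts are generated and how the training loss is evaluated, for the linear-function case. The `model` interface is our assumption for illustration, not the exact implementation of Garg et al. [2022].

```python
import torch

def sample_linear_prompts(d=20, p=40, batch=64):
    """Sample a batch of prompts for the linear-function family:
    x_i ~ N(0, I_d) and f(x) = w^T x with w ~ N(0, I_d)."""
    xs = torch.randn(batch, p, d)
    w = torch.randn(batch, d, 1)
    ys = (xs @ w).squeeze(-1)          # targets f(x_i), shape (batch, p)
    return xs, ys

def icl_loss(model, xs, ys):
    """Squared-error loss averaged over every prefix length: the model
    predicts f(x_{i+1}) from the first i (x, f(x)) pairs."""
    preds = model(xs, ys)              # assumed to return (batch, p) predictions
    return ((preds - ys) ** 2).mean()
```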
We call this setup, including an extension we introduce in this paper, MICL; cf. Min et al. [2022a]. A striking discovery in Garg et al. [2022] was that for several function families, transformer-based language models learn during pretraining to implicitly implement well-known algorithms for learning those functions in context. For example, when shown 20 examples of the form (x, w^T x), where x, w ∈ R^20, the model correctly outputs w^T x_test on a test input x_test. Apart from linear regression, they show sparse linear regression and shallow neural networks, where the trained model appears to implement a well-known algorithm, and decision trees, where the trained model does better than the baselines. Two follow-up works, Akyürek et al. [2022] and von Oswald et al. [2022], largely focused on the case of linear regression. Among other things, they showed that transformers with one attention layer learn to implement one step of gradient descent on the linear regression objective, with further characterizations for larger numbers of layers. We ask: can we extend MICL to more general families?

Bayesian predictor. An ideal language model (LM) with unlimited training data and compute would learn the pretraining distribution, as that results in the smallest loss. Such an LM produces its output by simply sampling from the pretraining distribution conditioned on the input prompt, and is often called the Bayesian predictor. Many works assume that trained LMs are Bayesian predictors, e.g., Saunshi et al. [2021], Xie et al. [2022], Wang et al. [2023]. In particular, Xie et al. [2022] study a synthetic setup where the pretraining distribution is given by a mixture of hidden Markov models, and Wang et al. [2023] relate ICL to topic modeling. Most relevant to the present paper, Akyürek et al. [2022] show that in the Garg et al. [2022] setup for linear and ridge regression, transformer models tend towards the Bayesian predictor as their capacity is increased. They find that in the underdetermined setting, namely when the number of examples is smaller than the input dimension, the model learns to output the least-L2-norm solution. How extensively do high-capacity LMs mimic the Bayesian predictor?

Simplicity bias. The success of neural networks, and in particular of transformers, across a very wide range of domains and modalities, including text, vision, audio, multimodal data, RL, biology, and more, begs for an explanation. A general principle, related to Occam's razor and Solomonoff induction, is that induction is made possible by simplicity bias: the tendency of machine learning algorithms to prefer simpler functions. It has been suggested that the success of neural networks is also due to simplicity bias in their training. There are many notions of simplicity, e.g., Domingos [1999], and it is an active area of research to understand the inductive bias of neural network training more quantitatively and formally; see, e.g., Mingard et al. [2023], Goldblum et al. [2023], Bhattamishra et al. [2022] and references therein. For pretraining, the tendency of neural networks to prefer lower-frequency functions has been dubbed spectral bias and is one of the well-studied notions of simplicity; see, e.g., Rahaman et al. [2019], Canatar et al. [2021], Fridovich-Keil et al. [2022].
LLMs achieve good performance on a very diverse range of tasks, even on novel tasks learned only in context and not seen during pretraining. For example, Hahn and Goyal [2023] find that text-davinci-003, a model in the GPT series, performs quite well on synthetic compositional string-manipulation tasks, most of which are unlikely to have arisen in the training data. How do LLMs accomplish this? Does in-context learning also enjoy a simplicity bias, like pretraining?

ICL inductive biases in practical LMs. Recent works, e.g., Wei et al. [2023], Pan et al. [2023], Si et al. [2023], show that small and large pretrained models perform ICL differently when tested on various NLP tasks. Small models first recognize the task from the prompt (based on their pretraining semantic priors) and then solve it for the evaluation examples in the prompt, whereas larger models can learn the task from the prompt itself, an emergent ability only seen at larger scale. Since each task in our MICL setup is a different function from a family, the model must learn the task in context to improve performance. Therefore, despite differences in training setups, the way ICL is performed in our setting can be thought of as similar to how larger models perform it in the NLP domain.

Our contributions. We extend the Garg et al. [2022] setup to include multiple families of functions. For example, the prompts could be generated from a mixture of tasks where the function f is chosen to be either a linear function or a decision tree with equal probability. We call this extended setup MICL. We experimentally study MICL and find that high-capacity transformer-based LMs can learn in context when given such task mixtures: the ICL error curves of multitask-trained LMs on individual tasks look essentially identical to the ICL error curves of single-task-trained LMs on the respective tasks. To understand how this ability arises, we investigate in depth whether high-capacity LMs mimic the Bayesian predictor, and we provide direct and indirect evidence that they do; this could pave the way to understanding why they appear to implement well-known algorithms while learning in context. In general, Bayesian inference leads to high-dimensional integrals that can be hard to estimate. Where this difficulty does not arise we provide quantitative evidence for Bayesian prediction, and where it does we provide qualitative arguments. Generalizing the results of Garg et al. [2022], we show that transformers solve several linear inverse problems, a class of problems with important applications. We also show that transformers can learn some non-linear function families, and here too they mimic the Bayesian predictor. The ability to solve mixed tasks also arises naturally as a consequence of Bayesian prediction: there is no need for the model to first identify the task and then solve it. Finally, we investigate the inductive bias in a simple MICL setting for learning functions given by Fourier series. We measure the complexity of a function by the highest frequency that occurs in its Fourier expansion. If ICL were biased towards fitting functions of lower maximum frequency, this would suggest a bias for lower frequencies, like the spectral bias of pretraining. We find that the LM mimics the Bayesian predictor.
This means that the ICL inductive bias of the model is determined by the pretraining data distribution: if during pretraining all frequencies (up to a fixed maximum frequency) are equally represented, then during ICL the LM shows no preference for any frequency; if, on the other hand, lower frequencies predominate in the pretraining data, then during ICL the LM prefers lower frequencies. Chan et al. [2022a] study the effect of the pretraining data distribution on ICL, and Chan et al. [2022b] study the inductive biases of transformers for in-weights and in-context learning and the effect of training data; however, the problem settings in both papers are quite different from ours, and neither considers simplicity bias. Our results are in line with Hahn and Goyal [2023], who show, mirroring classic results on Solomonoff induction, that pretraining distributions biased towards simpler functions lead to ICL abilities for discrete compositional problems; they do not, however, explore Bayesian prediction quantitatively.

Background

We first discuss the in-context learning setup for learning function classes as introduced in Garg et al. [2022]. Let D_X be a probability distribution on R^d, let F be a family of functions f : R^d → R, and let D_F be a distribution on F. For simplicity, we often write f ~ F to mean f ~ D_F. To construct a prompt P = (x_1, f(x_1), ..., x_p, f(x_p), x_{p+1}) of length p, we sample inputs x_i ~ D_X i.i.d. for i ∈ {1, ..., p+1}. A transformer-based language model M_θ is trained to predict f(x_{p+1}) given P, using the objective

min_θ E_{f, x_{1:p+1}} [ (1/(p+1)) Σ_{i=0}^{p} ℓ(M_θ(P^i), f(x_{i+1})) ],   (1)

where P^i denotes the sub-prompt containing the first i input-output examples as well as the (i+1)-th input, i.e., (x_1, f(x_1), ..., x_i, f(x_i), x_{i+1}), and x_{1:p} = (x_1, ..., x_p). While other choices of the loss function ℓ(·, ·) are possible, since we study regression problems we use the squared-error loss, in accordance with Garg et al. [2022], Akyürek et al. [2022], von Oswald et al. [2022]. At test time, we present the model with prompts P_test that (with high probability) were unseen during training, and compute the error when k in-context examples are provided:

loss@k = E_{f, P_test} [ ℓ(M_θ(P^k), f(x_{k+1})) ].

By definition, in-context learning can be tested by measuring loss@k at increasing values of k and checking whether the error goes down as more examples are provided Olsson et al. [2022].

PME. We mentioned earlier that an ideal LM would learn the pretraining distribution; this happens when using the cross-entropy loss. Since we use the square loss in (1), the predictions of the ideal model can be computed using the posterior mean estimator (PME) from Bayesian statistics. For each prompt length i, the optimal prediction minimizes the corresponding summand in (1) and is given by

M(P^i) = E_{f ~ D_F} [ f(x_{i+1}) | P^i ].

This is the optimal solution for prompt P, which we refer to as the PME. Please refer to §A.1 for the technical details behind this computation.
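For the noiseless dense-regression case with prior w ~ N(0, I_d), the PME has a closed form: the posterior mean is the minimum-L2-norm interpolant of the prompt. A small sketch, with our own function name:

```python
import numpy as np

def pme_dense(X, y, x_query):
    """PME for noiseless linear regression with prior w ~ N(0, I_d):
    the posterior mean is the minimum-L2-norm interpolant of the
    prompt, which np.linalg.pinv returns when k < d."""
    w_pme = np.linalg.pinv(X) @ y      # X: (k, d) prompt inputs, y: (k,) targets
    return x_query @ w_pme
```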
Model and training details

We use the decoder-only transformer architecture Vaswani et al. [2017] as used in the GPT models Radford et al. [2019]. Unless specified otherwise, we use 12 layers, 8 heads, and a hidden size (d_h) of 256 for all of our experiments. For encoding the inputs x_i and the outputs f(x_i), we use the same scheme as Garg et al. [2022], which uses a linear map E ∈ R^{d_h × d} to embed the inputs x_i as Ex_i. Transformer models with standard position encodings are known to struggle with length generalization Press et al. [2022], and we observed this issue in our in-context learning setup as well. We found that removing position encodings altogether significantly improved length generalization for both dense and sparse linear regression while maintaining virtually the same performance in the training regime (see Figure 7 in the Appendix; d = 20, p = 40, and testing was done up to length 100). These observations are in line with Bhattamishra et al. [2020] and Haviv et al. [2022], which show that transformer language models without explicit position encodings can still learn positional information. Hence, for the rest of our experiments, unless specified, we do not use any positional encodings while training our models.

Do transformers learn optimal predictors in context?

We evaluate transformers on a family of linear and non-linear regression tasks. On the tasks where it is possible to compute the Bayesian predictor, we study how close the solutions obtained by the transformer are to this characterization. In this section, we focus on the single-task ICL setting (i.e., the model is trained to predict functions from a single family); mixtures of tasks are discussed in §4.

Linear inverse problems

In this section, the class of functions is fixed to the class of linear functions across all problems, i.e., F = {f : f(x) = w^T x, w ∈ R^d}; the problems differ in the prior on w. The case p = d corresponds to linear regression; this and the other problems in this section fall under linear inverse problems. Linear inverse problems are classic problems arising in diverse applications in engineering, science, and medicine. Here one wants to estimate model parameters from a few linear measurements. Often these measurements are expensive and can be fewer in number than the number of parameters (p < d). Such seemingly ill-posed problems can still be solved if the parameters satisfy structural constraints, which can take many forms, from sparsity to low-rank structure. The influential convex-programming approach for the sparse case was greatly generalized to apply to many more types of inverse problems; see Chandrasekaran et al. [2012]. In this section, we show that transformers can solve some of these in context. The problem-specific structural constraints are encoded in the prior for w.

Function classes and baselines

Dense Regression (F_DR). This represents the simplest case of linear regression as studied in Garg et al. [2022], with a standard Gaussian prior on w. The remaining tasks place structured priors on w: Sparse Regression (F_SR; w has s non-zero coordinates), Sign-Vector Regression (F_SVR; each coordinate of w is +1 or -1), Low-Rank Regression (w is the flattening of a low-rank matrix), and Skewed-Covariance Regression (F_Skew-DR; w is Gaussian with a skewed covariance Σ). The corresponding baselines are Lasso, L∞-norm minimization, nuclear (L*) norm minimization, and the PME obtained by minimizing w^T Σ^{-1} w, respectively.
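The convex-programming baselines for these problems can be written in a few lines of CVXPY, the library we use for the norm-minimization baselines. The function names and the vectorization order for the low-rank case are our own illustrative choices.

```python
import cvxpy as cp

def structured_recovery(X, y, norm="l1"):
    """Recover w from k < d noiseless measurements y = Xw by minimizing
    a structure-inducing norm (Chandrasekaran et al. [2012]):
    L1 for sparse w, L-infinity for sign-vector w."""
    w = cp.Variable(X.shape[1])
    obj = {"l1": cp.norm1(w), "linf": cp.norm_inf(w)}[norm]
    cp.Problem(cp.Minimize(obj), [X @ w == y]).solve()
    return w.value

def lowrank_recovery(X, y, shape):
    """Nuclear-norm (L*) minimization for low-rank regression, where
    the weight vector is the flattening vec(W) of a low-rank matrix W."""
    W = cp.Variable(shape)
    cp.Problem(cp.Minimize(cp.normNuc(W)), [X @ cp.vec(W) == y]).solve()
    return W.value
```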
Results. We train transformer-based models on the five tasks following §2.1. Each model is trained with d = 20 and p = 40, except Low-Rank Regression, where we train with d = 100, p = 114, and r = 1. Figures 1b-1d compare the loss@k values on these tasks with different baselines. Additionally, we extract the implied weights w_probe from the trained models given a prompt P, following Akyürek et al. [2022]: we generate the model's predictions {y'_i} on test inputs {x'_i}_{i=1}^{2d} ~ D_X and solve the resulting system of equations to recover w_probe. We then compare the implied weights w_probe with the ground-truth weights w as well as the weights extracted from different baselines, to better understand the inductive biases exhibited by these models during in-context learning (Figures 1f-1h). Since results for dense regression are already covered in Akyürek et al. [2022], we do not repeat them here; for completeness they are provided in Figure 8 of the Appendix. Refer to Figure 9 for the results on task F_ZR. We observe that transformers trained on this task (F_ZR) perform better than those trained on Sign-Vector Regression (F_SVR). Therefore, we can conclude that transformers do not require any convexity conditions on the weight vectors.

Figure 1. Top: loss@k values for transformers and baselines on the skewed-covariance, sparse, sign-vector, and low-rank regression tasks. Bottom: errors between the implicit weights w_probe recovered from transformers, the ground-truth weights w, and the weights computed by different baselines. w_PME-Skew denotes the weights obtained by minimizing w^T Σ^{-1} w for the skewed-covariance regression task.

For skewed-covariance regression, we observe that the transformer follows the PME solution very closely, both in terms of the loss@k values (Figure 1a) and the weights, for which the error between w_probe and w_PME-Skew is close to zero at all prompt lengths (Figure 1e). On all the remaining tasks as well, the models perform better than OLS and are able to solve the problem with fewer than d samples, i.e., in the underdetermined region, meaning that they capture the structure of the problem. The error curves of transformers align closely with the errors of the Lasso (Figure 1b), L∞ minimization (Figure 1c), and L* minimization (Figure 1d) baselines on the respective tasks. Interestingly, for low-rank regression the transformer actually performs better, though due to the larger problem dimension (d = 100) it requires a bigger model: 24 layers, 16 heads, and hidden size 512. In Figures 1f, 1g, and 1h, we observe that at small prompt lengths w_probe and w_OLS are close. We conjecture that this is because both w_probe and w_OLS are close to 0 at small prompt lengths (Figure 10 in the Appendix): the prior distributions for all three tasks are centrally symmetric, so at small prompt lengths, when the posterior is likely to be close to the prior, the PME is close to the mean of the prior, which is 0. At larger prompt lengths, transformers start to agree with w_Lasso, w_L∞, and w_L*. This is consistent with the transformer following the PME, assuming w_Lasso, w_L∞, and w_L* are close to the PME; we leave it to future work to determine whether this is true (note that for sparse regression, Lasso approximates the MAP estimate, which should approach the PME solution as more data is observed).
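The probing procedure used above to recover implied weights is simple enough to sketch directly. The `model(...)` callable, which returns the model's prediction for a query point given the in-context examples, is an assumed interface.

```python
import numpy as np

def probe_weights(model, prompt_xs, prompt_ys, d):
    """Recover the linear map a trained model implicitly applies after
    conditioning on a prompt (after Akyurek et al. [2022]): query it on
    2d fresh Gaussian inputs and solve the linear system."""
    X_probe = np.random.randn(2 * d, d)
    y_probe = np.array([model(prompt_xs, prompt_ys, x) for x in X_probe])
    w_probe, *_ = np.linalg.lstsq(X_probe, y_probe, rcond=None)
    return w_probe
```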
Non-linear functions

Moving beyond linear functions, we now study how well transformers can in-context learn function classes with more complex relationships between input and output, and whether their behavior resembles that of the ideal learner, i.e., the PME. In particular, we consider function classes of the form F_Φ = {f : f(x) = w^T Φ(x)}, where the feature map Φ maps the input vector x to an alternate feature representation. This corresponds to learning the mapping Φ(x) and then performing linear regression on top of it. Under the assumption of a standard Gaussian prior on w, the PME for dense regression easily extends to F_Φ: min_w ||w||_2 subject to w^T Φ(x_i) = y_i for all in-context examples (x_i, y_i). In our experiments, we consider two such non-linear function classes, Fourier series and degree-2 monomial basis regression. Below, we describe them and the performance of transformers on them.

Fourier Series

A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. One can represent the Fourier series in the sine-cosine form

f(x) = a_0 + Σ_{n=1}^{N} [ a_n cos(nπx/L) + b_n sin(nπx/L) ],

where x ∈ [−L, L], a_0, the a_n's, and the b_n's are known as the Fourier coefficients, and cos(nπx/L) and sin(nπx/L) define the frequency-n components. We can define the function class F^fourier_{Φ_N} by taking Φ to be the Fourier feature map, i.e., Φ_N(x) = [1, cos(πx/L), ..., cos(Nπx/L), sin(πx/L), ..., sin(Nπx/L)]^T, and w to be the vector of Fourier coefficients. We train transformers to in-context learn F^fourier_{Φ_N} and evaluate them on functions with maximum frequency M ∈ [1, 10], i.e., during evaluation we also prompt the model with functions whose maximum frequency differs from that seen during training. As a baseline, we use OLS on the Fourier features (denoted OLS Fourier Basis), which is equivalent to the PME.

Measuring inductive biases. Once we train a transformer-based model to in-context learn F^fourier_{Φ_N}, how can we investigate the inductive biases that the model uses to solve the problem? We would like to answer questions such as: when prompted with k input-output examples, what are the prominent frequencies in the function simulated by the model, and how do these frequencies change as we change k? We start by sampling in-context examples (x_1, f(x_1), ..., x_k, f(x_k)) and, given this context, obtain the model's predictions {y'_1, ..., y'_m} on a set of m test inputs. We then perform a Discrete Fourier Transform (DFT) on {y'_1, ..., y'_m} to obtain the Fourier coefficients of the function output by M, which we analyze to find the dominant frequencies.

Results. The results of our experiments on the Fourier series are provided in Figure 2. Transformers obtain loss@k values close to the OLS Fourier Basis baseline (Figure 2a), indicating that at least for smaller prompt lengths the model simulates the behavior of the ideal predictor (PME). Since the inputs x_i are scalars in this case, we can visualize the functions learned in context by transformers; we show one example for a randomly selected function f ~ F^fourier_{Φ_M} in Figure 2b. The functions predicted by the transformer and the baseline align closely, and both approach the ground-truth function f as more examples are provided. Finally, we visualize the distribution of frequencies in the predicted functions in Figure 2c. For a given value of M, we sample 10 functions, provide k in-context examples to the model, and extract the frequencies of the predicted functions using the DFT method. When provided with few in-context examples (k = 2), both the transformer and the baseline predict functions containing all 10 frequencies (indicated by values of a_n^2 + b_n^2 in a similar range for n ∈ [1, 10]), but as more examples are provided they begin to recognize the gold maximum frequency (here M = 4). We provide more detailed plots for all combinations of M and k in Figures 11 and 12 of the Appendix. This suggests that the transformers follow the Bayesian predictor and are not biased towards smaller frequencies.

Random Fourier Features

Mapping input data to random low-dimensional features has been shown to be an effective way to approximate large-scale kernels Rahimi and Recht [2007]. Here we are particularly interested in Random Fourier Features (RFF), which can be shown to approximate the Radial Basis Function kernel and are given by

Φ_D(x) = [cos(ω_1^T x + δ_1), ..., cos(ω_D^T x + δ_D)]^T,

where the ω_i and δ_i are sampled randomly, with ω_i ~ N(0, I_d) and δ_i ~ U(0, 2π). We can then define the function family F^RFF_{Φ_D} as linear functions over the random Fourier features, i.e., f = w^T Φ_D(x) for f ~ F^RFF_{Φ_D}. While training the transformer on this function class, we sample the ω_i's and δ_i's once and keep them fixed throughout training. As a baseline, we use OLS over (Φ_D(x), y) pairs, which gives the PME for the problem (we denote this RFF-OLS).
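A minimal sketch of the RFF map just defined, and of the RFF-OLS baseline. We include the conventional sqrt(2/D) scaling from Rahimi and Recht [2007], which is an assumption about the exact normalization used.

```python
import numpy as np

def make_rff(d, D, rng):
    """A fixed random Fourier feature map: omega_i ~ N(0, I_d),
    delta_i ~ U(0, 2*pi), sampled once and then frozen."""
    omega = rng.standard_normal((D, d))
    delta = rng.uniform(0.0, 2 * np.pi, size=D)
    return lambda x: np.sqrt(2.0 / D) * np.cos(x @ omega.T + delta)

# RFF-OLS baseline: least squares on (Phi_D(x), y) pairs. In the
# underdetermined regime, lstsq returns the minimum-norm solution,
# matching the PME under a Gaussian prior on w.
rng = np.random.default_rng(0)
phi = make_rff(d=4, D=64, rng=rng)
X = rng.standard_normal((40, 4))
w = rng.standard_normal(64)
y = phi(X) @ w
w_hat, *_ = np.linalg.lstsq(phi(X), y, rcond=None)
```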
Results. For this family we observed mixed results: transformers fail to generalize to functions of the family when the complexity of the problem is high. The complexity of this function class is dictated by the dimension d of the ω_i vectors (and of the inputs x) and by the number of random features D. We plot the loss@k values for transformer models trained on F^RFF_{Φ_D} for different values of d and D in Figure 3. The difficulty of the problem for transformers is primarily governed by d: they solve the task even for large values of D and perform well for small d (d = 1 and d = 4), but for d = 10 they perform much worse than the RFF-OLS baseline, and loss@k does not improve much once about 15 in-context examples have been provided.

Degree-2 Monomial Basis Regression

We define a function family F^mon.(2)_S with the basis formed by a subset S(x) of the degree-2 monomials Φ_M(x) of an input x. We compare the performance of transformers on this class with OLS performed on the basis S (OLS_S); the PME is given by the least-L2-norm solution and is hence equivalent to OLS_S. We observe that the error curves of the transformer and OLS_S align closely (details in Appendix §A.6).

In-context learning of multiple function classes

While the original formulation by Garg et al. [2022] trains transformers on a single class of functions, in our work we extend the in-context learning setup to multiple classes of functions. Formally, we define a task mixture using a set of m function classes F_T = {F_{T_1}, ..., F_{T_m}} corresponding to the set of tasks T = {T_1, ..., T_m} (where T_i denotes a task such as DR, SR, SVR, etc.) and sampling probabilities α = [α_1, ..., α_m]^T with Σ_{i=1}^m α_i = 1. We use α to sample a task for constructing each training prompt P, and we assume the input distribution D_X is the same for each class F_{T_i}. More concretely, the sampling process for P is: draw a task index i with probability α_i, draw f ~ D_{F_{T_i}}, and draw the inputs x_1, ..., x_{p+1} ~ D_X i.i.d. The PME for a task mixture works out to

M(P) = Σ_{i=1}^{m} β_i M_i(P),   (4)

where M_i is the PME for task T_i and β_i = α_i p_i(P)/p_T(P) for i ≤ m; the probability density p_i(·) is induced on prompts by task T_i in the natural way, and p_T(P) = Σ_j α_j p_j(P). Please refer to §A.1 in the Appendix for the derivation. The models are trained with the same objective as in Eq. 1, and at test time we evaluate loss@k for each task individually.

Figure 4. Transformers simulate the PME when trained on a dense-regression task mixture (d = 10, p = 10, α_1 = α_2 = 1/2) whose weights have a mixture-of-Gaussians prior (GMM). (a) Comparing the performance of the transformer with the posterior mean estimators of the individual Gaussian components (PME (T_1) and PME (T_2)) and of the mixture, PME (GMM).

Gaussian Mixture Models (GMMs)

For the function classes discussed so far in this section, the computation of the PME has not been discussed, since it is usually intractable, involving high-dimensional integrals. This subsection mitigates that gap by presenting a case where the PME is tractable, providing quantitative evidence of transformers simulating the PME. We design a dense-regression task mixture (F_{DR_1, DR_2}) where the prior on w is a mixture of two Gaussians,

w ~ α_1 N(μ_1, Σ_1) + α_2 N(μ_2, Σ_2), with α_1 + α_2 = 1.

Note that this is equivalent to a task mixture in the terminology of §4, where T = {DR_1, DR_2}, DR_1 and DR_2 are both DR tasks with different priors, and α = [α_1, α_2]^T. In our experiments, the d-dimensional mean vectors of the two Gaussian components (call them T_1 and T_2, respectively) are μ_1 = (3, 0, ..., 0) and μ_2 = (−3, 0, ..., 0). The covariance matrices are equal (Σ_1 = Σ_2 = Σ*), where Σ* is the identity matrix I_d with the top-left entry replaced by 0. We train the transformer on two types of mixtures: (a) with equal mixture weights (α_1 = α_2 = 1/2), and (b) with unequal mixture weights (α_1 = 2/3, α_2 = 1/3). We use d = 10 and prompt lengths p ∈ {10, 20}, and utilize a curriculum for d and p as given in Table 1.
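For this mixture the PME is tractable; the following sketch computes Eq. 4 directly. The small observation noise sigma2 is added purely for numerical stability in the noiseless setting, an implementation convenience rather than part of the task definition.

```python
import numpy as np
from scipy.stats import multivariate_normal

def pme_gmm(X, y, mus, Sigmas, alphas, sigma2=1e-6):
    """Posterior mean of w under a Gaussian-mixture prior, given linear
    observations y = Xw. Implements Eq. 4:
    PME(GMM) = sum_i beta_i * PME(T_i)."""
    k = X.shape[0]
    post_means, log_evidence = [], []
    for mu, Sig in zip(mus, Sigmas):
        S = X @ Sig @ X.T + sigma2 * np.eye(k)      # marginal covariance of y
        post_means.append(mu + Sig @ X.T @ np.linalg.solve(S, y - X @ mu))
        log_evidence.append(multivariate_normal.logpdf(y, mean=X @ mu, cov=S))
    logb = np.array(log_evidence) + np.log(np.asarray(alphas))
    beta = np.exp(logb - logb.max())
    beta /= beta.sum()                               # the beta_i of Eq. 4
    return sum(b * m for b, m in zip(beta, post_means))
```

For the setting above, one would call this with mus = [mu1, mu2], Sigmas = [Sigma_star, Sigma_star], and alphas = [0.5, 0.5].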
Results. Figure 4 shows the squared errors between the different predictors and the ground truth, along with their weights. In Figure 4a, we note that the transformer's error trends align almost exactly with those of the mixture PME, PME (GMM), whether the prompts come from T_1 or T_2. For each plot, let T_prompt and T_other denote the component from which prompts are drawn and the other component, respectively. Once d (= 10) examples from T_prompt have been provided, the transformer, PME (T_prompt), and PME (GMM) all converge to the same minimum error of 0. This shows that the transformer is simulating PME (GMM), which converges to PME (T_prompt) at k = d. The errors of PME (T_other) keep increasing as more examples are provided. These observations are in line with Eq. 4: as more examples from the prompt are observed, the weights of the individual PMEs in PME (GMM) (i.e., the β's) adjust so that the contribution of T_prompt increases with k (Figure 16 in the Appendix shows this more evidently). In Figure 4b, the MSE between the weights of the different predictors is plotted; for the transformer we obtain these weights w_probe by probing, using the procedure mentioned in §3.1.2. The transformer's weights are almost exactly identical to those of PME (GMM) for all k ∈ {1, 2, ..., p}, which is further concrete evidence that it is simulating PME (GMM). Initially (at k = 0), when nothing about the prompt is known, the transformer's weights w_probe are close to those of both PME (T_1) and PME (T_2), giving the same error; as k increases and more examples are provided, w_probe converges to PME (T_prompt) and diverges from PME (T_other). At k = d, the ground-truth weights w, PME (GMM), PME (T_prompt), and the transformer weights w_probe all converge to the same value. Figure 4c shows the evolution of the first dimension of the transformer weights, (w_probe)_1, with prompt length k. We see that the transformer simulates PME (GMM), which approaches PME (T_prompt) with increasing prompt length k (Eq. 4). Note that in our setting, regardless of k, the first dimension of PME (T_i) is (μ_i)_1, the first dimension of the mean of prior T_i, since T_i has a fixed value (i.e., zero variance) in the first dimension. Hence, if the transformer simulates PME (GMM), the first dimension of its weights (w_probe)_1 must approach (μ_1)_1 = +3 when T_prompt = T_1 and (μ_2)_1 = −3 when T_prompt = T_2; this is exactly what we observe. At prompt length 0, in the absence of any information about the prompt, (w_probe)_1 ≈ 0. This agrees with Eq. 4,
since 0 = β_1 (μ_1)_1 + β_2 (μ_2)_1, where (μ_1)_1 = +3, (μ_2)_1 = −3, and β_1 = α_1 = 0.5, β_2 = α_2 = 0.5 when the prompt P is empty. The figure shows that, with increasing evidence from the prompt, the transformer shifts its weights to those of T_prompt, as evidenced by the first coordinate moving from 0 to +3 or −3 depending on the prompt. Lastly, in Figure 4d, we check the behavior of the transformer and PME (GMM) on specially constructed prompts P where (x_i)_1 = 0 and (x_i)_{2:d} ~ N(0, 1) for all i ∈ {1, ..., p}. In our setup, such x_i's guarantee that no information about the distribution of w is revealed by observing P (the only dimension distinguishing T_1 from T_2 is the first, and it does not influence the prompt since (x_i)_1 = 0). We note that the transformer's weights are all ≈ 0 regardless of the prompt length, agreeing with PME (GMM); observing more examples from the prompt reveals no information about the underlying distribution of w in this case. All of this evidence strongly supports our hypothesis that the transformer behaves like the ideal learner and computes the posterior mean estimate (PME). The results for the mixture with unequal weights (α_1, α_2) for T_1 and T_2, and for the p = 20 model, further strengthen this evidence; please refer to §A.8 for these results.

ICL on task mixtures

We start by training transformer models on mixtures of the linear regression tasks discussed in §3.1. We consider binary mixtures of dense and sparse regression (F_{DR, SR}), dense and sign-vector regression (F_{DR, SVR}), and dense and skewed-covariance regression (F_{DR, Skew-DR}), as well as the ternary mixture of all three tasks (F_{DR, SR, SVR}). Unless specified, we consider uniform mixtures, i.e., α_i = 1/|T| for all T_i ∈ T, and use these values to sample batches during training. We also explore more complex mixtures, such as the dense regression and decision tree mixture F_{DR, DT} and the decision tree and neural network mixture F_{DT, NN}. During evaluation, we test the mixture model (denoted Transformer F_T) on prompts sampled from each of the function classes in the mixture. We consider the model to have in-context learned the mixture of tasks if it matches the performance of the single-task models trained on the individual function classes. For example, a transformer model trained on the dense and sparse regression mixture (Transformer F_{DR, SR}) should perform similarly to the single-task model trained on the dense regression function class (Transformer F_DR) when prompted with a function f ~ F_DR, and vice versa. Our observations are consistent across all of these mixtures, so we discuss only F_{DR, SR} in detail here; results for the other mixtures can be found in §A.9 of the Appendix.

Figure 5. Comparing the performance of a transformer model trained on the dense and sparse regression mixture F_{DR, SR} with baselines, as well as with single-task models trained on F_DR and F_SR individually.

Results. The results for the binary mixtures of linear functions are given in Figure 5. As can be observed in Figure 5a, the transformer model trained on F_{DR, SR} performs close to the OLS baseline, and to the transformer trained specifically on the dense regression function class F_DR, when evaluated on dense regression prompts.
On the other hand, when evaluated on sparse regression prompts, the same model closely follows Lasso and the single-task sparse regression model (Transformer F_SR). As a check, note that the single-task models, when prompted with functions from a family different from the one they were trained on, incur much higher errors, confirming that the mixture-trained transformer learns to solve the individual task implied by the in-context examples. We also recovered the weights from the multi-task models, using the same method as in §3.1, when given prompts from each function class, and measured how well they agree with the gold baselines as well as with the single-task models trained on the individual tasks. We denote the weights recovered from the multi-task models by w_probe_T and those from a single-task model trained on task T_i by w_probe_{T_i}. In Figures 5b and 5c we report results for the F_{DR, SR} mixture: the weights recovered by the mixture model start to agree with the task-specific models once sufficiently many in-context examples are provided (roughly around the recovery threshold), while the errors are high initially. Interestingly, for very short prompts (k < 5) the errors tend to be small, which can be explained by the same reasoning as in §3.1, i.e., the priors of both tasks being centrally symmetric. These observations are consistent with the hypothesis that transformers compute the PME (assuming transformers trained on single tasks simulate the PME).

Conditions affecting multi-task in-context learning. In some of our initial experiments with the F_{DR, SR} mixture, transformers failed to learn to solve the individual tasks of the mixture and followed OLS on both F_DR and F_SR prompts. Probing this, we first noted that the variance of the function outputs differs greatly between the two tasks: it equals d for dense regression but only the sparsity parameter s for sparse regression. We hypothesized that the model learning to solve only dense regression might be attributed to the disproportionately strong signal from dense regression. To resolve this, we experimented with increasing the sampling rate of the F_SR family during training; with α_SR = 0.87, the resulting model did learn to solve both tasks. Alternatively, normalizing the outputs of the two tasks to have the same variance and using a uniform mixture (α_SR = 0.5) also produced multi-task in-context learning (this is the setting of our experiments in Figure 5). Hence, the training distribution can play a significant role in whether the model acquires the ability to solve different tasks, as has also been observed in other works on in-context learning in LLMs Razeghi et al. [2022], Chan et al. [2022a].
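The variance mismatch just described is easy to see and to fix in code; a tiny illustrative sketch, with our own function name:

```python
import numpy as np

# Var[f(x)] for x ~ N(0, I_d) equals E||w||^2: d for dense regression
# (w ~ N(0, I_d)) but only s for an s-sparse w, so an unnormalized
# mixture lets the dense task dominate the training signal.
def normalize_outputs(ys, family_variance):
    """Rescale one family's outputs to unit variance before mixing,
    one simple way to balance the training signal across tasks."""
    return ys / np.sqrt(family_variance)
```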
We also studied whether the curriculum plays a role in the models acquiring multi-task in-context learning capabilities. In our initial experiments without normalization and with non-uniform mixtures, we observed that the model learned to solve both tasks only when the curriculum was enabled. However, when trained without a curriculum for a longer duration (i.e., on more training data), the model did eventually learn to solve both tasks, indicated by a sharp dip in the evaluation loss on the sparse regression task during training. This is in line with recent works Hoffmann et al. [2022], Touvron et al. [2023], which show that the capabilities of LLMs can be drastically improved by scaling up the number of tokens the models are trained on. Detailed results concerning these findings are in Figure 25 of the Appendix.

Simplicity bias in ICL?

We consider a mixture of Fourier series function classes with different maximum frequencies, i.e., F^fourier_{Φ_{1:N}} = {F^fourier_{Φ_1}, ..., F^fourier_{Φ_N}}, and compare the trained model against two baselines; the model performs much better than the latter for short prompt lengths (k < 20), while the former baseline performs better. We also measure the frequencies exhibited by the functions predicted by the transformer in Figure 6b. We observe that transformers are biased towards lower frequencies when prompted with few examples; given sufficiently many examples, however, they are able to recover the gold frequencies. This simplicity bias can be traced to the training dataset for the mixture, in which lower frequencies are present in most functions while higher frequencies are rarer: frequency 1 is present in all the function classes, whereas frequency N is present only in F^fourier_{Φ_N}. Our results indicate that the simplicity bias of these models during in-context learning arises from the training data distribution. To verify this observation further, we also consider the case where the training data is biased towards high frequencies and check whether transformers trained on such data exhibit a bias towards high frequencies (complexity bias). To motivate such a mixture, we first define an alternate Fourier basis Φ_{n_0,N}(x) = [cos(n_0 πx/L), sin(n_0 πx/L), cos((n_0+1)πx/L), sin((n_0+1)πx/L), ..., cos(Nπx/L), sin(Nπx/L)], where n_0 ≥ 0 is the minimum frequency in the basis. Φ_{n_0,N} defines the function family F^fourier_{Φ_{n_0,N}}, and equivalently we can define the mixture of such function classes {F^fourier_{Φ_{n_0,N}}}_{n_0=1}^{N}. One can see that such a mixture is biased towards high frequencies: frequency N is present in each function class of the mixture, while frequency 1 is present only in F^fourier_{Φ_{1,N}}. We train a transformer model on such a mixture for N = 5, and at test time we evaluate the model on functions from the mixture. Figure 6c shows the inductive biases measured from this trained model, and we clearly observe a case of complexity bias: at small prompt lengths, the model exhibits a strong bias towards the higher end of the frequencies it was trained on, i.e., close to 5. We also trained models with a higher maximum frequency, N = 10, for the high-frequency-biased case, but interestingly the model failed to learn this task mixture. Even for N = 5, we noticed that convergence was much slower than when training on the simplicity-biased mixture F^fourier_{Φ_{1:N}}. This indicates that, while the origin of the simplicity bias here is the training data, it is harder for the model to capture more complex training distributions, and simplicity bias in the pretraining data distribution may lead to more efficient training Mueller and Linzen [2023].
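The DFT-based frequency measurement used here and in §3.2.1 can be sketched as follows, assuming a `model_fn` callable that returns in-context predictions on a grid; the interface, grid size, and units are our assumptions.

```python
import numpy as np

def predicted_spectrum(model_fn, L=np.pi, m=256):
    """Frequency content of the function a model simulates in context:
    query it on a uniform grid over [-L, L) and take the DFT of the
    predictions. Bin n corresponds to the cos(n*pi*x/L), sin(n*pi*x/L)
    component; the returned power tracks a_n^2 + b_n^2 up to scaling."""
    xs = np.linspace(-L, L, m, endpoint=False)
    ys = model_fn(xs)                  # assumed: predictions under a fixed prompt
    return np.abs(np.fft.rfft(ys) / m) ** 2
```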
Conclusion

We showed evidence of high-capacity transformers simulating the behavior of an ideal learner for various families of functions, as well as their mixtures, in the new MICL framework. Some key takeaways from our work:

1. For a variety of single-task function classes, such as skewed-covariance regression, Fourier series, and random Fourier features, as well as mixtures of tasks such as Gaussian mixture models, transformers learn the PME solution. We verify this not only by comparing the errors of transformers with the PME but also by investigating the inductive biases captured by these models and comparing them with those of ideal learners.

2. For tasks where the computation of the PME is intractable, we compare transformers with strong task-specific baselines and show that they either outperform or match these algorithms, hinting that these models might be simulating the behavior of ideal learners on these tasks as well. Specifically, for linear inverse problems, transformers capture the structure of the problem, such as the sparsity or low-rank parameterization of the ground-truth function.

3. For mixtures of tasks, we also notice that the training data distribution, as well as the amount of training data, can dictate the emergence of multi-task in-context learning.

4. Finally, through our experiments on mixtures of Fourier series function classes, we showed that the simplicity bias of transformers during ICL can be traced back to the training distribution: transformers exhibit a bias towards low frequencies when the training data is itself biased towards low frequencies, and the inverse behavior when the training data is biased towards high frequencies.

We also note that transformers may struggle to learn function classes with more complex training distributions, as in the RFF tasks with increasing values of d and the complexity-bias Fourier series task with large values of N. There are many interesting directions for future work. Because of the difficulties in computing the PME, we were not able to conclusively establish in many cases that transformers perform Bayesian prediction; it is an interesting theoretical challenge to resolve this. While we showed that transformers achieve small ICL errors on many problems, including non-linear ones, this is not true for classes such as neural networks and decision trees, presumably because of computational as well as information-theoretic difficulties with the harder function classes. Despite this, transformers achieve interesting results; how can we explain this from a Bayesian perspective? Also, most of our evaluations draw functions at test time from the same distribution as seen during training. While we perform some OOD evaluations, particularly for the Fourier series (where we evaluate on maximum frequencies different from those seen during training) and for the degree-2 monomial basis regression task (details in Appendix §A.6), more rigorous OOD testing of our results is an important future direction. Finally, in this paper we treated transformers as black boxes; opening the box and uncovering the underlying mechanisms transformers use to perform Bayesian prediction would be very interesting.

A.1 PME for Task Mixtures

For a mixture of two tasks, write β_1 = α_1 p_1(P)/p_T(P) and β_2 = α_2 p_2(P)/p_T(P). Plugging this in (3), we get M(P) = β_1 M_1(P) + β_2 M_2(P), i.e., Eq. 4 for m = 2.

A.2 Position Encodings

While there are approaches, e.g., Press et al. [2022], that perform better on length generalization, we find that something much simpler works surprisingly well in our setup: removing position encodings significantly improved length generalization for both dense and sparse linear regression while maintaining virtually the same performance in the training regime, as can be seen in Figure 7. These observations are in line with Bhattamishra et al.
These observations are in line with Bhattamishra et al. [2020], which shows that decoder-only transformers without positional encodings fare much better in recognizing formal languages, as well as Haviv et al. [2022], which shows that transformer language models without explicit position encodings can still learn positional information. Both works attribute this phenomenon to the presence of the causal mask in decoder-only models, which implicitly provides positional information to these models. Hence, by default in all our experiments, unless specified otherwise, we do not use any positional encodings while training our models.

A.3 Experimental Setup

We use the Adam optimizer Kingma and Ba [2015] to train our models. We train all of our models with curriculum and observe that curriculum helps in faster convergence, i.e., the same optima can also be achieved by training the model for more training steps, as also noted by Garg et al. [2022]. Table 1 states the curriculum used for each experiment, where the syntax followed for each column specifying the curriculum is [start, end, increment, interval]: the value of the said attribute goes from start to end, increasing by increment every interval training steps. Our experiments were conducted on a system comprising 32 NVIDIA V100 16GB GPUs. The cumulative training time of all models for this project was ∼30,000 GPU hours. While reporting the results, the error is averaged over 1280 prompts and shaded regions denote a 90% confidence interval over 1000 bootstrap trials. We adapt the code-base of Garg et al. [2022] for our experiments. We use the PyTorch Paszke et al. [2019] and Huggingface Transformers Wolf et al. [2020] libraries to implement the model architecture and training procedure. For the baselines against which we compare transformers, we use scikit-learn's implementation of OLS, Ridge, and Lasso, and for L∞ and L* norm minimization under the linear constraints we use CVXPY. The code for all of our experiments can be found at https://anonymous.4open.science/r/icl-bayesian-prism

A.4 Linear Inverse Problems

Here, we discuss the results omitted from §3.1.2 for conciseness. Figure 8 shows the results on the Dense Regression task, and our experiments corroborate the findings of Akyürek et al. [2022]: transformers not only obtain errors close to OLS and Ridge regression for the dense regression task (Figure 8a), but the extracted weights also very closely align with the weights obtained by the two algorithms (Figure 8b). This does indicate that the model is able to simulate the PME behavior for the dense regression class. For sparse and sign-vector regression, we also visualize the weights recovered from the transformer for one of the functions from each family. As can be observed in Figure 10, for sparse regression at sufficiently high prompt lengths (k > 10), the model is able to recognize the sparse structure of the problem and detect the non-zero elements of the weight vector. Similarly, the recovered weights for sign-vector regression beyond k > 10 start exhibiting the sign-vector nature of the weights (i.e. each component being either +1 or -1). The functions predicted by the transformer and the baseline align closely. Similarly, in Figure 11, we present the distribution of frequencies in the predicted functions for the two methods and again observe consistent findings.

A.6 Degree-2 Monomial Basis Regression

We now detail the degree-2 monomial basis regression function family that was mentioned in §3.2.3. As stated in §3.2.1, the Fourier Series function class can be viewed as linear regression over the Fourier basis consisting of sinusoidal functions.
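As a minimal illustration of this view, the sketch below builds the sinusoidal features and fits OLS over them; the inclusion of a constant term and the values of L and N here are our choices for illustration.

```python
import numpy as np

L, N = 5.0, 3   # half-period and maximum frequency (illustrative values)

def fourier_basis(x, N, L=L):
    """Phi(x) = [1, cos(pi x/L), sin(pi x/L), ..., cos(N pi x/L), sin(N pi x/L)]."""
    cols = [np.ones_like(x)]
    for n in range(1, N + 1):
        cols += [np.cos(n * np.pi * x / L), np.sin(n * np.pi * x / L)]
    return np.stack(cols, axis=-1)

# A Fourier-series function is linear in this basis: f(x) = w^T Phi(x).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(2 * N + 1)
xs = rng.uniform(-L, L, size=50)
ys = fourier_basis(xs, N) @ w_true

# So OLS over the Fourier features recovers the coefficients, which is the kind
# of baseline the Fourier-series family is compared against.
w_hat, *_ = np.linalg.lstsq(fourier_basis(xs, N), ys, rcond=None)
print(np.allclose(w_hat, w_true))   # True on this noiseless example
```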
Similarly, we define a function class F^{mon.(2)}_{Φ_M} with the basis Φ_M formed by the degree-2 monomials of any d-dimensional input vector x. Using the notation introduced in §3.1.1, the basis for F^{mon.(2)}_S is obtained by selecting a subset S ⊆ Φ_M of monomials and setting the coefficients of the terms in Φ_M − S to 0. We experiment with d = 20, with prompt length p = 290 and |S| = 20. We do not use curriculum (d, p, and |S| are fixed for the entire duration of the training run). Baselines. We use OLS fitted to the following bases as baselines: the S basis (OLS_S), all degree-2 monomials, i.e. the Φ_M basis (OLS_{Φ_M}), and a basis of all polynomial features up to degree 2 (OLS_{poly.(2)}). We also compare Lasso (α = 0.01) fitted to all degree-2 monomials, i.e. the Φ_M basis (Lasso_{Φ_M}), as a baseline. In Figure 13, we show the In-Distribution (ID) evaluation results for the F^{mon.(2)}_S experiments, i.e. evaluation of the transformer on prompts generated using the same S used during training (a sub-family of degree-2 monomial basis regression). Here, the test prompts contain functions formed by S (the same basis used during training). We observe that transformers closely follow OLS_S. The increasing order of performance (decreasing loss@k for k ≥ |S|) of the different solvers is: OLS_{poly.(2)} ≤ OLS_{Φ_M} < Lasso_{Φ_M} < Transformer < OLS_S. The transformer's squared error takes a little longer than OLS_S to converge. Lasso_{Φ_M} is able to take advantage of the sparsity of the problem and is hence better than both OLS_{Φ_M} and OLS_{poly.(2)}, which converge at k = 210 and k = 231, respectively. We also conduct an Out-of-Distribution (OOD) evaluation for F^{mon.(2)}_S, whose results are shown in Figure 14. Here, we generate prompts from a basis S′ ⊂ Φ_M of the same size as S but differing from S in n degree-2 terms, i.e. |S′ − S| = n. We show the results for different values of n. Figure 14a shows that OLS_S undergoes a steep momentary rise in errors at k = |S| (double descent). Figure 14b zooms into the lower-error region of Figure 14a, where we notice that the transformer mimics OLS_S, while OLS_{S′} is the best-performing baseline (since it fits the S′ basis used to construct the prompts). The transformer does not undergo double descent (for n = 1) and is hence momentarily better than OLS_S at k = |S|. Similar plots are shown for n ∈ {2, 3, 4, 5, 10, 15, 20}. As n increases, the height of the OLS_S peak increases, and the transformer also starts to show a rise in errors at k = |S|. For n = 20, S′ and S have nothing in common, and the transformer still follows OLS_S (OLS fitted to the training basis S). As mentioned in §3.2, when the prior on the weights w is Gaussian, the PME is the minimum L2-norm solution; for F^{mon.(2)}_S, that solution is given by OLS_S. Therefore, the results suggest that the transformer is computing the PME. In summary, transformers closely follow OLS_S in this set-up, and more so on the OOD data, where they even surpass OLS_S's performance when it experiences double descent.

A.7 Haar Wavelet Basis Regression

Similar to Fourier series and degree-2 monomial basis regression, we also define another non-linear regression function family (F^{Haar}_{Φ_H}) using a different basis, Φ_H, called the Haar wavelet basis. Φ_H is defined on the interval [0, 1] and is given by Φ_H = {1} ∪ {ψ_{n,k} : n ≥ 0, 0 ≤ k < 2^n}, where ψ_{n,k}(x) = 2^{n/2} ψ(2^n x − k), ψ is the Haar mother wavelet (equal to 1 on [0, 1/2), −1 on [1/2, 1), and 0 elsewhere), and 1 is the constant function which is 1 everywhere on [0, 1]. To define f, we sample w from N(0, 1) and compute its dot product with the basis, i.e. f = w^T Φ_H(·). We construct the prompt P by evaluating f at different values of x ∼ U(0, 1). The transformer model is then trained on these prompts P.
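A minimal sketch of this construction, using the standard level/shift indexing of the Haar basis (which may differ in detail from the paper's own convention), might look like:

```python
import numpy as np

def haar_mother(x):
    """Haar mother wavelet: 1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return np.where((x >= 0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1.0), -1.0, 0.0))

def haar_basis(x, max_level=3):
    """Phi_H(x): the constant function plus psi_{n,k}(x) = 2^{n/2} psi(2^n x - k)."""
    cols = [np.ones_like(x)]
    for n in range(max_level + 1):
        for k in range(2 ** n):
            cols.append(2 ** (n / 2) * haar_mother(2 ** n * x - k))
    return np.stack(cols, axis=-1)

# Sample f(x) = w^T Phi_H(x) and build a prompt P = {(x_i, f(x_i))}.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, size=32)        # p = 32 prompt points
Phi = haar_basis(xs)                   # levels n in {0, 1, 2, 3} -> 1 + 15 = 16 basis terms
w = rng.standard_normal(Phi.shape[1])
prompt = list(zip(xs, Phi @ w))
```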
We use d = 1 and p = 32, both of which are fixed throughout the training run, i.e. we do not use curriculum. We only consider the basis terms corresponding to n ∈ {0, 1, 2, 3}. The baseline used is OLS on the Haar wavelet basis features (OLS_H). Note that for the model used throughout the paper (§2.1), at k = 32 the loss@k value is 0.18, while for a bigger model and for OLS_H it is 0.07. Therefore, for this task we report the results for the bigger model, which has 24 layers, 16 heads, and a hidden size of 512. Results. In Figure 15, we observe that the transformer very closely mimics the errors of OLS_H (i.e. OLS fitted to the Haar wavelet basis) and converges to OLS_H at k = 32. Since the prior on the weights w is Gaussian, OLS_H is the PME. Hence, the transformer's performance on this task also suggests that it is simulating the PME.

A.8 Gaussian Mixture Models (GMMs)

Here we discuss some details regarding §4.1 and more results on GMMs. We start with a description of how we calculate PMEs for this setup. Computation of PMEs. As mentioned in §A.1 and §3.2, we can compute the individual PMEs for components T_1 and T_2 by minimizing the L2 distance between the hyperplane induced by the prompt constraints and the mean of the Gaussian distribution. In particular, to compute the PME for each Gaussian component of the prior, we solve a system of linear equations defined by the prompt constraints (w^T x_i = y_i, ∀i ∈ {1, 2, ..., p}) in conjunction with an additional constraint on the first coordinate, i.e. (w)_1 = +3 (for N_d(µ_1, Σ_1)) or (w)_1 = −3 (for N_d(µ_2, Σ_2)). Given these individual PMEs, we calculate the PME of the mixture using Eq. 4. Now we discuss more results for GMMs. First, we examine the evolution of the β's (from Eq. 4), PME (GMM), and the transformer's probed weights across the prompt length (Figures 16 and 17). Next, we present the results for the transformer models trained on the mixture with unequal weights, i.e. α_1 ≠ α_2 (Figure 18), and for the p = 20 model (Figure 19). Evolution of the β's, PME (GMM), and w_probe. Figure 16 plots the evolution of the β's and the 1st dimension of PME (GMM) for 10 different w's. The β's (Figures 16a and 16b) are 0.5 (equal to the α's) at k = 0 (when no information has been observed from the prompt). Gradually, as more examples are observed from the prompt, β_{T_prompt} approaches 1, while β_{T_other} approaches 0. This is responsible for PME (GMM) converging to PME (T_prompt), as seen in §4.1. The 1st dimension of PME (GMM) (Figure 16c) starts at 0 and converges to +3 or −3 depending on whether T_prompt is T_1 or T_2. This is the same trend we saw for w_probe in Figure 4c. Figure 17 shows the same evolution in the form of line plots, where we see the average across 1280 samples of w. In Figure 17a, β_{T_prompt} approaches 1, while β_{T_other} approaches 0, as noted earlier. Consequently, in Figure 17b, the 1st dimension of PME (GMM) approaches +3 or −3 based on the prompt. The 1st dimension of the transformer's probed weights, i.e. (w_probe)_1, almost exactly mimics PME (GMM). Transformer model trained with longer prompt length (p = 20). Figure 19 depicts similar evidence as Figure 4 of the transformer simulating PME (GMM) for a model trained with d = 10, p = 20, α_1 = α_2 = 1/2. We see that all the observations discussed in §4.1 also hold true for this model. The transformer converges to PME (GMM) and PME (T_prompt) w.r.t. both loss@k (Figure 19a) and weights (Figure 19b) at k = 10 and keeps following them for larger k as well.
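As a sketch of this mixture-PME computation, the code below uses exact Bayesian linear regression per Gaussian component and combines the component means with the β weights of Eq. 4. Unlike the constrained-least-squares procedure described above, it assumes a small observation-noise variance so that the component evidences p_i(P) are well-defined; that device is ours, not the paper's.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def mixture_pme(X, y, mus, Sigmas, alphas, sigma2=1e-4):
    """PME of w under a Gaussian-mixture prior, given a prompt (X, y) with y ~ Xw."""
    log_betas, means = [], []
    for mu, Sigma, alpha in zip(mus, Sigmas, alphas):
        S = X @ Sigma @ X.T + sigma2 * np.eye(len(y))        # marginal covariance of y
        log_betas.append(np.log(alpha)
                         + multivariate_normal.logpdf(y, mean=X @ mu, cov=S))
        K = Sigma @ X.T @ np.linalg.inv(S)                    # posterior "gain"
        means.append(mu + K @ (y - X @ mu))                   # per-component PME
    log_b = np.array(log_betas)
    betas = np.exp(log_b - logsumexp(log_b))                  # beta_i = alpha_i p_i(P) / p_T(P)
    return betas, sum(b * m for b, m in zip(betas, means))    # Eq. 4

d = 10
rng = np.random.default_rng(0)
mu1, mu2 = np.zeros(d), np.zeros(d)
mu1[0], mu2[0] = +3.0, -3.0
w = mu1 + rng.standard_normal(d)          # true function drawn from component T1
X = rng.standard_normal((8, d))
betas, pme = mixture_pme(X, X @ w, [mu1, mu2], [np.eye(d)] * 2, [0.5, 0.5])
print(betas)                              # beta_{T1} -> 1 as more examples are observed
```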
Figure 20 (caption): Top: Comparing loss@k values of the mixture model with single-task models under different prompt distributions. Bottom: Comparing the errors between the weights recovered from the mixture model and those of different single-task models and baselines when evaluating on F_DR and F_SVR prompts.

In summary, all the evidence strongly suggests that the transformer performs Bayesian inference and computes the PME corresponding to the task at hand. If the task is a mixture, the transformer simulates the PME of the task mixture as given by Eq. 4.

A.9 ICL on task mixtures

Here we detail some of the experiments with task mixtures that we discuss in passing in §4.2. In particular, we describe the results for the homogeneous mixtures F_{DR, SVR}, F_{DR, Skew-DR}, and F_{DR, SR, SVR}, as well as the heterogeneous mixtures F_{DR, DT} and F_{DT, NN}. As can be seen in Figure 20, the transformer model trained on the F_{DR, SVR} mixture behaves close to OLS when prompted with f ∈ F_DR and close to the L∞-minimization baseline when provided sign-vector regression prompts (f ∈ F_SVR). We have similar observations for the F_{DR, Skew-DR} mixture in Figure 21, where the multi-task ICL model follows the PME of both tasks when sufficient examples are provided from the respective task. Similarly, for the model trained on the ternary mixture F_{DR, SR, SVR} (as can be seen in Figure 22), the multi-task model can simulate the behavior of the three single-task models depending on the distribution of in-context examples. On F_SR and F_SVR prompts the multi-task model performs slightly worse compared to the single-task models trained on F_SR and F_SVR respectively; however, once sufficient examples are provided (still < 20), it does obtain close errors. This observation is consistent with the PME hypothesis, i.e. once more evidence is observed, the β values of the mixture's PME should converge towards the PME of the task from which the prompt P is sampled. We discuss the results on heterogeneous mixtures in detail below.

Heterogeneous mixtures: Up until now, our experiments for the multi-task case have focused on task mixtures where all function families have the same parameterized form, i.e. w^T x for linear mixtures and w^T Φ(x) for Fourier mixtures. We now move to more complex mixtures where this no longer holds true. In particular, we consider the dense regression and decision tree mixture F_{DR, DT} and the decision tree and neural network mixture F_{DT, NN}. We follow Garg et al. [2022]'s setup for decision trees and neural networks. We consider decision trees of depth 4 and 20-dimensional input vectors x. A decision tree is sampled by choosing the split node randomly from the features at each depth, and the output of the function is given by the values stored in the leaf nodes, which are sampled from N(0, 1). For neural networks, we consider 2-layer (1 hidden + 1 output) multi-layer perceptrons (MLPs) with ReLU non-linearity, i.e. f(x) = Σ_{i=1}^{r} α_i ReLU(w_i^T x), where α_i ∈ R and w_i ∈ R^d. The network parameters α_i and w_i are sampled from N(0, 2/r) and N(0, 1) respectively. The input vectors x_i are sampled from N(0, 1) for both tasks. We consider greedy tree learning and stochastic gradient descent over a 2-layer MLP as our baselines for decision trees and neural networks respectively. The values of the hyperparameters for the baselines, such as the number of gradient descent steps and the initial learning rate for Adam, are the same as in Garg et al. [2022]. The results for the two mixtures are provided in Figure 23.
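To make the function sampling concrete, here is a sketch of both random function families. Whether split features are drawn per node or per level is ambiguous in the text, so we draw one per node; the hidden width r is a placeholder, and the split threshold of 0 matches the N(0, 1) inputs.

```python
import numpy as np

d, r, depth = 20, 100, 4   # input dim, hidden width (illustrative), tree depth
rng = np.random.default_rng(0)

def sample_mlp(rng, d, r):
    """f(x) = sum_i alpha_i ReLU(w_i^T x), with alpha ~ N(0, 2/r) and w_i ~ N(0, 1)."""
    W = rng.standard_normal((r, d))
    alpha = rng.normal(0.0, np.sqrt(2.0 / r), size=r)
    return lambda x: np.maximum(W @ x, 0.0) @ alpha

def sample_tree(rng, d, depth):
    """Full binary tree: a random split feature per internal node, N(0,1) leaf values."""
    features = rng.integers(0, d, size=2 ** depth - 1)   # internal nodes in heap order
    leaves = rng.standard_normal(2 ** depth)
    def f(x):
        node = 0
        for _ in range(depth):                           # descend: left if coord <= 0
            node = 2 * node + (1 if x[features[node]] > 0 else 2)
        return leaves[node - (2 ** depth - 1)]
    return f

f_nn, f_dt = sample_mlp(rng, d, r), sample_tree(rng, d, depth)
x = rng.standard_normal(d)
print(f_nn(x), f_dt(x))
```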
The mixture model Transformer (F_{DR, DT}) follows the single-task model Transformer (F_DR) when provided in-context examples from f ∼ F_DR and agrees with Transformer (F_DT) when prompted with f ∼ F_DT (Figure 23a). Similarly, we have consistent findings for the F_{DT, NN} mixture as well, where the model learns to solve both tasks depending upon the input prompt (Figure 23b).

A.10 Fourier series mixture detailed results

In §4.3 we discussed results on the F^{fourier}_{Φ_{1:N}} mixture, where we showed visualizations of the inductive biases for M = 4 and M = 10, and k = 2 and k = 20. Here we confirm the observations from §4.3 by detailing results for different combinations of M and k in Figure 24. The findings are consistent with Figure 6b: for all values of M we observe a case of simplicity bias towards low frequencies at smaller prompt lengths, and as more examples are provided the model is able to recognize the gold frequencies.

A.11 Conditions necessary for multi-task ICL

We detail here the results for the conditions affecting multi-task ICL discussed in §4.2. Figure 25 compares transformer models trained on the F_{DR, SR} mixture in different setups, i.e., training without task normalization and with uniform mixture weights α_i (Figure 25a), training without task normalization and with non-uniform mixture weights (Figure 25b), and training with task normalization and uniform mixture weights (Figure 25c). As described in §4.2, we perform task normalization by ensuring that the outputs f(x) for all the tasks have the same variance, which results in all the tasks providing a similar training signal to the model. To perform normalization, we simply divide the weights w sampled for the tasks by a normalization constant, which is decided according to the nature of the task. With this, we make sure that the output y = w^T x has unit variance. The normalization constants for the different tasks are provided in Table 2; a short sketch of this normalization follows the figure captions below. All the experiments discussed above (like most others in the main paper) were performed using curriculum learning. As discussed in §4.2, we investigated whether the curriculum has any effect on multi-task ICL capabilities. The results are provided in Figure 26. We also explore the effect of normalization on multi-task ICL in Figure 27 for the F_{DR, SVR} task. As can be seen in Figure 27a, for this particular mixture, even when training the model without normalization, the model exhibited multi-task ICL, which can be explained by both tasks having the same output variance (i.e. d). Interestingly, when we evaluate this model (i.e. the one trained on the unnormalized mixture) on in-context examples in which the outputs f(x_i) are normalized, the model fails to solve F_SVR and follows the OLS baseline for both tasks. We hypothesize that this is because the situation represents Out-of-Distribution (OOD) evaluation, and the model might not be robust to performing multi-task ICL on prompts that come from a different distribution than those seen during training. Exploring OOD generalization in the multi-task case is a compelling direction that we leave for future work.

Figure 25: Conditions affecting multi-task ICL in transformers. Top: Evaluating loss@k for the transformer model trained on the F_{DR, SR} task family without normalization and with uniform mixture weights (i.e. α_DR = α_SR = 0.5), compared with single-task models and baselines.
While the blue curve (Transformer F_{DR, SR}) is hard to see here, this is because it overlaps almost perfectly with the red curve corresponding to OLS in both cases. Center: Similar plots as above, but for the model trained on the F_{DR, SR} mixture with non-uniform weights, i.e. α_DR = 0.13, α_SR = 0.87. Bottom: Training the model with the normalized (and uniform) mixture such that the outputs for the two tasks have the same variance. All the models are trained with the curriculum. The discussion continues in Figure 26 for the models trained without curriculum.

Figure 26: Models trained without curriculum (on the non-uniform mixture of Figure 25b). Top: Evaluating the checkpoint corresponding to the 500k training step of the aforementioned model. Again, the blue curve (Transformer F_{DR, SR}) is hard to see here, but this is because it overlaps almost perfectly with the red curve corresponding to OLS in both cases. Center: Evaluating the same model at a much later checkpoint, i.e. at the 800k training step. Bottom: Evolution of loss@10 on F_SR prompts while training the above model.
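As a concrete reading of the task normalization described in §A.11 above, the sketch below rescales sampled weights so that y = w^T x has unit variance for x ~ N(0, I_d). The constants follow from Var(w^T x | w) = ||w||^2, which is d exactly for sign vectors and d in expectation for dense regression; they stand in for the paper's actual Table 2 values, which are not reproduced here.

```python
import numpy as np

d = 20
rng = np.random.default_rng(0)

# For x ~ N(0, I_d), Var(w^T x | w) = ||w||^2; dividing w by sqrt(d) therefore
# makes y = w^T x unit-variance for both tasks below.
NORMALIZATION = {"dense_regression": np.sqrt(d), "sign_vector_regression": np.sqrt(d)}

def sample_task_weights(task, rng, normalize=True):
    if task == "dense_regression":
        w = rng.standard_normal(d)
    elif task == "sign_vector_regression":
        w = rng.choice([-1.0, 1.0], size=d)
    else:
        raise ValueError(task)
    return w / NORMALIZATION[task] if normalize else w

ws = np.array([sample_task_weights("sign_vector_regression", rng) for _ in range(1000)])
xs = rng.standard_normal((1000, d))
ys = np.einsum("nd,nd->n", xs, ws)
print(np.var(ys))   # ~1.0 after normalization
```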
Ethnomedical survey of plants used by the Orang Asli in Kampung Bawong, Perak, West Malaysia

Background: A qualitative ethnomedical survey was carried out among a local Orang Asli tribe to gather information on the use of medicinal plants in the region of Kampung Bawong, Perak, West Malaysia, in order to evaluate the potential medicinal uses of local plants used in curing different diseases and illnesses. Methods: Sixteen informants ranging in age from 35 to 65 years were interviewed. A total of 62 species of plants used by the Orang Asli are described in this study based on field surveys and direct face-to-face communication. These plants belong to 36 families and are used to treat a wide range of discomforts and diseases. Results: The results of this study showed that the majority of the Orang Asli of Kampung Bawong are still dependent on local plants as their primary source of medication. As the first ethnomedical study in this area, publishing this work is expected to open up more studies to identify and assess the pharmacological and toxicological actions of the plants from this region. Conclusions: Preservation and recording of the ethnobotanical and ethnomedical uses of traditional medicinal plants is an indispensable obligation for sustaining the medicinal and cultural resources of mankind. Extensive research on such traditional plants is of prime importance to scientifically validate their ethnomedical claims.

Background

The study of tribal knowledge of plants is an imperative facet of ethnomedical research. People have healed themselves with traditional herbal medicines and ancient remedies from time immemorial [1,2]. Human beings have found remedies within their habitat, and have adopted different strategies depending upon climatic, phyto-geographic, and faunal characteristics, as well as upon their particular culture and socio-structural typologies [3]. Most of this information is passed on to following generations by traditional healers through oral communication and discipleship practice [4]. Moreover, the World Health Organization (WHO) has reported that about 80% of the world population relies on traditional medicine to cure ailments [5,6]. Plants play a major role in the treatment of diseases and still remain the foremost alternative for a large majority of people [7-9]. This knowledge, if wisely utilized, could draw out promising herbal leads [10]. Perak, Malaysia (Fig. 1; 5.02° N latitude, 101.08° E longitude) is one such area where traditional healing systems are still in practice among the local natives, especially the 'Orang Asli' tribes. To date, no literature is available regarding the ethnomedical knowledge of this area, though there are ethnomedical reports on a few other regions in Malaysia [11-13]. The 'Orang Asli', which means 'first people', are considered to be the original natives of peninsular Malaysia. There are about 150,000 Orang Asli, of whom 60% still live in the rain forests. There are 19 sub-groups among them, such as the Semai, Temiar, Lanoh, and Jah Hut [14]. Many Orang Asli practitioners use local plant parts and plant juices to cure ailments, and this practice continues today [15]. Yet little attention has been given to their traditional expertise with a view to incorporating their knowledge into modern medicine. This study is an attempt to identify and document the use of traditional medicine among the local Orang Asli in the Kampung Bawong region of Perak.
Methods

Regular field trips were made to the selected tribal localities in different seasons of the year 2008, in a rural area located in Kampung Bawong. The authors worked with a specific tribe of Orang Asli called the 'Semang', who fall under the 'Negrito' group (Figs. 2, 3). Sixteen informants were involved in the interviews. All informants were in the age group of 35 to 65 years, and all were male. Three of them were practicing herbalists, and the remaining 13 were individuals who had gained knowledge of the medicinal uses of plants from parents and relatives who had historically used the plants with promising results. Interviews were conducted in a local dialect of the Malay language. Interviewing each informant individually was of fundamental importance to assure the reliability of the gathered information. Individual interviews were conducted with 7 informants (3 herbalists and 4 individual informants), and one group discussion involving the remaining 9 informants was also conducted. The interviews were built on trust, with a common aspiration to improve the health situation in the country and to conserve and increase knowledge of medicinal plants. The information was collected in the local dialect of the Malay language. Special care was taken while collecting information to steer clear of unoriginal information; accounts derived from sources such as books and magazines were rejected. Some informants were visited repeatedly during field trips to confirm the information they had provided previously. Interpretation and translation of the information received into technical or medicinal terms was cautiously avoided during the interviews so as to obtain a genuine picture of customs and uses. All the plants were identified by Dr. Encik Sani, Botanist, Department of Botany, Universiti Kebangsaan Malaysia, Selangor, Malaysia. Voucher herbarium specimens were prepared and deposited in the herbarium of the Department of Pharmacognosy, Masterskill University College of Health Sciences, Selangor, Malaysia.

Results and Discussion

The present ethnomedical field survey indicated that there are 62 medicinal plant species belonging to a total of 36 families in use in Kampung Bawong (Table 1). Most of these species grow naturally in the wild, and their medicinal properties are crucial in the traditional medicine of the Orang Asli. The majority of the species reported in this paper are widely known throughout peninsular Malaysia and are employed for a large number of medical conditions. The plants were often used by most of the informants for more or less the same purposes, with only slight variations in recipes. The plants are usually collected from the wild. All species were easily recognized by the informants by their respective local Malay dialect names. Some of the plants commonly used belong to

Conclusions

This ethnomedical field survey carried out among the Orang Asli living in the Kampung Bawong region of Perak, Malaysia reveals that many medicinal plants are still broadly used by the population of the study area for treating various diseases and ailments. It is believed that there are more than 100 species of traditional herbal medicines found in this region. Since many plant species are indicated as potential resources for treating various diseases, this should encourage further research in ethnomedicine. The informants' consensus on the treatment of the main reported diseases is quite high, lending more validity to the plants as traditional remedies.
The current data expand the known genetic resources available in the study area and signify a potential source of natural products for treating various diseases. The preservation of these plant species is the gateway toward developing efficacious remedies for treating diseases. Due to a lack of knowledge and interest among the younger generations, some of the traditional medical information has been buried together with the previous generations. This implies that the local government and village authorities need to act fast to conserve the ethnomedical knowledge of the Orang Asli in the village of Kampung Bawong; the medicinal plants require preservation in addition to the recording of ethnobotanical and ethnomedical knowledge. The preservation of these herbs, along with the traditional knowledge of how to use them, is an indispensable obligation for sustaining traditional medicine as a medicinal and cultural resource. Thus, future extensive research on these plants in this locality is recommended to identify and assess their ethnomedical claims.
Isolated Persistent Left Superior Vena Cava Revealed by an Associated Asthma

Background: Persistent left superior vena cava (PLSVC) is a rare anomaly of the thoracic venous system. Case Report: We present a case of a patient with isolated asymptomatic PLSVC, who was diagnosed because of dyspnea revealing an associated asthma. An 18-year-old male patient complained of paroxysmal sibilant dyspnea. Physical examination revealed no anomaly. The chest X-ray revealed cardiomegaly with a widening of the lower mediastinum. The electrocardiogram did not show any anomaly. Echocardiography showed the PLSVC. Thoracic contrast computed tomography of the chest showed ectasia of the right cardiac cavities and a double superior vena cava. The patient did not have similar family cases. Respiratory functional explorations led to the diagnosis of an associated asthma. Currently, he is followed up periodically. The asthma improved with inhaled corticosteroid treatment. Conclusion: PLSVC is rare but can have important clinical implications. Associated severe cardiac malformations must be systematically sought.

Background

Persistent left superior vena cava (PLSVC) is an uncommon vascular anomaly. It is usually asymptomatic and has no hemodynamic implications, so it is frequently detected when cardiovascular imaging is performed for an unrelated reason [1]. It affects about 0.1 to 0.5% of the general population [1]. PLSVC can be isolated or associated with other cardiac malformations, such as atrial septal defect, endocardial cushion defect, or tetralogy of Fallot [1]. In this case report, we present a patient with isolated PLSVC and no other cardiac abnormalities, who was diagnosed because of dyspnea revealing an associated asthma.

Case Presentation

An 18-year-old male patient presented with symptoms suggesting asthma. He was a high school student with no tobacco consumption. He did not have any exposure to bronchial irritant agents. He had a medical history of recurrent lower respiratory tract infections from birth until the age of 3 years. At the age of 13 years, paroxysmal dyspnea attacks began to occur. He did not have any associated respiratory or general symptoms. Physical examination revealed normal vital signs. Respiratory, cardiac, and abdominal physical examinations were normal. Complete blood count and basic metabolic panel tests were normal. The chest radiograph revealed cardiomegaly (cardiothoracic ratio = 0.6) with a widening of the lower mediastinum. The electrocardiogram did not show any anomaly. Thoracic contrast computed tomography of the chest showed ectasia of the right cardiac cavities and a double superior vena cava (Figures 1 and 2). Echocardiography showed a normal-sized left ventricle, dilated right cavities without signs of pulmonary arterial hypertension, no atrial septal defect, a non-dilated inferior vena cava, and an aspect in favor of persistent left superior vena cava. Flexible bronchoscopy showed diffuse mucosal inflammation of the right bronchial tree without any bronchial obstruction. Endoscopy also showed the presence of a supernumerary bronchus on the right side of the right main stem bronchus. The patient had normal spirometry. The methacholine challenge test showed mild bronchial hyperresponsiveness. There were no similar family cases of congenital abnormalities. Currently, he is followed up periodically. With 200 μg of inhaled beclomethasone twice daily, the asthma improved.
Discussion

PLSVC is an uncommon vascular anomaly but is considered the most common congenital anomaly of the thoracic venous system [2]. The incidence is about 0.1 to 0.5% of the total population [2]. Four anatomic types of PLSVC have been described: in the first type, there is an anastomosis between the left and right superior venae cavae through the innominate venous trunk. In the second type, the left and right superior venae cavae are completely separated [3]. In the third type, the right superior vena cava is absent or completely atrophied; in this case, blood drainage is realized through the left superior vena cava. In the fourth type, the left and right superior venae cavae are separated, each one presenting its own corresponding azygos vein [3]. In our case report, imaging showed the first category of the previously described classification. Most PLSVCs are asymptomatic and have no hemodynamic abnormalities [4]. A PLSVC can be detected accidentally during imaging for unrelated symptoms or in the context of an invasive intravascular procedure [4]. However, in some cases there are cardiac rhythm disorders; abnormal sinus rhythm or bradycardia is present in 36% of cases [5]. This cardiac rhythm disorder can lead to an indication for pacemaker implantation because of sick sinus syndrome resulting from histological abnormalities caused by an enlarged coronary sinus. This arrhythmia can be caused by histological modifications in the atrioventricular sinus. The existence of multiple electrical nodes between the PLSVC, coronary sinus, and right atrium can also lead to this arrhythmia [6]. Tachyarrhythmia can be caused by the electrical generating capacity of the PLSVC, interatrial conduction delay, or atrial arrhythmia secondary to coronary sinus dilation [6]. In our case, we did not note any hemodynamic abnormalities, and the electrocardiogram did not show any anomaly. Clinical cyanosis can also be a manifestation of PLSVC (8% of patients) due to a right-to-left shunt. In this case, patients always have an atrial septal defect, ventricular septal defect, or other cardiovascular malformations. Echocardiography in our case did not show any atrial or ventricular septal defect or other cardiovascular malformation. Contrast echocardiography after administration of agitated saline solution should have been undertaken to document shunts between the right and left cavities. The detection of PLSVC is important when the left subclavian vein is used for catheterization procedures such as Swan-Ganz catheterization, and for access in renal dialysis [2,4], in chemotherapy, or in pacemaker placement [7,8]. These procedures can be complicated by left subclavian vein thrombosis, cardiac arrhythmia, perforation of the coronary sinus, cardiac tamponade, cardiogenic shock, or even death [4]. When using a central venous catheter, drugs can enter the systemic circulation directly when they are administered via the left brachiocephalic vein [4]. PLSVC can also be complicated by paradoxical embolism secondary to a right-to-left shunt when the PLSVC drains into the left atrium, either directly or via an unroofed coronary sinus [4]. A PLSVC can also cause problems in pacemaker implantation or cardiopulmonary bypass [8]. The possibility of severe cardiac malformations associated with PLSVC leads us to indicate further cardiologic investigations such as transthoracic contrast echocardiography, transesophageal echocardiography with agitated saline contrast, MRI, or cardiac computed tomography with contrast [9].
The only limitation of thoracic contrast computed tomography is radiation exposure. MRI could be a better alternative, significantly superior to both transthoracic and transesophageal echocardiography [9,10]. In our case, the symptoms suggested the diagnosis of asthma. However, the aspect of widening of the lower mediastinum on the chest radiograph (unusual in asthma patients) led us to perform thoracic contrast computed tomography, which revealed the presence of the PLSVC.

Conclusions

PLSVC is a very rare vascular anomaly. It is usually asymptomatic and incidentally detected on cardiovascular imaging, whatever the reason for the imaging. To confirm the diagnosis, magnetic resonance imaging can be a better alternative, significantly superior to both transthoracic and transesophageal echocardiography. PLSVC may have important clinical implications, especially during certain catheterization procedures or in the case of associated severe cardiac malformations.

Data Availability

Data are available on request from the corresponding author (nidhalbelloumi@gmail.com).
DETERMINANTS OF INVESTORS DECISION MAKING: A CASE STUDY IN THE INDONESIA STOCK EXCHANGE (IDX)

Inflation in Indonesia tends to pose an issue for investors with regard to the yield from the investment instruments they choose. The capital market in Indonesia is currently promoting share ownership with the slogan "let's save shares"; at this time, the returns continue to improve. The purpose of this research is to identify the characteristics of stock investors on the IDX and to analyze the decision-making process of stock investors in choosing stocks on the Indonesia Stock Exchange. It is also intended to find out the factors influencing investors to engage in trading activities on the IDX. The data analysis techniques used are validity and reliability testing of the questionnaire given to respondents, descriptive analysis, and factor analysis. The results of the study show that the questionnaire used is valid and reliable. In the decision-making stages of stock purchase, problem recognition identified a source of income as the primary motive, with the main goal being to obtain capital gains and dividends. Most information searches rely on the internet, with search times of around 1-2 days. In evaluating alternatives for purchase, stocks turn out to be the leading choice, with the main consideration being profit potential. In determining the purchase decision, all respondents planned carefully, and the investors themselves had the most influence, with decision-making times ranging from 2-7 days. In post-purchase behavior, respondents were satisfied with their choice, and the waiting time to obtain benefits ranged from 3-4 weeks. This research shows that four factors influence investors in buying and selling shares on the IDX, namely information factors, preferred stock factors, market activity factors, and risk-limiting factors. The most influential factor is information, which accounts for the highest percentage, 46.52 percent, among the observed factors. Keywords: Capital Market, Investors, Factor Analysis, Decision Making.

INTRODUCTION

Inflation is a financial threat faced by every individual. Individuals who do not prepare their financial planning have great potential to lose money in the future. Inflation can be interpreted as a decrease in the purchasing power of money for goods or services (Widianto, 2017). Over time, money of the same nominal value can no longer buy goods of the same value. Indonesia's annual inflation data for the past five years can be seen in the figure below. Indonesia's annual inflation tends to fluctuate. During the period 2014 to 2018, Indonesia's inflation rate averaged 4.30 percent. After dropping to its lowest point in five years at 3 percent year-on-year in 2016, Indonesia's inflation increased again to 3.6 percent in 2017. Many ways can be used to deal with inflation: saving money in a bank while expecting a return in the form of interest, or investing, with the opportunity for gains greater than the rate of inflation. Comparing the returns of bank savings and investment, investment is the worthier choice, because savings carry an average interest rate of 6.05 percent, only 1.75 percentage points above Indonesia's average inflation, while investment products have had an average return of 8.11 percent over the last three years.
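To make the comparison concrete, here is a short sketch using the figures quoted above (4.30 percent average inflation, a 6.05 percent savings rate, and an 8.11 percent investment return); framing the comparison through the Fisher equation for real returns is our choice of presentation, not the paper's.

```python
avg_inflation = 0.0430   # Indonesia's average inflation, 2014-2018 (from the text)
savings_rate  = 0.0605   # average bank savings interest rate (from the text)
invest_return = 0.0811   # average investment-product return (from the text)

def real_return(nominal, inflation):
    """Fisher equation: growth of purchasing power after inflation."""
    return (1 + nominal) / (1 + inflation) - 1

print(f"savings real return:    {real_return(savings_rate, avg_inflation):.2%}")   # ~1.68%
print(f"investment real return: {real_return(invest_return, avg_inflation):.2%}")  # ~3.65%
```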
According to Haming and Basalamah (2010), investment can be interpreted as the activity of committing a certain amount of capital or money at the present time in the hope of obtaining profits or returns from that capital within a certain period of time. Each type of investment has its own characteristics related to the returns obtained, namely the level of investment risk, the ideal investment period, the ease of withdrawing the investment, and the amount of capital needed. Investment can be made in various instruments and places; one of them is investing in the capital market. The capital market acts as a liaison between investors and companies or government institutions through the trading of long-term instruments such as stocks, bonds, mutual funds, and so forth. The purpose of investors in investing is to get a return on the amount of capital invested. Different investment products also have different returns; data on the returns of investment products can be seen in the figure below. Capital market investment products in the form of shares have the highest rate of return compared to other products, with average growth of 11.59 percent per year. In addition to the high rate of return, stocks are also an investment that is easily liquidated by investors. Choosing a stock as an investment requires considering many factors; choosing the wrong stock brings not profit but loss. These factors include the issuer, the timing of purchase and sale, the company's financial statements, the history of stock movements, global market conditions, government policies, and other factors that can be taken into consideration during the decision-making process. According to Cholidia (2017), the decision-making process in choosing stocks produces two types of investors, namely investors who reason rationally and investors who emphasize psychological aspects such as experience and trust in other people's recommendations. Decision making that is based only on irrational considerations will produce sub-optimal results, which is, of course, detrimental to investors. Investor growth in the Indonesian capital market is considered quite good. However, when compared to other countries, public interest in investment in Indonesia is still quite low: only around 0.15% of Indonesia's population invests, while the figure is around 15% in Malaysia, 30% in Singapore, and 30% in Australia. Based on data from the Financial Services Authority in Indonesia, the number of securities accounts is still very small, at less than 600,000 accounts, compared to Thailand, which has reached 25 million accounts. Indonesia is a developing country whose financial orientation is short-term, i.e., in the category of a saving society. When compared with developed countries, the orientation there is more long-term, i.e., in the category of an investing society. Awareness of financial management there is so great that people are able to set aside 30 percent of their income for investment. Therefore, intensive and sustained public education is needed to move society from a saving society to an investing society. Education carried out in stages is expected to be able to build community motivation to move from saving to investing. Education about the capital market is important for the public because it is useful for increasing the number of parties interested in investing in the capital market (Tandio, 2016).
Therefore, the government, through the IDX, launched a campaign program called "Let's Save Stocks" in order to increase the number of investors in the Indonesian capital market. This campaign aims to motivate, educate, and develop the capital market industry, as well as to add new investors, targeting young people such as university students, school students, and young employees. Investments come in various types. One form of investment that is popular and attractive nowadays is investment in the form of shares. Shares are proof of ownership of a company, whose owner is thereby a shareholder (Samsul, 2006). Based on a survey conducted by the IDX, Nielsen, and the University of Indonesia, it is known that young people have great potential to become stock investors. The results of these studies show that share ownership has begun to become part of people's lifestyles (Rezza, 2016). The trend of buying luxury and branded goods to be used as investment instruments has begun to recede. Lately, the public has again looked at investments in the capital market through share saving. This is inseparable from the campaign movement carried out by the IDX. The Unimed campus community is also a target of the IDX as investors and prospective investors; information that adds insight and confidence in understanding capital market courses will also increase motivation to invest. Based on the background described above, the formulation of the problem in this research is: what factors influence the Unimed campus community in investing in the IDX? The scope of this research is a financial study of the factors that influence investors in investing in the IDX, and the respondents involved are lecturers, employees, and students who have taken or are currently taking capital market courses. This study aims to identify the characteristics of stock investors on the IDX, analyze the investor decision-making process, and see what factors influence the Unimed campus community in investing in the capital market. The benefit of this research is that it can provide information to investors who will invest in the capital market, so that investors can obtain maximum investment returns; it can also increase understanding of investor behavior in stock investment decision making and provide additional information and data for research with similar themes.

LITERATURE REVIEW

Purchase Decision Process

Consumer decision making includes all the processes that consumers go through in identifying problems, finding solutions, evaluating alternatives, and choosing among their purchasing options. According to Kotler and Keller (2009), there are five stages of the purchase decision process, namely problem recognition, information search, alternative evaluation, purchase decision, and post-purchase behavior. We can describe the decision process as follows: a) Problem Recognition: the buying process starts when the buyer becomes aware of a problem or need triggered by internal or external stimuli. With internal stimuli, one of a person's normal needs rises to a maximum level and becomes a drive; needs can also arise due to external stimuli. b) Information Search: we can distinguish between two levels of involvement with search. The lower search state is called heightened attention; at this level a person simply becomes more receptive to information about a product. At the next level, one can enter active information search by looking for reading material, calling friends, going online, and visiting stores to learn about the product.
The main sources of information for consumers fall into four groups: personal (family, friends, neighbors, colleagues), commercial (advertisements, websites, salespeople, distributors, packaging, displays), public (mass media, consumer rating organizations), and experiential (handling, examining, and using the product). c) Alternative Evaluation: the basic concepts that help us understand the evaluation process are that consumers are trying to satisfy a need, that consumers look for certain benefits from the product solution, and that consumers see each product as a bundle of attributes with varying abilities to deliver the benefits needed to satisfy that need. Consumers will pay the greatest attention to the attributes that deliver the benefits that meet their needs. d) Purchase Decision: in the evaluation phase, consumers form preferences among the brands in the choice set. Consumers may also form an intention to buy the most preferred brand. In carrying out the purchase intention, consumers can form five sub-decisions, namely brand, supplier, quantity, timing, and payment method. e) Post-Purchase Behavior: after a purchase, consumers may experience conflict due to noticing certain worrying features or hearing pleasant things about other brands, and they remain alert to information that supports their decision.

Factors That Influence Investors in Investing

Investment can be interpreted as an activity of committing a certain amount of capital or money at the present time in the hope of obtaining profits or returns from that capital within a certain time period. This statement is in line with Tandelilin (2010), who defines investment as a commitment of a number of funds or other resources made at the present time with the aim of obtaining future benefits. Broadly speaking, investment is divided into two types, namely real investment and financial investment. Real investment can be defined as investment in real or tangible objects, for example investment in property, gold, land, trading, and so on. Financial investment, meanwhile, is investment in financial assets, for example stocks, deposits, bonds, and so forth. The psychology of a trader greatly influences the level of success in investing, because it affects the decisions taken when trading stocks. According to Wira (2017), the basic behavior of traders when trading shares is divided into two drives, namely seeking pleasure and avoiding suffering or loss. When in a state of loss, the trader faces several alternative choices: fighting, i.e., remaining confident in his analysis; holding on, i.e., accepting the situation; and, finally, exiting the loss, i.e., cutting the loss. There are many factors that can influence stock investment decisions on the IDX. According to Septyanto (2013), several factors influence investment in the capital market, both internal company factors in the form of information, risks and returns, and corporate policies, and external factors in the form of world market conditions as well as issues or rumors.

Information

Information has a significant influence in shaping an investor's perception when making decisions. Scott (2009) states that accounting information is informative if it helps investors revise their initial beliefs about shares in the decision-making process of buying or selling them. Information limitations make decisions difficult and affect investors' choices among the offered stocks.
Product information can reach investors in various forms, such as financial statements, fundamental and technical analyses of shares, the business performance of listed companies, future business prospects, and recommendations from stock analysts.

Risk and Return

Risk is the chance of failing to obtain a yield in line with estimates for an investment; according to Tandelilin (2001), risk is the possibility of a difference between the actual return received and the expected return. Risk has a close relationship with the rate of return, and the two cannot be separated, so together they shape a trader's investment style. Broadly speaking, there are two strategies commonly used by investors in buying and selling shares in this regard, namely the stock investment strategy and the stock trading strategy (May, 2017). The stock investment strategy tends to carry low risk because it can reduce the risk of price fluctuations by investing in a stock for the long run. The stock trading strategy, by contrast, is to buy and sell shares in the short term, focusing on profits derived from the difference between buying and selling prices; this strategy carries higher risk than the stock investment strategy because it exploits fluctuations in stock prices. The close relationship between risk and profit makes risk and return factors that investors consider when investing.

Corporate Policy

Corporate policy is an initiative taken by a company that can have an impact on investors' share ownership or on the price of the stock itself. Some corporate actions commonly carried out by issuers are the buyback and the right issue. A buyback is a policy of repurchasing shares outstanding in the public by an issuer to increase its share ownership and reduce the number of shares outstanding in the public. A right issue, by contrast, is a policy of increasing the number of shares outstanding in the public with the aim of obtaining additional funds for the issuer.

World Market Conditions

Investors who invest in the IDX do not only come from within the country; foreign investors also participate in stock trading on the IDX. This influx of foreign investors also influences the investment strategy of domestic investors, who follow the activities of foreign investors in buying or selling a stock. As a result, when foreign investors release their shares, domestic investors join in, which can cause the index to fall even more sharply. Foreign investors invest their capital in exchanges throughout the world, so the world's exchanges have global links. Events and stock price dynamics on one exchange influence other exchanges, especially those of neighboring countries; for example, a crash on the Singapore exchange will have an impact on the Taiwan, Hong Kong, Japan, and Indonesia stock exchanges.

Rumors or Issues

The strategy of "buy on rumors, sell on news" is widely practiced by investors in the capital market. Investors expect to get an abnormal return by buying shares before the rumors become news (Brunnermeier, 2001, 2005). In addition to the chance of abnormal returns, the strategy also carries higher risk. Risks arise related to changes in patterns of stock price volatility, because rumors must be validated before they become information (Berger et al., 2011).

Place and Type of Research

The study was conducted at Unimed using questionnaire sheets filled out by investors or potential investors.
The types and sources of data used are primary and secondary data. This research is quantitative research.

Data Analysis Technique

The population in this study is the Unimed campus community in the Faculty of Economics, comprising lecturers, employees, and students who have taken or are currently taking capital market courses, totaling approximately 1,500 people. The sampling method used is probability sampling, specifically simple random sampling. The number of respondents needed in this study was determined by the Slovin formula, n = N / (1 + N e^2); with N = 1,500 and an error rate e of five percent, the sample in this study is 316 respondents. Validity testing was done using SPSS Statistics software with the Pearson product-moment technique. The reliability test used in this study was Cronbach's alpha. Descriptive statistics are used to describe or give an idea of the object under study through sample data. Factor analysis, according to Ghozali (2013), is an analysis which aims to define the structure of a data matrix and analyze the structure of interrelationships (correlations) among a large number of variables (test scores, test items, questionnaire answers) by defining a set of common underlying dimensions, often called factors or components. The factor analysis used is principal component analysis (PCA) and common factor analysis, which is often called principal axis factoring.

Characteristics of Respondents

The characteristics of respondents in this study can be distinguished by sex, age, last education, occupation, and investment duration, and can be seen in the table below. Based on the table, there were more male respondents (50.9 percent) than female respondents (49.1 percent). This shows that men's interest in investing in shares on the IDX is quite large. By age, most respondents were in the 17-to-25-year range (87 percent). By last education, respondents with high school or vocational education dominate at 85.1 percent. By occupation, students form the largest group at 85 percent, followed by respondents working as civil servants at 13 percent. Based on the length of investment in the capital market, most respondents had invested on the IDX for less than 1 year (157 respondents), followed by 1-5 years (141 people).

Validity and Reliability

The validity test results show that all items are valid, because the computed r values are greater than the critical r value: the computed r values lie in the range 0.437-0.800, which is greater than the critical r value of 0.1104. Statistical testing yielded a Cronbach's alpha of 0.940, which is greater than 0.60, so the statements in this study can be declared reliable.
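The study's computations were run in SPSS; as an illustrative sketch, the same two quantities — the Slovin sample size and Cronbach's alpha — can be computed in plain Python as below, where the item-score matrix layout (respondents as rows) is our assumption.

```python
import numpy as np

def slovin(N, e):
    """Slovin's formula: n = N / (1 + N * e^2)."""
    return int(np.ceil(N / (1 + N * e ** 2)))

print(slovin(1500, 0.05))   # -> 316, matching the reported sample size

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```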
Descriptive Analysis of Decision Making

Based on the theory of Kotler and Keller (2009), there are five stages of the purchase decision process, namely problem recognition, information search, alternative evaluation, purchase decision, and post-purchase behavior. The results of the study can be seen in the table below. Based on the table, when making decisions to buy and sell shares, at the problem recognition stage respondents chose a source of income as the main goal of investing (42 percent), followed by a medium for learning about financial products (35 percent). The expected benefit is obtaining capital gains and dividends at the same time (40.8 percent), followed by those who expect only dividends (38.3 percent). In evaluating alternatives, shares remain the main choice (99 percent), with profit potential the main consideration in choosing shares (43.7 percent), followed by share price (24.7 percent). In the purchase decision stage, 100 percent of respondents planned their decisions, and the most influential party in making decisions is the respondent himself (75.3 percent), followed by stock analysts (10.1 percent). In post-purchase behavior, 56.3 percent of respondents were satisfied and 34.2 percent felt very satisfied; as for the time needed to obtain a profit, 36.4 percent answered 3-4 weeks, while 32 percent answered less than 1 week.

Factor Analysis

In this study there are five factors used for factor analysis, namely information factors, risks and returns, corporate policies, world markets, and rumors or issues. These factors comprise 21 variable elements, namely financial statements, charts of price movements, issuer performance, issuer business prospects, stock analyst recommendations, capital gains and/or dividends, stock support and resistance, blue-chip shares, fried stocks, a -2% risk of loss, a -8% stop loss, right issues, buybacks, stock splits, the GMS, dividend distribution, net foreign buy, net foreign sell, good rumors, and bad rumors. The first step in factor analysis is to look at the value of the Kaiser-Meyer-Olkin measure of sampling adequacy (KMO). The KMO value is used to determine whether the indicators are suitable for use. The KMO value meets the criterion if it is greater than or equal to 0.5; in that case the variables are feasible to test, and the analysis can continue to the next stage. The KMO value obtained from the studied variables is 0.885, which is greater than 0.5, so factor analysis is feasible for this study. The next step is to examine the anti-image matrices table. In the anti-image matrices table there is an anti-image correlation section, which shows the MSA values as the numbers along the diagonal. The MSA value must be greater than 0.5; if a variable has an MSA value below 0.5, that variable must be eliminated. For the variables tested, the MSA values of all variables were above 0.5, so the analysis could proceed to the next stage. In the next stage, communality values are generated through extraction with the principal component analysis (PCA) method. The PCA method also produces the Total Variance Explained table, which shows the eigenvalues. Eigenvalues greater than 1 indicate that a factor is able to represent the variables analyzed, as indicated by the magnitude of the variance explained. In the extraction process using the PCA method, four new factors had eigenvalues greater than 1.
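The extraction itself was performed in SPSS; an equivalent sketch of the Kaiser criterion (retain components with eigenvalue greater than 1) on an item correlation matrix, using synthetic placeholder data of the study's dimensions, might look like this:

```python
import numpy as np

def kaiser_retained_factors(X):
    """X: (n_respondents, n_items) questionnaire score matrix.

    Returns the eigenvalues of the item correlation matrix and the loadings
    of the components retained under the Kaiser criterion (eigenvalue > 1).
    """
    R = np.corrcoef(X, rowvar=False)                      # item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]                     # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                                  # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # component loadings
    return eigvals, loadings

rng = np.random.default_rng(0)
X = rng.normal(size=(316, 21))        # placeholder data: 316 respondents, 21 items
eigvals, loadings = kaiser_retained_factors(X)
print((eigvals > 1).sum(), "factors retained")
```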
First Factor (Information)
The first factor formed from the factor analysis is named information and has eleven variables: financial statements, stock price charts, issuer performance, issuer business prospects, stock analyst recommendations, dividend distribution (cum/ex date), dividend-paying issuers, rights issues, buybacks, stock splits, and the GMS. This is the largest factor formed by the analysis, explaining 46.52 percent of the variance in the data, which means the decision-making process in choosing stocks to buy and/or sell weighs internal and external information at 46.52 percent, making it the most important factor to consider in choosing shares on the IDX. The factor loadings for the variables in the information factor range from 0.829 to 0.564.

Within the information factor, the financial statement variable has the highest loading (0.829), indicating that before investors buy or sell shares, financial statement information is the main input for deciding whether an issuer's shares are worth buying or selling; when a company's financial statements are released, they influence investors' buying and selling decisions on the IDX. The financial statements reflect the company's performance over quarterly (three-month) periods. A healthy and profitable financial report naturally attracts investors to buy the issuer's shares; conversely, if the financial statements show that the issuer is suffering losses and carrying a lot of debt, investors will shy away from the issuer's shares.

The business prospects variable has the second largest loading (0.790): the future business prospects of listed companies matter because the better a company's prospects, the more attractive the shares are to investors, which drives the share price up, and vice versa. The daily stock price movement chart variable has a loading of 0.770. Daily price charts can reflect and help predict price movements on the next trading day and are examined through technical analysis, the simplest form of which is identifying a stock's support and resistance areas. The daily chart variable therefore influences investors' buying and selling decisions on the IDX.

The rights issue variable, with a loading of 0.748, is a policy of increasing the shares outstanding in the public, carried out by the issuer as one way to raise cash. Funds from a rights issue can be used for market expansion or to pay down debt. Share prices tend to rise, and investors are drawn to buy, when it is announced that rights issue proceeds will fund market expansion or other positive corporate activities; conversely, a rights issue will fail to attract investors if the funds are only used to pay company debts.

The share buyback variable has a loading of 0.712, indicating that it strongly influences investors' buying and selling decisions on the IDX. A buyback is the company's repurchase of its own shares outstanding in the public. The implication of this policy is that the share price rises towards the price limit offered by the company.
Investors who participate in a buyback are generally short-term traders, because the buyback moment occurs over a relatively short time. The company performance (operations) variable has a loading of 0.703. The company's operations are also a consideration for investors buying and selling shares on the IDX, because knowing that the company's operational activities and its products or services are running well leads investors to expect sales profits that will later be distributed to investors as dividends.

The stock analyst recommendation variable has a loading of 0.688, meaning it influences the investor decision-making process on the IDX. Stock analysts typically publish recommendations on which stocks look attractive to buy or sell each morning before the trading floor opens; analysts may be paid or provided free by securities firms. Quite a few investors, especially novice investors, use analyst recommendations as a guide for buying and selling shares on the IDX.

The GMS variable has a loading of 0.654, meaning it also influences investor decision making on the IDX. When a general meeting of shareholders (GMS) is held, investors are encouraged to accumulate the issuer's shares, because after the GMS there is usually a dividend distribution to shareholders. A further motive for buying shares ahead of a GMS is the opportunity to attend the scheduled meeting and learn the policies and actions the company has taken and will take in the future.

The dividend distribution variable, with transactions at the cum date and ex date, has a loading of 0.622, and the dividend-paying issuer variable has a loading of 0.564. Dividends are the yields given to each shareholder of record. To obtain dividend rights, investors must be registered as shareholders at the cum (accumulation) date rather than the ex (exit) date. Before the cum date, share prices usually surge as many investors buy the shares, and they fall at the ex date as retail investors who only targeted the dividend record date sell their shares; in addition, companies that routinely distribute dividends attract investors seeking long-term investment. The last variable is the stock split, with a loading of 0.594. A stock split divides the existing shares into a larger number of shares at a proportionally lower nominal value per share, which allows investors to buy the shares at a cheaper price. Issuers that carry out stock splits usually have share prices in the expensive category, which affects the liquidity of those shares.

Second Factor (Preferred Stock)
The second factor formed from the factor analysis is named preferred stock and has four variables: the resistance area, the support area, bluechip stocks, and fried stocks. This factor explains 9.56 percent of the variance in the data, meaning that investors' decision making in buying shares on the IDX weighs the preferred stock factor at 9.56 percent. The loadings of the variables in this factor range from 0.792 to 0.608. The resistance area variable, i.e. the price being in the resistance area, has a loading of 0.792.
When the share price is in the resistance area, two things can happen: if the price breaks resistance and makes a new high (a breakout to a new high), investors will tend to buy the stock and accelerate the price increase further; but if the price fails to penetrate the resistance area, investors will sell their shares to secure the profits they have earned, causing the price to fall quickly. The second variable in this factor is the price being in the support area, with a loading of 0.735. When the share price is in the support area, two things can happen: if the price breaks support, investors will tend to sell their shares, because the price will tend to keep falling and form new support; but if the price fails to break below the support area and turns around, investors will buy the shares, because the price is likely to rise over some period.

The next variable in the preferred stock factor is bluechip stock, with a loading of 0.712. Bluechip shares are shares of large-capitalisation issuers listed on the IDX; these issuers are generally the market leaders in the businesses they run. Bluechip shares attract investors because they are considered safe and low-risk for long-term investment. The last variable in the preferred stock factor is fried stocks ("saham gorengan"), with a loading of 0.608. A fried stock is a type of stock that may deliver a loss in the short term after purchase, but not long afterwards can turn up and deliver a fairly good return. Fried stocks move illiquidly and are easily manipulated because of their low prices, so they rise and fall easily. They are difficult for novice investors without much information, for whom a fried stock may cause more harm than profit: prices can fall very sharply to near the suspension limit but can also rise again to the upper suspension limit. Trading this type of stock therefore requires considerable courage and suits investors who like risk in stock investments.

Third Factor (Market Activity)
The third factor formed from the factor analysis is named market activity and has four variables: net foreign sell, good rumours, net foreign buy, and bad rumours. This factor explains 7.63 percent of the variance in the data, meaning that investors' decision making in choosing stocks to buy and/or sell weighs the market activity factor at 7.63 percent. The loadings of the variables in this factor range from 0.823 to 0.659. The net foreign sell variable has the highest loading (0.823), indicating that when foreign investors sell an issuer's stock, it psychologically pushes local investors to sell their shares in that issuer as well. The second variable in the market activity factor is buying when there are good rumours, with a loading of 0.774. Good rumours usually push the share price up in a short time, because they attract investors to accumulate the issuer's shares until the official news is finally published. The next variable is net foreign buy, with a loading of 0.732.
In contrast to net foreign sell, when there is a surge in volume on a stock caused by net foreign buy, local investors usually accumulate the shares as well. The last variable is selling when there are bad rumours, with a loading of 0.659. Selling when bad rumours circulate is a form of risk management that reduces losses if the rumour turns out to be true.

Fourth Factor (Risk Limitation)
The fourth factor formed from the factor analysis is named risk limitation and has two variables: a maximum stop loss tolerance of -8% of the trade in an issuer's shares, and a maximum risk of loss of -2% of total investment capital. This factor explains 5.76 percent of the variance in the data, meaning that investors' decision making in buying shares on the IDX weighs the risk limitation factor at 5.76 percent. The loadings of the variables in this factor range from 0.783 to 0.696. The maximum stop loss tolerance of -8% per trade has a loading of 0.696: investors tend to consider a stop loss, selling their shares, if the stock falls -8% from the purchase price, although the exact figure depends on each investor's money management and may be tighter or looser than -8%. The maximum risk of loss of -2% of total investment capital has a loading of 0.783: investors consider and evaluate the performance of their holdings once they reach a maximum loss of -2% of total investment capital. (A position-sizing sketch combining these two limits is given after the suggestions at the end of this article.)

Conclusions
Based on the research results, the conclusions are as follows: 1) The 316 stock investor respondents on the IDX were dominated by male investors (50.9 percent); 87 percent of respondents were aged 17 to 25 years; the most common last education was SMA/SMK (85.1 percent), matching the dominant occupation of student; and 49.7 percent of respondents had invested in shares on the IDX for less than 1 year. 2) In the validity test, all statement items fell in the range 0.437-0.803, above the table r value of 0.1104, so all statement items in this study are valid; the Cronbach's alpha value of 0.940 is greater than 0.6, so the statements in this study are reliable. 3) The decision-making process of stock investors choosing shares on the IDX divides into five stages. At need recognition, the motivation driving investors to invest in shares is a source of income, with the expected benefit of obtaining capital gains and dividends. At the information search stage, information about listed companies is obtained from internet searches, with search times of around 1-2 days. At the alternative evaluation stage, investors choose stocks for their profit potential, and almost 100 percent of investors have no investment choice other than stocks. At the decision stage, all investors plan share purchases in advance, the biggest influence is the investors themselves, and the time needed to settle on an investment choice is 2-7 days.
At the post-decision stage, the time to obtain a profit ranges from 3-4 weeks, and the level of satisfaction felt by investors after investing is satisfied, with investors willing to advise others to invest in shares on the IDX. 4) Four factors influence the decision making of stock investors in choosing shares on the IDX: the information factor (46.52 percent), the preferred stock factor (9.56 percent), the market activity factor (7.63 percent), and the risk limitation factor (5.76 percent).

Suggestions
Based on these conclusions, the suggestions are as follows: 1) Investors should make a habit of summarising the information they obtain, in order to build trading plans and manage money so that profits are consistent. Investors should also keep a record of every transaction to evaluate the performance of their stock portfolio over a period, for example at weekends or at the end of the month. 2) Since the respondents in this study are predominantly students, they are advised not to transact as swing traders or scalpers, because the research shows that information is the factor that most influences buying and selling decisions, and studying information well enough to make the right investment decisions takes time and experience. 3) Future researchers are expected to add factors beyond those examined in this study, such as macroeconomic, microeconomic, and political factors, to extend this research on influences on stock purchase decisions on the IDX.
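As promised in the discussion of the risk limitation factor, a minimal position-sizing sketch follows, combining the -2% capital risk limit with the -8% per-trade stop loss. All input values (capital, price) are hypothetical, the 100-share lot size is standard IDX practice, and actual money management will differ between investors.

```python
# Position sizing from the two risk-limitation variables (hypothetical inputs).
capital = 100_000_000      # total investment capital in IDR (assumed)
max_capital_risk = 0.02    # -2% maximum loss on total capital
stop_loss = 0.08           # -8% stop loss per trade
price = 5_000              # entry price per share in IDR (assumed)

risk_budget = capital * max_capital_risk        # IDR lost if the stop is hit
position_value = risk_budget / stop_loss        # largest position respecting both limits
lots = int(position_value // (price * 100))     # IDX shares trade in lots of 100
print(risk_budget, position_value, lots)        # 2,000,000 IDR; 25,000,000 IDR; 50 lots
```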
Prognosis of Elderly Japanese Patients Aged ≥80 Years Undergoing Hemodialysis

Although the number of elderly patients requiring dialysis has increased, data regarding the prognosis of elderly patients undergoing hemodialysis are limited. In the present study, prognosis in Japanese hemodialysis patients aged ≥80 years was evaluated. From January 1988 to July 2013, 1144 consecutive patients with end-stage renal disease required renal replacement therapy at our institution; of these, 141 were aged ≥80 years. These patients' charts were retrospectively reviewed for relevant clinical variables and survival time. The life expectancies table from the National Vital Statistics database was used, and prognostic factors were assessed by multivariate analysis. In total, 107 deaths (76%) were recorded during the study period. The median survival time and estimated life-shortening period in the patients were 2.6 years and −5.3 years, respectively. Eastern Cooperative Oncology Group Performance Status and hemoglobin level were revealed as prognostic factors in the multivariate analysis. Estimates of prognosis and prognostic factors may provide useful information for physicians as well as elderly patients with end-stage kidney disease.

Introduction
As the Japanese population continues to age and the prevalence of chronic kidney disease increases [1,2], clinicians are frequently faced with the decision of whether or not to initiate renal replacement therapy for their patients. According to the latest nationwide review conducted by the Japanese Society for Dialysis Therapy in 2012, 309,946 patients were on dialysis, and dialysis was initiated in 38,165 new patients that year [3]. Along with this increase in the number of dialysis patients, the number of older patients (≥80 years) undergoing hemodialysis treatment each year has also increased. In 2004, 14% of all dialysis patients in Japan were ≥80 years old. These figures were 16% in 2006, 18% in 2008, 19% in 2010, and 22% in 2012, whereas the number of Japanese patients aged 70-79 years receiving dialysis has remained unchanged in the last decade (Figure 1) [3]. Many clinicians believe that age is a barrier to initiation of renal replacement therapy because dialysis in elderly patients has been associated with an increased risk of mortality. However, data regarding the prognosis of elderly patients undergoing hemodialysis are limited. Thus, in the present study, the median survival time in hemodialysis patients aged ≥80 years was evaluated, and the period of time by which these patients' lives were shortened (life-shortening period) was estimated using a life expectancies table from the National Vital Statistics data for 2008 [4]. Prognostic factors were then assessed by multivariate analysis.

Materials and Methods
This study was conducted in accordance with the ethical standards of the Declaration of Helsinki and approved by the Institutional Ethics Committee. From January 1988 to July 2013, 1144 consecutive patients with end-stage renal disease required renal replacement therapy at the Oyokyo Kidney Research Institute, Hirosaki, Japan. Of these, 141 were aged ≥80 years. Patient charts were retrospectively reviewed for relevant clinical variables and survival time.
The following data were collected for use in the analyses: patient age, gender, body mass index, and blood pressure; hemoglobin, serum albumin, phosphorus, potassium, and corrected calcium levels; blood urea nitrogen level and estimated glomerular filtration rate (eGFR); concomitant use of antihypertensive drugs (angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, or calcium blockers); and presence or absence of diabetes mellitus or cerebral and cardiovascular disease (cerebral infarction, heart failure, myocardial infarction, and angina pectoris) at the initial visit. The eGFR was calculated from age, gender, and serum creatinine level using the equation below [5]. This eGFR equation for Japanese patients is a modified version of the abbreviated Modification of Diet in Renal Disease Study formula [6]:

eGFR (mL/min/1.73 m²) = 194 × sCr^(−1.094) × age^(−0.287) (× 0.739 if female).

Patient general health status before dialysis initiation was evaluated on the Eastern Cooperative Oncology Group Performance Status scale (ECOG-PS) [7]. Life expectancy was calibrated using the life expectancies table [4], based on the expected age of death for a given age at dialysis initiation. To evaluate differences in life expectancy between these patients and the general population, life-shortening periods were calculated according to the following formula: expected age of death at the age of dialysis initiation − the actual age of death.

Basic Policies for Indication of Renal Replacement Therapy
Hemodialysis is the standard treatment strategy for renal replacement therapy in elderly patients (≥80 years) with end-stage renal disease at our institution. The purpose of this treatment is to minimize present suffering, gain time to consider continuation of renal replacement therapy and its alternatives, and ensure renal survival. Patients who refuse renal replacement therapy and those with systemic comorbidities, extremely advanced heart failure, or severe complications are designated as "not indicated for treatment."

2.2. Follow-Up Schedule
All patients were routinely followed up for thrice-weekly hemodialysis with standard care according to the guidelines of the Japanese Society for Dialysis Therapy for the management of patients on chronic hemodialysis [8,9] and were tracked until death, loss to follow-up, or the end of the study (July 31, 2013), whichever came first. Erythropoiesis-stimulating agents were used when the hemoglobin level was lower than 10 g/dL; the target hemoglobin level was 10-11 g/dL.

Statistical Analysis
Patient survival was evaluated using the Kaplan-Meier method. Variables were compared among groups using Student's t-test or the Mann-Whitney test. Age, gender, body mass index, blood pressure, hemoglobin, serum albumin, phosphorus, potassium, and corrected calcium levels, blood urea nitrogen, eGFR, concomitant use of antihypertensive drugs, presence or absence of diabetes mellitus, and cerebral and cardiovascular disease were analyzed using stepwise Cox regression multivariate analysis to determine independent predictors of overall survival. After these factors were identified, a receiver operating characteristic (ROC) curve was used to determine the optimal cut-off value for prognosis. This value was calculated by minimising the following quantity [10]: (1 − sensitivity)² + (1 − specificity)².
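The two closed-form expressions in this section are easy to implement. The sketch below codes the Japanese eGFR equation and the ROC cut-off criterion; the survival labels and hemoglobin values are simulated stand-ins (the real analysis used the 141 patients and SPSS/GraphPad), so the printed cut-off will only roughly resemble the 9.55 g/dL reported later.

```python
import numpy as np
from sklearn.metrics import roc_curve

def egfr_japanese(scr, age, female):
    """Japanese-modified MDRD eGFR (mL/min/1.73 m^2), as defined in the text."""
    return 194.0 * scr**-1.094 * age**-0.287 * (0.739 if female else 1.0)

def optimal_cutoff(y_true, score):
    """Cut-off minimising (1 - sensitivity)^2 + (1 - specificity)^2."""
    fpr, tpr, thresholds = roc_curve(y_true, score)   # sensitivity = tpr
    d2 = (1.0 - tpr)**2 + fpr**2                      # 1 - specificity = fpr
    return thresholds[np.argmin(d2)]

# Simulated example: 141 patients, death as outcome, lower haemoglobin = higher risk.
rng = np.random.default_rng(3)
died = rng.integers(0, 2, size=141)
hb = np.where(died == 1, rng.normal(9.0, 1.2, 141), rng.normal(10.5, 1.2, 141))

print(egfr_japanese(scr=4.8, age=82, female=True))  # e.g. ~7 mL/min/1.73 m^2
# Scores are negated so that a higher score means higher risk; the sign of the
# returned threshold is flipped back to read it as a haemoglobin cut-off in g/dL.
print(-optimal_cutoff(died, -hb))
```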
Each patient was categorized according to the number of risk factors identified in the Cox regression multivariate analysis to evaluate the predictive potential of risk criteria for prognosis. Each positively identified risk factor was given a score of 1, and the scores were summed. Patients were classified into three groups according to the number of risk factors: a low-risk group (no risk factors), an intermediate-risk group (one risk factor), and a high-risk group (two risk factors). The statistical significance of differences between the three groups was evaluated by the log-rank test. All statistical analyses were performed using the SPSS software package version 19.0 (SPSS, Chicago, IL, USA) and GraphPad Prism version 5.03 (GraphPad Software, San Diego, CA, USA). A p value of <0.05 was considered statistically significant.

Results
Characteristics of all 1144 dialysis patients are summarized in Table 1. The age distribution of patients was as follows: 129 (11%), 202 (18%), 324 (28%), 348 (30%), and 141 (12%) patients in each age category (Table 1). Only three patients survived longer than the general life expectancy in the Japanese population (Figure 4). The multivariate analysis revealed ECOG-PS and hemoglobin level as significant prognostic factors in elderly patients undergoing hemodialysis (Table 3). Risk criteria were constructed using these significant independent risk factors for stratification of patient survival. The optimal cut-off points calculated from ROC curves for ECOG-PS and hemoglobin level were >1 and <9.55 g/dL, respectively. Patients were then categorized according to the number of independent risk factors for overall survival. This risk classification indicated significantly poorer prognoses in the intermediate- and high-risk groups compared with the low-risk group (p = 0.0059) (Figure 5). The median survival time in the low-risk group was 63 months, whereas that in the other groups was 23-24 months.

Discussion
In Japan, the number of elderly patients with end-stage renal disease requiring dialysis treatment continues to grow. Data from the latest Japanese Society for Dialysis Therapy database (2012) showed that the number of dialysis patients aged ≥80 years increased by more than 8 percentage points between 2005 and 2013 [3], and 22% of all dialysis patients were aged ≥80 years. Several studies have evaluated the indications and outcomes of maintenance dialysis in elderly patients [11-18]; however, no reports have examined prognosis in these patients in Japan. In this study, survival outcome in elderly Japanese hemodialysis patients aged ≥80 years was compared with that in the general population. The median survival time of 2.6 years was comparable to that in previous reports of approximately 2.4-3.2 years [13,16,17,19]. Japan is a country where life expectancy is high [4,20]; in the general population, life expectancy among people aged ≥80 years is 7.6 years. The estimated median life-shortening period calculated from the life expectancies table was −5.3 years in deceased patients. These data could not be compared with those from other industrialized countries; however, comparison of estimated median life-shortening periods from various countries may reveal social differences, including those related to medical or insurance systems. Further studies are required on this issue.
Because elderly patients on hemodialysis constitute a heterogeneous group, their chronological age may not necessarily correlate with their biological age. Several risk factors pertain to elderly hemodialysis patients aged ≥80 years, including body mass index, late referral to a nephrologist, poor performance status, presence of peripheral vascular disease [17], older age, acute congestive heart failure, any walking impairment, and hemoglobin level (<10 g/dL) [18]. Establishing a standard risk-associated classification system for planning dialysis initiation may help clinicians in treatment decision-making. In the multivariate Cox analysis, independent predictors of overall survival were ECOG-PS ≥1 and hemoglobin level ≤9.55 g/dL. Based on these risk criteria, survival rates were significantly lower in the intermediate- and high-risk groups.

Because of technological advancements in dialysis treatment, old age is no longer considered a contraindication in most industrialized countries [15-18, 21, 22]. Recent studies suggested that dialysis provides a survival benefit compared with conservative management for patients with stage 4-5 chronic kidney disease over the age of 75 years [23-25]. However, dialysis may or may not offer a substantial prolongation of life expectancy with an acceptable quality of life (QOL) among elderly patients. QOL is very important for older patients for whom renal transplantation is an unlikely option. However, very few studies have addressed QOL issues in elderly patients with end-stage renal disease because of the controversial nature of this decision [19,21,26]. Lamping et al. reported no significant differences in QOL scores between elderly dialysis patients and elderly individuals in the general population in the UK and USA [26]. In contrast, Tamura et al. reported an association between dialysis initiation and a substantial and sustained decline in functional status among nursing home residents with end-stage renal disease [27]. Carson et al. demonstrated prolonged survival for elderly patients (≥70 years of age) on dialysis compared with those treated conservatively; however, survival time in conservatively treated patients may be substantial, considering that a similar number of hospital-free days was recorded for both groups [19]. The benefits and disadvantages of dialysis in elderly patients and its effects on QOL continue to be debated. The particular needs and traditional customs of the populations in each individual country or region must be considered. In addition, because randomized controlled trials comparing outcomes and QOL in patients receiving renal replacement therapy with those treated conservatively are not feasible, observational studies remain the only means by which treatment methods can be compared.

The present study has several important limitations, including its retrospective nature, small sample size, and the inclusion of patients from a single institution. We could not address the total dose of erythropoiesis-stimulating agents or its influence; there might be an association between the dose of erythropoiesis-stimulating agents and patient death, as greater use of erythropoiesis-stimulating agents in poor responders, or excessively high hemoglobin levels, may be linked to poor prognosis. Therefore, the results of this study cannot be generalized.
In addition, the primary question as to who will receive survival and comprehensive benefits may remain unanswered. In elderly patients undergoing hemodialysis, details of patient characteristics such as late referral, presence of peripheral vascular disease, and frailty, as well as information regarding QOL, were lacking and must therefore be considered in future studies. Despite these limitations, the present study provides some useful information. Shortening of life expectancy was investigated using the Japanese National Vital Statistics survey database for 2008. Because medical and insurance systems differ among countries, decision making regarding dialysis initiation and its indications may also differ. Because Japan is a country with high life expectancy [4,20], life expectancy in elderly patients should be calculated using the life expectancy tables of each individual country or region. The present study is the first to investigate the shortening of life expectancy in Japanese hemodialysis patients aged ≥80 years.

Conclusion
In conclusion, the results of this study shed light on the prognosis of elderly dialysis patients in Japan, whose numbers are increasing. Our observations suggest that old age should no longer be considered a contraindication, and that hemodialysis initiation is acceptable for very elderly patients with end-stage renal disease when risk factors are taken into account. Further clinical study is required to determine the most appropriate treatment for elderly patients with end-stage renal failure. A new evaluation system is needed to aid decision making between conservative and renal replacement therapy, taking into account comorbidities, health status, and patient preferences in the elderly Japanese population.
HOTRUNZ: an open-access 1 km resolution monthly 1910–2019 time series of interpolated temperature and rainfall grids with associated uncertainty for New Zealand

Long time series of temperature and rainfall grids are fundamental to understanding how these environmental variables affect environmental or ecological patterns and processes such as plant distributions, plant and animal phenology, wildfires, and hydrology. Ideally such temperature and rainfall grids are openly available and associated with uncertainties so that data-quality issues are transparent to users. We present a History of Open Temperature and Rainfall with Uncertainty in New Zealand (HOTRUNZ) that uses climatologically aided natural neighbour interpolation to provide monthly 1 km resolution grids of total rainfall, mean air temperature, mean daily maximum air temperature, and mean daily minimum air temperature across New Zealand from 1910 to 2019. HOTRUNZ matches the best available temporal extent and spatial resolution of any open-access temperature and rainfall grids that include New Zealand and is unique in providing associated spatial uncertainty in the variables' units. The HOTRUNZ grids capture the dynamic spatial and temporal nature of monthly temperature and rainfall and the uncertainties associated with the interpolation. We also demonstrate how to quantify and visualise temporal trends across New Zealand that recognise the temporal and spatial variation in uncertainties in the HOTRUNZ data. The HOTRUNZ data are openly available at https://doi.org/10.7931/zmvz-xf30.

Introduction
Climatologies such as WorldClim 2 (Fick and Hijmans, 2017) and CHELSA (Karger et al., 2017) that provide spatial grids (or layers, surfaces) consisting of cells (or grid points, pixels) that record climatic variables such as temperature and rainfall underpin thousands of environmental and ecological studies. These climatologies represent long-term ≈ 30-year climate averages, but long time series of temperature and rainfall grids, each of which covers a shorter period, are also highly desirable, as they can be correlated with long-term environmental or ecological data to explore how temperature and rainfall influence such processes. For example, monthly time series of temperature and rainfall conditions can be used to improve the understanding of plant distributions (Stewart et al., 2021), plant and animal phenology (Gordo and Sanz, 2005), wildfire histories (Girardin and Wotton, 2009), and hydrology (Remesan et al., 2014).

Meteorologists and climatologists are recognising the importance of implementing open-access methodologies, as openly sharing data, source code, and knowledge provides exciting opportunities for scientific discovery (de Vos et al., 2020). For New Zealand there are limited time-series options for national-scale temperature and rainfall grids (Table 1), and while each varies in its spatial and temporal characteristics, no one open-access dataset has the combined criteria of ≤ 1 km² spatial and monthly temporal resolution over the last century. Producing such historical temperature and rainfall data is challenging because weather station data are often sparse, especially in remote areas and for earlier time periods. However, this challenge should not preclude the creation of historical data, if we accept that we cannot be consistently successful for all locations and dates and we prioritise providing temperature and rainfall data with associated uncertainty data.
Much emphasis has been placed on quantifying (Foley, 2010) and visualising (Retchless and Brewer, 2016) the uncertainty of future climates. Quantifying and visualising uncertainty are also critical for the judicious use of historic climate and weather data, as even when following open science practices that involve sharing data and code it can still be difficult to ensure that an end-user is aware of and understands the quality of the data (de Vos et al., 2020). Historical climate and weather data are often used predictively, but these predictions must recognise the underlying uncertainties so that their reliability can be understood and communicated. The uncertainty (or reliability) of weather and climate spatial data is often quantified with metrics such as the root-mean-square error (RMSE) or the mean absolute error (MAE) derived from cross-validation that iteratively excludes each weather station and then interpolates a value for the weather station using the remaining data (Willmott and Matsuura, 2006). However, these are global metrics that describe uncertainty with a single spatially averaged value and provide no information about how the uncertainty varies across space (Zhang and Goodchild, 2002). When estimates of spatial uncertainty accompany climate and weather data, they typically provide only simple indications of areas that are reliable (Abatzoglou et al., 2018; Harris et al., 2020b), which limits how uncertainty can be incorporated into analyses, as estimates from individual cells are not matched with an uncertainty in the variables' units.

To facilitate understanding of how long-term temperature and rainfall patterns have been changing and potentially affecting environmental and ecological processes in New Zealand, we use climatologically aided interpolation to produce a History of Open Temperature and Rainfall with Uncertainty in New Zealand (HOTRUNZ). HOTRUNZ is an openly available history of monthly temperature and rainfall data for 1 km² grid cells across New Zealand that matches the best available spatial and temporal extent and resolution of any currently available open data and is unique in providing associated spatial uncertainty grids in each variable's units (Table 1).

Weather and climatology data
Precomputed monthly statistics calculated from weather station data are freely available from New Zealand's National Climate Database (NIWA, 2020). For New Zealand's three main islands and their associated near-shore islands, we queried the database for data from 1900 to 2019 for monthly statistics of total rainfall, mean air temperature, mean daily maximum air temperature, and mean daily minimum air temperature. Using older weather station data can be problematic, as the manual nature of the recording can lead to gaps in the records over time, and locations can be poorly recorded. Therefore, to ensure a high quality of weather station data, we only used monthly statistics for weather stations that were recorded in the database as having a complete set of daily records for every day of the month and hence did not contain any estimated values. We also only used data from weather stations whose locations were recorded in the database as being reliable to within 100-200 m at worst and that could therefore be reliably located within a 1 km² grid cell. Having applied these filters to the weather station database, we had data from 3438 weather stations across New Zealand, but most of these stations were from lower elevational areas, with 89 % below 500 m (Fig. 1).
This lack of weather station data at higher elevations has been noted before and is in part why interpolations of temperature and rainfall data in New Zealand are usually less accurate at higher elevations (Tait and Macara, 2014; Tait et al., 2012). To recognise the challenges of interpolating temperature and rainfall at higher elevations, we follow Tait et al. (2012) and evaluate the accuracy of our data for locations below and above 500 m elevation (Fig. 1). Further complexity is added by the fact that many of the weather stations were short-lived or provided intermittent records, so the amount of data available in any given month varied through time, across space, and between variables, but there were some consistencies. There were much more rainfall data than temperature data, there were more data in recent times (with a peak around 1980), and there were always fewer data in the mountainous interiors of both islands and in the more remote southerly and westerly regions of New Zealand (Fig. 2).

Climatologies that describe average weather patterns over several decades have underpinned successful previous rainfall interpolations in New Zealand (Tait et al., 2006). Therefore, we used a climatologically aided interpolation approach, interpolating monthly values as an anomaly from a climatic normal (Willmott and Robeson, 1995), as this technique has been successfully applied elsewhere (Abatzoglou et al., 2018; Harris et al., 2020b; Hofstra et al., 2008). The basic premise of climatologically aided interpolation is that rather than directly interpolating weather station data in each month, an underlying long-term climatology grid for that month is used, and monthly anomalies are calculated as the difference between the weather station value and the climatology grid value. These anomalies are then interpolated and added to the climatology grid to estimate how the conditions in that month deviated from the climatological normal. When compared to the same interpolation method using just weather station data, climatologically aided interpolation is advantageous in that it (i) has reduced errors, (ii) has errors that are insensitive to changes in the number of weather stations, and (iii) can indirectly account for topographic effects when climatologies with high spatial resolution are used (Willmott and Robeson, 1995).

Openly available New Zealand climatology grids for the rainfall and temperature variables, giving the average conditions over the 30-year period from 1950-1980 for each month at 100 m grid cell resolution (McCarthy et al., 2021; Leathwick et al., 2002), were used to produce 1 km² grid cell resolution climatologies as the basis of the climatologically aided interpolation. These climatologies were developed from long-term weather station data interpolated using thin-plate splines with geographic variables of elevation and, in the case of rainfall, an east-west topographic protection variable (Leathwick et al., 2002). Using these climatologies for our climatologically aided interpolations incorporates these geographic variables indirectly into our interpolations, and by having a climatology for each month, as opposed to a single climatology for a whole year, seasonal shifts in temperature and rainfall are also accounted for.
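The anomaly workflow just described is compact enough to sketch. In the toy example below, all grids, station locations, and observations are synthetic, and SciPy's linear griddata stands in for the natural neighbour interpolator actually used in HOTRUNZ (SciPy does not provide natural neighbour interpolation), with nearest-neighbour filling outside the station convex hull.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
ny, nx = 50, 60
yy, xx = np.mgrid[0:ny, 0:nx]

# Hypothetical monthly climatology grid (e.g. mean May rainfall in mm).
clim = 100.0 + 2.0 * yy                       # a simple wet-to-dry gradient

# Hypothetical weather stations: grid indices plus observed monthly totals.
st = rng.integers(0, [ny, nx], size=(40, 2))
obs = clim[st[:, 0], st[:, 1]] + rng.normal(0.0, 15.0, size=40)

# 1. Anomaly = observation minus the climatology value at the station cell.
anom = obs - clim[st[:, 0], st[:, 1]]

# 2. Interpolate the anomalies across the grid.
field = griddata(st, anom, (yy, xx), method="linear")
fill = griddata(st, anom, (yy, xx), method="nearest")
field = np.where(np.isnan(field), fill, field)

# 3. Add the interpolated anomalies back onto the climatology.
month_estimate = clim + field                 # the climatologically aided grid
```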
Interpolation with uncertainty
We selected natural neighbour (or Sibson) interpolation (Sibson, 1981) to interpolate the anomalies. Natural neighbour interpolation has been shown to perform well for interpolating rainfall and temperature data (Hofstra et al., 2008; Keller et al., 2015; Lyra et al., 2018). More specifically, we chose natural neighbour interpolation because it is (i) an exact interpolator, meaning it retains the original data values at locations with input data and only interpolates within the range of the original data, so it cannot produce wildly unrealistic interpolations; (ii) a local method, which interpolates for a location using only data from that location's immediate surrounds; (iii) spatially adaptive, automatically adapting to localised data distribution and density; and (iv) not based on fitting statistical trends, so it does not require large sample sizes (Etherington, 2020). These properties are desirable given that our interpolations need to adapt to the increasingly sparse and irregular data that occur further back in New Zealand's history, which preclude more complex interpolation methods that require more data. For example, when interpolating at higher elevations where there are fewer data, simpler methods can perform better than more complex methods (Stahl et al., 2006). Therefore, in choosing natural neighbour interpolation we do not suggest that it is universally the best interpolation method, but rather that it was the most appropriate given our data situation and interpolation objectives.

We applied a discrete (or digital) form of natural neighbour interpolation (Park et al., 2006) that simultaneously calculates uncertainty as a cross-validation error-distance field (Etherington, 2020). This interpolation method works by defining a grid of cells for which interpolated values will be calculated. Data cells are first defined as those cells that contain weather station data. Where there are multiple weather stations within a single data cell, the data cell is given the mean value of the weather stations it contains; therefore, with discrete natural neighbour interpolation it is not the number of weather stations but rather the number of data cells that determines the amount of data available for the interpolation. The number of data cells and their spatial pattern, as measured by mean nearest-neighbour distance (Clark and Evans, 1954), varied over time (Fig. 3). At elevations <500 m the number of data cells for all variables increases steadily over time, with a peak around the 1980s followed by a decline that is continual for rainfall but reversed for all temperature variables. While the number of data cells varies, the mean nearest-neighbour distance is reasonably constant over time, indicating that spatial coverage by the weather station network has been consistent despite variations in the number of data locations. Spatial coverage does decrease moving back through time, and coverage was particularly poor for the temperature variables pre-1910. At elevations ≥ 500 m there are far fewer data cells at all points in time, and pre-1910 there were usually no data cells available for the temperature variables. However, the mean nearest-neighbour distance of the data cells at or above 500 m is similar to that of the data cells below 500 m.
This difference in the number of data cells but consistency in spatial pattern between the elevational regions suggests that data cells at or above 500 m tend to occur either along the edge of the region with elevations ≥ 500 m or clustered within it, which is consistent with the general pattern of all weather stations providing data (Fig. 1). Overall, these patterns in data cell numbers and spatial pattern indicate a need for caution in our interpolations in regions ≥ 500 m elevation and pre-1910.

Once the data cells for each month are established, the discrete natural neighbour interpolation proceeds by assigning all other cells the value of the nearest data cell (Fig. 4a). The interpolated value for a grid cell is the mean of the grid cell values that are as close or closer to the interpolation cell than a data cell (Fig. 4b), which, when repeated across all grid cells, yields a smooth interpolation grid (Fig. 4c).

A cross-validation error-distance field (Etherington, 2020) is then used to quantify the interpolation uncertainty. In essence, uncertainty increases with distance from a data cell, with zero uncertainty at the data cells themselves, where the underlying value is known and does not require interpolation. The rate at which uncertainty increases with distance from data cells is based on a cross-validation of the data cells that calculates the absolute interpolation error and the distance to other data cells. This approach ultimately results in areas having greater uncertainty when they are more distant from data cells and are near data cells that would be harder to interpolate accurately when absent.

Figure 2. A time series of the total number of monthly weather records for each year within 25 km × 25 km grid cells for (a) total rainfall and (b) mean air temperature (which are similar to mean daily maximum air temperature and mean daily minimum air temperature). Cells with ≥ 12 records will usually mean at least one weather station is present with records for all 12 months of the year, but in some instances many weather stations may be present, resulting in tens or hundreds of records per grid cell.

Figure 3. Monthly time series of the number of data cells (which relates closely to the number of weather stations) and the mean nearest-neighbour distance of data cells contributing to climatologically aided natural neighbour interpolation across New Zealand at <500 m elevation and ≥ 500 m elevation for (a) total rainfall, (b) mean air temperature, (c) mean daily minimum air temperature, and (d) mean daily maximum air temperature. Missing data indicate an absence of data cells in an elevational category. The grey areas show the period over which the climatologies used to aid interpolation apply, and the dashed lines indicate the temporal limit of reliable data.

Uncertainty is calculated using the discrete natural neighbour interpolation process to interpolate the distances to the data cells to produce natural neighbour distances (Fig. 4d) and to interpolate the cross-validated error rate, as the ratio of the cross-validated absolute error to natural neighbour distance, for each data cell (Fig. 4e). The product of the natural neighbour distances and cross-validated error rates produces a cross-validation error-distance field (Fig. 4f) that yields a grid of interpolation uncertainties in the same units as the interpolation and is highest in areas that are more distant from data cells and in areas where the variable being interpolated has higher spatial heterogeneity and is therefore harder to interpolate accurately when data are sparse (Etherington, 2020).
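As one concrete reading of the two-step scheme just described (Fig. 4a-c), the brute-force sketch below implements discrete natural neighbour interpolation on a toy grid; the grid size, data-cell locations, and values are hypothetical, and a production implementation would follow the efficient raster approach of Park et al. (2006) rather than this O(n²) loop. The error-distance field (Fig. 4d-f) can be built with the same routine by interpolating the data-cell distances and cross-validated error rates and taking their product.

```python
import numpy as np

def discrete_nn(grid_shape, data_rc, data_vals):
    """Toy discrete natural neighbour interpolation (after Park et al., 2006).

    Step (a): every cell takes the value of its nearest data cell.
    Step (b): the interpolation at cell i is the mean of those values over all
    cells at least as close to i as i's own nearest data cell.
    Exact at the data cells; brute force, so suitable for toy grids only.
    """
    rows, cols = np.indices(grid_shape)
    cells = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    d = np.linalg.norm(cells[:, None, :] - data_rc[None, :, :], axis=2)
    nearest_dist = d.min(axis=1)                 # distance to nearest data cell
    assigned = data_vals[d.argmin(axis=1)]       # step (a)

    out = np.empty(len(cells))
    for i, c in enumerate(cells):
        di = np.linalg.norm(cells - c, axis=1)
        out[i] = assigned[di <= nearest_dist[i]].mean()   # step (b)
    return out.reshape(grid_shape)

# Hypothetical 20 x 20 anomaly grid with five data cells.
rc = np.array([[3, 4], [5, 15], [10, 8], [16, 3], [17, 17]], dtype=float)
vals = np.array([1.2, -0.5, 0.3, 2.0, -1.1])
anomaly_grid = discrete_nn((20, 20), rc, vals)
```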
Figure 4. A hypothetical example to illustrate discrete natural neighbour interpolation with a cross-validation error-distance field. Beginning with a set of data cells d_i, shown as squares, a grid of cells is defined and (a) each cell is given the value of the nearest data cell d, (b) for any interpolation cell i the interpolated value is the mean of cell values that are as close or closer to the interpolation cell than a data cell, which (c) when repeated for all grid cells produces a natural neighbour interpolation. Natural neighbour interpolation is also used to interpolate (d) the distances to the data cells and (e) the cross-validated error rate of each data cell. The interpolation uncertainty is then (f) the cross-validation error-distance field that is the product of the natural neighbour distances and error rates (adapted from Etherington, 2020, https://creativecommons.org/licenses/by/4.0/, last access: 16 February 2021).

The resulting temperature and rainfall grids appeared to capture the heterogeneous and dynamic nature of the monthly temperature and rainfall variables and the uncertainty associated with the interpolation. For example, the May total rainfalls and uncertainties (Fig. 3) illustrate the dynamic nature of monthly rainfall and the associated uncertainty. There are clear differences in total rainfall in May over time and shifts in the location of the wettest regions. Our emphasis on quantifying uncertainty is justified by the magnitude and location of uncertainty changing through time. This emphasises our view that global error estimates for large areas and timeframes are potentially unhelpful, as the degree of uncertainty can change rapidly over space and time, and so all interpolation estimates should be associated with an individual matching uncertainty estimate. We interpret this rapidly changing uncertainty as resulting primarily from the data limitations of interpolation, as higher uncertainty occurs in locations more distant from temperature or rainfall data whose location and abundance change over space and time (Figs. 2 and 3). However, uncertainty can be high in regions where temperature and rainfall data are available, which we interpret as arising from the spatial variability of individual monthly temperature and rainfall patterns: when the spatial variability of temperature and rainfall increases over shorter distances, such as in mountainous terrain, interpolation becomes increasingly uncertain (Etherington, 2020).

Interpolation evaluation
The MAE provides a reasonable estimate of the actual error rates for discrete natural neighbour interpolation (Etherington, 2020). Therefore, even though each monthly temperature and rainfall grid has a matching uncertainty grid, we also used cross-validation to calculate the MAE for each monthly interpolation to validate the method and facilitate comparisons with other temperature and rainfall interpolations (Willmott and Matsuura, 2006). While the number of data cells (which equates closely to the number of weather stations) used during interpolation varied over time, the distance between data cells remained reasonably constant (Fig. 3).
This consistency of spacing between data cells may explain why the MAEs associated with all the temperature and rainfall variables remained reasonably constant: around 18 mm for total rainfall below 500 m and 24 mm for rainfall above 500 m, and around 0.5 °C for all temperature variables below 500 m and 0.7 °C for all temperature variables above 500 m (Fig. 6). As might be expected, MAEs were lowest when interpolating data during 1950-1980, which is the period for which the climatologies aiding the interpolation were created, and at lower elevations, for which there are more data (Fig. 3). There was also an increase in MAE for some variables moving away from the 1950-1980 climatology period that becomes more pronounced pre-1910 as weather station data availability becomes extremely limited, resulting in as few as 53 to 104 rainfall and 3 to 17 temperature data cells across New Zealand in each month, with no data cells for temperature variables above 500 m pre-1910. We conclude that the data are most reliable from 1910 to 2019, representing the 4 decades either side of the 1950-1980 climatologies used in the interpolations. Similar patterns to the MAE are seen using the RMSE (Fig. S1 in the Supplement), and actual and estimated values are strongly positively correlated (Fig. S2); thus, while these metrics are sensitive to outliers, we include them as additional information given their use in other temperature and rainfall interpolation evaluations (Hofstra et al., 2008).

We used the same cross-validation MAE approach to evaluate how well the estimated uncertainty matched the cross-validated absolute errors (Fig. 7). Again, performance was best during the 1950-1980 climatology period, with some variables showing more pronounced decreases in performance moving away from this time period. The uncertainty error was around 28 mm for total rainfall below 500 m and 35 mm for rainfall above 500 m, but there was a less pronounced difference between elevations for temperature, with all temperature variables at both elevations having an error of around 1 °C. Similar patterns to the MAE are seen using the RMSE (Fig. S3), with a positive relationship between actual errors and estimated uncertainties (Fig. S4).

Using the uncertainty data
When using interpolated temperature and rainfall data, users need to be able to make decisions specific to their requirements (de Vos et al., 2020). Therefore, in HOTRUNZ we have endeavoured to match every interpolation with an individual measure of uncertainty in the relevant units. However, we recognise that it is unusual for historical temperature and rainfall data to be presented alongside such detailed uncertainty estimates. These uncertainty estimates provide a new and possibly challenging analytical opportunity; therefore we demonstrate how potential data users could incorporate the uncertainty data into an analytical workflow and visualise the results. A simple example of temperature and rainfall time-series analysis would be to use a Spearman rank (r_s) correlation (Gregory, 1978; Spearman, 1904) to detect the directionality of any trends in temperature and rainfall patterns over time (Girardin and Wotton, 2009). To incorporate uncertainty into this process, we adopt a Monte Carlo approach in which we produce many equally possible temperature and rainfall histories by randomly sampling each month's temperature and rainfall as a random value from a probability distribution.
For our example analysis we simply use a uniform distribution with a range equal to the interpolation uncertainty (limiting rainfall to a minimum of 0 mm). Many trends can then be calculated, with their distribution used to infer the reliability of the analysis. For a location with high uncertainty, the possible temperature and rainfall histories can vary widely around the single interpolated history, resulting in a wide distribution of possible trends (Fig. 8a). In other instances, the trend may be stronger than the uncertainty, meaning that while the strength of the trend varies, its direction can be clearly established (Fig. 8b). At locations where there is little uncertainty, and hence minimal variation in possible histories, the trend can be established precisely (Fig. 8c).

If this trend analysis with uncertainty is repeated for every location, spatial trends can be analysed. The challenge is then how to visualise the spatial pattern of uncertainty. Based on guidance relating to the cartographic visualisation of uncertainty (Kaye et al., 2012; Retchless and Brewer, 2016), we selected a diverging colour scheme to show trends as the median r_s of the possible temperature and rainfall histories. To mask locations of increasing uncertainty, we used a value-by-alpha approach (Roth et al., 2010) that overlays a black mask that is increasingly opaque in locations of increasing uncertainty, measured as the 5th to 95th percentile range of the r_s of the possible histories. The resulting map indicates that in some regions there are clear trends in temperature and rainfall patterns over time, but in other regions the uncertainty is too large to reliably make inferences (Fig. 9). Areas of high uncertainty are not randomly distributed and are instead concentrated in areas with higher elevations and those with lower or more distant data availability (Fig. 1). However, the exceptions to these general patterns again highlight the importance of providing specific uncertainty data for all interpolations. So, while our evaluation indicates that areas ≥ 500 m elevation are generally less accurate than areas <500 m elevation (Fig. 6), users of HOTRUNZ should refer to the individual uncertainty data associated with their specific locations of interest, because general rules may not always apply.

The results of our trend analysis (Fig. 9) clearly demonstrate that it is possible to produce a long-term history of interpolated temperature and rainfall, and they emphasise the importance of quantifying the uncertainty of all interpolations. Of course, the approach used here simply illustrates the potential of the uncertainty data and should not be interpreted as the best or only analytical approach. It is obviously impossible for us to give guidance on how to incorporate uncertainty into all possible applications, but the Monte Carlo approach we present could be adapted to many situations. For example, the trend analysis shown here could be extended by users interested in comparing long-term environmental data to weather data to identify associations between an environmental process and temperature and rainfall (Girardin and Wotton, 2009).
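As a minimal sketch of the Monte Carlo trend analysis described above, the code below draws 100 possible histories for a single grid cell and summarises the resulting Spearman correlations. The December series and its uncertainties are synthetic stand-ins for HOTRUNZ values, and sampling uniformly within ± the uncertainty value is our reading of "a range equal to the interpolation uncertainty".

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
years = np.arange(1910, 2020)

# Synthetic stand-ins for one cell's interpolated December rainfall (mm) and
# its matching uncertainty grid values (mm); real use would read HOTRUNZ grids.
interp = rng.normal(120.0, 20.0, size=years.size)
uncert = rng.uniform(5.0, 40.0, size=years.size)

n_histories = 100
rs = np.empty(n_histories)
for k in range(n_histories):
    # One equally possible history: uniform within the uncertainty, floored at 0 mm.
    history = np.clip(interp + rng.uniform(-uncert, uncert), 0.0, None)
    rs[k], _ = spearmanr(years, history)

trend = np.median(rs)                                   # mapped with the diverging scheme
spread = np.percentile(rs, 95) - np.percentile(rs, 5)   # drives the value-by-alpha mask
print(round(trend, 3), round(spread, 3))
```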
(Figure 9. Spatial uncertainty analysis for the summer months (December, January, and February) trends in (a) mean maximum temperature and (b) total rainfall across New Zealand. Trends were calculated as a Spearman rank (r_s) correlation between temperature or rainfall and time from 1910 to 2019 for 100 possible histories that were within the range of uncertainty for the interpolated values for each month. The median r_s value is visualised as a diverging colour scheme indicating positive or negative trends in temperature and rainfall. Uncertainty is visualised by the 5th to 95th percentile range of r_s values, such that darker shades of each colour indicate areas with greater uncertainty.)

Limitations and future recommendations

We have stressed the importance of quantifying uncertainty when interpolating rainfall and temperature variables and have applied a novel approach that matches every interpolation estimate with an associated measure of uncertainty in the variables' units. However, while uncertainty in geographical information results from a combination of geographical abstraction, data acquisition, and geoprocessing (Zhang and Goodchild, 2002), our measure of uncertainty only captures the uncertainty associated with geoprocessing. Therefore, while the quantified uncertainty should alert potential users to locations where the interpolation is likely to be less reliable, potential users will still need to apply their own assessment regarding uncertainty associated with geographical abstraction and data acquisition. For example, we have used a geographical abstraction based upon 1 km² grid cells, but for some applications this abstraction may be too coarse, creating uncertainty about patterns at finer scales. Similarly, while we deliberately excluded unreliable weather station data, the amount of data varies through space and time, and uncertainty associated with interpolations will generally be higher in locations where there are fewer data and the spatial pattern of the variable being interpolated is more complex (Etherington, 2020).

While temperature and rainfall are key environmental variables influencing many ecological and physical processes, other variables could similarly be explored for different contexts. Climatologies also exist for solar radiation, humidity, pressure deficit, and wind speed (McCarthy et al., 2021), for which matching monthly statistics from weather station data are available (NIWA, 2020); expanding the variable coverage to include these variables could be a useful addition to any future version of the dataset.

Some aspects of the temporal and spatial scales of HOTRUNZ could be improved in subsequent refinements of this dataset. One limitation is that the monthly temporal resolution does not capture extreme but short-duration weather events. For example, in July of 1996 there was an extreme cold snap that had significant effects on vegetation in southern New Zealand (Bannister, 2003) but is not evident in our data, which average minimum temperatures over the whole month. We only had access to monthly climatologies on which to base our interpolations, but if openly available weekly, or even daily, climatologies were created then it would be possible to interpolate historical weather at a finer temporal resolution to better capture extreme weather events of short durations. While improving temporal resolution would require the creation of new climatologies, the spatial resolution could be improved up to 100 m with the climatologies used here.
This improvement could be beneficial in the mountainous areas of New Zealand, where temperatures can vary considerably within the 1 km resolution of our grids. Future improvements in temporal and spatial resolution would benefit from a more efficient computational workflow. Our computational workflow limited our processing to a 1 km resolution, but future versions could either leverage high-performance computing or, to maintain a high degree of openness, continue to use desktop computing but with discrete natural neighbour interpolation leveraging the power of graphics processing units, which are well suited to this method (Park et al., 2006).

We believe there is little point in extending the temporal extent of the dataset. The growth of MAE associated with the reduction of temperature and rainfall data pre-1910 (Fig. 4) indicates the available weather station data are insufficient to reliably estimate historical temperature and rainfall using our approach before the 20th century. While there are sources of additional temperature and rainfall data held in archives (Lorrey and Chappell, 2016), palaeo-environmental techniques may provide a better option for longer-term temperature and rainfall information (Cook et al., 2006; Duncan et al., 2010). There was a subtle increase in MAE towards the present that may be a function of a reduction in weather station data, particularly rainfall data, and perhaps a growing temporal mismatch between recent conditions and the 1950-1980 climatologies used to aid the interpolation (Fig. 6). This indicates that continuing to use interpolation methods may become less feasible in the future. More recent climatologies could be produced to provide more temporally relevant climate data, but satellite data may provide a more useful data source for more recent temperature and rainfall history (Funk et al., 2015). We believe our interpolation approach is most valid for the pre-satellite era and that, by producing data that span part of the satellite era, we provide a useful overlap for comparative purposes that could allow for a transition between historical temperature and rainfall data sources.

Code availability. The code was written in Python using the SciPy (Virtanen et al., 2020), numba (https://numba.pydata.org/, last access: 7 June 2022; Lam et al., 2015), and Matplotlib (https://matplotlib.org/, last access: 17 June 2022; Hunter, 2007) packages. All the resulting code used to process data and plot figures presented here (Etherington, 2021) is openly available under an MIT Licence from the Manaaki Whenua - Landcare Research DataStore: https://doi.org/10.7931/yk7g-vz81.

Data availability. The resulting HOTRUNZ data are openly available in non-proprietary file formats under a Creative Commons Attribution 4.0 Licence and are archived at the Manaaki Whenua - Landcare Research DataStore: https://doi.org/10.7931/zmvz-xf30.

Conclusions

We present HOTRUNZ as a, rather than the, history of New Zealand temperature and rainfall; it would be possible to repeat the process using equally defensible quantitative methods and obtain different results. Likewise, changes in the spatial or temporal resolution will result in different patterns; however, there is no single best resolution, as this will vary depending on the desired application.
Nevertheless, as HOTRUNZ matches the highest available spatial and temporal extent and resolution of any currently available open-access grids (Table 1), we believe that in creating HOTRUNZ we have significantly improved the ability of environmental and ecological scientists in New Zealand to understand how changing temperature and rainfall patterns have affected various environmental and ecological processes. Even with the spatially and temporally complex patterns of uncertainty, which are sometimes large, it is still possible to find consistent trends in temperature and rainfall (Fig. 9). We hope our efforts to produce interpolation estimates with associated uncertainty, and examples of how to build that uncertainty into any analyses, will encourage the quantification and visualisation of uncertainty in weather and climate datasets elsewhere.

Author contributions. TRE, GLWP, and JMW conceived and developed the idea. TRE developed the code and performed the data processing. TRE, GLWP, and JMW wrote the manuscript.

Competing interests. The contact author has declared that neither they nor their co-authors have any competing interests.

Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Financial support. This work was supported by the Strategic Science Investment Funding for Crown Research Institutes from the New Zealand Ministry of Business, Innovation and Employment's Science and Innovation Group and The University of Auckland Faculty Research Development Fund (grant no. 3702237).

Review statement. This paper was edited by Alexander Gruber and reviewed by three anonymous referees.
Hyperintense Brain Lesions in Asymptomatic Low Risk Patients with Paroxysmal Atrial Fibrillation Undergoing Radiofrequency Pulmonary Vein Isolation

Background: The aim was to determine the occurrence, consequences and risk factors for brain white matter hyperintensities (WMH) assessed in magnetic resonance imaging (MRI) in low-risk patients with paroxysmal atrial fibrillation (AF) undergoing radiofrequency pulmonary vein isolation (PVI-RF).

Methods: 74 patients with AF (median 58.5 years (IQR 50-63), 45 male) were included. Before, and a minimum of 6 months after, PVI-RF, a brain MRI and a mini-mental state examination (MMSE) were performed.

Results: Baseline WMH lesions were found in 55 (74.3%) patients and in 48 of 62 (77.4%) patients after PVI-RF. The WMH lesions were more frequent among older patients and those with a higher CHA2DS2-Vasc score (C - congestive heart failure/LV dysfunction, H - hypertension, A - age, D - diabetes mellitus, S - stroke, V - vascular disease, Sc - sex category). Factors affecting the severity of the WMH were older age and the co-existence of PFO and coronary artery disease (CAD). After the follow-up period, the factors predisposing to the occurrence of brain WMH lesions (age, higher BMI and CHA2DS2-Vasc score) and to more advanced changes (age, higher CHA2DS2-Vasc score, CAD, PFO) were identified.

Conclusions: The presence and severity of cerebral microembolism are associated with age, a higher CHA2DS2-Vasc score and the coexistence of PFO and CAD. The PVI-RF procedure and its efficacy do not influence MRI lesions. In this population, cerebral microembolism is not related to cognitive impairment.

Introduction

Atrial fibrillation (AF) leads to the formation of microemboli embolizing the cerebral microcirculation [1]. Because patients may be asymptomatic for a long time, these changes are considered clinically silent. Chronic cerebral microembolism associated with AF and hypoperfusion of the cerebral circulation gradually lead to the development of clinically silent cerebral ischemia (SCI), which may be the basis for the development of neuropsychological deficits and may even lead to dementia. SCI is a manifestation of small vessel disease and, alongside TIA and stroke, belongs to the spectrum of cerebrovascular diseases [2]. Importantly, the clinical significance of SCI is highlighted by the fact that its incidence is up to 10 times higher than that of stroke [3]. Until now, brain lesions have been consistently associated with small vessel disease, a higher incidence of dementia, and impaired global cognitive function [4-6]. Therefore, small vessel disease based on structural changes such as silent brain infarcts, white matter hyperintensities (WMH), and brain micro-bleeding can be a key link between AF and cognitive disorders, especially in neurologically asymptomatic patients. The importance of AF in the context of brain changes and cognitive impairment should be considered mainly with regard to SCI. Ischemia occurs through microembolism of small cerebral vessels by thrombi formed during turbulent blood flow in the left atrium (LA). The classification of WMH-type brain lesions found on magnetic resonance imaging (MRI), proposed by Fazekas et al. in 1987 and reproduced by many subsequent researchers [7-9], has become common. In clinical practice, visual scales are preferentially used to assess WMH of the brain, taking into account the shape, severity or location of hyperintense brain lesions, with the scale proposed by Fazekas et al. considered the reference.
The modified Fazekas scale is adequate to describe the intensity and extent of the WMH foci seen in the MRI study [10]. These changes are often observed in patients with AF, but the relationship between them and AF is not clear, especially in young people. The mere presence of WMH lesions on brain MRI is potentially related to AF and hypertension. Comorbid risk factors for vascular diseases, such as older age, obesity, hyperlipidemia, diabetes, lower physical activity and a generalized inflammatory response, can also play an important role [11-16]. These changes may be associated with observed neuropsychological, motor and mood disorders. The potential role of AF in cognitive impairment and dementia due to cerebral microembolism (imaged as WMH changes on brain MRI) has become of recent interest. Assessment of the patient's overall neurocognitive function can be performed using the mini-mental state examination (MMSE), the most common and simple tool designed to test a wide range of cognitive functions. The scale can also be helpful in rapid screening to exclude dementia.

Embolic complications leading to clinically significant stroke represent a small percentage of all complications associated with percutaneous pulmonary vein isolation (PVI). According to available data from large international registries, the risk of periprocedural stroke is 0.5%-1% [17-21]. However, with regard to clinically silent microembolic changes, the risk of their occurrence after PVI has been shown to be high, depending on the source of energy used. The frequency of reported ischemic lesions on MRI of the head the day after ablation ranges from 4.3% to 50% [22-27]. In the case of percutaneous PVI procedures using open-irrigated radiofrequency pulmonary vein isolation (PVI-RF), microembolic complications after the procedure were observed in 5%-18% of cases [23-29]. In turn, after duty-cycled phased PVI-RF procedures, the reported incidence of cerebral ischemic foci in MRI studies is significantly higher, from 37.5% to 38.9% [22,23]. Most brain imaging examinations for peri-procedural silent ischemic lesions were performed 24-48 h after ablation. However, there is still a lack of reliable data on the long-term effects of the PVI-RF procedure in relation to the presence of cerebral ischemic foci and their potential impact on cognitive function in relation to the effectiveness of the procedure.

The aims of the study were:
1. To determine the occurrence, consequences and risk factors for brain white matter hyperintensities (WMH) assessed in magnetic resonance imaging (MRI) in low-risk patients undergoing PVI-RF.
2. To determine risk factors for brain WMH lesions assessed on brain MRI in low-risk patients before and after PVI-RF.
3. To determine the impact of the PVI-RF procedure on the occurrence and severity of WMH lesions assessed on brain MRI.
4. To assess a potential relationship of atrial fibrillation with cognitive decline, with particular relation to the impact of PVI-RF.

Materials and Methods

Seventy-four patients with diagnosed paroxysmal non-valvular AF, who were hospitalized in the Department of Cardiology between 2013 and 2017 in order to undergo PVI for the first time, were enrolled in the study. The study group was evaluated during hospitalization before the procedure, and a minimum of 6 months after the PVI-RF procedure the patients were invited for clinical re-evaluation.
The study group consisted of patients with a median age of 58.5 years (IQR (interquartile range) 50-63 years), with a predominance of males (60.8%). Patients with AF diagnosed within the previous 5 years, symptomatic disease (median EHRA class 3) and a relatively low CHA2DS2-Vasc score (median 2) predominated.

The standard inclusion criteria were: documented paroxysmal symptomatic non-valvular atrial fibrillation (EHRA IIb-IV) despite optimal treatment, with qualification for PVI; adequate warfarin treatment before admission; maintained sinus rhythm during hospitalization before enrollment; preserved LV systolic function (LV EF ≥ 50%); written informed consent; and age > 18 years. We excluded patients with: a history of arterial pathology (uni- or bilateral carotid artery, ascending aorta, or vertebral artery atherosclerotic stenosis (defined as arterial stenosis ≥ 50% according to NASCET (North American Symptomatic Carotid Endarterectomy Trial) criteria [11]) or dissection; Behçet disease; uni- or bilateral intracranial artery stenosis; a history of carotid angioplasty or endarterectomy; vasculitis), connective tissue disease, neuroborreliosis, multiple sclerosis, Sneddon syndrome, a history of stroke or transient ischemic attack (TIA), structural heart disease (cardiomyopathies, significant valvular heart disease), states connected with hypercoagulation or a predisposition to systemic embolism, a history of PVI, pregnancy, refusal to participate, acute kidney injury or chronic kidney disease with glomerular filtration rate (GFR) < 30 mL/min, and contraindications for MRI.

On admission, a detailed medical history was collected from each subject, including the current course of the disease, the main symptoms (classified by EHRA class), concomitant diseases (including coronary artery disease, type 2 diabetes, arterial hypertension, hyperlipidemia, peripheral artery disease), a familial history of arrhythmia, current pharmacotherapy (especially compliance with oral anticoagulants) and tobacco smoking. We also collected physical examination parameters: weight, height, body mass index (BMI) and body surface area (BSA). During hospitalization, laboratory tests, an MRI of the head, the MMSE test, 24-h ECG Holter monitoring, transthoracic and transesophageal echocardiography, as well as Doppler ultrasound of the extracranial carotid and vertebral arteries, were performed in all patients. After a minimum 6-month follow-up period, the medical history, including possible recurrence of arrhythmia and pharmacotherapy, was recorded. Furthermore, in all subjects we performed transthoracic echocardiography, MRI of the head, the MMSE test and 7-day ECG recording using the Holter method.

Magnetic Resonance Imaging

All participating patients underwent an MRI of the brain. The investigation was performed the day before the ablation using a 1.5-T scanner (Siemens Healthcare GmbH, Erlangen, Germany). MRI was performed using the standard sequences: T1, T2, FLAIR, SWI, DWI and 3D FLAIR. No contrast dye was used. All MRI images were analyzed independently by an experienced radiologist and a neurologist, both of whom were blinded to the clinical status of the patients. Brain WMH size and quantity were assessed using the Fazekas scale and divided into three degrees of severity:
Grade 1 - mild WMH were defined as punctate lesions with a maximum diameter of 9 mm for a single lesion and of 20 mm for grouped lesions.
Grade 2 - moderate WMH were early confluent lesions: single lesions of 10-20 mm and grouped lesions > 20 mm of any diameter, with only connecting bridges between the individual lesions.
Grade 3 - severe WMH were single lesions or confluent areas of hyperintensity ≥ 20 mm in diameter.

Mini-Mental State Examination Test

The MMSE test was performed in a secluded place using the version according to Folstein et al. [30], recommended by the Interdisciplinary Group of Experts in the Diagnosis of Dementia of the Psychogeriatrics and Alzheimer's Disease Section of the Polish Psychiatric Association. The MMSE test evaluated: orientation in time and place, recall, attention and calculation, language manipulation and constructional praxis. The maximum test score is 30, and a result of 27-30 is normal. A score of less than 24 is highly suggestive of the presence of cognitive decline or dementia, while a result of 24-26 suggests mild cognitive impairment.

ECG Holter Monitoring

Holter ECG recordings were made using Lifecard CF recorders (Spacelabs Healthcare, Snoqualmie, United States) and were analyzed using the Sentinel software (version 11, Del Mar Reynolds, Irvine, United States). Recordings were made during the day immediately preceding the PVI-RF procedure and during the control examination at least 6 months after the procedure. Recording after the observation period was carried out using the 7-day option, with ECG recording at home. The analysis of ECG records included an assessment of rhythm frequency (minimum, maximum and average), as well as supraventricular and ventricular arrhythmias. An episode of AF was defined as arrhythmia lasting > 30 s; two episodes of AF separated by sinus rhythm lasting < 30 s were considered one episode.

Transthoracic/Transesophageal Echocardiography and Carotid Ultrasound

On admission, ECG-gated transthoracic and transesophageal echocardiography, as well as a carotid artery Doppler ultrasound, were performed in all patients. An experienced physician took all of the measurements using the same investigation protocol and techniques in order to reduce inter- and intra-observer variability. The echocardiography investigation was performed using a GE Healthcare VIVID 7 Dimension (General Electric Medical Systems, Horten, Norway) with a 2.5 MHz sector ultrasound transducer for transthoracic echocardiography and a 2-7 MHz transducer for transesophageal echocardiography. Transesophageal echocardiography (TEE) assessed the patency of the foramen ovale (PFO) and the presence of thrombi and echogenic blood in the LA appendage (grade 1 to 3). Using pulsed Doppler, blood flow velocity in the LA appendage was measured. Patients with thrombi or grade 3 echogenic blood in the LA appendage were excluded. Extracranial artery Doppler ultrasound, with visualization of the common, internal and external carotid arteries and the vertebral arteries, was performed by an experienced ultrasonographer using a GE Healthcare VIVID 7 Dimension (General Electric Medical Systems, Horten, Norway) with a 10 MHz linear transducer, in order to exclude significant extracranial artery stenosis.

PVI-RF Procedure

Access to the pulmonary veins was obtained using the Seldinger technique. At the beginning of the procedure, rotational LA angiography (injection of contrast agent into the pulmonary artery during rapid right ventricular stimulation) was performed.
Then, after transseptal puncture, 3-dimensional electro-anatomical mapping was performed using the CARTO® 3 system (Biosense Webster, Diamond Bar, CA, USA). Pulmonary vein isolation was obtained by radiofrequency (RF) ablation with a ThermoCool® SmartTouch® catheter (Biosense Webster, Diamond Bar, CA, USA). The procedure was performed using a Lasso electrode (Biosense Webster, Diamond Bar, CA, USA) or an Achieve electrode (Medtronic, MN, USA). The procedure was performed with an international normalized ratio (INR), assessed on the day of ablation, ranging from subtherapeutic values to 2.5. Immediately after transseptal puncture, all patients received an intravenous bolus of unfractionated heparin (100 IU/kg), followed by a continuous infusion of unfractionated heparin (2000 IU/h) through a sheath inserted through the atrial septum to obtain an activated clotting time (ACT) > 300 s. During the procedure, the ACT was measured at 30-min intervals. In patients with an INR of less than 2 on the day of ablation, a 24-h continuous intravenous infusion of unfractionated heparin was administered after the procedure under the control of APTT (activated partial thromboplastin time). The next dose of vitamin K antagonists (VKA) was given 4 h after ablation to achieve an INR of 2.0 to 3.0 the next day.

Statistical Analysis

Statistical analysis was performed using STATISTICA software (version 13.1 PL, TIBCO Software Inc., Palo Alto, California, United States). All data were collected in a Microsoft Office Excel spreadsheet (version 2016 PL, Microsoft, Redmond, United States). A p value of less than 0.05 was considered statistically significant. Results for continuous variables are presented as the mean with standard deviation for normal distributions or the median with interquartile range for non-normal distributions. The normality of the distribution of continuous variables was verified with the Shapiro-Wilk test. The tables also present categorical variables as percentages. Depending on the distribution of variables, parametric or non-parametric tests were used for dependent and independent samples.

Results

Table 1 presents the basic parameters characterizing the study group, the results of the TEE examination, comorbidities and factors considered to be cardiovascular risk factors. PFO was found in 20 patients (27%). The study group was dominated by patients with absent or grade 1 echogenic blood. Patients with LA appendage thrombus were excluded from the study. In the study group, just over half of the patients had hyperlipidemia and hypertension.

(Table 1 note: Results are given as the mean with standard deviation for normal distributions, the median with interquartile range for non-normal distributions, or absolute numbers and percentages. AF - atrial fibrillation, APTT - activated partial thromboplastin time, BMI - body mass index, BSA - body surface area, INR - international normalized ratio, PFO - patent foramen ovale, PLT - blood platelets, y - years.)

WMH Lesions and Psychological Assessment before PVI-RF Treatment

There were no significant differences in the results of the MMSE test depending on the presence and severity of brain WMH lesions. Patients without pre-PVI-RF brain WMH lesions obtained a median of 30 points in the MMSE test (IQR 28-30), while those with lesions obtained 29 points (IQR 28-30) (p = 0.11). Table 2 presents MMSE test scoring depending on pre-PVI-RF WMH lesion severity.
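For readers reproducing this kind of analysis, the normality-driven choice between parametric and non-parametric tests described in the Statistical Analysis section can be sketched in a few lines. This is an illustration only: the authors used STATISTICA, not Python, and the function, variable names and numbers below are hypothetical.

    import numpy as np
    from scipy import stats

    def compare_groups(a, b, alpha=0.05):
        # Parametric t-test when both samples pass the Shapiro-Wilk normality
        # test; non-parametric Mann-Whitney U test otherwise.
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        if stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha:
            return "t-test", stats.ttest_ind(a, b).pvalue
        return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

    # Hypothetical MMSE-like scores for two independent groups.
    rng = np.random.default_rng(0)
    group_without_wmh = rng.normal(29.5, 0.8, size=19)
    group_with_wmh = rng.normal(28.8, 1.1, size=55)
    print(compare_groups(group_without_wmh, group_with_wmh))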
Depending on the pre-PVI-RF presence of any WMH lesions on brain MRI (lesions absent or present regardless of severity), patients differed in age and CHA2DS2-Vasc score. Patients with WMH lesions were older (60 (56-65) vs. 44 (38-56) years, p < 0.001) and scored higher on the CHA2DS2-Vasc scale (2 (1-3) vs. 1 (0-2), p = 0.009) when compared with patients without any WMH-type lesions. Multivariate logistic regression analysis was performed with the inclusion of the following pre-PVI-RF variables: CHA2DS2-Vasc score, AF duration, BMI, PFO presence, echogenic blood presence in the left atrial appendage, tobacco smoking and hyperlipidemia. CHA2DS2-Vasc was statistically significant (p = 0.02). In the second multivariate logistic regression model, the variables included in the CHA2DS2-Vasc scale were analyzed: arterial hypertension, age, diabetes mellitus, sex, coronary artery disease and peripheral artery disease. Patients with congestive heart failure were excluded from the study. Age was statistically significant (p < 0.001). Depending on the pre-procedural severity of WMH lesions on brain MRI (lesions absent or present with Fazekas scale grades 1-3), patients differed significantly in age, CHA2DS2-Vasc score, and the presence of PFO and coronary artery disease (Table 2).

WMH Lesions and Psychological Assessment after PVI-RF Treatment

There were no significant differences in the results of the MMSE test depending on the presence and severity of brain WMH lesions. Patients without brain WMH lesions at follow-up obtained a median of 30 points in the MMSE test (IQR 29-30), while those with lesions obtained 29 points (IQR 28-30) (p = 0.1). Table 3 presents MMSE test scoring depending on WMH lesion severity after the observation period.

(Table 3 note: Results are given as the median with interquartile range for non-normal distributions or absolute numbers and percentages. AF - atrial fibrillation, BMI - body mass index, CAD - coronary artery disease, MMSE - Mini-Mental State Examination, MRI - magnetic resonance imaging, PFO - patent foramen ovale, PVI-RF - radiofrequency pulmonary vein isolation, WMH - white matter hyperintensities, y - years.)

Multivariate logistic regression analysis was performed with the inclusion of the following post-PVI-RF variables: CHA2DS2-Vasc score, AF duration, BMI, PFO presence, echogenic blood presence in the left atrial appendage, tobacco smoking and hyperlipidemia. CHA2DS2-Vasc was statistically significant (p = 0.02). In the second multivariate logistic regression model, the variables included in the CHA2DS2-Vasc scale were analyzed: arterial hypertension, age, diabetes mellitus, sex, coronary artery disease and peripheral artery disease. Patients with congestive heart failure were excluded from the study. Age was statistically significant (p < 0.001). Depending on the severity of WMH lesions on brain MRI, patients differed significantly in age, CHA2DS2-Vasc score and the presence of coronary artery disease. A trend towards a difference in the incidence of PFO was observed (Table 3).

WMH Lesions and Psychological Assessment Depending on the PVI-RF Procedure

Median follow-up was 9.9 months (IQR 7.6-11.8 months). In the 7-day Holter evaluation after the observation period, the effectiveness of the PVI-RF procedure was confirmed in 53.8% of patients.
In the study group, no statistically significant differences were found in the presence and severity of brain WMH lesions, or in MMSE scores, before versus after the PVI-RF observation period (Table 4). There were also no statistically significant differences in the presence and severity of post-PVI-RF brain WMH lesions, or in post-PVI-RF MMSE scores, between patient subgroups defined by the effectiveness of the PVI-RF procedure.

Discussion

This publication represents part of a single-center, non-randomized, prospective study of a population consisting of relatively young patients with a history of paroxysmal, symptomatic AF (median EHRA 3), without significant structural heart disease and with a low CHA2DS2-Vasc score, who were qualified for the PVI-RF procedure.

Study Group

The inclusion of only older patients could not only have confounded the assessment through structural changes of the heart related to fibrosis or the accumulation of cardiovascular risk factors, but could also have prevented a reliable assessment of the presence and severity of WMH lesions in the brain. In order to avoid the accumulation of brain WMH risk factors, patients with a relatively low CHA2DS2-Vasc score dominated the group. Stable coronary artery disease was diagnosed in 15 patients (19% of the study group), and the study showed that risk factors for atherosclerosis and coronary artery disease were not related to the effectiveness of the PVI-RF procedure. The main inclusion criterion was preserved LV systolic function, i.e., LV EF ≥ 50%. To ensure the homogeneity of the group, it was limited to patients with paroxysmal AF, especially because this group benefits most from the PVI-RF procedure in terms of effectiveness [31-33]. Therefore, the study group consisted of patients with paroxysmal AF, for the most part with an arrhythmia diagnosis established < 5 years earlier (46% of patients), and the duration of AF had no effect on the post-observation effectiveness of the PVI-RF procedure. Finally, the subgroups of patients with successful and ineffective PVI-RF procedures after the observation period were homogeneous in terms of sex, age and the burden of associated diseases.

Due to the longer healing and remodeling process of the LA, as well as the longer time needed to assess WMH lesions and cognitive impairment, the minimum period after which patients were re-evaluated was 6 months. In the available literature, most studies showing silent peri-procedural brain microembolic changes were performed from 24-48 h up to 3 months after the procedure, rarely after a longer period of time. There is still a lack of reliable data on the long-term effects of the PVI-RF procedure in relation to the presence of brain WMH lesions and their potential impact on cognitive function, with particular emphasis on the effectiveness of the procedure.

WMH Lesions before and after the PVI-RF Procedure

According to various data, up to 60% of patients with AF may have asymptomatic brain changes of microembolic origin, regardless of any procedures performed in the left heart chambers [1,34]. These changes in the white matter of the brain in patients with AF appear to be secondary to concomitant disease processes, and their relative extent is much greater compared with the minimal number and size of the lesions caused by cardiovascular procedures in this group of patients.
Assessing pre-PVI-RF MRI images for the presence of WMH lesions and their relationship to clinical parameters, WMH cerebral lesions were found statistically significantly more frequently among older patients and those with a higher CHA2DS2-Vasc score. When the MRI images of patients with WMH lesions were assessed in terms of severity, patients with more advanced lesions were older and more often had a higher CHA2DS2-Vasc score, PFO and coronary artery disease. After the minimal observation period, no statistically significant differences were found among the other assessed parameters.

In the available literature, most brain imaging studies for microembolic lesions after PVI procedures were performed using 1.5-Tesla MR devices. The latest reports using 3-Tesla MR devices indicate that silent microembolic changes in the brain may be found more frequently in patients with AF [35]. However, the reported frequency of microembolic lesions after ablation for AF may vary depending on the definition of diagnosis, the methodology and the ablation technique used. Therefore, it is difficult to clearly identify potentially modifiable factors common to all literature data that indicate the cause of, and affect the frequency of, brain WMH lesions.

The mechanism of brain WMH lesions in patients undergoing PVI-RF ablation remains poorly understood. It is certainly multifactorial and seems, in part, to be a consequence of the ablation procedure itself. While it seems clear that these lesions are the result of microembolization of brain tissue by gas or particulate matter, consideration of potential confounding factors should also include aspects of the patient or the equipment used during the procedure. The presence of brain microembolic changes as a result of intracardiac procedures requiring access to the left chambers of the heart has become a topic highlighting the thrombogenic potential of various technologies and ablation approaches. In the available literature, irrigated RF ablation techniques using single-tip catheters dominate descriptions of PVI techniques. A similar technique was used in the present study.

During PVI ablation for AF, heparinization with monitoring and maintenance of minimum ACT values of 300 or 350 s is recommended. Appropriate heparinization may reduce the formation of thrombi on catheters and transseptal sheaths located in the LA, as well as at the ablation site. Scaglione et al. showed that a peri-operative ACT below 320 s is an independent prognostic factor for brain embolic changes during the observation period [36]. In another study, conducted by Martinek's team, there was no difference in minimum and mean ACT levels when comparing patients with and without peri-procedural cerebral microembolic lesions [37]. In our own work, the standard goal during the PVI-RF procedure was to achieve an ACT > 300 s. In this study, patients were prepared for ablation with anticoagulant treatment with drugs of the VKA group, in accordance with the ESC recommendations in force at that time [38]. In the light of recent reports, the need for uninterrupted periprocedural anticoagulant treatment, with both VKA and factor Xa inhibitors, is emphasized, which may be important in reducing periprocedural cerebral microembolic events [26,37]. It has been shown that the continuation of VKA treatment with peri-procedural maintenance of a therapeutic INR ≥ 2.0 is associated with a significantly lower incidence of post-ablation brain microembolic lesions.
On the other hand, subtherapeutic INR levels were associated with a 3.1-fold higher risk of microembolism [26]. In the current study, the mean INR determined on the day of ablation was 1.9. There was no correlation between the periprocedural INR value and the presence or severity of brain WMH lesions, either before or after PVI-RF.

In the literature, single reports can be found on the potential impact of the ACT obtained during the ablation procedure [26,36,37], intraprocedural electrical cardioversion [26,37], the presence of spontaneously contrasting (echogenic) blood in the LA appendage on pre-treatment TEE [36], the CHA2DS2-Vasc score [39], age [37] or pre-ablation brain WMH lesions [25] on an increased incidence of post-procedural microembolic lesions in the brain. In our own work, these reports were partially confirmed, although after the several-month observation period a similar frequency and severity of WMH lesions on brain MRI were found compared with the results obtained before the PVI-RF procedure.

Psychological Assessment before and after the PVI-RF Procedure

Scientific evidence suggests that AF alone is associated with a higher risk of cognitive impairment and dementia, even in patients with no stroke history [39,40]. It is associated with more than a doubling of the risk of developing silent brain damage [41]. It is believed that silent cerebral ischemia in patients with AF arises on a microembolic basis. Due to their small size and location (away from speech and motor centers), these lesions usually do not cause clinically apparent focal neurological deficits. However, with the accumulation of silent brain microembolic lesions, a progressive cognitive deficit may occur [42]. Our own results do not confirm these premises. While the incidence of silent brain damage in the form of WMH described on brain MRI was high in the study group (68.8%), patients with baseline WMH-type lesions (taking into account their severity) did not achieve worse results in the MMSE test compared with patients with a normal brain image on MRI. Perhaps the characteristics of the study group, including the relatively young age of the patients, explain these observations.

Another issue is the impact of PVI-RF ablation on cognition in AF patients through the generation of new microembolic foci. Most of the indications that the PVI-RF procedure itself may affect cognitive function come from observational, non-randomized studies. So far, the relationship between silent cerebral embolism and cognitive impairment in patients undergoing PVI-RF has not been clearly confirmed. In this study, the selected population (taking into account numerous selection criteria and the exclusion of patients after symptomatic stroke) was relatively "young" and not burdened with many comorbidities. In all patients, in addition to brain MRI, we performed a parallel neuropsychological assessment, taking into account the effectiveness of the PVI-RF procedure. In the study group, there was no significant difference between the initial cognitive abilities assessed at baseline using the MMSE test and the test results obtained after the observation period. Similarly, patients who underwent a successful PVI-RF procedure obtained a comparable result in the MMSE test relative to patients with recurrent AF after the observation period. Patients with WMH lesions reported on the follow-up MRI showed significantly lower results in the MMSE test performed before the PVI-RF procedure, which seems to be incidental.
There were no significant differences in MMSE test results depending on the presence and severity of WMH changes in the brain image analysis after the observation period. For comparison, in the prospective MACPAF study, 37 patients with paroxysmal AF underwent cerebral MRI (3-Tesla device) within 48 h after PVI [43]. New microembolic lesions were found in 43.2% of patients, and only 6.5% of these acute post-procedural brain lesions constituted a permanent scar on MRI scans after 6 months. Neuropsychological assessment after 6 months showed no significant effect of these changes on attention, motor function, short-term memory or learning. Although the periprocedural frequency of new microembolic lesions after PVI procedures is usually estimated at 18% [23,26,28,29], the long-term consequences of asymptomatic or subclinical ischemic stroke associated with the procedure are unknown. Cognitive functions are not routinely tested after ablation procedures, and the mechanisms underlying these disorders are still not fully understood.

Limitations

The analyzed group was relatively small, and the statistical power of this study is limited. The population consisted of a selected group of patients who were quite young, had a low thromboembolic risk and had no significant concomitant diseases. The conclusions should therefore be limited to populations of asymptomatic patients with non-valvular paroxysmal AF. On the other hand, the characteristics of the study group make the results interesting. A larger group could have revealed other relationships, especially for parameters where only a trend towards significance was achieved. On the other hand, the size of the group in the presented study did not differ significantly from that of other studies on related topics. Additional study limitations are the absence of a control group and the fact that MRI examinations were performed with only a 1.5-Tesla machine. The use of 3-Tesla MRI could be associated with greater sensitivity to detect silent microembolic brain lesions in the study group.

Conclusions

Cerebral microembolism assessed by MRI is often found in patients with paroxysmal AF, and its presence and severity are associated with age and a higher CHA2DS2-Vasc score. The coexistence of PFO and coronary artery disease is an additional factor affecting the severity of the lesions. In a population of relatively young patients with AF without a significant cardiovascular burden, cerebral microembolism is not associated with cognitive impairment. It seems that the PVI-RF procedure does not increase the occurrence of brain microembolic lesions or affect cognitive abilities.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on reasonable request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
Complete mitochondrial genome of the greater round-eared bat, Tonatia bidens (Chiroptera: Phyllostomidae) from Brazil and phylogenetic relationships within the family

Abstract: The greater round-eared bat, Tonatia bidens, is a locally rare species belonging to the highly diverse family Phyllostomidae. In this study, the complete mitogenome of T. bidens was sequenced using optimized protocols of DNA extraction from fixed cells originally prepared for cytogenetic studies. Here we present the complete mitogenome and place our results in a phylogenetic context with other data generated for the family Phyllostomidae. The circular genome was 16,717 bp in size, comprising 37 genes, with a GC content of 42.24%. Furthermore, the phylogenetic tree indicated a well-supported relationship between the representatives of Tonatia within the subfamily Phyllostominae.

Round-eared bats of the genus Tonatia (Phyllostomidae, Phyllostominae) are widely distributed throughout Latin America and represented by three extant species: Tonatia bidens Spix 1823, T. bakeri and T. maresi Williams, Willig & Reid 1995, the last two recently elevated from subspecies of T. saurophila to species rank (Basantes et al. 2020). Despite their wide distribution, little information encompassing aspects of their biology is known, and most authors consider the representatives of Tonatia to be rare species due to few occurrences in nature, generally in small groups, especially T. bidens (Willig 1985; Esbérard and Bergallo 2004). Tonatia bidens is a medium-sized bat, endemic to South America, found in Argentina, Paraguay, Bolivia, and Brazil. Scarce records with fragmented and patchy sampling data justify its categorization as Data Deficient (Barquez and Diaz 2016). Therefore, by describing the mitogenome of T. bidens, we provide new insights into extranuclear genome evolution as well as first steps towards the genetic characterization of this species.

Herein, one male specimen of T. bidens was collected in Parque Estadual Pedra da Boca (6°27′32.2″S, 35°40′45.4″W), Araruna, Paraíba, Brazil, during field expeditions in 2004, as authorized by IBAMA (Licence no. 12264-1 to Santos, N). The voucher specimen was deposited in the mammalian collection of the Departamento de Sistemática e Ecologia at Universidade Federal da Paraíba, João Pessoa, Brazil (www.ccen.ufpb.br/museubiologia/) under the voucher number UFPB5719. Total genomic DNA was extracted from a fixed cytological suspension (stored in methanol-acetic acid, 3:1, v/v) deposited in the Laboratório de Genética e Citogenética Animal e Humana (LGCAH/UFPE; voucher ID: M829). Fixed cells in suspension were washed twice with 1× PBS (Life Technologies, pH 7.4) for 20 min before DNA extraction using the DNeasy Blood & Tissue kit (Qiagen), following the manufacturer's instructions for blood samples. Paired-end libraries were built with the Nextera DNA Flex Library Prep kit (Illumina) and sequenced using a high-output v2 kit (300 cycles) on an Illumina NextSeq 500 platform. The mitogenome was assembled using NovoPlasty 3.6 (Dierckxsens et al. 2017), and annotation was performed with MITOS2 (Bernt et al. 2013), with minor manual corrections in Geneious Prime 2020.2 (Biomatters). We investigated the relationships among T. bidens and 20 other phyllostomid species using available mitogenomes recovered from GenBank, assigning the sequenced mitogenomes of representatives of the families Mormoopidae, Mystacinidae, and Noctilionidae as outgroups.
The phylogenetic analyses were based on the amino acid (AA) sequences of all protein-coding genes (PCGs), aligned with MAFFT 7.3 (Katoh et al. 2002). Maximum likelihood (ML) trees were obtained with RAxML v8.2 (Stamatakis 2014) using the PROTGAMMA substitution model and rapid bootstrapping (BS) with 1,000 replicates (Figure 1). The phylogenetic analysis recovered a monophyletic Phyllostomidae clade with strong support (BS = 99) and included mitogenomes from representatives of all eleven subfamilies of the classification proposed by Baker et al. (2003, 2016) (Figure 1). Our analysis resulted mostly in well-supported clades (BS > 70). In addition, in contrast with the Baker et al. (2003, 2016) phylogenies, incongruences were observed: the subfamily Micronycterinae shifted in position relative to Macrotinae; Lonchophyllinae showed alternative branching relative to the other nectarivorous bats; and Lonchorhininae appeared as a lineage that diverged after Phyllostominae. Most of these relationships resulted in low-supported clades (Figure 1), although a similar mitogenome AA tree was observed by Botero-Castro et al. (2018). Within the subfamily Phyllostominae (BS = 74), T. bidens was recovered as sister to T. maresi (BS = 100), and the genus Tonatia was more closely related to Lophostoma than to Chrotopterus and Vampyrum (Figure 1), as already observed in other phylogenetic approaches (Baker et al. 2003; Rojas et al. 2016; Basantes et al. 2020). The present study serves as encouragement to other cytogenetic laboratories to take advantage of cytological material for new purposes, especially for rare species, as it provided useful genomic data for this species that will be crucial for understanding biodiversity and for future studies of the taxonomy and molecular systematics of phyllostomid bats.

Ethics statement

All procedures performed in this study involving animals followed the Animal Care and Use guidelines of ICMBio (Instituto Chico Mendes de Conservação da Biodiversidade), under collecting permit number 12264-1 issued by the Instituto Brasileiro do Meio Ambiente e dos Recursos Naturais Renováveis (IBAMA).

Author contribution statement

JCF, SV, and CGS-C contributed to the conception and design of the study, and to the analysis and interpretation of the data. JCF wrote the first version of the manuscript. SV, GO, NS and CGS-C critically reviewed the article regarding its intellectual content. NS collected the biological samples. All authors read, discussed, and approved the final version, and all authors agree to be accountable for all aspects of the work.

Disclosure statement

The authors report no conflicts of interest.
Torsion of a mucocele of the vermiform appendix: a case report and review of the literature

Torsion of a mucocele of the vermiform appendix is an extremely rare condition and a rare cause of an acute abdomen, with a clinical presentation that is indistinguishable from acute appendicitis; thus, the condition is diagnosed during operation. Here, the authors describe the case of a 78-year-old female who presented with intermittent abdominal pain. The appendix had a pelvic position and the torsion was counterclockwise. In addition, the torsion was associated with a mucocele of the appendix, which was considered a secondary factor in the torsion. Appendectomy and drainage were performed.

INTRODUCTION

Appendicular torsion, which was first described by Payne in 1918, is an infrequent cause of acute abdomen syndrome. Despite its rarity, sufficient cases have been reported in the literature to affirm that its presentation is practically identical to acute appendicitis [1]. Torsion of the appendix can be a primary event or an event secondary to other pathologies. In cases of primary torsion, specimen examination shows secondary ischemic or necrotic change and luminal dilatation distal to the torsion site without any primary lesion. Gopal et al. [2] postulated that the contributing factors are a long appendix and a fan-shaped mesoappendix with a narrow base, which is usually attached to the appendix laterally. On the other hand, secondary torsion is caused by an appendiceal abnormality, such as a fecalith [3], cystadenoma [4], mucocele [5], a carcinoid tumor [5], adhesion [6], or lipoma [7]. Here, we report a case of secondary torsion of the vermiform appendix with a mucocele, and review the literature on secondary appendiceal torsion.

CASE REPORT

A 78-year-old female patient presented to the emergency room complaining of intermittent abdominal pain for 6 days, accompanied by nausea and vomiting for 2 hours. Her medical history included well-controlled diabetes mellitus for 15 years and no previous abdominal surgery. [...] (Fig. 3). Histopathology showed that the lumen of the appendix was dilated and contained large amounts of mucus-like material, but there was no evidence of malignancy (Fig. 4). The postoperative course was uncomplicated, and the patient was discharged fourteen days later in good general condition.

DISCUSSION

Torsion of the vermiform appendix is a rare condition that clinically simulates acute appendicitis. The condition is indistinguishable preoperatively from acute appendicitis, and it is invariably diagnosed intraoperatively [8]. The site of torsion is variable and can be at the base or about 1 cm or more distal to the base [8]. Furthermore, the position of the appendix is variable, but is usually described as free-lying or pelvic (as in our case) [2]. Primary and secondary torsion of the vermiform appendix have been described. Primary torsion appears to be associated with anatomical variations, such as a narrow base, a long appendix, or a fan-shaped mesoappendix [9]. Primary torsion of the vermiform appendix remains a rare finding that is usually discovered at surgery performed under another diagnosis, most commonly acute appendicitis. The origin of primary torsion remains unknown. Among cases with acute inflammation, the inflammatory response has been deemed an event secondary to torsion, rather than a precipitating cause [10].
On the other hand, secondary torsion has been reported to be associated with appendiceal abnormalities, such as fecalith, cystadenoma, mucocele, or carcinoid tumor [...].

In conclusion, torsion of the vermiform appendix is a rare disorder with an unclear etiology, and it causes abdominal symptoms clinically indistinguishable from acute appendicitis. Its presentation is so variable that preoperative diagnosis is extremely difficult, and it is treated by appendectomy. Although appendicitis is the most common intra-abdominal surgical emergency, torsion of the vermiform appendix has only rarely been described, and it is an uncommon cause of an acute abdomen. When a patient presents with abdominal pain indicating appendicitis, torsion of the vermiform appendix should be considered in the differential diagnosis.

CONFLICTS OF INTEREST

No potential conflict of interest relevant to this article was reported.
Alcohol intake and brain white matter in middle aged men: Microscopic and macroscopic differences

Heavy alcohol consumption is associated with deleterious changes in the brain, but associations of moderate alcohol intake are not well understood. We examined the association of alcohol consumption with brain white matter health in 377 middle-aged men (56-66 years old; mean 61.8 ± 2.6 years) who were participants in the Vietnam Era Twin Study of Aging (VETSA). T1-, T2-, proton density-, and diffusion-weighted magnetic resonance images were obtained. Diffusion measures were quantified from 12 major white matter tracts. Global white matter lesion (WML) burden was also quantified. Mixed effects linear models examined differences in diffusivity and WMLs by amount of alcohol intake. Analyses adjusted for numerous demographic, health, and lifestyle variables. An inverted-U association was found between alcohol intake and fractional anisotropy (FA) in several tracts, including the inferior frontal-occipital fasciculus, uncinate fasciculus, superior longitudinal fasciculus, the forceps minor and the anterior thalamic radiations. In these tracts, FA increased with increasing alcohol intake, peaking with moderate alcohol intake (9-28 drinks in 14 days), and declining with heavier intake. Associations remained significant after exclusion of individuals with diabetes or hypertension. There was a U-shaped association in WML burden, with the highest burden among never drinkers and heavy drinkers (> 28 drinks in 14 days). This association was no longer significant after exclusion of individuals with hypertension, since WML burden among heavy drinkers no longer differed from that of other drinkers. This suggests that hypertension related to heavy alcohol intake may contribute to the WML burden observed among heavy drinkers. Together, these correlational results suggest that among middle-aged men, moderate drinking may be associated with metrics of better white matter health, particularly microstructural measures, whereas drinking beyond recommended guidelines may be associated with both microstructural and macrostructural white matter damage.

Introduction

Many epidemiological studies have indicated that moderate alcohol intake is associated with preserved cognitive function in aging and reduced risk of dementia (Beydoun et al., 2014; Neafsey and Collins, 2011; Richard et al., 2017). In contrast, human neuroimaging studies have generally found that alcohol is associated with deleterious changes in the brain, including global and regional brain shrinkage and white matter damage, with the frontal lobes being particularly affected (Oscar-Berman and Marinkovic, 2007; Sullivan et al., 2010). However, most neuroimaging studies have focused on individuals with risky drinking patterns or alcohol use disorders. Relatively few have looked at associations of neuroimaging measures with alcohol use among community-based samples. Of those that have, some have found suggestions of a protective association of moderate drinking with white matter health (Mukamal et al., 2001; den Heijer et al., 2004), but others have not (Anstey et al., 2006; Ding et al., 2004; Topiwala et al., 2017).
Suggestions of a protective association of light-to-moderate alcohol intake on white matter lesions (WMLs) have been found for older adults (den Heijer et al., 2004;Mukamal et al., 2001), but not for middle-aged adults (Anstey et al., 2006;Ding et al., 2004). It is possible that moderate alcohol intake may protect against the development of WMLs with advancing age, becoming evident only at older ages.

White matter is highly vulnerable to aging. WMLs, evidenced as areas of abnormal signal intensity within the white matter on magnetic resonance imaging (MRI), increase in prevalence with age (Gunning-Dixon et al., 2009;Jernigan et al., 2001) and are often of vascular origin (Bernbaum et al., 2015;Fazekas et al., 1993). Moderate alcohol intake reduces the risk of cardiovascular morbidity and mortality through multiple potential pathways, including anti-atherosclerotic, anti-thrombotic and anti-inflammatory mechanisms (Bau et al., 2007;Collins et al., 2009;Rimm et al., 1999). It is plausible that moderate alcohol intake may also protect against cerebrovascular dysfunction that contributes to white matter damage in aging.

Diffusion-weighted imaging, which is sensitive to microscopic changes in the brain that affect the free flow of water molecules, enables sensitive assessment of aging-related and health-related changes in brain white matter microstructure. White matter microstructure is often probed through the use of fractional anisotropy (FA), a measure of the degree to which water motion is restricted along a single direction. In white matter tracts, diffusion occurs most strongly along the tract length (axial diffusion), with axonal walls and myelin sheaths serving as barriers to the flow of water across the tract (radial diffusion). Axonal and myelin loss alter diffusion, as do other pathological changes such as increased fluid and loss of intracellular structures. Decreases in FA and increases in mean diffusivity occur with aging (Sullivan et al., 2008), and in individuals with alcohol use disorders (Pfefferbaum et al., 2009). Alterations in white matter diffusion have been reported to precede development of WMLs (de Groot et al., 2013), and thus may be a more sensitive index of white matter health than macroscopic lesions.

Here, we investigated the association between alcohol intake and white matter health in a cohort study of middle-aged men from across the USA, the Vietnam Era Twin Study of Aging (VETSA; Kremen et al., 2013;Kremen et al., 2006). Based on prior findings of a positive association between moderate alcohol intake and cognitive function in aging (Neafsey and Collins, 2011;Richard et al., 2017), and on findings of cardioprotective effects of moderate drinking (Xi et al., 2017), we hypothesized that individuals who consume moderate amounts of alcohol would show indices of healthier white matter relative to those who do not drink or who drink more heavily, and that associations would be stronger for measures of white matter diffusion than for WMLs.

Participants

VETSA participants were recruited from the Vietnam Era Twin Registry, a nationally distributed sample of male-male twin pairs who served in the United States military at some point between 1965 and 1975 (Goldberg et al., 2002). Participants are similar in health and lifestyle characteristics to American men in their age range (Schoenborn and Heyman, 2009).
Although all VETSA participants are veterans, the majority (~80%) did not experience combat situations. The current study focused on 377 participants (including 132 twin pairs) who completed the wave 2 MRI visit between 2009 and 2013, whose anatomical and diffusion-weighted imaging data passed quality control. The study was conducted under institutional review board supervision at the participating institutions; all participants provided signed informed consent.

Alcohol intake

Alcohol use was assessed as part of a structured medical history interview at each visit. Participants were asked if they had consumed ≥ 20 alcoholic drinks in their life. Those who responded 'Yes' were asked the number of days in the past 2 weeks in which they consumed beer, and how many beers they had on days in which they drank beer. These questions were repeated for wine and hard liquor. We summed across beverage types to yield the total number of drinks in the past 14 days. Participants also completed the Cage questionnaire, a 4-question interview designed to detect problematic drinking; scores > 1 are indicative of potential for alcohol use disorder (Ewing, 1984).

In primary analyses, alcohol was treated as a categorical variable to enable separation of current nondrinkers from never drinkers (those who reported not consuming 20 drinks in their life). To examine dose effects, current alcohol consumers were divided into approximate quartiles, and groups are referred to as very light drinkers (1-3 drinks in the past 14 days); light drinkers (4-8 drinks); moderate drinkers (9-28 drinks); and heavier drinkers (> 28 drinks in the past 14 days; a level of drinking that may increase risk for alcohol-related problems according to the National Institute on Alcohol Abuse and Alcoholism, 2005). In secondary analyses, we examined the association of alcohol with white matter metrics when alcohol was treated as a continuous variable (number of drinks in the past 14 days). Due to the non-normal distribution, values were log transformed after adding a constant (1) to avoid values of zero.

Health, behaviors, and socioeconomic covariates

We examined demographic, health, and behavioral variables that may be associated with alcohol intake, including age, race/ethnicity, indicators of socioeconomic status, adiposity, smoking, blood pressure, cholesterol, diabetes, and inflammation. Information on educational attainment, family income, cigarette smoking and medical history were obtained from structured interviews. During the research visit, height, weight, and waist circumference were measured. Systolic and diastolic blood pressure (SBP and DBP) were measured as the average of 2 morning and 2 afternoon seated readings. Participants were classified as having hypertension based on SBP ≥ 140 mm Hg, DBP ≥ 90 mm Hg, self-report of a physician diagnosis, or self-reported use of antihypertensive medication for cardiovascular health. Diabetes status was ascertained from self-report of a doctor's diagnosis or reported use of a diabetes-related medication. Fasting morning blood samples were obtained. Low- and high-density lipoprotein (LDL and HDL) cholesterol and triglycerides were assayed as part of a lipid panel via spectrophotometry. C-reactive protein (CRP) levels were assessed with nephelometry. Due to non-normal distributions, triglyceride, cholesterol, and CRP values were log-transformed prior to analyses. Assays were conducted by Quest Diagnostics Inc./Nichols Institute, San Juan Capistrano, CA.
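As a concrete illustration of the alcohol coding described above, the following Python sketch computes the 14-day drink total, the six-level grouping, and the log-transformed continuous measure. The column names are hypothetical placeholders, not the actual VETSA variable names.

import numpy as np
import pandas as pd

def code_alcohol(df):
    # Hypothetical columns: 'ever_20_drinks' (bool) and, for each beverage,
    # days consumed and drinks per drinking day over the past 2 weeks.
    beverages = ["beer", "wine", "liquor"]
    # Total drinks in the past 14 days, summed across beverage types.
    df["drinks_14d"] = sum(df[f"{b}_days"] * df[f"{b}_per_day"] for b in beverages)

    def group(row):
        # Six-level categorical coding used in the primary analyses.
        if not row["ever_20_drinks"]:
            return "never drinker"
        n = row["drinks_14d"]
        if n == 0:
            return "current non-drinker"
        if n <= 3:
            return "very light"
        if n <= 8:
            return "light"
        if n <= 28:
            return "moderate"
        return "heavier"

    df["alc_group"] = df.apply(group, axis=1)
    # Continuous version: log-transform after adding 1 to avoid log(0).
    df["log_drinks_14d"] = np.log(df["drinks_14d"] + 1)
    return df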
Image acquisition and processing

T1-, T2-, proton density- (PD) and diffusion-weighted images were obtained using standardized protocols on 3T scanners at two sites (University of California San Diego and Massachusetts General Hospital). Image acquisition details have been previously described (Fennema-Notestine et al., 2016;McEvoy et al., 2015). Image processing was performed at UC San Diego, using previously described procedures (Fennema-Notestine et al., 2016;McEvoy et al., 2015), and briefly summarized below.

White matter microstructure

Diffusion-weighted images were corrected for distortions and registered to the T1-weighted image (Hagler Jr et al., 2009). Diffusion images were rigidly resampled, using cubic interpolation, into a standard orientation with 2 mm isotropic resolution. Conventional DTI methods were used to model the diffusion tensor as an ellipsoid whose three primary axes are defined by the eigenvalues λ1, λ2, and λ3 (Basser, 1995;Basser et al., 1994;Le Bihan et al., 2001;Pierpaoli et al., 1996). Derived measures included FA, a scalar value of the degree of anisotropic diffusion within the voxel; axial diffusivity, the average diffusion along the primary axis; and radial diffusivity, the average diffusion along the two non-primary axes. Average values for specific fiber tracts were obtained using a probabilistic atlas of fiber tract locations and orientations (Hagler Jr et al., 2009). T1-weighted images were used to nonlinearly register the brain to a common space, and diffusion tensor orientation estimates were compared to the AtlasTrack atlas to obtain a map of the relative probability that a voxel belonged to a particular fiber given the location and similarity of diffusion orientations. These probability values were used to calculate weighted averages of the diffusion measures for each fiber tract. A fiber probability threshold of 0.08 was used to ensure that voxels with very low probability of belonging to a given fiber did not contribute to average values. FreeSurfer software (Fischl et al., 2002) was used to automatically segment gray matter, white matter and cerebral spinal fluid (CSF) on the high resolution structural image, and this information was used to identify and exclude voxels in fiber tract regions of interest that were primarily gray matter or CSF. Diffusion metrics in 12 major fiber tracts were analyzed: uncinate fasciculus (UF), inferior frontal-occipital fasciculus (IFOF), inferior and superior longitudinal fasciculi (ILF, SLF), cingulate and parahippocampal portions of the cingulum bundle (CgC, CgH), fornix, anterior thalamic radiations (ATR), corticospinal tract (CST), corpus callosum (CC), and forceps minor and forceps major (Fmin, Fmaj). Because a prior study indicated no hemispheric differences in the association of alcohol with diffusion metrics (Pfefferbaum et al., 2009), right and left hemisphere measures were averaged.

White matter lesions

A multi-channel segmentation approach, which leverages complementary information in T1-, T2- and PD-weighted volumes, was used to quantify WML volume. These measures were available for 348 participants. Details of the approach have been previously described (Fennema-Notestine et al., 2016). Briefly, T2 and PD volumes were registered to the T1 volume, all volumes were bias corrected, and then a 3-class tissue segmentation was performed (white matter, gray matter, CSF).
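To make the diffusion measures above concrete, the following numpy sketch computes FA, axial and radial diffusivity from the tensor eigenvalues, together with a probability-weighted tract average using the 0.08 threshold. These are the standard DTI formulas, offered as an illustration rather than code from the VETSA pipeline.

import numpy as np

def dti_scalars(ev):
    # `ev` holds sorted eigenvalues (lambda1 >= lambda2 >= lambda3), shape (..., 3).
    l1, l2, l3 = ev[..., 0], ev[..., 1], ev[..., 2]
    ad = l1                    # axial diffusivity: diffusion along the primary axis
    rd = (l2 + l3) / 2.0       # radial diffusivity: across the two non-primary axes
    fa = np.sqrt(0.5) * np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2) \
         / np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2 + 1e-12)
    return fa, ad, rd

def tract_average(metric, fiber_prob, threshold=0.08):
    # Probability-weighted average over a tract, zeroing voxels whose
    # fiber probability falls below the threshold.
    w = np.where(fiber_prob >= threshold, fiber_prob, 0.0)
    return (metric * w).sum() / w.sum()

Returning to the lesion segmentation: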
WMLs were classified using morphological operators to identify voxel clusters originally segmented as gray matter that fell within anatomically-defined white matter regions, excluding partially-volumed voxels along ventricles. Results were visually reviewed and edited when necessary to correct misclassifications. Total white matter volume and WML volume were log-transformed due to non-normal distributions. The proportion of log total white matter volume that was accounted for by log WML volume was calculated to determine WML burden.

Statistical analysis

To determine whether demographic and clinical characteristics differed across alcohol groups, we used linear mixed-effects models (Proc Mixed, SAS version 9.4) for continuous variables, and chi-square tests for categorical variables. To assess stability of alcohol use, we computed the Pearson correlation between alcohol intake reported at the current visit with that reported in the visit that occurred 5 years previously (VETSA wave 1).

For our primary analyses we used linear mixed-effects models to determine whether white matter metrics differed between groups. Base models included fixed effects of scanner (one per site) and age. A "family ID" variable was included as a random effect to control for nonindependence of twin data. Adjusted models additionally included potentially confounding covariates that differed across groups at p < 0.10: years of education, family income (categorized into ≤$39,999; $40,000-$89,999; ≥$90,000), SBP, HDL, smoking (never, current, former) and diabetes status (yes/no). When significant differences in FA were observed, secondary analyses examined whether diffusion differences occurred primarily along radial or axial directions. To assess the association between alcohol and WMLs, analyses were repeated using WML burden as the outcome variable. A probability value of p < 0.05 was considered significant. We corrected for multiple comparisons across the 12 tracts using a false discovery rate approach taking into account the correlation between tracts to determine the effective number of independent tests (Li and Ji, 2005). When significant associations of alcohol with DTI metrics were observed, following adjustment for multiple comparisons across the 12 tracts, post-hoc comparisons were performed to assess significance of differences between the moderate drinking group and the other 5 groups, using the Sidak adjustment for multiple comparisons.

In secondary analyses, we repeated the analyses treating alcohol as a continuous measure of log-transformed number of drinks over the past 14 days, and including alcohol as a non-linear variable. We performed sensitivity analyses to determine whether outliers in the heavy drinking group accounted for observed decreases in FA among the heavy drinking group, by excluding 18 individuals who reported drinking > 56 drinks in the past 14 days (equivalent to > 4 drinks/day). We examined whether the inclusion of individuals with potential alcohol use disorders may have affected the results by excluding individuals with Cage scores > 1. We also examined the influence of cardiometabolic disease on the results by repeating the analyses after excluding individuals with diabetes or with hypertension.

Results

Participants were 61.8 (± 2.62, range 56-66) years old, on average, predominantly white, non-Hispanic (88.6%), with an average education of 13.8 years (SD = 2.1, range 5-20 years).
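As a brief aside on the multiple-comparison correction described in the statistical analysis above, the following Python sketch shows one standard reading of the Li and Ji (2005) effective number of independent tests; the exact procedure used in the analyses may differ.

import numpy as np

def effective_tests(corr):
    # Li & Ji (2005): M_eff = sum over eigenvalues of the test correlation
    # matrix of f(|lambda|), with f(x) = I(x >= 1) + (x - floor(x)).
    lam = np.abs(np.linalg.eigvalsh(corr))
    return float(np.sum((lam >= 1) + (lam - np.floor(lam))))

def corrected_alpha(corr, alpha=0.05):
    # Sidak-style per-test threshold given the effective number of tests.
    m_eff = effective_tests(corr)
    return 1.0 - (1.0 - alpha) ** (1.0 / m_eff)

Returning to the results: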
The majority (346/377; 92%) reported having consumed at least 20 alcoholic beverages in their life, and most (248/377, 66%) reported drinking in the past 14 days. Beer was the most frequently consumed beverage (197/248, 79% of current drinkers), followed by wine (118/248, 48%) and hard liquor (108/248, 44%). Many (134/248, 54%) reported drinking more than one type of alcohol. Among the 318 participants with data at both waves, number of drinks reported in wave 2 correlated with number of drinks reported 5 years previously in wave 1 at r = 0.67; p < 0.001.

Table 1 shows the demographic, behavioral, and clinical characteristics of each group. Groups did not differ in age. Moderate drinkers had significantly higher family income than other groups. Heavy drinkers were more likely to be current smokers than other groups. HDL levels increased with increasing alcohol intake, and SBP was highest in the heavy drinking group. Rates of diabetes were highest among current non-drinkers and very light drinkers. The proportion of individuals with potential alcohol use problems (Cage questionnaire score > 1) was highest among the heaviest drinking group, followed by current nondrinkers.

3.1. Association between alcohol and white matter microstructure

Fig. 1 shows mean FA in the 12 white matter tracts as a function of group. In base models, controlling for age, scanner and family-relatedness, differences in FA by alcohol group were significant after correction for multiple comparisons for the IFOF only (Table 2), with highest FA among moderate drinkers. After adjusting for potentially confounding covariates that differed across groups (education, family income, SBP, HDL, smoking and diabetes status), significant effects were additionally observed for the UF, ATR, SLF, and Fmin, after correction for multiple comparisons (Table 2). For all these tracts, average FA was highest for the moderate drinking group (9-28 drinks in 14 days) with lower FA for the heavier drinking group. Post-hoc pairwise comparisons comparing the moderate drinking group to the other groups indicated that FA among moderate drinkers was significantly higher than among never drinkers and non-drinkers for the IFOF; higher than among nondrinkers for the ATR and Fmin; and higher than among heavier drinkers for the UF, IFOF, ATR, and SLF (see Fig. 1). Examination of radial and axial diffusivity in tracts showing significant association with FA revealed no differences across groups in axial diffusivity (all p values > 0.1; see Table 3), but significant group differences for radial diffusivity in the IFOF and Fmin (Table 3), with lowest diffusivity among moderate drinkers, and higher diffusivity among heavy drinkers (Fig. 2).

Sensitivity analyses focused on the IFOF because this tract showed the strongest associations with alcohol intake. Sensitivity analyses were performed with adjustment for the same potential confounders included in the primary analyses. When alcohol was modeled as a continuous rather than categorical variable, there was a significant inverted-U association of alcohol with IFOF FA in base (F(1,129) = 11.80; p = 0.001) and fully adjusted models (F(1,113) = 12.4; p = 0.001). When the primary analyses treating alcohol as a categorical variable were repeated with exclusion of individuals who consumed > 56 drinks in the past 14 days, FA continued to be lower among the heaviest drinking group than among the moderate drinking group (Sidak adjusted p = 0.005).
Excluding individuals with possible alcohol use disorders (Cage score > 1) did not change the results: the main effect of alcohol remained significant (F(5,58) = 5.73; p < 0.001), and post-hoc comparisons showed greater FA among moderate drinkers than among current non-drinkers (Sidak adjusted p = 0.005) and heavier drinkers (Sidak adjusted p < 0.001). Excluding individuals with diabetes (n = 53) did not change the results (base model, F(5,100) = 3.34, p = 0.008; fully adjusted model F(5,86) = 3.90, p = 0.003; raw p values). Although excluding individuals with hypertension (n = 200) reduced sample size by more than half, FA continued to differ across groups (base model: F(5,33) = 3.49; p = 0.012; fully adjusted model: F(5,23) = 3.38; p = 0.02), with highest FA among light and moderate drinkers and lowest FA among the never drinkers. Similar results were obtained after exclusion of individuals with either hypertension or diabetes (excluding 216 individuals; base model: F(5,28) = 2.73; p = 0.04; fully adjusted model: F(5,19) = 2.57; p = 0.061).

Association between alcohol and white matter lesions

The linear mixed effect model showed no significant differences in WML burden as a function of alcohol group (F(5,113) = 1.50; p = 0.20; Fig. 3). This was unchanged after adjustment for covariates (F(5,98) = 1.72; p = 0.14). However, when alcohol was treated as a continuous variable, there was evidence for a U-shaped association of WML burden with alcohol intake. Exclusion of individuals with hypertension reduced WML burden among the heaviest drinking group but not among never drinkers, who continued to show highest WML burden.

Discussion

Numerous studies have indicated that chronic alcoholism is associated with detrimental changes in the brain (Harper, 2009;Oscar-Berman and Marinkovic, 2007;Sullivan et al., 2010). However, few studies have focused on effects of moderate drinking on healthy brain aging. We found evidence for an inverted-U shaped association between alcohol intake and white matter microstructure and gross WMLs, with stronger associations observed for white matter microstructure. For both types of measures, light to moderate alcohol intake was associated with indices of better white matter health than non-drinking or heavier drinking.

Several white matter tracts that travel through the frontal lobe showed signs of better white matter microstructural integrity among moderate drinkers, with most pronounced associations in the IFOF, a major white matter tract that extends from occipital cortex to the inferior frontal lobe. FA in the IFOF increased with increasing alcohol intake, reaching a maximum at the moderate drinking level (9-28 drinks in the prior 14 days), then decreasing with greater alcohol intake. A similar pattern was also observed in the anterior projections of the thalamus and the forceps minor, the anterior fibers of the corpus callosum that connect left and right frontal cortex. Significantly decreased FA in heavy drinkers (> 28 drinks in 14 days) relative to moderate drinkers was also observed in the UF, a fiber tract that connects anterior temporal to prefrontal cortex, and the SLF, a fiber tract that connects posterior temporal and parietal areas to frontal cortex. The finding of reduced FA with heavier drinking is consistent with results from studies of individuals with alcohol use disorders or risky drinking patterns, who have shown decreased FA and increased radial diffusivity in frontal and superior white matter tracts (Harris et al., 2008;Pfefferbaum et al., 2009;Yeh et al., 2009).
Animal models of binge drinking and human neuropathological studies on individuals with alcohol use disorders have demonstrated neuronal loss, primarily in frontal association cortex, and alterations in white matter, including myelin damage (Samantaray et al., 2015;Wiggins et al., 1988). Given the multitude of factors that can affect alterations in diffusion, we cannot be certain of the neurobiological basis of the differences observed here (Beaulieu, 2002;Wheeler-Kingshott and Cercignani, 2009). However, animal and human histopathological studies indicate that myelin damage increases radial diffusivity and reduces FA, and that FA primarily reflects radial rather than axial diffusivity (Chang et al., 2017;Klawiter et al., 2011;Song et al., 2005). Thus, reduced FA and increased radial diffusivity among the heavier drinking group may reflect myelin damage related to heavy alcohol exposure, whereas the increased FA and decreased radial diffusivity among moderate drinkers may reflect better-preserved myelin.

(Table 3. Results of the fixed effects models of differences in radial and axial diffusivity across alcohol groups for the tracts showing significant differences in FA.)

WMLs are common among older adults, particularly those with hypertension, and are commonly associated with ischemic damage (Bernbaum et al., 2015;Fazekas et al., 1993). We previously reported that in this middle-aged sample, WMLs were predominantly located in periventricular and deep frontal and parietal white matter, and were positively associated with age and hypertension (Fennema-Notestine et al., 2016). The U-shaped association between alcohol and WMLs observed here is consistent with findings from two studies of older adults, the Cardiovascular Health Study (Mukamal et al., 2001) (aged 65+ years) and the Rotterdam study (den Heijer et al., 2004) (aged 60-90 years), in which light-to-moderate drinkers showed fewer WMLs than did abstainers or heavy drinkers. Our findings differ from the null results reported in two studies of middle-aged adults, the ARIC (Ding et al., 2004) (mean age 57 years) and PATH Through Life (60-64 years) studies (Anstey et al., 2006). However, the ARIC study (Ding et al., 2004) contained too few heavy drinkers to examine WMLs separately in heavy drinkers, and the PATH Through Life study (Anstey et al., 2006) did not differentiate between never drinkers and current non-drinkers. Additionally, our multi-channel segmentation method for quantifying WMLs is likely more sensitive than semi-quantitative methods for grading WMLs (Ding et al., 2004), or than methods that rely on less information (Anstey et al., 2006). It may thus improve the ability to detect subtle differences in WMLs in our middle-aged sample.

Our results suggest that putative effects of alcohol on WMLs are already present in midlife (ages 56-66 years). Because alterations in white matter diffusion appear to precede development of WMLs (de Groot et al., 2013), our results also suggest that associations of alcohol with white matter microstructure may be evident even earlier in life and may be more reliably detected than WMLs in midlife adults. Given that white matter microstructure deteriorates with age (Gunning-Dixon et al., 2009), the observed associations will likely increase with age. Ongoing follow-up will enable us to address this issue.

There are several pathways by which alcohol may exert protective effects. Alcohol intake increases HDL levels, which, in turn, are associated with reduced risk of atherosclerosis (Rimm et al., 1999).
HDL levels increased systematically across our alcohol groups, with highest levels among the heaviest drinking group. Inclusion of HDL in the adjusted models did not attenuate the observed associations, suggesting that differences in HDL do not underlie the associations of alcohol with white matter microstructure. Alcohol also has anti-inflammatory properties (Collins et al., 2009;Crews and Nixon, 2009); however, CRP levels did not differ across groups. Other mechanisms not assessed here, such as alcohol's anticoagulant effects or possible direct neuroprotective effects (Collins et al., 2009), may underlie the observed association of higher FA among moderate drinkers.

A frequently-cited concern regarding studies that report health benefits of alcohol is the use of non-drinking comparison groups that mix former drinkers with abstainers. Former drinkers may have poorer health than current drinkers (Rehm et al., 2008;Wannamethee and Shaper, 1997) and lifetime abstainers may differ from drinkers in numerous factors that may also affect health (Bondy and Rehm, 1998). The finding that approximately 30% of our current non-drinking group had a Cage score > 1 suggests that this group contained individuals with a history of prior heavy drinking. However, excluding these individuals from the analyses did not materially change the results. Furthermore, our finding that moderate drinkers showed higher IFOF FA than lighter drinkers also suggests that the protective associations of alcohol observed here do not arise from confounds related to nondrinkers.

Moderate drinking was associated with higher family income, which may be associated with increased access to health care and ability to pursue a healthy lifestyle. Including income and education as covariates in the analyses did not attenuate the associations, but residual confounding cannot be ruled out. Hypertension and diabetes are also important potential confounders, since these diseases are associated with increased risk of white matter damage (Bernbaum et al., 2015;Fazekas et al., 1993;Gouw et al., 2008), and moderate drinkers showed lowest prevalence of these disorders. Although it is possible that this may be a manifestation of cardiovascular benefits of a history of moderate drinking, particularly with regard to diabetes (Matsumoto et al., 2014), sensitivity analyses excluding individuals with diabetes or hypertension continued to show higher FA among light and moderate drinkers than among nondrinkers and very light drinkers. However, the U-shaped association of WML burden with alcohol intake was no longer significant after excluding individuals with hypertension. After exclusion of individuals with hypertension, the heaviest drinking group no longer showed higher WML burden than other drinkers. Because heavy alcohol intake is a well-known risk factor for hypertension (Matsumoto et al., 2014), and hypertension is one of the strongest predictors of white matter damage (Skoog, 1998), this suggests that alcohol-related hypertension may underlie the higher WML burden observed among heavy drinkers.

This study has other limitations. Self-reported alcohol use is prone to under-estimation (Devos-Comby and Lange, 2008). However, this would not affect the rank-order of alcohol intake groups, but does limit our ability to determine thresholds at which associations change from beneficial to harmful. Additionally, we used a single time-point assessment of alcohol use, which may not adequately reflect an individual's history of alcohol exposure.
However, studies have shown that alcohol use tends to remain stable in middle and older age (Bobo et al., 2010;Brennan et al., 2011;McEvoy et al., 2013;Moore et al., 2005), and we observed a high correlation (r = 0.67) between self-reported alcohol use in VETSA waves 1 and 2. Finally, the sample was restricted to primarily white men. Thus, findings may not generalize to women or other races or ethnicities.

Conclusions

Among middle-aged men, moderate alcohol intake appears to be associated with signs of healthier brain white matter, whereas heavier alcohol intake is associated with signs of white matter damage. Given the correlational nature of this study, caution is warranted in interpreting the findings to suggest a causal effect of alcohol on brain white matter health. Our analyses did adjust for multiple health and demographic factors, thus minimizing their potential as alternative causal factors. Nevertheless, other factors, not assessed here, may underlie the observed associations. Our findings extend previously reported possible beneficial associations of alcohol with cardiovascular health to measures of brain health. They show that these associations occur in middle age, prior to the age at which substantial age-related cognitive decline typically manifests. Longitudinal follow-up is needed to determine whether moderate alcohol intake during middle life is related to rate of white matter decline in older age.

Acknowledgements and funding

This work was supported by the National Institutes of Health (grant numbers AA021187, AG018386, AG022381, AG022982 and AG018384). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIAAA, NIA/NIH, or the VA. The funding agencies had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Conflicts of interest

LKM holds stock in CorTechs Laboratories, Inc. AMD is a founder and holds equity in CorTechs Laboratories, Inc., and also serves on its Scientific Advisory Board. He is a member of the Scientific Advisory Board of Human Longevity, Inc. and receives funding through research agreements with General Electric Healthcare and Medtronic, Inc. The terms of this arrangement have been reviewed and approved by the University of California, San Diego, in accordance with its conflict of interest policies. All other authors declare no competing financial interests.
New shadow detection and removal approach to improve neural stereo correspondence of dense urban VHR remote sensing images

Abstract

Shadows cause problems in many remote sensing applications like image segmentation, object extraction and stereo vision. This paper presents a new, automatic approach to detect and remove shadows from pairs of dense urban very high resolution (VHR) remote sensing images. The contribution of this paper is twofold: first, the proposed approach efficiently restores objects hidden by shadows; second, it improves the stereo matching process. We have chosen to operate on Ikonos pairs as an example of urban remote sensing images. Shadow detection is achieved using a new property-based technique operating directly in the red, green and blue (RGB) colour space. The proposed shadow removal technique aims to supply the needed amount of light to the shadow regions by multiplying them by constants; a shadow edge correction is then applied to reduce the errors due to diffusion at the shadow boundary. Once a pair of shadow-free images is recovered, we apply a stereo matching process using a Hopfield neural technique in order to find homologous regions. Our results on different urban pairs show the effectiveness, simplicity and speed of the proposed approach in revealing details hidden by shadows and in obtaining a high stereo matching rate.

Introduction

In recent years, both very high resolution (VHR) urban remote sensing images and aerial images have shown very fine details of features such as buildings, roads, cars and vegetation [Maglione et al., 2014]; however, the amount of shadow and occlusion increases with the spatial resolution. Shadow is an inevitable natural phenomenon, usually cast by elevated objects like buildings, bridges and towers, and it is necessary to reduce or compensate its effect in order to extract the information required by the application at hand. In this paper, we are interested in stereo matching of buildings in urban areas located in Ikonos VHR images; the proposed method is an extension, to occluded buildings, of the one we proposed in Zigh and Belbachir [2012]. To obtain accurate results, we therefore have to overcome shadow issues. Shadow detection and removal methods work together to remove shadows [Krishna et al., 2012]. Some of the most important reviews of shadow detection techniques are found in Andres et al. [2012], Al-Najdawi et al. [2012] and Shahtahmassebi et al. [2013]. Andres et al. [2012] reviewed recent methods for moving objects in video sequences, and Adeline et al. [2013] presented a study of shadow detection methods for single very high resolution aerial images. From these, shadow detection methods for very high resolution remote sensing images can be categorised into four main classes: machine learning methods, physics based methods, model based methods and property based methods. In the first class, Martel-Brisson and Zaccarin [2005] used a Gaussian mixture model as an unsupervised method to classify shadowed regions, Levine and Bhattacharyya [2005] used a Support Vector Machine (SVM) to classify shadow boundaries after segmentation, and Lalonde et al. [2010] proposed a supervised shadow classification using a conditional random field. These methods require reference samples to train the classifier and/or are generally computationally expensive.
In physics based methods, shadow detection algorithms generally use material reflectances; they take into consideration the illumination and the atmospheric conditions, and derive some physical properties of materials. There are few published papers using this class [Adler-Golden et al., 2002;Richter and Müller, 2005] because of the lack of this additional information. Model based methods need an accurate 3D model or atmospheric illumination conditions to determine where shadows are located on the image [Adeline et al., 2013], so they are performed directly on radiance data. Using digital surface models (DSM), several geometrical methods for shadow detection have been published in the literature: Rau et al. [2002] use the Z-buffer technique with DSM data and multi-view ortho-photos, and more recently Tolt et al. [2011] employ a straightforward line-of-sight analysis with a very accurate DSM combined with SVM supervised classification. The last class, property based methods, does not require any a priori information because it can be applied directly to raw data based on some specific properties like chromaticity and intensity. Dare [2005] detected shadow in panchromatic Ikonos images by thresholding at a predetermined level and post-processing the segmented regions; Tsai [2006] exploited the properties of shadows in chromaticity in several invariant colour spaces, including hue-saturation-value (HSV), hue-saturation-intensity (HSI), hue-chroma-value (HCV), luma-in-phase-quadrature (YIQ) and luma-blue difference-red difference chroma components (YCbCr) models; Arévalo et al. [2008] exploited both a shadow invariant colour component and edge information to detect shadow in Quickbird images. More recently, Krishna et al. [2012] detected shadow in the HSV colour space using Otsu's method for thresholding. Almost all the shadow detection research cited above was applied to very high resolution single images (aerial or satellite images). Concerning our research, we have to mention three main points:

- We apply shadow detection to stereo VHR remote sensing images; it is a pre-processing stage for a further one, so it must be computationally fast, and the use of machine learning methods is not advised in our case;
- We do not have additional data, like reference images, atmospheric conditions and DSM, therefore physics and model based methods are not applicable;
- We have observed from the property based methods that the choice of features has a great impact on shadow detection results.

From that, we propose a new fast shadow detection technique using the RGB colour space and two features: spectral (intensity) and spatial (shape of shadows). Once shadow areas are located, they can be removed. There exist several shadow removal techniques, each one often depending on the shadow detection method used beforehand. Nevertheless, we can distinguish two main classes of shadow removal methods [Feng and Gleicher, 2008]: the first one zeroes the gradient in the penumbra (boundaries of shadow) and then applies an image reconstruction step [Finlayson et al., 2002;Weiss, 2001]. For example, Finlayson et al. [2006] achieved shadow removal in three stages: a 1D shadow-free illumination invariant image is created, from this a 2D colour representation is derived, and then a 3D shadow-free colour image is generated; the shadow edges are finally corrected by in-painting.
In Feng and Gleicher [2008], a shadow-free image is reconstructed by reintegration using the Poisson equation. Finlayson et al. [2009] proposed that shadows can be removed by minimizing entropy and, more recently, Qiang and Chee-Hung [2013] used the Fisher linear discriminant to produce the invariant images and reintegrated the derivative filter outputs to generate shadow-free images. The second class of methods consists of removing shadow by multiplying the shadow pixels by a suitable scalar. We can cite the work of Almoussa [2005], who minimized an energy function to obtain this scalar. In the same class, Murali and Govindan [2013] removed shadow by multiplying the R, G and B channels of the shadow pixels by appropriate constants. Most of the works in the first class of shadow removal methods cited above need multiple images, a calibrated camera and user intervention with a brush tool to specify the shadow boundary; in addition, reintegration using the Poisson equation is time intensive. The second class includes simpler methods than the first. Therefore, we were inspired by the energy minimization concept applied in Almoussa [2005] to propose a new shadow removal method, which consists in minimizing the energy between shadowed and illuminated (unshadowed) areas to find three suitable coefficients related to the three image components R, G and B. Our main contribution in the proposed removal method is the insertion of some filters to improve shadow removal, particularly in the penumbra area. The proposed shadow removal method is novel for VHR urban remote sensing images. In this field, where the phenomenon is complicated because it occurs in dense and variable areas, our method contributes strongly to improving the result of the buildings stereo matching process.

The considered shadows

Shadows occur when objects totally or partially occlude direct light from a source of illumination. Shadows can be divided into three classes: cast, self and boundaries (Fig. 1). These classes can be considered as three kinds of usefulness shadow (Fig. 2). A cast shadow is projected by the object along the direction of illumination, a self shadow is the part of the object which is not illuminated by direct light, and boundaries consist of the edge of the shadow, called the penumbra. Most of the methods proposed in the remote sensing field only deal with cast shadows [Arévalo et al., 2008]. In this paper, our challenge is to deal with both cast and self shadows, and we are also interested in correcting penumbra areas. We mention that we have to treat only large surfaces of shadow, called "usefulness shadows", because the smallest ones situated outside the usefulness shadows are considered "helpful" for a subsequent processing stage, buildings extraction; we call them "useful shadows" (Fig. 1). We notice that we predefine this classification of shadows into "usefulness" and "useful" shadows (Fig. 2). Figure 3 shows the flowchart of the proposed shadow detection and removal method used to improve the buildings stereo matching process. The method used to put buildings from the pair of images in correspondence was applied previously by Zigh and Belbachir [2012].

Shadow detection step

In this paper we are interested in removing large areas of shadow, called usefulness shadows; for that, we should detect these shadows beforehand.
Keeping the initial colour space of our pairs of images (RGB), the proposed shadow detection method is based on the spectral and spatial information of the image. It consists of the Otsu thresholding algorithm, which calculates an optimal threshold that minimizes the intra-class variance, after which some spatial filters are added in order to preserve only the usefulness shadows. So, firstly, the algorithm separates shadows (cast and self-shadows) from non-shadows by thresholding at the level t* that minimizes the weighted intra-class variance:

σw²(t) = w0(t)·σ0²(t) + w1(t)·σ1²(t)

with w0(t) and w1(t) the probabilities of the two classes separated by the threshold t, and σ0²(t) and σ1²(t) the variances of these classes. As a result, we obtain a binary image with black shadowed areas corresponding to zero pixels (Fig. 3b). However, the Otsu algorithm does not offer the opportunity to extract only large shadowed regions (usefulness shadows); for that, in the second phase, the useful regions, considered here as artefacts, are deleted using the following spatial filters:

a. Filtering areas outside the usefulness shadows (these areas are called useful shadows): each black or low-brightness object existing outside the usefulness shadows is detected as a shadow, which creates confusion (Fig. 4b). Such an object can be a small surface of shadow that is useful for us (useful in the building extraction step). For that, we apply a morphological closing filter to delete it (Fig. 4c);

b. Filtering inside the usefulness shadows: a white or high-brightness object existing in the usefulness shadows cannot be detected as a shadowed object (Fig. 4b), so it is deleted using a median filter (Fig. 4c).

Having detected the usefulness shadow regions, which can be cast or self (Fig. 4c), we have to remove them in the next step.

Shadow removal step

We propose a new shadow removal method based on an energy minimization concept followed by a penumbra correction technique. As we assume that the regions hidden under the usefulness shadow achieve almost the same illumination as the nearest non-shadow regions [Murali and Govindan, 2013], we need to compute the average value of each colour (R, G and B) inside and outside the shadow regions; the constant light is thus a three-element vector c = (cR, cG, cB). Equating, for each channel, the mean of the rescaled shadow region with the mean of the illuminated region and solving the three resulting equations gives:

ck = (mean of channel k over the illuminated pixels) / (mean of channel k over the shadow pixels), for k in {R, G, B}.

We notice that the shadow detection and removal method proposed above was applied on many urban pairs of images (around thirty pairs); we choose to illustrate in this paper only the results concerning the two most complicated shadowed urban scenes. As an example, we illustrate below a result of the shadow removal method for one image (Fig. 5), where the coefficient c of equation (6) is computed as follows (assuming that shadow is a binary mask equal to 1 inside the shadow regions, i.e. the complement of the binary detection image, and that Ain and Aout are the image restricted to the illuminated and the shadow pixels, respectively):

sumIN  = reshape(sum(sum(Ain)), 1, 3);   % per-channel sums over illuminated pixels
sumOUT = reshape(sum(sum(Aout)), 1, 3);  % per-channel sums over shadow pixels
Vin  = sumIN / sum(sum(1 - shadow));     % mean illuminated value per channel
Vout = sumOUT / sum(sum(shadow));        % mean shadow value per channel
c = Vin ./ Vout;                         % scaling factors applied to the shadow pixels

According to the obtained image (b) in Figure 5, we notice that the shadow is well removed, so the hidden objects are recovered. The only drawback is an over-illumination towards the boundaries of the shadow. The boundary of a shadow is one kind of edge, so it can be defined as an abrupt local change of intensity in an image. In the shadow removal method cited above, we multiply the shadow boundary by the coefficient "c" (see equation (6) above), which creates an over-illuminated shadow area (Fig. 5b and Fig. 6b). To overcome this issue, we propose a new penumbra correction technique which consists of two parts: penumbra detection and penumbra smoothness.

a. Penumbra detection: boundaries of shadow are detected using the binary image obtained in the shadow detection step (detailed in paragraph 3.1).
After that, a Canny filter is applied on this binary image, and morphological dilation is used in order to increase the thickness of the shadow boundaries. The final result of the penumbra detection step is illustrated in Figure 6c;

b. Penumbra smoothness: this is achieved using a 5×5 median filter, a nonlinear low-pass filter which reduces the amount of intensity variation between each penumbra pixel and its neighbours (Fig. 6d).

The proposed method in the buildings stereo matching process

The entire process was tested over the pair of stereo sample images generated by IKONOS 2 satellite data and used previously in Zigh and Belbachir [2012]. We have only one pair of stereo images and we are interested in the building stereo matching application in shadowed areas; for that, we choose only shadowed sub pairs (portions) acquired under different sun elevation angles and covering urban areas with elevated buildings. These sub pairs were selected using the Paint tool. To evaluate the performance of the proposed approach, the training set includes thirty shadowed sub pairs; we illustrate in this paper the results concerning only two typical urban sub pairs where significant shadow regions appear. We notice that the shadow existing in the second original sub pair (Fig. 8) is larger and more complicated than that in the first original sub pair (Fig. 7). The proposed shadow detection and removal method is applied to improve the building stereo matching process proposed previously in Zigh and Belbachir [2012]. This stereo correspondence is a neural method, and the quality of its result depends strongly on the fuzzy buildings extraction step done beforehand [Zigh and Belbachir, 2012]. To demonstrate the usefulness of the proposed shadow detection and removal method, we apply the stereo matching process first on the original pairs (shadowed pairs) and second on the recovered shadow-free pairs.

Results of stereo matching process on the original pairs (shadowed pairs)

Firstly, we show below the results of the stereo matching process applied on the first original shadowed pair of images. As illustrated above, the fuzzy buildings extraction step gives us a good segmentation of the regions of interest (buildings); however, it does not yield the same number of regions in each image, for two main reasons. Firstly, the difference in capture conditions between the right and left images affects region positions; as a result, some regions in the left image have no homologue in the right one (Fig. 9a2, b2), such as regions 18 and 26 in Figure 9b2. Secondly, the shadow hides many regions, so they cannot be extracted, like the region labelled 37 in the left image, which has no homologue in the right image (Fig. 9b2, a2). In total, we obtain only 25 regions in the right image (Fig. 8a2) and 43 regions in the left one (Fig. 8b2) (see Tab. 1). The result of this step (buildings extraction) has a direct influence on the next one, the neural stereo matching process. The obtained matching rate is 44% in the right image and 25.58% in the left one (see Tab. 1), with only one ambiguous region relative to nearby areas. We note that the applied neural stereo matching technique is based on a Hopfield network whose nodes represent the defined assumptions (the possible region correspondences), and the connections between them represent the constraints, including geometric and photometric region properties: surface, elongation, perimeter, colour and gravity centre coordinates [Zigh and Belbachir, 2012].
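As a side note, the shadow detection, removal and penumbra correction steps described in the previous sections can be sketched in Python with OpenCV as follows. This is a minimal illustration, not the authors' implementation: the kernel sizes and Canny thresholds are assumptions chosen for readability.

import cv2
import numpy as np

def detect_and_remove_shadows(img_bgr):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # 1. Otsu threshold: shadow pixels -> 0, lit pixels -> 255.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2. Spatial filtering: closing merges small dark objects outside the
    #    usefulness shadows into the background; a median filter removes
    #    bright speckle inside them.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.medianBlur(mask, 5)
    shadow = mask == 0
    # 3. Removal: multiply each channel of the shadow region by the ratio of
    #    mean illuminated value to mean shadow value (equation 6).
    out = img_bgr.astype(np.float32)
    for k in range(3):
        c = out[~shadow, k].mean() / max(out[shadow, k].mean(), 1e-6)
        out[shadow, k] *= c
    out = np.clip(out, 0, 255).astype(np.uint8)
    # 4. Penumbra correction: Canny edges of the mask, thickened by dilation,
    #    then smoothed with a 5x5 median filter.
    penumbra = cv2.dilate(cv2.Canny(mask, 100, 200), kernel) > 0
    smoothed = cv2.medianBlur(out, 5)
    out[penumbra] = smoothed[penumbra]
    return out, mask

Returning to the matching process itself: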
The optimization problem is solved by minimizing an energy function: the state of each neuron is updated in order to perform the network evolution and allow it to settle into a stable state. The stable state represents the best solution (each neuron represents a possible correspondence between a right region and a left one).

Secondly, we apply the stereo matching process on the second pair of images, which can be considered one of the most complicated shadowed VHR remote sensing pairs. We notice that the shadow surface varies from the left to the right image covering the same scene. This variability increases with shadow complexity, which is more prevalent near skyscrapers (Fig. 10c, d). Consequently, not all shadowed buildings could be extracted; we clearly see in Figure 10c2, d2 that they are fused with the background. We obtained from the fuzzy buildings extraction step 35 regions in the right image and 43 regions in the left one. As a result, shadowed buildings are not matched; the neural stereo matching rate in the right image is smaller than that obtained in the first sub pair (31.42% compared to 44%), with three ambiguous regions (regions labelled 3, 6, 7). Concerning the left image, we obtained the same stereo matching rate of 25.58% in the two sub pairs, but with three ambiguous regions in the second sub pair (regions labelled 3, 7, 6) (Tab. 1).

Results of stereo matching process on the recovered shadow-free pairs

Now, we apply the proposed shadow detection and removal method at the beginning, in order to detect hidden regions which will then be extracted. As indicated previously (in section 2, The considered shadows), we are interested in recovering only the large shadow areas (usefulness shadows); the smallest ones, situated outside the usefulness shadows, are helpful in our case, so we can easily see in Figure 11aa1 that they were merged with the background. Therefore, the result of the fuzzy buildings extraction step is efficient. Many more regions are obtained compared to the first case (first shadowed sub pairs in Fig. 9): there are 33 regions in the right image and 46 regions in the left one. This buildings extraction result has a direct and positive impact on the neural stereo matching process. An example of the recovery of a totally hidden building is the pair of regions labeled 12 in Figures 11aa3 and bb3. An example of the recovery of a partially hidden building is the pair of regions labeled 16 in the same Figures 11aa3 and bb3. As a result, we obtain an interesting stereo matching rate: 63.63% in the right image and 45.65% in the left one (Tab. 2), corresponding to an average improvement of 19.85% over the first stereo matching case (first shadowed sub pair).

After that, we apply the stereo matching process on the second recovered shadow-free sub pair. The obtained results are illustrated below. Concerning this dense urban pair of images, which has one of the most complicated and expanded shadows, the proposed shadow detection and removal approach allows an interesting recovery of shadow-affected objects; we can show for example the recovered regions 8 and 15 (Fig. 12cc3 and dd3), which were totally hidden in the original pair of images (Fig. 10c1 and d1). Another example, concerning partially hidden regions, is the region labelled 2 (Fig. 10c1 and d1), which is completely restored after shadow treatment (Fig. 12cc3 and dd3). As a result, we obtain 56 extracted regions in the left image and 50 extracted regions in the right one (Tab. 2); let us remember that before the application of the shadow detection and removal algorithm we had detected only 43 extracted regions in the left image and 35 extracted regions in the right one (Tab. 1). The result of the fuzzy buildings extraction step influences the neural stereo matching process positively, so we obtain a matching rate equal to 32.14% for the left image and 36% for the right image.

Computational time of the improved buildings stereo matching process

The computational time is almost proportional to the size of the input images (i.e. the original images) and the applied algorithm. In our study, during each program execution, the proposed algorithm is applied on the right and the left images at the same time, each one having a size of 208×287 pixels. The program is executed using an Intel Core i3 machine with a 64-bit operating system and 4 GB RAM. We have shown above (Tabs. 1 and 2) that the ability of the proposed shadow detection and removal approach to reveal details covered by shadows directly influences the neural stereo matching result, so the improved buildings stereo matching process is more accurate and its rate is raised. To demonstrate the speed of the improved buildings stereo matching process, we calculate the computational time with and without shadow processing. For the first pair of images, without shadow detection and removal the total time cost is 9.04 seconds (4.33 seconds for the stereo matching step), and with the shadow detection and removal step the total time cost is 12.62 seconds (6.93 seconds for the stereo matching step). For the second pair of images, without the shadow detection and removal step the total time cost is 13.13 seconds (6.74 seconds for the stereo matching step), and with the shadow processing step the total time cost is 17.62 seconds (7.13 seconds for the stereo matching step). We notice that about half of the total processing time is consumed by the neural stereo matching step; this time is needed by the neurons to reach a stable state. Nevertheless, the shadow detection and removal processing takes only a few seconds, which could be very promising for a large range of applications (the other half of the processing time includes image reading and plotting, shadow processing and region labelling). The second pair of images includes one of the most complicated and expanded shadows, so it needs more computational time to give an appropriate result. In general, with a memory footprint of 2.10 GB and a processing time of at most 17.62 seconds in the most complicated case, the computational time necessary to improve the buildings stereo matching process is considered low; it could be reduced much more using a faster processor and a larger RAM size, such as 8 GB.

Conclusion

A new method to detect and remove shadows from pairs of RGB very high resolution remote sensing images is proposed. We have chosen to operate on Ikonos images as an example because they can be considered among the most prevalent shadowed remote sensing images. A primary goal of the proposed method is to deal with the cast and self components and the boundaries of usefulness shadows; for that, a hybrid detection method which combines spectral and spatial information is used. It does not require any a priori information and it operates directly on the RGB colour space, without the colour space conversions used in Murali and Govindan [2013] and Krishna et al. [2012]. It is simple and efficient, and avoiding the colour space conversion clearly shortens the processing time.
After that, the shadow is removed using an energy minimization method; our main contribution in the proposed removal method is the insertion of some filters to improve shadow removal, particularly in the penumbra area. This removal method is automatic and efficient. A second goal of the proposed shadow detection and removal method is to improve a neural building stereo matching process in a complex urban environment. In stereo images, although the surfaces of shadows change from the left to the right image covering the same scene, we notice that shadows in the recovered images obtained from the proposed method are either considerably attenuated or effectively removed; from that, new pairs of regions (buildings) appear, and as a result we obtain an interesting improvement of the stereo matching rate and a good reduction of ambiguous regions. Experimental results show that the stereo matching method [Zigh and Belbachir, 2012] applied on the recovered shadow-free pairs gives better results than the same one applied on the original (shadowed) pairs. We can therefore conclude that the whole method in this paper is simple, fast and efficient. The application of the proposed shadow detection and removal method is not limited to the stereo matching of buildings; it can easily be applied to restore and to put into correspondence other kinds of regions, like cars or facets of buildings.
Co-sponsorship networks in the Brazilian Congress: An exploratory analysis of Caucus influence

A defining characteristic of the Brazilian Congress is the informal self-organization of legislators from different parties into issue-specific caucuses, known as bancadas. Though not official entities and often loosely delineated, these groups often play central roles in proposing and approving or blocking legislation. Presidents often negotiate directly with the more influential caucuses instead of going through formal political parties. Caucuses can thus have great influence on public policy, both as a locus of lobbying and rent-seeking and as a means of representation and governability. In this paper we use network theory to measure the links between legislators using the co-sponsorship of proposed bills. The analysis identifies and ranks the main caucuses and provides a measure of their influence and power. The networks that emerge from these exercises show that, at least at the level of co-sponsorships, caucuses can provide a better description of the legislators' behavior and interaction than political parties.

Introduction

A perennial controversy in Brazilian political science research revolves around the nature and impact of political parties. The Brazilian version of open-list proportional representation, federalism, campaign finance, and other legislative rules is conducive to fragmentation and a large number of parties. Early research focused on the lack of ideology and coherence of Brazilian political parties, which it associated with gridlock, pork, and high budget deficits (Ames 2002; Mainwaring 1999; Lamounier 1994). A subsequent view, however, analyzed roll-call votes and other data from legislative behavior to argue that despite the high level of churn of political parties and the apparent ideologically incompatible switching by legislators across parties, these were nevertheless organic, institutionalized, and coherent, at least within Congress if not so much in the electoral realm (Limongi and Figueiredo 1999; Santos 2003; Neto 2006; Pereira and Mueller 2003). In this view, parties play a crucial role in the intermediation process through which a strong agenda-dominating president negotiates support in exchange for pork. As bad as that sounds, this literature has argued that these political institutions can be conducive to higher governability and reforms (Alston and Mueller 2006; Pereira and Mueller 2000; Bertholini, Pereira and Renno 2018).1 Much of this debate took place at a time in which the presidency was dominated by the two most clearly ideologically coherent parties (PT, the Workers' Party, 2003-2015, and PSDB, the Social Democratic Brazilian Party, 1995-2002).

1. Many of the formal institutions that determine the power and function of Congress in Brazil are similar to those of the US Congress, though there are several important differences, and even more informal and contextual differences. The main role of Congress is to pass laws and to serve as a check on Presidential action. Congress is bicameral, composed of a House of Representatives (513 representatives distributed proportionally to state population serving four-year terms) and a Senate (81 senators, three per state serving staggered eight-year terms). Senators are elected by majoritarian rule and Representatives by proportional representation. This latter rule leads to a highly fragmented party system, so that the President's party is unlikely to have a majority and must therefore build a ruling coalition.
Recent changes in the Brazilian political context, however, raise the question of whether the basic logic of these analyses remains valid or if the system is undergoing a fundamental change. The two dominant parties have been emasculated by a series of scandals, an extended economic downturn, and the inability to adapt to a new political landscape dominated by social media. The new President, Jair Bolsonaro, has weak connections to political parties and has made little effort to use traditional means to negotiate support with Congress, opting instead to seek support directly from his political base. He was elected after joining a small party that grew due to his influence in the 2018 presidential campaign. Less than one year later he had already left, and by 2021 he still did not have a stable party affiliation. In this shifting political landscape, however, it is not clear that political parties have been weakened or lost influence. In the vacuum left by the lack of presidential attention to executive-legislative relations, Congress has taken on a new protagonism, for example by leading the approval of the important reform of the social welfare system.

In this context of a fragmented and shifting multi-party system, one of the most interesting adaptations has been the emergence of informal issue-specific trans-party groupings of legislators, known as bancadas or caucuses. Whereas a bipartisan system, such as that in the US, naturally reflects and accommodates most of society's cleavages, a fragmented party system has trouble partitioning the policy space and political agenda across over 28 different parties. It is natural that informal or non-institutionalized ways to deal with this pressure should arise through lobbying, the judicialization of politics, bureaucratic activism, NGOs, and organized civil society, among other manifestations that seek to fill the void of more formal modes of representation. In 2005 the House of Representatives institutionalized one of the early manifestations of the tendency for legislators from different parties, but with some affinities in agenda, to self-organize. It started registering 'parliamentary fronts' (PF), each focused on a specific area of legislation, supported by at least one-third of the members of Congress, and with an official representative (Silveira and Araújo 2019). The fronts could use office space within the House but did not receive financing. They were not given any official rights or duties in legislative proceedings, and there is great variability in their level of organization. Despite the ambiguous role they played, the number of registered fronts grew from 113 in 2003 to 241 by 2015. Typically, each new legislature brings a flurry of newly registered fronts, with the number falling by the next electoral year (Silveira and Araújo 2019). Although the parliamentary fronts arise from the incapacity of the traditional party system to mediate society's demands for representation, it is not the case that they supersede or substitute for political parties. Both Coradini (2010) and Silveira and Araújo (2019) see a more synergistic relationship, with parties and fronts not necessarily playing competing roles. The fact that there are so many parliamentary fronts means that most of them tend to be focused on fairly specific themes or policy issues.
For example, while there is a more encompassing Parliamentary Front for Human Rights, there is also a PF in Defense of the Rights of Women, another for the Rights of Children and Teenagers, as well as one for the Victims of Violence and another in Support of Indigenous People, among several others (Silveira and Araújo 2019). Because each PF must attain the registered support of one third of all legislators, the high number of fronts also reflects a common strategy of each legislator freely supporting several fronts, which in a way debases the power of their representation, similar to the way grade inflation deflates the merit of achieving an A. It was possibly these shortcomings of the parliamentary fronts that opened the way for the bancadas. These are also self-organized, thematic, and transversal, but are not formally recognized or constrained, and consequently are typically larger and more encompassing in their interests. Yet, despite their greater size and reach, the better organized caucuses are able to overcome the problems of collective action and their disparate party origins to come together around their common interests. Recognizing the caucuses' ability to act in unity within the fragmented legislature, Presidents have often reached out to them directly, trading policies on specific issues for support on important reform agendas and circumventing traditional parties. For example, the Rural Caucus may receive policy concessions regarding environmental regulation, indigenous land demarcation, and rural credit debt in exchange for support on social welfare reform. Similarly, the Evangelical Caucus may require obstruction of gay marriage legislation and of stem cell research in exchange for support on a tax reform bill.

While caucuses have become more prominent in the Brazilian legislative process in recent years, and especially in the Bolsonaro administration, their informal standing makes it hard to assess in greater detail the magnitude and nature of their impact. It is also not clear to what extent and in which realms of the legislative process and of political representation the bancadas supersede political parties. Because of the effect they can have on economic development and policymaking, understanding the role of bancadas in the Brazilian legislative process is important beyond their relevance to the political science literature. Congressional caucuses are one of the mechanisms through which interest groups and social forces affect the allocation and distribution of rents and resources through government policy in Brazil, and they are therefore relevant for understanding the country's developmental process, including the persistence of inefficient policies and institutions.

In this paper we use network theory applied to co-sponsorship relations among legislators as a bottom-up method for identifying and ranking caucuses and measuring their level of coherence in relation to each other and to political parties. By using community detection algorithms on over 30 thousand co-authorship links between legislators we can identify which subsets of legislators form organic groups. These groups can then be analyzed to establish which issue area each community represents. Similarly, we can compare how well the emergent structure of the estimated network is explained by membership in the communities relative to membership in political parties.
A co-sponsorship relation between two legislators arises when a legislator signs on as a co-sponsor to a piece of legislation being proposed by another. This practice has existed in the U.S. Congress since as early as the 1930s (Campbell 1982; Fowler 2006a) and exists in some form or another today in most other countries' legislatures, including Brazil.2 Because a sponsorship does not seem to constrain posterior voting behavior, it is not obvious why this practice has become so ubiquitous. A relatively large literature has sought to explain co-sponsoring as a form of signaling to constituents (Campbell 1982), to other legislators (Kessler and Krehbiel 1996; Caldeira, Clark, and Patterson 1993), and to interest groups and campaign contributors (Rocca and Gordon 2010), as a way to facilitate logrolling (Bernhard and Sulkin 2009), or to increase the chance of the proposal's approval (Browne 1985), among other explanations. This literature claims that co-sponsorship patterns can provide valuable information about legislators' behavior and legislative outcomes and that for many purposes they are preferable to roll-call data (Aleman et al. 2009; Desposato, Kearney, and Crisp 2011). Following the literature that analyzes co-sponsorships using network theory (Fowler 2006a; 2006b; Burkett 1998; Porter et al. 2005), we interpret co-sponsorship relations as edges in a network where each legislator is a node.3 This gives us a network with more than 30 thousand links originating from all legislators who participated as co-sponsors in the 55th legislature of the Brazilian House of Representatives (2015-2019), including all proposals presented between 2011 and 2018. We analyze the network that emerges from this data to identify whether the groups that are revealed are simply the political parties or whether alternative patterns, such as the thematic caucuses, better explain the network's structure. For most countries co-sponsorship relations align closely with party membership. Figure 1 shows a sample of co-sponsorship networks for other countries.4 We show in this paper that, exceptionally, in Brazil this is not the case.

The paper is organized as follows. In Section 2 we estimate the co-sponsorship network and explore different ways to explain its emergent structure. We partition the network, identify the communities, and rank them according to their ability to overcome the problems of collective action. In Section 3 we investigate the major caucuses and give examples of the propositions they pursued. Section 4 repeats the exercise with data from the Senate. We find that in this higher chamber caucuses do not play a similar role as in the House. Section 5 concludes.

Estimating co-sponsorship networks for the Brazilian House of Representatives

We use data on co-sponsorship in the Brazilian Chamber of Deputies obtained from the House of Representatives' online portal.5 The network is composed of all legislators in the 55th legislature (2015-2019) who at any point took part in a co-sponsorship relation. Legislators are the nodes and the edges are given by the co-sponsorship of a proposition. We use all propositions presented between 2011 and 2018, thus including some from the previous legislature, as long as the co-sponsors are part of the 55th legislature. We excluded proposals that require a high minimum number of co-sponsors, such as proposed amendments to the Constitution (PEC) and requests for the creation of a parliamentary commission of inquiry (CPI).
These types of legislation regimentally require signatures from one third of deputies (171), so their nature is different from voluntary co-sponsorships. Whereas the latter reflect affinity of interests and signaling to voters and colleagues, the former are more often a sign of logrolling. Note also that we treat all co-sponsorship relations as symmetrical and do not distinguish between the first and subsequent co-sponsors in the list.

Figure 2 presents the co-sponsorship network. It contains 582 nodes and 31,122 edges. Although the total number of deputies is 513, there are cases where alternates step in for the original office holder, hence the 582 nodes. In this first figure we do not color the nodes nor change their size to reflect additional information. The average degree of the network, that is, the average number of co-sponsorships, is 106.95. In contrast, a random network created by using a 20% probability that each pair of nodes is connected would have 33,913 edges and an average degree of only 58.47. We use 20% as this is approximately the ratio of the average degree of the true network to the number of nodes. The shape of the network and its average degree thus make it very unlikely that it is a random graph. While a random graph has a degree distribution that approximates a Poisson distribution, this is clearly not the case for the co-sponsorship network (Barabási et al. 2016). Yet, while the network does exhibit a non-random structure, the lack of major hubs indicates that it also does not follow a power-law degree distribution, as is sometimes the case with complex social networks. Given that the co-sponsorship network is not randomly generated, we would like to determine its data generating process.

Figure 2 - Co-sponsorship network of the 55th legislature

Our original data is in the form of a bipartite matrix that links congressmen to bills. A link exists when two congressmen co-sponsored a given bill. With such two-mode data, one approach is to directly model a bipartite network where both congressmen and bills are nodes. This direct approach has the advantage that the full structural features of the data are considered and shown. Another approach is to transform the two-mode data into one-mode projections, where congressmen are linked to other congressmen if they co-sponsored any bills in common, or where bills are linked to bills if they had co-sponsors in common. The conversion approach is often used when one mode is of more interest to the analyst than the other, so that the data on the other variable is simply used to indicate links for the variable of interest (Everett and Borgatti 2013). In our case, the interest is in the relationship between congressmen, so we use the one-mode projection into a matrix where the nodes are congressmen. The disadvantage of transforming the data in this way is that it can entail a loss of information about the structural features of the data. Everett and Borgatti (2013) discuss the conditions under which this loss of information from conversion is a problem and when it can be justified for allowing the use of a greater number of network analysis techniques that require square matrices. In addition, they discuss several methods that allow for the use of projections without the loss of information.
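As a concrete illustration of this construction, the sketch below builds the one-mode projection from toy two-mode data with networkx and compares the resulting average degree to an Erdős-Rényi benchmark. The bill identifiers and sponsor names are hypothetical placeholders, not the actual House portal data.

```python
import itertools
import networkx as nx

# Toy two-mode data: bill id -> list of co-sponsoring legislators.
# Hypothetical stand-in for the data scraped from the House portal.
bills = {
    "PL-100": ["A", "B", "C"],
    "PL-200": ["B", "C", "D"],
    "PL-300": ["A", "D"],
}

# One-mode projection: legislators become nodes, and an edge's weight
# counts how many bills the pair co-sponsored in common.
G = nx.Graph()
for sponsors in bills.values():
    for u, v in itertools.combinations(sorted(set(sponsors)), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

n = G.number_of_nodes()
avg_degree = 2 * G.number_of_edges() / n

# Erdos-Renyi benchmark using the 20% edge probability mentioned above.
random_graph = nx.gnp_random_graph(n, 0.2, seed=0)
random_avg_degree = 2 * random_graph.number_of_edges() / n

print(f"observed average degree:  {avg_degree:.2f}")
print(f"random benchmark degree:  {random_avg_degree:.2f}")
```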
We recognize that by using a one-mode projection of our two-mode data we run the risk of overestimating the relationship between congressmen, a risk that the previous co-sponsorship literature also incurs (Fowler 2006a; 2006b; Zhang et al. 2008; Briatte 2016; Lee, Magallanes, and Porter 2017). That is, it may be that we are identifying as members of the same community individuals who are not really associated in this way. Extensions of this paper could explore methods to deal with this overestimation and with the biased results it may present. For now, and in an exploratory way, we continue to use a weighted matrix, treated as if it were binary, that records for each pair of congressmen the number of bills they co-sponsored in common (Stram, Reuss, and Althoff 2017). This approach takes into account the different intensity of interaction across pairs of congressmen. The co-sponsorship network is thus derived from a weighted adjacency matrix A of ties between congressmen, where A_{i,j} is the quantity of bills that two congressmen mutually co-sponsored in the period. This is a symmetric matrix where diagonal elements are not considered, because a congressman cannot have ties with himself. We first listed, for each congressman, the bills he/she co-sponsored. Then, A_{i,j} was found as the number of re-occurrences in the lists of congressmen i and j.

Co-sponsorship networks for most countries typically exhibit the clear clustering by political parties shown in Figure 1 (Briatte 2016). In Figure 3 we redraw our network by coloring the nodes according to the political party of the deputy and setting the size of each node to reflect its degree (i.e., number of links). There are 27 parties in the network. Given high party membership turnover, we assign each deputy to the last party to which he/she belonged. The edges follow the color of one of the nodes. The caption shows the proportion of members from each party.

Figure 3 - Co-sponsorship network by political party

Contrary to what happens in other countries, there is no obvious division of the graph according to parties. This finding endorses the common view in public opinion and in much of the literature that parties are not the most relevant unit of analysis. For example, Gallagher (1991) points out that Brazil has the highest degree of party fragmentation in House elections among 100 countries. To investigate other patterns of organization in the network, we use community detection methods from network theory to try to explain the division of the graph. If not political parties, which groups do legislators divide into in Brazil? In the jargon of Brazilian political analysis, these communities are called bancadas. Thus, for now we use this as a generic term for the communities in the network. We use a modularity-seeking algorithm that partitions the network into groups that have a higher density than other divisions. Density is an intuitive measure of group cohesion. It is the number of existing relationships between nodes of a group divided by the number of all possible relationships between nodes. Thus, a group in which all nodes are connected will have a density of 100%. Communities are denser than other divisions, and we show that it follows from the division into communities that the bancadas we uncover are much denser than the parties themselves in the Chamber.
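The density measure just described is straightforward to compute for any candidate group (a party, a region, or a detected community). A minimal sketch, equivalent to calling nx.density on the induced subgraph:

```python
import networkx as nx

def group_density(G, members):
    """Edges that exist within the group divided by all possible edges
    between its members (equivalent to nx.density on the subgraph)."""
    sub = G.subgraph(members)
    k = sub.number_of_nodes()
    if k < 2:
        return 0.0
    return sub.number_of_edges() / (k * (k - 1) / 2)

# Example with a fully connected toy group: density is 100%.
G = nx.complete_graph(["A", "B", "C"])
print(group_density(G, ["A", "B", "C"]))  # 1.0
```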
Before doing so, however, we point out that two other potential criteria for dividing the network were investigated: the geographic region of the deputy and whether the deputy was re-elected in 2018 or not (lost or did not run). Neither of these suggests a discernible pattern that could explain the determinants of community self-organization. Given that the network is not well explained by party, by region, or by electoral performance, the next step is to partition the network into communities and then try to identify the underlying structure of that partition. A community is a group of nodes that has more relationships with each other than with the rest of the network, which is why in most countries the communities in a co-sponsoring network are the parties themselves. To partition the network, we applied the Louvain modularity algorithm (Blondel et al. 2008) and the ForceAtlas2 layout algorithm, with each legislator allocated to one and only one community. The result is shown in Figure 4, with the network divided into 25 communities.6 Each community was identified and labeled by examining the members of each group and using knowledge of their personal and political histories. The classification is straightforward, as in most cases the group's focal interest is reasonably obvious and uncontroversial. Most groups were identified as a thematic bancada or as a political party. In a few cases we were not able to pinpoint any common interest or origin of the group, and these have been simply marked with letters. There is also one group for each year composed of the leaders of each party. This happens for regimental reasons that require the leaders to cooperate in some specific circumstances. In the next section we discuss some of the main caucuses identified, but first we attempt to quantify the relative strength of each group.

6. The choice of 25 communities is somewhat arbitrary, as there is no established method for determining how many communities to look for (Newman and Reinert 2016; Riolo et al. 2017; Chen and Lei 2018). We used two criteria to determine this parameter. First, we chose not to have more communities than the number of political parties. Second, we sought a calibration in which the density of the largest communities would not be too diluted. The average density of the largest communities was highest with the number set at 25.

Table 1 lists the size and density of all the communities and of each political party in Figure 4. The density of the network as a whole is 18.5%, but the average across the detected communities is 66.0%. This is almost twice the 35.8% average density of the 25 main parties. This is already one indication that bancadas can be relatively more influential than parties. But to meaningfully impact legislative proceedings, cohesion may not be enough. It is also necessary for groups to be large and command enough roll-call votes. We measure this combination of size and cohesion through a metric we call 'strength'. It is calculated for each group by multiplying the group's percentage of all deputies in the House by the density of the group. The result is a relative measure that ranges from 0 to 100. A strength of 100 would occur in the hypothetical case of a community composed of all members of the Chamber and a density of 100%.
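A compact sketch of this pipeline, combining Louvain community detection (available in networkx 2.8+) with the strength metric defined above. The karate club graph stands in for the real 582-node co-sponsorship network, so the numbers are purely illustrative.

```python
import networkx as nx

def community_strength(G, communities, chamber_size):
    """Strength = (group's share of all deputies) x (group density),
    scaled to the 0-100 range used in the text."""
    scores = []
    for members in communities:
        share = len(members) / chamber_size        # size component
        density = nx.density(G.subgraph(members))  # cohesion component
        scores.append(round(100 * share * density, 2))
    return scores

# Toy network standing in for the co-sponsorship graph.
G = nx.karate_club_graph()

# Louvain partition: each node ends up in exactly one community.
communities = nx.community.louvain_communities(G, seed=1)

print(community_strength(G, communities, chamber_size=G.number_of_nodes()))
```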
The picture of political representation and legislative organization told by Table 1 and Figure 5 is very different from that found in most other countries, where the network usually partitions closely along partisan lines. Our results show that the bancadas are not mere window dressing or cheap talk. That seems to be the case instead for the legislative fronts (Frentes Parlamentares), as none of them shows up in the network. Although these groups are formally registered and officially recognized, whatever collective action they engage in does not translate into a community in the co-sponsorship network. The bancadas, on the other hand, though informal and unofficial, are some of the entities with the highest strength in Table 1. The first two places are held by the Evangelical caucus and by the Rural caucus. These have considerably greater strength than the first two parties in the list: DEM and PT. Of the first 10 communities in the list, 5 are bancadas, 3 are parties, one is unidentified, and another is the group of leaders.

Figure 4 - Communities in the co-sponsorship network

The relative rank of caucuses versus parties in Table 1 shows why presidents have increasingly circumvented traditional political parties to negotiate support for their agenda directly with these informal groups. Especially for contentious issues, such as pension reform, the President needs strong allies, that is, large and cohesive groups that can deliver support on the floor. Political parties in Brazil, however, tend to lack the cohesiveness to provide reliable support. Figure 5 shows the placement of the members of six major parties in our estimated co-sponsorship network. By comparing each isolated party's connections to the full network in Figure 5, it becomes apparent that parties do not have a homogeneous membership in terms of the interests expressed through co-sponsorships. The Brazilian party system does provide party leaders some means to discipline their members and get them to act in concert on many issues. Yet the fact that party members have such diverse interests illustrates some of the forces party leaders are up against. For many issues, it may be easier for Presidents to negotiate directly with the caucuses.

Which are the most important caucuses?

In this section we discuss some of the most prominent bancadas revealed by the co-sponsorship network. We give examples of the issues and specific legislation that they sought to promote, block, or alter. Not only is there no applied literature on co-sponsorship networks for the Brazilian Congress, but empirical efforts toward defining bancadas based on data are also missing. For Cascione (2018) it is noteworthy that an issue so prominent in national politics is so absent from academic scrutiny.

The Evangelical Caucus

We identify the largest group, with 61 members, as the Evangelical caucus. It is sometimes referred to in public discourse as the Bible caucus and tends to call itself the Family caucus. These less popular denominations are closer to the truth because the group includes some Catholic legislators. Figure 6 isolates this community and shows the names of the participating legislators. Node sizes have been made proportional to centrality. The density of the group, despite its large size, is high, at 72.9%. Only two political parties, PSOL and PEN, have a greater level of cohesion (Table 1).
No medium or large party comes close to the density of the Evangelical caucus.

Figure 6 - The Evangelical caucus

The most popular proposition supported by this group has an impressive 40 co-sponsors within the group. It seeks to block the Executive's act that criminalizes discrimination against transvestite and transsexual people in educational establishments. Other propositions of interest to this group are related to policies that deal with sexual orientation and the decriminalization of abortion, which makes it straightforward to identify the focus of this group. Two salient proposals that they promoted were Amendment No. 41 of 2012, to Project No. 2,330 of 2011, with 13 co-sponsors, and Bill No. 4,754 of 2016, with 11 co-sponsors. The first sought to ban the sale of alcoholic beverages at football stadiums during the 2014 World Cup. The second tried to criminalize behavior of Supreme Court Judges that they saw as usurping the authority of Congress or the Executive (presumably on issues such as abortion or drugs). The figure shows that Jair Bolsonaro, who was subsequently elected President of Brazil, is part of this bancada. Although he was often described as a backbencher from a small party with a meagre legislative resume, our methodology identifies him as one of the most relevant legislators in the largest community in the Chamber of Deputies.

The Rural Caucus

The second largest community in the co-sponsorship network is identified as the Rural caucus (Figure 7). It stands out for its high cohesion, as shown by the network graph and by the density level of 75.4%. This is a higher density than any of the largest parties and is one of the highest among all the big caucuses. Remember that density measures the number of existing connections for a given group in relation to the total number of possible connections. Density is therefore a good indicator of group articulation, organization, and like-mindedness. In the case of the Rural caucus the high density fits the perception of the group by the media, public opinion, and the academic literature as a strong and influential group (Alston and Mueller 2007; Vigna 2007; Simionatto and Costa 2012; Lima 2017). They are strong because they are both numerous and cohesive. That is, they can act concertedly despite their internal differences (Araújo 2013).

Figure 7 - The Rural caucus

The range of activities of the Rural caucus is wide and its means are varied. One of the main proposals in the time period of our data, with 17 co-sponsors, sought to sustain a normative instruction from the Executive Branch that approved phytosanitary requirements for coffee from Vietnam, which the sector saw as unfair competition. Another, also with 17 co-sponsors, appealed for a public hearing to press for more investment toward paving the BR-163 highway, considered strategic for the flow of agricultural production in western Brazil. The list of propositions reveals action by the group aimed at entities as diverse as the National Traffic Council (Contran), public banks, and the Itamaraty (the Brazilian Foreign Office). In addition to uncontroversial issues in straightforward agricultural areas, such as animal disease, there is also a discernible attention to corporate issues, including commercial protection; access to public bank financing; renegotiation of rural credit debts; taxation; sanitary requirements; public procurement; and trucking legislation, among others.
The salience of the Rural caucus in the network seems to be a corollary of a very well defined set of strategies used to pursue its interests. Among the more aggressive instruments used by this group are demands for depositions by ministers, which are harsher than simple invitations because they are more atypical and imply a crime of responsibility in the case of absence. There were summonses during this period, for example, for the Minister of the Environment to address the activities of the Brazilian Institute of the Environment and Renewable Natural Resources (Ibama), for the Minister of Finance (debt renegotiation), and for the Minister of Justice (demarcation of indigenous lands).

The Centrão

This is the third strongest community in Table 1. We cannot connect it to a specific theme, but the membership suggests it fits the description of the "Centrão", a somewhat amorphous group of central (more median) legislators who are often more interested in pork than in policies. Most are part of the "lower clergy", which is what backbenchers are often called in Brazil. The group has a very high level of cohesion, at 90%. It is well connected to other groups, is heterogeneous in terms of party origin, and has come together to support different topics. It is not quite accurate to call this group the "Centrão" (big Center), as this term makes it sound as if it is a homogeneous group of legislators with a similar centrist ideology, whereas it is actually very diverse in many dimensions but has in common a greater focus on pork than on policy. The main mobilization by this group was for a proposal to stop an act of the Federal Audit Court (TCU) that required bidding in more than 6 thousand lottery companies. Other themes that brought together members of the group include the tightening of penal legislation, the inclusion of the name of Miguel Arraes (a historic politician from the northeast) in the Book of Heroes of the Fatherland, as well as themes dear to the Evangelical and Rural caucuses.

Health caucus

The projects that unite this community of co-sponsors do not seem to be related to corporate interests. It is therefore not a caucus for interests such as health plans or pharmaceutical companies. Instead, it seems to represent diffuse interests of voters and consumers. The term "health" is used here because the propositions cover not only the Unified Health System (SUS), but also guidelines referring to women, social assistance, people with disabilities, and early childhood. Themes that stand out have to do with the career of community health agents, violence against women, and early childhood (for example, microcephaly). Proposals that deal with the approval of the use of the substance phosphoethanolamine, an experimental drug for cancer patients, are emblematic of this caucus.

Other Caucuses

In addition to the caucuses described above, several other smaller groups were also identified in the network. The fifth largest community represents interests of the state of Rio de Janeiro. Among the issues they pursued were demands for policies and action by federal agencies and organizations, such as the Pedro II School, Ipea (Institute for Applied Economic Research), Petrobras, and federal hospitals. The Transport caucus is made up mostly of members of the PMDB and focused on infrastructure projects. The Environmental caucus turned out to be relatively weak, with only 19 legislators and a density of 39.8%. This pales in comparison with their main rivals for policies, the Rural caucus.
This is probably an indication that whereas the latter seek to influence policy directly in Congress, environmental interests in Brazil adopt indirect strategies, convincing voters to pressure politicians rather than doing so directly (Yu 2005).

Co-sponsorship network in the Senate

The analysis thus far has referred exclusively to the Brazilian House of Representatives. Are the findings that caucuses play an important role in legislative proceedings also valid for the Senate? In this section we replicate the same analysis using co-sponsorship data for senators from the 54th (2011-2019) and the 55th legislature (2015-2019), since terms in the Senate are 8 years long. The network that emerges has 97 nodes (greater than the number of 81 senators because of substitutes) and 3,868 edges. The average degree is 79.75. This is a very high number, equivalent to 82% of nodes. For the Chamber, in the same period, the average degree was 106.95, or 18% of nodes. Thus, there is a much higher degree of cooperation in the Senate. This is in line with its stereotype of being a more collegiate and less fragmented chamber. In part this may be due to the smaller size of the Senate, but it can also be related to the different electoral rules (majoritarian rather than proportional). Also, the 8-year terms imply that senators tend to have longer relationships. While in the House the average density was 18.5%, in the Senate it is an impressive 83.1%. Because density measures the number of effective connections out of the total possible number of connections, this reflects a high level of cohesion in the Senate. Figure 8 shows the network, with nodes colored by party and sized in proportion to betweenness centrality. Once again, we can see that it does not resemble a random network. There is, however, an important difference from the network estimated for the House. Here parties seem to matter. This interpretation is confirmed in Figure 9, where we use the community detection algorithm, setting the partition to four groups to keep the proportion of legislators per community similar to what we did with the House data (we ignore the isolated nodes). Two communities are readily identified. The first is the PT Bloc, with 28 members. It is composed of most of the PT and PDT senators, and all of the PCdoB and Rede senators. This was the support coalition for President Dilma and against President Temer. The density of this group is 82.3%. Among the main propositions were 15 co-sponsors requesting a vote of reproach for the governor of Paraná (from the PSDB) "for the brutal action by Military Police against teachers", 17 co-sponsors demanding a plebiscite to require a new election for President in 2016 after the impeachment of President Dilma, and 19 co-sponsors for a bill to provide free bus passes to all students in the country. The second group, with 27 members, concentrates almost all PSDB senators and almost all DEM senators (except one in each case). It includes members from other parties, but 67% are from the PSDB or the DEM. It is a mirror image of the previous bloc: opposition to the Dilma government, but supporting Temer. The density is 80.1%.
Among its main propositions are 17 co-sponsors sending a letter to the President of Venezuela defending democratic principles, 17 co-sponsors in favor of reproaching the President of Unasur for a declaration against the impeachment, and 19 co-sponsors for the creation of a committee to present a proposal to switch Brazil to a parliamentary system of government. The dominance of these two groups in the network shows that, contrary to the House, the main organizing principle in the Senate is not the caucuses but rather party blocs. The third group, also with 27 members, has representatives from 11 parties. In addition to being heterogeneous, there is a temporal component, since it harbors many of the senators who made their debut in the 55th legislature. The group's cohesion is 99.7%. Among the main propositions are 14 co-sponsors supporting the continuation of the constitutional amendment project for increasing internal control over public policy, 15 co-sponsors for a project promoting capoeira (a traditional fight/dance), and 18 co-sponsors for the continuity of a project about products containing phenylalanine. Finally, there is a bloc of Peripheral States with 15 members, mostly from the PMDB, PP, and PR. Its composition seems to be determined by regional criteria, with no members from the South or from rich states like São Paulo and Rio de Janeiro. There is over-representation of the Midwest, and of the 10 states with the largest population only 1 is represented, while among the 10 states with the lowest population 7 are represented. Its cohesion is 100%. This bloc is an artifact of the electoral rules that give less populous states more power in the Senate in relative terms. The analysis of the co-sponsorship network suggests that the Chamber of Deputies is in fact a more fragmented house, more prone to the formation of bancadas. In contrast, in the Federal Senate, not only is the network organized primarily along party lines, but the level of collaboration between legislators is higher. The communities present themselves as party blocs, with an emphasis on the division between government and opposition. It is possible that these differences are related to electoral rules (proportional versus majoritarian), the size of the Houses (513 vs. 81), the number of committees, and the length of the term in office (4 years vs. 8 years). In the terms of Lijphart et al. (1999), the analysis suggests "strong bicameralism".

Conclusions

We have shown evidence that in the House of Representatives of the Brazilian Congress caucuses are an important organizational form besides parties. We identified a caucus, or bancada, as a community in the network of bill co-sponsorships. Because a community is a group of nodes that has more relationships with each other than with the rest of the network, communities in the legislative network are a good measure of caucuses: groups of legislators who act together. Although this result is in line with the conventional view in Brazil that parties are not the main organizing principle in Congress, it differs markedly from dozens of other parliaments across the world, where the communities of co-sponsorship networks almost always align with parties. In the House of Representatives during the 55th legislature (2015-2019) only a fraction of the members of PT, PSB, PSDB, and DEM were organized as communities, and only PSOL has all its members in the same community.
This does not mean that parties do not matter, as the co-sponsorship data reveals information on only some types of interactions among legislators. It does, however, suggest that the formation, working, and impact of caucuses in the Brazilian Congress deserve more attention. This exploration is the first to build co-sponsorship networks for Brazil and also seems to be the first attempt to create a method to identify Brazilian caucuses from public data. A better understanding of these networks and bancadas is relevant for the political economy literature. Several authors, such as Lisboa and Latif (2013), share the diagnosis that low economic growth in Brazil is explained by extractive institutions. They highlight that there is a "broad system of rent-seeking policies" and that "excessive protection and the dissemination of benefits resulted in high social costs". These policies are opaque, and Lisboa and Latif (2013: 51) conclude that "there is still a lot of work to be done, such as collecting all the evidence on the rent-seeking mechanisms, their economic effect and distortions, and assessing the role played by the political process on the development". We view this work as part of that effort and hope that future studies can shed more light on the legislative decision-making process. Interestingly, we did not find the same role for caucuses in the Senate, where there is instead greater cohesion among legislators. Whereas in the House it was possible to identify caucuses as large and cohesive communities, this was not the case in the Senate, where cohesion is already high in the chamber as a whole (four times higher than in the Chamber). Thus, parties seem to have a more relevant role in the Senate when it comes to organizing legislators. If in the Chamber we speak of bancadas, in the Senate we speak of "blocs", aligned according to government and opposition. There are several directions for future research to explore. The first is to use a two-mode network instead of the one-mode projection, as the latter may bias community detection, leading to overly dense caucuses. A second extension is to investigate the sensitivity of the results, that is, of the identified caucus composition, to the criteria used for cutting the data. A third direction is to try different ways of dividing the network into communities. We allocate each legislator to one and only one community, but other rules, for example allowing multiple affiliations, might be a more accurate depiction of legislator behavior. Another extension involves looking at how the community structure changes from one legislative year to another, and from one legislature to another, and how this is related to the changing salience of the political debates of the day. Yet another possible extension might be establishing a higher threshold of co-sponsorship to connect legislators, which is not trivial but could prove insightful (see Everett and Borgatti (2013)). Also, modern exponential random graph models (ERGM) could be used to model the data as a two-mode network and to predict support for the proposed projects (Lusher, Koskinen, and Robins 2013). Finally, it would be useful to replicate the same community identification methods using other sources of data on legislator interaction as the edges in the network.
Assessing Risks and Modeling Threats in the Internet of Things Threat modeling and risk assessments are common ways to identify, estimate, and prioritize risk to national, organizational, and individual operations and assets. Several threat modeling and risk assessment approaches have been proposed prior to the advent of the Internet of Things (IoT) that focus on threats and risks in information technology (IT). Due to shortcomings in these approaches and the fact that there are significant differences between the IoT and IT, we synthesize and adapt these approaches to provide a threat modeling framework that focuses on threats and risks in the IoT. In doing so, we develop an IoT attack taxonomy that describes the adversarial assets, adversarial actions, exploitable vulnerabilities, and compromised properties that are components of any IoT attack. We use this IoT attack taxonomy as the foundation for designing a joint risk assessment and maturity assessment framework that is implemented as an interactive online tool. The assessment framework this tool encodes provides organizations with specific recommendations about where resources should be devoted to mitigate risk. The usefulness of this IoT framework is highlighted by case study implementations in the context of multiple industrial manufacturing companies, and the interactive implementation of this framework is available at http://iotrisk.andrew.cmu.edu. I. INTRODUCTION An increasing number of everyday devices are being connected to the Internet, creating a network of physical devices, home appliances, and other items embedded with unique sets of electronics, software, sensors, and actuators. Forecasts predict that by 2025, about 73 billion Internet of Things (IoT) devices will be employed throughout the world [1]. This rapid growth in the IoT creates opportunities for more direct integration of the physical world into computer-based systems but also introduces a plethora of risks. IoT devices, whether they be part of smart transportation systems, thermostats that adapt to daily lifestyles, or medical devices that can monitor a patient in real time, are not always built with security and privacy in mind, leaving them extremely vulnerable to attacks. An example of a pervasive IoT attack was the Mirai botnet, where an attacker gained unauthorized access to numerous IoT devices including IP cameras and older routers. These infected devices were used to make much of the Internet unavailable by overwhelming Dyn, a domain name system (DNS) provider. The malicious code took advantage of the fact that most users do not change the default usernames and passwords on their devices [2]. In addition to this example, there have been other instances of IoT botnets [3] including Persirai [4], Hajime [5], BrickerBot [6], and one targeting a university [7]. Other IoT devices are also susceptible to attacks, as researchers demonstrated that St. Jude Medical pacemakers can easily be compromised from up to 50 feet away, allowing attackers to administer inappropriate pacing or shocks or causing rapid battery depletion [8]. Other medical IoT devices can also be easily compromised including insulin pumps [9] and medical infusion pumps [10]. In addition, smart locks [11], smart lights [12], security cameras [13], smart TVs [14], and smart toys [15] have been shown to be susceptible to attacks. 
The susceptibility of IoT devices to these types of attacks is largely due to a few factors that distinguish the IoT from other areas such as information technology (IT) [16]-[18]. One factor is the presence of unfixable flaws. Since the lifetime of many IoT devices is quite large, devices containing vulnerabilities or flaws will continue to be deployed and used long after vendors cease to produce or support them. Secondly, the diversity found across IoT devices and the settings in which they are used means that traditional approaches of discovering attack signatures are insufficient and unscalable. Lastly, the implicit and explicit complex interactions between IoT devices allow attackers to leverage cross-device dependencies, couplings, and dynamic environments to increase the attack space [19]. Because these attacks can compromise large numbers of IoT devices with ease and because many factors that contribute to IoT device susceptibility are unique to the IoT, organizations must assess threats and risks in IoT environments with IoT-specific frameworks and tools. Organizations may play the roles of IoT consumers, IoT producers, and/or platform operators that provide software to which customers connect their devices. Regardless of the role, each organization must assess and respond to its own IoT security and privacy risks, deciding which actions or controls to take to mitigate those risks. Consequently, this paper firstly synthesizes and adapts previous threat models to design an IoT-specific threat modeling framework. Secondly, the IoT-specific threat modeling framework is leveraged to develop an online interactive tool (http://iotrisk.andrew.cmu.edu) which implements a joint risk and maturity assessment framework that provides organizations with automated, quantitative analysis and recommendations about how to mitigate risk in their IoT ecosystems. Lastly, the usefulness of this interactive tool and IoT framework is demonstrated by analyzing multiple industrial manufacturing companies. The remainder of the paper is organized as follows. Section II introduces previous threat models, their shortcomings, and how they can be synthesized and supplemented with an IoT attack taxonomy to effectively model threats in the IoT. Section III presents an overview of the entire IoT framework used to analyze organizations' IoT-specific risk and demonstrates where the interactive tool fits into that framework. Section IV describes the implementation of the interactive tool in leveraging the threat modeling framework to conduct risk and maturity assessments. Section V applies this framework and tool to multiple industrial manufacturing companies, and Section VI concludes the paper. A. Previous Threat, Risk, and Maturity Models Currently, a variety of threat modeling, risk assessment, and maturity assessment frameworks are used by organizations to describe potential threats and indicate areas of improvement, each of which is centered around the measure of risk. 1) Current Approaches Risk is a measure of the extent to which an entity is threatened by a circumstance or event, and it is a function of the likelihood of the event and the adverse impact caused by the event. Risk assessment is commonly seen as a means to identify, estimate, and prioritize risk to national, organizational, and individual operations and assets.
The traditional approach to risk assessment has been to identify relevant threats to organizations, internal and external vulnerabilities to organizations, the impact or harm to organizations that may occur if vulnerabilities are exploited, and the likelihood that harm will occur due to a successful attack [20]. After the identified risks have been prioritized, appropriate and effective actions and controls are chosen that mitigate those risks. Several popular and well-regarded approaches to threat modeling and risk assessment include STRIDE [21], PASTA [22], Trike [23], NIST SP-800-30 [20], ISO/IEC 27005 [24], ISO/IEC 31010 [25], CRAMM [26], FRAP [27], COBRA [28], CORAS [29], and OCTAVE [30]. Throughout the threat modeling and risk assessment process that each of these approaches take, a variety of risk factors and the relationships between each of the factors are assessed to determine levels of risk. Typical risk factors include threats, vulnerabilities, likelihood, impact, and predisposing conditions. The threat modeling process characterizes the various adversarial actions or threats that can adversely impact organizational operations and assets. Identifying vulnerabilities includes finding weaknesses in systems, security procedures, internal controls, or implementation that can be exploited by a threat source. By taking into account adversary intent, capability, and targeting, likelihood denotes the probability that a given threat is capable of exploiting a specific vulnerability. Impact, which is partly determined by identifying high-value assets to the organization, represents the magnitude of harm that would result from a specific threat being carried out. Predisposing conditions are conditions that exist within an organization that may increase the likelihood of attacks causing adverse impacts to organizational operations and assets [20], [24], [31]. Each of these factors is used in threat modeling and risk assessments in a variety of ways, including quantitatively, qualitatively, or semi-quantitatively [20], [24]. Regardless of which approach is utilized, risk frameworks usually analyze risk by centering and developing around a particular starting point, including threat-oriented, impact-oriented, or vulnerability-oriented approaches [20], [31]. To mitigate the risks prevalent in a particular organization, security and privacy controls are implemented as safeguards or countermeasures to prevent and protect against threats. Maturity assessments are used to compile and evaluate the information needed by organizational officials in determining how effective security and privacy controls are in mitigating risks to organizational operations and assets. Maturity assessments are typically comprised of a set of assessment objectives, methods, and objects, and they provide the extent to which controls are implemented correctly, operating as intended, and meeting the security and privacy requirements of the organization [32]. 2) Shortcomings of Current Approaches Since existing threat modeling and risk assessment methodologies were established prior to the development of the IoT, they are ill-equipped to effectively address IoT-specific threats and vulnerabilities. Existing frameworks only focus on individual assets, devices, and communication platforms, but in the IoT, it is necessary for frameworks to thoroughly consider the relationships, processes, and couplings between IoT devices that arise from the complex and pervasive nature of IoT ecosystems.
For example, an attacker could compromise a smart plug to turn off the air conditioning, triggering a temperature increase. If there is a connected service that opens the windows when the temperature rises above a certain level, then this temperature increase may cause the windows to open, resulting in a physical security breach. However, existing frameworks focus on individual devices and do not consider these interactions where the response of one IoT device serves as the input to other IoT actors [33]. In addition, existing frameworks implement periodic assessments which make the faulty assumption that systems do not change significantly in short periods of time [34]. This assumption does not hold well in the IoT where there is much variability in system scale, dynamics, and coupling, implying that there is a high probability of a new system emerging between periodic assessments. Current frameworks also have a simplistic view of organizational assets, only seeing them as things of value. In the IoT, however, organizational assets are more than just things of value and can be platforms from which attacks are launched as in the case of the Mirai botnet [33]. Another shortcoming of current threat modeling and risk assessment methodologies is not taking into account the distinctive features that render IoT devices susceptible to attacks, including unfixable flaws, a wide range of diversity, complex interactions between devices, and a direct impact on physical processes. Existing frameworks, including the most popular approaches such as [20], [30], [35], are typically qualitative rather than quantitative and depend on subjective input from consultants. As depicted in Figure 1, the consultant is usually heavily involved in the computation and analysis of the threats and risks as opposed to simply gathering relevant information to input to an automated framework. This results in a lack of precision and much variability between recommendations provided by different consultants. For example, one person's view of a threat being low (as opposed to medium or high) might not conform to another person's view of the same threat. In addition, these frameworks oftentimes do not provide specific recommendations or initiatives and instead provide general ambiguous metrics from which specific initiatives have to be inferred. For example, ambiguous metrics include generic measures of "high," "medium," or "low" representing the probability of being compromised and the severity of impact for a broad organizational domain. While addressing these shortcomings, the subsequent framework synthesizes and adapts the strengths of previous frameworks while centering itself around an IoT-specific attack taxonomy that can be leveraged for automated analysis. B. IoT Threat Modeling Due to the shortcomings that are present in these insufficient approaches, it is paramount that we establish new threat modeling and risk assessment frameworks that analyze security and privacy risks unique to IoT ecosystems. While previous work has assisted in modeling IoT threats [36], [37], we wish to provide a full framework of IoT threat modeling, risk assessment, and maturity assessment, where the risk and maturity assessments are informed by the IoT threat models. Our approach is to design and leverage an IoT attack taxonomy as the basis for threat modeling and risk assessment. The IoT attack taxonomy is comprised of a comprehensive list of individual attacks. 
To address these shortcomings, the subsequent framework synthesizes and adapts the strengths of previous frameworks, centering itself around an IoT-specific attack taxonomy that can be leveraged for automated analysis.
B. IoT Threat Modeling
Given the shortcomings present in these approaches, it is paramount that we establish new threat modeling and risk assessment frameworks that analyze security and privacy risks unique to IoT ecosystems. While previous work has assisted in modeling IoT threats [36], [37], we wish to provide a full framework of IoT threat modeling, risk assessment, and maturity assessment, where the risk and maturity assessments are informed by the IoT threat models. Our approach is to design and leverage an IoT attack taxonomy as the basis for threat modeling and risk assessment. The IoT attack taxonomy comprises a comprehensive list of individual attacks. Because we cannot suitably classify every IoT incident into a taxonomy, we instead classify the individual attacks that make up the incident. Each attack functions as another step towards effectively carrying out the IoT incident, so that any IoT incident is composed of a series of individual attacks. Figure 2 provides a few examples of IoT incidents and the individual attacks that make up those incidents; the Mirai botnet that was used to carry out the attack on Dyn, for example, decomposes into such a series. As depicted in Figure 3, any individual attack can be broken down into four parts, each of which is associated with a particular dimension of the IoT attack taxonomy: an attacker with a specific set of 1) assets carries out a particular 2) action to exploit one or more 3) vulnerabilities, compromising a particular set of 4) properties. For example, the individual attack of adding compromised IoT devices to the Mirai botnet can be broken down into 1) an adversary possessing the IP address of a vulnerable webcam, general technical skills, and commercial PC equipment, who 2) carries out the attack by installing the Mirai worm on the vulnerable webcam, 3) exploiting the default password vulnerability, and 4) consequently compromising the authorized communications use of the webcam. Breaking down the steps of an individual attack into these four dimensions allows us to chain together the individual attacks that make up an IoT incident through the asset dimension, which includes prior information and resources obtained from previous individual attacks. For example, in the second individual attack of the Mirai botnet IoT incident, the attacker leverages the resources obtained in the previous individual attack, namely the authorized communications use of the webcam, to attack Dyn.
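The decomposition and chaining just described can be sketched as a small data structure; the class and field names are assumptions for illustration, while the Mirai values are taken from the example above.

from dataclasses import dataclass

@dataclass
class IndividualAttack:
    assets: set           # dimension 1: what the attacker must possess
    action: str           # dimension 2: mechanism carried out
    vulnerabilities: set  # dimension 3: weaknesses exploited
    compromised: set      # dimension 4: properties compromised

def initial_assets(incident):
    # Assets the attacker must hold up front: each step's requirements
    # minus the properties compromised (and thus gained) in earlier steps.
    gained, upfront = set(), set()
    for step in incident:
        upfront |= step.assets - gained
        gained |= step.compromised
    return upfront

mirai = [
    IndividualAttack({"webcam IP", "general technical skills", "PC equipment"},
                     "install Mirai worm", {"default password"},
                     {"authorized communications use of webcam"}),
    IndividualAttack({"authorized communications use of webcam"},
                     "flood DNS provider", {"traffic obstruction"},
                     {"availability of Dyn services"}),
]
print(initial_assets(mirai))  # only the first step's assets are needed upfront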
By using an IoT attack taxonomy as the foundation for threat modeling and risk assessment, we can comprehensively characterize and break down the complex interactions between IoT devices and cover the entire known IoT attack space. The IoT attack taxonomy also accounts for the unfixable flaws and diversity prevalent in the IoT, in addition to the IoT's direct relationship to physical processes. Consequently, the taxonomy explicitly accounts for the unique factors that distinguish the IoT in its susceptibility to various types of attacks. In addition, using an IoT attack taxonomy as the foundation for threat modeling and risk assessment enables the framework to be quantitative, reducing the variability that comes from depending heavily on the consultant. As seen in Figure 4, the IoT attack taxonomy enables the consultant to simply provide inputs to and analyze outputs of the automated framework, instead of being heavily involved in the framework itself as depicted in Figure 1. This foundation also allows the framework to easily adapt to the discovery of new attacks by simply adding or removing elements and relationships between elements in the taxonomy. Furthermore, the IoT attack taxonomy meets the criteria necessary for developing effective taxonomies [38]. This taxonomy is 1) mutually exclusive in the sense that each attack can only be classified into one category, since the categories do not overlap. The IoT attack taxonomy is also 2) complete and exhaustive in that the provided categories account for all possible known attacks. In addition, this taxonomy is 3) comprehensible in that it contains precise, clear, and concise information, so that any terminology and classification are not uncertain or confusing. Lastly, the IoT attack taxonomy is 4) useful by providing logical and intuitive categories that contribute insight to the IoT attack space, supporting easy classification of new attacks and allowing the addition of newly developed attack categories. A detailed view of the IoT attack taxonomy is available at http://iotrisk.andrew.cmu.edu.
1) Attacker Assets
The first dimension of the IoT attack taxonomy enumerates the assets, capabilities, and capacity needed by an attacker to carry out an IoT attack. This dimension is composed of six elements, each of which describes a different aspect of the attacker's capabilities. Many IoT-specific attack capabilities are included in this dimension, such as possessing an IoT botnet and being able to manipulate particular sensors. The six elements of this dimension are modeled after the standards presented in [20], [24], [39]-[41] and inform how likely it is for an IoT attack to be attempted. These six elements are described as follows. The 1) prior information gathered by an attacker refers to the prerequisite knowledge needed to carry out an IoT attack. This information can be publicly available, gathered through surveillance, supplied by an insider, or stolen. The 2) location or access of an attacker refers to where the adversary is when carrying out an IoT attack. This location can be remote, within wireless range, on the same network, or with physical access. The 3) equipment used by an attacker may refer to commercial hardware equipment, a distributed system, or specialized equipment or facilities that are necessary in order to carry out an attack. The 4) technical skills required by an adversary to carry out an attack can include basic skills, general programming skills, specific niche skills, or multiple advanced specific niche skills. The 5) time requirement refers to the duration of time needed to successfully carry out an attack, whether that be a short period, a long continuous period, or a specific time slot. Lastly, the 6) persistence requirements refer to how much an adversary needs to maintain a presence in order for an attack to be successful, and this can range from nothing to being able to adapt the attack to avoid detection.
2) Attacker Actions
The second dimension of the IoT attack taxonomy describes the actions and mechanisms an attacker would use to exploit vulnerabilities, and it is modeled after the standards in [20], [24], [39]-[41]. Each action is largely independent from both the locus of the vulnerability and the consequence of the exploit. The highest taxonomic level in this dimension refers to the attack mechanism, while the middle and lower levels of classification are given by the attack pattern category and the attack pattern itself, respectively. This dimension is composed of seven different categories of attack mechanisms, which include 1) collecting and analyzing information, 2) employing probabilistic techniques, 3) engaging in deceptive interactions, 4) manipulating data structures, 5) abusing existing functionality, 6) subverting access control, and 7) manipulating system resources. Each of these categories includes many IoT-specific actions such as tag tracking; node replication, spoofing, and injection; replay attacks; bypassing physical security; contaminating the physical environment; and hardware tampering.
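The first two dimensions can be written down compactly as follows; the Python identifiers are assumptions, while the categories themselves are exactly those enumerated above.

ASSET_SUBDIMENSIONS = {
    "prior_information": ["publicly available", "gathered through surveillance",
                          "supplied by an insider", "stolen"],
    "location_or_access": ["remote", "within wireless range",
                           "on the same network", "physical access"],
    "equipment": ["commercial hardware", "distributed system",
                  "specialized equipment or facilities"],
    "technical_skills": ["basic", "general programming",
                         "specific niche", "multiple advanced niche"],
    "time_requirement": ["short period", "long continuous period",
                         "specific time slot"],
    "persistence": ["nothing", "maintain a presence",
                    "adapt the attack to avoid detection"],
}

ACTION_MECHANISMS = [
    "collecting and analyzing information",
    "employing probabilistic techniques",
    "engaging in deceptive interactions",
    "manipulating data structures",
    "abusing existing functionality",
    "subverting access control",
    "manipulating system resources",
]

# Six asset sub-dimensions and seven action mechanisms, as in the text.
assert len(ASSET_SUBDIMENSIONS) == 6 and len(ACTION_MECHANISMS) == 7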
3) Exploitable Vulnerabilities
The third dimension of the IoT attack taxonomy is the dimension that the threat modeling and risk assessment frameworks are centered around. This dimension enumerates all the possible vulnerabilities that can be exploited by an attacker, classified according to vulnerabilities in 1) communications, 2) software, and 3) hardware, as suggested in [20], [24], [39]-[41]. Communications vulnerabilities are further divided into vulnerabilities relating to traffic encryption, traffic authenticity, privacy, protocol, and traffic obstruction, while software vulnerabilities are further divided into cloud control or storage interface vulnerabilities, embedded software vulnerabilities, and update mechanism vulnerabilities. Many of these software vulnerabilities are unique to the IoT, such as unnecessary privileges and no integrity check during updates, and many of the hardware vulnerabilities are also IoT-specific, including the lack of hardware tamper protection.
4) Compromised Properties
The fourth dimension in the IoT attack taxonomy describes the properties that may be compromised and their associated consequences when any one of the vulnerabilities in the third dimension is exploited. This dimension is also derived from the standards presented in [20], [24], [39]-[41]. While the security properties of confidentiality, integrity, and availability are central to most frameworks, this dimension of the IoT attack taxonomy adds other relevant properties that are specific to the context of the IoT, such as physical safety. Furthermore, common properties such as integrity imply different meanings depending on the context under study. For example, integrity in the context of network communication may refer to a message not being spoofed, whereas integrity for a thermal sensor can refer to it reporting the correct temperature. The security properties identified in this dimension of the taxonomy are 1) confidentiality breaches, 2) integrity breaches, 3) losses of availability, 4) authorization breaches, and 5) safety breaches.
C. IoT Risk Domains
In order to mitigate any threats or risks identified in the IoT attack taxonomy, appropriate security and privacy controls must be implemented. These controls can be classified according to the domain in which they mitigate risk. The risk domains used in this framework are designed and chosen such that all the vulnerabilities in the third dimension of the IoT attack taxonomy clearly fall within one of the risk domains. The risk domains, along with the specific controls that constitute those domains, are derived from standard frameworks including the OWASP IoT Project [37], the IoT Security and Privacy Trust Framework [42], ISO/IEC 27001 [35], NIST SP-800-53 [43], ISO/IEC 27002 [44], and ISO/IEC 33004 [45]. The nine risk domains identified are 1) governance and accountability, 2) physical security, 3) encryption, 4) systems security, 5) identity and access management, 6) event logging and monitoring, 7) supply chain security, 8) threat and vulnerability management, and 9) communications security. Since the actions taken to mitigate risk differ significantly between producers and consumers, two sets of controls exist for each risk domain, one for producers and one for consumers. These controls include both technical controls and management controls. Unlike current frameworks, these controls include actions such as physical access control, maintenance, and disposal that address IoT-specific characteristics like physical security. A detailed view of these controls is available at http://iotrisk.andrew.cmu.edu.
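The design constraint that every vulnerability class fall within exactly one risk domain can be expressed as a simple mapping check; the four sample assignments are illustrative assumptions (the full mapping lives at http://iotrisk.andrew.cmu.edu).

RISK_DOMAINS = {
    "governance and accountability", "physical security", "encryption",
    "systems security", "identity and access management",
    "event logging and monitoring", "supply chain security",
    "threat and vulnerability management", "communications security",
}

VULNERABILITY_TO_DOMAIN = {        # assumed assignments, for illustration
    "traffic encryption": "encryption",
    "traffic obstruction": "communications security",
    "update mechanism": "threat and vulnerability management",
    "hardware tamper protection": "physical security",
}

# Every vulnerability class must land in exactly one of the nine domains.
assert all(d in RISK_DOMAINS for d in VULNERABILITY_TO_DOMAIN.values())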
III. IOT FRAMEWORK OVERVIEW
Having presented the IoT attack taxonomy that serves as the basis for threat modeling and risk assessment, we now introduce the structure and overall approach of the interactive tool (http://iotrisk.andrew.cmu.edu) we developed to model threats and assess risks in the IoT. This overall approach can be broken down into three stages as summarized in Figure 5, all of which are founded on the IoT attack taxonomy.
(Fig. 5. IoT Framework Overview)
The identify stage comprises a consultant gathering relevant information from the organization and using that information to inform which portions of the IoT attack taxonomy are relevant. Once this is completed, an automated and quantitative assessment stage is initiated, which utilizes the information provided in the IoT taxonomy to compute a variety of metrics. These metrics are then used by the consultant in the recommend stage to provide specific suggestions and initiatives for the organization regarding its IoT ecosystems.
A. Identify
In the identify stage, a consultant compiles information about possible threat actors, potential vulnerabilities within the organization's IoT infrastructure, possible impacts suffered by the organization if it is attacked, and existing controls that the organization has in place to mitigate potential breaches. To ease the process of compiling this information, a few preset profiles such as threat actor profiles or IoT device profiles may be used. This stage is a central part of the threat modeling process. As depicted in Figure 6, the potential threat actors inform the taxonomy regarding how likely it is for an adversary to have access to a particular asset or to carry out a particular action. The potential vulnerabilities and existing organizational controls both inform the taxonomy by indicating how prevalent specific vulnerabilities are within the organization. Lastly, the negative impacts that an organization might suffer help inform the extent to which the properties within the taxonomy could be compromised.
B. Assess
The assessment stage, which is depicted in Figure 7, can be divided into two substages: risk assessment and maturity assessment. During risk assessment, the four dimensions of the IoT attack taxonomy are leveraged to compute measures of likelihood and impact for each vulnerability, representing the likelihood that a vulnerability is exploited and the negative impact that would result if it were exploited. These measures of likelihood and impact are then used to compute measures of inherent risk within various domains of an organization. Inherent risk simply refers to the raw or untreated risk present in an organization before any actions are taken to reduce that risk. During maturity assessment, information that was gathered in the identification stage is used to assign measures of how effective and well-implemented each control is. These measures are then used to compute control scores that describe how well each control mitigates risk. The risk and maturity assessments are then used in tandem to compute measures of residual risk within various domains of an organization. Residual risk is simply the amount of risk remaining after an organization has implemented all of its risk-mitigating actions, procedures, and policies. This information is then used to conduct a sensitivity analysis that demonstrates which specific controls best mitigate risk.
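The overall flow of the three stages can be summarized as a skeleton; all names and toy values are assumptions, and the actual quantitative content of the assess stage is the subject of Section IV.

def identify():
    # Consultant-supplied inputs, possibly drawn from preset profiles.
    return {
        "threat_actors": ["hacktivist", "financially motivated", "nation state"],
        "vuln_prevalency": {"default password": 0.7},
        "impacts": {"loss of availability": 0.9},
        "control_scores": {"patch management": 0.4},
    }

def assess(inputs):
    # Automated stage: inherent risk, maturity, residual risk, sensitivity.
    # Stub values stand in for the computations defined in Section IV.
    inherent = {"systems security": 0.6}
    residual = {"systems security": 0.3}
    sensitivity = {"patch management": 0.05}   # risk reduction per control
    return inherent, residual, sensitivity

def recommend(inherent, residual, sensitivity):
    # Consultant turns the metrics into initiatives, highest leverage first.
    return sorted(sensitivity, key=sensitivity.get, reverse=True)

print(recommend(*assess(identify())))  # ['patch management']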
C. Recommend
In the recommend stage, a consultant leverages the measures of inherent risk, residual risk, the control scores, and the sensitivity analysis to provide specific recommendations and initiatives to the organization about how to better mitigate risk within its various IoT environments. As seen in Figure 8, these recommendations include suggestions as to which actions and controls an organization can introduce or better implement to make its IoT environments more secure.
IV. INTERACTIVE TOOL IMPLEMENTATION
One of the main contributions of this paper lies in the development of an interactive tool (http://iotrisk.andrew.cmu.edu) that functions as the quantitative and automated assessment stage of the overall IoT framework. This section describes the contents of the interactive tool with respect to the implementation of the assessment stage and how this assessment leverages and grounds itself in the IoT attack taxonomy. The assessment stage seeks to comprehensively characterize the threats an organization faces by leveraging the classifications given in the IoT attack taxonomy. Based on the information provided by CAPEC [46] and OWASP [37], relationships or mappings between each dimension of the taxonomy are defined, providing a list of attacker assets that may be used to carry out specific attack actions, a list of attacker actions that may be used to exploit specific vulnerabilities, and a list of properties that are compromised when specific vulnerabilities are exploited. Using the IoT attack taxonomy and these relationships between each of its dimensions, measures of likelihood and impact can be calculated for each vulnerability, allowing a measure of inherent risk to be calculated for each IoT risk domain.
A. IoT Risk Assessment
To calculate measures of likelihood for each vulnerability, specific threat actor profiles are defined according to who might carry out an attack and their intention and motivation for carrying out the attack. Commonly chosen threat actor profiles include ideologically motivated threat actors such as a hacktivist, financially motivated threat actors, and state-sponsored threat actors such as a nation state.
1) Likelihood
To calculate the likelihood of a specific vulnerability being exploited, we consider the appropriate combinations of threat actors, attacker assets, and attacker actions that can be used to exploit the particular vulnerability. Using historical data and expert consensus, a likelihood score $p(s_i^j \mid t_k)$ is assigned to each asset $s_i^j$ given a specific threat actor $t_k$, representing the probability of threat actor k possessing asset i from sub-dimension j of dimension 1. In addition, a likelihood score $p(a_i \mid t_j)$ is assigned to each action $a_i$ given a specific threat actor $t_j$, representing the probability of threat actor j taking action i. The likelihood of the union of all possible combinations of threat actors, attacker assets, and attacker actions is used as the measure of likelihood for a particular vulnerability being exploited. By using this measure of likelihood, the IoT threat modeling framework comprehensively leverages all the information in the IoT attack taxonomy and all the relationships between its elements. We first calculate the likelihood of a specific threat actor possessing at least one of the attacker assets in sub-dimension j of dimension 1 that can be used to carry out action k.
Using the inclusion-exclusion principle and assuming independence between attacker assets, this likelihood is given by

$$p(S_k^j \mid t) = \sum_{m=1}^{N_k^j} (-1)^{m+1} \sum_{\substack{I \subseteq S_k^j \\ |I| = m}} \; \prod_{s_i^j \in I} p(s_i^j \mid t), \qquad (1)$$

where $S_k^j$ represents the set of assets from sub-dimension j of dimension 1 that can be used to carry out action k, $N_k^j$ represents the number of elements in this set, and $|I|$ represents the cardinality of set I. We then calculate the likelihood of a specific threat actor possessing at least one vector of assets $s_q$ that can be used to carry out action k. Because dimension 1 is composed of six sub-dimensions describing an attacker's assets, asset vector q refers to a set of six elements, one from each sub-dimension of dimension 1. Assuming independence between attacker assets, this likelihood is given by

$$p(S_k \mid t) = \prod_{j=1}^{6} p(S_k^j \mid t), \qquad (2)$$

where $S_k$ represents the set of asset vectors that can be used to carry out action k. Next, we calculate the likelihood of a specific vulnerability $v_r$ being exploited by a particular threat actor. Using the inclusion-exclusion principle and assuming independence between attacker assets and actions, we consider all possible combinations of asset vectors and actions that could be used to exploit the vulnerability. This likelihood is given by

$$p(v_r \mid t) = \sum_{m=1}^{N_r} (-1)^{m+1} \sum_{\substack{I \subseteq A_r \\ |I| = m}} \; \prod_{a_k \in I} p(S_k \mid t)\, p(a_k \mid t), \qquad (3)$$

where $A_r$ represents the set of actions that can be used to exploit vulnerability r and $N_r$ represents the number of elements in this set. Lastly, we calculate the likelihood $L_r$ of a specific vulnerability being exploited by at least one of the relevant threat actors. Using the inclusion-exclusion principle and assuming independence between threat actors, this likelihood is given by

$$L_r = \sum_{m=1}^{N} (-1)^{m+1} \sum_{\substack{I \subseteq T \\ |I| = m}} \; \prod_{t_k \in I} p(v_r \mid t_k), \qquad (4)$$

where $T$ represents the relevant set of threat actors and $N$ represents the number of elements in this set.
2) Impact
To calculate the impact associated with a particular vulnerability being exploited, we consider all the properties that are compromised when that vulnerability is exploited. Using historical data and expert consensus, an impact score $w_p^j$ is assigned to each property $p_j$, representing the monetary loss or damage incurred if property j is compromised. The impact $I_i$ of exploiting vulnerability i is simply calculated by adding together the associated impact scores, so that

$$I_i = \sum_{p_j \in P_i} w_p^j, \qquad (5)$$

where $P_i$ represents the set of properties that are compromised when vulnerability i is exploited.
3) Inherent Risk
To calculate the inherent risk associated with a particular vulnerability, we use the standard equation for technology risk, where the likelihood of exploiting the vulnerability is multiplied by the impact associated with exploiting the vulnerability [47]. When calculating the inherent risk $R_i$ for risk domain i, we carry out a weighted sum of the inherent risks associated with each vulnerability in risk domain i, where the weights are the vulnerability prevalency scores $q_j$. This computation is given by

$$R_i = \sum_{j \in V_i} q_j\, L_j I_j, \qquad (6)$$

where $V_i$ represents the set of vulnerabilities associated with risk domain i. To normalize the inherent risk so that all measures of inherent risk lie between a minimum value $r_{\min}$ and a maximum value $r_{\max}$, we scale the inherent risk by the maximum possible inherent risk. The maximum possible inherent risk occurs when all vulnerabilities are considered, the likelihood of exploiting any vulnerability is 100%, and the impact score for every vulnerability is the maximum possible impact score. The normalized inherent risk $R_i^{\mathrm{norm}}$ for risk domain i is given by

$$R_i^{\mathrm{norm}} = r_{\min} + (r_{\max} - r_{\min})\, \frac{R_i}{\sum_{j \in V} I_j^{\max}}, \qquad (7)$$

where $V$ represents the set of all possible vulnerabilities and $I_j^{\max}$ represents the maximum possible impact score for vulnerability j.
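A compact numeric sketch of equations (1)-(7) follows; the helper names and all scores are illustrative assumptions, and the helper uses the fact that, under the stated independence assumptions, the inclusion-exclusion expansion of a union reduces to 1 minus the product of the complements.

from math import prod

def union_prob(probs):
    # Inclusion-exclusion over independent events (eqs. (1), (3), (4)):
    # P(union) = 1 - prod(1 - p) when the events are independent.
    return 1.0 - prod(1.0 - p for p in probs)

# Eqs. (1)-(2): per-sub-dimension asset likelihoods for one threat actor,
# multiplied across the six sub-dimensions to get an asset-vector likelihood.
asset_scores = {
    "prior_information": [0.8, 0.3], "location_or_access": [0.6],
    "equipment": [0.9], "technical_skills": [0.7, 0.5],
    "time_requirement": [0.9], "persistence": [0.4],
}
p_assets = prod(union_prob(ps) for ps in asset_scores.values())

p_action = 0.5                              # p(a_k | t), toy score
p_vuln_given_actor = p_assets * p_action    # single-action case of eq. (3)
L = union_prob([p_vuln_given_actor, 0.2])   # eq. (4), two threat actors
I = sum([0.6, 0.3])                         # eq. (5), two properties
R = 0.7 * L * I                             # eq. (6), prevalency q = 0.7
print(f"L = {L:.3f}, I = {I:.1f}, inherent risk = {R:.3f}")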
B. IoT Maturity Assessment
The IoT maturity assessment conveys an understanding of how well an organization has implemented risk-mitigating controls, allowing a measure of residual risk to be calculated for each IoT risk domain. Each control $c_i$ is assigned a control implementation score $p(c_i)$ and a control effectiveness score $p(e_i)$ that represent how well an organization has implemented that particular control and how effective that particular control is in mitigating risk, respectively. By interacting with the organization, these control scores can be assigned using answers to a maturity assessment questionnaire.
1) Maturity Score
Using maturity scores, which represent the percentage reduction in inherent risk, a measure of residual risk can be calculated for each IoT risk domain. To calculate a maturity score for a specific vulnerability, we first calculate a mitigation percentage for each control, where the mitigation percentage equals how well the control is implemented times how effective the control is. The mitigation percentage $p(m_i)$ for control i is given by

$$p(m_i) = p(c_i)\, p(e_i) \qquad (8)$$

and represents how well control i mitigates risk. Using the inclusion-exclusion principle and assuming independence between controls, the maturity score $M_j$ for vulnerability j can be calculated as

$$M_j = \sum_{m=1}^{N_j} (-1)^{m+1} \sum_{\substack{I \subseteq C_j \\ |I| = m}} \; \prod_{c_i \in I} p(m_i), \qquad (9)$$

where $C_j$ represents the set of controls associated with vulnerability j and $N_j$ represents the number of elements in this set.
2) Residual Risk
To calculate the residual risk associated with vulnerability j, we multiply the inherent risk $L_j I_j$ associated with vulnerability j by the unmitigated risk percentage $(1 - M_j)$. When calculating the residual risk $Z_i$ for risk domain i, we carry out a weighted sum of the residual risk associated with each vulnerability in risk domain i, where the weights are the vulnerability prevalency scores $q_j$. This computation is given by

$$Z_i = \sum_{j \in V_i} q_j\, (1 - M_j)\, L_j I_j, \qquad (10)$$

where $V_i$ represents the set of vulnerabilities associated with risk domain i. The residual risk is normalized in the same way as the inherent risk, so that

$$Z_i^{\mathrm{norm}} = r_{\min} + (r_{\max} - r_{\min})\, \frac{Z_i}{\sum_{j \in V} I_j^{\max}}, \qquad (11)$$

where $Z_i^{\mathrm{norm}}$ is the normalized residual risk for risk domain i.
3) Sensitivity Analysis
After computing measures of inherent and residual risk, a sensitivity analysis is used to provide more detailed information about which specific controls should be better implemented to further reduce residual risk. In this framework, we conduct a one-at-a-time sensitivity analysis where a small value $\Delta p(c_j)$ is added to the control score $p(c_j)$ for control j, representing a small improvement in the implementation of control j. The normalized residual risk $Z_i^{\mathrm{norm}}$ is recomputed as described in equations (8) to (11), and a sensitivity score $\Delta Z_{ij}^{\mathrm{norm}}$ for risk domain i is assigned to control j. The sensitivity score $\Delta Z_{ij}^{\mathrm{norm}}$ is simply the difference between the original residual risk and the recomputed residual risk. This process is repeated for each control j in each risk domain i until all sensitivity scores $\Delta Z_{ij}^{\mathrm{norm}}$ have been assigned. The sensitivity scores in risk domain i are then ordered by decreasing magnitude to show which controls have the largest effect on further reducing residual risk in that domain.
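Equations (8)-(11) and the one-at-a-time sensitivity analysis can be sketched in the same style; all scores are toy values, and the independent-events helper from the previous sketch is repeated so the block stands alone.

from math import prod

def union_prob(probs):
    return 1.0 - prod(1.0 - p for p in probs)

def maturity(controls):
    # Eqs. (8)-(9): mitigation p(m) = p(c) * p(e), combined across controls.
    return union_prob([impl * eff for impl, eff in controls])

def residual(vulns):
    # Eq. (10): prevalency-weighted sum of unmitigated inherent risk.
    return sum(q * (1 - maturity(c)) * L * I for q, L, I, c in vulns)

# Each vulnerability: (prevalency q, likelihood L, impact I, controls).
domain = [
    (0.7, 0.5, 0.9, [(0.6, 0.8), (0.3, 0.9)]),
    (0.4, 0.2, 0.6, [(0.5, 0.5)]),
]
base = residual(domain)

# One-at-a-time sensitivity: bump each control's implementation score by a
# small delta and record the resulting drop in residual risk.
delta = 0.05
for vi, (q, L, I, ctrls) in enumerate(domain):
    for ci, (impl, eff) in enumerate(ctrls):
        bumped = ctrls[:ci] + [(min(1.0, impl + delta), eff)] + ctrls[ci + 1:]
        new = residual(domain[:vi] + [(q, L, I, bumped)] + domain[vi + 1:])
        print(f"vuln {vi}, control {ci}: sensitivity = {base - new:.4f}")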
V. INDUSTRIAL MANUFACTURING CASE STUDIES
A. Company X
To test the accuracy and effectiveness of the proposed threat modeling, risk assessment, and maturity assessment framework, we applied the framework to analyze an anonymous Company X. Company X is a small industrial manufacturing company located in the midwestern United States with 5,000-10,000 employees and approximately $2 billion in turnover. The company engages in the design, manufacture, and distribution of small gasoline engines and outdoor powered equipment, including residential and commercial products. This company has invested in IoT technology for many of its products, including lawn mowers, irrigation controllers, door locks, robot vacuums, turf mowers, concrete mixers, drones, and tractors. Company X is focusing on integrating connectivity amongst its devices to create an ecosystem enabling seamless communication. Existing products have been improved, such as lawn mowers which trim grass by sensing the soil and the grass type and irrigation controllers which are controlled remotely or are scheduled to go off at certain times of the day. Mechanical door locks are equipped with sensors to open and lock doors based on the user's remote request via a smartphone. A cloud-based database hosts and manages all the data generated from these devices and is available on the go for consumers, with real-time reports indicating usage and ways to maximize efficiency. The company is also striving to grow its connected industrial devices, including turf mowers, tractors, concrete mixers, and drones. These devices are equipped with sensors which assist in sending maintenance reminders directly to customers via SMS or email based on usage of the equipment. Real-time GPS tracking is employed to provide the exact location and status of the equipment. Geofences are deployed to restrict the equipment to operating in predefined areas. Clients can access detailed information on the engine to troubleshoot any mechanical or software issues. Data generated from the equipment is stored in a centralized database, which is extracted for reports including total travel time, duration of each stop, and routes taken by the crew. Company X is concerned with the availability and security of its key manufacturing systems, including any cyber event that would disrupt daily operations, cause loss of intellectual property, or cause an event affecting human safety. The company is also concerned about the reliability of its systems and products, including any failure of systems that would hinder internal daily operations or failure of products that would cause consumer frustration. In addition, the company is concerned with any financial loss it might potentially suffer, including anything that would hinder product development or sales, or tarnish its reputation. Consequently, our framework was used to provide an appropriate analysis of the company's risks and controls with respect to its IoT technology. In this analysis, the scores for each element of the IoT attack taxonomy and the scores for each control were assigned in conjunction with expert consultants. They leveraged their experience and expertise, an inventory of threat actor profiles, and previous IoT incidents to assign scores that captured the relational patterns between each dimension of the taxonomy in the context of industrial manufacturing and the IoT. Once these scores were assigned, we applied our framework to compute measures of inherent and residual risk in each risk domain of Company X, as shown in Figure 10. As can be seen, systems security and threat and vulnerability management are both relatively high areas of inherent risk for Company X. This is expected since Company X is concerned about human health and safety in addition to protecting the intellectual property in its devices.
Figure 10 also shows that inherent risk has been reduced the most in event logging and monitoring and communications security, which is expected since Company X has implemented relatively mature controls in these domains. In general, the results presented in Figure 10 match the levels of risk the expert consultants expected to see. Figure 9 presents a scatter plot depicting the likelihood and impact scores associated with each of Company X's vulnerabilities, helping Company X decide which vulnerabilities to target when mitigating risk, namely those with high likelihood and impact. Figure 11 shows the reduction in residual risk associated with further implementing each control by 10%, assisting Company X in deciding which controls to implement and what actions to take to further mitigate risk. Figure 10 depicts the reduced residual risk resulting from increasing the implementation of these ten controls suggested in Figure 11 by 30%. As can be seen, the suggestions given in Figure 11 are effective in reducing Company X's residual risk in all risk domains, including a significant reduction in threat and vulnerability management. The results displayed in these plots generally match the recommendations of the expert consultants for reducing Company X's risk, which include technical controls for device hardening within the IoT ecosystem in addition to implementing more robust measures for updating and patching. Our framework also provides the suggestions given in Figure 11 for each risk domain, providing Company X with recommendations about further mitigating risk for specific domains.
B. Company Y
To demonstrate the accuracy of our framework in computing measures of risk, we applied our framework to Company Y and compared the results to those obtained from Company X. Company Y is identical to Company X except that Company Y uses devices that exhibit many fewer communications vulnerabilities such as cleartext authentication and vulnerable SSL [48], [49]. Consequently, our framework should supply a much lower inherent risk score for the communications security risk domain for Company Y than it does for Company X. Figure 12 displays the inherent and residual risk for each risk domain of Company Y. As expected, the inherent risk for communications security is significantly lower than that shown for Company X in Figure 10, confirming the accuracy of the framework. An interactive implementation of this framework is available at http://iotrisk.andrew.cmu.edu.
VI. CONCLUSION
With the continued growth of the IoT, organizations must begin assessing threats and risks in IoT environments. To do so, we have synthesized previous approaches to develop an IoT threat modeling framework. This framework utilizes an IoT attack taxonomy that describes the adversarial assets, adversarial actions, exploitable vulnerabilities, and compromised properties that are components of any IoT attack. We then leverage this taxonomy as the foundation for a risk assessment framework that provides a quantifiable measure of inherent risk in specific risk domains of an organization. We have also developed a maturity assessment framework that leverages information from the threat modeling framework to provide a measure of residual risk in the same risk domains. This framework provides organizations with specific recommendations as to where resources should be devoted to further mitigate risk.
Rather than being a framework that relies heavily upon the subjective analysis of the consultant, this framework is more structured, quantitative, and consistent. It allows the consultant to focus on obtaining relevant information necessary for assessment, ensuring that information is as accurate as possible, and analyzing the outputs provided by the framework. This framework still allows for flexibility through modifications of the taxonomy but is no longer directly dependent upon the variability between consultants. We have demonstrated the effectiveness of this IoT framework by implementing it in the context of multiple industrial manufacturing companies to provide recommendations for mitigating risk.
2021-10-18T01:15:52.284Z
2021-10-14T00:00:00.000
{ "year": 2021, "sha1": "d19526756d9e3c5cae2d1170a5e0df4f440a1874", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d19526756d9e3c5cae2d1170a5e0df4f440a1874", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
113211858
pes2o/s2orc
v3-fos-license
ANALYSIS OF INNOVATIVE TWO-SPAN SUSPENSION BRIDGES
ona.lukoseviciene@vgtu.lt
Abstract. Recently, two-span, or so-called single pylon, suspension bridges have been widely applied due to their structural form. Reducing deformations appears to be the main problem in the behaviour and design of such bridges. The deformation of suspension bridges is mainly determined by cable kinematic displacements caused by temporary loadings rather than by elastic deformations. Not all known methods for the stabilization of the initial form of suspension bridges are suitable for single pylon bridges. The employment of so-called rigid cables that increase the general stiffness of the suspension bridge appears to be one of the innovative methods for stabilizing the initial form of single pylon suspension bridges. Rigid cables are designed from standard steel profiles and, compared to the common ones made of spiral and parallel wires, are more resistant to corrosion. Moreover, the construction joints, in terms of fabrication and installation, have a simpler form. However, calculation methods for such single pylon suspension bridges with rigid cables are not sufficiently developed. Only single publications on the analysis of the behaviour of one or three-span suspension bridges with rigid cables have been available so far. The paper presents analytical expressions to calculate the displacements and internal forces of suspension bridges with rigid cables, thus assessing the sequence of cable installation. Also, the paper describes the sequence of iterative calculation.
Introduction
For a long time, due to their effective behaviour and excellent architectural appearance, suspension bridges have been employed for carrying out long and average-sized spans (Ryall et al. 2000; Troyano 2003). By reason of dominating tension stresses, suspension bridges assure covering the longest spans in the world (Gimsing, Georgakis 2012; Strasky 2005). Recently, two-span, or so-called single pylon, suspension bridges have been introduced. The cables of these bridges are anchored either in the foundation or in a stiffening girder. The latter ones are also called self-anchored bridges (Kim et al. 2002; Zhang et al. 2013). The cables of suspension bridges are manufactured from high strength steel spiral or parallel wires, the flexural stiffness of which is equal to zero (Gimsing, Georgakis 2012; Kulbach 2007). It should be stressed that such cables are subject to specific anti-corrosion protection, and their construction joints, from the structural point of view, are sufficiently complex (Betti et al. 2005; Bloomstine, Sorensen 2006; Nakamura, Suzumura 2009; Xu, Chen 2013; Yanaka, Kitagawa 2002), which, in turn, increases bridge construction and exploitation costs. High deformability is one of the most serious disadvantages of suspension bridges (Gimsing, Georgakis 2012; Jennings 1987; Katchurin et al. 1971) and is mainly determined by the kinematic displacements of the suspension cable caused by asymmetric and local traffic loadings rather than by the elastic deformations of the cable (Juozapaitis, Norkus 2004; Kulbach 2007). It should be noted that the kinematic displacements of the asymmetrically loaded cable directly depend on its initial sag and do not depend on the length of its span (Jennings 1987; Juozapaitis, Norkus 2004). The stiffening girder is the main structural element that allows ensuring the required stability of the initial form of suspension bridges (Gimsing, Georgakis 2012; Ryall et al. 2000).
This classical structural element of stabilization is not regarded as highly effective, because resisting relatively high asymmetric loadings requires a girder of high rigidity and mass (Grigorjeva et al. 2010; Katchurin et al. 1971; Lewis 2012; Wollman 2001). Also, some other structural measures that allow reducing kinematic displacements are known (Gimsing, Georgakis 2012; Katchurin et al. 1971; Strasky 2005). However, some of those are quite complex or not effective enough from a technical-economic point of view (Jennings 1987). Moreover, not all available stabilization methods for the initial form of suspension bridges are appropriate for single pylon bridges. To reduce the kinematic displacements of suspension bridges and to stabilize their initial form, a structural solution has been suggested, following which the so-called rigid cables are applied instead of the common ones (Grigorjeva et al. 2010, 2015; Juozapaitis et al. 2006, 2010). The cables having a similar structure are designed using hot rolled or welded steel profiles. The forms of their cross-section may differ from I-beams to rectangular or round tubes and chutes (Grigorjeva et al. 2010; Juozapaitis et al. 2010, 2013). Both construction joints of such rigid cables are simple and reliable. Contrary to the cables made of high strength wires, dangerous local strains are not produced (Fürst et al. 2001; Gimsing, Georgakis 2012; Prato, Ceballos 2003; Strasky 2005). The cross-sections of rigid cables are remarkably resistant to the impact of corrosion. To achieve higher technical efficiency, high strength steel is recommended for producing rigid cables. A number of works focus on analysing the behaviour of traditional suspension bridges, i.e. the bridges with flexible cables (Clemente et al. 2000; Cobo del Arco, Aparicio 2001; Gimsing, Georgakis 2012; Katchurin et al. 1971; Kim, Thai 2010; Wollman 2001; Wyatt 2004). Thus, it should be emphasized that, for calculating the internal forces and displacements of suspension bridges, numerical methods are widely applied (Nevaril, Kytyr 2001; Wang et al. 2002). However, applying them does not always assist in adequately evaluating the sequence of suspension bridge installation. The number of works on the analysis of two-span (single pylon) suspension bridges is not that high. The methods for performing calculations on such innovative suspension bridges with rigid cables have not been put into practice yet. A few publications on the analysis of the behaviour of one or three-span suspension bridges have been prepared (Grigorjeva, Kamaitis 2015; Juozapaitis et al. 2010, 2013). Thus, an important point is the analysis of the behaviour of the innovative two-span suspension bridge with the rigid cable and the preparation of its calculation method. The paper examines calculations on the innovative single pylon suspension bridge with the rigid cable under symmetric loadings. The proposed methodology evaluates the sequence of installing the rigid cable. Analytical expressions estimating the internal forces and displacements of such a bridge are presented, and the sequence of iterative calculation is discussed.
Specificities of calculating and designing the innovative suspension bridge
A constructional scheme of the innovative two-span (single pylon) suspension bridge is typical of the structure of a classical bridge with a flexible cable. The bridge, instead of the flexible cable, uses the rigid one, i.e.
a cable having flexural stiffness $E_cJ_c \neq 0$. The other structural parts of the suspension bridge remain the same and include stiffening girders, pylons and hangers (Fig. 1). The main problem of suspension bridges, including the single pylon ones, is relatively high deformability under asymmetric or local loadings. The displacements of such bridges are effectively reduced by applying innovative solutions, one of which, as mentioned above, is the employment of the so-called rigid cables instead of conventional flexible ones. Rigid cables are known as having axial stiffness $E_cA_c \neq 0$ and flexural stiffness $E_cJ_c \neq 0$. The value of flexural stiffness $E_cJ_c$ is selected according to eligibility for limit state conditions. The introduced innovative cables use hot-rolled or welded steel profiles having the I-beam and box cross-section (Grigorjeva et al. 2010). It should be noted that, for producing such rigid cables, high strength steel is highly recommended. Obviously, the cross-sectional area of rigid cables, compared to the conventional spiral or parallel wire cable, may be slightly higher, but the cost will be significantly lower. On the other hand, compared to the suspension bridge with flexible cables, the total mass of the new bridge supporting structures (cable and stiffening girder) will be lower, because the rigid cable, similarly to the stiffening girder, takes over asymmetric and concentrated loadings. Moreover, taking into account the operating costs for the anti-corrosive protection and maintenance of flexible cables made of parallel wires, the efficiency of applying rigid cables increases. Emphasis should be placed on two main options of forming (installing) the rigid cable. In the first case, the cable acquires flexural stiffness ($E_cJ_c \neq 0$) from the very beginning of forming the bridge and takes over both permanent g and temporary v loadings through the processes of tension and flexure. In the second case, the cable gains flexural stiffness after installing the bridge rather than at the very beginning. In this case, the cable takes over the permanent loadings through tension only, while it resists the temporary ones both in tension and in flexure. In the latter case, the general loadings of the rigid cable are significantly reduced. The behaviour of such suspension bridges is qualitatively similar to that of the bridges with conventional flexible cables, and for calculating them the same well known assumptions are applied (Gimsing, Georgakis 2012; Katchurin et al. 1971; Wollman 2001).
Calculating the two-span suspension bridge under permanent loading
A calculation scheme for the innovative two-span (single pylon) suspension bridge is presented in Fig. 2. For making calculations on the bridge, the second, more rational case of forming a rigid cable is examined, i.e. the bearing cable, as an absolutely flexible one, takes over the whole permanent loading g and, as a rigid cable ($E_cJ_c \neq 0$), the temporary loading $v_c$. It should be emphasized that, in this case, the stiffening girder takes only a part of the temporary loading ($v_b$). The scheme for the loaded single pylon suspension bridge shows the two main structural elements (cable and stiffening girder) separated for the sake of clarity (Fig. 2). The general case, when the lengths of the spans are not equal ($l_l \neq l_r$), has been investigated. The endings of both stiffening girders and bearing cables of such a bridge are pin-supported.
The right span of the bridge
The spans of the suspension cable of the single pylon bridge are installed at different levels, i.e. the cables of both spans are inclined. It should be emphasized that the calculation of the specified inclined cables applying local coordinates is much more complex (Juozapaitis, Daniūnas 2005). The angle of tilt of the cables is slight enough, and therefore, for the sake of simplicity, the inclined cables will be calculated in global coordinates. Next, the right and left parts of the bridge will be examined. At the first stage of formation, the rigid cable is affected by the permanent loading g acting on the bridge. The equilibrium condition for the right side (marked with symbol r) cable is defined by Eq (1):

$$H_{r0}\, z_{r0}(x_r) = M_{rg}(x_r), \qquad (1)$$

where $H_{r0}$ is the horizontal component of cable tension under loading g; $z_{r0}(x_r)$ is the initial curve of the cable (a quadratic parabola is accepted), calculated from the line connecting the upper and lower supports (Fig. 2); and $M_{rg}(x_r)$ is the moment caused by permanent loading g in the girder of the analogous span.
(Fig. 2. Schemes for the calculations on the suspension bridge)
The horizontal component of the tension force of this flexible cable is calculated as follows:

$$H_{r0} = \frac{M_{rg}(0.5\, l_r)}{f_{r0}} = \frac{g\, l_r^2}{8 f_{r0}}, \qquad (2)$$

where $f_{r0}$ is the initial sag of the right flexible inclined cable in the middle of the span ($x_r = 0.5 l_r$), calculated from the line connecting the upper and lower supports. It should be emphasized that the value of the tension force of the inclined flexible cable acting along its chord will be higher and equal to

$$N_{r0} = \frac{H_{r0}}{\cos \varphi_r}, \qquad (3)$$

where $\varphi_r$ is the inclination angle of the line connecting the supports.
The left span of the bridge
Under symmetric loadings, the calculation of the left side of the bridge (marked with symbol l) mainly does not differ from that of the right side. The initial state of this cable under permanent loading g is defined using an analogous equation:

$$H_{l0}\, z_{l0}(x_l) = M_{lg}(x_l), \qquad (4)$$

where $H_{l0}$ is the horizontal component of the cable tension force under load g; $z_{l0}(x_l)$ is the initial curve of the cable calculated from the line connecting the upper and lower supports of the cable (Fig. 2); and $M_{lg}(x_l)$ is the moment caused by permanent loading g in the girder of the analogous span ($l_l$). The horizontal component of the tension force of the flexible left cable is calculated as follows:

$$H_{l0} = \frac{M_{lg}(0.5\, l_l)}{f_{l0}} = \frac{g\, l_l^2}{8 f_{l0}}, \qquad (5)$$

where $f_{l0}$ is the initial sag of the left flexible inclined cable in the middle of the span ($x_l = 0.5 l_l$), calculated from the line connecting the upper and lower supports of the cable. In parallel with the right cable, the value of the tension force of the inclined left cable acting along its chord will be equal to

$$N_{l0} = \frac{H_{l0}}{\cos \varphi_l}. \qquad (6)$$

Selecting composed parameters for the two-span suspension bridge
Striving for the equilibrium of the initial state of the two-span suspension bridge, the parameters of the structural elements of the separate spans must be accurately selected, because, under permanent loading, the following condition must additionally be satisfied:

$$H_{r0} = H_{l0}. \qquad (7)$$

Consequently, a dependence between the values of the sags of the right and left flexible cables is obtained:

$$f_{l0} = f_{r0} \left( \frac{l_l}{l_r} \right)^2. \qquad (8)$$

Eq (8) shows that a greater difference between the lengths of the bridge spans results in a larger difference between the values of the initial sags of the right and left cables. In case condition (Eq (8)) is not satisfied, under loading g = const. the top of the pylon would experience horizontal displacements, which, in turn, would cause adverse displacements of kinematic origin. It must be emphasized that the obtained condition (Eq (8)) holds for the case when the pylons are pin-supported to the foundation.
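A quick numeric check of relations (2), (5), and (8) illustrates how the sag ratio keeps the pylon in balance; all input values are assumed for illustration.

g = 40.0                  # permanent loading, kN/m (assumed)
l_r, l_l = 80.0, 60.0     # right and left span lengths, m (assumed)
f_r0 = 8.0                # chosen initial sag of the right cable, m

H_r0 = g * l_r**2 / (8 * f_r0)        # eq. (2): 4000 kN
f_l0 = f_r0 * (l_l / l_r) ** 2        # eq. (8): 4.5 m
H_l0 = g * l_l**2 / (8 * f_l0)        # eq. (5): matches H_r0

print(H_r0, f_l0, H_l0)  # 4000.0 4.5 4000.0 -> no net pull on the pylon top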
Calculation of the two-span suspension bridge under permanent and temporary loadings
The chapter examines the symmetric loading of the bridge, when the temporary loading of the same intensity acts on both spans, i.e. when $v = v_r = v_l$. As mentioned above, for examining the behaviour of this bridge, the second version of forming the cable is accepted. Under the permanent loading g only, the initial equilibrium state of both flexible cables of the spans is defined by Eqs (1)-(5). According to the second case of installation, uniform flexural stiffness $E_{cr}J_{cr} = E_{cl}J_{cl} = E_cJ_c \neq 0$ is provided for the flexible cables applying certain structural measures. Thus, under the action of variable loading, the cables are rigid. A deformation scheme for the single pylon bridge affected by permanent g and variable v loadings is presented in Fig. 2. It should be emphasized that the temporary symmetric loading v acting over the entire length of the bridge will be distributed between the rigid cables and the stiffening girder proportionally to their flexural rigidity. Moreover, the variable loading $v_c$ on the rigid cables will produce axial forces $H_r$ and $H_l$ as well as bending moments $m_{cr}(x_r)$ and $m_{cl}(x_l)$ inside them. It should be noted that, in the meantime, the stiffening girders will take the remaining part of this variable loading, $v_b$, which will cause only bending moments $m_{br}(x_r)$ and $m_{bl}(x_l)$ inside them.
Calculation of the right span of the single pylon suspension bridge
As mentioned above, when the entire bridge is under the symmetric temporary loading v, the rigid cables of both spans and the stiffening girders will undergo vertical displacements $w_{cr}(x_r)$, $w_{cl}(x_l)$, $w_{br}(x_r)$, $w_{bl}(x_l)$ (Fig. 2). It should be remembered that the rigid cables will take all permanent and a part of temporary loadings, $g + v_c$, and the stiffening girders only a part of the variable loading, $v_b$. As for the right span, a condition for the displacement equality of the stiffening girder and the rigid cable is written as:

$$w_{br}(x_r) = w_{cr}(x_r) = w_r(x_r). \qquad (9)$$

The condition is correct if it is accepted that hanger deformations have no influence on internal forces and displacements. Next, the behaviour of the stiffening girder and the rigid cable under temporary loading will be separately examined. Considering Eq (9), the condition of the moment equilibrium of the stiffening girder of the right span is obtained:

$$-E_b J_b\, w_r''(x_r) = M_{b,r}(x_r), \qquad (10)$$

with the temporary loading shared between the cable and the girder so that

$$v = v_c + v_b, \qquad (11)$$

where $E_bJ_b$ is the flexural stiffness of the stiffening girder and $M_{b,r}(x_r)$ is the moment caused by the variable loading ($v_b$). The equilibrium for the right rigid cable is as follows:

$$H_r\, [z_{r0}(x_r) + w_r(x_r)] + m_{cr}(x_r) = M_{c,r}(x_r), \qquad (12)$$

where $m_{cr}(x_r) = -E_cJ_c\, w_r''(x_r)$ is the bending moment in the rigid cable and $M_{c,r}(x_r)$ is the moment passed from the permanent g and variable $v_c$ loadings in the girder of the analogous span. Eq (12) shows that both a tension force and a bending moment will appear in the rigid cable. A combination of the stiffening girder and the rigid cable for the purpose of common behaviour results in the following equilibrium condition:

$$H_r\, [z_{r0}(x_r) + w_r(x_r)] - (E_cJ_c + E_bJ_b)\, w_r''(x_r) = M_{c,r}(x_r) + M_{b,r}(x_r). \qquad (13)$$

Eq (13) defines the general case of calculating the single pylon suspension bridge. In case the cable were flexible, i.e. $E_cJ_c = 0$, the equilibrium condition of a standard single span suspension bridge would be obtained from condition Eq (13) (Wollman 2001). Eq (13) shows that the displacements of the innovative suspension bridge with the rigid cable will be smaller, because the general flexural rigidity of the bridge increases.
Eq (13) shows that the displacements of the innovative suspension bridge with the rigid cable will be smaller, because the general flexural rigidity of the bridge increases The application of the concept of the fictitious displacement of the rigid cable (Moskalev, Popova 2003) points to the following solution: , , , where − a fictitious displacement of the right rigid cable in the middle of the span ( ); -the general parameter of the slerderness of the right side of the bridge; λ -function of the slerderness parameters k r l r and k r x r . The analysis of Eq (14) shows that the solution is analogous to the already known formula calculating single rigid suspension elements (cables). Thus, the behaviour of the innovative suspension bridge is adequate for the behaviour of single rigid suspension elements. The flexural stiffness of the suspension bridge is made of stiffening girders and a sum of the flexural stiffness of the rigid cable. Thus, another valid conclusion can be drawn. Changes in the values of the flexural stiffness of the stiffening girder and rigid cable, i.e. their ratio, under the constant general stiffness of the bridge, may result in the adjustment of tension in the above mentioned structural elements. For example, if variable loading on the right rigid cable v c is known, its horizontal tension force is calculated as: . (17) The above equation clearly indicates that the fictitious displacement may assist in establishing the horizontal tension force of the rigid cable analogically to the flexible one, what allows decreasing the volume of iterative calculations. Eq (17) displays that the horizontal tension force of right rigid cable H r depends on fictitious displacement ∆f fic,r while the latter, in turn, depends on horizontal tension force (Eq (14)). Therefore, an additional equation linking these two values is required: , where s r − the length of the right with additional loading; s 0r − the initial length of the right cable; ∆s el,r − the elongation of the elastic right cable. The expression of the deformation of the inclined rigid cable will be written down following the assessment of a possible horizontal displacement of pylon surface ∆l r : . (19) Member ψ(k r l r α r ) in Eq (19) evaluates the impact of flexural stiffness on the deformation of the inclined cable. It should be noticed that, in this expression, member α r = cos -2 φ r specifies the parameter of the slenderness of the rigid cable moving from local to global coordinates. The calculation of the rigid cable is performed with the help of gradual approximation. The values of the main unknown ∆f fic,r of the first step of iterative calculation is accepted as those of an absolutely flexible cable. . Next, according to Eq (17), the values of spreading force as well as that of slenderness parameter are established. Subsequently, calculation is made using Eqs (18) and (19). The condition of iterative calculation is expressed as follows: , where . Upon finishing calculations with the required accuracy ε, the values of the fictitious displacements of cable ∆f fic,r and its horizontal tension force H r will be known. Then, the real vertical displacement of rigid cable w r (x r ) and its bending moment m cr (x r ) will be calculated. The bending moment of the stiffening girder in any crosssection can be established accordingly to the known expression while its displacement, considering the accepted assumption, will be equal to the displacement of the rigid cable. 
Calculations on the left span of the single pylon suspension bridge
The calculation of the structures of the left span of the suspension bridge is analogous to that of the structures of the right span; only the geometrical parameters of the stiffening girder and the rigid cable change. Thus, the initial equilibrium conditions of Eqs (12), (13), and (17) and their solution Eq (14) are analogous. The equation for calculating the horizontal tension force of the left rigid cable is thus written as:

$$H_l = \frac{(g + v_c)\, l_l^2}{8\, (f_{l0} + \Delta f_{fic,l})}, \qquad (22)$$

and its displacements (Eq (23)) follow the form of Eq (14), where $\Delta f_{fic,l}$ is the fictitious displacement of the left rigid cable in the middle of the span ($x_l = 0.5 l_l$); $k_l$ is the general parameter of the slenderness of the left side of the bridge; and λ is a function of the slenderness parameters $k_l l_l$ and $k_l x_l$. The equation for the coherence of the deformations of the left rigid cable is as follows:

$$s_l = s_{0l} + \Delta s_{el,l}, \qquad (24)$$

where $s_l$ is the length of the left cable under the additional loading; $s_{0l}$ is the initial length of the left cable; and $\Delta s_{el,l}$ is the elastic elongation of the left cable. The length of the left cable under the added variable loading $v_c$ is determined by Eq (29), analogously to the right cable. The calculation of the structures of the left span of the bridge is performed similarly to that of the right side; the path of gradual approximation is analogous.
Concluding remarks
The paper discusses the behaviour of the innovative two-span (single pylon) suspension bridge and presents calculations for symmetric loadings. The paper considers the general case when the spans of the bridge are of different lengths. The paper analyses the construction sequence of such a bridge in which the rigid cable is formed after the permanent loading has been added, which allows reducing the initial strains in the rigid cable. The equations for calculating the vertical displacements and internal forces of the rigid cable and the stiffening girder of the right and left spans of the innovative suspension bridge are presented. The concept of the fictitious displacement of the rigid cable may assist in significantly reducing the extent of the iterative calculation of the single pylon suspension bridge. The paper deals with a possibility of adjusting the internal forces and displacements of the rigid cable and stiffening girder in such a suspension bridge; the values of the ratio of the flexural rigidity of these structural elements have the highest impact. It should be emphasized that the obtained calculation expressions define the general case of the symmetrically loaded two-span (single pylon) suspension bridge. These expressions can easily be adapted to making calculations on common single pylon bridges with the flexible cable, considering that the flexural stiffness of the rigid cable is equal to $E_cJ_c = 0$. It should also be underlined that rigid cables allow reducing the general displacements of the suspension bridge.
Household Industry Development Through The Concept of Creative Economy (Case Study of Kripsus Industry in Sidowungu Village, Gresik)

This research examines the extent to which the household industry can be developed through the concept of the creative economy, focusing on one of the region's local potentials: chicken intestines processed into chicken intestine chips. Creative industries based on local economic development offer a strong prospect for encouraging new economic innovations that can empower local workers and ultimately advance the economy of Sidowungu Village. This research uses a qualitative descriptive method of analysis, which is used to explain an ongoing phenomenon. The results show that the concept of the creative economy has a considerable influence on the development of the household industry.

Introduction

Gresik is a city in the province of East Java, Indonesia. It lies in the lowlands to the northwest of Surabaya, the provincial capital of East Java, at a height of two to twelve meters above sea level, and covers an area of 1,191.25 km² consisting of 18 sub-districts, 330 villages, and 26 urban villages. The area, often referred to as the city of santri, is also known as an industrial city, and it recorded quite satisfactory economic growth in the second quarter of 2022. At the end of 2021, the economic growth of Gresik reached 3.79 percent, which is higher than the national figure. The people's economy can grow when it is supported by many sectors, such as trade, industry, and transportation. The sector holding the greatest hope for achieving the people's economic welfare is the industrial sector, because it strongly influences other sectors: rapid growth in industry also drives growth in trade, since industry requires raw materials from agriculture, from other sectors, and even from industry itself. Changes in the industrial sector therefore affect other sectors as well. This successive increase in development can be seen in the growing number of companies in the industrial sector, which is interpreted as positive growth in the sector.

In Gresik there are many types of industries, ranging from small-scale industries, such as household industries (often called home industries) that usually produce food, beverages, or textiles, through medium-scale to large-scale industries. Given this variety, it is not surprising that Gresik is often regarded as a buffer and pillar of the economy. On this occasion, the researchers set out to observe one small-scale industry, the chicken intestine chips home industry, located in Sidowungu Village, Menganti District, Gresik Regency, which is also one of the region's mainstay products. In Sidowungu Village, known for its chicken market, the economic potential is clearly visible: a chicken market sits next to the village hall office, and there are many small-scale or home industries trading in broiler chickens. Although this industry is small in scale, it has high purchasing power if it can be utilized and developed properly.
Therefore, there must be creation and innovation from the community, especially industry players, to upgrade their businesses through the creative economy; several countries have conducted studies and research on the creative economy and made it the main model for economic development. This is where the urgency of the creative economy concept arises: it is a vessel for creativity and knowledge, the main assets driving the economy.

Domestic Industry

Industry has two meanings. Viewed generally, an industry is a company that carries out operations in the field of economic activity belonging to the secondary sector. More specifically, economic theory defines an industry as a collection of companies producing goods of the same type for a market. By type, industry is divided into three: primary, secondary, and tertiary industries [1]. Small-scale industrial activity is secondary work done by farmers and village communities to earn income. Industry in rural areas aims to increase the village's economic activity while at the same time empowering existing home industries and handicrafts. Here the government contributes substantially through guidance, direction, training, and the provision of various forms of assistance needed.

The Central Statistics Agency (BPS) classifies industries in Indonesia into four types according to the number of workers they have [2], namely:
1. Home (cottage) industry, with 1-4 workers;
2. Small industry, with 5-19 workers;
3. Medium industry, with 20-99 workers;
4. Large industry, with 100 or more workers.
The home industry can be further understood as an industry operating in rural areas, shaped by at least four main production factors: capital, natural resources, labor, and the ability to run a business.

The Office of Cooperatives and Small Entrepreneurs Development lists the following characteristics of small industries:
1. Finances are recorded using a simple bookkeeping system that does not really follow existing bookkeeping standards and is sometimes not updated regularly, making performance appraisal difficult.
2. Operating margins are thin because competition is intense.
3. The amount of capital is limited.
4. Managers have limited managerial ability.
5. Economies of scale are small, so it is not easy to focus on costs in order to achieve long-term efficiency.
6. The capacity for marketing and for negotiating agreements, and the range of accessible markets, are limited.
7. The ability to obtain funds from the capital market is low, owing to the administrative systems that companies are required to have.

Creative Economy

The creative economy is a new economic design combining creativity and information, relying on the ideas and knowledge produced by the quality of human resources as a factor of production. The creative economy is an industry resulting from the utilization of each person's hidden capabilities, creativity, and talents in order to create prosperity and employment through the arrangement and utilization of the creations and creativity each individual produces [3]. On the creative industry map, the government has identified five pillars of the creative economy, described as follows:
a. Resources: the creativity, capability, and ideas possessed by human resources, supported by natural resources or land as supporting factors in the industry.
b. Industry: the part of the community's efforts concerned with the production, distribution, and consumption of goods or services originating from a certain area.
c. Technology: included among the pillars of the creative industry because of its function as a vehicle and device for advancing the knowledge base.
d. Institutions: included as a pillar of creative industry development, and also the social order in which habits, customs, rules, norms, and applicable laws are embedded.
e. Financial institutions: tasked with channeling funds to business actors who need them, whether in the form of equity capital or loans and credit.

The creative economy plays a very important role in the economy of a region and even a country, because it can advance the economy globally and as a whole. Currently, the majority of creative industry players come from businesses or industries that are still informal. To follow up the ideas and activities that fall within the creative economy in a region, especially in Indonesia, intervention from the government and participation from the local community are needed, so that existing and newly emerging creative industry sectors can be better managed, survive, and promote national economic growth.

Method

This journal uses a combination of qualitative research methods and descriptive analysis, supported by a literature study. The literature study concerns the notions of the home industry and the creative economy. The data sources used in this journal include primary and secondary sources. The primary sources are books discussing household industries and the creative economy, while the secondary sources are information or news obtained from journals or other materials related to the content of this journal. The data collection technique is to assemble suitable books (categorized as primary sources) together with journals or other literature (categorized as secondary sources). The first step in finding the data is to read all the material related to the problem under study and record the important points in a summary. The collected data then go through an editing stage: rechecking, summarizing, and analyzing the summary of the gathered information. After the data have been analyzed, a qualitative analysis method is applied, an attempt to deepen the basic understanding of the information under study. The purpose of this method is to give readers deep and broad insight into the problem, so the journal's content consists of words rather than numbers. Thus, this qualitative research describes conceptual, descriptive research and relies largely on the researchers' own analysis.
Home Industry Conditions (Chicken Kripsus)

Sidowungu Village, commonly called Mboro Village, is the largest chicken-producing area in Gresik Regency; the abundance of raw materials such as chicken is enough to support the community's interest in home industry businesses. This is all the more evident as the household industry keeps developing, notably the chicken intestine chips industry, which offers quite large opportunities. The first intestine chips home industry was founded in 2005 by a housewife, and after production showed fairly rapid development, many other housewives, seeing the proven results, went on to establish chicken intestine chips businesses over time. The industry, initially assisted only by the owners' own family members, from preparing ingredients through the cooking process to product packaging, now on average has its own employees recruited from outside the Sidowungu area. From 2005 to 2022, 20 similar businesses were established, the biggest reason being none other than the large business opportunity to meet the owners' economic needs.

The process of making chicken intestine chips is as follows:
1. Preparation of ingredients, consisting of the main ingredient, chicken intestine, of which about 25 kg of raw intestine is usually needed every day; after processing, the raw intestine loses about 30% of its weight, leaving only around 17 kg of chicken intestine chips (the arithmetic is worked through in the sketch below).
2. Washing the chicken intestines in running water in a large container for 15 minutes, then washing the intestines clean until they are white, and draining them until completely dry.
3. The flouring process, using a mixture of rice flour, tapioca flour, and seasonings such as Masako and micin (MSG).
4. The frying process: once the intestines are evenly coated with flour, they are fried until golden brown.
5. The packaging process: after frying, the intestine chips are drained and packed in plastic bags with capacities of 250 grams, ½ kg, and 1 kg.

According to one resident who runs this home industry business, most people who decided to start a chicken intestine chips business began by trial and error, and after seeing that the results were enough to help the community's economy, many others took part in setting up such businesses. However, from the researchers' observations, business actors face several obstacles in running their businesses. The manufacturing process still uses a manual system, from draining the washed raw chicken intestines and applying the flour through to packaging. Most of the intestine chips sold are too monotonous, lacking creation and innovation in flavor variations: the products offer only original and spicy flavors. The packaging is also very plain and can be considered unattractive to consumers. From a marketing perspective, reach is narrow, since the products are distributed only to grocery stores, coffee shops, angkringan (street food stalls), and some snack-selling agents; producers have not yet used social media for marketing, which would let them sell their products more widely.
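As a small worked check of the quantities in steps 1 and 5 above, the sketch below redoes the daily yield arithmetic in Python; the packaging split across the three bag sizes is a purely hypothetical assumption for illustration, since the paper does not report how output is divided among them.

```python
# Daily yield from step 1: 25 kg raw intestine with ~30% processing weight loss.
raw_kg = 25.0
yield_kg = raw_kg * (1 - 0.30)  # = 17.5 kg, matching the "around 17 kg" in the text

# Hypothetical packaging split across the three bag sizes named in step 5;
# the actual proportions are not given in the paper.
split = {0.25: 0.5, 0.5: 0.3, 1.0: 0.2}   # bag size (kg) -> share of output
for size, share in split.items():
    bags = int(yield_kg * share // size)
    print(f"{size:>4} kg bags: {bags}")
print(f"total output: {yield_kg:.1f} kg/day")
```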
Creative Economy Concept as Home Industry Development

In general, the creative economy means a collection of concepts and innovations relating to people and their ideas, applied to every product produced, up to and including the marketing process. The concept is intangible capital, but its effect shows up in market demand and supply. Developed countries and regions are characterized by economies built on creativity (Kreatif et al. 2020). Reviewing the conditions of the intestine chips home industry in Sidowungu Village described above, where there are several obstacles, namely a lack of innovation in flavor variants, packaging, and marketing, these can be addressed through village government intervention, such as providing the community with education on the urgency of the creative economy concept: conveying information about the creative economy and, at the same time, showing how it can increase people's income.

The creative economy grows out of ideas and innovations that produce home industry output, and the prospects for home industry players to create creative economic potential are very broad. This embodies the importance of the creative economy concept in the regions, making people more financially productive and ultimately adding value to their households. The number of residents of productive age is a potential that can raise the overall progress and sustainability of regional development if it can be properly empowered. For example, a housewife who produces for a business states that she has difficulty determining the target market group and limited ability to market her business so that demand increases. The creative economy can build the ability to use technology in the current digital era, so that marketing becomes technology-based; this responds to the many complaints about marketing limitations when technology is not used to market the chicken intestine chips produced by home industries. The creative economy enables marketing through the WhatsApp chat application; social media such as Instagram and Facebook; e-commerce platforms such as Shopee, Lazada, Tokopedia, and Blibli; and food delivery services such as GoFood and GrabFood, applying a discount system every few months or on holidays to raise consumers' curiosity about and interest in chicken intestine chips products.

Packaging also needs attention, because creative product packaging is one way to add value to a product. Where chicken intestine chips usually use only plastic packaging, the creative economy concept allows an upgrade to packaging such as standing pouches, carton boxes, mini food containers, and matte ("dove") plastic labeled with the name of the home industry concerned, making it easier for consumers to see the contents of the product and to know who made it. In fact, there are quite a lot of opportunities that home industry producers can take advantage of in terms of creative economic potential. This is an important illustration of empowering the local people's economy so that residents become more productive as household producers in addition to being housewives. This productivity will increase economic value and thus improve family welfare.
Conclusions

From the results presented above, the following conclusions can be drawn. The chicken intestine chips industry started from the awareness and curiosity of the public, especially housewives, about how to turn intestines from slaughtered chickens into products with a fairly high selling value; as this home industry spread, the chicken intestines that the slaughtering industry used to throw away came to be sold to chicken intestine chips producers. However, there are several obstacles, such as a lack of innovation and creation in the packaging and marketing of the product, and this is where the urgency of the creative economy concept is needed: the home industry and the creative economy are like two things that cannot be separated, bound by strong, continuous ties. The creative economy concept can provide motivation, increase knowledge, and build the capabilities of the Sidowungu Village community, especially the business actors producing intestine chips. In addition, support from the local village government is also very important for advancing this creative economic activity so that it can increase the economic income of the people of Sidowungu Village.
Health Literacy, Health Behavior and States of Health among Trainee Personnel in Northern Germany

(1) Background: The start of vocational education is a challenge for many people whose careers are just beginning. The working conditions exact new physical and mental tolls that can have an impact on their state of health and health behavior. Well-developed health literacy helps to encourage greater self-responsibility with respect to health and safety in the workplace. This study aimed to contribute to the evolution of health-related interventions in vocational training and instruction. (2) Methodology: This cross-sectional study examined health literacy, health behavior, and states of health among trainees engaged in work-and-study vocational training in 11 professions at the start of their education courses in northern Germany. The data were collected using a paper and pencil format. (3) Results: The survey was approved by 47 vocational schools (response rate 14%), with 1797 trainees returning their questionnaires (response rate 36%). The average age of the overall cohort was 21, and 70% of the trainees were female. A total of 47% of the participants began their careers with sufficient health literacy; health literacy was problematic in 40% of cases and inadequate in 13% of cases. Around 50% of trainees exhibited a poor dietary regime and risky alcohol intake, while 58% reported having a medical condition that had been previously formally diagnosed. (4) Conclusion: There is a need to provide support for developing a healthier approach to work at the start of vocational training.

Introduction

Health literacy (HL) provides resources and potential for enabling individuals to gain more control over their health and over factors that affect their health [1]. The importance of HL has resulted in the undertaking of many activities in the fields of research, government, and practical applications in recent years. The promotion of these skills has become part of political strategic documents such as the European Health 2020 conceptual framework [2] or the National Health Literacy Action Plan [3] for Germany. In a literature review, Sørensen et al. developed a definition of HL that was based on 12 conceptual models and 17 definitions [4]: "Health literacy is linked to literacy and entails people's knowledge, motivation and competences to access, understand, appraise, and apply health information in order to make judgments and take decisions in everyday life concerning healthcare, disease prevention and health promotion to maintain or improve quality of life during the life course." HL in this context is understood to be an individual dynamic ability that can be learned. It also relates to the healthcare system and to the needs of that system. If external conditions or individual circumstances change, the acquired skills must be adapted and expanded accordingly [5]. HL influences well-being and health [6]. In the general population and also in different subgroups, associations of HL and indicators of health can be found in socioeconomically disadvantaged adults, patients, . . ., caregivers and their children and adolescents [7][8][9][10][11][12]. Since health behavior seems to be a mediator in the relationship between HL and health, there is also evidence in the literature on the relationship between HL and health behaviors [13][14][15][16]. International studies have demonstrated the HL deficiencies of the population in many countries [17].
Socioeconomic factors in particular seem to determine limited HL. International studies show associations with low educational level, low social status, and migrant background or migrant experience [4,[18][19][20]. Berens et al. found decreasing HL with increasing age in a German sample [21]. Limited HL is also associated with a poor state of health and poor health behavior, reduced participation in health activities, and greater usage of the healthcare systems [6]. In Germany, more than half of the population (54%) exhibit limited HL [22]. In the recent HLS-GER 2 survey from 2020, this proportion increased to 59% [20]. In this representative study, participants with limited HL described their state of health as being worse, consumed less fruit, and were less physically active than participants with sufficient HL. Current concepts encouraging HL to promote a healthy working environment are designed to be employed in the workplace and in the education system [2,3]. There are around 1.3 million trainees in dual education in Germany in apprenticeships and vocational schools [23]. Around a quarter of vocational training agreements are terminated early each year [23]. The start of vocational education is a challenge for many people whose careers are just beginning. Work needs and work processes exact new physical and mental tolls and entail new social demands. These stresses are placed upon vulnerable young people who are still maturing into adults, who commonly do not believe that unhealthy behaviors will affect them adversely [24], and who are not yet broadly aware of occupational health and safety matters. These new demands can affect health behavior and their state of health. Studies have demonstrated the negative impact on dietary habits and the amount of physical exercise [25]. Likewise, health problems such as back pain or headache have been described in relation to work activities [26]. Good HL enables a person to be aware of and act in service of their health in relation to their occupation [3] and can help reduce work accidents and occupational diseases [6].

Purpose and Problem

Measures for improving HL should be geared towards the life conditions and needs of the target group [1]. In the German vocational education system, HL has only been studied amongst university students to date [27]. To enable the development of measures in vocational education, it is first necessary to describe the occurrence of HL among trainees. To this end, 10 professions involving different physical stresses and with different gender distributions were selected, with popular professions in particular also taken into consideration. The study population was expanded to include educators in vocational training as, much like employees in nursing, they are expected to teach health-related skills. The vocational training courses come from five different industries: healthcare (geriatric nursing specialist, medical nursing specialist, medical assistant); cosmetics (hairdresser); education (educators); technical (plant mechanic for sanitary, heating and air conditioning equipment; electronics technician for operational and building systems); and business and administration (office manager/management assistant, retail manager/assistant, wholesale and export manager/assistant, industrial clerk).
The aim of our study was to present (i) HL among trainees undertaking vocational training in a variety of professions, (ii) the distribution of HL relative to sociodemographic variables, and (iii) the distribution of HL relative to health behavior and the state of health at the start of vocational education. The study presented here is part of a longitudinal study. Further surveys will examine the development of HL, health behavior, and states of health as well as the associations between HL and indicators of health. Additionally, satisfaction with the training experience over the course of vocational education and when trainees transition into actual employment will be examined in future surveys.

Study Design and Sample

The cross-sectional study for the start of vocational education was performed in Northern Germany. The performance of the study was approved by the education authorities to whom inquiries were submitted in the Federal States of Bremen, Mecklenburg-Western Pomerania, Lower Saxony, and Schleswig-Holstein; Hamburg was an exception in this regard (see Figure 1). An ethics endorsement was obtained from the Hamburg Medical Association (PV5670). The vocational schools were identified by searching the internet for the specified vocational training courses in the included federal states. All of the 321 identified vocational schools were contacted, with 47 agreeing to participate (response 14.6%). A total of 5052 trainees received the study documents from their teachers. The survey questionnaire and contact details for future surveys were returned by post to an independent anonymization center, which then forwarded the coded surveys to the office performing the analysis. A signed declaration of consent had to be provided for inclusion in the study cohort; minors were also required to present the consent of their parents or legal guardians. The data were collected between November 2017 and March 2018, with 1797 questionnaires eligible for inclusion in the analysis (response rate 35.5%).
The survey questionnaire and contact details for future surveys were returned by post to an independent anonymization center, which then forwarded the coded surveys to the office performing the analysis. A signed declaration of consent had to be provided for inclusion in the study cohort; minors were also required to present the consent of their parents or legal guardians. The data were collected between November 2017 and March 2018, with 1797 questionnaires eligible for inclusion in the analysis (response rate 35.5%). Data Collection Instrument The data were collected using a paper and pencil format in Spring 2018. Questions on sociodemographic data covered age, gender, country of birth, nationality, and highest level of educational attainment. HL was surveyed using the short version of the validated German Health Literacy Survey Questionnaire (HLS-EU-Q16) [28]. This data collection instrument is based on the definition of HL quoted here. The 16 items in the fields of healthcare, disease prevention, and health promotion are used to collect information on the four skills relating to the handling of health-related information (access, understand, appraise, apply). The data were analyzed using the recommended procedures. Cases with values missed in more than two items were excluded beforehand. The four-stage response categories were binarized, and an aggregate score of 0 to 16 points (P) was calculated. This was used to create three HL level categories: sufficient (13-16 P), problematic (9-12 P), and inadequate (0-8 P). Questions on state of health included a subjective assessment [25], medical conditions in the past 12 months [29], and mental well-being [30]. On the basis of the state of health and health behavior, we categorized subjects into health lifestyles according to the work of Betz et al. [26]. The state of health is represented by 9 of 10 medical condition groups in the abridged version of the Work Ability Index. Accident-related injuries are excluded. Information was interpreted as a medical condition if "yes, diagnosed by a doctor" was marked. A dichotomous variable was used to differentiate between trainees without medical conditions and trainees with medical conditions. Health behavior was evaluated by dichotomizing the variables physical exercise, dietary habits, fast food consumption, smoking, and alcohol consumption (1 = healthy behavior, 2 = risky behavior) and aggregated into an index. Poor health behavior was deemed to be physical exercise of <2 h a week; poor dietary habits; the consumption of pizza, fries, and/or burgers daily or several times a week; smoking daily or occasionally; and risky alcohol consumption. The aggregate score of ≤7 resulted in allocation to "positive health behavior", while an aggregate score of >7 resulted in allocation to "risky health behavior". The cross-tabulation of state of health and health behavior enables differentiation between trainees who are healthy, have a positive health behavior, are potentially at risk, and who have risky health behavior. A pretest was performed with trainees from four occupational groups. Statistical Analysis The sample was described in terms of absolute and relative frequencies (the latter specified as percentages), arithmetic mean values, and standard deviations. The individual items in HLS-EU-Q16 were presented as relative frequencies (percentages) and their 95% confidence intervals. 
To examine the relationships, we applied Spearman's rank correlation coefficient and, for normal distributions, Pearson's correlation coefficient, depending on the scale of measure. For unpaired group comparisons, the chi-squared test was applied for values with dichotomous outcomes. Data without normal distributions were analyzed using the Mann-Whitney U test and the Kruskal-Wallis test. The significance level was set at 5%. The software IBM SPSS Statistics for Windows, Version 23.0 (Armonk, NY, USA) was used for the statistical analyses.

Sociodemographic Properties

Among the 1797 trainees, those striving to become medical assistants constituted the largest group at 17%. Those working towards becoming retail managers/assistants, educators, and office managers/management assistants each also represented more than 10% (see Table 1). The technical professions of plant mechanic and electronics technician had smaller shares of less than 5%. A total of 70% of the trainees were female. The average age was 21, with ages ranging from 14 to 53. Around 9% of participants did not have German citizenship. Half of the trainees completed their school education with a "Realschule" qualification. In the German school system, all children attend elementary school (Grundschule) up to fourth grade. Afterwards, they are separated according to academic abilities and the wishes of their families and attend either Hauptschule, Realschule, or Gymnasium. The Hauptschule (grades 5-9) teaches the same subjects as the Realschule and Gymnasium, but at a slower pace and with some vocationally oriented courses; it leads to enrollment in a part-time vocational school. The Realschule (grades 5-10) leads to part-time vocational schools and higher vocational schools. The Gymnasium leads to a diploma called the Abitur and prepares students for university study or for a dual academic and vocational credential. Study participants who left school without any qualifications (1%) were mostly found in the hairdresser and electronics technician groups.

Health Literacy

Around 53% of the study participants had limited HL, with 40% possessing problematic HL and 13% inadequate HL (see Figure 2). A total of 47% of the trainees began their careers with sufficient HL, with the occupational groups hairdressers (57%), electronics technicians (55%), plant mechanics (51%), and geriatric nurses (51%) exhibiting shares of over 50%. Office managers/management assistants (34%) and industrial clerks (41%) exhibited the lowest shares of sufficient HL. These two groups also had the highest shares of participants with inadequate HL (office managers/management assistants at 21%, industrial clerks at 16%). There were significant differences between occupational groups (with weak effects), namely in comparisons between office managers/management assistants and retail managers/assistants (p = 0.036), educators (p = 0.015), hairdressers (p = 0.005), and electronics technicians (p = 0.003). In the sample, there were no significant differences between the genders in terms of HL, and only a very weak negative correlation with age (r_s = −0.048, p = 0.043). Trainees who had left school without qualifications had significantly lower HL compared to trainees with Abitur qualifications (p = 0.020), Realschule qualifications (p = 0.009), Hauptschule qualifications (p = 0.001), and trainees with other school qualifications (p = 0.036).
Trainees with Hauptschule qualifications had a much lower HL than those who possessed qualifications from a vocational school (p = 0.024). The effects in each case were weak. Trainees without German citizenship did not differ from other participants in terms of HL.

Responses to the individual items regarding HL were highly diverse in terms of the proportions of persons responding to questions with "fairly difficult"/"very difficult" (see Table 2). While there were barely any difficulties in understanding the instructions of a doctor or pharmacist on how to take prescribed drugs (5%), one in two participants struggled to appraise whether information about health risks in the media was trustworthy (50%). The share of limited HL in terms of "appraising information" (57%) was higher than in terms of "accessing information" (48%), "applying information" (42%), and "understanding information" (21%). With a share of 48% in terms of limited HL, literacy relating to disease prevention was more poorly estimated than in relation to health promotion (43%) and healthcare (36%).

Table 2. Individual items in the HLS-EU-Q16 in relation to the relative frequencies (percentages) for the responses "fairly difficult"/"very difficult", based on Jordan and Hoebel [35]; English translation according to Sørensen et al.

Health Behavior and Health Literacy

In all three HL groups, around one-third of the participants smoked daily (see Table 3). The proportion of smokers was significantly higher among men (p < 0.001) (see Figure 3). A total of 45% of female participants and 50% of male participants declared risky alcohol consumption; this proportion increased as HL decreased among women, while it decreased among men. Around one-third of the trainees engaged in two hours or more of physical exercise per week, men significantly more frequently than women (p < 0.001). As HL declined, so too did the proportion of trainees who engaged in physical exercise. A total of 52% of the trainees had an unhealthy diet, 29% had a normal diet, and 19% had a healthy diet, with the latter applying to women in particular. The proportion of participants with unhealthy diets was around 7% higher in the limited HL group than in the sufficient HL group.

States of Health and Health Literacy

A total of 37% of the trainees assessed their state of health as "very good" (see Table 3), with male trainees viewing their state of health significantly better than female trainees (p < 0.001) (see Figure 3). Over half of the participants (58%) began their training with at least one formally diagnosed medical condition. Female participants were significantly more frequently affected (p < 0.001).
The most common conditions were disorders of the respiratory tract (18%), skin (16%), and musculoskeletal system (15%). Poor mental well-being was reported by 44% of the respondents, with female study participants reporting this significantly more frequently than male respondents (p < 0.001). Trainees with sufficient HL had a more positive estimate of their state of health and reported a formally diagnosed medical condition or poor well-being less frequently than participants with limited HL (see Table 2).

Health-Related Lifestyles

One-quarter of the trainees had a conscious attitude towards their health and reported having no medical conditions ("healthy") (see Table 3). Another 34% had a conscious attitude towards their health and were affected by at least one medical condition ("conscious"). A total of 17% of those with risky attitudes to health had not yet suffered a medical condition ("potentially at risk"), while 25% reported suffering from one or more medical conditions with a risky attitude to health ("at risk"). As HL diminishes, the proportion of trainees with a risky health-related lifestyle increases.

Discussion

We believe that the study presented here is the first to examine health literacy among trainees in a broad range of occupations. According to our data, 53% of respondents possess limited HL. Schaeffer et al. found a proportion of 59% in a recent study of the German population [20]. In a sample of 15-year-old youths from Austria, 58% were found to have limited HL [28], while a study on HL among adults in Germany put this figure at 44% [35]. With a proportion of 46% with limited HL, a cohort from North Rhine-Westphalia, Germany, fell within the middle range of a ranking of the eight participating countries in a European comparative study, at a similar level to Greece and Poland; the level in the Netherlands was only 29% [17]. Another study examined HL among students at the University for Health Sciences in Bochum, Germany, wherein 69% of the students had limited HL [27]. Compared to the trainees in the healthcare professions, the proportion among the students was 15 to 20% higher. It is possible that the university students are driven to make a more critical assessment of their own health literacy due to their direct exposure to the science of health-related matters. On the other hand, it is also conceivable that participants in our study incorrectly overestimated their health literacy due to a lack of knowledge. Interestingly, an investigation of adolescents aged 14-17 found that the applicability of the long version of the HLS-EU-Q is limited [37]. Due to a high abstraction level, some participants found some questions to be too difficult. It is conceivable that in our study these "challenges" biased the responses towards a sufficient HL. There were no demonstrable gender-related differences in HL in our study or in the previously quoted studies [22,27,28,35]. While Schaeffer et al. demonstrated a high level of low HL among people aged 65 and over, the studies of Jordan/Hoebel and Reick/Hering found no significant age-related effects. Our study also found only a very weak effect of rising age on a decline in HL, which may have been related to the young age distribution of the sample. Among first- and later-generation immigrants, Schaeffer et al. found a high proportion of inadequate HL; there was no such correlation among the trainees in this regard.
However, the state of being a first- or later-generation immigrant was only identifiable in our study using the nationality and country of birth variables, as it was not possible to collect information on parental origins due to data protection concerns. There was also a correlation demonstrated between limited HL and low education level among the trainees, affecting primarily participants who had left school without any qualifications; this was consistent with the findings of the studies of Schaeffer et al. and Jordan/Hoebel. For the individual items of the HLS-EU-Q16, proportions for the responses "fairly difficult"/"very difficult" were calculated on the basis of the Jordan and Hoebel methodology. Compared to the results of a study on HL among adults [35], trainees mostly had much greater difficulty with finding and processing information. One reason for this may be that they have less experience with medical conditions, that health-related networks are only just developing, and that when transitioning from healthcare by the family to a state of personal responsibility, there is a need to learn how to handle health-related matters. In comparison with the study among adults, there were considerable differences in items 13 and 16. Trainees found it more difficult to appraise which daily habits were related to their health (23% vs 14%) and to access information about behaviors that are good for mental well-being (37% vs 21%). Around 44% of the trainees reported a poor sense of well-being. The new demands that come along with vocational training may have a negative impact on mental health [38] and increase the need for information on this topic. If the deficiencies in well-being are related to the new demands, it can be expected that this proportion will drop again as the vocational education progresses. The acquisition and processing of health-related information may be made more difficult in youth both by limited literacy and by inadequate access to appropriately presented information. Gray et al. illustrated that even experienced internet users gain only limited benefits from online health information [39]. The higher prevalence of risky behaviors among youth [24] was confirmed in our study at all HL levels. Contrary to expectation, there were no differences in smoking prevalence between the different health literacy groups. For men, there was even a positive correlation between health literacy and risky alcohol consumption. This result is inherently contradictory, as health literacy should have a direct impact on health behaviors. Zok/Böttger found that trainees with good health behavior were less prone to absenteeism than those with risky health behavior [38]. Despite a positive assessment of their own state of health in most cases, over half of the trainees at all HL levels began their vocational education with a formally diagnosed medical condition. To ensure that trainees who are just starting their vocational education can receive information on disease prevention and health promotion, it is first necessary to raise awareness of the relevance of health. It may be possible to encourage such awareness and make trainees more receptive with concepts related to well-being, enjoyment of life, and happiness [26,40]. When designing such services, the diverse nature of the group should also be taken into account, as illustrated by our results.
Trainees with a positive attitude towards health can serve as role models; services should aim to motivate these trainees to continue their positive approach to personal health [26]. For trainees at risk, on the other hand, prevention services are important to mitigate the problems of existing medical conditions and to encourage a more positive attitude towards health [25]. Effective networking between vocational schools and the workplaces that provide vocational education placements may also create long-term opportunities for communicating health-related topics and help prevent sickness-related abandonment of vocational training. Reviewing the literature, one does not find examples of interventions designed to increase the HL of trainees in vocational schools. Peralta and Rowling reviewed studies that observed implementations of HL programs in Australian schools [41]. Only one of the three included studies was based on a theory-based health literacy framework, and it was the only successful study [42]. With respect to HL in the workplace, it also appears that theory-based interventions are promising. Larsen et al. show in their study on the reduction of musculoskeletal complaints in the workplace that an intervention based on the framework of Jordan and Hoebel targets the relevant abilities (access, understand, appraise, and apply information) and improves HL significantly [43,44]. Thus, it makes sense to integrate a theory-based framework of HL into the development of an intervention improving the HL of trainees, whether in the vocational school or in the workplace.

Strengths and Weaknesses

To our knowledge, this is the first study in Germany reporting on HL in trainees. The cross-sectional design of the study means that it is not possible to draw conclusions on causal relationships; a longitudinal study design, however, would enable such conclusions for later data samples. According to our study protocol, two more follow-up assessments will be performed during the training period and another two in subsequent working life. Although education authorities from four Federal States provided their support for the study, the response rate of the vocational schools was low, limiting the sample size. Therefore, the degree to which the study results can be generalized is limited. Despite the moderate response rate of the participants, it is not possible to eliminate the possibility of selection bias. The data protection requirements that education authorities are bound by prevented them from providing information on the trainees' parents, which is why it was only possible to derive data on social status and on parental migration histories to a limited degree. The sample consisted of 70% women and 30% men and was therefore not balanced. Interviews with teenagers have suggested that there were problems with understanding the language and content of the long version of the HLS-EU-Q [37]. These also relate to items in the short edition, which may have affected our study results. Furthermore, the comparability of our results with studies in the literature generated with the long version of the HLS-EU-Q might be affected.

Conclusions

Communicating and encouraging health literacy is important for the study group under review, as the majority of them showed limited HL.
Raising awareness of the need to take a healthier approach to working life and providing trainees with the skills they need to be aware of their health in an occupational context may mitigate the development of existing medical conditions, reduce risky behaviors, and have a positive effect on well-being. Our findings show that there is no gender-specific difference in HL. In contrast, we observed gender-specific differences in health behavior and health outcomes. Thus, independent of HL measures for the total group "in order to make judgments and take decisions in everyday life concerning healthcare, disease prevention and health promotion . . ." [4], health promotion and prevention measures should account for gender differences in this group of trainees. Further qualitative research should be conducted to shed light on the sometimes contradictory relationship between health behavior and health literacy among trainees. It is possible that the health literacy questionnaire is prone to misperceptions among adolescents, meaning that members of this group might not be able to recognize their lack of health literacy. Since the HLS-EU-Q does not collect data in an objective way like other HL instruments [45], but rather assesses subjectively rated HL, misclassification as a reason for contradictory results cannot be excluded. All in all, more research on the health literacy of trainees is needed to confirm the findings identified here.

Author Contributions: Conceptualization, S.S., P.K., J.L. and A.N.; methodology, S.S., P.K. and Z.S.; data curation, P.K. and S.S.; formal analysis, P.K., S.S. and Z.S.; investigation, S.S. and P.K.; writing, original draft preparation, S.S.; writing, review and editing, P.K., J.L., Z.S. and A.N.; supervision, A.N.; project administration, S.S. and P.K. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement: The data presented in this study are available on request from the corresponding author, subject to restrictions (e.g., privacy or ethical restrictions). The data are not publicly available because public sharing was not covered by the informed consent.
The Impact of Hypomagnesemia on Erectile Dysfunction in Elderly, Non-Diabetic, Stage 3 and 4 Chronic Kidney Disease Patients: A Prospective Cross-Sectional Study

Background: Erectile dysfunction (ED) is common in older men with chronic kidney disease. Magnesium is essential for the metabolism of nitric oxide, which helps in penile erection. There is little information available about the influence of serum magnesium on ED. The aim of the study was to assess the influence of hypomagnesemia on ED in elderly chronic kidney disease patients.

Subjects and methods: A total of 372 patients aged 65-85 years, with an estimated glomerular filtration rate of 60-15 mL/min/1.73 m², were divided into two groups according to serum magnesium levels: hypomagnesemia, n=180; and normomagnesemia, n=192. ED was assessed through the International Index of Erectile Function-5. Hypomagnesemia is defined as serum magnesium <1.8 mg/dL.

Results: The prevalence of ED was higher among hypomagnesemic subjects compared to normomagnesemic subjects (93.3% vs 70.8%, P<0.001).

Conclusion: Our data support that ED is related to hypomagnesemia in elderly patients with moderately to severely reduced kidney function.

Introduction

Longer lifespans create new health-related problems. Epidemiologic studies clearly show an increasing age-related prevalence and severity of erectile dysfunction (ED) among elderly men. [1][2][3][4] ED is a condition in which a man is persistently unable to attain and/or maintain penile erection sufficient for sexual intercourse, and it is the most common sexual dysfunction among men worldwide. 2,5 Increasing comorbidities and pathological changes in the erectile tissue and the supplying vessels result in a high prevalence of ED in the geriatric population. 2,3 ED affects only 4% of men in their 50s but almost 50% of those over the age of 75, and up to 90% of elderly men with chronic kidney disease (CKD) have ED. 2 ED may result from psychologic, neurologic, hormonal, arterial, or cavernosal impairment or from a combination of these factors. 1,9 Sexuality is a significant quality-of-life consideration for all individuals, including older adults. 10 The identification of novel risk factors for ED may improve our understanding of the pathogenesis of ED and allow the development of new prevention strategies for ED. There is a close relationship between ED, endothelial dysfunction, and decreased production of nitric oxide (NO). 11,12 Hypomagnesemia inhibits NO release from the endothelium and is therefore associated with endothelial dysfunction. 13,14 Hypomagnesemia, endothelial dysfunction, and lower serum levels of NO are common in patients with CKD, especially among the elderly. [14][15][16][17] Therefore, hypomagnesemia may be linked to ED in these subjects. We hypothesized that hypomagnesemia would increase ED rates. In the literature, there have been few attempts to assess the possible relationship between serum magnesium level and ED. The present study is the first to determine the prevalence and severity of ED in elderly hypomagnesemic patients with moderately to severely reduced kidney function.
Subjects and methods

Study population

A single-center, prospective cross-sectional study was performed in non-diabetic male patients with stage 3 and 4 CKD, aged 65-85 years, between November 2014 and August 2015 in the Department of Medicine, Division of Nephrology, Balikesir University School of Medicine. All patients included in the study had a regular partner. A total of 372 patients met the inclusion criteria. They were divided, according to serum magnesium levels, into two groups: hypomagnesemia (n=180) and normomagnesemia (n=192). ED was assessed in the study patients. The study complied with the Declaration of Helsinki and was approved by the Ethical Committee of Balikesir University School of Medicine. All patients gave written informed consent.

Assays

Blood samples were collected from an antecubital vein of patients by venipuncture into vacutainer tubes after an 8 h overnight fast, and the samples were centrifuged and stored at −80°C until analysis. Serum magnesium levels were analyzed by the colorimetric method with a clinical chemistry autoanalyzer. Calcium, urea, and urinary protein concentrations were also measured by the colorimetric method. Serum creatinine and urinary creatinine concentrations were determined enzymatically on an autoanalyzer using the Jaffé method. Uric acid levels were determined by the uricase-peroxidase method and glucose levels by the glucose oxidase method. Lipid profiles were analyzed by enzymatic methods using the Beckman Coulter AU680 Analyzer (Beckman Coulter, Inc., Brea, CA, USA) with commercially available kits. Albumin and C-reactive protein (CRP) were determined in serum by the nephelometric method. Parathyroid hormone was measured by an enzyme-linked immunosorbent assay.

Analytical methods and measurements

Blood samples for the measurement of serum glucose, creatinine, urea, uric acid, parathyroid hormone, CRP, albumin, magnesium, calcium, phosphorus, lipids (triglyceride, high-density lipoprotein cholesterol [HDL-C], total cholesterol, low-density lipoprotein cholesterol), and hemoglobin were drawn under 8 h overnight fasting conditions, with the subject in a standing position. Urine samples were taken for measurement of the urine protein-to-creatinine (P/C) ratio. Anthropometric measurements, such as weight, height, waist circumference, and body mass index (BMI), were performed with the subjects wearing light clothing and without shoes. Weight and height were measured using a fixed scale with a stadiometer (Nan DR-MOD-85, Istanbul, Turkey). Each patient's BMI was calculated as weight (kg) divided by the square of height (m²). Waist circumference was measured to the nearest 0.1 cm using a flexible metric measuring tape with the subject in a standing position. Glomerular filtration rate (GFR) was estimated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation (sketched below). 18 Systolic and diastolic blood pressures were measured in the sitting position after a rest period of more than 5 min. Each patient's medical history was reviewed, and comorbidities, risk factors, and medications with the potential to affect ED and magnesium levels were recorded. Genital and systemic examinations were conducted with attention to any genital abnormalities. A urinary system ultrasound was performed by a urologist to detect any surgical manipulations or injuries affecting the penis, prostate, or bladder.
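For readers who want to reproduce the eGFR values, the 2009 CKD-EPI creatinine equation referenced above can be sketched as follows; the coefficients come from the published equation rather than from this paper, and the example input is purely illustrative.

```python
def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool = False) -> float:
    """2009 CKD-EPI creatinine equation; returns eGFR in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 72-year-old man with serum creatinine 1.8 mg/dL (illustrative values only):
print(f"eGFR ~ {ckd_epi_2009(1.8, 72, female=False):.1f} mL/min/1.73 m^2")
```

The example prints roughly 37 mL/min/1.73 m², i.e., stage 3b CKD, which is within the 15-60 mL/min/1.73 m² range of the study population; the study enrolled men only, so only the male branch is exercised here.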
Clinical definitions
Hypomagnesemia was defined as a serum magnesium level <1.8 mg/dL.19

Exclusion criteria
Exclusion criteria included a diagnosis or a prior history of hypogonadism; genital malformations; penile implant or deformities; and surgical manipulations or injuries of the urogenital system, spinal cord, or brain. Other exclusion criteria included a history of or current treatment for bladder, prostate, or testicle diseases or any cancer type; Parkinson's disease, multiple sclerosis, stroke, diabetes mellitus, hepatic impairment, respiratory failure, peripheral vascular disease, connective tissue diseases, congestive heart failure stage 3 and 4, or uncontrolled hypertension or hypotension; use of antihistamines or illegal drugs; mental impairment leading to inability to cooperate; diagnosis of or current treatment for depression and anxiety; having no regular partner; concurrent use of magnesium-containing supplements; inflammatory bowel disease; malabsorption; and alcoholism.

Statistical analysis
Assuming an 80% prevalence of ED in the hypomagnesemia group and a 65% prevalence in the normomagnesemia group, we found that a sample size of 368 (184 per group) patients would be required to detect a statistically significant difference with a power of 90% (α=0.05). The primary study end point was the prevalence of ED. Secondary end points included the severity and risk factors of ED. Comparisons of continuous variables between groups were made using Student's t-test for normally distributed data and the Mann-Whitney U-test for non-normally distributed data. Within-subject comparisons of continuous variables were made using a paired t-test or the Wilcoxon signed-rank test for normally and non-normally distributed data, respectively. Categorical variables were analyzed using the chi-square test or, alternatively, Fisher's exact test. We performed logistic regression with the presence of ED as the dependent variable and the following 11 parameters as potential covariates: presence of hypomagnesemia, age ≥70 years, hypertension, smoking, high urine P/C ratio (≥500 mg/dL), high CRP (>5 mg/L), abdominal obesity, metabolic syndrome, eGFR ≤30 mL/min/1.73 m2, low HDL-C (<40 mg/dL), and serum albumin levels. Variables that were statistically significant on univariate analysis were included in the multivariate model to identify predictors of ED. A two-sided 95% confidence interval (CI) was constructed around the point estimate of the relative risk (RR). Receiver operating characteristic (ROC) curve analyses of serum magnesium levels for the prediction of ED and the positive and negative predictive values of magnesium were performed. All tests were two-sided, and a P-value of <0.05 was considered to be statistically significant. Continuous data are reported as mean ± standard deviation. Categorical data are presented as absolute values and percentages. Analyses were performed using IBM SPSS Statistics Version 20.0 (IBM Corporation, Armonk, NY, USA).

[Table 1 note: Values expressed as mean ± standard deviation or n (%). Abbreviations: SBP, systolic blood pressure; DBP, diastolic blood pressure; CRP, C-reactive protein; ARB, angiotensin receptor blocker; ACE-I, angiotensin-converting enzyme inhibitor; eGFR, estimated glomerular filtration rate; PTH, parathyroid hormone; P/C, protein-to-creatinine; CCB, calcium-channel blocker.]
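As a quick plausibility check on the sample-size statement above, the sketch below evaluates the classical normal-approximation formula for comparing two independent proportions. The choice of formula is our assumption; the paper does not state which software or method was used.

```python
# Sketch: sample size per group for comparing two independent proportions.
# Assumed: the classical normal-approximation formula; illustration only.
from math import sqrt
from scipy.stats import norm

p1, p2 = 0.80, 0.65            # assumed ED prevalences: hypo- vs normomagnesemia
alpha, power = 0.05, 0.90

z_a = norm.ppf(1 - alpha / 2)  # two-sided critical value (~1.96)
z_b = norm.ppf(power)          # power quantile (~1.28)
p_bar = (p1 + p2) / 2

n_per_group = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
                + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
               / (p1 - p2) ** 2)
print(f"{n_per_group:.1f} per group")  # ~184.1, matching the reported 184/group
```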
ED prevalences, ED scores, and ED severities in study groups
The prevalence of ED was 81.7% in the entire survey population, and a significantly greater proportion of patients with hypomagnesemia had ED compared to those with normomagnesemia (93.3% vs 70.8%, P<0.001; Figure 1). ED scores and severities by study group are summarized in Table 3.

ROC curve analysis and predictive values
Using the ROC curve analysis, we found that serum magnesium ≤1.85 mg/dL was the best cutoff point for the prediction of ED, with a sensitivity of 73.0% and a specificity of 15.3%. The area under the curve was 0.842, 95% CI: 0.800-0.883, P<0.001 (Figure 2). The cutoff point of magnesium levels that we found in the ROC curve analysis for the prediction of ED was almost the same as the one we used for the definition of hypomagnesemia in our study. With an 81.7% prevalence of ED, we found that the positive predictive value of serum magnesium ≤1.8 mg/dL was 87.8%, and the negative predictive value of serum magnesium ≤1.8 mg/dL was 66.7%.

Univariate and multivariate variables associated with ED
We used univariate analysis to study 11 different possible risk factors for developing ED. Univariate variables associated with ED were hypomagnesemia, age ≥70 years, hypertension, smoking, urine P/C ≥500 mg/dL, CRP >5 mg/L, and abdominal obesity (>112 cm). These variables were confirmed after a multivariate analysis (Table 4).

Discussion
The main novel findings of this study are as follows. First, we found a 93.3% prevalence of ED in hypomagnesemic non-diabetic elderly patients with stage 3 and 4 CKD; the present study was the first to report the prevalence of ED in hypomagnesemic CKD patients. Second, individuals with a serum magnesium level <1.8 mg/dL were 1.31 times more likely to develop ED than those with a magnesium level >1.8 mg/dL (93.3% vs 70.8%). Third, hypomagnesemia is a risk factor for ED with an RR of 2.27. Fourth, this study shows a significant positive association between hypomagnesemia and ED severity among the patients; patients with hypomagnesemia have a high prevalence of severe (62.8%) ED. Fifth, the high prevalence of abdominal obesity, proteinuria, and increased CRP levels in hypomagnesemic patients may explain the high risk of ED. Finally, we found that a serum magnesium level ≤1.85 mg/dL is the best cutoff point for the prediction of ED. The self-administered questionnaire of the IIEF-5 has been used widely in studies to detect the presence and the severity of ED in elderly and in CKD patients.5,24 Therefore, we used the IIEF-5 for the assessment of ED in the present study. ED is currently one of the most common sexual dysfunctions in the elderly CKD population. People aged ≥65 years comprise the fastest growing segment of the world's population. Sexual function is an important component of quality of life.1-3 Our study population includes elderly CKD persons who are at high risk for ED. If we can decrease the incidence of ED among elderly CKD males, the quality of life of these patients would be improved. Several studies confirm the high prevalence and severity of ED among elderly men with CKD.1,2,4,10 ED affects only 2%-4% of men in their 40s but almost 80%-90% of all men over the age of 80.1,9,10,25 The prevalence of ED among male CKD patients is estimated to be ~80%, and up to 90% of elderly men with CKD have ED.6-8,26 Mesquita et al reported that the prevalence rates of ED in CKD outpatients with stages 3, 4, and 5 were 72.3%, 81.5%, and 85.7%, respectively.26
In our study, we have excluded most of the comorbidities that could contribute to ED, in order to observe the real effect of hypomagnesemia on ED. Therefore, the prevalence of ED in our study might be lower than in a non-selected patient population. However, the high prevalence (81.7%) and severity (53% severe ED) of ED among all elderly CKD patients in our study are compatible with the published reports. Several mechanisms have been suggested as etiologic factors for ED in elderly patients with CKD, most of which are associated with age, hypertension, obesity, hyperlipidemia, diabetes mellitus, cardiovascular disease, neurological and hormonal disorders, alcohol abuse, and smoking.2-4,27-31 CKD has also gained attention as a risk factor for ED.17,25,26,32 In the present study, we found that in addition to the well-known classical risk factors, hypomagnesemia, high CRP levels, abdominal obesity, and proteinuria are predictors of ED. Some essential minerals, such as zinc, magnesium, and selenium, may have a role in erectile function.17,33-36 The mechanism by which hypomagnesemia may lead to ED can be explained by an association between hypomagnesemia and decreased NO.13 NO is important in initiating and maintaining erection, and ED is associated with reduced plasma NO levels and endothelial dysfunction.11,33 NO, the key element in endothelial function, needs magnesium for its synthesis.13 Hypomagnesemia decreases NO levels and decreases blood circulation in the penis by penile vasoconstriction. In addition, lack of magnesium affects the production of testosterone.1,17,37 Some small studies in patients with normal kidney function postulate a role of decreased seminal magnesium in premature ejaculation. In a case-control study by Aloosh et al,38 it was demonstrated that there was a significant relationship between seminal plasma magnesium and premature ejaculation. The same results were found in a case-control study of 38 patients by Nikoobakht et al.36 However, these studies did not find any relation between serum magnesium and premature ejaculation. Inflammation, metabolic syndrome, and its components have been independently associated with hypomagnesemia and ED.4,29,39,40 Compatible with the literature, in our study we documented that high CRP levels, abdominal obesity, and hypertension were risk factors for ED.28,29,31,39-42 However, we did not find any association between metabolic syndrome, low HDL-C, and ED in multivariate analysis. Similar to other studies, we found that smoking is a strong risk factor for ED.26,28,32 Several studies have shown that proteinuria is associated with ED, especially in diabetic patients.30,43-45 In a cross-sectional study of 455 men with type 2 diabetes, ED had a stronger association with macroalbuminuria than with microalbuminuria.46 In the present study, we found that there is a strong relation between urine P/C ≥500 mg/dL and ED in non-diabetic subjects. A possible explanation for the higher prevalence of ED in hypomagnesemic subjects may be related to high levels of CRP, proteinuria, and abdominal obesity, all of which may reflect endothelial dysfunction.12 In multivariate analyses, we found that all three of these conditions were associated with ED, and that all were high in hypomagnesemic patients.
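For readers who want to reproduce this kind of risk-factor screening on their own data, a minimal sketch follows. It is not the authors' code: the file name and column names (ed, hypomagnesemia, age_ge_70, and so on) are hypothetical placeholders for the variables described in the text.

```python
# Sketch: logistic regression of ED status on candidate risk factors,
# mirroring the univariate-then-multivariate screening described above.
# File and column names are hypothetical; the study dataset is not public.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ckd_ed.csv")  # hypothetical dataset

covariates = ["hypomagnesemia", "age_ge_70", "hypertension", "smoking",
              "urine_pc_ge_500", "crp_gt_5", "abdominal_obesity"]

X = sm.add_constant(df[covariates].astype(float))
model = sm.Logit(df["ed"], X).fit()
print(model.summary())  # exponentiate model.params to read odds ratios
```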
Strengths and study limitations
A strength of this study is that it is the first report to investigate the correlation between ED and hypomagnesemia in elderly men with CKD. The present study has several limitations. First, we did not measure levels of gonadotropins, testosterone, prolactin, NO, or seminal plasma magnesium, which are relevant to important causes of ED. Second, we did not assess blood flow of the penis. Third, the possible reasons for hypomagnesemia in our study may be reduced dietary magnesium intake or renal loss of magnesium; we did not measure the dietary intake or renal loss of magnesium. Finally, this study included only Turkish men, and thus, cultural and sociodemographic differences might have affected our results. How can we adapt the findings of the present study to clinical practice? Current findings suggest that ED is highly prevalent in men with hypomagnesemia. Therefore, elderly patients with ED should be screened for hypomagnesemia. Correction of hypomagnesemia may lead to improvement in erectile function.

Conclusion
ED is common among elderly patients with hypomagnesemia. Serum magnesium measurement is an easily available and inexpensive marker. Therefore, detecting the serum magnesium level in non-diabetic elderly men with CKD seems to be a useful guide in assessing the risk of ED.

Disclosure
The authors report no conflicts of interest in this work.
Regression Adjustments under Covariate-Adaptive Randomizations with Imperfect Compliance

We study regression adjustments with additional covariates in randomized experiments under covariate-adaptive randomizations (CARs) when subject compliance is imperfect. We develop a regression-adjusted local average treatment effect (LATE) estimator that is proven to improve efficiency in the estimation of LATEs under CARs. Our adjustments can be parametric in linear and nonlinear forms, nonparametric, and high-dimensional. Even when the adjustments are misspecified, our proposed estimator is still consistent and asymptotically normal, and the corresponding inference method still achieves the exact asymptotic size under the null. When the adjustments are correctly specified, our estimator achieves the minimum asymptotic variance. When the adjustments are parametrically misspecified, we construct a new estimator which is weakly more efficient than linearly and nonlinearly adjusted estimators, as well as the one without any adjustments. Simulation evidence and an empirical application confirm efficiency gains achieved by regression adjustments relative to both the estimator without adjustment and the standard two-stage least squares estimator.

Introduction
Randomized experiments have seen increasing use in economic research. In existing experiments, one of the popular randomization methods applied by economists to achieve balance between treatment and control is covariate-adaptive randomization (CAR) (Bruhn and McKenzie (2009)). In CAR, units are randomly assigned to treatment and control within strata formed by a few key pre-treatment variables. Recent studies in economics using CAR include Burchardi, Gulesci, Lerva, and Sulaiman (2019), Anderson and McKenzie (2021), and de Mel, McIntosh, Sheth, and Woodruff (2022). With rare exceptions, experimental subject compliance with assignment is almost always incomplete. For example, recent experimental studies under CARs and with incomplete compliance include Blattman and Dercon (2018) and Dupas, Karlan, Robinson, and Ubfal (2018). When there is incomplete compliance, the local average treatment effects (LATEs), that is, the average treatment effects among those who comply with the assignment, can be identified, as formulated in the seminal work by Imbens and Angrist (1994). The central theme of this paper is estimation and inference of the LATEs under CARs. In practice, researchers usually run two-stage least squares (2SLS) with assignment status as the instrumental variable for estimation, and use heteroskedasticity-robust standard errors for inference. The validity of this approach was first established by Angrist and Imbens (1994) for natural experiments or randomized experiments under completely random sampling. However, different from those experiments in observational studies or under complete randomization, the treatment statuses generated under CARs are cross-sectionally dependent due to the randomization scheme. When there are no additional covariates, Ansel, Hong, and Li (2018) show the 2SLS robust standard error is inconsistent and usually conservative. Such a phenomenon is common for inference under CARs and has been discovered for other causal parameters such as the average treatment effect (ATE; see Bugni, Canay, and Shaikh, 2018, 2019) and the quantile treatment effect (QTE; see Zhang and Zheng, 2020). With additional covariates, in practice, researchers simply include them as exogenous controls in the 2SLS estimation.
Such a practice may lead to degradation of the estimation precision, which is known as Freedman's critique (Freedman, 2008a, 2008b). These two issues raise the questions of how to compute a non-conservative standard error for the LATE under CARs, and how to use the additional covariates to improve the estimation precision. To tackle these problems, we propose a regression-adjusted LATE estimator in this paper. Our method takes into account the dependence structure that arises from CAR and is thus able to avoid the conservatism, achieving the exact asymptotic size under the null. It is also weakly more efficient than both the estimator without regression adjustment and the one obtained by the standard 2SLS. In addition, it is robust to adjustment misspecification and easy to implement. This paper makes four main contributions. The first is to propose a regression-adjusted estimator of the LATE and connect it with the standard 2SLS estimator. We show that the usual 2SLS estimator with additional covariates is a special case of our estimator when the CAR achieves strong balance, and thus identifies LATEs; this complements the recent discussion by Blandhol, Bonney, Mogstad, and Torgovitsky (2022) for observational studies. We also show that the weighted average of the fully saturated (with strata dummies) 2SLS estimators, including the fully saturated estimator without additional covariates recently proposed by Bugni and Gao (2021), is a special case of our adjusted estimator as well. The second contribution of the paper is to develop an asymptotic theory for our regression-adjusted LATE estimator under high-level conditions on the regression adjustments. Our analysis follows a new asymptotic framework recently established by Bugni et al. (2018) to study ATE estimators under CARs, which accounts for the cross-sectional dependence caused by the randomization. We prove that, even when the adjustments are misspecified, our proposed estimator is still consistent and asymptotically normal, and the corresponding inference method still achieves the exact asymptotic size under the null. When the adjustments are correctly specified, our estimator achieves the minimum asymptotic variance. In our third contribution, we investigate efficiency gains brought by regression adjustments in parametric (both linear and nonlinear), nonparametric, and high-dimensional forms. When adjustments are linear, we derive the most efficient estimator among all linearly adjusted LATE estimators, and in particular, show that it is weakly more efficient than the estimator without adjustment and the standard 2SLS estimator. We also derive a nonlinearly adjusted LATE estimator. We further construct a new estimator which combines the linearly and nonlinearly adjusted estimators, and show it is weakly more efficient than both, as well as the one without any adjustments, i.e., the fully saturated estimator proposed by Bugni and Gao (2021). We further study nonparametric and high-dimensional adjustments and provide conditions under which they are weakly more efficient than all the other adjusted estimators considered in this paper and behave as if the correctly specified regression adjustments were used. The final contribution of the paper is to provide simulation evidence and empirical support for the efficiency gains achieved by our regression-adjusted LATE estimator.
We compare it with both the one without any adjustment and the one obtained by the standard 2SLS, and confirm sizable efficiency gains that can be achieved by regression adjustments. In the empirical application, we revisit the experiment with a CAR design in Dupas et al. (2018). We find that, by just using the same two covariates adopted in that paper, over nine outcome variables, the standard errors of our adjusted LATE estimator are on average around 7% lower than those without adjustments. For some outcome variables, regression adjustments can reduce the standard errors by about 15%. Compared with the 2SLS estimators, the standard errors of our estimators are generally smaller as well, although by a smaller margin. In a recent work, Bugni and Gao (2021) considered inference for LATE estimators under CARs, but they did not use covariates in addition to the stratum indicator for regression adjustments. In one section of their paper, Ansel et al. (2018) also studied inference on a LATE estimator via 2SLS with covariates under CARs, which can be viewed as a linearly adjusted LATE estimator. We complement their work by proposing a novel LATE estimator that includes 2SLS as a special case but also allows other nonlinear adjustments, deriving the optimal coefficient for the (potentially misspecified) linear adjustment, and developing a procedure to further improve the initial (potentially misspecified) linear and nonlinear adjustments. The final adjusted estimator of the LATE is guaranteed to be weakly more efficient than both the initially adjusted and the unadjusted estimators, the latter of which is just Bugni and Gao's (2021) fully saturated estimator. It is also guaranteed to be weakly more efficient than the 2SLS estimator with control variables, which is commonly used in empirical research. We further study nonparametric and high-dimensional adjustments, which are completely new relative to these two papers. Ren and Liu (2021) studied the regression-adjusted LATE estimator in completely randomized experiments for a binary outcome using finite-population asymptotics. We differ from their work by considering the regression-adjusted estimator in covariate-adaptive randomizations for a general outcome using superpopulation asymptotics. It is possible to extend such work to the LATE. Alternatively, we provide a way to improve efficiency in the data analysis stage, which is practically useful for researchers who are unable to run pilot experiments due to budget constraints, or who analyze someone else's datasets or conduct subsample analyses. Furthermore, although optimal designs can ensure the covariates used in the randomization are well balanced, their updated versions obtained in future waves usually are not, as pointed out by Bruhn and McKenzie (2009). Our methodology can therefore complement 'optimal' randomization by using these updated covariates to further improve efficiency. In general, experimenters may miss the opportunity to achieve optimality in the design stage; our method gives them a second chance to improve the precision of their estimators in the data analysis stage. The rest of the paper is organized as follows. Section 2 lays out our setup, and Section 3 introduces our regression-adjusted LATE estimator τ̂. We establish the asymptotic properties (asymptotic normality, among others) of τ̂ in Section 4. We then examine the efficiency of τ̂ in the contexts of parametric, nonparametric, and high-dimensional models in Sections 5, 6, and 7, respectively.
We conduct Monte Carlo simulations in Section 8 and present an empirical application in Section 9. Section 10 concludes. Some implementation details for the sieve and Lasso regressions and the proofs of the theoretical results are included in the Appendix.

Setup
The observed outcome is Y_i = Y_i(1) D_i + Y_i(0)(1 − D_i), where Y_i(1), Y_i(0) are the potential outcomes for individual i's hypothetical treated and untreated outcome, respectively, and D_i is a binary random variable indicating whether individual i received the treatment (D_i = 1) or not (D_i = 0) in the actual study. One can link D_i to the assignment status A_i in the following way: D_i = D_i(1) A_i + D_i(0)(1 − A_i), where D_i(a) denotes the potential treatment take-up when the assignment is A_i = a. Consider a CAR with n individuals; that is, a researcher can observe the data X^(n) := (X_1, ..., X_n), S^(n) := (S_1, ..., S_n), and A^(n) := (A_1, ..., A_n). We make the following assumptions on the data generating process (DGP) and the treatment assignment rule.

Assumption 1. (i) For each i, the data (Y_i(1), Y_i(0), D_i(1), D_i(0), S_i, X_i) are i.i.d. draws from a fixed population; we allow X_i and S_i to be dependent. (ii) The treatment assignments A^(n) are generated based only on the strata indicators S^(n). (iii) Suppose that p(s) is fixed with respect to n and positive for every s ∈ S. (iv) Let π(s) denote the propensity score for stratum s. Then, c < min_{s∈S} π(s) ≤ max_{s∈S} π(s) < 1 − c for some constant c ∈ (0, 0.5), and B_n(s)/n(s) = o_p(1) for every s ∈ S. (v) There are no defiers: D_i(1) ≥ D_i(0) almost surely. (vi) A standard moment condition holds for the potential outcomes.

Several remarks are in order. First, Assumption 1(ii) implies the treatment assignments A^(n) are generated based only on the strata indicators. Second, Assumption 1(iii) imposes that the sizes of the strata are balanced. Third, Bugni et al. (2018) show that Assumption 1(iv) holds under several covariate-adaptive treatment assignment rules such as simple random sampling (SRS), biased-coin design (BCD), adaptive biased-coin design (WEI), and stratified block randomization (SBR); for completeness, we briefly repeat their descriptions below. Note we only require B_n(s)/n(s) = o_p(1), which is weaker than the assumption imposed by Bugni et al. (2018) but the same as that imposed by Bugni et al. (2019) and Zhang and Zheng (2020). Fourth, Assumption 1(v) implies there are no defiers. Last, Assumption 1(vi) is a standard moment condition.

In this section, we propose a regression-adjusted LATE estimator for τ and connect it to the standard 2SLS estimator and the estimators developed in the recent literature. For a = 0, 1, let μ_D(a, s, x) and μ_Y(a, s, x) denote the true conditional expectations of D_i(a) and Y_i(D_i(a)), respectively, given (S_i, X_i) = (s, x). For a = 0, 1, we suppose that working models μ̄_D(a, s, x) and μ̄_Y(a, s, x) are employed, which may differ from the true conditional expectations μ_D(a, s, x) and μ_Y(a, s, x), respectively. We also suppose that estimation is based on the working model; let μ̂_D(a, s, x) and μ̂_Y(a, s, x) be the estimated functions. In CAR, the propensity score is usually known or can be consistently estimated by π̂(s) = n_1(s)/n(s), where n_1(s) is the number of treated units in stratum s and n(s) is the stratum size. The proposed estimator of the LATE is τ̂, defined in (3.2)-(3.4). It is worth noting that, as π̂ is a consistent estimator of the propensity score, our proposed LATE estimator τ̂ is consistent for τ even when the working models μ̄_D(a, s, x) and μ̄_Y(a, s, x) are misspecified. To the best of our knowledge, we are the first to apply doubly robust methods to study LATEs under CARs. Our analysis takes into account the cross-sectional dependence caused by the randomization and is thereby different from the double robustness literature, which mostly focuses on observational data.
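To make the construction concrete, the sketch below implements a doubly robust LATE estimator of the general form the text describes: a ratio of augmented inverse-propensity-weighted (AIPW) estimates of the assignment effects on the outcome and on take-up, with the propensity score estimated stratum by stratum as π̂(s) = n_1(s)/n(s). Because the displayed formulas (3.2)-(3.4) are not reproduced here, this is an illustration of the generic doubly robust construction under the stated setup, not a verbatim transcription of the paper's estimator; all names are ours.

```python
# Sketch of a regression-adjusted (AIPW-type) LATE estimator under CAR.
# mu_y1/mu_y0 and mu_d1/mu_d0 are fitted working models evaluated at
# (a, S_i, X_i) for a = 1, 0; the exact estimator (3.2)-(3.4) may differ.
import numpy as np

def late_aipw(y, d, a, s, mu_y1, mu_y0, mu_d1, mu_d0):
    """y: outcome, d: take-up, a: assignment, s: stratum labels (arrays)."""
    y, d, a, s = map(np.asarray, (y, d, a, s))
    strata = np.unique(s)
    pi_by_s = {v: a[s == v].mean() for v in strata}  # pi_hat(s) = n_1(s)/n(s)
    pi = np.array([pi_by_s[v] for v in s])

    def aipw(obs, m1, m0):
        # augmented IPW contrast between assigned and unassigned arms
        return (a * (obs - m1) / pi
                - (1 - a) * (obs - m0) / (1 - pi)
                + m1 - m0)

    G = aipw(y, mu_y1, mu_y0).mean()  # adjusted ITT effect on the outcome
    H = aipw(d, mu_d1, mu_d0).mean()  # adjusted ITT effect on take-up
    return G / H
```

The double robustness is visible in the code: replacing the fitted working models by zeros collapses the estimator to a pure inverse-propensity-weighted (unadjusted) one, while accurate fits shrink the variance of the averaged terms without changing the probability limit.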
For different choices of working models, the estimator in (3.2) can be interpreted as 2SLS with or without interaction effects. First, with an additional covariate X_i, empirical researchers usually run a 2SLS regression of Y_i on D_i, X_i, {1{S_i = s}}_{s∈S} using A_i as the IV, and use the coefficient on D_i, denoted τ̂_2sls, as the point estimate of the LATE. By indirect least squares, τ̂_2sls can be written as the ratio of the reduced-form coefficient to the first-stage coefficient, whose stratum-specific intercepts are β̂_s and θ̂_s when S_i = s, and where n_1 and n_0 are the numbers of treated and control units, respectively. When the covariate-adaptive randomization achieves strong balance (such as BCD and SBR), so that π(S_i) = π̂(S_i) = π, n_1 = nπ, and n_0 = n(1 − π), such a 2SLS estimator is a special case of our estimator for a particular choice of working models. Second, our framework allows the assignment probability π(·) to vary across strata. We can therefore follow Bugni and Gao (2021) and run 2SLS with a full set of interactions, where (β̂_{x,s}, β̂_s) are the estimated coefficients on X_i and the intercept from the OLS regression of D_i on A_i, X_i, 1 using observations in stratum S_i = s, (θ̂_{x,s}, θ̂_s) are the estimated coefficients on X_i and the intercept from the OLS regression of Y_i on A_i, X_i, 1 using observations in stratum S_i = s, and n_1(s) and n_0(s) are the numbers of treated and control units in stratum s. The LATE estimator can then be written as a weighted average of the stratum-wise estimators τ̂_2sls,2(s), i.e., τ̂_2sls,2 = Σ_{s∈S} Π̂_0(s) τ̂_2sls,2(s), where Π̂_0(s) = p̂(s)Π̂_2(s) / Σ_{s∈S} p̂(s)Π̂_2(s), p̂(s) = n(s)/n, and n(s) is the number of units in stratum s. This is a reasonable estimator for the LATE because, when there is no X, it reduces to the fully saturated estimator proposed by Bugni and Gao (2021). In addition, with covariates X, it is a special case of our estimator defined in (3.2) for a suitable choice of working models. This also means Bugni and Gao's (2021) fully saturated estimator is a special case of our estimator without additional covariates.
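The weighted-average representation above is easy to verify in code: with weights Π̂_0(s) proportional to p̂(s) times the stratum first-stage effect, the weighted average of stratum-wise Wald ratios collapses to a ratio of weighted ITT effects. The sketch below (our notation, not the paper's code) makes this explicit.

```python
# Sketch: fully saturated IV estimator as a weighted average of stratum-wise
# Wald ratios, with weights proportional to p_hat(s) * first-stage effect.
import numpy as np

def saturated_late(y, d, a, s):
    strata = np.unique(s)
    itt_y, itt_d, p = [], [], []
    for v in strata:
        m = (s == v)
        itt_y.append(y[m & (a == 1)].mean() - y[m & (a == 0)].mean())
        itt_d.append(d[m & (a == 1)].mean() - d[m & (a == 0)].mean())
        p.append(m.mean())                      # p_hat(s) = n(s)/n
    itt_y, itt_d, p = map(np.asarray, (itt_y, itt_d, p))
    # With Pi_0(s) ∝ p_hat(s) * ITT_D(s), the weighted average of Wald
    # ratios equals sum_s p(s)*ITT_Y(s) / sum_s p(s)*ITT_D(s).
    return (p * itt_y).sum() / (p * itt_d).sum()
```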
Asymptotic Properties
We first make the following high-level assumptions on the regression adjustments.

Assumption 2. For a = 0, 1, s ∈ S, and b = D, Y, the discrepancies μ̂_b(a, s, X_i) − μ̄_b(a, s, X_i) between the feasible adjustments and the working models are asymptotically negligible, in the sense that the relevant within-cell averages are o_p(n^{-1/2}) and the corresponding second moments are o_p(1), uniformly over s ∈ S and a = 0, 1.

Assumption 2 is mild. Consider a linear working model μ̄_Y(a, s, X_i) = X_i^⊤ β_{a,s}, where the coefficient β_{a,s} may vary across treatment statuses and strata. Its estimator μ̂_Y(a, s, X_i) can be written as X_i^⊤ β̂_{a,s}, where β̂_{a,s} is an estimator of β_{a,s}. Then, Assumption 2(i) requires a maximal (over s ∈ S and a = 0, 1) discrepancy to vanish, which holds whenever β̂_{a,s} →_p β_{a,s}. A similar remark applies to Assumption 2(ii). In order to write down the limit distribution of τ̂, we need additional notation; given it, Theorem 4.1 below shows that τ̂ is asymptotically normal, with asymptotic variance denoted σ². Next, we propose an estimator σ̂² of σ²; recall Ξ_{H,i} defined in (3.3), in terms of which σ̂² is constructed.

Theorem 4.1. (i) Suppose Assumptions 1 and 2 hold. Then √n(τ̂ − τ) converges in distribution to N(0, σ²), and σ̂² →_p σ². (ii) In addition, if the working models are correctly specified, i.e., μ̄_b(a, s, x) = μ_b(a, s, x) for all (a, b, s, x) ∈ {0, 1} × {D, Y} × SX, where SX is the joint support of (S, X), then the asymptotic variance σ² achieves the minimum.

Theorem 4.1(i) establishes the limit distribution of our adjusted LATE estimator and gives a consistent estimator of its asymptotic variance. This variance depends on the working models μ̄_b(a, s, x) for (a, b) ∈ {0, 1} × {D, Y}. Theorem 4.1(ii) further shows the asymptotic variance is minimized when the working models are correctly specified. In fact, in this case, the asymptotic variance of τ̂ coincides with the semiparametric efficiency bound for the LATE under SRS as derived by Frölich (2007). This means that, for such a randomization scheme, our adjusted estimator is semiparametrically efficient. For other randomization schemes, it is still unknown whether the semiparametric efficiency bound for the LATE remains unchanged; deriving the semiparametric efficiency bound for the LATE under general CARs is outside the scope of this paper and is left for future research. Second, when there are no adjustments, so that μ̄_Y(·) and μ̄_D(·) are just zero, our estimator coincides, as previously noted, with Bugni and Gao's (2021) fully saturated estimator. Indeed, we can verify, by some tedious calculation, that σ² defined above is the same as the asymptotic variance of the fully saturated estimator derived by Bugni and Gao (2021) (derivation available upon request). Bugni and Gao (2021) have shown that our estimator without adjustments is weakly more efficient than the strata fixed effects and two-sample IV estimators. In the next section, we show that, with adjustments, we can further improve the efficiency, even when the working models are potentially misspecified.

Parametric Adjustments
In this section, we consider estimating μ_b(a, s, x) for a = 0, 1, s ∈ S, and b = D, Y via parametric regressions. Note we do not require the parametric model to be correctly specified. Suppose the working models take the form μ̄_Y(a, s, x) = Λ^Y_{a,s}(x; θ_{a,s}) and μ̄_D(a, s, x) = Λ^D_{a,s}(x; β_{a,s}), where Λ^b_{a,s}(·) is a known function of X_i up to some finite-dimensional parameter (i.e., θ_{a,s} and β_{a,s}). Researchers have the freedom to choose the functional forms of Λ^b_{a,s}(·), the parameter values of (θ_{a,s}, β_{a,s}), and the ways they are estimated. In fact, as the parametric models are potentially misspecified, different estimation methods of the same model can lead to distinct pseudo-true values. We discuss several detailed examples in Sections 5.1, 5.2, and 5.3 below. Here, we first focus on the general setup. Define the estimators of (θ_{a,s}, β_{a,s}) as (θ̂_{a,s}, β̂_{a,s}), and thus the corresponding feasible parametric regression adjustments μ̂_b(a, s, X_i).

Assumption 3. (i) For a = 0, 1 and s ∈ S, θ̂_{a,s} →_p θ_{a,s} and β̂_{a,s} →_p β_{a,s}. (ii) There exist a positive random variable L_i and a positive constant C > 0 such that, for all a = 0, 1 and s ∈ S, the map u ↦ Λ^b_{a,s}(X_i; u) is Lipschitz with coefficient L_i, and L_i satisfies a mild moment bound involving C.

Assumption 3(i) means our estimators are consistent. Assumption 3(ii) means the parametric models are smooth in their parameters, which is true for many widely used regression models such as linear, logit, and probit regressions. This restriction can be further relaxed to allow for non-smoothness under less intuitive entropy conditions. Theorem 5.1 generalizes the intuition in (4.1) to general parametric models: Assumption 2 holds for parametric models as long as the parameters are consistently estimated.

Optimal Linear Adjustments
Suppose, for a = 0, 1 and s ∈ S, μ̄_Y(a, s, X) = Ψ_{i,s}^⊤ t_{a,s} and μ̄_D(a, s, X) = Ψ_{i,s}^⊤ b_{a,s}, where Ψ_{i,s} = Ψ_s(X_i) is a function whose form can vary across s ∈ S. The restriction that the function Ψ_s(·) does not depend on a = 0, 1 is innocuous: if it does, we can stack the two sets of regressors. Similarly, it is also innocuous to impose that the function Ψ_s(·) is the same for modeling μ̄_Y(a, s, X) and μ̄_D(a, s, X). The asymptotic variance of the adjusted LATE estimator τ̂ is denoted σ², which depends on (μ̄_Y(a, s, X), μ̄_D(a, s, X)) and thus on (t_{a,s}, b_{a,s}). The following theorem characterizes the optimal linear coefficients that minimize the asymptotic variance of τ̂ over all possible (t_{a,s}, b_{a,s}).

Assumption 4. The eigenvalues of the stratum-wise variance matrix of Ψ_{i,s} are bounded and bounded away from zero, where, for a generic symmetric matrix A, λ_min(A) and λ_max(A) denote the minimum and maximum eigenvalues of A.

Assumption 4 requires that the regressor Ψ_{i,s} not contain stratum-invariant regressors such as the constant term. In fact, (3.3) and (3.4) imply that our estimator is numerically invariant to stratum-specific location shifts of the adjustments.
Theorem 5.2. Suppose Assumptions 1 and 4 hold. Then, the asymptotic variance of τ̂ is minimized over all linear coefficients (t_{a,s}, b_{a,s}) on a set Θ* of optimal values, characterized by the first-order conditions in the proof.

The optimality result in Theorem 5.2 relies on two key restrictions: (1) the regressor Ψ_{i,s} is the same for treated and control units, and (2) both the adjustments μ̄_Y(a, s, X) and μ̄_D(a, s, X) are linear. The first restriction is innocuous, as we can stack up regressors for treated and control units as previously mentioned; it does, however, rule out the scenario in which some regressors are available only for treated or only for control units. The second restriction means it is possible to have nonlinear adjustments that are more efficient. We will come back to this point in Sections 5.2, 5.3, and 6. In view of Theorem 5.2, the optimal linear coefficients are not unique; in order to achieve optimality, we only need to consistently estimate one point in Θ*. For the rest of the section, we choose (θ^LP_{a,s}, β^LP_{a,s}), with the corresponding optimal linear adjustments given in (5.4). The feasible linear adjustments are defined in (5.6) via the estimates (θ̂^LP_{a,s}, β̂^LP_{a,s}), where s ranges over S = {1, ..., S} for some integer S > 0. It is clear that θ̂^LP_{a,s} and β̂^LP_{a,s} are the coefficients on Ψ_{i,s} 1{S_i = s} 1{A_i = a} in two linear OLS regressions, one for Y_i and one for D_i, on the full set of such interactions.

Theorem 5.3. Suppose Assumptions 1 and 4 hold. Then, {μ̄_b(a, s, X_i)}_{b=D,Y, a=0,1, s∈S} and {μ̂_b(a, s, X_i)}_{b=D,Y, a=0,1, s∈S} defined in (5.4) and (5.6), respectively, satisfy Assumption 2. Denote the adjusted LATE estimator with the adjustments defined in (5.6) as τ̂_LP. Then, all the results in Theorem 4.1(i) hold for τ̂_LP. In addition, τ̂_LP is the most efficient among all linearly adjusted LATE estimators, and in particular, is weakly more efficient than the LATE estimator with no adjustments.

Recall the two 2SLS estimators from Section 3. Theorem 5.3 shows our τ̂_LP is weakly more efficient than both by letting Ψ_{i,s} = X_i. In general, τ̂_LP and the 2SLS estimators are not asymptotically equivalent, because the optimal linear adjustments (μ̂_D(a, s, x) and μ̂_Y(a, s, x) defined in (5.6)) can differ between the treated and control groups, while those for the 2SLS estimators are forced to be the same.

Linear and Logistic Regressions
It is also possible to consider a linear model for μ̄_Y(a, s, X_i) and a logistic model for μ̄_D(a, s, X_i), i.e., μ̄_Y(a, s, X_i) = Ψ̃_{i,s}^⊤ θ_{a,s} and μ̄_D(a, s, X_i) = λ(Ψ̃_{i,s}^⊤ β_{a,s}), where λ(u) = exp(u)/(1 + exp(u)) is the logistic CDF. As the model for μ̄_D(a, s, X_i) is nonlinear, the optimality result established in the previous section does not apply. We can consider fitting the linear and logistic models by OLS and MLE, respectively, and call this method the OLS-MLE adjustment. Specifically, define θ̂^OLS_{a,s} and β̂^MLE_{a,s} (collected in (5.10)) as the OLS and MLE estimates of the coefficients on Ψ̃_{i,s} 1{S_i = s} 1{A_i = a} in the corresponding linear and logistic regressions, respectively. In the OLS and MLE methods, we do allow the regressor Ψ̃_{i,s} to contain the constant term. Because our adjusted LATE estimator is invariant to stratum-specific location shifts of the adjustments, the adjustments with and without the estimated intercepts ĥ^OLS_{a,s} (the coefficients on the constant terms in Ψ̃_{i,s}) produce the exact same LATE estimator. In addition, we can obtain θ̂^OLS_{a,s} via an OLS regression that is exactly the same as (5.7), and the population logistic likelihood admits β^MLE_{a,s} as its unique maximizer.
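A minimal sketch of the OLS-MLE fitting step just described: within each assignment-by-stratum cell, fit OLS for the outcome and a logistic MLE for take-up, then evaluate the fitted working models at all units in the stratum. The function and variable names are ours, not the paper's.

```python
# Sketch: OLS-MLE working models -- linear OLS for Y and logistic MLE for D,
# fit within each (assignment a, stratum s) cell.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ols_mle(y, d, a, s, psi):
    """psi: (n, k) matrix of regressors Psi_{i,s}; returns dicts of fitted
    values mu_y[a_val], mu_d[a_val] evaluated at every observation."""
    n = len(y)
    mu_y = {0: np.zeros(n), 1: np.zeros(n)}
    mu_d = {0: np.zeros(n), 1: np.zeros(n)}
    for v in np.unique(s):
        m_s = (s == v)
        Z_all = np.column_stack([np.ones(m_s.sum()), psi[m_s]])
        for a_val in (0, 1):
            cell = m_s & (a == a_val)
            Z = np.column_stack([np.ones(cell.sum()), psi[cell]])
            # linear OLS for the outcome working model
            beta_y, *_ = np.linalg.lstsq(Z, y[cell], rcond=None)
            mu_y[a_val][m_s] = Z_all @ beta_y
            # logistic MLE for take-up (penalty=None needs scikit-learn >= 1.2)
            if d[cell].min() == d[cell].max():   # degenerate cell: constant D
                mu_d[a_val][m_s] = float(d[cell].mean())
            else:
                logit = LogisticRegression(penalty=None, solver="lbfgs")
                logit.fit(psi[cell], d[cell])
                mu_d[a_val][m_s] = logit.predict_proba(psi[m_s])[:, 1]
    return mu_y, mu_d
```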
In fact, as Theorem 4.1 shows, if the adjustments are correctly specified, then the adjusted LATE estimator can achieve the global minimum asymptotic variance. Compared with the linear probability model considered in Section 5.1, the logistic model is expected to be less misspecified, especially when the regressor Ψ i contains technical terms of X i such as interactions and quadratic terms. Third, we will further justify the above intuition in Section 6 below, in which we let Ψ i,s be the sieve basis functions with an increasing dimension and show that the OLS-MLE method can consistently estimate the correct specification. Fourth, one theoretical shortcoming of the OLS-MLE adjustment is that, unlike the optimal linear adjustment, it is not guaranteed to be more efficient than no adjustment. We address this issue in Section 5.3 below. Further Efficiency Improvement where d ψ is the dimension of ψ i,s . Then, the OLS-MLE adjustment can be written as Because our estimator is invariant to stratum-level location shift of adjustments, the OLS-MLE adjustments and the linear adjustments produce the same estimator. Similarly, we can replicate no adjustments and the optimal linear adjustments with Φ i,s defined in (5.12) as regressors by letting Based on Theorem 5.2, we can further improve all three types of adjustments by setting the is unknown, we can replace it by its estimate proposed in Section 5.2, i.e., definê Then, we define the estimators of θ LP * a,s and β LP * a,s aŝ (5.14) The corresponding feasible adjustments arê Assumption 6. Suppose Assumption 4 holds for Φ i,s defined in (5.12). Theorem 5.5 shows that by refitting OLS-MLE adjustment in a linear regression with optimal linear coefficients, we can further improve the efficiency of the adjusted LATE estimator. As a byproduct,τ LP * is guaranteed to be weakly more efficient than the LATE estimator without any adjustments. Nonparametric Adjustments In this section, we consider the nonparametric regression as the adjustments for our LATE estimator. Specifically, we use linear and logistic sieve regressions to estimate the true specifications µ Y (a, s, X i ) and µ D (a, s, X i ), respectively. For implementation, the nonparametric adjustment is exactly the same as OLS-MLE adjustment studied in Section 5.2. Theoretically, we will let the regressorsΨ i,s in (5.9) be sieve basis functions whose dimensions will diverge to infinity as sample size increases. For notation simplicity, we suppress the subscript a and denote the sieve regressors as Ψ i,n ∈ ℜ hn , where the dimension h n can diverge with the sample size and the corresponding sieve estimators asθ N P a,s andβ N P a,s as (5.10) whereΨ i,s is replaced by Ψ i,n . The corresponding feasible regression adjustments arê and the corresponding adjusted LATE estimator is denoted asτ N P . Assumption 7. (i) There exist constants 0 < c < C < ∞ such that with probability approaching one, (ii) For a = 0, 1, there exists an h n × 1 vector θ N P a,s and β N P a,s such that for (iii) For a = 0, 1, there exists a constant c ∈ (0, 0.5) such that Theorem 6.1. Suppose Assumptions 1 and 7 hold. Then, {μ b (a, s, X i )} b=D,Y,a=0,1,s∈S defined in (6.1) and µ b (a, s, X) = µ b (a, s, X) satisfy Assumption 2. Then, all the results in Theorem 4.1(i) hold forτ N P . In addition,τ N P achieves the minimum asymptotic variance characterized in Theorem 4.1(ii). The OLS-MLE and nonparametric adjustments are numerically identical if the same set of re-gressors are used. 
The OLS-MLE and nonparametric adjustments are numerically identical if the same set of regressors is used. Theorem 6.1 then shows that the OLS-MLE adjustment with technical regressors performs well because it can closely approximate the correct specification. Under the asymptotic framework in which the dimension of the regressors diverges to infinity and the approximation error converges to zero, the OLS-MLE adjustment can be viewed as the nonparametric adjustment, which achieves the minimum asymptotic variance of the adjusted LATE estimator.

High-Dimensional Adjustments
In this section, we consider the case in which the regressor Ψ_{i,n} ∈ ℝ^{p_n} is high-dimensional, so that p_n ≫ n. In this case, we can no longer use the OLS-MLE (nonparametric) adjustment method; instead, we need to regularize the least squares and logistic regressions. Specifically, we define the Lasso and logistic Lasso estimators θ̂^HD_{a,s} and β̂^HD_{a,s} as in (7.1), and denote the corresponding adjusted LATE estimator τ̂_HD, where {ϱ_{n,a}(s)}_{a=0,1, s∈S} are tuning parameters and Ω̂_b = diag(ω̂^b_1, ..., ω̂^b_{p_n}) is a diagonal matrix of data-dependent penalty loadings for b = D, Y. We provide more detail about Ω̂_b in Section A. We maintain the following assumptions for the Lasso and logistic Lasso regressions.

Assumption 8 collects approximate-sparsity, moment, and regularity conditions standard in the high-dimensional literature. In particular: (v) there exists a constant c ∈ (0, 0.5) such that c ≤ inf_{a=0,1, s∈S, x∈Supp(X)} μ_D(a, s, x) and the corresponding supremum is bounded by 1 − c; and (vi) letting ℓ_n be a sequence that diverges to infinity, there exist two constants κ_1 and κ_2 such that, with probability approaching one, sparse-eigenvalue conditions hold, where ||v||_0 denotes the number of nonzero components in v. Assumption 8 is standard in the literature, and we refer interested readers to Belloni, Chernozhukov, Fernández-Val, and Hansen (2017) for more discussion. Based on approximate sparsity, the Lasso can consistently estimate the correct specification, which then causes the adjusted LATE estimator to achieve the minimum variance, by Theorem 4.1(ii).

Simulations
We consider three data generating processes (DGPs). In DGP (iii), Z_i are i.i.d. standardized Beta(2, 2) random variables, S_i = Σ_{j=1}^{4} 1{Z_i ≤ g_j} for fixed cutoffs (g_1, g_2, g_3, g_4), and the covariates are jointly normal with a Toeplitz correlation matrix Ω. For each data generating process, we consider the following four randomization schemes, as in Zhang and Zheng (2020), with π(s) = 0.5 for s ∈ S: (i) SRS: treatment assignment is generated as in Example 1; (ii) WEI: treatment assignment is generated as in Example 2; (iii) BCD: treatment assignment is generated as in Example 3 with λ = 0.75; and (iv) SBR: treatment assignment is generated as in Example 4. We compute the true LATE τ_0 by Monte Carlo simulation, with a sample size of 10,000 and 1,000 Monte Carlo replications. We test the true hypothesis H_0: τ = τ_0 by the test described in Theorem 4.1 in order to gauge the size of the test. Power is investigated by testing the false hypothesis H_0: τ = τ_0 + 1. All tests are carried out at the 5% level of significance.

Estimators for Comparison
For DGPs (i)-(ii), we consider the following estimators. (i) NA: the estimator with no adjustments. (ii) 2SLS: the two-stage least squares (2SLS) estimator of τ; that is, we run the IV regression of Y_i on D_i, X_i, and {1{S_i = s}}_{s∈S}, where D_i is instrumented by A_i, and use the IV heteroskedasticity-robust standard error for inference. (iii) LP: the optimal linear estimator with Ψ_{i,s} = X_i and the pseudo-true values estimated by θ̂^LP_{a,s} and β̂^LP_{a,s} defined in (5.5). (iv) LG: the OLS-MLE estimator with Ψ_{i,s} = X_i and the pseudo-true values estimated by θ̂^OLS_{a,s} and β̂^MLE_{a,s} defined in (5.10). (v) F: the further efficiency-improving estimator with Ψ_{i,s} = X_i and the pseudo-true values estimated by θ̂^LP*_{a,s} and β̂^LP*_{a,s} defined in (5.14). (vi) NP: the nonparametric estimator outlined in Section 6.
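Putting the pieces together, one replication of such a simulation exercise might look like the following toy sketch: a simplified one-covariate DGP, SBR assignment with π(s) = 0.5, one-sided imperfect compliance, and the unadjusted estimator implemented as saturated_late in the earlier sketch. The DGP here is ours and is far simpler than DGPs (i)-(iii) above.

```python
# Sketch: one Monte Carlo replication with SBR assignment and the NA
# (unadjusted) estimator; reuses saturated_late from the earlier sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
s = np.digitize(x, [-0.5, 0.0, 0.5])            # 4 strata from the covariate

# SBR: within each stratum, assign exactly half (pi(s) = 0.5) to treatment
a = np.zeros(n, dtype=int)
for v in np.unique(s):
    idx = rng.permutation(np.flatnonzero(s == v))
    a[idx[: len(idx) // 2]] = 1

d = (a == 1) & (rng.uniform(size=n) < 0.7)      # one-sided imperfect compliance
y = 1.0 * d + 0.5 * x + rng.normal(size=n)      # LATE = 1 for compliers

print(saturated_late(y, d.astype(float), a, s))  # should be close to 1
```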
For DGPs (i) and (ii), the sieve regressors are built from the covariates using t_1 and t_2, the sample medians of {X_{1,i}}_{i∈[n]} and {X_{2,i}}_{i∈[n]}, respectively, with the pseudo-true values estimated by θ̂^NP_{a,s} and β̂^NP_{a,s} defined in (6.1). For DGP (iii), we consider the estimator with no adjustments (NA) and the high-dimensional Lasso estimators θ̂^HD_{a,s} and β̂^HD_{a,s} defined in (7.1) with Ψ_{i,n} = X_i. The implementation details are given in Section A.

Simulation Results
Table 1 presents the empirical sizes and powers for the true null H_0: τ = τ_0 and the false null H_0: τ = τ_0 + 1, respectively, under DGPs (i)-(iii). Note that none of the working models is correctly specified. Consider DGP (i). When N = 200, only the NA estimator is slightly undersized, while all other estimators have sizes close to the nominal level of 5%. This confirms that our estimation and inference procedures are robust to misspecification. In terms of power, the NA estimator has the lowest power, corroborating the belief that one should carry out regression adjustment whenever covariates correlate with the potential outcomes. The power of the 2SLS estimator is also relatively low, while the LP, LG, F, and NP estimators perform much better. In particular, the power of the LP estimator is slightly higher than that of the LG estimator even though a logistic model is less misspecified; this shows some robustness of the LP estimator. The power of the F estimator is considerably higher, which is consistent with our theory that the F estimator is weakly more efficient than the NA, LP, and LG estimators. The NP estimator enjoys the highest power, as a nonparametric model can approximate the true specification very well. When the sample size is increased to 400, the sizes and powers of all estimators improve, and all the observations above continue to hold. Similar patterns appear in DGP (ii). We now consider DGP (iii). In this high-dimensional setting, only the NA and HD estimators are feasible. When N = 200, both estimators have correct sizes, but the HD estimator has considerably higher power. When N = 400, the sizes of these two estimators remain relatively unchanged, while their powers improve with a diverging gap.

Practical Recommendation
We suggest using estimation method F, which is guaranteed to be weakly more efficient than the simple 2SLS, LP, and LG estimators. One can include linear, quadratic, and interaction terms of the original covariates as the regressors Ψ_{i,s}.

Empirical Application
Banking the unbanked is considered to be the first step toward broader financial inclusion, the focus of the World Bank's Universal Financial Access 2020 initiative.4 In a field experiment with a CAR design, Dupas et al. (2018) examined the impact of expanding access to basic saving accounts for rural households living in three countries: Uganda, Malawi, and Chile. In particular, apart from the intent-to-treat effects for the whole sample, they also studied the local average treatment effects for the households who actively used the accounts. This section applies our regression-adjusted estimators to the same dataset to examine the LATEs of opening bank accounts on savings, a central outcome of interest in their study. We focus on the experiment conducted in Uganda. The sample consists of 2,160 households who were randomized with a CAR design. Specifically, within each of 41 strata formed by gender, occupation, and bank branch, half of the households were randomly allocated to the treatment group, and the other half to the control group. Households in the treatment group were then offered a voucher to open bank accounts with no financial costs.
However, not every treated household ever opened and used a savings account for deposits. In fact, among the treated households, only 41.87% opened an account and made at least one deposit within 2 years. Subject compliance is therefore imperfect in this experiment. The target fraction of treated households is 1/2. Because sup_{s∈S} |D_n(s)/n(s)| ≈ 0.056, it is plausible to claim that Assumption 1(iv) is satisfied. Since households in the control group needed to pay the fees of opening accounts while the treated ones bore no financial costs, the no-defiers condition in Assumption 1(v) also plausibly holds in this case. One of the key analyses in Dupas et al. (2018) is to estimate the treatment effects on savings for active users, defined as households who actually opened an account and made at least one deposit within 2 years. We follow their footsteps and estimate the same LATEs on savings balances.5 Specifically, for each item in the savings balance, we estimate the LATEs on savings for active users by the methods NA, 2SLS, LP, LG, F, and NP. To maintain comparability, for each outcome variable we keep X_i similar to those used in Dupas et al. (2018) for all the adjusted estimators.6 Table 2 presents the LATE estimates and their standard errors (in parentheses) estimated by these methods. These results lead to four observations. First, consistent with the theoretical and simulation results, the standard errors of the LATE estimates with regression adjustments are lower than those without adjustments. This observation holds for all the outcome variables and all the regression adjustment methods. Over nine outcome variables, the standard errors estimated by regression adjustments are on average around 7% lower than those without adjustment. In particular, when the outcome variable is total informal savings, the standard errors obtained via the further-improvement adjustment (the F method) are about 14.9% lower than those without adjustment. This means that regression adjustments, with just the same two covariates used in Dupas et al. (2018), can achieve sizable efficiency gains in estimating the LATEs. Second, the standard errors of the regression-adjusted LATE estimates are mostly lower than those obtained by the usual 2SLS procedure. In particular, when the outcome variable is savings with friends/family, the standard errors estimated by the optimal linear adjustment (the LP method) are around 6.9% lower than those obtained by the two-stage least squares. This means that, compared with our regression-adjusted methods, the two-stage least squares is less efficient for estimating the LATEs under CAR. Third, the standard errors of the LATE estimates across the regression adjustments are similar in size, which implies that all the regression adjustments achieve comparable efficiency gains in this application. (Footnote 5: The savings balance includes savings in formal financial institutions, mobile money, cash at home or in a secret place, savings in ROSCA/VSLA, savings with friends/family, other cash savings, total formal savings, total informal savings, and total savings; see Dupas et al. (2018) for details. Our analysis uses these variables obtained from the first follow-up survey.)
Conclusion In this paper, we address the problem of estimation and inference of local average treatment effects under covariate-adaptive randomizations using regression adjustments. We first propose a regression-adjusted LATE estimator under CARs. We then derive its limit theory and show that, even under the potential misspecification of adjustments, our estimator maintains its consistency and its inference method still achieves an asymptotic size equal to the nominal level under the null. When the adjustment is correctly specified, our LATE estimator achieves minimum asymptotic variance. We also examine the efficiency gains brought by regression adjustments in parametric (both linear and nonlinear), nonparametric, and high-dimensional forms. When the adjustment is parametrically misspecified, we construct a new estimator by combining the linear and nonlinear adjustments. This new estimator is shown to be weakly more efficient than all the parametrically adjusted estimators, including the one without any adjustment. Simulations and empirical application confirm efficiency gains that materialize from regression adjustments relative to both the estimator without adjustment and the standard two-stage least squares estimator. A Implementation Details for Sieve and Lasso Regressions Sieve regressions. We provide more details on the sieve basis. Recall Ψ i,n ≡ (b 1,n (x), · · · , b hn,n (x)) ⊤ , where {b h,n (·)} h∈ [hn] are h n basis functions of a linear sieve space, denoted as B. Given that all the elements of vector X are continuously distributed, the sieve space B can be constructed as follows. 1. For each element X (l) of X, l = 1, · · · , d x , where d x denotes the dimension of vector X, let B l be the univariate sieve space of dimension J n . One example of B l is the linear span of the J n dimensional polynomials given by Another example is the linear span of r-order splines with J n nodes given by where the grid −∞ = t 0 ≤ t 1 ≤ · · · ≤ t Jn ≤ t Jn+1 = ∞ partitions Supp(X (l) ) into J n + 1 subsets I j = [t j , t j+1 ) ∩ Supp(X (l) ), j = 1, · · · , J n − 1, I 0 = (t 0 , t 1 ) ∩ Supp(X (l) ), and I Jn = (t Jn , t Jn+1 ) ∩ Supp(X (l) ). 2. Let B be the tensor product of {B l } dx l=1 , which is defined as a linear space spanned by the functions dx l=1 g l , where g l ∈ B l . The dimension of B is then K ≡ d x J n if B l is spanned by J n dimensional polynomials. We refer interested readers to Hirano, Imbens, and Ridder (2003) and Chen (2007) for more details about the implementation of sieve estimation. Given the sieve basis, we can compute the {μ b (a, s, X i )} a=0,1,b=D,Y,s∈S following (6.1). Lasso regressions. We follow the estimation procedure and the choice of tuning parameter proposed by Belloni et al. (2017). We provide details below for completeness. Recall ̺ n,a (s) = c n a (s)F −1 N (1 − 1/(p n log(n a (s)))). We set c = 1.1 following Belloni et al. (2017). We then implement the following algorithm to estimateθ HD a,s andβ HD a,s : (ii) For k = 1, · · · , K, obtainσ Then, we have Next, we divide the proof into four steps. In the first step, we obtain the linear expansion of √ n(Ĝ − G). Based on the same argument, we can obtain the linear expansion of √ n(Ĥ − H). In the second step, we obtain the linear expansion of √ n(τ − τ ) and then prove the asymptotic normality. In the third step, we show the consistency ofσ. In the fourth step, we show that when µ(a, s, x) = µ(a, s, x) for all (a, s, x) ∈ {0, 1} × Supp(SX), the asymptotic variance σ 2 achieves the minimum. Step 1. 
Lemma J.1 gives the linear expansion of √n(Ĝ − G); combining (B.1), (B.2), and (B.3), we obtain the corresponding linear expansion for τ̂.

Step 2. Lemma J.2 implies the joint convergence of the three terms in the expansion and that the three terms are asymptotically independent. This further implies the limit distribution of Ĥ, and hence the asymptotic normality of τ̂.

Step 3. We aim to show the consistency of σ̂².

Step 4. The asymptotic variance decomposes into a component that does not depend on the working models μ̄_b(a, S_i, X_i) for a = 0, 1 and b = D, Y, plus a nonnegative term. The equality holds when A_a(S_i, X_i) = 0 for i = 1, ..., n and a = 0, 1, which can be achieved by correctly specified working models.

C Proof of Theorem 5.1
The proof is divided into two steps. In the first step, we show Assumption 2(i); in the second step, we establish Assumptions 2(ii) and 2(iii).

Step 1. Recall that {X^s_i}_{i∈[n]} is generated independently from the distribution of X_i given S_i = s, and so is independent of the assignments. For any ε > 0, with probability approaching one (w.p.a.1), we have max_{s∈S} ||θ̂_{a,s} − θ_{a,s}|| ≤ ε. Therefore, on the event A_n(ε) ≡ {max_{s∈S} ||θ̂_{a,s} − θ_{a,s}|| ≤ ε, min_{s∈S} n_1(s) ≥ εn}, the relevant term can be bounded by the supremum of an empirical process over a shrinking parameter neighborhood. By Assumption 3, F is a VC class with a fixed VC index and envelope L_i. Therefore, for any δ > 0, by Chernozhukov, Chetverikov, and Kato (2014, Corollary 5.1), the supremum vanishes in probability. Letting n → ∞ followed by ε → 0 establishes the claim; the same reasoning applies to the remaining terms, and (C.1) holds.

D Proof of Theorem 5.2
Following Step 4 in the proof of Theorem 4.1, the asymptotic variance can be written as σ²* plus V((t_{a,s}, b_{a,s})_{a=0,1, s∈S}), where σ²* does not depend on (t_{a,s}, b_{a,s})_{a=0,1, s∈S}, and where, for a = 0, 1, (μ̃_Y(a, s, x), μ̃_D(a, s, x)) and (ν̃_Y(a, s, x), ν̃_D(a, s, x)) are defined in (4.2) and (B.4), respectively. In order to minimize V((t_{a,s}, b_{a,s})_{a=0,1, s∈S}), it suffices to minimize the summand for each s ∈ S. Solving the first-order condition yields the optimal coefficients, which concludes the proof.

E Proof of Theorem 5.3
In order to verify Assumption 2, by Theorem 5.1 it suffices to show that θ̂^LP_{a,s} →_p θ^LP_{a,s} and β̂^LP_{a,s} →_p β^LP_{a,s}. We focus on the former with a = 1. Let {W^s_i, X^s_i}_{i∈[n]} be generated independently from the joint distribution of (Y_i(D_i(1)), X_i) given S_i = s, and denote Ψ^s_{i,s} = Ψ_s(X^s_i). By the standard LLN, the relevant sample moments converge to their population counterparts. Similarly, we can show that θ̂^LP_{0,s} →_p θ^LP_{0,s} and β̂^LP_{a,s} →_p β^LP_{a,s} for a = 0, 1 and s ∈ S. Therefore, Assumption 2 holds, and thus all the results in Theorem 4.1 hold for τ̂_LP. The optimality result in the second half of Theorem 5.3 is then a direct consequence of Theorem 5.2.

F Proof of Theorem 5.4
Let {D^s_i(1), X^s_i}_{i∈[n]} be generated independently from the joint distribution of (D_i(1), X_i) given S_i = s, with Ψ^s_{i,s} = Ψ_s(X^s_i) and Ψ̃^s_{i,s} = (1, Ψ^{s,⊤}_{i,s})^⊤. Then, the sample logistic objective converges pointwise in b to its population counterpart. As the logistic likelihood function is concave in b, the pointwise convergence in b implies uniform convergence.

G Proof of Theorem 5.5
We note that the adjustments proposed in Theorem 5.5 are still parametric; therefore, in view of Theorem 5.1, it suffices to establish consistency of the estimated coefficients. We first show (G.1), a convergence result for the sample second-moment matrix of the stacked adjustments within each cell of size n_a(s). Let v, u ∈ ℝ⁴ be two arbitrary vectors such that ||u||₂ = ||v||₂ = 1. Then, by Hölder's inequality and the moment bounds established earlier, the corresponding bilinear form is asymptotically negligible. As the inequality holds for arbitrary u and v, it implies (G.1).
Similarly, we can show that … Following the same argument as in the proof of Theorem 5.3, we can show that … In addition, by Assumption 6, with probability approaching one, there exists a constant $c > 0$ such that … Combining (G.1), (G.2), and (G.3), we can show that … Similarly, we have $\hat{\beta}^{LP*}_{a,s} \xrightarrow{p} \beta^{LP*}_{a,s}$, which implies that all the results in Theorem 4.1 hold for $\hat{\tau}^{LP*}$. The optimality result in the second half of the theorem is a direct consequence of Theorem 5.2.

H Proof of Theorem 6.1

We focus on verifying Assumption 2 for $\hat{\mu}_D(a, s, X_i)$; the proof for $\hat{\mu}_Y(a, s, X_i)$ is similar and thus omitted. Following the proof of Theorem 5.4, we note that, for each $a = 0, 1$ and $s \in \mathcal{S}$, the data in cell $I_a(s)$, $\{D_i^s(a), X_i^s\}_{i \in [n]}$, can be viewed as i.i.d., following the joint distribution of $(D_i(a), X_i)$ given $S_i = s$, conditionally on $\{A_i, S_i\}_{i \in [n]}$. Then, following the standard logistic sieve regression in Hirano et al. (2003), we have $\max_{a=0,1,\, s \in \mathcal{S}} \|\hat{\beta}^{NP}_{a,s} - \beta^{NP}_{a,s}\|_2 = O_p(\sqrt{h_n/n_a(s)})$. Furthermore, we note that $\mathcal{F}$ has a bounded envelope, is of the VC-type with VC-index bounded above by $2h_n$, and has … Therefore, by Chernozhukov et al. (2014, Corollary 5.1), … Similarly, we can show $I_2 = o_p(n^{-1/2})$. In addition, we note that … Similarly, we have $III = o_p(n^{-1/2})$. Combining the bounds on $I$, $II$, and $III$ with (H.1), we have … which verifies Assumption 2(i).

I Proof of Theorem 7.1

We focus on verifying Assumption 2 for $\hat{\mu}_D(a, s, X_i)$; the proof for $\hat{\mu}_Y(a, s, X_i)$ is similar and thus omitted. Following the proof of Theorem 5.4, we note that, for each $a = 0, 1$ and $s \in \mathcal{S}$, the data in cell $I_a(s)$, $\{D_i^s(a), X_i^s\}_{i \in [n]}$, can be viewed as i.i.d., following the joint distribution of $(D_i(a), X_i)$ given $S_i = s$, conditionally on $\{A_i, S_i\}_{i \in [n]}$. Then, following the standard logistic Lasso regression in Belloni et al. (2017), … $\lambda(\Psi_{i,n}^\top \hat{\beta}^{HD}_{a,s}) - \lambda(\Psi_{i,n}^\top \beta^{HD}_{a,s}) - M_{a,s}(\beta^{HD}_{a,s}, \theta^{HD}_{a,s})$ … $n_0(s)$ … Following the argument in the proofs of Theorems 5.1 and 6.1, in order to show $I_1 = o_p(n^{-1/2})$, we only need to show … where $\varepsilon$ is an arbitrary but fixed constant and, for an arbitrary but fixed constant $C > 0$, … Furthermore, we note that $\mathcal{F}$ has a bounded envelope and … where $c_1, c_2$ are two fixed constants, $N(\cdot)$ is the covering number, $e_Q(f, g) = (Q|f - g|^2)^{1/2}$, and the supremum is taken over all discrete probability measures $Q$. Last, we have … Therefore, by Chernozhukov et al. (2014, Corollary 5.1), … The bounds for $I_2$, $II$, and $III$ can be established following the same argument as in the proof of Theorem 6.1; we omit the details for brevity. This leads to Assumption 2(i).

J Technical Lemmas Used in the Proof of Theorem 4.1

Lemma J.1. Suppose the assumptions in Theorem 4.1 hold. Then, we have … where, for $a = 0, 1$, …

Proof. We have … where the last equality is due to … Consider the first term of (J.1): … where the last equality is due to Assumption 2. Thus … In addition, we note that … Note that under Assumption 1(i), conditional on $\{S^{(n)}, A^{(n)}\}$, the distribution of … is the same as the distribution of the same quantity where units are ordered by strata and then ordered by $A_i = 1$ first and $A_i = 0$ second within strata. To this end, define $N(s) := \sum_{i=1}^{n} 1\{S_i < s\}$ and $F(s) := P(S_i < s)$.
Furthermore, independently for each $s \in \mathcal{S}$ and independently of $\{S^{(n)}, A^{(n)}\}$, let $\{X_i^s : 1 \leq i \leq n\}$ be i.n.i.d. with marginal distribution equal to the distribution of … Then, we have, for $s \in \mathcal{S}$, … In addition, we have … Combining this with the facts that $\max_{s \in \mathcal{S}} |\hat{\pi}(s) - \pi(s)| = o_p(1)$ and $\min_{s \in \mathcal{S}} \pi(s) > c > 0$ for some constant $c$, we have … Therefore, we have … The linear expansion of $R_{n,2}$ can be established in the same manner. For $R_{n,c}$, note that … Thus we have … We now consider the second term on the RHS of (J.3). First note that … where the second equality is by (3.1). Therefore, we have … Similarly, we have … Combining (J.3) and (J.7), we have … where the second equality holds because $\big(\frac{1}{\hat{\pi}(s)} - \frac{1}{\pi(s)}\big) \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \tilde{W}_i A_i 1\{S_i = s\} = o_p(1)$ and $\big(\frac{1}{\hat{\pi}(s)} - \frac{1}{\pi(s)}\big) \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \tilde{Z}_i (1 - A_i) 1\{S_i = s\} = o_p(1)$, due to the same argument as used in the proof for $R_{n,1}$.

Lemma J.2. Under the assumptions in Theorem 4.1, we have … $\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \Xi_2(S_i) \rightsquigarrow N(0, E\,\Xi_2^2(S_i))$, and the three terms are asymptotically independent.

Proof. Note that under Assumption 1(i), conditional on $\{S^{(n)}, A^{(n)}\}$, the distribution of … is the same as the distribution of the same quantity where units are ordered by strata and then ordered by $A_i = 1$ first and $A_i = 0$ second within strata. To this end, define $N(s) := \sum_{i=1}^{n} 1\{S_i < s\}$ and $F(s) := P(S_i < s)$. Furthermore, independently for each $s \in \mathcal{S}$ and independently of $\{S^{(n)}, A^{(n)}\}$, let $\{D_i^s : 1 \leq i \leq n\}$ be i.i.d. with marginal distribution equal to the distribution of $D \mid S = s$. Then, we have … In addition, since $\Xi_2(S_i)$ is a function of $\{S^{(n)}, A^{(n)}\}$, we have, arguing along the line of a joint distribution being the product of a conditional distribution and a marginal distribution, …

Proof. To derive the limit of $\frac{1}{n} \sum_{i=1}^{n} A_i \tilde{\Xi}_1^2(D_i, S_i)$, we first define … Then, we have … Next, following the same argument as in the proof of Lemma J.2, we have $\frac{1}{n_1(s)}$ … Last, following the same argument as in the proof of Lemma J.2, we have $\frac{1}{n_1(s)} \sum_{i \in I_1(s)}$ … where the last convergence is due to the fact that, conditionally on $S^{(n)}, A^{(n)}$, $\{X_i^s, \tilde{W}_i^s, \tilde{D}_i^s(1)\}_{i \in I_1(s)}$ is a sequence of i.i.d. random variables, so that the standard LLN is applicable. Combining all the results above, we have shown that …
2022-02-01T04:47:47.839Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "2d5ad371df393a0cb9fca77d035f336785c3d783", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2d5ad371df393a0cb9fca77d035f336785c3d783", "s2fieldsofstudy": [ "Mathematics", "Economics" ], "extfieldsofstudy": [] }
7696545
pes2o/s2orc
v3-fos-license
The Diverse and Dynamic Nature of Leishmania Parasitophorous Vacuoles Studied by Multidimensional Imaging

An important area in the cell biology of intracellular parasitism is the customization of parasitophorous vacuoles (PVs) by prokaryotic or eukaryotic intracellular microorganisms. We were curious to compare PV biogenesis in primary mouse bone marrow-derived macrophages exposed to carefully prepared amastigotes of either Leishmania major or L. amazonensis. While tight-fitting PVs house one or two L. major amastigotes, giant PVs house many L. amazonensis amastigotes. In this study, using multidimensional imaging of live cells, we compare and characterize the PV biogenesis/remodeling of macrophages i) hosting amastigotes of either L. major or L. amazonensis and ii) loaded with Lysotracker, a lysosomotropic fluorescent probe. Three dynamic features of Leishmania amastigote-hosting PVs are documented, ranging from i) the entry of Lysotracker transients into tight-fitting, fission-prone L. major amastigote-housing PVs; to ii) the decrease in the number of macrophage acidic vesicles during L. major PV fission or L. amazonensis PV enlargement; to iii) L. amazonensis PV remodeling after homotypic fusion. The high-content information of multidimensional images allowed us to update our understanding of the Leishmania species-specific differences in PV biogenesis/remodeling and could be useful for the study of other intracellular microorganisms.

Introduction

Leishmania spp. are dimorphic trypanosomatid parasites that alternate between extracellular promastigote forms found in insect vectors and intracellular amastigote forms found in mammalian hosts. In infected cells, Leishmania amastigotes are sheltered within phagolysosome-like structures called parasitophorous vacuoles (PVs). The PV membranes and contents change as PVs fuse with the endoplasmic reticulum (ER), late endosomes, lysosomes, or other host cell vesicular elements, conferring on them distinctive properties and a hybrid nature [1-3]. In the majority of Leishmania species, including L. major, one or two amastigotes are enclosed within PVs, which display a modest vacuolar space. In contrast, the large PVs that shelter parasites of the L. mexicana complex, such as L. amazonensis, can contain numerous amastigotes, often bound by their posterior poles to the internal face of the PVs [4]. The biogenesis of these two types of PVs involves the acquisition of host cell late endosome membrane markers, as shown in infected cells immunostained for lysosome-associated membrane proteins (LAMPs), Rab GTPases, cathepsin, proton ATPases, and MHC class II molecules [2,5-8]. The acquisition of these markers is a coordinated event that results in a "mature" PV, which is presumably required for the survival and multiplication of the parasites. Because of the relatively large dimensions of their PVs, which allow them to be easily recognized at low magnification, L. amazonensis and L. mexicana PVs have often been used in studies of the fusogenic properties of Leishmania PVs. These PVs have been shown to fuse selectively with each other or with phagosomes containing macromolecules, colloids, inert particles, and other live parasitic microorganisms [9-17]. The fusogenicity of these large structures, and the ease of access of certain particles and molecules to them, increase with the duration of infection [1].
The spacious PVs also incorporate acidic pH markers such as Lysotracker and neutral red [17,18] and may be probed using pH-sensitive dyes [19,20]. By taking advantage of differences in fluorescein emission under different pH conditions, it was reported that the pH of L. amazonensis PVs falls from approximately 5.2 at 24 h to 4.8 after 48 h of intracellular infection, whereas the pH of secondary lysosomes, around 5.4, remained constant in non-infected control cells [19]. These studies led to the characterization of the biochemical and functional features of Leishmania PVs, which may not apply to the majority of Leishmania species studied, which are lodged in tight PVs assumed to undergo fission as the parasites divide [21,22]. The accessibility of particles, macromolecules, and probes to these tight-fitting PVs, and the identification of their contents, are hindered by the limited vacuolar space available between the parasites and their PV membranes. Chang and Dwyer [21] and Berman and colleagues [23] observed by electron microscopy that thorium dioxide ("Thorotrast") particles, which had been pre-loaded into lysosomes, were transferred to the small vacuolar space of L. donovani and L. major PVs, respectively. The granules were absent within PVs when parasites and PV membranes were only in close contact. These studies suggest that tight-fitting Leishmania PVs can fuse with lysosomes, although the retention of lysosomal markers differs according to PV dimensions. Additionally, the pH in tight-fitting PVs may differ from that within loose vacuoles: the pH within L. donovani tight PVs was reported to reach 5.5 after 2 h of infection [24].

Most of the available information on Leishmania PV biogenesis has been obtained by experiments on fixed cells, a drawback we sought to overcome in the present study. We examined the biogenesis of large or tight-fitting, membrane-bound Leishmania PVs recorded by the multidimensional imaging of live infected macrophages. The fission of Leishmania tight-fitting PVs was studied for the first time in live infected cells and characterized as a two-step process that involves the replication of amastigotes in a single PV prior to separation into two distinct PVs that accumulate transient amounts of a lysosomotropic probe. The process is accompanied by the depletion of macrophage small acidic compartments, as previously described for Leishmania large vacuoles [25]. The biogenesis of these large structures was also studied and was revealed to involve PV enlargement in volume and diameter to the detriment of other PVs in the same infected cells. The homotypic fusion between L. amazonensis PVs was recorded and involves PV volume restoration.

Author Summary

Leishmania parasites lodge in host cells within phagolysosome-like structures called parasitophorous vacuoles (PVs). Depending on the species, amastigote forms can be individually hosted within small, tight-fitting PVs or grouped within loose, spacious PVs. Using multidimensional live cell imaging, we examined the biogenesis of the two PV phenotypes in macrophages exposed to L. major (a representative of the tight PV phenotype) or L. amazonensis (an example of the loose PV phenotype) amastigotes. L. major PVs undergo fission as parasites divide; we demonstrate that in the course of fission there are transients of the lysosomotropic fluorescent probe Lysotracker. In contrast, during the course of amastigote population size expansion, L. amazonensis PVs do accumulate Lysotracker while increasing in diameter and volume. The large PVs fuse together, and the products of fusion undergo size and shape remodeling. The biogenesis/remodeling of the two types of Leishmania PVs is accompanied by a reduction in the number of macrophage acidic vesicles. The present imaging study adds new morphometric information to the cell biology of Leishmania amastigote intracellular parasitism.

Ethics statement

All experiments involving animal work were conducted under Brazilian National Committee on Ethics in Research (CONEP) and French National Committee on Ethics and Animal Experimentation (CNREEA) ethics guidelines, which are in accordance with international standards (CIOMS/OMS, 1985). The present study was approved by CEP/UNIFESP (Comitê de Ética em Pesquisa da Universidade Federal de São Paulo/Hospital São Paulo) under protocol number 0856/07.

Host cells and parasites

BALB/c and BALB/c nude mice (8 weeks of age) were used as sources of bone marrow macrophage precursor cells and lesion-derived Leishmania amastigotes.
Macrophages were obtained from bone marrow precursor cell suspensions cultivated in vitro for 7 days in RPMI 1640 medium with 10% fetal calf serum, 5% L929 cell-conditioned medium, 100 U/ml penicillin, and 100 μg/ml streptomycin [7]. RAW 264.7 macrophage-like cells were co-transfected with LAMP1-GFP and Rab7-GFP plasmids (using FuGene HD transfection reagent, Roche) kindly donated by Dr. Norma Andrews (University of Maryland) and employed in infection experiments in order to observe PV membranes in live recordings. Macrophages were transferred to glass coverslips or round dishes (ibidi GmbH or MatTek Corporation) suitable for maintaining living cells in incubators coupled to microscopes. Before their use in experiments, cultures were incubated overnight at 37 °C in a humidified air atmosphere containing 5% CO2.

Infection of macrophage cultures

Leishmania amastigotes were added to macrophage cultures at a multiplicity of infection of 5 and incubated at 34 °C and 5% CO2 in complete medium for different periods according to the experiment. Cultures were washed with Hanks' Buffered Salt Solution to remove free parasites and cultivated in complete medium at 34 °C in a 5% CO2 atmosphere. Observation and image acquisition of live or fixed macrophage cultures under the microscopes employed started after periods ranging from 2 to 48 h, depending on the experiment. Cultures were maintained at 34 °C and 5% CO2 within the incubators coupled to the microscopes.

Immunolabeling of Leishmania PVs

Macrophages on coverslips were washed and fixed for 1 h with 3.5% formaldehyde in phosphate-buffered saline (PBS). Leishmania PVs and other compartments were identified by immunolabeling of the membrane proteins LAMP1 and LAMP2 (monoclonal antibodies obtained from DSHB, University of Iowa, USA). L. amazonensis amastigotes were immunostained with 2A3-26 antibody conjugated to FITC (kindly provided by Dr. Eric Prina, Institut Pasteur, France) or loaded with 5 μM 5,6-carboxyfluorescein diacetate succinimidyl ester (CFSE; Invitrogen, Life Technologies). Samples were stained for 15 min with 100 μg/ml 4′,6-diamidino-2-phenylindole (DAPI; Invitrogen, Life Technologies) and mounted with 50% glycerol in PBS containing 0.01% p-phenylenediamine. Confocal images were obtained using a Bio-Rad 1024 UV system coupled to a Zeiss Axiovert 100 microscope or a Leica TCS SP5 II system. Images acquired with a 100× (1.44 NA) oil immersion objective were rendered with Imaris software (Bitplane AG) using Blend filters.

Acquisition of multidimensional images

Live imaging of cultures was performed using a Nikon Biostation IM-Q live cell recorder system (Nikon Corporation), a PerkinElmer UltraView RS Nipkow-disk system (PerkinElmer Inc.) attached to a Zeiss Axiovert 200 M microscope with a Hamamatsu ORCA II ER CCD camera, or a Leica TCS SP5 II system (Leica Microsystems). To identify Leishmania PVs, 50 nM Lysotracker Green DND-26 (Invitrogen, Life Technologies), a lysosomotropic probe for acidic compartments, was added to complete medium 1 h before microscopic recordings and maintained throughout image acquisition. In contrast with L. amazonensis PVs, which were rich in Lysotracker, vacuoles containing a single L. major-DsRed2 parasite displayed a feeble Lysotracker signal, possibly due to the lower acidity of their PVs.
An effect of the tight vacuolar space between PV membranes and amastigotes on Lysotracker intensity is not discarded, although Lysotracker is a relatively small molecule (MW = 398.69). Alternatively, 1 mg/ml FITC-dextran (average mol. wt. 42,000; Sigma-Aldrich) was used as a lysosomotropic probe, with a 1-hour pulse and removal of the probe by six washes prior to image acquisition.

The Nikon Biostation IM-Q was used to acquire, in 10 different microscopic fields, serial images of infected macrophage cultures in dishes. The Biostation acquired images in phase contrast and in two fluorescent channels (for Lysotracker- and DsRed2-labeled parasites) with a 40× (0.8 NA) objective at 5-min intervals. Points in time of time-lapse image acquisitions are displayed as day, hours, and minutes (d:hh:mm). Images of the DsRed2 fluorescence signal displayed by L. major were processed with Acapella software (PerkinElmer Inc.) for algorithm-based quantification of these parasites during infection in macrophage cultures [17]. The PerkinElmer UltraView RS and the Leica TCS SP5 II system were used to acquire stacks of 20 to 30 optical sections from live infected cells in 5 to 12 microscopic fields. Stacks along the z-axis (z-stacks) were obtained with an optical section separation (z-interval) of 0.2 to 1 μm.

Multidimensional imaging software analysis

We acquired images of infected cell cultures after different post-infection times. Some image acquisitions began 2 h after parasite addition to macrophages; others started after 24 h or 48 h. Thus, we chose to present temporal data as "time of image acquisition" instead of "time of infection" due to the different time ranges of intracellular infection. The time of multidimensional acquisition is displayed as d:hh:mm.

Measurement of parasite and PV features using isosurfaces. Acquired z-stacks were reconstructed into multidimensional image projections using Imaris software (version 7.3.1 ×64, Bitplane AG, Zurich, Switzerland). Blend or MIP filters were used for three-dimensional visualization. Imaris creates isosurface/isospot objects by filtering the original data set and overlaying mathematical models on the original data. The software reports measurement statistics based on the set of voxels (pixels with a z dimension) that are completely inside isosurface/isospot objects. The attribution of isosurface/isospot objects was based on the Lysotracker or DsRed2 fluorescent channels, which enclosed voxels with RFIs (relative fluorescence intensities) as given by the acquisition system (in arbitrary units).
The mean and minimum RFI measurements were associated with the results as indicators of fluorescence quality and of the reliability of isosurface/isospot detection; the mean RFI can be associated with biological variations in fluorescence intensity (i.e., Lysotracker RFI fluctuations at different vacuolar pHs), whereas minimum RFIs represent the set of voxels with the lowest fluorescence signals and can be taken as a quality control for the fluorescence signal. Temporal data were discarded when microscopic fields displayed a decrease in the Lysotracker minimum or sum of RFI measurements. Mean, minimum, and sum are standard mathematical functions in voxel analysis.

Isosurface rendering was used to measure the volume, diameter, and RFI of an identified structure. Isosurfaces corresponding to L. major-DsRed2 amastigotes were constructed from DsRed2 fluorescence signals, enabling the parameters of background subtraction and split touching objects (seed detection diameter of 2.5 μm). Although they were constructed from the DsRed2 fluorescence channel, L. major amastigote isosurfaces enclosed Lysotracker voxels, allowing the measurement of Lysotracker RFIs associated with the regions in which amastigotes are located. Isosurfaces corresponding to Lysotracker clusters associated with L. major amastigotes were constructed with the background subtraction parameter enabled; for the spacious vacuoles formed by L. amazonensis, isosurfaces were attributed using the enabled background subtraction and split touching objects parameters (seed detection diameter of 5-10 μm). Leishmania PV isosurfaces were colorized according to their volumetric measurements, with blue-cyan representing smaller volumes and purple-pink representing larger volumes. Vacuole diameters were measured by examination of the intermediate optical sections of each sample. The accuracy of the software volume measurements was tested by comparing L. major-DsRed2 amastigote volumes measured by the software with the volume calculated assuming that amastigotes are ellipsoid geometric bodies. The formula for ellipsoid volume is (4/3)πxyz, where x is the width-axis radius, y is the length-axis radius, and z is the height-axis radius. The software volumetric measurements were compatible with ellipsoid forms with radii near 2, 2.5, and 2 μm, respectively.
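As a quick cross-check of the ellipsoid approximation just described, here is a minimal sketch using the reported radii. Note that, taken at face value, these radii give roughly 42 μm³, somewhat above the ~30 μm³ software measurements quoted later, so the reported radii are presumably rounded values.

```python
from math import pi

def ellipsoid_volume(rx, ry, rz):
    # V = (4/3) * pi * x * y * z, with x, y, z the semi-axis radii (in um)
    return 4.0 / 3.0 * pi * rx * ry * rz

# radii near 2, 2.5 and 2 um, as reported for L. major-DsRed2 amastigotes
print(round(ellipsoid_volume(2.0, 2.5, 2.0), 1))  # ~41.9 um^3
```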
Macrophage acidic vesicle detection and quantification using isospots. Multidimensional images of macrophages infected with L. amazonensis or L. major-DsRed2 and of non-infected macrophages were acquired separately in the same experiments using multi-chamber dishes (ibidi Hi-Q4, GmbH). Lysotracker was kept in complete medium throughout the experiments, and images were obtained using the same acquisition parameters. Using Imaris software, acidic vesicles were recognized as circular structures of 0.5 or 1 μm in diameter with a specified RFI threshold (parameters were adjusted in the first time point of each multidimensional image). To avoid the identification of acidic vesicles from vicinal macrophages, the available cell identification tool was used. Although stained with the same fluorescent marker, the RFI quantitative method can distinguish the different RFI values of small, dispersed vesicles from those of large Leishmania PVs, allowing separate software analysis for these two different compartments.

Statistics

All statistics were performed using SPSS software (SPSS Inc.). The statistical tests employed are indicated in the figure legends; they were chosen on the basis of normal or non-normal distributions and equal or non-equal variances. Figures represent the same result reproduced in at least 2/3 of all analyzed multidimensional images acquired in 3 different experiments, using 2 different instruments.

Results

The L. amazonensis parasitophorous vacuoles

Figure 1 (legend, in part): The first row of images shows Lysotracker signals exhibited by three large PVs during the recordings. Multidimensional acquisition started after 2 h of infection, and the time of acquisition is shown as hh:mm. The second row shows the software recognition of the three PVs, represented by three-dimensional objects; for each object, the software attributed an isosurface (a, b, and c), which permitted measurements of PV volume, diameter, and RFI. In the third row, isosurfaces a, b, and c, representative of each PV, display a statistic-coded color in accordance with volume measurements, ranging from cyan (smaller volume) to magenta (larger volume). Bar = 10 μm. (D-E) The isosurface measurements of volume (D) and diameter (E) are shown; although isosurface objects a and b increased in volume and diameter, isosurface c presented a modest decrease in its dimensions. Gray dots in the graphs indicate the time points with which the images presented in C are associated. The graphs are an example of 20 macrophage multidimensional images in which PVs enlarged in size. (F) The volumes of L. amazonensis PVs were measured after 2 and 48 h of intracellular infection using multidimensional images acquired using 0.1- or 0.3-μm z-intervals, respectively. The graph shows the median PV volumes measured from three different microscopic fields for each time point (median ± confidence interval, n = 3). There is a significant (P < 0.05) increase in PV volumes (Wilcoxon signed-rank test), and the measurements were compatible with those acquired using multidimensional images constructed from a z-interval of 1 μm. doi:10.1371/journal.pntd.0001518.g001

… L. amazonensis PVs with moderate PV enlargement (Video S1-B), suggesting a stationary phase in the growth of these structures. The strong Lysotracker fluorescence signal displayed by L. amazonensis PVs permitted the measurement of PV volume and diameter throughout intracellular infection (Figs. 1C-E and Video S2). The analysis software used can process the sample focal stacks at different time points (Fig. 1C, first row) into three-dimensional objects, recognize L. amazonensis PVs, and attribute to them an isosurface object with volume and diameter information (Fig. 1C, isosurfaces a, b, and c, second row). The constructed isosurfaces in the third row represent each L. amazonensis PV, colorized according to their measured volumes. Measurements of PV volume for each time point are plotted in Figs. 1D-E. The increase in PV volume/diameter occurred while other L. amazonensis PVs maintained their dimensions. In the first 20 h of image acquisition, PVs doubled in volume (Fig. 1D), whereas their diameters increased by approximately 20% (Fig. 1E). A plateau in volumetric measurements was observed for isosurfaces after approximately 12 h of image acquisition (Fig. 1D). The median volume values of PVs from different microscopic fields were reproduced using shorter z-stack intervals, which are more accurate for volumetric measurements (Fig. 1F).
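The voxel bookkeeping behind these volume measurements is simple, and the sketch below (illustrative only, not the Imaris implementation; the mask and voxel sizes are hypothetical) shows why finer z-intervals give more accurate volumes: each segmented voxel contributes dx·dy·dz cubic microns.

```python
import numpy as np

def volume_from_mask(mask, dx_um, dy_um, dz_um):
    """Object volume in um^3 from a binary z-stack mask: the number of
    voxels inside the object times the volume of one voxel, where dz is
    the optical section separation (z-interval)."""
    return float(mask.sum()) * dx_um * dy_um * dz_um

# Hypothetical 25-slice stack, 0.2 um pixels, 0.3 um z-interval.
rng = np.random.default_rng(0)
mask = rng.random((25, 200, 200)) > 0.999  # stand-in segmentation
print(volume_from_mask(mask, 0.2, 0.2, 0.3))
```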
The PV volume measurements acquired after 2 h or 48 h of L. amazonensis infection and using 0.1-μm or 0.3-μm z-stack intervals yielded volume measurements comparable to those presented in Fig. 1D.

The number of host cell acidic vesicles decreases over the course of L. amazonensis infection. We next examined whether L. amazonensis PV enlargement could be associated with the classically described depletion of host cell secondary lysosomes, assumed to be one of the main membrane sources for L. amazonensis PV biogenesis [25]. In the present experiments, we estimated the number of small (up to 1.0 μm in diameter) Lysotracker-positive vesicles in uninfected macrophages or in those infected with L. amazonensis amastigotes. Fig. 2A and Video S3 show the Lysotracker thresholding procedure that can be used to distinguish Leishmania PVs from macrophage acidic vesicles. The attribution of isosurfaces (for PVs) and isospots (for vesicles) used different Lysotracker signal thresholds. Thus, from an image containing all Lysotracker clusters (Fig. 2A, image on the left), in which isospots detect different Lysotracker relative fluorescence intensities (RFIs), it was possible to select those isospots with a specific Lysotracker pattern that corresponded to macrophage acidic vesicles (Fig. 2A, image on the right). The graphs of Figure 2B show the mean and minimum Lysotracker RFIs detected by isosurfaces (attributed to PVs, graph on the left) and isospots (attributed to vesicles, graph on the right), confirming the different thresholds on which the detection of acidic vesicles was based. Using the attributed isosurfaces and isospots, PV volumes and the numbers of macrophage acidic vesicles in infected cells were estimated (Fig. 2C, right panel). Extending classic qualitative observations, we found that the number of small acidic vesicles decreased as the mean PV volume increased in infected macrophages. This approach was applied to microscopic fields containing infected or non-infected macrophages, separated by multi-chamber culture dishes. The graph presented in Fig. 2D shows the quantification of acidic vesicles in these two examined groups of macrophages. Over the course of image acquisition, the number of acidic vesicles in the infected macrophages decreased in comparison with non-infected macrophages. These results indicate that L. amazonensis PV enlargement is correlated with a decrease in the host cell acidic vesicle reservoir.

Figure 2 (legend, in part): … (1 μm in diameter) in macrophages infected with L. amazonensis (green line) and non-infected macrophages (blue line). Images of these two conditions were separately acquired in the same experiment using multi-chamber dishes with the same acquisition parameters. Data are representative of 8 infected macrophages and 12 non-infected macrophages (mean and SEM) and reproduced in two experiments. doi:10.1371/journal.pntd.0001518.g002
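The dual-threshold idea of Fig. 2A, one intensity/size regime for large PVs and another for small vesicles, can be sketched with generic tools as follows. This is a rough stand-in for the Imaris isosurface/isospot workflow, not the authors' pipeline; the two thresholds and the minimum PV size are hypothetical parameters, to be tuned on the first time point as described above.

```python
import numpy as np
from scipy import ndimage

def split_pvs_and_vesicles(stack, t_pv, t_ves, min_pv_voxels=500):
    """Return a PV mask and a vesicle count from one Lysotracker z-stack.

    PVs: connected components above the high threshold t_pv that are
    large enough; vesicles: small bright components outside the PVs.
    """
    labels_pv, n_pv = ndimage.label(stack > t_pv)
    sizes = np.bincount(labels_pv.ravel())
    big_ids = np.flatnonzero(sizes >= min_pv_voxels)
    big_ids = big_ids[big_ids > 0]          # drop the background label 0
    pv_mask = np.isin(labels_pv, big_ids)
    _, n_vesicles = ndimage.label((stack > t_ves) & ~pv_mask)
    return pv_mask, n_vesicles
```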
L. amazonensis PVs recover their dimensions after homotypic fusion. Membrane input to L. amazonensis PVs is also provided by homotypic fusion between these vacuoles [16]. In the present studies, fusion events between L. amazonensis PVs in infected macrophages were recorded by time-lapse and multidimensional microscopy (Fig. 3 and Video S4). Acquisitions started after 48 h of intracellular infection, when PVs are sufficiently enlarged to contact and fuse [16]. In fluorescence time-lapse microscopy, a 5-min interval between images permitted the observation of fusion between L. amazonensis PVs with temporal resolution (Fig. 3A, asterisks). The fused PV exhibited an intermediary shape compatible with the hemifusion stage of membrane fusion events (Fig. 3A, arrowhead) and recovered its circular diameter in a few minutes. To measure PV volume fluctuations during and after fusion, images were acquired from infected cells displaying recognizable PV fusion events; isosurfaces were attributed to each Lysotracker-positive PV (Figs. 3B-C and Video S4). Fig. 3B shows the multidimensional image of four L. amazonensis PVs in the same infected macrophage loaded with Lysotracker. Fusion between PVs (indicated by asterisks) was captured. The isosurfaces constructed from the Lysotracker signal displayed by these PVs were colorized according to their measured volumes and are shown in Fig. 3C. Fusion between L. amazonensis PVs yielded a larger vacuole, which unexpectedly recovered its initial dimensions 6 h after homotypic fusion. This result was also observed in other infected cells (Figs. 3D-E). PV volume measurements at different time points were plotted in the graph, in which L. amazonensis PVs are represented by isosurfaces named a-d. Fusion-prone PVs with similar initial volumes resulted in a 2-fold larger fused PV. However, after fusion, PV dimensions approached the initial volume measurements displayed by the vacuoles involved in the fusion. This was observed in successive fusion events in the same infected cell (Fig. 3D, with image acquisition started after 2 h of infection) and also in other recorded fusion events in different macrophages infected for different time periods (Fig. 3E, image acquisition started after 48 h of infection).

The L. major Parasitophorous Vacuoles

Division of L. major amastigotes within tight-fitting PVs. In the following observations, we recorded the behavior of L. major tight-fitting PVs during parasite division in infected macrophages by time-lapse fluorescence microscopy and multidimensional imaging (Figs. 4-6). The algorithm-based quantification of L. major-DsRed2 amastigotes hosted by macrophages over a 48-h period is shown in Fig. 4A. The time sequence in Figure 4A and Video S5 shows four L. major-DsRed2 amastigotes hosted by a macrophage that divide into eight amastigotes sheltered in non-observable PVs over the course of the recordings. After 12 h of acquisition, dividing L. major-DsRed2 amastigotes were often observed (arrowhead), and they remained next to each other for several hours before complete separation occurred. The numbers of L. major-DsRed2 amastigotes per field were plotted in the graph on the right: the population of amastigotes doubled over 24 h of image acquisition, which began after 2 h of L. major-DsRed2 infection. In macrophages fixed after 48 h of infection, immunostaining for the PV membrane components LAMP1 and LAMP2 indicates that L. major PVs containing two or three amastigotes (some of them undergoing binary division) may be observed (Fig. 4B). PV membrane furrows are often observed in these double-occupancy PVs (arrowheads in Fig. 4B) near the amastigote anterior pole. These results suggest that parasite division within PVs precedes L. major PV fission. To detect single- or double-occupancy tight-fitting PVs and their dynamics during parasite division in live recordings, infected macrophages were loaded with Lysotracker after infection (Fig. 4C). An amastigote is presented before (upper row) and after its division (lower row), when Lysotracker may concentrate in the interface between dividing amastigotes (Video S5).
Analytical isosurfaces were attributed to each parasite considering the DsRed2 signal (a, before division; a′ and a″, after division). The isosurfaces contain information on amastigote dimensions and on the Lysotracker RFIs in their surroundings (representative of the probe concentration within tight-fitting PVs), and these measurements were plotted in Figs. 4D-E. The L. major-DsRed2 isosurface increased in volume before dividing into two other recognizable isosurfaces, each with approximately half of its original volume (Fig. 4D, first graph). L. major-DsRed2 amastigotes presented a volume of approximately 30 μm³, measured from multidimensional images using z-stack intervals of 0.1 or 0.3 μm, as shown by the bar graph. Fig. 4E shows the Lysotracker RFIs around replicating L. major-DsRed2 amastigotes. Amastigotes exhibited a higher Lysotracker RFI before division, which is associated with a polarized concentration of the dye probably related to the parasite flagellar pocket. In the first hours post-division, amastigotes present an intermediate Lysotracker RFI (Fig. 4E, after 12 h), sometimes associated with a clustering of the probe between the two dividing amastigotes. At the beginning of image acquisition, these Lysotracker RFIs are 1.6 times lower than the intensities found in phagosomes containing aldehyde-fixed L. major-DsRed2 parasites (Video S5). Similar results were obtained when FITC-dextran was used as the lysosomotropic probe instead of Lysotracker (Video S5). The higher concentration of Lysotracker or FITC-dextran enclosed within fixed-amastigote phagosomes indicates that live L. major amastigotes customize their PVs and restrict the access of late phagosome content markers.

Fission of L. major PVs. Fission of L. major PVs was inferred from differences in the Lysotracker RFI retained in fissioned vacuoles and was directly observed by staining PV membranes with fluorescent phagolysosomal markers. Figure 5A and Video S6 show L. major-DsRed2 amastigotes during division, displaying the previously described Lysotracker cluster. At early time points of acquisition, the vacuolar interface remains between the amastigotes, where the isosurfaces (a′ and a″) detect approximately equal values of Lysotracker RFI (as shown by the associated graph, Fig. 5B). After 3 h of acquisition, an increase in Lysotracker RFI was measured surrounding one parasite (a′), whereas the measurements were constant for the other parasite (a″). At the end of image acquisition, although the amastigotes remained close to each other, no Lysotracker-positive interface could be detected, and the Lysotracker RFI surrounding the amastigotes returned to the initial values. L. major-DsRed2 PV fission was also observed in infected RAW 264.7 macrophage-like cells expressing LAMP1 and Rab7 proteins tagged with GFP (Fig. 5C and Video S6). In this experiment, the PV membrane could be observed during the fission process: dividing L. major-DsRed2 amastigotes are confined to phagolysosomal tight-fitting vacuoles during the entire formation of new PVs, from single occupancy, through double occupancy, and finally to segregation into two structures. The interval between division of the amastigote and complete PV fission was registered in these multidimensional images, with periods ranging from 84 to 153 minutes (108.8 ± 11.8, SEM, n = 5). The Lysotracker-positive interface observed between dividing amastigotes appears to be associated with PV double occupancy.
During amastigote division, a Lysotracker cluster may be detected between some division-prone parasites, suggestive of a vacuolar space in double-occupancy PVs or of a stable interaction site between PVs and acidic vesicles (Fig. 5D, arrowhead, and Video S7). This intense Lysotracker signal between parasites was tracked in multidimensional images, and its volume was measured at each acquisition time point using isosurfaces for the Lysotracker signal. The cluster presented a dynamic volume, as shown by the volume-based color of the attributed isosurface (Fig. 5D, second row). The volume was plotted at all acquisition time points (Fig. 5E) and increased as macrophage acidic vesicles interacted with the analyzed structure (Video S7). The results thus suggest that after sharing the same PV, dividing L. major-DsRed2 amastigotes were sheltered in different PVs that differed in the amounts of Lysotracker they contained, although preserving phagolysosomal membrane markers.

Figure 4 (legend, in part): … During division, a Lysotracker cluster in the interface between two dividing L. major amastigotes is observed in some cases; MIP filter. Using the DsRed2 signal expressed by amastigotes (second column, Blend filter), isosurfaces were constructed representing the parasites before (a) and after division (a′ and a″), shown in the third column. The Lysotracker voxels can be detected by the isosurfaces (with transparency applied). Bar = 2 μm. (D) Volume measurements of L. major-DsRed2 amastigotes from isosurfaces in multidimensional images. A volumetric increase in isosurface a (before division) was detected, followed by division into isosurfaces a′ and a″, which present half of the initial value of isosurface a (line graph, on the left). The amastigote volumes were also measured in other multidimensional images with z-stack intervals of 0.1 and 0.3 μm (bar graph, mean ± SEM, n = 3), with a non-significant (ns) statistical difference (one-way ANOVA). (E) Mean Lysotracker RFI values detected in L. major isosurfaces before (a) and after parasite division (a′ and a″). Values diverge between the divided isosurfaces a few hours after their identification. Results shown in D-E were reproduced in 5 other multidimensional images in which L. major amastigote divisions and a Lysotracker-positive interface were identified. doi:10.1371/journal.pntd.0001518.g004

Host cell acidic vesicles and L. major infection. Considering the phagolysosome-like features of Leishmania PVs, it may be expected that the partition of doubly occupied L. major-DsRed2 PVs would require the recruitment of different membrane sources, such as small acidic vesicles, to the sites of dividing parasites.
The numerical decrease of acidic vesicles correlates in time with L. major multiplication. Figure 6C shows four other cases in which this decrease was also quantified. Arrowheads in the graph indicate the time points when an amastigote division began. The mean number of acidic vesicles detected in non-infected and infected macrophages was plotted in Fig. 6D. These two conditions were acquired separately in multi- The Lysotracker RFIs surrounding parasites was measured during 10 h of image acquisition. An increase in the Lysotracker signal was detected for one of the dividing amastigotes, whereas the signal remained low in the other amastigote. After 8 h of acquisition, although both amastigotes remained next to each other, they were surrounded by Lysotracker at low intensities, and no Lysotrackerpositive vacuolar interface was observed between them. Gray dots in the graphs indicate the time points to which images presented in E are associated. The data are representative of 5 cases with similar results. (C) Multidimensional image of RAW 264.7 macrophages expressing LAMP1-GFP and Rab7-GFP (green) infected with L. major-DsRed2 (red). Arrowheads indicate the complete division of double-occupancy PV into two individual PVs. This process was followed in 5 other multidimensional images and the time period in which amastigotes share a single vacuole before fission was about 100 minutes. Acquisition started after 4 h of intracellular infection and the time of acquisition is shown as hh:mm. MIP filter, bar = 5 mm. (D) The Lysotracker-positive cluster is observed between two dividing parasites (first row, MIP filter). The attribution of an isosurface to the Lysotracker-positive interface (second row, Blend filter) allowed the measurement of its volume at the image time points. The isosurface contains statistic-based color information ranging from cyan (smaller volumes) to magenta (larger volumes). Image acquisition started after 2 h of infection, and the time of acquisition is shown as hh:mm. Bar = 5 mm. (E) The graph presents the volume measured from the cluster between dividing parasites in the multidimensional imaging. Using a z-stack interval of 1 mm, a detection limit for volumetric measurement of this structure was determined to be 5 mm 3 . Gray dots in the graphs indicate the time points to which images presented in D are associated. These results were reproduced in 5 other cases. doi:10.1371/journal.pntd.0001518.g005 Multidimensional Imaging of Leishmania PVs www.plosntds.org chamber dishes in the same experiment, using the same acquisition parameters. Quantifications yielded similar amounts of acidic vesicles in the first 10 h of image acquisition; from this moment, macrophages infected with L. major amastigotes presented a decrease in the mean number of acidic vesicles. This decrease is not so evident than the decrease found in case-case analysis because of the lack of synchronicity of L. major divisions detected in different cells. Discussion PVs are the host intracellular compartments in which Leishmania parasites differentiate and multiply. They are lined by a dynamic membrane originated at the host cell plasma membrane and formed by successive and coordinated fusion/ fission events with vesicles from early/late endocytic pathways, secondary lysosomes, ER, and possibly autophagic vesicles, resulting in an acidic compartment similar to phagolysosomes [3,5,7,19,27]. Most of this information has been obtained in experiments with Leishmania cell-cycling amastigotes of the L. 
This study presents a high-resolution morphological characterization of the main features displayed by Leishmania PVs in live infected macrophages, such as PV volumetric expansion/retraction and homotypic PV fusion/fission, using the fluorescent lysosomotropic probe Lysotracker. Additionally, a software-based methodology permitted the quantification of host cell acidic vesicles, which were distinguished from Leishmania PVs in the same fluorescence channel. As secondary lysosomes are considered the main source of membrane for Leishmania PV biogenesis, we investigated how macrophage acidic vesicle reservoirs (which include, but are not restricted to, secondary lysosomes) were related to Leishmania PV biogenesis.

The development of spacious PVs is an exception rather than a rule for most Leishmania parasites studied. Soon after the internalization of L. amazonensis, PV membranes acquire Rab5 and EEA1, two early endosome markers; by membrane remodeling, PV membranes lose these early markers and rapidly acquire the late endosome and lysosome markers Rab7p and LAMP1 [7]. In the first hours of intracellular infection, fusion between PVs and lysosomes could be a determinant of PV enlargement, as the depletion of secondary lysosomes [25] and of acidic vesicles, as shown in this report, is observed during parasite establishment. Additionally, the up-regulation of host cell lipid biosynthesis triggered by the parasite [28] could increase the repertoire of membrane donors for Leishmania PV development. This up-regulation was not observed in the analysis of gene expression of macrophages infected with L. major promastigotes [29]. After lysosomal marker acquisition (by means of fusion with acidic vesicles, lysosomes, and/or late endosomes), the membrane input into Leishmania PVs and their stability as large compartments could be related to continuous fusion with ER-derived or Golgi-derived vesicles [1,3] rather than to fusion with secondary lysosomes, late endosomes, or vicinal L. amazonensis PVs.

The homotypic fusion between L. amazonensis PVs was described by Real and colleagues and occurs mainly after 24-48 h of intracellular infection [16]. Fusion between PVs before 12 h of infection was seldom observed. Indeed, Lippuner and colleagues found that only a minority of Rab5-positive L. mexicana PVs fuse with each other in the first hours of intracellular infection [30]. The fusion between L. amazonensis PVs did not contribute to sustaining PV enlargement, because fused PVs regained their dimensions, a process probably related to compensatory membrane recycling. Although the results suggest a selective nature for PV membrane composition, the host cell/parasite components retained by large PVs to maintain their homeostasis remain to be elucidated. The output of membrane from large PVs could be incorporated into the plasma membrane (displaying parasite proteins on the host cell surface) and into other cytoplasmic organelles such as the ER.

The spacious PV represents a strategy to subvert host cell defenses, providing an environment with lower activities and/or concentrations of hydrolytic enzymes [9,31-33]. Considering that Leishmania of the L. mexicana complex are resistant to IFN-γ-mediated macrophage activation and to high NO concentrations [34], the unique morphology of large vacuoles could be vital for the establishment of these parasites. Recent studies documented that L. major procyclic promastigotes survive and multiply within L. amazonensis PVs after interspecific PV fusion in doubly infected macrophages [17].
As expected, the procyclic promastigotes were destroyed within their own phagolysosomes but were spared from destruction once they entered the spacious L. amazonensis PVs.

L. major, the first organism in the genus to have its genome sequenced [35], is one of several Leishmania species in which amastigotes develop tight PVs that undergo fission during parasite multiplication in host cells. These amastigotes are generally individualized in PVs that maintain their dimensions and possibly require different strategies to avoid host cell defenses. PV fission is displayed by several important pathogens that live in vacuoles throughout their intracellular life cycle, such as Leishmania and Mycobacterium [36]. The process is likely to require the mobilization of membrane sources, because replicating amastigotes require increases in PV dimensions prior to division, which implies an increase in PV membrane surface area and volume. After parasite division, there is an intermediary state of double-occupancy PVs in which a vacuolar interface between the recently divided parasites and the PV membrane is visible using lysosomotropic probes. This detectable vacuolar space is polarized, in agreement with the polarized event of amastigote division, which suggests a period of hours in which two dividing L. major amastigotes share the same PV with a small, acidic lumen. Indeed, in the late 1970s, a Thorotrast-rich vacuolar space located between two dividing L. major amastigotes was documented by electron microscopy [23]. Amastigote division within tight PVs could increase PV fusogenicity with small vesicles by modifying the membrane curvature, which could physically assist the SNARE machinery in membrane fusion [37]. An alternative interpretation of the Lysotracker clusters on dividing L. major amastigotes is that acidic vesicles could be mobilized to PV membrane sites in hotspots, where membrane input preferentially occurs.

The numerical decrease of host acidic vesicles during L. major PV fission or L. amazonensis PV enlargement is likely related to a higher demand on host membrane sources for the biogenesis of Leishmania PVs, at least in the first 4 days of intracellular infection. The host cell reservoirs of acidic vesicles would partially account for the membrane incorporation of fission-prone L. major PVs and their partition into two new PVs, or for the massive incorporation of vesicles into enlarging L. amazonensis PVs. Considering the membrane surface area of Leishmania PVs, a large L. amazonensis PV of 20 μm in diameter has the approximate surface area of 16 L. major tight-fitting PVs, indicating that Leishmania PVs would require approximately equal amounts of host cell membrane for their biogenesis regardless of the different PV architectures (tight-fitting vs. loose vacuoles). The contribution of each acidic vesicle to PV biogenesis may also be hypothesized: approximately 50 detected acidic vesicles (1 μm in diameter) in macrophages infected with L. major or L. amazonensis are consumed in the course of intracellular infection. This represents a hypothetical volume contribution of 30 μm³ to Leishmania PVs, which only partially accounts for L. major or L. amazonensis PV dimensional doubling.
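The membrane bookkeeping in this paragraph survives a back-of-envelope check (spheres assumed; the ~5 μm tight-PV diameter is our assumption, consistent with the amastigote radii reported above):

```python
from math import pi

sphere_area = lambda d: pi * d ** 2        # surface area from diameter
sphere_volume = lambda d: pi * d ** 3 / 6  # volume from diameter

# one 20-um L. amazonensis PV vs ~5-um tight-fitting L. major PVs
print(sphere_area(20.0) / sphere_area(5.0))  # 16.0 tight PVs' worth of membrane

# ~50 consumed acidic vesicles of 1 um diameter
print(50 * sphere_volume(1.0))               # ~26 um^3, near the ~30 um^3 quoted
```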
Indeed, other mechanisms of Leishmania PV volume control have been addressed in the literature and include fusion with ER vesicles, acquisition of water and ion transport channels, and parasite secretion of exosomes and macromolecules inserted into PV membranes or displayed on parasite membrane surfaces [3,33,38-41]. Additionally, Leishmania can internalize PV membrane components: in a process resembling host membrane "clearance," L. amazonensis can actively internalize and digest MHC class II molecules, possibly via its posterior pole, which interacts with PV membranes [42]. This process could also be conserved in Leishmania with membrane-bound PVs, participating in the control of membrane input into the PVs. The polarization of Lysotracker clusters in dividing L. major parasites suggests that parasite poles could also participate in the PV biogenesis of the tight-fitting phenotype.

Although L. major PVs incorporate phagolysosomal markers on their membranes, they were unable to retain amounts of lysosomotropic content probes comparable to other tight-fitting phagosomes (e.g., those containing latex beads or aldehyde-fixed amastigotes). A less acidic environment could account for Lysotracker-negative L. major PVs, which could present a lower abundance of vesicular proton ATPases than L. amazonensis PVs. Osorio y Fortea and colleagues [28] showed that eight isoforms of vesicular proton ATPase subunits are up-regulated in macrophages infected with L. amazonensis amastigotes; in contrast, Gregory and colleagues [29] showed that only one isoform of these ATPases, the V1 subunit H, is up-regulated in macrophages infected with L. major promastigotes. In the same experiments, the V0 subunit A2 is down-regulated, as are other lysosomal components (such as two isoforms of lysosomal-associated transmembrane protein 5 and an isoform of acid phosphatase 2). Non-acidic phagosomes displaying phagolysosomal markers on their membranes have been associated with the exclusion of these proton ATPases in Mycobacterium phagosomes [43]. It is possible that L. major PVs maintain the exclusion of vesicular proton ATPases for amastigote replication, in a process similar to what occurs in L. donovani promastigote PVs [44].

By presenting these fundamental pre-mechanistic data obtained by live imaging, we highlight that Leishmania PVs are dynamic structures that remodel their shapes, allowing them to develop a privileged intracellular niche where Leishmania parasites survive and multiply. Several pathogens can utilize this process; describing and understanding the nature of these vacuoles in live infected cells are crucial steps towards understanding how parasites evade host immune responses [45].

Supporting Information

Video S1 Enlargement of L. amazonensis PVs. Live imaging of L. amazonensis …

… Multidimensional imaging of dividing L. major-DsRed2 amastigotes (red) hosted by a macrophage loaded with Lysotracker (green). Isosurfaces were attributed to replicating amastigotes, allowing the measurement of the Lysotracker RFI surrounding the parasites. The amastigotes remain next to each other after replication, and the vacuolar interface between them is observed at the initial time points. An increase in the Lysotracker signal was detected in one dividing amastigote, whereas the other remained in a PV with a low Lysotracker signal. By the end of the recordings, the amastigotes were surrounded by Lysotracker at low intensities, and no Lysotracker-positive vacuolar interface was observed between them.
Image acquisition started after 48 h of infection, and the time of acquisition is shown as hh:mm.
2014-10-01T00:00:00.000Z
2012-02-01T00:00:00.000
{ "year": 2012, "sha1": "222c4706f0d1283d25a16caee82e45f84973a890", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0001518&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "222c4706f0d1283d25a16caee82e45f84973a890", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
1945906
pes2o/s2orc
v3-fos-license
Are Mesenchymal Cells Indeed Pluripotent Stem Cells or Just Stromal Cells? OCT-4 and VSELs Biology Has Led to Better Understanding

Stem cells have excited researchers because of their potential to regenerate. However, which stem cells will be the best candidates for regenerative medicine remains an enigma. Compared to pluripotent stem cells, with their associated risks of immune rejection and teratoma formation, adult stem cells, especially the mesenchymal stem cells (MSCs), are hyped to be a suitable alternative, since they also exhibit pluripotent properties. This review shows that there is a subpopulation of pluripotent very small embryonic-like stem cells (VSELs) among MSC cultures. The two populations differ from each other in their expression pattern of OCT-4. VSELs exhibit nuclear OCT-4A, whereas the MSCs have cytoplasmic OCT-4B, similar to our earlier findings in testis and ovary. Pluripotent VSELs with nuclear OCT-4A exist in various adult body organs, and their immediate progenitors express cytoplasmic OCT-4B, which is eventually lost as the cell differentiates further. To conclude, it is essential to discriminate between nuclear and cytoplasmic OCT-4 expression and also to acknowledge the presence of VSELs.

Introduction

Stem cells represent a novel cell type in the body with the potential to regenerate any worn-out tissue and maintain tissue homeostasis. Stem cells can be multiplied in large numbers in vitro and may serve to replace damaged cells for regeneration, rather than the existing means of managing diseases by treating the damaged cells with drugs. Stem cells are broadly classified, based on their source, into embryonic (hESCs) and adult (ASCs) stem cells. Embryonic stem cells are pluripotent in nature and can be differentiated into the roughly 200 cell types in the body belonging to the three germ layers, namely, ectoderm, endoderm, and mesoderm. On the other hand, adult stem cells are isolated from adult body tissues and are multi- to unipotent in nature.

Since the initial isolation of hES cell lines [1], there has been a divide between embryonic and adult stem cell biologists. It has been the endeavor of adult stem cell biologists to demonstrate that ASCs are as good as hES cells, and thus that hES cell research is not required (because of the associated ethics, since spare human embryos are used and manipulated). In January 2013, hES cell biologists were greatly relieved when the US Supreme Court refused to hear a case that could have prohibited government funding for hES cells [2]. Various approaches have been used to demonstrate that ASCs can replace hES cells. In particular, with the ability to reprogram adult somatic cells to a pluripotent state by iPS technology, the lobby against hES cells has become even stronger. Another issue that has been highlighted is that mesenchymal stem cells are pluripotent and, besides differentiating into mesoderm, can also transdifferentiate into ectoderm and endoderm [3]; this is the focus of this special issue.

Mesenchymal stem cells (MSCs) are spindle-shaped, plastic-adherent cells that can be isolated from the fetus, extraembryonic tissues, and adult organs including bone marrow and several other body tissues. MSCs were first described by Friedenstein and his group [4] as hematopoietic-supportive mesenchymal stromal cells of the bone marrow.
Owen and Friedenstein [5] proposed that these cells may be termed mesenchymal stem cells, as they had the ability to differentiate into lineages of mesenchymal tissues including bone, cartilage, tendon, ligament, marrow stroma, adipocytes, dermis, muscle, and connective tissue. However, whether they are a true stem cell still remains controversial. The names mesenchymal stem cells and mesenchymal stromal cells are used interchangeably in the literature. The International Society for Cellular Therapy (ISCT) has recommended that these spindle-shaped, plastic-adherent cells be termed mesenchymal stromal cells [6]. It has been proposed that a yet unidentified stem cell may exist amongst the MSCs, but the MSCs themselves must be termed mesenchymal stromal cells [7]. The recent literature suggests that MSCs are a crucial component of the niche for the HSCs in the bone marrow [8,9]. MSCs undergo lineage-specific differentiation into mesoderm, but their ability to transdifferentiate into other lineages remains controversial. Various groups have published that MSCs can transdifferentiate into ectodermal and endodermal lineages including hair [10], pancreatic islets [11,12], hepatocytes [13], and neurons [14,15]. Greco et al. [16] have further shown that a similar regulatory mechanism for OCT-4 exists in ES cells and MSCs. However, this remains highly controversial, especially because the functional properties of MSCs transdifferentiated into ectoderm and endoderm are not as expected. Similarly, Osonoi et al. [17] reported that human dermal fibroblasts are able to differentiate directly into derivatives of all three germ layers, that is, neurons (ectodermal), skeletal myocytes (mesodermal), and insulin-producing cells (endodermal). They exhibit nestin, desmin, and insulin when exposed to a specific cocktail of growth factors. Thus it is felt that demonstrating transdifferentiation on the basis of immunolocalization or the presence of transcripts may not suffice. Rather, evidence needs to be generated regarding functional maturation, which has not yet been achieved. There are two main facets of stem cell biology that have indeed baffled researchers and have led to this confusion about the functional attributes of MSCs. These are (i) OCT-4 biology and (ii) the presence of a subpopulation of pluripotent very small ES-like stem cells (VSELs) amongst MSCs.

Oct-4 Biology and Pluripotency
Oct-4 is the most crucial POU-domain transcription factor responsible for maintaining the self-renewal and pluripotent properties of stem cells including the inner cell mass, embryonic stem cells, embryonic germ cells, and embryonal carcinoma cells. Oct-4, Nanog, Sox2, and FoxD3 together form an interconnected autoregulatory network to maintain ES cell pluripotency and self-renewal [18]. Oct-4-deficient mice do not develop beyond the blastocyst stage due to the lack of pluripotent inner cell mass cells [19]. Oct-4 is downregulated with loss of pluripotency, and knockdown of Oct-4 in ES cells results in differentiation [20,21]. It has two major isoforms, Oct-4A and Oct-4B, of which only Oct-4A is responsible for the pluripotent state, whereas no biological function has been associated with the Oct-4B isoform [22]. Atlasi et al. [23] reported another Oct-4 spliced variant which is primarily expressed in pluripotent stem cells and is downregulated following differentiation; however, its function is still not clear [24]. It becomes crucial to discriminate between the various isoforms when concluding the pluripotent state of a cell [23,25].
But stem cell biologists have overlooked this aspect in their studies and have reported Oct-4 in several nonpluripotent cell types, which has resulted in a great deal of confusion [24,26]. Similarly, there was a lot of excitement recently when various groups reported the derivation of pluripotent ES-like cultures from adult testicular biopsies in mice [27][28][29][30] as well as in men [31][32][33] by spontaneous reprogramming of adult spermatogonial stem cells without any genetic modification. However, Warthemann et al. [34] have shown that false-positive antibody signals for OCT-4A in testis-derived cells may have led to erroneous data and misinterpretations. Oct-4 has been reported in several somatic cell types, placenta, amnion- and cord-derived cells, and also in primary tumor tissues (refer to Supplemental Table 1 in [35]). Zangrossi et al. [36] demonstrated the presence of Oct-4 in peripheral blood and thus challenged whether OCT-4 should really be a marker for pluripotency. Greco et al. [16] showed that OCT-4 functions through a similar pathway in human MSCs and ES cells. However, all these reports studied Oct-4 and failed to discriminate between the alternatively spliced Oct-4 transcripts. In an attempt to clarify the confusion between ASCs and ESCs with respect to Oct-4 expression, Lengner et al. [35] deleted Oct-4 in several tissues with rapid turnover, including intestine, bone marrow, hair follicle, liver, and CNS, but found no effect on tissue maintenance or injury-induced regeneration. Thus they concluded that Oct-4-expressing cells are not required for maintaining homeostasis in adult body organs. They further discussed that somatic OCT-4 expression could be due to nonspecific staining, since the amount of mRNA was very low in somatic cells compared to ES cells and was invariably amplified only after 30-40 cycles of PCR amplification. However, their concluding statement is rather intriguing. They do not deny the presence of Oct-4 in adult body tissues, but the levels are very low compared to ES cells. This is very true for the pluripotent very small ES-like stem cells (VSELs) in adult body tissues.

Pluripotent Stem Cells in Adult Body Tissues
Very small embryonic-like stem cells (VSELs) represent a very promising group of stem cells which have the potential to bring together embryonic and adult stem cell biologists. These are pluripotent stem cells in adult body tissues. They exhibit pluripotent characteristics including nuclear Oct-4, albeit at a very low level compared to hES cells. However, they can be isolated from an autologous source and do not form teratoma in mice (thus all three major issues associated with hES cells, namely using spare human embryos to derive hES cell lines, immune rejection, and the risk of teratoma formation, are taken care of). They are easily mobilized in response to any injury, maintain life-long homeostasis [37,38], and are also considered to be embryonic remnants responsible for various cancers in the body [39], as proposed 150 years ago by Rudolf Virchow and Julius Cohnheim. Pioneering work done by Professor Ratajczak and his group has shown that pluripotent VSELs exist in various adult body tissues [40] and are possibly the primordial germ cells or their precursors which, rather than migrating only to the gonadal ridges during early embryonic development, migrate to various body organs and persist throughout life. The confusion in the literature about the presence of Oct-4 in adult body tissues is actually because of VSELs.
VSELs with nuclear OCT-4 exist in various tissues and give rise to tissue-specific progenitors which further differentiate into tissue-specific cell types. As the VSELs start differentiating, OCT-4 is observed in the cytoplasm, and as the cells differentiate further, it is eventually lost. Our work on mammalian gonads has shown that VSELs indeed have nuclear OCT-4, whereas their immediate progenitors, the spermatogonial stem cells (SSCs) in testis [40] and the ovarian germ stem cells (OGSCs) in the ovary, have cytoplasmic OCT-4 [41]. We used a polyclonal antibody against OCT-4 which detects both isoforms (nuclear and cytoplasmic) and showed that VSELs have nuclear Oct-4 and that, once differentiation is initiated in the progenitors, OCT-4 is cytoplasmic. Q-PCR analysis clearly shows the abundance of Oct-4B over Oct-4A. In order to show the presence of pluripotent VSELs in the adult mammalian gonads, we have always shown the presence of Oct-4A rather than Oct-4. We also reported the presence of VSELs in the discarded pellet of RBCs obtained during the volume-reduction step while processing cord blood and bone marrow [42] and also in MSC cultures (Figure 1). Umbilical cord tissue, especially Wharton's jelly, and bone marrow are considered rich sources of MSCs. Immunohistochemical studies of Wharton's jelly clearly show the presence of a subpopulation of VSELs amongst the MSCs (Figure 2) [42]. Similarly, early passages of MSCs from mouse bone marrow show the presence of VSELs as a distinct subpopulation (personal observations). Interestingly, OCT-4 showed nuclear expression in Wharton's jelly VSELs and was cytoplasmic in the MSCs. Similarly, Taichman et al. [43] demonstrated that VSELs could be at the top of the hierarchy for mesenchymal stem cells (MSCs) in mice. We made a case for VSELs present in the mammalian testis [44] that may actually give rise to the ES-like colonies during testicular tissue cultures [27][28][29][30][31][32][33]. Observations made by Lengner et al. [35] are indeed true, because Oct-4 is expressed at very low levels in the VSELs (detected only after >35 cycles during RT-PCR) compared to ES cells (detected after 20-25 cycles during RT-PCR), and the immediate progenitors, that is, the adult stem cells that exist in various adult tissues, express cytoplasmic Oct-4 which is eventually lost as the cells become more committed. Berg and Goodell [45] coauthored a preview on the Lengner study and correctly summarized in the first sentence that "absence of evidence is not evidence of absence" or, stated another way, "one cannot prove a negative." They also hinted at the existence of a stem cell population that was not tested in the studies reported, and now we understand that it was possibly the VSELs. Nayernia et al. [46] first reported that BM stem cells/MSCs can transdifferentiate into male germ cells both in vitro and in vivo. They transplanted BM cells into busulphan-treated mice and observed colonization and proliferation but no differentiation beyond the premeiotic spermatocyte stage. Since then, several groups have reported restoration of testicular function by transplanting MSCs. Lue et al. [47] transplanted GFP-tagged BM cells into the testicular interstitium and tubules of wild-type mice and reported that the transplanted cells differentiate into Leydig cells, Sertoli cells, and also into germ cells. Similarly, Aziz et al.
[48] also reported that bone marrow-derived MSCs, when transplanted into the rete testis of busulphan-treated azoospermic rats, transdifferentiate into spermatids and spermatocytes. Sabbaghi et al. [49] studied the ability of BM-derived MSCs to revive sperm production in a rat model of testicular torsion. They reported that transplantation of MSCs via the rete testis can revive spermatogenesis. Cakici et al. [50] also recently reported that fertility is restored in azoospermic rats by injecting adipose-derived MSCs. But this whole body of literature is confusing because these studies fail to acknowledge the presence of VSELs in the mammalian testis, which are indeed resistant to busulphan treatment. VSELs are also resistant to damage induced by radiation because of their quiescent nature [51]. VSELs persist in the busulphan-treated testis and possibly differentiate into germ cells/sperm in the presence of growth factors/cytokines secreted by the transplanted MSCs [52]. To conclude, we propose that MSCs indeed arise from VSELs [53], in agreement with earlier reports by Taichman et al. [43], and are multipotent, implying that they can give rise to various mesodermal cell types. Their pluripotent properties, implying transdifferentiation, are questionable, and whatever minimal transdifferentiation has been reported may actually be due to the existing subpopulation of VSELs. The very presence of MSCs in so many diverse body tissues forces us to think that they actually represent a highly specialized ground substance or microenvironment (a source of growth factors and cytokines) that allows the VSELs and their progenitors to maintain life-long tissue homeostasis, and that they are capable of immune modulation. The growth factors and cytokines secreted by the MSCs keep the VSELs in a quiescent state and maintain normal proliferation and differentiation. But with increasing age, MSC function is compromised, resulting in uncontrolled proliferation of stem cells at some level and hence an increased incidence of cancers. If VSEL function is disrupted, the tumors are more embryonic in nature and more lethal. The nature of the tumors will vary if the function of more committed progenitors is disrupted due to the altered secretome of the niche-providing cells. Thus the interaction of MSCs with VSELs and the tissue-committed stem cells ("progenitors"), and age-related changes in the MSC secretome, warrant further investigation.
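The detection-cycle figures quoted above (Oct-4 amplifying only after >35 RT-PCR cycles in VSELs versus 20-25 cycles in ES cells) imply an approximate fold-difference in transcript abundance that is easy to make explicit. A minimal sketch in Python, assuming ideal doubling per cycle (100% amplification efficiency); the cycle numbers come from the passage, and the helper name is ours:

```python
# Fold-difference in template abundance implied by a gap in RT-PCR
# detection cycles, assuming the template doubles every cycle (2**dCt).
def fold_difference(ct_low_expressor: float, ct_high_expressor: float) -> float:
    """A higher detection cycle (Ct) means less starting template."""
    return 2.0 ** (ct_low_expressor - ct_high_expressor)

# Cycle numbers quoted in the text: VSELs detected after >35 cycles,
# ES cells after 20-25 cycles.
for es_ct in (20, 25):
    ratio = fold_difference(ct_low_expressor=35, ct_high_expressor=es_ct)
    print(f"ES Ct {es_ct} vs VSEL Ct 35 -> roughly {ratio:,.0f}-fold lower Oct-4 in VSELs")
# -> roughly 32,768-fold (Ct 20) and 1,024-fold (Ct 25) lower
```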
2016-05-12T22:15:10.714Z
2013-09-25T00:00:00.000
{ "year": 2013, "sha1": "9b86f8d36d0ca431802488507d7947a59a414bbb", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/sci/2013/547501.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5c2c3aed377e84b893e3d27d63b8bb903d0840d2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267090514
pes2o/s2orc
v3-fos-license
The use of the ERAS protocol in malnourished and properly nourished patients undergoing elective surgery: a questionnaire study

Background: Enhanced recovery after surgery (ERAS) is a modern approach to perioperative management. This study aimed to evaluate compliance with certain aspects of the ERAS protocol in malnourished and properly nourished patients undergoing elective surgery. Methods: A questionnaire study was conducted among 197 patients undergoing elective surgery at the university hospital. We divided patients into two groups according to nutritional status. Results: The study's results showed that 67 patients (34%) lost weight before admission (the weight-loss group). Twenty-five participants (37%) in the weight-loss group and 15 patients (12%) in the preserved-weight group underwent surgery due to cancer (P < 0.001). More patients in the weight-loss group (45 of 67) than in the preserved-weight group (40 of 129, P < 0.001) limited their food intake a week before the surgery. The preserved-weight group participants were mobilized earlier than the weight-loss group (P = 0.04). The median times since drinking the last fluids and eating the last meal before the surgery were 12.2 hours and 25.4 hours, respectively, across both groups. Only eight patients received preoperative carbohydrate loading. We found higher serum protein concentrations in the preserved-weight group (7.10 [0.5] vs. 6.92 [0.71], P = 0.023); however, white blood cell count was higher in the weight-loss group (7.85 [2.28] vs. 7.10 [0.50], P = 0.04). Both groups were highly satisfied with their hospital treatments. Conclusions: Our study revealed relatively high malnutrition in patients undergoing elective surgery. The implementation level of the ERAS protocol as a standard of perioperative care in the studied centre is low.

METHODS
The Medical University of Lublin Ethics Committee provided ethical approval for this study (number KE-0254/281/2018, November 29, 2018). This was a prospective, observational study involving a group of adult patients following elective surgery. Patients answered the questions included in the questionnaire 1 to 4 days after undergoing their respective surgical procedures (a minimum of 24 hours following the procedure). For the study, we included adults (≥ 18 years) undergoing elective surgery in the gynaecological and surgical departments of our hospital, anaesthetised with general and/or regional techniques. Patients who could not give informed consent, patients after procedures under local anaesthesia and/or not involving anaesthesiologists, and patients admitted to the intensive care unit were excluded. The patients spent at least two postoperative nights in the hospital. The data were collected by medical students after obtaining written consent from the participants. The medical students obtained consent from the patients on the day of the survey collection after informing them about the aims of our trial and ensuring the anonymity of their identities.

Survey
The questionnaire form consisted of 14 questions reflecting alterations in food and fluid intake in the perioperative period, complications, and satisfaction with perioperative treatment. The questionnaire form is presented in Appendix 1.
Outcomes
The primary outcome was the proportion of patients who lost weight in the last six months before the surgery (Question 1 in the survey). According to the patients' answers to this question, we divided participants into two groups: the weight-loss group and the preserved-weight group. The other outcomes included in the survey were limited food and fluid intake due to illness and surgery, medication compliance, specific preoperative preparations, and patient satisfaction concerning the perioperative period. Moreover, we also evaluated several of the patients' complications, such as postoperative bleeding, infections, deaths, re-surgery, and readmissions connected with the previous hospitalisation up to a year following the surgery.

Statistical analysis
We analysed continuous variables with the t-test or Mann-Whitney U test and categorical parameters with Fisher's exact test. We used means (standard deviations) for normally distributed variables, medians (interquartile ranges [IQR]) for non-normally distributed parameters, and numbers (percentages) to present categorical data. All analyses were performed using Statistica 13.1 software (StatSoft Inc., Tulsa, OK, United States).

RESULTS
A hundred and ninety-seven patients after general, oncological, and gynaecological surgery procedures took part in the study. Medical students collected questionnaire forms from January to March 2020. Participant demographics and laboratory results at hospital admission are presented in Table 1. More patients in the weight-loss group had cancer surgery than participants in the preserved-weight group (37% vs. 12%, P < 0.001). Furthermore, we found that serum protein concentration was higher in the preserved-weight group (7.19 [0.5] vs. 6.92 [0.71]; P = 0.023); however, white blood cell counts were higher in the weight-loss group (7.85 [2.28] vs. 7.10 [0.50]; P = 0.04).

Outcomes
Patients in the weight-loss group lost a median (IQR) of 6 (4-10) kg in the 6 months before the surgery (Question 3). We found a significant difference between the studied groups in terms of the limitation of their food intake during the week prior to the surgery due to illness (Question 3). Forty-five of 68 participants in the weight-loss group limited food intake, whereas only 40 of 129 patients in the preserved-weight group confirmed food reduction a week before the surgery (P < 0.001). The preserved-weight group participants were mobilised earlier than the weight-loss group (P = 0.04, Question 9). The results of the survey are presented in Table 2. We noted 22 postoperative complications in our patients, 11 per group (P = 0.15). Seven patients in the weight-loss group underwent re-surgery, in contrast to 11 in the preserved-weight group (P = 0.8). Six patients in the weight-loss group required readmission in comparison to three participants in the other group (P = 0.06). Five patients eventually died; four belonged to the preserved-weight group while one belonged to the weight-loss group (P = 0.66).

DISCUSSION
In our cohort, 35% of patients had lost a relevant amount of weight due to illness before the surgery (Table 1). The prevalence of weight loss was slightly lower in our research than in other studies assessing the risk of malnutrition in a surgical population (40-44%) [8,9]. Portuondo et al. found that 44% of patients were malnourished before elective surgery. Moreover, the study's authors also found an association between low albumin concentration and malnutrition. We did not observe a significant difference in albumin concentration (P = 0.07); however, we found a significant difference in protein concentrations between the two studied groups (P = 0.023) (Table 1). Moreover, we also identified a higher WBC in the weight-loss group (P = 0.04), a finding that is consistent with studies showing a correlation between malnutrition and inflammation [10,11]. Although albumins and proteins are no longer considered markers of malnutrition, they are often linked with body weight loss. Some new parameters are debated as potential laboratory tests in malnutrition screening [12]. Our previous study concerning the evaluation of patients in a pre-anaesthetic clinic revealed a lower prevalence of weight loss among these individuals [13]. Only 20% of patients (93 of 467) experienced relevant weight loss in the pre-anaesthetic clinic. This difference between the two studies (35% vs. 20%) could be associated with the assessment of more surgical departments in the pre-anaesthetic clinic, and not only the general surgery and gynaecological wards as in the current study. Patients with cancer experienced a higher risk of weight loss in our research (25 of 68 vs. 15 of 129 patients, P < 0.001) (Table 1). This result is consistent with a recent international study published in The Lancet [14]. In the abovementioned paper, 41.8% and 26.4% of patients in high- and upper-middle-income countries, respectively, were severely malnourished.

The study results reveal several differences between perioperative procedures in our hospital and the assumptions of the ERAS protocol. Eighty-five out of 197 (43%) patients in our study limited their food intake due to illness, with more doing so in the weight-loss group (66% vs. 31%, P < 0.001). Only eight patients in our study took carbohydrate supplements before the surgery. Although some authors have postulated that there are benefits associated with carbohydrate loading, a recent meta-analysis did not reveal any benefits relating to this intervention [15,16]. Mechanical bowel preparation (enema) used to be a routinely performed procedure before many abdominal procedures. New evidence suggests that bowel preparation is not necessary [17]. In our study, an enema was conducted in 44 cases (22%) during the preoperative period. According to the ERAS protocol, patients should not cease consumption of nutrition and fluids too early before the surgery [18]. Preoperative fasting guidelines of the American Society of Anesthesiologists (ASA) support the safety of allowing clear liquids up to 2 h and solid foods up to 6 h (fatty foods up to 8 h) [19] before elective procedures requiring general anaesthesia, regional anaesthesia, or procedural sedation and analgesia. Moreover, in most cases, oral feeding should resume several hours following the procedure [20,21]. In our study, the median time between cessation of fluids and the surgery was 12.2 hours, and the last meal was taken 25.4 hours before the procedure. We did not find a difference in these aspects between the studied groups. Most of our patients resumed fluid intake on the day after the surgery (Question 9).
Early mobilisation is a crucial component of the ERAS pathway that shortens hospitalisation, helps preserve muscle function, and reduces the risk of postoperative complications [22]. Early rehabilitation, which may include exercising in bed and sitting out of bed, should begin on the day of surgery. In our study, only approximately 25% of patients achieved early mobilisation. Despite relatively low compliance with the ERAS protocol in our cohort, both groups were highly satisfied with the hospitalisation, the quality of information obtained in the perioperative period, and the kindness of the medical personnel (Table 2). Moreover, we did not identify differences in the long-term outcomes and postoperative complications between the two studied groups. Our study showed low compliance with the ERAS protocol among surgical patients in our centre. The reasons for this include the habits of medical personnel and a lack of knowledge concerning new recommendations and guidelines. This state could potentially be improved through better adherence to new recommendations and periodic audits in our centre. Our study had several limitations. It was a single-centre study covering a relatively small cohort. The survey was conducted in only two departments. Satisfaction with hospitalisation was measured with investigators present, which may have influenced responses. Furthermore, only some aspects of the ERAS protocol were covered in our survey.

CONCLUSIONS
The results obtained in our study reveal a relatively high prevalence of malnutrition among patients undergoing elective surgery in our hospital. The implementation level of the ERAS protocol as a standard of perioperative care in the studied centre is low. Due to the possible benefits for the patient and the hospital, the current preoperative and postoperative procedures should be modified to better meet the ERAS assumptions.

TABLE 1. Patient demographics and laboratory results at admission. All data are presented as numbers (%) and means (SD). SD - standard deviation, WBC - white blood cell count.
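For readers who want to reproduce the style of analysis described in the Methods (t-test or Mann-Whitney U for continuous variables, Fisher's exact test for categorical ones), a minimal sketch in Python with scipy is given below. The continuous arrays are placeholder values, not the study's raw data; the 2x2 table uses the food-limitation counts quoted in the abstract (45 of 67 vs. 40 of 129):

```python
import numpy as np
from scipy import stats

# Continuous variable: use the t-test when both groups pass a normality
# check, otherwise fall back to the Mann-Whitney U test.
weight_loss = np.array([6.9, 7.1, 6.5, 7.3, 6.8, 7.0])   # placeholder values
preserved = np.array([7.2, 7.0, 7.4, 7.1, 7.3, 7.2])     # placeholder values
if all(stats.shapiro(g).pvalue > 0.05 for g in (weight_loss, preserved)):
    result = stats.ttest_ind(weight_loss, preserved)
else:
    result = stats.mannwhitneyu(weight_loss, preserved, alternative="two-sided")
print(f"continuous comparison: p = {result.pvalue:.3f}")

# Categorical variable: Fisher's exact test on a 2x2 table of
# (limited food intake, did not) per group, counts from the abstract.
table = [[45, 67 - 45],    # weight-loss group
         [40, 129 - 40]]   # preserved-weight group
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_value:.4g}")
```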
2024-01-24T05:08:29.285Z
2023-12-30T00:00:00.000
{ "year": 2023, "sha1": "c637079fadfda033d1a90596133cc52623922284", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "c637079fadfda033d1a90596133cc52623922284", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
227319317
pes2o/s2orc
v3-fos-license
Comparison of fentanyl, clonidine and nalbuphine for attenuation of pressor response during endotracheal intubation under general anaesthesia

Background: Increased sympathoadrenal activity invariably occurs during endotracheal intubation. Various drugs have been used to obtund this pressor response. This study was done to find the most favourable drug among fentanyl, nalbuphine and clonidine for prevention of this pressor response. Materials and Methods: This was a randomized, prospective study involving ninety patients of ASA grade 1, equally divided into three groups. Groups F, C and N received fentanyl 2 mcg/kg, clonidine 2 mcg/kg and nalbuphine 0.2 mg/kg i.v. respectively, 5 minutes prior to induction. Vital parameters were noted at frequent intervals. The chi-square test and ANOVA were used for statistical analysis. A P value <0.05 was considered statistically significant. Results: The maximum increase in heart rate and blood pressure was seen at the time of intubation in the F and N groups, whereas a decrease in these parameters occurred with clonidine; the difference was found to be statistically significant. Haemodynamic stability was seen in the F and N groups 5 minutes after intubation. Clonidine showed the maximum decrease in heart rate and in systolic as well as diastolic blood pressure at all time intervals from intubation as compared to the other two groups. Conclusion: In this study it was found that clonidine produced earlier and more stable haemodynamics as compared to fentanyl and nalbuphine, and it can be concluded that clonidine given intravenously in a dose of 2 mcg/kg 5 minutes prior to intubation is superior to fentanyl and nalbuphine in preventing haemodynamic changes at the time of laryngoscopy and intubation. © This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Introduction
Laryngoscopy and endotracheal intubation are associated with stressful stimuli that provoke tachycardia, hypertension and arrhythmias due to increased sympathoadrenal activity, 1 which may be detrimental in patients with cardiovascular disease, congestive heart failure (CHF), the geriatric population and cerebral haemorrhage. To blunt this pressor response various drug regimens have been tried, including opioids, barbiturates, benzodiazepines, beta blockers, calcium channel blockers and vasodilators, but each drug has its own shortcomings. [2][3][4][5][6][7] These drugs have been studied and compared individually and amongst each other but have never been studied together in a similar set of patients. The purpose of this study was to compare fentanyl, nalbuphine and clonidine for attenuation of the pressor response due to laryngoscopy and intubation.

Materials and Methods
A prospective study was done involving 90 adult patients of age group 18-50 years, ASA grade 1 and 2, Mallampati grade 1 and 2, who were posted for surgery under general anaesthesia. The research proposal was submitted to and approved by the ethics committee. A thorough pre-anaesthetic evaluation was done after written informed consent. Pregnant patients, patients with a history of cardiovascular, renal or hepatic disease or allergy to opioids, and patients in whom difficult intubation was anticipated were excluded from the study. Patients were randomly distributed into 3 groups (30 each) by computerized random allocation.
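As a quick worked example of the per-kilogram dosing used in the three arms (fentanyl 2.0 mcg/kg, clonidine 2.0 mcg/kg, nalbuphine 0.2 mg/kg, each i.v. 5 minutes before laryngoscopy), the sketch below computes per-patient doses; the helper function is illustrative and not part of any study protocol software:

```python
def study_doses(weight_kg: float) -> dict:
    """Per-patient doses at the study's per-kg rates (i.v., 5 min pre-laryngoscopy)."""
    return {
        "fentanyl_mcg": 2.0 * weight_kg,    # Group F: 2.0 mcg/kg
        "clonidine_mcg": 2.0 * weight_kg,   # Group C: 2.0 mcg/kg
        "nalbuphine_mg": 0.2 * weight_kg,   # Group N: 0.2 mg/kg
    }

# For a 70 kg adult: 140 mcg fentanyl, 140 mcg clonidine, or 14 mg nalbuphine.
print(study_doses(70.0))
```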
Study drugs were given 5 minutes prior to laryngoscopy: fentanyl 2.0 mcg/kg (Group F), clonidine 2.0 mcg/kg (Group C), nalbuphine 0.2 mg/kg (Group N). All standard ASA monitors were attached and continuously monitored. Intravenous Inj. midazolam 2 mg and Inj. glycopyrrolate in a dose of 0.02 mg/kg were given as premedicants. After preoxygenation with 100% oxygen, induction was done with Inj. propofol (2 mg/kg). Inj. vecuronium 0.1 mg/kg i.v. was used to facilitate intubation. Oxygen and nitrous oxide in a ratio of 40:60, isoflurane (0.5-1.5%) and intermittent vecuronium were used for maintenance of anaesthesia. For the first 10 minutes post intubation no other drug or surgical stimulus was given to the patients. Patients in whom laryngoscopy time was more than 30 seconds or in whom a second attempt was required were excluded. Heart rate, systolic blood pressure and diastolic blood pressure were noted at time intervals starting with baseline parameters (BL), during intubation (Ti) and then at one minute (T1), three minutes (T3), five minutes (T5) and ten minutes (T10) after intubation. After recording the parameters at the above-mentioned intervals, surgery was started. A sample size of 30 in each group was calculated on the basis of a pilot study. Data were analysed using SPSS v21. Categorical data were represented as frequencies and percentages. Continuous data were represented as mean and standard deviation. The chi-square test was used as the test of significance for categorical data. ANOVA was used as the test of significance for comparison of means between the three groups. A P value <0.05 was considered statistically significant. Line diagrams were drawn to represent the trend over a period of time.

Results
There was no statistical difference among the three groups with respect to the demographic variables (age, weight and gender) of the patients (Table 1). During intubation (Ti) the maximum change in heart rate was seen in group F (fentanyl group, % change from baseline +17.44%), whereas group C (clonidine group) showed a decrease in heart rate (-4.2%) and group N (nalbuphine group) had an increase of 14.8%. This difference was statistically significant between the three groups. At 1 minute (T1) and 3 minutes (T3) after intubation there was an increase in HR in group F (+15.9% and +6.4%) and group N (+13.4% and +5.7%), with a greater increase in group F, whereas in group C there was a decrease in HR (-5.2% and -8.2%). There was a statistical difference between F and C, and between C and N, while there was no statistical difference between F and N. At 5 minutes after intubation (T5) there was a decrease in HR in all the groups, with the maximum decrease in group C (-8.5%). This difference was statistically significant among the groups. At 10 minutes after intubation (T10) there was a decrease in HR in all the groups, with a greater decrease in groups C and N and a smaller decrease in group F (-5.3%). This was statistically non-significant among the groups (Tables 2 and 3). At Ti the maximum increase in SBP was in group F (+10.4%), while in group C there was a decrease in SBP (-4.3%). This was statistically significant between F and C, and between C and N, but not between F and N. At T1 and T3 the maximum increase in SBP was in group F, while there was a decrease in SBP in group C. This was statistically significant between F and C, and between C and N, but not between F and N. At T5 there was an increase in SBP in groups F and N, with a greater increase in group F, while there was a marked decrease in group C (-18.7%). The changes were statistically significant among all the groups.
At T10 there was a decrease in SBP in all the groups, with a greater decrease in group C. This was also statistically significant among all the groups (Tables 4 and 5). At Ti there was an increase in DBP in the F and N groups, with the maximum increase in group F (+12.3%). There was a slight decrease in group C. This was statistically significant between F and N, and between C and N, but not between F and C. At T1 and T3 there was an increase in DBP in groups F and N, with a greater increase in group F, while there was a decrease in group C. This was statistically significant between F and C, and between C and N, but not between F and N (at T1), while at T3 this was statistically significant among all the groups. At T5 there was an increase in DBP in groups F and N, while there was a decrease in group C. This was statistically significant among all the groups. At T10 there was a decrease in DBP in all three groups. The maximum decrease was seen in group C (-15.3%). This was statistically significant between groups F and C and between F and N, but not between C and N (Tables 6 and 7).

Discussion
Most patients undergoing surgery under general anaesthesia require endotracheal intubation. Laryngoscopy performed for intubation causes sympathetic stimulation, which increases blood pressure as well as heart rate. 1 Increases of up to 36-45% in systolic blood pressure and 20-45% in heart rate over baseline values may occur if no specific preventive measures are undertaken. 14,15 Some patients may even develop arrhythmias. Various agents like calcium channel blockers, vasodilators, alpha-2 agonists and narcotics have been used to attenuate the pressor response to laryngoscopy and intubation. [2][3][4][5][6][7] Narcotics like fentanyl and nalbuphine, besides obtunding the haemodynamic response to intubation, also have the advantage of maintaining depth of anaesthesia. Fentanyl, a pure agonist, acts on mu receptors. 16 Its onset of action is rapid and its duration of action is short. It causes an increase in parasympathetic tone while decreasing sympathetic tone. Clonidine is an alpha-2 receptor agonist which has previously been used as an antihypertensive agent and also as an agent to blunt the pressor response due to laryngoscopy and intubation. As shown by Zalundro et al., intravenous clonidine was better than oral clonidine in attenuating the pressor response. 17 Bhalereo et al. found that clonidine given i.v. as a premedicant was effective in preventing the stress-induced haemodynamic response. 18 Chawda, Pareek and Mehta found that 0.2 mg/kg of nalbuphine given 3-5 minutes before intubation is effective in preventing the pressor response. 19 The effectiveness of nalbuphine in a dose of 0.2 mg/kg in achieving acceptably stable vital parameters and a good anaesthetic effect was shown in the study by Nath R et al. 20 These drugs have been studied and compared individually and amongst each other but have never been studied together in a similar set of patients. A study by Ko et al. 21 concluded that the optimal time to give fentanyl so as to prevent the pressor response to intubation was 5 minutes before intubation. In our study also, drugs were given 5 minutes prior to intubation. Fentanyl at 5 micrograms per kilogram (mcg/kg) body weight can effectively obtund the haemodynamic response to laryngoscopy, as found by Kay et al., 22 but at the expense of side effects like nausea, vomiting, muscular rigidity and bradycardia. Apnoeic episodes were seen by McClain in 4 patients with doses of 3.2 to 6.5 mcg/kg of fentanyl. 23 We decided to use a dose of 2.0 mcg/kg of fentanyl to prevent the pressor response.
A study by Yushi et al. found that fentanyl in a dose of 2 mcg/kg is effective in suppressing the pressor response to endotracheal intubation. 24 Postoperative respiratory depression may occur in surgeries lasting less than one hour with high doses of fentanyl. 25,26 Gupta and Tank also found that fentanyl given in a dose of 2 mcg/kg before induction was effective in obtunding the pressor response to endotracheal intubation and laryngoscopy. 27 However, they did not comment upon the optimum time for intubation after administration of the drug. In our study, at the time of intubation, a significant increase in heart rate was seen in the F and N groups, whereas the C group did not show any increase in heart rate (statistically significant), signifying that clonidine was better at maintaining heart rate at the time of intubation. The increase in heart rate in both groups (F, N) persisted at 3 minutes and gradually settled after 5 minutes. There was no significant variation in heart rate in group C throughout the observation period. Our findings of change in heart rate in the F and N groups are similar to those of Rawal and Wennhager. 28 A study by Ahsan et al. on nalbuphine also found gradual settling of the heart rate after intubation. 15 Group C showed the best control of HR as compared to Group F and Group N, which is due to a reduction in sympathetic outflow and an increase in parasympathetic tone of central origin. 29 The trend can be appreciated from Figure 1. Our findings are supported by a study by Dipak and Malini, who found clonidine was effective in attenuating the pressor response to intubation. 30 There was an increase in blood pressure in both the F and N groups at the time of intubation, which persisted even at 3 and 5 minutes after intubation, while a slight decrease in blood pressure was observed in group C. Systolic and diastolic blood pressure settled down to pre-intubation levels after 10 minutes in groups F and N, whereas the maximum decrement was seen in group C (Figures 2 and 3). Our findings are in concordance with the study by Kalra NK et al., 9 who found intravenous clonidine was better than magnesium sulphate in controlling blood pressure during laparoscopic cholecystectomy. A study by Chaudhari M et al. also found a statistically significant increase in blood pressure and heart rate just after intubation in the nalbuphine group as compared to the clonidine group, corroborating our study findings. 31 Our findings are also in concordance with Bhalereo et al., who found that clonidine given i.v. as a premedicant was effective in preventing the stress-induced haemodynamic response. 18

Conclusion
Obtundation of the pressor response during laryngoscopy is important to reduce the morbidity associated with increased sympathoadrenal drive. In this study we found that all three agents, fentanyl, nalbuphine and clonidine, were effective in achieving the objective of controlling the haemodynamic response to the stress of endotracheal intubation. However, clonidine proved to be superior to the other two: the latter took 5-10 minutes to reach stable haemodynamic parameters, while clonidine produced more stable haemodynamics in less time when given in a dose of 2.0 micrograms per kilogram 5 minutes prior to intubation.

Limitation
A limitation of our study is that, in the postoperative period, patients could have been assessed for sedation and analgesia in all three groups.

Source of Funding
None.

Conflict of Interest
None.
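The percent-change figures in the Results are computed as (value - baseline) / baseline x 100 for each patient, and the between-group comparisons at each time point used ANOVA. A minimal sketch, with synthetic heart-rate vectors standing in for the per-patient recordings (the study's raw data are not reproduced here):

```python
import numpy as np
from scipy.stats import f_oneway

def pct_change(value: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Percent change from each patient's own baseline value."""
    return (value - baseline) / baseline * 100.0

rng = np.random.default_rng(seed=1)
groups = {}
for name, factor in {"F": 1.17, "N": 1.15, "C": 0.96}.items():  # approximate Ti effects
    baseline = rng.normal(80.0, 8.0, size=30)        # n = 30 per group, synthetic HR
    at_intubation = baseline * factor + rng.normal(0.0, 3.0, size=30)
    groups[name] = pct_change(at_intubation, baseline)

f_stat, p = f_oneway(groups["F"], groups["N"], groups["C"])  # one-way ANOVA across arms
means = {k: round(v.mean(), 1) for k, v in groups.items()}
print(f"mean % change at Ti: {means}, ANOVA p = {p:.2g}")
```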
2020-11-26T09:04:37.830Z
2020-11-15T00:00:00.000
{ "year": 2020, "sha1": "3517a1ae67ace65280577250a9a8aafdc9f95cd4", "oa_license": "CCBYNCSA", "oa_url": "https://www.ijca.in/journal-article-file/12688", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "b52eb70bbc123da3d64fe41f5d544c8bc5bf74ad", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
199434880
pes2o/s2orc
v3-fos-license
Evaluation of cases of pediatric extrapulmonary tuberculosis: a single center experience

Aim: Extrapulmonary tuberculosis is observed more frequently and leads to complications at a higher rate in children compared with adults because the risk of lymphohematogenous spread is higher. In this study, the clinical, laboratory, and radiologic findings and treatment outcomes were evaluated in pediatric patients who were followed up in our clinic with a diagnosis of extrapulmonary tuberculosis. Material and Methods: Seventy patients aged 0-18 years who were followed up with a diagnosis of extrapulmonary tuberculosis between 2008 and 2017 in the Division of Pediatric Infectious Diseases in our hospital were examined retrospectively. Results: The median age of the patients was 8.8 (range, 0.4-17) years and 47.1% were female (n=33). Twenty-seven patients (38.6%) were aged 0-4 years, 15 (21.4%) were aged 5-9 years, and 28 patients (40%) were aged 10-18 years. Forty-four patients (62.9%) were diagnosed as having extrapulmonary tuberculosis and 26 (37.1%) had pulmonary + extrapulmonary tuberculosis. The most common form of extrapulmonary tuberculosis was extrathoracic lymphadenopathy, which was found in 22 patients (31.4%). The other patients were diagnosed as having musculoskeletal system tuberculosis (n=10, 14.3%), gastrointestinal system tuberculosis (n=9, 12.9%), miliary tuberculosis (n=8, 11.4%), intrathoracic lymphadenopathy (n=7, 10%), renal tuberculosis (n=6, 8.6%), central nervous system tuberculosis (n=5, 7.1%), and pleural tuberculosis (n=3, 4.3%). Among a total of 58 patients in whom the tuberculin skin test and an interferon gamma release assay were studied together, tuberculin skin test positivity (n=37, 63.8%) was found at a higher rate compared with interferon gamma release assay positivity (n=32, 55.2%), but the difference was not statistically significant (p=0.35). The median treatment period was 12 (range, 6-24) months. Among the patients whose treatments were terminated, improvement was observed in 52 patients (74.2%) and the development of sequelae was observed in six patients (8.5%). Two patients who were diagnosed as having central nervous system tuberculosis (2.8%) died. Conclusion: Clinical, laboratory, and radiologic data should be evaluated together when making a diagnosis of extrapulmonary tuberculosis in children. Interferon gamma release assays alone are not superior to the tuberculin skin test, but should be considered for use in combination in the diagnosis.

Introduction
Tuberculosis is the ninth leading cause of death worldwide and is the leading cause of mortality from a single infectious agent (1). According to the World Health Organization (WHO) Tuberculosis 2017 Report, children aged below 15 years constituted approximately 10% of the about 10.4 million new tuberculosis cases in 2016. Around 1.3 million HIV-negative patients died of tuberculosis. In the same report, 66% of the 12,417 new cases reported from our country were recorded as pulmonary tuberculosis. These data belong to adults, and there is a limited number of reports related to children (2). Extrapulmonary tuberculosis (EP-TBC) is observed more frequently in children compared with adults because the risk of lymphohematogenous spread is high, especially in young children. Tuberculosis may affect any organ in the body, mainly including lymph nodes and the central nervous system (CNS). Accompanying extrapulmonary organ involvement may be observed in patients with pulmonary tuberculosis.
Attention should be paid to extrapulmonary organ involvement in patients diagnosed as having pulmonary tuberculosis because treatment should be administered for a longer period, especially in CNS tuberculosis and bone and joint tuberculosis. In children, it is particularly difficult to make a diagnosis of tuberculosis because most signs and symptoms of tuberculosis, which is one of the most important causes of mortality in the childhood age group, are non-specific, the sensitivity of diagnostic tests is low in pediatric patients, and tuberculosis may mimic many other disease entities (3). However, the most important factor that affects morbidity and mortality rates is early initiation of treatment. Therefore, it is recommended that treatment be initiated after the assessment of clinical and radiologic findings together when it is not possible to prove the disease through laboratory findings (4). The most important step required for making the diagnosis is a high level of suspicion. In this regard, the publication of tuberculosis data in children in our country is considerably important. In this article, we aimed to contribute to our country's data by evaluating the clinical, laboratory, and radiologic findings and treatment results in our pediatric patients who were followed up with a diagnosis of EP-TBC in our clinic between 2008 and 2017.

Material and Methods
Seventy pediatric patients aged 0 to 18 years who were followed up in the Division of Pediatric Infectious Diseases in our university with a diagnosis of EP-TBC between 2008 and 2017 were included in the study. The sex, age, history of contact with tuberculosis, number of Bacillus Calmette-Guerin (BCG) scars, symptoms at the time of presentation, physical examination findings, laboratory, radiologic, and microbiologic data, and treatment regimens belonging to these patients were extracted from the patient files.

Definition of cases
Pulmonary tuberculosis was defined as the presence of involvement of the pulmonary parenchyma. Extrapulmonary tuberculosis was defined as the presence of acid-resistant bacilli (ARB) in samples obtained from extrapulmonary organs or as the presence of compatible clinical, radiologic, and histologic findings. Extrapulmonary tuberculosis was classified as lymphadenitis, bone, skin, CNS, gastrointestinal system and peritoneum, eye, genitourinary system, and miliary tuberculosis. In patients in whom pulmonary tuberculosis and EP-TBC existed simultaneously, it was stated that both involvements were present, and the extrapulmonary organs involved were recorded (5). The diagnosis of tuberculosis meningitis was made by the demonstration of ARB in cerebrospinal fluid (CSF) samples, the presence of a positive culture result, and at least one radiologic finding. Patients who had no previous tuberculosis treatment or who had received tuberculosis treatment for less than one month were defined as 'new cases.' Patients who were previously diagnosed as having tuberculosis, completed treatment successfully, and once again developed ARB positivity and clinical and radiological findings were defined as 'recurrence cases.' Patients whose disease was newly diagnosed and in whom the bacillus was demonstrated by smear or culture in sputum samples obtained five months or later after initiation of treatment were defined as 'treatment failure cases' (5).

Radiologic evaluation
The radiologic imaging of all patients at baseline and at the end of treatment was evaluated by a pediatric radiologist.
Pulmonary tuberculosis was screened for using posteroanterior chest radiography and thoracic computed tomography, and patients with EP-TBC were screened for tuberculosis-specific radiologic findings according to the infection site.

Microbiologic evaluation
Exposure to the tuberculosis bacillus was investigated through tuberculin skin tests (TST) and interferon gamma release assays (IGRA). For the TSTs, the transverse diameter of induration was evaluated 48-72 hours after 0.1 mL of 5 TU solution was injected intradermally into the inner two-thirds of the forearm using a 27-gauge needle such that a 6-10 mm papule was formed (the "Mantoux" method). A positive tuberculin skin response was defined as an induration size of ≥15 mm in individuals who had no risk factors and had a BCG scar, >10 mm in individuals who had no BCG scar, and >5 mm in individuals who had risk factors (6). Sputum, fasting gastric juice, bronchoalveolar lavage fluid, CSF samples obtained through lumbar puncture in appropriate cases, and tissue samples obtained according to the organ involved were used for microbiologic tests. In direct smears, ARB were sought, and cultures were performed. Löwenstein-Jensen medium and the BACTEC Middlebrook system were used for culture. The diagnosis of tuberculosis was made by detecting the agent in culture and/or by the presence of histopathologic, clinical, and radiologic findings (6). Ethics committee approval was obtained from the Istanbul University Istanbul Medical Faculty Ethics Committee for this study (2018/581). The study was conducted in accordance with the principles of the Declaration of Helsinki. Informed consent was not required because the study was conducted retrospectively.

Statistical analysis
The Statistical Package for the Social Sciences (SPSS) statistical program was used for statistical analyses (Version 21, Chicago). Qualitative measurements are expressed as numbers and percentages, and quantitative measurements are expressed as mean±standard deviation (with median, minimum, and maximum values specified when necessary). The McNemar test was used for the evaluation of dependent groups. A p value of <0.05 was considered statistically significant.

The diagnosis of CNS tuberculosis was made through culture in one subject, radiologic findings in two patients, and growth in culture together with radiologic findings in two patients. Among the patients who were diagnosed as having gastrointestinal tract tuberculosis, one subject had biopsy findings (granulomatous inflammation) and radiologic findings (intraabdominal LAP + peritoneal thickening), one subject had a positive peritoneal fluid culture, and two patients had radiologic findings compatible with tuberculosis.

Clinical findings and laboratory examinations
The most common symptom at presentation was neck swelling, which was found in 24 (34.3%) patients. Fever and night sweating were present in 11 patients (15.7%), restricted movement and pain in the extremities in nine (12.9%), cough in seven (10%), hematuria in seven (10%), and weight loss in six patients (8.6%). Blurred consciousness, vomiting, and headache were found in three patients (4.3%), vomiting in two patients (2.8%), and growth and developmental delay in one patient (1.4%). As a result of history taking and family screening, contact with an individual with tuberculosis in the immediate vicinity was found in 20 patients (28.6%). A BCG scar was present in 63 patients (90%).
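The TST cut-offs described in the Microbiologic evaluation above map directly onto a small decision rule; a minimal sketch follows. The precedence used here (risk factors first, then BCG status) is one plausible reading of the criteria as stated, not an algorithm given by reference (6):

```python
def tst_positive(induration_mm: float, has_bcg_scar: bool,
                 has_risk_factors: bool) -> bool:
    """Apply the study's TST cut-offs to a transverse induration measurement."""
    if has_risk_factors:
        return induration_mm > 5        # >5 mm with risk factors
    if not has_bcg_scar:
        return induration_mm > 10       # >10 mm without a BCG scar
    return induration_mm >= 15          # >=15 mm with a BCG scar, no risk factors

assert tst_positive(16, has_bcg_scar=True, has_risk_factors=False)
assert not tst_positive(12, has_bcg_scar=True, has_risk_factors=False)
assert tst_positive(12, has_bcg_scar=False, has_risk_factors=False)
assert tst_positive(6, has_bcg_scar=True, has_risk_factors=True)
```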
The TST result was found to be positive in 39 (60%) of the 65 patients who underwent the test. The interferon gamma release assay was found to be positive in 33 of 61 patients (54.1%). In a total of 58 patients in whom the TST and IGRA were performed together, the frequency of TST positivity (n=37, 63.8%) was found to be higher compared with IGRA positivity (n=32, 55.2%), but the difference was not statistically significant (p=0.35). When the laboratory findings at the time of diagnosis were evaluated, anemia was found in 12 patients (17.1%), leukocytosis in three patients (4.2%), an increased C-reactive protein level in 34 patients (48.5%), and an increased erythrocyte sedimentation rate in 49 patients (70%). The test was negative in all patients in whom anti-HIV tests were studied (36/70). An increase in transaminase levels due to treatment was found in two patients (2.8%). Anaphylaxis developed following administration of streptomycin in one patient (1.4%). Recovery was observed in 52 patients (74.2%) and the development of sequelae [hydrocephalus (n=2, 2.8%), kyphosis-lordosis (n=3, 4.2%), hydronephrosis (n=1, 1.4%)] was observed in six patients (8.5%) after treatment. Recurrence was observed nine months after treatment was discontinued in one patient (1.4%). Two patients diagnosed as having CNS tuberculosis (2.8%) died: one was lost on the 10th day, and the other, who had hydrocephalus, was lost in the 1st month following the development of acute loss of consciousness during follow-up. In the first of these patients, in whom herniation was considered the primary cause, steroid treatment could not be administered because he presented with severely impaired consciousness, and the diagnosis was made postmortem. Steroid treatment was used in the other subject, who had diffuse tuberculomas and hydrocephalus.

Discussion
In Turkey, the incidence of tuberculosis was reported as 17.2/100,000 in the Tuberculosis Control Report 2015 (7). When the distribution of the patients with tuberculosis by age group was examined, it was found that the case rate was about 4.7/100,000 below the age of 15 years. In all cases, extrapulmonary organ involvement is found at a rate of 35.4% and both pulmonary and extrapulmonary involvement at a rate of 4.6%; it is noted that these rates are higher in the age group below 15 years. It is known that the risk of progression from tuberculosis infection to disease and of the development of severe disseminated disease is increased in children, especially in the first year of life (8). In this age group, the incidence of EP-TBC increases because the risk of lymphohematogenous dissemination is high. Similarly, children aged between 0 and 4 years constituted more than one-third of our patients diagnosed as having EP-TBC in our study. In contrast to adults, the possibility of EP-TBC accompanying pulmonary tuberculosis is increased in childhood. In our study, this rate was found to be 37.1%. In the light of this finding, it should be emphasized that a high level of suspicion should be maintained in terms of screening for extrapulmonary organ involvement when a diagnosis of pulmonary tuberculosis is made, especially in young children. Tuberculosis lymphadenitis is generally the most common form of EP-TBC (9). In the study conducted by Coşar et al. (10) in which childhood tuberculosis was evaluated, the frequency of EP-TBC was found to be 38.6% and the frequency of tuberculosis lymphadenitis 11.7%.
Similarly, the most common form of EP-TBC in our study was extrathoracic lymphadenopathy (31.4%). According to the statistics of our country, pleural tuberculosis is the second most common form of extrapulmonary tuberculosis, including in children aged below 15 years. Bone-joint involvement is observed rarely and constitutes 3% of all tuberculosis cases (9). Its frequency among extrapulmonary tuberculosis cases has been reported to be about 10-35%. In our case series, the second most common form of extrapulmonary tuberculosis was musculoskeletal tuberculosis (14.3%), in contrast to the country-wide data. Pleural tuberculosis alone was the rarest form of EP-TBC. This may be related to the fact that complex cases were referred to us. The most severe forms of extrapulmonary tuberculosis are CNS tuberculosis and miliary tuberculosis. It has been reported that the risk of development of these two conditions is high, especially in children aged between 6 months and 4 years (11)(12)(13)(14). Central nervous system tuberculosis may be observed as parenchymal or meningeal tuberculosis (15). In CNS tuberculosis, the tuberculin skin test is positive in only 30% of cases (16). Radiologically, the most common findings include hydrocephalus, basal meningeal involvement, and increased contrast uptake in the meninges (17,18). In adults, these findings occur in 4-6 weeks, whereas they may be observed within a short period after disease onset (in 5-10 days) in pediatric cases (19). Therefore, investigation in terms of miliary tuberculosis and possible CNS tuberculosis, especially in children aged below 4 years, will reduce the rates of complications and mortality. Statistics related to the complications of tuberculosis meningitis in children vary in the literature. In the study conducted by Anjum et al. (20) in Pakistan, the mortality rate was found to be 5% in 40 children with tuberculosis meningitis, and it was reported that neurologic sequelae developed in all patients who survived. In another article reported from Vietnam, the mortality rate was reported as 15% and the rate of neurologic sequelae as 33% in children with tuberculosis meningitis (21). In our study, the median age was 8.8 (range, 0.4-17) years in the patients who had CNS tuberculosis. Two patients died of herniation during the follow-up period and ventriculoperitoneal shunts were placed in two patients. In accordance with the literature, TST and IGRA positivity rates were found to be low in our patients who had CNS tuberculosis. However, TSTs were positive at a reasonably high rate (71.4%) in our patients who had miliary tuberculosis. In a study conducted by Devrim et al. (22) in which pediatric cases of pulmonary tuberculosis and EP-TBC were compared, it was found that constitutional symptoms including fever, weight loss, and fatigue occurred at a significantly lower rate in EP-TBC. This may cause underdiagnosis of EP-TBC. Moreover, it has been reported that TST positivity rates in cases of EP-TBC are lower compared with pulmonary tuberculosis (23,24). In cases of extrapulmonary tuberculosis, the data related to IGRA positivity rates vary in the literature (25). In a study conducted by Azghay et al. (26), the rate of QuantiFERON test positivity was found to be higher in patients with tuberculosis lymphadenitis compared with patients with pulmonary tuberculosis. However, the sensitivity of the IGRA was reported to be considerably low (45%) in bone-joint tuberculosis.
In our study, TST positivity (63.8%) was higher than IGRT positivity (55.2%) in patients in whom TST and IGRT were studied together, but the difference was not statistically significant. When our study is evaluated in view of the literature, it can be stated that IGRT alone is not superior to TST in geographic areas where the incidence of tuberculosis is high, and it is appropriate to evaluate patients with clinical, radiologic, and microbiologic data in combination with TST. Treatment of extrapulmonary tuberculosis is generally similar to that of pulmonary tuberculosis; its duration may be longer depending on the region involved. However, children generally tolerate antituberculosis drugs better than adults (27). In our study, elevated transaminase levels were found at a rate of 2.8% and anaphylaxis at a rate of 1.4%. In addition, because these patients have a long life expectancy and the disease has a high potential to spread, they should be closely monitored for the development of complications. In our patient group, complications including hydrocephalus, kyphoscoliosis, and hydronephrosis were observed in six patients. The limitations of our study include the relatively low number of patients and the retrospective design. However, we think that our study is important in terms of contributing to our country's data on pediatric tuberculosis.
In conclusion, clinical, laboratory, and radiologic data should be evaluated in combination when making a diagnosis of EP-TBC in children. Interferon-gamma release tests alone are not superior to TST, and the two should be used in combination in the diagnosis.
Ethics Committee Approval: The study was approved by the institutional ethical review board (2018/581).
Informed Consent: As the study was a retrospective review of laboratory data, no patient consent was obtained.
Large-scale purification of functional AAV particles packaging the full genome using short-term ultracentrifugation with a zonal rotor

Adeno-associated virus (AAV) vector-based gene therapy is potentially curative for various genetic diseases; however, the development of a scalable purification method for full-genome AAV vectors remains crucial to increase productivity and reduce the cost of GMP production. In this study, we developed a large-scale, short-term purification method for functional full-genome AAV particles by using 2-step cesium chloride (CsCl) density-gradient ultracentrifugation with a zonal rotor. The 2-step CsCl method with a zonal rotor improves separation between empty and full-genome AAV particles, reducing the ultracentrifugation time (4-5 h) and increasing the AAV volume for purification. The highly purified full-genome AAV particles were confirmed by analytical ultracentrifugation (AUC), droplet digital PCR (ddPCR) across the whole region of the AAV vector genome, transduction efficiency in target cells, and transmission electron microscopy (TEM). High-purity AAV9 particles were obtained using culture supernatant during vector preparation rather than cell lysate. CsCl could be simply removed with a hydroxyapatite column. Interestingly, ddPCR analysis revealed that "empty" AAV particles contain small fragments of the inverted terminal repeat (ITR), probably due to unexpected packaging of Rep-mediated ITR fragments. This large-scale functional AAV vector purification with ultracentrifugation would be effective for gene therapy.

INTRODUCTION
Adeno-associated virus (AAV) vectors that express therapeutic gene products have shown great promise for gene therapy. Recently, AAV vector-based gene therapy trials have been reported in various hereditary diseases, including Duchenne muscular dystrophy (DMD), X-linked myotubular myopathy (XLMTM), hemophilia A, and hemophilia B [1,2]. However, in DMD and XLMTM gene therapy trials, systemic injection of high-dose AAV9 vectors resulted in lethal liver failure at an early phase and death, most likely due to an innate immune reaction against the AAV genome and complement activation by AAV particles [1,3-5]. In gene therapy for hemophilia A and hemophilia B using AAV2, AAV8, and AAV10 vectors, liver enzyme elevation and AAV capsid-specific T-cell activation were detected, with subsequent declines in factor VIII and factor IX activity, respectively [2,6]. To date, manufacturing purification methods for AAV vectors are generally based on ion-exchange and affinity chromatography [7], and this process can remove host cell proteins (HCPs). Recently, the AAV8 and AAV9 serotypes have been used more commonly because of their higher-efficiency gene delivery compared to AAV2, enabling the harvesting of AAV vectors from cell culture supernatant instead of cell lysate [8]. Moreover, these purified AAV vectors still vary in packaged genome size, including full-genome, intermediate, and empty particles, which are produced during the AAV biomanufacturing process [4]. Empty capsids are thought to reduce transduction efficiency and induce unnecessary immune responses. In addition, double-stranded RNA (dsRNA) can be generated by bidirectional promoter activity from the inverted terminal repeat (ITR) of AAV, enhancing innate immunity [9]. Ultracentrifugation with a density gradient of cesium chloride (CsCl) or iodixanol allows full-genome and empty AAV particles to be separated more efficiently than chromatography [10-13].
However, this system is limited by its small scale, and long exposure to CsCl (conventionally for 2 days) reduces the transduction efficiency of AAV vectors [5,14]. In addition, iodixanol is not suitable for clinical use because of its cross-reactivity with iodine allergy. For short-term purification of full-genome AAV vectors, we previously developed a 2-step CsCl density-gradient ultracentrifugation method; however, it is limited to a small scale (180 mL). Therefore, in this study, we developed a large-scale (1000 mL), short-term purification system for functional full-genome AAV vectors using ultracentrifugation. In this system, a zonal rotor (1.7 L capacity) was used to increase the AAV vector loading volume during ultracentrifugation, and a 2-step CsCl density gradient in the zonal rotor allowed for faster separation of full-genome particles, resulting in shorter exposure to CsCl during ultracentrifugation and efficient recovery of full-genome AAV vectors.

MATERIALS AND METHODS
Preparation of AAV vectors
AAV vectors were prepared at a large scale and harvested from culture supernatant (conditioned media), as previously described [3]. In brief, a 293EB cell line expressing adenoviral E1a, adenoviral E1b, and Bcl-xL [15] was expanded in two 500 mL flasks (HYPERFlask, Corning, Corning, NY, USA) or a 1 L bioreactor (iCELLis Nano Bioreactor, Pall, Port Washington, NY, USA) for 5 days or 4 days, respectively, in Dulbecco's Modified Eagle Medium (DMEM high glucose, FUJIFILM Wako, Chuo-ku, Osaka, Japan) with 10% fetal bovine serum (Thermo Fisher, Waltham, MA, USA). Transfection was then performed with polyethylenimine max (Polysciences, Warrington, PA, USA) using pAAV-ZsGreen1 (TaKaRa Bio, Kusatsu, Shiga, Japan), pRC9 (serotype 9), and helper plasmids in DMEM including 2 mM L-alanyl-L-glutamine solution (100×) (Nacalai Tesque, Nakagyo-ku, Kyoto, Japan), 0.12% NaHCO3 (Nacalai Tesque), and 0.13% D-glucose (Nacalai Tesque) without serum. Five days post-transfection, culture supernatants were harvested and treated with 18.5 U/mL endonuclease (KANEKA CORPORATION, Minato-ku, Tokyo, Japan) with 5 mM MgCl2 (Nacalai Tesque) for 30 min at 37°C. All cells were checked for mycoplasma contamination, and the results were negative.

Ultracentrifugation of AAV vectors with a zonal rotor (Table 1)
A zonal rotor consists of a large cylindrical chamber subdivided into four sector-shaped compartments by vertical septa that radiate from the axial core to the rotor wall. The entire chamber was used during centrifugation and loaded with a single density gradient, with each sector-shaped compartment serving as a large centrifuge tube. The large chamber capacity of these rotors (1.7 L) eliminates the need for multiple runs and density gradients. A CsCl density gradient was generated in a zonal rotor (P32CT or P35ZT, Eppendorf Himac Technologies, Hitachinaka, Ibaraki, Japan) at 3000 rpm by loading 200 mL of HNE or HN buffer, AAV vector containing 5% CsCl, 300 mL of 25-27% CsCl in HNE or HN buffer, and 300 mL of 38-40% CsCl in HNE or HN buffer. AAV vectors were separated by ultracentrifugation (Himac CP 80NX, Eppendorf Himac Technologies) at 30,000-35,000 rpm for 4-10 h. After separation, 2 L of 42-45% CsCl buffer was slowly added to the inside of the zonal rotor at 3000 rpm, and each fraction within the zonal rotor was pushed out from the outside (Tables 2, 3). Refractive indices (RI) were measured in each fraction using an NAR-1T LIQUID or RX 5000i refractometer (Atago, Minato-ku, Tokyo, Japan).
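The RI readings are what tie each collected fraction back to its position in the CsCl gradient. As a small illustration, the sketch below converts fraction RIs into approximate CsCl densities using the standard empirical relation for CsCl solutions at 25°C (density ≈ 10.8601 × RI − 13.4974 g/mL); the fraction RI values are hypothetical placeholders chosen inside the ranges reported in the Results, not measured data.

```python
# Sketch: estimate CsCl solution density from refractometer readings.
# Empirical relation for CsCl at 25 degrees C (from standard gradient
# tables): rho (g/mL) ~= 10.8601 * RI - 13.4974.

def cscl_density(ri: float) -> float:
    """Approximate CsCl solution density (g/mL) from refractive index."""
    return 10.8601 * ri - 13.4974

# Hypothetical fraction RIs (empty particles ~1.366-1.367,
# full-genome particles ~1.368-1.371, per the Results below).
fractions = {"empty": 1.3665, "full-genome": 1.3695}
for name, ri in fractions.items():
    print(f"{name}: RI = {ri:.4f} -> density ~ {cscl_density(ri):.3f} g/mL")
```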
Each fraction sample was dialyzed with 20 kDa molecular-weight cut-off dialysis cassettes (#66003, Thermo Fisher) against 0.5 mM MgCl2 (Nacalai Tesque) in water for ~2 h at 4°C, and against 0.5 mM MgCl2 in PBS (#27575-31, Nacalai Tesque) overnight at 4°C.

Evaluation of genome copies, capsid proteins, and transduction efficiency of AAV vectors by quantitative polymerase chain reaction (qPCR), western blotting, and flow cytometry
After ultracentrifugation with a zonal rotor, AAV genome copies in each fraction were evaluated using the AAVpro Titration Kit (for Real Time PCR) Ver.2 (TaKaRa Bio, Kusatsu, Shiga, Japan) in a QuantStudio 3 Real-Time PCR System (Applied Biosystems, Waltham, MA, USA). A minimum sample size of n = 3 was used to assess statistical significance.

Evaluation of full-genome and empty AAV particles by AUC
The purity of AAV vectors was analyzed using a ProteomeLab XL-I ultracentrifuge (Beckman Coulter, Indianapolis, IN, USA). Bulk AAV vector samples (400 μL) were applied to the centerpiece of the cell housing, and three cell housings with samples and one counterbalance were inserted into an AUC rotor. After equilibrating to 20°C, samples were ultracentrifuged at 12,000 rpm at 20°C, and the absorbance (260 nm) and interference were measured at 92 time points over 4-5 h. The percentages of full-genome, intermediate, and empty AAV particles were analyzed using SEDFIT (National Institutes of Health, Bethesda, MD, USA) [16] and visualized using GUSSI (UT Southwestern Medical Center, Dallas, TX, USA).

Evaluation of whole genome regions packaged in AAV vectors by droplet digital PCR (ddPCR)
The whole region of the AAV vector genome was evaluated in each fraction of the samples from ultracentrifugation with a zonal rotor using ddPCR. 1.1 μL of sample containing fewer than 10,000 copies/μL was mixed with target primer/probe mixes (ddPCR Copy Number Assays, Bio-Rad; 900 nM primers and 250 nM probe) in a total volume of 22 μL (Table 4). Droplets containing these materials were generated by an Automated Droplet Generator (Bio-Rad), followed by PCR reactions in a C1000 Touch Thermal Cycler (Bio-Rad). A QX200 droplet reader (Bio-Rad) with the QuantaSoft software package (Bio-Rad) was used to detect fluorescent signals in each droplet.

Morphological analysis of AAV vectors by transmission electron microscopy (TEM)
Collodion membranes (Nissin EM, Shinjuku, Tokyo, Japan) were hydrophilized using an ion bombarder (Nisshin EM Co., type PIB-10), and 3 μL of AAV sample was placed onto a hydrophilized grid for 1 min. After washing three times with 3 μL of water, samples were stained with phosphotungstic acid (PTA) for 10 s. The samples loaded onto the membrane were analyzed using TEM (HT7800, Hitachi High-Tech, Minato-ku, Tokyo, Japan).

Polishing of AAV vectors by hydroxyapatite column
Chromatography was performed using an ÄKTA avant 25 system (Cytiva, Marlborough, MA, USA) with a 150 mL Superloop at a flow rate of 1.0 mL/min. A column (4.6 × 35 mm, Sugiyama Shoji Co., Ltd., Kanagawa, Japan) packed with CHT Ceramic Hydroxyapatite Type I, 40 μm (Bio-Rad Laboratories Inc., Hercules, CA, USA) was equilibrated with 10 mM HEPES and 150 mM sodium chloride, pH 7.2. The samples were loaded onto the column and eluted with 50 mM sodium phosphate buffer and 150 mM sodium chloride at pH 7.2. The resulting eluate was monitored for ultraviolet (UV) absorbance at 260 and 280 nm and for conductivity. The collected fractions were evaluated by qPCR, using primers and probes targeting ZsGreen1.
Data analysis
All values are expressed as means ± SEM. Statistical analysis of the data was conducted using one-way ANOVA. For all statistical analyses, significance was defined as P < 0.01.

RESULTS
Separation of large-scale AAV vectors among full-genome, intermediate, and empty particles by density-gradient ultracentrifugation in a zonal rotor
We previously demonstrated a small-scale, short-term purification method (180 mL, 2 h) for AAV vector fractions using 2-step CsCl density-gradient ultracentrifugation and tangential flow filtration to remove contaminant HCPs and residual DNA [10]. To increase the amount of AAV vectors in a short-term separation among full-genome, intermediate, and empty particles, this 2-step CsCl density gradient was adapted to zonal rotor-mediated ultracentrifugation (Table 1). AAV vector-containing culture supernatant (inside) and 2-4 escalating densities of CsCl solutions (outside) were separately placed within a zonal rotor during low-speed centrifugation (3000 rpm), and the centrifuge speed was then increased (30,000-35,000 rpm), allowing for density-gradient separation of large-scale AAV vectors (300-1000 mL). First, we performed four-step CsCl density-gradient (15%, 25%, 33%, and 40%) ultracentrifugation using 300 mL of AAV vector-containing solution for 10 h (Supplementary Fig. 1 and Table 2), resulting in a nearly linear CsCl gradient, as detected by refractive indices (RI) across the fractions (Fig. 1 and Table 2). AAV capsid proteins were detected in two fractions, at RI 1.366-1.367 and RI 1.368-1.371, and AAV genome copies and ZsGreen1 transduction efficiency (biological activity) peaked in the RI 1.368-1.371 fraction, demonstrating a separation between non-functional empty particles (RI 1.366-1.367) and functional full-genome particles (RI 1.368-1.371) (Fig. 2 and Table 3). Interestingly, the empty fraction contained some ITR (Fig. 2). Intermediate particles should lie between the empty and full fractions. The four-step density gradient allowed for separation among full-genome, intermediate, and empty AAV particles; however, 10-hour CsCl exposure could reduce the biological activity of AAV vectors [5]. Therefore, we hypothesized that two-step, rather than four-step, densities of CsCl solutions could generate a small-range density gradient focused on the AAV fractions, allowing for a reduction in centrifugation time and an increase in the rotor space available for AAV vectors instead of CsCl solutions (Table 1). Strikingly, two-step CsCl density-gradient ultracentrifugation (25-27% and 40%) could be completed with a shorter centrifugation time (4-5 h) as well as a larger volume of AAV vectors (900-1000 mL). The RI (density gradient) was more sharply elevated over a narrower range inside the zonal rotor (used for vector separation) and remained at a low level outside the zonal rotor (allowing for faster separation) (Fig. 1 and Table 3).

High-purity full-genome AAV particles detected by AUC and transmission electron microscopy (TEM)
We performed AUC to evaluate the purity of the full-genome (RI 1.370) and empty fractions (RI 1.368) [17], which were separated by two-step density-gradient ultracentrifugation with a zonal rotor. We detected a single peak of AUC signals (70 ± 2%) with distinct sedimentation coefficients for the empty (approximately 60 S) (Fig. 3A) and full-genome (approximately 90 S) particles (Fig. 3B), demonstrating the high purity of the AAV separation.
We then morphologically analyzed the full-genome and empty AAV fractions separated by two-step density-gradient ultracentrifugation with a zonal rotor, using TEM with phosphotungstic acid (PTA) staining. In the empty AAV fractions, a black dot was mostly observed in the center of the hexagonal particles (Fig. 4A), whereas full-genome AAV fractions were detected as hexagonal particles with light-density contents (Fig. 4B), suggesting that empty AAV particles might partially shrink and be more strongly stained by PTA.

Inclusion of ITR fragments in the empty fraction of AAV vectors (low sedimentation coefficient fraction), evaluated by droplet digital PCR (ddPCR)
To investigate which DNA fragments were packaged into full and empty AAV particles, 22 probe/primer sets were designed to detect the whole region of the AAV genome as well as the plasmid backbone (Fig. 5 and Table 4), as evaluated by ddPCR. We confirmed that the full-genome AAV particles (RI 1.369) contained the whole area of the AAV genome, along with slightly lower signals in the ITR area (Fig. 5). Interestingly, small ITR regions were detected in empty particles (RI 1.367).

Polishing of full-genome AAV vectors after ultracentrifugation with a hydroxyapatite column
Ceramic hydroxyapatite (CHT) has been successfully used to separate viral vectors [18]. To remove CsCl from purified AAV vector fractions, we developed a polishing method for full-genome AAV particles using hydroxyapatite column chromatography. Full-genome AAV particles were attached to hydroxyapatite and eluted with density-escalating phosphate buffers. We observed different elution peaks for full-genome AAV particles and for contaminants, including proteins and dsDNA from the culture medium and host cell proteins (HCPs), allowing for the purification of the full-genome AAV particles (Fig. 6A, B). The rAAVs polished using CHT were then concentrated (Fig. 6C). Several experiments were performed to increase the AAV9 vector-binding capacity of the CHT resins. One key improvement to the function of the CHT resins was obtained by the addition of Ca2+ [19]. When the solution lacked calcium ions, the vectors were detected in the flow-through fraction (Fig. 6A). The yield loss was approximately 70%, and the addition of CaCl2 increased the recovery ratio of AAV particles to approximately 85% (Fig. 6B, D). These data demonstrate that CsCl can be easily removed using a hydroxyapatite column.

Fig. 1 Adeno-associated virus (AAV) vectors separated by density-gradient ultracentrifugation in a zonal rotor. AAV vectors were prepared at large scale and harvested from the culture supernatant. Large amounts of AAV vectors were separated and fractionated by ultracentrifugation in a zonal rotor with a serial (#Z1) or 2-step (#Z2-7) cesium chloride (CsCl) density gradient. Each fraction was analyzed to measure the RI.

DISCUSSION
In this study, we developed a large-scale, short-term purification method for high-purity, full-length AAV vectors using two-step CsCl density-gradient ultracentrifugation with a zonal rotor (Fig. 1). This method allows for a reduction of the ultracentrifugation time to 4-5 h and an increase in the AAV vector volume up to 1000 mL (Fig. 2). We confirmed the high-purity separation of full-genome AAV particles by AUC and TEM (Figs. 3, 4), and small DNA fragments, including the ITR, were detected within empty particles by ddPCR (Fig. 5).
The purification of functional full-genome AAV particles is preferred to improve the efficacy and safety of AAV vector-based gene therapy, owing to the removal of contaminant HCPs and residual DNA, as well as non-functional intermediate and empty AAV particles. To date, capsid antibody-based affinity columns and/or anion-exchange columns are commonly used for AAV vector purification [20-23], since they are more scalable and well established for clinical-grade production, albeit without complete separation among full-genome, intermediate, and empty AAV particles [11,24]. In contrast, these AAV particles can be separately detected by AUC according to the variance in densities among AAV particles, and ultracentrifugation-based purification is therefore theoretically preferable for purification of full-genome AAV particles [25,26]. We previously demonstrated lab-scale, short-term purification of full-genome AAV particles using a 2-step CsCl density gradient [11], and in this study, this method was adapted to zonal ultracentrifugation. The full-genome AAV vector particles were fractionated with zonal ultracentrifugation, and purity was confirmed by infectivity (Fig. 2), AUC (Fig. 3), and TEM images (Fig. 4). In this system, 1000 mL of AAV vectors can be applied in a single ultracentrifugation cycle, and a further increase in sample volume would be preferable for clinical-grade production. Surprisingly, ITR signals were detected in "empty AAV particles," as analyzed using ddPCR (Fig. 5). In contrast, "full-genome AAV particles" contain the whole region of the DNA genome between the two ITRs, along with slightly lower signals in the ITR regions. In our current understanding, AAV Rep recognizes and nicks a terminal resolution site close to both the 5' and 3' ITRs, and it might generate not only 2-ITR full-genome but also 1-ITR full-genome and small ITR-only fragments, which can be packaged into AAV capsids [27,28]. Both 2-ITR and 1-ITR full-genome particles should express the transgene as functional vectors, but ITR-only "empty" particles should be non-functional. This may be a major reason why empty AAV particles are generated in significant amounts during AAV vector preparation. Moreover, AAV genome-based titers are sometimes evaluated using ITR-specific primers; therefore, ITR-based AAV genome titers can be overestimated because of the inclusion of ITR-packaged empty particles. In gene therapy, the administered dose of AAV vectors is usually calculated from the vector genome titer (v.g./mL); thus, accurate titration of AAV vectors is important for clinical usage [29]. The promoter activity of the ITR produces dsRNAs, most likely inducing innate immunity against AAV vectors. The accumulation of dsRNAs was reported to stimulate the MDA5 sensor in human hepatocytes after AAV vector transduction, leading to the expression of type I interferons [30,31]. A low-dose administration of the AAV5 vector resulted in no elevation of liver enzymes, minimal T-cell activation, and sustained coagulation factor activity among most patients in a hemophilia gene therapy trial [9,32]. These data suggest that high purification of full-genome AAV vectors can reduce the total dose of AAV vectors as well as the inclusion of small ITR fragments within empty particles, and may prevent immune activation in response to AAV vectors. Further evaluation of immune reactions to full-genome and empty AAV particles is required.
In the process of AAV vector production, removal of CsCl is required for the clinical use of two-step CsCl density-gradient zonal ultracentrifugation. In this study, we developed a buffer-change method to remove CsCl using CHT, and the addition of calcium ions allowed higher recovery of AAV vectors (Fig. 6). This method should be applicable to the purification of large-volume AAV vectors, and the elution buffer in this method appears to be milder than the low-pH solutions used in chromatography. The system can be used to polish AAV vectors to remove HCPs with minimal damage following ultracentrifugation (Supplementary Fig. 2). In addition, we purified AAV2 vectors using the two-step CsCl density-gradient ultracentrifugation with a zonal rotor (Supplementary Fig. 3). AAV2 vectors were collected from cell pellets, followed by affinity chromatography before zonal ultracentrifugation. This yielded about 1.0 × 10^13 v.g. of full-genome particles, which is similar to the yield of AAV9 vectors collected from the culture supernatant. These data suggest that zonal ultracentrifugation could be applicable to various serotypes.
In summary, we developed a large-scale, short-term purification method for full-genome AAV vectors using two-step CsCl density-gradient ultracentrifugation with a zonal rotor. High-purity full-genome particles were confirmed using genome copies per capsid protein, ddPCR signals across the whole AAV genome, ZsGreen1 expression, AUC, and TEM images. We also demonstrated that empty AAV particles contain small ITR fragments, possibly because Rep-mediated nicking generates ITR fragments that are then packaged into AAV capsids. Moreover, polishing of AAV vectors with CHT successfully concentrated the AAV vectors and removed HCPs and CsCl. This large-scale AAV purification system can be effective in clinical applications such as GMP manufacturing.

DATA AVAILABILITY
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Fig. 6 Polishing of full-genome AAV vectors with a hydroxyapatite column. Full-genome AAV vectors (#Z7) were attached to a hydroxyapatite column and eluted with sodium phosphate. UV absorbance at 260 nm and 280 nm and the viral genome titer of each chromatography fraction without (A) or with CaCl2 (B) were evaluated. The AAV genome titer (vg/mL), UV absorbance at 260 nm and 280 nm, and the conductivity of the elution buffer were measured. Capsids were detected by western blotting with anti-VP1, VP2, and VP3 antibodies (C). (D) Recovery of rAAVs (Fr.E) polished by CHT across all fractions, determined by viral genome titer (n = 3, means ± SEM).
Increased Levels of Pro-Inflammatory and Anti-Inflammatory Cellular Responses in Parkinson's Disease Patients: Search for a Disease Indicator

Background: Parkinson's disease (PD) is the second most prevalent neurodegenerative disorder, and it arises when most of the dopaminergic neurons of the substantia nigra region die. Several mechanisms have been postulated as the causative event in PD pathology, and neuroinflammation is among the most crucial.
Material/Methods: We analyzed T-helper 17 (Th17) cells and myeloid-derived suppressor cells (MDSCs) from 80 PD patients to assess inflammatory processes and to find a cost-effective means to evaluate PD prognosis.
Results: We found significantly increased Th17 cell and MDSC counts in the peripheral circulation of PD patients compared with controls (p<0.001). A positive correlation was found between Th17 cells and MDSCs in PD patients (r=0.421, p<0.05).
Conclusions: Our results show the effector role of Th17 cells and MDSCs in PD pathology and their utility as effective biomarkers for PD diagnosis.

Background
The pathophysiology of Parkinson's disease (PD) arises from progressive neurodegeneration in the substantia nigra region, which is the confirmatory clinical diagnostic feature of PD [1]. Because the disease expresses itself phenotypically only when 80-90% of the neuronal loss has already occurred, the early diagnosis of PD is a difficult challenge. Moreover, the globally growing prevalence of PD makes it a serious concern [2]. Over the last 100 years, studies have shown that several mechanisms influence the pathogenesis of PD, including oxidative stress, inflammation, apoptosis, and endogenous and exogenous neurotoxins [3-5]. Among them, inflammation plays a crucial role, as most brain cells are glia, which have an important role in quickly spreading inflammation [6,7]. More specifically, cluster of differentiation 4-positive (CD4+) T lymphocytes produce a subset of cells called T-helper 17 (Th17) cells [8], which act as a pro-inflammatory factor in inflammation [9-11]. Reports have shown that Th17 cells are active participants in PD pathogenesis [8]. On the other hand, CD14-/CD11b+/CD33+ cells in humans are one of the subtypes of myeloid-derived suppressor cells (MDSCs), which have the potency to inhibit an ongoing inflammatory process by acting on Th17 cells [12]. Increased numbers of MDSCs have been reported in various inflammatory diseases [13-15], but the actual mechanistic pathway underlying MDSC-induced inhibition of Th17 cells in PD is largely unknown [16]. However, it has been reported that the occurrence of PD is highly correlated with the occurrence of inflammation [17]. In PD in particular, Th17 cells have been documented as a critical determinant of neurodegeneration through the inflammatory pathway [18], and MDSCs are also found to be highly active in neurodegeneration [19]. Hence, the quantification of Th17 cells as well as MDSCs is important in the diagnosis of PD. Therefore, in the present study, we determined the quantities of Th17 cells and MDSCs to gain insight into PD pathology and to introduce these measures as novel biomarkers in PD diagnosis.

Subjects and experimental design
Subjects were chosen randomly and matched for age and sex. At the end of the selection procedure, a total of 80 patients in the initial stages of PD were enrolled from the No. 101 Hospital of Chinese PLA (from January 2012 to December 2016).
The Hoehn and Yahr classification (H&Y) and the Unified Parkinson's Disease Rating Scale (UPDRS) were used to assess the PD pathology. Inclusion criteria also included the following: willingness to undergo neurological assessment, availability of data on patient nutritional status, and willingness to provide blood samples (blood glucose, liver function, lipid profile) at checkup. As the study was focused on the early diagnostic features of PD, we excluded patients with tremors, PD+ (Parkinson-plus) syndromes, secondary parkinsonian symptoms, or immunity-related complications, and patients on anti-PD drug treatment. The study was approved by the Medical Ethics Committee of Affiliated No. 101 Hospital of Chinese PLA. We also enrolled, and obtained informed consent from, 80 volunteers who served as the control group.

Detection of Th17 cells in peripheral blood
Blood was collected from the peripheral circulation of both PD and control subjects into heparinized tubes with utmost care. Collected blood cells were incubated in RPMI 1640 medium containing ionomycin (500 ng/mL) and PMA (50 ng/mL) (Sigma-Aldrich, St Louis, MO). In the initial step, GolgiPlug™ (1 µg/mL; Becton Dickinson Biosciences, San Jose, CA) was added to the culture. Ficoll-Paque density-gradient separation was performed prior to storage of lymphocytes in culture medium. Cultures were kept at 37°C in a 5% CO2 atmosphere for 4-6 h. Activated cells were washed with PBS and stained with APC-CD3, FITC-CD8, and PE-IL-17A antibodies (BD PharMingen, San Diego, CA), then fixed and permeabilized with the Cell Fixation and Permeabilization Kit (BD PharMingen, San Diego, CA). The re-suspended cells were analyzed on a FACSCalibur (Becton Dickinson Biosciences, Shanghai, China) equipped with FlowJo software.

Detection of MDSCs in peripheral blood
For the detection of MDSCs, EDTA was used as the anticoagulant and whole blood was processed without any time lag. MDSCs were separated by Ficoll-Hypaque density-gradient centrifugation and then stained with FITC-CD33 antibodies (BD PharMingen, San Diego, CA, USA). MDSCs were visualized and analyzed using a FACSCalibur (Becton Dickinson Biosciences, Shanghai, China) equipped with FlowJo software.

Statistical analyses
The experimental data are presented as mean ± standard deviation. Differences between groups were determined by nonparametric statistical analysis. Correlations between two variables were quantified by determining Spearman's rank correlation coefficients (see the code sketch below). p<0.05 was considered to indicate a statistically significant difference. All analyses were performed using SPSS software.

Characteristics of subjects
No significant difference was observed (p>0.05) between the two groups in age, sex, hemoglobin, white blood cell counts, monocytes, or the percentages of neutrophils and lymphocytes. On the basis of PD severity and symptoms (H&Y and UPDRS scores), patients had mild-to-moderate PD. Data are presented as mean ± standard deviation in Table 1.

Correlation of the percentages of Th17 cells and MDSCs
The percentages of Th17 cells and MDSCs in peripheral blood were positively correlated in the PD group (r=0.421, p<0.05) (Figure 3A); however, the two indices showed no correlation in the control group (r=0.116, p=0.5) (Figure 3B).
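As an illustration of the correlation analysis above, the following minimal sketch computes a Spearman rank correlation in Python. The per-subject arrays are hypothetical placeholders standing in for the flow-cytometry readouts, not the study's measurements.

```python
# Sketch: Spearman rank correlation between per-subject Th17 and MDSC
# percentages, as described in the Statistical analyses section.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
th17_percent = rng.normal(3.0, 0.8, size=80)                       # hypothetical Th17 (%)
mdsc_percent = 0.4 * th17_percent + rng.normal(2.0, 0.6, size=80)  # hypothetical MDSC (%)

rho, p_value = stats.spearmanr(th17_percent, mdsc_percent)
print(f"Spearman r = {rho:.3f}, p = {p_value:.4g}")  # correlated if p < 0.05
```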
Discussion
It has been reported that inflammatory responses play crucial roles in PD pathogenesis, as evidenced by the increased expression of interferon gamma (IFN-γ), interleukin 6 (IL-6), and interleukin 1 beta (IL-1β) in the brain [20]. Such overexpression of these cytokines leads to neuroinflammation and becomes the crucial event in the neurodegeneration of the dopaminergic center of the substantia nigra region of the midbrain. The work of Brochard and colleagues documented that Th17 cells actively participate in nigral neurodegeneration by infiltrating the region, which results in excessive activation of microglial cells [21]. It is well known that the brain parenchyma is separated by the blood-brain barrier (BBB), which restricts the entry of inflammatory substances. However, physical damage to the BBB has been reported in chronic inflammatory conditions, and this is also evident in PD [22]. Damage to the BBB allows inflammatory cells and various cytotoxic entities into the brain parenchyma of people with PD, which not only initiates the detrimental pathways of neuroinflammation, but also influences other mechanistic pathways associated with neurodegeneration, such as oxidative stress and mitochondrial dysfunction [23]. Infiltration of T lymphocytes is quite common in individuals with a damaged BBB [24,25]. Such infiltration has been reported several times in different disease profiles, where the infiltrated Th17 cells play a crucial detrimental role [26,27]. It has been reported that Th17 cells increase the release of IL-17, an important inflammatory factor, which is also associated with the activation of other detrimental inflammatory factors such as tumor necrosis factor alpha (TNF-α) and interleukin-1 (IL-1). These inflammatory factors have been shown to be released from brain microglial cells, which are the most numerous type of brain cell; therefore, inflammatory responses quickly spread throughout the brain [28]. MDSCs are immature bone marrow cells that are assumed to have a crucial role in the inhibition of inflammation [29]. Interestingly, the differentiation of initial CD4+ T cells into Th17 cells is greatly influenced by different subsets of MDSCs [30]. Induction of CD14 with HLA-DR has been reported to induce Th17 cell differentiation, which promotes brain inflammation [30]. However, a similar combination with low CD14 has been shown to have a different mechanism of action that includes the production of T reg cells, a type of CD4+ T lymphocyte responsible for proper immune regulation. This process also promotes negative regulation of neuroinflammation in brains affected by PD [29,30]. It has also been reported that MDSCs are responsible for the maturation and transformation of Th17 and T reg cells, and that this transformation is highly controlled by cytokines. These findings show that MDSCs can regulate the balance between Th17 and T reg cells to maintain immune balance in the brain [31]. Because administration of anti-PD drugs can alter the expression of Th17 cells [32], the present study assessed newly diagnosed PD patients without anti-PD drug treatment. We demonstrated an increased percentage of Th17 cells and MDSCs in the peripheral circulation of PD patients, significantly higher than in control subjects. This finding supports the presence of neuroinflammation and the involvement of these cell types in brain inflammation. Earlier reports have documented that the levels of inflammatory markers, such as IL-6, transforming growth factor beta (TGF-β), and IL-1, increase significantly in the PD brain [33,34]. Moreover, MDSCs are reported to induce CD4+ T cells to transform into Th17 cell types [18,35].
The prime function of MDSCs is to suppress immune responses, but transforming CD4+ T lymphocytes into Th17 cells could increase inflammation for a limited time, which promotes PD progression. The occurrence and development of Th17 cells and MDSCs are involved in PD, as they showed a positive correlation in our study, but the detailed mechanism behind the Th17-MDSC interaction in PD needs further study. The present study is the first to show that Th17 cells and MDSCs are involved in the early stages of PD and might be useful as biomarkers in the diagnosis of PD. Because reducing the percentage of Th17 cells could help delay the progression of PD, the study also shows the promise of a new therapeutic intervention for PD and associated neuroinflammatory complications.

Conclusions
The present study is the first to report the possibility of early diagnosis of PD through quantification of Th17 cells and CD33+ MDSCs from the peripheral circulation. Experimental evidence showed that high levels of Th17 cells are associated with neuroinflammation and play a determining role in PD disease progression. Similar quantification from cerebrospinal fluid could give better and more accurate results. However, for the time being, clinical diagnosis of PD from the peripheral circulation could be the most cost-effective approach.

Conflicts of interest
None.
Core Imaging Library - Part II: multichannel reconstruction for dynamic and spectral tomography

The newly developed Core Imaging Library (CIL) is a flexible plug-and-play library for tomographic imaging with a specific focus on iterative reconstruction. CIL provides building blocks for tailored regularized reconstruction algorithms and explicitly supports multichannel tomographic data. In the first part of this two-part publication, we introduced the fundamentals of CIL. This paper focuses on applications of CIL for multichannel data, e.g. dynamic and spectral. We formalize different optimization problems for colour processing, dynamic and hyperspectral tomography, and demonstrate CIL's capabilities for designing state-of-the-art reconstruction methods through case studies and code snapshots. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 2'.

Introduction
Over recent years in X-ray computed tomography (CT), there has been growing interest in dynamic and spectral CT, thanks to technological advancements in detector speed and sensitivity and in multichannel photon-counting detectors (PCDs), as depicted in the EPSRC Tomography roadmap [1]. In dynamic CT, the aim is to reconstruct a series of images and depict the complete spatio-temporal response of the scanned object. These temporal variations may occur because the composition/structure evolved, e.g. corrosion, or the object is subject to external input, e.g. compression, or the object moved during the scanning process. In spectral CT, using a pixelated energy-sensitive detector, it is possible to collect n energy-specific radiographs, where n is the number of energy channels. As a result, for any voxel in the system, it is possible to reconstruct the profile of the attenuation coefficient as a function of energy, or conversely to create a tomogram corresponding to each energy bin. Since each chemical element has a characteristic attenuation profile, this provides a fingerprint of the elements in each voxel. This fingerprint is especially clear for attenuation spectra that include the energies corresponding to the X-ray absorption edges (K-edges) of the elements concerned, because there is an abrupt change in attenuation on either side of the edge [2]. Moreover, in pulsed neutron imaging data, sharp edges can also be imaged [3]. In this case the edges, i.e. the Bragg edges, correspond to abrupt increases in the transmitted spectrum when the energy is below that required for Bragg diffraction out of the beam for each diffraction peak, providing unique fingerprints corresponding to different crystal structures. In general, for spectral imaging, because the signal is allocated to a number of energy bins rather than accumulated to give a single image, the energy-resolved data acquired usually suffer from low signal-to-noise ratio, acquisition artefacts and angular undersampling, making tomographic image reconstruction difficult. The scope of this paper is to present the capabilities of the Core Imaging Library (CIL) (https://www.ccpi.ac.uk/cil), releases available at [4,5], of the Collaborative Computational Project in Tomographic Imaging (CCPi) for multichannel tomography. It allows one to reconstruct higher quality images and ensure more accurate spatio-spectral K-edge identification; see for instance [6,7], where novel reconstruction methods are introduced for laboratory-based hyperspectral CT and neutron tomography, respectively.
CIL is an open-source, object-oriented library (primarily written in Python) for processing tomographic data. We can read, load, preprocess, reconstruct, visualize and analyse tomographic data from different applications, e.g. X-ray CT, X-ray laminography, neutron tomography (NT) and positron emission tomography (PET).

(a) Outline of the paper
In the first section, we give a brief overview of CIL and introduce notation for the optimization framework necessary for the reader to make the transition from mathematical formulation to code. Then, we consider a simple exemplar case study involving two three-channel imaging tasks, i.e. colour denoising and inpainting. For the first task, the aim is to solve the total variation (TV) denoising problem using the fast gradient projection (FGP) algorithm [8]. For the inpainting problem, we use total generalized variation (TGV) regularization and solve it using the primal-dual hybrid gradient algorithm (PDHG) [9]. In the following sections, we consider two real tomography applications, namely dynamic X-ray CT and hyperspectral CT. In §4, we focus on dynamic tomographic imaging with severely limited projection data. We compare different regularizers defined over the spatio-temporal domain and under different undersampled acquisition geometries, including the Tikhonov, TV and directional total variation (dTV) regularizers. In the final example, we deal with four-dimensional hyperspectral tomographic data. We use a stochastic version of PDHG [10], with TV regularization, to reconstruct the data with different couplings between the spatial and spectral dimensions.

Core imaging library
(a) Overview
In Core Imaging Library - Part I [4], we described the main building blocks of CIL: cil.framework, cil.optimisation, cil.processors, cil.io and cil.utilities. We illustrated the basic usage of CIL data structures, as applied to a number of X-ray CT cases with different geometries, e.g. parallel, cone and laminography, and also different modalities such as NT and PET. CIL wraps a number of third-party libraries, using cil.plugins, to perform various operations required for CT reconstruction. For instance, we can use the ASTRA Toolbox [11] or TIGRE [12] to perform forward- and back-projection steps, filtered back projection (FBP) and Feldkamp-Davis-Kress (FDK) reconstructions for different acquisition geometries, and we can use the CCPi Regularization Toolkit (CCPi-RGL) [13] to employ several regularizers with CPU/GPU hardware acceleration. In addition, CIL is designed such that the data structures of the Synergistic Image Reconstruction Framework (SIRF) [14], from the Collaborative Computational Project in Synergistic Reconstruction for Biomedical Imaging (CCP-SynerBi), www.ccpsynerbi.ac.uk, can be used for PET and magnetic resonance imaging (MRI) reconstruction [15].

(b) Optimization framework
The cil.optimisation framework contains three structures, namely Function, Operator and Algorithm, that formalize a generic optimization problem for imaging applications as

$u^* = \underset{u\in X}{\arg\min}\; f(Ku) + g(u) \;\equiv\; \underset{u\in X}{\arg\min}\; \sum_{i=0}^{n-1} f_i(K_i u) + g(u). \quad (2.1)$

We let X, Y denote finite-dimensional vector spaces, $K : X \to Y$ a linear operator with operator norm $\|K\| = \max\{\|Ku\|_Y : \|u\|_X \le 1\}$, and proper, convex functions $f : Y \to \mathbb{R}$, $g : X \to \mathbb{R}$. Note that in certain cases it is convenient to decompose $Y = Y_0 \times \cdots \times Y_{n-1}$, $n \ge 1$, and consider a separable function $f(y) := f(y_0, \dots, y_{n-1}) = \sum_{i=0}^{n-1} f_i(y_i)$, which results in the right-hand formulation in (2.1).
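To make the mapping from (2.1) to code concrete, the following minimal sketch (assuming a recent CIL release; the TV-denoising example and parameter values are ours, chosen purely for illustration) shows how the separable right-hand form maps onto CIL's BlockOperator and BlockFunction structures:

```python
# Sketch: K as a BlockOperator stacking the K_i, f as a BlockFunction
# collecting the f_i, and g a plain Function. Here f(Ku) + g(u) equals
# 0.5*||u - b||_2^2 + 0.1*||Du||_{2,1}, i.e. a small TV-denoising problem.
from cil.framework import ImageGeometry
from cil.optimisation.operators import BlockOperator, GradientOperator, IdentityOperator
from cil.optimisation.functions import BlockFunction, L2NormSquared, MixedL21Norm, ZeroFunction

ig = ImageGeometry(voxel_num_x=64, voxel_num_y=64)
b = ig.allocate('random')                    # placeholder data

K = BlockOperator(IdentityOperator(ig),      # K_0 = I
                  GradientOperator(ig))      # K_1 = D
f = BlockFunction(0.5 * L2NormSquared(b=b),  # f_0(y_0) = 0.5*||y_0 - b||_2^2
                  0.1 * MixedL21Norm())      # f_1(y_1) = 0.1*||y_1||_{2,1}
g = ZeroFunction()
```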
In the following case studies, using different definitions for the triplet (K, f, g), we can express optimization problems for several imaging tasks. For example, in denoising we let K be the identity operator, in inpainting it is a mask operator that encodes missing pixel information, while it is a projection operator for tomography. The functions f, g allow us to define a fidelity term, which measures the distance between the acquired data b and the forward-projected reconstruction image, as well as a regularizer, which enforces a certain regularity on u. If the noise follows a Gaussian distribution, an appropriate choice for the fidelity term is $\|Kx - b\|_2^2$. In the case of impulse noise, the $L^1$ norm $\|Kx - b\|_1$ leads to more efficient restorations, and for Poisson noise the Kullback-Leibler divergence $\sum_i \big((Kx)_i - b_i \log (Kx)_i\big)$ is the most suitable choice. The choice of the regularizer, e.g. Tikhonov, TV or TGV, favours minimizers of (2.1) with certain geometric features and is usually weighted by positive parameters to control the balance between the data fidelity and regularization terms. In this paper, we mainly focus on model-based variational problems, meaning that the forward operator K and the probability distribution of the observational noise are known a priori. In particular, for X-ray CT [4], PET or MRI [15] applications, the noise distribution and hence the appropriate distance functions are well established, see [16]. CIL may also be employed in an ad hoc fashion if the noise type is unknown, to experiment with which norm provides the best reconstruction [17]. More specific blind-noise methods are an active research area beyond the current scope of CIL, and we hope to expand in these directions within a general data-driven framework in the future, see for instance [18] and references therein.

In order to find an approximate solution to minimization problems of the form (2.1), we use a different CIL Algorithm for smooth and non-smooth objective functions, such as the conjugate gradient least squares (CGLS) and simultaneous iterative reconstruction technique (SIRT) algorithms, and proximal-type algorithms, which are extensively used in this paper, such as FGP, PDHG and SPDHG. In the FGP algorithm, we require that the function g has a proximal operator, defined as

$\mathrm{prox}_{\tau g}(x) := \underset{u}{\arg\min}\; \tfrac{1}{2}\|u - x\|_2^2 + \tau g(u), \quad (2.2)$

which has a 'simple' closed-form solution or can be computed efficiently numerically. Also, we assume that f is continuously differentiable and has Lipschitz continuous gradient with constant L. On the other hand, in the PDHG algorithm we allow the functions f and g to be non-differentiable and express (2.1) as a saddle-point problem,

$\underset{u\in X}{\min}\,\underset{y\in Y}{\max}\; \langle Ku, y\rangle - f^*(y) + g(u), \quad (2.3)$

where $f^*$ denotes the convex conjugate of f. Under this set-up, PDHG can decouple the initial problem (2.1) into two simple problems, using as before the proximal operator of g and, in addition, the proximal operator of $f^*$,

$\mathrm{prox}_{\sigma f^*}(y) := \underset{z}{\arg\min}\; \tfrac{1}{2}\|z - y\|_2^2 + \sigma f^*(z). \quad (2.4)$

Case study I: colour image processing
We begin our first demonstration with a case study within a colour imaging framework, i.e. a vector-valued image that has just three channels: red, green and blue. Our test data are a high-resolution double rainbow image taken from a smartphone, of 1194 × 1353 pixels and three channels, see figure 1a. We let $\Omega$ be a rectangular discretized grid representing our image domain and define an RGB colour image u as $u : \Omega \to \mathbb{R}^3$, $u = (u_1, u_2, u_3)$, where $u_k \in \mathbb{R}^{M\times N}$, k = 1, 2, 3, represent the red, green and blue channels.
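In CIL, such a three-channel image can be represented with a channelled ImageGeometry; the sketch below follows the dimensions of the rainbow image, while the construction itself is standard CIL usage:

```python
# Sketch: a 3-channel ImageGeometry holding the RGB image u.
from cil.framework import ImageGeometry

M, N = 1194, 1353
ig_rgb = ImageGeometry(voxel_num_x=N, voxel_num_y=M, channels=3)
u = ig_rgb.allocate(0)
print(u.shape)   # (3, 1194, 1353): channel, vertical, horizontal
```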
We consider the cases of (a) denoising a noisy image corrupted by simulated Gaussian noise, see figure 1b, and (b) inpainting plus denoising of a noisy image corrupted by simulated salt-and-pepper noise with missing text information, see figure 1d.

(a) Colour denoising
We start with one of the most fundamental and well-studied problems in image processing, that is, image denoising. In their pioneering work, Rudin, Osher and Fatemi (ROF) [19] introduced the TV regularizer to tackle image denoising for greyscale images $u : \Omega \to \mathbb{R}$. Given a noisy image b corrupted by additive Gaussian noise, they solve the following optimization problem:

$u^* = \underset{u}{\arg\min}\; \tfrac{1}{2}\|u - b\|_2^2 + \alpha\,\mathrm{TV}(u). \quad (3.1)$

Using the forward differences $D_y u$ and $D_x u$ for the derivatives along the vertical and horizontal directions, respectively, we write $Du = (D_y u, D_x u)$ and

$\mathrm{TV}(u) = \|Du\|_{2,1} = \sum \sqrt{(D_y u)^2 + (D_x u)^2}. \quad (3.2)$

The above definition can be extended to vector-valued images, which results in a vectorial version of TV. The gradient for the RGB case is now $Du = (Du_1, Du_2, Du_3)$, where for each k = 1, 2, 3, $Du_k := (D_y u_k, D_x u_k)$. The vectorial (channelwise) total variation (VTV), see [20] for more definitions of VTV, is defined as

$\mathrm{VTV}(u) = \sum_{k=1}^{3} \|Du_k\|_{2,1}. \quad (3.3)$

For both greyscale and colour images, we can set up the regularizers (3.2) and (3.3) using the TotalVariation function in CIL. The optimization problem that we solve for colour denoising is similar to (3.1) but uses the VTV regularizer, i.e.

$u^* = \underset{u}{\arg\min}\; \tfrac{1}{2}\|u - b\|_2^2 + \alpha\,\mathrm{VTV}(u), \quad (3.4)$

where b is shown in figure 1b. One can observe that (3.4) is in fact the proximal operator (2.2) with τ = 1.0 evaluated at b. We solve (3.4) using the fast gradient projection (FGP) algorithm, which is contained in the proximal method of the TotalVariation function (see the code sketch at the end of this subsection). It is clear from figure 1 that noise is reduced while the edges of the image are preserved. However, TV is known for promoting piecewise constant reconstructions, leading to images with blocky structures. This is called the staircasing effect and becomes apparent in smooth regions, see for instance the area around the rainbow in figure 1c.
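The promised sketch of the denoising step follows: a single evaluation of the proximal operator of α·VTV at the noisy image, computed by the FGP iterations wrapped inside CIL's TotalVariation function. The noisy image is a random placeholder here, and α is an illustrative value, not the one tuned for figure 1.

```python
# Sketch: colour TV denoising (3.4) as one proximal evaluation, solved
# internally with FGP by CIL's TotalVariation function.
from cil.framework import ImageGeometry
from cil.optimisation.functions import TotalVariation

ig = ImageGeometry(voxel_num_x=256, voxel_num_y=256, channels=3)
noisy_data = ig.allocate('random')              # stand-in for figure 1b

alpha = 0.1                                     # illustrative value
TV = alpha * TotalVariation(max_iteration=200)  # alpha * VTV
denoised = TV.proximal(noisy_data, tau=1.0)     # solves (3.4) via FGP
```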
(b) Colour inpainting
Given an image where a specific region is unknown, the task of image inpainting is to recover the missing region from the known part of the image. A popular application of inpainting is in art restoration, where damaged or missing areas are repainted, i.e. filled, based on the surrounding context. We let $D \subset \Omega$ be a subdomain of Ω, i.e. the inpainting domain, where no data are known and missing information should be interpolated. In this example, our input image is shown in figure 1d, where, in addition to salt-and-pepper noise, missing pixels from a repeated text are incorporated. A suitable data fidelity term for this kind of noise distribution is the $L^1$ norm acting on the domain $\Omega \setminus D$. To overcome the staircasing artefacts that TV promotes, we employ a higher-order regularizer, namely the total generalized variation introduced in [21]. We let α, β > 0 be regularization parameters and define

$\mathrm{TGV}_{\alpha,\beta}(u) = \underset{w}{\min}\; \alpha\|Du - w\|_{2,1} + \beta\|Ew\|_{2,1}, \quad (3.5)$

where E denotes the symmetrized gradient operator defined as $Ew = \tfrac{1}{2}(Dw + Dw^T)$. The minimization problem above provides a way of balancing between the first and second derivatives of an image u. In particular, one expects that in the neighbourhood of edges the second derivative $D^2 u$ is relatively 'large', hence a reasonable choice is to let w = 0 in (3.5) and recover the TV regularizer. On the other hand, $D^2 u$ is relatively small in smooth regions of an image, and w = Du is a proper choice for the minimization problem (3.5). Under this format, edges are preserved, as with the TV regularizer, and additionally piecewise smooth structures are promoted. The minimization problem under the TGV regularizer, with the $L^1$ norm as data fidelity term, is the following:

$u^* = \underset{u}{\arg\min}\; \|Mu - b\|_1 + \mathrm{TGV}_{\alpha,\beta}(u), \quad (3.6)$

where M is a diagonal operator with 1 in the diagonal elements corresponding to pixels in $\Omega \setminus D$ and 0 in D. In CIL, we use the MaskOperator, which accepts as input a two-dimensional boolean array, i.e. a mask. Since we have a colour image, we employ the ChannelwiseOperator to encode the effect of missing pixels in the RGB channels. In order to solve (3.6), we use the PDHG algorithm, where the first step is to express (3.6) in the general form of (2.1), with

$K = \begin{pmatrix} M & O \\ D & -I \\ O & E \end{pmatrix}, \quad (3.7)$

where O, I denote the zero and identity operators, respectively. We continue with the definition of the functions f and g. The function f is a separable function that contains the three terms in (3.6) and is defined as

$f(z_1, z_2, z_3) = \|z_1 - b\|_1 + \alpha\|z_2\|_{2,1} + \beta\|z_3\|_{2,1}, \quad (3.8)$

so that $f(K(u, w))$ is exactly the objective function in (3.6). In CIL, (3.7) can be expressed easily with the BlockOperator K, which is filled row-wise. The elements are the GradientOperator D, the IdentityOperator I, the SymmetrisedGradientOperator E, the ChannelwiseOperator M and two ZeroOperators O. The separable function in (3.8) can be expressed by the BlockFunction f, whose elements are the L1Norm and two MixedL21Norm functions. Finally, g is the ZeroFunction; a code sketch of this set-up is given at the end of this case study. We choose the PDHG algorithm to solve such an optimization problem. Without any user input, CIL will by default use primal/dual stepsizes σ, τ with σ = 1.0 and τ = 1.0/(σ‖K‖²), which satisfy στ‖K‖² ≤ 1 to guarantee convergence. We can monitor convergence every 100 iterations via update_objective_interval=100, printing with verbose=2. However, to speed up convergence it may be necessary to change these default values. The TGV reconstruction is presented in figure 1e, where there are no staircasing issues and most of the repeated text is eliminated. We observe that the inpainting process behaves quite well when the background is relatively smooth, e.g. the sky. However, in regions with specific textures, such as trees, leaves and grass, TGV inpainting could not completely restore the missing pixels, see the absolute difference between the ground truth and the reconstruction in figure 1f. In the above two examples, the optimal regularization parameters were chosen to maximize the structural similarity index measure (SSIM) [22]. In terms of TGV, it is usually sufficient to find an optimal ratio β/α, thus reducing the number of parameters to be optimized in order to obtain a high-quality reconstruction [21,23]. How to select the best regularization parameter(s) is an important question, and automated methods for doing this are an active research area beyond the scope of this article. This is of particular importance in real-case scenarios, when comparison with a ground truth cannot guide this choice. In this paper, we choose to follow a direct grid search of parameters for simplicity. In a future release, we hope to provide reconstruction algorithms with automated parameter selection methods, providing the user with an end-to-end pipeline. We refer the reader to the following papers, where a number of methods to select the regularization parameters are presented for different imaging applications: the L-curve method [24,25], the S-curve method [26], the Morozov discrepancy principle [27-30] and, when the regularization parameter is spatially dependent, [31-34].
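The following sketch assembles the triplet (K, f, g) for (3.6)-(3.8) as described above. It is our reconstruction of the set-up, not the authors' exact listing: the image size, mask pattern and parameter values are illustrative placeholders.

```python
# Sketch: PDHG for TGV inpainting + denoising, following (3.6)-(3.8).
from cil.framework import ImageGeometry
from cil.optimisation.operators import (BlockOperator, GradientOperator,
    SymmetrisedGradientOperator, IdentityOperator, ZeroOperator,
    ChannelwiseOperator, MaskOperator)
from cil.optimisation.functions import (BlockFunction, L1Norm,
    MixedL21Norm, ZeroFunction)
from cil.optimisation.algorithms import PDHG

ig = ImageGeometry(voxel_num_x=256, voxel_num_y=256, channels=3)
b = ig.allocate('random')                 # stand-in for the corrupted image

# Hypothetical inpainting mask: 1 = known pixel, 0 = missing pixel.
ig2d = ImageGeometry(voxel_num_x=256, voxel_num_y=256)
mask = ig2d.allocate(1)
mask.as_array()[100:110, :] = 0           # a hypothetical missing band
M = ChannelwiseOperator(MaskOperator(mask), channels=3)

D = GradientOperator(ig)                  # channelwise spatial gradient
I = IdentityOperator(D.range_geometry())
E = SymmetrisedGradientOperator(D.range_geometry())

# K filled row-wise, acting on the pair (u, w), cf. (3.7).
K = BlockOperator(M, ZeroOperator(D.range_geometry(), ig),
                  D, -1 * I,
                  ZeroOperator(ig, E.range_geometry()), E, shape=(3, 2))

alpha, beta = 0.5, 1.0                    # illustrative values
f = BlockFunction(L1Norm(b=b), alpha * MixedL21Norm(), beta * MixedL21Norm())
g = ZeroFunction()

pdhg = PDHG(f=f, g=g, operator=K, max_iteration=2000,
            update_objective_interval=100)
pdhg.run(verbose=2)
u_rec = pdhg.solution.get_item(0)         # the image u; solution = (u, w)
```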
Case study II: dynamic tomography
(a) Motivation
The focus of this section is on dynamic CT [35], where the aim is to scan a sample that undergoes some change, be it internal, such as an evolution of its composition, or due to external input, like applied torque or compression [36]. These changes must be slow with respect to the time it takes to acquire a single tomogram [37]; otherwise the reconstructions would suffer from severe motion artefacts, or the quantification would be meaningless. The duration of a CT scan is determined by the time needed to acquire a sufficient number of projections of the sample, viewed from different angles, with the required signal-to-noise ratio. This is determined mainly by detector performance and X-ray source intensity, and can vary from a few projections per minute, as for laboratory X-ray CT scanners, to thousands of projections per second, as in the case of synchrotrons. One way to increase the temporal resolution is faster scanning through undersampling, i.e. by reducing the number of acquired projections, leading to sparse tomographic views. Sparse CT reconstruction is a highly ill-posed problem and has received great attention lately in the tomography community [38-40], especially in view of dynamic CT [41-44]. Another beneficial consequence of sparse CT is that it allows a reduction of the radiation dose to the sample, which is, for instance, extremely useful in medical imaging, where one can reduce both the radiation dose to the patient and the duration of the imaging [45]. In the following, we focus on different reconstruction methods for sparse dynamic CT, using an open-access dynamic dataset available from [46]. The aim is to demonstrate how to increase the temporal resolution, or to reduce the radiation dose of a CT scan, without sacrificing the quality of the reconstructions. After a description of the dataset and of three possible undersampling configurations, we demonstrate how the standard reconstruction algorithm FBP, applied separately at each time step, leads to severe streak artefacts due to the limited number of projections. We then demonstrate how to use CIL to employ iterative reconstruction algorithms with three different regularization methods that incorporate prior information in the spatio-temporal domain, in order to obtain quantitative information and to suppress the undersampling artefacts and noise in the reconstruction. At the end of the section, we compare the results obtained with all the reconstruction methods presented and demonstrate the improvements in temporal resolution and image quality enabled by a suitably chosen iterative reconstruction algorithm.

(b) Data information
(i) Description
The sample was an agarose-gel phantom [47], perfused with a liquid contrast agent, in a 50 ml Falcon test tube (ø 29 × 115 mm). The aim of this experiment was to simulate diffusion of liquids inside plant stems, which cannot withstand high radiation doses from a denser set of measurement angles. After the agarose solidified, five intact plastic straws were inserted into the gel and filled with 20% sucrose solution, to guarantee diffusion by directing osmosis towards the gel body.

(ii) Acquisition
Each scan was acquired in 4.5 min, with intermissions of approximately 15 min between consecutive measurements. In total, the acquisition process lasted about 3 h, leading to 17 sinograms, one for each time state.
In addition, pre-scan and post-scan measurements were acquired with a noticeably higher number of projections: 720 and 1600 projections, respectively. The acquired sinograms were pre-processed using the Lambert-Beer negative-logarithm conversion.

(iii) Dataset

Every measurement consists of 360 projections with 282 detector bins, obtained from a flat-panel circular-scan cone-beam microCT scanner. Only the central slice is provided, resulting in a 2D fan-beam geometry. For this experiment, our reconstruction volume is 256 × 256 pixels for each of the 17 time frames. Additional metadata, such as the distance from the source to the detector and the distance from the source to the origin, are provided to set up the cone-beam geometry.

(c) Dynamic sparse CT set-up

Firstly, we configure our AcquisitionGeometry ag for a two-dimensional cone-beam geometry, using the information about the positions of the source and the detector, the dimensions of the panel via set_panel, and the projection angles via set_angles. In order to set up a multichannel geometry, we use set_channels, where the channels refer to the 17 time frames. Next, we allocate space for our acquisition dataset of 360 projection angles, data_360, which is filled with the corresponding sinograms for each of the 17 time frames. The Zenodo data are provided as a MATLAB mat-file that can be read in, e.g., using scipy.io.loadmat, which produces a list containing the 17 sinograms; see figure 2. We obtain the default corresponding ImageGeometry ig for the acquired dataset of 360 projection angles using the get_ImageGeometry method of ag. The default dimensions equal the number of detector bins; here we reduce them to 256 by 256. Note that our image domain is a 2D + time spatio-temporal volume, i.e. $u = (u_0, \dots, u_{16})$ with $u_t \in \mathbb{R}^{256 \times 256}$ for each time frame $t$.

We begin our reconstructions with the classical FBP algorithm with a Ram-Lak filter, applied separately to the full data in each time frame; see the top row in figure 4. In a practical sparse CT set-up, 360 projection angles would not be available; however, we use the full-data FBP reconstruction as our ground truth for assessing the reconstruction quality from undersampled data quantitatively and for finding the optimal regularization parameters for the methods described below. In order to create undersampled dynamic data for different projection angles, we employ the Slicer processor described in [4]. We create different equi-angular undersampling patterns, using step sizes of 5, 10 and 20 along the angle direction, leading to the datasets data_72, data_36 and data_18 of 72, 36 and 18 projections, respectively. This means that we are able to increase the temporal resolution or reduce the radiation dose by a factor of 5, 10 and 20, respectively. Finally, new projection operators are defined, e.g. A_72, A_36, A_18, based on the undersampled acquisition geometries and the image geometry, which remains the same for all cases.
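The sketch below reconstructs this set-up; it is not the original listing. The distances dist_source_origin and dist_origin_detector, the mat-file name and its key are hypothetical placeholders, and the fill call with a channel keyword follows current CIL conventions but may differ between versions.

import numpy as np
import scipy.io
from cil.framework import AcquisitionGeometry
from cil.processors import Slicer
from cil.plugins.astra import ProjectionOperator   # backend choice is ours

# 2D cone-beam (fan-beam) geometry from the dataset metadata
ag = AcquisitionGeometry.create_Cone2D(
        source_position=[0, -dist_source_origin],
        detector_position=[0, dist_origin_detector])
ag.set_panel(num_pixels=282)                        # 282 detector bins
ag.set_angles(np.linspace(0, 360, 360, endpoint=False))
ag.set_channels(num_channels=17)                    # 17 time frames

# Fill the multichannel acquisition data with the 17 sinograms
mat = scipy.io.loadmat('GelPhantomData.mat')        # hypothetical file/key names
sinograms = mat['sinograms']
data_360 = ag.allocate()
for t in range(17):
    data_360.fill(np.asarray(sinograms[t], dtype=np.float32), channel=t)

ig = ag.get_ImageGeometry()
ig.voxel_num_x, ig.voxel_num_y = 256, 256           # reduce reconstruction volume

# Equi-angular undersampling with the Slicer processor (steps 5, 10, 20)
def undersample(data, step):
    slicer = Slicer(roi={'angle': (0, 360, step)})
    slicer.set_input(data)
    return slicer.get_output()

data_72, data_36, data_18 = (undersample(data_360, s) for s in (5, 10, 20))

A_360 = ProjectionOperator(ig, data_360.geometry)
A_18 = ProjectionOperator(ig, data_18.geometry)     # similarly A_72, A_36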
(d) Tikhonov regularization

In [4], we presented in detail how one can set up Tikhonov regularization for single-channel X-ray tomography. For our current dynamic case, we can formulate the identical problem, namely

$$u^{*}=\arg\min_{u}\;\|Au-b\|_{2}^{2}+\alpha^{2}\|Lu\|_{2}^{2},\qquad(4.1)$$

where b is now the multichannel sinogram, e.g. data_360 or data_18, containing all 17 time-frame sinograms, and A is now the corresponding multichannel projection operator, e.g. A_360 or A_18. The second term in (4.1) acts as a smooth regularizer, where the linear operator L can be, for example, an identity operator or a gradient operator D acting over the multichannel image data. In the case of L = D, we offer the user two different modes for the GradientOperator, where finite differences are computed either only along the spatial dimensions or along both the spatial and channel dimensions. Therefore, if L = GradientOperator(ig, correlation), the derivatives across every direction in the three-dimensional volume are considered, i.e. $Du = (D_t u, D_y u, D_x u)$, if correlation='SpaceChannels', whereas if correlation='Space', we take into account only the derivatives across the spatial dimensions, i.e. $Du = (D_y u, D_x u)$. In the algorithm comparison, we demonstrate finite differences over both space and channels, i.e. we use correlation='SpaceChannels'. The code snippet to set up (4.1) in CIL is identical to the one presented in [4]; hence it is omitted here.

(e) Spatio-temporal TV

As a second regularization method, we apply an edge-preserving prior by replacing the $L^2$ penalty term in (4.1) with the TV regularizer, which in a spatio-temporal setting can be employed either in a channelwise fashion or, as here, over the full spatio-temporal volume, i.e.

$$\mathrm{TV}(u)=\|Du\|_{2,1}=\sum\sqrt{(D_{t}u)^{2}+(D_{y}u)^{2}+(D_{x}u)^{2}}.\qquad(4.2)$$

Under this isotropic coupling between space and time, the finite differences along the directions t, y and x are penalized equally with a single regularization parameter, promoting piecewise constant structures in the spatio-temporal volume, by solving

$$u^{*}=\arg\min_{u\ge 0}\;\tfrac{1}{2}\|Au-b\|_{2}^{2}+\alpha\,\mathrm{TV}(u).\qquad(4.3)$$

The above minimization problem can be solved using the (explicit) PDHG algorithm [48], exactly as in the single-channel case described in [4], decomposing it into two subproblems, where the two proximal operators (2.2) and (2.4) have an explicit closed-form solution. As in §3b, f is now a separable function, i.e. a BlockFunction, containing the $\|\cdot\|_{2}^{2}$ norm for the acquisition data and the $\|\cdot\|_{2,1}$ norm. Consequently, we can express the operator K as a BlockOperator containing the multichannel ProjectionOperator A and the GradientOperator. Finally, to enforce a non-negativity constraint, we let g be the IndicatorBox with lower=0.0. In the code snippet below, we define the triplet (K, f, g) used in PDHG for the case of 18 projections.
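The following is a minimal sketch of that triplet, not the original listing; the regularization parameter value is illustrative, and A_18 and data_18 denote the 18-projection operator and dataset defined above.

from cil.optimisation.operators import BlockOperator, GradientOperator
from cil.optimisation.functions import (BlockFunction, L2NormSquared,
                                        MixedL21Norm, IndicatorBox)
from cil.optimisation.algorithms import PDHG

alpha = 0.002                                      # illustrative value

Grad = GradientOperator(ig, correlation='SpaceChannels')
K = BlockOperator(A_18, Grad)
f = BlockFunction(0.5 * L2NormSquared(b=data_18),  # data fidelity
                  alpha * MixedL21Norm())          # spatio-temporal TV
g = IndicatorBox(lower=0.0)                        # non-negativity constraint

pdhg_tv = PDHG(f=f, g=g, operator=K,
               max_iteration=1000, update_objective_interval=200)
pdhg_tv.run(verbose=1)
tv_recon = pdhg_tv.solution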
(f) Directional TV

The third and final regularization method uses a structure-based prior, namely dTV. To apply this variational method to sparse CT reconstruction, we adopt the framework of parallel level sets, introduced in [49] and used in different applications such as multi-modal imaging [50,51]. For example, an image from another modality, e.g. an MRI reference image, is known a priori and acts as additional information from which edge structures are propagated into the reconstruction process of another modality, e.g. PET. Another popular set-up is to use either both modalities, or even channels, in a joint reconstruction problem simultaneously, significantly improving the quality of the image; see for instance [52-54]. In the parallel level set framework, two images u and v are called structurally similar if ∇u is parallel to ∇v, where u is the image to be reconstructed given the known reference image v. In this sense, we are able to encode additional information on the location or direction of edges for the (u, v) pair. The dTV regularizer of the image u given the reference image v is defined as

$$\mathrm{dTV}(u;v)=\|D_{v}\nabla u\|_{2,1},\qquad(4.4)$$

where the weight $D_v$ depends on the normalized gradient $\xi_v$ of the reference image v,

$$D_{v}=\mathcal{I}-\xi_{v}\xi_{v}^{T},\qquad \xi_{v}=\frac{\nabla v}{\sqrt{|\nabla v|^{2}+\eta^{2}}}.\qquad(4.5)$$

The vector field $\xi_v$ is able to capture structural information from the reference image, depending on the edge parameter η, and determines which directions are penalized. For instance, we have that if $\nabla u \parallel \nabla v$, then $|D_v \nabla u| = (1-|\xi_v|^{2})\,|\nabla u|$; equivalently, if $|\xi_v|^{2} > 0$, aligned gradients are favoured. On the other hand, if $\nabla u \perp \nabla v$, then $D_v \nabla u = \nabla u$. Finally, note that $0 \le |\xi_v|^{2} < 1$, where the lower bound is attained for $|\nabla v|^{2} = 0$ (constant regions) and the upper bound as $|\nabla v|^{2} \to \infty$ (edges). In figure 3, we show the pre- and post-scan FBP reconstructions acting as the reference images, along with $|\xi_v|^{2}$, which illustrates how edge information is captured by $\xi_v$ to be included in the dTV regularizer. For each time frame t, we solve the following problem:

$$u_{t}^{*}=\arg\min_{u_{t}\ge 0}\;\tfrac{1}{2}\|A_{sc}u_{t}-b_{t}\|_{2}^{2}+\alpha\,\mathrm{dTV}(u_{t};v_{t}),\qquad(4.6)$$

where $A_{sc}$, $b_t$ and $u_t^{*}$ denote the single-channel ProjectionOperator, the sinogram data and the reconstructed image for the time frame t, respectively. In terms of the reference images $(v_t)_{t=0}^{T-1}$, we use $v_0 = v_{pre\_scan}$, i.e. the FBP reconstruction of the pre-scan data with 720 projections, and $v_t = v_{post\_scan}$, $t = 1, \dots, T-1$, i.e. the FBP reconstruction of the post-scan data with 1600 projections. We follow this configuration because we noticed a slight movement of the sample at the beginning of the experiment. One could apply other configurations for the reference image in the intermediate time frames; for example, in order to reconstruct the (t+1)th time frame, one could use the tth time-frame reconstruction as reference. A more sophisticated reference selection approach is applied to hyperspectral computed tomography in [54]. Similarly to (4.3), we solve (4.6) using the PDHG algorithm, but with an alternative set-up for the triplet (K, f, g). This time, one of the subproblems is not solved explicitly; instead, an inner iterative solver is used, a variant known as implicit PDHG, see [55]. In particular, we let g be the FGP_dTV regularizing Function from the cil.plugins.ccpi_regularisation module of CIL, which wraps a GPU-accelerated implementation of the FGP algorithm from the CCPi-RGL toolkit. Since each time frame is solved independently of the others, the operator K is now the projection operator for a single channel, K_sc, and the functions f and f0 are $\|\cdot\|_{2}^{2}$ norms. To store the two-dimensional reconstruction for every time frame, we use the variable solution, allocating space from the all-channel image geometry ig; we then use the fill method to store the reconstruction of every time step in solution. For the FGP_dTV Function, we need to specify the corresponding reference image, the regularization parameter and the smoothing parameter η appearing in (4.5). The optimal parameters α and η are reported in table 1. In the code block below, we define the projection operator and the image and acquisition geometries for the case of 18 projections; the functions f0, g0 are used in the PDHG algorithm to reconstruct the first time frame using v_pre_scan as a reference, and the functions f, g concern all the other frames, using v_post_scan as a reference.
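A minimal sketch of this loop follows; the single-channel geometry helpers, the FGP_dTV keyword arguments and the parameter values are indicative and may differ between CIL versions (the plugin module name also appears as ccpi_regularization in parts of the documentation).

from cil.framework import ImageGeometry
from cil.optimisation.functions import L2NormSquared
from cil.optimisation.algorithms import PDHG
from cil.plugins.ccpi_regularisation.functions import FGP_dTV
from cil.plugins.astra import ProjectionOperator

ig_sc = ImageGeometry(voxel_num_x=256, voxel_num_y=256)   # single time frame
ag_sc = data_18.geometry.get_slice(channel=0)             # single-channel fan-beam
K_sc = ProjectionOperator(ig_sc, ag_sc)

alpha, eta = 0.05, 0.005       # illustrative; the tuned values are in table 1

solution = ig.allocate()       # 2D + time container for all 17 frames
for t in range(17):
    b_t = data_18.get_slice(channel=t)
    f_t = 0.5 * L2NormSquared(b=b_t)                 # f0 for t = 0, f otherwise
    reference = v_pre_scan if t == 0 else v_post_scan
    g_t = FGP_dTV(reference=reference, alpha=alpha, eta=eta,
                  nonnegativity=True, device='gpu')  # g0 / g regulariser
    pdhg_dtv = PDHG(f=f_t, g=g_t, operator=K_sc, max_iteration=500)
    pdhg_dtv.run(verbose=0)
    solution.fill(pdhg_dtv.solution, channel=t)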
(g) Results

In this section, we present results for all the reconstruction methods presented above, i.e. the channelwise FBP algorithm and the Tikhonov, TV and dTV regularizations. In figure 4, we present a static comparison of all the reconstructions for the three undersampled datasets with 18, 36 and 72 projections, as well as for the full 360 projections, for the 8th time frame. In figure 5, we provide a temporal comparison across four different frames for the most interesting case of 18 projections, as it provides the greatest reduction in acquisition time.

Although FBP produces satisfactory reconstructions for 360 projections, reducing the number of projections results in streak artifacts and a decrease in signal-to-noise ratio, which are more pronounced the higher the undersampling; see the first row of figure 4. Moreover, the quantification of the dynamic process is severely hindered by the undersampling; see, for instance, the first two rows of figure 5, showing the FBP reconstructions for 360 and 18 projections at four different time frames. Compared with the FBP reconstructions, we observe that Tikhonov regularization can suppress both the noise and the streak artefacts, especially for a very low number of projections; see the second row of figure 4. However, due to the $L^2$ penalty term appearing in (4.1), edges as well as small details of the image are oversmoothed. In the third row of figure 4, we observe that noise is completely eliminated by the TV regularization and that edges are preserved around the five circular cavities. This is most obvious in the cases of 72 and 360 projections. For lower numbers of projections, the staircasing artefacts introduced by TV are more apparent, both spatially and temporally; see the blocky artifacts outside the boundaries of these cavities in figure 5. In addition, due to the low iodine concentration at the earlier stage (see time frames 0 and 5 in figure 5), we witness a significant loss of contrast, particularly for the case of 18 projections. In terms of the dynamic dTV reconstructions, we observe a significant contrast improvement for all the undersampled data; see the interior of the five cavities. Furthermore, edges are now well preserved for all the time frames, owing to the structural information integrated from the reference images v_pre_scan and v_post_scan. For instance, we note sharper boundaries around the cavities compared with the TV reconstructions, especially for the lowest number of projections. This is also evident across time frames for the 18-projection case, as one can see in the last row of figure 5, where, overall, dTV produces the best reconstructions. In particular, we note how the outer cylindrical edge is correctly reproduced as circular by dTV at 18 projections, unlike with all other methods, which produce a polygonal outer edge due to the low number of projections. In figure 6, we compare the time activity (i.e. the reconstructed attenuation value over time) of the ground truth with that of the Tikhonov, TV and dTV reconstructions with 18 projections, for the single-pixel ROI shown in the left image of figure 5. As expected, we observe very high oscillations for the FBP reconstruction with 18 projections, which can be reduced using Tikhonov regularization. Since there is no remarkable temporal variation until the 8th frame, we observe almost identical behaviour for the TV and dTV reconstructions. However, the dTV reconstruction provides better contrast than the TV reconstruction, which becomes more apparent after the 9th frame. Overall, both dTV and TV are able to reproduce at 18 projections the single-pixel centre-of-cavity time activity at the same (or even better) quality as FBP using the full 360 projections. The dTV reconstruction achieves the highest PSNR for all time steps, followed by the TV and Tikhonov regularizations and finally the FBP reconstruction. We also report the PSNR and SSIM values for all cases of undersampled data, together with the optimized parameters α, η for all the regularization methods. We observe that for the very limited angular cases, i.e.
(18, 36) projections, the dTV reconstructions produce better results, whereas, as the number of projections increases, the dTV and TV reconstructions become comparable.

(h) Discussion and conclusion

In conclusion, we have described three multichannel regularized reconstruction methods for reducing undersampling artifacts in sparse dynamic CT, together with their implementation using the modular building blocks of CIL. We conducted a comparative study of algorithm performance at three different undersampling levels of the full dataset, which simulated an increase of the temporal resolution of the acquisition by a factor of 5, 10 and 20, respectively. The results demonstrated that the dTV method in particular is capable of obtaining high-quality reconstructions from reduced data; using it, one can obtain the same quantitative information at a factor-of-20 undersampling as with channelwise FBP applied to the full data. It is worth noting that the dTV method of §4f requires access to high-quality pre- and post-scans. We want to stress that the design of the acquisition is crucial. For instance, had the experiment been acquired using a golden-ratio angular sampling scheme [37], it could have been run continuously and the time-frame separation could have been decided as a post-processing step.

Case study III: hyperspectral tomography

(a) Motivation

For our final case study, we focus on hyperspectral X-ray CT imaging and on how CIL can provide tools to reconstruct and analyse the internal elemental chemistry of an object. In every pixel, hyperspectral photon-counting detectors can measure the energy deposited during a certain exposure time and consequently calculate the associated photon energy of that pixel in that frame. This is repeated many times during the scan, and finally all events are binned into a single spectrum per pixel. Moreover, these types of detectors can achieve a high energy resolution (typically less than 1 keV), can image over hundreds of spectral bands, and allow us to distinguish materials based on the elements' characteristic absorption edges, i.e. K-edges. The main goal of this study is to identify gold and lead in a mineralized ore sample from a gold-rich hydrothermal vein. These materials (Au and Pb) typically appear in very low concentrations, with deposit sizes similar to our reconstructed voxel size. With no a priori knowledge of the distribution of these materials, and with the inherently low-count hyperspectral data, it is difficult to achieve a satisfactory reconstruction with conventional methods. We demonstrate here how CIL can be used to easily implement a number of bespoke multichannel regularized reconstruction methods in order to accurately identify and segment these deposits. In the following, we describe the dataset and then propose and compare three different reconstruction methods. First, we consider the standard SIRT algorithm, which does not make use of any prior information, along with a variant in which the reconstruction of channel i is used to warm-start the reconstruction of channel i + 1. Next, we describe two advanced regularization techniques to reconstruct the four-dimensional hyperspectral data with different correlations between the spatial and energy information. Finally, we describe how the stochastic PDHG (SPDHG) algorithm [10,56], which uses only a subset of the whole data in every iteration, can be used to accelerate the computation of large dataset reconstructions.
(ii) Acquisition

For the data acquisition, the authors of [57] used a colour imaging bay in the Manchester X-ray Imaging Facility (www.mxif.manchester.ac.uk), designed as a flexible workbench for spectroscopic X-ray imaging and tomography. The detector is a High Energy X-ray Imaging Technology (HEXITEC) spectroscopic detector, installed in a Nikon XTH 225 system. The sample was scanned in a cone-beam geometry set-up at five separate horizontal positions for an increased field of view. Each sub-projection was acquired with an exposure time of 45 s (that is, 225 s for a full stitched projection), with 120 projections covering 360°. The total scan time was 7.5 h. The acquired four-dimensional raw sinogram data have three spatial dimensions and one spectral dimension, i.e. 120 projection angles of 80 × 400 pixels with 800 energy bins ranging from 1.82 keV to 186.07 keV. The reconstruction volume will be 400 × 400 × 80 voxels.

(iii) Dataset

All the files for this study are freely available and can be downloaded from [58]. The dataset contains (a) the 4D hyperspectral (energy-resolved) X-ray CT projection data, (b) flat-field data, (c) the energy in keV for every energy bin, and (d) geometric metadata of the cone-beam set-up. Sinogram data from selected energy channels are presented in figure 8, which shows sinograms at three different energies from the hyperspectral X-ray CT dataset, consisting of 800 energy channels, 120 projection angles and 400 detector bins, for the 20th vertical slice out of a total of 80 slices. The data have been pre-processed by taking the natural logarithm of the normalized intensity in every spectral band and corrected, using the RingRemover processor, for severe vertical stripes that would cause ring artifacts. During the acquisition process, 800 energy channels were recorded. It is possible to consider the full energy range; however, for demonstration purposes, we choose to examine an energy interval, [75.15 keV, 93.37 keV], of 80 channels that encompasses the K-edges of gold (Au, 80.725 keV) and lead (Pb, 88.005 keV).

(c) SIRT and warm-started SIRT reconstruction

For our first reconstruction method, we use the SIRT algorithm, an algebraic iterative method for a particular weighted least-squares problem, which in addition accepts certain convex constraints, such as a non-negativity constraint; see [4]. We enforce this, as in §4e, with the IndicatorBox function. SIRT is applied channelwise to each three-dimensional dataset. In addition to channelwise SIRT, one can enable a warm initialization, which allows a basic form of channel correlation in the reconstruction, as used in [57]. Here, we initialize the SIRT algorithm for the (i + 1)th channel with the solution for the ith channel. As shown in the code below, we first need to extract the acquisition and image geometries, i.e. ag3D and ig3D, respectively, using the geometry information for one of the channels of the four-dimensional data. Then, we set up the corresponding single-channel projection operator A3D. For every channel, we run 100 iterations of the sirt3D instance of the SIRT algorithm, and its solution is filled into the corresponding channel i of the final four-dimensional reconstruction sirt_recon4D.
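The sketch below illustrates this loop. The names data (the 4D hyperspectral AcquisitionData) and ig (its 4D image geometry) are assumed to have been set up beforehand, and the per-channel slicing helper is an assumption.

from cil.optimisation.algorithms import SIRT
from cil.optimisation.functions import IndicatorBox
from cil.plugins.astra import ProjectionOperator

ag3D = data.geometry.get_slice(channel=0)   # single-channel cone-beam geometry
ig3D = ag3D.get_ImageGeometry()
A3D = ProjectionOperator(ig3D, ag3D)

sirt_recon4D = ig.allocate()                # full 4D (spectral) reconstruction
x0 = ig3D.allocate()                        # zero initial guess for channel 0

for i in range(ig.channels):
    b_i = data.get_slice(channel=i)
    sirt3D = SIRT(initial=x0, operator=A3D, data=b_i,
                  constraint=IndicatorBox(lower=0.0), max_iteration=100)
    sirt3D.run(verbose=0)
    sirt_recon4D.fill(sirt3D.solution, channel=i)
    x0 = sirt3D.solution                    # warm start for channel i + 1;
                                            # keep x0 = ig3D.allocate() instead
                                            # for plain channelwise SIRT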
(d) Spatio-spectral TV and (3D + spectral) TV regularization

To be able to preserve edges both in the spatial domain and at the absorption K-edges in the energy spectrum, we propose to use TV regularization, subject to a non-negativity constraint. We first consider the TV regularizer extended to the four-dimensional volume, where the gradient $Du = (D_e u, D_z u, D_y u, D_x u)$ is coupled isotropically. We solve

$$u^{*}=\arg\min_{u\ge 0}\;\tfrac{1}{2}\|Au-b\|_{2}^{2}+\alpha\|Du\|_{2,1}.\qquad(5.1)$$

This regularizer combines the spatial and energy variations, which are penalized by a single regularization parameter. However, this may not be a good choice, as the magnitude of the gradient in the spectral dimension will not necessarily be of the same order of magnitude as that in the spatial dimensions. Therefore, it may be better to enforce separate regularization with respect to the energy and spatial gradients, i.e. $D_e u$ and $(D_z u, D_y u, D_x u)$, respectively. We therefore consider an alternative formulation with separate, decoupled TV regularizers for the spectral and spatial dimensions, i.e.

$$u^{*}=\arg\min_{u\ge 0}\;\tfrac{1}{2}\|Au-b\|_{2}^{2}+\alpha\|(D_{z}u,D_{y}u,D_{x}u)\|_{2,1}+\beta\|D_{e}u\|_{1}.\qquad(5.2)$$

One can solve the above problems using, for instance, the (explicit) PDHG algorithm. For the triplets (K, f, g), the function g is the same for both problems (IndicatorBox), and the difference is with respect to the operator K and the functions f. In (5.1), the operator K and the function f are similar to those of the (explicit) PDHG algorithm described in the dynamic CT section. To achieve the separation of the spatial and energy components of the GradientOperator, we can set the parameter split=True, so that it splits the spatial gradients and the gradient along the energy direction, i.e. $(D_e, (D_z, D_y, D_x))$. Similarly, we need to provide a decomposition for the function f, using two BlockFunction instances that contain the three terms presented in (5.2). The following code block presents the definition of the triplets (K, f, g) for the problems (5.1) and (5.2), required to run the PDHG algorithm as described in the previous sections.
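This is a minimal sketch of those triplets, not the original listing; A and data denote the multichannel projection operator and the 4D dataset, the parameter values are illustrative, and the ordering of the split gradient (energy component first) follows the text. Since the energy gradient has a single component, its mixed L2,1 norm reduces to an L1 norm, which is what we use here.

from cil.optimisation.operators import BlockOperator, GradientOperator
from cil.optimisation.functions import (BlockFunction, L2NormSquared,
                                        MixedL21Norm, L1Norm, IndicatorBox)

alpha, beta = 0.01, 0.2                    # illustrative values

# (5.1): isotropic spatio-spectral TV
Grad1 = GradientOperator(ig, correlation='SpaceChannels')
K1 = BlockOperator(A, Grad1)
f1 = BlockFunction(0.5 * L2NormSquared(b=data),
                   alpha * MixedL21Norm())

# (5.2): decoupled spectral and spatial TV;
# split=True yields (D_e, (D_z, D_y, D_x))
Grad2 = GradientOperator(ig, correlation='SpaceChannels', split=True)
K2 = BlockOperator(A, Grad2)
splitTV = BlockFunction(beta * L1Norm(),          # energy direction D_e
                        alpha * MixedL21Norm())   # spatial directions
f2 = BlockFunction(0.5 * L2NormSquared(b=data), splitTV)

g = IndicatorBox(lower=0.0)                # same non-negativity for both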
(e) SPDHG algorithm

Although we could follow exactly the same set-up as presented in the previous section to solve the above problems, one then has to perform forward and backward operations of the projection operator A on the whole multichannel dataset in every iteration. These operations are computationally expensive, especially for large datasets. In order to overcome this problem, CIL allows the user to employ the stochastic PDHG (SPDHG) algorithm, where the above operations are applied to a randomly selected subset of the data in every iteration. SPDHG has been used for different clinical imaging applications, such as PET [56] and motion estimation/correction in PET/MR [15], and produces significant computational improvements over the PDHG algorithm. Algorithm 1 (stochastic PDHG) takes as inputs the number of subsets n, the parameters γ = 1.0 and ρ = 0.99, and the probabilities $p_i$, $i = 0, \dots, n-1$, from which the primal and dual step sizes are computed; in each iteration, an index $i \in [n]$ is selected at random with probability $p_i$, and only the corresponding term is updated.

The set-up of the SPDHG algorithm is similar to that of PDHG, with the notable differences that we need to define the subsets, which are the n terms in the sum (2.1), as well as a list of probabilities for each subset to be selected in every iteration (see Algorithm 1). SPDHG, as well as PDHG, can be set up in an explicit form, when the regularizer is a term in the sum, or in an implicit form, where it is represented by g. In our hyperspectral reconstruction, we use the explicit form of SPDHG. The 120 projections which constitute the acquisition data are split into S = 10 data subsets of 12 angles each, $b = (b_i)_{i=0}^{S-1}$. In addition, we have one regularizer subset. We can now rewrite (5.1) and (5.2) in the form (2.1), where $n = S + 1$, $A_i$ represents the projection operator for a data subset $b_i$, with $f_i = \tfrac{1}{2}\|A_i u - b_i\|_2^2$ for $i = 0, \dots, S-1$, and $A_{n-1} = \nabla$, with $f_{n-1}$ being alpha * MixedL21Norm() for the spatio-spectral TV regularizer, or splitTV for the (3D + spectral) TV regularizer. We can configure different sampling patterns, i.e. different choices of the probabilities $p_i$ of selecting the ith term of (2.1). One may choose to assign an equal probability to every term, i.e. $p_i = 1/n$, $i = 0, \dots, n-1$; we refer to this as uniform sampling. Another option, known as balanced sampling, is to give probability 0.5 to selecting the regularizer subset and 0.5/S to selecting any one data subset. In the following, we choose the balanced sampling approach and refer the reader to [56] for a discussion of SPDHG sampling patterns. We use the Slicer processor to obtain the data subsets $(b_i)_{i=0}^{S-1}$. For each data subset's acquisition geometry, we create a list of operators $(A_0, \dots, A_{S-1})$, using the ProjectionOperator, which all share the same image geometry ig. For the function f, we use a list of L2NormSquared Functions with respect to each of the data subsets $b_i$, $i = 0, \dots, S-1$. Depending on which problem we solve, the corresponding gradient operator, i.e. Grad1 or Grad2, is appended to the list of operators $(A_i)$ and wrapped using the BlockOperator. Similarly, the list of data fidelity terms is wrapped as a BlockFunction f. Finally, the code to set up and run the SPDHG algorithm for both the (5.1) and (5.2) minimization problems, under a non-negativity constraint and using a list of subset probabilities specifying balanced sampling, is given below.
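The sketch below shows this set-up for problem (5.1); substituting Grad2 and splitTV from the previous sketch gives the (5.2) version. The Slicer-based subset extraction and the prob keyword are indicative of current CIL usage but may differ between versions.

from cil.processors import Slicer
from cil.optimisation.operators import BlockOperator
from cil.optimisation.functions import (BlockFunction, L2NormSquared,
                                        MixedL21Norm, IndicatorBox)
from cil.optimisation.algorithms import SPDHG
from cil.plugins.astra import ProjectionOperator

S = 10                                       # number of data subsets
ops, fids = [], []
for i in range(S):
    slicer = Slicer(roi={'angle': (i, 120, S)})   # every S-th of 120 angles
    slicer.set_input(data)
    b_i = slicer.get_output()
    ops.append(ProjectionOperator(ig, b_i.geometry))
    fids.append(0.5 * L2NormSquared(b=b_i))

ops.append(Grad1)                            # regularizer term appended last
fids.append(alpha * MixedL21Norm())

K = BlockOperator(*ops)
f = BlockFunction(*fids)
g = IndicatorBox(lower=0.0)

# balanced sampling: 1/(2S) per data subset, 1/2 for the regularizer
probs = [0.5 / S] * S + [0.5]

spdhg = SPDHG(f=f, g=g, operator=K, prob=probs,
              max_iteration=500, update_objective_interval=50)
spdhg.run(verbose=1)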
(f) Results

To assess the performance of the algorithms considered, we reconstruct the hyperspectral dataset and compare the reconstructions visually (figure 9) and in terms of their ability to reproduce the expected sharp K-edge jumps in gold- and lead-containing voxels (figure 10). In the first rows of figure 9, we present results for the two versions of SIRT at three different energies: below, in between and above the K-edges of Au at 80.725 keV and Pb at 88.005 keV. Since in the basic SIRT algorithm there is no regularization to remove the noise, it is difficult to locate the ROIs of gold and lead at low energies; see the first row of figure 9. When the channels are linked by warm-started SIRT, we observe better contrast in these specific ROIs, and both the gold and lead materials are easy to distinguish. However, due to the high spectral noise, the SIRT energy plots show highly oscillatory attenuation profiles, particularly the Au plot; see figure 10. This could make the detection of the K-edge, and hence of the material, unreliable, particularly in cases where we do not have prior knowledge of the elemental composition. In the last rows of figure 9, we present the spatio-spectral TV and (3D + spectral) TV reconstructions for three different energy channels, using the SPDHG algorithm with 10 subsets and 25 epochs, corresponding to 500 iterations.

Figure 9. SIRT reconstructions without and with channel correlation, and spatio-spectral TV and (3D + spectral) TV reconstructions using the SPDHG algorithm. Three different energies are presented at the 20th vertical slice: (first column) below the gold K-edge, (second) after the gold K-edge but before the lead K-edge, and (third) above the lead K-edge.

The regularization parameters α are chosen by visual comparison, reducing the noise while preserving edges both spatially and along the spectral direction. Compared with the SIRT reconstructions, noise is reduced spatially and contrast is enhanced using the spatio-spectral TV regularizer. However, moving along different channels, noise is still apparent; see the first row of figure 9 and the green energy curves in figure 10. This is because the spectral differences have less impact than the spatial differences in the isotropic coupling of (5.1). The spectral noise could be reduced further by choosing a higher regularization parameter for the spatio-spectral TV, but then small features would be lost spatially due to loss of contrast. This is an inherent limitation of the coupled spatio-spectral regularization. On the contrary, in the decoupled spatial and spectral regularization approach of the (3D + spectral) TV, we have the freedom to balance the strength between the spatial and spectral directions by suitable choices of the parameters α and β. In this way, with the (3D + spectral) TV, we obtain higher-quality reconstructions with better contrast and less noise, as seen in the bottom row of figure 9 and the red energy curves of figure 10.

Figure 10. Attenuation plots as a function of X-ray energy for the SIRT (without and with channel correlation), spatio-spectral TV and (3D + spectral) TV reconstructions, for the gold (ROI 1) and lead (ROI 2) regions shown in figure 9.

(g) PDHG versus SPDHG

In figure 11, we demonstrate the computational benefit of SPDHG compared with the PDHG algorithm. On a cropped four-dimensional dataset with only five channels and five vertical slices, we present the spatio-spectral TV reconstructions for these algorithms as a function of the number of epochs. One epoch is the expected number of iterations for the algorithm to have processed all the data, i.e. all data subsets once. For PDHG, the full data are used in each iteration, so an epoch here equals one iteration. For SPDHG, on the other hand, an epoch is determined by the number of data subsets. In our case, we use S = 10 data subsets with balanced sampling, which means that on average half of the iterations call the regularizer and in the other half one of the data subsets is chosen with uniform probability. Hence, 20 iterations are required on average to process all 10 data subsets, so an epoch for SPDHG equals 20 iterations. We run PDHG for 2000 iterations, i.e. 2000 epochs, and SPDHG for 1000 iterations, i.e. 50 epochs. We observe that even after five epochs, a meaningful reconstruction is obtained using the SPDHG algorithm, whereas with PDHG no structures of the rock are observed. In fact, the SPDHG reconstruction after five epochs is visually closer to the PDHG reconstruction after 50 epochs. This is also verified by the PSNR plot in figure 11. There, we compute, for every epoch, the PSNR of the SPDHG and PDHG reconstructions against the PDHG reconstruction after 2000 epochs, which is considered as the reference image u*, i.e. the converged solution of the spatio-spectral TV problem (5.1). A discussion of the computational advantages of SPDHG versus PDHG in tomography applications is beyond the scope of this article. Our intention was merely to demonstrate the CIL implementation of the SPDHG algorithm, which allows researchers to experiment with accelerated reconstruction of their own problems of interest.
(h) Discussion and conclusion

In this particular rock sample, the absorption K-edges are well defined, and identifying the elements from these abrupt changes in the spectrum is relatively easy. However, this is not always the case, for instance if there is no prior knowledge of the sample composition, or if the chemical concentration is low and the sample deposits are of the order of the detector voxel size. In such cases, K-edges may be completely concealed by the background noise of a channelwise FBP reconstruction, as shown in [6]. Hence, we need to rely on a more sophisticated reconstruction that has the ability to suppress noise in both the spatial and spectral domains and to confidently identify and quantify the elemental distribution of each material. This is particularly important when looking to perform further spectral analyses, such as K-edge subtraction (KES), where we segment elemental phases based on the identification of their K-edges. For a detailed task-based reconstruction quality assessment based on KES analysis, we refer the reader to [6], where we compare more advanced spatio-spectral reconstruction methods on a biological sample.

Conclusions

Multichannel CT imaging opens up many new possibilities in the material and life sciences. Multichannel CT is intrinsically 'photon-hungry', because the detected photons are shared between multiple energy or time bins. Therefore, acquired tomographic datasets typically do not provide sufficient information for high-quality reconstruction using traditional FBP-type algorithms. The absence of effective reconstruction methods and software capable of handling noisy and/or undersampled multichannel data hampers scientific applications of the technique. The inverse problems framework provides methods to treat these challenging multichannel CT data through iterative reconstruction with suitable regularization, which efficiently exploits prior knowledge and inter-channel correlation. CIL implements the essential building blocks, which explicitly support multichannel reconstruction and four- and higher-dimensional datasets. Here, we have demonstrated the potential of CIL for multichannel CT data with three representative case studies. Starting with a simple colour denoising and inpainting problem, we illustrated the ability to incorporate various regularization techniques, such as classical TV, vectorial TV and TGV. We also outlined how a conventional formulation of iterative reconstruction through the optimization framework is mapped onto CIL objects. In the second case study, we explored reconstructions in a dynamic sparse CT framework, enforcing different prior information on the spatio-temporal volume. We observed that spatio-temporal TV is able to remove noise and streak artefacts, but that, due to loss of contrast, important features are lost. Using a reference image from data with dense measurements, a structural prior (dTV) was shown to enhance the reconstructions when a very low number of projections is acquired. To highlight the flexibility of CIL, we constructed both explicit and implicit PDHG to solve the corresponding reconstruction problems. Finally, in the last case study, we endeavoured to reconstruct an energy-resolved X-ray CT dataset with high energy resolution. We followed the same regularization strategy as in the second case study, i.e. a combination of edge-preserving priors in both the spatial and spectral directions, but this time we used a stochastic version of the PDHG algorithm to speed up the large-scale CT reconstruction.
Regularization aided better identification of the K-edges in the energy-resolved X-ray CT dataset. The ability to incorporate and balance various regularization terms in the reconstruction routine is a promising approach for treating noisy and undersampled multichannel CT data, especially when using different regularization strengths for the spatial and energy domains. It is widely understood that one of the main challenges of iterative reconstruction is its high computational cost compared with traditional FBP-type methods. Although the focus of CIL is on modularity and on enabling the expression of complex optimization problems in working code, CIL wraps hardware-accelerated libraries to perform the costly forward- and back-projection steps and to calculate the proximal operators of the regularization and fidelity terms, and there is a continuous effort to improve performance and resolve computational bottlenecks. In terms of supported imaging modalities, we can currently handle any tomographic modality that can be described by the Beer-Lambert law. We also provide interoperability with the Synergistic Image Reconstruction Framework (SIRF) [14], enabling positron emission tomography and magnetic resonance imaging reconstruction using CIL. We continue to enrich the library of available algorithms, regularizers, and pre- and post-processing tools, along with the supported imaging models and available back-ends.
How the Russian invasion of Ukraine depolarized the Finnish NATO discussion

The Russian invasion of Ukraine in February 2022 dramatically reshaped the European and global security landscape. In Finland, the invasion rapidly overturned the long-held policy of military non-alignment and led the country to apply for NATO membership, as public opinion on NATO, previously polarized along the left-right axis, largely converged. We investigate how this change took place among polarized actors on Finnish social media, and how the NATO discussion was depolarized by the external threat posed by Russia. By analyzing Twitter retweeting patterns, we find three well-separated user groups before the invasion: a pro-NATO, a left-wing anti-NATO, and a conspiracy-charged anti-NATO group. Soon after the invasion, members of the left-wing anti-NATO group broke out of their retweeting bubble and established connections to the pro-NATO group despite their difference in partisanship, while the conspiracy-charged anti-NATO group mostly remained a separate cluster. Our content analysis reveals that the left-wing anti-NATO group and the pro-NATO group were likely bridged by a shared condemnation of Russia's actions and shared democratic norms. Meanwhile, members of the other anti-NATO group, who built arguments mainly upon conspiracy theories, disinformation, and Russian war propaganda, consistently demonstrated a clear anti-NATO attitude and retained strong within-group cohesion. Our findings show that a dramatic external threat can bridge partisan divides in issues linked to the threat, while groups upheld by conspiracy theories and disinformation are more likely to persist.

Introduction

Despite a period of momentum building, the Russian invasion of Ukraine on Feb 24, 2022 came as a shock to most observers. The shock was most acute in Ukraine, but was indirectly felt also in countries bordering Russia. Finland, the militarily non-aligned European country that shares a 1344-kilometer border with Russia, witnessed a sharp shift in its public opinion on NATO membership, based on a reappraisal of the external threat posed by Russia. Traditionally, around 20-30 percent of the Finnish population have been in favor of joining NATO. After the invasion, support for joining NATO soared to as high as 70-80 percent. Behind this major change in opinion, the rising external threat seems to have had a depolarizing effect on the Finnish NATO discussion. For long, Finnish opinions on NATO embodied a polarization that is largely partisanship-based: voters of the main right-wing party (National Coalition) were largely in favor of joining, and voters of left-wing parties had been the most vocal opponents of NATO. After the invasion, however, many left-wing supporters changed their opinion, and eventually the Finnish parliament voted almost unanimously in favor of joining NATO (188 for, 8 against). Social media opens a window into how this dramatic change emerged among people who are more vocal in their political stances (Bail 2021), and who often play an important role in steering the discussion (Matsubayashi 2013). The digital traces of user interactions make it possible to measure structural polarization by constructing endorsement networks of individuals and observing cohesive groups in them (Garimella et al. 2018; Salloum, Chen and Kivelä 2022). Further, network analysis can reveal the structure of these user groups and track how they change over time (Chen et al.
2021), providing insight into the information spreading and user interaction dynamics that drive opinion (de)polarization. We analyze the Finnish NATO discussion on Twitter, where the more politically active and partisan segment of the population is likely to be present (Ruoho and Kuusipalo 2019; Bail 2021). Specifically, we employ network analysis methods to inspect how the Russian invasion of Ukraine changed the polarization landscape of the Finnish NATO discussion online. While previous empirical research has found limited evidence that external threats decrease partisan polarization (Myrick 2021), our study echoes the long-held theory that external conflicts increase internal cohesion (Coser 1956) by showing how a dramatic external threat brings partisan actors together. However, we find that those engaged in conspiracy theories and disinformation are likely more persistent in their communication patterns and stances.

Results

We collected Finnish tweets from Dec 30, 2021 to Mar 30, 2022 that contain any NATO-related keyword (see Appendix). We mainly examined four time periods: before (Feb 10 to Feb 23), right-after (Feb 24 to Mar 2), 1-week-after (Mar 3 to Mar 9), and 4-weeks-after (Mar 24 to Mar 30). For each period, we constructed a retweet network of users, where a directed link connects user A to user B if A retweeted B within the period. We focus our analysis on users who were active in the before network in order to track stance changes induced by the invasion. Using a graph partitioning algorithm (Traag, Waltman and Van Eck 2019), we find three clusters of users in the network (Fig. 1A). By coding the stances of a sample of tweets spreading in each user group (see Materials and Methods), we find one of the groups to be pro-NATO and the other two to be anti-NATO (Fig. 1F-H). A qualitative reading of the sampled tweets suggests that one of the anti-NATO groups based their arguments on traditional leftists' concerns, such as pacifism and feminism not being compatible with joining a military alliance, and NATO having been involved in violations of human rights. The other anti-NATO group showed a clear engagement in conspiracy theories and disinformation in framing their opposition to NATO. For example, they claimed that "NATO equals supporting 'globalism', the global elite, and the World Economic Forum, all of which are supposed co-conspirators that are set out to destroy the Finnish nation", and that "those who want people to inject themselves with 'poisonous vaccines' are the ones who want to join NATO". Our user partisanship analysis further enriches the profile of each group. Specifically, we plot a list of Finnish politician accounts in the before network, colored by their publicly available party affiliation (Fig. 1B). Quite surprisingly, we find that politicians of most parties, including many that traditionally took a neutral or an anti-NATO stance, already fell on the pro-NATO side in the before network; presumably, this results from the buildup to the war since the end of 2021. However, politicians affiliated with the Left Alliance, the traditionally most anti-NATO party, still fell exclusively in the left anti group. Meanwhile, the conspiracy anti group seems to accommodate few politicians, which suggests its relatively fringe position in political communication. Plotting the pro, left anti, and conspiracy anti users in the retweet networks after the invasion, we observe a significant change in the network structure.
In the right-after network, members of the left anti group became much less connected internally and more connected to the pro-NATO side, while most members of the conspiracy anti group largely remained in their own internally connected bubble (Fig. 1C). This observation is confirmed by each anti group's number of external retweets of the pro group, number of internal retweets, and external-internal (E/I) ratio (Krackhardt and Stern 1988): although the E/I ratio of the conspiracy anti group also more than doubled in the first week after the invasion, the E/I ratio of the left anti group had an almost tenfold increase in the same period. This change in retweet network structure reflects a breakage of the cohesive cluster formed by the left anti users, as they instantly developed connections and alignment with the pro group after the invasion. The partisanship plot after the invasion (Fig. 1D) confirms that the invasion bridged the communication divide between politicians of the Left Alliance and those of the other parties. By contrast, the sustained bubble structure of the conspiracy anti group suggests that the invasion did not change its communication dynamics as much. Our reading of the sampled tweets suggests that the left anti group shared with the pro group a critical attitude toward Russia's invasion of Ukraine, which potentially connected them in the retweet network. After the invasion, many people in the left anti group also moved away from explicitly voicing anti-NATO stances to asking for more discussion on NATO, in addition to arguing that NATO opponents should not be ostracized. Although this implies that they did not shift their opinion completely toward the other end, the change in their expression opened up a possibility for interaction with the pro group, as some NATO supporters also agreed that an open discussion involving both sides is necessary. Meanwhile, members of the other anti group consistently built explicitly anti-NATO arguments upon conspiracy theories and disinformation. Many were also repeating messages of official Russian propaganda, and some of them, well-known figures in the Finnish disinformation and conspiracy theory scene, have been interviewed on Russian state TV as supposed experts. Thus, this conspiracy-charged and pro-Russia group presumably did not find much common ground with the pro group, and was not changed much by the invasion. The stance distribution of the sampled tweets confirms that the conspiracy anti group held a consistently strong anti-NATO attitude even after the invasion (Fig. 1H). Meanwhile, the left anti group saw a notable decrease in the expression of anti-NATO attitudes after the invasion (Fig. 1G), yet it also did not turn clearly pro-NATO. This change potentially reflects some extent of self-censorship in this group: while many users might have retained an anti-NATO leaning, they avoided stating anti-NATO stances explicitly after becoming a minority in the discussion. Our user activity analysis reveals another possible form of self-censorship in the left anti group. Before the invasion, the two anti-NATO groups had a comparable percentage of active users in each retweet network (Fig. 1E); yet after the invasion, the percentage was consistently lower in the left anti group. The partisanship plot (Fig. 1D) also shows that only two of the eight Left Alliance politicians in the before network were still present in the right-after network.
Coupled with the change in retweeting dynamics, the decreased user activity in the left anti group hints at a spiral of silence (Noelle-Neumann 1974) among a part of the left anti users, who likely chose not to share their opinions in response to the shifted discussion climate.

Discussion

Our analyses provide an overview of how the Russian invasion of Ukraine changed the polarization dynamics of the Finnish NATO discussion on Twitter: the left-wing anti-NATO users broke out of their retweeting bubble and connected with the traditionally right-wing pro-NATO group based on established common ground, but the conspiracy-charged anti-NATO group mostly remained a densely connected cluster of its own and persisted in holding an anti-NATO attitude. Our study sheds light on how a dramatic external threat can change the discussion dynamics among polarized actors. In contrast to existing empirical evidence that suggests the resilience of partisanship-based polarization (Myrick 2021), our results show that polarization in partisanship-divided issues can be weakened overnight by a dramatic external threat, as people of opposite leanings start building connections on the basis of a shared target of criticism (Russia) and a shared understanding of democratic norms (discussion about, and even opposition to, NATO are part of democracy). Although this depolarization can take the form of self-censored opposition (Brody and Shapiro 1989) and the actual opinion change might be limited, it still creates an opportunity for information exposure and conversation between the different parties, which can serve as a first step toward actual ideological depolarization (Mutz 2002). Our results also suggest that people who engage in conspiracy theories and disinformation can be very persistent in their attitudes, and more likely communicate within their own cluster. This echoes and strengthens the finding of previous work on how consumers of conspiracy theories tend to concentrate on within-group content and interaction (Bessi et al. 2015), and provides insight into when external threats may not pave a way toward conversation and consensus between polarized actors.

Materials and Methods

For tweet stance coding, 42 tweets were randomly sampled for each group in each period, with the sampling probability of each tweet proportional to its number of in-group retweets in the period. A group of four coders, all co-authors, labeled the stance of each tweet as pro-NATO, anti-NATO, unclear, or unrelated to NATO. The coders were split into two teams, with two coders on each team. Of the 504 tweets in total, 24 tweets were randomly sampled for both teams to code; of the remaining 480, one team coded half and the other team coded the remaining half. Within each team, each coder first labeled the 264 tweets independently, then the two coders discussed cases of disagreement and reached a consensus as a team. The inter-team agreement for the 24 double-coded tweets, as evaluated by Krippendorff's alpha (Hayes and Krippendorff 2007), is 0.80. Please refer to the Appendix for other details of our data collection, retweet network construction, tweet sampling, and tweet coding. Code and anonymized data are available at https://github.com/ECANETresearch/finnish-nato.

Retweet network construction

We divided the timeline into two-week periods before Feb 24 and one-week periods after Feb 24, in consideration of the asymmetric activity level before and after the Russian invasion of Ukraine.
For each period, we constructed a retweet network of users, where a directed link of weight w connects user A to user B if A retweeted B w times within the period (a minimal sketch of this construction is given at the end of this section). We used retweet records (not including quote retweets) for constructing the user networks, as a retweet is a relatively certain indicator of an endorsement-based connection (Metaxas et al. 2015). The anonymized networks are available at https://github.com/ECANET-research/finnish-nato. Following prior work (Garimella et al. 2018; Chen et al. 2021), we used only the largest connected component of each network for subsequent analysis and plotting. We also removed the self-loops in the networks. We mainly focused our analysis on four periods: before (Feb 10 to Feb 23; 31,399 total tweets, 12,891 retweets), right-after (Feb 24 to Mar 2; 81,433 total tweets, 39,936 retweets), 1-week-after (Mar 3 to Mar 9; 49,585 total tweets, 23,365 retweets), and 4-weeks-after (Mar 24 to Mar 30; 20,792 total tweets, 9,103 retweets). The largest connected component of the retweet network contains 3,836 users and 10,774 links in the before period, 8,986 users and 32,454 links in the right-after period, 6,173 users and 19,309 links in the 1-week-after period, and 3,383 users and 7,598 links in the 4-weeks-after period.

Tweet sampling

For each group and each period, we sampled tweets from those that were retweeted at least once in the group in the period. In the before period, 1,800 tweets were retweeted at least once in the pro-NATO group, 221 tweets in the left anti-NATO group, and 416 tweets in the conspiracy anti-NATO group. The numbers are 4,188/343/1,118 in the right-after period, 2,698/257/779 in the 1-week-after period, and 1,022/88/481 in the 4-weeks-after period for the pro, left anti, and conspiracy anti groups, respectively. To preferentially sample tweets that were popular within the group, we set the sampling probability of each tweet proportional to its number of in-group retweets in the period.

Tweets with unclear stance

In our tweet stance coding, a tweet is labeled "unclear" if it does not explicitly express a positive or negative attitude toward NATO. Thus, the label "unclear" does not necessarily imply an ambiguous attitude toward NATO, but rather that the tweet does not clearly indicate any attitude. For example, tweets labeled "unclear" are often reactions to what was currently taking place in the Ukraine war or in the Finnish NATO policy process. More specifically, in the pro-NATO group, many tweets were labeled "pro" in the earlier periods because they were advocating for two citizen initiatives that were pro-NATO; later on, these initiatives became irrelevant because the needed signatures (50,000 at the minimum) had been collected, and the NATO policy process moved on. Thus, in later periods, many clearly pro-NATO tweets disappeared from the pro-NATO group and, for example, many tweets condemning Russia's actions in Ukraine took their place. The latter are often labeled "unclear", as they are less clearly in favor of NATO, even though such a stance might be implicit. In general, the increase of tweets with unclear stance does not suggest that the group moved toward an ambiguous stance on NATO.
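To make the network construction and the E/I ratio concrete, the following is a minimal illustration using networkx. It is not the authors' released code (which is available at the GitHub repository above), and the input format of the retweet records is hypothetical.

import networkx as nx

def build_retweet_network(retweets):
    # retweets: iterable of (retweeter, original_author) pairs for one period
    G = nx.DiGraph()
    for a, b in retweets:
        if a == b:                      # drop self-loops
            continue
        if G.has_edge(a, b):
            G.edges[a, b]['weight'] += 1
        else:
            G.add_edge(a, b, weight=1)
    # keep only the largest (weakly) connected component
    giant = max(nx.weakly_connected_components(G), key=len)
    return G.subgraph(giant).copy()

def ei_ratio(G, group):
    # external-internal ratio of retweet links originating in `group`
    internal = external = 0
    for a, b, w in G.edges(data='weight'):
        if a in group:
            if b in group:
                internal += w
            else:
                external += w
    return external / internal if internal else float('inf')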
Political parties in Finland

Finland has a multiparty political system with coalition governments. In our study, we focus on the six main parties in Finland, each with over 10 members in the current Finnish Parliament: the Left Alliance (Left), the Social Democratic Party (SDP), the Green League (Green), the Centre Party (Centre), the National Coalition Party (Coalition), and the Finns Party (Finns). The parties are ordered by political leaning from left to right. Presumably, the multiparty system in Finland plays a non-negligible role in shaping the polarization dynamics we observe in the NATO discussion. Specifically, the level of polarization in the system before the invasion might not have been as extreme as in two-party systems, and therefore it might be easier to depolarize from that state. Also, parties that are closer to the Left Alliance in political leaning but already moved to the pro-NATO side before the invasion (e.g., SDP, Green, Centre) might have served as mediators connecting the Left Alliance to the pro-NATO group, especially considering that the Left Alliance was in a coalition government with the Social Democratic Party, the Green League, and the Centre Party at the time of the invasion.
The fish gill: site of action and model for toxic effects of environmental pollutants.

The gill epithelium is the site of gas exchange, ionic regulation, acid-base balance, and nitrogenous waste excretion by fishes. The last three processes are controlled by passive and active transport of various solutes across the epithelium. Various environmental pollutants (e.g., heavy metals, acid rain, and organic xenobiotics) have been found to affect the morphology of the gill epithelium. Associated with these morphological pathologies, one finds alterations in blood ionic levels, as well as in gill Na,K-activated ATPase activity and ionic fluxes. Such physiological disturbances may underlie the toxicities of these pollutants. In addition, the epithelial transport steps which are affected in the fish gill model resemble those described in the human gut and kidney, sites of action of a variety of environmental toxins.

Introduction

The increasingly obvious effects of pollution of the biosphere in general and aquatic ecosystems in particular are known to everyone and are the subject of daily accounts in the popular press as well as textbooks (1) and scientific monographs (2,3). It is the intent of this short review to: (a) examine the morphology and physiology of the fish gill epithelium and its underlying vasculature; (b) briefly review selected data which indicate that a variety of toxicants in the aquatic environment can affect gill structure and function; (c) delineate specific sites of action in the gill which may account for the toxic effects; and (d) suggest that the fish gill may be used as a model system for study of the more generalized effects of toxicants on ion transport across cellular and epithelial membranes, as well as on the vasoactivity of blood vessels. The literature review and discussion will be limited to the gills of teleost (bony) fishes and is meant to be more representative than exhaustive. Cited references will emphasize review papers or chapters in books in many cases, in order to facilitate entry into a vast literature and to save space.

Structure and Function of the Fish Gill Epithelium

The gill epithelium is the dominant site of gas exchange, ionic regulation, acid-base balance, and nitrogenous waste excretion for fishes (4,5), thereby serving a multitude of vital functions for these aquatic animals. The evolutionary and morphological development of the gill has recently been reviewed by Hughes (6) and need not be detailed here. Suffice it to say that the epithelium covers four (in rare cases, two or three) branchial arches plus, when present, the pseudobranch (remnant of the gill on the mandibular arch) and, in some cases, the inner surface of the operculum and buccal cavity. The epithelium on each branchial arch is subdivided into two dorsoventral columns of filaments (flattened extensions running at right angles to the branchial arches), with dorsal and ventral rows of secondary lamellae being further subdivisions on each filament, lying at right angles to the filamental plane (Fig. 1). The secondary lamellae are the site of gas exchange, with blood-to-water diffusion distances of less than one micrometer in active species and 1 to 10 µm in more sluggish fish species (6).
The epithelium is supplied with blood directly from the heart, through the ventral aorta, with afferent and efferent branchial arteries in the arches. The pattern of blood flow through the filaments and secondary lamellae is relatively complex, with unoxygenated blood flowing through afferent filamental arteries and afferent lamellar arterioles into the secondary lamellae. Oxygenated blood leaves the lamellae and returns to the efferent branchial artery via efferent lamellar arterioles and efferent filamental arteries. Anastomoses between the efferent filamental artery and the central venous sinus of the filament provide for a parallel drainage of oxygenated blood directly back to the branchial vein, and subsequently, the heart (Fig. 2) (7). In addition, some species display anastomoses between the afferent filamental arteries and the central venous sinus, providing for a potential shunt around the lamellae (8). Blood flow into these afferent and efferent lamellar pathways is controlled by catecholamines. There is now abundant evidence (9,10) that epinephrine, via β-adrenoceptors, produces a fall in gill vascular resistance by opening prelamellar arterioles, and an increase in flow into the efferent filamental artery (at the expense of the venous flow into the central venous sinus) via α-adrenoceptors.

[FIGURE 1. Scanning electron micrographs of (a) branchial arch and filaments of gill from the teleost, Opsanus beta; calibration line is 1 mm (10 x 10^5 nm); (b) filament and secondary lamellae of gill from the teleost, Opsanus beta; calibration line is 300 µm (30 x 10^4 nm).]

Alterations in the pattern of blood perfusing the filaments vs. the secondary lamellae theoretically could have profound effects on osmoregulation because of the relative distributions of "leaky" tight junctions and chloride cells on the filaments and secondary lamellae (see below). The relative role of blood-borne hormones vs. catecholamines from nerve terminals in the gill tissue is still open to some debate (10,11). The cellular composition of the branchial epithelium is usually divided into a filamental epithelium and a much simplified lamellar epithelium, although recent evidence indicates that at least one cell type (the chloride cell, see below) formerly thought to be unique to the filamental epithelium is also found in the lamellar epithelium in some species (8) (Fig. 3). The filamental epithelium is composed of five major cell types, including squamous pavement cells, which characteristically possess microridges; mucous (goblet) cells; heavily innervated neuroepithelial cells, which contain biogenic amines; accessory cells (which may be precursors of chloride cells) (12); and chloride cells, whose role in ion transport is now well established (13-16). The epithelium of the secondary lamellae is much simpler and thinner (see above) than that of the filament, and consists of two major cell types: the superficial cells and the basal cells, the latter thought to be cells differentiating to replace the former. The basal cells might also differentiate into chloride cells when they are found in the lamellar epithelium (17). Despite its relative thinness, the lamellar epithelium is considered to be relatively impermeable to ions, water, and organic molecules because of extensive intercellular strands forming "tight junctions" (18).
This is to be contrasted with the filamental epithelium, where relatively diffuse strands between chloride cells, or between chloride and accessory cells (especially in seawater-acclimated fishes), indicate leaky "tight junctions," and therefore relatively high solute permeability (14,19). It is generally considered that these leak pathways represent the site of the dissipative ion and water movements which must be countered by active osmoregulatory systems in fishes in both sea water and fresh water (20). Since, like all vertebrates except the hagfishes, teleost fishes maintain the NaCl content of their body fluids at approximately 40% that of sea water, it is clear that they face a net diffusional loss of salt into fresh water and a net diffusional uptake of salt from sea water. In the marine teleosts the diffusional salt load is exacerbated by salt influx across the gut, dictated by the need to absorb ingested water to balance the water lost osmotically to the hyperosmotic marine environment (20). The molecular pathways which allow the gill of marine teleosts to extrude unwanted salts (14,21) are basically the same as those utilized by the salt-excretory rectal gland of elasmobranchs (22), as well as those described for NaCl uptake across the marine fish gut epithelium subsequent to ingestion of sea water (23,24). Figure 4 depicts the current working model for the steps of NaCl extrusion by the marine fish gill epithelium. Recent microprobe analysis demonstrated conclusively that this transport step resides in the chloride cells of the teleost gill epithelium (25). Importantly, numerous studies have demonstrated that the basolateral aspect of the chloride cells contains high concentrations of the transport enzyme Na,K-activated ATPase (13,14). Freshwater teleosts (and presumably all gilled, freshwater vertebrates) extract needed NaCl from the medium via parallel antiport systems involving exchange of intracellular H+ or NH4+ for external Na+, and of internal HCO3- (or possibly OH-) for external Cl- (Fig. 5) (26). The actual cell involved in these transport steps is unknown, but is generally assumed to be the chloride cell, despite its usually reduced population in freshwater species (13). Moreover, the energizing step for these antiports is also presently unknown, but the chloride cell does contain substantial concentrations of Na,K-activated ATPase even in freshwater species (13). In addition, intracellular carbonic anhydrase may play a role, supplying H+ and HCO3- for the apical ionic exchanges (Fig. 5). In the past several years it has become apparent that both the Na+/H+ and Cl-/HCO3- antiports are also present in the gill epithelium of marine teleosts (27,28), being involved in acid-base regulation and nitrogenous waste excretion in both marine and freshwater forms (26,29). Recent evidence indicates that the antiports evolved first in the hagfishes for acid-base regulation, even before the vertebrates entered fresh water (30). Finally, despite the potential for extrusion of unwanted ammonia (fish are ammonotelic) via the gill Na+/NH4+ exchange, it is apparent that diffusive pathways for both NH4+ and NH3 exist across the fish gill epithelium (31). It is important to note that these branchial pathways for extrusion of ammonia and acid/base equivalents predominate over renal extrusion pathways, even under conditions of experimental manipulation of blood pH (29).
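The opposing uptake and leak fluxes summarized above (Figs. 4 and 5) can be caricatured in a toy steady-state balance. The sketch below is illustrative only: the saturable-influx form and every parameter value are assumptions, chosen simply to show how inhibiting active uptake or raising passive permeability depresses blood Na+, the pattern reported for several toxicants later in this review.

```python
# Toy steady-state model of branchial Na+ balance in a freshwater fish:
# active influx (Michaelis-Menten in external Na+) vs. passive diffusive efflux
# (proportional to blood Na+). All parameter values are illustrative assumptions.

def steady_state_na(jmax, km, na_water, k_leak):
    """Blood [Na+] index at which active influx equals diffusive efflux.
    influx = jmax * na_water / (km + na_water); efflux = k_leak * na_blood."""
    influx = jmax * na_water / (km + na_water)
    return influx / k_leak

na_water = 0.5   # mM, soft fresh water                   -- assumed
jmax = 300.0     # µmol/(kg*h), maximal active uptake     -- assumed
km = 0.2         # mM, apparent uptake affinity           -- assumed
k_leak = 2.0     # leak rate constant (arbitrary units)   -- assumed

base = steady_state_na(jmax, km, na_water, k_leak)
inhibited = steady_state_na(0.5 * jmax, km, na_water, k_leak)   # e.g., ATPase inhibition
leaky = steady_state_na(jmax, km, na_water, 2.0 * k_leak)       # e.g., Ca2+ displaced from junctions

print(f"baseline blood Na+ index: {base:.1f}")
print(f"50% uptake inhibition:    {inhibited:.1f}")
print(f"doubled leak:             {leaky:.1f}")
# Either halving active uptake or doubling passive permeability halves the
# steady-state blood Na+ index -- the hyponatremia seen under toxicant stress.
```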
Effects of Selected Pollutants on Gill Structure and Function

One need only consult a recent review chapter on histopathology of tissues from aquatic organisms to determine that gill pathologies are common symptoms of toxic effects on fishes of a wide variety of aquatic pollutants, including organochlorines, petroleum compounds, organophosphates, carbamates, miscellaneous herbicides, acidification, nitrogenous compounds, heavy metal salts, and chemotherapeutic agents (32). The morphological anomalies commonly include "hyperplasia with lamellar fusion, epithelial hypertrophy, telangiectasia (marked dilation of terminal blood vessels), edema with epithelial separation from basement membranes, general necrosis, and/or epithelial desquamation" (32). It is important to note that a recent statistical analysis of common gill histopathologies produced by a variety of toxicants indicates that, rather than toxicant-specific responses, these gill structural damages may merely be reflections of generalized stress responses, often secondary to a failure of gill cellular osmoregulation in freshwater species (33). Unfortunately, morphological pathologies are often described without concomitant description of physiological changes, and physiological studies often do not include data on morphological changes. Nevertheless, examination of some selected studies can provide insights into the effects of at least three classes of environmental pollutants on fish gill structure and function.

Heavy Metals*

[*The notation "heavy metals" is used here for ions such as Cu2+, Hg2+, and Zn2+, but they are probably more properly referred to as "borderline" (Cu2+ and Zn2+) and "class B" (Hg2+) metals based upon their respective reactivities with various ligands. See Nieboer and Richardson (35) for an interesting discussion of the validity of the term "heavy metals."]

It is clear that exposure of various species of fishes to heavy metals in the environment is associated with obvious structural damage to the gill epithelium. For example, Skidmore and Tovell (36) demonstrated that exposure of rainbow trout (Salmo gairdneri) to 40 ppm Zn2+ for approximately 3 hr resulted in severe curling and edema of the secondary lamellae, with the epithelium lifted away from the basement membrane. Chloride cells were also partially detached and swollen. Olson et al. (37) described less severe morphological changes (decreased height of lamellar cell ridges, appearance of vacuolated epithelial cells, and chloride cell degeneration) after exposure of rainbow trout to either mercuric chloride or methylmercury (approximately 50 ppb for 1 week or 0.25 ppb for 6-8 weeks). Matthiessen and Brafield (38) examined the effect of exposing sticklebacks (Gasterosteus aculeatus) to 0.5 to 1.0 ppm Zn2+ for 1 to 3 days in distilled water (usually fatal) or 2 to 6 ppm Zn2+ for up to 29 days in hard water (not fatal). In the distilled water exposures the most characteristic response was "detachment and sloughing of epithelial cells and coalescing of adjacent secondary lamellar epithelia." Gill responses in the hard water experiments were characterized by the appearance of chloride cells on the secondary lamellae. These authors suggest that, at least in hard water, survival of Zn2+ exposure may be associated with excretion of the unwanted cation via activated chloride cells. This conclusion was supported by their finding that if fish were allowed to recover in Zn2+-free hard water for 9 days after exposure to 1 ppm Zn2+ in distilled water for 16 hr, the gill epithelium was often characterized by the appearance of chloride cells on the secondary lamellae.
One might argue (33) that this epithelial detachment was secondary to increased osmotic influx of water in this hyperregulating, freshwater species, but Baker (39) found swelling and clumping of secondary lamellae in the gills of the marine winter flounder (Pseudopleuronectes americanus) exposed to copper. Since this species normally is faced with an osmotic loss of water, it is clear that copper is not merely increasing the osmotic permeability of the gill epithelium in this case. Given these structural changes, it is clear that heavy metals should produce profound effects on gill solute and water transport. Acute exposure of rainbow trout to copper (12.5 to 200 ppb) for 12 to 24 hr is correlated with a decline in blood Na+ and Cl- concentrations (40), corroborating earlier studies with copper (41,42). This Cu2+-induced decline in blood NaCl was secondary to an inhibition of uptake of both ions at lower Cu2+ concentrations (12.5 to 50 ppb); at higher Cu2+ concentrations ionic effluxes were also stimulated (40). The authors propose that this latter effect on ionic permeability may have been secondary to displacement of Ca2+ from anionic sites of the intercellular cement, thereby opening "tight junctions" and allowing Na+ and Cl- to diffuse across the gill epithelium, down their respective electrochemical gradients. Cu2+, as well as other heavy metals, can potentially displace Ca2+ from biological ligands (35). Inhibition of ionic uptake, at least in the case of Na+, may have involved reduction of Na+/NH4+ exchange, since blood ammonia levels also rose under copper treatment (40). It is important to note that these authors found that the threshold concentration for these effects of copper on trout osmoregulatory pathways was 12.5 ppb, only twice the water quality level set by the International Joint Commission (43). Exposure of killifish (Fundulus heteroclitus) to 125 ppb of mercuric chloride for 24 hr completely blocked net Na+ uptake. Interestingly, similar exposure to methylmercury resulted in only transient inhibition of uptake, followed by uptake at control rates after 30 min (44). The authors suggested that this recovery from methylmercury toxicity may have been secondary to rapid redistribution of the toxicant from the gill tissue to the liver and kidney. They also found that both mercury compounds significantly reduced the gill Na,K-activated ATPase in exposed killifish (44). In these experiments, the isotopically measured Na+ efflux from the killifish was unchanged by exposure to either mercury compound, consistent with the proposition that the passive ionic permeability of the gill was unaffected. More recently, Lock et al. (45) have found that exposure of rainbow trout to either mercuric chloride (1-2 ppm for 4 hr or 100 ppb for 1 week) or methylmercury (100 ppb for 4 hr or 5 ppb for 1 week) was associated with a significant reduction in blood Na+ and Cl- concentrations. However, gill Na,K-activated ATPase levels were reduced only at concentrations of either mercury compound at twice these concentration/exposure levels.
Since the authors also measured increased water uptake by isolated gills, they suggest that the ionic reduction under mercury stress may be due merely to increased water influx, rather than to inhibition of ionic uptake mediated by Na,K-activated ATPase. However, passive ionic losses were not monitored in this study, and could have played a role in the blood concentration decline which was noted. The effects of zinc exposure are less well defined. Lewis and Lewis (46) found that exposure of channel catfish (Ictalurus punctatus) to lethal zinc concentrations (12-30 ppm) was associated with a decline in serum osmolality, which was reversed by addition of NaCl to an osmotic pressure of 235 mOsm. However, Spry and Wood (47) found no significant changes in blood Na+, K+, or Cl- concentrations of rainbow trout exposed to either 0.8 or 1.5 ppm Zn2+ for up to 12 hr, despite significant, and fatal, mixed acidosis (increased PCO2 and blood lactate levels) and hypoxia, at least at 1.5 ppm Zn2+. These data are at odds with an earlier study on rainbow trout which demonstrated that exposure to 40 ppm Zn2+ resulted in a slight fall in blood osmotic and Na+ concentrations (48). Nevertheless, the author proposed that respiratory stress was the critical parameter affected by Zn2+ exposure. In a more recent study, Spry and Wood (49) found that exposure of rainbow trout to much lower levels of Zn2+ (0.8 ppm for 72 hr) resulted in a mixed acidosis, normoxia, and reversal of branchial net uptake of both Na+ and Cl- to net loss. Interestingly, blood Na+ and Cl- concentrations did not change, despite 53% mortality over the 3-day experimental period and a substantial stimulation of passive NaCl losses across the gills. This may have been due to the fact that branchial uptake of both ions was also stimulated, especially after 48 hr of Zn2+ exposure. The authors proposed that some of the metabolic component of the observed acidosis was secondary to a stimulation of net base loss (equivalent to acidic equivalent uptake) produced by stimulation of Cl-/HCO3- exchange vs. Na+/H+ exchange (Cl- influx increased more than Na+ influx, at least initially). The authors also suggested that these acid-base disturbances were insufficient to account for the observed mortality, and hence the causes of death may be disruption of cellular events, such as oxygen delivery and/or utilization (49). It is important to note that, despite the uncertainty of osmoregulatory disruption produced by low concentrations of Zn2+ (less than 1 ppm), 1 to 100 ppm Zn2+ significantly inhibited both gill Na,K-activated ATPase (50) and carbonic anhydrase (51). Since both enzymes are probably intimately involved in gill NaCl transport (Figs. 4 and 5), it is rather surprising that more definitive changes in blood NaCl concentrations have not been demonstrated in fishes exposed to environmental Zn2+. Heavy metals also have the potential to affect fish osmoregulation in sea water. Stagg and Shuttleworth (52) demonstrated that exposure of the marine flounder (Platichthys flesus) to 170 ppb Cu2+ for 42 days resulted in significant increases in blood Na+ and Cl- concentrations. In a subsequent study (53) they showed that acute exposure of perfused flounder gills to Cu2+ (in the perfusate) resulted in a concentration-dependent (1 to 100 µM = 65 to 6500 ppb) reduction of the electrical potential across this tissue.
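The parenthetical molar-to-mass conversion can be checked directly. A minimal sketch, assuming the atomic mass of copper (63.55 g/mol):

```python
# Convert a molar Cu2+ concentration to ppb (µg/L), checking the text's
# "1 to 100 µM = 65 to 6500 ppb". In dilute aqueous solution 1 ppb ~= 1 µg/L.
CU_G_PER_MOL = 63.55  # atomic mass of copper

def molar_to_ppb(molarity, g_per_mol):
    return molarity * g_per_mol * 1e6  # mol/L * g/mol = g/L; *1e6 -> µg/L

for micromolar in (1, 100):
    ppb = molar_to_ppb(micromolar * 1e-6, CU_G_PER_MOL)
    print(f"{micromolar:>3} µM Cu2+ ~= {ppb:.0f} ppb")
# -> 1 µM ~= 64 ppb and 100 µM ~= 6355 ppb, close to the rounded 65-6500 ppb
#    quoted in the text.
```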
Since this electrical potential has been shown to be a direct measurement of the active ionic extrusion mechanisms of the perfused gill (54), it is clear that the heavy metal must be interfering with one of the transport steps outlined in Figure 4. Since the ouabain-sensitive component of the oxygen consumption of isolated gill tissue was reduced in the presence of Cu2+, as was the Na,K-activated ATPase, the authors proposed that the site of action of the heavy metal was the enzyme itself. In support of this conclusion, Crespo and Karnaky (55) found that acute application of 2.6 ppm (4 x 10^-5 M) Cu2+ or Zn2+ to the serosal side of the isolated, short-circuited opercular membrane (rich in chloride cells and accepted as a model for the seawater ionic extrusion systems of marine fishes) (16) of the killifish reduced the short-circuit current (Isc) and electrical potential significantly, indicating a direct effect on active transport mechanisms. Concomitant measurements of Na,K-activated ATPase activities indicated substantial inhibition at Cu2+ or Zn2+ concentrations above 325 ppb (55). Importantly, the electrical resistance of the isolated tissue was unaffected by this treatment, and neither Isc nor resistance was affected by addition of the heavy metals to the mucosal side of the epithelium, supporting the proposition that the effect was directly on the basolateral transport steps. Given the known effects of heavy metals on the ionic permeability of the branchial epithelium in freshwater fish species, and the leaky "tight junctions" in the gill epithelium of marine fishes (28), it is surprising to find that heavy metals did not also increase the passive ionic permeability of the marine fish gill. One might argue that this is because the permeability is already high; however, studies have shown that this permeability is still sensitive to external Ca2+ concentrations (56,57), and therefore presumably to heavy metal effects. A more likely explanation is that the in vitro studies of Crespo and Karnaky (55) involved acute doses, while metal-induced changes in permeability take longer to be effected. The foregoing indicates that heavy metals (only Cu2+, Hg2+, and Zn2+ have been described here) do produce toxic effects on the fish gill, both morphological and physiological. The physiological effects involve a reduction in blood ionic levels or acidosis, associated with increased ionic permeabilities (produced probably by displacement of Ca2+ from paracellular channels) and inhibition of enzymes (e.g., Na,K-activated ATPase and carbonic anhydrase) involved in transport of ions. Indeed, it is now clear that a variety of gill transport ATPases may be affected by heavy metals (53).

Aquatic Acidification

In light of current concerns over acid precipitation, there are surprisingly few published studies of the effects of low environmental pH on fish gill morphology. Daye and Garside (58) found separation of epithelial layers of the secondary lamellae on gills of brook trout (Salvelinus fontinalis) exposed for 7 days to pH 5.2. Yearling Sunapee trout (Salvelinus alpinus oquassa) exposed to pH 4.5 for 192 hr showed slight swelling of secondary lamellae, which became more pronounced at pH 4.0 (the lethal limit) (59).
Leino and McCormick (60) demonstrated a significant increase in the number of chloride cells, an increase in the number of chloride cells on the secondary lamellae, and a striking increase in the number of chloride cells with apical pits [usually indicative of acclimation to increased salinities (61,62)] in the gills of fathead minnows (Pimephales promelas) exposed for 129 days to pH 5.0. Most recently, Chevalier et al. (63) found that brook trout (Salvelinus fontinalis) residing in acidified lakes (pH 5.5) in the Canadian Shield (Quebec) displayed "extensive epithelial damage, mainly separation of the epithelial layer from underlying tissue, deformation of secondary lamellae, and degeneration of chloride cells, which was accompanied by pronounced hyperplasia of undifferentiated epithelial cells in the primary lamellae." Importantly, Bolis et al. (64) have recently shown that the phospholipid composition of gill tissue from rainbow trout exposed to pH 4.0 to 4.5 for 4 to 5 days changes significantly and is accompanied by a substantial increase in the percentage composition of unsaturated fatty acids. The authors propose that such changes could affect the bilayer structure and fluidity of the gill epithelial cell membranes, and hence their osmotic and ionic permeabilities. Finally, mucus production on both the gills and skin is significantly enhanced by acid exposure (58,59,65,66); indeed, it is one of the best characterized responses to acid stress. It is now abundantly clear that exposure of freshwater fishes to environmental pHs below approximately 5.0 is associated with a pronounced decline in blood Na+ and Cl- concentrations (26,67-71). The magnitude of the acid-induced perturbation of blood ionic levels is correlated with ambient Ca2+ concentrations (at least in rainbow trout), with fish in softer waters showing more significant falls in blood NaCl concentrations than those in higher-Ca2+ waters (72,73). It is clear from Figure 4 that such a reduction could be the result of either decreased active uptake or increased diffusional loss of both ions. Studies on a variety of species (26,71) have demonstrated that both transport pathways are affected. The increased efflux may be, at least in the case of Na+, partially due to changes in the electrical potential across the gill (74), but it is more likely that the generalized NaCl loss is secondary to increased leakiness of the branchial epithelium produced by acid titration of the Ca2+ on the gill membrane. This view is supported by the fact that low-Ca2+ solutions exacerbate the effect of low external pH (see above), and low-pH solutions significantly increase the rate of efflux of bound Ca2+ from gills isolated from brown trout, Salmo trutta (75), and increase the electrical conductance, as well as the Na+, Cl-, and mannitol effluxes, across the isolated opercular epithelium of S. fontinalis (76). The precise locus and mechanism(s) of inhibition of Na+ and Cl- uptake remain unknown. External H+ could certainly interfere with an apical Na+/H+ exchanger directly, either by reversing the direction of exchange or by noncompetitive inhibition, thereby producing both the fall in blood Na+ and the fall in blood pH which are normally seen (69,73). The mode of inhibition of Cl- influx which is usually also seen (69,73) is less easy to explain. It could actually be a rapid response to the fall in blood pH, producing a compensatory decline in Cl-/HCO3- exchange, or a generalized, noncompetitive inhibition of the exchanger by the high external acidity.
Acidification of fresh waters is usually associated with mobilization of aluminum from the substrate (65), and compounds of this metal may produce some of the ionoregulatory symptoms of acid poisoning in salmonids, even at relatively higher pHs (such as 5.0) (77), where acid stress alone is rather slight (65). Under these conditions inhibition of transport enzymes may play a role, since Staurnes et al. (77) found that exposure of young specimens of Atlantic salmon, Salmo salar, and S. gairdneri to pH 5 and 200 ppb of AlCl3 was correlated with significant reductions in the activities of both carbonic anhydrase and Na,K-activated ATPase. However, it appears that at least Na,K-activated ATPase can be inhibited by acid stress alone, since Saunders et al. (78) have shown that the branchial enzyme activity is more than halved by rearing S. salar parr in pH 4.2 to 4.7 fresh water. In addition, blood Na+ and Cl- concentrations were reduced in the individuals reared at low pH. Inhibition of the basolateral Na,K-activated ATPase could presumably disrupt the electrochemical gradients favoring Na+/H+ exchange, thereby inhibiting Na+ influx and H+ efflux. Inhibition of intracellular carbonic anhydrase would presumably interfere with both Na+/H+ and Cl-/HCO3- exchange, thereby resulting in inhibition of both Na+ and Cl- uptake. It is interesting to note that the acid-induced secretion of mucus by the fish gill may actually be adaptive, providing for a reduction in epithelial ionic permeability by binding environmental Ca2+ (although the effects of fish mucus on ionic diffusion are controversial) (79,80), as well as binding external Na+ to provide more substrate for Na+/H+ exchange (81). However, if increased external H+ concentrations have already titrated the polyanionic sites on the mucus, these effects may be minimal. In fact, Miller and MacKay (82) found that the ability of fish mucus to bind copper was completely abolished at pH 3.5. Thus, the role of mucus secretion in acid stress remains unclear. The foregoing indicates that low environmental pHs can affect fish osmoregulation either by increasing passive ionic loss (secondary to displacement of Ca2+ from the gill epithelium) or by direct effects on the apical ionic exchange systems or the basolateral Na,K-activated ATPase (especially if aluminum species are present). In both cases, the most common symptom is a reduction of blood ionic concentrations, as well as reduced blood pH. However, the actual cause of death in acid-exposed fishes may be much more complicated than merely ionoregulatory failure. Cardiovascular collapse may be the final cause, secondary to increased erythrocyte volume and a shift of extracellular fluids into cells, both of which lead to increased hematocrit and blood viscosity. Concomitant catecholamine mobilization, producing increased cardiac output and vasoconstriction, along with the increased blood viscosity, leads to increased arterial blood pressure, and eventually circulatory failure may occur (83,84).

Organic Xenobiotics

Gross morphological anomalies are seen in the gill epithelium of yearling coho salmon (Oncorhynchus kisutch) exposed to the herbicides dinoseb (100 ppm for 114 hr), paraquat (100 ppm for 120 hr), and atrazine (15 ppm for 140 hr), including necrosis, desquamation, hypertrophy and hyperplasia, and telangiectasia (32).
Similar morphological changes characterize exposures of fishes to fumigants such as methyl bromide (85); pyrethroid insecticides such as permethrin (86); detergents such as sodium lauryl sulfate (87,88); organochlorines such as DDT, endrin, and dieldrin; petroleum compounds such as phenol and naphthalene; organophosphates such as malathion and methylparathion; and carbamates such as sevin (32,89). Interestingly, injection of Tilapia aurea (adapted to either 2/3 sea water or fresh water) every 3 days with 10 mg/kg DDT for 30 days produced no abnormalities in the chloride cells (90). One would expect that the usual morphological changes would be correlated with osmoregulatory malfunction, and this appears to be the case, despite a rather sparse literature. For example, Grant and Mehrle (91) demonstrated that exposure of goldfish (Carassius auratus) to high concentrations of endrin (430 µg/kg body weight) for 157 days did result in a slightly reduced blood Cl- concentration; however, blood Na+ levels were unchanged at this exposure and were actually increased at lower doses. Leadem et al. (92) did not find any significant change in either blood osmolality or Na+ in S. gairdneri dosed orally with 2.75 mg/kg or 8.30 mg/kg DDT every 48 hr for 2 weeks (both doses were "environmentally realistic" according to previous studies of pesticide residues in fishes) (93). These findings are especially puzzling since Leadem et al. (92) did demonstrate a significant inhibition of gill Na,K-activated ATPase in their studies, and McBride and Richards (94) found that aldrin (180 ppb) inhibited Na+ uptake by the isolated, perfused carp (Cyprinus carpio) gill. Moreover, Davis and Wedemeyer (95) showed that the organochlorines DDT, dicofol, and endosulfan inhibited rainbow trout gill Na,K-activated ATPase by 60 to 100% in vitro at concentrations between 10^-5 and 10^-4 M. However, these in vitro concentrations were 1 to 2 orders of magnitude greater than acutely toxic levels in vivo. It therefore remains unclear whether these xenobiotics affect gill solute transport in freshwater fishes, despite apparently clear effects on gill Na,K-activated ATPase. The picture seems to be somewhat clearer for marine species. Eisler and Edmunds (96) demonstrated hypernatremia in the marine northern puffer (Sphaeroides maculatus) exposed to endrin, and Kinter et al. (97) found that exposure to the polychlorinated biphenyl Aroclor 1221 (75 ppm for 24 hr) produced significant increases in blood Na+ concentrations in seawater-adapted killifish, while exposure to DDT (0.25 ppm for 6 hr or 0.075 ppm for 24 hr, both of which produced approximately 50% mortality) did not alter blood Na+ concentrations. However, in companion experiments, exposure of the seawater-adapted eel (Anguilla anguilla) to 1 ppm DDT for 6 hr did result in a significant increase in blood Na+ concentrations (97). In a subsequent study, Miller and Kinter (98) found that exposure of killifish to either 0.1 ppm or 1 ppm DDT for 4 to 24 hr resulted in an increase in blood Na+ concentrations (however, only 0.1 ppm for 24 hr was statistically significant, due to extreme variability), concomitant with some 30% mortality in the exposed fishes. Again, the site of action appears to be the transport enzyme Na,K-activated ATPase, since Janicki and Kinter (99) showed that the enzyme isolated from gill tissues of P. americanus was inhibited 54% when incubated with 50 ppm DDT, and a subsequent study (97) found that 50 ppm DDT also inhibited gill Na,K-activated ATPase from A. anguilla.
Detergents represent another class of xenobiotic compounds that produce gill structural pathologies (see above). A single study showed that the diffusional inflow of water across perfused rainbow trout gills was enhanced when linear alkylate sulfonate (LAS) was added to the irrigate at a concentration of less than 100 ppm (exclusive of vascular effects) (100). However, other published studies have shown that a major site of action of detergents may be adrenoceptors on vessels controlling the perfusion of various regions of the gill epithelium, rather than cellular ionic transport per se. Of course, changes in the pattern of blood flow (e.g., increased lamellar perfusion or increased flow into the central venous sinus, which underlies the majority of the chloride cells; see above) could have profound secondary effects on gill transport. Bolis and Rankin (101) demonstrated that perfusion of isolated gills from various salmon species (Oncorhynchus sp.) with perfusate containing 0.6 to 3 ppm LAS produced a concentration-dependent vasodilation that was blocked by the β-adrenoceptor antagonist propranolol, and was therefore presumably mediated by direct interaction with a β-adrenoceptor. In a subsequent study they extended these findings to both S. trutta and A. anguilla and also found that LAS (1 ppm in the acclimation medium or 2 x 10^-7 M in the perfusate) actually interfered with norepinephrine-induced vasodilation (102). More recently, Stagg et al. (103) found that 2 x 10^-8 M (6 ppb) sodium lauryl sulfate (SLS) in the perfusate reversibly (and noncompetitively) inhibited the vasodilatory action of noradrenaline on the perfused gills of A. anguilla. They proposed that the detergent interacted with the vascular membrane, thereby producing direct or indirect (reversible and noncompetitive) changes in the β-adrenoceptor that produced vasodilation and/or inhibition of the sensitivity of the system to norepinephrine. They also propose that such subtle hemodynamic effects may have severe deleterious effects on fish gas exchange and ion balance at concentrations well below those known to produce visible changes in gill morphology (18 ppm for 45 hr) (87). Thus, it appears that a wide variety of organic xenobiotics are capable of affecting fish gill morphology, and that pesticides such as DDT are toxic because of inhibition of gill Na,K-activated ATPase. Surprisingly, the concomitant osmoregulatory effects are rather unclear. Detergents appear to produce severe hemodynamic effects on the gill vasculature, which could certainly affect osmoregulation indirectly, via alterations in perfusion of specific areas of the gill epithelium.
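The detergent concentrations quoted above convert between molar and mass units in the same way as the copper example earlier. A minimal sketch, assuming a molar mass of roughly 288 g/mol for sodium lauryl (dodecyl) sulfate (an assumed value, not one stated in the paper):

```python
# Check that 2 x 10^-8 M sodium lauryl sulfate is roughly the quoted 6 ppb.
SLS_G_PER_MOL = 288.4  # molar mass of sodium lauryl (dodecyl) sulfate -- assumed

molarity = 2e-8                          # mol/L, from Stagg et al. (103)
ppb = molarity * SLS_G_PER_MOL * 1e6     # g/L -> µg/L (= ppb in dilute solution)
print(f"2e-8 M SLS ~= {ppb:.1f} ppb")    # -> ~5.8 ppb, i.e. the quoted ~6 ppb
```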
Use of the Fish Gill as a Model System for Cellular Modes of Action of Toxicants

It is obvious from the foregoing that the fish gill is morphologically and physiologically affected by a variety of environmental pollutants. The specific, subcellular sites of action on the fish gill have been proposed, but not proven, for many pollutants, and it is clear that toxicant interaction with various gill transport steps or perfusion patterns could have profound effects on the ability of fishes to osmoregulate in either fresh water or sea water. These potentially sensitive steps include: (a) carrier-based ionic exchanges such as the apical Na+/H+, Na+/NH4+, or Cl-/HCO3- exchanges, the basolateral Na+/K+ or Na+/NH4+ exchanges, and the basolateral NaCl + KCl cotransport; (b) transport enzymes such as the basolateral Na,K-activated ATPase and intracellular carbonic anhydrase; (c) paracellular pathways (affected by mucus and/or external Ca2+); and (d) vascular hormone receptors, such as those for catecholamines. More important, in general terms, is the fact that these potentially sensitive transport and hemodynamic sites in the fish gill are common to many human tissues and organs that are known sites of action of a wide variety of environmental pollutants. In fact, various authors have proposed that membrane effects may account for many of the pathological responses to these substances in the "membrane theory of toxicity" (104-106). For example, because the human kidney receives 20 to 25% of the resting cardiac output, is capable of actively extracting and concentrating various blood solutes in renal cells and tubular lumina, and may concentrate filtered and secreted solutes in the distal tubules passively because of water reabsorption, relatively high concentrations of toxicants are presented to renal cells. In addition, renal tubular contents may become acidified in some segments, which may provide for interactions with toxic substances not taking place at the normal cellular pHs of other tissues. It is abundantly clear that the kidney is one of the major foci of the toxic effects of a wide variety of environmental pollutants (107-109). Since the renal epithelia contain high concentrations of transport enzymes such as Na,K-activated ATPase and carbonic anhydrase, ion-permeable paracellular pathways, as well as the ionic carriers mediating Na+/H+ and Cl-/HCO3- exchanges and the NaCl + KCl cotransport system (110), it is clear that human renal pathophysiologies in response to environmental pollutants could be mediated via perturbations in transport pathways which can be modeled by the gill epithelium of fishes. Finally, some pollutants (especially heavy metals) (119) affect the cardiovascular system by interfering with the vasoactivity of peripheral vessels, including coronary arteries (120,121). The fish gill vasculature has been shown to possess a variety of receptors for vasoactive substances: adrenergic and cholinergic (9), purinergic (122, and Evans, unpublished), and peptidergic, such as those sensitive to glucagon, vasoactive intestinal peptide, and somatostatin (123) and, most recently, those sensitive to atrial natriuretic factor (124,125). The fish gill vasculature is, in fact, the evolutionary precursor of the coronary vessels of mammals (126). Thus, hemodynamic studies of the vasculature of the fish gill may give us access to the stimulus-response coupling processes which may underlie hemodynamic pathologies produced by environmental pollutants.

Summary

The teleost fish gill is covered by a complex epithelium whose function is controlled by perfusion through a rather intricate vascular system. In addition to being the site of gas exchange for these aquatic animals, the gill epithelium possesses transport steps which mediate active and passive movements of ions, counteracting dissipative movements down electrochemical gradients between the fish's blood and either fresh water or sea water. Finally, these same transport steps play major roles in acid-base regulation and in excretion of unwanted nitrogen in the form of ammonia.
It is clear that a variety of aquatic pollutants produce gross histopathologies of the gill epithelium, which are often associated with osmoregulatory, acid-base, or hemodynamic malfunction. It is proposed that such symptoms are secondary to toxin interaction with specific transport steps or membrane-bound receptors. Since similar pathways and receptors are common to a variety of human tissues which are affected by environmental pollutants (e.g., kidney, intestine, liver, and blood vessels), the fish gill presents a model system which may be used to investigate more carefully the general epithelial pathologies produced by toxic substances. Research in the author's laboratory has been supported by various grants from the National Science Foundation, the latest being PCM 8302621. The Center for Membrane Toxicity Studies at the Mt. Desert Island Biological Laboratory is supported by NIEHS EHS 1 P30 ES03828.
2014-10-01T00:00:00.000Z
1987-04-01T00:00:00.000
{ "year": 1987, "sha1": "0f65d154832e31c858df89fbc1dbd9198c4ac9fe", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/annotation/f7108c9e-bab9-421f-9c0b-1227a421399b", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f65d154832e31c858df89fbc1dbd9198c4ac9fe", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
230237080
pes2o/s2orc
v3-fos-license
CHANGE IN THE STATUS OF INTERNALLY DISPLACED BOSNIAKS OF THE SREBRENICA MUNICIPALITY IN THE TUZLA CANTON DURING THE PERIOD 2005-2015 The war during the period 1992-1995 caused massive forced migrations of the population in Bosnia and Herzegovina: in this period about 1.2 million people fled beyond the borders of Bosnia and Herzegovina, while about 1 million were displaced inside the country. After the Dayton Peace Agreement, the process of return of refugees and displaced persons in Bosnia and Herzegovina began. However, even more than 20 years after the signing of the Agreement, a significant number of refugees and displaced persons have not returned to their pre-war places of residence. This paper explores the number and the change in the status of the internally displaced Bosniaks of the Srebrenica Municipality in the period 2005-2015, namely those who were residing in the Tuzla Canton of the Federation of Bosnia and Herzegovina. According to the Federal Ministry of Refugees and Displaced Persons, in 2005 the status of internally displaced persons in Tuzla Canton was held by 2,016 households with a total of 5,549 members, and by 2015 this status was retained by 1,099 households with a total of 2,867 members. The aim of this paper is to point out some of the more significant factors that led to a reduction in the number of Bosniaks from the area of the Srebrenica municipality who had the status of internally displaced persons in Tuzla Canton.

Introduction

Bosnia and Herzegovina is known as an area of dynamic migration trends, which intensified particularly at the end of the twentieth century as a result of forced migrations. In the period from 1992 to 1995, half of the pre-war Bosnian population was displaced, with about 1.2 million residents fleeing out of Bosnia and Herzegovina (Vallador Alvarez 2015, 6). Neighboring countries accepted around 40% of the refugees from Bosnia and Herzegovina, and a significant number of Bosnians took refuge in Germany, Austria, Canada, the USA and Australia. Around a million people were displaced within Bosnia and Herzegovina (Pasic 2015, 7). Similar data on the intensity of displacement of the Bosnian-Herzegovinian population in the period 1992-1995 were given by the IDMC and the Norwegian Refugee Council, according to which over half of the Bosnian-Herzegovinian population was displaced during the period 1992-1995, of which about 1.3 million were forced out of Bosnia and Herzegovina; about 500,000 fled to neighboring countries and around 700,000 to the countries of Western Europe, including about 350,000 in Germany (A profile of the internal displacement situation, IDMC, NRC 2006).
A particular example of negative demographic trends and forced displacement in the period 1992-1995 is the Municipality of Srebrenica, situated in the eastern part of Bosnia and Herzegovina, on the border with Serbia. In this period Serbian forces carried out mass expulsion and killing of the Bosniak population. According to the census of 1991, 36,666 inhabitants lived in the area of this municipality, of whom 75.2% were Bosniaks. Upon the occupation of the "UN Safe Area of Srebrenica" in 1995, the Serbian army committed genocide against the Bosniak population. In July 1995 around 8,000 men of the Bosniak population were killed (Van de Bildt 2015, 115), and at the same time the population of this municipality was expelled, forcibly displaced throughout Bosnia and Herzegovina, or fled to other countries of the world (Kulenovic, Suljic 2006, 12). According to the 2013 census, 15,242 inhabitants were enumerated in this municipality (Agency for Statistics of Bosnia and Herzegovina 2013), which means that, as a result of mass murder and persecution and a slow and weak post-war return process, this municipality lost about 21,000 residents, or 58.4% of the pre-war population. According to Articles 4 and 5 of the Law on Displaced Persons and Returnees in the Federation of Bosnia and Herzegovina and Refugees from Bosnia and Herzegovina, a displaced person is a citizen of Bosnia and Herzegovina who, after 30 April 1991, has been displaced within the territory of the Federation as a result of conflict, persecution, a well-founded fear of being persecuted, or having his/her rights violated within the territory of Bosnia and Herzegovina, and who is neither able to return in safety and with dignity to his/her former place of residence nor has voluntarily decided to settle in a new place of living. A returnee is a refugee from Bosnia and Herzegovina or a displaced person who has expressed to the responsible body a wish to return to his/her former place of residence and who is in the process of returning, as well as a refugee from Bosnia and Herzegovina or a displaced person who has returned to his/her former place of residence. Returnees cease to be considered returnees upon the expiry of a six-month deadline, counting from the day of their re-establishment in their former place of residence. A returnee is not a person who has established himself/herself in another place of residence within Bosnia and Herzegovina (Law on Displaced Persons and Returnees in the Federation BiH and Refugees from BiH, Official Gazette of the Federation BiH, No. 15/05). Although the process of returning refugees and displaced persons has been going on since the Dayton Peace Agreement, according to the data of the Federal Ministry of Displaced Persons and Refugees, 38,820 displaced persons were registered in the Federation of Bosnia and Herzegovina by 31.12.2014. On the territory of this Bosnian-Herzegovinian entity, the status of internally displaced persons from the area of the Srebrenica municipality was held by 10,139 persons in 2005 and by 3,174 persons in 2013 (Database of displaced persons and refugees 2005).
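The population-loss figure follows directly from the two census counts; a minimal sketch of the arithmetic:

```python
# Population loss of Srebrenica municipality between the 1991 and 2013 censuses.
pre_war = 36666   # 1991 census
post_war = 15242  # 2013 census

loss = pre_war - post_war
share = 100 * loss / pre_war
print(f"loss: {loss} residents ({share:.1f}% of the pre-war population)")
# -> loss: 21424 residents (58.4% of the pre-war population)
```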
The largest number of displaced inhabitants of Srebrenica, that is, more than half of the displaced residents of Srebrenica in FBiH, found refuge in Tuzla Canton. In 2005, 5,549 displaced persons from the area of the Srebrenica municipality were registered in Tuzla Canton, and by 2015 that number had been reduced to 2,867 people (Database of displaced persons and refugees. Ministry of Labour, Social Affairs and Return of Tuzla Canton. Tuzla, 2015). However, the reduction in the number of Srebrenica people with the status of displaced persons in this period did not happen as a result of displaced persons returning to the municipality of Srebrenica. By the end of 2005, 1,754 households with a total of 3,946 members had returned to their pre-war places of residence in the municipality of Srebrenica, whereas in 2015 fewer than 2,000 Bosniak inhabitants lived in the Srebrenica municipality. A certain number of displaced persons decided not to return to their pre-war places of residence due to past trauma of war, a sense of insecurity, and adverse economic and social conditions in the Republika Srpska. The returnee population often faces poverty due to a lack of employment opportunities and poor access to social and health services, the education system is ethnically divided, and so on. In addition, younger generations of the displaced population have integrated into the new environment, their ties to the area of origin have weakened, and thus their desire to return has weakened. Therefore, the reduction in the number of Srebrenica people with the status of displaced persons in the Federation of Bosnia and Herzegovina and Tuzla Canton was only to a lesser extent caused by the process of displaced persons returning to the municipality of Srebrenica. For the most part, it is a consequence of the social, economic and political situation and the legislation in the Federation of Bosnia and Herzegovina and Tuzla Canton in the post-war period.

Materials and Methods

The study of the change in the number and status of internally displaced Bosniaks from the Srebrenica Municipality in the area of Tuzla Canton and the Federation of Bosnia and Herzegovina in the period 2005-2015 was conducted on the basis of the Database on displaced persons and refugees of the Federal Ministry of Displaced Persons and Refugees (data for 2005 and 2015) and the Ministry of Labour, Social Affairs and Return of Tuzla Canton (data for 2015). The database on displaced persons and refugees for 2005 was formed on the basis of the re-registration process of refugees and displaced persons, carried out in order to determine the actual number of persons with the said status. Data for 2005 contain an individualized list of the holders of displaced-person status, as well as the number of members of their family households, according to the pre-war place of residence and the current place, municipality and canton of residence in the Federation of Bosnia and Herzegovina. Data for 2015, with similar content as in 2005, include exclusively displaced Bosniaks of Srebrenica in the area of Tuzla Canton.
In addition to the mentioned data on the internally displaced Bosniaks of the Srebrenica Municipality in Tuzla Canton, other statistical data were also used which are directly or indirectly related to the internally displaced Bosniaks of Srebrenica. These are the following databases: • Individualized list of Bosniak victims of genocide in the municipality of Srebrenica in the period from 1992 to 1995 (containing 5,400 killed and fallen persons); • Database of the Service for Return of the Srebrenica Municipality on the number of returnees to the municipality of Srebrenica by 31.12.2004 (contains information on 1,687 households with a total of 3,835 members, by their settlements in the municipality of Srebrenica); • An excerpt from the Central Voters Register of Bosnia and Herzegovina 2006, The Central Election Commission of Bosnia and Herzegovina, Sarajevo (contains data on 10,283 voters with the right to vote in the municipality of Srebrenica). Other sources of information relating to the issue and the subject of research were also consulted, such as professional and scientific papers and statistical data of governmental and non-governmental organizations, as indicated in the list of references. All the mentioned data were compared with data from the Database of displaced persons and refugees, which was obtained from the relevant ministries at the level of the Federation BiH and Tuzla Canton. The purpose of these comparisons was to reach more exact indicators of the status and number of internally displaced Bosniaks of the Srebrenica Municipality in the area of Tuzla Canton, of the actual number of persons displaced after the fall of the "UN Safe Area of Srebrenica" in July 1995 and the deportation of civilians, mainly women, children and the elderly, to the area of the Tuzla district (today Tuzla Canton), and of the number of returnees to the municipality of Srebrenica. In addition to the method of comparison, the other methods used were the case-study method and the method of meta-analysis.

Number of internally displaced Bosniaks of the Srebrenica Municipality in the area of Tuzla Canton in the period 2005-2015

In the period from 1992 to 1996, temporary accommodation in Tuzla Canton was given to about 15,000 displaced Bosniaks from the Srebrenica Municipality. After the signing of the Dayton Peace Agreement in late 1995, the cessation of the state of war in Bosnia and Herzegovina in the first half of 1996, and finally the opening of traffic roads both within Bosnia and Herzegovina and to foreign countries, migrations of the internally displaced Bosniaks of the Srebrenica municipality occurred. These migrations took place both within the Federation of Bosnia and Herzegovina and towards Western European or overseas countries. According to unofficial estimates, between 5,000 and 10,000 Srebrenica Bosniaks live outside Bosnia and Herzegovina. Although there are no exact figures to indicate how many Bosniaks of the Srebrenica municipality were really displaced to the territory of Tuzla Canton (Tuzla district), it is assumed that in this period about 80% of the total number of displaced Bosniaks from Srebrenica were residing in Tuzla Canton. However, through the process of registration of displaced persons and refugees in the post-war period by the competent institutions in the Federation of Bosnia and Herzegovina, the actual number of displaced persons from the area of Srebrenica within Tuzla Canton and the entire Federation of Bosnia and Herzegovina was determined.
By 2005, more than half of the internally displaced Bosniaks from the Srebrenica municipality had residence in the area of Tuzla Canton. Out of the total number of expelled persons from Srebrenica who lived in FBiH, approximately 54% of all displaced families, or 54.7% of all displaced persons, lived in the municipalities of Tuzla Canton (Kulenovic et al. 2006, 16). Thus, in the area of Tuzla Canton in 2005, 2,016 households with a total of 5,549 members of the internally displaced people from Srebrenica were resident. By 2015, this number had decreased to 1,099 households with a total of 2,867 members (Tab. 1). In relative terms, the number of households of the internally displaced Bosniaks from Srebrenica decreased by 45.5% in the period 2005-2015, and the total number of household members decreased by 48.3%. The process of reducing the number of displaced persons in Tuzla Canton and in the other administrative units of the Federation of Bosnia and Herzegovina was affected by several factors, the most significant being demographic (bio-reproductive), economic and housing factors. Of the demographic factors, the most important is the low birth rate among the refugee population, which was influenced by so-called external factors, in other words by the loss of a large number of males of reproductive age during the period from 1992 to 1995; this led to a large gender disproportion in the displaced population from Srebrenica and thus to the termination of bio-reproduction among women who had been married. Economic factors also had a major impact on the number of displaced residents of Srebrenica in Tuzla Canton. A large number of displaced and expelled persons from the municipality of Srebrenica had lost immediate family members (children, husbands, parents) who were killed, and thus, according to the current legislation in the Federation of Bosnia and Herzegovina, they could obtain adequate socio-economic benefits such as the right to a permanent family care allowance. In addition, a number of displaced persons from Srebrenica lost their status of displaced persons by resolving their employment and legal status, that is, by obtaining employment and housing (Suljić et al. 2015, 9).
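The percentage reductions quoted above follow from the household and member counts in Tab. 1; a minimal sketch of the check:

```python
# Verify the decline in displaced Srebrenica households/persons in Tuzla Canton.
households_2005, households_2015 = 2016, 1099
persons_2005, persons_2015 = 5549, 2867

hh_drop = 100 * (households_2005 - households_2015) / households_2005
pp_drop = 100 * (persons_2005 - persons_2015) / persons_2005
print(f"households: -{hh_drop:.1f}%")  # -> -45.5%
print(f"persons:    -{pp_drop:.1f}%")  # -> -48.3%
```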
However, even though there was a significant reduction in the number of households (that is, in the total number of internally displaced Bosniaks from the Srebrenica Municipality), their relative share in the area of Tuzla Canton nevertheless increased in relation to other areas of the Federation of Bosnia and Herzegovina. In 2005, the share of internally displaced Bosniaks of the Srebrenica Municipality who stayed in the area of Tuzla Canton amounted to about 55% of the total number of displaced Srebrenica people in FBiH. In 2015, this share was much higher, being approximately 76.7% of the total number of internally displaced households, or 77.7% of the total number of internally displaced persons from Srebrenica who held the status of displaced persons in the Federation of Bosnia and Herzegovina. There are several reasons why the expelled Bosniaks from Srebrenica retained the status of displaced persons in Tuzla Canton in such large numbers in relation to the other cantons of the Federation of Bosnia and Herzegovina where this population resided. The two most important reasons, however, are these: one is geographical, the other is socioeconomic. The area of Tuzla Canton is geographically closest to the municipality of Srebrenica, so the majority of refugees and internally displaced Bosniaks of Srebrenica hope that, sooner or later, they will return to their pre-war places of residence in the municipality of Srebrenica. The other reason is of a status or socioeconomic nature, because people who have legally recognized status as displaced or expelled persons are entitled to certain legally provided benefits, such as the right to a permanent family care allowance, the right to health care, etc. The very process of return of internally displaced persons from Srebrenica to their pre-war places of residence did not significantly contribute to the overall reduction in the number of people with the status of displaced persons. Although some official reports of government at all levels in Bosnia and Herzegovina show that the return of internally displaced persons to their pre-war places of residence was satisfactory, this cannot be claimed for the municipality of Srebrenica. As an example, the data from the Service for Return of the Srebrenica municipality for the end of 2004 may be used. According to the aforementioned service of the Srebrenica municipality, by the end of 2004, 1,310 Bosniak families with a total of 3,048 members had returned to this municipality. That the actual figures are much lower is shown by the data obtained on the basis of selected lists from the Central Voters Register of Bosnia and Herzegovina for the general elections in Bosnia and Herzegovina in 2006. Namely, 1,384 persons of Bosniak nationality aged 18 years and over (persons with the right to vote) had permanent residence in the area of Srebrenica. If one assumes that about 20% of the Bosniak population is under the age of 18, this gives a total of 1,730 people, or approximately 1,800 Bosniaks, who lived in the area of the Srebrenica municipality in mid-2006 (BiH Central Voters Register. Central Election Commission. Sarajevo, 2006). From the above, it follows that the process of return of displaced persons to their pre-war places of residence cannot be taken as an important cause of the decrease in the number of people with the status of a displaced person, especially when it comes to the number of returns to the municipality of Srebrenica.
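The voter-register extrapolation above rests on the assumption that under-18s make up roughly 20% of the Bosniak population, so registered adult voters represent about 80% of the total; a minimal sketch of that estimate:

```python
# Estimate the total Bosniak population of Srebrenica in mid-2006 from the
# voter register, assuming voters (18+) are about 80% of the population.
adult_voters = 1384      # Bosniak voters with permanent residence, 2006 register
adult_share = 0.80       # i.e., ~20% of the population under age 18 -- assumed

total = adult_voters / adult_share
print(f"estimated total population: {total:.0f}")  # -> 1730, i.e. ~1,800 people
```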
Spatial distribution of internally displaced Bosniaks of the Srebrenica Municipality in the area of Tuzla Canton in 2005 and 2015

The territorial distribution of internally displaced persons from Srebrenica across the municipalities of Tuzla Canton was conditioned by several factors. The first and foremost is the original accommodation of the internally displaced people of Srebrenica after their deportation from the occupied, so-called "UN Safe Area" of Srebrenica in July 1995. The expelled people of Srebrenica were first placed in military facilities and tents in Dubrave, near Tuzla, and were later relocated to other municipalities of Tuzla Canton, primarily Banovići, Lukavac, Srebrenik, Tuzla and Živinice, and subsequently the municipalities of Gračanica, Gradačac and Kladanj. This can also be seen from the total numbers of internally displaced persons from Srebrenica, who mostly remained in the municipalities in which they were placed at the end of 1995 and the beginning of 1996 (Tab. 2). The other factors that influenced the distribution of displaced persons from Srebrenica across the area of Tuzla Canton are geographic, the most important being the geographical, topographical and traffic positions of the municipalities of Tuzla Canton, that is, their proximity to Tuzla City as the administrative, cultural, health and university center of the canton.

Tab. 2: Comparison of distributions of displaced persons from Srebrenica by municipalities of the Tuzla Canton in 2005 and 2015.

The clearest view of the territorial and numerical distribution of internally displaced Bosniaks of the Srebrenica municipality across the municipalities of Tuzla Canton in 2005 is obtained through relative numbers. Ordered by the largest share of internally displaced Bosniaks, the municipalities of Tuzla Canton are: Srebrenik 25.6%, Lukavac 19.2%, Živinice 15.3%, Tuzla 14.5%, Banovići 7.6%, Gradačac 6.8%, Gračanica 5.6%, Kalesija 2.6%, Kladanj 1.8%, Doboj Istok 0.6% and Ćelić 0.4% of the total number of internally displaced Bosniaks of the Srebrenica municipality throughout Tuzla Canton. These relative numbers, that is, the shares of internally displaced persons from Srebrenica across Tuzla Canton, show that most of the displaced persons resided in the municipalities in which they were placed at the end of 1995 and the beginning of 1996, which gravitate towards Tuzla City. In addition, the traffic position and the size of the settlement of residence had a significant impact on the regional distribution of the displaced Bosniaks of the Srebrenica municipality within the area of Tuzla Canton. Namely, the regional distribution of families displaced from Srebrenica in these municipalities was conditioned, first of all, by the favorable geo-traffic position of these settlements relative to the municipal urban centers (Kulenović et al. 2006, 17-19).
Based on these data, it can be concluded that certain changes occurred in the period 2005-2015 in the proportions of internally displaced Srebrenica people across the municipalities of Tuzla Canton. The most significant decrease in the share of displaced persons was recorded in the municipality of Srebrenik, and an increase in the municipality of Živinice. In the other municipalities of Tuzla Canton there was no significant variation in the number of displaced persons from Srebrenica. These changes do not mean that large numbers of internally displaced persons left some municipalities and moved to others within Tuzla Canton, even though such migrations existed. In most cases, if there was no return to the pre-war place of residence, displaced persons resolved their housing and economic needs in the municipality of residence, and hence by force of law they lost the right to the status of displaced persons. The status of displaced person in the Federation of Bosnia and Herzegovina may cease for various reasons, the most important being: voluntary return to the former place of residence; refusal to return to the former place of residence, although a voluntary return in safety and dignity is possible and there are no compelling reasons arising from previous persecution or other strong humanitarian reasons; when a displaced person has voluntarily decided to settle permanently elsewhere in the territory of the Federation; when a displaced person has freely disposed of his or her pre-war property in the former place of permanent residence (sale, exchange, rent); when a displaced person has freely disposed of property in the place of temporary residence (purchase or construction of a house or apartment); when a displaced person has used assistance or a donation for an urgent repair of a house or apartment in the place of permanent or temporary residence; and in the case of death (Law on Displaced Persons and Returnees in the Federation BiH and Refugees from BiH, Official Gazette of the Federation BiH, No. 15/05). One indicator that there were no significant inter-municipal migrations of displaced persons is that, for the most part, these people retained the same place of residence over the 10-year period. The best examples are the municipalities of Živinice, Tuzla, Srebrenik and Lukavac.

Distribution of internally displaced Bosniaks of the Srebrenica municipality in municipalities of the Tuzla Canton by gender and selected age groups in the year 2015

According to data from the International Commission on Missing Persons, 7,755 residents of the municipality of Srebrenica were killed around 11 July 1995. Of that number, DNA analysis has identified 6,918 people (89% of the total number killed), while 803 persons, about 10% of the total number of deaths in the genocide, are still missing (International Commission on Missing Persons, 2015). Males make up the majority of the victims, but there were also women and children among them (a little more than 5% of the victims were children under the age of 15) (Leydesdorff 2011, 12). Accordingly, most of the displaced Bosniaks of Srebrenica are female, which is understandable bearing in mind that more than 5,000 males were killed in the area of Srebrenica during the war, from 1992 to 1995 (Suljić et al. 2015, 9). Tab. 3 shows the distribution of the displaced people from Srebrenica in the municipalities of Tuzla Canton, by gender and selected age groups.
Tab. 3: Distribution of displaced persons from Srebrenica in municipalities of the Tuzla Canton by gender and selected age groups in 2015.

Based on the data presented in Tab. 3, it is evident that the share of women in the total number of displaced persons in Tuzla Canton is 58.4%, while the share of men is only 41.6%. In the age group 18-50 years, the shares of the male and female populations are almost equal: 29.3% men and 29.6% women. However, a significant difference exists between the shares of the male and female populations aged 51 and over: only 6.5% of the total are men of this age, compared with 23.3% women. One of the main reasons for this gender disproportion is the suffering of the male population of the municipality of Srebrenica during the period 1992-1995. Also, the share of males aged 18-50 in the total male population is 70.4%, while the share of females aged 18-50 in the total female population is 50.6%. This difference was probably conditioned by marriage among part of the female population, which created the legal conditions for losing the status of displaced person.

From the above, it can be concluded that the share of the mature- and late-age population will increase, and the share of the young population will decrease. This is not only a matter of the biological aging process and the associated demographic aging of the displaced population from Srebrenica, but also of the loss of displaced-person status upon the acquisition of certain socio-economic conditions prescribed and regulated by law. Return to the pre-war place of residence itself does not play a significant role in the process of reducing the number of people with the status of internally displaced persons.

Conclusion

Among the large number of factors that influenced the number of internally displaced Bosniaks of Srebrenica in Tuzla Canton, the political, geographical, socio-economic, legal and demographic factors can be singled out. The political factors include the war events around the so-called "UN Safe Area of Srebrenica" in July 1995, the mass killings of young and middle-aged men, the expulsion of women and children to the area of today's Tuzla Canton, and others. The geographical factors are determined by the short distance between the area of Tuzla Canton and the municipality of Srebrenica. Socio-economic factors, such as employment, housing and marriage, caused people to give up or lose the status of displaced persons; the latter is related to the legal or statutory factors. The demographic factors are reflected in the low birthrate of the refugee population and the gender disproportion conditioned by the mass murder of young and middle-aged men, especially young married men, during the genocide in July 1995.

As for the territorial distribution of internally displaced people of Srebrenica across the municipalities of Tuzla Canton during the period 2005-2015, it can be concluded that, in relative numbers, there were no significant changes. A significant reduction in the share of displaced persons occurred in the municipality of Srebrenik, with a simultaneous increase in the municipality of Živinice, while in the other municipalities the changes were not significant. Considering their pre-war places of residence, the internally displaced Bosniaks in Tuzla Canton originally came from 61 inhabited places of the municipality of Srebrenica, whereas from 10 settlements there was not even one person with the status of a displaced person.
Only in two municipalities of Tuzla Canton, Sapna and Teočak, were there no internally displaced Bosniaks of Srebrenica.

The majority of the displaced Bosniaks of Srebrenica are women, as a result of the destruction of the male population during the war and the mass killings during the genocide in July 1995. The highest, and almost equal, shares for both genders are in the population aged 18 to 50 years. A significant difference between the shares of the two genders occurs in the population aged 51 and over, where the share of women is 3.6 times higher than that of men. The main reason for this gender disproportion is the destruction of the male population during the war and genocide in the municipality of Srebrenica, from 1992 to 1995.

CHANGE IN THE STATUS OF INTERNALLY DISPLACED BOSNIAKS OF THE SREBRENICA MUNICIPALITY IN THE TUZLA CANTON DURING THE PERIOD 2005-2015

Summary

The study of changes in the status of internally displaced Bosniaks of the Srebrenica municipality in Tuzla Canton during the period 2005-2015 aimed to point out the main factors that contributed to the reduction in the number of displaced persons, and to show that this reduction was not significantly influenced by return to the area of the Srebrenica municipality. Of the total number of internally displaced Bosniaks of the Srebrenica municipality within the Federation of Bosnia and Herzegovina, 55% lived in the area of Tuzla Canton in 2005; 10 years later, that proportion had increased to 77%, although over the same period the absolute number of internally displaced persons from Srebrenica decreased by about 64% in the Federation of Bosnia and Herzegovina, and by about 52% in Tuzla Canton. The most important factors that influenced the process of changing the status of displaced persons can be classified as socio-political (the war and post-war situation in Bosnia and Herzegovina), geographic (the distance between the area of Tuzla Canton and the Srebrenica municipality), socio-economic (the resolution of employment-legal relations and housing issues), demographic (the low birthrate and gender disproportion in the refugee population) and legal (loss of status due to changes prescribed by law).

In late 1995 and early 1996, there were about 18,500 Bosniaks from Srebrenica in the area of Tuzla Canton, including 15,000 people who were expelled or fled after the occupation of the Srebrenica enclave in July 1995, and about 3,500 people who were evacuated from the Srebrenica enclave in the spring of 1993. Internally displaced Bosniaks stayed throughout the whole of Tuzla Canton, except in the municipalities of Sapna and Teočak, and were most numerous in the following municipalities: Srebrenik 25.6%, Lukavac 19.2%, Živinice 15.3% and Tuzla 14.5%, according to data from 2005; or Živinice 19.8%, Lukavac 18.2%, Tuzla 17.6% and Srebrenik 14.8%, according to data from 2015. The territorial distribution of the families displaced from Srebrenica in these municipalities was conditioned, first of all, by the favorable geo-traffic position of these settlements in relation to the municipal urban centers.

In 2015, the internally displaced Bosniaks in the area of Tuzla Canton originally came from 61 settlements of the municipality of Srebrenica, whereas from 10 pre-war settlements of this municipality there was not even one internally displaced person.
The majority of the internally displaced Bosniaks of Srebrenica are women, with a share of 58.4%, while the share of men is only 41.6%. In the age group 18-50 years the shares of the male and female populations are almost equal, with 29.3% men and 29.6% women. A significant difference exists between the shares of the male and female populations aged 51 and over: only 6.5% men compared with 23.3% women. The main cause of this gender disproportion is the destruction of the male population during the war and genocide in the municipality of Srebrenica, from 1992 to 1995.

Tab. 1: Number of persons expelled from Srebrenica, by municipalities of the Tuzla Canton, in 2005 and 2015. Source: Database of displaced persons and refugees, Federal Ministry of Displaced Persons and Refugees, Sarajevo, 2015.

The paper was created within the framework of the scientific research project "The change in the status of the refugees and internally displaced Bosniaks of the Srebrenica Municipality in the territory of FBiH", approved and funded under the 5th Internal Call of the University of Tuzla for financing/co-financing of projects in the field of science of importance for the Federation BiH in 2014, entitled "Support for research of importance for the Federation" (No. 01/2-2995/15 of 26 May 2015).
A general framework for functionally informed set-based analysis: Application to a large-scale colorectal cancer study

Genome-wide association studies (GWAS) have successfully identified tens of thousands of genetic variants associated with various phenotypes, but together they explain only a fraction of heritability, suggesting many variants have yet to be discovered. Recently it has been recognized that incorporating functional information about genetic variants can improve power for identifying novel loci. For example, S-PrediXcan and TWAS test the association of predicted gene expression with phenotypes based on GWAS summary statistics by leveraging information on the genetic regulation of gene expression, and they have found many novel loci. However, as genetic variants may have effects on more than one gene and through different mechanisms, these methods likely capture only part of the total effects of these variants. In this paper, we propose a summary statistics-based mixed effects score test (sMiST) that tests for the total effect: both the mediated effect, captured by imputing genetically predicted gene expression as in S-PrediXcan and TWAS, and the direct effects of individual variants. It allows for multiple functional annotations and multiple genetically predicted mediators. It can also perform conditional association analysis while adjusting for other genetic variants (e.g., known loci for the phenotype). Extensive simulation and real data analyses demonstrate that sMiST yields p-values that agree well with those obtained from individual-level data but with substantially improved computational speed. Importantly, a broad application of sMiST to GWAS is possible, as only summary statistics of genetic variant associations are required. We apply sMiST to a large-scale GWAS of colorectal cancer using summary statistics from approximately 120,000 study participants and gene expression data from the Genotype-Tissue Expression (GTEx) project. We identify several novel and secondary independent genetic loci.

Introduction

Single-variant analysis in genome-wide association studies (GWAS) has been successful in identifying thousands of variants associated with various diseases and traits [1]. However, these variants together explain only a fraction of heritability, suggesting that many variants remain to be discovered. Until now, most of these discoveries have been driven mainly by increases in sample size. The gain from substantially increasing sample size is diminishing, but incorporation of functional knowledge about the genome will likely play a critical role in informing the discovery of novel loci, as well as in understanding the pathways in which the genetic loci may be involved.

Research on integrating functional knowledge into GWAS has been active recently. This is in part due to the success of large collaborative projects such as the Genotype-Tissue Expression (GTEx) project [2] and the Encyclopedia of DNA Elements (ENCODE) [3], which have generated extensive knowledge about the functions of genetic variants that can be used for aggregating and weighting genetic variants. Widely available GWAS summary statistics for individual variants have made it possible to leverage this functional information, leading to many more discoveries of novel genetic loci. For example, PrediXcan [4,5] and TWAS [6] test the association between genetically predicted gene expression levels and phenotypes. A comprehensive review and comparison of various methods can be found in Barbeira et al.
(2018) [5]. The TWAS-like analysis can also be framed as a class of Mendelian randomization [7,8], in which, under some assumptions, the mediator effect of gene expression can be estimated by the inverse-variance weighted ratios of the regression coefficients of genetic variants for the phenotype and those for gene expression. All of these methods also apply to other types of mediators, including methylation and lifestyle variables (e.g., smoking) that may be regulated by genetic variants.

These approaches can be considered a type of set-based association test, in which the predictor is the weighted sum of a set of genetic variants, with weights being the effect sizes on gene expression. However, these methods do not take into account the potential effects of genetic variants beyond their effects on the expression of a specific gene. Complex trait loci typically map to regions of the genome clustered with regulatory elements, which in turn have combinatorial effects on the expression of several target genes [9,10]. Variants may have functional effects on more than one gene through their disruption of multiple regulatory elements [9]. Consequently, these approaches likely capture only part of the total effect of expression quantitative trait loci (eQTL).

We recently proposed a Mixed effects Score Test (MiST), which models the association of mediators as fixed effects while allowing for direct effects of individual variants on disease risk, adjusting for the mediators, as random effects [11,12]. Thus, MiST can increase power if some variants individually influence disease risk through functional mechanisms other than the mediators (e.g., gene expression). Another advantage of MiST is that the test statistics for the mediation and direct effects are independent, providing a flexible framework for optimally combining the two components to achieve maximal power. However, the test statistics of MiST were derived from individual-level data, which cannot be applied when individual-level data are difficult to access. To maximize power for detecting novel genetic associations, it is desirable to conduct association analysis using GWAS summary statistics, facilitating the pooling of data across studies and consortia.
In this paper, we propose forming the test statistics of MiST based on summary statistics, which we term sMiST. There are several novel contributions: 1) simultaneous testing of multiple mediators (e.g., gene expression and methylation); 2) testing of the variance component of the direct effects of genetic variants, independent of the mediation effects; 3) combining the test statistics of both mediation and direct effects to form a single overall test that can capture information from both; and 4) conditional testing of mediation and direct effects, adjusting for multiple other genetic variants. For example, one may perform conditional testing to examine whether a finding is novel conditional on known loci. When there is only one mediator, the test statistic for the association of the mediator under the assumption of no direct effects has the same form as S-PrediXcan and TWAS [5,6] and the Mendelian randomization two-stage estimator [8,13]. Our method also avoids the direct inversion of the covariance matrix of genotypes, as used in the Mendelian randomization approach for dealing with correlated variants [8]; therefore, it works on genes with varying degrees of correlation structure, without any variant pruning. We show that our method of combining summary statistics gives p-values that are consistent with those constructed from individual-level data, and that it is much more efficient computationally. We applied sMiST to a large-scale GWAS of colorectal cancer using summary statistics, and identified three novel loci located more than 1 Mb outside of known CRC loci regions, as well as one additional secondary novel locus.

MiST framework

Consider an outcome Y, which can be continuous or binary. We are interested in the association of a set of P variants G = (G_1, ..., G_P) with the outcome Y. Assume there are K mediating variables M = (M_1, M_2, ..., M_K)^T with which G is associated, and K < P; here the superscript T denotes transpose. Let X be a vector of confounders including the intercept. The confounders may include study, age, sex, and principal components to account for population structure in the data. A generalized linear model can be used to assess the association of M and G while adjusting for the confounders:

g{E(Y | X, M, G)} = Xη + M^T γ + Σ_{p=1}^{P} δ_p G_p,   (1)

where g(·) is the logit function if Y is binary and the identity function if Y is continuous. The regression coefficients η, γ = (γ_1, ..., γ_K)^T, and δ = (δ_1, ..., δ_P)^T are the effects of the confounders, the K mediators, and the direct effects of G, respectively.
To obtain estimators for γ and δ, ideally the genotypes G, mediators M, and outcome Y are measured on the same set of individuals. However, GWAS usually have very large sample sizes because of the need to detect modest genetic effects; as such, collecting mediators M, such as gene expression and methylation, on all individuals in a GWAS can be costly and logistically difficult. As M is not measured, it is instructive to examine E(Y | X, G) by integrating out the unmeasured M under the true model (1) [14]. If g is linear, it is straightforward to see that

E(Y | X, G) = Xη + γ^T E(M | G, X) + Σ_{p=1}^{P} δ_p G_p.

This suggests that if there is a model that can predict M_k well using G, we can use this model to impute the missing M_k by E(M_k | G, X). If g is the logit function, there is no closed form for E(Y | X, G), but we can approximate

g{E(Y | X, G)} ≈ {Xη + γ^T E(M | G, X) + Σ_{p=1}^{P} δ_p G_p} / ϕ,

where ϕ = {1 + γ^T cov(M | G, X) γ / 1.7²}^{1/2} [15]. Although the parameters are attenuated by a factor of 1/ϕ under this model, testing the null association of the mediation through E(M | G, X) and of the direct effects through {G_p, p = 1, ..., P} is equivalent to testing γ = 0 and δ = 0 in model (1). For simplicity, we use the same notation {γ, δ} for the attenuated parameters in the following models.

Specifically, we fit a linear regression model to the mediators:

M_k = Xη + Σ_{p=1}^{P} W_pk G_p + ε_k,   (2)

where W_pk is the weight, or regression coefficient, of the pth variant for the kth mediator. Here, to avoid introducing too much non-critical notation, we use η generically for the regression coefficients of the confounders X; they need not be the same as the η in other models. The weight W_pk is set to 0 if the pth variant is not associated with the kth mediator. In some situations, variants that are associated with the mediators M are not part of the set {G_p, p = 1, ..., P}. We can expand {G_p, p = 1, ..., P} to include these variants, setting the corresponding δ's in model (1) to 0. Now, plugging (2) into (1), we obtain

g{E(Y | X, G)} = Xη + Σ_{k=1}^{K} γ_k M̂_k + Σ_{p=1}^{P} δ_p G_p,   (3)

where M̂_k = Σ_{p=1}^{P} W_pk G_p for k = 1, ..., K. To obtain the weights {W_pk, p = 1, ..., P, k = 1, ..., K}, we can use a reference dataset that has both genotype and mediator data; the dataset need not overlap with the GWAS data. For example, PrediXcan uses genetic variants and gene expression data from GTEx and other studies to build a genetically predicted gene expression linear regression model for each gene [4].

As the number of mediators K is typically small, we treat γ as fixed effects. On the other hand, for testing the direct effects, the number of genetic variants P can be large, and an omnibus χ² test with P degrees of freedom may not be powerful. Instead, we assume that δ_p, p = 1, ..., P, follow an arbitrary distribution with mean 0 and variance τ². Thus, we test the overall null

H_0: γ = 0 and τ² = 0.

If the test rejects the null hypothesis at a pre-specified significance level, there is evidence against the null of no total effect of the genetic variants G on Y. We note that model (3) can also be formulated as a hierarchical model, as shown in Su et al. (2018) [12], where, in a generalized linear model of G and Y, the main effects of G are further modeled by incorporating functional information about the genetic variants, such as weights in predicting gene expression.
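To make the construction in model (3) concrete, the following is a minimal sketch in R (the language of our implementation) of building the genetically predicted mediators from a genotype matrix and an eQTL weight matrix; all variable names, dimensions, and values are hypothetical toy inputs, not part of the sMiST software.

```r
# Sketch of constructing genetically predicted mediators (model (3)).
# `geno` is an n x P genotype dosage matrix and `W` a P x K eQTL weight
# matrix, e.g. as distributed by PredictDB; all inputs here are simulated.
set.seed(1)
n <- 500; P <- 10; K <- 2
geno <- matrix(rbinom(n * P, size = 2, prob = 0.3), nrow = n)  # dosages 0/1/2
W    <- matrix(rnorm(P * K, sd = 0.1), nrow = P)               # eQTL weights

M_hat <- geno %*% W  # predicted mediators: M_hat[, k] = sum_p W[p, k] * G_p

# With individual-level data, model (3) could be fit directly, e.g. for a
# binary outcome y with a confounder matrix X:
y <- rbinom(n, 1, 0.1)
X <- cbind(age = rnorm(n))
fit <- glm(y ~ X + M_hat + geno, family = binomial())  # fixed + per-variant terms
```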
Summary statistics-based mixed effects score test (sMiST)

Su et al. [12] proposed the Mixed effects Score Test (MiST) to test the nullity of the fixed effects γ and the variance component τ² using individual-level data. Here we introduce a method that requires only summary statistics to perform the tests of H_0: γ = 0 and τ² = 0. Following convention, assume we have the following summary statistics at hand:

• Estimates of the marginal regression coefficients β̂*_p and their standard errors se(β̂*_p), p = 1, ..., P;
• Covariance of the genotypes, cov(G).

The marginal summary statistics can also be replaced by the score statistic Ũ_p and its variance Ṽ_p. When the variants are rare or less frequent, score statistics are numerically more reliable than estimates of the marginal regression coefficients, because score statistics are calculated under the null. The covariance cov(G) can be obtained from an internal random subset of control samples or from an external reference database. In the latter case, the reference data should match the underlying population of the summary statistics as closely as possible, to avoid false positives [5].

Based on the summary statistics, we derive the test statistics for the overall mediation effect γ = 0, as well as for the individual mediator effects γ_k, k = 1, ..., K, under τ² = 0. In addition, we derive the test statistic for τ² = 0 conditional on M̂. By conditioning on M̂, the test statistic for τ² is independent of the test statistic for γ [12]. We can then straightforwardly combine the two test statistics using the p-value-based Fisher's or minP combination procedures. Alternatively, we can use the data-driven weighted combination methods of MiST, namely the optimally weighted and adaptively weighted linear combinations, neither of which requires individual-level data. We term the summary statistics-based combined mixed effects test sMiST.

In theory, p-values derived from summary statistics and p-values derived from individual-level data are asymptotically equivalent under the null if there are no confounders and the estimate of cov(G) is accurate; the latter is important especially if the genotype data come from an external dataset that differs from the data that generated the summary statistics.
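As a concrete illustration of the combination step, the sketch below applies Fisher's and minP combinations to the two independent component p-values; the numeric inputs are hypothetical.

```r
# Combining the two independent sMiST components with Fisher's method.
# Because the variance component test is derived conditional on the predicted
# mediators, its p-value is independent of the mediation p-value under the
# null, so -2*(log p1 + log p2) follows a chi-square distribution with 4 df.
p_mediation <- 3e-4   # p-value for H0: gamma = 0 (toy value)
p_varcomp   <- 2e-2   # p-value for H0: tau^2 = 0, conditional on M_hat (toy)

fisher_stat <- -2 * (log(p_mediation) + log(p_varcomp))
p_combined  <- pchisq(fisher_stat, df = 4, lower.tail = FALSE)

# minP alternative: P(min(P1, P2) <= m) = 1 - (1 - m)^2 for independent tests
m      <- min(p_mediation, p_varcomp)
p_minp <- 1 - (1 - m)^2
```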
Identifying novel genes associated with CRC risk using sMiST

We analyzed a large GWAS of colorectal cancer (54,454 cases and 64,163 controls) [16]. We considered the mediation effect of gene expression and downloaded the estimates of genetic effects on gene expression from the PredictDB Data Repository (http://predictdb.org/). We controlled the overall type I error at 0.05, allocating 0.04 to genome-wide discovery and 0.01 to conditional analysis for identifying novel loci while adjusting for known CRC loci. Specifically, we tested 8,893 genes and used a Bonferroni correction to account for multiple testing, which yields a gene-level significance threshold of 0.04/8,893 = 4.5 × 10⁻⁶. For the conditional analysis, we set the gene-level significance threshold to 0.01 divided by the number of significant genes from the genome-wide discovery.

A total of 90 genes reached the genome-wide significance level of 4.5 × 10⁻⁶ using the optimally weighted linear combination of sMiST (S1 Table). To evaluate whether these genes are novel for CRC, we performed conditional analysis with sMiST, adjusting for the known CRC loci [16] on the same chromosome. We constructed a weight matrix W of dimension (Q + P) × (Q + 1) such that the first Q columns are 1 on the diagonal, corresponding to the known loci, and 0 everywhere else, and the last column is 0 for the first Q rows and contains the weights of the P variants used in predicting gene expression for the remaining P rows. We arranged the summary statistics for the Q known loci and the P variants as a single vector. It is then straightforward to see that the adjusted p-value for the predicted gene expression conditional on the known loci is obtained as if each of the known loci were a "mediator".

After adjusting for the known CRC risk loci, four genes remained significant at 0.01/90 = 1.1 × 10⁻⁴ (Table 1), three of which have no known loci within 1 Mb of the transcription start and end sites of the gene. For all four genes, the main association signal comes from the variance component of the random effects of the SNPs, not from the predicted gene expression. A further examination of the marginal associations along with the eQTL weights shows that the variants with the largest weights in predicting gene expression show no evidence of association (NT5DC2 and VPREB3). For PLD6 and ANKRD10, variants that up-regulate (or down-regulate) gene expression have inconsistent directions of association with the outcome, yielding non-significant p-values for the predicted gene expression (S1-S4 Figs); the odds ratio estimates are close to 1, and their 95% confidence intervals cover 1 (S2 Table). On the other hand, for these genes, several variants do show association with disease risk, which the variance component test is powerful at detecting when the signals in a set-based test are sparse.

Next, we conducted a sequential analysis to explore whether the significance of each identified gene is driven mainly by only one variant or by a subset of the variants. We first selected the most marginally significant variant after adjusting for the known loci. Then we included the known loci and this most significant variant in sMiST to evaluate the association. If the p-value for either the predicted gene expression or the variance component was <0.05, we continued the process, selecting the next most significant variant while adjusting for the known loci and the previously included variants, until neither the predicted gene expression nor the variance component p-value reached significance at 0.05. The association of each of these genes is driven by two or more variants (S3 Table); in particular, for gene ANKRD10, 8 variants are associated with CRC risk. All of this highlights the power of set-based association testing that incorporates functional information.
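To illustrate the conditional analysis described above, the following sketch constructs the augmented weight matrix of dimension (Q + P) × (Q + 1); the counts and weights are hypothetical.

```r
# Augmented weight matrix for conditional analysis: Q known loci are treated
# as "mediators" with identity weights, and the predicted gene expression
# occupies the last column. Inputs below are toy values.
Q <- 3                         # number of known CRC loci on the chromosome
P <- 10                        # number of eQTLs in the gene
w_eqtl <- rnorm(P, sd = 0.1)   # eQTL weights for the predicted expression

W_cond <- matrix(0, nrow = Q + P, ncol = Q + 1)
W_cond[1:Q, 1:Q] <- diag(Q)               # each known locus is its own "mediator"
W_cond[(Q + 1):(Q + P), Q + 1] <- w_eqtl  # predicted-expression weights

# Summary statistics for the Q known loci and P eQTLs are stacked in the same
# row order as W_cond; the p-value for column Q + 1 then tests the predicted
# expression conditional on the known loci.
```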
Performance of sMiST in simulation

We evaluated the performance of summary statistics-based sMiST in testing the mediation and variance components. We examined the type I error of sMiST by generating Y assuming both γ = 0 and τ² = 0. The type I error for sMiST, as well as for the mediation and variance component tests, is well controlled (S4 Table). Importantly, we wished to examine how closely sMiST p-values match the p-values from MiST calculated from individual-level data, which we treated as the gold standard, because an essential property of summary statistics-based test statistics is that they should agree well with the test statistics obtained as if individual-level data were available. We selected three different genes because of their different genetic structures. As the performance of sMiST is similar for all three genes, we present only the results for the CXCR1 gene here, and show the results for the other two genes (C18orf32 and ARHGAP11A) in S5 and S6 Figs, respectively. Gene CXCR1 has 42 variants with several clusters of high correlation. Details of the simulation are provided in Methods and Materials.

Impact of confounding. As the asymptotic equivalence between sMiST and individual-level data based MiST holds when there are no confounders, we examined extensively the impact of confounders on sMiST. We calculated cov(G) using the same genotyping data used to generate the outcome; the robustness of cov(G) estimated from smaller sample sizes and external data is assessed in the section "Performance of sMiST in real data analysis". For CXCR1, there is one known locus outside the gene that is highly correlated with the predicted gene expression (mediation), with a correlation of −0.66. We therefore created a confounding variable by summing this known locus and other independent genetic variants weighted by their marginal effect sizes. We varied the number of independent variants added to the confounding variable to yield correlations between the confounder and the predicted gene ranging from 0.1 to 0.4, representing moderate to high correlation. We also varied the effect size of the confounder, β, from 0.3 to 0.9 for modest to strong effects. We considered 4 general scenarios: 1) complete null; 2) null mediation effect and non-zero variance component; 3) non-zero mediation effect and null variance component; and 4) non-zero mediation effect and variance component. Fig 1 shows the scatter plots of −log10(p-values) for the mediation effect (top) and variance component (bottom) from sMiST and from individual-level data based MiST, when the correlation between the confounder and the predicted gene expression is 0.25 and β = 0.6. The points fall on the 45 degree line, suggesting that sMiST provides virtually identical results to the individual-level data based MiST for both the mediation and variance components under all four scenarios. In fact, sMiST performs very well compared with MiST even when the correlation is as high as 0.4 and β is 0.9 (S7 Fig).

Performance of sMiST with multiple mediators. Our method generalizes to settings with multiple mediators. To illustrate, we generated two correlated mediators. One mediator was the predicted gene expression of CXCR1, and the other "mediator" was the known CRC locus outside the gene, which is in nearly perfect correlation with one of the variants in CXCR1; this mimics testing the joint and conditional effects of predicted gene expression and a known locus. We combined the genotype data of CXCR1 and the known CRC locus into a mega-genotype n × (P + 1) matrix, where n is the number of subjects and P is the number of eQTLs in the CXCR1 gene. We assigned a (P + 1) × 2 weight matrix of the form (W_1, W_2), where W_1 = (w_1, ..., w_P, 0)^T and W_2 = (0, ..., 0, 1)^T. Here, the weights are again from the PredictDB Data Repository. We present the p-value for testing the joint mediation effect and the p-value for the variance component, as well as the individual p-values associated with each component. sMiST again shows virtually identical p-values to the individual-level data based p-values, for both the joint mediation effect and each individual mediator's effect conditional on the other mediator (Fig 2).

Performance of sMiST with rare variants. We compared the performance of summary statistics-based sMiST with individual-level data based MiST for rare variants with MAF from 0.1% to 1.0%. We calculated sMiST using {β̂*_p, se(β̂*_p)}, denoted sMiST-Wald, and using score statistics {Ũ_p, Ṽ_p}, denoted sMiST-Score; we also calculated sMiST using standardized score statistics (set to 0 when negative), which we denote sMiST-Standardized Score. Fig 3 shows the comparison of these sMiST test statistics with MiST under the null and alternative hypotheses; sMiST-Wald yields many outliers for both the mediation and variance component tests.
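For the rare-variant comparison above, the relationship between the Wald-type and score-type summary statistics for a single variant can be sketched as follows; the inputs are hypothetical toy values, and the approximations hold under the null.

```r
# Moving between Wald-type and score-type summary statistics for one variant.
# Score statistics are computed under the null, which makes them numerically
# more stable for rare variants. Toy inputs only.
U_p <- 12.4   # score statistic for variant p
V_p <- 35.0   # variance of the score under the null

beta_hat <- U_p / V_p        # approximate marginal log-odds ratio
se_hat   <- 1 / sqrt(V_p)    # approximate standard error
z_p      <- U_p / sqrt(V_p)  # Wald and score Z agree under the null
p_p      <- 2 * pnorm(-abs(z_p))
```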
Additional simulation results

We assessed the power of sMiST, and its comparison with MiST, under a wide range of scenarios: (1) varying strength of the association of G with M, with R² = 0.05, 0.2, and 0.8; (2) varying proportion of variants with direct effects, Prop = 0.1, 0.2, 0.4, 0.6, and 0.8; and (3) mis-specification of the model for M given G, where the true link function is log but a linear link is used to fit the model. As expected, as R² increases, the power for the mediation effect increases, while the power for the direct effect stays the same (S5 Table). As the proportion of variants with direct effects increases, the power for the mediation effect is constant but the power for the direct effects increases. As a result, the power for the total effect of mediation and direct effects increases under both scenarios.

When the relationship of G and M is mis-specified, the power for the mediation effect is reduced substantially when R² = 0.05, but not as much when R² = 0.2 or 0.8 (S6 Table). Interestingly, testing of direct effects can recover some of the power lost for mediation effects due to model mis-specification. The power for the total effects under model mis-specification is nearly the same as the power when the model is correctly specified when R² ≥ 0.2 or when the proportion of variants with direct effects is ≥ 0.6.

Performance of sMiST in real data analysis

An important input to sMiST, or to any summary statistics-based test, is the covariance matrix of the genetic variants. Instead of focusing on one or a few genes, we evaluated the impact of the covariance matrix through genome-wide real data analyses of the GECCO studies, for which we have individual-level data on both outcomes and genotypes. Thus, we can directly compare the summary statistics-based test sMiST with the individual-level data based test MiST across a broad spectrum of genetic architectures and weight distributions for calculating predicted expression.

We obtained the summary statistics of marginal association log-odds ratio estimates and standard errors from the GECCO data and used sMiST to calculate the p-values for the mediator effect and variance component. In addition, using the individual-level data, we obtained the mediation and variance component p-values using MiST, and treated these p-values as the gold standard against which sMiST is compared.

In practice, the LD structure is often not available from the same source that generated the summary statistics; hence, external reference populations are used to provide the estimated LD matrix for the variants. To evaluate our proposed method in this situation, we conducted a genome-wide analysis with summary-level information from two different cohorts: GWAS summary statistics from GECCO and LD matrices calculated from CORECT. We compared the p-values of the fixed effects and variance components from our method with those obtained from MiST based on individual-level data in GECCO. From the scatter plots of the two sets of p-values presented in Fig 4, we observe that the p-values for the fixed effects and variance components from our method are comparable to the results from MiST using individual-level data. The alignment of the points around the line of equality validates the proposed method when LD information from a similar external reference population is leveraged.
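A minimal sketch of obtaining the LD input from an external reference panel, as in the CORECT-based analysis above; the reference genotype matrix here is simulated and purely illustrative.

```r
# Estimating the genotype covariance/LD matrix from an external reference
# panel when it is not available from the study that produced the summary
# statistics. `ref_geno` is a hypothetical reference dosage matrix.
set.seed(2)
n_ref <- 5000; P <- 10
ref_geno <- matrix(rbinom(n_ref * P, 2, 0.25), nrow = n_ref)

cov_G <- cov(ref_geno)  # covariance of the genotypes
cor_G <- cor(ref_geno)  # LD (correlation) matrix used in the test statistics

# The reference panel should match the ancestry of the GWAS population as
# closely as possible; a mismatched panel can inflate the type I error.
```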
We then assessed the impact of the sample size of the genotyping data used for calculating the covariance matrix. We randomly drew sub-samples of different sizes from GECCO and estimated the covariance of the genotypes from the sub-samples. Fig 5 shows the scatter plots of −log10(p-values) for the mediation and variance components obtained from sMiST with the covariance matrix based on n = 1,000, 5,000, and 10,000 samples, respectively, compared with the p-values from MiST. The p-values for sMiST and MiST generally fall on the 45 degree line; however, as the sample size becomes smaller, more and more outliers appear for the variance component test, where sMiST yields much smaller p-values than MiST. Upon close examination, these genes have an extreme correlation structure: all variants are in nearly perfect correlation with each other. For such extreme genes, the covariance estimates from small samples can be even more nearly singular, or perfectly singular. Although our method does not directly invert the whole covariance matrix for the mediation test, the variance component test still involves inverting W^T cov(G) W while projecting out the mediation component. Therefore, with a nearly singular matrix, it can be numerically unstable.

To avoid this numerical problem, we regularized the correlation matrix of G by adding λI, where I is the identity matrix, as in ridge regression. Following the asymptotic consistency results of Knight and Fu (2000) [17] for penalized regression, we chose λ as a decreasing function of the sample size n used in calculating the covariance matrix, such that the parameter estimates are consistent as n increases. We performed the regularization for all genes, since for a gene with a moderate correlation structure the covariance matrix is insensitive to the regularization. Fig 5b shows the scatter plots of regularized sMiST compared with MiST; it is clear that all outliers are gone even when n = 1,000, and the regularization has minimal impact on the overall performance of sMiST. There are some points below the 45 degree line for the variance component test, suggesting that sMiST may be slightly conservative. However, these generally occur when the p-values are large. When the p-values are small, where they matter, sMiST, even with regularization, matches MiST very well.

Comparison of sMiST with S-PrediXcan and TWAS

We compared the p-values for the predicted gene expression from sMiST (sMiST-mediation), as well as from the two popular summary statistics-based methods S-PrediXcan [5] and TWAS [6], with the p-values calculated from individual-level data from GECCO, as described above. S-PrediXcan uses se(β̂*_p) and the MAF of the pth SNP to approximate the variance of the outcome, while sMiST approximates the correlation of β̂* by the correlation matrix of the genotypes. TWAS takes a weighted sum of Z statistics, which differs from S-PrediXcan and sMiST-mediation by a factor of the proportion of the phenotype explained by a SNP's genotype [5,6]. In general, this factor is close to 1; hence, we do not expect substantial differences among TWAS, S-PrediXcan, and sMiST-mediation, as shown in Fig 6.
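The following sketch shows the ridge-style regularization applied to the genotype correlation matrix. The specific rate at which λ decays with n is an assumption here; the essential property, as described above, is that λ shrinks to 0 as the reference sample size n increases.

```r
# Regularizing a near-singular genotype correlation matrix before inverting
# W^T cov(G) W, as applied to all genes in sMiST. The 1/sqrt(n) decay for
# lambda is an assumed illustrative choice, not the exact form used.
regularize_cor <- function(R, n, c0 = 1) {
  lambda <- c0 / sqrt(n)         # assumed form: vanishes as n increases
  R + lambda * diag(nrow(R))     # ridge-style diagonal inflation
}

R <- matrix(0.99, 5, 5); diag(R) <- 1  # nearly singular high-LD block
R_reg <- regularize_cor(R, n = 1000)
# solve(R) is ill-conditioned; solve(R_reg) is numerically well behaved.
```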
Discussion

We proposed a versatile set-based approach using summary statistics, sMiST, for testing the total effect of multiple mediators and direct effects. The computational time for sMiST is much shorter than for individual-level data based MiST. For example, for a dataset of 10,000 cases and 10,000 controls, MiST takes 0.955 seconds, while sMiST takes only 0.022 seconds to calculate the p-value. sMiST also provides a p-value for each mediator in the presence of the other mediators under the assumption of τ² = 0, and a p-value for the direct effects conditional on all the mediators. When there is evidence for a mediator effect, one may further perform co-localization analysis using previously proposed methods [18,19,20,21] to examine whether any specific genetic variant is pleiotropic for both the mediator and disease risk.

We offer a few observations from our extensive simulation and real-data-based studies. Generally speaking, larger sample sizes lead to better estimates of the covariance matrix, and thus better alignment with results from individual-level data. With growing external genotyping databases, having a large enough sample size to calculate the covariance is generally not a problem. To prevent numerical problems for genes with extreme correlation structures, we applied regularization to all genes, irrespective of correlation structure, so that the regularization minimizes numerical instability for genes with extreme correlation structures while having minimal impact on other genes. Using this approach, we can mitigate bias that may arise in high-LD regions with a sample size as low as 1,000. This regularization could potentially lower the power of our method and yield slightly conservative results; however, such negative impact diminishes, as our regularization is a function of the sample size and approaches 0 as the sample size increases.

Under all four scenarios, weaker correlation between the confounder and the fixed effects leads to better alignment between individual-level and summary-based mediation effect p-values, while the effect size of the confounder does not affect the performance much. In particular, when the correlation is at its highest (0.4), summary statistics-based mediation effect p-values can be somewhat over-conservative. The performance of the direct effect test is not affected, because of the orthogonalization of mediator and genotype in the data generation.
Generally, sMiST gains power by testing the total association of mediation and direct effects, compared with testing only the mediation effect. However, when there is no direct effect, sMiST may lose some power because it tests an additional variance component parameter that has a null effect. More powerful combination methods can be employed to combine the test statistics for the mediation and variance components to mitigate this impact [12]. These combination methods rely only on the p-values or test statistics and can be applied directly to sMiST. When the direct effect has a different sign from the mediated effect (an inconsistent mediator) [22], the power for testing the mediation effect, and for sMiST, can be considerably reduced; however, if the direct effect is sufficiently strong, the power for mediation can approach 1 (S7 Table). In this situation, one needs to be cautious about the interpretation of mediation. Methods have been proposed to test an inconsistent mediation effect for one variable (here, one genetic variant) [23]. However, there is a lack of research on testing inconsistent mediation effects with multiple genetic variants. It is probably unlikely, in practical situations, that a mediator is inconsistent with all genetic variants. Nevertheless, it is a topic that warrants future research.

From our application of sMiST to CRC GWAS data, we identified three novel genes contributing to CRC risk that were not previously identified in the single-variant analysis of the same dataset. NT5DC2 has been shown to markedly reduce the expression of Fyn, a Src-family proto-oncogene, and has been implicated in glioblastoma [24], though not yet linked to CRC susceptibility. Of interest, there are a couple of other nearby genes in the region, NISCH and SEMA3G, which share gene-linked regulatory elements, are expressed in T cells, and have been shown to play a role in CRC [25,26]. For VPREB3, the encoded protein is thought to be involved in B-cell maturation and may play a role in assembly of the pre-B cell receptor. A nearby gene, CABIN1, plays an important role in the T-cell receptor-mediated signal transduction pathway; expression of this gene has previously been associated with CRC recurrence [27]. PLD6 is a phospholipase of the outer mitochondrial membrane and acts as a regulator of mitochondrial shape by facilitating mitochondrial fusion [28]. Interestingly, a previous study showed that depletion of PLD6 prevents MYC repression of ANKRD1 and several other target oncogenes of YAP/TAZ. It has been hypothesized that mitochondrial dynamics, influenced in part by PLD6, might be an integral part of MYC-induced anabolic metabolism [29]. ANKRD10 lies in a region dense with cancer-related genes (CDKN2A, CDKN2B), and thus it is not surprising that there may be multiple variants with independent regulatory effects. There is no report on the function of this gene; however, its paralog ANKRD6 recruits CKIepsilon to the beta-catenin degradation complex and allows efficient phosphorylation of beta-catenin, thereby inhibiting beta-catenin/Tcf signals [30]. As such, ANKRD10 may have a similar function in regulating the Wnt pathway as ANKRD6.

It is of great interest to study the effect of the interplay between the mediating variables M and the genetic variants G on phenotypes. Huang et al.
(2014) [14] derived g{E(Y | G, X)} under the interaction model by integrating out M and found that the resulting model depends not only on the linear terms of the confounders X and of G, but also on the cross-products between X and G and on second-order terms of G. Conceptually, the proposed summary statistics-based sMiST can be extended to study interaction effects. However, currently available summary statistics on marginal associations do not permit modeling of interaction effects. Summary statistics on pairwise interactions among G, as well as on interactions between X and G, would be needed in order to study the interaction effects of mediation.

Ethics statement

This study uses the summary statistics of genome-wide association studies for colorectal cancer, and the de-identified genotyping data from GECCO, CCFR and CORECT. The study was approved by the Institutional Review Board at the Fred Hutchinson Cancer Research Center in Seattle, WA, under file numbers 3995 and 6501.

Derivation of sMiST

Assume that there are no confounders. We first focus on the linear regression model. Consider a study of n independent individuals. Let Y be the n × 1 vector of outcomes, G the n × P matrix of the P variants for the n individuals, W the P × K matrix with W_pk the regression coefficient of the pth variant for the kth mediator, and D the diagonal matrix of cov(G). Further, let β̂* = (β̂*_1, ..., β̂*_P)^T denote the vector of marginal regression coefficient estimates and β* its limit. As n for GWAS is typically very large, under regularity conditions, by the continuous mapping theorem and the central limit theorem, n^{1/2}(β̂* − β*) converges to a multivariate normal distribution with mean 0, whose covariance can be obtained from cov(β̂*). For simplicity, we center G and Y so that the intercept is 0.

Under τ² = 0, it is easy to see that the estimator for the mediation effect is

γ̂ = (W^T cov(G) W)^{-1} W^T D β̂*,

where β̂*_p = (G_p^T G_p)^{-1} G_p^T Y. We can obtain cov(γ̂) as

cov(γ̂) = (W^T cov(G) W)^{-1} W^T D cov(β̂*) D W (W^T cov(G) W)^{-1},

where cov(β̂*) = D_se cor(G) D_se, with D_se the diagonal matrix of the standard errors se(β̂*_p). Here cor(G) is the correlation matrix of G, which is the exact correlation of β̂* under the null but an approximation under the alternative. Then the test statistic for the mediation effect is

U_γ = γ̂^T {cov(γ̂)}^{-1} γ̂.

Under H_0: γ = 0 and τ² = 0, U_γ ~ χ²_K. The test statistic for the kth mediator, γ_k = 0, is γ̂_k / se(γ̂_k) ~ N(0, 1) for k = 1, ..., K, where se(γ̂_k) is the square root of the kth diagonal element of cov(γ̂).

For the variance component test, we derive the test statistic under τ² = 0, conditioning on M̂. By this, the variance component test adjusts for the mediator effect and is independent of U_γ (Su et al. 2018) [12]. When combining the two test statistics using, e.g., a weighted linear combination, if they were correlated, the search space for the weight would be restricted; independent test statistics circumvent this restriction. Further, owing to the non-conventional distribution of the variance component test, having independent test statistics avoids a complex correlation structure and makes it straightforward to derive the distribution of the combined test statistic. This is very useful, as it allows us to calculate p-values quickly in a genome-wide search. Moreover, there are many methods for combining independent test statistics, including the popular p-value-based Fisher's and Tippett's combinations and data-adaptive combinations, which can be readily applied to our independent test statistics for the mediation effect and variance component [12,31].

The key to deriving the variance component test statistic conditional on M̂ is that, instead of using β̂*_p, we derive summary statistics α̂*_p for each of the P genetic variants by conditioning out M̂_k, k = 1, ..., K. They are given by

α̂*_p = A (W^T D β̂*; D_p β̂*_p),   A = (0, 0, ..., 0, 1) C^{-1},

where D_p is the pth diagonal entry of D, and C is a (K + 1) × (K + 1) matrix with C_jk = W_j^T cov(G) W_k for j, k = 1, ..., K, with W_j and W_k the jth and kth columns of W, and C_{(K+1),·} = C^T_{·,(K+1)} = [cov(G)_{p·} W, D_p]. The covariance of α̂* can then be straightforwardly obtained by applying A to the covariance of the stacked vector (W^T D β̂*; D_p β̂*_p), i.e., cov(α̂*_p, α̂*_q) = A cov{(W^T D β̂*; D_p β̂*_p), (W^T D β̂*; D_q β̂*_q)} A^T. The test statistic for the variance component is

U_τ² = Σ_{p=1}^{P} U²_{α*,p}, where U_{α*} = α̂* / var(α̂*), elementwise.

Under the null, the variance component test U_τ² follows a mixture of χ²_1 distributions, with the mixture weights given by the eigenvalues of the matrix D_{α*} R* D_{α*}, where D_{α*} is the diagonal matrix with entries 1/se(α̂*_p) and R* is the correlation matrix of α̂*, both of which can be easily obtained from cov(α̂*).
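Putting the pieces of the derivation together, the following is a numerical sketch of computing U_γ and approximating a mixture-of-χ² p-value for the variance component; all inputs are toy values, and the Satterthwaite moment-matching step is a simple stand-in for exact methods such as Davies' algorithm.

```r
# Numerical sketch of the two sMiST components from summary statistics;
# all inputs are hypothetical toy values.
set.seed(3)
P <- 6; K <- 1
covG <- 0.3 * 0.5 ^ abs(outer(1:P, 1:P, "-"))  # toy genotype covariance (AR(1))
corG <- cov2cor(covG)
D    <- diag(diag(covG))
W    <- matrix(rnorm(P, sd = 0.2), ncol = K)   # eQTL weights (P x K)
beta <- rnorm(P, sd = 0.02)                    # marginal log-OR estimates
se   <- rep(0.01, P)                           # their standard errors

# Mediation effect: gamma_hat = (W' covG W)^{-1} W' D beta_hat
Cmat      <- t(W) %*% covG %*% W
gamma_hat <- solve(Cmat, t(W) %*% D %*% beta)
cov_beta  <- diag(se) %*% corG %*% diag(se)
cov_gamma <- solve(Cmat) %*% t(W) %*% D %*% cov_beta %*% D %*% W %*% solve(Cmat)
U_gamma   <- drop(t(gamma_hat) %*% solve(cov_gamma) %*% gamma_hat)
p_med     <- pchisq(U_gamma, df = K, lower.tail = FALSE)

# Variance component: p-value for a weighted sum of chi-square(1) variables
# via Satterthwaite moment matching; `lambda` would be the eigenvalues of
# D_alpha %*% Rstar %*% D_alpha obtained from cov(alpha_hat*).
satterthwaite_p <- function(Q, lambda) {
  a  <- sum(lambda^2) / sum(lambda)    # scale parameter
  df <- sum(lambda)^2 / sum(lambda^2)  # effective degrees of freedom
  pchisq(Q / a, df = df, lower.tail = FALSE)
}
```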
Under the logistic regression model, by a Taylor expansion, γ̂ can be written in a form involving the weight matrix Δ of the logistic score equations; here G is centered. For simplicity of presentation, we omit differences of order o_p(n^{-1/2}), because for the √n asymptotic normality these differences are 0. Assuming Δ is constant on the diagonal, we can reorganize γ̂ so that it has exactly the same form as γ̂ under the linear model. Under the null, U_γ ~ χ²_K. We note that Δ is constant under the null. However, even when the null does not hold, Hu et al. (2013) [32] show that Δ does not depend strongly on covariates, and our extensive simulations show that the proposed test statistics perform well under this approximation. Similarly to the derivation of γ̂, we can obtain the test statistic for the variance component under the logistic regression model, which has the same form as the variance component test under the linear model.

When there are confounders, the derivation of the test statistics using summary statistics becomes complicated. Under the linear model, γ̂ is a weighted sum of X^T Y and W^T G^T Y, with the weights given by the corresponding elements of the inverse of the covariance matrix of (X, GW). If the effects of the confounders are 0, or X and G are independent, then γ̂ depends only on G, and the above test statistics are unchanged. If the effects of the confounders are not 0 and X and G are correlated, γ̂ will depend on X; however, our proposed test statistics hold up very well in our extensive simulations and real data analyses, suggesting that they are robust even in the presence of confounders.

Datasets

Summary statistics (log-odds ratio estimates and standard errors for genome-wide genetic variants) were obtained from a meta-analysis of GWAS from three large consortia: the Genetics and Epidemiology of Colorectal Cancer Consortium (GECCO), the Colon Cancer Family Registry (CCFR), and the Colorectal Cancer Transdisciplinary Study (CORECT) [16]. In total, the consortia comprise 54,454 cases and 64,163 controls of European ancestry. The genotyping data were imputed to the Haplotype Reference Consortium [33], with approximately 40 million variants. The linkage disequilibrium, or covariance, of the genotypes was calculated using individual-level data from GECCO (n = 26,554). The details of the study designs, genotyping QC, association testing and meta-analysis can be found elsewhere [16]. A brief summary of the studies in these consortia is provided in S8 Table. We downloaded the weights, or regression coefficients, of cis (<1 Mb from the gene start or end) regulatory variants associated with gene expression in whole blood from the PredictDB Data Repository (http://predictdb.org/). The regression coefficients were estimated from a regularized linear regression model with an elastic-net penalty [4]. The models were developed using a reference dataset of genotype and whole-blood transcriptome data from 922 normal individuals from Depression Genes and Networks [34]. We considered genes with predictive R² > 0.01 in the gene expression model, resulting in 8,893 genes. Using regulatory information derived from whole blood is relevant for studying susceptibility to CRC for two primary reasons. First, a subset of the immune-relevant cell types present in whole blood is relevant to CRC risk; in particular, T-cell populations of the intestine play a critical role in orchestrating the careful balance between immune activation and tolerance at the mucosal layer. Second, whole blood is the largest reference transcriptome dataset. As many tissues and cell types
share common heritability in gene expression, in some cases whole blood models are preferred for building robust predictive models because of their large sample size.

Performance of sMiST in simulation

We selected three genes (CXCR1, C18orf32, and ARHGAP11A) from the eQTL database of the PredictDB Data Repository. Both CXCR1 and C18orf32 are of moderate size (approximately 40 genetic variants), while ARHGAP11A is a larger set (92 variants). In terms of LD structure, the variants of both C18orf32 and ARHGAP11A are largely independent or only weakly correlated; CXCR1, however, contains several clusters of variants that are nearly perfectly correlated.

We used the GWAS genotyping data from GECCO as the template (n = 26,554) and generated the disease status under the generalized linear regression model (1) with the logit link. We set the intercept to −3, yielding about a 5% baseline disease probability. We generated the mediator M = cB + ε, where B = Σ_{p=1}^{P} w_p G_p is the genetically predicted gene expression and ε ~ N(0, σ²). Here, c and σ² were set such that the variation of M explained by G is 0.05, 0.20, or 0.80, while keeping the variance of M constant at 1.5. The weights {w_p, p = 1, ..., P} were obtained from the PredictDB Data Repository. The effect of the mediator M was set to log(2). Further, we let the random effects δ_p ~ N(0, 0.05). To mimic individual variant contributions not explained by predicted gene expression, we took the residuals from regressing the sum of direct effects Σ_{p=1}^{P} δ_p I_p G_p on B, where I_p is 1 if the pth variant has a direct effect and 0 otherwise, and added the residuals as direct effects to the model. The proportion of variants with direct effects was set to 0.1, 0.2, 0.4, 0.6, 0.8, or 1.0. To save space, for most simulations presented in the main text we set R² = 0.05 and let all variants have direct effects, unless otherwise noted. Results for the other parameter settings are provided in S4-S7 Tables. For each simulation setting, we generated 1,000 simulated data sets, each consisting of 1,000 cases and 1,000 controls.

Implementation

We implemented sMiST in the R programming language. The software is available for download at https://research.fhcrc.org/hsu/en/software.html.
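Returning to the simulation settings described above, the following is a compact sketch of the generative model, with simulated stand-ins for the GECCO genotype template and the PredictDB weights.

```r
# Sketch of the simulation generative model; the genotype matrix and eQTL
# weights are simulated stand-ins for the GECCO template and PredictDB data.
set.seed(4)
n <- 20000; P <- 42
G <- matrix(rbinom(n * P, 2, 0.2), nrow = n)   # genotype template stand-in
w <- rnorm(P, sd = 0.1)                        # eQTL weights stand-in

B  <- as.vector(G %*% w)                       # genetically predicted expression
r2 <- 0.05                                     # variance of M explained by G
c  <- sqrt(r2 * 1.5 / var(B))                  # scale so that var(M) = 1.5
M  <- c * B + rnorm(n, sd = sqrt(1.5 * (1 - r2)))

delta  <- rnorm(P, sd = sqrt(0.05))            # random direct effects
direct <- as.vector(G %*% delta)
direct <- resid(lm(direct ~ B))                # orthogonalize against B

eta <- -3 + log(2) * M + direct                # model (1) with the logit link
y   <- rbinom(n, 1, plogis(eta))
# Cases and controls are then sampled from y to form each simulated replicate.
```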
Fig 1 shows the scatter plots of −log10(p-values) for the mediation effect (top) and variance component (bottom) of sMiST and the individual-level data based MiST, when the correlation between the confounder and predicted gene expression is 0.25 and β = 0.6. The points fall on the 45-degree line, suggesting that sMiST provides virtually identical results to the individual-level data based MiST for both the mediation and variance components under all four scenarios. In fact, sMiST performs very well compared with MiST even when the correlation is as high as 0.4 and β is 0.9 (S7 Fig).

Performance of sMiST with multiple mediators. Our method can be generalized to instances where there are multiple mediators. To illustrate, we generated two correlated mediators. One mediator was the predicted gene expression of CXCR1, and the other "mediator" was a known CRC locus outside of the gene, which is in nearly perfect correlation with one of the variants in CXCR1. This mimics the scenario of testing the joint and conditional effects of predicted gene expression and a known locus. We combined the genotype data of CXCR1 and of the known CRC locus into a mega-genotype n × (P + 1) matrix, where n is the number of subjects and P is the number of eQTLs in the CXCR1 gene. We assigned a (P + 1) × 2 weight matrix and present the p-value for testing the joint mediation effect and the p-value for the variance component, as well as the individual p-values associated with each component. sMiST again shows virtually identical p-values to the individual-level data based p-values, for both the joint mediation effect and each individual mediator's effect conditional on the other mediator (Fig 2).

Two summary-statistics versions of the test were compared: Wald statistics {β̂*_p, se(β̂*_p)}, denoted sMiST-Wald, and score statistics {Ũ_p, Ṽ_p}, denoted sMiST-Score. Fig 3 shows the comparison of these sMiST test statistics with MiST under the null and alternative hypotheses. It is clear that sMiST-Wald yields many outliers for both the mediation effect and the variance component.

Fig 1. Scatter plots of −log10(p-values) for testing the mediation effect and variance component for sMiST compared with individual-level data based MiST in the presence of confounding. https://doi.org/10.1371/journal.pgen.1008947.g001

Fig 5. Effect of sample sizes in calculating the genotype covariance matrix on the mediation and variance component p-values for sMiST without regularization (top panel) and with regularization (bottom panel). https://doi.org/10.1371/journal.pgen.1008947.g005

β* is the limit of β̂*. As n for GWAS is typically very large, under regularity conditions, by the continuous mapping theorem and the central limit theorem, n^{1/2}(β̂* − β*) converges to a multivariate normal distribution with mean 0 and the covariance of β̂*. Under τ² = 0, the estimator for the mediation effect is obtained analogously. The key to deriving the variance component test statistic conditioning on M̂ is that, as opposed to using β̂*_p, we derive the summary statistics for each of the P genetic variants, α̂*_p, by conditioning out M̂_k, k = 1, ..., K; these are obtained with A = (0, 0, ..., 0, 1)C⁻¹, where D_p is the pth diagonal entry of D and C is a (K + 1) × (K + 1) matrix with C_{jk} = W_j^T cov(G) W_k, W_j and W_k being the jth and kth columns of W, and C_{(K+1),·} = C^T_{·,(K+1)} = [cov(G)_{p·} W, D_p]. The covariance of α̂* can then be straightforwardly obtained as cov(α̂*) = A cov(β̂*) A^T.

Table 1. Novel CRC associated genes and secondary genes.
Novel genes: 0 known loci within 1 Mb. The unadjusted and adjusted p-values are without and with adjustment for the known CRC loci on the same chromosome as the gene. ²The column names are as follows: R² is the variation of gene expression explained by eQTLs from the PrediXcan model; N SNPs is the number of variants in the gene; chr is the chromosome number; Pred Exp is the p-value for predicted gene expression; Var Comp is the p-value for the variance component; sMiST is the combined p-value of the predicted gene expression and variance component tests using an optimally weighted linear combination.

https://doi.org/10.1371/journal.pgen.1008947.t001
Successful Creation of an Anemia Management Algorithm for Hemodialysis Patients

Introduction: Several anemia guidelines for hemodialysis patients have recommended a target hemoglobin (Hb) range of 10–12 g/dL. However, maintaining Hb values continuously within a narrow target has been difficult, and there has been no generally accepted anemia management algorithm for hemodialysis patients.

Methods: In our study, we created an anemia management algorithm that considers the length of erythrocyte lifetimes, focuses on the combination of erythropoiesis-stimulating agent management and iron administration, and prevents iron deficiency and overload. Our algorithm established a target Hb range of 10–12 g/dL.

Results: We evaluated our algorithm in 49 patients for 6 months. The mean Hb values were approximately 11 g/dL during our study period. The percentage of patients in the target Hb range of 10–12 g/dL increased from 77.6% (38 of 49) at baseline to 85.7% (42 of 49) at 4–6 months. Throughout monthly regular blood tests during 1–6 months after we introduced our algorithm, Hb values remained within the target range in 55.1% (27 of 49) of patients. The standard deviation of Hb values significantly decreased at 5 and 6 months (P=0.013 and P=0.047, respectively; 1 g/dL at 0 months, 0.7 g/dL at 5 months, and 0.7 g/dL at 6 months). Our algorithm also succeeded in suppressing cumulative doses of iron (≤800 mg) and decreasing ferritin values significantly (P=0.011). There were no significant differences in erythropoiesis-stimulating agent doses between 0 and 6 months (P=0.357).

Conclusion: Our anemia management algorithm successfully increased the number of patients in the target Hb range, significantly decreased the Hb standard deviation, suppressed cumulative doses of iron, and decreased ferritin values. These results suggest a better prognosis for hemodialysis patients. Further studies are required to evaluate our algorithm.

Introduction
The use of erythropoiesis-stimulating agents (ESAs) and iron administration has enabled anemia control in hemodialysis patients without blood transfusions. [1][2][3] However, hemoglobin (Hb) values that are too high or too low can worsen the prognosis. [4][5][6][7][8] The European Renal Best Practice position statement recommends a target Hb range of 10–12 g/dL. 9 Similarly, the Japanese Society for Dialysis Therapy recommends a target Hb range of 10–11 g/dL, with a target range of 11–12 g/dL in relatively young, active patients. 6 In the USA, the majority (66%) of hemodialysis patients had mean Hb levels in the range of 10–12 g/dL, 10 although the concept of a target Hb range had been removed. 11 However, maintaining Hb values continuously within a narrow target range has been described as difficult. 12,13 Thus, there have been many attempts to create anemia management algorithms for hemodialysis patients. [14][15][16][17][18][19][20][21][22] Nevertheless, there is no generally accepted algorithm. 14 A majority of algorithms do not consider the length of erythrocyte lifetimes 23 and mainly focus on ESA management without considering the combination of ESA management and iron administration. To appropriately manage the ESA dose, we believe that an anemia management algorithm should consider the length of erythrocyte lifetimes and focus on the combination of ESA management and iron administration.
Further, we believe that an anemia management algorithm should prevent not only iron deficiency but also iron overload. Thus, we created an algorithm that addressed these issues and evaluated its control of Hb values and iron indices.

Materials and methods

Study design
The study population was drawn from 88 hemodialysis patients at the Yokkaichi Social Insurance Hospital (renamed Yokkaichi Hazu Medical Center in April 2014). The exclusion criteria included patients with chronic hepatitis, patients whose principal physicians wanted to control any anemia by themselves, hospital inpatients, patients who did not receive ESAs, patients who received continuous erythropoietin receptor activator, patients who declined consent, and patients who had started hemodialysis within the past 6 months (Figure 1). Therefore, we introduced our anemia management algorithm to 53 patients from May 2013 to November 2013 and evaluated how our algorithm could control Hb values and iron indices. Withdrawal criteria included patients whose Hb values decreased by ≥1.5 g/dL from the last Hb result because of hemorrhage, patients who needed a blood transfusion, patients who dropped out from our algorithm because of a change in hospitals, and patients who died. Our study was approved by the ethics committee of the Yokkaichi Social Insurance Hospital (now known as Yokkaichi Hazu Medical Center). All participating patients signed informed consent forms.

Blood test schedule
Blood tests were performed at the beginning of the first dialysis of the week, which had the longest interval from the last dialysis. At a regular blood test, which was a monthly blood test, Hb, serum iron, total iron-binding capacity, and albumin were measured. At an intermediate blood test, which was taken between two consecutive regular blood tests, Hb was measured (Figure 2). Ferritin was measured once every three regular blood tests, that is, at 0, 3, and 6 months after we introduced our algorithm. The transferrin saturation rate (TSAT) was calculated from serum iron and total iron-binding capacity (TSAT = 100 × serum iron/total iron-binding capacity).

Anemia management algorithm
Our anemia management algorithm comprised an iron algorithm and an ESA algorithm. Our algorithm established a target Hb range of 10–12 g/dL. All decisions made by our algorithm were evaluated by physicians. If a decision was approved by physicians, it was reflected in the treatment.

Iron algorithm
To prevent iron deficiency, the iron algorithm made decisions regarding iron administration at every regular blood test (Figure 3). If ferritin was not measured during a month, the last ferritin result was used. If iron administration was selected, the patient received intravenous administration of saccharated ferric oxide (Fesin®; Nichi-iko Pharmaceutical Co, Ltd, Toyama, Japan) of 40 mg per week for 4 weeks, which was defined as one course. Criteria for TSAT and ferritin were compliant with the Japanese Society for Dialysis Therapy anemia guidelines. 6
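A minimal sketch of the iron-algorithm decision logic is given below. Figure 3 is not reproduced here, so the branching and the exact thresholds (TSAT ≤20% and ferritin <100 ng/mL, in line with the JSDT criteria the algorithm follows) are assumptions rather than the published flowchart.

def tsat(serum_iron_ug_dl, tibc_ug_dl):
    # Transferrin saturation as defined in the paper: 100 * Fe / TIBC.
    return 100.0 * serum_iron_ug_dl / tibc_ug_dl

def decide_iron_course(hb, tsat_pct, ferritin_ng_ml):
    # Sketch of the iron decision at a regular blood test. Returns True if
    # one course (40 mg/week x 4 weeks of IV saccharated ferric oxide) is
    # given. Thresholds and the high-Hb guard are assumed, not Figure 3.
    if hb >= 12.0:                      # high Hb: avoid excessive iron
        return False
    return tsat_pct <= 20.0 and ferritin_ng_ml < 100.0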
ESA algorithm
The ESA algorithm comprised the processes shown in Figures 4–12. First, the algorithm selected one of five charts (Figures 5–9) according to when the ESA dose was last increased or decreased (Figure 4). Second, using the selected chart, the algorithm determined the condition of the current Hb value. Third, using the decision table (Figure 11), the algorithm selected a treatment according to the state of iron administration. If an ESA dose was going to be changed, the dose per week was increased or decreased step by step according to Figure 12. Additionally, we created a special condition to prevent the stagnation of low Hb values. When the Hb value remained <10 g/dL despite the last ESA increase 3 months earlier and the Hb increase over the last 3 months remained ≤0.5 g/dL, our ESA algorithm would elect to increase the ESA dose unconditionally (Figure 8).

In our study, epoetin beta (EPOGIN®; Chugai Pharmaceutical Co, Ltd, Tokyo, Japan) and darbepoetin alpha (NESP®; Kyowa Hakko Kirin Co, Ltd, Tokyo, Japan) were used intravenously. The dose conversion ratio between epoetin beta and darbepoetin alpha was 225:1 in our ESA algorithm. Thus, doses of epoetin beta of 2,250, 4,500, and 9,000 IU were interchangeable with doses of darbepoetin alpha of 10, 20, and 40 µg, respectively (Figure 12). After the ESA dose per week was determined, epoetin beta was mainly administered at each dialysis session, and darbepoetin alpha was administered once a week.

How we used our algorithm in our study
Because our algorithm became somewhat complicated, we designed a program that could quickly make decisions based on our anemia management algorithm using Microsoft Office Excel® 2007 (Microsoft Corporation, Redmond, WA, USA); we used this program to make decisions in our study.

Statistical analysis
For comparisons between baseline data and data at the end of our study, the paired t-test was used. For comparisons of the Hb standard deviation between baseline and other time points after we introduced our algorithm, the F-test for two population variances with correlated observations was used. 24 All data were analyzed using Microsoft Office Excel® 2007. Differences were considered significant at P-values of <0.05.

Results
We enrolled 53 hemodialysis patients initially, but four patients dropped out during our study period: two who changed hospitals, one whose Hb value decreased by ≥1.5 g/dL because of bleeding in the digestive tract, and one who needed a blood transfusion (Figure 1). No enrolled patient died during our study period. Therefore, in total, we analyzed 49 patients (Table 1). In addition, one patient refused to have blood drawn at every intermediate blood test, but our algorithm successfully managed the patient's anemia according to the results of regular blood tests. Therefore, for analytical use, we used data from the regular blood tests because they contained complete data for these 49 patients. Moreover, although blood tests should be performed at the first dialysis of the week, which had the longest interval from the last dialysis, one patient had blood drawn 2 days after the last dialysis once during month 0 and another patient did the same once during month 1; however, no negative effect on their anemia control was recognized.

Figure 6, chart number 2: The erythropoiesis-stimulating agent dose was increased or decreased 0.5 month ago. This chart was used to monitor hemoglobin (Hb) changes at the 0.5-month mark and was created for monitoring short-term Hb increases or decreases.

Figure 7, chart number 3: The erythropoiesis-stimulating agent (ESA) dose was increased 1 month ago. Compared with chart number 1 (Figure 5), this chart was more sensitive to hemoglobin (Hb) decreases when the Hb value was ≥9 g/dL and more permissive of Hb increases when Hb values ranged from ≥9 to <11.5 g/dL, because the Hb value may increase partially because of an increase in ESA; however, our ESA algorithm could still watch for unexpected Hb decreases.
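The dose-stepping and conversion rules described in the Methods can be sketched as follows. The three epoetin beta doses and the 225:1 conversion ratio are taken from the text; treating them as a simple ladder with a zero step is our assumption, since Figure 12 is not reproduced here.

# Dose ladder and conversion from the paper's text: epoetin beta doses of
# 2,250 / 4,500 / 9,000 IU per week correspond to darbepoetin alpha doses of
# 10 / 20 / 40 ug per week (225:1). Intermediate steps are an assumption.
EPOETIN_STEPS_IU = [0, 2250, 4500, 9000]

def step_dose(current_iu, direction):
    # Move one step up (+1) or down (-1) the weekly epoetin-beta ladder.
    i = EPOETIN_STEPS_IU.index(current_iu)
    i = min(max(i + direction, 0), len(EPOETIN_STEPS_IU) - 1)
    return EPOETIN_STEPS_IU[i]

def to_darbepoetin_ug(epoetin_iu):
    # Convert using the paper's 225:1 dose conversion ratio.
    return epoetin_iu / 225.0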
Regarding the 49 analyzed patients, eleven were admitted to hospital during our study period: two for examination, two with infection, two with heart failure, two with bone fracture, one for parathyroidectomy, one for percutaneous peripheral intervention, and one with vertigo. No patient had a myocardial or cerebral infarction. In our study period, all decisions made by our anemia management algorithm were approved by physicians. Each ESA was administered without fail. There were 34 ESA increases and 26 ESA decreases at regular blood tests (n=49), and seven ESA increases and six ESA decreases at intermediate blood tests (n=48). In contrast, there were 59 courses of iron administration (one course is 40 mg/week for 4 weeks). The amount of administered iron during our study period ranged from 0 to 800 mg, and the mean dose of iron was 193±179 mg. Using our iron algorithm, we decided not to administer iron 235 times; in 149 of these instances, TSAT was >20%. Mean Hb values were approximately 11 g/dL during our study period (Figure 13).

Figure 9, chart number 5: The erythropoiesis-stimulating agent (ESA) dose was decreased 1–3 months ago. Compared with chart number 1 (Figure 5), this chart was more sensitive to hemoglobin (Hb) increases when Hb values ranged from ≥11 to <12 g/dL, more sensitive to Hb decreases when Hb values ranged from ≥9 to <11 g/dL, and more permissive of Hb decreases when Hb values ranged from ≥11.5 to <12 g/dL, because the Hb value should decrease after a decrease in ESA. Our ESA algorithm could watch for unexpected Hb increases and excessive Hb decreases.

The Hb standard deviation decreased over time and decreased significantly at 5 and 6 months compared with the values at baseline (P=0.013 and P=0.047, respectively: 1 g/dL at month 0, 0.7 g/dL at 5 months, and 0.7 g/dL at 6 months; Figure 14). There were no significant differences in ESA doses and Hb values between baseline data and data at the end of the study (P=0.357 and P=0.682, respectively; Table 2).

Discussion
Our anemia management algorithm, which comprised an iron algorithm (Figure 3) and an ESA algorithm (Figures 4–12), succeeded in controlling Hb values and iron indices in the following ways. First, our algorithm increased the number of patients in the target Hb range of 10–12 g/dL (Figure 14). Second, our algorithm significantly decreased the Hb standard deviation (Figure 14). We believe that the increase in the number of patients in the target Hb range was associated with the stability of mean Hb values at approximately 11 g/dL (Figure 13) and with the decrease in the Hb standard deviation (Figures 13 and 14). Third, our iron algorithm suppressed cumulative doses of iron and significantly decreased ferritin values (Table 2). These results imply a better prognosis for hemodialysis patients.

Figure 13: Trend in mean hemoglobin values ± standard deviation at regular blood tests (n=49).

First, the increase in the number of patients within the target Hb range implies better outcomes. Maintaining Hb values within a certain range is important because too high or too low Hb values have been shown to worsen prognosis. [4][5][6][7][8] The Japanese Society for Dialysis Therapy reported that the 5-year survival rate of hemodialysis patients was best in the Hb range of 10–11 g/dL; in young hemodialysis patients, it was best in the Hb range of 11–12 g/dL. 6
Therefore, because the target Hb range of 10–12 g/dL in our study was associated with an improved prognosis, the increase in the number of patients within this range implies a better prognosis overall. Second, the decrease in the Hb standard deviation suggests better outcomes. Pisoni et al reported that the facility-level Hb standard deviation was strongly and positively associated with mortality. 25 Although further investigations are required to evaluate whether this result can be applied to individual patients, Pisoni et al also reported that the facility-level Hb standard deviation was strongly associated with within-patient Hb variability, 25 which has been reported to be positively associated with mortality. 26 Therefore, a decrease in the Hb standard deviation may indicate a better prognosis. Finally, suppressing cumulative doses of iron and decreasing ferritin values also possibly led to better outcomes. Kuo et al reported that a cumulative intravenous iron dose of >800 mg over 6 months significantly increased the risks of cardiovascular events and overall mortality in hemodialysis patients. 27 To this end, our iron algorithm prevented excessive iron administration in patients with high Hb values and suppressed consecutive iron administration when Hb values ranged from ≥10 to <11 g/dL (Figure 3). Consequently, no patient received a cumulative intravenous iron dose of >800 mg during our 6-month study period, suggesting a better prognosis. In addition, our iron algorithm also decreased ferritin values. Hasuike et al reported that a high ferritin group (≥100 ng/mL; median ferritin, 161.9 ng/mL) was associated with a poor prognosis compared with a low ferritin group (<100 ng/mL; median ferritin, 37 ng/mL), 28 whose ferritin values were similar to those at the end of our study (Table 2). Therefore, the decreased ferritin values also suggest a better prognosis. In contrast, we think that our algorithm also succeeded in preventing iron deficiency, because there were no significant differences in ESA doses and Hb values between baseline data and data at the end of the study (Table 2).

There have been many anemia management algorithms for hemodialysis patients. [14][15][16][17][18][19][20][21][22] Kalicki and Uehlinger have argued that algorithms should take into account all information spanning at least one erythrocyte lifetime. 23 Using this concept, Lines et al reported a predictive algorithm that considered the length of erythrocyte lifetimes and predicted Hb values 90 days after the last ESA dose change. 21 However, we believe that their algorithm did not consider any short-term Hb changes and thus lost the flexibility to adapt to unexpected occurrences. In contrast, several algorithms incorporating iron indices have been reported, 14,17,[19][20][21] but we think that even with these algorithms, the combination of ESA management and iron administration was insufficient. Our ESA algorithm incorporated two mechanisms that would lead to appropriate ESA management. One mechanism acknowledged the importance of the length of erythrocyte lifetimes, and the other incorporated the combination of ESA management and iron administration. Our successful anemia management was based on these two mechanisms. By incorporating the mechanism related to the length of erythrocyte lifetimes, our ESA algorithm became particularly sensitive or permissive to Hb changes associated with recent ESA dose changes.
Kalicki and Uehlinger have also argued that the time needed to achieve a steady state in hematocrit values after an increase in the ESA dose is equal to one erythrocyte lifetime, 23 which is approximately 60–90 days. 29,30 Thus, our algorithm broadly classified ESA dose changes into the following cases: cases after a recent ESA increase (Chart numbers 3 and 4), a case after a recent ESA decrease (Chart number 5), and a case in a steady state (Chart number 1). Because of these classifications, our algorithm became sensitive or permissive to Hb changes so that it could appropriately manage the ESA doses. Moreover, Mizuguchi has argued that, after an ESA dose change, it takes more than 1 week before there is a change in Hb values, because an erythroid colony-forming unit, which the ESA acts on, needs 1 week to develop into an erythrocyte. 31 Because of this delay in the Hb reaction, the Hb change at 1 month after an ESA increase will be relatively small. Therefore, we made the chart at 1 month after an ESA increase (Chart number 3; Figure 7) more permissive of Hb decreases than the chart at 1.5–3 months after an ESA increase (Chart number 4; Figure 8). In contrast, we regarded the delay in the Hb reaction to an ESA decrease as ignorable because, other than ESA decreases, there may have been myriad causes leading to decreases in Hb values. 21

Next, by incorporating the mechanism for the combination of ESA management and iron administration, our ESA algorithm was able to appropriately manage ESA doses depending on the state of recent iron administration. In general, Hb values increase after iron administration 32 and decrease in iron deficiency. 33 However, if Hb changes attributable to iron status had been incorrectly categorized as Hb changes due to an excess or a deficiency of ESA doses, unnecessary ESA dose changes might have been made, and stable control of Hb values would have been difficult to achieve. Therefore, our ESA algorithm incorporated the management of ESA doses according to the state of iron administration, to prevent unnecessary ESA dose changes. This combination mechanism was integrated in Conditions B and D–F. Condition B was created to prevent unnecessary ESA decreases. With Condition B, when Hb values increased after recent iron administration without subsequent iron administration later, our ESA algorithm did not decrease ESA doses, because this increase was probably due to the iron administration and was probably transient. Nakanishi et al estimated that if the Hb value increased by 0.2–0.3 g/dL, 30–50 mg of the intravenous iron had been used for erythropoiesis. 34 With this information, after 160 mg of intravenous iron (one course in our study) was administered, the Hb value may increase by 0.6–1.6 g/dL; this range was similar to that in Condition B (Figures 5 and 7–9). Thus, our ESA algorithm distinguished Hb increases after recent iron administration from Hb increases that were probably due to excessive ESA doses and prevented unnecessary ESA decreases. Conversely, with Conditions D–F, when Hb values stagnated or decreased because of insufficient iron administration and iron could be administered later, our ESA algorithm did not increase ESA doses, because these Hb changes were likely due to iron deficiency. By doing so, our ESA algorithm prevented unnecessary ESA increases. However, to prevent additional Hb decreases, decisions to increase ESA doses were made more readily with Conditions E and F than with Condition D.
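The expected Hb rise after one iron course quoted above follows from simple arithmetic on Nakanishi et al.'s estimate:

# Back-of-the-envelope check of the expected Hb rise after one iron course
# (160 mg IV), using the estimate that 30-50 mg of IV iron supports a
# 0.2-0.3 g/dL Hb increase. Illustrative ranges only, not a clinical rule.
course_mg = 40 * 4                 # one course: 40 mg/week x 4 weeks
low = course_mg / 50 * 0.2         # least iron-efficient case: 0.64 g/dL
high = course_mg / 30 * 0.3        # most iron-efficient case: 1.6 g/dL
print(f"Expected Hb rise: {low:.2f}-{high:.2f} g/dL")   # ~0.64-1.60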
Using Condition E, if iron had been administered for 3 consecutive months, our algorithm increased ESA doses because the Hb reaction to iron administration was poor (Figure 11). Using Condition F, our algorithm increased ESA doses unless there was a plan to administer iron thereafter; if there had been no recent iron administration, the Hb reaction to iron administration might still be good (Figure 11).

Our study has several limitations. First, our study evaluated a small sample size over a relatively short period, and it was not a randomized controlled trial. Although we indicated the potential of our algorithm for a better prognosis, evaluating whether our algorithm actually improves the prognosis of hemodialysis patients would require a larger sample size over a longer period. Additional studies will be required to evaluate our algorithm. Second, the benefit of maintaining Hb values continuously within the target range remains unknown, as it has been described as difficult and has rarely been achieved. 12,13 For 6 months, we succeeded in maintaining Hb values continuously within the target range of 10–12 g/dL in more than half of the patients. Using our algorithm, the benefit of maintaining Hb values continuously may become clear. Third, because our algorithm became somewhat complicated, a software program may be useful for supporting hemodialysis practices. Therefore, we designed and used such a program in our study.

Conclusion
Our anemia management algorithm successfully increased the number of patients in the target Hb range of 10–12 g/dL, significantly decreased the Hb standard deviation, suppressed cumulative doses of iron, and decreased ferritin values. These results suggest a better prognosis for hemodialysis patients.
Optimized Operation of Cascade Reservoirs Considering Complementary Characteristics between Wind and Photovoltaic Based on Variational Auto-Encoder

The strong variations and uncertainties of high-penetration variable renewable energy sources (RESs) have greatly challenged the operation of power systems. Considering the rapid adjustability of hydropower to complement various RESs, in this paper a coordinated optimization model is proposed for hydro-wind-solar integrated systems. Based on the variational auto-encoder, a scenario generation method is proposed to capture the spatial-temporal complementary and fluctuant characteristics of wind and PV power with high model accuracy and low computational complexity. The detailed hydraulic and electrical relationships between cascade reservoirs are established to make full use of the controllable regulation capacity of cascade hydropower stations. With the linearization of the head-sensitive power generation and nonlinear water head, the optimization model is reformulated into a tractable mixed integer linear programming (MILP) formulation. A basin in Northwest China integrated with prolific wind and solar resources is selected for a case study. The computational experiments on actual data demonstrate the applicability of the proposed method.

INTRODUCTION

The deficiency of fossil fuels and the deterioration of the environmental situation all over the world promote the development of renewable energy source (RES) generation. The efficient scheduling and management of RESs are of significant importance for the operation of modern power systems. However, the increasing amount of RESs, including wind and photovoltaic (PV), greatly threatens the safety and stability of power grids. On the other hand, hydroelectric sources are able to smooth these variable RES generations with abundant capacity and fast adjustability, and a growing number of researchers are paying more attention to the joint operation of multi-energy hybrid systems such as hydro-wind-solar hybrid systems [1]. The optimal operation of multi-energy coupling systems considering complementary characteristics has become the trend of current research. Previous studies revealed that the total power generation of various renewable energy sources might become smoother in several areas of China [2][3]. Meanwhile, with the strong regulation capacity of rich water resources and large-scale hydroelectric systems, the joint optimization of power systems with solar, wind, and hydropower is becoming a hot topic in China. In Longyangxia, a solar-hydroelectric hybrid system was put into practical operation in 2013, and the hydropower station is utilized to track the fluctuation of PV [4]. [5] analysed the complementary characteristics of photovoltaic and hydropower over different time steps and proposed methods for capacity planning.

Power output uncertainty is a major obstacle to the penetration of a high proportion of RES generation. Many works have shown the advantages of combining multiple power sources. In addition, accurate modeling of RESs considering their uncertainties is vital for the optimal operation of multi-energy hybrid systems. Scenario-based methodology is widely used to describe these uncertain resources. With a set of typical generation scenarios, stochastic economic dispatch problems can be solved and optimal operation strategies can be determined [6]. Many works have focused on model-based methods for scenario production [7][8][9].
[7] used the empirical cumulative distribution function to model wind power generation, while in [8] a copula function is utilized to model the joint distribution of multiple renewable energy power outputs. [9] proposed a dynamic factor model for estimating the correlation between load and wind power. Nevertheless, due to the nonlinear power conversion process, spatial-temporal interactions, and variant weather conditions, model-based scenario generation methodology is difficult to use in practice, with shortcomings in complexity and generalization ability.

Recently, machine learning and deep learning algorithms have been applied widely in industry, for example in computer vision (CV) and natural language processing (NLP). These algorithms have been used for scenario production as well, such as Radial Basis Function Neural Networks (RBFNN) [10] and deep neural network (DNN) models [11]. Most of the time, these machine learning algorithms can better capture the nonlinear correlations between various RESs. However, feature extraction and the acquisition of tagged data are still challenging.

Currently, there are still few studies on the optimal coordination of multi-energy hybrid systems including wind, PV, and hydro resources, particularly in a river basin, where cascade reservoirs can provide strong adjustable capacity. Furthermore, the detailed modeling of multiple RESs and the quantified benefits of their complementarity still need further research. In this paper, a coordinated optimization model is proposed for the operation of a cascade hydro-wind-solar integrated system in a basin. Based on the variational auto-encoder (VAE), this paper proposes a scenario generation method to capture the spatial-temporal complementary and fluctuant characteristics of wind and PV power with high model accuracy and low computational complexity. The detailed hydraulic and electrical relationships between cascade reservoirs are established to make full use of the controllable regulation capacity of hydropower stations. With the linearization of the head-sensitive power generation and nonlinear water head, the optimization model is reformulated into a tractable mixed integer linear programming (MILP) formulation. Finally, the performance and effectiveness of the proposed methods are demonstrated on an actual system.

SHORT-TERM OPTIMAL OPERATION OF WIND-SOLAR-HYDRO HYBRID SYSTEM

In this section, we propose the formulation for the short-term optimal operation of the hydro-wind-solar integrated system. Considering the spatial and temporal correlations between wind and PV power, their joint distribution is obtained based on the VAE models. The refined model of cascade hydropower stations is established, taking the impacts of diversified factors into account, to further mitigate the variability of RESs with regulation capacity. Several methods are used for the linearization of the proposed optimization model as well.

Variational auto-encoder for scenario generation

At present, the VAE is one of the most commonly used generative models. The goal of the generative model is to build the target data X based on the hidden variable Z. Usually, we assume that Z obeys some given distribution (e.g., normal or uniform) and then train a model that maps the distribution of Z to that of the training set.
Because we cannot determine the most appropriate distribution of the training data, we express the probability distribution of X as

p(X) = ∫ p(X|Z) p(Z) dZ,

where p(X) and p(Z) represent the probability distributions of X and Z, respectively. Assuming that Z follows the standard normal distribution, we can sample from it to obtain a set of Z values and generate scenarios from this sample set. However, to ensure a one-to-one match between sample X_k and target Z_k, we assume that p(Z|X_k) is a distribution exclusive to X_k and that it is normal [12]:

p(Z|X_k) = N(μ_k, σ_k² I),

where μ_k is the mean, σ_k² is the variance, and I is the identity matrix. To fit the two parameters of this normal distribution, we construct two corresponding neural networks to handle the nonlinear characteristics. In this paper, we choose to fit log(σ_k²) rather than σ_k² because of the choice of activation function. With the above assumptions, the probability distribution of Z will obey the standard normal distribution, so it is practical to sample directly from N(0, I) for scenario generation.

On the other hand, in order to make p(Z|X_k) approximate the standard normal distribution, we add a regularization term to the loss function based on the Kullback-Leibler (K-L) divergence:

KL(N(μ_k, σ_k² I) || N(0, I)) = (1/2) Σ_{i=1}^{d} (μ_{k,i}² + σ_{k,i}² − log σ_{k,i}² − 1),

where d is the dimension of Z. The target of the K-L term is a mean of 0 and a variance of 1, which keeps the scenario generation model accurate and robust. Moreover, the mean and variance must be optimized during back-propagation, but the sampling process itself is not differentiable. We therefore sample from the standard normal distribution and construct differentiable intermediate results from the mean and variance (i.e., Z = μ_k + σ_k ε with ε ~ N(0, I)).

The activation function is also crucial for the performance of the neural network. The Rectified Linear Unit (ReLU) is the most widely used activation function, with the form

ReLU(x_i) = max(0, x_i),

where x_i is the pre-activation value of the i-th node. ReLU can prevent vanishing and exploding gradients, but some units may never be activated, making the neural network hard to train [13]. Therefore, we choose scaled exponential linear units (SELU) for activation. The formula of SELU is

SELU(x) = λx if x > 0, and λα(eˣ − 1) if x ≤ 0,

where λ and α are tunable parameters. [14] showed that the outputs of a fully-connected neural network approximate the standard normal distribution when λ ≈ 1.058 and α ≈ 1.673, which is suitable for our model.

The structure of the proposed VAE is shown in Fig. 1. The input and output sizes are set to 24 × 24 for the convolutional layers. The activation functions for the convolutional layers and fully-connected layers are ReLU and SELU, respectively. In addition, we choose the "same" padding mode. A "Lambda" layer is used to transform samples from the standard normal distribution using the obtained mean and variance. During parameter tuning, we found that the VAE reaches its best performance when the dimension of the mean and variance is 6. Finally, we can use the second half of the trained model for scenario generation.
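For illustration, a compact Keras implementation of the VAE described above might look like the following. The latent dimension of 6, the 24 × 24 input, the log-variance head, the sampling (reparameterization) layer, and the reconstruction-plus-KL loss follow the text; the exact convolutional stack of Fig. 1 is not reproduced, so the layer widths are placeholders.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 6  # latent size the paper found best during tuning

class Sampling(layers.Layer):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    def call(self, inputs):
        mu, log_var = inputs
        eps = tf.random.normal(tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

# Encoder: 24x24 daily profiles -> (mu, log sigma^2, z).
enc_in = layers.Input(shape=(24, 24, 1))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(enc_in)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="selu")(x)
mu = layers.Dense(LATENT_DIM)(x)
log_var = layers.Dense(LATENT_DIM)(x)      # fit log(sigma^2), as in the text
z = Sampling()([mu, log_var])
encoder = Model(enc_in, [mu, log_var, z])

# Decoder: z -> reconstructed 24x24 profile; used alone for generation.
dec_in = layers.Input(shape=(LATENT_DIM,))
y = layers.Dense(64, activation="selu")(dec_in)
y = layers.Dense(24 * 24, activation="sigmoid")(y)
dec_out = layers.Reshape((24, 24, 1))(y)
decoder = Model(dec_in, dec_out)

class VAE(Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder, self.decoder = encoder, decoder

    def train_step(self, data):
        with tf.GradientTape() as tape:
            mu, log_var, z = self.encoder(data)
            recon = self.decoder(z)
            rec_loss = tf.reduce_mean(
                tf.reduce_sum(tf.square(data - recon), axis=[1, 2, 3]))
            kl_loss = -0.5 * tf.reduce_mean(tf.reduce_sum(
                1 + log_var - tf.square(mu) - tf.exp(log_var), axis=1))
            loss = rec_loss + kl_loss
        grads = tape.gradient(loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        return {"loss": loss}

vae = VAE(encoder, decoder)
vae.compile(optimizer="adam")
# vae.fit(X_train, epochs=100, batch_size=32)   # X_train: (n, 24, 24, 1)
# scenarios = decoder.predict(np.random.normal(size=(100, LATENT_DIM)))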
Cascade Hydropower Stations

For the determination of the optimal operation strategy of the integrated hydro-wind-solar hybrid system, the key point is the model of the cascade hydroelectric plants. With nonlinear factors in many aspects, the optimization model including cascade hydropower stations is usually high-dimensional, non-convex, and nonlinear. First, the goal is set to maximize the generating efficiency. Because the maximum efficiency is usually attained at the highest head, the objective function is defined accordingly.

The operational benefit of hydropower stations is determined to a great extent by their dynamic characteristic curves. Given the water discharge, the water head, and the conversion efficiency, the power generation follows the input-output curve

N_i(t) = η_i q_i(t) h_i(t),

where N_i(t), q_i(t), and h_i(t) are the power output, turbine discharge, and water head of the i-th hydropower plant at time t, and η_i is its conversion efficiency. Furthermore, the head, power generation, water discharge, and reservoir volume must satisfy minimum and maximum constraints. Meanwhile, many nonlinear correlations should be considered. For instance, the forebay elevation of a reservoir is nonlinearly related to its volume, while the tailwater elevation is affected by the water flow from the upstream reservoir. Therefore, the head is

h_i(t) = Z_i^up(t) − Z_i^down(t) − h_i^loss(t),

where Z_i^up(t) is the forebay elevation, Z_i^down(t) is the tailwater elevation, and h_i^loss(t) is the water head loss at time t for the i-th hydropower plant. In order to simplify the formulation of the water head, we assume the head is linearly related to the forebay elevation and tailwater level and that the water head loss is constant. Also, the water relations between different cascade reservoirs should be taken into account, which gives the hydraulic continuity equations

V_i(t+1) = V_i(t) + [I_i(t) + Q_{i−1}(t − τ_i) − Q_i(t)] Δt,

where V_i is the reservoir volume, I_i the local inflow, Q_i the outflow, and τ_i the water travel time from reservoir i−1 to reservoir i.

Solutions of the optimization model

Although a refined model for the hydro-wind-solar hybrid system is established, the proposed optimization is difficult to solve because of the nonlinear factors. There is still no general algorithm that handles high-dimensional, non-convex, nonlinear problems perfectly. Although some heuristic algorithms, such as genetic algorithms and particle swarm optimization, can be used for such problems, the results are usually not optimal and convergence is slow. In this paper, we make full use of several linearization methods for (7), (12), and (13), and transform the model into a mixed integer linear programming (MILP) problem, which can be solved by mature commercial software such as CPLEX and Gurobi.

To deal with the nonlinear water head, we use a piecewise linearization method to linearize the forebay elevation-reservoir volume curves and tailwater elevation-water flow curves. Given intervals χ = 1, 2, ..., U for the upstream water level and γ = 1, 2, ..., D for the tailwater level, we obtain the piecewise-linear approximations of the two curves, and accordingly the linearized water head under the above assumptions. Moreover, the volume of a reservoir with large capacity usually remains nearly constant on the day-ahead time scale, so the formulation can be directly linearized around the current head level, which meets the accuracy requirements with a greatly reduced number of binary variables.

For the nonlinear input-output curves, we assume that the dynamic characteristics of each hydro unit in the same hydropower station are consistent. Besides, since the reservoir volume and runoff will not change drastically within a day, we can make the hypothesis that the power generation efficiency is constant. Nonetheless, the power generation characteristic of hydropower stations is still a non-convex bilinear function. In this regard, we use McCormick's inequalities to build the convex envelope for the dynamic curves of the hydropower units and transform the model into a tractable one [15]. The computational errors are determined by the relaxation intervals, so we are able to further reduce the search area by adding binary variables that restrict the range of the continuous variables, although the computational complexity must be considered. The detailed process can be found in [16].
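A toy single-period example of the McCormick relaxation used for the bilinear generation term is sketched below with PuLP. All bounds and the head-discharge coupling are invented for illustration; the full model additionally carries the piecewise-linear head, the continuity equations, and the cascade coupling described above.

# Minimal sketch of the McCormick convex envelope for the bilinear
# hydropower term N = eta * q * h, with eta assumed constant.
import pulp

eta = 8.5                    # kW per (m3/s * m), assumed constant efficiency
qL, qU = 50.0, 300.0         # discharge bounds, m3/s (assumed)
hL, hU = 80.0, 100.0         # head bounds, m (assumed)

prob = pulp.LpProblem("mccormick_demo", pulp.LpMaximize)
q = pulp.LpVariable("q", qL, qU)
h = pulp.LpVariable("h", hL, hU)
w = pulp.LpVariable("w")     # relaxation of the product q*h

# McCormick inequalities: the tightest linear envelope of w = q*h on the box.
prob += w >= qL * h + hL * q - qL * hL
prob += w >= qU * h + hU * q - qU * hU
prob += w <= qU * h + hL * q - qU * hL
prob += w <= qL * h + hU * q - qL * hU

prob += h <= 100.0 - 0.02 * q          # toy head-vs-discharge coupling (assumed)
prob += eta * w                        # objective: maximize power N = eta*w

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(q), pulp.value(h), eta * pulp.value(w))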
Based on the obtained scenarios, the power generation of wind or PV can be treated as a deterministic variable within a particular scenario. Then, the key point is the control of the cascade reservoirs. To sum up, the complete procedure is shown in Fig 2.

RESULTS AND DISCUSSION

In this section, we illustrate the performance of our methods. First, the generated scenarios for wind and PV power, which consider spatial and temporal correlations, are compared with real historical data, proving the efficiency of the VAE model. Next, the MILP problems for the hydro-wind-solar hybrid system are solved to obtain the optimal operation strategy considering complementary coordination.

Data Description

The training and validation dataset for wind and photovoltaic power is built from actual data from a province in Northwest China. A five-station cascade hydropower system, comprising four adjustable hydropower stations and one run-of-river hydroelectric plant, is chosen for analysis. The year-round power output of 6 wind farms and 18 solar arrays is utilized. By resampling the original data, the input data has a resolution of 1 hour. Thus we can form the 24 × 24 input size without zero padding (24 hourly values for each of the 24 wind and solar series). To exploit its advantage in handling power fluctuations and nonlinear correlations, we use the VAE to fit the power generation directly.

Scenarios Generation

In this paper, the VAE network is established based on Keras 2.2.2 and Tensorflow 1.10.0. The Adam optimizer is selected for back-propagation. A laptop with four 1.80 GHz CPUs is used for training. The detailed parameters and structure of the proposed VAE are shown in Fig 1. We use ReLU as the activation function of the convolutional layers, SELU as the activation function of the fully-connected layers, and sigmoid for the output layer. Once the VAE is completely trained, we extract the decoder part as the generative model. Some generated scenarios are illustrated to verify the validity of our proposed method, as seen in Fig 3. Although the power output in the test samples was not used for training, the generative model automatically produces scenarios close to the test samples, as Fig 3 shows. The fluctuations of wind and PV power are accurately fitted. The proposed VAE structure can not only reproduce the data in the training set but also create new yet realistic scenarios based on the learned features. The results demonstrate that the scenarios generated by the VAE model are consistent with the actual situation.

On the other hand, we can analyse the statistical characteristics of the generated scenarios over the long term. The probability density functions (PDFs) of the actual power generation data and the generated scenarios for wind and PV are displayed in Fig 4 and Fig 5, and the cumulative distribution functions (CDFs) are shown in Fig 6 and Fig 7. For the probability distribution of PV power, the time periods with zero output are removed to allow a clearer visual comparison. As seen from Fig 4 to Fig 7, the probability distributions and cumulative distributions of the generated scenarios accurately fit the historical samples, which indicates that the VAE model is able to learn the distribution characteristics of the training samples. With purely unsupervised training, we match the probability-modeling performance of traditional methods.
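The distributional checks behind Figs 4–7 amount to comparing empirical PDFs/CDFs of historical and generated series; a minimal sketch, with placeholder file names, is:

import numpy as np
import matplotlib.pyplot as plt

def ecdf(x):
    # Empirical CDF: sorted values and their cumulative probabilities.
    xs = np.sort(x)
    return xs, np.arange(1, len(xs) + 1) / len(xs)

hist = np.load("historical_pv.npy").ravel()   # placeholder file names
gen = np.load("generated_pv.npy").ravel()
hist, gen = hist[hist > 0], gen[gen > 0]      # drop zero-output periods (PV)

for data, label in [(hist, "historical"), (gen, "generated")]:
    xs, ps = ecdf(data)
    plt.plot(xs, ps, label=label)
plt.xlabel("normalized power"); plt.ylabel("CDF"); plt.legend(); plt.show()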
Furthermore, in order to estimate the spatial correlation of the generated scenarios, we calculate the correlation coefficients between the historical data and the generated scenarios for wind and PV power, respectively (Fig 8). As shown in Fig 8, although the spatial correlation coefficients between some wind farms and solar arrays differ slightly between the actual and generated data, the VAE model captures the spatial correlation characteristics as a whole. It is demonstrated that the VAE model can capture both the short-term and long-term characteristics of wind and PV power, such as frequent fluctuations, peak-valley changes, the PDF, the CDF, and spatial correlations. The generated scenarios almost reproduce the characteristics of the actual data without modifying the VAE network structure. Compared with traditional approaches, the proposed generative model is more representative of the actual operating characteristics of the wind farms and solar arrays.

Optimal Operation of Hydro-wind-solar Hybrid System

Based on the obtained scenarios for wind and solar generation, we can analyse the operation of the hydro-wind-solar hybrid system under a given power transmission demand. The wind and PV power outputs are deterministic in a given scenario. Therefore, the target is to balance the remaining load through the regulation of the cascade hydropower stations. In our model, in order to achieve the maximum generating head and system operating efficiency, the five cascade hydropower stations should be operated appropriately. The first and second reservoirs should carry the base load at maximum generating efficiency and control the water discharge to the downstream reservoirs considering the characteristics of wind and PV. The third and fourth reservoirs should mainly track the fast fluctuations of wind and solar power. The last hydropower station should provide stable power support during peak-load hours.

Then we run the proposed optimal operation model and solve the MILP problems. Fig 8 shows the power output from the cascade hydropower stations, wind farms, and solar arrays separately over a week, together with the load curve. The result shows that the power generation of the hydro-wind-solar hybrid system alone can meet the given load demand most of the time. During the simulated period, renewable energy curtailment occurs only on the fourth day, because of a large amount of wind-PV generation and the limited adjustable ability of the hydropower. Typically, the operation of the five cascade hydropower stations on a certain day is described in Fig 9. Although the power output of the first two cascade hydropower stations is influenced by inflow, it is relatively stable and provides the base load. With large regulation capacity and fast adjustability, the third and fourth cascade hydropower stations balance the fluctuation of wind and PV power around hour 12. The fifth hydropower station is capable of undertaking partial load in the peak hour and impounds water for the rest of the time because of the constraints of other factors. Finally, with full use of the regulation capacity of the cascade hydropower stations, the optimal operation of the hydro-wind-solar hybrid system alone achieves a good performance.

CONCLUSION AND FUTURE WORK

In this paper, we propose a VAE-based model for scenario generation considering the spatial and temporal correlations between wind and PV power. A coordinated operation strategy is developed to fully utilize the regulation capacity of the cascade reservoirs.
The results indicate that the VAE structure can accurately capture the short-term and long-term characteristics of, and the correlations between, wind and PV power. By solving the MILP model, we find that the power output of the hydro-wind-solar hybrid system alone can meet the target load demand most of the time, which reduces the computational complexity of subsequent scheduling and decreases renewable energy curtailment. In future research, the structure of the VAE network and the loss function can be further optimized. In addition, the operation of the hydro-wind-solar hybrid system will be analysed in drought and flood seasons. Uncertain load variation and the electricity market may be considered as well.
Reduced percentage of inconclusive reports and need for repeated fine needle aspirations in ultrasound versus free-hand parotid cytology

Fine needle aspiration cytology has been established as the minimally invasive, non-tumour-seeding investigation of choice in the initial diagnostic pathway of parotid lesions. The purpose of this study was to compare the accuracy of fine needle aspiration cytology performed with and without ultrasound, to determine whether one method should be preferred to the other. A retrospective review of all patients undergoing fine needle aspiration cytology with and without ultrasound for parotid masses in a large district general hospital between 2012 and 2016 was performed. Specificity, sensitivity, accuracy, positive and negative predictive value, percentage of inconclusive fine needle aspiration cytology, and percentage of second fine needle aspiration cytology were determined for each group. A total of 397 fine needle aspiration cytology results were available for analysis. The numbers performed with ultrasound guidance and free-hand were roughly equal (208 (52.3%) versus 189 (47.7%)). The number of inconclusive fine needle aspiration cytology reports was significantly higher in the free-hand group (65/189 (34.4%)) than in the ultrasound group (25/208 (12%)) (p < 0.0001). A significantly higher number of repeated fine needle aspiration cytology were undertaken in the free-hand group versus the ultrasound group (43 vs 15, p < 0.0001); overall, 7.2% of ultrasound-guided fine needle aspiration cytology required a second fine needle aspiration cytology, compared to 22.8% in the free-hand group. The sensitivity, specificity, and positive and negative predictive values were all higher in the ultrasound group versus the free-hand group. Ultrasound-guided fine needle aspiration cytology is superior to free-hand fine needle aspiration cytology in the investigation of parotid tumours. There is a significant benefit in reducing the number of inconclusive results and repeat fine needle aspiration cytology, and a potential benefit in improving the sensitivity and positive predictive value, when immediate cytology assessment of the sample quality is not performed.

Background
Tumours of the parotid gland constitute approximately 70% of all salivary gland tumours, with the majority being benign [1,2]. They can represent up to 50% of all malignancies arising from salivary glands [2,3]. Patients usually present to a head and neck clinic with a palpable neck lump, although not uncommonly (15%) patients will be referred to the clinic with an incidental finding on imaging [4]. Fine needle aspiration cytology (FNAC) is the initial investigation of choice and has been shown to be both accurate and safe [5]. Pre-operative diagnosis of a parotid mass, including differentiating between types of benign or malignant neoplasms, is clinically useful in helping to plan the correct management [6]. Inconclusive results can jeopardise the management plan. The decision to follow a conservative path, or the extent of parotid or neck surgery required, is impossible without a reliable cytological or histological diagnosis. In palpable parotid masses, FNAC is often performed free-hand, but the use of ultrasound (US)-guided FNAC is common and may improve accuracy [5,7,8]. Furthermore, radiological features may indicate the diagnosis and can supply synchronous radiological staging of the parotid tumour and cervical nodes [9].
In our unit, the use of US-guided FNAC in parotid lesions has become commonplace, and there is a trend for head and neck surgeons to perform FNAC in clinic using US guidance without a designated radiologist [10]. The purpose of this study was to quantify the benefit of US-guided FNAC in parotid tumours, specifically to compare the accuracy of FNAC with and without US guidance.

Methods
A non-experimental cross-sectional study comparing US-guided FNAC in parotid masses versus free-hand FNAC was performed. Patient data were retrospectively collected using the cytology reporting system (including all consecutive FNAC reports from the parotid gland) across three different hospitals within the same district and health board from January 2012 to December 2016 (the period during which both FNAC techniques were in use). Excluded cases were FNAC entries duplicated in the database and FNAC entries with a missing report. To assess the differences between groups, several quality indexes were compared. The index tests included the proportion of inconclusive cases (including inadequate or non-diagnostic results and indeterminate results) and the need for secondary FNAC. For these tests, all patients with available FNAC were used. Sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were assessed and compared only in patients with confirmed histology. The diagnosis of malignancy was the basis for the index test. The reference standard was the histopathology result, if available. Pre-assessment of the study using the ethics committee guidelines deemed that this study did not require formal ethics committee assessment. All cytology reports were classified according to "The Milan System for Reporting Salivary Gland Cytopathology" [11]. All FNACs undertaken on clinical or radiological suspicion of a mass in the parotid were included in the study.

Results
A total of 182 reports had FNAC with a benign diagnosis without confirmed histology available. The most common benign diagnosis among these was Warthin's tumour (34.6%), followed by pleomorphic adenoma (31.9%). On the other hand, 16 reports had FNAC with a malignant diagnosis without confirmed histology available. The most common diagnoses among these were squamous cell carcinoma (SCC, 18.8%) and adenocarcinoma (18.8%, Table 1). A total of 91 reports had subsequent confirmed histology after a negative FNAC for malignancy. The most common diagnosis was pleomorphic adenoma (47.2%), followed by Warthin's tumour (27.5%). Eighteen reports had subsequent confirmed histology after a positive FNAC for malignancy. The most common diagnosis was squamous cell carcinoma, representing 33.3% of the reports. Finally, 38 reports had subsequent confirmed histology after an inconclusive FNAC. The most common benign diagnosis was Warthin's tumour (23.7%), and the most common malignant diagnosis was mucoepidermoid carcinoma (5.3%). A total of 4 reports (3 patients) had indeterminate histology, but all were benign. One histological differential diagnosis was between basal cell adenoma and pleomorphic adenoma, another was between pleomorphic and monomorphic adenoma, and a further one was between normal parotid tissue and a cystic lesion not viable for analysis. Duplicate FNAC with the same histology were removed for the final histopathologic diagnosis of parotid gland tumours (Table 2). Definitive histology was obtained in 139 of 307 patients (45.2%).
Of these, 121 (88%) had superficial parotidectomy or extracapsular dissection, 10 (7%) had an open biopsy, 2 (1%) had a core biopsy, and 6 (4%) had histology of the primary tumour (metastasis). The overall median time between FNAC and the availability of histology was 107.5 days: 132.5 days for US-guided FNAC and 78 days for free-hand FNAC. The time to obtain the definitive histology varied depending on the FNAC results. For negative FNAC, the median was 132 days, with an interquartile range (IQ25-IQ75) of 80.3 to 247 days. For inconclusive FNAC, the median (IQ25-IQ75) was 96 days (63-188). Finally, for positive FNAC, the median was 45 days.

The main analysis showed that the proportion of inconclusive FNAC (inadequate sampling or indeterminate results) using US (12.0%) was significantly smaller than with free-hand FNAC in clinic (34.4%). The difference in proportion was 22.4 (95% CI: 14.2 to 30.3), p < 0.0001. Power was 0.99 at a significance level of alpha = 0.05.

To perform the comparison between the different accuracy measures, only 109 patients were used. As seen in the flow diagram, of the 397 FNAC cases, the study included only these 109 cases: those with available histology and a negative (n = 91) or positive (n = 18) FNAC. The results for the diagnostic accuracy measures seem similar whether US or free-hand FNAC was used. The sensitivity of US-guided FNAC trended higher (89%) than that of free-hand FNAC (82%), whereas the specificities were more similar (100% and 97%, respectively) (Table 3).

Our study also considered the number of secondary FNAC requests after an initial FNAC in the two groups. Secondary FNAC was defined as a repeated FNAC requested within 6 months after the first FNAC in the same patient. A post-hoc analysis of the difference in the proportion of secondary FNAC between US-guided and free-hand FNAC was performed; with Bonferroni correction, significance was set at 0.025. Of the 273 FNAC that had a negative result for malignancy, 6 US-guided FNAC and 11 free-hand FNAC had another FNAC performed subsequently. Of the 34 FNAC that had a positive result for malignancy, only 1 US-guided FNAC and 3 free-hand FNAC had another FNAC performed. Finally, of the 90 inconclusive FNAC, 8 US-guided FNAC and 29 free-hand FNAC had another FNAC performed. Overall, US-guided FNAC generated 15 repeated FNAC from a total of 208, and free-hand FNAC generated 43 repeated FNAC from a total of 189. The proportion of secondary repeated FNAC using US-guided FNAC (7.2%) was smaller than with free-hand FNAC in clinic (22.8%). The difference in proportion was 15.5 (95% CI: 8.6 to 22.6), p < 0.0001.

Discussion
Our study comprises one of the largest consecutive series of FNAC reports for parotid tumours and provides an insight into a common first-line investigation for parotid masses in head and neck units in this country. The results demonstrate that free-hand or palpation-guided FNAC is acceptably accurate but has a higher inconclusive rate; its usefulness therefore remains questioned. The accuracy of parotid FNAC with or without US (without immediate cytology assessment) has been reported in single studies but not described separately within the same study; combining results from US-guided and free-hand techniques is problematic given the differences this study has shown [5, 13-16]. The difference in accuracy between free-hand and ultrasound-guided FNAC of parotid masses has been documented in a previous meta-analysis [5].
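For reference, the accuracy measures and the two-proportion comparison reported above can be computed as in the sketch below; the helper names are ours, and the example call reproduces the inconclusive-rate comparison (25/208 versus 65/189).

import math

def diagnostics(tp, fp, fn, tn):
    # Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

def two_proportion_z(x1, n1, x2, n2):
    # z-test for the difference between two independent proportions.
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

print(two_proportion_z(25, 208, 65, 189))   # reproduces p < 0.0001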
Discussion

Our study comprises one of the largest consecutive series of FNAC reports for parotid tumours and provides an insight into a common first-line investigation for parotid masses in head and neck units in this country. The results demonstrate that free-hand or palpation-guided FNAC is acceptably accurate but has a higher inconclusive rate; its usefulness therefore remains questionable. The accuracy of parotid FNAC with or without US (without immediate cytology assessment) has been reported in single studies but not described separately within the same study; combining results from US-guided and free-hand techniques is problematic given the differences this study has shown [5,13-16]. The difference in accuracy for free-hand versus ultrasound-guided FNAC of parotid masses has been documented in a previous meta-analysis [5].

The meta-analysis showed a sensitivity of 0.78 (95% CI, 0.74-0.78) for all FNAC groups, which is comparable to the sensitivity of 0.82 in the current study. The US group sensitivity of 0.84 (95% CI, 0.76-0.91) likewise compares to our 0.89. The meta-analysis specificities were again comparable, both close to 1. In our study, we have failed to demonstrate a significant difference between the accuracy of US-guided and free-hand FNAC, although a trend can be seen in all measures. This is to be expected, since the sample of 109 patients is too small to determine significance for small percentage differences.

The rate of inconclusive and repeat FNAC is significantly higher in free-hand FNAC than when it is performed under US guidance. The inconclusive percentage difference comparing free-hand versus US-guided FNAC, without immediate cytology assessment of sample quality, has been previously documented in a single study of head and neck masses, but no specific studies were found in relation to parotid masses [17]. That study showed a similar percentage of inconclusive results: 33.5% for free-hand versus 34.4% in the present study, and 15.3% for US-guided FNAC versus 12% in the present study. In the comparator study, the percentage of free-hand inadequate specimens was 21.5% compared to 26% in the present study, whereas the percentage of free-hand indeterminate samples was 12% compared to 7.9%. In the reference study, the US-guided FNAC inadequacy percentage was lower (3.4%) than in the current study (8.7%), whereas the indeterminate samples percentage was higher (11.9% versus 3.4%). In the comparator study, the use of US reduced inadequacy (21.5 to 3.4%) but did not affect the percentage of indeterminate samples (12 to 11.9%). Conversely, the present study showed a reduction in both inadequacy (26.5 to 8.7%) and indeterminate samples (7.9 to 3.4%) when using US-guided FNAC. It is plausible that US use could improve both inadequacy and indeterminate rates in parotid masses.

The percentages of inadequate and indeterminate samples have been documented recently in a study with immediate assessment of the FNAC sample. The percentage of inadequacy was similar in free-hand versus US-guided FNAC (11 vs. 12%), and the percentage of indeterminate samples was likewise similar (4 vs. 6%). On this basis, using ultrasound with FNAC would become beneficial when there is no immediate assessment of sample quality [18]. Likewise, another recent study has shown that ultrasound-guided FNAC performed by the same surgeons without immediate cytology assessment had a lower inadequacy rate than cytopathology free-hand FNAC with immediate sample assessment (3 vs. 7.2%); no information was provided regarding indeterminate samples in that study [10]. The current study supports this hypothesis.

The rate of inadequacies has been reported in a previous meta-analysis overall, but not separately for free-hand and US-guided FNAC. The overall results of this meta-analysis showed a 5.3% rate of non-diagnostic or indeterminate samples, which is higher than US-guided FNAC (3.4%) and lower than free-hand FNAC (7.9%) in the current study. Likewise, inconclusive results accounted for 14.7% in the meta-analysis, which is higher than US-guided FNAC (12%) and lower than free-hand FNAC (34.4%).
However, it should be considered that meta-analyses suffer from a lack of good information about non-diagnostic and inconclusive reports, as documented by the same meta-analysis, which may select the best studies and bias the pooled results [5]. The reasons for the difference between free-hand and US-guided FNAC are not well documented in the literature. They may include operator experience and pathologist experience (if the pathologist is not the operator) [19]. It is known that the inadequacy percentage can be related to the presence of a one-stop service; a recent systematic review of head and neck FNAC has demonstrated the benefit of such a service [20]. The importance of an abnormal rate of inconclusive FNAC results is that it appears to be related to an increased risk of malignancy, as documented in previous studies [8,21]. In this study, the incidence of malignant tumours after an inconclusive FNAC report was 11%, and as such we recommend that all inconclusive FNAC reports be treated with an appropriate index of suspicion. US-guided core needle biopsy is an alternative technique to consider, since it can increase sensitivity to 0.96 with a specificity of 1; however, it carries an increased risk of facial hematoma (1.6%) and facial nerve weakness (0.2%), and a possibility of tumour seeding [22,23].

To define the limits of the present study, the STARD recommendations have been followed; the QUADAS assessment guidelines were helpful as well [12,24,25]. The description of the FNAC process can introduce bias into the study, since more than 10 surgical clinicians and 3 radiologists performed FNAC over the study period. This undoubtedly increases the variability of the described FNAC procedure [12,19,26]. Selection bias is likely to have occurred when referring patients for US-guided FNAC: it is expected that patients with difficult clinical assessments, or consultants with less experience in FNAC, generated more referrals for US-guided FNAC, while patients with suspected malignancy had clinical FNAC to reduce the time to a final diagnosis. The malignancy rate sheds some light on this possible bias: in the study, 12.7% of US-guided cases were positive for malignancy versus 17.0% of free-hand FNAC. Accordingly, some bias can be expected that could minimise the difference found [12,19,26]. Selection bias could also have occurred since only 45.2% (139/307) of patients with FNAC eventually had histology results available by the first half of 2017. Some of these patients may have had contraindications for surgery, and others may have preferred radiological and clinical follow-up instead of excision. This appears to be a common situation in parotid and head and neck studies, including large series of around one thousand FNACs, with histology confirmation rates ranging from 28.6 to 52.8% [27-30]. Moreover, it is common either not to report the total number of FNACs performed or to include only the patients who underwent surgery, excluding those who had FNAC without surgery [4,10,14-16,18]. This is a critical issue, since the results of FNACs not followed by surgery or histology are rarely documented. In the present study, all FNACs have been documented to compensate for this bias (Table 1). We recommend that parotid cytology studies include the results of FNACs without histology as a quality marker.
Finally, this study was retrospective, which could have introduced bias by excluding cases in which US alone was sufficient to obviate the need for FNAC; in the clinical setting this information may be missing unless a previous US report is available. Verification bias affects the current study, as it does all FNAC studies and particularly retrospective ones, since malignancies more often have confirmed histology available than benign neoplasms. Review bias affects the study since it was not blinded. Misclassification bias is expected at a rate of 3% [12,19,26]. The median time to obtain the confirmed histological diagnosis was 54.5 days shorter for the free-hand group than for the US group (78 vs. 132.5 days). Since most parotid masses turn out to be benign, referral along the urgent rather than the suspected-cancer pathway might explain this delay. The higher percentage of malignancy in the free-hand group (17 vs. 12.7%) and its higher inconclusive rate (34.4 vs. 12%) could have prompted histology to become available more readily in the free-hand group [12,19,26].

Although, based on these results, free-hand FNAC is not recommended for parotid lesions, US-guided FNAC may not be readily available in some clinics. In such settings, the benefit of free-hand FNAC is a shorter time to cytology. Knowledge of the FNAC technique, whether US-guided or not, is a valuable skill for the head and neck specialist and should be a standard for trainees [10]. The recent proliferation of US FNAC instructional courses aimed at non-radiology trained practitioners suggests that in the future it may be more common for US FNAC to be performed by the surgeon in the head and neck clinic, should a designated head and neck radiologist not be available. However, the expert head and neck radiologist is invaluable in describing mass features, such as suspicion of malignancy and probable diagnosis, that are relevant to the management of the patient independently of the FNAC outcome. Clearly, the gold standard would be a one-stop US FNAC neck lump clinic with the option of core biopsy and with both radiologist and cytologist in attendance, but such clinics are unfortunately not the norm in most national centres. We hope this study helps the planning of equipment provision in geographic areas with a higher prevalence of parotid pathology [7,31,32].

Conclusions

Free-hand FNAC is a safe procedure with results comparable to US-guided FNAC, and it is still used to reduce cytology reporting delays. US-guided FNAC significantly reduces the number of inconclusive results and repeat FNACs compared to free-hand FNAC when immediate cytology assessment of the sample is not performed. The US FNAC neck lump clinic with immediate assessment of the sample remains the gold standard, but it is not feasible in many departments. Further training of surgeons in US may increase the use of US-guided FNAC in the head and neck clinic, reducing the time to cytological diagnosis.
Contaminant and Environmental Influences on Thyroid Hormone Action in Amphibian Metamorphosis

Aquatic and terrestrial environments are increasingly contaminated by anthropogenic sources that include pharmaceuticals, personal care products, and industrial and agricultural chemicals (e.g., pesticides). Many of these substances have the potential to disrupt endocrine function, yet their effect on thyroid hormone (TH) action has garnered relatively little attention. Anuran postembryonic metamorphosis is strictly dependent on TH, and perturbation of this process can serve as a sensitive barometer for the detection and mechanistic elucidation of TH-disrupting activities of chemical contaminants and their complex mixtures. The ecological threats posed by these contaminants are further exacerbated by changing environmental conditions such as temperature, photoperiod, pond drying, food restriction, and ultraviolet radiation. We review the current knowledge of several chemical and environmental factors that disrupt TH-dependent metamorphosis in amphibian tadpoles as assessed by morphological, thyroid histology, behavioral, and molecular endpoints. Although the molecular mechanisms of TH disruption have yet to be determined for many chemical and environmental factors, several affect TH synthesis, transport, or metabolism with subsequent downstream effects. As molecular dysfunction typically precedes phenotypic or histological pathologies, sensitive assays that detect changes in transcript, protein, or metabolite abundance are indispensable for the timely detection of TH disruption. The emergence and application of 'omics techniques—genomics, transcriptomics, proteomics, metabolomics, and epigenomics—to metamorphosing tadpoles are powerful emerging assets for the rapid, proxy assessment of toxicant or environmental damage for all vertebrates including humans. Moreover, these highly informative 'omics techniques will complement morphological, behavioral, and histological assessments, thereby providing a comprehensive understanding of how TH-dependent signal disruption is propagated by environmental contaminants and factors.

INTRODUCTION

Thyroid hormone (TH) signaling is a cornerstone of the molecular events that mediate the profound morphological changes characteristic of early vertebrate development (1). The obligate requirement for TH is perhaps best exemplified by metamorphosing anuran amphibians, for which the essential stimulation by TH initiates transitions from larval to juvenile stages under conducive environmental conditions (2). Amphibians undergo complex and comprehensive morphological changes as functionally athyroid premetamorphic tadpoles progress through prometamorphosis (with concurrent, increasing endogenous TH levels) and into juvenile frogs after metamorphic climax (Figure 1) (4). These changes encompass the coordinated maturation and remodeling of organs, de novo generation of limbs, regression of the tail, and the consequent alteration in behavior, diet, and niche as most aquatic tadpoles develop into more terrestrial-dwelling frogs (Figure 1) (5).

TH production is controlled by the hypothalamic-pituitary-thyroid (HPT) axis (Figure 2). The hypothalamus stimulates the pituitary with corticotropin releasing factor (CRF) to release thyroid stimulating hormone (TSH). TSH promotes the synthesis of TH in the follicular cells of the thyroid gland (2).
The central dogma of TH signaling is that the newly synthesized prohormone thyroxine (T4) is transported from the thyroid gland by transporter proteins (e.g., transthyretin). Once at the destination peripheral tissue, T4 is converted into its more active form, 3,3′,5-triiodothyronine (T3), by the enzymatic activity of deiodinases (Figure 2). Additionally, the bioactivity of T4 without conversion has recently been demonstrated (6-9). TH binds its TH receptors (TRs), TRα and TRβ, which are constitutively bound to cognate receptor elements that regulate genes sensitive to TH. Metamorphosis is initiated in anurans upon TH production, which stimulates gene expression cascades and subsequent proteomic and metabolomic alterations (Figure 2) (10,11).

FIGURE 1 | Thyroid hormone (TH) levels and key morphological hallmarks during frog postembryonic development. Amphibian metamorphosis is a postembryonic process driven by TH signaling. The free-swimming tadpole (0% relative time) has virtually undetectable levels of TH. The morphological changes that occur in the development of a tadpole into a juvenile frog (100% relative time) are inextricably aligned with internal rises in TH levels. These rising TH levels drive progression through the stages of development, which can be followed through morphometric measurements including hindlimb development, forelimb emergence, tail regression, head shape changes, and thyroid follicle production. The Gosner and Nieuwkoop and Faber (NF) staging system comparisons are from Just (3).

TH metabolism is regulated through various enzymatic activities (glucuronidation, sulfation, and deiodination), which can target the hormone for degradation and thereby modulate TH activation of gene expression (Figure 2). For more detailed descriptions of thyroid hormone production, activity, and metabolism, the reader is encouraged to consult the following publications and the references therein (2,12-15).

The spatiotemporal control of TH-dependent molecular and physiological activities during metamorphosis is particularly sensitive to abiotic and xenobiotic perturbations. Although the mechanism of molecular interference is not known for most adverse exposures, disruption can potentially target any aspect of TH synthesis, activity, and metabolism (Figure 2). Such disruptions include the exposure of premetamorphic tadpoles to exogenous TH, which results in a precocious induction of metamorphosis that can be exploited to experimentally assess toxicant perturbations during this developmental period (2).

In the present review, we discuss the effects of chemical and environmental disruptors of metamorphic TH signaling on anuran amphibians. Anurans are particularly tractable for the study of TH disruption due to the absolute necessity of TH to initiate metamorphosis and, consequently, the well-demarcated developmental transitions in amphibians (11). Chemical disruption of anuran metamorphosis almost exclusively originates from anthropogenic sources: industry, agriculture, pharmaceuticals, and personal care products (PPCPs; Figure 3).

FIGURE 2 | Overview of thyroid hormone (TH) production, transport, activity, and regulation. The thyroid hormone signaling pathway involves a complex interplay between TH synthesis, transport, signal transduction, and catabolism. TH is synthesized within the hypothalamus-pituitary-thyroid (HPT) axis, where the pituitary is stimulated to release thyroid stimulating hormone (TSH) by corticotropin releasing factor (CRF) from the hypothalamus.
TSH induces the production of thyroxine (T4) and, in lesser amounts, triiodothyronine (T3) from the thyroid gland. The production of TH self-regulates through a negative feedback loop that inhibits further CRF and TSH production. TH travels through the blood via transporter proteins to peripheral tissues, where it is imported into target cells. Here, T4 is converted to T3 by deiodinases (DIO), although T4 can bind to receptors as well. Binding of THs to TH nuclear receptors (TRs) leads to the activation of TH response genes. This change in transcript abundance results in downstream proteomic and metabolomic responses that produce the phenotypic changes resulting from the TH signal. The TH signal is also regulated within the cell by catabolism that includes processes such as sulfation, glucuronidation, and deiodination.

Additionally, environmental factors, including temperature variations and ultraviolet radiation, have demonstrated effects on metamorphosis (Figure 3). Numerous studies have examined the effects of single chemicals, complex chemical mixtures, or environmental exposures on amphibian morphology during metamorphosis, and we focus our discussion on those that have additionally demonstrated a TH-dependence of these effects. Adverse toxicant and environmental exposures can compromise other endocrine and molecular signaling pathways beyond TH, with sub-lethal physiological consequences for reproductive success, behavior, and broader dysfunction (16-19). We have restricted our discussion to select representatives from each of the major classes listed above and regret being unable to undertake an exhaustive review of all the excellent work done on TH disruptors.

The adoption of molecular biology techniques to assess the perturbation of TH-dependent metamorphosis has complemented conventional morphological characterizations and provided further insight into the sensitive responses of TH-induced gene expression (Figure 1) (20,21). We discuss how the application of quantitative polymerase chain reaction (qPCR), DNA microarrays, next generation sequencing, and other 'omics techniques can ascertain TH disruption through the timely detection of biomarkers prior to the manifestation of morphological phenotypes (11,22,23). A list of TH-responsive gene transcripts mentioned in the current review is presented in Table 1.

PHARMACEUTICALS AND PERSONAL CARE PRODUCTS

Pharmaceutical and personal care products (PPCPs) are an abundant source of diverse anthropogenic contaminants in global aquatic and terrestrial environments (24,25). Increasing evidence links TH disruption in frogs with a variety of PPCPs, some of which are highlighted below and summarized in Table 2.

FIGURE 3 | Perturbation of thyroid hormone (TH)-dependent amphibian metamorphosis by xenobiotic and abiotic exposures. Chemical and/or environmental factors can disrupt TH action at multiple points along this pathway (red arrows), although little is known about the specific mechanism of action for many factors. Due to its absolute reliance on proper TH signaling, metamorphic endpoints can be used to reveal the TH-disrupting capabilities of these factors. However, a more complete understanding of endocrine disruption and insight into modes of action can be achieved through the use of advanced techniques to assess alterations in the transcriptome, proteome, metabolome, and epigenome within metamorphosing tadpoles. CRF, corticotropin releasing factor; TSH, thyroid stimulating hormone; TR, TH receptor; PTU, propylthiouracil; ETU, ethylenethiourea; TBBPA, tetrabromobisphenol A; PBDE, polybrominated diphenyl ethers; PCB, polychlorinated biphenyls; BPA, bisphenol A; NA & PAHs, naphthenic acids and polycyclic aromatic hydrocarbons; WWE, wastewater effluent; UVBR, ultraviolet B radiation.
THs as Pollutants (T3/T4)

THs can be found as pollutants in environmental water systems. As thyroid medication is the third-most prescribed drug in Canada for women aged 25-66, TH can be found in municipal wastewater (37). Brown and Wong measured the concentrations of T4 at a wastewater treatment plant in Winnipeg, Canada and found a range of 60 to 79 ng/L (~0.1 nM), with T4 persisting through the treatment phases (38). The majority of recent studies examining precocious metamorphosis induced by THs have used physiological levels (e.g., 10-50 nM). More recently, however, studies have shown that premetamorphic tadpoles are competent to respond to the lower, more environmentally-relevant levels of T3 and T4 found in wastewater (6,7). Maher et al. found that, in Rana [Lithobates] (R.) catesbeiana, dio2 and cebp1 are responsive to as little as 0.05 nM T4 in the brain and back skin, respectively (7). Slightly higher concentrations of 0.1 nM T3 and 0.5 nM T4 led to an increased number of TH-responsive transcripts such as thrb, thibz, klf9, and rlk1 in the back skin, brain, intestine, liver, and tail fin (Table 2). In the same species, Jackman et al. found that olfactory epithelium exposed to 0.5 nM T4 also exhibited a significant increase in thrb, thra, and thibz (7). The responsiveness of TH-linked transcripts to environmentally-relevant levels of THs indicates that these low concentrations may be enough to affect metamorphosis. An early study demonstrating TH-induced metamorphosis found that premature induction resulted in mortality when TH amounts were greater than environmental levels (7).

Exposure to T3 is also associated with behavioral changes in which tadpoles lose the ability to detect a predator cue (36). Surprisingly, comparable T4 exposures had no effect on this behavioral endpoint (36). Molecular analyses of the olfactory epithelium using qPCR and RNA-seq methods revealed that this tissue is extraordinarily sensitive to both hormones and, while many gene responses were shared between the two hormones, a substantial number were unique to each, with T3 significantly affecting roughly a quarter more contigs than T4 (6,7). Notable differences in sensory perception, potassium ion transport, DNA repair, mitochondrial energetics, and transcription/RNA processing gene ontologies provide some insight into the different effects of these hormones (36). These studies accentuate that the two TH contaminants should be treated separately when examining responses to environmentally-relevant levels of THs.

Propylthiouracil and Ethylenethiourea

6-Propylthiouracil (PTU) is a TH synthesis antagonist that is clinically used to treat hyperthyroidism. Ethylenethiourea (ETU) is also an anti-thyroidal compound that, similar to PTU, inhibits thyroid peroxidase, the enzyme that synthesizes TH (39). Xenopus (X.) laevis tadpoles independently exposed to PTU and ETU had inhibited metamorphic progression (30,40). X. laevis tadpoles exposed to ETU at stage 51 exhibited delays and arrest of natural metamorphosis, as measured by forelimb emergence (21).
Histological aberrations in thyroid gland formation were evident, with increased glandular and follicle size and partial colloid depletion, following exposures to ETU and PTU (21,30). Elevated abundance of tsha and tshb transcripts was measured by qPCR in the pituitary tissue of tadpoles exposed to ETU (21). Similar metamorphic delays and aberrant thyroid gland histology were also observed in X. (Silurana) tropicalis and R. rugosa tadpoles following PTU exposures (41,42). Early prometamorphic X. laevis tadpoles (Nieuwkoop and Faber [NF] stage 54) exposed to 20 mg/L PTU did not have significantly altered thra, thrb, or klf9 transcript abundance in the brain, hindlimb, or tail (31,43). MAGEX cDNA array analysis of naturally metamorphosing X. laevis tadpoles at NF stage 54 exposed to PTU recorded a greater number of transcripts with decreased abundance than with increased abundance in the brain at 24, 48, and 96 h post-treatment (Table 2) (32). Differential transcription was ontologically associated with transcriptional regulation at 24 h and with transcription, hormonal regulation, and structural proteins at 96 h (32). Correspondence analysis was used to identify possible metamorphic biomarker candidates, and qPCR analyses confirmed the increased expression of myelin basic protein (mbp) and myelin proteolipid protein (plp) in the brain upon PTU exposure (Table 2) (32). Using similar experimental conditions, the PTU-dependent effects were further examined in the X. laevis hindlimb and tail (34). Seven transcripts were identified by cDNA arrays to have differential abundance in the hindlimb at 24 and 96 h post-exposure; these were associated with hormonal regulation and structural proteins at 24 h and with protein processing, transcription, and transport and binding at 96 h (Table 2) (34). Using cDNA arrays, four transcripts were detected to have differential levels in the tail at 48 h and were linked to transcription, cell growth control, and transport and binding ontologies (Table 2) (34). Potential biomarkers were screened using qPCR, and cytokeratin type I (krt1) transcripts were elevated significantly in both the hindlimb and tail (Table 2) (34).

Naturally metamorphosing X. laevis tadpoles exposed to ETU exhibited developmental arrest and aberrant thyroid histology: goiter formation, colloid depletion, and follicular cell hypertrophy and hyperplasia (35). Treatment with this goitrogen induced significant decreases in thrb, klf9, pcna, mcm2, and kif2C, and increased dapl1 transcript abundance in the brain as measured by qPCR (35). ETU treatment also resulted in increased tshb transcripts in the pituitaries. A qPCR candidate biomarker screening was performed on thyroid tissue, and 49 of 60 genes had significantly differential abundance following ETU exposure compared to the controls (35). Of these, 43 genes had increased transcript abundance, while six were decreased. These ETU-induced differential transcripts were ontologically associated with the synthesis, secretion, and metabolism of THs, protein synthesis and transport, growth arrest, apoptosis, and cellular stress responses (35).

Methimazole

Methimazole is an established disruptor of amphibian HPT axis function and has been frequently used as a metamorphosis inhibitor (30). Similar to PTU and ETU, methimazole is a goitrogen and anti-thyroid drug that affects TH signaling by inhibiting thyroid peroxidase (44). Exposure to methimazole for 14 days during metamorphosis resulted in a significantly decreased metamorphic rate in pre- and prometamorphic X. laevis tadpoles, together with thyroid gland hypertrophy and follicular cell hyperplasia (Table 2) (30,33).
The molecular effects of up to 72 h of methimazole exposure on early prometamorphic X. laevis tadpoles were queried by qPCR analysis of known TH-regulated genes. Zhang et al. found a significant decrease in thra and app gene expression in the brain; a decrease in thra and increases in ipo and krt1 mRNAs in the hindlimb; and a decrease in ipo transcripts in the tail. In the brain, an increase of 20 and a decrease of 76 gene transcripts related to transcription, hormonal regulation, and structural pathways was observed (32). In the hindlimb, the 11 increased transcripts were related to cell growth control, hormonal regulation, protein processing, signal transduction, structural, transcription, and transport/binding pathways. The tail had four increased and one decreased transcript related to hormonal regulation, structural, protein processing, and signal transduction pathways (34). Ontological analyses of differentially affected brain transcripts were associated with apoptosis/protein processing, cell growth control, chromatin structure, hormonal regulation, metabolism, signal transduction, structural, transcription, translation, and transport/binding pathways, with qPCR analysis revealing an increase in ipo and krt1 and a decrease in thra mRNA levels.

Estrogen

The steroid hormone and TH axes are closely related. As the synthesis of both endocrine hormones is controlled through hypothalamic-pituitary axes and both bind nuclear receptors that stimulate gene expression cascades, it is unsurprising that there is some cross-talk between these two pathways. The majority of studies that have examined the effects of 17β-estradiol (E2) or the synthetic estrogen 17α-ethinylestradiol (EE2) on metamorphosis have found a decreased metamorphic rate (Supplementary Table 2). The thyroid itself showed no change in the number of follicles or overall thyroid volume, although there was a decreased follicular cell height upon exposure to EE2. To determine the response of the gene program, Jackman et al. investigated the transcriptomic effects of E2 in the olfactory epithelium of R. catesbeiana and found that none of the classic TH-response genes, such as thra, thrb, thibz, or dio2, changed upon an acute exposure to either environmentally-relevant or higher levels of E2 (6). This is corroborated by Bulaeva et al., who exposed R. sylvatica to much higher levels of E2 and still saw no significant response of thrb (50). With more in-depth RNA-seq analysis, Jackman et al. found 112 significantly changing contigs that also responded to exposure to T3 and/or T4 (6). However, compared to the almost 45,000 contigs that respond to TH exposure, this cross-talk signaling is quite minimal. As estrogens are found throughout our wastewater systems (51), it is imperative to determine the mechanism by which estrogens affect TH signaling and proper development.

Triclosan and Triclocarban

Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol; TCS] is a bactericidal and antifungal agent that is ubiquitously incorporated into thousands of industrial and consumer products including clothing, toys, cleaning supplies, personal care products (i.e., soap, shampoo, toothpaste, etc.), and surgical soaps and sutures (52,53), with 10.5 million pounds produced globally in 2015 (53).
Triclosan and triclocarban (TCC), another widely used antibacterial in PPCPs, are the most common broad-spectrum antimicrobial agents used in household items and PPCPs (54). While sewage treatment removes most triclosan, it still contaminates sewage effluent and, consequently, aquatic environments (24). The U.S. Food and Drug Administration banned the use of TCS, TCC, and 17 other antimicrobials in personal wash products in 2016 to minimize the exacerbation of bacterial resistance and health risks, including endocrine disruption (54,55). TCS has structural similarity to TH, and disruption of TH action in frogs provided some of the earliest evidence of this endocrine disruption. Low and environmentally-relevant amounts of TCS can affect different aspects of TH signaling in amphibians (30,41,51-57).

Exposure of premetamorphic R. catesbeiana tadpoles to environmentally-relevant amounts of triclosan can induce altered growth and transcript responses that are exacerbated upon T3-induced metamorphosis (28). The combinatorial effects of TCS and T3 on tadpoles resulted in greater body mass reductions and precocious metamorphosis. These phenotypic changes were accompanied and preceded by changes in TH-responsive gene expression (28). Expression of thrb was transiently decreased in the tadpole tail at 48 h, while the brain had increased expression of thrb and proliferating cell nuclear antigen (pcna) transcripts. Under comparable TCS ± T3 treatments, cultured X. laevis XTC-2 cells had increased expression of thra, thrb, and klf9 after exposure to both chemicals, supporting the developmentally-sensitive TCS effects in different anuran species (28). Recent work demonstrated that X. tropicalis exposed to TCS levels considered safe in drinking water developed metabolic pathologies resembling prediabetes and produced progeny exhibiting delayed metamorphosis and diminished reproductive success (58).

Adaptation of the Amphibian Metamorphosis Assay for the Pacific tree frog, Pseudacris (P.) regilla (TREEMA), revealed comparable morphological and molecular disruption by TCS when administered in conjunction with T4 (27). By the second day of exposure, TCS enhanced the T4-stimulated increases in thra, thrb, and pcna in the tadpole brain and disrupted expression of TH-responsive genes in the tail (Table 2) (27). The earliest morphological effects of TCS and T4 exposures were evident at day 4, with increased foot paddle formation and later impairments in developmental stage progression. Tadpoles exposed to both TCS and T4 also had accelerated development and an increased hindlimb length/snout-vent length ratio (27). As in other anurans, the perturbed metamorphic profile in P. regilla is indicative of disrupted developmental coordination (27). Exposure of X. laevis tadpoles to TCS resulted in increased thrb mRNA in the tail fin after 21 days, followed by thyroid gland hypertrophy at 32 days (Table 2) (56,57,59,60).

Methyl triclosan (mTCS) is a bacterial metabolite of TCS and is more persistent in the environment than TCS, which is readily degraded by photolysis (61). This metabolite, along with TCS and TCC, was tested using premetamorphic R. catesbeiana cultured tail fin (C-fin) assays. TCS did not affect TH-responsive rlk1 or thrb transcript abundance, but did increase hsp30 levels (Table 2) (26). mTCS exposure increased both rlk1 and thrb transcripts in the absence of T3 (26), suggesting that some, but not all, of the TCS activity observed in intact animals may be due to conversion to mTCS.
TCC exposure caused a reduction in rlk1 transcripts and an increase in hsp30 mRNA (Table 2) (26), indicating a TH-like activity of this antimicrobial agent.

Ibuprofen

Ibuprofen is a commonly used non-steroidal anti-inflammatory analgesic that is now a prevalent component of the complex municipal wastewater effluents that permeate aquatic environments (62,63). Ibuprofen is primarily considered to act through prostaglandin synthesis inhibition; however, it can also interfere with multiple regulatory pathways (29,64). Little is known about the effects ibuprofen can have on aquatic organisms during sensitive developmental periods, which is concerning given the multiplicity of molecular pathways ibuprofen targets and its abundance in global freshwater environments.

Exposure of R. catesbeiana tadpoles to environmentally-relevant concentrations of ibuprofen disrupted TH-stimulated metamorphic reprogramming of the liver transcriptome and in C-fin assays (Table 2) (29). MAGEX cDNA microarray analyses of tadpole livers exposed to ibuprofen and T3 detailed the molecular pathways affected by these combined exposures: transcription, calcium transport, proteolysis, cell cycle, and protein phosphorylation. Additionally, ibuprofen treatment affected pathways related to oxygen transport, arginine metabolism, and urea production (29). Ibuprofen exposure of T3-stimulated tadpoles enhanced the upregulated expression of thra and thrb. Quantitative nuclease protection assay analysis of C-fin cultures showed that ibuprofen exposure alone could increase expression of dio3, while combined ibuprofen and T3 treatment resulted in an increase in hsp30 transcripts, indicating potential tissue-specific responses (29). Ibuprofen can also affect transcriptional programs in the tail fin and back skin of R. catesbeiana under temperature-dependent, T3-stimulated conditions, and this is further discussed below (65).

INDUSTRIAL AND AGRICULTURAL CHEMICALS

Polychlorinated Biphenyls (PCBs)

Polychlorinated biphenyls (PCBs) are ubiquitous environmental contaminants that were widely used in capacitors and transformers between 1929 and 1979 (66). Concern about the endocrine disrupting potential of PCBs resulted in their import and use being banned in North America by 1979. However, the extreme environmental persistence and bioaccumulation of PCBs continue to plague us (66). With the effects of PCBs on TH homeostasis well-characterized (67), there was a clear need to investigate the effect of these compounds on amphibian metamorphosis. As the toxicity of PCBs is typically due to bioaccumulation over time, Gutleb et al. examined the effects of ingested PCBs in R. temporaria and X. laevis after an exposure of either 10 days or several weeks (68). They found that dietary exposure to a technical mixture of PCBs, Clophen A50, decreased the metamorphic rate in both species after 10 days. Furthermore, exposure to PCB 126 decreased the rate of metamorphosis after several weeks (Supplementary Table 1). In a later study, Gutleb et al. showed that immersion in PCB 77 and in apolar sediment extract from PCB-contaminated ponds significantly reduced the rate of metamorphosis in X. laevis (Supplementary Table 1) (69). Gutleb et al. confirmed these effects using a X. laevis thiourea-synchronized metamorphosis assay and a 60 day dietary exposure. In this study, they found that Clophen A50 and an apolar sediment extract from polluted ponds decreased the rate of metamorphosis (Supplementary Table 1) (70).
To assess the effects of PCB exposure on TH-mediated gene expression, Lehigh et al. examined the toxicity of another technical mixture of PCBs, A1254 (71). qPCR analysis of pooled mRNA from X. laevis tadpoles showed that A1254 exposure decreased dio2 and dio3 expression and increased ttr expression (Table 3). These results, in combination with the previous studies performed by Gutleb et al., show that mixtures of PCBs exert significant effects on TH-driven amphibian metamorphosis (Table 3).

Perchlorate

Perchlorates, such as ammonium perchlorate, potassium perchlorate, and sodium perchlorate, are well-known as powerful oxidizing agents, which has led to their widespread usage in explosives such as rocket propellants, fireworks, and signal flares (97). They are also used to treat TH diseases (98), as perchlorates competitively inhibit the uptake of iodine by the sodium-iodide symporter, leading to a lack of iodine for the production of THs (99). Unfortunately, due to its widespread industrial use, perchlorate is a persistent pollutant. As amphibians have an almost identical TH system to humans, it is unsurprising that perchlorates also affect their TH-regulated processes [reviewed by Carr and Theodorakis (100)]. Perchlorate exposure reduces TH production in vivo (104) and in vitro (105). This indirectly results in the enlargement of the thyroid glands as well as hyperplasia and hypertrophy of thyroid follicles due to the lack of negative regulation of TSH (20,35,104,106,107). Predictably, the decrease in T4 levels also leads to decreased metamorphic rates (35,101,102,106,107).

The involvement of the TH-induced gene expression program in this metamorphic delay seems to be organ-dependent. Using cDNA array analyses of acute exposures to sodium perchlorate in X. laevis, Helbing et al. found that the brain was the most responsive, with a maximum of 39 responsive genes involved mostly in transcription, transport/binding, apoptosis/protein processing, and structure (Table 3) (32). Tshb mRNA significantly increased after 48 h, suggesting that even an acute exposure leads to dysregulation of the negative feedback loop. The cDNA array indicated only 8 and 4 responsive genes in the tail and hindlimb, respectively (34), suggesting that these tissues may be less responsive to acute exposures of perchlorate. However, with chronic exposures to environmentally-relevant levels of perchlorate, there is a more consistent response. Flood and Langlois (108) observed decreased transcript levels of the TH-responsive genes thra and thrb in the liver of X. tropicalis chronically exposed to potassium perchlorate. A similar result was seen in the brain of X. laevis chronically exposed to sodium perchlorate (Table 3) (35). Bulaeva et al. (50) found that R. sylvatica had decreased thrb transcript levels in the tail and liver, which could still be observed even 40 days after a 2 week exposure to sodium perchlorate, indicating that the effects of perchlorate may be persistent and possibly irreversible.

Brominated Flame Retardants (BFRs)

Brominated flame retardants (BFRs) have been and continue to be ubiquitously incorporated into a variety of items to confer fire resistance (109). These materials include textiles, plastics, electronic circuitry, wood, paper, dust, and, inadvertently in the 1970s, livestock feed (109-111). Roughly 5,000,000 metric tons of bromine are produced worldwide annually, with demand increasing each successive year (111).
BFRs include polybrominated diphenyl ethers (PBDEs), polybrominated biphenyls (PBBs), tetrabromobisphenol A (TBBPA), and hexabromocyclododecane (HBCD). Depending upon the mechanism by which BFRs are integrated within materials, they can be classified as brominated monomers, reactive (i.e., TBBPA), or additive (i.e., PBDE, HBCD). BFRs can readily leach from materials if they are not strongly chemically bound to the composite polymer, thereby contaminating environmental biota and leading to mortality, compromised development, and other toxicity-dependent pathologies among animal populations. A growing concern is that increasing amounts of BFRs have been found in the environment throughout different trophic levels, including humans, underscoring the need to better understand the biological implications of BFRs (111). Many BFRs are lipophilic, and this facilitates their persistent bioaccumulation in the biota of both aquatic and terrestrial environments (112). Due to their deleterious effects, penta- and octa-BDE BFRs and PBBs have since been banned, which has spurred the development of novel BFRs (111). However, the environmental effects of these novel BFRs, which are not limited to TBBPA derivatives, are under increasing scrutiny (110,113). Herein, we review BFRs that have a demonstrated effect on amphibian metamorphosis (Table 3).

Polybrominated Diphenyl Ethers (PBDEs)

PBDEs are widely disseminated throughout invertebrates, vertebrates, sediments, and diverse environments, including Arctic marine biota (72). PBDEs can readily accumulate and magnify within trophic levels (114). Mammalian biotransformation of PBDEs to hydroxylated metabolites by cytochrome P450 enzymes results in products that are more toxic than the parent congeners. As previously reviewed, these metabolites can disrupt thyroid homeostasis via several mechanisms, including decreased free and total TH through the competitive binding of thyroid transport proteins and perturbed TH metabolism through glucuronidation, sulfation, and deiodination (72). Notably, there are strong structural similarities between THs and PBDEs.

X. laevis tadpoles (NF stage 50) fed 1,000 or 5,000 µg/g of a commercial mix of PBDE congeners, DE-71, exhibited significant inhibition of metamorphosis as displayed by delayed limb development and tail resorption, lack of pigmentation, and head shape changes (72). No major cellular or morphological differences in the thyroid gland were observed following histological analyses. Intraperitoneal injections of DE-71 and BDE-47, but not BDE-99, resulted in delayed metamorphosis through significant reductions in tail resorption (72). Both BDE-47 and BDE-99 are major congeners of DE-71. Although the morphological results of this study implied the disruption of TH activity, such involvement could not be conclusively ascertained. R. pipiens tadpoles fed lower, environmentally-relevant amounts of DE-71 from Gosner stage 25 to stage 42 had metamorphic climax delayed by 22-36 days (3,115). The elimination of PBDEs following depuration was studied in R. pipiens tadpoles that had consumed environmentally-relevant concentrations of DE-71 for 50 days at Gosner stage 25. Following 28 days of depuration, tadpoles had removed more than 94% of PBDE congeners from their bodies (114). The ability to eliminate PBDEs from tissues can vary according to life stage: metamorphosing frogs (Gosner stages 42-46) were unable to eliminate PBDEs following depuration, whereas juvenile frogs eliminated 89.7% of PBDEs over a 70 day depuration (114).
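To compare these depuration figures across life stages, they can be converted into approximate elimination half-lives. The short sketch below assumes simple first-order elimination kinetics, which is our assumption for illustration; the cited studies report only percent removal, not rate constants.

```python
# Convert reported depuration percentages into approximate elimination
# half-lives, assuming first-order kinetics: C(t) = C0 * exp(-k t).
from math import log

def half_life_days(fraction_removed: float, days: float) -> float:
    """t1/2 implied by removing `fraction_removed` of the burden in `days`."""
    k = log(1.0 / (1.0 - fraction_removed)) / days  # k = ln(C0/C) / t
    return log(2.0) / k

print(half_life_days(0.94, 28))    # tadpoles:  ~6.9 day half-life
print(half_life_days(0.897, 70))   # juveniles: ~21.3 day half-life
```

Under this assumption, tadpoles cleared PBDEs several-fold faster than juveniles, while metamorphosing animals showed essentially no elimination at all.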
Wild R. limnocharis adult frogs found proximal to contaminated e-waste recycling sites similarly showed reduced PBDE levels following 54 days of depuration (116).

A link between PBDE-altered amphibian metamorphic morphology and disrupted TH metabolism was demonstrated by the treatment of X. laevis tadpoles with increasing concentrations of BDE-47 (73). After a 21 day BDE-47 dietary exposure, tadpoles exhibited reduced developmental stage progression and decreased hindlimb length. Histological analysis of the thyroid gland showed decreased follicular epithelial cell height and a smaller thyroid lobe area in tadpoles exposed to BDE-47 (73). Corresponding reductions in hindlimb length were observed in X. tropicalis tadpoles following BDE-47 exposure (117). BDE-99 exposure in X. tropicalis similarly resulted in slower developmental stage progression and reduced hindlimb length (117). qPCR analyses in X. laevis to assess transcriptomic changes in the tails and livers of stage-matched tadpoles between NF stages 52 and 56 found tissue-specific TH-dependent regulation (73). No significant differences were observed in tail thra, thrb, dio1, or dio2 transcripts. However, the brain was sensitive to BDE-47 treatment, and significant reductions were observed in thra, thrb, klf9, tshb, dio2, mct8, and oatp1c1 mRNA (73). The diversity of affected transcripts underscores the broad extent to which thyroid metabolism is adversely affected by BDE-47.

Tetrabromobisphenol A (TBBPA)

Tetrabromobisphenol A (2,2′,6,6′-tetrabromo-4,4′-isopropylidenediphenol; TBBPA) is one of the most abundantly used BFRs, with 150,000 metric tons produced each year. Although the majority of TBBPA is covalently bound within polymer materials, ~10-20% can leach into the proximal environment (118,119). As such, TBBPA is found dispersed within environments around the world and in the tissues of affected organisms (112,119). TBBPA was introduced as a replacement for PBDEs, in part due to its comparatively short half-life in mammals (120). However, TBBPA has been detected in environmental samples and humans, including breast milk (121). TBBPA bears structural similarity to T4 and binds human transthyretin more strongly than T4 does (122), but is a weak competitor of T3 for binding TRα in rat (123). TBBPA is also reported to disrupt T3 binding to TRs in rat (123). TBBPA antagonizes tail resorption during TH-mediated metamorphosis in the wrinkled frog, R. rugosa, and the T3-associated gene expression of thrb and thibz in X. laevis (76,123). TBBPA can also act as a TH agonist during metamorphosis in P. regilla (75). These contradictory findings may reflect unique endocrine sensitivities due to differential anuran metamorphic trajectories (124). P. regilla tadpoles (Gosner stages 30-31) exposed to 10 nM TBBPA had increased tail regression and mmp9 expression following T3-induced metamorphosis. MMP9 is a metalloproteinase involved in the deconstruction of the extracellular matrix and is required for tail resorption (125). Following 100 nM TBBPA exposure in the context of T3-stimulated metamorphosis, thra mRNAs were significantly increased in the brain relative to TBBPA exposure alone, while the abundance of pcna transcripts was decreased in the tail (75).
TBBPA is proposed to have developmental stage-specific effects on X. laevis metamorphosis, potentially related to endogenous levels of TH. During pre- and prometamorphosis, endogenous levels of TH are low, and TBBPA exposure was associated with increased hindlimb length and the promotion of development. However, during metamorphic climax, when TH amounts are maximal, developmental stage transitions were delayed (80). An additional potential confounder may be the amount of TBBPA that metamorphic anurans are exposed to (74). Molecular analysis of Pelophylax (P.) nigromaculatus intestines showed that exposure of tadpoles to low concentrations of TBBPA (1 nM) had agonistic effects on the T3-induced expression of TH-response genes (Table 3). In contrast, higher TBBPA concentrations (100-1,000 nM) had antagonistic effects in the same experimental paradigm (74). The molecular mechanisms by which TBBPA may act as both an agonist and antagonist of tissue-specific development while endogenous TH levels vary remain to be ascertained.

Bisphenol A (BPA)

Bisphenol A (4,4′-isopropylidenediphenol; BPA) is a widely used monomer in the manufacture of polycarbonate plastics, epoxy resins, and food containers. More than 2.2 million metric tons of BPA were produced globally in 2009. Since the 1930s, BPA has been known to be xenoestrogenic, and growing concerns about human exposure to BPA culminated in the US Food and Drug Administration banning BPA from baby bottles in 2012 (126). Despite debates between food and drug administrations and researchers about the endocrine disrupting effects of BPA, this monomer has been implicated in a plethora of etiologies including diabetes, obesity, and hypothalamic neuroendocrine dysfunction. Early developmental periods are also ostensibly sensitive to the effects of BPA (127-129). BPA is found ubiquitously throughout the environment: soils, surface waters, sewage, and more. Detoxification of BPA within organisms occurs through glucuronidation, and the biotransformed oxidative metabolites that result can have greater endocrine disrupting effects than the parent BPA or its analogs (130). While the effects of BPA on estrogenic dysregulation are well-studied, BPA can also affect the signaling pathways of THs, androgens, and glucocorticoids (130).

BPA exposure inhibits amphibian metamorphosis by targeting TH signaling and is extensively reviewed in Heimeier and Shi (131). X. laevis embryos exposed to BPA displayed metamorphosis delayed by 2-4 stages at NF stages 52-54 (Table 3) (93). Tadpoles exposed to BPA had similarly delayed natural and T4-induced metamorphosis. Cultured tadpole tails treated with BPA had repressed T3-induced tail shortening and BPA-inhibited thrb expression in the presence and absence of T4 stimulation (93). Twenty-one day exposure of X. laevis tadpoles to BPA concentrations equivalent to human infant exposures also protracted T3-induced metamorphosis by 8 stages and stalled intestinal development (Table 3) (94). By 4 days, however, maladaptive molecular effects were observed as reduced expression of the early T3-responsive genes st3 and thibz and the late responders mmp2 and timp2 in the intestine following combined BPA and T3 exposures. An oligo DNA microarray analysis of the intestinal transcriptome confirmed that BPA antagonizes the expression of genes involved in T3 signaling pathways (Table 3) (94).

Genistein

Genistein is a plant-synthesized isoflavonoid found in high amounts in soy products (132).
As a phytoestrogen, the endocrine disrupting capabilities of this compound have been well-studied for estrogen signaling [reviewed by Henley and Korach (133)]. However, its effects on TH signaling have been far less studied. Ji et al. acutely exposed premetamorphic R. catesbeiana tadpoles to T3 and then cultured the tail tips in the presence or absence of genistein to determine the effects of this contaminant on TH-induced metamorphic changes (82). Exposure to genistein abolished the tail tip regression seen upon exposure to T3 alone. This morphological response correlated with a decreased abundance of the thrb transcript (Table 3). In support of this finding, Hinther et al. also found decreased thrb upon exposure of cultured R. catesbeiana tail fin to genistein, both with and without T3 induction (Table 3) (81).

A possible mechanism by which TH signaling is disturbed is through modulation of phosphorylation pathways. Genistein is a tyrosine protein kinase inhibitor (134), as demonstrated in this amphibian model by reduced overall tyrosine phosphorylation in T3-exposed R. catesbeiana tail tips cultured with genistein (82). As tyrosine phosphorylation of protein kinase C (PKC) is known to increase the activity of this kinase (135), the decreased tyrosine phosphorylation induced by genistein is correlated with decreased PKC activity. It is postulated that this phosphorylation pathway impacts TH signaling through PKC serine phosphorylation of TRα. Upon acute exposure to T3, there is a significant increase in serine phosphorylation in R. catesbeiana tail tips, which can be reversed with PKC inhibitors (82). This response is attenuated by exposure to genistein, which likely leads to the observed decrease in the TH response gene thrb. Genistein can also affect thyroid peroxidase function in mammalian systems [reviewed by Doerge and Sheehan (136)]; however, whether this affects TH signaling in amphibians has yet to be determined. Further studies are needed to determine the role of phosphorylation pathways in cellular-level TH signaling and whether other areas of the greater TH signaling pathway are affected by this contaminant.

Phthalates

Phthalates are plasticizers added to increase the flexibility of plastics. These contaminants can be found in the air, soil, freshwater, and saltwater (137-139). The ubiquity of phthalates in the environment is concerning, as they have been shown to have TH disrupting effects [reviewed by Mathieu-Denoncourt et al. (140)]. Using a T3-activated X. laevis reporter cell system (Table 3), Sugiyama et al. examined the effects of five different phthalates on T3 signaling within the constructed cells and found that di-n-butyl phthalate, n-butylbenzyl phthalate, and dicyclohexyl phthalate caused a decrease in activity (95). These TH-disrupted responses were all associated with a decrease in endogenous thrb mRNA in the reporter cells. N-butylbenzyl phthalate also led to decreased thrb in a T3-induced whole tadpole exposure. In line with these findings, Shen et al. found that chronic exposure of X. laevis tadpoles to di-n-butyl phthalate and its metabolite mono-n-butyl phthalate resulted in decreased thrb (96). The mechanism by which phthalates disrupt TH signaling within the cell likely involves the regulation of TRs. Using a TR-mediated reporter gene assay, Shen et al. found that dibutyl phthalate, mono-n-butyl phthalate, and di-2-ethylhexyl phthalate demonstrated TRβ agonist activity (141).
As TRs can be regulated in various ways, Shen et al. queried the involvement of the TR corepressor, silencing mediator of retinoid and thyroid hormone receptors (SMRT), in phthalate-dependent TR regulation and found that both di-n-butyl phthalate and mono-n-butyl phthalate increased the interaction between SMRT and TR in a mammalian two-hybrid assay (Table 3) (96). Furthermore, in the amphibian system, decreased methylation of the thrb promoter region was found upon exposure to mono-n-butyl phthalate, which could be involved in TR-mediated regulation of the thrb gene. However, the same result was not seen with di-n-butyl phthalate, indicating potential differences in phthalate response (96). The involvement of other epigenetic mechanisms, such as histone post-translational modification, has yet to be elucidated. In contrast to the aforementioned studies, Mathieu-Denoncourt et al. found that chronic exposure to monomethyl phthalate, a dimethyl phthalate metabolite, led to an increased metamorphic rate in X. tropicalis with no associated changes in TH response gene expression (Supplementary Table 1) (142). This suggests that various phthalates may have different mechanisms of disruption and/or that the timing of TH response gene effects follows differing response kinetics not captured in the study. Further work on these substances in a broader range of amphibian species is warranted.

Metals

Metals acting as environmental contaminants stem from a variety of natural and anthropogenic sources (143). Heavy metals are notable environmental endocrine disrupting chemicals (EDCs) and can dysregulate TH-driven amphibian metamorphosis upon exposure. Cadmium (Cd) exposure has been shown to significantly decrease metamorphosis in B. americana (144), as well as completely block the completion of metamorphosis in other amphibians such as Pleurodeles waltl (145). There is a significant correlation between Cd concentration and decreasing rates of metamorphosis in X. laevis (146). Furthermore, the effects of Cd exposure are exacerbated in male X. laevis tadpoles when the environmental pollutant estradiol-17β (E2) is present (147). Sun et al. observed significant decreases in dio2, thra, and thrb transcripts following Cd exposures in B. gargarizans at concentrations an order of magnitude lower than previously reported to decrease metamorphic rate (83). At the lowest Cd concentration, an increase in thra expression was observed, but this may be due to the use of actb as a single normalizer, as actb itself can be TH-responsive (87). Thyroid histology revealed significant follicular cell hyperplasia in the cadmium-exposed animals.

Copper is naturally ubiquitous in the environment, and influxes of anthropogenic copper occur due to soil disturbances or agricultural runoff (148). In several Ranidae species and B. gargarizans, chronic exposure to copper can significantly delay the rate of metamorphosis (Table 3) (85,148,149). Wang et al. showed that copper exposure in B. gargarizans significantly increased dio3 expression and significantly decreased dio2, thra, and thrb expression at copper concentrations greater than those that caused metamorphic delay (85). Although a transcriptional response would be expected at lower concentrations, it is possible that measurements were made too late to observe significant changes in TH-related transcription, as tadpole exposures commenced at Gosner stage 26 but transcript quantification did not occur until stages 42 and 46. Copper exposure also induced follicular cell hyperplasia in the thyroid gland.
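Most transcript responses summarized in this section were quantified by qPCR, typically via the comparative Ct (2^-ΔΔCt) method, which makes the actb normalizer caveat noted above concrete: any treatment effect on the reference gene propagates directly into the apparent fold change of the target. A generic sketch with hypothetical Ct values, not data from the cited studies:

```python
# Generic 2^-ddCt (Livak) fold-change calculation, as used for the qPCR
# results discussed in this review. Ct values below are hypothetical and
# assume ~100% amplification efficiency for both target and reference.

def fold_change(ct_target_ctrl, ct_ref_ctrl, ct_target_trt, ct_ref_trt):
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # target normalized to reference
    d_ct_trt = ct_target_trt - ct_ref_trt
    return 2 ** -(d_ct_trt - d_ct_ctrl)

# A target unchanged by treatment (Ct 24.0 in both conditions) with a
# stable reference (Ct 18.0) correctly reports no change:
print(fold_change(24.0, 18.0, 24.0, 18.0))  # 1.0
# If the treatment suppresses the reference itself (its Ct rises to 19.0),
# the same unchanged target reads as a spurious 2-fold "induction":
print(fold_change(24.0, 18.0, 24.0, 19.0))  # 2.0
```

This is why multiple, treatment-stable reference genes are preferred when the exposure itself may be TH-disrupting.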
Chronic mercury exposure exhibited a similar phenomenon in B. gargarizans as did copper; metamorphosis was delayed at lower concentrations than those that caused significant decreases in dio2, thra, and thrb expression and induced follicular cell deformation in the thyroid gland (Table 3) (86). Again, transcript measurements were performed much later than the initial exposure, such that lower-concentration transcript effects may have been missed. Other metals that resulted in a delay in metamorphosis include lead (Pb) in R. pipiens (Table 3), iron (Fe; ionized or ore particulates) or manganese (Mn) in R. catesbeiana, and depleted uranium (U) in X. laevis tadpoles (Table 3 and Supplementary Table 1); further research on their effects on TH signaling is needed (150-152).

Nanoparticles

Several metals have been manufactured as constituents of nanoparticles. Nanoparticles are any particles that have at least one dimension <100 nm (153). These nanoparticles possess unique properties compared to their ionic counterparts that make them highly desirable for wide use in industrial and medical applications. However, this has led to significant environmental contamination by nanoparticles, and the endocrine disrupting potential of nanoparticles has been well-documented (153). As nanoparticles have unique aggregation and surface charge distributions, their exposure often results in different endocrine disrupting effects compared to their corresponding metal ions (154). It is important to study the endocrine disrupting potential of metal ions and nanoparticles separately, as the effects of one are not necessarily predictive of the other. Nevertheless, few studies directly compare nanoparticle and metal ion exposures under the same conditions. Further complications in comparing the effects of nanoparticle and constituent ion exposures arise from differences in experimental conditions and species studied. Chronic exposure to zinc, copper, and titanium oxide nanoparticles can delay metamorphosis in X. laevis tadpoles (155-158). However, titanium oxide-based nanoparticles or their ionic counterparts had no effect on TH signaling in the R. catesbeiana C-fin assay (159). Chronic nanogold exposure also significantly decreased the rate of metamorphosis in R. sylvatica tadpoles (Supplementary Table 1) (160). Specific gene targets of nanoparticle endocrine disruption were investigated by Hinther et al. using a R. catesbeiana C-fin assay and 48 h exposures (84). They found that exposure to silver nanoparticles or Cd telluride quantum dots in combination with T3 significantly decreased the expression of the TH-responsive genes rlk1 and thrb (Table 3). The extent of TH-mediated gene disruption arising from 28-day nanosilver exposures was further evaluated by Carew et al. in pre- and prometamorphic X. laevis tadpoles (87). They found that, while exposure did not alter the overall rate of metamorphosis, there were transient perturbations of leg length and snout/vent length that were pre- or prometamorph-specific. Using a MAGEX cDNA array and qPCR performed on liver tissue extracted from these tadpoles, they identified 3 induced and 4 repressed transcripts in premetamorphs and 12 induced and 4 repressed transcripts in prometamorphs exposed to nanosilver (Table 3) (87). Of these, mmp9, pparg, and trip4 have linkages to TH signaling pathways.
Acetochlor

Acetochlor [2-chloro-N-(ethoxymethyl)-N-(2-ethyl-6-methylphenyl)acetamide] is a widely used preemergent herbicide and persistent organic pollutant that contaminates groundwater (161). More than 10 million kg of acetochlor are used per year in the United States, with surface water concentrations ranging from median levels of 2.7 nM (730 ng/L) to as high as 10 nM (2.7 µg/L) within the 80th percentile of measurements sampled in the Midwestern United States (162,163). Acetochlor can induce TH-dependent dysfunction and other pathologies in a variety of aquatic species (164-167). In combination with other pesticides, acetochlor may contribute to altered comorbid fungal infections in amphibians (168). Concurrent treatment of premetamorphic R. pipiens tadpoles with acetochlor and T3 resulted in the acceleration of metamorphosis, as evidenced by precocious forelimb emergence (169). As priming tadpoles with T3 prior to acetochlor treatment did not cause accelerated metamorphosis, it was concluded that acetochlor was interacting with T3 in a TR-independent manner to elicit precocious development (169). In R. catesbeiana tadpoles, exposure to environmentally relevant concentrations of acetochlor (10 nM) did not affect thrb expression in tail fin biopsies (89). However, the combined treatment of acetochlor with T3 caused a synergistic increase in thrb, which concurred with earlier morphological findings of accelerated metamorphosis (89). Acetochlor induced the upregulation of thra and thrb in the brains of athyroid premetamorphic R. catesbeiana tadpoles, and these increases were amplified upon exogenous T3 treatment (88). These results suggest a tissue-specific sensitivity to acetochlor. The thra/thrb transcript ratios were also altered, and these transcript changes were not associated with any effects on escape behavior following acetochlor treatment (88). Understanding of the TH-dependent molecular mechanisms disrupted by acetochlor was refined by cDNA microarray studies in X. laevis. Crump et al. demonstrated that changes in gene expression precede the morphological changes of T3-induced accelerated metamorphosis ensuing from acute and environmentally relevant acetochlor exposures (90). After 48 h, acetochlor exposure caused a T3-mediated increase in thra and thrb and an overall magnification of genes otherwise upregulated by T3 (90). Of interest is that genes normally downregulated by T3 showed an attenuated response in the presence of acetochlor, suggesting that acetochlor perturbs mechanisms of transcriptional regulation (Table 3). Such impairment of transcription implies that acetochlor may disrupt epigenetic modes of regulation (90). During prometamorphosis, endogenous levels of TH naturally increase, and acetochlor exposure caused an accumulation of thra and thrb transcripts in tail fin biopsies from R. catesbeiana tadpoles. The brains of these acetochlor-treated prometamorphic tadpoles were assessed after a 59-day depuration period and no significant differences were observed in thra and thrb transcripts, although the ratios between them were altered at higher acetochlor concentrations (88). No major developmental changes were observed in forelimb emergence, tail regression, or mouth development (88).

Carbaryl

Carbaryl belongs to the carbamate class of insecticides and is commonly used in agricultural and home garden applications to control insect populations (170).
Though presumed to have low toxicity, carbamates have structural similarities to organophosphate insecticides and can modify acetylcholinesterases, which has important implications for neurotransmission (171,172). Carbaryl exposure can limit the resistance of amphibians to parasitic infection, and its toxicity is exacerbated by previous Ranavirus infection of R. sylvatica (173,174). Of outstanding interest are the implications for metamorphosing organisms in carbaryl-treated areas. R. clamitans tadpoles exposed to environmentally relevant carbaryl concentrations did not have altered metamorphosis according to morphological metrics: tadpole development and time to metamorphosis (91,175). However, both short- and long-term alterations in gene expression were observed in brain and tail tissues of tadpoles acutely exposed to carbaryl at 8 and 16 weeks post-hatching (Table 3) (91). Gosner stage 25 tadpoles exposed to carbaryl for 3 days at 16 weeks post-hatching had higher thra and thrb expression in the brain at Gosner stage 46. Greater thrb expression was also observed in tadpoles exposed at 8 weeks post-hatching (91). DNA microarray analysis highlighted the persistent transcript effects of carbaryl on altered brain pathways that included transcription, signal transduction, and cell growth control. Immediately following carbaryl exposure, thra expression is increased in the tadpole tail (91). Pesticide exposures during such sensitive early developmental periods have potential consequences for the fitness and health of the organism during its lifespan.

Glyphosate and Surfactants

Glyphosate is a commonly used herbicide for both domestic and agricultural applications around the world. Many commercially available formulations, such as Roundup®, contain glyphosate, which is rendered more toxic due to the inclusion of surfactants, whose toxicity can be influenced by pH, temperature, and the species and developmental stage of exposed organisms (176,177). Several North American amphibians (R. clamitans, R. pipiens, R. sylvatica, and B. americana) exposed to glyphosate, different commercial herbicides, and the surfactant polyethoxylated tallowamine (POEA) exhibited varying sensitivities depending on developmental stage and species (92). Glyphosate alone did not elicit deleterious effects, but in combination with POEA in Roundup Original® and Roundup Transorb®, metamorphic defects were observed, particularly in R. pipiens, which was sensitive to these exposures (Table 3). Consequent to exposures at Gosner stage 25, tadpoles exhibited increased time to metamorphosis. Gonadal abnormalities were also observed, as was tail damage that included necrosis, blistering, and abnormal growth (92). As observed with other disruptions to TH signaling, molecular aberrations were observed prior to phenotypic changes. At stage 25, but not 42, increases in thrb expression resulted from exposure to Roundup Original® and Roundup Transorb® (92). However, newer glyphosate herbicide formulations that do not include POEA are less toxic, making them more promising potential alternatives for agricultural and domestic use.

COMPLEX MIXTURES

Although there is considerable focus on the effects of individual toxicants on TH activity, such chemicals do not occur in isolation in the environment. Mixture effects arising from the combination of different toxicants can result in TH-dependent disruptions not predicted by the individual chemical constituents (178).

Metal Mixtures

Heavy metals can exhibit increased toxicity as a consequence of mixture effects (179).
Dorchin and Shanas examined the endocrine disrupting potential of a mixture of metals (Cu, Pb, and Ni) at concentrations comparable to those of runoff from busy highways (180). Exposure to this metal mixture significantly decreased the metamorphic rate of Bufo (B.) viridis tadpoles (Table 4) (180). A similar effect of metal mixtures was observed in Limnodynastes peronei, which exhibited a decreased rate of metamorphosis after being exposed to coal-mine wastewater containing low amounts of metals (Table 4) (185).

Wastewater Effluents

Wastewater effluents (WWE) are complex mixtures that can contain contaminants from agricultural, industrial, and domestic sources and, hence, can disrupt TH function. A primary source of contamination comes from PPCPs in human waste. Although wastewater goes through extensive filtration prior to dispersal, TH disruption still ensues from effluent exposures (36). The TH disruption potential of WWE was first examined in 2009, when Sowers et al. found that a 50% dilution of municipal WWE significantly decreased the rate of R. pipiens metamorphosis (Table 4) (186). A delay in metamorphosis was also observed in R. catesbeiana after exposure to pond water that had been a receptacle for municipal WWE (Table 4) (187). Searcy et al. examined the effects of WWE on TH-mediated metamorphic gene expression within X. laevis tadpole ex vivo tail tip cultures (184). Using oligo microarray and qPCR analyses, they found that WWE and T3 exposures significantly increased the expression of the TH-sensitive genes thrb, dio2, crhbp, and fap (Table 4). The in vivo effects of WWE on TH-linked gene expression were also demonstrated by Castillo et al. in a transgenic X. laevis harboring a thibz-GFP reporter construct that was activated by WWE exposure (Table 4) (188). As wastewater treatments do not completely eliminate EDCs, Wojnarowicz et al. assessed the removal of EDCs by three methods of wastewater filtration using the C-fin assay (182). Despite clearing conventional contaminants, all three treatments produced WWEs that increased TH-sensitive gene expression (thibz, thra, thrb) upon exposure (Table 4). The treatment types also had conflicting results in their ability to clear TH signaling effects depending upon the season in which the WWEs were collected (Table 4) (182). In a later study, Wojnarowicz et al. demonstrated the inefficiency of municipal wastewater treatment plants by showing, with TH-mediated molecular endpoints and C-fin assays, that there is little difference between the endocrine-disrupting potential of WWE and that of the original influent (Table 4) (183). The considerable compositional variation within WWEs poses a challenge when assessing their endocrine disrupting potential. Heerema et al. generated a wastewater standard composed of common PPCPs to evaluate the exposure effects of the simulated WWE and to test the efficiency of wastewater treatment systems (36). After filtration using an anaerobic membrane bioreactor (AnMBR), the standard WWE induced a significant upregulation of TH-sensitive thibz in the olfactory epithelium of R. catesbeiana tadpoles, suggesting that the effluent was influencing TH-dependent pathways (Table 4). This study also assessed the behavioral effects, particularly predator cue avoidance, associated with WWE exposure. Once a tadpole is exposed to T3, it will stop responding to a simulated predator cue (36). WWE exposure mimicked the effects of T3 signaling in the olfactory epithelium and decreased predator cue avoidance (36).
As a follow-up to this work, Jackman et al. showed that membrane-enhanced biological phosphorus removal (MEBPR) performed better at removing EDCs from WWE than AnMBR (6). However, both effluent types resulted in the perturbation of TH-responsive gene transcript levels in the olfactory epithelium. TH agonist activity was observed in the AnMBR WWE and antagonist activity in the MEBPR WWE, likely reflective of the influent source material (Table 4).

Petroleum Oil Products

Oil spills from a variety of sources can contaminate freshwater systems, thereby affecting the local biota (189). Major toxic components of oils, such as naphthenic acid (NA) and polycyclic aromatic hydrocarbons (PAHs), are dispersed within water-soluble fractions after a spill (190,191). NA and PAHs act as EDCs in amphibians. NA can directly reduce the rate of metamorphosis in X. tropicalis and R. pipiens, and PAHs from tar-based pavement sealers can significantly reduce the rate of metamorphosis in X. laevis (Supplementary Table 1) (192,193). The endocrine disrupting potential of these compounds seems to be quite persistent in the environment, as R. sylvatica tadpoles exposed to pond water from wetlands proximal to reclaimed oil sands had significantly altered T3/T4 ratios and accelerated or delayed rates of metamorphosis depending on the age of the reclaimed wetland (Supplementary Table 1) (194). The effects of NA and PAHs on TH-sensitive gene expression were evaluated by exposing X. laevis tadpoles to simulated oil spill conditions using water-accommodated fractions (WAFs). WAFs were prepared using bunker crude oil or refinery oil (181). Bunker crude WAF exposures resulted in a significant decrease in dio2 and thrb and differential expression of pparg at various WAF concentrations (Table 4). Refinery WAF exposures resulted in a significant decrease in thrb expression and a significant increase in pparg expression (Table 4). Therefore, water-soluble components of oil spills can adversely affect TH-sensitive gene expression critical for amphibian metamorphosis.

EFFECTS OF ENVIRONMENTAL FACTORS

Temperature

Temperature serves as an important environmental cue for seasonality changes. As such, poikilothermic anurans have evolved to allow this environmental factor to serve as a critical cue in their developmental program. The role of temperature in modulating developmental timing is clearly demonstrated during natural metamorphosis, when warmer temperatures lead to increased endogenous T3 and thereby a faster metamorphic rate (Supplementary Table 2) (195-198). Along with the increase in TH levels, there is an upregulation of TH-regulated transcripts (including thra, thrb, thibz, dio2, and dio3) that initiate the metamorphic program (Table 5) (195,202). Conversely, the metamorphic rate slows as temperatures decrease and can be halted altogether at 4-5°C (Supplementary Table 1) (203,207,208). Although premetamorphic tadpoles will not undergo precocious TH-induced metamorphosis at cold temperatures, a TH-induced memory is established whereby the metamorphic program resumes when permissive temperatures are attained, even when no TH signal remains (207). This cold temperature arrest is observed at the transcriptomic level (Table 5) (65,203,204,209). Hammond et al. induced metamorphosis at 5°C in premetamorphic R. catesbeiana tadpoles through T3 exposures and assessed transcript responses in the brain, liver, back skin, tail fin, and lung (65).
Across all tissues, thrb did not show the rapid induction observed at permissive temperatures. Transcripts encoding the transcription factor thibz, however, were upregulated in response to T3 in all tissues, although this was not found in the liver by Suzuki et al. (Table 5) (203). Other TH response genes, including dio2, dio3, cebp1, klf9, and transcripts encoding urea cycle and energy metabolism enzymes, showed varied responses across tissues, indicating that there may be a tissue-specific response to T3 in cold temperatures (65, 202-204). Cold temperatures also inhibited the T3-induction of plasma glucose and decreased lipid polyunsaturation, consistent with an effect on energy metabolism in tadpoles (Table 5) (203). Regulation of chromatin structure is postulated to be a mechanism by which this differential gene expression occurs at permissive and non-permissive temperatures [reviewed by Hammond et al. (210)]. Using chromatin immunoprecipitation, Mochizuki et al. found that upon T3 exposure of R. catesbeiana at 4°C compared to 28°C, there was a decreased association of histone H3K36 marks indicative of active transcription within two known temperature-responsive genes in the liver: thrb and cebp1 (Table 5) (204). How histone post-translational modifications may differentially regulate genes with upregulated transcription in cold temperatures and simultaneously establish a molecular memory of the TH signal to be activated under permissive conditions is unknown. As changing climate becomes an increasing threat to declining frog populations, it is critical to understand the effect that temperature has on the proper regulation of TH signaling during development.

Ultraviolet B Radiation

Ultraviolet B radiation (UVBR) is becoming a growing concern as stratospheric ozone levels deplete (211). Paired with an increasing penetration of UVBR into the water column (212), embryo and tadpole stages, which reside in aquatic environments, are at a greater risk. The metamorphic or developmental consequences of UVBR exposure on these more sensitive stages vary depending on species and life stage [reviewed in Croteau et al. (205)]. Blocked or delayed postembryonic development is the most commonly seen defect upon UVBR exposure (Supplementary Table 1) (213,214). Croteau et al. examined whether UVBR-induced developmental delays in R. pipiens may be related to the disruption of TH signaling (206). Exposure to UVBR showed no effect on total body T3 or brain CRF levels, indicating that synthesis of TH through the HPT axis is unlikely to be affected, although total T4 was not measured (206). Rather, UVBR effects may act locally on peripheral tissues. Increased dio3 found in the tail during stages preceding the observed morphological delay may cause decreased local TH through enhanced turnover (Table 5). There is also decreased expression of dio2, which may act to further regulate the activity of THs through decreased conversion of T4 to T3 (2). This local response highlights the need to look at various tissues, as their responses to UVBR may differ, leading to uncoordinated metamorphosis. As levels of UVBR are expected to rise, it is imperative to determine its mechanism of action in TH disruption, especially for sensitive life stages like postembryonic development.

Photocycle: Light-Dark Cycle Implications

Endocrine systems are entrained to the circadian clock. The TH axis is no exception, with THs following a rhythmic 24 h cycle (215).
As photoperiod, along with temperature, is an environmental cue for seasonality changes, it is not surprising that many studies have found that the light:dark (L:D) cycle has an impact on metamorphic rate (Supplementary Table 1) (216-218). In the majority of studies, increasing the photophase (light phase) of a 24 h L:D cycle or decreasing the cycle length increases the metamorphic response to TH stimulation (218). However, when the 24 h cycle is not maintained, the L:D ratio is no longer indicative of metamorphic rate. In a case where Wright et al. found decreased tail reduction with an increased photophase, the cycle was 18L:12D, which exceeds the standard 24 h period (219). The mechanism by which the L:D cycle alters TH-induced metamorphosis is poorly understood. It has been determined that altering the L:D cycle leads to differences in the fluctuating rhythm of T4 [reviewed by Wright (215)] (Supplementary Table 2). However, under any L:D cycle, there is an inexorable rise in T4 as development progresses until metamorphic climax (220). This indicates that the alteration in metamorphic rate is not due to a disruption in TH concentration. It is more likely that a disruption in the circadian rhythms of THs may lead to different interactions with agonists and antagonists (220,221). It remains to be determined how this variation in cycling affects TH responses at the transcriptomic level, which may provide a better mechanistic understanding of how the L:D cycle impacts metamorphic timing.

Pond Drying

Many tadpole species reside in ephemeral bodies of water. Loss of these temporary habitats is fatal to water-dwelling tadpoles; therefore, it is unsurprising that, across species, there is a positive correlation between pond desiccation rate and speed of development into terrestrial frogs (Supplementary Table 1) (198,222,223). The ability to translate this environmental cue into a phenotypic response is proposed to occur through the HPT axis. CRF levels increase in response to habitat desiccation, preceding the morphological observation of a hastened metamorphic rate (224). This stress-induced increase in CRF leads to augmented secretion of THs (Supplementary Table 2) (199,224,225), which can be reversed through exposure to CRF antagonists (225). This increase in HPT axis activity leads to accelerated metamorphosis through the downstream regulation of TH response genes. Johansson et al. used cDNA microarray and qPCR analyses to determine the hepatic transcriptomic response to simulated pond drying in R. temporaria (Table 5) (200). This study found that classic TH response genes, such as thra and thrb, increase along with decreasing water levels, which corroborates previous findings that decreased water levels lead to increased TH levels and higher expression of thrb in the blood (199). More liver-specific TH response transcripts also demonstrated significant changes, including the urea cycle enzyme cps1 (200). The ability of tadpoles to respond to decreasing water levels demonstrates the plasticity of TH-mediated metamorphosis that allows tadpoles to adapt to changing environments.

Food Restriction

Similar to pond drying, the availability of other resources, such as food, plays an important role in developmental timing. As metamorphosis often entails a niche transition from aquatic to terrestrial environments, it stands to reason that when resources available in the aquatic habitat are no longer sufficient, it may prompt a transition to a new environment where resources may be greater or competition lower.
The impact of food restriction on metamorphosis has varied results (Supplementary Table 1) (226-231). Complete starvation and consistently low or actively decreasing food levels lead to an increased metamorphic rate in Scaphiopus (S.) hammondii (226), Phrynobatrachus guineensis (227), and R. temporaria (228). In contrast, consistently low or actively decreasing food sources reduce the metamorphic rate in Hyla cinerea and Hyla gratiosa (229,230). The varied response may be due to the different life histories of the different species. Another contributing factor is the developmental timing of food restriction. D'Angelo et al. determined that there is a critical developmental point, around limb bud formation, before which metamorphosis will be stalled but after which it will be accelerated (232). Bulaeva et al. restricted food for R. sylvatica prior to this critical time point and found that the decrease in metamorphic rate coincided with a decrease in thrb transcript levels, indicating disrupted TH signaling (Table 5) (50). In contrast, histological analysis of the thyroid gland of tadpoles starved past this critical period gives evidence of a short burst of increased secretory activity (232). This burst of thyroid activity was corroborated in vitro by Wright et al., who found a brief increase in the secretion of T4 in cultured thyroids excised from R. catesbeiana tadpoles that were starved for 1 week compared to those that were fed consistently (Supplementary Table 2) (233). Boorse and Denver also found increased levels of T3 and T4 in vivo after food restriction in S. hammondii (234). Future studies have yet to determine how this burst of T4 affects the TH-induced transcriptomic program leading to an increased metamorphic rate.

Combined Chemical and Environmental Effects

It is well-established that individual environmental factors play an important role in the proper regulation of TH signaling during metamorphosis. In natural systems, however, temperature, photoperiod, UVBR intensity, and resource restriction (pond drying and food restriction) effects are inextricably linked. Not only are they influenced by each other, but frogs are simultaneously exposed to all anthropogenic chemicals that enter their habitat. The combinatorial effect of environmental and chemical stressors can be far more detrimental, as they may work synergistically to increase toxicity or reduce an organism's capacity to respond to other stressors. The additive toxicity of UVBR and various chemical contaminants has been well-documented [reviewed by Blaustein et al. (235) and Croteau et al. (205)]; however, the sublethal effects on development have not been as well-studied. Crump et al. found that environmentally relevant levels of the estrogenic compound octylphenol had a combined effect with UVBR that increased metamorphic rate, unlike exposure to either factor individually (236). The TH-based mechanism by which this combined effect occurs was further studied by Croteau et al., who observed that at earlier developmental stages there is a significant increase in thrb upon exposure to UVBR combined with octylphenol compared to either factor alone (Table 6) (206). This indicated that TH signaling during metamorphosis is differentially affected by the combination of chemical and environmental factors. Increases in metamorphic rate induced by warmer temperatures are compounded by concomitant contaminant exposures that also induce metamorphosis. Freitas et al.
exposed R. catesbeiana tadpoles to the pesticide diuron and its metabolite, 3,4-dichloroaniline, at 28 and 34°C and observed an increased developmental response to both chemicals at the higher temperature compared to either exposure at the lower temperature or the temperature-matched control (Table 6) (195). This combination of exposures increased the expression of dio2, and the warm temperature plus diuron exposure increased klf9, which likely explains the observed change in metamorphic rate. Contaminants can also have an impact on the ability of environmental cues to regulate developmental timing. For species that overwinter as tadpoles, transitioning while the temperatures are still too low could be fatal. Hammond et al. investigated the impact of two known TH EDCs discussed above, ibuprofen and TCS, on the temperature-controlled TH response in premetamorphic R. catesbeiana tadpoles (Table 6) (28, 29, 65). After a 48 h exposure to each contaminant at 5°C, both produced a significant increase in klf9 in a cultured R. catesbeiana back skin biopsy assay (C-skin). In contrast, when tail fin biopsies from the same tadpoles were cultured in a C-fin assay, exposed to the chemicals at 5°C and then shifted into clean media at more permissive temperatures, there was a significant decrease in thrb after TCS exposure (Table 6). This indicates that both EDCs have the potential to disrupt proper TH signaling during the cold-induced establishment of TH molecular memory. Moreover, TCS exposure at cold temperatures may be 'remembered' when warmer temperatures occur, potentially leading to detrimental effects throughout metamorphosis. Natural systems contain combinations of environmental factors and chemical contaminants. It is therefore important to conduct studies with multiple stressors to provide more meaningful information. A changing climate and intensifying UVBR, combined with increased anthropogenic contamination, are escalating the need to elucidate how these factors influence critical biological systems such as TH signaling in metamorphosis, both independently and in combination with each other.

CONCLUDING CONSIDERATIONS

As our understanding of the disruption of TH-dependent metamorphosis by environmental and chemical perturbations improves, it is apparent that there are several pressing challenges that must be addressed. Anurans are keystone sentinel species that can portend the deleterious and combinatorial effects of contaminants and a changing climate on all trophic levels within different environmental niches. While it is important to understand the mechanisms affected by a single contaminant, the environmental context in which the exposure occurs must be considered. Within an affected environment, disruption of TH-dependent metamorphosis is rarely, if ever, derived from an isolated contaminant. To this end, the interplay between environmental conditions and complex mixtures should be assessed in tandem to ascertain the cumulative effects on TH-dependent metamorphosis. Amphibian screening assays have been developed that address the need for the timely detection of contaminants that affect TH-dependent metamorphosis and that accurately reflect in vivo changes (237). The Xenopus metamorphosis assay (XEMA) was initially conceived to assay the TH-disrupting capacity of compounds, and derivatives of XEMA have since been developed (40,78,238).
Similarly, cell lines and serum-free organ culture techniques, including tail fin (C-fin) and back skin (C-skin) cultures, have utility in ascertaining TH disrupting effects (81,209,239,240). Organ culture techniques are particularly useful since they retain the three-dimensional structure of tissues while facilitating repeated-measures analysis, including the rapid assessment of TH-dependent molecular changes in gene expression. As such, organ cultures provide an informative and complementary counterpart to conventional morphological and histological assessments (89). As changes in gene expression typically precede morphological variations during metamorphosis, transcriptomic assessments provide a more timely and sensitive assessment of altered metamorphic trajectories that may not be readily observed as a pathological phenotype. Non-lethal molecular assessments of tail fin biopsies are additionally well-suited to long-term studies involving repeated measures (84). Careful consideration should be paid to the selection of amphibian species used to assess the ramifications of environmental contaminants on metamorphic dysfunction. Although Xenopus species are widely accepted animal models in research laboratories, their natural habitats, life cycles, and physiologies are quite distinct from those of other anurans, such as Ranids, Hylids, and Bufonids (241). Consequently, physiological responses to environmental or chemical perturbations can differ widely between anurans. Therefore, the adoption of amphibian models that closely resemble native species in affected areas would provide the most meaningful assessments of TH-dependent metamorphic disruptions (242). The use of qPCR, DNA microarrays, and RNA-seq in conjunction with morphological characterizations has demonstrated the sensitive and differential tissue-specific gene expression arising from environmental or toxicant exposures during TH-dependent metamorphosis (23, 32, 34, 209, 243). As cutting-edge 'omics techniques (transcriptomics, genomics, epigenomics, proteomics, and metabolomics) are increasingly utilized with bioinformatics in ecotoxicology studies, it will be possible to elucidate the mechanisms affected by TH-disrupted metamorphosis in a comprehensive manner (210, 244-247). Considerable progress has been made in recent years with the sequencing of the X. laevis, X. tropicalis, Nanorana parkeri, and R. catesbeiana genomes and increasing numbers of transcriptomes, all of which are invaluable resources (247-251). Thoughtful consideration of the species, tissue specificity, and developmental stage observed, as well as the timing and duration of exposure and the study conditions, will be indispensable in establishing large-scale studies for meaningful meta-analyses. Additional attention should be paid to the examination of metabolized derivatives that may be more potent than the parent congener. Biotransformed derivatives can be generated through metabolic activities within the affected organism or by physical transformation in the environment (for instance, weathering or photo-oxidation) (252-254). Compared to the parent congener, activated derivatives may consequently be more stable, better able to mimic or target different aspects of TH regulation (i.e., TH receptors, metabolizing enzymes, etc.), or rendered more lipophilic, which would facilitate their uptake or excretion. The mechanisms underlying increased derivative toxicity, whether they are independent of or compromise TH regulation, are an important area of study.
With more than 70% of ∼7,000 extant amphibian species threatened and declining around the world, there is an urgent need to address how anthropogenically derived environmental disruptions are affecting vulnerable species (255). Humans are not impervious to the deleterious changes affecting wildlife; metabolomic studies have demonstrated that metabolites altered during anuran metamorphosis are also associated with human disease outcomes (245). Moreover, a recent study demonstrated that TH-related gene expression and early brain development were altered in X. laevis following exposure to concentrations of chemicals (including TCS, phthalates, pesticides, and others) detected in human amniotic fluid (256). Given the developmental parallels between TH-dependent amphibian metamorphosis and mammalian postembryonic development, it is apparent that exposures negatively affecting amphibians will also impair human health (257). As sobering as these ramifications are, such deleterious outcomes are not necessarily irrevocable if timely remediation actions are taken. The genomic plasticity afforded by epigenetic alterations, while able to endure maladaptive stresses, similarly has the posited capacity to adapt to remediation. Such potentially ameliorative effects merit further investigation, which would be best addressed using the genomics-based approaches discussed above in conjunction with morphological analyses. Remediation efforts will require understanding the complexity of the ecological stresses and the interplay between complex toxicant mixtures and changing environmental conditions. The unique sensitivity of anurans to TH ideally positions them as indicators not only for metamorphic and developmental effects, but also for the fitness and reproductive success of all vertebrates that depend upon TH function.
Dimensioning of a permanent magnet synchronous machine for electric vehicles according to performance and integration requirements

Finding the optimum design of electrical machines for a certain purpose is a time-consuming task. First results can be achieved, however, by scaling known machine designs in length and turns per coil by means of analytical equations, while scaling in diameter requires finite element analysis (FEA), since the electromagnetic properties change significantly. In this paper, the influences of diameter, length and turns per coil on the torque, power and efficiency of a permanent magnet synchronous machine (PMSM) are investigated in a sensitivity analysis. Furthermore, their impact on energy consumption in different drive cycles and different vehicle types is outlined. A highway car and a city car are compared in a highway cycle, a city cycle and the Worldwide Harmonized Light Vehicle test Cycle. The results show significant differences in energy consumption between different machine designs in one application, but also between different applications. This highlights the necessity to decide whether the powertrain should be optimized for a single purpose or for universal use.

Introduction

As battery electric vehicles (BEVs) cease to be a niche product and gain importance in every passenger car segment, OEMs have to design electrical machines for various requirements [2]. The task is to find the optimum design for segments such as small city cars, middle and upper class sports cars, and SUVs. Each vehicle segment has a specific demand on the powertrain concerning speed and torque [4]. Furthermore, the operating points span a wide range of speeds and torques depending on the use of the car. While use in the city usually means low speeds, a car on a highway ride should operate best at high speeds. Both the vehicle segment and the vehicle usage thus affect the requirements on the propulsion system: the typical use of the car as well as its type, through properties like weight and size. A manufacturer has to decide whether to optimize a powertrain for a specific purpose, like a dedicated city car or a vehicle that serves best on highway rides. The second option is to design a powertrain that is the best compromise between all specifications. This, however, goes hand in hand with performance drawbacks in each use case. On the other hand, the elevated quantity of similar powertrains can lead to economic benefits. The electromagnetic properties of a permanent magnet synchronous machine depend on various factors, such as the shape and size of the magnets, flux barriers and stator teeth. Next to these parameters, the arrangement of the windings plays an important role, including the number of turns per coil (TC). For similar machine dimensions, different electromagnetic characteristics can be achieved by changing the TC value. Another way to scale the performance without expensively redesigning the machine is changing its length. The maximum length is usually only set by bending loads and the available space [14]. Scaling the performance by changing the diameter requires a recalculation of the electromagnetic behavior, and mechanical limitations have to be considered. For every application, the exact dimensions of the rotor and stator would have to be optimized in order to achieve the desired torque and power at maximum system efficiency.
Since this is complex and time-consuming [7-10], first studies can be done by scaling the machine radially without any further adaptations of the geometry. Radial scaling of the electric machine is illustrated in Fig. 1. Since the general shape of magnets and flux barriers remains the same, the iron-bridge thickness increases with increasing diameter of the machine. For large diameters, this can lead to magnetic short circuits and therefore poor electromagnetic performance. Small diameters, by contrast, decrease the thickness of the supporting iron bridges. The lower limit of the diameter is therefore set by production limits. The present work demonstrates how the respective optimal design within the above mentioned design boundaries of a PMSM can be derived from a reference machine design by scaling it in length, diameter and turns per coil. First, the design space will be defined and boundary conditions will be introduced, as well as the characteristics of the driving cycles and vehicle types under investigation. Next, the applied model is defined. The results are divided into three subsections: a sensitivity analysis describes the influence of the design parameters on torque, power and efficiency of the PMSM. The energy consumption of the machine designs is then evaluated for the six different use cases.

Boundary conditions

In this work, the length, diameter and turns per coil are changed in a parameter study. The length is varied from 6 to 20 cm and the diameter from 16 to 25 cm, each in 1 cm steps. In addition, numbers of turns per coil between 5 and 15 are evaluated. Furthermore, the study considers a maximum current of I = 650 A of the power electronics and a maximum current density of J = 29 A/mm². The limit of the current density arises from the required duration for which the machine has to be able to operate at peak torque. In all investigations, a constant voltage of U = 350 V is applied. First, the influence of the three parameters on torque, power and efficiency of the PMSM is discussed. For the sensitivity analysis, FEA and a Matlab model are applied. Second, the performance of the differently scaled PMSMs in different scenarios is investigated. Torque characteristics as well as loss maps from the previous sensitivity analysis are integrated into a Matlab/Simulink backward model that simulates the vehicle behavior. A city car and a highway car are compared in three different cycles: a city cycle (CC), a highway cycle (HC) and the Worldwide Harmonized Light Vehicle test Cycle (WLTC). Mohan et al. demonstrated the suitability of backward models for the powertrain component sizing of combustion engines [11]. The characteristics of both cars and the cycles are summarized in Tables 1 and 2. Typically, a car used in the city has to accelerate and decelerate frequently at low speeds and operates at low powers. Opposed to that, on a highway the electric machine operates at high speeds and high power requirements. The WLTC tries to reflect the average use of a car and has moderate torque and power requirements with maximum speeds of about 130 kph. The different use cases differ largely in their profile of requirements towards the powertrain [4]. Accordingly, the optimum machine design is different in each case. Another influence on the requirements for the electric machine results from the choice of the car. Similar values for the drag coefficient c_w and rolling friction f_r are considered for the city car and the highway car.
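Before turning to the vehicle comparison, the parameter study itself can be framed as a plain grid sweep over the stated design space, with infeasible designs filtered out before any FEA or drive-cycle simulation is run. The sketch below (Python) only organizes this bookkeeping; the feasibility check is a hypothetical placeholder for the FEA-derived winding and current-density data.

```python
from itertools import product

# Design space and electrical limits as stated above
LENGTHS_CM = range(6, 21)       # active length, 1 cm steps
DIAMETERS_CM = range(16, 26)    # active diameter, 1 cm steps
TURNS_PER_COIL = range(5, 16)
I_MAX = 650.0                   # A, power electronics limit
J_MAX = 29.0                    # A/mm^2, peak-torque duration requirement
U_DC = 350.0                    # V, constant supply voltage

def is_feasible(length_cm, diameter_cm, tc):
    """Hypothetical stub: a real check would compare the required phase
    current and resulting current density for this geometry and winding
    against I_MAX and J_MAX using FEA-derived data."""
    return True

designs = [(l, d, tc)
           for l, d, tc in product(LENGTHS_CM, DIAMETERS_CM, TURNS_PER_COIL)
           if is_feasible(l, d, tc)]
print(len(designs), "candidate designs")  # 15 * 10 * 11 = 1650 before filtering
```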
The highway car, however, has both a bigger cross-sectional area A_x and a higher mass m than the city car. It further has a higher top-speed requirement v_max and longer acceleration times t_0-100 from 0 to 100 kph. The powertrain and driving characteristics are simulated in a Matlab/Simulink model that considers a gearbox whose gear ratio is chosen such that the maximum speed of the PMSM is sufficient for a vehicle velocity of 140 kph for the city car and 180 kph for the highway car. The torque required for a given acceleration is calculated by Newton's second law and a force balance taking into account the aerodynamic drag F_d, the friction of the tires as well as mechanical losses in the transmission and in the side shafts [5].

Electromagnetic model

The majority of losses in an electric machine occur in the copper windings, stator iron, rotor iron, and magnets. The ohmic losses P_v,ohm in the windings depend on the ohmic resistance R_ohm and the electric current I. The latter can also be expressed by the current density J and the conducting cross-sectional copper area A_Cu^ph. The ohmic resistance itself depends on the specific resistance ρ_el of the conductor, its cross-sectional area A_wire and its length l_wire [12]:

P_v,ohm = R_ohm · I² = ρ_el · (l_wire / A_wire) · (J · A_Cu^ph)²  (1)

The area A_Cu^ph depends on the area per phase of the stator slots that is filled with copper. Typically, the area of copper in a slot is significantly lower than the area of the slot A_slot itself. This is considered by the filling factor k_Cu with the number of wires n and the wire diameter d_wire [15]:

k_Cu = n · (π · d_wire² / 4) / A_slot  (2)

Both the necessary electrical insulation and the round shape of the conductors with diameter d_wire decrease the maximum number n of wires that fit in the slot. Typically, filling factors between 45% and 55% are achieved in electrical machines [13], [3]. A filling factor for a reference machine is calculated and kept constant for all further studies in this paper. An increase in machine diameter considered in this paper also increases the slot area A_slot, and therefore the wire diameter d_wire can be bigger if the number of turns per coil TC remains constant. If the number of turns per coil TC is increased instead, the wire diameter has to be decreased in order to fit more wires into the constant slot area A_slot for a given machine diameter. Considering the constant filling factor, the product of turns per coil and wire area has to be constant, resulting in

d_wire,i = d_wire,0 · √(TC_0 / TC_i)  (3)

Equation (3) describes the scaling of the wire diameter d_wire,i relative to a reference diameter d_wire,0 if the turns per coil are changed from a reference TC_0 to TC_i. It indicates that the current density J increases with increasing TC values at a constant current I, and vice versa, since the turns per coil describe the number of wires in a slot that are connected in series. This has to be taken into account in the further investigations of this paper, as well as the fact that the diameter of the wires changes with the number of turns per coil according to Eq. (2) and thus the ohmic losses change according to Eq. (1). Next to the conducting copper area in the slots, the turns per coil also affect the magnetic flux of the coils:

Φ = B · A_coil = μ · (TC · I / l_coil) · A_coil  (4)

with magnetic flux density B, magnetic permeability μ, area of the coil A_coil, and length of the coil l_coil [12]. Accordingly, the same torque can be generated with less current if the turns per coil are increased. Furthermore, with increasing TC the voltage limit of the electrical drive system is reached at lower speeds and the field-weakening area is shifted accordingly.
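To make the interplay of Eqs. (1)-(3) concrete, the following sketch (a minimal illustration, not the paper's model) evaluates the wire diameter, current density and copper loss for a few TC values at a fixed filling factor; the reference wire diameter, conductor length and the 200 A evaluation current are illustrative placeholders.

```python
import math

RHO_EL_CU = 1.72e-8  # ohm*m, specific resistance of copper at 20 degC

def wire_diameter(d_wire_0, tc_0, tc_i):
    """Eq. (3): constant filling factor => TC * A_wire = const."""
    return d_wire_0 * math.sqrt(tc_0 / tc_i)

def ohmic_loss(current, d_wire, l_wire):
    """Eq. (1): P = R * I^2 with R = rho_el * l_wire / A_wire."""
    a_wire = math.pi * d_wire ** 2 / 4.0
    return RHO_EL_CU * l_wire / a_wire * current ** 2

# Illustrative reference winding: TC_0 = 8, 4 mm wire, 50 m conductor length
I_PHASE = 200.0  # A, illustrative operating current
for tc in (5, 8, 12):
    d = wire_diameter(4.0e-3, 8, tc)
    l = 50.0 * tc / 8                              # conductor length ~ TC
    j = I_PHASE / (math.pi * d ** 2 / 4.0) / 1e6   # A/mm^2
    print(f"TC={tc:2d}: d_wire={d * 1e3:.2f} mm, J={j:5.1f} A/mm^2, "
          f"P_cu={ohmic_loss(I_PHASE, d, l):6.0f} W")
```

At fixed current, the copper loss grows with TC² (conductor length scales with TC, wire area with 1/TC); at fixed torque, however, the required current scales roughly with 1/TC via Eq. (4), so the copper loss is, to first order, independent of TC, consistent with the efficiency observations in the sensitivity analysis below.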
Next to the ohmic losses in the conductors, the losses in the rotor and stator iron P_l,Fe represent the main loss source. For a sinusoidal excitation, the specific iron losses can be calculated as

p_l,Fe = σ · d² · (π · k · f · B_max)² / (6 · ρ) + C · H_c · f · B_max / ρ  (5)

with the electrical conductivity of the iron σ, the density ρ, the coercivity H_c, a material-specific constant C, the maximum of the magnetic flux density B_max, the form factor k, the thickness of the iron sheets d and the frequency f [1]. The magnetic flux density in the rotor and stator depends on the specific machine design as well as the speed and torque of each operating point. A finite element analysis is used in this work to evaluate the flux density in all operating points for every machine diameter. Since parts in the machine center are not affected by the axial ends, the length shows no influence on the magnetic flux density distribution. It is therefore considered similar for each diameter at different machine lengths. The same assumption is made for the magnet losses, which can be calculated similarly to the iron losses, considering only the eddy currents. These are described by the first summand in Eq. (5). The overall loss P_l of a PMSM is the sum of the losses in copper, iron and magnets. Compared to the specific losses described in Eqs. (1) and (5), the overall losses are weighted with the mass of the respective machine parts.

Mechanical model

Since the energy content of automotive batteries is lower than that of fuel tanks in conventional vehicles, driving range and thus powertrain efficiency is a crucial characteristic value in battery electric vehicles [6]. The efficiency of an electric machine in motor and generator operation is defined by the mechanical power P_mech and the electrical AC power P_AC:

η_mot = P_mech / P_AC  (6)

η_gen = P_AC / P_mech  (7)

The difference between the mechanical and electrical power equals the losses P_l of the machine. In the applied backward model, the driving maneuver determines the mechanical power of the machine. For a given vehicle with the vehicle mass m_vehc, the deceleration caused by drag forces and rolling resistance of the wheels and the desired acceleration result in a necessary accelerating force. This force equals a torque that has to be provided by the electrical propulsion system as an output torque of the electrical machine. The required torque mainly depends on the dynamic wheel diameter, the mechanical losses and the gear ratio of the transmission. Similarly, the rotational speed of the electrical machine depends on the vehicle velocity, the wheel diameter and the gear ratio. Thus, for a given driving cycle and given vehicle properties, the required machine torque and speed are well defined. The energy consumption, defined as the integral of the losses over time during a driving cycle, allows an evaluation of the possible driving range with different propulsion systems. The focus of this paper is the influence of different machine properties on the vehicle consumption. In this work, the gear ratio is adapted to the machine diameter D in a way that the maximum rotational speed is not exceeded. The mechanical design of the rotor is chosen such that the rotor can endure a speed that is the maximum required speed times a safety factor. For a constant maximum required vehicle velocity v_max, the gear ratio has to be decreased if the rotor diameter increases in order to compensate for higher centrifugal forces.
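The backward-model chain described above, from a prescribed velocity trace to machine operating points and integrated losses, can be sketched compactly. The vehicle parameters, gear ratio and the constant-plus-linear loss map below are simplified stand-ins for the study's vehicle data and FEA-derived loss maps, so the snippet illustrates the bookkeeping rather than reproducing the Simulink model.

```python
# Illustrative vehicle/powertrain parameters (stand-ins, not the study's values)
M_VEH = 1600.0        # kg, vehicle mass
CW, A_X = 0.30, 2.3   # drag coefficient, frontal area in m^2
F_R = 0.012           # rolling resistance coefficient
RHO_AIR, G = 1.2, 9.81
R_WHEEL = 0.32        # m, dynamic wheel radius
GEAR, ETA_GEAR = 9.0, 0.97

def operating_point(v, a):
    """Backward step: speed (m/s), acceleration -> machine torque (Nm), speed (rad/s)."""
    f_total = (0.5 * RHO_AIR * CW * A_X * v ** 2   # aerodynamic drag
               + F_R * M_VEH * G                   # rolling resistance
               + M_VEH * a)                        # acceleration force
    t_wheel = f_total * R_WHEEL
    # Transmission losses act with the direction of power flow
    t_machine = t_wheel / (GEAR * ETA_GEAR) if t_wheel >= 0 else t_wheel * ETA_GEAR / GEAR
    return t_machine, v / R_WHEEL * GEAR

def machine_losses(torque, speed):
    """Placeholder for the FEA-derived loss maps P_l(T, n)."""
    return 500.0 + 0.02 * abs(torque * speed)

# Integrate the losses over a toy velocity trace sampled at 1 s
v_trace = [0.0, 5.0, 10.0, 12.0, 12.0]  # m/s
e_loss = 0.0
for v0, v1 in zip(v_trace, v_trace[1:]):
    t, n = operating_point(0.5 * (v0 + v1), v1 - v0)
    e_loss += machine_losses(t, n) * 1.0  # J per 1 s step
print(f"integrated machine losses: {e_loss / 1e3:.1f} kJ")
```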
Electromagnetic sensitivity analysis

Before discussing the scaled PMSMs in a certain use case, the specific influence of each parameter, namely diameter, length and TC, will be investigated, considering torque, power and efficiency. As boundary conditions, a diameter of 220 mm and eight turns per coil are chosen for the investigation of the influence of different lengths. Figure 2 shows the results as the normalized peak torque (top left), normalized peak power (middle left) and normalized efficiency (bottom left), plotted over the normalized speed in each case for every considered length of the machine. On the right-hand side, the normalized maximum torque, power and efficiency (top to bottom) at each machine length are depicted. All three PMSM characteristics increase with the machine length. However, there is a saturation in efficiency and power that cannot be discovered in the linearly increasing torque. The field-weakening area shifts to lower speeds with increasing length, since the back-EMF increases. This cancels out the rise in torque, leading to a flattening gradient of the power. Since the required current at a given operating point decreases, the maximum efficiency increases with increasing lengths. In addition, the isocurve of maximum efficiency η = 96% covers an increasing area and shifts to lower speeds. Varying the diameter instead of the length leads to the results shown in Fig. 3, following the same logic as before. The maximum motor torque increases with the diameter and the field-weakening area shifts to lower speeds. However, the latter effect is less pronounced than for the variation in length. Accordingly, the impact of the diameter on the power is stronger in this investigation. A lower gradient can be identified at large diameters for torque and power in the respective diagrams on the right. This matches the decreasing growth in maximum torque. In these cases, the power electronics limits the performance with the maximum current of I = 650 A. As discussed in Sect. 3, the diameter of the wires increases with the machine diameter and therefore the current density decreases for a given current. At a diameter of 22 cm < D < 23 cm, the maximum current instead of the current density begins to limit the peak torque. The maximum efficiency shown in the last diagrams hardly seems to be affected by the machine diameter. However, the area covered by η > 96% grows with increasing diameter, partly because the flux density decreases due to the increase in iron cross-section. A third sensitivity analysis is dedicated to the turns per coil. Results are summarized in Fig. 4. The turns per coil show no effect on the maximum torque for the given machine dimensions, except for the minimum number of TC = 5. According to Eq. (3), the wire diameter increases with decreasing turns per coil. At TC = 5, the increase in wire diameter is large enough that the current limit of the power electronics limits the torque. For differently sized slots, this can be the case at different numbers of turns per coil and therefore has to be considered for every machine diameter. As introduced with Eq. (4), the flux induced by the windings is linked to the number of turns per coil as well. This causes the field-weakening area to begin at lower speeds. Corresponding to the findings above, the maximum power decreases with the number of turns per coil.
Although the maximum efficiency of the machine is hardly affected by the number of turns per coil, the area of efficiencies above η = 96% decreases when TC is increased, due to saturation effects in the iron. The results of the different investigations are summarized in Table 3.

Highway car energy consumption

For the different cycles and vehicles, the evaluation criterion is the system efficiency, reflected by the energy consumption. Each drive cycle is calculated with all the machines characterized by the diameter, length and TC values introduced above. The consumption is compared to the most efficient machine in each case. The relative difference in consumption is then plotted over the active diameter and active length in Fig. 5 (top) for the highway car in the WLTC. Different turns per coil lead to different consumptions at constant active dimensions of the machine. High lengths perform best when combined with low TC, whereas high TC values have an advantage for short machines. The surface connecting the most efficient combinations at the different machine dimensions is depicted in Fig. 5 (bottom). Those machines that do not meet the specified operating points of the drive cycle due to exceedance of the maximum current density are not plotted, resulting in blank spaces in the diagrams. It can be seen that the WLTC is designed in such a way that small machines are beneficial in terms of efficiency. In addition, the isocurves of constant active volume show that machines with high lengths and small diameters are slightly better than machines with the same active volume but larger diameters. These results are a direct consequence of the efficiency of the electrical machine improving in the region where the machine is operated during the WLTC. The same investigation is made for the highway car in the city cycle and in the highway cycle. Results are shown in Fig. 6 on the left and on the right, respectively. The 3D plot is not shown, but only the isosurface of the machines with the optimum TC at every geometry. Similar to the WLTC, small machines perform best in the highway cycle, whereas in the city cycle, large machines have a lower consumption than small ones. Again, only machines that are able to perform the drive cycle are plotted. The differences are most pronounced in the highway cycle, with an additional consumption of more than 12% compared to the best case. It becomes clear that no machine exists that suits all three cycles perfectly. Therefore, manufacturers have to decide whether to build cars with machines dedicated to one of the use cases or to build ones that are a compromise.

City car energy consumption

In contrast to a highway car, city cars are smaller and lighter (see Table 2). Therefore, the drag force at a certain velocity is lower than with a highway car, and the same acceleration requires less force. Performing the same evaluation as above on a city car therefore leads to different results, as outlined in Figs. 7 and 9. Because of the reduced torque required for accelerating lighter vehicles, more machines are able to perform the drive cycles with the given current density limit. Again, small machines perform best when it comes to the consumption, and again the isocurves indicate that at the same volume, it is favorable to reduce the diameter rather than the length in the design space under investigation. An exception to the above stated conclusions is the city car in the city cycle. At the same active volume, big lengths are still favorable compared to big diameters.
Comparing highway and city cars, the latter show a bigger difference in energy consumption between the best and worst case, since the utilization of the machines decreases and a larger number of machines in the design space meet the specifications. Note that, for example, in the highway cycle the consumption increases by approximately one third between the best machine and the worst, even with the right choice of turns per coil. When the worst choice of turns per coil is taken into account, however, the consumption almost doubles, as depicted in the 3D-plot of Fig. 8 (top).

Overall energy consumption

The results above demonstrate how an optimum machine design can be found in a given design space for a certain vehicle type and drive cycle. Consumption differs significantly between the different modifications, which has a direct impact on vehicle range for a given battery capacity. However, the optimum machine for each purpose is different. This raises the question which machine would be the best compromise between the different cycles, for example. Adding the consumption per kilometer over all driving cycles for each machine and calculating the relative difference compared to the best case,

$$\Delta C_i = \frac{\sum_{cycle} C_{i,cycle} - \sum_{cycle} C_{*,cycle}}{\sum_{cycle} C_{*,cycle}},$$

results in the diagrams in Fig. 9 for the highway car (left) and the city car (right), respectively. Here C_i represents the consumption of powertrain i, the asterisk represents the optimal design, and the cycles are the city cycle, the highway cycle and the WLTC. Again, only the isosurface of the best TC for every diameter-length combination is plotted. Compared to the results for a single driving cycle, this time the TC has to be selected in a way that all driving cycles are feasible. Table 4 shows the machines that minimize the energy consumption over all drive cycles for the highway car and the city car, respectively. For the heavy and large car investigated in this study, the designer should choose a long machine with a small diameter and low TC. However, using this machine in the city car would result in 5% more losses than with the machine specifically designed for this use case. For this purpose, it would be beneficial to select a shorter machine, which has advantages concerning weight, space and price at the same time. The requirements for the highway car cannot be met with the small machine. While for single drive cycles small numbers of turns per coil seemed beneficial, the overall consumption can be minimized with higher values. Again, the difference is more pronounced for city cars. With a highway car, the differences between the driving cycles cancel each other out, so that the machine design has a lower impact on the consumption than with a city car under the assumptions made in this paper. It further has to be noted that again only the isosurface over the TC values that lead to the lowest energy consumption is plotted in Fig. 9. A bad choice of TC can still lead to significant differences in energy consumption with a highway car. The increased variance in overall city car consumption is partially linked to the increased number of powertrains that fulfill the requirements of the driving cycles. In accordance with the single driving cycles, a small machine is also best in the combined use case.
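The combined-cycle comparison behind Fig. 9 reduces to summing per-kilometre consumption over the three cycles and normalizing by the optimum; the short sketch below illustrates this, with two invented powertrains standing in for the design space.

```python
# Sketch of the combined-cycle ranking: sum the per-kilometre consumption
# over all three cycles for each powertrain i and compare against the
# optimum (asterisk). The powertrain names and values are illustrative.

cycles = ("city", "highway", "WLTC")

powertrains = {  # per-cycle consumption in Wh/km (placeholder data)
    "long_small_D_low_TC": {"city": 160.0, "highway": 185.0, "WLTC": 150.0},
    "short_machine":       {"city": 155.0, "highway": 198.0, "WLTC": 154.0},
}

totals = {name: sum(c[cy] for cy in cycles) for name, c in powertrains.items()}
best = min(totals.values())

for name, total in totals.items():
    print(f"{name}: {total:.0f} Wh/km summed, "
          f"+{100 * (total - best) / best:.1f} % vs. best")
```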
Conclusion

In this paper, a sensitivity analysis for the design of a permanent magnet synchronous machine was demonstrated. For given voltage, current and current density limits, the influence of machine length, machine diameter and turns per coil was evaluated. After demonstrating the impact on maximum torque, power and efficiency, the optimum design for a highway car and a city car was identified. Big differences in the machine consumption of up to ΔC = 100% were detected in the highway cycle with a city car. Even if only the optimum number of turns per coil was taken into account for every machine dimension, there was a difference in consumption of more than 30%. It was then demonstrated that the difference becomes less pronounced, yet must not be neglected, if the overall consumption across all driving cycles is taken into account. However, the optimum machine design differs largely with the different driving requirements. The designer has to choose a compromise for all the different scenarios. Including thermal requirements, weight, space and price considerations makes the optimization even more challenging. This will be investigated in future work. The investigation described in this paper serves the main purpose of demonstrating the impact of vehicle type and vehicle use case on the optimum machine design with respect to energy consumption.

Funding Open Access funding enabled and organized by Projekt DEAL.

Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-02-02T15:34:32.498Z
2022-01-05T00:00:00.000
{ "year": 2022, "sha1": "ad64610346364617c71a3564a2e0706cc2768312", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s41104-021-00097-y.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "ad64610346364617c71a3564a2e0706cc2768312", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
239046805
pes2o/s2orc
v3-fos-license
ASSESSMENT OF POTENTIAL MARINE CURRENT ENERGY IN THE STRAITS OF THE LESSER SUNDA ISLANDS

The Lesser Sunda Islands extend from Bali to Timor and consist of two geologically distinct parts formed by a subduction system of oceanic crust along the Java-Timor Trench. The northern part, which includes Bali, Lombok, Sumbawa, Flores, Wetar, Pantar and Alor, is volcanic in origin, whilst the southern part is non-volcanic, encompassing the islands of Sumba, Timor and Rote. The straits along the Lesser Sunda Islands are formed as a result of very complex geological processes and tectonics in this area. These straits are the most important cross-sections in the southern part of the Indonesian Throughflow (ITF), functioning as outlets for the mass flows of seawater from the Pacific Ocean to the Indian Ocean through the Flores and the Savu Seas. In these straits, relatively high current speeds occur, caused not only by the ITF but also by their geometry, the influence of tidal flow, and monsoonal currents. The site study and ocean current measurements were conducted using an echosounder, a pair of Acoustic Doppler Current Profilers (ADCP), and other supporting equipment. In general, most average ocean current speeds are less than 1.5 m/s with a flow duration of 8-12 hours a day, and the maximum speed reaches up to 3 m/s. The tidal types in almost all the straits are mixed semidiurnal tides, in which two high waters and two low waters occur each day, with the high and low tides differing in height. The Lesser Sunda Straits were selected as potential sites for an ocean current power plant because their current speeds are relatively high and their characteristics are more predictable compared with straits in other regions. Based on the results of the bathymetry survey and the current characteristics from the ADCP deployed at a fixed (stationary) location on the seabed, the best location for the current power turbines is at a depth of 15-30 m where the seabed is gently sloping.

INTRODUCTION

The possibility of generating electrical power from ocean currents in Indonesia has been recognized for many years. It started in 2007, when the Ministry of Energy and Mineral Resources established Nusa Penida Island as the selected Renewable Energy Village for its renewable energy development program (Lubis and Yuningsih, 2012). However, significant research and development of renewable energy from the ocean, such as ocean current, tidal, wave and ocean thermal energy conversion, has only started in this area recently. Currently, ocean current power is emerging as a possible and significant resource for renewable energy development. There are several advantages of ocean current energy utilization compared to other forms of energy generation. The electricity generated from ocean currents is renewable, more predictable and more environmentally friendly. An important initial step in exploring ocean energy is to characterize and map the ocean current resources available for generating electrical energy. The utilization of ocean currents to produce electricity, however, is still not favorably developed and needs to be studied in more depth (Hidayati et al., 2016). Hence, a site study and ocean current resource observations were conducted by the Marine Geological Institute in selected straits of the Lesser Sunda Islands in the Bali and East Nusa Tenggara area (Figure 1). The Lesser Sunda Islands are a group of islands located between the waters of Southeast Asia and Northern Australia.
The Lesser Sunda Islands form a volcanic strip of the Sunda Arc and consist of two geologically distinct parts formed by a subduction system of oceanic crust along the Java-Timor Trench. The northern part of the Lesser Sunda Islands is of active volcanic origin and consists of the islands of Bali, Lombok, Sumbawa, Flores, Wetar, Pantar and Alor. Meanwhile, the islands in the southern part, such as Sumba, Timor and Rote, are non-volcanic archipelagos which are geologically derived from the Australian plate. Geologically, the Flores Sea is a morphostructural feature coinciding with a back-arc basin formed by the collision between the Nusa Tenggara island arc and the Australian continent (Prasetyo and Sarmili, 1994). The geological structure of this area is quite complex. Compressional tectonics from the Flores Basin to the Timor High resulted in the formation of a horizontal fault and a rising (thrust) fault along the Lesser Sunda Islands. The intersections of the fault zones in the Lesser Sunda Islands formed straits connecting the Flores Sea and the Savu Sea. These straits have become the most important cross-sections of the Indonesian Throughflow (ITF), and further function as outlets for the mass flow of seawater from the Pacific Ocean through the Flores Sea to the Savu Sea and then to the Indian Ocean (Gordon A.L., 2005). In these straits, relatively high current speeds occur, caused not only by the ITF but also by their geometry, the influence of the monsoon currents, and the South China Sea - Indonesian Seas Transport/Exchange (SITE). The locations of the ITF, SITE and Trades influences are shown in Figure 2. In general, the electricity distribution in Bali and Nusa Tenggara is still insufficient for the entire islands, resulting in electrical blackout arrangements in some settlements for more than 6 hours per day. To mitigate the insufficiency of electricity, PT PLN, the national electricity company, has been hiring more diesel generators to increase the supply of electricity. Based on data issued in 2013, the ratio of settlement electrification facilitated by PT PLN in the islands is still quite low at less than 45%. This is well below the national average, which reached 60-80% in the same year. The purpose of the study is to identify the characteristics of ocean currents in the straits of the Lesser Sunda Islands in order to assess the potential of ocean currents as renewable energy resources in the strait areas. Research was conducted to collect data from the selected sites by determining the seabed morphology and the hydro-oceanographic characteristics.

METHODS

The survey methods applied during field work in the straits of the Lesser Sunda Islands from 2008-2015 were the measurement of currents, tidal observation, observation of meteorological parameters, and observation of the seabed morphology, seabed nature and coastal characteristics. Additionally, several observation methods were applied to collect field data to assess the selected site and the proper type of ocean current power turbine for generating sufficient electricity. Referring to the ocean current turbine site selection criteria from Marine Current Turbines Ltd., a decent location to develop ocean current power generation must be located not far from the beach, be close to the power grid, have an ocean current speed of 2.0-3.0 m/s (depending on the turbine type), and have a relatively flat seabed morphology (Ainsworth and Thake, 2006).
In this study, the tidal range was measured using an electronic tide gauge, and a tidal harmonic analysis was carried out by applying the Royal Admiralty method. To obtain the tidal characteristics, this measurement was also necessary in order to determine the chart datum for correcting the bathymetric survey results and to obtain data correlating with the current observations. The ocean current characteristics were measured using an Acoustic Doppler Current Profiler (ADCP). Two survey methods were applied: transect and stationary surveys. A transect survey is carried out by towing an ADCP instrument to measure the currents under a moving boat. A stationary survey is so called because it involves the deployment of a sea current measurement device (ADCP) moored at a site on the seabed. Typically, ADCP measurements of the tidal currents must be taken for at least 30 days. This allows a tidal harmonic analysis of the flow to be completed (EMEC, 2009). The bathymetric survey was conducted using a single beam echosounder (SBES) with a spacing of sounding lines of approximately 50-100 meters. The mobile ADCP and single channel echosounder equipment were integrated with a Global Positioning System (GPS) device. Both instruments recorded data on current speeds and ocean depths along the survey ship trajectories. The ocean depth data were then correlated with the current data recorded by the mobile ADCP to infer the relationship between the morphology of the seabed and the current velocity distribution.

Figure 2. The influences of ITF, SITE, and Trades (monsoon currents) in the Indonesian water area (after Susanto et al., 2000).

Ocean current data recorded by the mobile ADCP and the static ADCP were correlated with the tidal data to indicate the movement patterns of the ocean currents at high and low tide conditions. Weather observation was conducted using an Automatic Weather Station (AWS) during the field work. The AWS measures weather parameters such as air temperature, humidity, wind speed and direction, air pressure, and rainfall. Weather conditions can change the current flow. Large pressure systems may enhance or reduce the current flow, and storm surges can cause strong flows that can damage the turbines. Weather can also affect deployment and maintenance by limiting access to the site (EMEC, 2009). The current data obtained from field measurements were presented as time series graphs, scatter plots, stick plots, and current roses. In this study, the current energy conversion calculated from the current speed data is expressed in units of power density (W/m²). Adopting the formula from Fraenkel (2002), the power density can be obtained through the following equation:

$$P = \frac{1}{2}\,\rho\,A\,V^{3}$$

where P is the power (Watt); ρ is the density of seawater (kg/m³); A is the cross-sectional area at the turbine location (m²); and V is the speed of the ocean current (m/s). The density of seawater is taken as 1025 kg/m³. The turbine cross-sectional area (A) is considered to be 1 m², so that the most influential variables in the conversion calculation into electric power are the current velocity and the turbine area (Fraenkel, 2002).
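The power density relation above is straightforward to evaluate; the following sketch uses the paper's values ρ = 1025 kg/m³ and A = 1 m², while the sample speeds are arbitrary.

```python
# Power density from Fraenkel (2002): P = 0.5 * rho * A * V^3, with
# rho = 1025 kg/m^3 and A = 1 m^2 as in the text, so results are in W/m^2.

RHO_SEAWATER = 1025.0  # kg/m^3

def power_density(v_current: float, area: float = 1.0) -> float:
    """Kinetic power (W) carried by a current of speed v (m/s) through area (m^2)."""
    return 0.5 * RHO_SEAWATER * area * v_current ** 3

for v in (0.5, 1.5, 2.5, 3.0):
    print(f"V = {v:.1f} m/s -> {power_density(v):8.1f} W/m^2")
# Cubic dependence: doubling the speed yields eight times the power density.
```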
Tidal Characteristics

Based on the harmonic tidal analysis using the Royal Admiralty method, the tidal type in almost all of the straits is mixed semidiurnal, with two high tides and two low tides each day. The high and low tide variability can be seen in Figure 3.

The tidal curve of a 30-day cycle at Toyapakeh Strait (Figure 4) shows the tidal pattern in which two high tides and two low tides occur within 24 hours, with a maximum water level of 2.15 m. As a consequence, the duration of the high tide and low tide conditions is about 6-7 hours on average during the spring tide, while the neap tide has a shorter duration. When the spring tide produces a steep gradient of the water level between high and low tides, this condition is followed by an increased current speed. Thus, the current speed reaches its maximum condition.

Current Speed and Direction

The Lesser Sunda Straits were selected as potential sites for a sea current power plant due to their relatively high current speeds and their more predictable characteristics. The route of the current exchanges reveals that, owing to the narrowness of the straits of the Lesser Sunda Islands, inflow waters are trapped here before flowing to the Indian Ocean. This is the reason why the current speeds in the straits are relatively high. Table 1 shows the range of current velocities at certain depths. The current velocity in this table comprises the minimum and maximum velocities recorded by the device during high and low tides down to 27 m depth. The maximum velocity is above 2 m/s, and in particular above 3 m/s from 3 m to 22 m depth, with the greatest speed at a depth of 5 m. Based on the current distribution data and the performance of continuous long-term current measurements, one can estimate the best position and depth for the placement of the ocean current power plant equipment/turbine. The results of the stationary current measurement and the current transect survey during high and low tides indicate that in high tide conditions the current direction tends toward the northeast, and during low tide conditions the current tends to flow in a southwest direction (Yuningsih and Lubis, 2011). The current speed and direction at Larantuka Strait under the two conditions, i.e. during the high and low tides, are shown in Figure 5, whereas the distribution of the current speed derived from the ship-mounted ADCP survey during the field work is shown in Table 1. Specifically for the Toyapakeh Strait - Nusa Penida, the current direction has a unique pattern. The results of field measurements and observations during the survey are plotted in a current rose diagram. It shows the dominant current direction to be to the southwest in all high and low tide conditions, as well as in all spring and neap tide conditions (Figure 6). Based on the results of current data processing using World Current 1.03 after Leverani et al. (2016), the currents in Toyapakeh Strait are tidal currents. The current direction describes a movement in two directions (bi-directional current), namely southwest-northeast. This current direction is formed by the changes in water level elevation and the seabed morphology.

Figure 5 panels: (a) during high tide, northeastward current; (b) during low tide, southwestward current.
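The classification of the tide as mixed semidiurnal follows from the harmonic analysis; one standard way to express such a classification, not spelled out in the text, is the form factor computed from the four main constituent amplitudes. The amplitudes in the sketch below are invented for illustration.

```python
# Standard tidal form factor F = (K1 + O1) / (M2 + S2) from the amplitudes
# of the four main harmonic constituents; the classification thresholds are
# the conventional ones, and the amplitudes here are illustrative only.

def tidal_form_factor(k1: float, o1: float, m2: float, s2: float) -> float:
    return (k1 + o1) / (m2 + s2)

def classify(f: float) -> str:
    if f <= 0.25:
        return "semidiurnal"
    if f <= 1.5:
        return "mixed, mainly semidiurnal"  # two unequal highs/lows per day
    if f <= 3.0:
        return "mixed, mainly diurnal"
    return "diurnal"

f = tidal_form_factor(k1=0.35, o1=0.20, m2=0.60, s2=0.30)
print(f"F = {f:.2f}: {classify(f)}")  # F = 0.61: mixed, mainly semidiurnal
```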
Bathymetry

Vertically and horizontally, the current velocity distribution in the straits of the Lesser Sunda Islands is influenced not only by the tidal conditions, but also by the seabed morphology, the width of the strait and the depth of the sea. Based on data from several trajectories of the ship-mounted ADCP, the current velocity in these straits correlates with the depth of the sea, and the current speed is relatively high. The highest current velocity occurs in the deepest and narrowest part of the strait channel during the high and low tides (Figure 7 and Figure 8). In general, the potential ocean current energy that can be harvested depends on two parameters, i.e. the current speed and the swept area of the marine turbine blades (Thomas, 1991; Fraenkel, 1999). Another important consideration that has a significant impact on the ocean current power plant is the right position of the turbine installation at the proper depth where the optimal current speed occurs. The basic technical factors that affect the assessment of a site for turbine deployment are: the site must have a large enough current speed, between 0.5 m/s and 3.0 m/s, with a relatively uniform velocity from the surface to the bottom; the site must not be too far from the beach (ideally less than 1 km); the sea depth must range from 15 m to 50 m; and the morphology of the seabed must have a gentle slope. The water depth at the Toyapakeh Strait and Boleng Strait generally ranges between 5-50 meters in the outer channel with a gentle seabed morphology, whereas in the middle channel the depth increases gradually to more than 200 meters (Figures 9 and 10). The bathymetric contour pattern of both straits shows a steep and narrow morphology. Some closed-pattern contours are found at depths of up to 200 meters, indicating several deep hole morphologies. Here, some of the closed bathymetric contour patterns at more than 100 meters depth have the potential to cause a vortex of ocean currents. The bathymetry of almost all the straits reflects the tectonic activity in the area, which is characterized by a deep channel in the middle and a small shallow platform in the outer channel on a gentle seabed morphology. This condition gives rise to a strong tidal current that can be utilized to generate electricity. One important piece of information from the area for designing an ocean current turbine is the sediment that composes the seafloor and coastal area (Kurnio et al., 2018). At the Larantuka Strait, the water depth ranges from 5 to 80 meters on a gently sloping morphology (Figure 8). Considering the strait width, the bathymetric contour pattern, the depth, and the seabed morphology, there is a possibility of fast channeling of the mass flow of seawater from the Flores Sea to the Savu Sea, especially for the surface ocean currents. No potential for a vortex of ocean currents was found. The current pattern in the Larantuka Strait is also more predictable than in the other straits. From these technical aspects, the Larantuka Strait is more preferable as the selected site for an ocean current power plant compared to other straits.
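The technical site criteria listed above translate directly into a screening filter; the following sketch applies them to two hypothetical site records (the names echo the straits discussed, but the numbers are invented).

```python
# Screening filter over candidate sites using the technical criteria from
# the text: current speed 0.5-3.0 m/s, depth 15-50 m, distance to shore
# ideally < 1 km, gentle seabed slope. Site records are hypothetical.

sites = [
    {"name": "Larantuka A", "v_max": 2.8, "depth": 35, "shore_km": 0.6, "gentle_slope": True},
    {"name": "Toyapakeh B", "v_max": 3.0, "depth": 210, "shore_km": 0.9, "gentle_slope": False},
]

def suitable(s: dict) -> bool:
    return (0.5 <= s["v_max"] <= 3.0      # usable current speed range (m/s)
            and 15 <= s["depth"] <= 50    # installable depth window (m)
            and s["shore_km"] <= 1.0      # ideally less than 1 km from shore
            and s["gentle_slope"])        # gentle seabed morphology

for s in sites:
    print(f'{s["name"]}: {"candidate" if suitable(s) else "rejected"}')
```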
Current Energy Conversion

Based on the ocean current characteristics and the territorial water profile, the straits of the Lesser Sunda Islands have a great potential for the utilization of ocean current power plants. Although ocean current power is not widely implemented in Indonesia at present, it has an important potential for future renewable electricity generation, especially in remote coastal areas. There are many selected sites in the straits of the Lesser Sunda Islands that have the potential to generate electricity due to significant current speeds of more than 2.5 m/s. In general, the average sea current speeds in the Lesser Sunda Straits are less than 1.5 m/s with a flow duration of 8-12 hours a day. The maximum speed reaches 2.5 m/s up to more than 3.0 m/s with a flow duration of 2-3 hours a day.

The calculation results for the ocean current energy using the Fraenkel (2002) formulation in several straits of the Lesser Sunda Islands, e.g. Toyapakeh Strait, Larantuka Strait, and Boleng Strait, are shown in Table 2. The ocean current power calculated from the current speed data is expressed in units of power density (W/m²).

CONCLUSION

The study conducted in the Lesser Sunda Straits draws the following conclusions:
• The straits through Bali and Nusa Tenggara, namely the Lesser Sunda Straits, can be selected as potential sites for a sea current power plant due to the strong current speeds of between 0.5 m/s and more than 3.0 m/s.
• In general, the current direction tends toward the northeast during high tide and toward the southwest during low tide.
• Based on the data from several moving ADCP trajectories, the relatively high current velocity in these straits correlates with the depth of the sea. The highest current velocity occurs in the deepest and narrowest part of the strait channel during high tide and low tide conditions.
• Considering several technical aspects, i.e. the tide, the current speed, the bathymetry and the seabed morphology, the Larantuka Strait is the preferable site for developing an ocean current power plant.
• Based on the speed and the duration of the flowing current, the ocean current turbine technology suitable for the waters around Bali-Nusa Tenggara is technology with a low cut-in speed (0.5-1 m/s), so that sufficient electrical power can be obtained.
2021-10-20T15:21:49.159Z
2021-09-18T00:00:00.000
{ "year": 2021, "sha1": "c857a46aa773baece3f2fc9fffd8c0b172dcecca", "oa_license": "CCBYNCSA", "oa_url": "http://ejournal.mgi.esdm.go.id/index.php/bomg/article/download/703/521", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6e43b7dd97582e7c44ba333acca345e9e1677ef2", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
270180545
pes2o/s2orc
v3-fos-license
Robust PWM control scheme for switched-capacitor MLI with leakage current suppression in grid-connected renewable energy application

Typically, parasitic capacitances exist between the ground and the solar panel terminals in grid-connected PV systems. These parasitic capacitances provide a path for a leakage current, which leads to significant safety concerns and to observable, seriously hazardous harmonic orders aligned with the injected grid current. In this research, a robust PWM control method is proposed that competently reduces the level of the leakage current and improves the power quality of a switched-capacitor multilevel inverter. This technique creates developed reference signals from the main signal to generate the switching scheme for the converter circuit. Additionally, the suggested control strategy works with only a small number of carrier signals, resulting in a quick system response and a simpler controller algorithm. Likewise, this control approach offers a stable way to maintain a constant output voltage in the suggested converter by adjusting the switched capacitors' voltages, which is not possible with traditional control techniques. MATLAB/Simulink is used to simulate the outcomes for both the suggested control approach and the traditional Phase Disposition (PD) PWM control method, whereby the leakage current component is reduced to 25% of the component captured with the PDPWM. The simulation results and the practical results based on the dSPACE-1103 hardware are quite similar.

Introduction

The utilization of renewable energy is considered to be the most important source of electrical power in the future. Additionally, it is far better than conventional means of generating electricity, including burning coal in steam generators [1,2]. Renewable energies provide a clean, affordable, and continuous source of energy [3,4]. Additionally, they eliminate hazardous CO and CO2 emissions while mitigating global warming. In the same regard, they reduce the depletion of the large quantities of fresh water used for cooling fuel-burning power stations, which helps to reduce the scarcity of fresh water [3]. However, the cost of PV systems is relatively high, and the contribution of photovoltaic (PV) energy to the overall amount of energy consumed worldwide is currently quite low [5]. Therefore, optimizing the PV system's performance is essential for using these renewable energy sources.

Multilevel inverters (MLIs) offer superior power quality, greater performance, and higher reliability and dependability when they replace traditional converters in grid-connected transformerless systems [6,7]. Higher efficiency, multiple output steps, and lower voltage stress are the most prominent features of multilevel inverters [8,9]. Many MLI architectures are employed in RES to absorb the maximum available power from the renewable source and supply it to the grid or to local loads based on the customers' requirements [10]. These systems employ different control methods with various features to achieve better performance and higher efficiencies [11]. As a result, MLIs have emerged as a top inverter structure for renewable energy applications (REAs) like photovoltaic (PV), wind, and fuel cell energy [12,13].
In most grid-connected PV systems, parasitic capacitances exist between the ground and the terminals of the solar panels. These parasitic capacitances act as a conduit for a leakage current, which in turn creates significant safety issues, Electromagnetic Interference (EMI) issues, and detectable and extremely damaging harmonic orders linked with the injected grid current [14]. Taking into account the isolation technique between the input (DC) and output (AC) sides of the converter circuit, the grid-connected applications of multilevel inverters may be divided into two major groups: transformer-based converters and transformerless converters. The use of transformers in grid-tie converters has several benefits, including the prevention of potential ground faults due to the electrical isolation between the input and output sides and the maintenance of magnetic coupling, which blocks the path for the injection of leakage current into the grid by providing galvanic isolation [15]. However, adding transformers to the grid-tie converter circuit has significant disadvantages, since it results in a heavy and expensive system. Additionally, such systems tend to be less efficient than non-isolated systems (by 1-2%) [16].

For the transformerless converter circuit, many techniques have been presented to overcome the leakage current. These solutions fall into the following groups: the structural method, the filtering-based method, and the modulation method. Structural approaches, such as the H5 [17,18], H6 [19,20], and HERIC [21] converters, and other structures [22-25], modify the converter's structure to get rid of the leakage current. However, these solutions lead to bulky and costly systems. In the filtering-based method, a filtering stage is installed in the converter circuit to attenuate the leakage current. Using symmetrical filter components at the output terminals of the converter circuit helps to reduce leakage currents by attenuating the effect of the differential-mode voltage on the leakage current [26,27]. Inverter volume and weight are negatively impacted by filter solutions, which are also often expensive.

The modulation method keeps the converter circuit small and well packaged, because it does not call for any extra parts, such as the switching devices of the structural method or the choking coils of the filtering-based method, to overcome the leakage current in the converter circuit. By modifying the modulation scheme for the converter circuit's switching devices and selecting the appropriate modulation method for each converter circuit, the leakage current may be completely removed [28]. Level Shift Pulse Width Modulation (LSPWM) and Phase Shift Pulse Width Modulation (PSPWM) are the most widely used and straightforward modulation methods for multilevel inverters [29]. However, one major drawback of these control approaches is the imbalance in power distribution. Additionally, the leakage current component is not eliminated, and the inverter's efficiency is decreased by large power losses [30]. A modified phase disposition pulse width modulation method is applied to the converter circuit in Ref. [31], and the control method succeeded in suppressing the leakage current in the output to the allowable standard level.
The goal of this study is to develop a PWM control technique that provides the proper switching order for the switching devices in the investigated MLI circuit by using auxiliary reference signals generated from the main reference waveform. The developed LMPWM control method assists in balancing the inverter circuit's capacitor voltages and regulating their values, so that a regular output voltage waveform is produced and the system efficiency is increased. Furthermore, it addresses the problem of the leakage current component and reduces it to the level permitted by the DIN VDE 0126-1-1 standard. This study presents a comparison between the PDPWM and the suggested LMPWM for regulating the performance of the MLI circuit, in order to highlight the benefits of the proposed control strategy. In terms of adjusting the capacitor voltages and lowering the amount of leakage current, the suggested control approach performs better.

The structure of this work is as follows: Section II defines the parasitic capacitance problem and the leakage current. Section III explores the analysis of the PDPWM and LMPWM control approaches. The same section also discusses the performance of the converter as well as the pertinent simulation results for the MLI under each of these control conditions. In Section IV, the experimental results for the converter circuit are summarized and presented. Finally, Section V provides a summary of the work presented in this study.

Fig. 1. The 9-Level MLI configuration [32].

Single phase 9-Level MLI proposed topology

Referring to the proposed 9-Level MLI structure considered in Ref. [32] and shown in Fig. 1, the proposed converter circuit consists of a single DC source, a pair of switched capacitors, a single diode, and a set of 10 power switches. This combination generates an output voltage with nine levels. The proposed structure employs a pair of switched capacitors to generate the output levels. The capacitors are charged from the single DC supply, each to half of the DC source voltage. The self-charging and natural balancing capabilities are the most attractive features of this configuration. The generalized configuration of this converter can provide a higher output power level with finer output waveforms, which maintains a higher performance for the converter circuit.

The proposed 9-Level converter circuit can be involved in many applications, of which the most attractive is the renewable energy utilization system, given the availability of DC-links in renewable sources, especially PV farms, and the low-cost support for this kind of application. Refs. [32,33] explored the involvement of the proposed structure in RES and showed good performance of the system on both the DC and the AC sides. However, the performance of the system has not been checked against the parasitic capacitance, nor has the effect of the leakage current on the system outputs been studied. As PV panels are employed on the input side of the converter circuit to supply the system with DC input power, the insulating dielectric material between the metallic frame of the PV panels and the ground has a capacitive effect and forms a stray capacitance, sometimes called a parasitic capacitance.
Several factors affect the capacitance of these stray capacitors, such as the material of the outer frame of the PV panel used during the manufacturing process, the moisture percentage, the extent of wetness, and the amount of dust on the PV panel. This parasitic capacitance might thus vary with different weather conditions [34,35]. The capacity of the parasitic capacitance ranges from tens of picofarads (pF) to hundreds of nanofarads (nF). In Ref. [35], the measured value of the capacitance is around 150 pF in dry weather and 9 nF when it rains. Several methods are discussed in Ref. [36] to estimate the capacity of the stray capacitors in PV systems. The conclusion from these calculations is that it is not easy to find a specific value for the capacitance, and the approximate rate of variation is 150 nF/kW.

The following section gives a closer view of how the leakage current is generated and passes through the PV circuit. The mathematical representation of the general grid-tie system defines concepts such as the common-mode and differential-mode voltage components. Fig. 2(a) shows the general representation of the grid-tie converter circuit fed from the PV unit. The presented figure shows the parasitic capacitors between the terminals of the PV panel and the ground (C pv1 and C pv2) and the DC-link capacitor C. The input terminals of the converter circuit are (P and N), whereas the output terminals are (A and B). The output of the converter circuit is connected to the grid through the filtering stage, which is represented by the inductors (L f1 and L f2). In addition, the path of the leakage current in the converter circuit is indicated. In Fig. 2(b), the converter is modeled using voltage sources, where V CM and V DM are the common-mode and the differential-mode voltage components, respectively:

$$V_{CM} = \frac{V_{AN} + V_{BN}}{2}, \qquad V_{DM} = V_{AN} - V_{BN}$$

The terminal voltage components can be expressed in terms of the common-mode and the differential-mode voltage components in (3) and (4), and the updated formulas are presented as follows [37]:

$$V_{AN} = V_{CM} + \frac{V_{DM}}{2} \tag{3}$$

$$V_{BN} = V_{CM} - \frac{V_{DM}}{2} \tag{4}$$

Equations (3) and (4) are involved in the mathematical simplification of the circuit, and the updated circuit is illustrated in Fig. 2(c). This simplified circuit model mainly uses the common-mode and the differential-mode voltage components. The common-mode circuit in Fig. 2(c) can be presented in a more simplified format when it is reduced to a single closed loop, as in Fig. 2(d), after calculating the equivalent common-mode voltage of the parallel branches [38]. The equivalent common-mode voltage is addressed as follows in equation (5):

$$V_{ECM} = V_{CM} + \frac{V_{DM}}{2} \cdot \frac{L_{f2} - L_{f1}}{L_{f1} + L_{f2}} \tag{5}$$

The equivalent circuit in Fig. 2(d) confirms that the value of the leakage current in the circuit is affected by both the common-mode and the differential-mode voltages. Furthermore, the inductance of the filtering inductors influences the magnitude of the leakage current.

The effect of the stray capacitances is considered in studying the performance of the 9L MLI topology in transformerless PV applications, as it is considered one of the most important causes of the emergence of the leakage current. The next section presents the control schemes for the proposed converter circuit under the conventional PWM control strategy and the developed PWM control strategy, in order to evaluate the performance of the converter circuit under the different control strategies.
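A small sketch of the decomposition in Eqs. (3)-(5) follows; the helper names and numeric values are illustrative. It also makes explicit a useful consequence: with symmetric filter inductors (L f1 = L f2), Eq. (5) reduces to V ECM = V CM, which is why modulation schemes that keep the common-mode voltage constant can suppress the leakage current.

```python
# Sketch of the common-mode / differential-mode decomposition used above;
# variable names follow the text, the numeric values are illustrative.

def cm_dm(v_an: float, v_bn: float) -> tuple[float, float]:
    v_cm = 0.5 * (v_an + v_bn)   # common-mode voltage
    v_dm = v_an - v_bn           # differential-mode voltage
    return v_cm, v_dm

def equivalent_cm(v_cm: float, v_dm: float, l_f1: float, l_f2: float) -> float:
    # Eq. (5): asymmetric filter inductors couple part of V_DM into the
    # common-mode loop that drives the leakage current.
    return v_cm + 0.5 * v_dm * (l_f2 - l_f1) / (l_f1 + l_f2)

v_cm, v_dm = cm_dm(v_an=400.0, v_bn=0.0)
# Symmetric filter: V_ECM equals V_CM, so only V_CM drives leakage current.
print(equivalent_cm(v_cm, v_dm, l_f1=2e-3, l_f2=2e-3))  # 200.0
```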
Control scheme analysis of the proposed MLI configuration

Different control strategies may be utilized to regulate the proposed converter circuit's operation, and the converter's performance can be assessed using the regulating algorithm that was employed. The converter circuit may be controlled using a variety of approaches, and the system's behavior can be altered based on the control strategy used. Notably, Phase Disposition Pulse Width Modulation (PDPWM) is one of the most popular regulating strategies that guarantees improved performance.

This control technique is considered a common PWM technique that is widely used to reduce the impact of the leakage current. It also has a propensity to produce output voltage waveforms with a lower harmonic content. First, the 9L converter circuit's performance is assessed using the PDPWM control scheme, and the findings are drawn. Second, the proposed PWM regulating method is applied to the system to improve the performance of the considered 9L MLI setup. As a result, the effectiveness of the converter circuit under the various regulating techniques is contrasted.

A. Conventional PDPWM modulation strategy

To generate the switching sequence for the switching devices in the converter circuit, the PDPWM control approach synthesizes 8 carrier signals to be compared with the reference signal, owing to the nature of the output waveform. The reference signal intersects with all of the carrier signals, which are synthesized by the PDPWM to operate in phase. The carrier and reference signals of the PDPWM needed for the suggested converter circuit are shown in Fig. 3.

Fig. 3. Phase disposition PWM's carrier and reference waveforms.

The output results from the suggested converter circuit under the PDPWM are described in Fig. 4. The following outcomes are obtained when this control strategy is applied to the converter circuit.

The leakage current value was measured, and the leakage current waveform is shown in Fig. 4(a), with the signal's RMS value recorded as 78 mA in Fig. 4(b). The VDE-0126-1-1 standard code has been used to evaluate the performance of the control approach, as the leakage current component was found to be rather high. On the other hand, if the capacitor voltages in the suggested converter circuit are not perfectly balanced, the system's stability will not be fully attained, which can damage the regularity of the output voltage waveform. The capacitors' voltages were monitored, and the captured result is shown in Fig. 4(c). Fig. 4(d) shows the voltage across the negative terminal of the DC source and the AC ground terminal. The PDPWM is considered a straightforward control approach with a simple computing procedure. Furthermore, it scores a good performance through the presented output waveforms. In addition, it presents an elimination stage for the leakage current component. However, this conventional control technique has some shortcomings that can be noticed from the captured list of output waveforms, such as the high number of triangular carrier waveforms, which loads the system; upgrading the system would require even more triangular waveforms. Moreover, the parasitic capacitance voltage is not stable, so the leakage current component still acts, and the switched capacitor voltages in the converter circuit are not completely balanced, which affects the regularity of the output waveform. All these defects can be handled by the proposed developed PWM control technique.
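The level-selection logic of PDPWM for a nine-level output can be sketched as follows; the mapping from the resulting level to the individual gate signals of the ten switches is omitted, and the carrier and sampling frequencies are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of phase-disposition PWM for a 9-level output: eight
# in-phase triangular carriers stacked across [-1, 1] are compared against
# one sinusoidal reference; the output level is the number of carriers the
# reference exceeds at each instant.

f_ref, f_carrier, fs = 50.0, 2000.0, 200_000.0
t = np.arange(0, 0.02, 1 / fs)                 # one fundamental period
ref = np.sin(2 * np.pi * f_ref * t)

tri = 2 * np.abs((t * f_carrier) % 1.0 - 0.5)  # unit triangle in [0, 1]
bands = [(-1.0 + 0.25 * i) + 0.25 * tri for i in range(8)]  # stacked carriers

level = sum((ref > b).astype(int) for b in bands) - 4  # -4 ... +4
print(sorted(set(level.tolist())))                     # nine discrete levels
```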
B. Developed LMPWM control algorithm

To gain better performance from the proposed converter circuit, a developed PWM control algorithm is considered in this work. This method is used to mitigate the leakage current component in the circuit, so it is defined as the Leakage-current Mitigating PWM control method, abbreviated to LMPWM. The principle of operation of this method is based on modifying the reference sinusoid geometrically, and the generated compensatory signals are used with the aid of the carrier signals to generate the switching signals for the switches. Unlike the PDPWM and Phase Shift (PS) PWM approaches, which use eight carriers to provide gate signals for the switches, the LMPWM needs only four carriers to perform this task. V cr1, V cr2, V cr3, and V cr4 are the carrier signals used in the LMPWM, while the auxiliary reference signals are created through equations (6)-(11) as functions of the main sinusoidal reference signal.

Fig. 5. Switching pattern for the proposed MLI topology under the LMPWM controlling method.

The switching scheme of the LMPWM shown in Fig. 5 is applied to the converter circuit, and the characteristics of the system and the outputs are illustrated in Fig. 6. The features of the output waveforms are analyzed to show the effect of applying the developed LMPWM on the performance of the converter circuit. The leakage current component is illustrated in Fig. 6(a). This control method has succeeded in mitigating the level of the leakage current component acting on the grid current.

The leakage current component records only 22 mA, which is compatible with the VDE-0126-1-1 standard code [39]. The RMS value of the leakage current component is shown in Fig. 6(b). Also, one of the smartest features of the proposed LMPWM control method is that it helps in adjusting the controllability of the switched capacitors' voltages and keeps the system balanced. This helps in creating a regular output voltage waveform. The capacitors' voltages have been monitored and are presented in Fig. 6(c). The voltage across the negative terminal of the DC source and the AC ground terminal, V Ng, is shown in Fig. 6(d). The proposed LMPWM control algorithm has proven its efficiency in maintaining stable system performance and reducing the level of the leakage current component to the allowable standard. The presented set of results shows the superiority of the proposed control approach over the traditional control method in controlling the performance of the system, by producing an output current with a lower percentage of leakage current relative to the rated values, as well as by controlling the compatibility and balance between the switched capacitors' voltages. The next section addresses the experimental validation of applying the LMPWM control algorithm to the grid-connected 9L MLI converter.
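Equations (6)-(11), which define the auxiliary reference signals, are not reproduced in this excerpt, so the sketch below only illustrates the general idea of a reduced-carrier scheme with four carriers (V cr1-V cr4): fold the reference into one polarity, compare it against four stacked carriers, and restore the sign afterwards. This is an assumption for illustration, not the authors' exact construction.

```python
import numpy as np

# Hedged sketch of a four-carrier, nine-level scheme: the sinusoidal
# reference is folded into [0, 1] (a stand-in for the paper's auxiliary
# references), compared against four stacked carriers, and the polarity is
# restored from the original reference.

fs, f_ref, f_c = 200_000.0, 50.0, 2000.0
t = np.arange(0, 0.02, 1 / fs)
ref = np.sin(2 * np.pi * f_ref * t)

folded = np.abs(ref)                        # folded auxiliary reference
tri = 2 * np.abs((t * f_c) % 1.0 - 0.5)     # unit triangular carrier
carriers = [0.25 * i + 0.25 * tri for i in range(4)]   # Vcr1..Vcr4 on [0, 1]

magnitude = sum((folded > c).astype(int) for c in carriers)   # 0 ... 4
level = np.sign(ref) * magnitude                              # -4 ... +4
print(sorted(set(level.astype(int).tolist())))                # nine levels
```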
Experimental results

An experimental setup was built to validate the effect of using the LMPWM control algorithm to control the converter circuit. The specifications of the experimental system are listed in Table 1. Fig. 7 shows a photograph of the experimental setup. The picture includes the SC-MLI and the controller stages. Besides, it illustrates the monitoring and protection stages.

Fig. 8 presents the output voltage waveform of the 9L converter. The output waveform looks periodic and regular and has a streamlined shape. The grid voltage and injected current components are shown in Fig. 9. The alignment between the grid voltage and current waveforms proves the in-phase operation of the system and indicates that the system injects only the real power component into the grid, which proves the robustness of the controller in adjusting the performance of the system. The recorded value of the phase angle between the grid voltage and current components is included in the same figure. The spectrum analysis of the injected current waveform is shown in Fig. 10. The small value of the THD percentage presented in the figure reflects the quality of the injected current waveform and ensures that no harmful harmonic orders are injected into the grid. The capacitors' voltages in the proposed converter circuit have been measured and are presented in Fig. 11. The capacitors' voltages are balanced and aligned to their reference value, which guarantees a regular and clear output voltage waveform. The leakage current component in the circuit is measured and illustrated in Fig. 12. The leakage current value recorded in Fig. 12 is largely consistent with the value captured in the simulation, which validates the robustness of the control algorithm in mitigating the leakage current.

The efficiency of the system has been calculated for both control schemes under various power levels. The calculated efficiencies for both control methods are compared in Fig. 13. The comparison shows the superiority of the proposed LMPWM control method, as it scores a higher output efficiency (above 96.5% at 1 kW). This comparison demonstrates that the suggested control approach outperforms the traditional control method in terms of efficiency. The enhancement in efficiency is noticed at all power levels due to the low power losses that this control technique scores.
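The THD figure reported from the spectrum analysis (Fig. 10) can be reproduced from sampled data as follows; the synthetic current below stands in for the measured waveform, and all numeric settings are illustrative.

```python
import numpy as np

# Sketch of a THD computation from sampled current data: take the FFT,
# read off the fundamental and harmonic amplitudes, and form the ratio.

fs, f1 = 10_000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
# fundamental plus small 3rd and 5th harmonics as stand-in "measured" current
i_grid = (np.sin(2 * np.pi * f1 * t)
          + 0.02 * np.sin(2 * np.pi * 3 * f1 * t)
          + 0.01 * np.sin(2 * np.pi * 5 * f1 * t))

spec = np.abs(np.fft.rfft(i_grid)) / len(i_grid) * 2   # amplitude spectrum
freqs = np.fft.rfftfreq(len(i_grid), 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f1))]
harmonics = [spec[np.argmin(np.abs(freqs - n * f1))] for n in range(2, 40)]

thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fund
print(f"THD = {100 * thd:.2f} %")   # ~2.24 % for this synthetic signal
```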
Conclusion

A developed LMPWM regulating approach was suggested in this study as an active solution to the leakage current problem in transformerless grid-connected MLIs. In this paper, the converter circuit is tested using a grid-connected system built on the 9L MLI architecture, with the PDPWM control technique as the benchmark. The system performance under the proposed LMPWM control method confirms the effectiveness of the proposed method in minimizing the leakage current in the system. The suggested LMPWM control approach was successful in lowering the number of carrier waveforms from eight signals, as in the PDPWM technique, to four signals. Furthermore, it reduces the leakage current component that affects the grid current waveform to roughly a fourth of the value reported for the PDPWM. This leakage current component is compatible with the VDE-0126-1-1 specifications. Additionally, the controller of the suggested inverter circuit was successful in regulating the capacitor voltages and maintaining the balance of the voltages. The output voltage waveform is created consistently with the aid of this function, which also helps to reduce the size of the output filtering components. Additionally, using the regulating approach, the system can control the injected current such that it maintains a unity power factor and runs in phase with the grid voltage. This characteristic, which injects only an active power component into the grid, supports the system's stability. Utilizing the simulation results and an experimental prototype, the efficacy of the recommended LMPWM regulating approach was evaluated, and an efficiency above 96.5% at 1 kW was recorded. Therefore, it can be said that the suggested LMPWM control method surpasses the conventional PDPWM strategy in terms of output efficiency and enhances the performance of the nine-level MLI architecture at different power levels.

Fig. 2. General diagram and mathematical simplification for the grid-tie converter circuit based on a PV unit: (a) single-phase grid-tie PV converter, (b) model using voltage sources, (c) model showing common-mode and differential-mode voltages, (d) equivalent simple model.

Fig. 4. Output results for the proposed MLI circuit under the PDPWM controlling algorithm: (a) leakage current component, (b) RMS of leakage current, (c) capacitors voltages, (d) the voltage across the negative terminal of the DC source and AC ground terminal V Ng.

Fig. 6. Output results for the proposed MLI circuit under the LMPWM controlling algorithm: (a) leakage current component, (b) RMS of leakage current, (c) capacitors voltages, (d) the voltage across the negative terminal of the DC source and AC ground terminal V Ng.

Fig. 7. A photograph of the experimental system.

Fig. 13. Efficiency of the system under both of the controlling methods.
2024-06-02T15:04:51.891Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "f984178b87323976d6120f577c5005e3837d4490", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844024082458/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e160df2ab30049f8a86e159cc4f512ccb330ac8", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
251493475
pes2o/s2orc
v3-fos-license
Temporal-spatial risk assessment of COVID-19 under the influence of urban spatial environmental parameters: The case of Shenyang city

Respiratory infection is the main route for the transmission of coronavirus pneumonia, and studies have shown that the urban spatial environment significantly influences the risk of infection. Based on the Wells-Riley model of respiratory infection probability, this study determined the human respiration-related parameters and the effective range of influence; extracted urban morphological parameters; assessed the ventilation effects of different spatial environments; and, combined with population flow monitoring data, constructed a method for assessing the risk of Covid-19 respiratory infection in urban-scale grid cells. In an empirical study in Shenyang city, a severe cold region, urban morphological parameters, population size, background wind speed, and individual behavior patterns were used to calculate the distribution characteristics of temporal and spatial concomitant risks in urban area grids under different scenarios. The results showed that the correlation between the risk of respiratory infection in urban public spaces and the above variables was significant. Among the variables, exposure time had the greatest influence on the probability of respiratory infection, while changes in interpersonal spacing beyond 1 m had only a minor influence on the risk of infection. Among the urban morphological parameters, building height had the highest correlation with the risk of infection, while building density had the lowest correlation. The actual point distribution of the epidemic in Shenyang from March to April 2022 was used to verify the evaluation results. The overlap rate between medium-or-higher risk areas and actual cases was 78.55%. Planning strategies for epidemic prevention and control were proposed for the spatial differentiation characteristics of the different risk elements. The research results can accurately classify the risk level of urban space and provide a scientific basis for planning responses for epidemic prevention and control and for the safety of public activities.

Introduction

The COVID-19 pandemic has had an unprecedented impact on human health and public safety. Since the virus is widely spread around the world, under a normalized epidemic situation, effective prevention strategies for urban public spaces are an important factor in maintaining urban safety and promoting economic recovery. As outdoor spaces for the daily life and social activities of urban citizens (Wu and Li 2011), urban public spaces are the central places for human contact and communication and also serve as an essential source of urban vitality (Jacobs 1964; Tibbalds 1992). However, with their highly populated complex environments, urban public spaces are critical for respiratory infectious disease prevention and control. Carrying out a multidimensional dynamic evaluation of the risk of COVID-19 respiratory infection in urban space and accurately classifying the risk level of urban spaces are the premises for scientific prevention and critical control of the epidemic. Nowadays, many countries have established information platforms on the dynamics of the Covid-19 epidemic and carry out risk-area ranking. Platform information is mainly used to monitor case occurrence locations and patient activity trajectories (Kamel Boulos and Geraghty 2020; Zachreson et al. 2021), intercity population movement (Peixoto et al.
2020; Jiang et al. 2021), health care resource allocation (Kang et al. 2020; Sha et al. 2020), etc. For the classification of risk areas, urban administrations are more likely to classify the risk levels of different administrative areas based on epidemiological surveys, the number of existing infection cases, and the clearance interval. In contrast, some research work introduces more influencing factors into the classification of urban risk zones, and the spatial scale of the classification is more detailed. Taking Eswatini in Africa as an example, Dlamini et al. (2020) formulated a comprehensive COVID-19 risk evaluation map with three risk levels at the national scale using a multivariate clustering method, on the basis of the annual average traffic volume, case report positions, and the number of hospital wards. They explored the possibility of predicting the spatial risks of the epidemic given the shortage of statistical data on cases and the limited medical and healthcare capacity. Azevedo et al. (2020) formulated a national-scale risk evaluation map of urban spaces based on a geostatistical model of block sequential simulation, according to the confirmed cases reported every day in Portugal. They reduced the infection risk evaluation tolerance resulting from differences in urban population scale, which the risk classification of a traditional choropleth map does not consider. Wang et al. (2020), based on the epidemic cases and POI data in Shaanxi Province, China, used mathematical statistics, spatial analysis and other methods to identify risk hotspots at the downtown scale, making geographic profiles of the street risk indexes in the urban area of Xi'an city through overlaying density grids of the epidemic communities, crowd gathering points and the designated medical institutions. These studies on the spatial distribution of Covid-19 epidemic risk make up for the deficiencies of traditional epidemic risk zone delineation in terms of spatial accuracy. Generally, they show the following development trends. (1) Introducing information on the urban environment and human flow characteristics into risk assessment and improving the spatial delineation accuracy, which is applicable to control during severe epidemic outbreaks and can guide citizens' behavior during the epidemic normalization period. (2) Evaluation results that combine immediacy and dynamism, with the capability of extrapolating and predicting epidemic risk under various scenarios. In addition, some countries also specify the delineation of spatio-temporal concomitant risk of the new coronavirus. In China, for example, the conditions for spatio-temporal concomitant risk are co-presence for 10 minutes, or an accumulated stay of more than 30 hours by either party, within an 800 m × 800 m grid. The latter condition applies, in terms of exposure time, to indoor spaces such as homes and offices, while the former is more applicable to the spatio-temporal intersections arising from the flow of people in public open spaces. This paper focuses on the former, i.e., the assessment of respiratory infection risk in urban public spaces due to the spatio-temporal intersections arising from the flow of people.
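The 10-minute co-presence rule quoted above can be expressed as a simple check over minute-sampled position tracks; the grid indexing and track records in the sketch below are hypothetical.

```python
# Sketch of the spatio-temporal concomitance rule: two tracks are flagged
# when they share the same 800 m x 800 m grid cell for at least 10 minutes.
# Track records and helper names are hypothetical.

GRID_M = 800.0

def cell(x_m: float, y_m: float) -> tuple[int, int]:
    return (int(x_m // GRID_M), int(y_m // GRID_M))

def co_presence_minutes(track_a, track_b) -> int:
    """Tracks are lists of (minute_index, x_m, y_m) sampled once per minute."""
    pos_a = {t: cell(x, y) for t, x, y in track_a}
    pos_b = {t: cell(x, y) for t, x, y in track_b}
    return sum(1 for t in pos_a if t in pos_b and pos_a[t] == pos_b[t])

a = [(m, 100.0, 120.0) for m in range(30)]            # stays in cell (0, 0)
b = [(m, 700.0 + 40 * m, 300.0) for m in range(30)]   # drifts out of the cell
print(co_presence_minutes(a, b) >= 10)                # concomitance flag
```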
This method also serves as an in-depth development of the current approach to classifying spatio-temporal concomitant risk in epidemic prevention and control. A WHO (2020) report has shown that COVID-19 follows the transmission modes and infection mechanisms of airborne infectious diseases, and respiratory infection probability models are commonly used to evaluate the risk of airborne diseases such as COVID-19. The Wells-Riley (W-R) model was proposed by Wells (1955), who introduced the concept of the infection "quantum", and was developed into its final form by Riley et al. (1978). The model was subsequently widely used to evaluate infection risks in crowded spaces such as hospital wards, public transport, and prisons (Noakes et al. 2006; Zhu et al. 2012; Urrego et al. 2015). It was initially used under the premise of a uniform distribution of virus concentration in space and could not reflect the spatial heterogeneity of infection risk arising from differences in the flow field between environments (Furuya 2007; Chen and Liao 2008). As studies progressed, some scholars shifted their perspective from single-space environments to complex spatial environments influenced by multiple factors (Qian et al. 2009). Noakes and Sleigh (2008) combined the Wells-Riley equation with a mixed ventilation equation to simulate the infection risks of respiratory disease for 18 patients in three wards connected by a corridor, finding that the number of newly infected patients in different zones follows different growth curves over time and that proximity to the source of infection indicates a higher risk level. Building on the W-R model, these studies explored the spatial heterogeneity of respiratory infection risk under environmental factors such as flow field differences, and they provide the theoretical basis and methods for our study. Through a literature review, this study identified the important parameters of human respiration and the effective scope of influence in the W-R model, evaluated respiratory infection risk at the grid-cell level using local environmental monitoring data, extended the method to the urban scale, and sought spatial pathways and planning response strategies for epidemic prevention by accurately classifying infection risk levels in urban public spaces.

Modeling

This study proposes a framework for infection risk assessment of infectious respiratory diseases in urban public spaces (Figure 1). We divided the study area into 100 m × 100 m grids and achieved a quantitative evaluation of respiratory infection risk in urban public space by coupling the results of human flow and local wind environment assessment within the grids.

Module 1: W-R formula extension based on the scope of individual respiratory influence

The classical Wells-Riley model for risk assessment of respiratory diseases is calculated as follows:

P = C/S = 1 − exp(−I·q·p·t/Q)    (1)

where P is the infection probability, C is the number of newly infected persons, S is the total number of susceptible persons, and I is the number of infected persons.
In this study, the research zone is divided into 100 m × 100 m grids. The initial value of the number I of infected persons in a grid cell is set to 1; the mean number of persons in the grid cell is then computed for different time segments, and its logarithm is used to weight the initial value, giving the number of potentially infected persons in the grid. q is the quanta production rate of an infected person; following currently available research results (Buonanno et al. 2020; Dai and Zhao 2020; Zhang et al. 2020), it is taken as 122 quanta/h. p is the individual respiratory ventilation rate (m³/h); with reference to parameters for related respiratory tract infectious diseases (Qian et al. 2012), it is taken as 0.6 m³/h. t is the exposure time (h) and is treated as a variable across multiple scenarios. Q is the ventilation rate of the local space (m³/h); the basis for its determination is discussed below. The W-R model presupposes a particular quanta value and assumes a uniform distribution of virus concentration within the space under study, so the spatial position of the patient acting as the source of infection need not be considered. However, recent studies have found that air currents in the actual urban environment lead to a non-uniform distribution of viral concentration: the spatial location of the infectious source and the dilution of exhaled air are closely related to the infection risk in the surrounding area. The range of human respiratory influence and the associated risk of virus infection have therefore become active research topics in recent years (Gralton et al. 2011; Villafruela et al. 2013; Xu et al. 2017). As shown in Table 1, this study reviewed recent data on the scope of respiratory influence, which serve as the basis for determining Q, the local ventilation rate in a specific environment, and for improving the original W-R model. The studies summarized in Table 1 show that as the distance between people increases, the effective respiratory cross-sectional area increases accordingly, and a definite relationship holds as the virus concentration gradually decreases with increasing distance between the test point and the infected source. Researchers have also found that the wind environment can strongly affect the trajectory of the droplets exhaled by an infected person, which in turn affects the virus concentration. Therefore, two influencing factors should be considered in determining Q (Figure 2): the spatial scale of the infected person's respiratory influence and the instantaneous ventilation under the influence of the vector wind speed (U_i). The W-R equation is extended as follows:

P = 1 − exp(−I·q·p·t·k / (U_i·S))    (2)

where U_i is the local wind speed in different environments, determined by calculation in Module 2; k is the virus filtration factor for persons wearing masks (the values taken in this paper are 0.5 and 0.95); and S is the cross-sectional area of effective influence of the infected person's respiration, expressed with reference to Table 1 as H × L, where the height H is a function of the distance between persons D, i.e., H = f(D).

Table 1. Determination basis for the scope of effective influence of the respiration of infected persons in urban public space (reference; research content and method; main parameter values for the scope of influence of human respiration):
- Xie et al. 2007: Studied the evaporation and movement of droplets expelled during respiratory activity through physical model calculation, exploring the basic transmission mechanisms of infectious diseases. The droplet jet trajectory is related to the initial jet velocity and the mouth-opening size. At a horizontal distance of 1 m, the vertical influencing distance is about 0.5 m; at a horizontal distance of 2 m, it is about 1 m. The horizontal influencing distance of droplets generated by sneezing (50 m/s) is greater than 6 m, that of coughing (10 m/s) is greater than 2 m, and that of breathing (1 m/s) is within 1 m.
- Liu et al. 2017: Studied the evaporation and spreading of droplets of different diameters generated by coughing under different temperatures, humidities, and breathing modes through physical model calculation and droplet evaporation experiments. When RH = 0%, the horizontal influencing distance of 20 μm droplets is greater than 4 m, that of 60 μm droplets is about 2 m, and that of 100 μm droplets is 1 m. At a horizontal distance of 0.8 m, the vertical scope of respiratory influence is about 0.3 m; at 1.2 m it is about 0.4 m; and at 1.6 m it is about 0.5 m.
- Zhang et al. 2019: Studied the influence of different ventilation systems on the indoor spatial distribution of droplets generated by coughing and breathing through an LES CFD model coupled with the Lagrangian method and human body simulation. When the exhalation airflow velocity reaches its maximum, the scope of influence of the respiratory airflow is 0.12-0.14 m, and that of the coughing airflow is 0.22-0.24 m.
- Xu et al. 2020: Studied the influence of personalized ventilation on the dilution of exhaled droplet concentration through human body model simulation and dose-response model calculation, evaluating the effect of personalized ventilation in preventing short-range airborne transmission of infectious diseases. In the experiment, the distance between persons was set at 0.86 m, and a face-to-face condition at a relative distance of less than 1 m was considered critical for short-range airborne transmission of infectious droplets.
- Shafaghi et al. 2020: Studied the distribution and spreading of droplets generated by human breathing, coughing, and sneezing through CFD simulation based on the finite volume method and numerical model calculation, giving safe horizontal distances for indoor populations with respect to COVID-19 prevention. The horizontal influencing distance of droplets from breathing is less than 1 m, that of droplets from sneezing is about 3 m, and that of droplets from coughing is 6-7 m. The vertical influencing distance of droplets generated by sneezing is about 0.3 m at a horizontal distance of 1 m and about 0.6 m at 2 m.
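The exponential form of the model lends itself to a direct numerical check. The short sketch below implements the classical Wells-Riley probability and the extended variant described in this module, using the parameter values quoted in the text (q = 122 quanta/h, p = 0.6 m³/h, mask factor k). The unit conversion from U_i·S in m³/s to m³/h and the placement of k as a simple multiplier in the exponent are our assumptions, not details confirmed by the paper.

```python
import math

def wells_riley_p(I: float, q: float, p: float, t: float, Q: float) -> float:
    """Classical Wells-Riley infection probability.

    I: number of infectors; q: quanta generation rate (quanta/h);
    p: individual breathing rate (m^3/h); t: exposure time (h);
    Q: ventilation rate of the space (m^3/h).
    """
    return 1.0 - math.exp(-I * q * p * t / Q)

def extended_p(I, t, U_i, H, L, q=122.0, p=0.6, k=1.0):
    """Extended form sketched in Module 1: the local ventilation rate Q is
    approximated by the instantaneous airflow U_i * S through the effective
    respiratory cross-section S = H * L, attenuated by a mask factor k
    (the exact placement of k is an assumption)."""
    S = H * L                    # effective cross-section (m^2)
    Q = U_i * S * 3600.0         # m/s * m^2 -> m^3/h (assumed conversion)
    return wells_riley_p(I, q, p * k, t, Q)

# Example: one infector, 0.5 h exposure, 1 m/s local wind,
# cross-section 0.5 m x 1.0 m, surgical mask (k = 0.5).
print(f"P = {extended_p(I=1, t=0.5, U_i=1.0, H=0.5, L=1.0, k=0.5):.4f}")
```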
Module 2: ventilation environment and local wind speed assessment of the grid cell

This study quantifies the local surface ventilation environment of each grid by extracting urban morphology parameters within the grid cell, and then infers the local wind speed by simulation from the urban background wind speed. The ventilation environment of the grid cell is estimated using the aerodynamic roughness length (RL) of the underlying surface and the sky view factor (SVF) (Liu et al. 2019). Roughness length is a crucial parameter influencing the local wind environment, and research methods include meteorological observation (Grimmond 1998; Gualtieri and Secci 2011) and model calculation (Borak et al. 2005; Chen et al. 2010). This study uses the completed urban buildings as the roughness elements (Xie et al. 2007; Liu et al. 2017). According to the urban morphology model proposed by Grimmond and Oke (1999), RL is estimated from parameters such as building density and building height. SVF is estimated according to the geometric arrangement presented by Oke (1987) using a high-resolution digital building elevation model (Zhang et al. 2015). The local wind speed can then be calculated from RL using the logarithmic-law formula for the urban background wind speed profile (Bañuelos-Ruedas et al. 2010; Mei et al. 2018). The related procedures are shown in Figure 3. Following the literature (Liu et al. 2019), the local wind speed within each grid is derived from the aerodynamic roughness, and the accuracy is verified by a CFD method: a 1 km² portion of the study area was selected, and the wind speed simulation results of the CFD software PHOENICS were compared with the calculation results of this paper. As seen in Figure 4, the two agree well, which verifies the accuracy of the average wind speed calculated within the grids of this paper.

Module 3: calculation of the average number of contacts in a grid cell

In this paper, the initial value of the number of infected persons in a grid cell is set to 1, the instantaneous number of persons in the grid is N, and the average number of contacts I of infected persons per unit time is taken as the number of potentially infected persons. The average number of contacts I is defined as the average number of other people encountered by each infected person per unit time. The calculation of I refers to the formulas for the mean free path and mean collision frequency of molecules. The mean collision frequency of molecules in the two-dimensional case (the average number of collisions of each molecule with other molecules per unit time) is obtained as

Z̄ = √2·n·d·v̄    (11)

where n is the total number of people in the 500 m grid, d is the distance between two people, taken as 1 m, and v̄ is the average movement speed of people, taken as 1-2 m/s.
Considering the influence of road network density on pedestrian flow, the road network density coefficient K is extracted from remote sensing data (Gong and Mo 2021), and the average number of contacts I is obtained by analogy as

I = √2·n·d·v̄·K    (12)

Notes on Equations (4)-(10): Zd/Zh is the normalized zero-plane displacement height, Z0/Zh is the normalized roughness length, Uh is the wind speed, u* is the frictional speed, λF is the frontal area density of urban buildings per unit surface area, A(θ)proj(Δz) represents the projected frontal area of the building at given height increments (Δz) and wind direction (θ), and AT indicates the standard unit grid beneath the building footprint (Wong 2010). In Equation (8), α is the azimuth angle and β is the maximum building height angle in the sector of the corresponding azimuth within the study radius; n = 360/α, where n shall not be smaller than 36, that is, α shall be no more than 10°, and the radius R shall not be smaller than 20 (Wang et al. 2018). In Equation (9), VPC is the ventilation potential coefficient (m), Z0 is the aerodynamic roughness length, and SVF is the sky view factor. In Equation (10), U(z) is the average wind speed at height z above the ground; in this paper z is taken as the pedestrian walking height of 1.5 m, u* is the frictional speed, K is the von Karman constant, generally approximately 0.4, and Z0 is the aerodynamic roughness length.

Module 4: evaluation of the respiratory infection intensity of the population on different types of land

The distribution of the urban population shows significant spatio-temporal differences. To reflect the potential respiratory infection intensity of people on different types of urban land, and thus to propose control strategies for the various urban spaces, our research used the following evaluation model, which combines the infection probabilities calculated for the grid cells above with the spatial distribution data of the urban population:

E_L = (Σ from i=1 to n of P_i·N_i) / A_L    (13)

where E_L is the respiratory infection risk intensity of the population in a city parcel (persons/hm²), P_i is the respiratory infection probability in grid i, N_i is the population in grid i, n is the number of grids in the parcel, A_L is the parcel area, and L indexes the different parcels of the city.
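To make the chain from morphology to risk concrete, the sketch below strings together the logarithmic wind profile of Equation (10), the contact-rate analogy of Module 3, and the parcel intensity E_L of Module 4. It is a minimal illustration under the reconstructions above, not the authors' code; all input values are hypothetical.

```python
import math

KARMAN = 0.4  # von Karman constant, as in Equation (10)

def local_wind_speed(u_star: float, z0: float, z: float = 1.5) -> float:
    """Logarithmic wind profile U(z) = (u*/K) * ln(z/Z0) at pedestrian height.
    Only valid for z > Z0; very rough surfaces require different handling."""
    return (u_star / KARMAN) * math.log(z / z0)

def avg_contacts(n: float, v: float, d: float = 1.0, k_road: float = 1.0) -> float:
    """Average contacts per infector per unit time: sqrt(2)*n*d*v*K_road,
    the 2-D collision-frequency analogy (reconstructed form, an assumption)."""
    return math.sqrt(2.0) * n * d * v * k_road

def parcel_risk_intensity(p_grid, n_grid, area_hm2: float) -> float:
    """E_L = sum_i(P_i * N_i) / A_L in persons/hm^2 over the grids of a parcel."""
    return sum(p * n for p, n in zip(p_grid, n_grid)) / area_hm2

u = local_wind_speed(u_star=0.3, z0=0.5)            # ~0.82 m/s at z = 1.5 m
i_contacts = avg_contacts(n=40, v=1.5, k_road=1.2)  # hypothetical grid values
e_l = parcel_risk_intensity([0.01, 0.02], [120, 80], area_hm2=2.0)
print(f"U(1.5 m)={u:.2f} m/s, I={i_contacts:.1f}, E_L={e_l:.2f} persons/hm^2")
```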
Study area

This paper takes Shenyang city, Liaoning Province, China as the research object. Shenyang is a hub city in northeast China with a dense, highly mobile population and the pronounced urban environmental characteristics of a severe cold region. An area of 45,484.7 hm² within the 4th Ring Road of Shenyang was selected as the experimental area (Figure 5). In addition, we extracted data from a new round of the COVID-19 epidemic in this area between March and May 2022 and compared them with the simulated results to verify the evaluation outcomes.

Data source and processing

The study is based on urban satellite remote sensing images, urban 3D spatial information, meteorological data, and population flow monitoring data. Urban morphology data include Landsat8-OLI imagery at 30 m resolution from 2020, Quickbird image data at 2 m resolution from 2020 within the 4th Ring Road of Shenyang, the Shenyang City General Urban Plan (2011-2020), and 1:2000 mapping of the current state of buildings and roads in Shenyang. The meteorological data comprise Shenyang city meteorological records for 2016-2019. The population distribution and flow monitoring data include the resident population within the Third Ring Road of Shenyang from the results of the sixth national census, cell phone base station data for multiple periods on April 23, 2020, and extracted spatial heat distribution data for the outdoor population.

Setting of risk variables based on multiple scenarios

This study compares the distribution characteristics of infection risk under different variables by considering multiple scenarios defined by the gridded population, the distance between persons, the exposure time, and the average background wind speed (Table 2; note to Table 2: w is the weight coefficient applied in estimating the number of infected persons). These scenarios are used to discuss the influence of different urban environmental elements on the distribution of respiratory infection risk. Based on the infection probabilities calculated for the multiple scenarios, SPSS was used to analyze the correlations between the variables and the infection probability. The population flow monitoring data were collected on both days off and working days. The spatial distribution data for the outdoor population of Shenyang city during multiple periods from March 15 to May 25, 2020 were chosen as the basis for assigning the potentially infected population coefficient within each grid.

Results and analysis

4.1 Analysis of the spatial distribution change of the population

Figure 6 shows the statistical results for the population heat distribution during a typical working day (April 23). Here, the point locations of the outdoor population were extracted for kernel density analysis (Wu and Ye 2016; Lan et al. 2018). Cluster analysis classifies the results into six heat levels from high to low. The high-heat and sub-heat areas, corresponding to the red and orange zones, represent the most densely populated zones and indicate the main areas of activity in public spaces (Li et al. 2019). The temporal trend shows that on weekdays the red high-heat area is largest at 10:00, decreases somewhat by 15:00, and is smallest at 18:00. Spatially, dense human activity is concentrated in the commercial and transportation hub areas within the First Ring Road, and population density decreases clearly from the First Ring Road to the Second and Third Ring Roads.
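The kernel density step just described can be sketched as follows. The point cloud here is synthetic, standing in for the decoded cell-phone base-station records, and the six heat levels are produced by simple quantile cuts rather than the paper's cluster analysis.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical outdoor-population points (x, y) in metres for one time slice.
rng = np.random.default_rng(0)
points = rng.normal(loc=[2000.0, 2000.0], scale=500.0, size=(5000, 2))

kde = gaussian_kde(points.T)  # kernel density over the 2-D point cloud

# Evaluate density on a 100 m grid, mirroring the study's grid cells.
xs = np.arange(0.0, 4000.0, 100.0)
ys = np.arange(0.0, 4000.0, 100.0)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Classify into six heat levels by quantile as a stand-in for cluster analysis.
levels = np.digitize(density, np.quantile(density, [1/6, 2/6, 3/6, 4/6, 5/6]))
print(levels.shape, levels.max())  # 40 x 40 grid, levels 0..5
```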
Figure 7 shows the assessment results for surface ventilation capacity within the Fourth Ring Road of Shenyang city. The basic morphology parameters of building height (BH), building density (BD), and frontal area density (FAD) were extracted, and calculations were performed with Eqs. (4)-(7), following the procedure in Figure 3, to obtain the spatial distribution of the aerodynamic roughness length (RL). As seen in Figures 7(a)-(d), the roughness length closely follows the distribution of building height, and the old downtown area within the First Ring Road is essentially covered by a high-value zone (RL > 2). BH and RL reach their highest levels south of the Hun River outside the Second Ring Road, where densely distributed high-rise buildings create a large ventilation barrier region. BD and FAD have similar distributions: aside from the high-rise zone within the First Ring Road, a sizeable high-density zone also exists on the urban-suburban residential and industrial land at the edge of the Third Ring Road. Figure 7(e) shows the spatial distribution of the sky view factor (SVF), which gradually increases from the city center to the surrounding areas and reaches its minimum south of the Hun River, where SVF remains in the interval 0-0.42: although building density there is not high, building height reaches the maximum level within the Third Ring Road, so the sky is strongly sheltered. Figure 7(f) shows the ventilation potential coefficient: a sizeable poorly ventilated high-value area (VPC > 3.8 m) covers nearly half of the area within the Third Ring Road, extending radially from the old downtown within the First Ring Road, while the poorest ventilation occurs in the high-roughness, low-SVF area south of the Hun River. The above analysis shows that urban surface ventilation capacity is significantly affected by spatial morphology, and local wind speeds differ considerably between environments. Under meteorological conditions with low urban background wind speed, the ventilation of some parcels will be poor, increasing the probability of respiratory infection risk.

Influence of multiple variables on respiratory infection risk in urban spaces

Calculations with Eqs. (1)-(13) yield the distribution of infection probability risk levels on a typical working day (Figure 8). Cluster analysis classified the risk areas into a high-risk area (3.5%-5%), medium to high-risk area (2%-3.5%), medium-risk area (0.75%-2%), low-risk area (0.3%-0.75%), minimal-risk area (0%-0.3%), and risk-free area (0%). In contrast to the distribution of the outdoor population, the areas of high infection probability concentrate mainly on both sides of the north-south axis of the city (Youth Street) and south of the Hun River outside the Second Ring Road; other high-risk points are mostly distributed within the First Ring Road and sporadically between the First and Second Ring Roads and at the edge of the Second Ring Road. Considering changes in human body spacing, exposure time, number of potential infections, and background mean wind speed, the distribution of infection probability in urban public space under the multiple scenarios set in Section 3.3 is shown in Figure 9. The trends across risk areas are consistent between scenarios, but the areas covered by each risk level differ widely. To further explore the degree of influence of each variable on the probability of infection, correlations between the population number N, the exposure time t, the human body spacing D, and the background mean wind speed U and the change in the area of each risk zone were analyzed using SPSS. The results are shown in Table 3; all four variables passed significance verification against the area of each risk zone. Population size and exposure time were positively correlated with infection probability, while human body spacing and background mean wind speed were negatively correlated with it.
The mean absolute values of the correlation coefficients of population size, exposure time, human body spacing, and background mean wind speed across the risk zones were 0.618, 0.612, 0.632, and 0.752, respectively, ranking the strength of association with infection risk as: background mean wind speed > human body spacing > number of potential infections > exposure time. Univariate linear regressions of the four variables against the change in the area of the high-risk zone are shown in Figure 10; the R² values of N, t, D, and U against the high-risk area were 0.900, 0.823, 0.407, and 0.540, respectively, indicating that population number and exposure time have the strongest explanatory power for the results. The point distributions in the figure show that population number and exposure time have a significant positive linear relationship with the change in high-risk area, with their influence continuing to grow as the variables increase, whereas human body spacing and background wind speed show saturation: beyond a certain value of the variable, the results become stable and essentially unchanged. Therefore, in the prevention and control of respiratory infectious disease, reasonable control of crowd density and reduction of exposure time in public space are the most effective measures to reduce the risk of infection. The scatter diagram also shows that the high-risk area increases significantly when human body spacing falls below 1 m, so reasonable control of safe distances in public activities is likewise a scientific and effective control measure. According to the data analysis, the background wind speed has a moderate correlation with the change in risk area. Analysis of the correlation between the urban morphology parameters (building height (BH), building density (BD), roughness length (RL), and sky view factor (SVF)) and the change in risk levels gives BH (0.837) > RL (0.721) > SVF (−0.286) > BD (0.018). Figure 11 is a box plot of the distribution of the urban morphology parameters across the risk levels. After eliminating the influence of outliers, it can be clearly seen that the value intervals of BH and RL increase significantly as the risk level rises, with relatively low data fluctuation, whereas the intervals of BD and SVF show little regularity across the risk areas: the maxima, minima, and quartiles lie far from the medians, indicating strong dispersion. Therefore, improving the local ventilation environment of the city by optimizing urban spatial morphology also plays an important role in reducing infection risk probability, and adjusting building height and roughness length achieves the better optimization effect. We also conducted a correlation analysis for urban green space, water body area, and POI. The feedback of POI (population activity points of interest) on the results is self-explanatory: POI values are consistently higher in high-risk areas and lower in low-risk areas. We found that the median values of green space and water body area were lower in high-risk areas.
The median values of green space and water body area in low-risk areas were higher, suggesting that changes in green space and water bodies have some bearing on virus infection risk; however, the overall values showed no significant change, indicating that green space and water bodies do not directly affect infection risk. As for the trends in the figure, we believe that changes in green space and water bodies indirectly shape human behavior patterns, which in turn weakly affect the risk of virus infection. These results are consistent with our initial considerations in designing the model.

Validation of the results of infection risk zone classification

Of the 274 actual cases in the study area, 271 (98.91%) fell within the delineated risk levels: 58 cases (21.17%) in the high-risk zone of infection probability, 99 cases (36.13%) in the medium to high-risk zone, 59 cases (21.53%) in the medium-risk zone, 21 cases (7.67%) in the low-risk zone, 34 cases (12.41%) in the minimal-risk zone, and 3 cases (1.09%) in the risk-free zone (Figure 12). The overlap between areas of medium or higher risk and the actual cases was 78.55%, indicating that the infection probability risk zoning is reasonable and can effectively predict high-incidence areas so that preventive measures can be taken in advance.
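The headline overlap statistic can be re-derived from the reported counts, as in the snippet below. Summing the three top zones and dividing by all 274 cases gives 78.83%, marginally above the reported 78.55%; the small gap presumably reflects rounding or zone-boundary handling in the original calculation.

```python
# Sanity check of the overlap statistic from the reported case counts.
cases_by_zone = {
    "high": 58, "medium-high": 99, "medium": 59,
    "low": 21, "minimal": 34, "risk-free": 3,
}
total = sum(cases_by_zone.values())  # 274
medium_or_higher = sum(cases_by_zone[z] for z in ("high", "medium-high", "medium"))
print(f"{medium_or_higher}/{total} = {medium_or_higher / total:.2%}")
```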
Influence of urban morphology on the risk distribution of outbreak infection

The above evaluation results show that urban morphological parameters are important factors affecting the risk of respiratory infection in public spaces. In terms of building height, the average level inside the Second Ring Road is higher than outside it, and the average level along the south side of the Hun River is the highest in the city. Building density is not tied to the urban radial structure: high-density spaces exist in both old and new urban areas, and some local areas between the Second and Third Ring Roads are even denser than the old urban area. The population distribution decreases from the central area of the city toward the outer rings, and the various elements show apparent spatial heterogeneity. The spatial distribution of infection risk probability fits the urban morphological parameters significantly better than it fits the spatial population distribution, and "building high-density space" characterizes respiratory infection risk more consistently than "population high-density space". In addition, exposure time and body spacing have a more significant influence on infection risk; these two parameters depend on the activity patterns of the population in public spaces, which urban spatial morphology and environment also indirectly shape. Thus, the spatial distribution of infection risk is closely related to both urban morphology and human activity patterns: urban morphology potentially influences the degree of epidemic infection risk through its effect on the local microclimate and its guidance of population activity patterns.

Population respiratory infection risk intensity by site type

Calculations based on the planned land use of Shenyang city yield the distribution map of population respiratory infection risk intensity across different parcels (Figure 13), which reveals the correlations between land types and respiratory infection risk. The analysis of the area proportions of the land types in Figure 14(a) shows that residential land, which makes up a high proportion of the central downtown area, accounts for a large share of all risk levels. In addition, the core business districts of Shenyang are concentrated within the First Ring Road and see dense population activity, so land used for commercial and business facilities accounts for a very high percentage of the high-risk area. The potential infection risk intensity was further analyzed after normalizing the various land indexes. As seen in Figure 14(b), residential land is the category with the highest urban infection risk intensity, followed by commercial land and public service land, with these three categories together accounting for more than 80% of the land concerned. Residential land represents the area with the most concentrated urban population activity: old residential zones generally have high density, and residential building heights in new urban areas have grown markedly. Both conditions lead to poor local ventilation capacity and increase the risk of respiratory infection, so this land type also occupies a high proportion of the high-risk and medium to high-risk areas. Commercial space is itself a frequent site of urban population activity, and the core commercial areas of Shenyang are closely linked to the city's major transportation hubs. The above evaluation results align with the statistical distribution of epidemic cases in cities, so land for commercial and business facilities and residential land should be the key zones for urban epidemic prevention. The analysis also shows that administration and public services land, such as administrative offices and medical and educational facilities, is a critical control area in epidemic prevention and control owing to its regular human activity, while industrial land, with low spatial density and low building height, carries a low infection risk level.

Normalized strategy for epidemic prevention in public spaces in cities

(1) Urban spatial morphology should be optimized and ecological elements adequately allocated. The influence of urban spatial morphology parameters on respiratory infection risk shows that ventilation barrier areas easily form in high-density urban spaces, which hinders the dissipation of airborne pollutants and leaves the local environment poorly placed to cope with outbreaks.
Therefore, in planning and urban design, more importance should be attached to the impact of spatial morphology on the local microclimate, the water cycle, and biodiversity, organizing spatial form in a manner favorable to the optimization of ecological processes. Reasonable allocation of spatial elements such as green spaces and open spaces, together with the construction of ventilation corridors, not only meets the aesthetic requirements of urban design and helps citizens develop positive emotions, but also serves as an essential guarantee for the healthy development of a city, improving urban inclusiveness and the capacity to deal with emergencies during epidemics.

(2) The analysis of infection risk intensity across urban land types shows that residential communities and commercial and office spaces have relatively high population density and comprehensive development intensity, and generally high infection risk levels. This is consistent with worldwide statistics on infection settings. Therefore, strengthening real-time monitoring of high-risk spaces and the management of mass gatherings, limiting population density, and controlling the exposure intensity of the population are effective measures to reduce the risk of epidemic propagation. Taking communities, streets, and villages as the primary management and control units, and starting from the community for the precise implementation of prevention and control measures, is currently the key to normalized epidemic prevention in China (Duan et al. 2020; Liu 2020) and may also serve as a reference for cities in other countries.

(3) The scale of the control unit should be explored in relation to the distribution of medical resources. Currently, during periods of concentrated outbreaks in China's cities, most closure and control measures take the residential area as the spatial unit, and the limited range of activity during closure may adversely affect residents' psychological wellbeing. Provided community functions and management capabilities are improved, appropriately expanding the control unit can accommodate some of residents' work, shopping, and exercise activities within the community, significantly alleviating the anxiety caused by closure. Figure 15 shows a division of control units based on the 15-minute living circle in Shenyang: the unit colors represent each control unit's comprehensive infection risk index, and the points show the distribution of medical resources at different levels. Among the units with a high comprehensive infection risk index, only the central city is well supplied with medical resources, whereas in the units with more high-risk areas in the north and south, the level and quantity of medical resources are poorly matched to the risk.

(4) Scientific guidance of urban activities and behavioral modes is essential. The multi-scenario results show that reasonable control of physical distance and exposure time can reduce regional infection probability, so guiding individual activity and behavioral characteristics is necessary to curb the spread of the epidemic.
Officials can establish safety thresholds for the relevant parameters in different places as risk levels form in urban areas (Mehta 2020; Sun and Zhai 2020), put forward necessary prevention guidelines, monitor people's health and safety conditions, and provide safeguards for the communication and interaction of susceptible groups in public spaces, such as the elderly, children, and pregnant women.

Disadvantages and future development

Since there is no established method for calculating ventilation in outdoor public space, this paper refers to existing results in the literature and proposes a method for estimating, under idealized conditions, the instantaneous airflow within the respiratory influence of infected patients in order to estimate the probability of respiratory infection in a grid cell. Secondly, in estimating the urban local ventilation environment, we considered only the influence of building spatial morphology on ventilation; elements such as green space and water bodies, which affect air quality, were not included in the model. Future studies need to examine these factors in depth to improve the accuracy of this evaluation model.

Conclusion

The novel coronavirus epidemic has forced a re-examination of the impact of the urban spatial environment on public health, and there is an urgent need for an urban planning system that can safeguard public health and safety in cities. This paper constructs an urban-scale risk assessment method for respiratory infection in public spaces and uses spatial GIS to produce visual maps that provide dynamic risk warnings for high-risk areas, so that city managers can formulate targeted risk interventions in response to the spatial differentiation of risk elements and manage the safety of public activities in a timely manner during the normalization phase of the epidemic. This shifts the response to the COVID-19 epidemic from a "reactive emergency mode" to a "proactive prevention and control mode". On the other hand, creating resilient cities that can resist risks and recover quickly has become a consensus in urban planning and design since the outbreak. What kind of urban structure and spatial form is most conducive to constructing resilient cities? How should the configuration of public open space and ecological infrastructure be optimized to create a healthy local climate environment? These questions are still at a preliminary stage of exploration, and establishing a data platform that reveals the coupling relationship between the urban environment and the population can provide the scientific support needed to analyze and answer them. The precise evaluation of COVID-19 infection risk in urban space presented here is also this paper's exploratory contribution to the construction of urban public safety and resilient cities.
2022-08-12T05:15:47.851Z
2022-08-10T00:00:00.000
{ "year": 2022, "sha1": "0fd7005a4a7cd206f9a1b2a218d45076904670aa", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s12273-022-0918-8.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "0fd7005a4a7cd206f9a1b2a218d45076904670aa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
211098402
pes2o/s2orc
v3-fos-license
Influence of the structural design of rail fastenings on ensuring the stability of track gauge in operating conditions

As rolling stock speeds increase, the requirements for track stability parameters increase accordingly. One of these parameters is the track gauge, whose stability depends on many structural elements of the upper track structure and on environmental conditions. In this work, the authors consider the influence of one such structural element, the intermediate rail fastening, on the ability to maintain a stable track gauge. The most common intermediate rail fastenings on the Ukrainian Railways were compared: the KB-65 type, the KPP-1 type, and the KPP-5 type. To isolate the influence of each fastening design, the other gauge parameters were taken as equivalent for all three types. The data were obtained from a track recording car. A statistical analysis of the obtained data was conducted, enabling conclusions about the ability of the considered intermediate rail fastenings to ensure track gauge stability, and conclusions were drawn on the applicability of each of the fastenings considered.

Introduction

Increased rolling stock speeds and an increased level of force interaction in the wheel-rail system lead, on the one hand, to greater geometric deviations during operation of the track infrastructure, while on the other hand the requirements for the limiting deviations of the upper track structure become more stringent. These factors complicate track maintenance and increase the volume of work needed to keep the track serviceable. Correct selection of the design of the upper track structure, which could provide the maximum period of operation with minimal external intervention, therefore becomes of great importance. One of the main constantly monitored parameters is the track gauge: exceeding the permissible track gauge can cause the vehicle to derail, while narrowing the track below the permissible value causes the wheelset to jam and the wheel flange to slide over the rail. On Ukrainian railways, in accordance with [1], the track gauge in straight and curved sections on reinforced concrete sleepers with a radius of more than 300 m at speeds of 50-140 km/h is set at 1520 mm, with a tolerance of +8 mm for widening and −4 mm for narrowing. Increasing train speeds requires tightening the tolerances on deviations of the track gauge from the normative value [2]. The main structural elements affecting track gauge stability and longitudinal track stability are the intermediate fastenings [3]. Paper [4] shows that, in the course of operation, the clamps of KPP-type fastenings press against the rail less firmly, which can also affect track gauge stability and longitudinal stability. Therefore, this work investigates the influence of the rail fastening design on the stability of the track gauge parameters.

Methods and course of the study

Research can be carried out by mathematical simulation [5-9], by laboratory testing [10, 11], or by operational testing [9, 12, 13]. Since the rail fastening designs under investigation are in service on the railway, the authors analyzed data obtained during track operation. The studies were conducted on a straight track carrying an annual traffic load of 60 million gross tons with mixed freight-passenger traffic. The section uses three types of intermediate rail fastening: KB-65, KPP-1, and KPP-5. KB-type fastenings, the most common on track with reinforced concrete sleepers, have a baseplate clip-bolt (bolt-and-nut terminal) design, while KPP-type fastenings have a baseplate-free design with elastic rod clips. Control measurements taken by the track recording car over three years were accepted for consideration; since such control measurements are made once every 3 months, twelve measurements were obtained over the three-year period. The analysis was performed using standard methods of mathematical statistics, and the following parameters of the upper track structure were obtained: maximum width, minimum width, mode, median, root-mean-square deviation, and sample variance. The sample variance characterizes the deviation of a random variable from its mean value [14]:

D = Σ_i p_i·(x_i − x_m)²

where x_m is the centre of the random distribution, p_i is the probability of occurrence of a random variable, and x_i is the value of a discrete random variable. The standard deviation, which characterizes the scattering of a random variable about its mean value, is determined by the formula [1]:

σ = √D

The mode is the value x_i for which the probability p_i of occurrence is the maximum. The median is the value x_i for which the probability of occurrence of a smaller or a greater value of the random variable is the same.
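For equally likely measurements, the statistical treatment above reduces to the familiar descriptive statistics, as the following sketch with twelve hypothetical gauge readings illustrates; with p_i = 1/n for all i, the variance formula D = Σ p_i (x_i − x_m)² coincides with the population variance computed here.

```python
import statistics

# Twelve hypothetical gauge readings (mm), one per quarterly recording-car run.
gauge = [1521, 1520, 1522, 1519, 1521, 1520, 1523, 1521, 1520, 1522, 1521, 1520]

print("maximum width:", max(gauge))
print("minimum width:", min(gauge))
print("mode:", statistics.mode(gauge))
print("median:", statistics.median(gauge))
# With equal probabilities p_i = 1/n, D = sum(p_i * (x_i - x_m)^2) equals the
# population variance, and the root-mean-square deviation is its square root.
print("variance D:", statistics.pvariance(gauge))
print("RMS deviation:", statistics.pstdev(gauge))
```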
Research results

Figure 1 shows the change in the minimum track gauge width at the track section where the measurements were taken. As can be seen in the figure, the maximum narrowing of the track gauge is characteristic of the KPP-1 and KPP-5 fastenings, while in most cases the KB fastenings exceed the established minimum track gauge width. Nevertheless, all three types of fastening keep the track within tolerances. Figure 3 shows the change in the mode of the track gauge measurements at the section under consideration: the minimum amplitude of change in the mode is recorded for the KB-type fastening and the maximum for the KPP-5 type. Figure 4 shows the sample variance of the track gauge widths at the section: the minimum variance is recorded for the KB-type fastenings and the maximum for the KPP-1 type, while the highest average variance is characteristic of the KPP-5 fastenings. Figure 5 shows the change in the median track gauge width at the section during the observations. The median is largest for the KB-type fastenings, which at the same time show the minimum amplitude of change; the minimum median width is characteristic of the KPP-1 fastening, and the maximum amplitude of change in the median is characteristic of the KPP-5 type.

Conclusions

Considering the main statistics of track gauge change during operation, we note that all three types of fastening are capable of providing track gauge stability within tolerances. For the KPP-1 fastening there were isolated instances when the track gauge width slightly exceeded the deviation tolerances established by the instruction. It should also be noted that the KB-type fastenings demonstrated the highest track gauge stability by most of the statistical parameters.
In the authors' view, these results may be related to the fact that the track section where the research was conducted carries a rather high traffic load, and freight rolling stock exerts a higher dynamic effect on the track than passenger stock. In order to reach a definitive conclusion, additional research is needed on track sections with high-intensity passenger service and low-intensity freight transportation. The authors plan to carry out such studies in future work, which will allow more definite conclusions to be drawn.
2020-01-02T21:12:36.694Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "7062a490a3f11fed1c1640b6a39a68237c2d79a5", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/708/1/012001", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "24bb46a767d10b86d46f2108f66cf4e490770dd7", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
257266284
pes2o/s2orc
v3-fos-license
Quasi-experimental evaluation of national border closures on COVID-19 transmission

With over 200 pandemic threats emerging every year, the efficacy of closing national borders to control the transmission of disease in the first months of a pandemic remains a critically important question. Previous studies offer conflicting evidence for the potential effects of these closures on COVID-19 transmission, and no study has yet empirically evaluated the global impact of border closures using quasi-experimental methods and real-world data. We triangulate results from interrupted time-series analysis, meta-regression, coarsened exact matching, and an extensive series of robustness checks to evaluate the effect of 166 countries' national border closures on the global transmission of COVID-19. Total border closures banning non-essential travel from all countries and (to a lesser extent) targeted border closures banning travel from specific countries had some effect on temporarily slowing COVID-19 transmission in those countries that implemented them. In contrast to these country-level impacts, the global sum of targeted border closures implemented by February 5, 2020 was not sufficient to slow global COVID-19 transmission, but the sum of total border closures implemented by March 19, 2020 did achieve this effect. Country-level results were highly heterogeneous, with early implementation, and border closures targeted so broadly that they resemble total border closures, improving the likelihood of slowing the pandemic's spread. Governments that can make productive use of extra preparation time and cannot feasibly implement less restrictive alternatives might consider enacting border closures. However, given their moderate and uncertain impacts and their significant harms, border closures are unlikely to be the best policy response for most countries and should only be deployed in rare circumstances and with great caution. All countries would benefit from global mechanisms to coordinate national decisions on border closures during pandemics.

Border Closure Coding

For the border closure coding, we hand-coded matrices of border closures represented as dichotomous variables by date of implementation and country targeted for 179 countries, using data from the Oxford COVID-19 Government Response Tracker, with corrections, updates, and additions detailed in S1 Data 11. This matrix was then used to code categorical indicators of the daily travel restriction status between all country-pairs: 1) no border closure: the country has not implemented any closure of land, air, or sea borders in response to the COVID-19 pandemic; 2) targeted border closure: the country restricts non-essential entry of foreign nationals from one or more specified countries; 3) total border closure: the country restricts all non-essential entry of foreign nationals; or 4) reopening: the country had previously implemented a total border closure and has re-opened its borders to at least one country. Visa suspensions and closures of land borders were coded as de facto border closures and analyzed as targeted border closures in quantitative analyses.
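A minimal sketch of the pairwise status coding might look as follows. The function, its arguments, and the precedence among categories are our own illustrative assumptions, since the paper specifies only the four categories themselves.

```python
from enum import Enum

class Status(Enum):
    NO_CLOSURE = 1
    TARGETED = 2
    TOTAL = 3
    REOPENING = 4

def pair_status(bans: set, is_total: bool, had_total: bool, target: str) -> Status:
    """Daily travel-restriction status of one country toward one target country.

    bans: countries whose nationals are currently barred (targeted list);
    is_total: country currently bars all non-essential foreign entry;
    had_total: country previously implemented a total border closure.
    """
    if is_total:
        return Status.TOTAL
    if target in bans:
        return Status.TARGETED
    if had_total:
        return Status.REOPENING  # previously total, now open to this target
    return Status.NO_CLOSURE

# Example: a country with a targeted ban on "A" only, evaluated toward "B".
print(pair_status({"A"}, is_total=False, had_total=False, target="B").name)
```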
Eleven jurisdictions (Aruba, Barbados, Bermuda, Cabo Verde, Greenland, Guam, Hong Kong, Kosovo, Macao, Puerto Rico, and Solomon Islands) were excluded due to low availability of outcome or covariate data and to limit the analysis to countries. Two countries, North Korea and Turkmenistan, reported no cases and were not included in analyses. China was also exempted from primary quantitative analyses because, as the first country to report cases of COVID-19 and the initial epicenter of the pandemic, its national border closures could not have controlled domestic transmission in this study period. The final dataset contains border closure data for 179 countries and full outcome and intervention data for 166 countries.

Covariates

A full list of control variables hypothesized to be associated with the effectiveness of border closures is available in Table A. All covariates were specified a priori, obtained from publicly available databases, and selected to cover categories of country characteristics, economic factors, gender parity, health indicators, policy control indicators, and border closure details. Where data were not available for the current year, we used data from the most recent year available. Further information is made available in a metadata repository alongside our open-access dataset on Scholars Portal Dataverse, with download dates, updated data sources, and variable formats and descriptions (S1 Data).

Quantitative Methods

The proportion of the world's population targeted by border closures was calculated as the sum of the populations of targeted countries divided by the population of all countries included in the study; a measure excluding a country's own population was also calculated to account for the limited impact of border closures for countries with large populations. The proportion of global cases targeted by border closures was calculated as the sum of cumulative incidence for every country targeted up to the day being evaluated, divided by the global cumulative incidence. The decision to limit our study to the first 22 weeks of the COVID-19 pandemic was driven by considerations related to our quasi-experimental methods as well as global border closure trends. Our analysis plan (S3 Data) was developed once our team determined that new border closures had begun to stall in May 2020, as observed in Fig 2. Because interrupted time series analysis can be performed on a balanced dataset with a minimum of 12 data points 12, adding a lag time of five days to the final country-specific border closure on May 27, 2020 led us to the 22-week mark. All calculations were conducted in Stata release 15 and maps created using Tableau version 2020.2 unless otherwise stated 13,14.

Interrupted Time-Series Analysis

Country-level intervention points were evaluated for all targeted and total border closures that met the following criteria: 1) a minimum of seven days of data exists prior to and after the intervention point; 2) for multiple-intervention time series, a minimum of seven days has passed since the last intervention point; and 3) for multiple sequential targeted border closures, the second (or third) intervention represents an increase of at least 20% of the world's population being targeted by the new border closures. As robustness checks, Dickey-Fuller tests were run for every time series, and none exceeded the 5% critical value of the t-distribution for a unit root (S2 Data). Serial autocorrelation was assessed using Cumby-Huizinga general tests for autocorrelation in time-series analysis, and corrected results were calculated accordingly (S4 Data) 15. We additionally considered alternate models aligned by the date of total border closures for both the global average and the population-weighted global average of Rt.
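A standard segmented-regression form of the interrupted time-series model, applied to a synthetic Rt series, is sketched below. The specification (baseline trend, level change, slope change) and the HAC standard errors are conventional choices standing in for the paper's Stata implementation, not a reproduction of it.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical daily Rt series with an intervention (border closure) at day 30.
rng = np.random.default_rng(1)
days = np.arange(60)
intervention = (days >= 30).astype(float)
rt = (2.0 - 0.005 * days - 0.4 * intervention
      - 0.02 * intervention * (days - 30) + rng.normal(0, 0.05, size=60))

# Segmented regression: pre-existing trend, immediate level change,
# and post-intervention slope change.
X = sm.add_constant(np.column_stack([
    days,                           # baseline trend
    intervention,                   # level change at the intervention
    intervention * (days - 30),     # slope change after the intervention
]))
# HAC (Newey-West) errors as a stand-in for the autocorrelation corrections.
fit = sm.OLS(rt, X).fit(cov_type="HAC", cov_kwds={"maxlags": 7})
print(fit.params)  # order: constant, trend, level change, slope change
```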
Stratified analyses of high-income and low- and middle-income countries (as classified by the World Bank) were conducted using pooled data, with new intervention points calculated for each group exceeding the 20% global population threshold for both targeted and total restrictions, and the same analytical methods as the global analyses 16. Falsification tests were run to evaluate whether results were being driven by chance or by analytical choices: a false intervention date was created at the midpoint between the first date on which 90% of the world's population was living in countries implementing border closures and the final date in the study period, and, for each country that implemented only a total border closure, country-specific false intervention points were created at the midpoint between the date of total border closure and the final date in the study. In consideration of the visually apparent peak in global Rt occurring between February 18-21, 2020, whose extreme values could skew findings, a robustness check was run dropping the four values that exceed an Rt of 4.0. Finally, a robustness check restricting the length of time after the global total border closure intervention to match the period after the global targeted border closure intervention (45 days) was conducted, with the global ITS results remaining unchanged (Tables G-H).

Meta-regression

The proportion of the global population and of global cases targeted by each restriction was calculated for the first targeted restriction of each country, and countries were categorized into three groups for the timing of restrictions relative to incidence within each country and relative to border closures implemented by other countries. Each country's first restriction, first targeted restriction, and first total restriction was categorized as occurring either prior to 50, between 50 and 500, or after 500 cumulative domestic COVID-19 cases. Each country's first border closure, first targeted border closure, and first total border closure was also categorized as occurring within the first, second, or third tercile of border closures globally. Every factor listed in Table A was first run against the sum of positive, negative, and null effects for targeted, total, and all border closures, for both unlagged and five-day-lagged outcomes, and evaluated for significance at the 95% level (Table T). Full model regressions of every covariate found to be independently associated with changes in Rt were then conducted. Robustness checks with unlagged outcomes produced similar results, but also identified higher GDP as being associated with beneficial total border closures and higher GDP per capita as being associated with less beneficial closures (Table W).

Fixed-Effects Time-Series Regression

Fixed-effects regression was used to inform a robustness check for the CEM analyses by identifying country-level indicators that significantly modified the association between border closures and the transmission of COVID-19. The primary evaluations used a five-day lag in Rt, and after surveying the published literature, a list of country characteristics was organized into a theoretical framework, including country-level indicators for the economy, gender, health, and domestic containment and closure policies (Table A). In the fixed-effects regression model, the index i represents the individual country (i = 1, ..., 166), t is the day of the study period between January 1 and June 8, 2020 (t = 1, ..., 160), and X_it is the vector of independent variables evaluated for association (Equation 1):

R_it = α_i + β·X_it + ε_it    (1)

For the primary panel analysis, each of the twenty-one country-level indicators z_i = [z_i1, ..., z_i21] was independently interacted with the percentage of the global population targeted (excluding own population). A fixed-effects regression model mitigates potential confounding by controlling for country-level observables, while the model assumptions also account for unobservable country-level factors 17. The additional use of interaction effects examined the intra-country association between the vector of independent variables and Rt outcomes as related to the intra-country variation observed in changes in the percentage of the global population targeted (excluding own population). Nineteen country-level factors were identified at a significance level of 90% (Table AH). Scenario analyses using the percentage of global cases targeted (excluding own cases) as the intervention variable and Rt values lagged by zero, 10, and 15 days were also conducted (Table AI).
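The fixed-effects specification with an indicator interaction can be sketched in a few lines on a synthetic panel. Variable names such as pct_targeted and z are hypothetical stand-ins for the paper's intervention measure and Table A indicators.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: 166 countries x 160 days; rt_lag5 is Rt lagged 5 days,
# pct_targeted is the share of global population targeted (excluding own),
# and z is one time-invariant country indicator.
rng = np.random.default_rng(2)
idx = pd.MultiIndex.from_product([range(166), range(160)], names=["country", "day"])
df = pd.DataFrame(index=idx).reset_index()
df["pct_targeted"] = rng.uniform(0, 1, len(df))
df["z"] = df["country"].map(dict(enumerate(rng.normal(size=166))))
df["rt_lag5"] = 1.5 - 0.3 * df["pct_targeted"] * df["z"] + rng.normal(0, 0.2, len(df))

# Country fixed effects via C(country); z itself is absorbed by the fixed
# effects, so only its interaction with the intervention is identified.
fit = smf.ols("rt_lag5 ~ pct_targeted + pct_targeted:z + C(country)", data=df).fit()
print(fit.params[["pct_targeted", "pct_targeted:z"]])
```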
For the primary panel analysis, each of the twenty-one country-level indicators z_k (k = 1, ..., 21) was independently interacted with the percentage of global population targeted (excluding own population). A fixed-effects regression model mitigates potential confounding by controlling for country-level observables, while the model assumptions also account for unobservable country-level factors. 17 The additional use of interaction effects examined the intra-country association between the vector of independent variables and Rt outcomes as related to the intra-country variation observed in changes in the percentage of global population targeted (excluding own population). Nineteen country-level factors were identified at a significance level of 90% (Table AH). Scenario analyses using the percentage of global cases targeted (excluding own cases) as the intervention variable, and Rt values lagged by 0, 10, and 15 days, were also conducted (Table AI).

Coarsened Exact Matching

Coarsened exact matching (CEM) was used to match and reduce the multivariate imbalance between countries with no border closure (control group) and countries with a border closure (treatment group). 18 An adjusted regression analysis using maximum likelihood estimation (MLE) was then fitted on the dataset with improved balance and used to predict the size of the treatment effect averaged over all treated countries in our sample. 18 The four border interventions of interest are: i) targeted border closures; ii) proportion of global population targeted by a targeted border closure; iii) proportion of global cases targeted by a targeted border closure; and iv) total border closures, with instances of border de-escalation censored from all CEM analyses. A diagrammatic representation of the coding of treatment status by type of border closure studied is provided in Fig B in S1 Text. To evaluate targeted border closures, the treatment group comprised all countries that had ever implemented a targeted border closure, and the control group included countries that had never implemented a targeted border closure. In one model, data after a country escalated to a total border closure were censored for both treatment and control groups, while in another model, total border closure data were left intact and controlled for in the MLE. To evaluate total border closures, two approaches to assigning treatment status were used. The primary model assigned all countries that had ever implemented a total border closure to the treatment group and all countries that had never implemented a total border closure to the control group. A conservative model was also constructed, assigning countries that had only ever implemented a total border closure to the treatment group, and all countries that had either never implemented a border closure, only implemented a targeted border closure, or escalated from a targeted to a total border closure to the control group. The same two techniques outlined above for the targeted border analyses were used for both models in evaluating total border closures: censoring data during periods of total border closure for both treatment and control groups, or controlling for any instance of total border closure in the MLE. Due to the trade-off between matching on more information (a higher number of variables) and a reduction in the dataset size, varying degrees of coarsening were used in the matching process.
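The core CEM mechanics (coarsen, match exactly on strata, reweight controls) can be written compactly in pandas. The sketch below is a generic illustration under assumed column names and toy data; it uses the standard CEM weight formula rather than the authors' exact pipeline.

```python
import numpy as np
import pandas as pd

# Toy data: hypothetical country-day rows with a treatment flag and two
# continuous covariates to be coarsened (names are illustrative only).
rng = np.random.default_rng(1)
df = pd.DataFrame(
    {
        "treated": rng.integers(0, 2, 200),  # ever implemented a closure
        "log_gdp": rng.normal(24, 2, 200),
        "ghsi": rng.uniform(20, 80, 200),
    }
)

# 1) Coarsen each matching variable into bins. "Moderate" coarsening here
#    means few bins; more bins = finer matching but fewer matched pairs.
df["log_gdp_bin"] = pd.cut(df["log_gdp"], bins=4, labels=False)
df["ghsi_bin"] = pd.cut(df["ghsi"], bins=4, labels=False)
strata = ["log_gdp_bin", "ghsi_bin"]

# 2) Keep only strata containing both treated and control observations;
#    unmatched rows get zero weight and drop out of the analysis.
counts = df.groupby(strata)["treated"].agg(["sum", "count"])
ok = counts[(counts["sum"] > 0) & (counts["sum"] < counts["count"])].index
matched = df.set_index(strata).loc[ok].reset_index()

# 3) Standard CEM weights: 1 for treated units; controls in each stratum
#    are reweighted to represent that stratum's treated share.
n_t = (matched["treated"] == 1).sum()
n_c = (matched["treated"] == 0).sum()
g = matched.groupby(strata)["treated"]
m_t = g.transform("sum")
m_c = g.transform("count") - m_t
matched["cem_w"] = np.where(
    matched["treated"] == 1, 1.0, (m_t / m_c) * (n_c / n_t)
)
```

An adjusted regression (e.g., a weighted maximum-likelihood model, as in the text) would then be fit on `matched` using `cem_w` as weights to estimate the average treatment effect on the treated.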
Countries were matched on a subset of the independent variables identified as having significant interaction effects in the fixed-effects regression, with variations in the degree of coarsening. All country-day data points without a matched pair, and thus with a zero CEM weight, were excluded from further analysis. The adjusted regression analysis used MLE to determine the average treatment effect on the treated in the newly balanced dataset. T_targeted and T_total are dichotomous variables representing treatment status, with T_targeted indicating a country having implemented a targeted border closure (Eq 2) and T_total indicating a country having implemented a total border closure (Eq 3). The coefficient on the treatment indicator quantifies the average treatment effect across all countries having implemented either a targeted or a total border closure. The primary analysis (Table 2) moderately coarsened the variables with higher causal plausibility, and a robustness check minimally coarsened the lower-priority variables (Table Z). A robustness check on the primary model verified the improved balance after matching (Table AJ). Additional robustness checks using the higher-causal-plausibility variables applied greater coarsening (Table X) and minimal coarsening (Table Y). Furthermore, a robustness check addressing concerns of confounding from changes in domestic containment and closure policies on the effects of border closures was conducted using moderate coarsening (Table AE), and another used the variables from the highest R-squared model with moderate coarsening (Table AK). Further robustness checks first limited the sample to the countries matched in the primary analysis and moderately coarsened the closure policy variables together with the higher-plausibility variables (Table AF), with a follow-up check matching only on the closure policy variables (Table AG). Additional checks limited the sample to the initially matched countries and restricted the time period to forty-five days (Table AA) or sixty days (Table AB) before and after the global intervention dates. Forty-five days prior to the global targeted border closure is around the time the first estimated case of COVID-19 was recorded, and about forty-five days after it is the global total border closure date; the same 60-day window was used in the total border closure analysis. Lastly, we conducted scenario analyses using subsets of the matched dataset segregated by higher- or lower-tier covariate distributions, using the mean as the baseline, for targeted border closures (Table AC) and for total border closures (Table AD). Regardless of the approach employed in selecting the number of variables used in the matching process and the level of coarsening, all results were highly consistent.

Limitations

There were a number of challenges to conducting a rapid evaluation of national border closures in the midst of the ongoing COVID-19 pandemic. Although there is no perfect source of information with which to quantify COVID-19 incidence, we strove to maximize comparability between countries, legitimacy of data sourcing, and completeness of information. The OxCGRT Data Repository was deemed to best satisfy these objectives; however, issues relating to testing, data reporting, and data comparability remain. Reported cases of COVID-19 are highly dependent on the degree of testing being done to detect transmission and on the accuracy and timeliness of data reporting.
Some challenges to coding the border closure matrix included data validation, reconciling unspecified countries in targeted closures, and missing country data. Firstly, due to the vast size of the OxCGRT dataset, we purposively verified the correctness of the coding and the cited news sources for 1) countries with large populations; 2) countries with early border closures; and 3) randomly selected countries, as a final layer of accountability. Secondly, four countries targeted countries exceeding a stated number of cases but did not explicitly specify the countries to be targeted; we determined the list of targeted countries for these four based on Johns Hopkins University case counts. 4 Lastly, seven countries with populations over 1 million that were missing from the OxCGRT dataset were added for analysis (Armenia, Equatorial Guinea, Guinea-Bissau, Latvia, North Macedonia, Timor-Leste, and Togo).

Table AB. CEM robustness check restricting to 60 days before and after the two intervention dates, using higher-causal-plausibility variables, moderately coarsened. Maximum likelihood random-effects estimation for targeted border closures. Country-days are matched on moderately coarsened higher-causal-plausibility variables, with the time period reduced to 60 days before and after the global intervention dates for targeted (February 5, 2020) and total (March 19, 2020) border closures. [Table body not recovered in extraction.]

Table AG. CEM robustness check using containment and closure variables and higher-causal-plausibility variables, moderately coarsened. Maximum likelihood random-effects estimation for targeted border closures. Country-days are matched on moderately coarsened containment and closure policy changes and the higher-causal-plausibility variables, restricting only to the countries matched in the primary analysis. Variables matched on include the GHSI score, logged GDP, logged passengers flown, health expenditures, closures of workplaces and public transit, stay-at-home orders, and restrictions on public gatherings. [Table body omitted; reported country counts: 44, 44, 110, 110. *** p < 0.01, ** p < 0.05, * p < 0.10; standard errors in parentheses.]

Table AH. Fixed-effects regression analyses. Results are presented for all factors found to produce statistically significant fixed-effects interactions between the extent of border closures and their effect on five-day-lagged Rt, for the Population Model (based on the proportion of population targeted) and the Case Model (based on the proportion of cases targeted). The number of countries, regression coefficient, and direction of effect are shown. [Table body omitted; reported country counts: 83, 85, 122, 144. *** p < 0.01, ** p < 0.05, * p < 0.10; standard errors in parentheses.]
2023-03-02T16:29:38.468Z
2023-02-28T00:00:00.000
{ "year": 2023, "sha1": "452bd0fffd5cf5c35a34e4d29bae4d4b8bca275f", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/globalpublichealth/article/file?id=10.1371/journal.pgph.0000980&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "290adfc6a58239376a6e990dc140e5090cc4c5f3", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
7950850
pes2o/s2orc
v3-fos-license
High Blood Pressure in Overweight and Obese Youth: Implications for Screening

In the absence of evidence-based guidelines for high blood pressure screening in asymptomatic youth, a reasonable strategy is to screen those who are at high risk. The present study aimed to identify optimal body mass index (BMI) thresholds as a marker for high-risk youth to predict hypertension prevalence. In a cross-sectional study, youth aged 6 to 17 years (n=237,248) enrolled in an integrated prepaid health plan in 2007 to 2009 were classified according to their BMI and hypertension status. In moderately and extremely obese youth, the prevalence of hypertension was 3.8% and 9.2%, respectively, compared with 0.9% in normal weight youth. The adjusted prevalence ratios (95% confidence intervals) of hypertension for normal weight, overweight, moderate obesity, and extreme obesity were 1.00 (Reference), 2.27 (2.08-2.47), 4.43 (4.10-4.79), and 10.76 (9.99-11.59), respectively. The prevalence of hypertension was best predicted by a BMI-for-age ≥94th percentile. These results suggest that all obese youth should be screened for hypertension.

Between 1% and 5% of youth have hypertension. 1,2 Hypertension early in life can predict adult hypertension, a condition that is associated with a shorter life span due to higher cardiovascular mortality. 3,4 Structural cardiac changes and organ damage due to hypertension can begin at very young ages. A recent study suggested that changes in cardiac structure caused by hypertension can be detected in children as young as 24 months. 5 It is speculated that obesity may be the strongest modifiable risk factor for hypertension during childhood, 6,7 but prior to developing interventions to modify high blood pressure (BP), the magnitude of the association must be determined. Hypertension in youth is associated with obesity. 1 Some racial/ethnic minorities have a high prevalence of obesity, with a shift in the body weight distribution toward extreme obesity. 8 Insights into the interplay of obesity, race/ethnicity, sex, and the occurrence of hypertension may provide support for decisions related to screening high-risk groups for high BP and, eventually, for effective targeted interventions to prevent premature cardiovascular disease.
Because studies are lacking that assess whether screening for high BP in youth reduces adverse health outcomes or delays the onset of hypertension, the evidence is currently insufficient to make recommendations for or against routine screening for high BP in asymptomatic children and adolescents. 9 Because the prevalence of hypertension in normal weight youth is low, the question arises whether body mass index (BMI)-for-age can be used to identify youth at highest risk for hypertension. This information can be used to target a high-risk population for BP screening until current evidence gaps are filled. Using information in the electronic health records (EHRs) of a population-based, multiethnic cohort of insured youth in Southern California, we estimated the magnitude of the association between high BP and body weight categories in children and adolescents. We also determined BMI-for-age thresholds that best predict pediatric prehypertension and hypertension to guide future recommendations for a targeted screening program to identify high BP in youth.

Study Design and Patients

Patients enrolled in this study were pediatric members of a prepaid integrated health plan between January 1, 2007, and December 31, 2009. Kaiser Permanente Southern California (KPSC) is the largest health care provider in Southern California. In 2012, KPSC provided health care services to more than 3.6 million members, approximately 22% of whom were aged 17 years or younger. 10 Detailed demographic characteristics of the KPSC membership population are described elsewhere. 10 Members receive care in medical offices and hospitals managed by KPSC. A comprehensive EHR system, HealthConnect, was implemented region-wide prior to 2007. The study protocol was reviewed and approved by the institutional review board of KPSC. For this cross-sectional study, we used EHR data from a subset of patients enrolled in a large population-based cohort, the KPSC Children's Health Study, from January 1, 2007, through December 31, 2009. 11 The date of the first available BP was considered the day of study enrollment. As shown in Figure 1, we excluded those who were younger than 6 years or older than 17 years (n=444,887) and patients who became pregnant at any time during the 36-month study period (n=6856). We also excluded patients with one or more preexisting diagnoses of chronic conditions known to significantly affect growth or BP (n=2712), such as growth hormone deficiency (International Classification of Diseases, Ninth Revision [ICD-9] 253.3) or overproduction (ICD-9 253.0), aortic coarctation (ICD-9 747.10), chronic renal disease (ICD-9 585.x), congenital adrenal hyperplasia (ICD-9 255.2), Cushing syndrome (ICD-9 255.0), hyperaldosteronism (ICD-9 255.1), and/or hyperthyroidism (ICD-9 242). Youth who had filled a prescription for an antihypertensive medication and who had at least one outpatient diagnosis of hypertension (ICD-9 401, 402, 403, or 404) prior to study enrollment (n=984) were identified as having hypertension. Among the remaining youth, BP measurements at 3 separate visits were required for the identification of hypertension. 12 We therefore excluded participants of the KPSC Children's Health Study with fewer than 3 BP measurements within 36 months following the day of study enrollment (n=228,331), allowing for annual health care visits. This resulted in a final analytical cohort of 237,248 children and adolescents aged 6 to 17 years.

BP Measurements and Classification

BP was measured routinely at the beginning of almost every outpatient clinical visit.
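In code, the exclusion cascade above amounts to a short sequence of filters. The sketch below is illustrative only: the column names are hypothetical, and the antihypertensive-prescription branch is omitted for brevity.

```python
import pandas as pd

# Hypothetical EHR extract: one row per patient at study enrollment, with
# assumed columns age (years), pregnant (bool), dx_codes (list of ICD-9
# strings), and n_bp (BP measurements within 36 months of enrollment).
EXACT_ICD9 = {"253.3", "253.0", "747.10", "255.2", "255.0", "255.1"}
PREFIX_ICD9 = ("585", "242")  # chronic renal disease 585.x, hyperthyroidism 242.x

def has_excluded_dx(codes) -> bool:
    return any(c in EXACT_ICD9 or c.startswith(PREFIX_ICD9) for c in codes)

def build_cohort(ehr: pd.DataFrame) -> pd.DataFrame:
    cohort = ehr[ehr["age"].between(6, 17)]            # ages 6-17 years
    cohort = cohort[~cohort["pregnant"]]               # no pregnancy in period
    cohort = cohort[~cohort["dx_codes"].map(has_excluded_dx)]
    return cohort[cohort["n_bp"] >= 3]                 # >=3 BPs for classification
```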
Nurses and medical assistants were trained according to guidelines of the American Association of Critical-Care Nurses for pediatric care. 13 Digital devices (Welch Allyn Connex series, Welch Allyn Inc, Skaneateles Falls, NY) are the preferred BP measurement devices at KPSC. In some cases, a wall-mounted aneroid sphygmomanometer (Welch Allyn Inc) was used. The cuff size was estimated after inspection of the bare upper arm at the midpoint between the shoulder and elbow, using a bladder width approximately 40% of the arm circumference. Staff were trained to ensure that the bladder inside the cuff encircled 80% to 100% of the circumference of the arm, according to standard recommendations. 13 A full range of cuff sizes was available at the locations where vital signs, including BP, are recorded in the clinics. After at least 3 to 5 minutes of rest, children were measured in a seated position with the midpoint of the arm supported at heart level. The brachial artery was palpated, the cuff was placed so that the midline of the bladder was over the arterial pulsation, and the cuff was then snugly wrapped and secured around the child's bare upper arm. In pediatric clinics, nurses and medical assistants are instructed to repeat the measurement if the BP is elevated. If the level remains elevated on the repeated measurement, the primary care provider measures BP using an auscultatory device in the examination room. However, repeated BP measurements are not systematically recorded in the EHR, and aneroid readings cannot be distinguished from oscillometric readings in the EHR. All personnel measuring BP are certified in BP measurement during their initial staff orientation and recertified annually. In pediatrics and family practice, staff must complete a Web-based training session and successfully pass a certification process that includes knowledge of preparing patients for BP measurement, selecting the correct cuff size, and using standard measurement techniques. In addition to the Web-based training, staff are observed measuring BP to verify their competency. However, the intensity of this training may vary by medical center, and deviations from the preferred measurement method may have occurred. BP measures for all outpatient encounters were extracted from the EHR from the date of enrollment until 36 months after this date, unless the measured body temperature at the time of the encounter was >100.4°F (>38.0°C). In clinical settings, follow-up visits may not be scheduled as recommended, which may lead to an underestimation of the prevalence of hypertension. Given this clinical setting, the rules to classify BP were widened to a 36-month study period that allowed for the inclusion of BP measurements from 3 regular annual visits. The use of the first 4 consecutive BPs allowed one BP to be an outlier below classification requirements. We classified BP using the recommendations of the Fourth Report on the Diagnosis, Evaluation, and Treatment of High Blood Pressure in Children and Adolescents of the National High Blood Pressure Education Program (NHBPEP) 14 combined with the recommendations for adults of the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7). 15 Prehypertension was defined as at least one BP between the 90th and <95th percentile (or ≥120/80 mm Hg even if lower than the 90th percentile) of the age-, sex-, and height-specific BP distribution charts.
Because of the high variability of BP in this age group, the NHBPEP definition of hypertension in children and adolescents requires a BP ≥95th percentile (or ≥140/90 mm Hg even if lower than the 95th percentile) on at least 3 separate occasions. We classified youth with 1 or 2 BPs ≥95th percentile as having "blood pressure in the hypertensive range." As previously described, patients with a diagnosis of essential hypertension and at least one prescription of an antihypertensive drug were classified as having hypertension if there was no information in the EHR to suggest a different diagnosis.

Body Weight and Height

Body weight and height were routinely measured and extracted from the EHR. BMI was calculated as weight (kilograms) divided by the square of height (meters). Definitions of overweight and obesity in children and adolescents are based on the sex-specific BMI-for-age growth charts developed by the Centers for Disease Control and Prevention. 16 Because BMI-for-age percentiles cross the adult thresholds for overweight and obesity, which may lead to an underestimation of overweight and obesity in adolescents older than 15 years, the definitions were combined with the World Health Organization definitions for overweight and obesity in adults. 16-18 Children were categorized as underweight (BMI-for-age <5th percentile), normal weight (BMI-for-age ≥5th to <85th percentile), overweight (BMI-for-age ≥85th to <95th percentile or BMI ≥25 to <30 kg/m²), moderately obese (BMI-for-age ≥95th percentile to <1.2 × 95th percentile or BMI ≥30 to <35 kg/m²), and extremely obese (BMI-for-age ≥1.2 × 95th percentile or BMI ≥35 kg/m²). Based on a validation study including 15,000 patients with 45,980 medical encounters, the estimated error rate in weight and height data was <0.4%. 19

Race, Ethnicity, and Socioeconomic Status

We obtained race and ethnicity information from health plan administrative records and birth records. We hierarchically categorized race/ethnicity as Hispanic (regardless of race), non-Hispanic white, black, Asian or Pacific Islander, and other, multiple, or unknown race/ethnicity combined. In a validation study comparing health plan administrative records and birth certificate records of 325,810 children, 20 the positive predictive values (PPVs) were 89.3% for Hispanic ethnicity, 95.6% for white, 86.6% for black, 73.8% for Asian/Pacific Islander, 51.8% for other, and 1.2% for multiple race/ethnicity. In cases for which race and ethnicity information was unknown (31.7%), administrative records were supplemented by an imputation algorithm that used surname lists and address information derived from the US Census Bureau. 21-23 Hispanic ethnicity and Asian race were assigned based on surnames. For blacks and non-Hispanic whites, the child's home address was used to link racial/ethnic information from the US Census Bureau. Race/ethnicity was hierarchically assigned using probability cutoffs of >50% for Asian surname, >50% for Hispanic surname, >75% for black race from geocoding, and >45% for white race from geocoding if no other assignment could be made. The specificity and PPV were >98% for all races/ethnicities. 8 To assess socioeconomic status, we used neighborhood education, which was estimated from geocoded addresses linked to 2010 US census data at the block level. 24

Statistical Analysis

Differences in the distribution of demographic characteristics among weight classes, for the analytical cohort as well as for youth excluded due to missing BP measures, were assessed using the chi-square test.
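Before turning to the analyses, the combined CDC/WHO weight classification above can be expressed as a small function. This is a minimal sketch restating the definitions in the text; it assumes the CDC BMI-for-age percentile and the BMI value at the 95th percentile for the child's sex and age are already available from growth-chart lookups (not shown).

```python
def weight_class(bmi: float, pctile: float, p95_bmi: float) -> str:
    """Classify weight per the combined CDC/WHO definitions above.

    bmi      -- measured BMI (kg/m^2)
    pctile   -- CDC sex- and age-specific BMI-for-age percentile (0-100)
    p95_bmi  -- BMI value at the 95th percentile for that sex and age
    """
    if pctile < 5:
        return "underweight"
    if bmi >= 35 or bmi >= 1.2 * p95_bmi:   # >= 1.2 x 95th percentile
        return "extremely obese"
    if bmi >= 30 or pctile >= 95:
        return "moderately obese"
    if bmi >= 25 or pctile >= 85:
        return "overweight"
    return "normal weight"
```

Checking the extreme-obesity condition before the moderate one ensures the moderate band covers only the interval from the 95th percentile up to 1.2 × the 95th-percentile BMI value (or BMI 30 to <35 kg/m²).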
The prevalence of high BP was estimated for the entire cohort and by sex (boys/girls), age group (6-11 years, 12-17 years), race/ethnicity (non-Hispanic white, Hispanic, black, Asian or Pacific Islander, other or unknown), and state-subsidized health care (yes/no). The prevalence was expressed as a percentage with corresponding 95% confidence intervals (CIs). For outcomes with high frequency, the odds ratio derived from logistic regression can overestimate the prevalence ratio. Hence, we examined the associations of high BP with weight class by using log-binomial regression models to estimate crude prevalence ratios (PRs) and corresponding 95% CIs, as well as adjusted PRs after adjusting for age, sex, and race/ethnicity. To detect possible interactions of weight class with sex, age, and race on prehypertension and hypertension, we used two log-binomial regression models: (1) the multivariable model stratified by age, sex, or race, and (2) the same model additionally including 2-way interaction terms. Receiver operating characteristic (ROC) curves were used to predict prehypertension (or higher), BP in the hypertensive range (or higher), and hypertension by BMI-for-age percentile. The area under the curve (AUC), or model c statistic, and corresponding 95% CIs are provided. The optimal threshold was chosen where accuracy measures (Youden Index, total accuracy) were maximized and total misclassification error was minimized. 25 The Youden Index (J = Sensitivity + Specificity − 1) is the maximum difference between the ROC curve and the diagonal (chance) line, and determines the cut point that optimizes the ability to differentiate between individuals with and without the outcome of interest, assuming equal weight for sensitivity and specificity. All analyses were conducted using SAS 9.2 (SAS Institute Inc, Cary, NC).

Demographics of the Study Population

Approximately half of the study population was Hispanic (Table I). Extremely obese youth were more likely to be male, Hispanic or black, and to live in a census block with a higher proportion of adult residents with low educational attainment. Compared with youth excluded from the analysis because they did not have 3 independent BPs in the study period (n=228,331), the study cohort was similar in the distribution of sex, race/ethnicity, neighborhood education, and neighborhood income (data not shown). However, youth excluded from the analysis were slightly younger, and for a significant proportion of these youth (37.4%) the BMI and BP data were unavailable.

The prevalence of prehypertension and hypertension in the total sample was 31.4% and 2.1%, respectively (Table II). A significant proportion of youth had 1 (16.6%) or 2 (4.8%) BP measurements in the hypertensive range. Among youth with high BP, 75.3% had an isolated elevated systolic BP, 14.6% had an isolated elevated diastolic BP, and 10.0% had 1 elevated systolic and 1 elevated diastolic BP. The prevalence of high BP was higher with higher BMI category across sexes and age groups. For example, the prevalence of hypertension was 0.6% in underweight youth, 0.9% in normal weight youth, and 9.2% in extremely obese youth. Extremely obese youth were 10.58 (95% CI, 9.75-11.28) times and moderately obese youth 4.35 (95% CI, 4.03-4.71) times more likely to have hypertension than their normal weight counterparts (Table III). This association remained unchanged after adjusting for sex, age, and race/ethnicity.
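The Youden-based threshold selection described under Statistical Analysis, whose results follow, can be illustrated with a minimal sketch. The data below are synthetic, not the study's, and the 2% outcome rate and score shift are arbitrary choices for the example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic example: BMI-for-age percentile as a score for hypertension.
rng = np.random.default_rng(0)
n = 5000
hyper = rng.random(n) < 0.02                      # ~2% hypertensive
pctile = np.clip(rng.normal(60 + 25 * hyper, 25), 0, 100)

fpr, tpr, thresholds = roc_curve(hyper, pctile)
j = tpr - fpr                                      # Youden J = Se + Sp - 1
best = j.argmax()
print(f"AUC = {roc_auc_score(hyper, pctile):.3f}")
print(f"Optimal cutoff: {thresholds[best]:.1f}th percentile "
      f"(Se = {tpr[best]:.2f}, Sp = {1 - fpr[best]:.2f})")
```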
Sex and race/ethnicity did not modify the association between body weight and high BP (both P values >.30). However, the association between body weight and high BP was marginally but significantly (P<.001) stronger in those aged 12 to 17 years than in those aged 6 to 11 years.

Thresholds for BMI-for-Age as a Predictor of High-Grade Prehypertension and Hypertension

The AUCs for the prediction of prehypertension or higher, 1 BP in the hypertensive range (ie, ≥95th percentile) or higher, 2 BPs in the hypertensive range or higher, and hypertension by BMI-for-age percentile are shown in Figure 2 (all AUCs: P<.001). The Youden Index was maximal at the 85th, 90th, 91st, and 94th percentiles of BMI-for-age for having at least prehypertension, 1 BP in the hypertensive range, 2 BPs in the hypertensive range, and hypertension, respectively (Figure 2). The sensitivity and specificity of the optimal body weight threshold were 50.4% and 70.9%, respectively, for prehypertension and 62.8% and 75.3%, respectively, for hypertension (Figure 3). The prevalence of high BP increased linearly with respect to BMI up to the 96th percentile and then increased steeply above the 97th percentile.

DISCUSSION

Results from this population-based cross-sectional study show a strong positive association between higher BMI-for-age and the prevalence of high BP in youth. Extremely obese youth are 10 times, moderately obese youth 4 times, and overweight youth twice as likely to have hypertension as their normal weight counterparts. Our results support the necessity of monitoring overweight and obese children for high BP. The setting of the present study is likely similar to other managed care settings, and the findings are likely generalizable to similar populations. The KPSC population is also similar to the underlying population of Southern California regarding sociodemographic factors. 11 The large sample size allowed well-powered tests for interactions and a threshold determination by BMI-for-age percentile. Few studies have evaluated the magnitude of the association between BMI and BP in pediatric populations. 26-29 Consistent with our results, a Swiss study found odds ratios of hypertension of 2.7 (95% CI, 1.5-5.0) for overweight children and 16.2 (95% CI, 9.1-28.9) for obese children. 27 In a Texas school-based study including 11- to 17-year-old students in 2003 to 2005, overweight youth had 1.39 (95% CI, 0.92-2.09) and obese youth had 4.26 (95% CI, 3.12-5.83) times the odds of having hypertension compared with normal weight youth. 26 In earlier data from the same school-based setting, the odds of hypertension were 3.49 (95% CI, 2.70-4.51) in obese children compared with their normal weight counterparts. 28 Only one study 29 examined a dose-response relationship based on the degree of obesity. In that study, 29 extremely obese youth (defined as ≥99th percentile of BMI-for-age) were 3 times as likely to have hypertension as moderately obese youth and 7 times as likely as normal weight youth, based on 3 visits with BP measurement to confirm a high BP. However, that study 29 provided only frequencies, not odds ratios, and the sample size of youth with ≥3 visits with BPs was very small (n=257). Our results suggest that the risk of hypertension in extremely obese children is more than twice that of moderately obese children. This may have serious clinical implications for pediatric populations that have experienced a recent increase in the prevalence of extreme obesity. 8,30
However, long-term studies are necessary to investigate the tracking of high BP from childhood into adulthood and the development of other cardiovascular risk factors in extremely obese compared with moderately obese youth. Health care providers could face another rise in the prevalence of hypertension in the coming years as a result of the shift toward extreme obesity in youth. Several organizations recommend routine screening of asymptomatic youth for high BP during routine care visits; these organizations include the NHBPEP, 12 the Expert Panel on Integrated Guidelines for Cardiovascular Health and Risk Reduction in Children and Adolescents of the National Heart, Lung, and Blood Institute, 31 and the American Heart Association. 32 Because studies are needed to assess whether screening for hypertension in youth reduces adverse health outcomes or delays the onset of hypertension in adults, the United States Preventive Services Task Force (USPSTF) 9 has recently concluded that the evidence is lacking to recommend for or against routine screening for high BP in asymptomatic children, a conclusion that remains controversial. 33 While measuring BP is inherently safe, there can be adverse outcomes from routine screening, including the time and resource costs to both families and health systems of making additional, and perhaps unnecessary, appointments when an elevated BP is found in low-risk youth. For example, in our cohort, nearly 75,000 youth had at least 1 prehypertensive BP and nearly 40,000 children had at least 1 hypertensive BP, of whom almost half were of normal weight. Follow-up at regular clinical visits, such as annual health check visits, can mitigate these costs but has the potential to delay the diagnosis and treatment of hypertension. It has also been concluded that the increase in hypertension in youth in the United States is largely driven by increased BMI. 9 As shown by our study and others, the prevalence of hypertension is much higher in obese than in normal weight youth. The results of our present study can inform future decisions between general population-wide screening and targeted screening or closer follow-up strategies in high-risk populations to identify youth with high BP. Scant knowledge is available about the optimal BMI thresholds to predict high BP in children. In one study, the prevalence of prehypertension or hypertension in a Canadian pediatric population increased at the 85th percentile of BMI-for-age. 34 Accordingly, we found that a BMI-for-age at or above the 85th percentile was able to predict high BP at the level of prehypertension or higher, but with relatively low sensitivity and specificity. A BMI-for-age at the 94th percentile was able to predict hypertension with acceptable sensitivity and specificity. With little change in sensitivity and specificity, the threshold for hypertension can be rounded to the 95th percentile of BMI-for-age. If screening for high BP has to be limited to a high-risk population, our results suggest that, at a minimum, children at or above the 95th percentile should be screened. However, it is well known that BP in children is variable. Due to our cross-sectional study design and the lack of data on BP tracking over a longer period of time, our results have to be interpreted carefully and confirmed by longitudinal data. The current thresholds for childhood overweight and obesity were developed empirically and are rather arbitrary.
35,36 Ideally, cut points describing overweight and obesity in youth would be based on the relationship between BMI and morbidity or mortality, but such data are only available for adults. Identifying cut points for overweight and obesity in children is more difficult, since children manifest fewer obesity-related conditions at this age than do adults. Hence, adult cut points for overweight and obesity have been linked to BMI percentiles for youth in order to derive the 85th percentile for overweight and the 95th percentile for obesity. 36,37 Our data provide support for the validity of the current classification of childhood overweight and obesity with respect to identifying high BP.

STUDY LIMITATIONS

Several potential limitations are noted. First, the cross-sectional study design precluded us from making causal inferences on the relationship between body weight and hypertension. However, a general association between obesity and hypertension is well established. 38 Second, only single BP readings were available electronically for this study, while repeated readings within one visit, and use of the average of these repeated readings, are recommended. 12 This may explain the high proportion of youth with high BP, but it is likely to reflect findings in other real-world clinical settings. Third, we excluded only outpatient BP measurements that indicated the presence of fever, because fever increases BP. 39 We did not exclude medical visits related to other health conditions that may lead to slightly higher BP (eg, musculoskeletal pain), nor did we limit BPs to those measured at healthy child visits, as has been done by others. 40 Restricting the cohort to youth with at least 3 well-child visits would have led to a substantial underestimation of the prevalence of high BP in adolescents, because the frequency of healthy child visits decreases in adolescence. On the other hand, we cannot exclude that some BPs were elevated secondary to acute conditions. This, however, is unlikely to have affected the results on the association between obesity and BP or the determination of a threshold to predict hypertension by BMI-for-age. BP in routine clinical practice is usually measured by automated oscillometric devices, while reference standards have been developed based on auscultatory methods. 32 However, the benefits and disadvantages of this practice remain a matter of debate. 41 Generally, oscillometric devices tend to underestimate BP by approximately 2 mm Hg, 42 while auscultatory methods are prone to measurement error when used in non-research settings. 41 This potential underestimation, however, has been shown to be independent of body weight 43 and relatively small in magnitude, 42 and is therefore not very likely to cause a significant underestimation of the prevalence of hypertension compared with potential errors arising from the use of auscultatory methods. 41

CONCLUSIONS

Body weight in youth is strongly and positively associated with high BP. With >9% of extremely obese youth having hypertension and another 45% having 1 or 2 BP measurements in the hypertensive range, these youth are particularly at risk for hypertension and may need regular screening and follow-up to identify and treat the condition. Our findings strongly support the need for recommendations to screen for hypertension in overweight and obese children at all outpatient medical visits.
Additionally, our results provide some validation for the current thresholds of pediatric overweight and obesity in predicting the prevalence of prehypertension and hypertension.

[Figure caption] Prevalence of high blood pressure in youth by body mass index (BMI)-for-age percentile. Sensitivity and specificity are given for the optimal thresholds (maximal Youden Index). BMI-for-age percentiles above the 97th may be imprecise and must be interpreted with caution.
2016-05-04T20:20:58.661Z
2013-10-10T00:00:00.000
{ "year": 2013, "sha1": "500b6d96dea36017518b35b96532d334626f59ea", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jch.12199", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "500b6d96dea36017518b35b96532d334626f59ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3180178
pes2o/s2orc
v3-fos-license
How possible is the development of an operational psychometric method to assess the presence of the 5-HTTLPR s allele? Equivocal preliminary findings

Objective: The s allele of the 5-hydroxytryptamine transporter-linked promoter region (5-HTTLPR) polymorphism of the serotonin transporter gene has been found to be associated with neuroticism-related traits, affective temperaments and response to selective serotonin reuptake inhibitor (SSRI) treatment. The aim of the current study was to develop a psychometric tool that could at least partially substitute for laboratory testing and could predict the presence of the s allele.

Methods: The study included 138 women of Caucasian origin, with a mean age of 32.20 ± 1.02 years. All subjects completed the Hungarian standardised version of the Temperament Evaluation of the Memphis, Pisa, Paris, and San Diego Autoquestionnaire (TEMPS-A) instrument and were genotyped for 5-HTTLPR using PCR. The statistical analysis included calculation of the Index of Discrimination (D), discriminant function analysis, creation of scales on the basis of the above, and then item analysis and calculation of sensitivity and specificity.

Results: Four indices were eventually developed, but their psychometric properties were relatively poor, and their joint application did not improve the outcome.

Conclusions: We could not create a scale that predicts the 5-HTTLPR genotype with sufficient sensitivity and specificity; therefore, we could not substitute a psychometric scale for laboratory genetic testing in predicting genotype, and possibly also in affective disorder characterisation and treatment.

Background

The s allele of the 5-hydroxytryptamine transporter-linked promoter region (5-HTTLPR) polymorphism of the serotonin transporter gene has been shown to be significantly associated with unipolar, bipolar and subthreshold forms of affective disorder [1-8] and also with the neuroticism trait [9-12], indicating a significant role of the polymorphism in the background of affective phenomena and pathology. In a previous paper we described that the affective temperaments composing the depressive superfactor (that is, the depressive, cyclothymic, anxious and irritable temperaments) also show a significant association with the s allele [13]. In a more recent paper, we attempted to compose a scale from those items of the Temperament Evaluation of the Memphis, Pisa, Paris, and San Diego Autoquestionnaire (TEMPS-A), which measures affective temperaments, that differentiate most sensitively between subjects carrying and not carrying the s allele, and we managed to derive a scale consisting of nine items that was able to differentiate between the two groups at a good level of significance and also showed good internal consistency [14]. Since the s allele is associated not only with neuroticism and a tendency to develop affective disorders in the face of adverse life events, but also with a less favourable response to selective serotonin reuptake inhibitors (SSRIs) [15-19], we considered it of interest to develop a scale which could predict the presence of the s allele with high accuracy, and thus a reduced likelihood of SSRI response. For this purpose a careful and meticulous psychometric approach is needed in delineating and validating the scale. In the present paper we attempted to delineate and validate a scale based on the TEMPS-A questionnaire to predict the presence of the 5-HTTLPR s allele using a different and more rigorous approach.
Study participants

The study population included 138 psychiatrically healthy, unrelated Hungarian women of Caucasian origin. All participants were aged between 18 and 64 years; the mean age was 32.20 ± 1.02 years. All subjects were screened for neurological and psychiatric disorders using the standardised Hungarian version of the MINI International Neuropsychiatric Interview [20]. Subjects with any neurological disorder, or any current or lifetime Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) Axis I psychiatric disorder, were excluded. The study protocol was reviewed and approved by the Scientific and Research Ethics Committee of the Scientific Health Council of Hungary, which is in charge of genetic experimentation concerning human subjects. All subjects gave written informed consent before participating in the study.

Methodology

All subjects completed the Hungarian standardised version of the TEMPS-A questionnaire, which measures affective temperaments on five scales: the depressive, cyclothymic, irritable, anxious and hyperthymic temperaments [14,21,22]. All subjects were genotyped for 5-HTTLPR by PCR. PCR amplification of 5-HTTLPR was performed on genomic DNA extracted from buccal cells [23], and 5-HTTLPR genotypes were identified as previously reported [24].

Statistical analysis

All statistical analyses were carried out using Statistica 7.0 for Windows (Statsoft, Tulsa, OK, USA). In all cases we analysed our data according to the additive model (subjects with each of the three genotypes: ss, sl, ll), the dominant model (subjects carrying the s allele vs subjects not carrying the s allele), and the recessive model (subjects carrying the l allele vs subjects not carrying the l allele). The first step included the calculation of the equivalent of the degree of difficulty [25] as a measure of an Index of Discrimination (D), in order to identify those items of the TEMPS-A scale that best discriminate between groups. D corresponds to the difference between two groups in the percentage of responses given. The second step included the development of the scales by weighting the item responses: items with D above 15 were included in the scales, with those with D above 20 weighted by a factor of 2 and those with D between 15 and 20 weighted by a factor of 1. Discriminant function analysis was also used in order to obtain two additional indices that could help in separating the groups; all items with D above 15 were included in this analysis. Item analysis was performed, and the value of Cronbach's α for each scale was calculated. Sensitivity (Sn) and specificity (Sp) were also calculated.

Results

In all, 19 subjects (13.76%) carried the ss genotype, 50 (36.23%) the ll genotype, and 69 (50.00%) the sl genotype. A total of 88 subjects (63.77%) carried the s allele, while 50 subjects (36.23%) did not. The frequency of the s allele in our sample was 38.77%, which parallels the results of earlier studies and is representative of the Caucasian population [24]. The distribution of genotypes in our study population followed the Hardy-Weinberg equilibrium (χ² = 0.38934, P = 0.8231). The various genotype groups (ss, sl and ll) did not differ in age (P > 0.05), and they also did not differ on the TEMPS-A subscales overall (Wilks' λ = 0.8833, F = 1.63, df = 10,262, P = 0.0980). However, post hoc comparisons indicated significant differences in some cases (Tables 2 and 3).
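The D-index selection and weighting procedure described under Statistical analysis can be sketched as follows. The item matrix and carrier flags below are random toy data, not the study sample, and taking the absolute difference for D is an interpretive choice.

```python
import numpy as np
import pandas as pd

def build_weighted_scale(items: pd.DataFrame, carrier: pd.Series):
    """Index of Discrimination (D) and weighted-scale construction.

    items   -- 0/1 TEMPS-A item responses, one column per item
    carrier -- boolean flag for s-allele carriers (illustrative input)
    """
    pct_carrier = 100 * items[carrier].mean()
    pct_noncarrier = 100 * items[~carrier].mean()
    d = (pct_carrier - pct_noncarrier).abs()   # difference in % endorsement

    selected = d[d > 15]                        # items entering the scale
    weights = np.where(selected.values > 20, 2, 1)  # D > 20 weighted twice
    score = (items[selected.index] * weights).sum(axis=1)
    return d, score

# Example with toy data (138 subjects x 110 items):
rng = np.random.default_rng(2)
items = pd.DataFrame(rng.integers(0, 2, (138, 110)))
carrier = pd.Series(rng.integers(0, 2, 138).astype(bool))
d, score = build_weighted_scale(items, carrier)
```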
The resulting scale from the application of weighting to the selected items is shown in Table 4. Cronbach's α was 0.48 for the ss scale and 0.66 for the ll scale. All items contributed roughly equally, and omission of any of them did not alter the α value significantly. The sensitivity (Sn) and specificity (Sp) of the two scales at various cut-off levels are shown in Table 5. The discriminant function analysis results are shown in Table 6. (In the item tables, the capital letter after the item number denotes the TEMPS-A subscale: A = anxious, D = depressive, C = cyclothymic, I = irritable.) Both Sn and Sp, as well as the discriminant function analysis results, are poor and cannot lead to the identification of cases. The combined use of these indices also led to poor results, since no case appeared to be classified into the same allele category by all the indices. All scales and indices correlated moderately but significantly with all TEMPS-A subscales (Table 7).

Discussion

In the present work we attempted to extract a scale from the TEMPS-A questionnaire that would predict the presence of the s allele of the 5-HTTLPR with satisfactory sensitivity and specificity. However, although several items discriminated between the different genotype groups to a high degree, no scale compiled from these items showed high sensitivity and specificity with respect to the presence of the s allele. Even the combination of the derived scales could not improve the poor classification outcome. To understand the nature of psychiatric disorders and to make more efficient treatment possible, we must not only view these disorders as complex entities in the context of their social, cultural, neurochemical and genetic determinants, but we should also be able to decompose them into smaller and better characterisable components. The concept of endophenotypes was introduced with the aim of identifying and characterising small, atomic phenomena that correspond to an accurately characterisable biochemical process or marker, such as a genetic polymorphism, and which are at the same time highly relevant to the manifestation of psychological phenomena or psychiatric disorders. There is an expanding effort to identify traits and temperaments related to the development of psychiatric illnesses and to associate them with genetic factors. Studies have attempted to link psychological traits as measured by psychometric scales with a given polymorphism. Our approach in this case was different: based on an association we had already described between the 5-HTTLPR s allele and several affective temperaments measured by the TEMPS-A [13,26], we aimed to construct a scale with a high ability to predict 5-HTTLPR genotype. In a previous paper we attempted to solve the task of delineating a psychometric scale to predict the presence of the s allele by selecting the items which differentiated between the genotype groups using analysis of variance (ANOVA) and performing a subsequent item analysis [14]. In the current paper, however, we used a more rigorous statistical approach in selecting the items differentiating between the genotype groups and also calculated sensitivity and specificity. As a result, we could not derive a scale that would predict the presence of the s allele with adequate accuracy. The role of genetic factors in the background of personality, vulnerability and, consequently, psychiatric disorders has gained greater recognition and wider acceptance in modern times.
It is well accepted that the 5-HTTLPR s allele has a profound role in determining the emergence of neuroticism-related personality traits [9-12,27] and of psychiatric disorders as well [1,2,4]. It has also been suggested, and described in several studies, that the presence of the s allele not only makes one more likely to possess personality traits associated with psychiatric diseases, especially anxiety and affective disorders, but also makes a less favourable response to SSRI antidepressants more likely [15-17,28-31]. Understanding the underlying biological and personality factors profoundly shapes and reorganises how we view psychiatric disorders today and how they will be classified in the future. These factors should also be taken into consideration when selecting the appropriate treatment. Although genetic testing is an available and affordable procedure nowadays, it is not widely used, for several reasons including ethical factors. Moreover, the presence of a given polymorphic allele does not predict the manifestation of a given disorder; it only indicates an increased risk. The same applies to drug response associated with genetic factors. Therefore a psychometric scale that is short and easy to administer, and that is able to predict the presence of a genotype associated with certain personality factors, psychiatric disorders or drug response with great specificity and sensitivity, would be a useful tool not only in research but also in everyday psychiatric practice. In our study, however, we failed to develop such a scale, which indicates that as yet we have no accurate and useful psychometric tools that can substitute for biochemical laboratory testing. Nevertheless, we report these scales in the current study to serve as a guide for future research, and because they give a gross impression of the psychometric features associated with each genetic category. In interpreting our results and drawing our conclusions, several limiting factors must be taken into consideration. First of all, our sample was relatively small; studies using larger samples would detect minor differences with greater accuracy. Also, our sample consisted entirely of women. Further studies are needed to investigate the possibility of extracting a psychometric scale for predicting the s allele in men and in a mixed-gender general study population.

Conclusions

Genetic polymorphisms influence not only the emergence of psychiatric diseases but also their pharmacotherapeutic response to treatment. Although genetic polymorphisms contribute only mildly to such phenotypical alterations, they may be taken into account when selecting a pharmacological agent. A scale closely related to a given polymorphism may thus be a useful clinical tool; however, the development of such a scale needs further research.
2016-05-12T22:15:10.714Z
2010-05-07T00:00:00.000
{ "year": 2010, "sha1": "48097b984f5dccffe9badad705ead4914d027631", "oa_license": "CCBY", "oa_url": "https://annals-general-psychiatry.biomedcentral.com/track/pdf/10.1186/1744-859X-9-21", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c21187a8c58ae1ea4315611d9887a865178dbfc", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
220516125
pes2o/s2orc
v3-fos-license
Replication-competent vesicular stomatitis virus vaccine vector protects against SARS-CoV-2-mediated pathogenesis

SUMMARY

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has caused millions of human infections and hundreds of thousands of deaths. Accordingly, an effective vaccine is of critical importance in mitigating coronavirus disease 2019 (COVID-19) and curtailing the pandemic. We developed a replication-competent vesicular stomatitis virus (VSV)-based vaccine by introducing a modified form of the SARS-CoV-2 spike gene in place of the native glycoprotein gene (VSV-eGFP-SARS-CoV-2). Immunization of mice with VSV-eGFP-SARS-CoV-2 elicits high titers of antibodies that neutralize SARS-CoV-2 infection and target the receptor binding domain that engages human angiotensin-converting enzyme-2 (ACE2). Upon challenge with a human isolate of SARS-CoV-2, mice expressing human ACE2 and immunized with VSV-eGFP-SARS-CoV-2 show profoundly reduced viral infection and inflammation in the lung, indicating protection against pneumonia. Finally, passive transfer of sera from VSV-eGFP-SARS-CoV-2-immunized animals protects naive mice from SARS-CoV-2 challenge. These data support development of VSV-eGFP-SARS-CoV-2 as an attenuated, replication-competent vaccine against SARS-CoV-2.

[...]-week-old BALB/c mice with 10^6 plaque-forming units (PFU) of VSV-eGFP-SARS-CoV-2 or a control, VSV-eGFP (Fig 1A). As murine ACE2 does not serve as a receptor for SARS-CoV-2, we spiked our preparation of VSV-eGFP-SARS-CoV-2 with trace amounts of VSV G to permit a single round of infection, an approach used previously for SARS-CoV (Kapadia et al., 2008). At 28 days post-priming, one cohort of animals was boosted with the homologous vaccine. Serum was isolated from all animals at three weeks post priming or boosting, and IgG titers against recombinant SARS-CoV-2 S protein or the RBD were determined by ELISA (Fig 1B-C). Immunization with VSV-eGFP-SARS-CoV-2 induced high levels of anti-S and anti-RBD-specific IgG compared to control VSV-eGFP, with reciprocal median serum endpoint titers of 3.2 × 10^5 and 2.7 × 10^6 (anti-S) and 1.1 × 10^4 and 1.4 × 10^4 (anti-RBD) for one and two doses of vaccine, respectively. Boosting was effective and resulted in a 90-fold increase in neutralizing activity after the second dose of VSV-eGFP-SARS-CoV-2. Collectively, these data suggest that VSV-eGFP-SARS-CoV-2 [...]. Immunized mice were administered 2 mg of anti-Ifnar1 mAb one day prior to intranasal delivery of AdV-hACE2. We administered anti-Ifnar1 antibody to augment virus infection and create a stringent disease model for vaccine protection. Five days later, mice were inoculated with 3 × 10^5 PFU of SARS-CoV-2 via the intranasal route (Fig 1A), and subsequently we measured viral yield by plaque and RT-qPCR assays. At day 4 post-infection (dpi), infectious virus was not recovered from lungs of mice vaccinated with either one or two doses of VSV-eGFP-SARS-CoV-2 (Fig 2A). For mice receiving only one dose of VSV-eGFP-SARS-CoV-2 vaccine, we observed a trend towards decreased levels of viral RNA in the lung, spleen, and heart at 4 dpi, and in the lung and spleen at 8 dpi, compared to the control VSV-eGFP vaccinated mice (Fig 2B-E).
Mice that received two doses of VSV-eGFP-SARS-CoV-2 had significantly lower levels of viral RNA in most tissues examined compared to control VSV-eGFP vaccinated mice (Fig 2B-E). Consistent with our viral RNA measurements, we observed less SARS-CoV-2 RNA by in situ hybridization in lung tissues of VSV-eGFP-SARS-CoV-2 immunized mice at 4 dpi (Fig 2F). To determine the extent of lung pathology in SARS-CoV-2 challenged mice, at 8 dpi we stained lung sections with hematoxylin and eosin (Fig 3B). Lung sections from VSV-eGFP-[...]

[...] immunizations. Ten-week-old female BALB/c mice were administered anti-Ifnar1 mAb and AdV-hACE2 as described above to render animals susceptible to SARS-CoV-2. Five days later, 100 μL of pooled immune or control sera was administered by intraperitoneal injection. One day later, mice were inoculated with 3 × 10^5 PFU of SARS-CoV-2 via the intranasal route (Fig 4A). Passive transfer of sera from animals vaccinated with VSV-eGFP-SARS-CoV-2 protected against SARS-CoV-2 infection compared to sera from the VSV-eGFP-immunized mice. At 4 dpi, lungs from animals treated with VSV-eGFP-SARS-CoV-2 immune sera from prime-only and boosted animals showed substantially reduced infectious virus burden (Fig 4B). Although not as [...], the heart of animals given sera from VSV-eGFP-SARS-CoV-2 boosted mice trended toward, but did not reach, statistical significance (Fig 4E). No effect was observed in the nasal washes of any treated group (Fig 4F), consistent with the results from our vaccinated and challenged animals (Fig 2E). To determine the effect of the passive transfer of sera on SARS-CoV-2-mediated inflammation, we assessed the induction of several cytokines in the lung at 4 dpi (Fig 4G).
2020-07-15T13:14:17.742Z
2020-07-10T00:00:00.000
{ "year": 2020, "sha1": "6b4f706e2db4de4733b86d8d55b4eb04e47b491d", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S1931312820304212/pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "7b3150c3f2a7367cec24e6a8de7443020d828949", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
257364777
pes2o/s2orc
v3-fos-license
Evolutionary deformation toward the elastic limit by a magnetic field confined in the neutron-star crust

Occasional energetic outbursts and anomalous X-ray luminosities are expected to be powered by the strong magnetic field in a neutron star. For a very strong magnetic field, elastic deformation becomes excessively large, such that it leads to crustal failure. We studied the evolutionary process driven by the Hall drift for a magnetic field confined inside the crust. Assuming that the elastic force acts against the Lorentz force, we examined the duration of the elastic regime and the maximum elastic energy stored before the critical state. The breakup time was longer than that for a field extending to the exterior, because the tangential components of the Lorentz force vanish in the fragile surface region. The conversion of the large magnetic energy confined to the interior into Joule heat is considered to explain the power of central compact objects. This process can function without reaching the elastic limit, unless the magnetic energy exceeds 2 × 10^47 erg, which requires an average field strength of 2 × 10^15 G. Thus, a strong magnetic field hidden in the crust is unlikely to cause outbursts. Furthermore, the magnetic field configuration can discriminate between central compact objects and magnetars.

INTRODUCTION
The magnetic field strength on a neutron-star surface is typically approximately 10^12 G. However, there are two peculiar classes whose field strengths significantly deviate from the average. They exhibit unusual activities, with their energy considered to be supplied by an intense magnetic field. Magnetars, except for a few sources, have a strong dipole field ≥ 10^14 G and exhibit energetic outbursts or flares. Their X-ray luminosity is very bright, in the range 10^32-10^36 erg s^-1, which exceeds the spin-down luminosity for most sources, in contrast to normal pulsars (Turolla et al. 2015; Kaspi & Beloborodov 2017; Enoto et al. 2019; Esposito et al. 2021, for reviews). Central compact objects (CCOs), located at the centers of supernova remnants, are X-ray sources with luminosity ∼10^32-10^34 erg s^-1. A few CCOs show pulsations; hence, the magnetic dipole field is estimated to be ∼10^10-10^11 G (Gotthelf et al. 2013). Their X-ray luminosities are comparable to those of quiescent magnetars (Kaspi & Beloborodov 2017) and exceed the kinetic energy loss. To explain the X-ray luminosity, CCOs are considered to have an intense magnetic field, ∼10^14 G, inside the neutron star, although the surface field is weak. Strong fields near the surface or inside the crust may explain the nonuniform temperature of PSR J0822-4300 in Puppis A (Gotthelf et al. 2010) and the large pulse fraction of PSR J1852+0040 in Kes 79 (Shabaltas & Lai 2012). Most of the magnetic field in CCOs is considered to be buried by the fallback of supernova material (Ho 2011; Viganò & Pons 2012). Numerical simulations can be used to solve the field geometry of the proto-neutron star (see, e.g., Matsumoto et al. 2022, for recent developments). Such a strong field, ∼10^14 G, is crucial for studying magnetized neutron stars. The Lorentz force is comparable in magnitude to the elastic force in the crust. However, only static magneto-elastic equilibria of the crust have been studied thus far (Kojima et al. 2021, 2022; Fujisawa et al. 2023). These studies demonstrated that neutron star models with strong magnetic fields are possible owing to the elasticity of the crust.
A magnetic field configuration was assumed in these studies. However, it is unclear whether a sufficient range of magnetic fields was covered. Herein, we consider the effect of the elastic force on the equilibria of magnetized neutron stars from a different perspective; in other words, we examine the process toward the elastic limit in the course of magnetic field evolution. Suppose that the magnetized crust settles into force balance at a particular time. The magnetic field is not fixed, however, and evolves on a secular timescale. Thus, elastic displacement is induced from the initial position to balance the Lorentz force as the field evolves. Simultaneously, shear stress in the crust gradually accumulates and eventually reaches a critical limit. Beyond this threshold, the crust either cracks (Duncan & Thompson 1992; Thompson & Duncan 1995, for seminal papers) or responds plastically; both possibilities are discussed because the material properties are not sufficiently understood. Sudden crust breaking can produce a magnetar outburst and/or a fast radio burst (Li et al. 2016; Baiko & Chugunov 2018; Suvorov & Kokkotas 2019). Alternatively, plastic flow beyond the critical point is crucial for the long-term evolution (Lander 2016), and a coupled system of the flow and the magnetic field has been solved numerically (Lander & Gourgouliatos 2019; Kojima & Suzuki 2020; Gourgouliatos & Lander 2021). Therefore, it is important to explore the timescale up to the critical limit and the elastic energy deposited during the evolution. In this study, we assumed that the stellar structure is always barotropic. The initial state of the evolution is described by a magneto-hydrodynamic (MHD) equilibrium without the elastic force. This equilibrium is not, in general, an equilibrium for the electron fluid (Gourgouliatos et al. 2013; Gourgouliatos & Cumming 2014); thus, the magnetic field tends toward Hall equilibrium on a secular timescale. The Hall-drift timescale, an important indicator of the evolution, becomes shorter as the magnetic field strength increases; thus, this study is relevant to neutron star crusts with strong magnetic fields. A previous study considered the evolution of a magnetic field that extends from the crust to the exterior (Kojima 2022, referred to as Paper I), although the toroidal component of the magnetic field was ignored. It is important to examine the evolution of different magnetic field geometries. A broad classification is based on whether the field is confined inside the crust or spreads out into the magnetosphere. The possibilities are schematically illustrated in Figure 1. For simplicity, we assume that the field is purely dipolar and is expelled from the neutron-star core. The extended case shown in Figure 1-a (left panel) corresponds to the magnetar model considered in Paper I. The toroidal component of the magnetic field cannot emerge in an exterior vacuum; it is confined inside a loop in the meridional plane of the poloidal component, as shown in Figure 1-b (middle panel) and Figure 1-c (right panel). The toroidal magnetic energy can potentially increase as the loop region expands. In contrast to Paper I, this study considers the entirely confined case shown in Figure 1-c, in which both the toroidal and poloidal components are confined in the crust. This magnetic field geometry can be applied to CCOs. The remainder of this paper is organized as follows. The models and equations used in this study are discussed in Section 2.
We calculated the quasi-stationary evolution of the shear strain induced by the Hall drift of a magnetic field. We estimated the critical time beyond which elastic equilibrium is no longer possible, as well as the elastic energy stored during the evolution. The numerical results are presented in Section 3. Finally, our conclusions are presented in Section 4.

Barotropic Equilibrium in Crust
Our consideration is limited to the inner crust of a neutron star, where the mass density ranges from ρ_c = 1.4 × 10^14 g cm^-3 at the core-crust boundary r_c to the neutron-drip density ρ_1 = 4 × 10^11 g cm^-3 at R = 12 km. We ignored the outer crust and treated the exterior region r > R as a vacuum. The crust thickness Δr ≡ R − r_c was assumed to satisfy Δr/R = 0.05, i.e., Δr = 0.6 km. The spatial density profile in r_c ≤ r ≤ R is approximated by the analytic profile of Equation (1) (Lander & Gourgouliatos 2019).

We consider the equilibrium in the crust. Under the Newtonian approximation, the static force balance between the pressure P, gravity, and the Lorentz force is expressed by Equation (2), where Φ_g is the gravitational potential including the centrifugal term. We assume a barotropic distribution, P = P(ρ), so that the sum of the first two terms in Equation (2) is expressed as −∇Φ_eff. The third term has a magnitude ∼10^-7 (B/10^14 G)^2 times smaller than those of the first and second terms. The deviation due to the Lorentz force is therefore sufficiently small to be treated as a perturbation of the background equilibrium.

[Figure 1. Magnetic field geometry in the crust. Contours of the magnetic function Ψ(r, θ) for a poloidal field are shown in panels a-c. From the left to the right panels, the field is more strongly confined, and the toroidal component is also involved in the interior.]

We assumed axial symmetry for the magnetic-field configuration. The poloidal and toroidal components of the magnetic field are expressed by two functions, Ψ and S, respectively, as in Equation (3), where ϖ = r sin θ is the cylindrical radius and e_φ is the azimuthal unit vector in (r, θ, φ) coordinates. For barotropic equilibrium, the current function S must be a function of Ψ, and the azimuthal component j_φ of the electric current takes the form of Equation (4) (e.g., Tomimura & Eriguchi 2005), where K denotes a function of Ψ. The acceleration a owing to the Lorentz force then reduces to Equation (5), and the force balance in Equation (2) is described by gradient terms of scalar functions. We adopted simple linear functions of Ψ for K(Ψ) and S(Ψ): K = K_0 Ψ and S = κΨ, where K_0 and κ are constants. For the dipole field, the function Ψ is expressed using the Legendre polynomial of l = 1, that is, Ψ = g(r) sin²θ. After decomposition of the angular part, the azimuthal component of the Ampère law reduces to an ordinary differential equation for g(r), Equation (6), where a prime (′) denotes a derivative with respect to r. We consider a magnetic field confined in the crust, such that the radial function g is obtained by solving Equation (6) with the boundary conditions g(r_c) = g(R) = 0. The solutions of Equation (6) without the source term are spherical Bessel functions; a homogeneous solution satisfies the boundary conditions only for specific values, κΔr ≈ nπ (n = 1, 2, …). This solution corresponds to the force-free case j × B = 0, that is, K_0 = 0 in Equation (5). The constant K_0 determines the overall magnetic field strength, whereas κ determines the ratio of the poloidal and toroidal components. The dipolar magnetic field considered in Paper I is purely poloidal (κ = 0) and extends to the exterior vacuum.
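The displayed equations of this subsection did not survive extraction. As a reading aid, the following are the standard axisymmetric forms consistent with the surrounding definitions; this is a hedged reconstruction, not a transcription of the paper's Equations (2), (3), and (6), and the prefactor of the source term in the last line is left unspecified.

```latex
% Reconstruction from the definitions in the text; not the paper's displays.
\begin{align}
  -\frac{1}{\rho}\nabla P - \nabla\Phi_{g}
    + \frac{1}{c\rho}\,\boldsymbol{j}\times\boldsymbol{B} = 0
    \quad &\text{(cf. Eq.\,(2))},\\
  \boldsymbol{B} = \nabla\Psi\times\frac{\boldsymbol{e}_{\phi}}{\varpi}
    + \frac{S}{\varpi}\,\boldsymbol{e}_{\phi}
    \quad &\text{(cf. Eq.\,(3))},\\
  g'' - \frac{2}{r^{2}}\,g + \kappa^{2} g \;\propto\; -K_{0}\,\rho\,r^{2}
    \quad &\text{(cf. Eq.\,(6), with } S=\kappa\Psi,\ K=K_{0}\Psi\text{)}.
\end{align}
```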
The magnetic energy E_p stored inside the crust is expressed as E_p = 3.8 B_0² R³ in terms of the field strength B_0 at the surface. When studying the different models confined to the crust, we always fixed the poloidal magnetic energy to E_p = 3.8 B_0² R³; the average field strength is then approximately 12 B_0, and the normalization B_0 also fixes K_0 for each model. Figure 2 shows the ratio of the toroidal magnetic energy E_t to the poloidal magnetic energy E_p as a function of κΔr. A similar result was obtained for a magnetic field confined in the whole star (Fujisawa & Eriguchi 2013). The ratio increases with κΔr and reaches a maximum of ∼1.4; with a further increase, the ratio oscillates between a minimum of ∼1 and a maximum of ∼1.4. The spatial structure of the magnetic function Ψ changes continuously, and the number of radial nodes increases with κΔr, as shown in Figure 2. The node number n is approximated as n ≈ κΔr/π.

Magnetic-Field Evolution
The force balance expressed by Equation (2) is not fixed on a secular timescale, because the Lorentz force j × B gradually changes owing to magnetic field evolution. The evolution of the crustal magnetic field is governed by the induction equation (7), where n_e is the electron number density and σ_e is the electrical conductivity. The first term in Equation (7) represents the Hall drift, and the second term represents magnetic decay due to ohmic dissipation. The timescales associated with these processes are estimated by Equations (8) and (9), where B_0 denotes the normalization of the magnetic field strength and the crust thickness Δr = 0.05R = 0.6 km is used. In Equations (8) and (9), we used the maximum values of n_e and σ_e, that is, the values at the core-crust boundary; the actual timescales may be smaller, owing to the spatial dependence of n_e, σ_e, B, and j. Here, we estimate the magnetic decay for the confined models considered in the previous subsection using numerical calculations. The energy decay time t_d is defined by Equation (10), where E_B denotes the total magnetic energy; during this period, the magnetic energy is converted to heat at the rate given by Equation (11).

[Figure 4. Electric currents in the crust. The radial functions (j_r/cos θ, j_θ/sin θ, j_φ/sin θ) are shown for three models. The left to right panels correspond to the extended field, the confined field with κΔr = 3, and the confined field with κΔr = 6, respectively. The amplitudes are normalized by the maximum of |j|. The radial component j_r for the confined models is small, typically j_r ∼ (Δr/R) × j_θ = j_θ/20, so j_r in the figures is multiplied by a factor of 10.]

To study the effect of the magnetic geometry, we also computed the dissipation timescale for the field extending to the exterior, for which t_d/τ_Ohm = 5.9 × 10^-2. This value is larger than that for the confined field. Among the confined field models, t_d/τ_Ohm decreases slightly with increasing κΔr: as the number of radial nodes increases, the typical length scale, and hence the dissipation time, decreases. The stepwise curve of t_d/τ_Ohm in Figure 3 corresponds to this transition. We next consider the electric current distribution. The angular dependence is expressed as j_r ∝ cos θ, j_θ ∝ sin θ, and j_φ ∝ sin θ, because the magnetic field is dipolar (l = 1). Figure 4 shows the radial functions of the electric currents for the three models. The current for the extended model in the left panel is maximum at the inner boundary r = r_c and decreases radially.
However, for the confined models, we found that j_θ is large at the surface, while j_r = j_φ = 0 there. The nonzero j_θ near the surface decays significantly because of the low electrical conductivity in that region, which accounts for the difference in t_d/τ_Ohm between the confined and extended geometries.

Hall-drift Evolution
The Hall-Ohmic equation (7) has been solved numerically in previous studies (e.g., Gourgouliatos et al. 2020; Igoshev et al. 2021). Here, we limit the evolution to the early phase, such that the calculation is simplified. A comparison of the two timescales in Equations (8) and (9) shows that the magnetic field evolution is governed by the Hall drift in the strong-magnetic-field regime. In the range t < t_d ∼ 10^4 yr, we may neglect ohmic decay in Equation (7), and the induction equation reduces to the form of Equations (12)-(15). In Equation (15), χ̄ is the non-dimensional ratio of the mass density to the electron number density, approximated by a smooth analytic function fitted to the data of Douchin & Haensel (2001) (see Paper I), where ρ̄ = ρ/ρ_c is given by Equation (1). The second term in Equation (14) vanishes at t = 0 owing to Equation (5), because barotropic equilibrium is assumed; moreover, the first term vanishes when χ̄ is constant. In other words, a barotropic MHD equilibrium with constant χ̄ is also a Hall equilibrium for the electrons (Gourgouliatos et al. 2013). We therefore consider the magnetic-field evolution driven by the nonuniform distribution of χ̄. The early phase of the departure from barotropic MHD equilibrium is governed by Equation (16): the azimuthal component B_φ changes linearly with time t. We ignore the changes in the poloidal magnetic field, δB_p, and in the associated azimuthal current, δj_φ. The early-phase toroidal field perturbation δB_φ can be approximated by Equation (17), where the function δS(r, θ) is expressed explicitly in terms of the Legendre polynomial P_l(cos θ) of l = 2 as δS ≡ y(r) sin θ P_2,θ = [2K_0 ρ_c (Δr)²/(3B_0)] χ̄′ g sin θ P_2,θ, with the radial function y defined for convenience. The changes in the poloidal current are associated with δB_φ; thus, the Lorentz force also changes, δf = c^-1 (δj × B + j × δB), and is written explicitly in Equation (20).

Quasi-stationary Elastic Response
The force balance deviates slightly from the initial state owing to the change in the Lorentz force through the magnetic field evolution. The acceleration associated with Equation (20) is, in general, the sum of solenoidal and irrotational components. The solenoidal part of the Lorentz force must be balanced by additional forces when the material distribution is barotropic, because the sum of the pressure and gravitational potential terms, expressed as −∇δΦ_eff, is irrotational. The elastic force δh in the solid crust is assumed to act against the solenoidal part. Note that the elastic force is purely solenoidal for incompressible motion in the case of a constant shear modulus μ; that is, δh = −μ∇ × ∇ × ξ, with the displacement vector ξ (e.g., Landau & Lifshitz 1959). In general, the force contains both solenoidal and irrotational parts and is expressed in terms of the trace-free strain tensor σ_ij and μ, as in Equations (21)-(23), where incompressible displacement, ∇ · ξ = 0, is assumed. In addition, we assumed that the shear modulus μ is proportional to the density (Figure 43 in Chamel & Haensel 2008), such that the shear speed v_s is constant throughout the crust and μ = v_s²ρ, with v_s = 8.5 × 10^7 cm s^-1. The shear modulus is maximum, μ_c ≈ 10^30 erg cm^-3, at the core-crust interface, and decreases toward the stellar surface, to μ_1 ≈ 3 × 10^27 erg cm^-3 at ρ_1.
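A quick arithmetic check that the two quoted moduli follow from μ = v_s²ρ with the stated shear speed and densities:

```python
# Consistency check of the quoted shear moduli with mu = v_s^2 * rho,
# using the values given in the text above (CGS units).
v_s = 8.5e7                  # cm/s, constant shear speed assumed in the paper
rho_c = 1.4e14               # g/cm^3, core-crust boundary density
rho_1 = 4.0e11               # g/cm^3, neutron-drip density

mu_c = v_s**2 * rho_c        # ~1.0e30 erg/cm^3, matches mu_c ~ 1e30
mu_1 = v_s**2 * rho_1        # ~2.9e27 erg/cm^3, matches mu_1 ~ 3e27
print(f"mu_c = {mu_c:.2e} erg/cm^3, mu_1 = {mu_1:.2e} erg/cm^3")
```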
The elastic evolution is extremely slow; hence, the acceleration ∂²ξ_i/∂t² can be ignored. Consequently, the elastic force balances the change in the Lorentz force at any time; that is, the evolution is quasi-stationary. Under the approximation that the solenoidal part, i.e., the curl of the acceleration owing to the Lorentz force, is balanced by that of the elastic force, we obtain the set of equations (24) and (25). Note that we consider only the azimuthal component in Equation (25), because the other poloidal components are redundant once Equation (24) and axial symmetry (∂_φ = 0) are used. The terms involving the Lorentz force in Equations (24) and (25) are expressed in terms of the Legendre polynomials P_l(cos θ) of l = 1, 3, where a_l and b_l (l = 1, 3) are radial functions; a_1 is proportional to (gy)′, and a_3 = −[2(gy)′ − 5g′y]/(10π). The elastic displacement grows linearly with t, as explicitly expressed in Equation (30), where the radial functions x_l and k_l (l = 1, 3) are determined using Equations (24) and (25) (Kojima et al. 2022) and satisfy the radial equations (31)-(33). We now discuss the boundary conditions for this set of ordinary differential equations. Across the surfaces at r_c and R, the change in the total stress tensor must vanish for force balance; in other words, 2μσ_ri + δT_ri = 0 (i = r, θ, φ), where δT_ij denotes the magnetic stress. Because δB_p = 0 and B_r = B_φ = 0 at the boundaries, the boundary conditions reduce to σ_rr = σ_rθ = σ_rφ = 0. These conditions for the radial functions k_l, x_l, and w_l (l = 1, 3) at r_c and R can be written explicitly; for example, 2μr x′_l − 2μ l(l + 1) x_l + r² w_l = 0.

RESULTS
Breakup Time and Accumulated Energy
By solving the differential equations, we obtain the shear stress σ_ij, whose magnitude increases with time while its spatial profile remains unchanged. The numerical calculations provide the maximum shear stress with respect to (r, θ) in the crust. Elastic equilibrium can be maintained up to the breakup time t_*, only while the shear strain satisfies a particular criterion. We adopted the von Mises criterion, Equation (38), to determine the elastic limit, where σ_max denotes the critical strain, σ_max ≈ 10^-2-10^-1 (Horowitz & Kadau 2009; Caplan et al. 2018; Baiko & Chugunov 2018). The period of the elastic response is then expressed by Equation (39), where t̂_b is a numerical factor that depends on the parameter κ. The criterion in Equation (38) depends on the ratio of the shear to magnetic forces and is characterized by the shear speed v_s and the Alfvén speed v_a, defined in Equation (40), where B_0 is determined by the poloidal magnetic energy E_p, as discussed in Subsection 2.1. Figure 5 illustrates t̂_b as a function of κΔr. The value is typically t̂_b = 0.1-1, except for sharp peaks at κΔr ≈ nπ (n = 1, 2, 3, …), which correspond to the force-free cases with K_0 → 0. It is interesting to compare this with the value t̂_b = 1.5 × 10^-3 for the dipolar magnetic field extending to the exterior vacuum with the same E_p (Paper I): the breakup time for the confined field is significantly longer. This difference is related to the spatial shear distribution determined by the magnetic-field geometry and is discussed in the next subsection. The elastic energy ΔE_elas increases with the square of the time t. We numerically integrated it over the entire crust and obtained Equation (41), where Ê_elas is a numerical factor and ΔE_elas is normalized by μ_1 R³ = 5.0 × 10^45 erg, using μ_1 at ρ_1. The change in magnetic energy ΔE_mag associated with δB_φ is expressed by Equation (42), where Ê_mag is a numerical factor.
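Equations (38) and (39) did not survive extraction. The following reconstruction, assembled from the definitions given in the text, captures their content; it is a sketch under stated assumptions, and the exact numerical prefactors belong to the original paper, not to this rendering.

```latex
% Hedged reconstruction of the elastic-limit criterion and breakup time.
\begin{align}
  \sigma \equiv \Bigl(\tfrac{1}{2}\,\sigma_{ij}\sigma_{ij}\Bigr)^{1/2}
     \le \sigma_{\max}
     \quad &\text{(von Mises criterion, cf. Eq.\,(38))},\\
  t_{*} \sim \hat{t}_{b}\,\sigma_{\max}
     \left(\frac{v_{s}}{v_{a}}\right)^{2}\tau_{\rm Hall}
     \quad &\text{(cf. Eq.\,(39))}.
\end{align}
% With tau_Hall ~ B_0^{-1} and (v_s/v_a)^2 ~ B_0^{-2}, this reproduces the
% t_* ~ sigma_max * B_0^{-3} scaling quoted in the Discussion below.
```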
The poloidal magnetic energy E_p = 4 × 10^46 (B_0/10^14 G)² erg was chosen as the normalization, and the other factors were evaluated using t_*. Both Ê_elas and Ê_mag are shown in Figure 6 as functions of κΔr. The numerical factor Ê_elas decreases, whereas Ê_mag increases; the changes with respect to κΔr are not large, and both Ê_elas and Ê_mag are O(1) for all models. The numerical coefficients in front of Equations (38), (41), and (42), as well as t_d/τ_Ohm, are summarized in Table 1. These numbers were also compared with those in Paper I; a significant difference arises from the magnetic configuration. The breakup time for the confined model is typically 10² times longer than that for the extended model. This longer timescale leads to larger energies Ê_elas and Ê_mag, which typically increase by a factor of 10^4, corresponding to the square of the accumulation time. However, the longer timescale constrains the epoch or the magnetic field strength, because ohmic decay was neglected. The condition t_* ≤ t_d gives B_0 ≥ 3.2 × 10^13 (σ_max/0.1)^(1/3) G for the extended model; for the confined model, the minimum field strength is larger by a factor of 5, B_0 ≥ 1.6 × 10^14 (σ_max/0.1)^(1/3) G at least.

Figure 7 shows the magnitude of the shear, σ ≡ (σ_ij σ_ij/2)^(1/2), in the crust. We found that the shear associated with the axial displacement ξ_φ is significantly larger than that of the polar displacement ξ_p = (ξ_r, ξ_θ). This is related to the thinness of the crust, Δr/R = 0.05; typically, |ξ_p| ∼ (Δr/R) × |ξ_φ|. The polar displacement ξ_p is induced only when the initial MHD equilibrium contains a toroidal component of the magnetic field (b_l = 0 for κ = 0 in Equation (29)). However, the shear distribution shown in Figure 7 remains almost unchanged when κ, which determines the ratio of the toroidal to poloidal components, is varied. The position of the maximum of σ shifts slightly to larger radii as the number of radial nodes increases. The question that arises is the origin of the significant difference, for example in the breakup time, between the extended model and the confined models. The initial equilibrium for the former, considered in Paper I, was purely poloidal. In contrast to the shear distribution shown in Figure 7, σ for the extended model exhibits a sharp peak at the surface at θ = ±cos^-1(1/√3) (see Figure 2 in Paper I). The peak originates from the acceleration a_θ ≠ 0 at the surface of the extended model. In contrast, a_θ = 0 at the surface of the confined model, because Ψ = 0 there (Equation (5)). The surface region is very fragile because of its weak shear modulus; this region is avoided in the confined model, so the breakup time to the elastic limit increases.

DISCUSSION
We considered the elastic deformation induced by the evolution of a magnetic field. The effect of the magnetic field geometry was studied and compared with the results of Paper I. When the field is confined to the interior, the breakup time to the elastic limit increases by a factor of 10² compared with a field extending to the exterior with the same magnetic energy. Accordingly, a larger elastic energy is deposited before crustal fracture, typically ∼10^45 erg. The accumulated energy depends on the magnetic field geometry and is independent of the strength B_0. The breakup time t_* is proportional to B_0^-3: t_* ∼ 10^5 (σ_max/0.1)(B_0/10^14 G)^-3 years. As the ohmic decay of the magnetic field was neglected, our result is valid for stronger fields, B_0 ≥ 2 × 10^14 (σ_max/0.1)^(1/3) G at least.
Further, the average strength then exceeds 2 × 10^15 (σ_max/0.1)^(1/3) G, the magnetic energy in the crust exceeds 2 × 10^47 erg, and the breakup time corresponding to the minimum strength is ∼10^4 years. Unless the field strength B_0 is significantly larger than 2 × 10^14 G, the elastic deformation does not reach the critical limit. The magnetic field in CCOs may therefore be stable, decaying gradually through Joule loss. Our magnetic configuration is limited to a simple case; that is, it has a dipolar angular structure and a few radial nodes. The elastic limit in a more general configuration is worth discussing on the basis of the present results, because a realistic magnetic field in CCOs is more complicated. When the number of nodes increases in either the angular or the radial direction, the spatial size of the region around the maximum shear strain decreases, and the elastic energy deposited before the critical limit decreases. Assuming that the accumulated elastic energy is released during an outburst, such an event would be less energetic. The number of radial nodes may be particularly important, because the stellar structure changes significantly in the radial direction, and the outer part near the surface is more prone to breaking. Simultaneously, ohmic dissipation becomes effective near the surface. Therefore, it would be interesting to investigate whether a small-scale irregularity in the magnetic field leads to the elastic limit or simply decays. In the highly tangled limit, however, the magnetic field is irregular on small scales and its direction is random; the magnetic force can then be regarded as an isotropic magnetic pressure, which is an irrotational force. It is difficult for such a force to drive elastic deformation; thus, the confined field is stable against elastic fracture. A strong magnetic field in CCOs is hidden in the crust and is unlikely to lead to the outbursts that occur in magnetars, although the field strengths of the two classes are of the same order; the field geometry exhibits a remarkable difference. Observations of burst events have not been reported, except for the CCO in RCW 103, whose central neutron star is classified as a magnetar with a spin period of ∼6.7 h (D'Aì et al. 2016). Future studies will examine the simple idea that different field geometries result in the occurrence or absence of outbursts in strongly magnetized neutron stars.
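A quick numerical check of the quoted scaling; the formula is taken directly from the Discussion above, with σ_max and B_0 as free inputs.

```python
def breakup_time_years(b0_gauss: float, sigma_max: float = 0.1) -> float:
    """Breakup time t* ~ 1e5 (sigma_max/0.1) (B0 / 1e14 G)^-3 years,
    as quoted in the Discussion of the paper above."""
    return 1e5 * (sigma_max / 0.1) * (b0_gauss / 1e14) ** -3

for b0 in (1e14, 2e14, 1e15):
    print(f"B0 = {b0:.0e} G -> t* ~ {breakup_time_years(b0):.2e} yr")
# B0 = 2e14 G gives t* ~ 1.25e4 yr, consistent with the ~1e4 yr quoted above.
```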
2023-03-07T06:42:11.116Z
2023-03-04T00:00:00.000
{ "year": 2023, "sha1": "04a86ef39023fd473ef5c99fe1005c439ba38b1e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "04a86ef39023fd473ef5c99fe1005c439ba38b1e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
207814940
pes2o/s2orc
v3-fos-license
Analysis of Transcriptome, Selected Intracellular Signaling Pathways, Proliferation and Apoptosis of LNCaP Cells Exposed to High Leptin Concentrations

Leptin, the first discovered adipokine, has been connected to various physiological and pathophysiological processes, including cancerogenesis. Increasing evidence confirms its influence on prostate cancer cells. However, studies on the effects of leptin on the proliferation and apoptosis of the androgen-sensitive LNCaP line of prostate cancer cells have brought conflicting results. Therefore, we performed studies on the effects of a high LEP concentration (1 × 10−6 M) on the gene expression profile, changes in selected signaling pathways, and the proliferation and apoptosis of LNCaP cells. RTCA (real-time cell analyzer) revealed an inhibitory effect of LEP on cell proliferation, whereas lower LEP concentrations (10−8 and 10−10 M) did not affect cell division. Moreover, flow cytometry with a specific antibody against cleaved PARP-1, an apoptosis marker, confirmed the activation of apoptosis in leptin-exposed LNCaP prostate cancer cells. Within 24 h, LEP (10−6 M) increased the expression of 297 genes and decreased the expression of 119 genes. Differentially expressed genes (DEGs) were subjected to functional annotation and clusterization using the DAVID bioinformatics tools. Most ontological groups are associated with proliferation and apoptosis (seven groups), immune response (six) and extracellular matrix (two). These results were confirmed by Gene Set Enrichment Analysis (GSEA). The effect of leptin on apoptosis stimulation was also confirmed using the Pathview library, and these results were further confirmed by qPCR. The results of Western blot analysis (exposure to LEP for 10 min, 1, 2, 4 and 24 h) suggest a decrease (after 24 h) in p38 MAPK, p44/42 mitogen-activated protein kinase and Bcl-2 phosphorylated at threonine 56. Moreover, exposure of LNCaP cells to LEP significantly stimulates the secretion of matrix metallopeptidase 7 (MMP7). The obtained results suggest the activation of apoptotic processes in LNCaP cells cultured at a high LEP concentration; at the same time, this activation is accompanied by inhibition of the proliferation of the tested cells.

Introduction
Leptin (LEP) is predominantly produced and secreted by adipose tissue, functioning mainly in the regulation of energy balance and food intake. Because LEP and its receptors are widely expressed, LEP is also a multifunctional pleiotropic hormone, acting as an auto-, para- and endocrine signal. An increasing body of evidence indicates that the influence of LEP extends to several hypothalamic-pituitary-endocrine axes, including the adrenal and thyroid axes and the pancreatic islets. Moreover, a role of leptin in immune function, haematopoiesis, osteogenesis and angiogenesis has also been documented [1]. LEP levels in human blood (normal blood levels are reported to be 1-15 ng/mL) increase in several diseases, including metabolic disorders leading to obesity. Hyperleptinemia is also associated with the pathogenesis of some cancer types [2][3][4][5][6][7]. It was shown that elevated serum and tissue LEP levels are involved in the pathogenesis of lung cancer and tumor metastasis [8]. Hyperleptinemia is also associated with metastases of melanoma to lymph nodes [9] and is considered a pathophysiological factor in the pathogenesis of breast cancer [10]. In the literature, there are also several reports on the role of LEP in the progression of prostate cancer [11][12][13][14][15][16].
Earlier studies on rats identified various isoforms of the LEP receptor in the rat prostate and seminal vesicles and suggested that this cytokine exerts a stimulating effect on the proliferation of the epithelial cells of these organs [17][18][19][20]. However, data on LEP receptor expression in the human prostate are very scarce. Cioffi et al. identified different variants of the LEP receptor (LEPR) in human prostate tissue using the Northern blot method [21]. In other studies, using the RT-PCR method, the following LEPR variants were found to be expressed in tissue samples from benign prostatic hyperplasia (BPH): variants 4 and 2 (in all five samples studied), var. 5 (3/5), and var. 6 and 3 (4/5) [22]. Moreover, Leze et al. demonstrated that incubation of human hyperplastic prostate tissue fragments with LEP (50 ng/mL) for 3 h significantly stimulates the proliferation of epithelial cells and the expression of the pro-apoptotic BAX gene [23]. Difficulties in acquiring appropriate prostate fragments have led various research groups to perform research on human normal prostate and prostate cancer cell lines. However, the expression of the different LEPR variants in these cells differs significantly [24]. There are also differences in the results of research on the role of LEP in the regulation of the proliferation and apoptosis of these cell lines. In the case of LNCaP cells, LEP either does not change the proliferation rate [24][25][26], stimulates it [27,28], or, at a high concentration of the tested cytokine (1 × 10−6 M), inhibits the growth of these cells [24]. Regarding the latter finding, it should be stressed that at a comparable concentration of LEP (12.5 µg/mL) no proliferation changes were observed in LNCaP cells [25], whereas in DU145 cells this concentration of the cytokine stimulated the proliferation of the studied cells [29]. Considering the abovementioned discrepancies, we decided to analyze the effect of a high concentration of LEP on the proliferation, gene expression profile and selected signaling pathways of LNCaP cells.

Leptin at a Dose of 1 × 10−6 M Exerted an Inhibitory Effect on the Proliferative Activity of LNCaP Cells and Stimulated Apoptosis
Using a real-time proliferation assay, we examined the effect of LEP, at concentrations of 10−6, 10−8 and 10−10 M, on the proliferation rate of LNCaP cells. As shown in Figure 1A, LEP at a dose of 1 × 10−6 M led to a significant inhibition of LNCaP cell proliferation. The two lower LEP concentrations (10−8, 10−10 M) did not affect the proliferation rate of the cultured cells. Therefore, further studies were performed with LEP at a dose of 1 × 10−6 M in relation to the control group. Based on median fluorescence intensity, LNCaP cells treated with the highest LEP concentration (1 × 10−6 M) revealed a 30% higher level of apoptosis in comparison with untreated (control) cells (Figure 1B). In LNCaP cells treated with the lower concentrations of leptin (10−8 and 10−10 M), we did not observe statistically significant differences (data not shown).

Leptin at a Dose of 1 × 10−6 M Significantly Modulates the Transcriptomic Profile of LNCaP Cells
The GeneChip Human Genome U219 Array Strips used in the current study allowed the simultaneous examination of the gene expression of 19,285 human transcripts. The transcriptome study was performed 24 h after LEP administration (1 × 10−6 M) to the culture medium. The transcriptome profile was compared with that of the untreated (control) group.
The general profile of transcriptome changes is shown as a volcano plot (Figure 2A), in which each dot represents the mean expression (n = 2) of an individual gene obtained from the normalized microarray dataset. The orange dotted lines (cut-off values) were established according to the following parameters: fold = |2| and p-value with FDR correction = 5%. Genes above the cut-off lines were considered differentially expressed genes (DEGs) and are shown as turquoise dots; the total numbers of DEGs are presented in the bottom right corner of the graph, and the ten most regulated genes are marked by their gene symbols. Figure 2B lists the 20 genes with the highest (10 genes) and lowest (10 genes) fold changes obtained from the datasets of differentially expressed genes. We assumed the following selection criteria for DEGs: an expression fold difference > absolute 2 and an adjusted p-value ≤ 0.05. According to these criteria, 297 genes were up-regulated, while 119 were down-regulated as a consequence of LEP action. The ten genes with the highest and lowest fold change values are presented in tabular format, displaying the gene symbol, gene name, fold change and adjusted p-value (Figure 2B). These genes were characterised by high fold change values, especially among the up-regulated genes (fold range for up-regulated genes: 99.66 to 17.64; for down-regulated genes: −15.70 to −3.63). Among others, these genes include: chemokine (C-C motif) ligand 20 (CCL20, fold = 99.66), matrix metallopeptidase 7 (MMP7, fold = 62.57), and tumor necrosis factor, alpha-induced protein 3 (TNFAIP3, fold = 23.29).

LEP at 1 × 10−6 M Concentration Exerts a Significant Effect on the Genes Involved in the Regulation of the Following Biological Processes: Apoptosis, Immunological Response and Extracellular Matrix Organisation
To determine which biological processes can be regulated by LEP, we performed an analysis of the enrichment of the relevant ontological groups from the GO BP Direct database. The whole set of differentially expressed genes (DEGs), consisting of 416 genes (297 up- and 119 down-regulated), was subjected to functional annotation and clusterization using the Database for Annotation, Visualization, and Integrated Discovery (DAVID) bioinformatics tools. The result of this analysis is shown as a bubble plot (Figure 3), in which we present only the ontological groups fulfilling the following criteria: adjusted p-value below 0.05 and more than 10 genes per group. The significantly enriched groups included, among others: "GO:0060333~interferon-gamma-mediated signaling pathway" (n = 13, adj. p-value = 6.41 × 10−5), "GO:0050852~T cell receptor signaling pathway" (n = 13, adj. p-value = 0.02), "GO:0045087~innate immune response" (n = 24, adj. p-value = 0.02), "GO:0006955~immune response" (n = 28, adj. p-value = 5.32 × 10−4), "GO:0006954~inflammatory response" (n = 29, adj. p-value = 9.12 × 10−5), "GO:0030198~extracellular matrix organization" (n = 17, adj. p-value = 2.03 × 10−3), "GO:0022617~extracellular matrix disassembly" (n = 11, adj. p-value = 1.94 × 10−4), and "GO:0000165~MAPK cascade" (n = 16, adj. p-value = 0.03).
The above results were confirmed by another powerful bioinformatic tool, Gene Set Enrichment Analysis (GSEA). In this analysis, the fold change values of all genes were log2-transformed and ranked according to their logFC. These values were then used in a 1000-permutation test to calculate the enrichment score (ES) within predefined gene sets from the Hallmark database. Enrichment scores were normalized with respect to gene set size and presented as normalized enrichment scores (NES). The result of the GSEA analysis for the ranked log2 fold change values of LEP (1 × 10−6 M) vs. the control group is presented in Figure 4, and the ten Hallmark database groups with the highest absolute NES values are presented in Figure 4A. Within the gene-ranks column, each vertical line represents one gene, and its position depends on the logFC value: around rank 0 are genes with high logFC values, on which LEP had a stimulating effect (enriched genes); on the right side, around rank 19,285, are genes whose expression was lowered by LEP action and which had low logFC values (depleted genes). Despite a different methodological approach, the GSEA analysis yielded groups similar to those identified in the analysis of ontological groups by DAVID. These groups concern apoptosis/proliferation and immunological processes. The positions of single genes on the gene-rank scale, and consequently the NES values of the individual groups, indicate that LEP stimulates the expression of genes in the immune response hallmark group. Genes involved in the regulation of apoptosis are also strongly stimulated by LEP; the enrichment within the Hallmark apoptosis group is characterized by a high positive value of NES = 4.25 and includes genes with very high logFC values, as shown in Figure 4C. Interestingly, the GSEA analysis also showed a significant decrease in the expression of genes closely related to proliferation, belonging to the Hallmark sets "mitotic spindle" (NES = −2.92) and "G2M checkpoint" (NES = −4.79). These groups are composed of genes with very low logFC values, and their expression is therefore suppressed by LEP (Figure 4C).
Detailed Analysis of the LEP-Regulated Gene Expression in LNCaP Cells
Because the DAVID and GSEA analyses indicated potent regulation by LEP of genes related to proliferation/apoptosis, and because of the significant decrease in proliferation under the influence of LEP in the RTCA study, in the next step we analysed specific genes belonging to the following ontological groups: "apoptotic process", "regulation of cell proliferation", "regulation of apoptotic process", "negative regulation of cell proliferation", and "positive regulation of apoptotic process". The fold change values of the genes forming the mentioned groups were used to calculate a Z-score, which indicates whether a process is decreased (negative value) or increased (positive value), and the results were presented as circular scatter plots. The Z-score was calculated automatically using the GOplot library (defined there from the counts of up- and down-regulated DEGs in a term as z = (n_up − n_down)/√(n_up + n_down)). Despite the presence of several genes with decreased expression, all of the analysed ontological groups were characterised by a positive Z-score, confirming the stimulating effect exerted by LEP on the given processes (Figure 5A). Due to the ambiguous nature of the gene ontology structure, single genes can often be assigned to many ontological terms. For this reason, the relationships between genes and GO terms were mapped with circos plots, with visualization of logFC values and gene symbols (Figure 5B). The most strongly up-regulated genes from the examined ontological groups included, among others: BIRC3 (baculoviral IAP repeat containing 3), FAS (Fas cell surface death receptor), TNFAIP3 (tumor necrosis factor, alpha-induced protein 3), TNF (tumor necrosis factor), and GADD45G (growth arrest and DNA-damage-inducible protein GADD45 gamma).
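As a concrete illustration of the two computational steps used repeatedly above, DEG selection (|fold| > 2, FDR-adjusted p ≤ 0.05) and a GSEA-style running enrichment score, here is a minimal sketch. The data layout, the plain t-test, and the weighting are simplifying assumptions for demonstration, not the paper's actual microarray/GSEA pipeline.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def select_degs(expr_lep, expr_ctrl, genes, fold_cut=2.0, alpha=0.05):
    """DEG call mirroring the stated criteria (|fold| > 2, FDR-adjusted
    p <= 0.05). Inputs: genes x replicates numpy arrays of normalized
    log2 intensities; the t-test is an illustrative stand-in."""
    _, p = stats.ttest_ind(expr_lep, expr_ctrl, axis=1)
    _, p_adj, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    log2fc = expr_lep.mean(axis=1) - expr_ctrl.mean(axis=1)
    keep = (np.abs(log2fc) >= np.log2(fold_cut)) & (p_adj <= alpha)
    return [(g, f, q) for g, f, q, k in zip(genes, log2fc, p_adj, keep) if k]

def enrichment_score(ranked_genes, gene_set, scores, p=1.0):
    """GSEA-style weighted Kolmogorov-Smirnov running sum (Subramanian et
    al., 2005). ranked_genes: genes sorted by decreasing log2FC; scores:
    the matching log2FC values. Returns the signed extremum of the walk."""
    scores = np.asarray(scores, dtype=float)
    in_set = np.isin(np.asarray(ranked_genes), list(gene_set))
    hit = np.where(in_set, np.abs(scores) ** p, 0.0)
    hit = hit / hit.sum()                       # increments at set members
    miss = np.where(in_set, 0.0, 1.0) / (~in_set).sum()
    running = np.cumsum(hit - miss)
    return running[np.argmax(np.abs(running))]
```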
[Figure 5. (A) Circular scatter plots of differentially expressed genes involved in specific GO terms (positive regulation of apoptotic process, apoptotic process, regulation of cell proliferation, regulation of apoptotic process, negative regulation of cell proliferation). Each dot represents a single gene whose expression is increased (green) or decreased (red) due to LEP action; the positive Z-score values, mapped on a red colour scale, are presented inside the graph. (B) Circos plots showing the interdependence between the selected GO terms and their genes. Symbols of DEGs are presented on the left side of the graph with their fold change values, mapped by a colour scale (green = higher expression; red = lower expression); gene involvement in the GO terms is indicated by coloured connecting lines.]
The leptin effect on apoptosis stimulation was also confirmed using the Pathview library. Fold values of DEGs were mapped with appropriate colours for each gene forming the "Apoptosis" (Figure 6) and "p53 signaling pathway" (Figure 7) pathways. Green indicates statistically significant gene up-regulation, red refers to down-regulated genes, and grey marks genes whose expression was not significantly changed. Analogous to the DAVID analysis with circos-plot visualization, most of the genes displayed in "Apoptosis" and "p53 signaling pathway" were up-regulated. This result confirms the stimulatory effect of LEP on apoptosis activation; moreover, it is consistent with the previously described LEP effect causing a decrease in LNCaP cell proliferation. The expression of several differentially expressed genes was also validated using qPCR.
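The excerpt does not spell out the qPCR quantification used for this validation; such validations typically report relative expression with the Livak method, shown here as an assumed reference rather than the paper's stated procedure.

```latex
% Assumed quantification (Livak 2^{-Delta Delta C_T}); the excerpt does not
% state the paper's exact qPCR analysis.
\[
  \mathrm{fold\ change} = 2^{-\Delta\Delta C_T},
  \qquad
  \Delta\Delta C_T =
  \bigl(C_T^{\mathrm{target}} - C_T^{\mathrm{reference}}\bigr)_{\mathrm{LEP}}
  - \bigl(C_T^{\mathrm{target}} - C_T^{\mathrm{reference}}\bigr)_{\mathrm{control}}
\]
```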
Significantly different genes were selected for validation according to the volcano plot and the list of genes with the largest fold changes obtained from the datasets of differentially expressed genes (Figure 2A,B). Our findings confirmed the effect exerted by LEP on the expression of the examined genes, namely: BMX (BMX non-receptor tyrosine kinase), C11orf92 (chromosome 11 open reading frame 92), KLK4 (kallikrein-related peptidase 4), MYLK (myosin light chain kinase), RIMS1 (regulating synaptic membrane exocytosis 1), BIRC3 (baculoviral IAP repeat containing 3), FAS (Fas cell surface death receptor), MMP7 (matrix metallopeptidase 7), and TNFAIP3 (tumor necrosis factor, alpha-induced protein 3). In accordance with the results from the microarrays, the expression of all the above-mentioned genes was stimulated by LEP at 1 × 10−6 M concentration (Figure 8).

LEP Regulates Several Key Factors of Signaling Pathways Involved in Apoptosis, Proliferation and Migration
In the next step, we studied the contribution of LEP to the modulation of several important signaling pathways involved in the regulation of apoptosis/proliferation. The study was carried out by incubating LNCaP cells with LEP in the following time series: 0 min (control), 10 min, 1 h, 2 h, 4 h and 24 h. The obtained results were compared with the control group. Activation of the signaling pathways was analyzed using antibodies directed against p38 mitogen-activated protein kinase, p44/42 mitogen-activated protein kinase, Bcl-2 phosphorylated at threonine 56, and total Stat1. Densitometric analysis was normalized to GAPDH levels and is presented in Figure 9 (results as means ± SEM, n = 3; statistical comparison by Student's t-test: * p < 0.05; ** p < 0.02; *** p < 0.01). Up to the 2nd hour of culture in the presence of LEP, we did not find an effect on p38 mitogen-activated protein kinase, while from the 4th to the 24th hour we observed a significant decrease in p38 MAPK activation. Over the studied time interval, p44/42 mitogen-activated protein kinase was consistently decreased and reached a statistically significant reduction by the 24th hour. Bcl-2 phosphorylated at threonine 56 increased rapidly at 10 min of incubation; this increase was maintained until the 2nd hour of culture. From the 4th hour it was significantly lowered, and this reduction was also maintained at the 24th hour. Stat1 expression was relatively stable up to the 2nd hour and then gradually increased, reaching a statistically significant value at 24 h of culture.
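For the densitometric readout just described, a minimal sketch of the normalization step (band intensity divided by the GAPDH loading control, then expressed relative to the untreated time point). The array layout and all intensity values are hypothetical; the paper's densitometry software is not specified in this excerpt.

```python
import numpy as np

def relative_activation(target: np.ndarray, gapdh: np.ndarray) -> np.ndarray:
    """Normalize each band intensity to its GAPDH loading control, then
    express it relative to the 0-min (control) time point. Illustrative
    layout assumption: index 0 = untreated control."""
    norm = target / gapdh
    return norm / norm[0]

# time points: 0 min, 10 min, 1 h, 2 h, 4 h, 24 h (hypothetical intensities)
p38 = np.array([1.00, 0.98, 1.01, 0.97, 0.70, 0.62])
gapdh = np.array([1.00, 1.02, 0.99, 1.00, 1.01, 0.98])
print(relative_activation(p38, gapdh))
```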
Matrix metallopeptidase 7 (MMP7) was one of the genes whose expression was most strongly stimulated by LEP; therefore, we decided to check whether the increase in its expression was also reflected at the protein level. For this purpose, the level of secreted MMP7 was determined in the cell culture medium using an ELISA method. As shown in Figure 10, LEP significantly stimulates the secretion of MMP7, which is in line with the previously described microarray data (Figure 2A,B) and the real-time qPCR validation (Figure 8).
Leptin (LEP), Leptin Receptor (LEPR) and Its Main Downstream Signaling Genes (JAK2, STAT3) Are Downregulated in Prostate Adenocarcinoma

Analysis of RNAseq data for 52 normal prostate (control) and 498 prostate adenocarcinoma samples obtained from the TCGA database revealed that LEP, LEPR, JAK2 and STAT3 are statistically significantly downregulated in prostate tumors relative to normal (control) prostate (Figure 11).

Discussion

It is generally accepted that LEP affects tumor cell invasion and progression [30,31]. LEP exerts its physiological effect by binding to a set of LEP receptors of the extended class I cytokine receptor family, composed of six isoforms [32]. The molecular mechanism of LEP action is well described.
LEP binding to any of the short LEP receptors (LEPR var 2-6) leads to activation of Janus-activated kinase 2 (JAK2) with subsequent phosphorylation of insulin receptor substrates (IRS), initiating activation of the phosphoinositide 3-kinase (PI3K)/Akt pathway [29]. Activation of IRS also stimulates NF-κB signaling pathways involved in cell migration and the inflammatory response [33]. The long form of the receptor (LEPR var 1) contains an additional intracellular carboxy-terminal motif necessary to activate STAT3 and STAT5 [34]. Recently, using multiple sets of specific primers, we have shown that only the second isoform of the LEP receptor is expressed in the LNCaP cell line [24]. Therefore, it seems that the response of LNCaP cells to LEP is mediated via JAK2 kinase activation of LEPR var 2. Moreover, the results of other studies show that STAT3 activation under the influence of LEP may occur in LNCaP cells, but only in cells transiently co-transfected with LEPR var 1 [35]. With regard to this, Deo et al. (2008) revealed that STAT3 in LEP-exposed LNCaP cells undergoes fast dose-dependent phosphorylation, a finding that indirectly confirms the presence of an active LEPR form in LNCaP cells [27].

The molecular mechanism exerted by LEP was determined using high-throughput microarray, real-time PCR, Western blot and ELISA methods. The use of U219 Array Strips allowed simultaneous examination of the transcriptome profile by measurement of 19,285 human genes. In the present study, we identified 416 LEP-responsive genes, most of which were up-regulated (297 genes). The analysis of GO terms covering the DEGs revealed that LEP participates in the regulation of apoptosis, the immunological response and extracellular matrix organisation in LNCaP cells. Interestingly, a significant number of the genes with the highest expression levels are related to immunological responses. Currently, no available data indicate a role for LEP in the regulation of immunological processes in the prostate LNCaP cell line; however, several arguments support our findings. The molecular structures of both LEP and its receptor allow them to be included in the cytokine family. The secondary structure of LEP is similar to that of the long-chain helical cytokine family, including interleukin 6 (IL-6), IL-11, CNTF and LIF. The LEP receptor amino acid sequence shares strong homology with the gp130 signal-transducing subunit of the IL-6-type cytokine receptors [36]. It was also shown that exogenous LEP causes up-regulation of pro-inflammatory cytokines in macrophages and T lymphocytes [37,38]. There are also reports indicating that the LNCaP cell line is responsive to TNF-alpha, a known pro-inflammatory signal [39]. By means of transcriptome analysis, the authors revealed the stimulation of pro-inflammatory processes in these cells. In line with our results, this team observed inhibition of proliferation in LNCaP cells exposed to TNF-alpha, and the associated gene expression changes are similar to those observed in our experiments. Interestingly, the results of our study have also shown that LEP significantly stimulates the expression of NF-κB family member genes: nuclear factor of kappa light polypeptide gene enhancer in B-cells 1 (NFKB1, fold = 3.22, p = 0.0001), nuclear factor of kappa light polypeptide gene enhancer in B-cells 2 (NFKB2, fold = 1.6, p = 0.005), v-rel reticuloendotheliosis viral oncogene homolog B (RELB, fold = 3.33, p = 0.0005) and v-rel reticuloendotheliosis viral oncogene homolog (REL, fold = 1.57, p = 0.01).
Nuclear factor kappa B (NF-κB) belongs to an essential class of transcriptional regulators. NF-κB plays an important role in the regulation of multiple physiological processes, among others the immune response and the migration and invasion of cancer cells [33,40,41]. Jin et al. (2008) demonstrated that neuropeptides secreted from prostate neuroendocrine cells may activate the NF-κB pathway in the LNCaP prostate cell line [40]. In Du145, PC3 and LNCaP cell lines, LEP induces cell migration through NF-κB [33]. The results of our research seem to confirm the contribution of LEP to the regulation of the NF-κB pathway in LNCaP cells, modulating inflammatory processes and cell migration through this signaling pathway. In the group of genes related to the pro-inflammatory response, the highest difference in expression after LEP treatment was observed for the CCL20 gene. C-C motif chemokine ligand 20 (CCL20) belongs to the subfamily of small C-C chemokine genes involved in the inflammatory process. CCL20 is also overexpressed in many types of cancers; however, its role in tumors is not fully explained. In the context of prostate cell lines, Beider et al. showed that basal CCL20 mRNA was detectable in PC3 and Du145 cells and present at a very low level in LNCaP cells [42]. Moreover, CCL20 expression in the LNCaP line is stimulated by interleukin-17A [43]. Our results also revealed a significant contribution of LEP to the regulation of extracellular matrix organization. Matrix metalloproteinase 7 (MMP7) plays an essential role in prostate cancer cell invasion and epithelial-to-mesenchymal transition by breaking down the extracellular matrix of tumor cells [44]. In this context, Zhang et al. (2016) showed that overexpression of MMP7 in LNCaP cells leads to disruption of the E-cadherin/β-catenin complex and releases β-catenin, thus enhancing EMT and tumor cell invasion [44]. Remarkably, MMP7 is not expressed in the normal prostate, whereas it is overexpressed in human prostate cancer [45,46]. Relatively high expression of MMP7 was observed in our LNCaP cells. MMP7 expression at the mRNA and protein level, measured as secretion into the cell culture medium, was significantly stimulated by LEP. MMP7 may also play a role in apoptosis induction: MMP7 leads to the release of membrane-bound FASL, which induces apoptosis of neighboring cells via the death receptor FAS [47][48][49]. Lipocalin 2 (Lcn2), also known as neutrophil gelatinase-associated lipocalin (NGAL), is described as a ligand for matrix metalloproteinase 9. Lcn2 is upregulated in several types of cancers and has been shown to facilitate tumor progression [50]. In LNCaP cells, Lcn2 upregulation occurs in an NF-κB-dependent manner [51]. In relation to prostate cancer cell lines, there are data proving the contribution of Lcn2 to prostate cell proliferation; however, this effect is dependent on the prostate cell type. Tung et al. (2013) found that LCN2 knock-down in PC3 and DU145 reduced cell growth by induction of cell-cycle arrest at the G0/G1 phase, and that Lcn2 overexpression stimulates growth, migration and invasiveness of 22Rv1 cells [50]. Increasing evidence suggests that LEP exerts a significant effect on the proliferation and apoptosis rates of different prostate cell lines, but the results of individual studies are inconclusive.
The analysis of the collected data presented in Table 1 indicates that the effect of leptin on the proliferation and apoptosis of human normal prostate and prostate cancer cell lines depends on the cell line tested, the time of exposure to LEP and the concentration of this cytokine in the culture medium. In our study, strong LEP-dependent regulation of apoptotic processes in the LNCaP prostate cell line was demonstrated. This pro-apoptotic effect was triggered by LEP at the highest of the tested doses (1 × 10⁻⁶ M), which affected the expression of many apoptotic genes. It is well known that the apoptotic process is divided into two stages: induction and execution. Induction of apoptosis is a multifactorial stage that involves many different factors, such as receptors, ligands and intracellular peptides. The microarray analysis presented here, using the KEGG "apoptosis" pathway, showed that after 48 h of exposure, most (39) of the 41 DEGs involved in apoptosis were up-regulated in LNCaP cells. For example, the expression of the FAS gene, which encodes a cell surface death receptor that plays a key role in the induction of apoptosis by binding FAS ligand (FASLG) at the cell membrane, was almost 3 times higher (fold = 2.95, p = 0.003) compared to the control. This statistically significant increase was also confirmed using the qPCR method. Unlike our results, Tanaka and coworkers (2008) demonstrated an inhibitory effect of LEP on Fas-dependent apoptosis [55]. That result was obtained using LEP at a physiological concentration, indicating the importance of the dose of the peptide administered in the experiments. On the other hand, the current study has shown that another death-receptor ligand involved in the induction of apoptosis, the TNF superfamily member TNF-alpha, was also strongly up-regulated after administration of 1 × 10⁻⁶ M LEP (fold = 4.27, p = 0.0001). Moreover, the expression of the TRADD gene, which encodes the tumor necrosis factor receptor type 1-associated DEATH domain protein (TRADD) involved in the downstream transduction of the TNF-alpha signal, was also increased (fold = 1.47, p = 0.01). Similar changes were found in the expression of the TP53 gene (p53 protein) (fold = 1.66, p = 0.003). As is commonly known, the p53 protein plays a significant role in directing the cell towards apoptosis when DNA damage is irreparable. It activates pro-apoptotic Bcl-2 homology (BH) family proteins such as BAX and BID, as well as its transcriptional targets NOXA and PUMA, and directs the cell to death. Here, the expression levels of these molecules were found to be increased (BAX fold = 1.31, p = 0.048; BID fold = 1.43, p = 0.021). Activation of death domain receptors leads to activation of the initiator caspases 8 and 10, which, on the one hand, activate the pro-apoptotic protein BID and, on the other hand, activate procaspases 3 and 7, the inactive forms of the effector caspases 3 and 7 (CASP3, CASP7). This is followed by an avalanche of target protein activation, leading to cell death. Active BID activates other pro-apoptotic BH family proteins, BAX and BAK, which lead to the release of cytochrome C from the mitochondrial matrix and formation of the apoptosome complex, while active caspase 3 cleaves the DNA repair enzyme PARP. In our study, the expression levels of several of the above-mentioned genes involved in the progression of apoptosis were clearly elevated (CASP8 fold = 1.81, p = 0.002; CASP10 fold = 1.42, p = 0.019; CASP7 fold = 2.53, p = 0.001).
In the literature there is a considerable amount of data regarding the involvement of LEP in the regulation of apoptotic processes in prostatic cells. Most of it indicates an inhibitory effect of LEP on this process and the promotion of cellular proliferation [20,28,52,54,56,57]. However, some studies have demonstrated LEP-dependent activation of apoptotic processes. In this regard, Samuel-Mendelsohn et al. (2011) reported LEP-induced activation of apoptotic effector molecules (CASP3 and PARP) in LNCaP cells [35]. Using Western blot analysis followed by densitometric quantification, they noted a dose-dependent increase in CASP3 and PARP levels between 6 and 24 h of LEP administration (1 ng/mL of LEP). This result is in line with the results presented here. Although we did not observe an increase in CASP3 mRNA expression, we detected increased expression of PARP mRNA in LNCaP cells (described on the signaling pathway as PARP, mapped as PARP4; fold = 1.33, p = 0.038). Another study, by Kim and coworkers (2003), demonstrated LEP-dependent activation of caspase 3 and caspase 9 in osteoblast-lineage primary human bone marrow stromal cells [57]. This caused an increase in cytochrome c release from mitochondria and could support our finding of LEP-activated apoptosis; however, we did not note any changes in the mRNA expression levels of either caspase [57]. Although most genes belonging to the ontological group "apoptotic process" were up-regulated, and the enrichment analysis of ontological groups indicated a significant stimulation of this process, several genes involved in the regulation of apoptosis were lowered under the influence of LEP, e.g., PEG3, DAPK1, ZBTB16, STK3 and AR. It is well described that the proliferation and growth of LNCaP cells are androgen dependent, with AR playing a proliferative and/or apoptotic role through interaction with MAPK, EGFR, IGF, TGF beta, FGF or VEGF [58][59][60][61]. For this reason, we cannot exclude an indirect involvement of the androgen receptor in the observed LEP effect. However, this aspect requires further research. We are aware that the research was carried out with the use of high LEP concentrations in the culture medium. However, similar LEP concentrations are also used in other in vitro experiments [62][63][64][65][66]. For example, such high LEP concentrations are used in studies of the involvement of LEP in the regulation of pituitary hormone secretion or cell proliferation. At concentrations comparable to ours, LEP inhibits pituitary cell proliferation in human and rat pituitary cell lines in vitro [61], stimulates FSH and LH release from pituitary cells of male and female rainbow trout [64] and stimulates TSH secretion from ovine pituitary cells [63]. As mentioned above, the LEP concentration used is far from physiological. Therefore, the use of high LEP doses in systemic administration seems to be a limiting factor, due to possible side effects. However, the potential use of high doses of LEP administered directly to the prostate should be taken into consideration. It should also be noted that LEP and its receptor are expressed within the prostate; therefore, local paracrine and autocrine activity cannot be excluded, and in intercellular areas the leptin level may be higher than in serum. Additionally, there are numerous depots of adipose cells within the prostate that may constitute a source of locally acting leptin.
This suggestion is reinforced by the fact that the expression of LEP, the leptin receptor (LEPR) and its main downstream signaling genes (JAK2, STAT3) is reduced in prostate adenocarcinoma (Figure 11), suggesting that this system is involved in a mechanism of apoptosis defense in proliferating tumor cells. However, this suggestion requires further studies.

Materials and Methods

Prostate Cancer Cell Line

LNCaP, the human prostate carcinoma cell line (LNCaP clone FGC, ATCC® CRL-1740™), was purchased from ATCC (American Type Culture Collection, Manassas, VA, USA). LNCaP cells were cultivated in RPMI 1640 Medium (1×) supplemented with GlutaMAX and HEPES (all from Gibco, Life Technologies, Carlsbad, CA, USA) and antibiotics (antibiotic/antimycotic, Sigma-Aldrich, Saint Louis, MO, USA). Cells were grown in 75 cm² flasks (NUNC EasyFlask with Nunclon surface, Thermo Fisher Scientific) at 37 °C in a humidified atmosphere of 5% CO₂. The culture medium was changed every 2 days [67]. When the cells reached approximately 80% confluence (about 7-8 days of cultivation), they were subcultured into 6-well plates (Nunc, Thermo Fisher Scientific; approximately 343,000 cells per 9.6 cm² well) to determine the effect of LEP on LNCaP cells at the mRNA/protein level. Simultaneously, cells were seeded on E-Plate 48 (Roche Applied Science, GmbH, Penzberg, Germany or ACEA Biosciences Inc., San Diego, CA, USA; approximately 12,500 cells per 0.3 cm² well) to perform the real-time proliferation assay [24]. The applied experimental protocol was as follows: during the first 48 h of cultivation, the cells grew in the standard medium mentioned above. For the next 24 h, the cells were grown in starvation medium (FBS free). Afterwards, the cells were cultivated for 48 h in starvation medium supplemented with LEP (Recombinant Human Leptin, PeproTech, Germany) at the following concentrations: 0 (control), 1 × 10⁻⁶, 1 × 10⁻⁸ and 1 × 10⁻¹⁰ M. After this period, the medium and cell supernatants were collected and stored at −80 °C for further analyses.

Real-Time Proliferation Assay

To verify the proliferation rate of LNCaP cells we used an electrical impedance-based approach, the Real-Time Cell Analyser (RTCA, Roche Applied Science, GmbH, Penzberg, Germany). The system detects variations in electrical impedance across incorporated sensor electrode arrays placed on the bottom of 16-well chamber slide plates (E-plate 16) on which the cells are seeded. Electrical impedance is measured throughout the cultivation period at 15-minute intervals. The main read-out of the RTCA is a dimensionless parameter named "Cell Index", which represents a relative change in electrical impedance depending on the proliferation or apoptosis rate of the cultured cells. LNCaP cells were cultivated in the same groups and experimental layout as described above. Each group was seeded in eight wells of the E-plates in a final volume of 200 µL per well. The cell index was normalised (normalised cell index) at the time point of LEP administration using software provided by the manufacturer (RTCA Software, version 1.2, November 2009). LNCaP cells were cultivated with LEP until the control group reached the plateau phase (total time: 196 h). Each experiment was repeated at least three times.
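The "normalized cell index" mentioned above can be illustrated with a short sketch: each well's cell-index trace is divided by its value at the LEP-administration time point, so every trace equals 1 at that moment. The cell-index matrix, the eight-well layout and the 72-h administration point (48 h growth plus 24 h starvation) are placeholders consistent with the protocol, not measured data.

```r
# Sketch of the normalized cell index: divide each well's trace by its value
# at the LEP-administration time point (t0). All CI values are invented.
times <- seq(0, 196, by = 0.25)                 # hours, one reading per 15 min
set.seed(1)
ci <- matrix(abs(rnorm(length(times) * 8, mean = 2, sd = 0.3)), ncol = 8)

t0 <- which.min(abs(times - 72))                # assumed LEP-administration time
norm_ci <- sweep(ci, 2, ci[t0, ], FUN = "/")    # column-wise division by CI at t0
stopifnot(all(abs(norm_ci[t0, ] - 1) < 1e-12))  # every trace equals 1 at t0

matplot(times, norm_ci, type = "l", lty = 1,
        xlab = "Time (h)", ylab = "Normalized cell index")
```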
Flow Cytometry Analysis of Cleaved PARP-1

LNCaP cells (untreated and treated with different concentrations of leptin) were stained for cPARP with the PE Mouse Anti-Cleaved PARP (Asp214) antibody (562253, BD Biosciences, NJ, USA) according to the manufacturer's instructions. Briefly, 1 × 10⁶ untreated and treated cells were fixed and permeabilized with BD Cytofix/Cytoperm Fixation/Permeabilization Solution for 30 min at room temperature. Then, additional permeabilization and fixation were performed. The fixed cells were washed with BD Perm/Wash Buffer and stained with the appropriate antibody (5 µL/test) for 20 min at room temperature. The stained and washed cells were resuspended in 500 µL PBS and analyzed with a flow cytometer (CytoFLEX, Beckman Coulter, CA, USA). Fluorescence intensity in arbitrary units was plotted in histograms and the mean fluorescence intensity was calculated. Data were analyzed using FlowJo software (FlowJo v10; FlowJo LLC, Ashland, OR, USA).

RNA Isolation

The applied methods were described earlier [68]. Total RNA was extracted using TRI Reagent (Sigma-Aldrich, St. Louis, MO, USA) and then purified on columns (NucleoSpin Total RNA Isolation, Qiagen GmbH, Hilden, Germany). The amount of total RNA was determined by optical density at 260 nm and its purity was estimated by the 260/280 nm absorption ratio (higher than 1.8) (NanoDrop spectrophotometer, Thermo Scientific, Waltham, MA, USA). RNA integrity and quality were checked on a Bioanalyser 2100 (Agilent Technologies, Inc., Santa Clara, CA, USA). The resulting RNA integrity numbers (RINs) were between 8.5 and 10, with an average of 9.2. Each sample was diluted to an RNA concentration of 100 ng/µL at an OD260/OD280 ratio of 1.8-2.0. From each RNA sample, 100 ng of RNA was taken for the microarray experiments. The remaining isolated RNA was used for the RT-qPCR study.

Reverse Transcription

Reverse transcription was performed using the Transcriptor High Fidelity Reverse Transcriptase enzyme blend for high-fidelity two-step RT-PCR of RNA (Roche, Basel, Switzerland) with oligo(dT) primers at a temperature of 45 °C for 40 min (Thermocycler UNO II, Biometra, Göttingen, Germany). For a single reaction, 1 µg of total RNA was used. The RT was carried out in a standard final volume (20 µL). After RT, each cDNA-containing sample was diluted with 100 µL of RNase-free water.

Q-PCR

Q-PCR was performed using the LightCycler 2.0 instrument (Roche, Basel, Switzerland) with software version 4.05. A SYBR Green detection system was applied as described earlier [67]. Each 20 µL reaction mixture contained 2 µL of template cDNA (standard or control), 0.5 µM of each specific primer and a previously determined optimum MgCl₂ concentration (3.5 µM per reaction). LightCycler FastStart DNA Master SYBR Green I mix (Roche) was used. The real-time PCR program included a 10 min denaturation step to activate the Taq DNA polymerase, followed by a three-step amplification program: denaturation at 95 °C for 10 s, annealing at 56 °C for 5 s, and extension at 72 °C for 10 s. The specificity of the reaction products was checked by determination of melting points (0.1 °C/s transition rate). All samples were amplified in triplicate, and the hypoxanthine phosphoribosyltransferase (HPRT) gene was used as a reference to normalize the obtained results. The primers were designed using Primer 3 software (Whitehead Institute for Biomedical Research, Cambridge, MA, USA) (Table 2).
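As a rough illustration of qPCR normalization to the HPRT reference gene, the sketch below uses a simplified delta-delta-Cq calculation that assumes ~100% amplification efficiency. The study itself used a SYBR Green standard-curve setup on the LightCycler, so this is an illustrative alternative, and all Cq values are invented.

```r
# Delta-delta-Cq relative quantification against the HPRT reference gene,
# assuming ~100% efficiency. Cq values below are invented placeholders.
cq <- data.frame(
  group  = rep(c("control", "LEP"), each = 3),
  target = c(24.8, 24.9, 25.0, 23.2, 23.3, 23.1),  # e.g., MMP7 Cq values
  hprt   = c(21.0, 21.1, 20.9, 21.0, 21.2, 21.1)   # reference-gene Cq values
)
cq$dcq <- cq$target - cq$hprt                      # normalize to HPRT

ddcq <- mean(cq$dcq[cq$group == "LEP"]) -
        mean(cq$dcq[cq$group == "control"])
fold <- 2^(-ddcq)                                  # relative expression
cat(sprintf("Fold change (LEP vs. control): %.2f\n", fold))
```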
They were purchased from the Laboratory of DNA Sequencing and Oligonucleotide Synthesis, Institute of Biochemistry and Biophysics, Polish Academy of Sciences, Warsaw.

Microarray Expression Study

The microarray study was carried out according to the previously described procedure [69][70][71][72]. The previously isolated RNA was pooled into four samples, representing the control group (n = 2) and the experimental group (n = 2), in which RNA was isolated 24 h after LEP administration (concentration 1 × 10⁻⁶ M). The protocol involving in vitro transcription, biotin labelling and cDNA fragmentation for further hybridization was carried out using the Affymetrix GeneChip IVT Express Kit (Affymetrix, Santa Clara, CA, USA). The obtained biotin-labelled fragments were hybridized with Affymetrix GeneChip Human Genome U219 microarrays together with control cDNA and oligo B2. The hybridization process was conducted in the AccuBlock™ Digital Dry Bath (Labnet International, Inc., Edison, NJ, USA) hybridization oven at 45 °C for 16 h. The microarrays were then washed and stained according to the technical protocol using the Affymetrix GeneAtlas Fluidics Station (Affymetrix, Santa Clara, CA, USA). The array strips were scanned by the Imaging Station of the GeneAtlas System. Preliminary analysis of the scanned chips was performed using Affymetrix GeneAtlas™ Operating Software. The quality of the gene expression data was verified using the quality control criteria established by the software. The obtained CEL files were imported for downstream data analysis.

Microarray Data Analysis

All analyses were performed using BioConductor software with the relevant BioConductor libraries, based on the statistical R programming language. The Robust Multiarray Average (RMA) normalization algorithm implemented in the "affy" library was used for normalization, background correction, and calculation of the expression values of all of the examined genes [73]. Biological annotation was taken from the BioConductor "oligo" package, where an annotated data frame object was merged with the normalized data set, leading to a complete gene data table [74]. Differential expression and statistical assessment were determined by applying the linear models for microarray data implemented in the "limma" library [75]. The selection criteria for significantly changed gene expression were an absolute fold difference higher than 2 and a p-value after false discovery rate (FDR) correction <0.05. The result of this selection was presented as a volcano plot showing the total number of up- and down-regulated genes (a code sketch of this pipeline is given below). Raw data files were also deposited in the Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov/geo/) under the accession number GEO: GSE133616.

Assignment of Differentially Expressed Genes to Relevant Gene Ontology (GO) Terms

The whole set of differentially expressed genes (DEGs) was subjected to functional annotation and clusterization using the DAVID (Database for Annotation, Visualization, and Integrated Discovery) bioinformatics tool [76]. Gene symbols of differentially expressed genes were uploaded to DAVID via the "RDAVIDWebService" BioConductor library [77], where DEGs were assigned to relevant GO terms, with subsequent selection of significantly enriched GO terms from the GO BP Direct database. The p-values of the selected GO terms were corrected using the Benjamini-Hochberg correction and are described as adjusted p-values [78].
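A minimal sketch of the pipeline described in the Microarray Data Analysis subsection: RMA normalization, limma differential expression, and DEG selection with |fold| > 2 (i.e., |log2FC| > 1) and FDR-adjusted p < 0.05. The CEL file names and the two-versus-two design are hypothetical placeholders.

```r
# RMA normalization + limma differential expression + DEG selection.
library(oligo)   # reads and RMA-normalizes GeneAtlas U219 CEL files
library(limma)

raw  <- read.celfiles(c("ctrl_1.CEL", "ctrl_2.CEL", "lep_1.CEL", "lep_2.CEL"))
eset <- rma(raw)                                   # background correction + RMA

group  <- factor(c("control", "control", "LEP", "LEP"))
design <- model.matrix(~ group)
fit    <- eBayes(lmFit(exprs(eset), design))
tab    <- topTable(fit, coef = 2, number = Inf, adjust.method = "fdr")

degs <- subset(tab, abs(logFC) > 1 & adj.P.Val < 0.05)  # |fold| > 2, FDR < 0.05

# Volcano plot of all genes, with the selected DEGs highlighted
plot(tab$logFC, -log10(tab$adj.P.Val), pch = 20, col = "grey",
     xlab = "log2 fold change", ylab = "-log10 adjusted p-value")
points(degs$logFC, -log10(degs$adj.P.Val), pch = 20, col = "red")
```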
Relevant GO ontological groups with adjusted p-values below 0.05 and N per group >5 were visualized using a bubble plot. A detailed analysis of the genes belonging to selected ontological groups, with their expression fold changes, is presented as circos plots using the "GOplot" library [79].

Gene Set Enrichment Analysis (GSEA)

Gene Set Enrichment Analysis was used to determine enrichment or depletion of gene expression between the two compared biological groups within a priori defined gene sets (GO terms, pathways). The method uses the Kolmogorov-Smirnov (K-S) statistical test for the identification of significantly enriched or depleted groups of genes [80]. The GSEA analysis was conducted using the FGSEA library [81]. Normalised fold change values for all of the genes present on the microarray were log2 transformed and ordered. Then, predefined gene sets from the Hallmark database (from the Molecular Signatures Database) were selected [82]. Genes belonging to the selected sets were ranked according to the difference in their expression level using the signal-to-noise ratio with 1000 permutations. By walking down the ranked list of genes, the enrichment score (ES) was calculated for each selected gene set [83]. ESs were normalized by their gene set size, and false positives were corrected by FDR (a code sketch of this step is given below).

KEGG Signaling Pathways

The Pathview library of BioConductor was used to generate the p53 and apoptosis signaling pathways [84]. The fold values of significantly changed genes were mapped by colour onto the native KEGG p53 (KEGG ID = hsa04115) and apoptosis (KEGG ID = hsa04210) signaling pathways, where green represents up-regulated and red represents down-regulated expression levels in relation to the control group. In order to show a comprehensive image of the regulation of the analysed signaling pathways, all genes whose expression was significantly different were visualized, without a cut-off at fold values.

ELISA Test-MMP7 Level Detection

The culture medium from the control and LEP 1 × 10⁻⁶ M groups was subjected to an analysis of the metalloproteinase 7 (MMP7) secretion level using a solid-phase enzyme-linked immunosorbent assay (ELISA) test (Abcam, Cambridge, UK, ab100608, MMP7 Human ELISA Kit). All assays were performed according to the manufacturer's protocols. The absorbance (OD) of each plate well was measured at 450 nm with a BioTek Synergy 2 microtiter plate reader. Quantitative analysis was performed using a four-parameter logistic (4PL) curve implemented in the "drc" R package [86].

TCGA Data Analysis

Clinical descriptions with RNAseq data for 52 normal prostate (control) and 498 prostate adenocarcinoma samples were downloaded from the public TCGA database [87] using the FireBrowse server (http://gdac.broadinstitute.org/) [88]. The voom algorithm from the "limma" package was then used for data normalization [75]. Normalized data for leptin (LEP), the leptin receptor (LEPR) and its main downstream signaling genes (JAK2, STAT3) were extracted from the whole dataset. The obtained expression values were subjected to statistical analysis using the Mann-Whitney test and visualised as boxplots with medians and interquartile ranges (IQR).

Statistics

Statistical evaluation of the differences between groups was carried out using Student's t-test or the Mann-Whitney test with asterisk annotation (* p < 0.05; ** p < 0.02; *** p < 0.01). Each of the described experiments was repeated at least three times. In the case of data obtained from microarrays, differences were evaluated by the statistical procedures included in the particular bioinformatic analyses.
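A minimal sketch of the fgsea-based GSEA step described above, assuming a local copy of the MSigDB Hallmark .gmt file; in the study the ranking statistic would be the log2-transformed fold changes of all array genes, whereas here a random placeholder ranking over the Hallmark gene universe is used so the example is self-contained.

```r
# GSEA against the Hallmark collection with fgsea (1000 permutations, FDR).
library(fgsea)

hallmark <- gmtPathways("h.all.v7.0.symbols.gmt")  # Hallmark gene sets (assumed path)

# Placeholder ranking: named, sorted statistic over the gene universe.
genes <- unique(unlist(hallmark))
ranks <- sort(setNames(rnorm(length(genes)), genes))

res <- as.data.frame(
  fgsea(pathways = hallmark, stats = ranks,
        minSize = 15, maxSize = 500, nperm = 1000)
)
head(res[order(res$padj), c("pathway", "NES", "padj")])
```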
Conclusions

The obtained results suggest activation of apoptotic processes in LNCaP cells cultured at a high LEP concentration. At the same time, this activation is accompanied by inhibition of proliferation of the tested cells.

Conflicts of Interest: The authors declare no conflict of interest.
2019-11-02T13:06:27.936Z
2019-10-30T00:00:00.000
{ "year": 2019, "sha1": "007d89474f536e2996e4cd4d4c624ab30c3f40dd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/20/21/5412/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a4ab84c13771d8ca979395f1305f62604e43eb7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
10450166
pes2o/s2orc
v3-fos-license
Periprostatic fat measured on computed tomography as a marker for prostate cancer aggressiveness

Objective Several reports found that obesity was associated with prostate cancer (PC) aggressiveness among men treated with radical prostatectomy or radiotherapy. Studies concerning this issue have basically relied on body mass index (BMI) as a marker for general obesity. Because visceral fat is the most metabolically active fat, we sought to evaluate whether periprostatic fat measured on a computed tomography (CT) scan is a better marker than BMI for predicting PC aggressiveness in a Dutch population who underwent brachytherapy for localized PC. Patients and methods Of the 902 patients who underwent brachytherapy, 725 CT scans were available. Subcutaneous fat thickness (CFT), periprostatic fat area (cm²) and fat-density (%) were determined on the CT scan. Patients were stratified into three groups: <25, 25–75 and >75 percentile of the fat-density. Associations between the three fat-density subgroups and BMI and PC aggressiveness were examined. Results 237 patients were classified as having normal weight (37.2%), 320 as overweight (50.2%) and 80 as obese (12.6%). There was a strong significant association between BMI and fat-density and CFT. The strongest correlation was seen between BMI and CFT (Pearson r coefficient = 0.71). Logistic regression analysis revealed no statistically significant association between the different fat measurements and the risk of having high-risk disease. Conclusions Periprostatic fat and fat-density as measured with CT were not correlated with PC aggressiveness in patients receiving brachytherapy. However, 31% of the patients with a normal BMI had a fat-density above the 75th percentile of the periprostatic fat-density.

Introduction

Obesity and prostate cancer (PC) are two major health concerns. On the one hand, obesity is a rapidly growing worldwide epidemic that increases the risk of several chronic diseases and certain cancers [1,2]. On the other hand, PC is diagnosed more often in the prostate-specific antigen (PSA) era, and the disease is often diagnosed at a localized stage suitable for curative treatment [3]. The relationship between obesity and PC is debated, with studies finding an inverse correlation, a linear correlation, or no relation at all [4][5][6]. However, several studies found a link between obesity and disease aggressiveness [7][8][9]. Recently, the classical perception of adipose tissue as a storage place for fatty acids has been replaced by the notion that adipose tissue produces a large number of hormones and cytokines, e.g., tumour necrosis factor-α, interleukin-6, leptin and adiponectin [10]. The exact role of these cytokines in prostate carcinogenesis, however, is not known. Most of the studies that investigated the role of obesity in PC used body mass index (BMI, the weight in kilograms divided by the squared height in meters) as a marker of general obesity. Although there is a strong correlation between BMI and waist circumference (WCF), the most metabolically active fat is the abdominal visceral fat, and a better way to measure this is by waist-hip ratio or WCF. Therefore, WCF, as an indicator of abdominal fat, may be a better predictor of PC risk than BMI alone, especially in men with a low BMI. Computed tomography (CT) is another technique, which measures visceral fat even more accurately [11,12].
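Since the study leans on BMI as the standard obesity measure, a small sketch of the calculation and of the WHO strata used later in the paper (normal weight <25, overweight 25-30, obese ≥30) may help; the example weight and height are invented.

```r
# BMI = weight (kg) / height (m)^2, with the WHO strata used in this study.
bmi <- function(weight_kg, height_m) weight_kg / height_m^2

classify_bmi <- function(b) {
  cut(b, breaks = c(-Inf, 25, 30, Inf), right = FALSE,
      labels = c("normal weight", "overweight", "obese"))
}

b <- bmi(weight_kg = 88, height_m = 1.78)   # invented example patient
cat(sprintf("BMI = %.1f (%s)\n", b, as.character(classify_bmi(b))))
```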
The aim of this study was to investigate whether periprostatic fat, measured on a CT, is a better marker for PC aggressiveness than BMI in patients who underwent brachytherapy for localized PC. We also evaluated the relation between BMI and the different fat measurements. To the best of our knowledge such a study has never been performed.

Patients

Between April 2004 and August 2008, 902 patients with biopsy-proven localized PC (stage cT1 or cT2) were treated with transrectal ultrasonography-guided transperineal permanent mono brachytherapy at the Department of Radiotherapy, University Medical Center Utrecht, The Netherlands. Due to the very short follow-up of this cohort of men, we focused only on PC baseline characteristics. Patients underwent clinical staging by medical history, digital rectal examination and serum PSA measurement. Bone scans were obtained and pelvic lymph node dissection was performed when clinically indicated. A CT was performed 1 day after the brachytherapy to determine specific dose constraints. The CT was not performed in 153 patients, and in 24 patients the quality of the CT was poor due to hip prostheses. This resulted in a study population of 725 men. Because different risk classifications are used in the literature, we decided to use two different risk classifications, one according to Ash et al. [13] and the other according to D'Amico et al. [14]. Tumour stage was described according to the 2002 TNM American Joint Committee on Cancer system.

Fat measurement

Preoperative height and weight data were collected retrospectively by reviewing anaesthesia records. The BMI (kg/m²) was calculated and stratified into three groups according to the WHO, i.e., normal weight (<25), overweight (25-30) and obese (≥30). Only one patient had a BMI value of <18.5 kg/m²; this patient was included in the normal-weight group. The CTs were acquired on a single-slice CT scanner (Aura, Philips Medical Systems, Best, The Netherlands) and had an in-plane slice resolution of 0.49 × 0.49 mm with a slice thickness of 3 mm. We used an in-house developed software tool for delineation of the pelvic fat region and measurement of the subcutaneous fat thickness (CFT, see Fig. 1) [15]. Because there are no comparable studies available, we chose to delineate along established lines (see Fig. 1). The fat contained within the delineated contours of the CT is segmented by thresholding on the Hounsfield units (HU). We differentiated between fat (−190 to −30 HU), air (<−500 HU) and other soft tissue types [16]. Since the delineated contours did not contain bony structures, we did not include a threshold for segmenting the bones separately.

Fig. 1 Images demonstrate our method for determining visceral fat distribution and subcutaneous fat thickness on a CT scan. (a) A transverse section is made at the level of the caput femoris and greater trochanter of the femur. The red line outlines the total contour area (cm²), in which attenuation is measured. The line is drawn at the back side of the pubic bone, the lateral border of the obturatorius internus muscle, the anterior side of the gluteus maximus muscle and the coccyx bone. Within the region of interest, the periprostatic fat area (cm²) and the fat-density (%) were calculated. (b) A transverse section is made at the level of the superior pubic ramus. The red line outlines the subcutaneous fat thickness, measured as the distance between the skin and the pubic bone (cm).
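The HU-threshold segmentation described above can be sketched as follows; the toy `hu` matrix stands in for the CT voxels inside the delineated contour, and the per-voxel area is derived from the stated 0.49 × 0.49 mm in-plane resolution.

```r
# Segment fat within the contour by HU thresholds: fat = -190..-30 HU,
# air < -500 HU, rest = other soft tissue. 'hu' is a toy stand-in matrix.
set.seed(1)
hu <- matrix(sample(c(-600, -100, 40), 200 * 200, replace = TRUE,
                    prob = c(0.05, 0.45, 0.50)), nrow = 200)

is_fat <- hu >= -190 & hu <= -30
is_air <- hu < -500

voxel_area_cm2 <- 0.049 * 0.049                    # 0.49 x 0.49 mm in-plane
total_area_cm2 <- sum(!is_air) * voxel_area_cm2    # contour voxels minus air voxels
fat_area_cm2   <- sum(is_fat)  * voxel_area_cm2    # periprostatic fat area
fat_density    <- 100 * fat_area_cm2 / total_area_cm2   # fat-density (%)

cat(sprintf("fat area = %.1f cm^2, fat-density = %.1f %%\n",
            fat_area_cm2, fat_density))
```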
The total contour area (cm²) was calculated as the total number of voxels within the contour minus the number of air voxels, and the periprostatic fat area (cm²) by counting only 'fat' voxels within the total contour area. The fat-density (%) was calculated by dividing the periprostatic fat area by the total contour area.

Statistical analysis

Associations between the three predefined fat-density subgroups and clinical or pathological characteristics were examined by chi-square tests for categorical characteristics and Kruskal-Wallis tests for continuous characteristics. The Pearson correlation coefficient was used to quantify correlations between BMI and the different fat measurements. Binary logistic regression analyses were performed to evaluate the independent effect of each variable on the risk of having high-risk disease versus low- or intermediate-risk disease (according to Ash et al. [13] and D'Amico et al. [14]). Differences were considered statistically significant if p < 0.05. Data were analysed using SPSS for Windows (version 15.0) (a code sketch of these analyses is given below).

Results

Patients in group 3 were significantly older. The median prostate volume was statistically different between the three groups, but the differences were not clinically relevant. A clear significant association was seen between the fat-density groups and BMI, CFT and periprostatic fat. Figure 2 shows the correlation between BMI and the different fat measurements. The strongest correlation was seen between BMI and CFT (Pearson r coefficient = 0.71, p < 0.001). Logistic regression analysis revealed no statistically significant association between the different fat measurements and the risk of having high-risk disease (Table 2). Only age was significantly associated with an increased risk of having high-risk disease; however, in the multivariable analysis (data not shown) this significance disappeared.

Discussion

The urologist and radiation oncologist will be confronted more frequently with obese patients having localized PC. Although the association between obesity and PC risk is controversial [4,17,18], several studies have observed a stronger link between obesity and increased risk of higher pathologic grade and higher rates of biochemical recurrence (BCR) compared with normal-weight patients [9,19,20]. Of note, all these studies were done in the USA. We conducted a study in The Netherlands in which we evaluated 1,302 patients who underwent a radical prostatectomy. In that study, BMI did not appear to have any prognostic value for BCR or worse pathologic features [21]. The same conclusions were drawn in another European study by Pfitzenmaier et al. [22]. In contrast to the USA, where 30% of the population is obese, only 9% to 14% of the European population is obese [23,24]. Moreover, obese European patients are less obese than obese men in the USA, and a relatively large proportion of the US population consists of Afro-Americans, who are more prone to be obese and more frequently have aggressive tumours compared with white men. A question which may arise: are we measuring obesity in the right way? In most studies investigating obesity in relation to prostate aggressiveness and BCR, BMI is used as a criterion for general obesity. The most metabolically active fat, however, is the abdominal visceral fat. WCF, as an indicator of abdominal obesity, may be a better predictor of the risk of more aggressive PC than BMI, especially in individuals with a lower BMI. Visceral fat is the most metabolically active fat and produces different kinds of adipokines.
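A sketch of the statistical analyses referenced above: Pearson correlation of BMI with CFT, and binary logistic regression for high-risk versus low/intermediate-risk disease. The `pts` data frame is a synthetic stand-in, not the study's data, and the multivariable model form is only one plausible specification.

```r
# Pearson correlation and binary logistic regression on synthetic patient data.
set.seed(42)
n   <- 725
pts <- data.frame(
  age         = rnorm(n, 66, 6),
  bmi         = rnorm(n, 26, 3),
  cft         = rnorm(n, 3, 1),        # subcutaneous fat thickness (cm)
  fat_density = runif(n, 10, 60),      # periprostatic fat-density (%)
  high_risk   = rbinom(n, 1, 0.2)      # 1 = high-risk disease
)

cor.test(pts$bmi, pts$cft, method = "pearson")   # study reported r = 0.71, p < 0.001

fit <- glm(high_risk ~ age + bmi + cft + fat_density,
           family = binomial, data = pts)
summary(fit)
exp(cbind(OR = coef(fit), confint(fit)))         # odds ratios with 95% CI
```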
Obesity is associated with increased levels of several adipokines, and studies have reported a link between the level of adipokines and aggressive PC [25][26][27]. A large study by the European Prospective Investigation into Cancer and Nutrition group concluded that, once general obesity was adjusted for, abdominal fat distribution was positively associated with the risk of death. This association tended to be stronger among participants with a lower BMI [28]. In a large prospective cohort of 148,372 men, Pischon et al. [29] found that higher WCF was associated with increased risk of advanced PC and high-grade PC among individuals with lower BMI. The relative risk of advanced PC was 1.06 (95% CI 1.01-1.10) per 5-cm-larger WCF. The same conclusions were drawn in a prospective Swedish study [30]. These data suggest that abdominal adiposity in particular may be associated with an increased risk of advanced PC, and that WCF is a better way to measure obesity. Visceral fat can affect both the lean and the obese and is more metabolically active than subcutaneous fat. By measuring the WCF, the discrepancy between thin outside (subcutaneous fat) and thick inside (visceral fat) cannot be made. A CT scan can distinguish between these two "layers". In our study we measured the visceral and subcutaneous fat on a CT to identify whether one of these parameters is a better marker for tumour characteristics than BMI. Possible explanations for the lack of this correlation can be given. First, there has been less enthusiasm for the use of brachytherapy in men with high-risk disease. These patients might be very well selected, which can be an explanation for these negative findings. Second, the fat measurement was performed on one cross-sectional scan. Theoretically, the accuracy of the fat measurement could be improved by measuring the fat content on more cross-sectional scans (volume measurement). However, in this study the selection bias of the brachytherapy patients is probably more important than the technique of fat measurement. Third, it is possible that the fat around the intra-abdominal organs is more metabolically active than the periprostatic fat. It would be interesting to measure the fat around the intra-abdominal organs. However, it was not possible to measure the intra-abdominal fat distribution and body circumference at the level of the umbilicus. Fourth, it would be very interesting to correlate the BMI and fat-density with clinical outcomes like BCR or disease-specific survival instead of pretreatment Gleason score, because these are better prognostic markers for PC aggressiveness. However, in this study the follow-up was much too short (mean < 20 months) to evaluate these outcomes. Our analysis showed a correlation between BMI and CFT and periprostatic fat-density. The correlation between BMI and CFT was much stronger than that between BMI and fat-density. Interestingly, 31% of the patients with a normal BMI had a fat-density above the 75th percentile, compared with only 10% of the obese patients who had a fat-density below the 25th percentile of the fat-density. Thus, measurements on the outside of the body (BMI) do not always reflect the inner fat distribution measured on a CT scan. It is attractive to speculate that if this study were performed in a group of patients with more high-grade tumours, e.g., a group treated with external radiotherapy, these parameters could be a better prognostic marker for tumour characteristics than BMI, especially in patients with a low BMI.
However, further studies are needed to identify the real value of these CT fat measurements as correlates of PC aggressiveness.

Conclusion

Periprostatic fat and fat-density were not of any value in predicting PC aggressiveness in patients receiving brachytherapy. However, 31% of the patients with a normal BMI had a fat-density above the 75th percentile of the periprostatic fat-density. More studies, including patients who have more aggressive PC, are needed to identify the true value of fat measurement on a CT as a correlate of PC aggressiveness and/or a predictor of treatment failure.

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
2014-10-01T00:00:00.000Z
2009-12-22T00:00:00.000
{ "year": 2009, "sha1": "102227938f24733f1b386fa76e85122f7f59afd4", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00345-009-0497-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "102227938f24733f1b386fa76e85122f7f59afd4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
41411559
pes2o/s2orc
v3-fos-license
Amphetamine-Like Weight Reduction Drug Induced Acute Cardiomyopathy with Left Ventricular Thrombosis

A 37-year-old female patient was admitted due to dyspnea on exertion and peripheral edema. For one and a half years, the patient had been taking various drugs and supplements to reduce weight, including amphetamine-like drugs. The patient had no major cardiovascular risk factors except three pack-years of smoking. A chest computed tomography showed a 1.7 cm diameter, capsulated space-occupying lesion in the left ventricle (LV), and 2-dimensional echocardiography showed LV systolic dysfunction (left ventricular ejection fraction [LVEF], 30%) with a mobile cystic mass (1.1×1.8 cm) attached to the LV apex, which had increased in size and number by the next day, even with low-dose low-molecular-weight heparin. With an increased dose of anticoagulation medication and heart failure management with diuretics and an angiotensin II receptor blocker, LV dysfunction recovered and the LV thrombus disappeared. (Ewha Med J 2014;37(Suppl):S37-S40) Received June 27, 2014; Accepted August 21, 2014

Introduction

Obesity has become one of the main health issues, and various attempts to reduce weight have been tried. Many people take drugs that suppress appetite and the absorption of food and increase the metabolic rate. However, some of these drugs have unpredictable complications, including cardiovascular side effects. Some legally marketed amphetamine-like drugs, such as fenproporex and diethylpropione, are also prescribed for weight reduction for their appetite-reducing effect, but these drugs share the side effects of amphetamines [1]. There are multiple reported cases of acute myocardial infarction and stroke after the use of amphetamines [2][3][4][5][6][7][8][9][10]. However, to our knowledge there are no published case reports of intracardiac thrombosis with an amphetamine-like drug without myocardial infarction. We report the case of a 37-year-old female presenting with a left ventricle (LV) thrombus after amphetamine intake.

Case

A 37-year-old female patient with dyspnea on exertion and peripheral edema was admitted to our hospital. For one and a half years, the patient had been taking various drugs and supplements to reduce weight, such as phentermine hydrochloride, theae folium powder, orthosiphon powder, saposhnikoviae radix powder, and other herbal medicines. Two weeks before admission, a progressive cough, febrile sense, and myalgia had developed, and dyspnea on exertion (New York Heart Association [NYHA] class III~IV) and peripheral edema developed a week before admission.
The patient had no history of hypertension, hyperlipidemia, diabetes mellitus or other cardiovascular or cerebrovascular diseases, and no family history of cardiovascular disease. She did not take oral contraceptives, but she was a current smoker with a three pack-year smoking history. At admission, her blood pressure was 100/60 mmHg, heart rate 80 beats per minute, respiratory rate 20 per minute and body temperature 36.9 °C. The patient appeared acutely ill with an alert mental state. A crackling sound was auscultated over both lower lung fields, and her heartbeat was regular, without significant murmur. The examination of her abdomen was unremarkable; grade IV pitting edema was observed. Her body weight had increased from 64 kg to 79.2 kg over one week. On simple chest radiography, pulmonary edema was seen in both lung fields with cardiomegaly (Fig. 1), and electrocardiography (ECG) showed sinus tachycardia with poor R-wave progression in the chest leads and T-wave inversion in the limb leads. Under the impression of congestive heart failure, echocardiography was performed. A mobile cystic mass (1.1×1.8 cm) was attached to the septal area of the apex of the LV (Fig. 3A). Follow-up echocardiography, performed the next day, revealed newly developed cystic masses attached to the papillary muscle (Fig. 3B). With this finding, we concluded that the cystic mass was a thrombus and increased the dose of LMWH. After that, warfarin was also given. Follow-up echocardiography two days afterwards showed improved LV dysfunction (LVEF 60%) and remarkably reduced and collapsed previous cystic mass-like lesions. On the eighth day in hospital, LV function was completely normalized, and the mass-like lesion had almost disappeared (Fig. 3C). The patient was discharged on warfarin and an ARB and was followed up in the outpatient department without any adverse events over six months.

Discussion

Amphetamine-like medicines are popular as weight reduction drugs for their appetite-suppressing effects. Amphetamines are sympathetic activators that stimulate α- and β-adrenergic receptors and have various effects on the cardiovascular system, such as hypertension and tachyarrhythmia [10]. Acute cardiomyopathy associated with amphetamine or similar drug use has also been reported [11,12]. In this case, the patient presented with acute cardiomyopathy […] [15]. We report a case of multiple thrombi with acute cardiomyopathy after long-term treatment with a weight reduction drug, successfully treated with conventional heart failure medication and anticoagulation. Amphetamine use is illegal in most countries, but amphetamine-like drugs that can share its cardiovascular complications are widely used legally. When these drugs are used, more attention should be paid to cardiovascular complications.

Fig. 3. Two-dimensional echocardiography follow-up. (A) An initially mobile cystic mass (1.1×1.8 cm) is attached to the septal area of the apex of the left ventricle, and a small mass-like lesion is attached to the papillary muscle (arrows). (B) The number and size of the masses increased in one day (arrows). (C) The masses disappeared on discharge.

[…] the diagnosis of LV thrombus, and it appears as an echo-dense mass with a definite margin during the whole course of systole and diastole. Echogenicity may be homogeneous, but sometimes appears with central lucency, which mimics a cystic mass
2017-08-15T16:11:42.602Z
2014-12-01T00:00:00.000
{ "year": 2014, "sha1": "9af517f0ac385efebb8d59195751e29fca5eec71", "oa_license": "CCBYNC", "oa_url": "https://synapse.koreamed.org/upload/SynapseData/PDFData/0201emj/emj-37-S37.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "9af517f0ac385efebb8d59195751e29fca5eec71", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266927699
pes2o/s2orc
v3-fos-license
Genetic Structure and Population History of the Zaisan Toad-Headed Agama (Phrynocephalus melanurus) Inferred from Mitochondrial DNA

Simple Summary

The effects of Quaternary climatic oscillations on the lineage diversification and demography of organisms in drylands have drawn much attention recently. However, little is known about the processes that shaped species' spatial genetic structure in areas such as arid Central Asia, particularly for animals in Northwest China. Here, we investigated the genetic structure and population dynamics of the Zaisan toad-headed agama (Phrynocephalus melanurus) for the first time by combining mtDNA phylogeography and species distribution models (SDMs) with range-wide sampling. Phylogenetic analyses recovered two main clades, with the one from the Dzungar and Alakol basins being geographically sub-structured into several groups. Lineage diversification took place in the Pleistocene, coinciding with the drastic aridification caused by Quaternary climatic transitions and intense activity of the Tianshan Mountains. Moreover, populations of the Dzungar Basin experienced past expansion and parapatric divergence shaped by isolation-by-distance. The SDMs unveiled the species' range dynamics since the late Pleistocene, showing expansion during the interglacial and contraction during the last glacial maximum and late Holocene periods. Future distribution projections demonstrated drastic habitat loss, underlining the significance of conservation efforts. Our findings highlight the value of combining genetic approaches with environmental data when evaluating the effects of Pleistocene climatic oscillations.

Abstract

The agamid lizard Phrynocephalus melanurus is restricted to Northwest China (Dzungar Basin) and adjacent Eastern Kazakhstan (Zaisan and Alakol basins). To elucidate the phylogeography of P. melanurus, we obtained mitochondrial DNA COI segments from 175 sampled lizards from 44 localities across the whole distribution. Phylogenetic analyses revealed two main clades comprising five geographically structured lineages (I, IIa, IIb1, IIb2, and IIb3) that fit an isolation-by-distance (IBD) model. The divergence from the most recent common ancestor was dated to ~1.87 million years ago (Ma). Demographic analyses demonstrated lineage-specific responses to past climate change: stable populations for Clade I and Subclade IIb1, and past population expansion for IIb3 since 0.18 Ma. Bayesian phylogeographic diffusion analyses detected initial spreading in the vicinity of the Saur Mountains, approximately 1.8 Ma. Historical species distribution models (SDMs) projected expansion of the suitable habitat in the last interglacial and shift and contraction in the last glacial maximum and Holocene epochs. The SDMs predicted a drastic reduction in suitable area throughout the range as a response to future climate change. Our findings suggest that the evolution of P. melanurus followed parapatric divergence with subsequent dispersal and adaptation to cold and dry environments during the Quaternary. Overall, this work improves our understanding of the lineage diversification and population dynamics of P. melanurus, providing further insights into the evolutionary processes that occurred in Northwest China and adjacent Eastern Kazakhstan.
Introduction Historical geological events and climate changes, coupled with substantial environmental heterogeneity, have shaped the phylogeography of organisms across the globe. This complex interplay may have significantly influenced the processes of diversification and speciation of local biota in drylands [1,2]. It has been recognized that climate fluctuations during the Quaternary greatly impacted the distribution of arid-adapted organisms [3][4][5][6]. Thus, many species adapted to cold and dry environments have recently attracted much attention in the fields of biogeography and phylogeography [7][8][9][10][11]. One example of such drylands is arid Central Asia. It is well accepted that the late Cenozoic progressive aridification of Central Asia is one of the most prominent climate changes in the Northern Hemisphere [12][13][14], resulting in the emergence of diverse arid landscapes, including sandy deserts and rocky deserts like the Gobi [15]. Meanwhile, the stepwise desertification of Northwest China (NW China) contributed to the emergence of the Gurbantunggut Desert, the second largest sandy desert in China [16]. This desert is a major component of the Dzungar Basin, which is located in the northern part of Xinjiang, China. Being an intermontane basin, the Dzungar Basin is bounded by the Dzungarian Alatau, Tarbagatay, and Saur mountains to the west, the Tianshan Mountains to the south, and the Altai Mountains to the north (see Figure 1). The formation of the sandy deserts and the Tianshan Mountains has played a crucial role in diversifying the regional environmental conditions [17]. This complex landscape has shaped rich biodiversity and makes the area an ideal system for examining species evolution from a phylogeographic perspective [18][19][20][21]. As such, plant phylogeographic studies in arid Northwest China have gained broad prominence [22][23][24]; however, animal phylogeographic studies, particularly those of lizards, are still relatively limited [8,11,21,25]. In fact, lizards are useful for studying the effects of environmental and geological conditions on phylogeographic structure owing to their low dispersal abilities and susceptibility to climate fluctuations [26,27]. Nevertheless, no general conclusion has been drawn on how the flora and fauna of Northwest China responded to the Quaternary climatic oscillations [23,28]. Phrynocephalus melanurus Eichwald, 1831, also known as the Zaisan toad-headed agama, is a member of the Phrynocephalus guttatus complex that is highly adapted to sandy and gravel deserts [29]. Its distribution spans from the Zaisan and Alakol (Dzungar Gate vicinity) basins in Eastern Kazakhstan (E KZ) to the Dzungar Basin in Northwest China [29][30][31]. Through adaptation to changing environments, the lizard has become specialized in accommodating the heterogeneous structure of its habitat, forming "substrate races" [32]. This intricate process has given rise to an array of morphological variations with discernible phenotypic disparities, which have historically led to classification under diverse taxonomic designations (reviewed by [33] and references therein). The species status of P. melanurus has been disputed, and it was historically classified as a subspecies of P. guttatus [33][34][35]. Recent work suggested that the morphological differences between P. melanurus and P.
guttatus may be corroborated by genetic data, with a high mean sequence divergence of 6.9% in the mtDNA ND2 gene [3] and 2.48% in the mtDNA COI gene [34]. Based on limited sampling (eight individuals restricted to two localities), Melville et al. [3] initially found that P. melanurus comprises one lineage from the Dzungar Basin (Xinjiang, China) and another from the Zaisan Lake area. Dunayev et al. [34] confirmed the differentiation of P. melanurus in Kazakhstan into two lineages inhabiting the Zaisan and Alakol depressions. However, one potential shortcoming of Dunayev et al. [34] lies in the lack of adequate sampling of P. melanurus from its whole distribution. There are two passages connecting Dzungar with the Kazakh Steppe (Great Dala), through the Dzungar Gate (western) and the Irtysh valley (northwestern) (see Figure 1). Thus, agamas could migrate between Dzungar and adjacent Kazakhstan despite the partial isolation of this region by mountains. Therefore, our analyses were motivated by the goal of obtaining a more detailed picture of the genetic structure and tracing the population history of P. melanurus across its range. We used a phylogeographic approach complemented with species distribution models (SDMs). Specifically, we aimed to (i) document the phylogeographic structure and the timing of genetic diversification within the Zaisan toad-headed agama; (ii) reconstruct the center of origin and colonization routes; and (iii) explore the relationship between historical demographic changes and past climate fluctuations. Analyses of mtDNA sequences tell only one part of a potentially more complex story [36,37], yet they provide valuable insights into the evolutionary history of a species, including the consequences of habitat changes, impacts of climate fluctuations, divergence, and colonization. To incorporate the data published by Solovyeva et al. [38], and especially the data from the Zaisan Basin [34], we amplified the mitochondrial COI gene segment in this study. Population Sampling A total of 165 individuals of P. melanurus were collected from 36 sites across its whole range from 2008-2019 (Figure 1), including 1 sample from the type locality (Kyzylkum sands (previous name, Bukon sands) of the Kurchum district, Eastern Kazakhstan); the range covers the Northern Xinjiang region of China and the adjacent eastern part of Kazakhstan. Additionally, 10 samples of P.
melanurus from Kazakhstan (the Zaisan and Alakol basins) were taken from previous studies [34,38]; 2 outgroup sequences were retrieved from GenBank (https://www.ncbi.nlm.nih.gov) (accessed on 15 March 2021): Phrynocephalus alpherakii (KF691705) and Phrynocephalus guttatus guttatus (MK461381). Detailed sampling information is listed in Supplementary Materials, Table S1. Animals were euthanized with an overdose of sodium pentobarbital via intraperitoneal injection, and liver tissues were extracted and preserved in 95% ethanol following the animal-use protocols approved by the Chengdu Institute of Biology (CIB), Chinese Academy of Sciences. Liver or tail tissue from specimens was fixed with 95% ethanol and stored at −20 °C before DNA extraction. The voucher specimens for all populations are deposited in CIB. DNA Isolation, PCR Amplification, and Sequencing Genomic DNA was extracted from either the 95% ethanol-preserved tail or liver tissue samples using universal high-salt procedures [39]. Amplification of the COI gene fragment was implemented using the primer pair PhCOIf (5′-AATTCAGCCATCTTACCATGTCAAC-3′) and PhCOIr (5′-TATACTTCTGGGTGGCCAAAGAA-3′), which was designed particularly for this study. The length of the amplified sequences was 680 bp. Each PCR reaction contained 25 µL of Taq PCR Master Mix (Omega Bio-Tek), 2 µL of each primer (0.4 µM), 1-2 µL of genomic DNA (~50 ng), and 19-20 µL of double-sterilized water, for a total reaction volume of 50 µL. The PCR protocol involved an initial denaturation at 94 °C for 4 min; followed by 35 cycles of 94 °C for 30 s, 55 °C for 30 s, and elongation at 72 °C for 53 s; and a final extension at 72 °C for 10 min. The PCR products were assessed using 1% agarose gel electrophoresis, purified, and sequenced on both strands with the PCR primers. All fragments were sequenced with an ABI 3730 automated DNA Analyzer at Sangon Biotech (Shanghai, China). Phylogenetic inference was used to establish the relationships among the observed haplotypes and their associated populations of P. melanurus. We used Bayesian inference (BI) and maximum likelihood (ML) approaches to reconstruct phylogenetic relationships among the mitochondrial haplotypes. Bayesian analyses were implemented using MrBayes v.3.2.6 [44] employing partition-specific modeling. The best-fit models of nucleotide substitution for each partition scheme were selected using PartitionFinder v.2.1.1 [45]. Three codon partitions and their corresponding substitution models for the COI gene sequences were proposed: first codon, K80 + I; second codon, HKY; third codon, GTR + G. Two simultaneous parallel runs were performed with four Markov chains per run (three heated and one cold, using default heating values) for 10 million generations, with a sampling frequency of every 1000 generations. Convergence of the runs was assessed by effective sample sizes (ESS) (≥200) using Tracer v.1.7 [46] and an average standard deviation of split frequencies <0.01. The first 25% of trees were discarded as burn-in, and a 50% majority-rule consensus tree was constructed to calculate the posterior probabilities (PPs) of nodes. Partitioned maximum likelihood (ML) analyses were carried out in RAxML-HPC v.8.2.4 [47] with the same partitioning strategy as for BI. The GTR + G model was used for all subsets, and 100 replicate ML inferences were performed for each analysis. Each inference was initiated with a random starting tree, and nodal bootstrap support (BS) was assessed with 1000 pseudoreplicates [48].
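For readers reproducing the amplification step, primer properties can be screened quickly before ordering oligonucleotides. The following is a minimal Python sketch using the PhCOIf/PhCOIr sequences given above; the Wallace rule applied here is only a rough approximation for primers of this length, not the thermodynamic model a dedicated primer-design tool would use.

```python
# Quick sanity check of the study primers: length, GC content, and a
# rule-of-thumb melting temperature (Wallace rule: 2*(A+T) + 4*(G+C) in C).
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a primer sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Approximate melting temperature; only a rough screen for ~20-25-mers."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

for name, seq in [("PhCOIf", "AATTCAGCCATCTTACCATGTCAAC"),
                  ("PhCOIr", "TATACTTCTGGGTGGCCAAAGAA")]:
    print(f"{name}: {len(seq)} nt, GC={gc_content(seq):.2f}, Tm~{wallace_tm(seq)} C")
```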
Divergence Times Estimation To estimate the divergence times for mitochondrial haplotypes of P. melanurus, we conducted Bayesian dating in BEAST v.1.8.4 [49]. Owing to the lack of a reliable fossil record and of a COI substitution rate for P. melanurus, we applied as secondary calibration points the estimated mean ages of the internal nodes of the P. guttatus-versicolor species complex (6.5 million years ago, Ma) and the P. guttatus species complex (5.0 Ma) from a previous study [40]. Additional sequences were retrieved from GenBank (Supplementary Materials, Table S1) and incorporated with our dataset in order to expand the dataset and re-calculate the divergence age of the P. guttatus species group. The taxon sets were defined in two groups: the P. guttatus-versicolor and P. guttatus groups. The HKY + G site substitution model was used, under the uncorrelated lognormal clock model. The calibration was implemented as a normal prior with a standard deviation equal to 1.0; mean values were set to 6.5 Ma and 5.0 Ma, respectively. A birth-death prior was used for the tree. A non-partitioned dataset was used for a single run. The analysis was run for 10 million generations, with a random starting tree. Convergence of the runs was assessed by ESS ≥ 200. The first 10% of generations were discarded as burn-in using LogCombiner, and TreeAnnotator was used to infer the ultrametric tree [49]. We interpreted PP ≥ 0.95 as strong support [50]. Final trees were visualized and edited in FigTree v.1.4.4 (http://tree.bio.ed.ac.uk/software/figtree/) (accessed on 21 April 2021) [51]. Genetic Diversity and Population Genetic Structure Analyses Haplotypes were extracted using DnaSP v.6.0 [43]. Population genetic diversity was quantified using the number of polymorphic sites, number of mutations (m), nucleotide diversity [52], haplotype diversity, number of haplotypes (Nh), and average number of nucleotide differences (k), which were calculated in DnaSP. Uncorrected pairwise sequence divergences (p-distances) between and within phylogroups were calculated using MEGA v.X [53]. To infer the geographic distribution and relationships of the P. melanurus haplotypes, a median-joining haplotype network was constructed using PopART v.1.7 [54]. The grouping of the populations was performed via spatial analysis of molecular variance (SAMOVA) using SAMOVA v.2.0 software [55]. The analysis was run for values of K ranging from 2 to 8. The number of groups was selected according to the FCT value (the differentiation among groups) using the sum of squared differences between haplotypes, with 100 simulated annealing processes. Configurations of K that produced one or more single-population groups were excluded [56,57]. To estimate genetic variation within populations, among populations within groups, and between groups (phyloclades, and as identified by SAMOVA), AMOVA was carried out in the program Arlequin v.3.11 [58], with significance tests based on 1000 permutations. A subdivision into two regions represented the Zaisan Basin population and the Dzungar Basin population, in accordance with Clade I and Clade II, respectively. Another AMOVA test for four groups, representing the four geographic regions North (Clade I), Central (Subclade IIb3), West (the group containing sampling sites 3, 17-22, 25, 27-29, 38-39, 41-42), and East (Subclade IIb2), was also conducted.
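The diversity indices just described (Hd and π, reported in Tables 1 and 2) follow Nei's standard estimators. Below is a minimal Python sketch of the two calculations on toy aligned haplotypes; it is a conceptual illustration of what DnaSP computes, not DnaSP's own implementation, and the sequences are invented for demonstration. Note that π, the mean pairwise difference per site, is also the within-group uncorrected p-distance.

```python
from collections import Counter
from itertools import combinations

def haplotype_diversity(seqs: list[str]) -> float:
    """Hd = n/(n-1) * (1 - sum(p_i^2)) over haplotype frequencies p_i (Nei 1987)."""
    n = len(seqs)
    freqs = Counter(seqs)
    return n / (n - 1) * (1 - sum((c / n) ** 2 for c in freqs.values()))

def nucleotide_diversity(seqs: list[str]) -> float:
    """pi = mean pairwise nucleotide differences per site over all pairs."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

# Toy aligned haplotypes (illustrative only, not study data).
toy = ["ACGTACGT", "ACGTACGT", "ACGTTCGT", "ACCTACGA"]
print(f"Hd = {haplotype_diversity(toy):.3f}, pi = {nucleotide_diversity(toy):.4f}")
```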
To test the spatial genetic structure of populations, isolation by distance (IBD) [59] was assessed by testing the correlation between geographic distance and pairwise FST/(1 − FST) using the Mantel test in GenAlEx v.6.5 [60]. The procedure was implemented separately on Clades I and II with 999 permutations, under the default parameters. Geographic distances among populations were estimated in R using the package vegan v.2 (https://CRAN.R-project.org/package=vegan, accessed on 21 April 2021) [61]. Inference of Demographic History Mismatch distributions of pairwise nucleotide differences were used to test for sudden demographic expansion in the matrilines of P. melanurus using DnaSP, with 10,000 coalescent simulations. The goodness-of-fit between the observed and expected distributions was tested by calculating Harpending's raggedness index (Rg) and the sum of squared deviations (SSD). Additionally, three types of neutrality test statistics were applied as further assessments of possible population expansion: Tajima's D [62], Fu's Fs [63], and the R2 statistic [64], calculated in DnaSP. Significant values of Tajima's D, Fu's Fs, and R2 statistics were taken as evidence of population expansion. Bayesian skyline plots (BSP) [65] were implemented in BEAST v.1.8.4 [49] to estimate the changes in effective population size on an evolutionary time scale. We applied a strict molecular clock with a mean substitution rate (0.012 per site per million years) obtained from the BEAST analysis described above. Owing to the small sample size (three sequences) for Subclade IIa, BSP analysis was not conducted for it. We used the HKY substitution model with empirical base frequencies and three partitions by codon position. Tree parameters were set to a randomly generated starting tree and a piece-wise constant Skyline model with the default number of groups (m = 10), except for Subclade IIb1, where we reduced the number of groups to m = 5 due to its relatively small sample size of 15 sequences. This prevents over-parameterization of the model, which would lead to biased results [66]. The initial 10% of steps were discarded as burn-in. We obtained consistent demographic inferences across three replicates of the analysis, visualized in Tracer v.1.7 [46]. Phylogeographic Diffusion in Continuous Space Following Shi et al. [10], we reconstructed the spatiotemporal history of P. melanurus throughout its distribution using the approach of Bayesian phylogeographic diffusion [67] in continuous space implemented in BEAST v.1.10.4 [49]. We analyzed a total of 175 individuals representing 44 sampling localities (Supplementary Materials, Table S1). We applied a Yule speciation process as the tree prior, a Cauchy Relaxed Random Walk (RRW) as the continuous trait model for spatial diffusion, and a strict molecular clock model with a substitution rate of 0.012 per site per million years estimated in this study. Geographic coordinates were provided for all sequences, adding a random jitter with a ±0.50° window size to create unique coordinates for individuals collected at identical sites. Analyses were run for 50 million generations, sampling every 5000 generations. The convergence of the MCMC chains was checked using Tracer v.1.7.1 [46] to ensure adequate mixing. Finally, the sampled trees were annotated using TreeAnnotator v.1.10.4, and the final tree was analysed in SpreaD3 v.1.0.7 [68] to visualize the ancestral area for each lineage.
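The IBD test above correlates two distance matrices, whose entries are not independent; this is why a permutation test is required rather than an ordinary correlation test. A minimal NumPy sketch of the Mantel procedure follows, with 999 permutations as in the GenAlEx analysis; the 4×4 matrices are toy values, not the study's population data.

```python
import numpy as np

def mantel(x: np.ndarray, y: np.ndarray, n_perm: int = 999, seed: int = 1):
    """Pearson r between off-diagonal entries of two symmetric distance
    matrices; one-tailed p-value by joint row/column permutation of x."""
    idx = np.triu_indices_from(x, k=1)
    r_obs = np.corrcoef(x[idx], y[idx])[0, 1]
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        r = np.corrcoef(x[perm][:, perm][idx], y[idx])[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy matrices: linearized genetic distance (Fst/(1-Fst)) vs. geographic km.
gen = np.array([[0, .10, .30, .40], [.10, 0, .20, .35],
                [.30, .20, 0, .15], [.40, .35, .15, 0]])
geo = np.array([[0, 50, 200, 300], [50, 0, 150, 260],
                [200, 150, 0, 90], [300, 260, 90, 0]])
r, p = mantel(gen, geo)
print(f"Mantel r = {r:.3f}, one-tailed p = {p:.3f}")
```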
Past, Present, and Future Distribution Models To reconstruct potential distribution loss-expansion scenarios for P. melanurus, we utilized occurrence data from our field surveys, the resources of the Global Biodiversity Information Facility (GBIF: http://www.gbif.org/) (accessed 24 April 2023) [69], and records and information from the published literature. The occurrence points comprised 44 localities in total, of which 32 were from Northwest China (Xinjiang Uygur Autonomous Region) and 12 were from Eastern Kazakhstan (Zaisan Basin, 8; Alakol Basin, 4). The GBIF data were carefully inspected, and distribution sites without coordinates, with low precision (decimal < 2), or with duplicate coordinates were removed from the initial data. To ensure that the input occurrence data were spatially uncorrelated and to reduce sampling bias, all sampling records were rarefied at a spatial distance of 10 km using SDMtoolbox v.1.1c [70] in ArcGIS. After rarefication, 38 sampling sites were retained (Supplementary Materials, Table S4) and employed for the modelling. The SDM was executed in MaxEnt v.3.3.3 using a maximum entropy algorithm [77]. MaxEnt analyses were carried out under the following parameters: 70% of the species records were used for training and 30% for testing the model, with 50 bootstrap replicates. We used the ENMeval package [78] in R to manage model complexity and determine the optimal combination of MaxEnt feature classes and regularization multipliers. The optimal model had a regularization multiplier of 0.1 and a linear/quadratic (LQ) feature class (Supplementary Materials, Table S3). The remaining parameters were kept at their defaults. We employed the jack-knife test and the built-in response curves [79] of the MaxEnt software to assess the individual contributions of each environmental variable in our models. Additionally, to identify climate extrapolation across different time periods, we employed the multivariate environmental similarity surfaces (MESS) analysis [80], which is integrated into the MaxEnt software. Since our models were constructed based on present climate variables from the period 1960-1990, results must be interpreted with caution for areas beyond the current climate range [80]. The MESS analysis provides a value on a scale of −100 to 100, where negative values indicate regions with novel variable values, and a larger absolute negative value signifies a greater deviation from present conditions. A value of zero suggests variable conditions right at the out-of-range threshold. Positive values indicate similarity between variables from different time periods and the present, with a value close to 100 indicating a closer resemblance to present conditions [80]. The importance of each variable was assessed based on the percent contribution values reported in MaxEnt's output files. The area under the receiver operating characteristic curve (AUC), which ranges from 0.5 to 1.0, was used to evaluate the reliability of the prediction results; AUC > 0.7 indicates a fair model. The potentially suitable distribution area for each time period was calculated in ArcGIS based on SDMtoolbox [70]. Phylogenetic Relationships Bayesian inference and ML analyses produced highly congruent topologies, with only minor conflicts at recent nodes. Thus, only the BI tree, with both PP and BS from ML, is presented (Figure 2). Two main Clades were well supported, with high PP and BS values.
Population Genetic Structure A median-joining network demonstrated the relationships of the 50 haplotypes (Figure 4). The main feature of the haplotypes' distribution was the occurrence of an apparent geographic structure, similar to the phylogenetic reconstruction, with six structured haplogroups. A star-shaped network in IIb3, in which many haplotypes differed from H6 by only one or two mutations, suggested that IIb3 experienced a population expansion event. Uncorrected genetic distances (p-distances) ranged from 1.4-3% between Clades/Subclades (Table 3). The analysis of spatial genetic structure showed that the FCT values did not reach their highest differentiation among groups at K = 5; however, one or more groups contained a single population when K > 5. Therefore, we retained the configuration of K = 5, with an overall FCT = 0.771 under the Tamura molecular distance model. Group 1 contained populations 1-12; Group 2 incorporated populations 13-16 and 23; Group 3 comprised populations 17-22, 25, and 28-29; Group 4 included populations 24, 26, and 27; Group 5 contained populations 30 to 36, which is concordant with Clade I. The AMOVA analyses were performed on two regions (the Zaisan and Dzungar basins), Clade I and Clade II, and on four geographic regions (North, Central, West, East). The hierarchical analysis demonstrated that 73.94% or 73.11% of the variation was explained by the variation among the two regions or the four groups, respectively. The fixation index over all examined data showed significant differences (p ≤ 0.001) (Supplementary Materials, Table S5). The Mantel test for IBD was conducted to estimate the correlation between the genetic distance and geographic distance of P.
melanurus is limited by distance, then the genetic and geographic distances should be positively correlated, producing a pattern of isolation by distance. Applying the Mantel test, a weak but significant positive correlation (r = 0.20, p = 0.03) between genetic and geographic distance was observed among populations in the Zaisan Basin (Clade I). Meanwhile, a moderate and significant IBD (r = 0.412, p = 0.01) was observed among populations in the Dzungar Basin (Clade II) (Supplementary Materials, Figure S2). Overall, the correlation between genetic and geographic distance was confirmed by the positive slope of the first-order regression line, which is significantly different from zero (R2 = 0.6298, p = 0.01) for the entire sample (Figure S2). Historical Demography Demographic analysis was conducted by applying different approaches to all groups except IIa, due to its small sample size. The neutrality tests resulted in non-significant values of Tajima's D and Fu's Fs for all populations except Subclade IIb3 (Table 4). Notably, Subclade IIb2 is characterized by a statistically non-significant positive value of D and a negative Fs statistic, which indicates a lack of rare alleles and a decrease in population size or balancing selection. Conversely, Subclade IIb3 showed past population expansion based on large negative D and Fs values. Significant small values of the R2 statistic were captured for Clade I and Subclade IIb3, which also supported population growth. Mismatch distribution analysis of Clade I and Subclade IIb3 produced unimodal curves, suggesting rapid population expansion (Supplementary Materials, Figure S3), which was additionally supported by the non-significant values of the Rg and SSD indices, as well as by small positive R2 statistics (Table 4). Meanwhile, Subclades IIb1 and IIb2 showed the same modality with non-significant Rg and SSD values. Multimodal curves were observed in Clade II and when all populations were pooled together, which indicates the rejection of population expansion. The BSP analysis demonstrated population stability through time in Clade I and Subclades IIb1 and IIb2. For Subclade IIb3, the BSP detected a past population expansion starting at approximately 0.18 Ma (Figure 5). The Spatiotemporal Diffusion for P. melanurus For P. melanurus, the ancestral area was estimated as the territory of the current Hoboksar Mongol Autonomous County, which is in the Tacheng prefecture of Xinjiang, Northwest China. The initial colonization event started approximately 1.8 Ma (Figure 6A). The subsequent colonization route followed multiple directions: by 1.15 Ma the population had reached the Zaisan Basin and the Karamay region, spread westward through Ebinur and Jinghe to the Dzungar Gate territory, and eastward to the Fukang-Qitai region (Figure 6B). At ~0.93 Ma, local spreading was inferred along the main directions, reaching the Ulungur territory and the Kuytun and Jimsar regions (Figure 6C). The final dispersal, throughout the vast territory of the Zaisan and Dzungar basins and reaching the Alakol Basin in E KZ, occurred around 0.63 Ma (Figure 6D).
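The mismatch distributions underlying Figure S3 are simply frequency spectra of pairwise nucleotide differences: unimodal spectra suggest expansion, while multimodal spectra suggest stability or structure. A minimal Python sketch on toy haplotypes is given below; DnaSP additionally overlays the model-based expected curve and computes the Rg and SSD statistics, which are omitted here.

```python
from collections import Counter
from itertools import combinations

def mismatch_distribution(seqs: list[str]) -> Counter:
    """Frequency spectrum of pairwise difference counts over all pairs."""
    diffs = (sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2))
    return Counter(diffs)

# Toy aligned haplotypes (illustrative only, not study data).
toy = ["ACGTACGT", "ACGTACGA", "ACGTTCGA", "ACCTACGT", "ACGTACGT"]
for k, v in sorted(mismatch_distribution(toy).items()):
    print(f"{k} differences: {v} pairs")
```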
Potential Species Distribution Modeling The reconstructed potential distributions of P. melanurus for past and present-day projections are presented in Figures 7 and 8. The simulation results demonstrated good credibility, as indicated by AUC values of 0.906 ± 0.019 and 0.878 ± 0.031 for the training and testing datasets, respectively. The jack-knife analysis of regularized training gain for the retained environmental variables that contributed most to the distribution model is shown in Figure S4. The climatic variables contributing most to the current model were annual mean temperature (Bio1), 25.8%; annual precipitation (Bio12), 26.2%; mean temperature of the wettest quarter (Bio8), 18.3%; and mean diurnal temperature range (Bio2), 12% (Supplementary Materials, Table S2). This indicates that temperature and humidity played an important role in the potential geographic distribution pattern of P. melanurus. The response curves between the environmental variables and the predicted changes in occurrence are shown in Figure S5. There was a nonlinear relationship between the probability of occurrence and Bio1, which peaked when Bio1 was 75 °F. The response curves also showed a clear nonlinear relationship between the probability of occurrence and Bio3 and Bio12 (Supplementary Materials, Figure S5). The probability of occurrence decreased substantially when Bio2 ranged from 105 °F to 125 °F, and when Bio12 ranged from 100 mm to 600 mm. The examination of the response curve profiles for these variables indicates that P. melanurus occurs in temperate areas with low levels of precipitation. The projected LIG scenario showed a broad area of potentially suitable habitat for P. melanurus, concentrated mostly on the eastern margin of its distribution and connecting with the Zaisan Basin through the northern part of the Dzungar Basin. The modelled LGM climate reconstruction showed a shift of the potentially suitable area to the southwestern part, which apparently expanded toward the Alakol Basin through the Dzungar Gate. The late Holocene simulation was characterized by drastic habitat diminishment, with massive contraction occurring at the northern and southeastern periphery of Northern Xinjiang. The present-day scenario demonstrated a recovery of the northern and eastern habitat corridor to approximately its LIG extent (Figures 7 and 8); the highly suitable habitat for P. melanurus is concentrated in the north, west, and south parts of its range, which is consistent with all known up-to-date occurrence records.
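Two of the post-processing steps reported above are easy to express compactly: the AUC used to judge model credibility and the thresholding of the suitability raster used to compare areas between periods. The NumPy sketch below illustrates both with simulated scores and grids; the 0.5 threshold and all values are illustrative stand-ins for the MaxEnt/SDMtoolbox outputs.

```python
import numpy as np

def auc(presence: np.ndarray, background: np.ndarray) -> float:
    """AUC via the Mann-Whitney U statistic (ties counted as 0.5)."""
    wins = (presence[:, None] > background[None, :]).sum()
    ties = (presence[:, None] == background[None, :]).sum()
    return (wins + 0.5 * ties) / (presence.size * background.size)

def suitable_fraction(grid: np.ndarray, threshold: float = 0.5) -> float:
    """Share of raster cells whose suitability exceeds the threshold."""
    return float((grid > threshold).mean())

rng = np.random.default_rng(0)
pres = rng.beta(5, 2, 40)       # presence sites tend to score high
back = rng.beta(2, 5, 500)      # background sites tend to score low
grid_now = rng.random((100, 100))
grid_future = grid_now * 0.6    # toy "habitat loss" scenario
print(f"AUC = {auc(pres, back):.3f}")
print(f"suitable now: {suitable_fraction(grid_now):.2%}, "
      f"future: {suitable_fraction(grid_future):.2%}")
```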
All predicted future projections displayed broad-range habitat reduction in the northern, eastern, and southern regions of NW China. The habitat loss in the SW area (the Dzungar Gate and Alatau) is also important to note. The SSP2-4.5 scenario differed only slightly from the SSP1-2.6 scenario, with moderate habitat discontinuity between the W and SW sectors. However, the SSP5-8.5 scenario showed an irreversible loss of habitat throughout the whole range, preserving only the southwestern margin at the slopes of the Western Tianshan Mountains. Projection uncertainty and areas with non-analog climates were assessed using MESS as a quantitative measure. Overall, the MESS analyses showed that climatic conditions were analogous between the present and the different scenarios, except for the LGM and future periods (Figure 8; Supplementary Materials, Figure S6). Discussion The present study generates substantive mtDNA data for P. melanurus covering its whole distribution, and unravels its spatial genetic structure, demographic history, and divergence dates. Furthermore, the work provides the primary evidence for the species' ancestral center and subsequent colonization trajectories, emphasized by historical changes of the populations during the Pleistocene climatic oscillations. However, the results should be interpreted with caution, since this study relies on the genetic structure found in a single locus (mtDNA) rather than in multiple independently evolving loci, which might unravel a more precise evolutionary history of the species by taking into account potential gene tree discordances. Further genetic information, using fast-evolving markers (i.e., microsatellites) or large single nucleotide polymorphism datasets, is needed to better understand the evolutionary history of this species. Phylogeographic Pattern and Diversification History By utilizing mtDNA COI gene sequences in phylogenetic analysis, we revealed five groups within the two main Clades of P. melanurus, where most branches are highly supported (Figure 2). Clade I represents the Zaisan Basin lineage spanning from E KZ to the Altay Prefecture of N Xinjiang in China, while Clade II embodies the lineages of the Dzungar Basin (Figure 1). Genetic distances between the Zaisan lineage and the lineages of the Dzungar Basin ranged from 2.5 to 3%, which indicates subspecies status due to an ongoing lineage sorting process and/or gene flow. Similar findings were captured by Solovyova et al. [38] and Dunayev et al. [34], where the Zaisan lineage diverged 2.6% and 2.48% from that of the Dzungar Gate, respectively. Later, Solovyova et al. [40] contended for the recognition of P. melanurus from the Zaisan and Dzungar basins as distinct species based on p-distances of the COI gene, morphology, and geographic distributions. Macey et al. [81], based on the ND1-COI region of mtDNA, estimated the genetic distance between P. melanurus populations from the Dzungar and Zaisan basins (referred to in that study as P. salenskyi populations 1 and 2) at 2.21% [81]. The population from Northwest China was examined by Wang and Fu [82] and Melville et al. [3], based on the analysis of ND2 gene sequences. Their results demonstrated that the population of Northwest China is sister to the E KZ population, with 3.1-4.0% uncorrected sequence divergence between them. SAMOVA divided all sampled populations into five groups with distinct geographic distributions. Analysis of molecular variance (AMOVA) also suggested that the genetic variation in P.
melanurus was high among the groups (Supplementary Materials, Table S4). Haplotype 6 is dominant in the population of the northern Dzungar Basin and indicates a potential ancestral haplotype. Haplotype 3 is frequent in Subclade IIb1, while haplotype 2 is shared between IIa and IIb1, which might have resulted from secondary contact during the last interglacial cycles (Figure 4). All these facts demonstrate that there is distinct genetic differentiation among the geographic groups. Similar population structures were documented in previous studies of xerophyte plants [83,84]. The haplotype and nucleotide diversity indices of the Dzungar Basin population (Table 1) were significantly higher than those of the Zaisan Basin. This suggests that P. melanurus experienced an expansion and recent stepwise dispersal, which is also supported by the haplotype network analysis revealing a star-shaped topology in IIb3. Similar findings were reported for the desert scorpion Mesobuthus mongolicus [10], agamid lizards Phrynocephalus spp. [3,11], and the rapid racerunner Eremias velox [8]. The estimated divergence from the MRCA of P. melanurus occurred approximately 1.87 Ma (Figure 3), which falls into the early Pleistocene, a phase of intense uplift of the Himalayas and Tianshan Mountains [85,86]. This age is in accordance with that of Macey et al. [81], albeit slightly older than those of the previous studies of Solovyova et al. [40] and Dunayev et al. [34], which might be due to the limited sample sizes of P. melanurus analyzed by those authors. The Dzungar Basin lineage initiated divergence at ~1.3 Ma, with subsequent intraspecific differentiation in the Middle and Late Pleistocene (Figure 3). Notably, the Pleistocene climatic transitions created a uniquely arid environment in Northwest China [87], which may have shaped the genetic diversity of vertebrate populations, particularly through the isolation of desert-dwelling species [88,89]. Origin and Colonization The origin of P. melanurus has been debated for decades. Previously, researchers hypothesized that an ancestral form of the P. guttatus group originated in the eastern part of the Kazakh Upland and later spread to the Zaisan, Balkhash, and Ili basins, eventually settling in western Kazakhstan [32]. Melville et al. [3] assumed that the Irtysh valley could serve as a dispersal route for P. melanurus populations, while Golubev [29] proposed that P. melanurus may have penetrated into Kazakhstan from China through the Dzungar Gate in the Middle Pleistocene. However, our study indicates that the ancestral form of P. melanurus colonized the Zaisan Basin from the Chinese Dzungar. Bayesian phylogeographic diffusion analysis showed that the dispersal of P. melanurus populations from its projected ancestral area occurred in the middle Pleistocene, conditioned by the mid-Pleistocene climatic transition (Figure 6A,B). The subsequent expansions of deserts in the Dzungar Basin (0.65 Ma and 0.5 Ma) [16] opened up vast territories for the settlement of P. melanurus and promoted the spatial diffusion of the population in the Dzungar Basin at 0.63 Ma (Figure 6D).
Subsequently, following the mountain and foothill trails of the Saur-Tarbagatay and Alatau mountains, populations reached Ebi-Nur Lake (IIb1) and spread eastward to the modern settlement of Qitai (IIa). During that period, the activity of shallow rivers and temporary streams along the northern slopes of the Dzungar Alatau could have contributed to the formation of sandy-pebble desert landscapes that eventually developed into a relatively integral piedmont plume [90,91]. This plume may also have served as a pathway for lizards from Ebi-Nur Lake to penetrate the Alakol Basin via the Dzungar Gate, which accords with Golubev's assumption [29]. Lineage-Specific Response to Quaternary Climatic Oscillations We suggest that the historical climate has greatly influenced the population dynamics of P. melanurus in Northwest China. Demographic analyses of the lineages examined in this study reveal that population growth was captured in lineage IIb3, matching the glacial expansion model [26,92]. For Subclade IIb3, a star-like haplotype network, significant values of neutrality statistics, and a unimodal mismatch distribution evidenced a population expansion event (Figures 4 and S3). Moreover, BSP analysis detected a signal of past population growth in IIb3 that started at approximately 0.18 Ma and lasted through the LIG (Figure 5). SDM modelling of the LIG period demonstrated suitable habitat expansion at the eastern margins of the area, which supports the possibility of population expansion in the past (Figure 7). Clade I and Subclade IIb2 maintained constant populations over time, which also indicates the stability of the distribution area during the LGM. The development of aridification in Northwestern China in the late Pleistocene resulted in the enlargement of deserts in the LGM [93], which may in some cases have formed broader suitable habitats (edges of deserts and arid piedmont) for the expansion of arid-adapted species. As expected, the LGM climatic transition shifted the habitat of P. melanurus, forcing populations to shelter along the mountain slopes of the southwestern range and to expand through the Dzungar Gate into Kazakhstan (Figure 8). Suitable habitat expansions during the LGM were also noticed in other representatives of the herpetofauna of arid Central Asia, such as the Turpan racerunner (Eremias roborowskii) [8] and the sunwatcher toad-headed agama (Phrynocephalus helioscopus) [11]. In the late Holocene, the suitable habitat contracted due to warm and humid episodes that promoted the development of mesic biota [94].
It is widely accepted that glaciation in the Pleistocene generally forced species into refugia [95][96][97][98]. Refugia may generally be predicted as areas possessing high levels of genetic diversity, and they may have distinct characteristics, such as the presence of ancestral and unique haplotypes that have disappeared from other populations [96]. In our study, high levels of genetic diversity were detected in the populations of IIb3, IIa, and IIb1. Additionally, the potential ancestral haplotype (haplotype 6) comes from the Hoboksar region, which coincides with the projected ancestral area inferred from the Bayesian phylogeographic analysis. All these facts indicate that the Hoboksar region may have served as a conditional refugium where the population survived during the interglacial-glacial periods. Nevertheless, it should be noted that in the Eastern Tianshan Mountains, glaciers/permafrost did not advance below 2400 m a.s.l. during the LGM owing to the extreme continental climate [85]. As such, multiple glacial refugia may have existed along the Saur-Tarbagatay mountain system in the west, the Alatau and Western Tianshan in the southwest, and the Eastern Tianshan (Bogda Mountain) in the southeast. Future distribution modelling for 2070 (Supplementary Materials, Figure S6) proposes a scenario of drastic habitat contraction in NW China as a response to extensive aridification and probable urbanization growth. Despite large-scale aridification of land masses globally, arid-adapted species will face biological limitations or physical barriers that restrict their spatial distribution into suitable habitats [99]. Species incapable of migrating are bound to remain in their current habitats and either adapt to new conditions or face extinction. Similarly, for P. melanurus populations in the future, climate change can cause a reduction in population size, or even extirpation, and this matter may require future conservation effort. Conclusions To the best of our knowledge, this work represents the first range-wide phylogeography of P. melanurus integrating mtDNA and species distribution modeling. Our analyses demonstrate the effects of past climatic changes on the intraspecific divergence of P. melanurus. The combination of population genetics and SDMs also provides new insights for predicting the impact of future climatic changes on population dynamics. Our results reveal that the population of P. melanurus is geographically structured into two main Clades: the Zaisan lineage and the Dzungar lineage, with the latter further sub-structured into several groups. Genetic distances among these lineages demonstrate their relatedness, and thus preclude their recognition as distinct species, due to ongoing diversification processes and incomplete lineage sorting. Furthermore, our results suggest that the ancestral form of P. melanurus migrated from the northwest of the Dzungar Basin during the middle Pleistocene and subsequently spread throughout the Zaisan and Dzungar basins, with some populations accessing the Alakol Basin in Kazakhstan via the Dzungar Gate. Overall, taking into account that mtDNA is highly variable in natural populations due to its elevated mutation rate and can generate a signal about population history over short time frames, this work improves our understanding of the phylogeography of P. melanurus, providing further insights into the evolutionary processes that occurred in Northwest China.
Figure 1. Collection sites for the samples of P. melanurus used in this study. Sites are numbered as in Tables 1 and S1; phylogenetic lineages (Clades/Subclades) are highlighted by different colors. Dashed lines represent the soft boundaries isolating the populations of IIb2 (East), IIb3 (Central), and the western group, respectively. The background outlines the current distribution of P. melanurus according to Dunayev et al. [34].
Clade I (PP = 1.0, BS = 96%) covered the NW part of the Dzungar Basin, comprising the haplotypes of the Altay Prefecture in NW China and the Zaisan Basin in Eastern Kazakhstan. Clade II consisted of two matrilineal lineages, with IIa representing the haplotypes of Qitai (PP = 1.0, BS = 100%) and IIb representing the remaining sampling sites. As can be seen from Figure 2, the intra-relationships within IIb were unresolved. IIb1 included the haplotypes of Jinghe (PP = 1.0, BS = 100%); the West group, with low support (PP = 0.53), comprised the haplotypes of the western part of the Dzungar Basin, the adjacent Dzungar Gate, and the Alakol Basin area in Eastern Kazakhstan; the East Subclade IIb2 represented the haplotypes of Fukang (PP = 0.99), and the Central Subclade IIb3 those of Hoboksar (PP = 0.96). Overall, the genealogical structure was significant, reflecting a strong geographic association of each lineage.
Figure 2. The 50% majority-rule consensus tree for P. melanurus resulting from partitioned Bayesian analysis, with associations having less than 0.5 posterior probability collapsed. Bayesian posterior probabilities and maximum likelihood bootstrap values are shown. Nodal support below 50% is not shown in the tree. Highly supported Clades/Subclades (PP > 0.95) are given in bold. Dashes represent nodes with bootstrap support lower than 50% or non-existent nodes. Geographic attribution represents the Central, West, and East groups of the Dzungar Basin populations, respectively. Photo of P. melanurus by X.G.
Figure 3. Bayesian divergence time estimation for P. melanurus based on sequences of the mtDNA COI gene fragment. Median divergence values (in millions of years ago) are shown above nodes, with blue bars representing the 95% highest posterior densities (HPD); posterior probabilities (in italics) are shown below nodes.
Figure 4. Median-joining network of mtDNA COI haplotypes for P. melanurus. Colors correspond to haplotypes of the lineages/groups in Figure 2. Short bars crossing network branches indicate mutation steps; small dark circles indicate median vectors inferred by the PopART software. Circle size corresponds to the relative number of individuals sharing a particular haplotype.
Figure 5. Bayesian skyline plots (BSP) for the Clades/Subclades of P. melanurus. The x-axis is time in millions of years ago (Ma); the y-axis is on a logarithmic scale, in units of the product of female effective population size (Nef) and generation time (t).
Figure 6. Spatiotemporal diffusion of P. melanurus from the potential ancestral area inferred from Bayesian phylogeographic analysis. Four snapshots of colonization events through time are shown: (A) the Dzungar Basin origin; (B) subsequent multiple spreading northward to E KZ (Zaisan Basin), southward to the Karamay region, southeast to Qitai-Fukang, and southwest through Jinghe and Ebinur, reaching the Dzungar Gate area; (C) west-to-east spreading from the Zaisan Basin in E KZ to the Altay Prefecture in NW CN, and from Bortala Autonomous County to the Kuytun area; (D) the full dispersal event, with accession to the Alakol Basin in E KZ. Colored polygons represent the 80% HPD intervals, which indicate the uncertainty of the phylogeographic estimates for the nodes. Colored circles represent the samples of maternal lineages according to Figure 2.
Figure 8. Potentially suitable distribution areas in three different periods for P. melanurus. LH, late Holocene; LGM, last glacial maximum; LIG, last interglacial period. The habitat suitability index ranges from 0 to 1; the larger the number, the higher the adaptability of the habitat and the more suitable it is for the survival of P. melanurus. Negative MESS values are shown as similarity <0 by a dashed line, demonstrating areas without current equivalents of climatic conditions. Red dots indicate the localities of the occurrence data.
Table 1. Sampling, sample size, haplotype, and nucleotide diversity for 36 populations of P. melanurus. Abbreviations: KZ, Kazakhstan; CN, China; N, number of samples; X, longitude; Y, latitude; Hd, haplotype diversity; π, nucleotide diversity; SD, standard deviation.
Table 2. Molecular diversity indices of P. melanurus lineages. N, number of individuals; Nh, number of haplotypes; S, number of polymorphic sites; m, number of mutations; k, average number of nucleotide differences; Hd, haplotype diversity; π, nucleotide diversity.
Table 3. Uncorrected p-distances between groups of P. melanurus are shown below the diagonal. Standard error estimates, shown above the diagonal, were obtained by a bootstrap procedure (1000 replicates).
Table 4. Neutrality tests and mismatch distribution analyses of P. melanurus.
2024-01-11T16:16:39.171Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "6aeec6053b56b09d07d88a2d1f848fef3cdd866f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/14/2/209/pdf?version=1704772172", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0559fabdbb0b54d06da9a7feb5c2d4a73994362e", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
182118134
pes2o/s2orc
v3-fos-license
Food Science of Animal Resources This study evaluated Campylobacter jejuni risk in ground meat products. The C. jejuni prevalence in ground meat products was investigated. To develop the predictive model, survival data of C. jejuni were collected at 4°C–30°C during storage, and the data were fitted using the Weibull model. In addition, the storage temperature and time of ground meat products were investigated during distribution. The consumption amount and frequency of ground meat products were investigated by interviewing 1,500 adults. The prevalence, temperature, time, and consumption data were analyzed by @RISK to generate probabilistic distributions. In 224 samples of ground meat products, there were no C. jejuni–contaminated samples. A scenario with a series of probabilistic distributions, a predictive model and a dose-response model was prepared to calculate the probability of illness, and it showed that the probability of foodborne illness caused by C. jejuni per person per day from ground meat products was 5.68×10–10, which can be considered low risk. Introduction Jerky is a nutritional snack with a high protein content and light weight, and thus it is consumed by many people (Holley, 1985). It is also easy to store because of its long shelf-life and low Aw (Calicioglu et al., 2003). However, outbreaks of foodborne illness have occurred in many countries (Eidson et al., 2000; Keene et al., 1997). These outbreaks may be caused by cross-contamination during jerky processing, molding, packaging, and cutting. Also, most jerkies are made by small companies, which have difficulties with food hygiene management. Thus, foodborne pathogen growth needs to be simulated in jerky for exposure assessment. Staphylococcus aureus can produce enterotoxin, leading to foodborne intoxication (Le Loir et al., 2003). Generally, the symptoms of foodborne illness include abdominal cramps, vomiting, and diarrhea (Jones et al., 2002). S. aureus can grow under various conditions, such as a wide range of temperatures, pH, and low Aw (Bergdoll, 1989; Schmitt et al., 1990), and most S. aureus isolates from food exhibit antimicrobial resistance (Can et al., 2017). The pathogen is commonly found on human skin (Otto, 2008) and may be cross-contaminated from human hands to jerky. Thus, there is a high possibility of jerky contamination by S. aureus. Predictive models are useful for estimating microbial growth or death in food using mathematical models (Zwietering et al., 1996). The purpose of a predictive model is to secure food safety in advance by identifying risk factors (Yoon, 2010). A primary model describes changes in bacterial cell counts over storage time to calculate kinetic parameters such as growth rate and lag phase duration (Ha et al., 2016). A secondary model describes the effects of environmental factors such as pH, Aw, and temperature on kinetic parameters (Buchanan, 1993; Ha et al., 2016). Therefore, the objective of this study was to develop mathematical models for describing the kinetic behavior of S. aureus in beef jerky. Materials and Methods Preparation of inocula S.
aureus ATCC 13565 was cultured in 10 mL of tryptic soy broth (TSB; BD Biosciences, Franklin Lakes, NJ, USA) at 37°C for 24 h. For subculture, 0.1 mL of the culture was transferred into 10 mL of fresh TSB at 37°C for 24 h. The sample was centrifuged at 1,912×g and 4°C for 15 min and washed twice with phosphate-buffered saline (PBS: pH 7.4; 0.2 g of KH2PO4, 1.5 g of Na2HPO4·7H2O, 8.0 g of NaCl, and 0.2 g of KCl in 1 L of distilled water). The supernatants were discarded, and the cell pellets were resuspended in PBS. Cell suspensions were diluted with PBS to 3-4 Log CFU/mL for inoculation. Development of predictive model Seasoned beef jerky was purchased from an online shop in Korea. Ten-gram portions of the samples were placed into sterile filter bags (3M, St. Paul, MN, USA), and 0.1-mL aliquots of S. aureus were dotted on several places of the beef jerky surface for inoculation to obtain 3 Log CFU/g in the sample bags. The samples were rubbed 20 times, and the sample bags were sealed, followed by aerobic storage at 10°C (600 h), 20°C (600 h), 25°C (480 h), 30°C (192 h), and 35°C (96 h). These time intervals were determined according to the time at which S. aureus cell counts fell below the detection limit. Beef jerky samples were analyzed at different time intervals. Thirty milliliters of 0.1% buffered peptone water (BPW; BD Biosciences) were added to each sample and homogenized with a BagMixer (Interscience, St. Nom, France) for 90 s. The homogenates were serially diluted with BPW. Aliquots (0.1 mL) of the diluents were plated onto Baird-Parker agar (MB Cell, Los Angeles, CA, USA) for S. aureus, and the plates were incubated at 37°C for 48 h. Typical colonies on the plates were counted, and the Weibull model was fitted to the S. aureus cell count data (Van Boekel, 2002): log10 N(t) = log10 N0 − (t/δ)^ρ, where N0 is the initial cell count, ρ is the shape of the curve, and δ is the time required for the first decimal reduction. The polynomial model (δ = a0 + a1×T + a2×T²) was used to evaluate the effect of storage temperature (T) on δ. Validation S. aureus cell count data were obtained at 15°C and 23°C in additional experiments to evaluate the model performance. These observed data were compared to predicted data, which were calculated from the predictive model. The differences between the observed and predicted data were quantified by calculating the root mean square error (RMSE) (Baranyi et al., 1996): RMSE = √(Σ(observed − predicted)²/n), where n represents the number of data points. Statistical analysis The experimental data were analyzed with the general linear model procedure of SAS® version 9.3 (SAS Institute, Inc., Cary, NC, USA). The mean comparisons were performed by a pairwise t-test at α=0.05. Results and Discussion Because various types of jerky are available, made from different meat types and marinades, the behavior of S. aureus may differ among jerkies. Thus, a predictive model should be developed for each jerky type to describe the behavior of S. aureus. However, this effort requires a long time and is costly. Developing a model with the jerky type allowing the highest S.
Developing a model with the jerky type that allows the greatest S. aureus survival may be appropriate for the most severe case, which would save time and expense. To select a product for developing the predictive model, we examined the pH and water activity of 75 samples of 15 original jerky products (Table 1) and 50 samples of 10 seasoned jerky products (Table 2). The pH values were highly similar among the samples (6.13-6.17), but the water activity was higher in the seasoned jerky (0.810±0.045) than in the original jerky (0.656±0.134) (Tables 1 and 2). The 10 seasoned jerky products contained sodium nitrite, potassium sorbate, and sodium sorbate. The growth of most bacteria is inhibited when A_w is reduced. In particular, the growth of S. aureus is inhibited when A_w is less than 0.850 in an aerobic environment (Jay, 1992). Holley (1985) indicated that the A_w of jerky was 0.620 when stored at refrigeration temperature for 26 days, but there was no significant difference in S. aureus cell counts compared with the counts on day 0. This result suggests that S. aureus can survive in beef jerky even if A_w is less than 0.850. Additionally, Lee et al. (2016) indicated that S. aureus did not grow under vacuum conditions. Hence, we developed predictive models using the seasoned beef jerky products as a model product under aerobic conditions to predict the most severe case of S. aureus survival.

The cell counts decreased gradually at 10℃ and 20℃; a tail effect was observed at 10℃, where S. aureus survived through the end of storage (Fig. 1). However, the S. aureus cell counts decreased markedly faster as the temperature was increased to 25℃, 30℃, and 35℃ (Fig. 1). The cell counts decreased to below the detection limit (0.48 Log CFU/g) after 432, 144, and 120 h at 20℃, 25℃, and 30℃, respectively (Fig. 1).

To describe the kinetic behavior of S. aureus in beef jerky, primary models were developed; their R² values ranged from 0.868 to 0.967, indicating that the developed primary models were appropriate. These primary models showed that the δ values generally decreased as temperature increased (Table 3). This result agrees with that of Moon et al. (2017), who showed that S. aureus in dried julienned squid survived longer at 10℃ than at 35℃. These results suggest that if beef jerky is contaminated with S. aureus and stored at low temperature, the pathogen can survive for a long time and cause food safety issues.

Because the S. aureus cell counts decreased, as shown in Fig. 1, the ρ values were less than 1, indicating that all survival curves were concave (Coroller et al., 2006). To evaluate the effect of temperature on δ, a secondary model was developed, and its R² was 0.920 (Fig. 2), indicating that the developed model was appropriate. The fitted equation was δ = (−4.4271) + (13.9841×T) + (−0.3605×T²).

[Table 3. δ and ρ calculated by the Weibull model for Staphylococcus aureus survival in jerky during aerobic storage at 10℃, 20℃, 25℃, 30℃, and 35℃. δ, time required for the first decimal reduction; ρ, shape of curve. A-C: means within the same row with different superscript letters are significantly different (p<0.05).]
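As a concrete illustration of the two-step modeling workflow above (Weibull primary model, polynomial secondary model, RMSE validation), here is a minimal Python sketch. All numeric values in it are invented placeholders rather than the study's measurements, and the helper names (weibull_survival, rmse) are ours, not from the paper.

import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(t, logN0, delta, rho):
    """Weibull survival model (Van Boekel, 2002):
    Log N(t) = Log N0 - (t/delta)**rho
    delta: time (h) for the first decimal reduction; rho: shape (rho < 1 -> concave)."""
    return logN0 - (t / delta) ** rho

# Hypothetical survival data at one storage temperature (h, Log CFU/g)
t_obs = np.array([0, 48, 96, 192, 288, 432, 600])
logN_obs = np.array([3.0, 2.6, 2.3, 1.9, 1.5, 1.0, 0.7])

# Primary model: fit one curve per storage temperature
popt, _ = curve_fit(weibull_survival, t_obs, logN_obs,
                    p0=[3.0, 100.0, 0.8],
                    bounds=([0.0, 1e-3, 1e-3], [10.0, 1e4, 5.0]))
logN0, delta, rho = popt
print(f"delta = {delta:.1f} h, rho = {rho:.2f}")

# Secondary model: quadratic polynomial describing delta vs. temperature,
# fitted to the delta values estimated at each storage temperature (placeholders)
temps = np.array([10.0, 20.0, 25.0, 30.0, 35.0])
deltas = np.array([120.0, 90.0, 60.0, 35.0, 20.0])
c2, c1, c0 = np.polyfit(temps, deltas, 2)   # delta = c0 + c1*T + c2*T^2

# Validation: RMSE between observed and model-predicted counts
def rmse(observed, predicted):
    return np.sqrt(np.mean((np.asarray(observed) - np.asarray(predicted)) ** 2))

pred = weibull_survival(t_obs, *popt)
print(f"RMSE = {rmse(logN_obs, pred):.3f} Log CFU/g")

Fitting one curve per storage temperature and then regressing the resulting δ values on temperature reproduces the structure, though not the numbers, of the analysis reported above.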
2020-04-08T23:11:13.510Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "9cbbce8c59897c820eec1f878d7e1e473d3cc293", "oa_license": "CCBYNC", "oa_url": "https://www.kosfaj.org/download/download_pdf?pid=kosfa-39-4-565", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9cbbce8c59897c820eec1f878d7e1e473d3cc293", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
33401128
pes2o/s2orc
v3-fos-license
Humoral and Cellular Immune Responses to Yersinia pestis Infection in Long-Term Recovered Plague Patients

ABSTRACT Plague is one of the most dangerous diseases and is caused by Yersinia pestis. Effective vaccine development requires understanding of the immune protective mechanisms against the bacterium in humans. In this study, the humoral and memory cellular immune responses in plague patients (n = 65) who had recovered from Y. pestis infection during the past 16 years were investigated using a protein microarray and an enzyme-linked immunosorbent spot assay (ELISpot). The seroprevalence to the F1 antigen among all recovered patients is 78.5%. In patients infected more than a decade ago, the antibody-positive rate still remains 69.5%. There is no difference in antibody presence between genders, ages, or years since infection, but it seems to be associated with the F1 antibody titers during infection (r = 0.821; P < 0.05). Besides the F1 antibody, antibodies against LcrV and YopD were detected in most of the patients, suggesting that they could be potential diagnostic markers for detecting infection by F1-negative strains. Regarding cellular immunity, the numbers of gamma interferon (IFN-γ)-producing cells after in vitro stimulation with F1 and LcrV, respectively, of peripheral blood mononuclear cells from 7 plague patients and 4 negative controls showed no significant difference, indicating that F1 and LcrV are not dominant T cell antigens for long-term immunity against plague in humans. Our findings have direct implications for the future design and development of effective vaccines against Y. pestis infection and the development of new target-based diagnostics.

Plague is a deadly infectious disease caused by Yersinia pestis, and 1,000 to 5,000 human plague cases are reported each year worldwide (20). Although the fatality rate of infected persons can decrease dramatically if they are treated with effective antibiotics on time, the existence of antibiotic-resistant virulent Y. pestis strains indicates that an effective vaccine against both bubonic and pneumonic plague is urgently needed, and the potential misuse of the bacterium for biological warfare or bioterrorism also strengthens this need (5,8). Three types of vaccines, namely, killed whole-cell (KWC) vaccines, live attenuated vaccines (EV76), and recombinant subunit vaccines, have been developed against plague. Although KWC and EV76 vaccines provide protection against plague in animal models, both have side effects and need repeated immunizations to develop immunity in humans (19,29,30). They are no longer used in humans in the Western world, although EV76 is still the vaccine of choice for humans in China. Subunit vaccines based on the capsular protein F1 and one of the type III secretion system proteins, LcrV, have been the focus of recent efforts (1,9,24,28,32). This subunit vaccine has been shown to protect mice against respiratory infection by Y. pestis and has been reported to have entered a phase II study (9,34). However, it failed to adequately protect African green monkeys from pneumonic plague (26). Moreover, F1 mutant and LcrV variant strains can possibly circumvent the effectiveness of this subunit vaccine (36). This highlights the need to identify novel and effective vaccines that can address all forms of plague. Understanding the antimicrobial immune responses of the host will enable the discovery of more effective vaccines. The immune mechanism against Y. pestis is extremely complex and involves a combination of humoral and cellular factors (14).
Most studies have focused on antibody-based humoral immunity, and the majority of them employed animal plague models, which cannot reflect the real immune protective mechanisms of humans. In contrast to the approximately 6 to 12 months of protection in EV76-immunized people (6), individuals who survived plague infection could establish protective responses. They are considered to have acquired immunity against subsequent reinfection by Y. pestis. The immune responses to Y. pestis of recovered patients, and the persistence of Y. pestis-induced immunity after infection, will provide the most important data for facilitating the development of an effective vaccine. Although previous studies confirmed that the F1 antibody could persist for 1 to 4 years in humans (18,27), there is no report on longer persistence of the F1 antibody or on the existence of antibodies against proteins other than F1 in patients in the long term. In the present study, serum samples from 65 plague patients who had recovered from infection several years to more than a decade earlier were collected and screened by protein microarray to investigate the antibody profile. Meanwhile, the specific memory T cell responses to the F1 and LcrV proteins in the recovered patients were also analyzed.

MATERIALS AND METHODS

Serum samples. Detailed information on the recovered patients is shown in Table 1. The sera from the subjects were collected and stored at −20°C for further use. Forty-eight serum samples were collected from persons with no plague history in the areas of endemicity. Forty-three serum samples were collected from persons in counties of nonendemicity and were used as negative controls.

Detection of antibodies against F1 by up-converting phosphor technology-based lateral flow (UPT-LF) and enzyme-linked immunosorbent assay (ELISA). All collected sera were screened for the antibody against F1 by F1 antigen-based UPT-LF, a quantitative assay developed recently for detecting microorganisms and antibodies (10,17,25). To develop double-antigen sandwich LF strips for detecting the F1 antibody, the F1 antigen (1 mg/ml, 1 μl/cm) and its corresponding antibody (1 mg/ml, 1 μl/cm) were dispensed on the nitrocellulose membrane as the test line (T) and control line (C), respectively. Up-converting phosphor (UCP)-F1 antigen conjugate (1 mg/ml, 30 μl/cm) was fixed in the glass fiber as the conjugate pad. The result of the UPT-LF strip was analyzed with a UPT biosensor. The areas of the peaks corresponding to the test and control lines were referred to as T and C, respectively, and the T/C ratio was taken as the result of the measurement. Samples with a T/C ratio higher than the cutoff threshold (mean plus 3 standard deviations [SD]) were regarded as positive, and vice versa (10). To confirm the UPT results, the F1 antibody titer in the recovered patients was tested using an ELISA, which was validated by the Institut Pasteur de Madagascar in 1995 (27). The sera of healthy blood donors were used to define a cutoff value for determining positive or negative results.

Antibody screening by protein microarray. The protein microarray included 218 outer membrane proteins, surface-exposed or secreted proteins, and known or putative virulence factors, as well as products of genes located in several genomic islands that were likely acquired during the evolution of Y. pestis. It was constructed according to previous reports (12). According to the UPT results, 2 to 4 sera of patients who were infected in the same year and had similar UPT results for the F1 antigen were pooled for antibody profiling against the proteins on the microarray.
The profiling process and data analysis were performed based on the procedures of earlier studies (12,13,15). Six serum samples from healthy people that were negative for the F1 antibody were used as negative controls.

F1- and LcrV-specific gamma interferon (IFN-γ) production in recovered patients by enzyme-linked immunosorbent spot (ELISpot) assay. Recombinant F1 and LcrV proteins were expressed in Escherichia coli and purified as previously described (16). ELISpot assays were performed using a commercially available kit (BD Pharmingen) according to the manufacturer's instructions. Peripheral blood mononuclear cells (PBMCs) of 7 patients (patient no. 36, 37, 38, 41, 45, 47, and 51 in Table 1) who had recovered from plague 4 to 6 years earlier were isolated from heparinized blood samples by Ficoll-Hypaque density gradient centrifugation. The number of cells was adjusted as required prior to stimulation. Purified recombinant F1 and LcrV proteins (10 μg/ml), phytohemagglutinin (5 μg/ml), or complete medium (RPMI 1640 medium) was added to the cells in triplicate. The resulting spots were counted using a cytotoxic T lymphocyte immunospot analyzer. The final numbers of IFN-γ-secreting cells stimulated by the F1 and LcrV proteins were determined from the results of triplicate wells as the mean numbers of cytokine spots per million cells from patients, after subtraction of the number of background spots (those for IFN-γ-producing T cells stimulated in vitro with complete medium alone). PBMCs from 4 healthy donors in Beijing, China, were used as negative controls.

Statistical analysis. The association between the F1 antibody-positive rate and gender, age, and years since infection was examined by the χ² test. Pearson correlation coefficients (r) were calculated to determine correlations between the anti-F1 titers of patients at infection and the remaining amount of the F1 antibody several years to more than a decade postinfection. Differences in the numbers of IFN-γ-producing cells between recovered plague patients and healthy control donors were compared using Student's t test. P values of <0.05 were considered significant.

RESULTS AND DISCUSSION

Antibodies against F1 in the recovered plague patients. The study subjects consisted of 65 (34 male and 31 female) plague patients who had recovered from bubonic plague between 1990 and 2005. All serum samples were screened with UPT-LF to detect the amount of antibody against the F1 antigen, an antigen used for the serodiagnosis of plague in human patients and infected animals. Fifty-one of the 65 recovered patients (78.5%) were positive, suggesting that the F1 antibody in serum can persist from several years to more than a decade after infection. The prevalence of the F1 antibody was 88% among patients infected within the past 5 years, and the positive ratio decreased to 69.5% in patients infected more than a decade ago (Table 2). Only 1 of 13 patients infected in 1990 was negative for the F1 antibody in serum, and the remaining 12 patients remained seropositive (Table 1). However, all 3 serum samples from the patients infected in 1992 were negative for the F1 antibody (3/3, 100%). This indicates that antibody persistence in recovered patients may be determined not only by the time after infection; it might be influenced mainly by individual differences.
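To make the screening and association statistics concrete, the following Python sketch reproduces the UPT-LF positivity rule (cutoff = mean plus 3 SD, assumed here to be computed from negative-control sera, which the text does not state explicitly) and a χ² test on the gender-by-seropositivity counts reported in the next section. All T/C values are invented placeholders.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical T/C ratios measured on negative-control sera
neg_tc = np.array([0.08, 0.11, 0.09, 0.10, 0.12, 0.07, 0.10, 0.09])
cutoff = neg_tc.mean() + 3 * neg_tc.std(ddof=1)   # mean + 3 SD

# Hypothetical T/C ratios from recovered patients, classified against the cutoff
patient_tc = np.array([0.95, 0.30, 0.08, 1.40, 0.55])
positive = patient_tc > cutoff
print(f"cutoff = {cutoff:.3f}; positives: {positive.sum()}/{len(positive)}")

# Chi-square test for association between seropositivity and gender, using the
# counts reported in the paper (23/31 females and 28/34 males positive);
# correction=False gives the plain (uncorrected) chi-square statistic
table = np.array([[23, 31 - 23],    # females: positive, negative
                  [28, 34 - 28]])   # males:   positive, negative
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, P = {p:.3f}")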
Five persons in counties where plague is endemic and one person in a county where plague is nonendemic were also positive for the F1 antibody in serum, with considerably lower antibody quantities than those in the recovered patients (see Tables 2 and 4). Although several studies showed that the anti-F1 antibody could persist for 1 to 4 years (18,27), our study was the first to determine the persistence of the F1 antibody in patients for more than a decade postinfection.

Factors related to the persistence time of the F1 antibody in recovered patients. Previous studies have shown that plague antibodies are more prevalent in males in populations exposed to infection, and a difference in relation to age has also been reported (4,7). However, the factors related to the persistence time and the amount of the F1 antibody in humans had not been investigated before. Among the 65 recovered patients, the F1 antibody was detected in 23 of the 31 females (74.2%) and 28 of the 34 males (82.4%). The seroprevalence rate was not significantly different in terms of gender (P = 0.424), suggesting that persistence of the F1 antibody in recovered patients is not related to sex. It was also not related to the time period after infection (88%, 76.5%, and 69.5% at ≤5 years, 5 to 10 years, and ≥10 years postinfection, respectively; P = 0.292). Although the positive rate in patients who had recovered ≥10 years earlier was slightly lower than that in those who had recovered ≤5 years earlier, the seroprevalence in patients who had recovered 16 years earlier was 92.3% (12/13). Because safety and immunogenicity data for the vaccine in people younger than 18 were not available, the recovered patients were divided into <18- and ≥18-year-old groups to study the relationship between age and F1 antibody positivity. The seropositivity of the F1 antibody showed no difference between the two groups (15/16 versus 36/49; P = 0.087) (Table 3). This indicates that the duration of the antibody might not be influenced by the age at infection.

The UPT values of the recovered patients correlated well with the indirect hemagglutination assay (IHA) titers of the corresponding patients at infection (r = 0.821; P < 0.05) (Fig. 1). Patients with higher F1 antibody titers at infection seemed to retain higher levels of the F1 antibody after recovery. Serum antibody titers against the F1 antigen are correlated with the degree of protection against Y. pestis infection in experimental animals (35). The long-term high level of the F1 antibody in the serum of the recovered patients may be one of the reasons that they are protected from plague. Although the F1 antibody was also detected by UPT-LF in 2006 in 68% (36/50) of EV76-immunized persons, approximately 15 years after immunization (Table 4), the F1 antibody values were significantly lower than those in the recovered patients (P < 0.01). EV76 is a live attenuated vaccine lacking the pgm locus which has been widely used in China and in the former Soviet Union in the past (29). Unlike the majority of the recovered patients, who could acquire long-term protection, the protective duration in EV76-immunized people is approximately 6 to 12 months (6). The lower F1 antibody level in EV76-immunized people than in the recovered patients may explain the poor protection of EV76 against plague (Table 4). Although the EV76 strain can live in humans and stimulate immune responses, the lack of the pgm locus could influence its survival ability in humans, limiting its replication and dissemination and leading to insufficient contact with immune cells.
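The correlation between titers at infection and residual antibody levels could be computed as in the following sketch. The paired values are invented placeholders, and the log2 transformation of the IHA dilution titers is our assumption rather than a step stated in the text.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements for the same patients
iha_titer_at_infection = np.array([64, 256, 1024, 128, 512, 2048, 32])
upt_value_after_recovery = np.array([0.4, 1.1, 2.3, 0.7, 1.6, 3.0, 0.2])

# Pearson correlation between (log-transformed) titers and residual UPT values
r, p = pearsonr(np.log2(iha_titer_at_infection), upt_value_after_recovery)
print(f"r = {r:.3f}, P = {p:.4f}")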
Antibody profiling in patients infected in different years. Besides the F1 protein, antibodies to many other proteins, such as LcrV, YopH, YopE, and YopD, can also be detected in the acute- and convalescent-phase sera of plague patients (2,3). To study the persistence time of antibodies other than anti-F1 (AOTF) in the recovered patients, with the aim of finding new vaccine candidates and serodiagnostic markers, the protein microarray containing 218 known or putative virulence-associated proteins of Y. pestis was used to profile the antibody responses in the recovered patients. According to the UPT-LF results for the F1 antibody, sera with similar F1 antibody values from patients infected in the same years were pooled because of the limited quantity of each serum. Finally, 21 pooled sera were profiled for antibodies against the proteins on the microarray. Figure 2 provides an overview of the antibody profiles according to the year of infection. Antibody responses to 20 proteins were found in at least one pooled serum from the patients infected in 2005, 1 year before the collection of sera. Four of them, against YPO1089, YPO1435, YPMT1.62c, and YPMT1.86a, had disappeared in all pooled sera of the patients infected before 2003. In the patients who had recovered more than 5 years earlier, almost all AOTF antibodies disappeared, except for those to LcrV and YopD. The antibodies against LcrV and YopD were stable and were detected in most recovered patients (Fig. 2). Both proteins belong to the type III secretion system and are essential for the virulence of Y. pestis. LcrV is the other vaccine target of plague, and the antibody against it prevents Yop-dependent growth of Y. pestis (22). Although there are numerous studies on the longevity and variation of the F1 antibody in humans, the present study is the first to investigate the persistence time of the antibody against LcrV. The long-term persistence of the LcrV antibody suggests that it is an important protective component against Y. pestis in humans. YopD is the other protein, besides LcrV, that provides partial protection of mice against nonencapsulated Y. pestis upon subcutaneous challenge (2). However, the role of YopD in vaccine development has not been investigated further. The detection of its antibody in the recovered patients indicates its potential for serodiagnosis of, and protection against, infection caused by F1-negative strains.

Memory cellular responses to F1 and LcrV in the recovered plague patients. Although the positive rate of the F1 antibody in patients was high, approximately 30% of the recovered patients remained seronegative to both F1 and LcrV. Moreover, the amount of antibodies in some patients was very low. To assess the cellular responses in plague patients, the T cell responses to the recombinant F1 and LcrV antigens were studied in 7 plague patients who had recovered from plague 4 or 6 years earlier, as well as in 4 healthy controls from areas where plague is nonendemic. Although the patients' cells stimulated with the LcrV and F1 proteins produced IFN-γ, the levels were not significantly different from those of the controls (P > 0.05) (Fig. 3). Y. pestis is a facultatively intracellular bacterium during the early phase of infection, and cell-mediated immune responses should play an important role in the defense against this pathogen (21).
Vaccination with live attenuated Y. pestis (KIM5 pCD1+, pMT+, pPCP+, pgm−) primes CD4 and CD8 T cells that synergistically protect against lethal pulmonary Y. pestis infection (23). An earlier study considered that effective treatment with anti-F1 antibodies in the mouse model also requires T cells (11). At present, only a few data are available concerning the immunodominant targets of the T cell response and the persistence of cell-mediated immunity in plague patients. In the present study, memory T cell responses against the F1 and LcrV proteins were not detected by ELISpot assay in plague patients who had recovered from infection 4 or 6 years earlier. Several explanations could be proposed for why the T cell responses to F1 and LcrV were negative. First, as indicated by the results, F1 and LcrV may simply not be dominant T cell antigens; in experiments on mouse models immunized with the EV76 vaccine strain, the F1 antigen also failed to induce strong T cell responses (16). Second, cellular immunity to the F1 and LcrV antigens may be produced in the acute and convalescent phases of infection but not persist for a longer period of time. In order to test this possibility, the cellular responses to F1 and LcrV in patients infected 2 years earlier were studied, and blood samples were collected from the patients. Unfortunately, since the plague patients lived in remote small villages and the PBMCs could not be isolated within 24 h, the quality of the isolated cells could not meet the requirements of the ELISpot assay. Evidence shows that cell-mediated immunity-related cytokines (interleukin-2 [IL-2] and IFN-γ) could not be detected in sera from primary pneumonic plague patients during infection; only IL-6 could be detected in these patients (31). Nevertheless, based on our results and the related literature, we speculate that F1 and LcrV may not be the dominant T cell antigens for long-term defense against plague in humans. This poses a major challenge to plague vaccine development based only on the F1 and LcrV antigens. There must be other proteins that play roles in stimulating cellular protective responses. A previous study conducted in our laboratory found that 34 proteins could stimulate strong T cell responses in EV76-immunized mice, and nine of these proteins independently provided partial protection against challenge with a low dose of Y. pestis. We also tested these 9 proteins for their ability to induce cellular immunity in plague patients (16). Unfortunately, none produced significantly different cell spot counts between patients and controls (data not shown).

Conclusions. In the present study, we analyzed the humoral and cellular immune responses to the Y. pestis proteins F1 and LcrV in patients who had recovered from plague infection several years to more than a decade earlier, to gain insight into the protective mechanism against plague. This is the first report on the study of both humoral and cellular responses against Y. pestis in humans. Antibody to F1 can persist in recovered patients for more than 10 years, and antibodies to LcrV and YopD can also be present for a long period of time. Specific memory T cell responses to F1 and LcrV could not be detected in plague patients 4 to 6 years postinfection. These results highlight the urgent need to develop an effective live attenuated Y. pestis strain as a potential vaccine candidate (33).
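For illustration, the ELISpot comparison described above (mean background-subtracted spot counts compared between 7 patients and 4 controls by Student's t test) could be computed as in the following sketch; all spot counts are invented placeholders.

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical IFN-gamma spots per 10^6 PBMCs after F1 stimulation,
# already background-subtracted (medium-only wells)
patients = np.array([12, 5, 18, 9, 7, 14, 11])   # n = 7 recovered patients
controls = np.array([8, 15, 6, 10])              # n = 4 healthy donors

t_stat, p_value = ttest_ind(patients, controls)  # two-sided Student's t test
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")    # P > 0.05 -> no difference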
2018-04-03T02:24:51.213Z
2011-12-21T00:00:00.000
{ "year": 2011, "sha1": "e0a1ba5ea5e34fcae548c2b34fcdc71c17e29791", "oa_license": null, "oa_url": "https://cvi.asm.org/content/cdli/19/2/228.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "93490bddd1b54126135c83a4e0a7e782f1b6184e", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
14285865
pes2o/s2orc
v3-fos-license
Spontaneous Reattachment of a Posteriorly Dislocated Endothelial Graft: A Case Report

A thirty-year-old Chinese man with a history of severe trauma to his right eye, with secondary sectoral aniridia and multiple operations including intraocular lens insertion more than fifteen years ago, underwent an uneventful Descemet's Stripping Automated Endothelial Keratoplasty (DSAEK) for his pseudophakic bullous keratopathy in a tertiary hospital in Hong Kong. The nature of his previous operations was unknown to the surgeon at the time of transplant. On postoperative day one, the graft was not present in the anterior chamber. The fundal view was limited because of corneal oedema. B-scan ultrasonography could not detect any definite presence of a donor button in the posterior segment, as gas was present in the vitreous cavity. The patient was instructed to lie prone full time, and on postoperative day three, the graft was found to be reattached to the stroma with spontaneous resolution of the corneal oedema, indicating restoration of the pump function of the endothelial graft. This is the first reported case of spontaneous reattachment of a posteriorly dislocated endothelial graft without surgical intervention or abandonment of the grafted endothelial button.

Case Report

A thirty-year-old Chinese man with a history of severe trauma to his right eye more than fifteen years ago and multiple operations was scheduled for DSAEK for his pseudophakic bullous keratopathy in June 2012. As the patient could not recall the timing and nature of the procedures he had previously undergone and his archived old medical records were not available, details of his previous operations were not known to the operating surgeon at the time of transplantation. The patient could not recall whether his intraocular lens was scleral-fixated or a normal posterior-chamber intraocular lens. Preoperative visual acuity was 20/200, and examination found pseudophakic bullous keratopathy with traumatic aniridia from 9 to 6 o'clock. No conjunctival bleb was seen. One faint subconjunctival suture was seen at 4 o'clock, with an intraocular lens present.

The patient had surgery under retrobulbar anaesthesia. An 8-mm DSAEK button was prepared with a Barron donor cornea punch. After temporal peritomy, a 6-mm temporal scleral tunnel was prepared. The Descemet's membrane was stripped (and the edges of the stroma roughened) and later removed through the limbal tunnel in a viscoelastic-filled anterior chamber (AC). The AC was then flushed thoroughly with an infusion cannula in place as an AC maintainer. An ACIOL plastic glide was inserted into the AC, and the DSAEK button was pulled into the eye with Tan's forceps (Asico, Westmont, USA). Secure and good graft apposition was achieved on the table by an AC air fill to ∼30/40 mmHg for 5 minutes. Stab fenestrations were made for interface fluid release. At the end of the operation, the posterior lenticule was well centred and supported by a full-chamber gas fill of the anterior chamber. The patient was instructed to lie supine full time postoperatively.

On postoperative day one, the cornea was oedematous (Figure 1), and the graft lenticule was not present in the anterior chamber (Figure 2). No gas bubble was seen in the anterior chamber. Stromal oedema limited the fundal view, and a B-scan ultrasonography could not detect any definite presence of a donor button in the posterior segment, as gas was present in the vitreous cavity.
The patient was instructed to lie prone full time, except during the application of topical corticosteroids and antibiotics, for the following two days. On postoperative day three, the patient reported markedly improved clarity of vision since waking. Slit-lamp examination revealed a decentered EK button over the inferonasal quadrant of the cornea (Figure 3). The best-corrected visual acuity was 20/50. Corneal oedema had largely subsided, and the graft was well apposed to the stromal bed (Figure 4). The graft-stroma interface was clear, and the inferior edge appeared secure. In view of his relatively good vision, uncertain prognosis, and multiple operations (a history of a scleral-fixated IOL was subsequently revealed) from his previous major trauma, a conservative approach was adopted rather than further graft centration adjustment, and the patient was kept in a prone, head-down position over the following postoperative week. At his latest follow-up in December 2012, his visual acuity was stable at 20/30 with a healthy clear cornea and a securely attached endothelial graft (Figures 5 and 6). Specular microscopy was attempted many times but was not recordable because of the marked IOL reflection in an aniridic eye.

Discussion

Descemet's Stripping Automated Endothelial Keratoplasty (DSAEK) is a recent technique in lamellar keratoplasty which replaces only the posterior lamella of a diseased cornea and is considered an alternative to penetrating keratoplasty for posterior corneal diseases. The main indications include pseudophakic bullous keratopathy, Fuchs' endothelial dystrophy, failed penetrating keratoplasty, and iridocorneal endothelial syndrome. Unlike penetrating keratoplasty, DSAEK offers a faster healing time and a more predictable refractive outcome, as no sutures are placed on the cornea [1,2]. Many variations in the surgical technique have been described, particularly for the delivery of the posterior lenticule into the anterior chamber, and methods of securing the endothelial graft with either gas injection or an anchoring suture have been described to facilitate successful surgery [3,4].

Graft dislocation into the posterior segment of the eye during the early postoperative period is a known but thankfully rare complication of DSAEK, occurring in only 8 out of more than 1300 DSAEK procedures [5]. The risk of dislocation into the posterior segment is higher in eyes with aniridia or in eyes that had undergone vitrectomy, complicated intraocular lens implantation, or glaucoma surgery [6,7], or that had a history of trauma. All of these factors were subsequently revealed to be present in our case. Owing to the risks of further complications arising from posteriorly dislocated grafts, such as retinal detachment [8], cystoid macular oedema, and epiretinal membrane formation, the dislocated grafts in all previously reported cases were retrieved, either in the same operation or later, by either a standard three-port vitrectomy or an anterior approach with irrigation and aspiration through the corneal wound. Histopathological studies of the retrieved grafts found significant hypocellularity, and occasionally inflammatory cells were found adhering to the donor lenticule, signifying the presence of inflammation and guarded viability of the dropped graft despite successful retrieval [9]. Hence, most patients required a repeat DSAEK or penetrating keratoplasty after retrieval and abandonment of the original EK graft in order to regain visual acuity.
The authors recommended retrieval of the dropped graft as soon as possible, but the exact timing was not discussed at length. To the best of our knowledge, spontaneous reattachment of a posteriorly detached endothelial graft has not been reported in the literature. Although there was a recent series of spontaneous reattachment of endothelial grafts that were dislocated partially or free-floating in the anterior chamber [10], no reports had been made of spontaneous reattachment of an endothelial button that was completely dislocated into the posterior segment. Our patient was the first reported case of a fully functioning reattached graft, achieved solely by gravity and positioning, without the need for secondary retrieval or abandonment of the retrieved graft. Rapid resolution of the corneal oedema indicated good endothelial function after its spontaneous reattachment. Despite its slight decentration, the graft remained stable in position, and postoperative visual acuity returned to 20/30 at the six-month follow-up. Conservative prone positioning for posteriorly dislocated endothelial grafts may be a worthwhile measure for consideration.
2018-04-03T05:13:12.381Z
2013-03-03T00:00:00.000
{ "year": 2013, "sha1": "f58b29fce478ee9ab874a9305e30dbcc745bf40b", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crit/2013/631702.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96c1d5a5f22cae4290d6c55998d51cf6f0ed286e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16040857
pes2o/s2orc
v3-fos-license
Priming adult stem cells by hypoxic pretreatments for applications in regenerative medicine

The efficiency of regenerative medicine can be improved by enhancing the biological performance of stem cells before their transplantation. Several ex-vivo protocols of non-damaging cell hypoxia have been demonstrated to significantly increase the survival, proliferation, and post-engraftment differentiation potential of stem cells. The best results for priming cultured stem cells against a subsequent, otherwise lethal, ischemic stress have been obtained with brief intermittent episodes of hypoxia, or anoxia, and reoxygenation, in accordance with the extraordinary protection afforded by the conventional maneuver of ischemic preconditioning in severely ischemic organs. These protocols of hypoxic preconditioning can be reproduced rather easily in a laboratory; however, more practical pharmacological interventions inducing stem cell responses similar to those activated in hypoxia are considered among the most promising solutions for future applications in cell therapy. Here we offer an up-to-date review of the molecular mechanisms translating hypoxia into beneficial events for regenerative medicine. To this aim, the involvement of epigenetic modifications, microRNAs, and oxidative stress, mainly activated by hypoxia inducible factors, will be discussed. Stem cell adaptation to their natural hypoxic microenvironments (niches) in healthy and neoplastic tissues will also be considered.

Introduction

Stem cells (SCs) are currently evaluated as a tool to repair the irreversible tissue damage that permanently impairs organ function. In the last decade, the increasing knowledge of SC biology has widely encouraged preclinical and clinical studies of regenerative medicine. SCs are unspecialized cells maintaining their proliferative and differentiative capacity throughout the life of an individual [1]. Their ability to divide and remain in the undifferentiated state, i.e. the self-renewal process, is a specific characteristic of these cells that can be accomplished by two mechanisms known as obligatory asymmetric replication and stochastic differentiation [2]. The former splits SCs into two daughter cells, one of which retains the property of self-renewal while the other begins to differentiate; the second requires the participation of SCs that generate two identical daughter cells together with SCs that generate only differentiating cells. According to their differentiation potential, or potency, mammalian SCs are classified as [3]: totipotent, namely the zygote and its first daughter cells, which give rise to the entire organism; pluripotent, i.e. the blastocyst cells, which differentiate into any of the three germ layers; multipotent, which are present in the fetus and in born organisms and can differentiate into multiple, but limited, cell types; oligopotent, such as the lymphoid or myeloid SCs, which differentiate into a few cell types; and unipotent, which produce only one cell type but are still self-renewing. Progenitor cells are likewise unipotent but can divide only a limited number of times. Furthermore, SCs can be distinguished into two categories: pluripotent embryonic SCs (ESCs) and multipotent or unipotent adult (somatic) SCs, the latter being present in nearly all post-natal tissues including bone marrow, blood, adipose tissue, skin, liver, muscle, and brain [4].
The picture is completed by the so-called induced pluripotent SCs (iPSCs), which are not naturally occurring SCs but can be produced from virtually all cell types by forcing the expression of a few genes that confer pluripotency [5]. An open question concerns the putative ability of adult SCs to differentiate into cell phenotypes independent of the commitment expected from the tissue where they reside [6]. This plasticity of adult SCs might confer a certain advantage for in-vivo tissue repair. For instance, it has been suggested that after an organ ischemic insult, bone-marrow mesenchymal SCs (BM-MSCs) are mobilized into the blood and recruited to the injured tissue, where they could in part participate in the regeneration process via transdifferentiation [7]. Therefore, the interest in adult SCs, as opposed to ESCs, has rapidly increased because the former are easily accessible in the patient, do not raise ethical concerns, carry a reduced risk of tumor formation, do not require immunosuppressive treatments to prevent rejection, and could show some plasticity. However, although multipotent SCs show many characteristics that can be suitable for clinical applications, their use in regenerative medicine is often hampered by their poor survival after engraftment [8-10]. Indeed, the injured tissue is usually fibrotic and poorly perfused; hence, the insufficient availability of oxygen and nutrients renders both grafted and resident SCs more susceptible to lethal damage. Strategies have been proposed to ameliorate the repair of post-ischemic tissues, including SC-induced neovascularization [11-13] and SC preconditioning able to improve SC survival after transplantation [14,15]. Both approaches are considered hot topics of research in the field, since they could significantly improve the clinical outcome of cell therapy.

Among the experimental procedures employed in the last decades for attenuating the damage induced by acute and severe ischemic events, in-vivo organ pretreatment with brief cycles of ischemia and reperfusion, namely "Ischemic Preconditioning" (IP), has been widely recognized as the most protective maneuver [16,17]. Nowadays, these basic findings have been successfully translated into clinical interventions to prevent the potential damage due to the temporary interruption of blood perfusion during surgery [18-20]. In a similar manner, hypoxia-dependent pretreatments of SCs have been investigated to increase their survival for applications in regenerative medicine (Figure 1). Here, we will first consider the effects exerted by hypoxia on SC viability in their natural microenvironments, to underline what molecular and cellular adaptations they develop to face this unusual condition. Then, suitable methods inducing SC protection after exposure to different protocols of low oxygen tension will be discussed, as well as the need to discover pharmacological treatments triggering the same intracellular signaling pathways that lead to hypoxic adaptation. Finally, the role of non-damaging hypoxic conditions in enhancing SC proliferation and differentiation will also be described.

Lesson from SCs resident in a natural hypoxic microenvironment

Basic features of the SC niche

The fate of SCs in their natural environment is regulated by intrinsic and extrinsic signals balancing both cell self-renewal and differentiation.
The extrinsic factors are encompassed by the term "niche", first proposed in 1978 by Schofield, who defined it as a specific microenvironment supporting the cells [21]. In their niche, SCs are supposed to be influenced by different molecules such as cytokines and growth factors (e.g. basic fibroblast growth factor (bFGF), bone morphogenetic proteins (BMPs), stromal cell-derived factor 1 (SDF-1), stem cell factor (SCF), leukemia inhibitory factor (LIF), Wnt/β-catenin [22]), extracellular matrix (ECM) molecules (e.g. hyaluronan [23]), some differentiated cells (e.g. fibroblasts, endothelial cells), and low O2 concentrations [24]. Niches of mammalian adult SCs have been described in various tissues, and hypoxia appears to represent a stimulus promoting self-renewal [25]. For instance, two distinct niches for hematopoietic stem cells (HSCs) have been identified in bone marrow: the hypoxic endosteal niche and the vascular niche [26]. While the former is hypothesized to maintain SCs in a quiescent and undifferentiated state through the hypoxic environment, the latter should promote both SC proliferation and differentiation. In this regard, Notch signaling seems to be critical, since its inhibition reverses the effects of hypoxia on the maintenance of stemness [27]. In the attempt to explain why hypoxia is needed for the maintenance of the SC pool, some studies suggested that mild to low levels of O2 can minimize the damage caused by oxidation, especially towards DNA [28]. Indeed, according to the clonogenic theory, no changes in gene structure must occur in dividing SCs, which have to be exact copies of their mother cells. However, this concept is under debate, since hypoxia has also been reported to increase, rather than blunt, oxidative stress [29].

Role of hypoxia inducible factor-1α in SC adaptation to hypoxia

One of the most powerful ways followed by SCs to adapt to the low oxygen tension that characterizes the niche is the production of high levels of hypoxia inducible factors (HIFs) [30]. HIFs are activated when oxygen tension falls below 5%, and their concentrations increase in parallel with decreasing oxygenation [31]. HIFs are transcription factors essential for cell responses and adaptation to hypoxia, whose active form results from the interaction of α and β subunits; the former include HIF-1α, HIF-2α, and HIF-3α, the latter HIF-1β and HIF-2β. Mechanisms of activation and target genes have been better documented for the HIF-1α subunit [32]. Under normoxic conditions, the prolyl residues in the HIF-1α oxygen-dependent degradation domain (ODD) are hydroxylated by at least three different HIF prolyl hydroxylases (PHDs) [33]. Besides molecular oxygen, hydroxylation requires the presence of 2-oxoglutarate and reduced iron ions. Moreover, a factor inhibiting HIF (FIH) hydroxylates a specific asparaginyl residue, which prevents the subsequent recruitment of the co-activator p300/CBP to the consensus sequences. The hydroxylated ODD is then recognized by the von Hippel-Lindau protein (VHL), an E3 ubiquitin ligase. When oxygen tension is low, PHDs become inactive; hence HIF-1α is not hydroxylated and is stabilized, because it is not degraded by the proteasome system [32]. Besides hypoxia, certain intracellular metabolites, including reactive oxygen species (ROS), fumarate, succinate, and potentially 2-hydroxyglutarate, inhibit PHD and FIH activities, resulting in HIF-1α stabilization [34].
HIF-1α then heterodimerizes with HIF-1β, also known as the aryl hydrocarbon receptor nuclear translocator (ARNT), and translocates into the nucleus, where it binds to hypoxia response elements (HREs) along with the co-factors E1A binding protein p300 (EP300), jun proto-oncogene (c-JUN), and cAMP responsive element binding protein (CREB) [35]. This leads to the modulation of up to 200 genes involved in several processes including angiogenesis, glycolysis, mitochondrial respiration and biogenesis, production of erythropoietin, redox homeostasis, cell proliferation, and cell apoptosis, in both normal and tumor cells [36]. More recently, several other modulators of HIF stability and gene transcription have been discovered, such as chaperonins, transcriptional co-factors, sirtuins, ascorbate, nitric oxide, microRNAs, oncogenes, tumor suppressors, and inflammation factors [34,37].

[Figure 1. Efficacy of hypoxic pretreatments of adult SCs in regenerative medicine. Grafted SCs become more resistant to the death stimuli present in injured tissues through ex-vivo hypoxic pretreatments (continuous hypoxia or cyclic hypoxia-anoxia and reoxygenation) or by administration of CoCl2. Pharmacological agents exerting the key intracellular effects of hypoxic preconditioning, such as diazoxide, can also be effective. The most common sensor activated under low oxygen tension is HIF-1α, which increases cell survival by stimulating several pathways, including glycolytic flow, Akt phosphorylation, and miR upregulation. Transgenic induction of miR-107 and miR-210, which are mainly expressed after hypoxia, also protects SCs against engraftment injury.]

Epigenetic changes induced by hypoxia

The changes in gene expression upon hypoxic stress are tightly associated with modifications in chromatin structure by histone-modifying and chromatin-remodeling complexes, which is referred to as the epigenetic regulation of transcriptional activity (Figure 2) [38]. In this regard, it has been reported that during hypoxia the SWI/SNF chromatin remodeling complex [39] is recruited to the promoter of the HIF-1α gene, where it is required for the expression of HIF-1α mRNA [40], as a conserved feature in animal evolution [41]. HRE-containing gene promoters are affected by upregulated HIF-1α together with its co-activators involved in the epigenetic regulation of histone marks and DNA methylation. The former include the acetylation and/or methylation of specific residues of the H3 and H4 histone tails, the latter the addition of a methyl group on cytosines preceding guanines (CpG) in the DNA sequence of the target gene promoter. Indeed, the interplay between these chemical modifications and the protein complexes influencing chromatin architecture leads to the typical hypoxic transcriptional configuration, in which HIF target genes are upregulated although general transcription in the cell is substantially inhibited [42].

[Figure 2. Schematic representation of hypoxia-induced epigenetic changes. Hypoxic conditions modulate the SC expression profile via several mechanisms, including epigenetic modifications. The induced upregulation of HIF-1α and HIF-2α drives the activation of several target genes, and stem-related genes, such as OCT-4, can be re-expressed: the chromatin configuration of their promoter regions becomes accessible to transcription factors, partly as a consequence of the upregulation of the histone demethylases JMJD1A and JMJD2B, which catalyse the removal of repressive histone marks (H3K9me2/3). Active promoters carry acetylated H3-H4 and H3K4me2/3 with unmethylated promoter CpG sites, whereas hypoxia also induces global repression of gene transcription through chromatin-modifier enzymes such as histone deacetylases (HDACs) and the methyltransferase G9a, which drive the formation of repressive histone marks (deacetylated H3-H4, H3K9me2/3, H3K27me3), while global DNA methylation increases through upregulated DNA methyltransferases (DNMTs), silencing gene expression. Additional miRNA control of transcription and translation contributes to a gene expression profile that reactivates stem-related genes, increases protection from oxidative stress, reduces DNA damage, and increases glycolysis and angiogenesis, with the final result of enhancing cell viability and regenerative potential.]
Histone methylases and demethylases play an important role as enzymes involved in this epigenetic control. The impact of histone methylation appears context-dependent: tri-methylation of lysine 27 in the H3 histone tail (H3K27me3) marks transcriptionally repressed chromatin, and the same holds true for methylated H3K9; on the other hand, the promoter regions of active genes appear enriched in methylated H3K4 and/or H3K36 [43]. Interestingly, the Jumonji domain-containing dioxygenase (JMJD) group of histone demethylases (JHDMs), and in particular JMJD1A and JMJD2B, are induced under hypoxic conditions by the overexpression of HIF-1α and HIF-2α [44]. This class of histone demethylases removes methyl groups from histone H3 tails, leading to the loss of repressive histone marks. In detail, JMJD1A catalyses the formation of mono-methylated lysine 9 in the H3 histone tail (H3K9me1) from the di-methylated lysine (H3K9me2), whereas JMJD2B removes a methyl group from tri-methylated lysine 9 (H3K9me3), producing H3K9me2. The removal of these repressive histone marks has been associated with restored expression of self-renewal genes, such as OCT-4, in SCs [44], as supported also by other authors who found that JMJD1A and JMJD2C enhance the expression of self-renewal genes in embryonic SCs [45]. Under hypoxic conditions, SCs are prone to assume a phenotype more similar to that of ESCs, with enhanced capacity for differentiation and proliferation, as supported by several studies showing that hypoxia promotes the de-differentiation of early committed ESCs, which reacquire pluripotency [46], and that hypoxic conditions accelerate the reprogramming of iPSCs [47]. These observations underline that the epigenetic machinery in adult SCs is devoted not only to driving their differentiation but also to maintaining their stemness [48-50], two opposite effects requiring well-orchestrated control at the level of the master genes driving cell differentiation and division. Besides local changes in the chromatin structure in the regions of HIF target genes leading to their activation, hypoxia has also been found to provoke a dramatic decrease of general gene transcription that seems to be driven by epigenetic modifications [38,42]. Among the transcriptionally repressive modifications, hypoxia is known to induce global deacetylation of histones [51], as well as an increased H3K9me level caused by the up-regulation of the histone methyltransferase G9a [52].
An increased level of global DNA methylation following the upregulation of DNA methyltransferases (DNMTs) has also been reported in several studies [51]. An additional level of regulation controlled by hypoxia in SC niches is provided by microRNAs (miRs), short non-coding RNA molecules regulating gene expression in a sequence-specific manner via translational repression or mRNA degradation [53]. Under hypoxic conditions, miR-210 expression is significantly increased, modulating the levels of the iron-sulphur cluster protein (ISCU), a protein involved in the mitochondrial electron transport chain [51]. Other groups of miRs seem to regulate vascular endothelial growth factor (VEGF), which, in turn, stimulates angiogenesis [54]. Interestingly, some of these miRs are downstream effectors of HIFs, providing further evidence that most changes induced by hypoxia are strictly under the control of these transcription factors. HIF-1α is also involved in cell cycle regulation, as shown in HSCs, where heterozygous deletion of the von Hippel-Lindau factor (VHL) causes enhanced HIF-1α expression and cell quiescence [55]. Moreover, in the hypoxic niche the proliferating hematopoietic cell fraction re-enters cell cycle quiescence. The HIF-1α target factors that potentially correlate with these effects of hypoxia in HSCs are VEGF and Cripto/GRP78 signaling, whose presence has been shown in the niches of various SCs, including HSCs and MSCs.

Role of hypoxia in the cancer SC niche

Most of the above-mentioned adaptations of adult SCs to hypoxia under healthy conditions have also been found in cancer stem cells (CSCs). These cells, which share many characteristics with normal SCs, seem to be necessary for tumor maintenance, progression, and malignancy [56]. As with adult SCs, most CSCs reside in hypoxic niches, where their functions depend on several autocrine/paracrine factors, ECM molecules, and non-tumor cells. Notably, with respect to bulk tumor cells, the expression of HIF-1α is higher in CSCs, leading to increased survival and progression towards more aggressive and undifferentiated phenotypes [57]. Other findings have shown that oxygen levels are subject to significant fluctuations in tumor niches and that these intermittent episodes of hypoxia and reoxygenation are even more effective in promoting CSC survival and progression than continuous hypoxia [58]. All these observations clearly demonstrate that both normal and cancer stem cells develop several mechanisms of adaptation to hypoxia that provide them with increased resistance to different stresses. This suggests useful cues for simulating a similar environment under ex-vivo conditions to improve the performance of SCs destined for cell therapy applications.

Hypoxia-dependent conditions improving ex-vivo SC survival

Hypoxic preconditioning of SCs

The poor vascularization of injured tissues, especially those damaged by an ischemic insult, only partially meets the metabolic needs of the transplanted cells; hence, more than 80-90% of cells undergo apoptosis within the first days after grafting [59]. In the attempt to confer more resistance to SCs, some strategies have been suggested to increase the ability of the cells to survive hypoxia (Table 1). Since IP seems to be the most potent strategy to protect organs against severe ischemia, this approach has been adopted in some clinical contexts, for instance to reduce the cardiac injury that follows coronary stent or bypass applications [19,20].
Almost conclusive findings demonstrate that mitochondria are the main intracellular orchestrators of IP-induced cytoprotection, following the opening of mitochondrial potassium channels (mitoKATP) that trigger anti-apoptotic responses [60]. Mitochondria primed by IP also contribute to cell survival by attenuating proton leakage from the inner membrane [61]. Taking advantage of the knowledge of the molecular mechanisms leading to cell survival during organ IP, in-vitro simulations of this strategy have been adopted to improve the resistance of cultured cells against harmful conditions of hypoxia. Therefore, "Hypoxic Preconditioning" (HP), which is characterized by intermittent periods of hypoxia, or anoxia, followed by reoxygenation [62-64], has been applied with different protocols to improve the survival of cultured SCs. Continuous hypoxia, rather than cyclic hypoxia, was also able to activate intracellular signal transduction pathways of survival in adult SCs, together with other processes useful for regenerative medicine, such as SC proliferation and paracrine activity [65-67]. However, the effectiveness of cyclic/intermittent hypoxia/anoxia followed by reoxygenation was demonstrated to be superior to that of continuous hypoxia. Heider and Ashraf observed that cycles of intermittent anoxia and reoxygenation preconditioned skeletal myoblasts better than a one-time continuous exposure to anoxia of the same duration and intensity [68]. Moreover, the leakage of lactate dehydrogenase from myoblasts subjected to a lethal anoxia treatment was inversely related to the number of cycles used to prime the cells, as were other indexes of cell damage, including TUNEL positivity, hypercontracture, and spherical morphology of the cells. Nevertheless, most protective protocols have used a single long-term exposure to low oxygen tensions, equal to or lower than 5%; hence, hereafter we will extend the term HP to non-lethal continuous hypoxia. MSCs preconditioned with HP not only survived transplantation better in a swine model of chronic myocardial ischemia but also ameliorated cardiac function [69]. Improved anti-apoptotic and anti-remodeling potency of bone marrow MSCs was also observed in a model of diabetic cardiomyopathy [70]. HP was demonstrated to be effective for the regeneration of other tissues besides the infarcted myocardium, including skeletal muscle fibers [71] and kidney [72]. Renal lesions due to acute ischemia were also regenerated using MSCs pretreated with cobalt chloride (CoCl2), which simulates HP, since divalent cations such as Co2+ compete with iron and inhibit the activity of PHDs, leading to stabilization of HIF-1α [73]. Under this condition, the effectiveness of cell therapy was enhanced by the activation of MSC migration towards the injured region of the kidney. Further evidence of the beneficial effects of HP in increasing cell survival was provided by Daneau et al. [74] in human umbilical vein endothelial cells (HUVECs) subjected to three periods of one hour each at 0.5% O2 concentration, interrupted by 30 minutes of reoxygenation. These positive effects did not occur when three hours of continuous hypoxia were applied instead of the same time-period of intermittent hypoxia.

HIF-1α-dependent signal transduction pathway in SCs

As mentioned above, the efficacy of HP is mainly ascribed to HIF-1α activity.
HIF-1α-dependent Akt phosphorylation was responsible for the increased survival of MSCs preconditioned with two cycles of 30 minutes of anoxia/reoxygenation [68]. Moreover, a selected group of miRs downstream of HIF-1α was found to protect MSCs against hypoxia [75]. Kim et al. [76] demonstrated that HP could increase the survival of MSCs by upregulating miR-210, whose concentration correlated with the number of anoxia/reoxygenation cycles. The principal target gene of miR-210 responsible for cytoprotection is FLASH/caspase-8-associated protein 2 (CASP8AP2); its repression significantly reduced apoptosis in HP-treated MSCs subjected to lethal anoxia. Accordingly, transgenic induction of miR-210 in MSCs promoted their survival as well and increased the resistance to death of MSCs engrafted in the infarcted heart [77]. Strikingly, when miR-210-transduced MSCs were cultured with cardiomyocytes, some miRs were transferred to the cardiac cells, which acquired higher protection. Functional links between HIF-1α and miR-210 against apoptosis were also observed in hypoxic endothelial cells [78] and cancer cells [79]. Besides miR-210, HP was also able to induce miR-107 in MSCs as a consequence of HIF-1α activation [80]. One major putative target of miR-107 was identified as the programmed cell death-10 (PDCD10) protein, which is regulated by HIF-1α independently of CASP8AP2. Other targets of HIF-1α have been discovered in MSCs treated with HP, such as the SDF-1 receptors CXCR4 and CXCR7 [81,82]. Therefore, the effects of SDF-1 are potentiated in MSCs after HP, as demonstrated in the ischemic and reperfused kidney [72]. The improvement of HIF-1α-dependent survival of HUVECs exposed to HP was correlated with an increased expression of cyclooxygenase-2 [74]. Other mechanisms have been linked to the protective effects of HP, including Wnt-4 [71] and Notch-stimulated Jagged2 activation, as observed in cultured CSCs [83,84].

Diazoxide treatment simulating IP

Pharmacological manipulations simulating the effects of HP in adult SCs would represent a more convenient alternative to hypoxic treatments owing to the ease of drug administration. In this regard, MSCs were preconditioned with diazoxide, a mitoKATP channel opener extensively tested as an IP mimetic for cardiac protection [85,86]. The effectiveness of diazoxide in preventing cardiomyocyte apoptosis was correlated with multiple transduction signaling cascades involving a preliminary translocation of both Akt and PKCδ into mitochondria and the subsequent phosphorylation of mitoKATP channels, leading eventually to the inhibition of cytochrome c release into the cytosol [87]. Other mechanisms related to diazoxide treatment have been described, such as improved MSC survival through the activation of nuclear factor kB (NF-kB) [88] and increased protection and angiogenic properties in skeletal myoblasts via the release of cytokines and growth factors [89]. Among them, VEGF exerts a dual role in promoting both tissue neovascularization [90] and cell survival [91,92], as also demonstrated in SCs [93,94]. Accordingly, VEGF can facilitate the commitment of circulating mononuclear cells towards the formation of EPCs [95,96] and their recruitment into ischemic tissues [97]. In addition, VEGF stimulates iPSCs to differentiate into cardiac muscle cells [98].
Besides diazoxide, several chemical compounds have also been demonstrated to be effective in preconditioning SCs by improving their survival, proliferation, and differentiation, either in vitro or after transplantation in preclinical models (Table 2) [99-108]. Since the mechanisms of action of these drugs differ from those attributable to hypoxic treatments, they will not be discussed further.

Effects of hypoxia pretreatments on SC differentiation

Hypoxic stimulation of neurogenesis

As described above, SC quiescence can be favored by the hypoxic environment of the niche. By contrast, other studies underline that hypoxia can also be responsible, at least in part, for the subsequent steps in the life of SCs, namely proliferation and differentiation. Since quiescence, expansion, and differentiation are processes that cannot temporally co-exist in the same cell, it is not clear how a condition of hypoxia can provoke such different biological events. A possible explanation is provided by the knowledge that hypoxia is not the only factor operating in the SC niche, because different cell types and ECM/paracrine-related signals are also present in that microenvironment. Thus, only a tunable orchestration of all components of the niche can allow SCs to take one direction or another towards a specific biological process. For example, studies focusing on adult neurogenesis have underlined the relevant role played by the brain niche where neural stem cells (NSCs) reside and generate new neurons [109,110]. In the healthy human adult brain, the subventricular zone (SVZ) and specific areas of the hippocampus have been identified as regions where thousands of new neurons can stem from NSCs [111,112]. Since the physiological concentrations of oxygen in the brain range from 0.5% (midbrain) to 8% (pia) [113], it can be hypothesized that NSCs experience intermediate levels of hypoxia during their migration from the niche towards other regions and consequently modulate their state of differentiation. Likewise, it has been suggested that an ischemic event in the brain can stimulate neurogenesis not only in the NSC niches but also in other injured areas to which NSCs have migrated. The involvement of brain hypoxia in promoting NSC proliferation and differentiation has encouraged researchers to steer NSC behavior under culture conditions of hypoxia. As extensively reviewed by Vieira et al. [114], several ex-vivo approaches have demonstrated that SC proliferation and differentiation into the neural lineage are enhanced in the presence of 2% to 5% O2 or using CoCl2 treatments. Besides adult NSCs and neural precursor cells, cobalt also increased MSC commitment towards dopaminergic neuron-like cells. Notably, the process of differentiation was associated with an increased expression of HIF-1α together with its target genes erythropoietin, VEGF, and p21 [115] (Table 3). The possible role of reactive oxygen species (ROS) in these models of neurogenesis has been highlighted, since oxidative stress can increase in SCs subjected to low oxygen concentrations [29,116]. Interestingly, the contribution of ROS was confirmed by studies in which neural differentiation was obtained in the PC12 cell line subjected to hyperoxia and prevented by antioxidant treatments [117].
However, although HIF-1α itself and several genes regulating the cell redox response are known to be targeted by ROS [118], the mechanistic link that induces SCs to undergo differentiation remains to be elucidated.

(Fragment of Table 2 — preconditioning agents by cell type: EPCs, sevoflurane [101]; MSCs, oxytocin [102], angiopoietin-1 [103], hydrogen peroxide [104], transforming growth factor-α [105], trimetazidine [106]; ADSCs, sildenafil [107]; cardiac SCs, cobalt protoporphyrin [108].)

Hypoxic stimulation of chondrogenesis

Cartilage is another tissue that is widely investigated for regenerative therapy and that is naturally subjected to low oxygen tension, since it is devoid of any vasculature. Owing to this particular characteristic, oxygen levels gradually decrease from both the superficial and the calcified zone towards the inner zone of cartilage. There is general agreement that hypoxia can stimulate MSCs to differentiate into chondrocytes. Hypoxia enhanced chondro-specific differentiation of the MSC line C3H10T1/2, increasing the biosynthesis of both collagen type II and aggrecan through the p38 MAPK pathway [119]. Hypoxic conditions increased SOX-9 expression in human MSCs via HIF-1α, but HIF-2α also seems to be involved, at least in articular chondrocyte differentiation [120]. On the contrary, hypoxia inhibits RUNX2 expression because of the upregulation of HIF-1α and TWIST [121]. Therefore, under hypoxic conditions MSCs are largely prevented from differentiating into osteoblasts, an event that is instead observed during the progression of osteoarthritis owing to the vascularization of cartilage. Likewise, hypoxia inhibits collagen type X production, counteracting hypertrophic chondrogenesis [122]. In view of regenerative medicine applications, Duval et al. [123] grew bone marrow MSCs (BM-MSCs) in alginate beads under hypoxic conditions without the addition of exogenous growth factors. BM-MSCs underwent chondrogenesis after seven days, as confirmed by HIF-1α and SOX-9 overexpression. In another study, similar results were obtained using a gelatin-based hydrogel as substrate [124]. Stem cells with mesenchymal features like BM-MSCs, which can be isolated from adipose tissue (ADSCs) [125], also showed the potential to differentiate into chondrocytes when subjected to hypoxia [126]. Strikingly, O2 concentrations as low as 2% decreased chondrogenesis and osteogenesis in ADSCs exposed to the corresponding differentiating media [127]. Therefore, oxygen tensions ranging from 2% to 5% should be considered the optimal O2 concentrations for ADSC commitment towards chondrogenesis.

Other putative mechanisms by which preconditioned SCs can improve organ function

Besides the enhanced survival and differentiation of the SCs themselves, it is possible that hypoxic pretreatments of SCs produce other beneficial effects in the injured host tissue. It is known that several effects of SC therapy are due to the paracrine activity of these cells. Multipotent SCs synthesize a broad spectrum of soluble mediators that include molecules with immunomodulatory, anti-apoptotic, pro-angiogenic, and chemoattractive effects. For example, the transplantation of MSCs into the infarcted heart has been described to improve cardiac contractility mainly because of the sustained release of growth factors from MSCs rather than their transdifferentiation into cardiac muscle cells [128]. However, partial oxygen pressures can significantly affect the profile of these paracrine mediators and, therefore, change the biological response of the target cells.
In particular, several angiogenic growth factors that are normally released by MSCs, including VEGF and HGF, are produced to a greater extent if MSCs are exposed to hypoxic conditions [129]. In any case, at least as a consequence of the enhanced viability of preconditioned SCs in the injured tissue, the amount of released growth factors and their persistence during the regenerative process should be increased as well. Finally, it should also be taken into consideration that several approaches of tissue engineering require 3D scaffolds, where oxygen diffusion to the embedded cells can be hampered by the thickness and low porosity of the supporting material [130]. Therefore, a preventive adaptation of SCs to a condition of partial hypoxia should improve the effectiveness of construct transplantation.

Conclusions

Making SC transplantation a truly efficient procedure is one of the major challenges for regenerative medicine. Discovering simple and safe strategies for increasing survival and favoring differentiation of grafted cells will pave the way for new suitable interventions for tissue repair. Treating SCs with HP has been demonstrated to increase resistance against cell death and to promote proliferation and differentiation towards specific cell lineages. However, easier pharmacological cell handling recapitulating the decisive intracellular effects induced by HP or IP could represent a step forward to improve cell therapy and encourage tissue engineering applications. Different factors such as autacoids (e.g. adenosine, bradykinin, opioids), cytokines (e.g. erythropoietin), their receptors and signal transduction pathways, and mitochondrial function are implicated in the protection induced by IP [131]. Several molecular steps are activated, including extracellular signal-regulated kinase 1/2, phosphatidylinositol 3-kinase/Akt, protein kinase C, and protein kinase G, which are able to inhibit glycogen synthase kinase-3β and, in turn, the opening of the mitochondrial permeability transition pore [132]. Mitochondrial ROS are also consistently involved in the causative mechanisms of IP [133]. The protection induced by IP is no longer operative a few hours after the last cycle of ischemia/reperfusion but reappears 24 hours later. This "delayed" IP is due to the synthesis of protective proteins, including heat shock proteins, manganese superoxide dismutase, and inducible nitric oxide synthase [134]. According to this cascade of signaling events, a variety of drugs affecting these pathways could be investigated as IP mimetics to exert protection. For example, clinical trials have demonstrated that adenosine or erythropoietin administration is effective in cardioprotection, significantly reducing the infarct size [135,136]. Therefore, besides diazoxide, other drugs simulating hypoxic treatments should be investigated to provide protection to SCs before their transplantation. Although low oxygen tension-based pretreatments have been demonstrated to give considerable results on SC viability and functional performance, the short-term in-vivo permanence of these beneficial effects still represents a problem to be solved. In order to obtain a more stable action of SCs in damaged tissues, gene therapy inducing SC expression of molecules adaptive to the ischemic environment could likely represent a future approach.
Alternatively, biomaterials releasing HP/IP mimetics while simultaneously delivering SCs [137] could also prolong cell activity after grafting and improve the overall process of tissue regeneration. In addition, bioreactors for high-throughput cell yield providing hypoxic cell culture chambers should be employed to obtain a sufficient amount of preconditioned SCs to regenerate the injured region [138,139]. Future interdisciplinary studies investigating these issues will surely contribute to translating this basic research into clinical applications.
2017-08-03T01:51:52.028Z
2013-08-29T00:00:00.000
{ "year": 2013, "sha1": "1bd3583959c93f5d843c67b8a03acd821a3cc501", "oa_license": "CCBY", "oa_url": "https://jbiomedsci.biomedcentral.com/track/pdf/10.1186/1423-0127-20-63", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d900248572f70f9494867d7e9eb9fdc6ffcca510", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231698775
pes2o/s2orc
v3-fos-license
GP: Context-free Grammar Pre-training for Text-to-SQL Parsers A new method for Text-to-SQL parsing, Grammar Pre-training (GP), is proposed to decode deep relations between questions and databases. Firstly, to better utilize the information of databases, a random value is appended after a question word that is recognized as a column, and the new sentence serves as the model input. Secondly, the initialization of vectors for the decoder part is optimized, with reference to the previous encoding, so that question information can be attended to. Finally, a new approach called flooding level is adopted to obtain a non-zero training loss that generalizes better. By encoding the sentence with the GRAPPA and RAT-SQL models, we achieve better performance on Spider, a cross-DB Text-to-SQL dataset (72.8 dev, 69.8 test). Experiments show that our method converges more easily during training and has excellent robustness.

INTRODUCTION

In recent years, with the development of artificial intelligence technology, how to directly generate SQL statements that interact with database systems through the analysis of natural language has become one of the research hotspots. Current research work usually adopts a Natural Language Interface to Database (NLIDB) to realize the interaction between users' questions and the database system in order to obtain and analyze data (Baik et al., 2019). The core problem of NLIDB is to convert the input text information into SQL statements (Text-to-SQL). In order to solve this problem, there are two main approaches at present: (1) methods based on rule templates, in which natural language is classified according to common SQL grammar, and the corresponding SQL templates belong to different categories (Popescu et al., 2004, Unger et al., 2012, Li and Jagadish, 2014). This type of method requires manual summarization of experience and has a high time cost. In addition, with the switch of application scenarios, the existing templates are often insufficient to meet the requirements, and migration is poor; (2) methods based on deep learning, in which a neural network is used for an end-to-end implementation (Zhong et al., 2017, Yu et al., 2018a,b, Bogin et al., 2019, Guo et al., 2019). This approach can be self-optimized by continuously adding sample information. It has the advantages of high accuracy and strong stability, and is receiving more and more attention from the academic community. By incorporating a BERT encoder, the accuracy on the WikiSQL dataset can reach above 90%. However, these deep-learning methods do not achieve satisfactory performance in a cross-domain Text-to-SQL scenario such as Spider. As shown in Figure 1, pre-trained encoders such as BERT (Devlin et al., 2018) and RoBERTa for contextual sentences have been applied in the cross-domain Text-to-SQL scenario, but the relations between the tables and fields of the database are not considered. A grammar-augmented pre-training model (GRAPPA) describing the joint representations of textual and tabular data has been presented (Yu et al., 2020). By combining this pre-training model with downstream methods like RAT-SQL, the accuracy on cross-domain tasks can be greatly improved. In this paper, a context-free grammar pre-training (GP) approach is proposed. Instead of pre-training primary input vectors, this method is intended for downstream models. In the preprocessing module, the input natural language questions are split into single words. Using an n-gram algorithm, columns can be detected by matching schema information, as sketched below.
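To make the preprocessing step concrete, here is a minimal Python sketch of n-gram column detection followed by value insertion. This is not the authors' released code; the function names, the schema list, and the value lookup are all hypothetical.

def detect_columns(question_tokens, schema_columns, max_n=4):
    """Return (start, end, column) spans whose n-gram equals a column name."""
    matches = []
    for n in range(max_n, 0, -1):  # prefer longer n-grams first
        for i in range(len(question_tokens) - n + 1):
            gram = " ".join(question_tokens[i:i + n]).lower()
            for col in schema_columns:
                if gram == col.lower():
                    matches.append((i, i + n, col))
    return matches

def insert_values(question_tokens, matches, db_values):
    """Append one database value of a matched column right after its mention."""
    out = list(question_tokens)
    # insert from right to left so earlier indices stay valid
    for _, end, col in sorted(matches, key=lambda m: -m[1]):
        values = db_values.get(col, [])
        if values:
            out.insert(end, str(values[0]))
    return out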
One of its values is then appended so that a new question sentence is generated as the model input. For the design of the loss function, we adopt flooding level, a new method to avoid local minima. On the basis of the GRAPPA/RAT-SQL framework, experiments show that our approach reaches a much higher accuracy on the Spider test set. Results also prove that this method has excellent robustness.

RELATED WORK

Pre-training models for NLP parsing

The Text-to-SQL task contains both unstructured user questions and structured schema information. Early research uses general pre-training models like ELMo (Peters et al., 2018), BERT (Devlin et al., 2018) and RoBERTa to represent textual information for unstructured language questions. There has been great improvement in the joint textual-tabular field, such as question answering (Chen et al., 2020) and table semantic parsing (Yu et al., 2018c), by learning better representations from the input text and table information, but most of this work considers single tables. Recent pre-training work focuses on achieving high-quality cross-modal representations. TaBERT (Yin et al., 2020) is pretrained using millions of web tables. It can represent the complete structure of different tables and supports matrix computations in table semantic parsing. However, noisy context information weakens its performance on the Text-to-SQL task. In this paper, we adopt GRAPPA, the grammar-augmented pre-training method using a novel text-schema linking objective and masked language modeling (MLM). By combining GRAPPA as a feature representation layer with downstream models, high accuracy has been achieved on the Spider dataset.

Neural networks for Text-to-SQL

Previous networks were intended to solve problems in single-table datasets like WikiSQL. The Seq2SQL model based on the strategy mode (Zhong et al., 2017) was applied to Text-to-SQL tasks and achieves 59.45% SQL execution accuracy on the WikiSQL dataset. Then TypeSQL (Yu et al., 2018a) was proposed, which further extracts the keywords in the question sentence by combining external knowledge and database field enumeration values. The above methods achieved obvious results on single-table queries, but they are not enough to solve the complex mode of multi-table queries. EditSQL uses an editing mechanism to introduce historical information for user queries, and its matching accuracy on the Spider dataset reaches 32.9. IRNet (Guo et al., 2019) adopts an intermediate representation named SemQL to translate complex SQL queries into a syntax tree. Using a pointer network (Vinyals et al., 2015) for downstream tasks, it achieves an accuracy of 54.7 on the Spider test set. Graph neural networks have also been considered to represent the relations in schema information. A global gated graph neural network (Bogin et al., 2019) was designed to learn the structure of database patterns and apply it in the encoding and decoding stages. Recently, RAT-SQL (Wang et al., 2019) used a relation-aware self-attention mechanism for schema encoding, feature representation and schema linking. It obtains a state-of-the-art accuracy of 65.6 on the Spider test set.

Training loss optimization is a common problem in the training procedure. In contrast with former methods like dropout (Srivastava et al., 2014), batch normalization (Ioffe and Szegedy, 2015), label smoothing (Szegedy et al., 2016) and mixup (Zhang et al., 2017), flooding (Ishida et al., 2020) prevents the training loss from decreasing to zero by making it float around a small constant value.
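In implementation terms, the flooding objective is a one-line change. Here is a minimal PyTorch-style sketch following the formula of Ishida et al. (2020); the function name is ours.

import torch

def flooded_loss(loss: torch.Tensor, b: float) -> torch.Tensor:
    # |L - b| + b: same gradient as L while the loss is above the
    # flooding level b, reversed ("gentle ascent") once it falls below b.
    return (loss - b).abs() + b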
On the other hand, the loss level around which training is fixed can be determined according to the model itself. Therefore, flooding skips some local extreme points and finds optimal parameters from a global perspective.

Context-free Grammar Pre-training

RAT-SQL utilizes the Syntactic Neural Model (SNM) proposed by (Yin and Neubig, 2017) to generate the SQL. Yin et al. argue that existing methods treat code generation as a task of natural language generation, while the syntax of the target programming language is not considered. Unlike natural languages, programming languages, especially SQL, have strict grammar rules. Following these rules, SNM is an essential method that improves the accuracy of the model by limiting the search space of the decoder. In addition, the basic framework of SQL grammar is context-free with respect to the specific natural language description. For example, no matter what the natural language description is, the first clause of a SQL query is always SELECT, and the next clause is always FROM. The loss value in the initial training stage of RAT-SQL is extremely large, and it mainly comes from errors generated by the decoder. In view of the above situation, we propose a Context-free Grammar Pre-training (GP) method to pre-train the parameters on the decoder side. The semantic information of the encoder is replaced by zero vectors. The probability with which RAT-SQL uses an LSTM to output a sequence of actions is

Pr(P | y) = ∏_t Pr(a_t | a_{<t}, y),

where y is always [0] in the stage of GP and a_{<t} are all previous actions. The LSTM state update follows RAT-SQL and is modified correspondingly as

(m_t, h_t) = f_LSTM([a_{t-1} || z_t || h_{p_t} || a_{p_t} || n_{f_t}], (m_{t-1}, h_{t-1})),

where m_t and h_t are the LSTM cell state and output in step t, a_{t-1} is the embedding of the previous action, p_t is the step corresponding to expanding the parent AST node of the current node, and n_{f_t} is the embedding of the current node type. We use [0] to replace the former z_t, which was obtained by using multi-head attention on h_{t-1} over y. Since GP no longer depends on semantic information, it cannot predict column names or table names. In order not to change the framework of RAT-SQL, it is assumed that each sample has only one column and one table. To prevent overfitting, the number of decoder Grammar Pre-training steps is limited to 300.

Question-Schema Serialization and Encoding

We generally adopt the serialization method of RAT-SQL. Because the pre-trained semantic model utilized is GRAPPA, the question tokens are preceded by <s> and end with </s>. Then, columns and tables are spliced in sequence according to the order of the schema provided by the Spider dataset, and we use </s> as the separator. As mentioned above, modeling with only table/field names and their relations is not always enough to capture the semantics of the schema and its dependencies on the question. Notably, we append values to mentioned columns only if they exactly match the question. For the example in Figure 2, the keyword in the question appears in two columns: the token has a Column-Part-Match (CPM) relationship with one column and a Column-Exact-Match (CEM) relationship with the other. Intuitively, an exact match has a greater probability of being the correct column. In order to strengthen this relationship, we put the value after the exactly matching column during serialization, but not after the partially matching one. The sequence can be written as

X = <s>, Q, </s>, c_1, </s>, c_2, v_2, </s>, ..., t_1, </s>, t_2, </s>, ..., </s>,

where Q denotes the question tokens, c_i the columns (here c_2 is followed by its matched value v_2), and t_i the tables. In RAT-SQL, the vector representation of a column or a table is the average of the first and last token.
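To make the pooling step concrete, here is a short sketch of the two strategies (hypothetical tensor shapes; not taken from the released RAT-SQL code). `tok_vecs` holds the encoder outputs for the tokens of one column or table, including any appended value tokens; the all-token variant is the modification adopted in the next paragraph.

import torch

def first_last_pool(tok_vecs: torch.Tensor) -> torch.Tensor:
    # RAT-SQL default: average of the first and last token vectors.
    return (tok_vecs[0] + tok_vecs[-1]) / 2

def mean_pool(tok_vecs: torch.Tensor) -> torch.Tensor:
    # Modified encoding: average over all column (and value) tokens.
    return tok_vecs.mean(dim=0)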
Experiments show that the first/last-token encoding may lose important information, so another method is used: computing the average of all token vectors of the column or table. If a column is followed by a value, the representation of the column is calculated from all column tokens and value tokens, as shown in Figure 3.

Flooding

In deep learning, it often occurs that the training loss keeps decreasing while the validation loss suddenly starts to rise. (Ishida et al., 2020) proposed a simple and tricky loss function that makes the validation loss continue decreasing:

L̃(θ) = |L(θ) − b| + b,   (6)

where b > 0 is the flooding level specified by the user and θ is the model parameter. It is assumed that, to a certain extent, the existence of the parameter b can prevent the model from falling into a local optimum during the optimization process. However, an unsuitable b usually leads to gradient explosion.

Experimental Setup

The Adam optimizer (Kingma and Ba, 2014) with default hyperparameters is adopted. In the stage of GP, the learning rate is set to 7.44 × 10^-4. Due to GPU memory limitations, we set the per-step batch size to 3 with 4 gradient-accumulation steps (the gradient accumulation parameters of RAT-SQL), which is equivalent to a batch size of 12. Because of GP and the smaller batch size, compared to RAT-SQL we adjusted the initial learning rate of GRAPPA from the original 3 × 10^-6 to 2 × 10^-6, and the initial learning rate of the other model parameters from 7.44 × 10^-4 to 5.44 × 10^-4. The rest of the setup is the same as RAT-SQL.

Dataset and Metrics

Spider (Yu et al., 2018c) is a large-scale, complex and cross-domain Text-to-SQL dataset. It consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables covering 138 different domains. The metric adopted to evaluate model performance is Exact Match Accuracy, proposed by (Yu et al., 2018b). This metric measures the model's performance without generating values.

Results

While RAT-SQL and GRAPPA have been open-sourced, in our experiments the offline result is worse than that announced on the leaderboard. Figure 4 shows that in the first 50 steps of GP, the training loss drops significantly, then remains at about 53. To prevent overfitting, the number of Grammar Pre-training steps is limited, even though the loss is still dropping at a tiny speed. We then use the pre-trained decoder to train our model; the training loss is maintained at a more stable level compared to training without GP, as shown in Figure 5.

Flooding

Equation 6 shows that there is an extra parameter b in the loss function, and the model performance is extremely sensitive to b and the learning rate; a slightly larger b may cause gradient explosion during training. Table 2 shows several examples of different parameter combinations, where ∅ means the parameter combination leads to gradient explosion. It is worth mentioning that although b can improve model performance, the results are not stable: the best result may be as high as 72.1, and the lowest may be only 70.7, even if we use the same parameters.

CONCLUSION

The final result on Spider is 72.8 on Dev. and 69.8 on Test. Compared to the result of RAT-SQL+GRAPPA, the Dev. and Test. results of RAT-SQL+GRAPPA+GP are closer to each other, which means that our model is more robust, as shown in Table 4. Moreover, tuning parameters is a complex and delicate task; the slightest difference can lead to results that are miles apart. (Fragment of the Dev. validation comparison: RAT-SQL+GRAPPA with Fld., 71.8 ± 0.6; with GP, 72.5 ± 0.6.)
Table 4: Results comparison between RAT-SQL+GRAPPA and RAT-SQL+GRAPPA+GP.

model | Dev. | Test.
RAT-SQL+GRAPPA (Yu et al., 2020) | 73.4 | 69.6
RAT-SQL+GRAPPA+GP (Ours) | 72.8 | 69.8

The most influential hyperparameter is the learning rate: when other parameters are exactly the same, a tiny difference in the learning rate will lead to completely different results. We believe that our model still has great potential, but we still need to find suitable hyperparameters.
2021-01-26T02:15:50.671Z
2021-01-25T00:00:00.000
{ "year": 2021, "sha1": "2e84295dbe37cdf9183e45ceef9962754002a9ca", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "48c3edc9ee25728b96e7ec17a88f8ce34bb8947f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
264127813
pes2o/s2orc
v3-fos-license
Bushnell-Kutzko types for $P$-ordinary automorphic representations on unitary groups This paper generalizes a theorem of Hida on the structure of ordinary representations on unitary groups to $P$-ordinary representations, where $P$ is a general parabolic subgroup of some general linear group. When $P$ is minimal, we recover Hida's theorem, which asserts that ordinary subspaces are 1-dimensional. While analogous $P$-ordinary subspaces are infinite-dimensional in general, we use the theory of Bushnell-Kutzko types to canonically associate a finite-dimensional type to the representation (under minor assumptions) that has multiplicity one in its $P$-ordinary subspace. We simultaneously develop the theory of modular forms on unitary groups with $P$-Iwahoric level structure whose nebentypus is a type (instead of a character) and construct lattices of $P$-ordinary cuspidal forms inside $P$-ordinary automorphic representations. We also obtain direct consequences for the dual notion of $P$-anti-ordinary forms and representations.

INTRODUCTION

In the paper [EHLS20], the four authors construct a p-adic L-function for ordinary families on unitary groups. This completed a project started more than a decade earlier by three of the four authors in [HLS06]. This required the development of several technical results on p-adic differential operators, accomplished in great part by the first author in [Eis12], to obtain a more general Eisenstein measure [Eis15] than the one originally constructed in [HLS06]. Fundamental properties of their p-adic L-function for families are obtained by carefully computing local zeta integrals related to the doubling method [GPSR87], as well as local coefficients of Siegel Eisenstein series [Eis15]. The most technical calculations are for local factors at places above the fixed prime p, and a theorem of Hida in [Hid98] establishing the uniqueness (up to scalar) of ordinary vectors plays a crucial role in their analysis.

In this article, we generalize this theorem of Hida to construct a canonical finite-dimensional subspace in the space of P-ordinary vectors for a P-ordinary representation π on a unitary group G. Here, P is a parabolic subgroup of a product of general linear groups related to G. When P corresponds to (a product of) upper-triangular Borel subgroups, the notion of π being "P-ordinary" coincides with the usual notion of being "ordinary". This accomplishes the first step in a broader project of the author to construct a p-adic L-function for a P-ordinary family on G, directly generalizing the work of [EHLS20]. In upcoming work, the author plans to develop the theory of P-ordinary families on unitary groups, inspired by the results of [Pil12] on symplectic groups, and adapt the calculations of [Eis15, EHLS20] using the P-ordinary vectors constructed here instead of ordinary vectors.

Structure of this paper. In Section 1, we first set some notation and conventions, and review the theory of Bushnell-Kutzko types relevant for us. Then, in Section 2, we introduce level subgroups of G(Z_p) that are "P-Iwahoric" (of some level r). Using the geometry of Shimura varieties associated to G, this allows us to construct P-Iwahoric covers over them. We also introduce the relevant notation to compare the theory on G = G_1 and on the unitary group G_2 associated to its opposite Hermitian vector space.
This sets up the background to define holomorphic P-ordinary and anti-holomorphic P-anti-ordinary representations on G_1, as well as dual notions on G_2, in later sections. Simultaneously, it leads us to a natural definition of (holomorphic and anti-holomorphic) modular forms on G whose level structure at p is P-Iwahoric and whose nebentypus is a type, instead of a 1-dimensional character. We refer to the latter as a P-nebentypus to emphasize the distinction.

In Section 3, we introduce Hecke operators at p related to P and define (holomorphic) P-ordinary representations as the ones having simultaneous eigenvectors for all these operators with p-adic unit eigenvalues. Equivalently, we define a P-ordinary projector e_P from these operators, and π is P-ordinary if and only if its p-factor π_p contains an e_P-fixed vector. When this is the case, e_P determines a P-ordinary subspace in π_p, which is typically infinite-dimensional. We use the theory of Bushnell-Kutzko types to decompose this space into a direct sum of subspaces of P-ordinary vectors of type τ, or (P, τ)-ordinary vectors.

Our first main result (Theorem 3.10) describes natural homomorphisms between a type τ and the corresponding space of (P, τ)-ordinary vectors. This result is stated for local factors of π_p at places above p. Using well-known results about types and a minor hypothesis (which the author wishes to remove in the future), our second main result (Theorem 3.12) rephrases this statement for π_p and proves that for a canonical type τ associated to π_p, which we call the BK-type of π, the homomorphism constructed actually provides an isomorphism between τ and the corresponding (P, τ)-ordinary subspace.

Given some fixed P-ordinary representation π with BK-type τ (which is a smooth irreducible representation of some compact p-adic Lie group contained in P), one can twist π by a character χ of P (that factors through the determinant map). The result is again P-ordinary, but now with BK-type τ ⊗ χ. This plays a more relevant role in upcoming work of the author to construct a p-adic family of P-ordinary representations containing π of dimension equal to the rank d of the Levi subgroup of P. Moreover, the isomorphism above allows us to vary a fixed (P, τ)-ordinary vector of π p-adically in this family. Again, in upcoming work, this allows the author to adapt the crucial calculations of [EHLS20, Section 4] and [Eis15, Section 2] to construct (d + 1)-variable p-adic L-functions on G.

In Section 4, we define the analogous objects for P-anti-ordinary representations on G = G_1. Using pairs of contragredient representations, the two notions are dual to each other, and we obtain consequences about the space of P-anti-ordinary vectors from our work in the previous section. We also prove analogous statements on G_2. Relying on a canonical identification between G_1 and G_2, we first obtain identical results by simply replacing P with its opposite parabolic P^op. However, using standard intertwining operators, we state the analogous result with P instead of P^op. As part of this broader project on p-adic L-functions, this is purely for computational purposes. Namely, in upcoming work of the author, some Rankin-Selberg zeta integrals are evaluated involving P-anti-ordinary vectors on both G_1 and G_2, and the analysis is simpler when both parabolic subgroups are equal rather than opposite to one another.

1 Notation and conventions.

Let Q̄ ⊂ C be the algebraic closure of Q in C.
For any number field F ⊂ Q̄, let Σ_F denote its set of complex embeddings Hom(F, C) = Hom(F, Q̄).

Throughout this article, we fix a CM field K ⊂ Q̄ with ring of integers O = O_K. Let K^+ be the maximal real subfield of K and denote its ring of integers by O^+ = O_{K^+}. Let c ∈ Gal(K/K^+) denote complex conjugation, the unique nontrivial automorphism. Given a place v of K, we usually denote c(v) by v̄. Let Z(1) ⊂ C be the kernel of the exponential map exp : C → C^×, a free rank-one Z-module with noncanonical basis 2π√−1. For any commutative ring R, denote R ⊗ Z(1) by R(1).

CM types and local places.

Fix an integer prime p that is unramified in K. Throughout this paper, we assume the following:

HYPOTHESIS 1.1. Each place v^+ of K^+ above p splits as v^+ = v v̄ in K.

This hypothesis plays a crucial role in our analysis of the local factors at places above p of the automorphic representations considered in later sections.

Fix an algebraic closure Q̄_p of Q_p and an embedding incl_p : Q̄ ֒→ Q̄_p. This determines a p-adic valuation on Q̄ via ν_p, the canonical extension to Q̄_p of the normalized p-adic valuation on Q_p. Let C_p be the completion of Q̄_p. The map incl_p yields an isomorphism between its valuation ring O_{C_p} and the completion of Z̄_(p), which extends to an isomorphism ι : C_p ≅ C.

Given σ ∈ Σ_K, the embedding incl_p ∘ σ determines a prime ideal p_σ of O. There may be several embeddings inducing the same prime ideal. Similarly, given a place w of K, let p_w denote the corresponding prime ideal of O. Under Hypothesis 1.1, for each place v^+ of K^+ above p, there are exactly two primes of O above v^+. Fix a set Σ_p containing exactly one of these prime ideals for each such place v^+. The set Σ = {σ ∈ Σ_K | p_σ ∈ Σ_p} is a CM type of K (see [Kat78, p.202]).

Bushnell-Kutzko Types.

To discuss the local theory of P-ordinary representations in later sections, let us recall the theory of Bushnell-Kutzko types and covers, adapting the notions of [BK98] and [Lat21, Section 3] to our setting. Fix a place w of O and write F = K_w; for the rest of this section, G denotes GL_n(F) for some fixed n ≥ 1.

Parabolic inductions and Jacquet modules.

For any parabolic subgroup P of G, let L and P^u denote its Levi factor and unipotent radical, respectively. Let δ_P : P → C^× denote its modulus character. Recall that δ_P factors through L. Moreover, if P is the standard parabolic subgroup associated to the partition n = n_1 + ... + n_t, one has

δ_P(l) = ∏_{j=1}^t |det(l_j)|_F^{s_j}, with s_j = Σ_{k>j} n_k − Σ_{k<j} n_k,

for any l = (l_1, ..., l_t) in L = ∏_{j=1}^t GL_{n_j}(F). In particular, δ_P agrees with δ_B on the center Z(L) of L, where B is the upper-triangular Borel subgroup (associated to the partition n = 1 + ... + 1).

Given a smooth representation (σ, W) of L, let Ind^G_P(σ, W) denote the classical (unnormalized) parabolic induction functor from P to G. Moreover, given a representation (π, V) of G, let (π_P, V_P) denote the classical P-Jacquet functor. We often consider σ and π_P as representations of both L and P without further comment.

Definition 1.2. The normalized parabolic induction functor is

ι^G_P σ := Ind^G_P(σ ⊗ δ_P^{1/2}),

and the normalized Jacquet functor is

r^G_P π := δ_P^{−1/2} ⊗ π_P.

We often simply write ι^G_P σ (resp. ι^G_P W) and r^G_P π (resp. r^G_P V) when the associated vector space (resp. representation) is clear from context.

Supercuspidal support.

A theorem of Jacquet (see [Cas95, Theorem 5.1.2]) implies that given any irreducible representation π of G, one may find a parabolic subgroup P of G with Levi subgroup L and a supercuspidal representation σ of L such that π ⊂ ι^G_P σ. The pair (L, σ) is uniquely determined by π up to G-conjugacy, and one refers to this conjugacy class as the supercuspidal support of π.
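As a standard illustration of these notions (an orientation example we add; it is not taken from the paper), consider $G = \mathrm{GL}_2(F)$ with $B$ the upper-triangular Borel and $T$ the diagonal torus. The formula above gives

\[
\delta_B\!\begin{pmatrix} a & * \\ 0 & d \end{pmatrix} = |a|_F\,|d|_F^{-1},
\qquad
\iota_B^G(\chi) = \mathrm{Ind}_B^G\bigl(\chi \otimes \delta_B^{1/2}\bigr).
\]

The Steinberg representation $\mathrm{St}$ is the unique irreducible subrepresentation of $\iota_B^G(\delta_B^{1/2})$, so its supercuspidal support is the $G$-conjugacy class of the pair $(T, \delta_B^{1/2})$.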
Consider two pairs (L, σ) and (L′, σ′), each consisting of a Levi subgroup of G and one of its supercuspidal representations. One says that they are G-inertially equivalent if there exist some g ∈ G such that L′ = g^{−1}Lg and some unramified character χ of L′ such that ^gσ ≅ σ′ ⊗ χ, where ^gσ(x) = σ(gxg^{−1}). We write [L, σ]_G for the G-inertial equivalence class of (L, σ) and let B(G) denote the set of such classes.

For each s ∈ B(G), let Rep^s(G) denote the full subcategory of Rep(G) whose objects are the representations all of whose irreducible subquotients have inertial equivalence class s. The Bernstein-Zelevinsky geometric lemma (see [Ren10, Section VI.5.1]) implies that Rep(G) decomposes as the product of the subcategories Rep^s(G) over s ∈ B(G).

Let J be a compact open subgroup of G and τ be an irreducible representation of J. Let Rep_τ(G) denote the full subcategory of Rep(G) whose objects are the representations generated over G by their τ-isotypic subspace. We say that (J, τ) is an s-type if Rep_τ(G) = Rep^s(G). If π is an irreducible supercuspidal representation of G with inertial support s, then one can easily construct an s-type (J, τ); see [BK98, Section 5]. By [BK98, Proposition 5.6], the complex vector space Hom_J(τ, π) is 1-dimensional. Furthermore, it follows from [Pas05, Theorem 1.3] that there exists a unique (up to isomorphism) representation τ of K = G(O_F) such that (K, τ) is an s-type. We refer to this unique "maximal" type of s as the BK-type of the supercuspidal representation π.

2 P-nebentypus of modular forms on unitary Shimura varieties.

In this section, we introduce the main algebraic groups of interest for this paper. We are mostly concerned with their structure over Z_p and the construction of particular p-adic parabolic subgroups P. Furthermore, we analyze the geometry of the associated Shimura varieties and consider automorphic vector bundles over them (of a fixed weight κ and P-nebentypus τ). This allows us to discuss the theory of modular forms whose p-level structure is "P-Iwahoric". This sets up the background to discuss (holomorphic and anti-holomorphic) cuspidal representations that have a particular behavior under the action of the P-Iwahori subgroup in the next sections. We follow the standard approach and material of [Hid04, CEF+16, EHLS20].

Unitary Groups.

Let V be a finite-dimensional K-vector space, equipped with a pairing ⟨·, ·⟩_V that is Hermitian with respect to the quadratic extension K/K^+. Write n = dim_K V. Let δ ∈ O be totally imaginary and prime to p, and define ⟨·, ·⟩ by twisting ⟨·, ·⟩_V by δ. This choice of δ and our Hypothesis 1.1 ensure the existence of an O-lattice L ⊂ V such that the restriction of ⟨·, ·⟩ to L is integral and yields a perfect pairing on L ⊗ Z_p.

For each σ ∈ Σ_K, let V_σ denote V ⊗_{K,σ} C. It has a C-basis diagonalizing the pairing ⟨·, ·⟩. The only eigenvalues must be ±1; say that 1 (resp. −1) has multiplicity r_σ (resp. s_σ). We order the basis so that the +1-eigenvectors appear first. Fixing such a basis, let

G(R) := {(g, ν) ∈ GL_{O⊗R}(L ⊗ R) × R^× | ⟨gx, gy⟩ = ν ⟨x, y⟩ for all x, y}

for any commutative ring R. In particular, G_{/Q} is a reductive group. Moreover, the assumptions on p imply that G_{/Z_p} is smooth and G(Z_p) is a hyperspecial maximal compact subgroup of G(Q_p).

Hodge structure.
The homomorphism h determines a pure Hodge structure of weight −1 on V_C = L ⊗ C, i.e., V_C = V^{−1,0} ⊕ V^{0,−1}, and h(z) acts as z on V^{−1,0} and as z̄ on V^{0,−1}. In particular, the O ⊗ C-submodule V^0 ⊂ V_C defined as the degree 0 piece of the corresponding Hodge filtration is simply V^{0,−1}. The signature of h is defined as the collection of pairs {(a_σ, b_σ)}_{σ∈Σ_K}. Throughout this paper, we assume:

HYPOTHESIS 2.1 (Ordinary hypothesis). For all embeddings σ, σ′ ∈ Σ_K inducing the same place of K above p, one has (a_σ, b_σ) = (a_{σ′}, b_{σ′}).

Therefore, given a place w of K above p, one can define (a_w, b_w) := (a_σ, b_σ), where σ ∈ Σ_K is any embedding such that p_σ = p_w.

2.2 Structure of G over Z_p.

In this section, we introduce the preliminary notions that allow us to later study automorphic representations that are ordinary with respect to some parabolic subgroup of G.

Comparison to general linear groups.

Consider the factorization O ⊗ Z_p = ∏_{w|p} O_w, where the product runs over all primes w of K above p. This induces a decomposition L ⊗ Z_p = ⊕_{w|p} L_w and a canonical Z_p-isomorphism (2). One obtains an isomorphism (3) identifying G(Z_p) with Z_p^× × ∏_{w∈Σ_p} GL_{O_w}(L_w). Our assumption above about the pairing ⟨·, ·⟩ implies that, for each w | p, there is an O_w-decomposition L_w = L^+_w ⊕ L^−_w. Hence, one has a perfect pairing L^+_w ⊕ L^−_w → Z_p(1), again denoted ⟨·, ·⟩. Fix dual O_w-bases (with respect to the perfect pairing above) for L^+_w and L^−_w. They yield identifications (4), as well as an isomorphism GL_{O_w}(L_w) ≅ GL_n(O_w) such that the obvious map is simply the diagonal embedding of block matrices. Let H := GL_{O⊗Z_p}(L^+). Then, the identification (4) above induces a canonical isomorphism (5) between H and ∏_{w∈Σ_p} GL_{a_w}(O_w).

Let P_H ⊂ H be the Z_p-parabolic that corresponds to the product of all the P_{d_w} via the isomorphism (5). We denote the unipotent radical of P_H by P^u_H. We work with the Levi factor L_H = P_H/P^u_H of P_H as well as its maximal subtorus T_H. Note that T_H does not depend on the choice of partitions. Furthermore, elements of L_H are identified with collections of block-diagonal matrices, with respect to the partitions d_w, via (5).

Let P^+ ⊂ G_{/Z_p} be the parabolic subgroup that stabilizes L^+ and such that the natural map P^+ → G_m × H has image G_m × P_H, where the map to the first factor is the similitude character ν and the map to the second factor is the projection to H. For w ∈ Σ_p, let P_w be the standard (block upper-triangular) parabolic subgroup of GL_n(O_w) determined by the partitions d_w and d_{w̄} of a_w and b_w, and set P = ∏_{w∈Σ_p} P_w. We naturally identify P with a subgroup of G_{/Z_p}. Let P^u be the unipotent radical of P, L_P = P/P^u its Levi factor, and T_P its maximal subtorus. The projection P^+ ↠ G_m × P_H induces a natural isomorphism L_P ≅ L_H. Its restriction to maximal subtori yields the identity map T_P = T_H.

Remark 2.2. The trivial partition of a_w is (1, ..., 1) (of length t_w = a_w). If the partitions d_w and d_{w̄} are both trivial, we write B_w instead of P_w. In that case, L_B = T_B = T_H.

Our choices of bases above imply that under the isomorphisms (3) and (4), P^+ corresponds to Z_p^× × ∏_{w∈Σ_p} P_w.

Definition 2.3. We define the P-Iwahori subgroup of G of level r ≥ 0 as

I^0_r = I^0_{P,r} := {g ∈ G(Z_p) | g mod p^r ∈ P^+(Z_p/p^r Z_p)}

and the pro-p P-Iwahori subgroup I_r = I_{P,r} of G of level r as the subgroup of those g ∈ I^0_r whose reduction mod p^r maps to the identity in L_P(Z_p/p^r Z_p) under the projection of P^+ onto its Levi factor. Note that for r = 0, we simply have I_{P,0} = I^0_{P,0} = G(Z_p).

Remark 2.4. Although somewhat tempting, we refrain from referring to I^0_r as a parahoric subgroup of G. This terminology is usually reserved for stabilizers of points in Bruhat-Tits buildings. We make no attempt here to introduce our construction from the point of view of these combinatorial and geometric structures.
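To fix ideas, here is the simplest $\mathrm{GL}_2$ analogue of Definition 2.3 (an orientation example we add, with $B$ the upper-triangular Borel; it is not a statement from the paper):

\[
I^0_r \longleftrightarrow \left\{ g \in \mathrm{GL}_2(\mathbb{Z}_p) \;\middle|\; g \equiv \begin{pmatrix} * & * \\ 0 & * \end{pmatrix} \bmod p^r \right\},
\qquad
I_r \longleftrightarrow \left\{ g \in I^0_r \;\middle|\; g \equiv \begin{pmatrix} 1 & * \\ 0 & 1 \end{pmatrix} \bmod p^r \right\},
\]

the familiar $\Gamma_0(p^r)$- and $\Gamma_1(p^r)$-type level subgroups, with quotient $I^0_r/I_r$ the diagonal torus modulo $p^r$.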
The inclusion of L_P(Z_p) in I^0_r yields a canonical isomorphism I^0_r/I_r ≅ L_P(Z_p/p^r Z_p). For each w ∈ Σ_p, one similarly defines I^0_{w,r} and I_{w,r} by replacing P^+ by P_w and working in GL_n(O_w) instead of G(Z_p). Let I^{GL}_{P,r} := ∏_{w∈Σ_p} I_{w,r} and I^{0,GL}_{P,r} := ∏_{w∈Σ_p} I^0_{w,r}, so that I_r and I^0_r correspond to Z^×_p × I^{GL}_{P,r} and Z^×_p × I^{0,GL}_{P,r} respectively, via the isomorphisms (3) and (4).

Remark 2.5. Later, we will consider various modules with an action of G and define "P-ordinary" submodules. Technically, it would be more accurate to refer to them as P^+-ordinary submodules. Similarly, the groups defined above could be called (pro-p) P^+-Iwahori subgroups. In any case, there should not be any confusion between P and P^+.

Conventions for the opposite unitary group of G.

Consider the PEL datum P = (K, c, O, L, ⟨·, ·⟩, h) of unitary type associated to a finite-dimensional Hermitian K-vector space (V, ⟨·, ·⟩) as above, with h fixed as before. We sometimes write P_1 for P and similarly write P_2 for the datum associated to V but equipped with the opposite Hermitian pairing −⟨·, ·⟩. When we wish to distinguish these PEL data, we write G_1 := G_{P_1} and G_2 := G_{P_2}. One has an obvious canonical identification G_1(A) = G_2(A). All of the definitions above can therefore be made with P_2 instead of P_1. To compare the relevant results on these two groups, we choose the fixed O ⊗ Z_p-decomposition of L ⊗ Z_p as before, with the roles of L^+ and L^− interchanged. Furthermore, the signature of G_2 at w ∈ Σ_p is now (a_w, b_w) = (b_w, a_w). Therefore, when working with G_2, we fix the partition of a_w to be the partition d_{w̄} of b_w chosen above.

Remark 2.6. We often refer to (V, ⟨·, ·⟩) simply by V and to (V, −⟨·, ·⟩) simply by −V. The objects associated to each of them sometimes have subscripts V or −V to emphasize the relevant PEL datum. This convention will be recalled several times throughout the article to avoid confusion, especially in Section 4.2.

2.3 Unitary Shimura varieties of level I_{P,r} at p.

The results of Sections 3 and 4 can be obtained while working only with moduli spaces associated to P over F, the reflex field of P = P_1. However, in Section 5, we use these results to compare "P-ordinary subspaces" (to be defined later) with p-integral spaces of modular forms. Therefore, in this section, we introduce the relevant spaces over F and over O_F ⊗ Z_(p) simultaneously, where O_F denotes the ring of integers of F.

Remark 2.7. In the p-integral case, we assume first that our level K is hyperspecial at p, so our treatment here follows [EHLS20, Section 2.2] and introduces the notions relevant to our situation. However, in Section 2.3.2, we introduce level structures at p that are more general than the ones considered in [EHLS20].

Let □ = {p} or ∅, and let S_□ denote O_F ⊗ Z_(p) if □ = {p} and F if □ = ∅. Then, one may define the moduli problem M_{K,□} = M_{K,□}(P) as the functor that assigns to any locally noetherian S_□-scheme T the set of equivalence classes of quadruples A = (A, λ, ι, α), where A is an abelian scheme over T (and λ, ι, α are, respectively, a polarization, an O-action, and a level structure of the usual PEL type); two quadruples (A, λ, ι, α) and (A′, λ′, ι′, α′) are equivalent if there exists a prime-to-p isogeny f : A → A′ compatible with the additional structures. If K is neat, then there exists a smooth, quasi-projective S_□-scheme that represents this moduli problem, which we still denote by M_{K,□}. One readily sees that M_{K,∅} is canonically isomorphic to the base change of M_{K,{p}} from O_F ⊗ Z_(p) to F. Therefore, when the base ring S_□ is clear from context, we simply write M_K for M_{K,□}.

Toroidal compactifications.
We recall the existence of toroidal compactifications of the moduli spaces above, constructed in [Lan13]. When □ = {p}, these generalize the known toroidal compactifications for □ = ∅. Note that these are associated to smooth projective polyhedral cone decompositions. Since the exact definition of the latter plays no role in this article, we do not introduce this notion precisely.

The only properties relevant for us are that, given such a polyhedral cone decomposition Ω, there exists a smooth toroidal compactification M^tor_{K,Ω} of M_K over S_□, for both □ = ∅ and {p}, and that there exists a partial ordering on the set of such Ω's by refinement. Given two polyhedral cone decompositions Ω and Ω′, if Ω′ refines Ω, then there is a canonical proper surjective map π_{Ω′,Ω} : M^tor_{K,Ω′} → M^tor_{K,Ω} which restricts to the identity on M_K. We denote the tower {M^tor_{K,Ω}} by M^tor_K. We often refer to the tower as if it were a single scheme and do not emphasize the specific compatible choices of Ω in some constructions. See [EHLS20, Section 2.4] for more details.

Compactified Shimura varieties of level K.

Over the reflex field F, the moduli space M_K(P) is the union of finitely many copies of the canonical model of the Shimura variety associated to (G, X_P), where X_P denotes the G(R)-conjugacy class of h; see [Kot92, Section 8] for details.

More precisely, let V^(1), ..., V^(k) be representatives for the isomorphism classes of all Hermitian vector spaces that are locally isomorphic to V at every place of Q. As explained in [CEF+16, Section 2.3.2], it is well known that there are finitely many such classes, naturally indexed by the V^(j). Assume that V^(1) = V and denote the scheme-theoretic closure of M_{K,V} in M_{K,□} by K Sh(V). Again, we often simplify the notation to K Sh(V) (or even K Sh) when the choice of □ = ∅ or {p} is clear from context. In particular, K Sh is a smooth, quasi-projective S_□-scheme. We refer to K Sh as a Shimura variety of level K (associated to P) and to M_K as a moduli space.

In what follows, we work with □ = {p}, hence K = G(Z_p)K^p as in the beginning of Section 2.3. We now introduce a more general level structure at p. To do so, we first need to introduce covers of M_K and M^tor_K.

Let A = (A, λ, ι, α) be the universal abelian scheme over M_K. Using [Lan13, Theorem 6.4.1.1], A can be extended to a semiabelian scheme over M^tor_K that is part of a degenerating family, and which we still denote A. By [Lan13, Theorem 3.4.3.2], there exists a dual semiabelian scheme A^∨, together with homomorphisms A → A^∨, O_F ⊗ Z_(p) → End_{M^tor_K}(A), and a K^(p)-level structure on A that extend λ, ι and α, respectively.

Define an O_F ⊗ Z_(p)-scheme M̄_{K_r} over M^tor_K whose S-points classify level structures at p of P-Iwahoric type: embeddings of group schemes into A[p^r] with image an isotropic subgroup scheme. Let M_{K_r} denote its pullback over M_K. We have a commutative diagram in which the vertical arrows are L_r-torsors, where L_r denotes L_P(Z_p/p^r Z_p). After base change to F, a choice of basis of Z_p(1) induces a canonical identification between M_{K_r/F} and the moduli space (M_{I_r K^p})_{/F}. Moreover, the normalization of (M^tor_K)_{/F} in (M_{K_r})_{/F} is M̄_{K_r/F}. In other words, given any open compact subgroup K^p ⊂ G(A^p_f), we may define K_r = I_r K^p, and there should be no confusion when working over S_□, for □ = {p} or ∅. We sometimes write K_{P,r} instead of K_r if we want to emphasize its dependence on P.
To define modular forms of level K_r, we only need to work with the components over K Sh. More precisely, for any polyhedral cone decomposition Ω, denote the scheme-theoretic closure of K Sh in M^tor_{K,Ω} by K Sh^tor_Ω. Again, we denote the tower {K Sh^tor_Ω}_Ω by K Sh^tor and describe our construction as if this tower were a single scheme. In particular, we have a canonical inclusion (of towers) s_K : K Sh^tor ֒→ M^tor_K in the obvious sense. Its restriction to K Sh is the natural inclusion K Sh ֒→ M_K described above, which we again denote by s_K.

As discussed in [Lan12, Sections 3-4] and [EHLS20, Section 2.4], this is a smooth toroidal compactification of K Sh. Furthermore, over F (i.e., when □ = ∅), it is equal to the usual toroidal compactification of the canonical model of the Shimura variety associated to (G, X_P).

Define K̄_r Sh (resp. K_r Sh) as the pullback of M̄_{K_r} (resp. M_{K_r}) via s_K, i.e., we have the analogous commutative diagrams. By abusing notation, we denote all four of the horizontal inclusions by s_K. All four vertical arrows are covers by L_r-torsors.

Complex uniformization.

We first recall the description of the natural complex structure on X = X_P. Let V_C = L ⊗ C with its pure Hodge decomposition V_C = V^{−1,0} ⊕ V^{0,−1} of weight −1, as in Section 2.1.1. Let W = V/V^{0,−1}, a space defined over the reflex field F of P. Fix an S_□-submodule Λ_0 of W such that Λ_0 ⊗_{S_□} C = W, and consider the S_□-module Λ := Λ_0 ⊕ Λ_0^∨, so that both Λ_0 and Λ_0^∨ are isotropic submodules of Λ. One has ⟨bx, y⟩_can = ⟨x, b̄y⟩_can for b ∈ O_F.

The pair (Λ, ⟨·, ·⟩_can) induces an S_□-group scheme G_0, defined for any S_□-algebra R by the same recipe as G above, with (Λ, ⟨·, ·⟩_can) in place of (L, ⟨·, ·⟩). One readily checks that there is an isomorphism over C matching Λ with L ⊗ C and the pairing ⟨·, ·⟩ with ⟨·, ·⟩_can. In other words, it yields an identification between G_{/C} and G_{0/C}. Let H_0 ⊂ G_0 be the stabilizer of the polarization Λ = Λ_0 ⊕ Λ_0^∨. The algebraic representations of H_0 will describe the cohomological weights of the automorphic representations considered below. Under the identification above, H_0(C) corresponds to C(C), where C is the real algebraic subgroup of G_{/R} whose real points U_∞ = C(R) form the stabilizer of h ∈ X under the conjugation action of G(R).

Let P_0 ⊂ G_0 be the parabolic subgroup defined as the stabilizer of Λ_0; its Levi factor is H_0. Then, the identification above embeds G(R)/U_∞ ≅ X as an open subspace of G_0(C)/P_0(C), which yields a complex structure on X. As discussed in [Kot92, Section 8], the complex analytic space K Sh(C) is naturally isomorphic to a finite union of quotients of X by arithmetic subgroups. Note that P_{0/C} corresponds to P_{h/C}, where P_h ⊂ G_{/R} is the stabilizer of the Hodge filtration on L ⊗ R determined by h, as explained in Section 2.1.1.

2.4 Automorphic vector bundles of a given weight at infinity and type at p.

The canonical bundles.

In this section, □ can be either ∅ or {p}. In both cases, let K = G(Z_p)K^p and, for any r ≥ 1, let K_r = I_r K^p. When □ = ∅, some of the definitions below can be adapted to any level structure at p, but these will not be pertinent to our work.

Let ω be the O_{M^tor_K}-dual of Lie_{M^tor_K} A^∨ over S_□. The Kottwitz determinant condition mentioned in the definition of the moduli problem M_K(P) implies that ω is locally isomorphic to a standard module determined by the signature. Let E_r denote the corresponding torsor of trivializations of ω, and denote the structure map E_r → M_{K_r} by π_r. Let τ be a smooth finite-dimensional representation of L_P(Z_p) that factors through L_r. Let M_τ denote the associated complex vector space. In fact, there exists a finite ring extension S_□[τ] of S_□ on which τ is well defined.
Define E_{r,τ} as the S_□[τ]-scheme over E_r whose R-points are pairs (ε, m), with ε ∈ E_r and m ∈ (M_τ)_{/R}, modulo the identification (εg, m) ∼ (ε, τ(g)m) for all g ∈ L_H(Z_p). Let π_{r,τ} be the structure map E_{r,τ} → M_{K_r}.

Weights of modular forms.

Let K′ be the Galois closure of K and p′ ⊂ O_{K′} be the prime above p determined by ι_p. Moreover, let S_0^□ denote K′ if □ = ∅ and O_{K′,(p′)} if □ = {p}. Over S_0^□, we have an isomorphism (10) identifying H_0 with G_m × ∏_{σ∈Σ_K} GL_{b_σ}. Let B_{H_0} ⊂ H_0 be the Borel subgroup (defined over S_0^□) that corresponds to the product of the lower-triangular Borel subgroups via the isomorphism (10). Let T_{H_0} ⊂ B_{H_0} denote its maximal subtorus and let B^u_{H_0} denote its unipotent radical. Given an S_0^□-algebra R, a character κ of T_{H_0} over R is identified via the isomorphism (10) with a tuple

κ = (κ_0, (κ_σ)_{σ∈Σ_K}),

where κ_0 ∈ Z and κ_σ = (κ_{σ,j}) ∈ Z^{b_σ}; namely, κ sends (t_0, (t_{σ,1}, ..., t_{σ,b_σ})_σ) to t_0^{κ_0} ∏_σ ∏_j t_{σ,j}^{κ_{σ,j}}. We say that κ is dominant if it is dominant with respect to the opposite Borel B^op_{H_0} (of upper-triangular matrices). This is equivalent to κ_{σ,j−1} ≥ κ_{σ,j} for all σ ∈ Σ_K and 2 ≤ j ≤ b_σ.

Given a dominant character κ of T_{H_0} over an S_0^□-algebra R, extend it trivially to B_{H_0}. Define

W_κ := {f ∈ O(H_0)_{/R} | f(bh) = κ(b) f(h) for all b ∈ B_{H_0}, h ∈ H_0},

with its natural structure of a left H_0-module via multiplication on the right. Since H_0 is the Levi factor of P_0, we inflate it to an irreducible algebraic representation of P_0.

As explained in [Jan03, Part II, Chapter 2] and [Hid04, Section 8.1.2], if R is flat over S_0^□, this is an R-model for the highest weight representation of H_0 with respect to (T_{H_0}, B^op_{H_0}) of weight κ. Now, assume that □ = ∅, and hence that R is a K′-algebra. Via the identification of P_0 and P_h over C, W_κ is a representation of P_h. As explained in [Har86, Section 7.1], it therefore corresponds to a homogeneous G-vector bundle over X̌, the compact dual of X. The latter induces an automorphic vector bundle ω_{W_κ} on K Sh, for any K as in Section 2.4.1. As explained in [EHLS20, Section 6.1.1], it has a canonical model over some finite field extension F(κ) of F contained in K′. Its base change to K′ has a canonical extension to the toroidal compactification K Sh^tor_Ω of K Sh, for any polyhedral cone decomposition Ω. Indeed, the restriction to K Sh of this canonical extension along s_{K,Ω}, the canonical inclusion K Sh^tor_Ω ֒→ M^tor_{K,Ω}, is canonically isomorphic to ω_{W_κ}. We denote both by ω_κ when no confusion arises.

Furthermore, the subcanonical bundle of ω_{W_κ} corresponds to the twist ω_κ(−D_Ω), where D_Ω is the boundary divisor, i.e., the Cartier divisor K Sh^tor_Ω − K Sh equipped with its structure of reduced closed subscheme, and the twist is by its ideal sheaf.

The space of modular forms (for G) of weight κ and level K is

M_κ(K; R) := lim_Ω H^0(K Sh^tor_Ω, ω_κ),

where the limit runs over all polyhedral cone decompositions Ω, partially ordered via refinements. Similarly, the space of cusp forms S_κ(K; R) is defined by replacing ω_κ with ω_κ(−D_Ω).

P-nebentypus.

In this chapter, we set □ = {p}, so let S_0 := S_0^{{p}} = O_{K′,(p′)}. Fix an S_0-algebra R ⊂ C. Observe that the objects from the section above are all well defined over S_0 if we restrict our attention to level subgroups K of the form K = G(Z_p)K^p or K = K_r = I_r K^p for some r ≥ 1. As in Section 2.4.1, let τ be a smooth finite-dimensional representation of L_P(Z_p) that factors through L_r = L_P(Z_p/p^r Z_p). Let M_τ denote the associated module over a finite ring extension of O_F ⊗ Z_(p) contained in C. Enlarging the latter if necessary, we assume that it contains S_0 and denote it S_0[τ]. Define ω_{κ,r,τ} as the pullback of (π_{r,τ})_* O_{E_{r,τ}}[κ], viewed as a sheaf over K̄_r Sh. We denote its restriction to K_r Sh by ω_{κ,r,τ} as well.
Definition 2.8. For any S 0 [τ ]-algebra R, a modular form over R on G of weight κ, level K r and P -nebentypus τ is a global section of ω κ,r,τ over Kr Sh. The R-module of all such forms is denoted M κ (K r , τ ; R). The R-module S κ (K r , τ ; R) of cuspidal forms over R on G of weight κ, level K r and P -nebentypus τ is similarly defined by replacing ω κ,r,τ with its twist by the ideal sheaf of the boundaries.

A modular form f ∈ M κ (K r , τ ; R) can be interpreted as a functorial rule that assigns to a tuple (A, ε, φ, …) a value … for all b ∈ B H 0 (R ′ ) and l ∈ L P (Z p ).

Remark 2.9. Classically, the nebentypus of a modular form is a finite-order character of the maximal torus T H (Z p ) of H. In our terminology, this is equivalent to a B-nebentypus.

One similarly defines ω κ,r as the pullback to Kr Sh of (π r ) * O Er [κ] and ω sub κ,r as its twist by the ideal sheaf of the boundaries. Define … . Since L P (Z p ) is a compact group, one readily sees that … , where the direct sum runs over all smooth irreducible representations over R of L P (Z p ) that factor through L r .

Let G = G 1 = GU(V ) be the unitary group (over Z) associated to the PEL datum P = P 1 . Recall that its signature is a collection of pairs of integers {(a σ , b σ )} σ∈Σ K . Let = ∅ or {p} and fix a neat open compact subgroup K as in Section 2.3. The dimension of K Sh(V ) is equal to the C-dimension of X P , namely … . For any i = 0, . . ., d, we write … and define … as modules over S 0 .

Comparison to (P … ). In this section, we recall some of the results of [EHLS20, Section 6.2] that are relevant for us later, especially in Section 5. We use the identification of P 0 (resp. H 0 ) and P h (resp. C) over C without comment. Therefore, we repeatedly identify modules equipped with actions of these groups (or of their Lie algebras). Moreover, we write K h instead of U ∞ for the real points of C.

Let g = Lie(G(R)) C . The adjoint action Ad(h( √ −1)) induces the Harish-Chandra decomposition g = p − h ⊕ k h ⊕ p + h . The Lie algebra of P h (C) is P h = p − h ⊕ k h . Therefore, for any dominant weight κ of T H 0 , the highest weight representation W κ has a natural structure of (P h , K h )-module. Over C, there is a canonical isomorphism … of G(A f )-modules, where A 0 (G) is the space of cusp forms on G. For any φ ∈ A 0 (G), let φ̄ denote its complex conjugate. As explained in [EHLS20, Section 6.2.1], the map … . Here, κ D is again a dominant weight of T H 0 (depending on κ and the signature of G at archimedean places) defined in [EHLS20, Section 6.1.3], but whose exact formula is not relevant for us.

Let π = π ∞ ⊗ π f be an irreducible (g, K h ) × G(A f )-subrepresentation of A 0 (G). From now on, we refer to such an object as a cuspidal automorphic representation (without mentioning its irreducibility).

Definition 2.10. Let π and κ be as above and let K be any open compact subgroup of G(A f ). We say that π is holomorphic of weight type (κ, K) if … . On the other hand, we say that π is anti-holomorphic of weight type (κ, K) if … .

Remark 2.11. As explained in [BHR94], if π is holomorphic or anti-holomorphic, then π f is defined over some number field E(π). Enlarging it if necessary, we always assume it contains K ′ . Let π̄ be the image of π via the c-semilinear map φ → φ̄ on A 0 (G). The isomorphism c B induces an involution π → π̄ on the set of cuspidal automorphic representations of G. By definition, it interchanges holomorphic and anti-holomorphic representations but preserves weight type.
As explained in [EHLS20, Section 6.5.3], if π has weight type (κ, K), there is an isomorphism … , where ν is the similitude character on G and … .

In the next sections, we consider certain (anti-)holomorphic cuspidal automorphic representations π of weight type (κ, K) whose local factor at p has a non-zero I P,r -fixed vector for some r ≫ 0. In that case, π is of weight type (κ, K P,r ) for all r ≫ 0. If the representation satisfies further conditions with respect to certain Hecke operators at p, we say that such a π is P -ordinary or P -anti-ordinary. We compare the structures of P -ordinary and P -anti-ordinary representations using pairs of contragredient representations. Therefore, the involution π → π ♭ is more convenient than π → π̄ to analyze these dual notions.

3 Structure theorem for P -ordinary representations.

In this section, we finally introduce the notion of "P -ordinary" holomorphic automorphic representations on G = G 1 . The main results are Theorems 3.10 and 3.12. We obtain direct consequences for the dual notion of P -anti-ordinary vectors in the next section. Furthermore, all statements can be adapted to G 2 , the opposite group of G 1 introduced in Section 2.2.3. We study the theory on G 2 more carefully in Section 4.2.1.

P -ordinary representations. Given w ∈ Σ p and 1 ≤ j ≤ n, let t w,j ∈ GL n (O w ) denote the diagonal matrix … . It corresponds to an element of G(Q p ) under (3), which we denote t + w,j (namely, all its other components are equal to 1). Set … .

We normalize these operators as follows. Fix an S 0 -algebra R ⊂ C as in Section 2.4.3. Given a character κ = (κ 0 , (κ σ ) σ∈Σ K ) of T H 0 over R, let κ p be the character of … , where t = (diag(t w,1 , . . ., t w,aw )) w|p via (5). We also define the T H 0 -character κ norm = (κ 0 , (κ norm,σ ) σ∈Σ K ), where … . Let κ ′ = (κ norm ) p , viewed as a character of T P (Z p ). Then, the j-th normalized Hecke operator at p of weight κ is defined as u w,Dw(j),κ … .

Definition 3.1. The P -ordinary projector of weight κ is defined as e P = e P,κ := lim m→∞ … (see the sketch following Remark 3.3 below for the standard shape of these definitions).

These operators can be interpreted as correspondences on the Igusa tower associated to G (see [EHLS20, Section 2.9.5], [Hid04, Section 8.3.1] or [SU02]), but this point of view will not be relevant for us in this article.

For w ∈ Σ p , recall that we fixed partitions … . Let π = π ∞ ⊗ π f be a holomorphic cuspidal automorphic representation of weight type (κ, K r ) for some r ≥ 0. The double coset operator U w,Dw(j) acts on π Kr f via the action of G(A f ) on π f . In fact, writing … , it acts as the double coset operator U GL w,Dw(j),κ := I P,r t w,Dw(j) I P,r on π Ir p . It is well known that the generalized eigenvalues of u w,j,κ are p-adically integral. Therefore, the P -ordinary projector e P is well-defined as an operator on π Kr f and π Ir p .

Definition 3.2. We say that π is P -ordinary (at p) of level r ≥ 0 if its local factor π p contains a non-zero vector φ fixed by I r = I P,r such that e P φ = φ. The space π P −ord p,r = e P π I P,r p is called the P -ordinary subspace of π p (or of π) of level r. We say that its elements are the P -ordinary vectors of π p of level r. If π p is P -ordinary of some level r, then it is P -ordinary of all levels r ≫ 0. In particular, π has weight type (κ, K r ) for all r ≫ 0.

Remark 3.3. When P = B, a result of Hida (see [Hid98, Corollary 8.3] or [EHLS20, Theorem 6.6.9]) implies that the space of B-ordinary vectors (or simply ordinary vectors) is at most 1-dimensional and does not depend on r. This is no longer true for general parabolic subgroups P . However, Theorem 3.12 yields an analogous result for P -ordinary subspaces.
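Several of the displayed formulas above did not survive extraction. As a non-authoritative sketch, the following records the standard Hida-theoretic shape these objects take; the exact entries of t w,j , the character κ ′ , and the precise normalization are assumptions here and should be taken from [EHLS20]:

\[
t_{w,j} \;=\; \operatorname{diag}(\underbrace{\varpi_w,\dots,\varpi_w}_{j\ \text{copies}},\,1,\dots,1),
\qquad
u_{w,j,\kappa} \;=\; |\kappa'(t_{w,j})|_p^{-1}\,[\,I_{P,r}\,t^{+}_{w,j}\,I_{P,r}\,],
\]
\[
u_{P,p,\kappa} \;=\; \prod_{w\in\Sigma_p}\;\prod_{j=1}^{r_w} u_{w,D_w(j),\kappa},
\qquad
e_{P,\kappa} \;=\; \lim_{m\to\infty}\, u_{P,p,\kappa}^{\,m!}.
\]

The limit makes sense and is idempotent precisely because the generalized eigenvalues of these operators are p-adically integral, as noted above.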
Clearly, φ ∈ π p is P -ordinary if and only if φ ∈ π Ir p for all r ≫ 0 and φ is a simultaneous eigenvector for all the operators u w,Dw(j) , with each eigenvalue a p-adic unit.

Since I r is normal in I 0 r , the space π Ir p is stable under the action of I 0 r /I r ∼ = L r . Let τ be an irreducible finite-dimensional smooth representation of L P (Z p ) that factors through L r . If a P -ordinary vector φ ∈ π Ir p lies in the τ -isotypic component of π Ir p , we say that φ is (P, τ )-ordinary or that it is P -ordinary of type τ . Let π (P,τ ) p,r denote the subspace consisting of all (P, τ )-ordinary vectors. One readily sees that any P -ordinary vector is a finite sum of (P, τ )-ordinary vectors for finitely many different representations τ as above. In particular, … , as τ runs over all irreducible smooth representations of L P (Z p ) that factor through L r .

Remark 3.4. In Definition 2.3, one could replace I P,r with the collection of g ∈ G(Z p ) such that g mod p r is in (Z p /p r Z p ) × × SP (Z p /p r Z p ). Here, SP is the derived subgroup of P or, equivalently, it is the product in P over w ∈ Σ p of the subgroups SP w ⊂ P w consisting of upper-block-triangular matrices whose diagonal blocks all have determinant 1. Let us momentarily write the corresponding group as I SP,r , in which case we have I P,r ⊂ I SP,r ⊂ I 0 P,r . Then, one can define P -ordinary representations of G using I SP,r instead of I P,r . By doing so, the space of P -ordinary vectors decomposes as a direct sum over all P -nebentypus τ that factor through det : L P (Z p ) → Z × p . Doing so is obviously less general but has the advantage of simplifying the theory, as only characters of L P (Z p ) occur as types of P -ordinary vectors. On the other hand, systematically developing the more general theory (with P u instead of SP ) has the advantage that any holomorphic cuspidal representation π of G is trivially GL(n)-ordinary. We discussed our motivation to study this more general notion in the introduction of this paper.

Local factors at places w | p. The identifications (3) and (4) induce the isomorphism … , where G w = GL n (K w ). Consider the groups I w,r , I 0 w,r , P w ⊂ G w constructed in Section 2.2.2. Recall that the decompositions (3) and (4) yield identifications … .

Let π be a holomorphic cuspidal automorphic representation of G(A) of type (κ, K r ). Recall that the character κ of T H 0 is identified with a tuple (κ 0 , (κ σ ) σ∈Σ K ) such that κ 0 ∈ Z and κ σ ∈ Z bσ . The above discussion allows one to factor the p-component π p of π as … , where µ p is a character of Q × p and π w is an irreducible admissible representation of G w . Let u GL w,Dw(j),κ := |κ ′ (t w,j )| −1 p U GL w,Dw(j),κ , where κ ′ is related to κ as in equation (13). Then, the Hecke operators u w,Dw(j),κ from Section 3.1 act on π Ir p via the action of u GL w,Dw(j),κ on π Iw,r w . Again, this action is compatible as r increases, hence we do not include it in the notation of the operator, and the generalized eigenvalues of u GL w,Dw(j),κ are all p-adically integral.

For the remainder of Section 3, we assume that π is P -ordinary and that κ σ,bσ + κ σc,aσ ≥ n, ∀σ ∈ Σ K . (16)

The fact that π is P -ordinary is equivalent to µ p being unramified and that, for each w ∈ Σ p and r ≫ 0, there exists some non-zero φ ∈ π Iw,r w such that … , where c w,Dw(j) is a p-adic unit, for all 1 ≤ j ≤ r w .
In that case, we say that π w is P w -ordinary and that such a vector φ is P w -ordinary (of level r). We denote the subspace of all P w -ordinary vectors by π Pw−ord w . Note that φ ∈ π Ir w is P w -ordinary if and only if e w φ = φ, where e w is the P w -ordinary projector … , which has a well-defined action on π Iw,r w .

Explicit computations. To clarify arguments in later proofs, we now describe explicit left coset representatives for U GL w,Dw(j) . For simplicity, we only compute the left coset representatives when j ≤ t w . The same conclusion applies for j > t w , but writing down the matrices is simply more cumbersome. In any case, fix j ≤ t w and write i = D w (j) (making the dependence on j implicit).

Fix a uniformizer ̟ ∈ p w . Given any matrix X ∈ I w,r , write it as … . Let B ′ , B ′′ be the unique matrices such that B ′ has entries in S w and BD −1 = B ′ + pB ′′ . Then, we have … . In particular, t −1 w,i X ′′ t w,i is in I w,r . Therefore, … , where M j ⊂ GL n (K w ) is the subset of matrices ( 1 i B ; 0 1 n−i ) such that the entries of B are in … . In particular, this set of representatives does not depend on r, and one obtains the same result by replacing I w,r with N w = ∩ r I w,r = P u w (K w ) ∩ GL n (O w ). One readily convinces oneself that the calculations above still apply for t w < j ≤ r w .

Lemma 3.5. Let V w be the K w -vector space associated to π w . By continuity, its N w -invariant subspace decomposes as V Nw w = V Nw w,inv ⊕ V Nw w,nil , where u GL w,Dw(j) is invertible on V Nw w,inv and nilpotent on V Nw w,nil . Moreover, U GL w,Dw(j) = I w,r t w,Dw(j) I w,r acts as δ Pw (t Dw(j) ) −1 t Dw(j) on V Nw w,inv .

Proof. We keep writing i = D w (j) in this proof and omit the subscript w in what follows. The first part is a consequence of the explanations in [Hid98, Section 5.2]. Moreover, [Hid98, Proposition 5.1] shows that the natural projection from V to its P -Jacquet module V P induces an isomorphism V N inv ∼ = V P that is equivariant for the action of all the U GL i operators. From our explicit computations above, it is clear that U GL i acts on V P via |M j |t i , where |M j | is the cardinality of M j . To see this, simply note that given any x ∈ M j , t −1 i xt i ∈ P u w (K w ) fixes V P . Therefore, the result follows since M j contains exactly |p| … elements.

It is clear from Lemma 3.5 that any P w -ordinary vector φ ∈ V Nw w lies in V Nw w,inv and … , where c w,Dw(j) is its u GL w,Dw(j) -eigenvalue (a p-adic unit). In particular, φ is a simultaneous eigenvector under the action of π w for all matrices t w,Dw(j) .

Bernstein-Zelevinsky geometric lemma for P w -ordinary representations. In Section 3.3, we obtain results about the structure of the P w -ordinary subspace of π w via its relation to its P w -Jacquet module; see the proof of Lemma 3.5. To understand further the P w -Jacquet module of π w , we use a version of the Bernstein-Zelevinsky geometric lemma (see [Ren10, Section VI.5.1] or [Cas95, Theorem 6.3.5]) that is adapted to our setting; see Lemma 3.7. However, we first need to introduce some notation.

Lemma 3.6. Let π w be a P w -ordinary representation of G w . There exists a parabolic subgroup Q w ⊂ P w of G w and a supercuspidal representation σ w of the Levi factor of Q w such that π w ⊂ ι Gw Qw σ w .

Proof. The following is a minor modification of the proof of Jacquet's theorem [Cas95, Theorem 5.1.2]. Moreover, we omit the subscript w to lighten the notation. The fact that π is P -ordinary implies that r G P π ≠ 0. By [Cas95, Theorem 3.3.1], the latter is both admissible and finitely generated, so it admits an irreducible admissible quotient τ as a representation of L.
By Frobenius reciprocity [Cas95, Theorem 2.4.1] and the irreducibility of π, it follows that π ⊂ ι G P τ . Then, it is a theorem of Jacquet [Cas95, Theorem 5.1.2] that there exists a parabolic Q L ⊂ L and a supercuspidal representation σ of its Levi factor such that τ ⊂ ι L Q L σ. By transitivity of parabolic induction, the result follows.

Fix an embedding π w ֒→ ι Gw Qw σ w with the notation as in Lemma 3.6. Let M w and Q u w denote the Levi factor and unipotent radical of Q w . Moreover, let B w denote the Borel subgroup of G w corresponding to the trivial partitions, as in Remark 2.2. Let T w denote the Levi factor of B w . In particular, T w is the maximal torus of G w .

Let W be the Weyl group of G w with respect to (B w , T w ) and consider … . According to [Ren10, Section V.4.7], for each x ∈ W (P w , Q w ), xP w x −1 ∩ M w is a parabolic subgroup of M w with Levi factor equal to xL w x −1 ∩ M w . Similarly, the Levi factor of the parabolic subgroup … . Denote by σ → σ x the natural conjugation-by-x functor that sends a representation of … . The following is a version of [Cas95, Theorem 6.3.5] that is adapted to our setting and notation.

Lemma 3.7. Let Q w ⊂ P w denote standard parabolic subgroups of G w as above and let σ w be an irreducible supercuspidal representation of M w . There exists a filtration, indexed by W (L w , M w ), of the L w -representation r Gw Pw ι Gw Qw σ w such that the subquotient corresponding to x ∈ W (L w , M w ) is isomorphic to ι Lw Lw∩x −1 Qwx σ x w and the one corresponding to x = 1 is a subrepresentation.

Proof. In this proof, we drop the subscript w to lighten the notation. The Bernstein-Zelevinsky geometric lemma (see [Ren10, Section VI.5.1]) states that there exists a filtration of r G P ι G Q σ such that the corresponding graded pieces are isomorphic to … as x runs over all elements of W (P, Q). Moreover, one can order the filtration so that the factor corresponding to σ (i.e. the graded piece corresponding to x = 1) is a subrepresentation of r G P ι G Q σ. Since σ is supercuspidal, the graded piece corresponding to x ∈ W (P, Q) is nonzero if and only if xLx −1 ∩ M = M, i.e. x ∈ W (L, M). For such an x, the graded piece is clearly isomorphic to ι L L∩x −1 Qx σ x .

Main Theorems. For simplicity, we assume that π p satisfies the following hypothesis:

HYPOTHESIS 3.8. The parabolic subgroup Q w for π w from Lemma 3.6 is equal to P w for all w ∈ Σ p . In particular, σ w is a supercuspidal representation of L w .

Remark 3.9. This hypothesis is certainly restrictive in our context. For instance, if π p is B-ordinary, then Lemma 3.6 implies that all local factors π w lie in a principal series. Furthermore, if π p is B-ordinary (i.e. ordinary in the usual sense), then it follows immediately from our definitions that it is also P -ordinary. Therefore, the case Q w ≠ P w can certainly occur.

One can argue that this is not a major issue since, in the situation above, if π p is B-ordinary then there is little interest in considering its structure as a P -ordinary representation; one only obtains less information this way. However, if π p is a general P -ordinary representation whose local factors π w lie in a principal series, it is not necessarily true that π p is also B-ordinary. In general, if π p is P -ordinary and the supercuspidal support of all π w is Q w , then π p might not be Q-ordinary, where Q = ∏ w Q w . Therefore, the hypothesis above restricts us to studying certain P -ordinary representations that are not Q-ordinary with respect to any smaller parabolic B ⊂ Q ⊊ P .
In subsequent work, the author plans to generalize the theory and results below to any parabolic subgroup Q w ⊂ P w , using the theory of covers of types developed in [BK98, BK99] and typical representations as in [Lat21].

Theorem 3.10. Let π be a P -ordinary representation as above such that its weight κ satisfies Inequality (16). Let π w ⊂ ι Gw Pw σ w be its component at w ∈ Σ p as above, a P w -ordinary representation.

(i) For r ≫ 0, let φ, φ ′ ∈ π Ir w be P w -ordinary vectors. Let ϕ and ϕ ′ be their respective images in ι Gw Pw σ w . If φ ≠ φ ′ , then ϕ(1) ≠ ϕ ′ (1).

(ii) For r ≫ 0, let φ ∈ π Ir w be a simultaneous eigenvector for the u w,Dw(j) -operators that is not P w -ordinary. Let ϕ be its image in ι Gw Pw σ w . Then, ϕ(1) = 0.

(iii) Let τ w be a smooth irreducible representation of L w (O w ). Assume there exists an embedding τ w ֒→ σ w over L w (O w ). Let X w be the vector space associated to τ w , viewed as a subspace of the one associated to σ w . Then, given α ∈ X w , there exists some r ≫ 0 such that τ w factors through L w (O w /p r w O w ) and some (necessarily unique) P w -ordinary φ r,α ∈ π Ir w such that ϕ r,α (1) = α, where ϕ r,α is the image of φ r,α in ι Gw Pw σ w . Furthermore, the support of ϕ r,α contains P w I w,r . The map α → φ r,α yields an isomorphism of L w (O w )-representations … .

Proof. This proof is inspired by that of [EHLS20, Lemma 8.3.2], which is itself inspired by arguments in [Hid98, Section 5]. By abuse of notation, we will always write L when … .

For part (ii), pick a simultaneous eigenvector v ∈ V N inv for the u GL D(j) -operators that is not P -ordinary. Then, as above, the map i • s P : … . By equivariance of the action of the u GL D(j) -operators on both sides, we must have f v (1) = 0.

To show part (iii), consider α as an element of the vector space associated to σ, which is also the one associated to σδ … . In particular, φ ∈ π Ir for some r ≫ 0. We may assume that r is sufficiently large so that τ factors through L(O/p r O). Finally, since pr P is equivariant under the action of the u GL D(j) -operators and these act on pr P (φ) = α via multiplication by the p-adic unit β(s j ), one concludes that φ is P -ordinary. Proceeding as in the proof of part (i), we obtain ϕ(1) = pr P φ = α, where ϕ ∈ ι G P σ is the function corresponding to φ. Therefore, φ r,α := φ is the desired vector, necessarily unique by part (i). The last statement holds because s P is L(O w )-equivariant.

Remark 3.11. As a consequence of the proof of part (i) above, we see that π w is P w -ordinary (of level r ≫ 0) if and only if … is a p-adic unit for all s ∈ Z(L w (K w )). In other words, not every supercuspidal representation σ w can occur. Furthermore, when π w is P w -ordinary (of level r ≫ 0), the u GL w,Dw(j),κ -eigenvalue of all the P w -ordinary vectors is β(t w,Dw(j) ).

Consider the BK-type (L w (O w ), τ w ) of the supercuspidal representation σ w , as defined in Section 1.2.2. Let τ be the representation of L P (Z p ) corresponding to ⊗ w∈Σp τ w under the natural isomorphism L P = ∏ w∈Σp L w induced by the identification (14). We refer to τ as the BK-type of π p .

Theorem 3.12. Let π be a holomorphic P -ordinary representation of weight type (κ, K) such that Inequality (16) holds. Let τ be the BK-type of π p . Then, … is 1-dimensional for all r ≫ 0. In other words, the space π (P,τ ) p,r of P -ordinary vectors of type τ is independent of r ≫ 0 and dim π (P,τ ) p = dim τ .

Proof. Fix w ∈ Σ p and consider π Pw−ord w,r = e w π Iw,r w .
By Theorem 3.10 (iii), there is a natural isomorphism … , where τ ′ w is any smooth irreducible representation of L w (O w ). From [BK98, Proposition 5.6], we know that the BK-type τ w of π w has multiplicity one in σ w . Therefore, the result follows by applying the above to τ ′ w = τ w .

4 P -anti-ordinary representations and opposite unitary groups.

In this section, we first define the dual notion of P -anti-ordinary representations and analyze the structure of P -anti-ordinary subspaces using our results above. Then, we again follow the material of [EHLS20, Section 6.2] to compare the P -(anti-)ordinary representations on G = G 1 and its opposite unitary group G 2 . The results on G 1 directly apply to G 2 simply by replacing P with its opposite parabolic P op . However, using standard intertwining operators, one obtains results with respect to P once more. Our results are greatly inspired by [EHLS20, Sections 8.3-8.4]. Moreover, as explained in the introduction of this paper, our motivation is to use the results here for explicit calculations of zeta integrals in upcoming work of the author.

P -anti-ordinary representations. Let π be an anti-holomorphic cuspidal representation on G of weight type (κ, K r ). For each w ∈ Σ p and 1 ≤ j ≤ n, let t − w,j = t −1 w,j ∈ G(Q p ), where t w,j is the element constructed in Section 3.1. Proceeding as in that section, we define U − w,j , u − w,j,κ , u − P,p,κ and e − P,κ by replacing t + w,j by t − w,j in the definitions of U w,j , u w,j,κ , u P,p,κ and e P,κ . We also consider the partition d w of n of length r w as well as its partial sums D w (j) for 1 ≤ j ≤ r w .

As in Section 3.1, the generalized eigenvalues of the action of u − w,Dw(j),κ on π Kr f are all p-adically integral. Therefore, the P -anti-ordinary projector e − P,κ has a well-defined action on π Kr f . We say that π is P -anti-ordinary (of level r) if e − P,κ (π Kr f ) ≠ 0.

Remark 4.1. Note that the action of U − w,j (and therefore of all the other operators as well) does depend on r. However, by abuse of notation, we do not include r in the already long list of subscripts.

By definition, for each w ∈ Σ p , 1 ≤ j ≤ r w and r ≥ 0, the operator U − w,j acts on π Kr f via its action on π Ir p . Furthermore, by writing π p ∼ = µ p ⊗ w∈Σp π w using isomorphism (15), its action on π Ir p = ⊗ w∈Σp π Iw,r w is induced by the action of the double coset operator U GL,− w,Dw(j) = I w,r t − w,Dw(j) I w,r on π Iw,r w . Let u GL,− w,Dw(j) = |κ ′ (t w,j )| p U GL,− w,Dw(j) , where κ ′ is related to κ as in equation (13), and e − w = lim m→∞ … . It follows from the discussion above that the generalized eigenvalues of u GL,− w,Dw(j) are all p-adically integral and e − w defines a projector on π Iw,r w . One readily sees that π is P -anti-ordinary (at p) of level r if µ p is unramified and each π w is P w -anti-ordinary of level r, in the sense that e − w π Iw,r w ≠ 0.

Lemma 4.2. Let π w be as above. Then, the representation π w is P w -anti-ordinary of some level r ≥ 0 if and only if its contragredient π ∨ w is P w -ordinary of level r. In that case, π w is P w -anti-ordinary of all levels r ≫ 0.

Proof. This is a simple generalization of [EHLS20, Lemma 8.3.6 (i)]. The proof goes through verbatim by replacing the pro-p Iwahori subgroup (also denoted I w,r ) by I Pw,w,r and only considering the Hecke operators u GL,− w,Dw(j) and u GL w,Dw(j) , for 1 ≤ j ≤ r w . The key point is that all these operators commute with one another.

Conventions on contragredient pairings. In what follows, given any representation ρ, we denote its contragredient representation by ρ ∨ . For instance, let σ w be an admissible irreducible supercuspidal representation of L w (K w ) and σ ∨ w be its contragredient, also an admissible irreducible supercuspidal representation of L w (K w ).
Let ⟨•, •⟩ σw : σ w × σ ∨ w → C be the tautological pairing on a pair of contragredient representations. Define … , a perfect G w (K w )-equivariant pairing. Here dk is the Haar measure on G w (O w ) such that vol(G w (O w )) = 1. Then ⟨•, •⟩ w naturally identifies ι Gw Pw σ ∨ w as the contragredient of ι Gw Pw σ w . Let π w be the constituent at w ∈ Σ p of π p as above. From now on, we assume π w is the unique irreducible quotient ι Gw Pw σ w ։ π w . Equivalently, π ∨ w is the unique irreducible subrepresentation π ∨ w ֒→ ι Gw Pw σ ∨ w ; see Remark 3.9. If one restricts the second argument of ⟨•, •⟩ w to π ∨ w , then the first argument factors through π w . In other words, ⟨•, •⟩ w induces the tautological pairing ⟨•, •⟩ πw : … , where ϕ is any lift of φ and ϕ ∨ is the image of φ ∨ .

Let (τ w , X w ) be the BK-type of σ w , a representation of L w (O w ). Then, its contragredient (τ ∨ w , X ∨ w ) is the BK-type of σ ∨ w . One can find L w (O w )-embeddings τ w ֒→ σ w and τ ∨ w ֒→ σ ∨ w (both unique up to scalar) such that for all α ∈ X w , α ∨ ∈ X ∨ w , ⟨α, α ∨ ⟩ σw = ⟨α, α ∨ ⟩ τw . More generally, upon restriction of σ w and σ ∨ w to representations of L w (O w ), there are direct sum decompositions σ w = ⊕ τw σ w [τ w ] and σ ∨ w = ⊕ τw σ ∨ w [τ w ], where τ w runs over all smooth irreducible representations of L w (O w ) and the square brackets [•] denote isotypic subspaces. The restriction of ⟨•, •⟩ σw to σ w [τ w ] × σ ∨ w [τ ′ w ] is identically zero if τ ′ w ≇ τ ∨ w . On the other hand, its restriction to σ w [τ w ] × σ ∨ w [τ ∨ w ] is a perfect L w (O w )-invariant pairing.

4.1.2 Structure theorem for P -anti-ordinary representations. Since π Iw,r w is stable under the action of I 0 w,r /I w,r ∼ = L w (O w /p r w O w ), it decomposes as a direct sum of isotypic subspaces over all irreducible representations of L w (O w /p r w O w ). Given such a representation τ w , we say that φ ∈ π Iw,r w is P w -anti-ordinary of type τ w if it is P w -anti-ordinary and it lies in the isotypic subspace π Iw,r w [τ w ].

Theorem 4.3. Let w ∈ Σ p and π w be a constituent of π as above. Assume that the weight κ of π satisfies Inequality (16). Assume that π w is P w -anti-ordinary of level r ≫ 0 and is the unique irreducible quotient ι Gw Pw σ w ։ π w as above. Assume that the BK-type (τ w , X w ) of π w factors through L w (O w /p r w O w ). Given any α ∈ X w , let ϕ Pw−a.ord w,r ∈ ι Gw Pw σ w be the unique vector with support P w I w,r such that ϕ Pw−a.ord w,r … .

(i) … In particular, ⟨φ Pw−a.ord w,r , φ ∨ ⟩ πw ≠ 0 if and only if φ ∨ is P w -ordinary and the component of … .

(ii) The vector φ Pw−a.ord w,r lies in the τ w -isotypic space of π Iw,r w . Moreover, any other P w -anti-ordinary vector of type τ w is obtained as above for some other choice of α ′ ∈ X w .

(iii) One can pick different choices of α for each r ′ ≥ r so that … , respectively.

Proof. We first show that property (i) holds. By Lemma 4.2, π ∨ w is P w -ordinary of level r. Write … , where each V a is a simultaneous generalized eigenspace for the Hecke operators u GL w,Dw(j) . From the proof of Theorem 3.10 and the remark that follows, exactly one V a has generalized eigenvalues that are all p-adic units. We may assume that this holds for V 1 . The exact eigenvalue of u GL w,Dw(j) is given by Equation (19); denote it β w,Dw(j) . For 1 < a ≤ A, at least one generalized eigenvalue for V a is not a p-adic unit.

Given φ ∨ ∈ π ∨,Iw,r w , write it as a sum … . Recall that the support of ϕ w,r is P w I w,r . Also, the intersection of P w I w,r with G w (O w ) is equal to I 0 w,r and, by Theorem 3.10 (ii), ϕ ∨ a (I 0 w,r ) = 0 for all a ≠ 1. Therefore, … . Since I 0 w,r = L w (O w )I w,r and ϕ Pw−a.ord w,r , ϕ ∨ 1 are both fixed by I w,r , one obtains … . The desired relation holds by noting that ϕ ∨ 1 (1) = ϕ ∨ (1). The second part of (i) follows immediately from the discussion about isotypic subspaces at the end of Section 4.1.1.

As a consequence of property (i), we immediately obtain ⟨φ w,r , V a ⟩ πw = 0 for all a > 1. Furthermore, for all φ ∨ ∈ V 1 , we have ⟨u GL,− w,Dw(j) φ w,r , φ ∨ ⟩ πw = ⟨φ w,r , u GL w,Dw(j) φ ∨ ⟩ πw = β w,Dw(j) ⟨φ w,r , φ ∨ ⟩ πw . By combining these two facts, we obtain … for all φ ∨ in π ∨,Iw,r w .
In other words, φ w,r is P w -anti-ordinary. Furthermore, note that the argument above implies that the subspace of P w -anti-ordinary vectors of type τ w in π Iw,r w is dual to the subspace of P w -ordinary vectors of type τ ∨ w . From Theorem 3.10, they both have dimension dim τ w = dim τ ∨ w . Since the space generated by the action of L w (O w /p r w O w ) on φ w,r is of dimension dim τ w and consists of P w -anti-ordinary vectors of type τ w , it must be the whole subspace of such vectors. Given l ∈ L w (O w /p r w O w ), one readily sees that π w (l)φ w,r is the P w -anti-ordinary vector obtained by picking α ′ = τ w (l)α in X w instead of α. This proves the second sentence of part (ii). Finally, part (iii) and the first statement of part (ii) follow immediately from the fact that the analogous properties hold for ϕ w,r .

Furthermore, by choosing the same partitions d w introduced in Section 2.2.2, the parabolic subgroup P w ⊂ GL n (O w ) for G 1 corresponding to w ∈ Σ p is replaced, when working with G 2 , by the opposite parabolic subgroup, which in our case is simply its transpose t P w ⊂ GL n (O w ). Similarly, P is replaced by t P and the (pro-p) P -Iwahori subgroup of level r is replaced by the (pro-p) t P -Iwahori subgroup of level r. In particular, if π p ∼ = µ p ⊗ w∈Σp π w is the identification obtained from (14) for G 1 , the corresponding factorization on G 2 induces … .

4.2.2 Holomorphic and t P -ordinary representations for G 2 . We keep the notation of Section 4.2.1. The discussion above shows that π w is P w -ordinary of level r ≫ 0 if and only if π ♭ w is t P w -ordinary of level r ≫ 0. Note that, adapting our definitions in Section 3.1 from G 1 to G 2 , the latter notion requires changing P w to t P w and the double coset operators U GL w,j to U ♭,GL w,j = t I w,r t −1 w,j t I w,r . We assume that π w is P w -ordinary of level r ≫ 0, that π w is the unique irreducible subrepresentation of ι Gw Pw σ w for some admissible irreducible supercuspidal representation σ w , and that κ satisfies Inequality (16). The analogue of Theorem 3.10 is the following.

Lemma 4.5. Using the notation above, let (τ w , X w ) be the BK-type of π w .

(i) The unique irreducible quotient of ι Gw Pw σ ∨ w is isomorphic to π ♭ w .

(ii) Let (τ ∨ w , X ∨ w ) be the contragredient of (τ w , X w ), the BK-type of σ ∨ w . Consider X ∨ w as a subspace of the vector space associated to σ ∨ w , via a fixed embedding (unique up to scalar) τ ∨ w ֒→ σ ∨ w . For any α ∨ ∈ X ∨ w , let ϕ ♭ w ∈ ι Gw Pw σ ∨ w be the unique function with support P w t I w,r (for all r ≫ 0) such that ϕ ♭ w (1) = α ∨ and ϕ ♭ w is fixed by t I w,r (for all r ≫ 0). Let φ ♭ w denote its image in π ♭ w . Then, φ ♭ w is t P w -ordinary of type τ ∨ w of level r ≫ 0. This induces a natural isomorphism between τ ∨ w and the subspace of t P w -ordinary vectors of type τ ∨ w of level r ≫ 0. In particular, the latter is independent of r ≫ 0 and has dimension dim τ ∨ w = dim τ w .
Proof. Consider the composition of π w ֒→ ι Gw Pw σ w with the map (of vector spaces) … . Its image is π ♭ w = π ∨ w and realizes π ♭ w as the unique irreducible subrepresentation of ι Gw t P w σ ∨ w . In particular, all the consequences of Theorem 3.10 hold for π ♭ w by replacing P w by t P w and σ w by σ ∨ w . Given α ∨ ∈ X ∨ w as above, let φ ∨ w ∈ π ♭,Iw,r w and ϕ ∨ w ∈ ι Gw t P w σ ∨ w be the vectors obtained from Theorem 3.10 (iii) associated to α ∨ . In particular, φ ∨ w is a t P w -ordinary vector of type τ ∨ w and the subspace generated by the action of L w (O w ) on φ ∨ w is exactly the space of all t P w -ordinary vectors of type τ ∨ w . In particular, the latter is independent of r ≫ 0 and isomorphic to τ ∨ w over L w (O w ). Now, consider the standard intertwining operator … . It identifies π ♭ w as the unique irreducible quotient of ι Gw t P w σ ∨ w . Furthermore, the vector ϕ ∨ w ∈ ι Gw t Pw σ ∨ w corresponds exactly to the vector ϕ ♭ w ∈ ι Gw Pw σ ∨ w described above. Then, φ ♭ w = φ ∨ w is the desired vector and this concludes the proof.

4.2.3 Anti-holomorphic and t P -anti-ordinary representations for G 2 . Going back to the discussion of Section 4.2.1, we know that π w is P w -anti-ordinary of level r ≫ 0 (for G 1 ) if and only if π ♭ w is t P w -anti-ordinary of level r ≫ 0 (for G 2 ). Again, adapting our definitions in Section 3.1 from G 1 to G 2 , the latter notion requires changing P w to t P w and the double coset operators U GL,− w,j to U ♭,GL,− w,j = t I w,r t w,j t I w,r . We assume that π w is P w -anti-ordinary of level r ≫ 0, that π w is the unique irreducible quotient of ι Gw Pw σ w for some admissible irreducible supercuspidal representation σ w , and that κ satisfies Inequality (16). The analogue of Theorem 4.3 is the following.

Lemma 4.6. Using the notation above, let (τ w , X w ) be the BK-type of π w .

(i) The unique irreducible subrepresentation of ι Gw Pw σ ∨ w is isomorphic to π ♭ w .

(ii) Let (τ ∨ w , X ∨ w ) be the contragredient of (τ w , X w ), the BK-type of σ ∨ w . Consider X ∨ w as a subspace of the vector space associated to σ ∨ w , via a fixed embedding (unique up to scalar) τ ∨ w ֒→ σ ∨ w . For each r ≫ 0 and α ∈ X ∨ w , there exists a unique t P w -anti-ordinary φ ♭ w,r ∈ π Ir w of type τ ∨ w and level r such that ϕ ♭ w,r (1) = α, where ϕ ♭ w,r is the image of φ ♭ w,r in ι Gw Pw σ ∨ w , and the support of ϕ ♭ w,r contains P w t I w,r .

(iii) For r ′ > r ≫ 0, one can choose α, α ′ ∈ X ∨ w such that the vectors φ ♭ w,r and φ ♭ w,r ′ corresponding to α and α ′ respectively satisfy … .

Proof. As in the proof of Lemma 4.5, the map … realizes π ♭ w as the unique irreducible quotient of ι Gw t P w σ ∨ w . In particular, all the consequences of Theorem 4.3 hold for π ♭ w by replacing P w by t P w and σ w by σ ∨ w . Given α ∈ X ∨ w as above, let ϕ ′ w,r ∈ ι Gw t P w σ ∨ w be the vector obtained from Theorem 4.3 associated to α.
Furthermore, consider the standard intertwining operator ι Gw … . Its image is both the unique irreducible quotient of ι Gw t Pw σ ∨ w , namely π ♭ w , and the unique irreducible subrepresentation of ι Gw Pw σ ∨ w . This proves part (i). To conclude, let φ ♭ w,r (resp. ϕ ♭ w,r ) be the image of ϕ ′ w,r in π ∨ w (resp. ι Gw Pw σ ∨ w ) via this intertwining operator. The fact that φ ♭ w,r is t P w -anti-ordinary of type τ ∨ w and level r follows from Theorem 4.3 (ii). Similarly, part (iii) follows from Theorem 4.3 (iii) (upon making the appropriate adjustment between G 1 and G 2 ). The properties of ϕ ′ w,r are obtained from an easy computation using the definition of ϕ ′ w,r and the exact formula for the intertwining operator above.

Remark 4.7. In Theorem 4.3, Lemma 4.5 and Lemma 4.6, more general statements can be made for any type. However, for applications to computations of zeta integrals in forthcoming work of the author [Mar23], the results above, which involve only the BK-type of a fixed representation, are sufficient.

5 Comparison of P -(anti-)ordinary forms and representations.

In this section, we work with G = G 1 and we use the same notation as in Section 3.1 without comment. The material here adapts some of the theory of [EHLS20, Section 6.6] to any parabolic subgroup P as in Section 2.2.2. In particular, we identify integral spaces of P -ordinary cusp forms of level K r with a fixed weight κ and P -nebentypus τ as lattices inside certain holomorphic P -ordinary cuspidal automorphic representations π of weight type (κ, K r ) whose BK-type is τ . Using characters of the Hecke algebra associated to π, one can study congruences between such P -ordinary cusp forms modulo p.

Hecke algebras. Let κ be a dominant character of T H 0 and τ be an irreducible smooth representation of L P (Z p ). Fix r ≫ 0 such that τ factors through L P (Z p /p r Z p ) and let K = K r = K p I r ⊂ G(A f ) be a neat compact open level subgroup. Let R ⊂ C be an S 0 [τ ]-algebra.

Hecke algebras on cusp forms. As in [EHLS20, Section 2.6.8], for all g ∈ G(A p f ), the double coset operator T r (g) = [K r gK r ] naturally acts as an endomorphism of M κ (K r ; R). The subspace of cusp forms and the subspace of P -nebentypus τ are both stable under the action of T r . The material of [EHLS20] only considers the case where P is a Borel subgroup, but the same arguments and formulas remain valid in our case using our moduli interpretation of E r,τ from Section 2.4.3. This is because T r (g) only acts on the PEL datum of a given point and not on its p-level structure.

Furthermore, assume that R is a p-adic domain. In that case, the arguments of Hida [Hid04, 8.3.1] show that the Hecke operator u w,Dw(j) = u w,Dw(j),κ also acts as an endomorphism of M κ (K r ; R); see also [EHLS20, Sections 2.6.9, 2.9.5]. Again, the action of u w,Dw(j) stabilizes the subspace of cusp forms and the subspace of forms with P -nebentypus τ .

We now construct the Hecke algebra (of level K r ) generated by all Hecke operators at unramified places and at p. More precisely, let l ≠ p be any prime of Q and consider the set P l of all primes of K + above l. Write P l = P l,1 ⊔ P l,2 , where P l,1 is the subset of such primes that split in K and P l,2 is the complement. Therefore, one naturally has an identification … , where G l,2 is the subgroup of elements ((x w ), t) ∈ ∏ w∈P l,2 GL n (K w ) × Q × l such that each x w preserves the Hermitian form on V ⊗ K w with the same similitude factor t.
In particular, K l ⊂ G(Q l ) is a product of local factors over all places in P l . Let S l = S l (K l ) be the subset of P l consisting of all places for which the local factor of K l is not the maximal hyperspecial subgroup. Let S l,i = S l ∩ P l,i and define … . Finally, let S = S(K p ) = ∪ l≠p S l (K l ) and define … .

Let T Kr,κ,R be the R-subalgebra of End C (S κ (K r ; C)) generated by the operators T (g) = T r (g) for all g ∈ G(A f ) S and u w,Dw(j) for all w ∈ Σ p , 1 ≤ j ≤ r w . Similarly, one defines T Kr,κ,τ,R as the quotient algebra obtained by restricting each operator to an endomorphism of S κ (K r , τ ; C).

Serre duality and Hecke algebras on anti-ordinary cusp forms. Going back to G = G 1 , the space of anti-holomorphic cuspidal forms of weight κ and level K r is defined as H d κ (K r ; C) := H d ! ( Kr Sh, ω κ,r ) and its subspace of P -nebentypus τ is H d κ (K r , τ ; C) := H d ! ( Kr Sh, ω κ,r,τ ). One can define an R-integral structure on these spaces by considering the integral models of K Sh. However, we instead follow [EHLS20, Section 6.4.2] and define the integral structure via duality from a normalized Serre duality pairing.

By definition of κ D , one can construct a canonical perfect pairing … . This identifies H d κ (K r ; C) as the dual of S κ (K r ; C), and via this identification we define … . Similarly, H d κ (K r , τ ; R) is defined by replacing S κ (K r ; R) with S κ (K r , τ ∨ ; R). Then, one defines the R-Hecke algebra T d Kr,κ,R by proceeding as in the definition of T Kr,κ,R but replacing S κ (K r ; R) with H d κ (K r ; R) and u w,Dw(j) by u − w,Dw(j) . Upon restriction to H d κ (K r , τ ; R), one obtains the quotient algebra T d Kr,κ,τ,R .

Lemma 5.1. Let R ⊂ C be an S 0 -algebra as above. There exists a unique R-algebra isomorphism T Kr,κ,R ∼ − → T d Kr,κ D ,R such that u w,Dw(j) is mapped to u − w,Dw(j) and T (g) to ||ν(g)|| a(κ) · T (g −1 ). If R is an S 0 [τ ]-algebra, it induces an isomorphism of R-algebras T Kr,κ,τ,R ∼ − → T d Kr,κ D ,τ,R .

Proof. The proof is exactly the same as that of Lemma 6.6.1 (i) in [EHLS20]. It is an immediate consequence of Serre duality.

Automorphic representations as Hecke modules. In what follows, for all the Hecke algebras T ? • , let T ?,p • denote the R-subalgebra generated only by the operators T (g) for g ∈ G(A S f ). Moreover, we omit the subscript R when R = S 0 (or S 0 [τ ]). We also use the notation from Section 2.5.1 without comment.
Let π be a holomorphic cuspidal automorphic representation of G of weight type (κ, K r ). Recall that it is defined over some number field E(π) containing K ′ ; see Remark 2.11. Recall the definition of S = S(K p ) above and consider the factorization … . By definition, K S is the factor of K p over all places of K + where K p contains a hyperspecial maximal subgroup. In particular, (π S f ) K S is a 1-dimensional space spanned by an E(π)-rational spherical vector. The natural action of T (g) for all g ∈ G(A S f ) on π Kr f is …
2023-10-16T06:42:09.745Z
2023-10-13T00:00:00.000
{ "year": 2023, "sha1": "84095a7c35c86fb7bd99826e6c89aee715905e04", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "84095a7c35c86fb7bd99826e6c89aee715905e04", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
4015821
pes2o/s2orc
v3-fos-license
Can high social capital at the workplace buffer against stress and musculoskeletal pain? Abstract Work-related musculoskeletal pain and stress are both highly prevalent in the working environment and relate well to the biopsychosocial model. While the onset of musculoskeletal pain is often dependent on the biological element of the biopsychosocial model, chronic pain is often influenced by psychological and social factors. Similarly, stress is also influenced by biological, psychological, and social factors. This study investigates the possibility of social capital being a buffer for stress and musculoskeletal pain in a group of female laboratory technicians. Female laboratory technicians (n = 500) replied to questions about stress (Cohen's Perceived Stress Scale-10), musculoskeletal pain (0-10 visual analog scale), and social capital at the workplace (bonding [in teams], bridging [between teams], and linking [between teams and leaders]). Outcome variables were stress and musculoskeletal pain, and the predictor variable was social capital. General linear models tested the association of the 3 types of social capital (predictor variables) with stress and pain (mutually adjusted outcome variables). Analyses were controlled for age, lifestyle (body mass index, smoking), seniority, and working hours per week. For stress as outcome, moderate and high bonding social capital were different from low social capital with −2.04 (95% confidence interval [CI] −3.33 to −0.76) and −4.56 (95% CI −5.84 to −3.28) points on the Perceived Stress Scale of 0 to 40, respectively. Similarly, moderate and high bridging social capital were different from low social capital with −1.50 (95% CI −2.76 to −0.24) and −4.39 (95% CI −5.75 to −3.03), respectively. For linking, only high social capital was significantly different from low, with −2.94 (95% CI −4.28 to −1.60). None of the 3 types of social capital was associated with musculoskeletal pain. Higher levels of social capital at the workplace appear to buffer against stress, but not against musculoskeletal pain. Intervention studies should investigate whether improving bonding, bridging, and linking social capital at the workplace may be a viable strategy to prevent or reduce work-related stress. Introduction The value of social networks is the central premise of social capital. For instance, Putnam defines social capital as "features of social organization such as networks, norms, and social trust that facilitate coordination and cooperation for mutual benefit" [1] and Nahapiet and Ghoshal define social capital as: "the sum of the actual and potential resources embedded within, available through, and derived from the network of relationships possessed by an individual or social unit. Social capital thus comprises both the network and the assets that may be mobilized through that network," [2] while Coleman defines it as: "Social capital is defined by its function. It is not a single entity, but a variety of different entities having two characteristics in common: They all consist of some aspect of social structure, and they facilitate certain actions of individuals who are within the structure." [3] By these definitions, social capital can therefore best be described as referring to the collective value of all "social networks" and the "norms of reciprocity" that arise from these networks.
Social capital describes a variety of specific benefits that flow from the trust, reciprocity, information, and cooperation associated therewith, thus creating value and productive benefits for the people connected. By these definitions, high social capital seems like something that should be an inherent part of a healthy environment at modern workplaces. In relation to workplace research, 3 main types of social capital exist. [4,5] The first is social capital within working teams (bonding) and represents working relationships within the team or group, for example, agreeing about what is important and a feeling of unity and cohesion in the team. The second is social capital between working teams (bridging) and represents working relationships between different teams or groups, for example, having trust in the ability of the other team to do the task well. The third is social capital between teams and leaders (linking) and represents working relationships between the members of the team or group and their leader, for example, to what degree the leader understands and acknowledges the work of the group and whether there is a common understanding between the leader and members of the group about how to perform the work. While all 3 types are important, they represent different aspects of social capital and can respond differently to interventions at the workplace. For example, we have recently found improved bonding social capital in response to group-based physical exercise at the workplace in spite of a general decrease in linking social capital. [4] Work-related stress arising from low job control and high job demands has been shown to be associated with adverse health outcomes such as hypertension and other health-risk behaviors. [6] Underlying conditions of stress cross both physical/biological and psychological barriers when an organism is strained beyond its power to adapt. From a physical perspective, stress has been explained as a mechanical and automatic response from the human body, and a similar response occurs when the threat has psychological characteristics. [7,8] The human body has an innate drive to maintain a biological equilibrium. Stressors such as pain, sickness, or excessive physical or psychosocial work demands disrupt the homeostasis and trigger a natural response from the body, aimed at returning it to homeostasis. Briefly, the natural response can be described as a tri-phasic phenomenon. The first phase is an alarm phase representing a somatic shock; the second is a resistance phase, in which the body fights the alarming threat and works its way back to homeostatic equilibrium. The first 2 stages are repeated throughout an individual's life as the person faces new challenges and obstacles. However, should the body remain in the second stage for prolonged periods of time, it may enter the exhaustion stage. Models of the exhaustion stage indicate that it is the inability to adapt to the causative stressors, or an extended duration of exposure to them, that creates the symptoms describing the stress state. [7,9-12] The model of work-related stress proposed by Cooper and Marshall describes 5 sources of stress, each with the possibility of disrupting homeostatic equilibrium for an extended period and thereby leading to the exhaustion phase: factors intrinsic to the job, including poor physical working conditions, work overload, or time pressures; role in the organization, including role ambiguity and role conflict; career development, including lack of job security and under/over promotion; relationships at work, including poor relationships with the boss or colleagues, an extreme component of which is bullying in the workplace; and organizational structure and climate, including little involvement in decision-making and office politics. [13] Combined, these stressors describe a biopsychosocial relationship [14,15] in the development of work-related stress, in which perceived social interactions (social cohesion, trust, reciprocity, and cooperation in the workplace [1]) affect health outcomes such as pain, [16] health-risk behaviors, [17,18] depression, [19] and even mortality. [20] In addition to stress, musculoskeletal pain is also a major work-related challenge. Work-related pain plays a dominating role in work environment and health. [21,22] It is a major socioeconomic burden with consequences for the individual, the social relations of the individual, and the organization. [23,24] Musculoskeletal pain is one of the most common causes of loss of productivity, reduced work performance, and sickness absence. [25] In addition, chronic pain has been associated with poor quality of life. [26][27][28] Among laboratory technicians, the prevalence of chronic musculoskeletal pain is high. [29][30][31] This type of job is characterized by tasks that are monotonous and repetitive in nature, with relatively low force and continuous muscular contractions. Large epidemiological studies following several thousand workers in registers have shown that repetitive arm movements for more than 25% of the working time are a risk factor for developing long-term sickness absence. [32] The biopsychosocial model of pain explains the intricacies and interplay of biological/biomedical, psychological, and social factors affecting pain, [14,15] and while there is some evidence that biological factors are predominant at the onset of pain, psychological and social factors are central in the process of developing pain chronicity. [33,34] For instance, social support, [35] which can be regarded as a part of social capital, has been shown to be helpful in chronic pain coping, [36,37] whereas lack of social support augments the development of chronicity. [38] With the biopsychosocial model of both stress and pain in work, environment, and health, it therefore seems relevant to ask the question: can high social capital at the worksite buffer against stress and musculoskeletal pain in a population performing monotonous and repetitive movement tasks, such as laboratory technicians?
[7,[9][10][11][12] The proposed model by Cooper and Marshall of work-related stress describes 5 sources of stress, each with the possibility of disrupting homeostatic equilibrium for an extended period leading to the exhaustion phase: intrinsic to the job, including factors such as poor physical working conditions, work overload, or time pressures; role in the organization, including role ambiguity and role conflict; career development, including lack of job security and under/over promotion; relationships at work, including poor relationships with the boss or colleagues, an extreme component of which is bullying in the workplace; and organizational structure and climate, including little involvement in decision-making and office politics. [13] Combined, these stressors describe a biopsychosocial relationship [14,15] in the development of work-related stress where perceived social interactions were social cohesion, trust, reciprocity, and cooperation in the workplace, [1] affects health outcomes, such as pain, [16] health-risk behaviors, [17,18] depression, [19] and even mortality. [20] In addition to stress, musculoskeletal pain is also a major work-related challenge. Work-related pain plays a dominating role in work environment and health. [21,22] It is a major socioeconomic burden with consequences for both the individual, the social relations of the individual, and the organization. [23,24] Musculoskeletal pain is one of the most common causes for loss of productivity, reduced work performance, and sickness absence. [25] In addition, chronic pain has been associated with poor quality of life. [26][27][28] Among laboratory technicians, the prevalence of chronic musculoskeletal pain is high. [29][30][31] This type of job is characterized by tasks that are monotonous, with relatively low force, continuous muscular contractions, and repetitive in nature. Large epidemiological studies following several thousand workers in registers have shown that repetitive arm movements for more than 25% of the working time is a risk factor for developing long-term sickness absence. [32] The biopsychosocial model of pain explains the intricacies and interplay of biological/biomedical, psychological, and social factors affecting pain [14,15] and while there is some evidence that show biological factors are predominant at the onset of pain, psychological and social factors are central in the process of developing pain chronicity. [33,34] For instance, social support, [35] which can be regarded as a part of social capital, has been shown to be helpful in chronic pain coping, [36,37] whereas lack of social support augments the development of chronicity. [38] With the biopsychosocial model of both stress and pain in work, environment, and health, it therefore seems relevant to ask the question; can high social capital at the worksite buffer against stress and musculoskeletal pain in a population performing monotonous and repetitive movement tasks such as laboratory technicians? Study design This study is an explorative cross-sectional analysis of baseline data obtained during a worksite intervention trial previously described by our research team. [29][30][31]39,40] Data for this study were collected during the spring of 2014. A protocol of the study and primary, secondary, and tertiary outcomes, together with a cross-sectional study of work ability, have all been reported previously. 
[29-31,39,40] Ethical approval Ethical approval was obtained from the Danish National Committee on Biomedical Research Ethics (the local ethical committee of Frederiksberg and Copenhagen; H-3-2010-062) as part of the research program "Implementation of physical exercise at the workplace (IRMA)." The trial "Implementation of physical exercise at the Workplace (IRMA09)-Laboratory technicians" was registered in the ClinicalTrials.gov register (NCT02047669) prior to participant enrolment. All experimental conditions conformed to the Declaration of Helsinki. All reporting conforms to the STROBE guidelines "Strengthening the Reporting of Observational Studies in Epidemiology." [41] Participants Out of 756 laboratory technicians at a large pharmaceutical company in Denmark, 539 completed questionnaires on musculoskeletal pain, perceived level of stress, and social capital. Of these, 473 were women and included in the analysis. All eligible participants were informed about the purpose and content of the study. Table 1 shows participant characteristics of relevant data. 2.4.1. Perceived Stress. Items evaluate the degree to which people find that life is unpredictable, uncontrollable, or overloaded. [42] These 3 aspects have been confirmed as vital elements of the experience of stress and provide a thorough insight into the degree of learned helplessness experienced by the individual. [43] The Perceived Stress Scale 10 (PSS-10) includes questions intended to evaluate the current level of stress experienced by the subject and is an abbreviated version of the scale, consisting of only 10 items (the full version has 14 items), administered in only a few minutes, and easily scored. Because the PSS assesses general beliefs about perceived stress without providing subjects with a list of specific life events, scores are not biased by event content or by differential recall of previous life experiences. In brief, each item on the PSS-10 questionnaire is rated on a 5-point Likert scale ranging from "never" (0) to "almost always" (4). Positively worded items are reverse scored, and the ratings are summed, with higher scores indicating more perceived stress. The PSS-10 score is obtained by reversing the scores on the 4 positive items (for example, 0 = 4, 1 = 3, 2 = 2, etc.) and then summing across all 10 items. A score of 13 is considered average, and stress scores of more than 20 indicate high stress. [29] For reference, we divided the scoring into 3 categories with the following cut-off points: low stress ≤ 10, 10 < moderate stress ≤ 20, and high stress > 20. Examples of questions from the PSS-10 questionnaire include: "In the past month, how often have you been angry because of things that happened that were outside of your control?", "In the past month, how often have you felt that things were going your way?", and "In the past month, how often have you felt unable to control the important things in your life?" [42] 2.4.2. Musculoskeletal Pain. We asked the participants to rate their pain intensity in the upper back, lower back, neck, shoulders, elbows, or hands/wrists on a modified 0 to 10 visual analog scale. [44] For reference, "0" is defined as "no pain" and "10" is defined as "worst imaginable pain." The questions were supported by drawings from the Nordic Questionnaire that defined the body areas, [45] and an average pain score of the 6 regions was subsequently calculated and used in the analysis.
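To make the scoring arithmetic concrete, here is a minimal Python sketch of the two outcome measures as described above. It is illustrative only: the identity of the 4 reverse-scored PSS-10 items (items 4, 5, 7, and 8 in the standard instrument) is an assumption, since the text does not list them.

POSITIVE_ITEMS = {4, 5, 7, 8}  # reverse-scored PSS-10 items (1-indexed); assumed, not listed in the text

def pss10_total(responses):
    """Sum the 10 PSS-10 items (each rated 0-4), reverse-scoring the positive items."""
    assert len(responses) == 10 and all(0 <= r <= 4 for r in responses)
    return sum((4 - r) if i in POSITIVE_ITEMS else r
               for i, r in enumerate(responses, start=1))

def stress_category(total):
    """Cut-off points used in the paper: <=10 low, 11-20 moderate, >20 high."""
    if total <= 10:
        return "low"
    return "moderate" if total <= 20 else "high"

def mean_pain(vas_scores):
    """Average the 0-10 VAS pain ratings over the 6 body regions
    (upper back, lower back, neck, shoulders, elbows, hands/wrists)."""
    assert len(vas_scores) == 6 and all(0 <= s <= 10 for s in vas_scores)
    return sum(vas_scores) / len(vas_scores)

# Example with a hypothetical respondent:
print(pss10_total([2, 3, 1, 1, 2, 3, 0, 1, 2, 3]))  # a 0-40 total
print(stress_category(17))                           # "moderate"
print(mean_pain([3, 2, 0, 4, 1, 2]))                 # 2.0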
Predictor variables: Bonding, bridging, and linking Female workers (n = 473) replied to a baseline screening questionnaire concerning Bonding, Bridging, and Linking (A + B) social capital. [4,5] Two sample questions out of 9 questions for bonding social capital are "In our team, we agree on what is the most important in our work tasks" and "There is a feeling of unity and cohesion in my team." Two sample questions of a total of 6 for bridging social capital are "Is there a good working relationship between your team and the other teams/departments?" and "We have trust in the ability of the other teams to do the job well." Sample questions out of the 10 questions for linking social capital are "Does your nearest leader contribute to solving everyday problems?", "Our nearest leader has great knowledge and understanding of the work we do," "Are the employees involved in decisions about changes at the workplace?", and "There is a common understanding between the management and employees on how we should perform our work tasks." Participants replied on a horizontally oriented scale of 0 to 10, where 0 is "no, not at all" and 10 is "yes, completely." For each of the social capital dimensions, the average value of all questions was calculated and multiplied by 10 (i.e., 0-100) to provide a higher resolution of the respective social capital dimension. [4] The cut points between low, moderate, and high social capital in bonding, bridging, and linking were chosen to have as close to 33.3% of the subjects in each group: in teams, low social capital (0-69), moderate social capital (69-85), and high social capital (85-100); between teams, low social capital (0-66), moderate social capital (66-80), and high social capital (80-100); and between teams and leader/management, low social capital (0-69), moderate social capital (69-82), and high social capital (82-100). Statistics General linear models (Proc GLM, SAS version 9.4) tested the association of the 3 types of social capital (predictor variables) with stress and pain (mutually adjusted outcome variables). Analyses were controlled for age, lifestyle (body mass index [BMI], smoking), seniority, and working hours per week. Stress analysis was controlled for pain, and pain analysis was similarly controlled for stress. Results are reported as least square means and 95% confidence intervals (CIs), as well as between-group differences of least square means and 95% CIs. In addition, effect sizes (Cohen d) were calculated as the between-group difference (high vs. low, and moderate vs. low) divided by the pooled standard deviation. [46] According to Cohen, effect sizes of 0.20, 0.50, and 0.80 can be considered small, moderate, and large, respectively. Results For stress as outcome, moderate and high social capital in teams are both statistically different from low social capital, with effect sizes of 0.32 and 0.71, respectively. A similar picture is seen with social capital between teams, where moderate and high social capital are statistically different from low social capital, with effect sizes of 0.23 and 0.69, respectively. For social capital between team and leader/management, only high social capital buffers against stress compared with low social capital, with an effect size of … .

Table 1. Descriptive characteristics.
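As an illustration of the scale construction and the effect-size computation described above, consider the following Python sketch. The cut points shown are the ones quoted for bonding; how a score falling exactly on a quoted boundary (e.g., 69) is assigned is an assumption, and all data in the usage example are made up.

import statistics

def scale_score(item_responses):
    """Average the 0-10 items of one social capital dimension and rescale to 0-100."""
    return 10 * sum(item_responses) / len(item_responses)

def bonding_category(score):
    """Tertile-based cut points quoted above for bonding: 0-69 / 69-85 / 85-100.
    Boundary assignment (strict vs. inclusive) is an assumption here."""
    if score < 69:
        return "low"
    return "moderate" if score < 85 else "high"

def cohens_d(group_a, group_b):
    """Between-group difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = statistics.stdev(group_a), statistics.stdev(group_b)
    pooled_sd = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Example with made-up PSS-10 totals for high- vs. low-bonding groups:
high_bonding = [8, 10, 12, 9, 11, 13]
low_bonding = [14, 16, 13, 17, 15, 18]
print(round(cohens_d(high_bonding, low_bonding), 2))  # negative value: less stress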
Discussion
The present cross-sectional analysis of approximately 500 female laboratory technicians shows that having either moderate or high social capital buffers against perceived stress, overall with low to moderate effect sizes, but not against work-related musculoskeletal pain, when adjusting for age, lifestyle (body mass index [BMI], smoking), seniority, and working hours per week. This finding has significant implications for testing future targeted worksite rehabilitation strategies focusing on both work-related chronic musculoskeletal pain and stress. Our research team has previously shown that a multifactorial intervention strategy targeting musculoskeletal pain and stress, utilizing precise joint mobility and elastic resistance band exercises, fear-avoidance and pain-catastrophizing counseling, and mindfulness at the worksite, reduces chronic musculoskeletal pain by 52% and reduces fear-avoidance beliefs by 23%, but does not significantly reduce perceived levels of situational stress, neurocognitive performance, or muscle function compared with a reference group following ongoing company health initiatives. Further, we have also shown that work ability in the same population is affected by both stress and pain in an additive fashion.

This study provides an interesting perspective on the relationship between social capital and chronic pain. For instance, low cognitive social capital at the individual level has been shown to be significantly associated with a higher prevalence of pain and a higher level of pain intensity, but also with a higher chance of sick leave due to pain in employed subjects. [16] In addition, others have shown that increasing social capital in cancer patients promotes health behaviors, treatment compliance, and pain relief, [47] and it was recently shown that physical exercise programs performed together with colleagues improve social climate and vitality among workers with chronic musculoskeletal pain but do not affect mental health. [48] While these findings suggest that focusing on improving social capital can be beneficial in persons with chronic pain, this study suggests that having moderate or high social capital does not act as a buffer against musculoskeletal pain. Therefore, increasing social capital as a preventive measure against the development of work-related musculoskeletal pain alone may not be a feasible strategy, but it should still be considered for rehabilitation purposes.

Conversely, this study does suggest that having moderate or high social capital buffers against stress. This perspective is interesting as it implies an important aspect of work health, and it supports both the psychological and social elements of the biopsychosocial model as described by Engel in 1977. [14,15] Since stress can be described as either the reaction (psychological, physiological, and behavioral) to environmental stimuli or the interaction between environmental characteristics and the subjective reaction to these characteristics, [31,49] improving social capital at the worksite appears to be a viable strategy to implement when the goal is to buffer against work stress.
This is supported by the findings of Boyas and Wind, who, in a cross-sectional study, examined the relationship between employment-based social capital, job stress, and burnout among public child welfare workers and found that communication, supervisory support, organizational commitment, influence, and trust (all elements of social capital) had a significant association with job stress. [50] Furthermore, Gächter et al., analyzing survey data from police officers to assess the relationship between stress, strain, and social capital, found that an increase in social capital is significantly correlated with a decrease in perceived level of strain and psychological burnout, thus recommending that stress reduction programs actively engage employees to build stronger social relations and networks. [51]

Although all 3 types of social capital buffered against perceived stress, social capital in and between teams appeared to be more important than social capital between leaders and teams; that is, both moderate and high levels of social capital were important. Thus, factors such as agreeing on what is most important in daily work tasks and a feeling of unity and cohesion seem to be important within teams, while good working relationships and trusting other teams seem to be important between teams. By contrast, only high social capital, and not moderate social capital, between teams and leaders was important for lower levels of perceived stress. Thus, a really high level of contribution from the leader to solving everyday problems, great knowledge and understanding of the work that the employees do, involving employees in decisions about changes at the workplace, and having a common understanding of what is expected are important factors between leaders and teams in relation to lower levels of stress.

Strengths and limitations
This study demonstrates that moderate and high social capital buffer against work stress but not against musculoskeletal pain. However, the study has some important limitations. The cross-sectional design does not allow examination of causal associations. Although the sample size is adequate to test the research question, self-reported data are a limitation, as they may be influenced by subjective factors. Furthermore, given the demographic characteristics of this sample (Danish female laboratory technicians), generalizability to other job groups, to men, and to other countries remains to be determined. Conversely, using a homogeneous sample consisting of female laboratory technicians is also a noteworthy strength, as it limits bias from socioeconomic confounding. Finally, because social capital deals with human behavior, the study could have been strengthened by qualitative interviews to supplement the questionnaire replies.

In conclusion, this study provides an interesting perspective on factors that may be important in stress prevention and management at the worksite. Our results indicate that having moderate or high social capital buffers against stress. Intervention studies should test whether improving social capital in bonding, bridging, and linking may be a viable strategy to implement at a time when work-related stress is highly prevalent and a socioeconomic burden of considerable size.
Gene Interactions Regulating Sex Determination in Cucurbits

The family Cucurbitaceae includes many economically important crops, such as cucumber (Cucumis sativus), melon (Cucumis melo), watermelon (Citrullus lanatus), and zucchini (Cucurbita pepo), which share homologous gene pathways that control similar phenotypes. Sex determination is a research hotspot associated with yield and quality, and the genes involved are highly orthologous and conserved in cucurbits. In the field, six normal sex types have been categorized according to the distribution of female, male, or bisexual flowers in a given plant. To date, five orthologous genes involved in sex determination have been cloned, and their various combinations and expression patterns can explain all the identified sex types. In addition to genetic mechanisms, ethylene controls sex expression in this family. Two ethylene signaling components have been identified recently, which will help us to explore the ethylene signaling-mediated interactions among sex-related genes. This review discusses recent advances relating to the mechanism of sex determination in cucurbits and the prospects for research in this area.

INTRODUCTION OF SEX TYPES IN CUCURBITS

Flower development is the basis of fruit and seed production in plants. In angiosperms, ~90% of species have perfect flowers, bearing stamens and carpels in the same flower. Compared with perfect or bisexual flowers, differential development or selective arrest of the carpel or stamen in some species results in unisexual male or female flowers, respectively, leading to flower sex-type diversity (Tanurdzic and Banks, 2004). Various combinations or distributions of the three kinds of flowers produce hermaphroditic, dioecious, and monoecious plants, thus forming the sex-type diversity of plants. As in animals, the regulation of these floral developmental processes is defined as sex determination or sex differentiation. Sex determination in angiosperms has been well studied in recent years (reviewed by Tanurdzic and Banks, 2004; Lai et al., 2017; Pannell, 2017; Pawełkowicz et al., 2019a).

The family Cucurbitaceae comprises about 120 genera and 960 species, including many economically important crops such as cucumber (Cucumis sativus), melon (Cucumis melo), watermelon (Citrullus lanatus), zucchini (Cucurbita pepo), and pumpkins (Cucurbita moschata) (Bhowmick and Jha, 2015). The family Cucurbitaceae has abundant flower and plant sex types, and the regulation of sex determination can directly influence their yield and quality. Depending on the distribution or ratio of the three types of flowers produced in a plant (Figure 1A), the family Cucurbitaceae is classified into six phenotypes: monoecy, gynoecy, subgynoecy, androecy, andromonoecy, and hermaphrodite (Figure 1B). The most common sex type in Cucumis sativus, Cucumis melo, Citrullus lanatus, and Cucurbita pepo is monoecy, in which only unisexual flowers are borne. However, the distribution of male and female flowers varies among species and varieties. Usually, in a monoecious cucumber plant, male flowers arise at the early or lower nodes, followed by a mixture of male and female flowers at the middle nodes, and ending with female flowers only at the higher nodes. In cucumber and melon, gynoecious lines produce only female flowers, while androecious plants bear only male flowers. Male and bisexual flowers can be found in andromonoecious lines, in which bisexual flowers can be regarded as replacing the female flowers found in monoecious lines.
FIGURE 1 | (A) … (Tan et al., 2015). (B) Schematic diagram of sex expression in the plant main vine in cucurbits. Black, blank, and mixed circles represent female, male, and bisexual flowers, respectively. The sex type of the first blooming flower was used to define the sex type of the node bearing that flower.

Hermaphroditic plants bear only bisexual flowers. Subgynoecious plants, which are found in some watermelon, zucchini, and cucumber lines, produce a few male flowers at the beginning nodes and all female flowers at the later nodes. Their most obvious difference from the monoecious lines is the lack of the mixed phase comprising male and female flowers (Galun, 1961; Kubicki, 1969a, 1969b, 1969d). In most cases, bisexual flowers and female flowers are mutually exclusive in a given cucumber or melon plant. However, in a cucumber mutant, certain watermelon plants, and zucchini lines treated with high temperature, hermaphroditic, female, and/or male flowers can arise in the same plant. In these cases, the sex types are named trimonoecy (or gynomonoecy), trimonoecy, and partial andromonoecy in these three species, respectively (Kubicki, 1969c; Martínez et al., 2014; Ji et al., 2015).

In the early stage of flower development, all floral buds are morphologically hermaphrodite, containing staminate and pistillate primordia. Selective arrest of either the staminate or pistillate parts results in female or male flowers, respectively. Furthermore, if the arrest does not happen, bisexual flowers are formed (Atsmon and Galun, 1962). A detailed section assay divided the development of cucumber flowers into 12 stages (Bai et al., 2004). The floral meristem is initiated in stage 1. From stage 2 to 5, sepal, petal, staminate, and pistillate (carpel) primordia are initiated sequentially. Selective arrest then happens after stage 5. In a bud destined to be male, the stamen differentiates into anther and filament in stage 6, the anther expands in stage 7, locules differentiate in stage 8, microsporocytes initiate in stage 9, meiosis initiates in stage 10, uninuclear pollen appears in stage 11, and finally, mature pollen is formed in stage 12. In male buds, from stage 6 to 12, the carpel primordia become only slightly enlarged. By contrast, in a bud committed to be female, carpel primordia elongate in stage 6, carpel primordia differentiate into the stigma and ovary in stage 7, the stigma elongates and ovule and integument primordia initiate in stage 8, macrosporocytes initiate in stage 9, meiosis initiates in stage 10, the embryo sac is formed in stage 11, and finally, all appendant tissues mature in stage 12. The staminate primordia in female flowers can differentiate into anthers and filaments in stage 6; however, they are smaller than those in male floral buds. Thereafter, from stage 7, the arrest of stamen development is indicated by their limited increase in size. Our data showed that, in bisexual flowers, staminate and pistillate primordia show normal morphological differentiation, just as in male and female buds, respectively (Figure 2).

FIGURE 2 | Developmental stages of male, female, and bisexual flowers and the expression pattern of sex-controlling genes in cucumber. Before stage 5, the developmental processes in male, female, and bisexual flowers are similar under visual observation. The detailed developmental program is described in the text. The expression durations of sex-controlling genes are shown as blue (male flower) and red (female flower) bands, and the graduated dark color represents the messenger RNA (mRNA) accumulation. The section assays of male and female flowers are modified from Niu et al., 2018.
Morphologically, after floral meristem initiation, sex differentiation can be summarized as two steps: pistillate initiation, which refers to the induction of pistillate primordia, and simultaneous staminate and pistillate primordial growth, which is related to uni/bisexual flower development. Based on these understandings, several gene loci associated with sex expression (sex type) have been identified and cloned in the last two decades. In this review, we discuss recent advances relating to the mechanism of sex determination in the Cucurbitaceae family, beginning with the genes controlling specific processes in sex differentiation.

GENES RESULTING IN PISTILLATE PRIMORDIA INITIATION

There are two kinds of well-studied gynoecy-controlling gene loci, conferring dominant and recessive gynoecy in cucurbits. Dominant gynoecy is a phenotype unique to cucumber, differing from other cucurbitaceous plants. Gynoecy is a particularly important trait in cucumber breeding. Immature cucumber fruit are usually harvested a few days after flowering; therefore, combined with parthenocarpy, more female flowers mean higher yield in cucumber production. In 1935, Tkachenko described the first gynoecious cucumber in a Japanese (Korean) variety (Kubicki, 1969a). Because there are no male flowers on gynoecious plants, homozygous gynoecious varieties could not be propagated until the 1960s, when gibberellic acid (GA) was first used to induce male flowers (Peterson and Anhder, 1960). In 1961, Shifriss explained that a gene (named Acr) could act as an accelerator, shifting female flowers to lower nodes (Shifriss, 1961). Later, Galun (1961) and Kubicki (1969a) named the locus st^F and Acr^F, respectively. In 1976, the symbol F (Female) was finally confirmed to represent the dominant gynoecy-controlling locus (Robinson et al., 1976). Later studies indicated that the F locus was associated with an additional copy (CsACS1G, G = gynoecy) of a 1-aminocyclopropane-1-carboxylic acid synthase gene, CsACS1 (Trebitsh et al., 1997). The open reading frame and proximal promoter (−410 bp upstream sequence) are almost identical between the two genes. The distinct distal promoter sequence of CsACS1G is homologous to that of a putative branched-chain amino acid transaminase (BCAT) gene (Mibus and Tatlioglu, 2004; Knopf and Trebitsh, 2006). Bioinformatic analysis discovered a copy number variant, arising from a 30-kb genomic sequence duplication (including CsACS1 and BCAT), underlying the F locus. The copy number variant region might represent a tandem repeat of the original 30-kb region in gynoecious lines, and the "junction point" of the two repeats is CsACS1G.

Natural melon and watermelon varieties possess recessive gynoecy loci, which are named g (gynoecious) and gy (gynoecious), respectively. Poole and Grimball (1939) reported a recessive g locus that controls gynoecy or subgynoecy in melon. Martin et al. (2009) identified the g gene as CmWIP1, encoding a C2H2 zinc-finger-type transcription factor. Expression of CmWIP1 leads to carpel abortion, resulting in male flowers. CmWIP1 indirectly represses the expression of CmACS7, the andromonoecious gene introduced below. In gynoecious lines, an insertion of a transposon (1.3 kb downstream of the gene) represses the expression of CmWIP1 via epigenetic changes in its promoter.
In watermelon, a chromosome translocation produced an insertion mutation in the ClWIP1 gene (the CmWIP1 ortholog), leading to a gynoecious line. Natural mutants of CsWIP1 in cucumber varieties are unavailable; however, CsWIP1-edited mutant lines created using clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) technology also showed gynoecy (Hu et al., 2017). All these studies confirmed that WIP1 is a conserved regulator of sex determination in cucurbits.

It should be noted that all the genes controlling gynoecy described above are associated with carpel-bearing flowers. Therefore, the genes function not only in female flowers but also in bisexual flowers. The gynoecy-controlling genes induce (or release) pistillate initiation in female and hermaphroditic flowers.

MUTANT GENES RESULTING IN THE HERMAPHRODITIC PHENOTYPE

Although monoecy is the most common sex type in cucurbits, melon breeders prefer bisexual flowers, making andromonoecy predominant in melon compared with other cucurbits (Boualem et al., 2008). Bisexual flowers make natural pollination easier, and the resulting fruits are usually rounder than the products of female flowers (Li et al., 2009; Aguado et al., 2018), both of which are desired phenotypes for melon production. An interesting phenomenon in bisexual flowers is that the fertilization and seed-setting ability in melon is much higher than that in cucumber, which might be the result of long-term domestication in melon breeding.

In cucurbits, the genes controlling the hermaphroditic phenotype are highly conserved. Rosa (1928) stated that andromonoecy is a recessive character in Cucumis and Citrullus. In melon, the gene locus was named a (andromonoecious) before 2015, then changed to m (monoecious) to avoid confusion with the androecy-controlling gene (a). A single-nucleotide mutation in CmACS7 was identified as associated with andromonoecy in melon. CmACS7 also encodes a 1-aminocyclopropane-1-carboxylic acid synthase, like CsACS1G, and the mutation severely impairs the enzymatic activity. Expression of CmACS7 inhibits staminate development in female flowers but is not required for carpel development (Boualem et al., 2008). Two kinds of natural monoecious mutations (named m and m-1) in cucumber have been identified, both of which are associated with CsACS2, an ortholog of CmACS7 (Li et al., 2009; Tan et al., 2015). The mutation in the m allele is also a single-nucleotide change; however, it mutates a different conserved active-site residue from the mutation in melon. In addition, a 14-bp deletion is found in the third exon of CsACS2 in the m-1 allele, which is predicted to produce a truncated protein. The mutations result in severe loss of enzyme activity in plants with the m allele and total loss in those with the m-1 allele. The CmACS7 orthologs CitACS4/ClACS7 and CpACS27A are associated with andromonoecy in watermelon and zucchini, respectively (Boualem et al., 2016; Ji et al., 2016; Manzano et al., 2016). In watermelon, the isoforms encoded by ClACS7 in andromonoecious lines showed no enzymatic activity, whereas the isoform in the monoecious line was active. In zucchini, even though neither of the parental lines showed a standard andromonoecious phenotype, a mutant nucleotide in CpACS27A was considered to be necessary, but not sufficient, to confer partial andromonoecy.
Pleiotropy is another characteristic of the genes controlling the hermaphroditic phenotype. Usually, bisexual flowers produce rounder fruits than female flowers (Figure 1A). An interesting finding in cucumber was that the trait of spherical fruit cosegregated with the m allele in two large F2 populations comprising 5,500 individuals in total (Li et al., 2009). Recently, a study of cucumber fruit growth confirmed that CsACS2 participates in fruit elongation via regulation of ubiquitination (Xin et al., 2019), providing a link between sex type and fruit development. In addition, hermaphroditic mutations affect floral organ development. In watermelon, the mutant allele cosegregated with slower growth and maturation of petals and carpels, which resulted in delayed anthesis in hermaphrodite flowers. Moreover, fruit and seed set were lower in the mutant lines, reflecting the reduced fertilization activity of bisexual flowers in watermelon, as in cucumber (Aguado et al., 2018). In zucchini, the bisexual flowers also showed delayed development and maturation of petals and a higher ovarian growth rate.

Summarizing the current findings: in cucurbits, the ACS7 orthologs are expressed in pistil-bearing flowers but not in male flowers. The expression of functional isoforms arrests staminate primordia, producing female flowers, while the nonfunctional or mutant isoforms lose this function and allow staminate and pistillate primordia to grow simultaneously, resulting in bisexual flowers. Therefore, in addition to mutation research, analyzing the regulation of gene expression is also important for the ACS7 orthologs.

MUTANT GENES LEADING TO ANDROECY

In cucumber and melon, sex expression in the main vine and lateral branches usually differs. Usually, the first several nodes of a lateral branch have high feminization potential, producing female flowers in monoecious lines and bisexual flowers in andromonoecious lines. Strictly, a variety without any pistil-bearing flowers in the main vine and lateral branches is defined as androecious; otherwise, it is identified as monoecious (with female flowers) or andromonoecious (with bisexual flowers). Obviously, an androecious line has little economic value in production, and the existing varieties are all mutants. In cucumber, a recessive a locus was identified that intensifies the androecious nature (Kubicki, 1969b). The gene is hypostatic to the F gene, and a plant with a genotype of ffaa is completely male. A rare androecious cucumber variety, "EREZ," helped to clone the a gene, for which the wild-type allele is CsACS11, encoding the third 1-aminocyclopropane-1-carboxylic acid synthase involved in sex determination. Similar to the F and M genes, the mutant isoform isolated from "EREZ" had no enzymatic activity (Boualem et al., 2015). Using a targeting-induced local lesions in genomes (TILLING) strategy, 10 mutations were created in CmACS11, the melon ortholog of CsACS11, and two lines containing changes in highly conserved amino acids were observed to be androecious. Besides the traditional a locus, an ethyl methanesulfonate-induced mutation helped to discover a second androecious cucumber variety, and the mutation was identified in CsACO2, encoding a 1-aminocyclopropane-1-carboxylic acid oxidase. The single-nucleotide change in the mutant gene resulted in an inactivated enzyme (Chen et al., 2016). The melon ortholog of CsACO2 is CmACO3. Both of these genes showed similar expression patterns (see below).
OTHER SEX-TYPE-RELATED LOCI IDENTIFIED VIA GENETIC ANALYSIS

Previous genetic studies have identified many loci that control conventional and accidental sex mutations. In cucumber, besides F, m, and a, the Intensive Female (In-F) gene was identified as increasing the female-flower ratio in monoecious plants (without the F gene). In addition, a plant with both the F and In-F genes could not produce male flowers when treated with GA (Kubicki, 1969b). Kubicki (1969b) also described an accelerator gene (acr1), conferring continuous nodes with female flowers in monoecious lines. In subgynoecious cucumber lines, a consistent major quantitative trait locus, which mainly increased the degree of femaleness, was identified on chromosome 3 (sg3.1) in two independent studies (Ji et al., 2016; Win et al., 2019). The relationship between acr1 or In-F and these genes should be clarified in future studies. Using artificial mutagenesis, Kubicki (1974, 1980) identified the hermaphrodite (h) and gynoecious (gy) loci. Unlike the m gene, the h gene governs bisexual flowers with normal ovaries, as in female flowers, including their shape and pollination ability. However, the available data do not allow us to determine the relationship or difference between the h gene and the m-1 mutation. The recessive gy gene was described as intensifying femaleness in cucumber and is linked with the F gene. The function of the gy gene is similar to that of the g gene in melon. However, CsWIP1 (the cucumber ortholog of the melon g gene) resides on chromosome 4 and the F gene is on chromosome 6, which means that gy and g might be two different genes. Trimonoecious plants have been reported in cucumber, watermelon, and zucchini (Kubicki, 1969c; Martínez et al., 2014; Ji et al., 2015). In cucumber, the gene responsible was named tr (trimonoecious), while the phenomenon is controlled by the tm gene in watermelon (to date, no name has been given in zucchini). However, the detailed structures of bisexual flowers in the three species are different. In cucumber, the bisexual flowers occurring in trimonoecious plants have superior ovaries (hypogynous; the normal bisexual and female flowers are epigynous), derived as a modification of staminate flowers, while the bisexual flowers in trimonoecious watermelon and zucchini seem to be the same as those in andromonoecious plants. Unfortunately, standard plant materials possessing the above loci (In-F, acr1, h, gy, tr) are not widely available. We look forward to in-depth studies and cloning of these genes in the future.

ETHYLENE AND SEX DETERMINATION IN CUCURBITS

The most important factor regulating sex expression in cucurbits is the phytohormone ethylene, which controls the transition to female flowering and the ratio of female flowers (Byers et al., 1972; Atsmon and Tabbak, 1979; Owens et al., 1980; Takahashi et al., 1982; Takahashi and Jaffe, 1984; Kamachi et al., 1997; Trebitsh et al., 1997). In cucumber and melon, ethylene (or its releasing agent) has been used to induce female flowers for decades (Rudich et al., 1969; Tsao, 1988; Yin and Quinn, 1995). In zucchini, sex determination in individual floral buds appears to be regulated by ethylene in a similar way (Manzano et al., 2010a, 2010b, 2011, 2013). By contrast, inhibition of ethylene biosynthesis or perception leads to increased maleness in cucumber, melon, and zucchini (Byers et al., 1972; Owens et al., 1980; Manzano et al., 2011). The relationship between ethylene and sex type in watermelon is complex.
In watermelon, female flowers require much more ethylene than male flowers to develop. In addition, bisexual flowers result from a decrease in ethylene production in female floral buds, and ethylene is required to arrest the development of stamens in female flowering, similar to the process in cucumber and melon. Nevertheless, ethylene inhibits the transition from male to female flowering and reduces the number of pistillate flowers, which contrasts with the findings in other cucurbits (Zhang et al., 2017a). An interesting phenomenon was observed in watermelon, in which ethephon (an ethylene-releasing reagent) treatment induced numerous abnormal flowers in gynoecious and hermaphroditic plants (Zhang et al., 2017a).

In cucumber and melon, different ethylene responses in staminate and pistillate primordia are used to explain the selective arrest occurring during sex determination. It has been proposed that differing levels of sensitivity in the stamen or carpel primordia could allow each type of primordium to react independently to different ranges of ethylene concentrations (Yin and Quinn, 1995). A higher ethylene threshold for stamen suppression than for carpel promotion, coupled with the timing of the increase in ethylene production occurring after the carpels are established, would prevent stamen inhibition before carpel establishment, thereby ensuring the development of flowers (Switzenberg et al., 2014). Ectopic expression of ethylene-related genes suggested that ethylene perception by stamen primordia, but not carpel primordia, is essential for the production of carpel-bearing buds (Little et al., 2007; Switzenberg et al., 2014). Ethylene might promote female flower development via an organ-specific induction of DNA damage in primordial anthers. The organ-specific ethylene perception might require downregulation of CsETR1 (encoding an ethylene receptor protein, see below) expression and increased expression of CsCaN (encoding a calcium-dependent nuclease) (Wang et al., 2010; Gu et al., 2011).

Ethylene synthesis results from the activity of 1-aminocyclopropane-1-carboxylic acid (ACC) synthase (ACS) and ACC oxidase (ACO), which transform S-adenosyl-L-Met (SAM) into ACC and convert ACC into ethylene, respectively (Adams and Yang, 1979; Yang and Hoffman, 1984). After biosynthesis, ethylene is perceived by receptor proteins located in the endoplasmic reticulum. The receptors are negative regulators of ethylene signaling: in the absence of ethylene, the receptors activate constitutive triple response 1 (CTR1), which suppresses the ethylene response via inactivation of ethylene insensitive 2 (EIN2). Ethylene binding to the receptors switches off the CTR1 phosphorylation activity and activates EIN2. The C terminus of EIN2 is cleaved and moves into the nucleus, stabilizing ethylene insensitive 3/EIN3-like (EIN3/EIL) transcription factors, which activate the expression of target genes, including those encoding ethylene response factor (ERF) transcription factors. The ERFs then initiate the expression of downstream ethylene-responsive genes (Klee and Giovannoni, 2011; Liu et al., 2015). To date, except for the WIP1 orthologous genes, all the other sex-controlling genes, including CsACS1G, CsACS2, CsACS11, and CsACO2 in cucumber, CmACS7 and CmACS11 in melon, CitACS4/ClACS7 in watermelon, and CpACS27A in zucchini, have important roles in ethylene biosynthesis.
Because ethylene participates directly in sex determination in cucurbits, the identification of many sex-related ethylene synthesis genes is not surprising. However, since nearly all the genes show a similar biochemical function (producing ethylene), the regulation of their expression should be important. Moreover, a high concentration of ethylene is harmful to young tissue, as observed in cucumber protoplasts and watermelon plants (Wang et al., 2010; Zhang et al., 2017a). Therefore, the specific spatiotemporal and coordinated expression of the ACS and ACO genes, which produces the local ethylene accumulation that induces pistil development and arrests stamen development, is critical in sex determination.

TRANSCRIPTIONAL CHARACTERISTICS OF THE SEX-RELATED GENES IN SEX DETERMINATION

Studies with exogenous ethylene have indicated that timing and concentration are key factors that determine whether carpel or stamen development is affected (Switzenberg et al., 2014). Therefore, the spatiotemporal expression patterns of the sex-related genes should be studied. The developmental process of flower buds reveals that sex determination happens between stages 5 and 6; therefore, all the regulatory genes should function before, or at least no later than, these two periods. CsACS1G, CsACS11, CsACS2, CmACS11, CmACS7, and ClACS7/CitACS4 are expressed only in female flowers, while CsWIP1, CmWIP1, and ClWIP1 are expressed in male flowers. The transcription of CsACO2 and CmACO3 has no sex specificity. CsACS1G was considered to be autonomously expressed in the shoot or early flower bud before all the other sex-controlling genes in gynoecious cucumber (Knopf and Trebitsh, 2006; Li et al., 2012). However, its detailed expression pattern is still unknown, because its low messenger RNA accumulation limits the use of in situ hybridization assays. Transcripts of all the bisexual-flower-controlling genes, CsACS2, CmACS7, and ClACS7/CitACS4, began to accumulate just beneath the pistil primordia of flower buds from stage 5, and then continued to accumulate in the central region of the developing ovary (Saito et al., 2007; Boualem et al., 2008; Boualem et al., 2016). The expression signals of both CsACS11 and CmACS11 were first detected below the carpel primordia from stage 4 and continued at least until stage 8 (Boualem et al., 2015). CsACO2 and CmACO3 expression was first detected in the center of stage 2 to 4 flower buds, just beneath the location of the future carpel primordia, and remained expressed in the carpel and stamen at relatively low levels after stage 6 (Chen et al., 2016). In male flowers, although CmWIP1 seems to have enhanced expression compared with CsWIP1, both are expressed from stage 4 to 6 (Boualem et al., 2015; Chen et al., 2016). The expression patterns of the sex-controlling genes are summarized in Figure 2. The interacting order of these sex-controlling genes in sex determination can be deduced from the sequence and duration of their expression.

Ethylene is also a key regulator of the expression of sex-controlling genes. Treatment with exogenous ethylene at an appropriate concentration increased the transcription of CsACS1, CsACS2, CsACS11, CmACS11, and CmACS7 and downregulated that of CsWIP1 and CmWIP1 (Yamasaki et al., 2001; Li et al., 2012; Switzenberg et al., 2014; Tao et al., 2018).
Endogenous ethylene produced by the first-expressed sex-specific gene might also act on other sex-controlling genes, which is used to explain the interactions among them (see below). A hypothesis was proposed that ethylene mediates the interaction among different sex-controlling genes, including (to date, at least) CsACS2, CsACS11, CmACS11, CmACS7, CsWIP1, and CmWIP1. Cloning of the ethylene signaling factors CsERF110/CmERF110 and CsERF31, which directly bind the promoters of CsACS11/CmACS11 and CsACS2 to activate their expression, respectively, supplied evidence for this hypothesis (Pan et al., 2018; Tao et al., 2018).

GENE INTERACTION CONFERRING SEX EXPRESSION

Classical genetic analyses helped to propose a systematic phenotype-genotype relationship for each sex type in cucumber, melon, and watermelon (Poole and Grimball, 1939; Galun, 1961; Kubicki, 1969a, 1969b, 1969d; Robinson et al., 1976; Kenigsbuch and Cohen, 1987, 1990; Ji et al., 2015). Here, we try to integrate the results from genetic studies, biochemical assays, and physiological responses (Figure 3). In cucumber, because the stamen and carpel differ in their sensitivity to ethylene, two ethylene thresholds are proposed, one for carpel promotion (EtC) and one for stamen suppression (EtS), with EtS believed to be higher than EtC (Switzenberg et al., 2014). Thus, the genotype-phenotype relationship is proposed as follows (see the code sketch after this list):

(1) The F gene (CsACS1G) is autonomously expressed, producing ethylene that reaches the EtC (but not the EtS), which initiates pistillate primordia. This ethylene can also induce M gene (CsACS2) expression, producing a higher (and/or longer-lasting) ethylene level that reaches the EtS and arrests the staminate primordia. Consequently, a combination of the F and M genes (FFMM) produces continuous female flowers and confers the gynoecious phenotype.

(2) When the genotype is FFmm, the EtC can be reached through CsACS1G expression, and the pistil develops normally. However, the mutant m gene encodes an inactive ACS, so ethylene is insufficient to reach the EtS, and the stamen develops. Therefore, the FFmm genotype results in a plant with all nodes bearing bisexual flowers, producing a hermaphroditic line.

(3) When the F locus is homozygous recessive (ff), the plant sex type depends on the expression pattern of the A gene (CsACS11). If CsACS11 is expressed, ethylene rises above the EtC but stays below the EtS, which can induce pistil initiation and activate CsACS2 expression, finally producing female flowers. If CsACS11 is silent, there is insufficient ethylene (< EtC); therefore, the pistillate primordia cannot initiate, while stamen development is released, resulting in male flowers. Combining these two conditions, the genotype ffMMAA results in a monoecious individual.

(4) As in hermaphroditic lines, when the m locus is homozygous recessive, stamens in female flowers are not suppressed, and the genotype ffmmAA results in andromonoecious plants.

(5) When both the F and A loci are mutated (ffaa), no gene can produce ethylene up to the EtC, in which case CsACS2 and the pistillate primordia cannot be induced. Therefore, only male flowers can develop, and an androecious plant emerges (Li et al., 2012; Tao et al., 2018).

Because no natural CsWIP1 or CsACO2 mutations were used in these previous genetic analyses, all the plants studied were assumed to have wild-type CsWIP1 and CsACO2 genes.
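A schematic Python sketch may help make the threshold logic above concrete. The numeric threshold values, the Boolean encoding of functional alleles, and the `cucumber_sex_type` helper are illustrative assumptions rather than a published model; real ethylene regulation is continuous and bud-specific, and wild-type CsWIP1 and CsACO2 are assumed throughout:

```python
# Illustrative encoding of the two-threshold model; threshold values are
# arbitrary placeholders and True/False stand for functional/mutant alleles.
ETC, ETS = 1.0, 2.0  # carpel-promotion and stamen-suppression thresholds

def cucumber_sex_type(F, M, A):
    """Map F (CsACS1G present), M (functional CsACS2), and A (functional
    CsACS11, expressed in prospective female buds) to the plant sex type,
    assuming wild-type CsWIP1 and CsACO2."""
    ethylene = ETC if (F or A) else 0.0   # CsACS1G or CsACS11 reaches EtC
    if ethylene >= ETC and M:             # EtC-level ethylene induces CsACS2,
        ethylene = ETS                    # which pushes ethylene to EtS
    if F:
        return "gynoecious" if ethylene >= ETS else "hermaphroditic"
    if A:
        return "monoecious" if ethylene >= ETS else "andromonoecious"
    return "androecious"                  # EtC never reached: male flowers only

# The five genotype classes discussed in the text:
assert cucumber_sex_type(F=True,  M=True,  A=False) == "gynoecious"       # FFMM
assert cucumber_sex_type(F=True,  M=False, A=False) == "hermaphroditic"   # FFmm
assert cucumber_sex_type(F=False, M=True,  A=True)  == "monoecious"       # ffMMAA
assert cucumber_sex_type(F=False, M=False, A=True)  == "andromonoecious"  # ffmmAA
assert cucumber_sex_type(F=False, M=True,  A=False) == "androecious"      # ffMMaa
```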
Analysis in melon showed that CmWIP1 negatively regulates the expression of CmACS7 (whose ortholog in cucumber is CsACS2) (Martin et al., 2009). Ethylene can downregulate the expression of CsWIP1, meaning that ethylene at the EtC may induce CsACS2 by suppressing CsWIP1. In a mutant Cswip1 background, the expression of CsACS2 is released, directly producing ethylene at the EtS, which is enough to initiate pistillate primordia and arrest staminate primordia, resulting in female flowers and gynoecious plants (Hu et al., 2017). In addition, CsWIP1 might directly suppress pistillate primordia initiation via an unknown pathway, as proposed in melon (Boualem et al., 2015). The enzyme encoded by CsACO2 is considered to act together with at least CsACS1G and CsACS11 to complete ethylene synthesis. Therefore, Csaco2 mutants prevent ethylene from reaching the EtC, resulting in an androecious phenotype (Chen et al., 2016). In the future, the sex type of the Cswip1 Csaco2 double mutant should be investigated to identify the relationship between CsACS2 and CsACO2 in cucumber.

In melon, except for the dominant F gene, all the sex-type-related genotype-phenotype relationships are similar to those in cucumber. Therefore, plant femaleness depends on the mutant Cmwip1 (the g gene in melon) and/or CmACS11 expression (the A gene in melon). The bisexual flowers in melon are the result of mutations in CmACS7 (the m gene in melon). Classical genetics confirmed that the genotype MMAAGG results in a monoecious plant, mmAAGG results in andromonoecy, MMgg results in gynoecy, mmgg results in a hermaphrodite, and aaGG results in androecy (Poole and Grimball, 1939; Kenigsbuch and Cohen, 1987, 1990). An interesting difference was observed between monoecious cucumber and melon plants with the same (ff)MMAA(GG) genotype: femaleness on the main vine of cucumber (although it is often changeable) is higher than that in melon (in which usually all nodes produce male flowers). This might reflect different expression activities of the A genes in these two species. We have identified that ethylene can induce the expression of CsACS11 (Tao et al., 2018). However, there is no CsACS1G in monoecious lines to autonomously produce ethylene. Therefore, other physiological or developmental cues inducing CsACS11 or CmACS11 should be identified in future studies, which might help to answer the question stated by Ma and Pannell (2016): "what decides whether ACS11 is on or off in particular flowers."

In watermelon, three recessive alleles were suggested to control the sex types: andromonoecious (a), gynoecious (gy), and trimonoecious (tm) (Ji et al., 2015). Therefore, the phenotype-genotype relationships are proposed as: monoecious, AAGyGyTmTm; trimonoecious, AAGyGytmtm; andromonoecious, aaGyGy; gynoecious, AAgygyTmTm; gynomonoecious, AAgygytmtm; and hermaphroditic, aagygy. Recent gene cloning helped to identify that the a gene is a mutation in ClACS7/CitACS4 and the gy gene is a mutation in ClWIP1, which are orthologs of CsACS2/CmACS7 and CsWIP1/CmWIP1, respectively (Boualem et al., 2016; Ji et al., 2016; Manzano et al., 2016; Zhang et al., 2019). Therefore, it seems that the WIP1-CsACS2/CmACS7 relationship is conserved in all the studied cucurbits. There is little information about the phenotype-genotype relationship in zucchini.
Sex determination in individual floral buds of zucchini appears to be regulated by ethylene in the same way as in melon and cucumber, and the ACS7 ortholog CpACS27A is also involved in bisexual flower development (Manzano et al., 2010a, 2010b, 2011; Martínez et al., 2014). Therefore, we believe that the conserved ACS11-WIP1-CsACS2/CmACS7/CpACS27A pathway also exists in zucchini.

SUGGESTION FOR GENE NOMENCLATURE

An urgent task is to unify the names of genes controlling similar sex types in cucurbits. For example, the abbreviated names for andromonoecious (now m, but a before 2015) and androecious (now a) in melon easily cause confusion. Moreover, the genotype symbols in watermelon are also liable to cause misunderstanding. Because all the genes controlling the appearance of bisexual flowers are Arabidopsis ACS7 orthologs, we suggest that the symbol m be used for the andromonoecious phenotype, just as in cucumber and melon. Likewise, we suggest the symbol g for the recessive gynoecy produced by wip1 ortholog mutations. The structures of bisexual flowers in the cucumber trimonoecious mutant (hypogynous) and in trimonoecious watermelon (seemingly normal) are different; therefore, it is necessary to retain the current gene nomenclature (tr and tm). Another gene symbol that needs to be discussed is f in cucumber. When we reexamined the F gene and its genomic structure, it was clear that the F locus, with its tandem 30-kb repeat, is unique to gynoecious cucumber lines. This locus does not exist in lines with only one copy of the 30-kb region, such as monoecious, andromonoecious, and androecious lines. Previously, the genotypes of these latter three lines were usually written as homozygous ff, with f representing the recessive allele at the F locus. However, we now know that there is only one form of the gene (CsACS1G) at this locus, and no studies have demonstrated a mutant or nonfunctional allele. In the traditional understanding, the dominant F gene is CsACS1G and the recessive f is considered to be CsACS1; however, this is incorrect. All cucumber lines tested have CsACS1, and only gynoecious plants possess both CsACS1 and CsACS1G. These findings mean that CsACS1 and CsACS1G are not alleles of the same gene and are not located at the same locus (they are approximately 30 kb apart). Therefore, at this time, before a CsACS1G mutation is discovered, we suggest that the f symbol makes no sense and should be omitted from genotypes.

OTHER ASPECTS RELATING TO SEX TYPE

Detailed descriptions of the many transcriptomic, epigenomic, and metabolomic studies related to sex type are beyond the scope of this manuscript (Miao et al., 2011; Wang et al., 2014; Gao et al., 2015; Zhang et al., 2017b; Lai et al., 2017; Latrasse et al., 2017; Lai et al., 2018a, 2018b; Song et al., 2018; Wang et al., 2018; Zhou et al., 2018; Wang et al., 2019; Pawełkowicz et al., 2019b). The sex-related genes and cues identified in these studies are associated with temperature, photoperiod, blue/red light, hormone synthesis and signaling, lipid and sugar metabolism, the cell cycle, etc. However, we do not know whether these genes are causes or consequences of the sex-type changes, and we cannot summarize the accurate locations of these genes in the gene pathway of sex determination.

GENERAL CHARACTERISTICS OF SEX-CONTROLLING GENES

It is not surprising that nearly all the sex-controlling genes are "ethylene synthases."
" Therefore, the expression regulation of each gene should be conducted, and the specific transcription regulators for a given sex-control gene should be identified. In addition, exploring new sex-related mutations has always been a priority in sex determination research. Considering all the known genes that directly control the sex types in cucurbits, we propose that a sex-related gene may have more than one of the following characteristics: (1) the gene product directly participates in ethylene synthesis or signal transduction, (2) the gene or its product directly or indirectly regulates a known sex-control gene, (3) gene expression responds to ethylene or the factors interfering with ethylene synthesis or signaling, and (4) critically, mutation of the gene can change sex type. We believe that these characteristics could help to identify new sex-related genes in the future. FUTURE PROSPECTS The mechanism of sex determination is of great interest to researchers. Meanwhile, the close relationship between sex type and yield in cucurbits has attracted increased attention in plant breeding. At present, a model of the ethylene core has been established in four cucurbit species (cucumber, melon, watermelon, and zucchini). However, the direct regulators and the molecular details remain poorly understood. Exploring more mutations and using reverse genetics are the most effective way to identify a gene-controlling sex differentiation. In addition, since the critical developmental stage in sex determination is clear, more precise approaches, such as laser microdissection and single-cell RNA sequencing have the potential to reveal the detailed gene pathways involved in this process. We hope that the suggestions proposed in this review are conducive to revealing the mechanisms of sex determination in cucurbits. AUTHOR CONTRIBUTIONS DL and YS collected and organized the references. HN supplied the results of section assay. ZL conducted the figures. DL and ZL wrote the paper. ACKNOWLEDGMENTS We apologize to the authors not cited due to space limitations, and thanks to Drs. Jinjing Sun, Changlong Wen, and Yunli Wang for communication about sex determination researches in cucurbits.
Morphological observations and phylogenetic position of the parasitoid nanoflagellate Pseudopirsonia sp. (Cercozoa) infecting the marine diatom Coscinodiscus wailesii (Bacillariophyta)

Sunju Kim, Chang Beom Jeon and Myung Gil Park*
Department of Oceanography, Pukyong National University, Busan 48513, Korea
LOHABE, Department of Oceanography, Chonnam National University, Gwangju 61186, Korea

INTRODUCTION

Planktonic diatoms are key primary producers and among the dominant phytoplankton in aquatic ecosystems. They are almost constantly confronted with parasites in the aquatic environment and are susceptible to infections by a variety of eukaryotic parasitoids (i.e., organisms that always kill their hosts to complete their life cycles), including cercozoans, chytrids, dinoflagellates, euglenoids, oomycetes, and stramenopiles (e.g., Drebes 1966, Drebes and Schnepf 1988, 1998, Kühn et al. 1996, Tillmann et al. 1999, Bulman et al. 2001). Among these parasitoids, in particular, species belonging to the genus Pirsonia have been well documented as parasitoids infecting a number of marine diatoms (Schnepf et al. 1990, Kühn et al. 1996, Schweikert and Schnepf 1997). The Pirsonia species share some morphological characteristics of their life cycles, including the formation of a trophosome and an auxosome. A typical infection initiates when the motile flagellate attaches to the host diatom frustule and then penetrates into the host cell using a pseudopodium. The pseudopodium inside the host cell becomes a trophosome that digests host protoplasm in a food vacuole and transports the digested host materials to the auxosome, the remaining part of the parasitoid flagellate outside the host cell. Seven species in the genus, Pirsonia diadema, P. formosa, P. eucampiae, P. guinardiae, P. mucosa, P. punctigerae, and P. verrucosa, have so far been described only from the North Sea (Schnepf et al. 1990, Kühn et al. 1996, 2004). Recent molecular phylogenetic analyses demonstrated that P. mucosa is very distantly related to the other Pirsonia species. While the other Pirsonia species clustered within the stramenopiles, forming a monophyletic clade, P. mucosa diverged within the heterogenic cercomonad group (Kühn et al. 2004). For this reason, along with additional morphological characteristics, P. mucosa was recently moved to a new genus, Pseudopirsonia, by Kühn et al. (2004).

The parasitoid nanoflagellates of the Pirsonia/Pseudopirsonia species are known, from laboratory cross-infection experiments and field observations, to display different degrees of host specificity and host range (Kühn et al. 1996, 2004). For example, P. diadema, P. punctigerae, P. guinardiae, and P. verrucosa are relatively host-specific. While P. diadema and P. punctigerae infect only diatoms of the genera Coscinodiscus and Thalassiosira, respectively, the latter two parasitoids (P. guinardiae and P. verrucosa) infect only species belonging to the genus Guinardia. By contrast, P. formosa and Pseudopirsonia mucosa have a broad host range and can successfully parasitize various diatom species across several host genera. Nonetheless, the nonspecific P. formosa and Pseudopirsonia mucosa did not infect the tested Coscinodiscus species (i.e., C. concinnus, C. granii, and C. wailesii) (Kühn et al. 1996, 2004). So far, the sole Pirsonia species known to be capable of infecting Coscinodiscus species, including C. wailesii, is P. diadema.

During sampling at Nokdong harbor of Korea in January 2017, marine diatom Coscinodiscus wailesii cells infected by a novel parasitoid nanoflagellate were encountered. Here, we present the developmental morphological characteristics of the parasitoid relative to those of the previously described P. diadema. In addition, molecular phylogenetic analyses based on 18S rRNA gene sequences were performed to determine the phylogenetic affiliation of the novel parasitoid nanoflagellate to other Pirsonia/Pseudopirsonia species.
MATERIALS AND METHODS

Sampling and cell isolation

Concentrated seawater samples were collected using a 20 μm plankton net from Nokdong harbor, Korea (34°31′26.74″ N, 127°8′8.38″ E) on Jan 11, 2017 and transported to the lab for further examination. Water temperature and salinity were measured using a Yellow Spring Instrument (YSI, Yellow Springs, OH, USA). The diatom C. wailesii cells infected by the parasitoid nanoflagellates were individually isolated using a capillary pipet under an inverted microscope (Axio Vert 1, Carl Zeiss Inc., Göttingen, Germany), washed in serial drops of filtered seawater, and transferred to six-well plates containing 4 mL of filtered seawater. The well plates were placed on a shelf of an incubator at 20°C on a 14 : 10 light : dark cycle, with cool-white fluorescent lamps providing 100 μmol photons m-2 s-1, and were examined at every 12-h interval.

Light microscopy

Specimens were observed using an inverted microscope (Axio Vert A1, Carl Zeiss Inc., Oberkochen, Germany) with differential interference contrast optics. Light micrographs were taken at 100×-400× magnification using a Full HD mini box camera (MediCAM-X, Comart System, Seoul, Korea) photomicrographic system coupled with the microscope (Kim et al. 2015).

DNA extraction, polymerase chain reaction, and sequencing

About 150 cells of parasitoid nanoflagellates detached from the host cells at the late stage of infection were collected using a capillary pipet, washed several times with sterile filtered seawater, placed into polymerase chain reaction (PCR) tubes, and finally pelleted by centrifugation. Total genomic DNA was extracted from the pellets using a chelex extraction method (Kim and Park 2014). The 18S rRNA gene region of the parasitoid nanoflagellate was amplified using the primer set Euk328f and Euk329r (Moon-Van Der Staay et al. 2000). PCRs were performed in a total of 20 μL of reaction solution containing 3 μL of DNA (chelex extract) as a template, using an AccuPower PCR premix kit (Bioneer, Daejeon, Korea). The reactions were conducted using a C1000 Touch Thermal Cycler (Bio-Rad, Hercules, CA, USA) with the following conditions: an initial denaturing step at 95°C for 4 min, followed by 40 cycles (95°C for 20 s, 55°C for 20 s, and 72°C for 1 min), with a final extension at 72°C for 5 min. Amplified products were visualized on EcoDye (SolGent Co., Daejeon, Korea)-stained 1% agarose gels, purified with a PCR purification kit (Bioneer, Daejeon, Korea), and sequenced with primers (Euk328f, Euk329r, Euk516r, and Euk1209r) using a Big-Dye Terminator v3.0 Cycle Sequencing kit (Applied Biosystems, Foster City, CA, USA) and an ABI model 3730 sequencer (Applied Biosystems), according to the manufacturer's protocols. The amplicons were sequenced until at least double-stranded coverage was reached. ContigExpress (Vector NTI ver. 10.1; Invitrogen, Grand Island, NY, USA) was used to edit out low-quality regions and assemble the sequence reads. The assembled sequences were verified by BLASTN search against the NCBI database and deposited in GenBank (accession number MF615236).
Alignments and phylogenetic analyses

Sequences were primarily aligned using CLUSTALX 1.83 (Larkin et al. 2007) and were further refined manually using MacClade 4.08 (Maddison and Maddison 2000). Unambiguously aligned positions were selected and applied to the phylogenetic analyses. Modeltest v.3.7 (Posada and Crandall 1998) was used to select the most appropriate model of substitution for the maximum likelihood (ML) method. GTR + I + Γ (i.e., general time reversible with invariant sites and gamma rate correction) was identified as the best-fit model for the 18S rDNA dataset. ML analyses were performed using RAxML 8.0.0 with the general time-reversible model with gamma correction and 1,000 replicates (Stamatakis 2014). Bayesian analysis used MrBayes 3.1.1 (Ronquist et al. 2012), running four simultaneous Markov chain Monte Carlo chains for 2,000,000 generations and sampling every 100 generations, following a prior burn-in of 100,000 generations (1,000 sampled trees were discarded). A consensus tree was constructed from 19,001 post-burn-in trees.

RESULTS AND DISCUSSION

Morphological features

During the sampling of the marine diatom Coscinodiscus wailesii parasitized by a novel parasitoid nanoflagellate, water temperature and salinity were 8°C and 32, respectively. Although the diatom Rhizosolenia setigera was predominant and various other diatoms also co-occurred in the samples, infections by the parasitoid nanoflagellates were observed only on C. wailesii. The infected C. wailesii cells were easily distinguishable due to the appearance of "a diadem" (Figs 1 & 2). Such an appearance was similar to that of the parasitoid nanoflagellate Pirsonia diadema infecting the marine diatoms Coscinodiscus spp. from the North Sea near Helgoland. The Coscinodiscus cells heavily infected by P. diadema also displayed the appearance of a "diadema," in that every rimoportula forming a ring at the margin of the diatom valve was occupied by attachment of the parasitoids (Kühn et al. 1996).
Morphological features

During the sampling of the marine diatom Coscinodiscus wailesii parasitized by a novel parasitoid nanoflagellate, water temperature and salinity were 8°C and 32, respectively. Although the diatom Rhizosolenia setigera was predominant and various other diatoms also co-occurred in the samples, infections by the parasitoid nanoflagellates were observed only on C. wailesii. The infected C. wailesii cells were easily distinguishable due to the appearance of "a diadem" (Figs 1 & 2). Such an appearance was similar to that of the parasitoid nanoflagellate Pirsonia diadema infecting the marine diatoms Coscinodiscus spp. from the North Sea near Helgoland. The Coscinodiscus cells heavily infected by P. diadema also displayed the appearance of a "diadem," in that every rimoportula forming a ring at the margin of the diatom valve was occupied by attached parasitoids (Kühn et al. 1996).

Microscopic observations of live infected C. wailesii cells individually isolated from the field samples at 12-h intervals (Fig. 1A-D) revealed that the number of parasitoid nanoflagellates gradually increased over time, with the host protoplast being ingested and almost completely consumed after 36 h (Fig. 1D). Infections by the novel parasitoids were mostly observed at the margin of the diatom valve, but were also found on the valve face (Fig. 2A). Once the motile flagellate attached to the host, its flagella disappeared during the feeding stage (Fig. 2B). The attached flagellate penetrated the frustule of the host diatom using a pseudopodium, which later became a trophosome inside the diatom, with part of the flagellate, which became an auxosome, remaining outside the host cell. The trophosomes gradually ingested the host protoplast phagocytotically and fused with several adjacent trophosomes as they grew over time (Fig. 2C). This developmental process of the trophosomes more closely resembled that of Pseudopirsonia mucosa than that of other Pirsonia species (Kühn et al. 2004). Pseudopirsonia mucosa attaches to the diatom frustule and forms an unusually broad, laterally situated pseudopod, while other Pirsonia species attach with a posteriorly protruded pseudopod (Kühn et al. 2004).

The auxosomes of the novel parasitoid had a globular shape, 12 ± 0.2 μm (mean ± SE, n = 40) in diameter (Fig. 2A-C). The size of the auxosomes in the new parasitoid was similar to that of other Pirsonia species, ranging from 10 to 15 μm, but smaller than that of Pseudopirsonia mucosa, at 18 μm in diameter (Kühn et al. 2004). The auxosomes in the new parasitoid divided longitudinally, and the resulting daughter cells appeared to remain connected with trophosomes (Fig. 2A & B). Such a division pattern of the auxosomes of our Pseudopirsonia sp. was more similar to that of other Pirsonia species than to that of Pseudopirsonia mucosa, in which the auxosomes divide as a morula shape covered by a mucilaginous coat (Kühn et al. 1996, 2004).

Mature flagellates of the new parasitoid Pseudopirsonia sp. had an elliptical shape and were flattened laterally, with a size of 7.3 ± 0.2 μm × 14.4 ± 0.6 μm (mean ± SE, n = 3) (Fig. 2D). Their movement showed a slowly gliding motion, in contrast to the flagellates of Pirsonia species, which have a rounded to oval shape and a slightly jerking swimming movement (Kühn et al. 1996, 2004).

Phylogenetic analyses

Partial 18S rRNA gene sequences of the novel parasitoid collected from two infected C. wailesii cells were obtained, and all sequences (1,726 nucleotides in length) of the isolates were identical. A BLAST search of GenBank provided a 92% maximum match of the novel parasitoid sequence to those of several cercozoan genera, including Pseudopirsonia mucosa (AJ561116), Protaspis spp. (FJ824122-FJ824125), Cryothecomonas longipes (AF29040), Thaumatomastix sp. (GQ144681), and Allas sp. (AY268040).

Phylogenetic analyses inferred from 18S rRNA gene sequences revealed that the parasitoid Pseudopirsonia sp. infecting C. wailesii fell within the cercozoan groups and branched as a sister lineage of the clade of Pseudopirsonia mucosa and the undescribed Cercomonas sp. SIC7235, with high statistical support (bootstrap proportion [BP] / posterior probability [PP], 91 / 1.0) (Fig. 3). The marine sand-dwelling cercozoan Clautriavia biflagellata showed a sister relationship to Pseudopirsonia sp., with moderate statistical support (BP / PP, 76 / 1.0). Pairwise comparison of the partial 18S rDNA sequences showed 122 base differences between Pseudopirsonia sp. and Pseudopirsonia mucosa based on 1,669 unambiguously aligned sites, a dissimilarity of 7.3%. By comparison, all species in the genus Pirsonia formed a monophyly with robust statistical support (BP / PP, 100 / 1.0) and placed within the stramenopiles in the 18S rRNA gene tree (Fig. 3). The Pirsonia species were very closely related to each other, showing a low dissimilarity of only 0.2-2.4% (Kühn et al. 2004). The best trees generated with the ML and Bayesian methods were largely congruent, and in those trees the Pirsonia species diverged into two distinct lineages, one comprising three P. formosa strains and P. diadema and the other including P. punctigerae and the clade of P. verrucosa and P. guinardiae, although inner nodes for these relationships were weakly to moderately supported (Fig. 3).
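The 7.3% figure quoted above is an uncorrected p-distance. Below is a minimal sketch, with placeholder sequences, of how such a distance is computed over aligned positions while ignoring gap sites.

```python
# Sketch of the pairwise-distance comparison above: uncorrected p-distance over
# aligned sites, skipping positions where either sequence has a gap.
def p_distance(a: str, b: str) -> float:
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    diffs = sum(x != y for x, y in pairs)
    return diffs / len(pairs)

print(p_distance("ACGT-ACGTAC", "ACGA-ACGTTC"))  # 2 differences / 10 sites = 0.2
# 122 differences over 1,669 compared sites gives the reported 7.3%:
print(f"{122 / 1669:.1%}")  # 7.3%
```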
Fig. 3. RAxML phylogenetic tree inferred from 1,885 unambiguously aligned sites of 18S rDNA sequences, including 47 stramenopile and 43 cercozoan ingroup taxa and two glaucophyte sequences as outgroup taxa. Numbers shown on nodes are support values (bootstrap percentages from the RAxML fast bootstrapping analysis and Bayesian posterior probabilities) higher than 60% and 0.6, respectively. Black circles indicate robust statistical support (bootstrap proportion / posterior probability, 100 / 1.0). Open circles represent posterior probabilities of 1.0.

CONCLUSION

The parasitoid nanoflagellate Pseudopirsonia sp. infecting the marine diatom Coscinodiscus wailesii presented in this study was unique in several ways compared to other previously described Pirsonia / Pseudopirsonia species. While the developmental process of the trophosome was more similar to that of Pseudopirsonia mucosa, the division pattern of the auxosome was similar to that of Pirsonia species. Furthermore, phylogenetic analyses based on 18S rRNA gene sequences revealed that the parasitoid Pseudopirsonia sp. fell within the cercozoan group instead of the stramenopiles containing the other Pirsonia species. The new parasitoid nanoflagellate was closely related to Pseudopirsonia mucosa, but showed 7.3% sequence dissimilarity. All of these developmental and molecular characteristics suggest that the parasitoid nanoflagellate infecting the diatom C. wailesii is a new Pseudopirsonia species. Unfortunately, however, failure to establish the host diatom-parasitoid system in culture precluded closer examination of the parasitoid, including detailed morphological features of the motile flagellate and its host range. Further studies are needed to identify the potential new parasitoid observed in this study and better understand its autecology, as well as to investigate the diversity of species within the genus Pseudopirsonia, in which only one species has been reported.

Fig. 1. The new parasitoid nanoflagellate Pseudopirsonia sp. infecting the diatom Coscinodiscus wailesii. Time series light microscopic images of the same infected C. wailesii cell just after isolation from the field sample (A) and at 12-h (B), 24-h (C), and 36-h (D) incubations. Note the numerous auxosomes (arrows) increasing over time at the margin of the diatom valve and the lateral large trophosomes (arrowheads) formed by fusion with adjacent trophosomes inside the host diatom. Scale bars represent: A-D, 100 μm.

Fig. 2. Light micrographs of Pseudopirsonia sp. infecting the diatom Coscinodiscus wailesii. (A & B) Auxosomes at the margin of the host diatom valves and valve faces. Arrows indicate the first division of the primary auxosomes of the parasitoid. (C) Auxosomes and trophosomes of the parasitoids. Arrowhead indicates fused trophosomes of the parasitoids. (D) The motile stage of the parasitoid. Arrow indicates the mature flagellate. Scale bars represent: A-D, 20 μm.
2019-03-26T19:15:02.995Z
2017-09-15T00:00:00.000
{ "year": 2017, "sha1": "df90fc40c6c60fbcbd1c5e801b8e24daff346cb5", "oa_license": "CCBYNC", "oa_url": "http://www.e-algae.org/upload/pdf/algae-2017-32-7-28.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "df90fc40c6c60fbcbd1c5e801b8e24daff346cb5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
266512241
pes2o/s2orc
v3-fos-license
Effect of time delay in inter-hospital transfer on outcomes of endovascular treatment of acute ischemic stroke

Abstract Background Endovascular treatment (EVT) with mechanical thrombectomy is the standard of care for large vessel occlusion (LVO) in acute ischemic stroke (AIS). The most common approach today is to perform EVT in a comprehensive stroke center (CSC) and transfer relevant patients for EVT from a primary stroke center (PSC). Rapid and efficient treatment of LVO is a key factor in achieving a good clinical outcome. Methods We present our retrospective cohort of patients who underwent EVT between 2018 and 2021, including direct admissions and patients transferred from a PSC. Primary endpoints were time intervals (door-to-puncture, onset-to-puncture, door-to-door) and favorable outcome (mRS ≤ 2) at 90 days. Secondary outcomes were successful recanalization, mortality rate, and symptomatic intracranial hemorrhage (sICH). Results Among a total of 405 patients, 272 were admitted directly to our EVT center and 133 were transferred; there was no significant difference between groups in the occluded vascular territory, baseline NIHSS, wake-up strokes, or thrombolysis rate. Directly admitted patients had a shorter door-to-puncture time than transferred patients (190 min vs. 293 min, p < 0.001). The median door-to-door shift time was 204 min. We found no significant difference in functional independence, successful recanalization rates, or sICH rates. The most common reason to exclude transferred patients from EVT was clinical or angiographic improvement (55.6% of patients). Conclusion Our results show that transferring patients to the EVT center does not affect clinical outcomes, despite the expected delay in EVT. Reassessment of patients upon arrival at the CSC is crucial, and patient selection should be based on both time and tissue window.

Introduction

Endovascular treatment (EVT) with mechanical thrombectomy is the standard of care for acute ischemic stroke (AIS) patients with proximal large vessel occlusion (LVO) (1-3). Rapid reperfusion is critical in order to achieve favorable clinical outcomes (4).

EVT is not accessible in primary stroke centers (PSC); therefore, selected EVT candidates are transferred from these centers (5). The common practice for managing these patients is to administer intravenous thrombolysis (IVT) at the PSC and later transport them to a comprehensive stroke center (CSC) equipped with EVT capabilities ("drip-and-ship") (5-7).

Patient transfer leads to a delay in onset-to-puncture time (8). Previous studies have shown that transferred patients suffer from worse functional outcomes (modified Rankin Scale [mRS] > 3) (9,10) and higher mortality rates compared to directly admitted patients (11). Moreover, prolonged transfer time results in the exclusion of patients from EVT (12,13). There is an ongoing debate about whether the decision to administer EVT should be based on a time window or a tissue window (e.g., CT findings, ASPECT score, CT perfusion) (14-17).

In Israel, there are 9 CSCs and another 16 PSCs that provide IVT only. Geographically, the distribution of CSCs is uneven, causing populations from the periphery to be significantly delayed in arrival at a CSC after AIS onset.
Our aim in this study was to assess the delay in EVT among transferred patients and its effect on clinical and procedural outcomes. In addition, we analyzed radiological and clinical differences in patients who were transferred but did not undergo EVT, and the reasons for EVT exclusion. These findings could improve patient selection when considering transfer to a CSC.

Study design and population

We conducted a single-center retrospective analysis of all AIS patients with LVO of the anterior or posterior circulation who underwent EVT between 2018 and 2021 at Rabin Medical Center, Israel.

Our cohort was dichotomized into a group of directly admitted patients (DAG) and a group of transferred patients (TG) who arrived first at a PSC. Additional data were collected on patients with AIS due to LVO who were transferred to our center for EVT but were found ineligible after re-evaluation.

All transferred patients were clinically re-evaluated using the National Institutes of Health Stroke Scale [NIHSS (18)]. Selected patients, specifically those with prolonged door-to-door time (over 1 h) or clinical worsening, were referred for a repeat stroke imaging protocol that included CT, CTA, and CTP, followed by reconsideration of eligibility for EVT. An experienced stroke neurologist reviewed patient records to determine the reason for exclusion from EVT and classified it as one of the following: (1) clinical improvement (i.e., repeated NIHSS ≤4); (2) imaging improvement (i.e., recanalization on repeated CTA); (3) clinical worsening; (4) imaging worsening (i.e., new ASPECT score less than 5 on repeat NCCT); (5) patient refusal of EVT.

The functional outcome (mRS at 90 days after stroke) was consistently recorded either at a follow-up visit to a post-stroke outpatient clinic or virtually by telephone.

Patients with in-hospital acute stroke, occurring while hospitalized for reasons other than ischemic stroke, were excluded. We also excluded patients who were transferred for observation, pending a decision regarding EVT based on clinical worsening.

The study was approved by the local ethics committee. Due to the retrospective, non-interventional design of this work, informed consent was not required. The study followed the guidelines for observational cohorts according to Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) (19).

Outcome endpoints

The primary endpoints were the following time intervals: (1) onset-to-CSC (OTC), the time from onset of symptoms to arrival at the CSC; (2) onset-to-puncture (OTP), the time from symptom onset to arterial puncture at the CSC; (3) door-to-puncture (DTP), the time from arrival at the first medical center (i.e., the CSC for directly admitted patients and the PSC for transferred patients) to arterial puncture for EVT; (4) door-to-door (DTD) shift, the time from arrival at the PSC to arrival at the CSC; and (5) functional independence at day 90, defined as mRS ≤ 2.

The secondary endpoints were successful recanalization measured by the thrombolysis in cerebral infarction (TICI) score, intracranial hemorrhage (ICH), both asymptomatic and symptomatic (defined as any CT-documented hemorrhage with a temporal relation to clinical deterioration), and the mortality rate.

A secondary analysis was performed on patients who were transferred and excluded from EVT with respect to time intervals (OTC and DTD shift), functional independence at day 90, and reasons for not performing EVT.
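As a minimal sketch (the timestamps and variable names are hypothetical, not study data), the four time intervals defined above reduce to simple differences between event times:

```python
# Illustrative computation of the study's time intervals for one transferred patient.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M"
onset    = datetime.strptime("2021-03-01 08:00", fmt)  # symptom onset
psc_door = datetime.strptime("2021-03-01 09:10", fmt)  # arrival at PSC (first center)
csc_door = datetime.strptime("2021-03-01 12:34", fmt)  # arrival at CSC
puncture = datetime.strptime("2021-03-01 13:30", fmt)  # arterial puncture

def minutes(delta):
    return int(delta.total_seconds() // 60)

print("OTC:", minutes(csc_door - onset))      # onset-to-CSC
print("OTP:", minutes(puncture - onset))      # onset-to-puncture
print("DTP:", minutes(puncture - psc_door))   # door (first center)-to-puncture
print("DTD:", minutes(csc_door - psc_door))   # door-to-door shift -> 204 min here
```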
Statistical analysis

Statistical analysis was performed using IBM SPSS Statistics for Windows, version 25.0 (IBM Corp., Armonk, NY). Qualitative data were presented as frequencies and percentages, and Pearson's chi-squared test was used for comparisons. Quantitative data were presented as the median (IQR) for non-normally distributed data and as the mean ± SD for normally distributed data. A t-test was used to compare demographic and time interval data. A p value ≤0.05 was considered statistically significant.

Baseline characteristics

There were no demographic differences between the two groups. The median age was 75 years (IQR, 64.3-83.8) and 77 years (IQR, 66.5-83.5) in the DAG and TG groups, respectively. There were 139 (51%) and 66 (50%) female patients, respectively. There were no differences between the groups in cardiovascular risk factors, previous stroke or transient ischemic attack (TIA) percentages, or NIHSS at presentation: median 14 (IQR, 9-17) and 14 (IQR, 9-17.5) in the DAG and TG, respectively. There were no differences between the groups in the vascular territory of the occluded vessel, wake-up stroke, or IVT percentage. Table 1 presents baseline clinical and demographic data.

Time intervals

The TG had a longer mean OTC time of 404 ± 298 min, compared to 256 ± 287 min for the DAG (p < 0.001), and a longer mean DTP time of 239 ± 161 min, compared to 190 ± 116 min for the DAG (p < 0.001). There was no statistically significant difference in OTP time: 478 ± 300 and 428 ± 303 min for the TG and DAG, respectively. The DTD interval for the TG was 204 ± 154 min. Time intervals are presented in Table 2.

Procedural and clinical outcomes

There was no statistically significant difference in functional independence (mRS ≤ 2) at day 90, with 41 (31.3%) of the TG compared to 109 (40.5%) of the DAG (p = 0.074) (data shown in Table 3 and Figure 2). There was no difference between the two groups in the rate of successful reperfusion of TICI 2b-3: 121 (91.7%) of the TG and 236 (88.7%) of the DAG. Similar rates of ICH were documented in the two groups: 12 (9%) and 27 (9.9%) asymptomatic ICH, and 11 (8.3%) and 16 (5.9%) symptomatic ICH in the TG and DAG, respectively. No difference was found in the mortality rate at 90 days, with 30 (22.9%) cases in the TG and 71 (26.4%) in the DAG (Table 3).

Transfers without EVT

Thirty-six patients were transferred and did not undergo EVT. Transferred patients without EVT had baseline characteristics similar to those of transferred patients with EVT, as shown in Supplementary Table 1s. The incidence of wake-up strokes and IVT was similar in these groups of patients. Transferred patients without EVT had a less severe stroke presentation, with a median NIHSS of 6 (IQR, 2.5-12.5) compared to 14 (IQR, 9-17.5) in transferred patients with EVT (p = 0.001), and a higher percentage of LVO involving the posterior circulation (Supplementary Table 1s).

The time intervals of patients without EVT were similar to those with EVT, with an OTC time of 470 ± 384 (mean ± SD) min and 404 ± 298 min, respectively, and a DTD time of 254 ± 235 and 204 ± 154 min, respectively (Supplementary Table 2s).

The group transferred without EVT achieved a more favorable outcome at 90 days (mRS ≤ 2) compared to the group transferred with EVT: 23 patients (63.6%) compared to 41 (31.3%), respectively (p < 0.001). There was no statistically significant difference in mortality rates at 90 days between the two groups (Supplementary Table 2s).
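The analyses above were run in SPSS; the sketch below reproduces two of them in Python. The chi-squared counts come from the reported favorable-outcome figures (denominators back-calculated from the percentages, so they are inferred rather than quoted), and the t-test arrays are synthetic stand-ins for the per-patient DTP times.

```python
# Equivalent sketch of the group comparisons (not the authors' SPSS workflow).
import numpy as np
from scipy import stats

# mRS <= 2 at 90 days: 109/269 in the DAG vs. 41/131 in the TG (denominators inferred).
table = np.array([[109, 269 - 109],
                  [41, 131 - 41]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Independent t-test on DTP times; arrays simulated from the reported mean +/- SD.
rng = np.random.default_rng(42)
dag_dtp = rng.normal(190, 116, 272)
tg_dtp = rng.normal(239, 161, 133)
t, p = stats.ttest_ind(dag_dtp, tg_dtp)
print(f"t = {t:.2f}, p = {p:.4f}")
```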
The most common reason why EVT was excluded was either clinical or imaging improvement, in 20 patients (55.6%), of whom 4 had only imaging improvement (spontaneous revascularization without significant clinical change). Fifteen patients (41.7%) had either clinical or imaging worsening and were no longer suitable for EVT, of whom only one had isolated clinical worsening without supporting imaging findings. One patient refused EVT on arrival (Supplementary Figure 1s).

Discussion

In the present study, we found that the time delay caused by inter-hospital transfer in patients with LVO who were candidates for EVT did not affect clinical outcomes. Transferred patients had longer arrival times at the CSC and longer DTP times. However, OTP time was not longer, suggesting that the decision-making process for EVT was relatively fast and efficient upon arrival at the CSC.

Previous studies have demonstrated that delaying EVT in transferred patients results in worse clinical outcomes (3,9,11,20-22), suggesting that direct transfer to a CSC is more beneficial than the "drip-and-ship" strategy, even at the expense of delayed IVT (23,24). A large meta-analysis on the subject also favored the direct admission approach (25) (Supplementary Table 3s lists the different studies comparing the clinical outcomes of directly admitted and transferred patients).

The lack of differences between transferred and directly admitted patients found in our study has been described in previous works, although those had inclusion criteria limited to the anterior circulation only and onset-to-puncture times of up to 6 h (6,26) or 12 h (27). A recent randomized controlled trial also found no differences between directly admitted and transferred AIS patients, but it included patients with both LVO and non-LVO AIS (28). In contrast, our study included a wider time window and patients presenting with LVO in both the anterior and posterior circulation, which better reflects AIS patients treated with EVT.

There are no clear guidelines on whether transferred patients should undergo repeat imaging prior to EVT. Our practice is to re-evaluate the NIHSS score on arrival and perform a repeat imaging protocol that includes CT, CTA, and CTP in cases of delayed transfer or clinical worsening, followed by a re-evaluation of eligibility for EVT. This selection process is, to our knowledge, the major contributor to the good outcome of transferred patients in our cohort.

Transferred patients who did not receive EVT had lower NIHSS scores and higher rates of posterior circulation stroke. The most common reasons for avoiding EVT were either an improved NIHSS score on arrival or recanalization on imaging. These subgroups of patients had better clinical outcomes compared to transferred patients who underwent EVT. Indeed, previous studies have suggested that clinical improvement (29) is the most important reason to avoid EVT.

Less than half of our patients were not treated with EVT due to clinical and radiological deterioration, rendering them unsuitable for the procedure based on a worsening ASPECT score (30) or a large ischemic core on CTP (16).
It is important to emphasize that there was no significant difference in OTC and DTD times between transferred patients undergoing EVT and their counterparts not undergoing EVT, suggesting that it was not late arrival that excluded patients from EVT. This finding contrasts with previous studies that blamed time delay and deviation from the accepted time window as the primary causes for avoiding EVT or not transferring patients to a CSC (12,27).

Faster transfer times are imperative for good clinical outcomes and are the focus of several studies that predict longer transfer times in the elderly population and emphasize the importance of early communication with the receiving CSC (31,32). There are different strategies to reduce time intervals in AIS patients, including strategies aimed at improving workflow, having available staff members, considering local anesthesia or conscious sedation (33), and using either a countdown clock (34) or a feedback mechanism (35) to improve awareness of time. These strategies have been found to be effective in improving both time intervals and clinical outcomes. We believe that crucial contributing factors to the favorable clinical outcome in our cohort were the fast re-evaluation, organization, and response of the medical team in advance of the transfer.

Our study has several limitations. First, because our study is retrospective, some transferred patients not undergoing EVT may have been missed; however, we believe that our data provide a good understanding of the clinical considerations regarding EVT. Second, our institution does not have a uniform protocol for repeat imaging on arrival in transferred patients, which is subject to variation among decision-makers. Additional limitations were the small sample size and the single-center nature of this study.

Conclusion

There is an ongoing debate as to whether transferring patients to a CSC for EVT is a better approach than multiple low-volume thrombectomy units. Our results support the notion that transferring patients to an EVT center does not compromise clinical outcomes, despite the expected delay in EVT. Reassessment of patients upon arrival at the CSC is crucial, and patient selection should be based on both time and tissue window.

FIGURE 2. Functional outcome measured as modified Rankin Scale score at 90 days.

TABLE 2. Time intervals.

TABLE 3. Procedural and clinical outcomes.
2023-12-24T16:17:05.247Z
2023-12-22T00:00:00.000
{ "year": 2023, "sha1": "564e5ef80cbf9ea15c7fa99d081f0cc3b8de5e2e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2023.1303061/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "188871fb2b7b666c771e13290152de3b93e38ddb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
238424568
pes2o/s2orc
v3-fos-license
Serum IgG anti-SARS-CoV-2 Binding Antibody Level Is Strongly Associated With IgA and Functional Antibody Levels in Adults Infected With SARS-CoV-2

Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was first reported in December 2019 in Wuhan, China, and then rapidly spread causing an unprecedented pandemic. A robust serological assay is needed to evaluate vaccine candidates and better understand the epidemiology of coronavirus disease (COVID-19). Methods We used the full-length spike (S) protein of SARS-CoV-2 for the development of qualitative and quantitative IgG and IgA anti-S enzyme linked immunosorbent assays (ELISA). A total of 320 sera used for assay development were comprised of pandemic sera from SARS-CoV-2 infected adults (n=51) and pre-pandemic sera (n=269) including sera from endemic human coronavirus infected adults. Reverse cumulative curves and diagnostic test statistics were evaluated to define the optimal serum dilution and OD cutoff value for IgG anti-S and IgA anti-S ELISAs. The IgG and IgA anti-S, and three functional antibodies (ACE-2 receptor blocking antibody, lentipseudovirus-S neutralizing antibody, and SARS-CoV-2 neutralizing antibody) were measured using additional SARS-CoV-2 PCR positive sera (n=76) and surveillance sera (n=25). Lastly, the IgG and IgA anti-S levels were compared in different demographic groups. Results The optimal serum dilution for the qualitative IgG anti-S ELISA was 1:1024, yielding a 99.6% specificity, 92.2% sensitivity, 92.9% positive predictive value (PPV), and 99.6% negative predictive value (NPV) at a SARS-CoV-2 seroprevalence of 5%. The optimal serum dilution for the qualitative IgA anti-S ELISA was 1:128, yielding a 98.9% specificity, 76.5% sensitivity, 78.3% PPV, and 98.8% NPV at the same seroprevalence. Significant correlations were demonstrated between the IgG and IgA (r=0.833 for concentrations, r=0.840 for titers) as well as between IgG and three functional antibodies (r=0.811-0.924 for concentrations, r=0.795-0.917 for titers). The IgG and IgA anti-S levels were significantly higher in males than females (p<0.05), and in adults with moderate/severe symptoms than in adults with mild/moderate symptoms (p<0.001). Conclusion We developed a highly specific and sensitive IgG anti-S ELISA assay to SARS-CoV-2 using full length S protein. The IgG anti-S antibody level was strongly associated with IgA and functional antibody levels in adults with SARS-CoV-2 infection. Gender and disease severity, rather than age, play an important role in antibody levels.

INTRODUCTION

In December 2019, a novel coronavirus emerged in China that has subsequently proven to be the causative agent of an acute respiratory disease now known as coronavirus disease 2019 (COVID-19) (1), and has since sparked a pandemic. The virus was officially named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (2). It is the seventh coronavirus species to cross the species barrier causing respiratory infections in humans (3,4). Compared with the earlier SARS-CoV in 2003 and the Middle East Respiratory Syndrome Coronavirus (MERS-CoV) in 2012, SARS-CoV-2 has spread rapidly, causing at least 201 million cases and 4.3 million deaths globally as of August 6, 2021. Therefore, it is critical to develop mitigation strategies to control the spread of this emerging virus and use robust serological assays to determine the vulnerability of a population, define immune correlates, and evaluate vaccines.
SARS-CoV-2 is a positive-sense single-stranded RNA virus, measuring 50-200 nanometers in diameter. The virus has four structural proteins: S (spike), E (envelope), M (membrane), and N (nucleocapsid) proteins. The S protein is responsible for attaching to the host membrane-bound receptor, angiotensin-converting enzyme 2 (ACE-2), and fusing with the membrane of the host cell (5,6). Specifically, the N-terminal S1 subunit catalyzes attachment to the host receptor, and the C-terminal S2 subunit mediates membrane fusion. The S1 subunit is further divided into the N terminal domain (NTD) and the receptor binding domain (RBD). In adults infected with SARS-CoV-2, neutralizing antibodies are generated to the S protein and its RBD, both of which are major targets for vaccine development (7). Antibodies are also generated to other structural proteins, particularly to the N protein, which is often included in commercial diagnostic antibody tests (8).

Antibody assays are valuable tools that can be utilized for diagnosing an acute infection, determining seroprevalence in a population, evaluating the immunogenicity of vaccines, studying the antibody response induced by wild type infection, and establishing immune correlates of protection. Data from seroprevalence studies can be used to determine the vulnerability of a population and to identify those at risk for infection and reinfection. Sensitive and specific antibody assays that measure anti-SARS-CoV-2 antibodies are, therefore, crucial tools in the arsenal needed for gaining control of the SARS-CoV-2 pandemic. There are two major types of assays used to characterize serum antibody responses: assays that measure the presence, concentration, and isotype of antigen-specific binding antibodies, and those that measure the functional capabilities of serum antibodies based on their ability to block viral binding to cellular receptors or neutralize viral infection.

The emergency use authorization (EUA) process has allowed the United States Food and Drug Administration (FDA) to permit rapid emergency use of unapproved diagnostic tests, including antibody-based assays. However, several shortcomings have been observed in the sensitivity and specificity of these tests (9,10), perhaps in part because of an inadequate sample size used for assay development (11,12).

In the present study, we used the full length S protein of SARS-CoV-2 as the capture antigen and developed a qualitative and quantitative IgG anti-S ELISA assay using pandemic sera from SARS-CoV-2 infected adults (n=51) and pre-pandemic sera (n=269), including sera from endemic human coronavirus (229E, OC43, NL63, HKU1) infected adults. The IgG anti-S ELISA assay showed high specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV) at a SARS-CoV-2 seroprevalence of 5%. Further, the developed assay was utilized for the detection of anti-S SARS-CoV-2 antibodies in SARS-CoV-2 polymerase chain reaction (PCR) positive human sera. We found that the presence and concentration of IgG anti-S was strongly associated with IgA anti-S binding antibodies, ACE-2 receptor blocking antibody activity, lentipseudovirus-S neutralization antibody concentration, and SARS-CoV-2 neutralization antibody titer. Importantly, the assay allows for a qualitative assessment (yes/no) of the presence of anti-SARS-CoV-2 antibodies as well as a quantitative measurement of antibody concentration and titer.
We also discovered that the IgG and IgA anti-S levels were significantly higher in males than females, and in adults with moderate/severe symptoms than in adults with mild/moderate symptoms.

Serum Samples

SARS-CoV-2 assay development was performed with 320 sera collected from: (1) 234 pre-pandemic adults (before December 2019); (2) 35 pre-pandemic adults who were PCR positive for one of the four human endemic coronaviruses (229E, OC43, NL63, or HKU1); and (3) 51 SARS-CoV-2 PCR positive adults 4 to 8 weeks after their first positive PCR test. After the IgG anti-S ELISA assay was developed and optimized, an additional 25 SARS-CoV-2 PCR positive adult sera collected 4 to 8 weeks after the first PCR positive test, and 25 sera of unknown infectious status from adults enrolled in a SARS-CoV-2 surveillance study, were evaluated for IgG anti-S antibody responses. Demographic information was collected, as well as clinical disease categories, which were based on the level of care provided. Subjects with asymptomatic/mild symptoms did not require medical evaluation, subjects with mild/moderate symptoms were evaluated at the clinic or emergency department for their acute illness, and subjects with moderate/severe symptoms were hospitalized for supportive or ICU care. These additional 50 sera were also tested by the additional serological assays described below following assay optimization. The median time post-infection for these 76 SARS-CoV-2 PCR positive adults was 39 days (IQR, 31-49 days). The institutional review board of Baylor College of Medicine approved the study protocols, and written informed consent was obtained from all participants. All sera were stored at -30°C until use.

Assay Development

The IgG anti-S ELISA and 4 other serological assays were developed and optimized to quantify and/or functionally characterize serum antibodies binding to the full length spike protein of SARS-CoV-2 or to the virus. The IgG anti-S and IgA anti-S ELISAs were developed to quantify the concentrations of IgG and IgA present in the serum of subjects who had tested positive by SARS-CoV-2 PCR assay. Both the relative concentration (ng/mL) and the titer (log2) of anti-S antibody were determined in the two assays. The functional capabilities of anti-S antibodies were evaluated using an ACE-2 receptor blocking antibody assay, a lentipseudovirus-S neutralization assay, and a SARS-CoV-2 microneutralization assay. Only quantitative assays were developed for the three functional antibody assays. Prior to assay optimization, the ideal concentration for each of the reagents (antigen, primary and secondary antibodies, recombinant proteins, etc.) was determined, as well as assay standards and positive and negative controls.
IgG Anti-SARS-CoV-2 S (IgG Anti-S) ELISA

Immulon 2HB 96-well plates (Cat. # 3455, Thermo Scientific) were rinsed with distilled water and air dried. One hundred µL of SARS-CoV-2 full S protein (kindly provided by Gale Smith, Novavax, Gaithersburg, MD) at a concentration of 300 ng/mL in 1X Dulbecco's Phosphate-Buffered Saline (DPBS) (Cat. # 14190-2350, Thermo Fisher) was coated onto the 96-well plate for 18 hours at 4°C. For IgG antibody detection in sera, 100 µL of human IgG immunoglobulin positive control (Cat. # 55908, MP Biomedical) at 100 ng/mL and 25 ng/mL was coated on the plates as high and low IgG positive controls, respectively. After three washes with 1X KPL (Cat. # 95059-132, VWR), the plates were blocked for 1 hour with 5% milk (Carnation Instant Nonfat Dry Milk) in 1X KPL.

Two-fold serial dilutions (40 ng/mL to 0.04 ng/mL) of an IgG rabbit SARS-CoV-2 S1 monoclonal antibody (mAb) (40150-R007, SinoBiological) were added to each plate to generate a relative IgG anti-S standard curve. Rabbit SARS-CoV-2 positive serum (1:150,000 in 10% FBS/5% milk/1X KPL) and human SARS-CoV-2 negative serum (1:512 in 10% FBS/5% milk/1X KPL) were used as additional assay controls. Next, 100 µL of 2-fold serial dilutions of test sera (1:32 to 1:32,768) in duplicate in 5% milk/1X KPL was added to the coated plates, followed by a 1 hour incubation at 36°C. Therefore, each test plate contained its own standard curve, a high and low human IgG immunoglobulin positive control, a positive IgG rabbit serum control, a negative human serum control, and three serum test samples starting at a 1:32 dilution. Plates were washed 3 times with 1X KPL after incubation. Horseradish peroxidase (HRP)-conjugated anti-rabbit IgG (Cat. # 170-6515, BioRad) at a 1:2,000 dilution in 1X KPL was added to the wells containing the rabbit IgG SARS-CoV-2 S1 mAb and the rabbit positive control serum, and HRP-conjugated anti-human IgG (Cat. # 172-1050, BioRad) at a 1:2,000 dilution in 1X KPL was added to the wells containing test sera. After a 1 hour incubation at 36°C, the plates were washed 6 times with 1X KPL and developed with 3,3',5,5'-Tetramethylbenzidine (TMB) 2-Component Peroxidase Substrate (Cat. # 50-76-03, Kirkegaard and Perry Labs) for 18 min in the dark at 25°C. The reactions were stopped with 0.16 M sulfuric acid. The developed plates were read at a 450 nm wavelength on a Synergy H1 microplate reader (BioTek) within 30 minutes of stopping the reaction.

The SARS-CoV-2 IgG standard curve was generated from the rabbit IgG SARS-CoV-2 S1 mAb using a four-parameter logistic (4PL) regression model in Gen5 software. The relative IgG concentration (µg/mL) of test samples was determined within the dynamic range of the standard curve by interpolating the concentration of the standard corresponding to the absorbance at which the test sample gave approximately half of 95% of the maximum O.D. of the standard. The IgG anti-S titer of the test samples was determined as the last dilution that gave an average O.D. value of 0.5 or greater, which was at least 3 standard deviations above the negative controls, and was reported as a log2 value.
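The two read-outs described above, the 4PL standard curve and the log2 endpoint titer, can be sketched as follows. This is not the authors' Gen5 code; the O.D. values are hypothetical.

```python
# Sketch of a 4PL standard-curve fit and an endpoint-titer calculation.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    """4PL: a = lower asymptote, d = upper asymptote, c = inflection, b = slope."""
    return a + (d - a) / (1.0 + (x / c) ** b)

conc = np.array([40, 20, 10, 5, 2.5, 1.25, 0.625, 0.3125])  # standard, ng/mL
od = np.array([3.1, 2.9, 2.4, 1.7, 1.0, 0.55, 0.30, 0.18])  # hypothetical O.D. values
params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 3.2, 5.0, -1.0], maxfev=10000)

def endpoint_log2_titer(ods, start_dilution=32, cutoff=0.5):
    """Last 2-fold dilution with O.D. >= cutoff; ods ordered from 1:32 upward."""
    titer = 0
    for i, o in enumerate(ods):
        if o >= cutoff:
            titer = start_dilution * 2 ** i
    return np.log2(titer) if titer else None

print(endpoint_log2_titer([2.8, 2.1, 1.4, 0.9, 0.6, 0.3]))  # log2(512) = 9.0
```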
IgA Anti-SARS-CoV-2 S (IgA Anti-S) ELISA

Plate rinsing and S antigen coating were the same as described above for IgG detection. Then, 100 µL of human IgA immunoglobulin positive control (Cat. # 55905, MP Biomedical) at 30 ng/mL and 7 ng/mL was coated on the plates as high and low IgA positive controls, respectively. After three washes with 1X KPL, the plates were blocked for 1 hour with 5% milk in 1X KPL. Human IgA anti-S1 mAb (Cat. # AB01680-16.0, Absolute Antibody) (2-fold serial dilutions from 70 ng/mL to 1.1 ng/mL) was added to each plate to generate an IgA anti-S standard curve. Human SARS-CoV-2 positive serum (1:250 in 10% FBS/5% milk/1X KPL) and human SARS-CoV-2 negative serum (1:512 in 10% FBS/5% milk/1X KPL) were used as assay positive and negative controls for IgA anti-S detection. Next, 100 µL of 2-fold serial dilutions of test sera (1:32 to 1:2048) in duplicate in 5% milk/1X KPL was added to the coated plates, followed by a 1 hour incubation at 36°C. Therefore, each test plate contained its own standard curve, a high and low human IgA immunoglobulin positive control, a human IgA positive serum control, a negative human serum control, and five serum test samples starting at a 1:32 dilution. Plates were washed 3 times with 1X KPL after the incubation. HRP-conjugated anti-human IgA (Cat. # PA174395, Invitrogen) at a 1:4,000 dilution in 1X KPL was added to all wells in the plate and incubated for 1 hour, followed by TMB color development for 10 to 18 minutes. The reaction was stopped with 0.16 M sulfuric acid and the plates were read within 30 minutes, as per the IgG anti-S ELISA. The IgA concentration (µg/mL) of test samples was determined within the dynamic range of the standard curve by interpolating the concentration of the standard corresponding to the absorbance at which the test sample gave approximately half of 95% of the maximum O.D. of the standard. The IgA anti-S titer of the test samples was determined as the last dilution that gave an average O.D. value of 0.4 or greater, which was at least 3 standard deviations above the negative controls, and was reported as a log2 value.

ACE-2 Receptor Blocking Antibody Assay

A brief summary of the ACE-2 receptor blocking antibody assay is provided here to better understand the details of the assay. First, the S protein is coated onto the plates, followed by the addition of serum. If the serum contains antibody to the S protein, it will bind to the recombinant S protein coated on the plate and block the subsequent binding of biotinylated recombinant ACE-2 (b-ACE-2). Blocking is evidenced by a decrease in O.D. at 450 nm when compared to a serum control that does not block or a control well containing no serum (100% b-ACE-2 binding).

Plate rinsing, S antigen coating, plate blocking, and plate washing were the same as described above for the IgG and IgA anti-S ELISAs. After aspirating the milk from the plates, a SARS-CoV-2 positive human serum pool (1:32) and a SARS-CoV-2 negative human serum (1:1020) were used as receptor blocking antibody positive and negative controls, respectively. One hundred µL of 2-fold serial dilutions of test sera (1:32 to 1:2048) in duplicate in 5% milk/1X KPL was added to the S antigen coated plates. 1X DPBS instead of serum was used as a maximum b-ACE-2 binding control (no-blocking control), and a b-ACE-2 negative control (HRP-conjugated streptavidin alone) (Cat. # 5270-0029, Sera Care) was also included. The plates were then incubated for 1 hour at 25°C and washed three times with 1X DPBS. ACE-2 (kindly provided by Gale Smith, Novavax, Gaithersburg, MD) was biotinylated with a Pierce™ Antibody Biotinylation Kit (Pierce, Rockford, IL) according to the manufacturer's instructions. B-ACE-2 (2-fold serial dilutions from 1,000 ng/mL to 15.6 ng/mL) was added to generate a reference curve, and b-ACE-2 at 1 µg/mL was also added to each test serum sample to determine the serum blocking activity. The plates were incubated for 1 hour at 25°C and washed three times with 1X PBS. HRP-conjugated streptavidin at a 1:8,000 dilution in 1X KPL was then added to the plates. After a 1 hour incubation at 25°C, the plates were washed 3 times with 1X KPL and color developed with TMB for 10 min in the dark at 25°C. The reactions were stopped with 0.16 M sulfuric acid and the 96-well plates were read as described above. The blocking percentage of ACE-2 receptor blocking antibody at a serum dilution of 1:32 was calculated using the following formula: % blocking = [(O.D. of the no-serum b-ACE-2 binding control − O.D. of the test serum) / O.D. of the no-serum b-ACE-2 binding control] × 100.
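A direct implementation of this calculation (the O.D. values here are hypothetical, not assay data):

```python
# Blocking percentage relative to the no-serum (DPBS) control, which defines
# 100% b-ACE-2 binding.
def percent_blocking(od_test: float, od_no_serum: float) -> float:
    return 100.0 * (od_no_serum - od_test) / od_no_serum

print(percent_blocking(od_test=0.42, od_no_serum=2.10))  # 80.0% blocking at 1:32
```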
SARS-CoV-2 Microneutralization Assay

Twelve and a half µL of 2-fold serial dilutions of test sera (1:8 to 1:16,384) in duplicate in 1X Minimal Essential Medium with 2% FBS (Cat. # SH30070.03, HyClone) was added to a 96-well cell culture plate (Cat. # 353072, Corning). Two extra plates were used as assay control plates in each assay. The first control plate contained "no virus" control wells with only cells in medium, and "virus only" control wells containing 27 TCID100 of SARS-CoV-2 isolate USA-WA1/2020 (Cat. # NR-52281, BEI Resources). The second control plate contained two SARS-CoV-2 negative control sera and two SARS-CoV-2 positive control sera. The four control sera were also titrated (1:8 to 1:16,384). Next, 50 µL of 27 TCID100 of SARS-CoV-2 was added onto the sera. The plates were incubated for 2 hours at 36°C and 5% CO2. Then, 1.5 × 10^5 trypsinized Vero E6 cells (ATCC #CRL-1586) were added to each well and the plates were incubated at 36°C for 3-4 days. The cells were then fixed and stained with 100 µL of a 10% neutral buffered formalin/0.01% crystal violet solution for 24-48 hours. The neutralizing antibody (NtAb) titer was determined by calculating the highest dilution at which there was a 50% reduction in viral cytopathic effect. The dilution factor was then log-transformed into a log2 titer. The lower limit of detection was 2.5 log2; samples with a titer less than 2.5 were assigned a value of 2.0.

Statistical Analysis

Diagnostic test statistics were calculated as measures of assay performance. For a given seroprevalence, PPV and NPV were estimated based on Bayes' theorem. Age group, gender, and disease severity differences in log-transformed geometric mean concentrations (log2 ng/mL) and geometric mean titers (log2) of IgG and IgA anti-S antibodies were analyzed by one-way ANOVA or independent Student's t test. Pearson's correlation coefficients were calculated between IgG anti-S and IgA anti-S levels, as well as between IgG anti-S levels and functional antibody levels. Stata version 16 and SPSS version 22 were used to perform statistical analyses.

Assay Specificity, Sensitivity, Positive Predictive Value (PPV) and Negative Predictive Value (NPV)

After establishing the repeatability of our IgG anti-S and IgA anti-S ELISAs, we sought to further define the assays by determining their specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV). We used a larger panel of 320 human sera that included three groups: pre-pandemic sera of unknown infectious status for endemic coronaviruses (n=234), pre-pandemic sera with endemic human coronavirus infection (n=35), and SARS-CoV-2 PCR positive sera (n=51). A reverse cumulative distribution curve for anti-S IgG revealed that two dilutions (1:512 and 1:1,024) showed the lowest and highest percentages of positive samples for pre-pandemic sera and SARS-CoV-2 PCR positive sera, respectively, at a cutoff O.D. of ≥0.5 (Figure 1A). In addition, the 1:1,024 serum dilution resulted in a higher PPV (92.9%) than the 1:512 dilution (72.7%) at a seroprevalence of 5% (Table 1). For the IgA anti-S ELISA, the 1:128 dilution resulted in a higher PPV (78.3%) than the 1:64 dilution (66.5%) at a seroprevalence of 5% (Table 2), while the NPVs remained similar (98.8% vs. 99.2%) for both dilutions at the same seroprevalence. Therefore, the 1:128 dilution and an O.D. ≥0.4 were determined to be the final dilution factor and cutoff O.D. value for determining the IgA anti-S positivity of a given sample.
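The PPV and NPV estimates quoted here follow directly from Bayes' theorem. The sketch below (the study itself used Stata and SPSS) reproduces the reported NPV of 99.6% at 5% prevalence; the computed PPV lands near the reported 92.9%, with the small difference attributable to rounding of the sensitivity and specificity inputs.

```python
# Bayes'-theorem PPV/NPV for a given prevalence, using the IgG anti-S ELISA's
# reported sensitivity (92.2%) and specificity (99.6%).
def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

for prev in (0.01, 0.05, 0.20, 0.50):
    print(f"prev={prev:.2f}  PPV={ppv(0.922, 0.996, prev):.1%}  "
          f"NPV={npv(0.922, 0.996, prev):.1%}")
```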
In addition to the above PPV and NPV based on the seroprevalence of 5% in the early months of the pandemic (June 2020), we also calculated the PPV and NPV of the IgG anti-S and IgA anti-S ELISAs at a seroprevalence of 50%, which is closer to the seroprevalence occurring now (August 2021) in much of the United States. The PPV of the IgG anti-S ELISA assay at the 1:512 dilution was 98.1%. (A Bayesian approach was used to estimate the PPV and NPV for a given prevalence. The bold values in the tables are the final numbers reported in the Abstract: the PPV and NPV that determined the optimal dilution factors for the IgG assay (1:1024) and the IgA assay (1:128) at a seroprevalence of 5%, calculated with the 320 assay development samples.)

SARS-CoV-2 PCR Positive and Surveillance Subjects

Comparison of Positivity and Negativity of IgG Anti-S to IgA Anti-S, Blocking Antibody, Lentipseudovirus-S NtAb, and SARS-CoV-2 NtAb

For the IgG anti-S negative sera, the percent agreement of the negative results for IgA anti-S, SARS-CoV-2 blocking antibody, lentipseudovirus-S NtAb, and SARS-CoV-2 NtAb was 93.3%, 86.7%, 93.3%, and 96.7%, respectively. For the IgG anti-S positive sera, the percent agreement of the positive results for these 4 assays was 77.5%, 95.8%, 100%, and 98.6%, respectively (Table 4). Therefore, we obtained strong and consistent agreement between the IgG anti-S data and the other 4 antibody data sets. Conversely, IgA anti-S was positive for 2 sera among the 30 IgG anti-S negative sera. These data suggest that the IgA anti-S ELISA assay can be used as a secondary confirmation of seropositivity, and that the combination of both IgG and IgA anti-S ELISAs will likely enhance the ability to detect a true SARS-CoV-2 infection, particularly in adults who have low levels of IgG anti-S binding antibodies.
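The percent-agreement figures above are simple conditional proportions. Below is a sketch with call lists reconstructed from the reported counts (55/71 positive agreement for IgA and 28/30 negative agreement); the list construction is an assumption consistent with those totals.

```python
# Percent agreement of a second assay with IgG anti-S positive or negative calls.
def percent_agreement(reference_calls, other_calls, for_positive=True):
    pairs = [(r, o) for r, o in zip(reference_calls, other_calls)
             if r == for_positive]
    return 100.0 * sum(o == r for r, o in pairs) / len(pairs)

igg = [True] * 71 + [False] * 30                      # 71 IgG+, 30 IgG- sera
iga = [True] * 55 + [False] * 16 + [True] * 2 + [False] * 28
print(round(percent_agreement(igg, iga, for_positive=True), 1))   # 77.5
print(round(percent_agreement(igg, iga, for_positive=False), 1))  # 93.3
```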
Correlation Between IgG and IgA Anti-S Binding Antibody Levels

We used the IgA anti-S ELISA as a confirmatory test for the IgG anti-S ELISA. As part of this analysis, the correlation between the two assays was determined. Pearson's correlation coefficient was calculated to measure the strength of the linear association between the IgG anti-S and IgA anti-S levels for the combined populations of SARS-CoV-2 PCR positive (n=76) and pandemic surveillance adults (n=25). We observed a significant positive correlation between IgG anti-S and IgA anti-S serum concentrations (r=0.833, 95% CI=0.780-0.885, p<0.01) (Figure 3A) and antibody titers (r=0.840, 95% CI=0.775-0.894, p<0.01) (Figure 3B).

Correlation Between IgG Anti-S Binding Antibody Levels and Functional Antibody Levels

While the presence of IgG and IgA anti-S binding antibodies in SARS-CoV-2 PCR positive adults and in some of the adults participating in the SARS-CoV-2 surveillance study was indicative of a SARS-CoV-2 infection, the question remained regarding the association of ELISA binding antibodies with functional antibodies. To answer this, three different functional antibody assays were used to assess this association: the ACE-2 receptor blocking antibody assay, the SARS-CoV-2 lentipseudovirus-S neutralization assay, and the SARS-CoV-2 microneutralization assay. Significant positive correlations were observed for all three functional antibody assays (Figures 4A-F). The correlations ranged from 0.811 to 0.924 (95% CI: 0.703-0.948) between IgG anti-S concentrations and the functional antibody levels (Figures 4A, C, E) and from 0.795 to 0.917 (95% CI: 0.681-0.943) between IgG anti-S titers and the functional antibody levels (Figures 4B, D, F). All correlations were significant (p<0.01). In addition, functional antibody activity measured by any of the three assays was not appreciable until the IgG anti-S antibody titer reached 10 log2 or greater, the threshold point used to identify serological evidence of SARS-CoV-2 infection.

Comparison of Anti-SARS-CoV-2 Antibody Levels in Different Demographic Groups

We next wanted to determine whether there were significant differences in IgG and IgA anti-S levels across several demographic variables in the SARS-CoV-2 PCR positive adults. The only two COVID-19 asymptomatic subjects were omitted due to the small sample size; the remaining 74 subjects were analyzed here. IgG anti-S concentration and titer were similar among four separate age groups: 18-34 (n=16), 35-49 (n=26), 50-64 (n=22), and ≥65 years (n=10) (Figure 5A). However, the IgG anti-S levels were significantly higher in males (n=35) than in females (n=39) (p=0.017 for concentration, p=0.048 for titer; Figure 5B). The IgG anti-S levels were also significantly higher in adults with moderate/severe symptoms (n=14) than in adults with mild/moderate symptoms (n=60) (p<0.001, Figure 5C). Similarly, we observed significantly higher IgA anti-S antibody levels in males and in adults with moderate/severe symptoms, but not among the different age groups (Figures 6A-C).

We also determined whether there were significant differences in the three functional antibody levels by age, gender, and disease severity in the SARS-CoV-2 PCR positive adults. Each of the three functional antibody levels was similar among the four age groups. However, they were all higher, although not always statistically significantly so, in males than in females (p=0.052, p=0.452, and p=0.042 for ACE-2 receptor blocking antibody, SARS-CoV-2 NtAb, and SARS-CoV-2 lentipseudovirus-S NtAb, respectively). Lastly, each of the three functional antibody levels was higher, although not always statistically significantly so, in adults with moderate/severe symptoms than in adults with mild/moderate symptoms (p<0.001, p<0.001, and p=0.061, respectively). The three functional antibody data were generally consistent with the IgG and IgA anti-S antibody data: as with IgG and IgA anti-S, functional antibody levels were higher in males and in adults with moderate/severe symptoms.
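The correlation analyses above were run in SPSS and Stata; an equivalent sketch in Python, using synthetic paired data in place of the study's log-transformed antibody measurements, would be:

```python
# Pearson correlation on paired antibody read-outs (synthetic stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 101)            # e.g., IgG anti-S log2 titers, n = 101
y = 0.8 * x + rng.normal(0, 1, 101)   # a correlated IgA read-out (synthetic)
r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.2g}")
```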
DISCUSSION

Human coronavirus SARS-CoV-2 has resulted in a pandemic characterized by significant physical and mental health consequences, socioeconomic stress, and decreased quality of life (16,17). Implementation of public health efforts to control a pandemic requires highly sensitive and specific diagnostic and serological assays. Diagnostic assays focus on the detection of an ongoing infection. Antibody assays allow us to diagnose and track SARS-CoV-2 positive individuals, determine seroprevalence in the community, study antibody responses induced by wild type infection, evaluate the immunogenicity of vaccines, and establish immune correlates of protection. Serological assays must have high specificity in order to avoid false-positive results; published assays have shown PPVs ranging from 59.3% to 100% at 5% seroprevalence, which can be problematic (18-23).

Our IgG anti-S ELISA was developed using a cohort of 320 sera, including pre-pandemic sera, SARS-CoV-2 PCR positive sera, and sera from adults with confirmed human endemic coronavirus infections (229E, OC43, NL63, HKU1). Human endemic coronaviruses are second only to rhinoviruses as a cause of the common cold (24,25). Antibody cross-reactivity resulting from infection with human endemic coronaviruses is of particular importance in the development of SARS-CoV-2 binding and functional antibody assays. The IgG anti-S ELISA binding assay demonstrated that sera at a dilution of 1:1,024 yielded 99.6% specificity, 92.2% sensitivity, 92.9% PPV, and 99.6% NPV for the detection of a recent or past SARS-CoV-2 infection at a low seroprevalence of 5%. Overall, our IgG anti-S ELISA assay was comparable to those considered high-performing SARS-CoV-2 antibody assays with EUA. The IgA anti-SARS-CoV-2 S ELISA had lower sensitivity and specificity than the IgG anti-S assay; however, IgA anti-S levels showed a significant correlation with IgG anti-S levels in this study. Therefore, the IgA anti-S assay can be used as a confirmatory test for IgG anti-S, especially for sera with IgG antibody levels near the lower limit of detection.

An assay's ability to detect a true positive (PPV) is highly dependent on the prevalence of the disease. The PPV and NPV of test results depend on the performance characteristics of the test (sensitivity and specificity) and on the prevalence of the disease in the population tested. High quality serological assays normally have PPV and NPV above 90% at a 5% seroprevalence. The NPV tends to remain stable over a wide range of seroprevalence rates in assays with high specificity. On the other hand, the PPV is greatly affected by changes in seroprevalence. We estimated the PPV and NPV of the IgG anti-S ELISA, with its 99.6% specificity and 92.2% sensitivity, across a spectrum of SARS-CoV-2 seroprevalence rates. When the seroprevalence of SARS-CoV-2 infection in a population decreased from 5% to 1%, the PPV of the test at a serum dilution of 1:1024 decreased from 92.9% to 71.5% and the NPV increased from 99.6% to 99.9%. However, as the seroprevalence increased from 5% to 20% to 50%, the PPVs also increased from 92.9% to 98.4% to 99.6%, but the NPVs decreased slightly from 99.6% to 98.1% to 92.7%.

FIGURE 5 | Comparison of IgG anti-S geometric mean levels by (A) age, (B) gender, and (C) disease severity. A one-way ANOVA was used for the statistical analysis of mean differences among age groups, and an independent Student's t test was employed for mean differences in gender and disease severity. A total of 74 SARS-CoV-2 PCR positive sera were analyzed in the comparison. Error bars represent standard deviations.

FIGURE 6 | Comparison of IgA anti-S geometric mean levels by (A) age, (B) gender, and (C) disease severity.
A one-way ANOVA was used for the statistical analysis of mean differences among age groups, and an independent Student's t test was employed for mean differences in gender and disease severity. A total of 74 SARS-CoV-2 PCR positive sera were analyzed in the comparison. Error bars represent standard deviations.

The IgA anti-S ELISA did not achieve assay performance comparable to the IgG anti-S ELISA in identifying a true positive sample. The PPV of the IgA anti-S ELISA was 78.3% at a 5% seroprevalence and a serum dilution of 1:128, even though the NPV remained above 90% throughout a SARS-CoV-2 seroprevalence range of 1 to 20%. At a seroprevalence of 50% and a serum dilution of 1:128, the PPV increased to 98.6% but the NPV decreased to 80.8%; however, at the same seroprevalence of 50% and a serum dilution of 1:64 instead of 1:128, the PPV increased to 95.7% and the NPV decreased only slightly to 90.7%, highlighting how the assay's predictive performance changes with different seroprevalence rates and sample dilution factors. Serosurveillance studies should use assays with high specificity and PPV, especially when the prevalence of SARS-CoV-2 in the community is expected to be low. The combination of both IgG and IgA anti-S ELISAs will likely enhance the ability to detect a true SARS-CoV-2 infection, particularly in adults who have low levels of IgG anti-S binding antibodies.

We obtained strong agreement between IgG anti-S and IgA anti-S positive sera. Fifty-five of the 71 (77.5%) IgG positive sera (from patients with mild/moderate or moderate/severe symptoms, except for two patients with asymptomatic/mild disease) were positive for IgA. In addition, there was a significant positive correlation between IgG and IgA levels (r=0.833 for concentrations and r=0.840 for titers). The strong agreement and the significant correlation suggest that the IgA anti-S ELISA assay can be used as a possible secondary confirmation of seropositivity. In addition, IgA anti-S was positive for 2 sera (from patients with asymptomatic/mild disease) among the 30 IgG anti-S negative sera, suggesting that the combination of both IgG and IgA anti-S ELISAs will likely enhance the ability to detect a true SARS-CoV-2 infection, particularly in adults who have low levels of IgG anti-S binding antibodies, as previously observed (26).

The IgG anti-S binding antibodies generated in adults with primary SARS-CoV-2 infection were highly associated with functional antibody activity as measured by the ACE-2 receptor blocking antibody assay, the SARS-CoV-2 lentipseudovirus-S neutralization assay, and the SARS-CoV-2 microneutralization assay. Both the ACE-2 receptor blocking antibody assay and the lentipseudovirus-S neutralization assay have the advantage of providing functional SARS-CoV-2 antibody data without requiring a BSL-3 facility. Functional antibody activity was generally not detected until IgG anti-S antibodies reached the 1:1024 titer threshold. Antibodies detected by the IgG anti-S ELISA at levels below 1:1024 were likely the result of cross-reactive antibodies from past human endemic coronavirus infections. Our findings are consistent with those of other studies (27-31) and demonstrate that humoral IgG anti-S antibody is a sensitive marker of infection status and SARS-CoV-2 neutralizing activity.
A number of studies have demonstrated significant associations between SARS-CoV-2 antibody levels and host and disease variables such as age, gender, and severity of disease. In terms of seroprevalence corresponding to a specific demographic variable, locality and sample size appear to be highly relevant to the outcome (32). Differences in seropositivity may also be attributed to viral load (33). In our study, males had significantly higher IgG and IgA anti-S binding antibodies, consistent with results from other studies (34, 35), as did adults with more severe disease (11, 36-38). There are studies, however, demonstrating that female patients are more likely than male patients to generate a relatively high concentration of serum IgG anti-SARS-CoV-2 antibody in severe infection (39). The differences based on gender may be multifactorial, accounted for in part by severity of disease, sample size, detection methods, and other host factors. Consistent with previous reports, we found that age among adults did not significantly impact the magnitude of the serum IgG anti-S antibody response (28). One notable exception is the comparison of hospitalized adults with hospitalized children, in which the adults had higher anti-SARS-CoV-2 antibody levels than the children (40).

In conclusion, we report an IgG anti-S ELISA developed as both a quantitative and a qualitative IgG anti-S antibody assay amenable to high-throughput anti-SARS-CoV-2 antibody testing. The IgG anti-S ELISA had excellent performance characteristics and was strongly associated with functional antibody activity in adults with primary SARS-CoV-2 infection. Gender and disease severity, rather than age, played a role in antibody levels. This assay will be instrumental for patient contact tracing, seroprevalence studies, and vaccine evaluation studies. The IgA anti-S ELISA can be used as a possible secondary confirmation of seropositivity and can provide insight into the composition of anti-SARS-CoV-2 sera.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the institutional review board of Baylor College of Medicine. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

XY and PP designed the research. XY, OI, LSA, WC, LOA, TM, ZM, HL, KT, BL, MC, and PP performed the research. XY, LF-S, and PP analyzed the data. EN, NB, SF, and PS provided the samples. YR processed the samples. XY wrote the first draft of the manuscript. All authors edited the text, contributed to the article, and approved the submitted version.
The Enterics for Global Health (EFGH) Shigella Surveillance Study in Peru

Abstract

Background: The Enterics for Global Health (EFGH) Peru site will enroll subjects in a periurban area of the low Amazon rainforest. The political department of Loreto lags behind most of Peru in access to improved sources of water and sanitation, per capita income, children born <2.5 kg, and infant and child mortality. Chronic undernutrition as manifested by linear growth shortfalls is common, but wasting and acute malnutrition are not.

Methods: The recruitment of children seeking care for acute diarrheal disease takes place at a geographic cluster of government-based primary care centers in an area where most residents are beneficiaries of free primary healthcare.

Results: Rates of diarrheal disease, dysentery, and Shigella are known to be high in the region, with some of the highest rates of disease documented in the literature and little evidence of improvement over the last 2 decades. This study will update estimates of shigellosis by measuring the prevalence of Shigella by polymerase chain reaction and culture in children seeking care and deriving population-based estimates by measuring healthcare seeking at the community level.

Conclusions: Immunization against rotavirus has been offered universally in the region since 2009, and in a context where adequate water and sanitation are unlikely to reach high standards soon, control of principal enteropathogens through immunization may be the most feasible way to decrease the area's high burden of disease in the near future.

The enrollment for the Enterics for Global Health (EFGH) Shigella surveillance study in Peru is being carried out in the city of Iquitos, a metropolitan area of just under half a million inhabitants and capital of Loreto, the largest and most geographically isolated of Peru's 25 political departments. Iquitos is the main urban settlement in the otherwise sparsely populated Peruvian Amazon, the country's vast, flat, neotropical interior east of the Andes. Lacking connection to Peru's main road network, Iquitos is accessible only by air or by river transport (notably, it is the largest city in the world for which that is the case), and ferry and boat transportation routes along the region's sprawling system of Amazon tributary waterways connect the urban population of Iquitos with the many smaller riverine communities of Loreto and with Ecuador, Colombia, and Brazil. Located just 400 km south of the equator, Iquitos' tropical rainforest climate [1] makes it warm and humid year-round, with total annual rainfall of 2.8-3.0 m and seasons defined principally by river levels. Heavy rains occur year-round but are most intense between December and March, accompanied by rising river waters and flooding that peak in early April. Rainfall and river levels subside to a minimal level in late August before increasing slowly. This low-lying area (just 114 m above sea level) is highly vulnerable to extreme rainfall events, notably the 2011-2012 La Niña-related flood, which lasted for 6 months and caused increased transmission of numerous enteropathogens including Shigella [2], as well as widespread population displacement and crop failures [3]. Such events are becoming more frequent and extreme as a result of climate change [4].
The greater Iquitos metropolitan area is divided into 4 districts (the smallest administrative level of Peru): Iquitos, Belen, Punchana, and San Juan Bautista. The EFGH site itself is contained within the largely periurban San Juan Bautista district in the city's southwest and comprises the contiguous catchment areas of 5 primary healthcare facilities, bounded on the northwest by the airport and the major thoroughfare of Avenida José Abelardo Quiñones, and by the flood plain of the river Itaya to the southeast (Figure 1).

DEMOGRAPHIC AND SOCIOECONOMIC CHARACTERISTICS

The Loreto region had around 883 500 inhabitants at the last census in 2017, with more than 493 000 living in the province of Maynas and 369 000 in the city of Iquitos [5]. Fertility rates in the region are the highest in the country by some margin (4.3 births per woman aged 15-49 years, compared to 2.3 nationally in 2012), reflecting widespread unmet need for family planning and cultural preferences for larger family sizes [6], and contributing to a population age structure that skews toward younger ages [7]. Thirty-seven percent of the population is between 0 and 14 years of age, 56.9% is between 15 and 64 years, and only 5.9% is ≥65 years [5]. The infant mortality rate in the region is approximately 30 deaths per 1000 live births, whereas Lima has the lowest rate in the country at 11 deaths per 1000 live births [8]. Child mortality in Loreto is 40 per 1000 live births, the highest in Peru.

Table 1 summarizes other key socioeconomic and child health indicators for Loreto and for Peru as a whole. In addition, the population of Iquitos has an illiteracy rate of 4.1%, and the 2020 National Survey of Households found that 34.6% of the population in the Loreto region lives in poverty, 39.3% is vulnerable to poverty, and extreme poverty affects 8.1% of the population. Only 4% of the population of the province of Maynas self-identifies as indigenous.

WATER AND SANITATION

Due to the prevailing socioeconomic, political, and environmental conditions, the provision of safe drinking water and adequate sanitation to the population remains an unmet basic need. According to the most recent national survey, only 27.5% of households in Loreto obtain drinking water from the public network, and 42.5% have sanitation facilities connected to the public sewerage system [9]. The city of Iquitos has a single potable water treatment plant, located in the Pampachica neighborhood and fed by water from the Nanay River. However, the supply of this service is limited and intermittent, with water pumped through the network 3 times per day (5-7 AM, 10 AM to 12 PM, and 4-6 PM), a schedule that has remained unchanged for the last 20 years. Moreover, coverage of this water network in its catchment population is incomplete, leaving a significant portion of the population to resort to obtaining water from wells or rivers. The entire city of Iquitos is served by a single wastewater treatment plant fed by underground pipes that has failed to operate since its construction in 2014, instead discharging sewage untreated into populated areas. Basic, unimproved latrines commonly discharge wastewater into open canals, water bodies, or vegetated areas, mostly without any physical barrier; heavy rainfall leads to the admixture of these surface waters, which lie near the majority of the households in the catchment area.
Nutrition and Food Security

The typical diet among children living in urban and periurban Iquitos is based around rice, grains, potatoes, and plantains. The most commonly consumed animal-source proteins are eggs, dairy, and poultry, followed by fish and pork [10, 11]. Due to Loreto's limited agricultural production, food availability, distribution, and diversity are strongly sensitive to climate, both environmental and political [10]. Coping strategies during times of food scarcity include informal food-sharing practices. As a result, older, larger, and more connected communities have been shown to be at lower risk of food insecurity [12, 13]. Extended networks between urban and rural communities are also associated with lowered risk of food insecurity during times of stress [12]. Population nutritional status in Loreto lags behind that of the rest of Peru: 23.6% of children in Loreto are stunted by 2 years of age, and 8.4% of newborns weigh <2.5 kg at birth [9].

Infant Feeding

Breastfeeding practices have been well characterized in this population. Analyses of infant feeding data from the Etiology, Risk Factors and Interactions of Enteric Infections and Malnutrition and the Consequences for Child Health and Development (MAL-ED) cohort in the Santa Clara de Nanay suburb found a median duration of exclusive breastfeeding of just 19 days [11], but predominant breastfeeding up to 6 months of age was common (around 50%), and the average age at which infants transitioned from full breastfeeding was 172 days [14]. Water tends to be the first non-breast milk liquid consumed, followed by tea/coffee or semisolids like porridge or banana. Formula and animal milk use is relatively uncommon. Maternal age, parity, monthly per capita income, and maternal marital status were not associated with the age at which semisolids were introduced, and maternal depressive symptoms at 6 months was the only maternal factor associated with the transition to partial breastfeeding [14].

Vaccination

National vaccination coverage estimates in Peru are among the highest in the region, with 70%-80% coverage in children <15 months of age for vaccines included in the general vaccination program. Vaccines include BCG; poliomyelitis (inactivated polio vaccine for the first 2 doses and oral polio vaccine for the third dose); rotavirus; anti-pneumococcal; measles-mumps-rubella; Haemophilus influenzae type b, diphtheria-tetanus-pertussis, and hepatitis B (pentavalent vaccine); and influenza. Regional estimates indicate that 60% or less of children <15 months of age have completed the basic immunization program [15]. Pediatric coronavirus disease 2019 (COVID-19) vaccines for children 5-11 years old were introduced in September 2022 (Table 2). As of May 2023, 42.6% of children in the country had received 2 doses of the vaccine, while 38% of children in Loreto had completed a 2-dose scheme [18]. Recently, 2 cases of acute flaccid paralysis due to polio were reported in Peru, evidence that the disruption of healthcare services in remote areas of Peru, brought on in part by the decreased intensity of routine care during the COVID-19 pandemic, greatly affected previously high rates of early childhood immunization coverage [19].
Communicable Diseases

Loreto reports some of the highest rates of malaria, dengue, and acute diarrheal disease in the country. The region accounted for 83% of national malaria cases in 2021, an incidence of 1741 cases per 100 000 people, while the equivalent rate for dengue was 278 cases compared with 2.2 cases in Lima [20]. Ongoing surveillance of the etiology of acute febrile illness (AFI) in the greater Iquitos area has been established through RIVERA, a health facility-based case-control study implemented through a partnership between Asociación Benéfica Prisma, the University of Virginia, and the US Centers for Disease Control and Prevention [21]. Preliminary findings indicate that 12.4% of AFI cases are attributable to the dengue viruses, 8.2% to Plasmodium spp, and 5.2% to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) (unpublished data). Peru was hit harder by the COVID-19 pandemic than many other comparable countries and experienced the highest national mortality rate for COVID-19 in the world, at 666 deaths per 100 000 [22]. Within the country, Loreto was one of the first and most severely affected regions, such that by July 2020, seroprevalence of SARS-CoV-2 antibodies in Loreto was estimated to have already reached 70% [23].

OVERVIEW OF RECRUITMENT FACILITIES

Health networks are composed of a number of micro-networks, which themselves contain a specific number of health establishments of various categories. Loreto holds 8 health networks and 35 micro-networks, with a total of 341 health establishments and 13 hospitals. The province of Maynas, where the EFGH study takes place, holds 1 health network and 4 micro-networks, and recruitment of subjects into the study will take place at 5 of the 16 health establishments of the Southern Iquitos micro-network [24]: C.S. San Juan Bautista (category I-4), C.S. America (category I-3), P.S. Santo Tomas (category I-2), P.S. Modelo (category I-2), and P.S. Progreso (category I-2) (Figure 1). These health establishments have basic services such as electricity (although power outages are common) and potable water. Service hours are limited, ranging from 7 AM to 1 PM, and staff are generally composed of general practitioners, nurses, and technicians. Category I-1 to I-3 facilities refer patients to either I-4 establishments or hospitals, depending on the severity and complexity of the cases.

There are additional clinics near the catchment area that provide care to individuals with workers' insurance (EsSalud) or insurance plans that are either linked to the military or police, or private for-profit. Private clinics are not a common source of medical care, although pharmacies may recommend therapies and treatments [25].
SUMMARY OF DIARRHEA MANAGEMENT GUIDELINES

The management of acute watery diarrhea in Peru is based on national guidelines from 2017, and emergency management was further updated in 2022 [26, 27]. Hydration status is assessed according to the World Health Organization methodology to classify children with no, some, or severe dehydration, with treatment plans A, B, and C, respectively, tailoring interventions to the degree of dehydration present [28]. Plan A in Peru recommends the use of low-osmolarity oral rehydration solution. Plan B uses the same treatment but adds an initial in-clinic visit with a 4-hour observation period to delineate the clinical response to treatment. Treatment plan C recommends immediate treatment with intravenous fluids, or with nasogastric fluids if the child is unable to drink or has repetitive vomiting, or if supplies for intravenous therapy are not available. Oral therapy is to be given to conscious patients who can drink, in addition to intravenous fluids. In cases of severe dehydration and acute watery diarrhea, patients should receive 30 mL/kg in 30 minutes followed by 70 mL/kg over the next 2.5 hours. Patients are then reassessed after 3 hours of therapy and restaged for the degree of dehydration. In cases where intravenous rehydration is indicated, a polyelectrolyte formulation is preferred to normal saline [29]. Ringer's lactate is not favored based on relative cost considerations [26]. The routine use of antibiotics in the management of acute diarrhea is discouraged, since the majority of cases of diarrhea are either viral or self-limited. Therapy with 20 mg a day of elemental zinc for a period of 10 days is recommended in all children between 6 months and 5 years of age [26]. Additional antidiarrheal agents are not recommended, and the use of antiemetics, including ondansetron, is discouraged [26]. Restrictive diets and/or specialized formulas are to be avoided, and nourishment with age- and regionally appropriate diets, particularly breastfeeding, is actively encouraged [26].

Epidemiologic definitions of diarrhea specify that the maternal report of blood in stool in a child with 3 or more liquid stools in a 24-hour period is to be considered dysentery. Clinical guidelines additionally identify the presence of macroscopic blood in stool and a temperature of 39°C as signs of invasive diarrhea [26, 30]. Three antibiotics are recommended in the national guidelines for the management of dysentery: trimethoprim-sulfamethoxazole (5 mg of the trimethoprim component/kg per dose, every 12 hours), furazolidone (2 mg/kg/dose, every 6 hours), and nalidixic acid (10 mg/kg/dose, every 6 hours), each for a 5-day period [31]. Follow-up should be planned 2 days after the initiation of treatment; failure of clinical response at that time, or clinical progression, should lead to a stool culture and a change in antibiotic. Despite this being the formal national recommendation, expert national practitioners, citing available evidence of resistance to the recommended first-line therapies, have advocated for alternative management with ciprofloxacin (10-15 mg/kg/dose every 12 hours for 5 days) and azithromycin (10 mg/kg/day on day 1 followed by 5 mg/kg/day on days 2-5) as first- and second-line oral options, with ceftriaxone (50-75 mg/kg/day) indicated in cases where parenteral therapy is required [32]. This is challenged by the fact that public health establishments will only be reimbursed for pharmaceuticals that are specified under the national guidelines. As a result, physicians associated with public health centers continue to prescribe trimethoprim-sulfamethoxazole and furazolidone as first treatment options for dysentery in many cases.
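As a worked example of the Plan C fluid arithmetic described above (30 mL/kg over 30 minutes, then 70 mL/kg over 2.5 hours), here is a minimal Python sketch; it is illustrative arithmetic only, not clinical decision software, and the function name is ours.

```python
def plan_c_iv_rehydration(weight_kg: float) -> dict:
    """Severe-dehydration IV volumes per the guideline text above:
    30 mL/kg in the first 30 minutes, then 70 mL/kg over 2.5 hours,
    with clinical reassessment at the 3-hour mark."""
    phase1_ml = 30 * weight_kg
    phase2_ml = 70 * weight_kg
    return {
        "phase1_ml": phase1_ml,
        "phase1_rate_ml_per_h": phase1_ml / 0.5,
        "phase2_ml": phase2_ml,
        "phase2_rate_ml_per_h": phase2_ml / 2.5,
        "total_ml_by_3_h": phase1_ml + phase2_ml,
    }

# A 10 kg child: 300 mL in 30 min (600 mL/h), then 700 mL over 2.5 h
# (280 mL/h), for 1000 mL (100 mL/kg) in total before reassessment.
print(plan_c_iv_rehydration(10.0))
```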
Illnesses presenting with multiple symptoms pose an additional challenge to diarrhea management. According to World Health Organization guidelines, children presenting to health establishments are diagnosed based on the principal symptom associated with mortality. As a result, multiple symptoms are generally reduced to a single illness entity. For instance, a child presenting with an acute lower respiratory infection (ALRI) in addition to diarrhea will receive a diagnosis of ALRI, while the diarrhea or dysentery will not be registered in the health center logs.

RECENT HISTORICAL PREVALENCE, CARE-SEEKING, AND MANAGEMENT OF DIARRHEA IN CHILDREN

Indicators of prevalence, care-seeking, and management of diarrhea in children are available at the national and regional levels from the Peru continuous Demographic and Health Surveys (cDHS), conducted biannually and later annually from 2004 to 2014, and, since 2016, from the 5-yearly demographic and family health surveys (ENDES) conducted by the Peruvian National Institute of Statistics and Information (see Figure 2) [6, 8, 9]. The 2-week period prevalence of diarrhea in children under 5 in Loreto as measured by the cDHS was consistently around double that of Peru as a whole and approached 30%. The 2 rounds of ENDES, however, estimated a regional prevalence that was almost half that (15.1% in 2016 and 16.1% in 2021), whereas national prevalence in ENDES was more consistent with the earlier cDHS estimates (trends that were mirrored in the prevalence of bloody diarrhea). The proportion of diarrhea cases in Loreto for which care was sought from a health provider declined from 56.2% in 2009 to 27.5% in 2021, similar to national levels. The proportion of diarrhea cases in Loreto treated with oral rehydration therapy or increased fluids has fluctuated between 54.2% and 66.8% since 2005, consistently lower than the national rate, while the proportion treated with antibiotics peaked sharply in 2012 before declining to its lowest level of 11.5% in 2021.

HISTORICAL SHIGELLA INCIDENCE, PREVALENCE, AND ANTIMICROBIAL RESISTANCE DATA

The history of shigellosis in Peru in many ways reflects broader global trends in Shigella epidemiology. Shortly after the initial report of shigellosis in Peru in 1942, culture-based methods found that 10.2% of samples from individuals with diarrhea in private clinics in the capital city of Lima from 1951 to 1957 were positive for Shigella [33]. Even in this early era, shifts in the proportion of disease caused by each species were apparent, with Shigella flexneri responsible for 49.9% of cases in 1951, increasing to 76.2% just 6 years later [33]. Shigella flexneri, particularly serotype 2a, has remained the more prominent cause of shigellosis countrywide [34-37]. However, echoing global trends tied to urbanization and improved sanitation, Shigella sonnei has become more prevalent in recent years in more urban settings [36, 38-40]. Meanwhile, infection due to Shigella dysenteriae in Lima has declined over time, from almost 20% of shigellosis cases in 1953 [33], to 9% in 1985 [34], to 5% from 2008 to 2011 [37, 38], and it was absent from a group of 85 Shigella isolates tested in 2013 [36]. In Loreto between 2002 and 2006, 2.4% of the sampled pediatric cases were positive for S dysenteriae [41].
Peru has been no exception to the rule of worldwide emergence of antimicrobial resistance in Shigella. Resistance to chloramphenicol, the initial drug of choice [33], was observed in Peru as early as 1970 [42]. Modern studies of isolates sampled from Lima [35-37, 42, 43], Loreto [41, 44], and countrywide [38, 45] have shown the progressive development of resistance to trimethoprim-sulfamethoxazole (now consistently >75%) and nalidixic acid (5%-14%) [41-45]. Fluoroquinolone resistance was described in Lima as early as 1991 [42], though multiple subsequent studies have found no evidence of fluoroquinolone resistance, neither in Lima [36, 37] nor in samples from regional reference laboratories [45]. In Loreto, however, 3%-4% of isolates from 2002-2006 had intermediate or high resistance to ciprofloxacin [41]. Surveillance in Loreto was also responsible for identifying azithromycin resistance among 2%-5% of Shigella isolates [41, 44], and the region appears unique in Latin America for reporting circulation of both azithromycin- and fluoroquinolone-resistant phenotypes, albeit at a prevalence that still allows for their empiric clinical use [46]. While highly resistant isolates of S sonnei are well described in the literature, Peru reported its first 2 cases of infection due to S flexneri expressing the CTX-M-type extended-spectrum β-lactamase in 2013 [47]. These seemingly inevitable increases in resistance to multiple classes of antimicrobials, coupled with the fact that children aged <5 years in Loreto spend >4% of their lives taking antibiotics, a quarter of which are prescribed for diarrheal illness [25], underscore the need for vaccination as a means of addressing the burden of shigellosis in the Peruvian Amazon, as well as for improved antibiotic use practices among local health practitioners [25].

Two major community-based, multiyear cohort studies of diarrheal disease etiology have been carried out in southwest Iquitos and have demonstrated that the area has among the highest incidence rates of Shigella infection reported in the literature [41, 48]. The first took place from October 2002 to April 2006 and followed a cohort of 442 subjects aged <6 years through a combined 914.4 person-years of surveillance. Shigella was primarily detected by culture, with end-point polymerase chain reaction (PCR) carried out only to quantify underestimation of attribution in cases of clinical dysentery. Results presented in Table 3 are based on culture-based surveillance and demonstrate some of the highest rates of shigellosis recorded from longitudinal surveillance in the absence of an outbreak in the last 25 years. When PCR was conducted on specimens from episodes of dysentery, 64% were positive, although only 34.8% of dysenteric episodes were positive by culture, so the rates reported are conservative. Speciation was performed on 404 isolates, and serotyping was done on the 278 S flexneri isolates, which made up 68.8% of all Shigella infections. The most common serotypes were 2a (33.1%), 3a (19.4%), and 6 (16.5%).

The same study location served as the Peru site of the MAL-ED study and followed 303 subjects <2 years of age from January 2010 to March 2014, a total of 479.1 person-years of surveillance [48, 49]. The study again documented high incidence rates of shigellosis, this time diagnosed by quantitative PCR diagnostics (Table 4). The slight decrease in diarrheal incidence is likely due to a change in surveillance from thrice weekly to twice weekly, which can diminish the detection of mild incident cases. Shigella strains from this study (N = 255) were speciated but not serotyped; 70.2% of isolates were S flexneri, 19.6% were S sonnei, and 8.2% were Shigella boydii, suggesting that the dominant Shigella species were stable over the last 2 decades in this population.

Taken together, these findings demonstrate stable, high rates of diarrhea and shigellosis in the study area over a 20-year period, and a consistent and collaborative research network with the capacity to link community-based surveillance with a research laboratory and a center of data management and analysis to produce and publish results that inform policy.
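For readers unfamiliar with the rate arithmetic behind Tables 3 and 4, the sketch below shows one standard way to turn episode counts and person-time into an incidence rate per child-year with an exact Poisson confidence interval; the episode count in the example is hypothetical.

```python
# Minimal sketch: incidence per child-year with an exact (Garwood) Poisson CI.
from scipy.stats import chi2

def incidence_rate_ci(episodes: int, child_years: float, alpha: float = 0.05):
    """Episodes per child-year and an exact 95% Poisson confidence interval."""
    rate = episodes / child_years
    lo = chi2.ppf(alpha / 2, 2 * episodes) / (2 * child_years) if episodes else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (episodes + 1)) / (2 * child_years)
    return rate, lo, hi

# Hypothetical: 120 Shigella-positive episodes over the MAL-ED cohort's
# 479.1 child-years of surveillance (the count is made up for illustration).
rate, lo, hi = incidence_rate_ci(120, 479.1)
print(f"{rate:.2f} per child-year (95% CI {lo:.2f}-{hi:.2f})")
```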
TRAINING AND CAPACITY BUILDING

Iquitos, Peru, has been a center for emerging infectious diseases and global health research for several decades. However, opportunities for leadership in research were not emphasized, and projects were generally led by agencies and investigators from the United States and Europe or from research universities based in Lima. EFGH Peru has involved graduate students and local investigators in key positions, such as co-principal investigator, laboratory lead, and data management leads. Dedicated time for the development of local researchers as independent scientists is supported through a training grant sponsored by the Fogarty International Center, and participation in this and other projects aims to produce investigators as well as high-quality data for the EFGH research consortium. Implementation of inclusive policies in leadership will strengthen the research and possible future vaccine trials, as well as the transfer of any additional key findings (such as antimicrobial resistance) into regional and national practice guidelines.

Notes

Ethics statement. This project was approved by the Ethics Review Committee of Asociación Benéfica Prisma and the Institutional Review Board of the University of Virginia. Approval to conduct the study was also obtained from the Regional Government of Loreto.

Table 4 notes. Source: Platts-Mills et al [48]. Data are presented as incidence rate per child-year (95% confidence interval). Severity was defined by the Community Diarrhea (CODA) score, with 0-4 categorized as mild and ≥5 characterized as MSD [50]. Abbreviations: MAL-ED, Etiology, Risk Factors and Interactions of Enteric Infections and Malnutrition and the Consequences for Child Health and Development; MSD, moderate to severe diarrhea.

Figure 1. Catchment area and locations of recruiting health facilities in the Enterics for Global Health (EFGH) study site in Iquitos, Peru. Base and inset map data ©OpenStreetMap contributors, Microsoft, Facebook Inc and its affiliates, Esri Community Maps contributors, map layer by Esri, modified by authors. Health facility data provided by Instituciones Prestadores de Servicios de Salud, Maynas, Loreto.

Figure 2. Trends in key indicators of burden, care-seeking, and management of diarrhea in children since 2005 for the Loreto region (solid lines) compared to all of Peru (dashed lines) [6, 8, 9]. Abbreviation: ORT, oral rehydration therapy.

Financial support. This research was supported by the Bill & Melinda Gates Foundation (grant numbers INV-016650, INV-031791, INV-028721, INV-041730) and the US National Institutes of Health (grant numbers D43TW010913 to M. N. K. and M. P. O., K43TW012298 to F. S., and 1K01AI168493-01A1 to J. M. C.; T. F. receives support from T32AI055432).

Supplement sponsorship. This article appears as part of the supplement "Enterics for Global Health (EFGH) Shigella Surveillance Study-Rationale and Methods," sponsored by the Bill & Melinda Gates Foundation.

Potential conflicts of interest. The authors: No reported conflicts of interest.

Table 1. Socioeconomic and Child Health Indicators for the Region of Loreto Compared With National Rates for Peru [9]. (a) Three or more household members per room for sleeping.
Table 3. Age-Specific Incidence Rates per Child-Year of Diarrhea, Dysentery, and Diarrheal Episodes in Which the Stool Sample Tested Positive for Shigella spp and for Shigella flexneri Isolated by Culture.